Call for Papers
We invite submissions on any aspect of backdoor attacks and defenses in machine learning, including but not limited to:
- Novel backdoor attacks against ML systems, including computer vision (CV), natural language processing (NLP), ML models in cyber-physical systems, etc.
- Detecting backdoored models under different threat models, such as access to limited clean data or no data, no access to model weights, detection using attack samples, etc.
- Eliminating backdoors in attacked models under different settings, such as limited or no access to the original training/test data
- Certification and verification methods that provide provable guarantees against backdoor attacks
- Real-world or physical backdoor attacks in deployed systems, such as autonomous driving systems, facial recognition systems, etc.
- Hardware-based backdoor attacks in ML
- Backdoors in distributed learning, federated learning, reinforcement learning, etc.
- Theoretical understanding of backdoor attacks in machine learning
- Explainable and interpretable AI in backdoor scenarios
- Forward-looking concerns about the trustworthiness and societal impact of ML systems with respect to backdoor threats
- Exploration of the relationships among backdoors, adversarial robustness, and fairness
- New applications of backdoors in other scenarios, such as watermarking ML models for intellectual-property protection, boosting privacy attacks, etc.
The workshop will employ a double-anonymous review process. Each submission will be evaluated based on the following criteria:
- Soundness of the methodology
- Novelty
- Relevance to the workshop
- Societal impacts
We only consider submissions that have not been published in any peer-reviewed venue, including the ICLR 2023 main conference. We allow dual submission with other workshops or conferences. The workshop is non-archival and will not have any official proceedings. All accepted papers will be allocated either a virtual poster presentation or a virtual talk slot.
Important Dates
Submission Deadline | |
Author notification | March 3, 2023, Anywhere on Earth (AoE) |
Camera ready deadline | April 15, 2023, Anywhere on Earth (AoE) |
Camera Ready
The final version should be submitted to OpenReview: https://openreview.net/group?id=ICLR.cc/2023/Workshop/BANDS
The final version should be up to 4 pages (excluding references, acknowledgements, and appendices). It is recommended to use the provided LaTeX template.
Or modify the submission LaTeX files as follows (a sketch of the resulting files appears after this list):
- Add the command "\iclrfinalcopy" to iclr2023_conference.tex
- Change line 88 of iclr2023_conference.sty to "\lhead{Published at ICLR 2023 Workshop on Backdoor Attacks and Defenses in Machine Learning}"
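For concreteness, here is a minimal sketch of what these two changes might look like, assuming the standard ICLR 2023 style files; the title, author, and body below are placeholders:

```latex
% iclr2023_conference.tex: add (or uncomment) \iclrfinalcopy to switch from
% the anonymized submission layout to the de-anonymized camera-ready layout.
\documentclass{article}
\usepackage{iclr2023_conference,times}

\iclrfinalcopy  % camera-ready: shows author names and the final running header

\title{Your Paper Title}  % placeholder
\author{Author Names}     % placeholder

\begin{document}
\maketitle
% ... paper body ...
\end{document}
```

```latex
% iclr2023_conference.sty, line 88: replace the existing \lhead{...} with
\lhead{Published at ICLR 2023 Workshop on Backdoor Attacks and Defenses in Machine Learning}
```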
Author Instructions
Papers should be submitted to OpenReview: https://openreview.net/group?id=ICLR.cc/2023/Workshop/BANDS
Submitted papers should be up to 4 pages (excluding references, acknowledgements, and appendices). Please use the ICLR submission template provided at: https://github.com/ICLR/Master-Template/raw/master/iclr2023.zip
Submissions must be anonymized following the ICLR double-blind reviewing guidelines, and must comply with the ICLR Code of Conduct and Code of Ethics. Accepted papers will be hosted on this workshop website but are considered non-archival and may be submitted to other workshops, conferences, or journals if their submission policies allow.
Code of Conduct (Drawn from ICLR)
All workshop participants, including authors, are required to adhere to the ICLR code of conduct (https://iclr.cc/public/CodeOfConduct). All authors of submitted papers are required to read the Code of Conduct, adhere to it, and explicitly acknowledge this during the submission process. The Code of Conduct applies to all conference participation, including paper submission, reviewing, and paper discussion.
Code of Ethics (Drawn from ICLR)
All workshop participants, including authors, are required to adhere to the ICLR Code of Ethics (https://iclr.cc/public/CodeOfEthics). All authors of submitted papers are required to read the Code of Ethics, adhere to it, and explicitly acknowledge this during the submission process. The Code of Ethics applies to all conference participation, including paper submission, reviewing, and paper discussion.