
Artificial intelligence (AI), including machine learning and computer vision, has many of its historical roots in research for military applications [1,2], and much of the scientific community remains deeply connected to the surveillance and defense industries [3]. Yet even today, the military uses of AI are often obscured, whether research is directly funded by defense agencies or developed for civilian purposes with dual-use implications. As a result, many researchers and developers remain unaware of how their work might be deployed in conflicts [4], and of the extent to which they might contribute to intentional harm, including potential violations of international law [5].
Although AI in conflict and surveillance is a key topic of public and policy debate [6,7,8], there are currently no formal spaces within the main machine learning conferences for AI researchers themselves to articulate and discuss their positions on the weaponization of their research. Given this gap, ICLR, with its key position in the research field, is an ideal host for a forum that considers both the harms associated with research dissemination and design decisions, and the opportunities for affirmatively building our research and development agenda from a starting position of non-violence, harm prevention, research ethics, and respect for international law, including international humanitarian and human rights law.
In this workshop, we aim to address the critically under-discussed issue of AI’s dual-use nature [9], focusing on how machine learning technologies are being adapted for military purposes [10,11], potentially without researchers’ knowledge or consent. While attending to the heightened risks associated with particular areas and systems of research, we will also collectively think through what it looks like to engage productively in research and development activities that place ethics and international law at their core. Our objectives are to:
- Increase transparency about the pipelines through which AI research enters into military and surveillance applications [3].
- Develop collective strategies to address ethical and legal risks as a community of researchers [12].
- Highlight and support research efforts that contribute to peace-building applications [13,14], including those helping to surface or elucidate harmful applications of AI [15,16].
A key avenue of exploration will be to draw parallels between current conversations in AI and similar debates, with longer histories, in other scientific fields such as genetics and nuclear physics, where researchers have grappled with comparable ethical challenges and proposed concrete professional responses.
References
[1] K. Crawford. The atlas of AI: Power, politics, and the planetary costs of artificial intelligence. Yale University Press, 2021.
[2] J. E. Dobson. The Birth of Computer Vision. University of Minnesota Press, 2023.
[3] P. R. Kalluri, W. Agnew, M. Cheng, K. Owens, L. Soldaini, and A. Birhane. The surveillance AI pipeline. arXiv preprint arXiv:2309.15084, 2023.
[4] S. Schwartz, L. G. Guntrum, and C. Reuter. Vision or threat—awareness for dual-use in the development of autonomous driving. IEEE Transactions on Technology and Society, 3(3):163–174, 2022.
[5] S. Fereidooni and V. Heidt. The fallacy of precision: Deconstructing the narrative supporting AI-enhanced military weaponry. In Harms and Risks of AI in the Military, 2024.
[6] The Future of Life Institute. Autonomous weapons open letter: AI & robotics researchers, 2016. https://futureoflife.org/open-letter/open-letter-autonomous-weapons-ai-robotics/.
[7] H. Khlaaf, S. M. West, and M. Whittaker. Mind the gap: Foundation models and the covert proliferation of military intelligence, surveillance, and targeting. arXiv preprint arXiv:2410.14831, 2024.
[8] S. Romansky. Lessons from the EU on confidence-building measures around artificial intelligence in the military domain. SIPRI Publications, 2025. https://www.sipri.org/sites/default/files/2025-05/eunpdc_no_97.pdf.
[9] A. Brenneis. Assessing dual use risks in AI research: Necessity, challenges and mitigation strategies. Research Ethics, 21(2):302–330, 2025.
[10] A. Loewenstein. The Palestine laboratory: How Israel exports the technology of occupation around the world. Verso Books, 2024.
[11] P. Scharre. Four battlegrounds: Power in the age of artificial intelligence. W. W. Norton & Company, 2023.
[12] L.-A. Kaffee, A. Arora, Z. Talat, and I. Augenstein. Thorny roses: Investigating the dual use dilemma in natural language processing. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 13977–13998, 2023.
[13] R. Sefala, T. Gebru, L. Mfupe, N. Moorosi, and R. Klein. Constructing a visual dataset to study the effects of spatial apartheid in South Africa. In Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2), 2021.
[14] J. Filipi, V. Stojnić, M. Muštra, R. N. Gillanders, V. Jovanović, S. Gajić, G. A. Turnbull, Z. Babić, N. Kezić, and V. Risojević. Honeybee-based biohybrid system for landmine detection. Science of the Total Environment, 803:150041, 2022.
[15] Amnesty International. Israel and occupied Palestinian territories: Automated apartheid: How facial recognition fragments, segregates and controls Palestinians in the OPT, 2023. https://www.amnesty.org/en/documents/mde15/6701/2023/en/.
[16] S. Goodfriend. Algorithmic state violence: Automated surveillance and Palestinian dispossession in Hebron’s Old City. International Journal of Middle East Studies, 55(3):461–478, 2023.