Comment: This article adds insight into the use of AI for military purposes and the risks to civilians when neural networks pattern-match vast amounts of data without human reasoning.
The Paper Summary: https://arxiv.org/abs/2410.14831
The Paper as .pdf: https://arxiv.org/pdf/2410.14831
The Paper in HTML format: https://arxiv.org/html/2410.14831v1
"CBRN: chemical, biological, radiological and nuclear weapons."
"ISTAR: intelligence, surveillance, target acquisition, and reconnaissance."
Article About the Paper: https://www.defenseone.com/technology/2024/10/researchers-sound-alarm-dual-use-ai-defense/400432/
Industry approach cited in the article: https://scale.com/
Quote:
By Patrick Tucker, Science & Technology Editor, Defense One, 22 Oct 2024
A growing number of Silicon Valley AI companies want to do business with the military, making the case that the time and effort they’ve put into training large neural networks on vast amounts of data could provide the military new capabilities. But a study out today from a group of prominent AI scholars argues that dual-use AI tools would increase the odds of innocent civilians becoming targets due to bad data—and that the tools could easily be gamed by adversaries.
In the paper, published by the AI Now Institute, authors Heidy Khlaaf, Sarah Myers West, and Meredith Whittaker point out the growing interest in commercial AI models—sometimes called foundation models—for military purposes, particularly for intelligence, surveillance, target acquisition, and reconnaissance. They argue that while concerns around AI's potential for chemical, biological, radiological, and nuclear weapons dominate the conversation, big commercial foundation models already pose underappreciated risks to civilians.