Using AI responsibly in the military

The U.S. Department of State released 10 measures to enable AI in the military while lessening risks.

We’ve all seen movies such as The Terminator about machines with artificial intelligence (AI) rising up against humanity. Obviously, life isn’t imitating art just yet, and to keep sci-fi from becoming reality, the United States has laid out guidelines to promote the responsible military use of AI.

AI is being used in weapons systems; in decision support; in finance, payroll, and accounting; in the recruiting, retention, and promotion of personnel; and in the collection of intelligence, surveillance, and reconnaissance data.

At the end of 2023, the U.S. Department of State released 10 measures to enable AI in the military while lessening risks:

1. States should ensure their military organizations adopt and implement these principles for the responsible development, deployment, and use of AI capabilities.

2. States should take appropriate steps to ensure their military AI capabilities will be used consistent with their respective obligations under international law, in particular international humanitarian law. States should also consider how to use military AI capabilities to enhance their implementation of international humanitarian law and improve protection of civilians and civilian objects in armed conflict.

3. States should ensure senior officials effectively and appropriately oversee the development and deployment of military AI capabilities with high-consequence applications, including, but not limited to, weapon systems.

4. States should take proactive steps to minimize unintended bias in AI capabilities.

5. States should ensure relevant personnel exercise appropriate care in the development, deployment, and use of military AI capabilities, including weapon systems incorporating such capabilities.

6. States should ensure military AI capabilities are developed with methodologies, data sources, design procedures, and documentation transparent to and auditable by their relevant defense personnel.

7. States should ensure personnel who use or approve the use of military AI capabilities are trained so they sufficiently understand the capabilities and limitations of those systems to make appropriate context-informed judgments on the use of those systems and to mitigate the risk of automation bias.

8. States should ensure military AI capabilities have explicit, well-defined uses and that they are designed and engineered to fulfill those intended functions.

9. States should ensure the safety, security, and effectiveness of military AI capabilities are subject to appropriate and rigorous testing and assurance within their well-defined uses and across their entire life cycles. For self-learning or continuously updating military AI capabilities, states should ensure critical safety features haven’t been degraded.

10. States should implement appropriate safeguards to mitigate risks of failures in military AI capabilities, such as the ability to detect and avoid unintended consequences and the ability to respond, for example by disengaging or deactivating deployed systems, when such systems demonstrate unintended behavior.

We’ll continue to monitor AI so it won’t be “Hasta la vista, baby” anytime soon.

January/February 2024