WASHINGTON — The Pentagon today updated its decade-old guidelines for autonomous weapons systems to include advancements in artificial intelligence, a new high-level oversight group, and clarification of the roles that various offices within the department will play.
“I think one of the things we tried to accomplish over the course of the update is to clarify the language to ensure a common understanding both inside and outside the Pentagon of what the directive says,” Michael Horowitz, director of the Pentagon’s Emerging Capabilities Policy Office, told reporters today ahead of the release of the revised guidance, calling it “not a major policy change.” “The directive does not prohibit the development of a particular weapon system. It places demands on autonomous and semi-autonomous weapon systems.”
DoD Directive 3000.09, originally signed on November 21, 2012 by then-Deputy Secretary of Defense Ash Carter, “establishes DoD policies and assigns responsibilities for the development and use of autonomous and semi-autonomous capabilities in weapons systems, including manned and unmanned platforms.”
Last May, Breaking Defense first reported some details of the upcoming revision. At the time, Horowitz said in an interview that the “fundamental approach in the directive remains sound, that the directive sets out a very responsible approach to the integration of autonomy and weapons systems.”
Still, one of the biggest changes in the revised guideline [PDF] is its accounting for the “dramatic, comprehensive vision” for AI’s role in future military operations, he added. The revisions reflect DoD’s work on its “responsible AI” and AI ethical principles initiatives.
“And for autonomous weapon systems that include artificial intelligence… the directive now specifies that, like any system that uses artificial intelligence, whether it is a weapon system or not, they must also follow those guidelines,” Horowitz said, referring to DoD’s responsible and ethical AI initiatives. “So, you know, part of the motivation here was to make sure that AI policy is included as part of this directive, even though the directive itself is about autonomous weapon systems, which are certainly not synonymous with artificial intelligence.”
The directive also requires additional high-level assessments for the development and deployment of autonomous weapon systems and “continues to require that autonomous and semi-autonomous weapon systems be designed to enable commanders and operators to exercise appropriate human judgment over the use of force,” Horowitz told reporters.
The senior-level review will take the form of the new Autonomous Weapon Systems Working Group, made up of various offices within the department — such as acquisition and sustainment, and research and engineering — which will support the Office of the Undersecretary of Defense for Policy.
“This is part of what we consider good governance,” Horowitz said, referring to the working group. “And… the directive… does not change the approval requirements. You still have senior reviewers who ultimately sign off on these systems… . What the Autonomous Weapons Working Group does is collect the information that senior leaders need to make effective decisions, to essentially put together the paper package, to have an effective review process, to make sure that either prior to development or prior to fielding that a proposed autonomous weapon system will meet the requirements set out in the directive.”
The updated document also assigns responsibilities to DoD offices, including offices that did not exist when the directive was signed in 2012. The Chief Digital and AI Office (CDAO), created last year, will be responsible for several efforts, including monitoring and evaluating AI capabilities and cybersecurity for both autonomous and semi-autonomous weapon systems and advising the Secretary of Defense on these matters.
The CDAO will also work with the Office of the Undersecretary of Defense for Research and Engineering to, among other things, “formulate concrete, verifiable requirements” for the implementation of DoD’s responsible and ethical AI initiatives.
In May, Horowitz predicted that the core of the autonomy policy wouldn’t change all that much. But, he said, “You know, it’s been ten years. And it is entirely plausible that there are some updates and clarifications that could be helpful.”