The Pentagon promises it will use artificial intelligence 'for good, not evil'
Artificial intelligence is already revolutionizing the world around us. It powers our social media feeds, is being employed to fight human trafficking, and in the not-so-distant future will let humans interact directly with computers.
But in places like China, it's being used for more nefarious purposes. In what sounds like the plot of a mediocre dystopian movie, China has a "social credit system" that monitors civilians. If your "social credit" isn't high enough, you're unable to buy things like train tickets. This system is powered, in part, by artificial intelligence using facial recognition algorithms.
With that in mind, juxtaposed against an increasingly out-of-control Trump administration, it makes sense that the US military would create parameters for its implementation of artificial intelligence. And it has reportedly promised to use that technology for good, not evil. As Meghann Myers writes in Military Times:
Defense Secretary Mark Esper signed off on five guidelines in a memo released Monday.
“The United States, together with our allies and partners, must accelerate the adoption of AI and lead in its national security applications to maintain our strategic position, prevail on future battlefields, and safeguard the rules-based international order,” Esper wrote. “AI technology will change much about the battlefield of the future, but nothing will change America’s steadfast commitment to responsible and lawful behavior.”
The list is the result of a 15-month study by the Defense Innovation Board, which is made up of academics and executives in tech and business, who presented their proposed principles in a public forum at Georgetown University in October.
According to Esper’s Monday memo, the Pentagon pledges that its AI efforts will be: 1) Responsible, 2) Equitable, 3) Traceable, 4) Reliable and 5) Governable.
Lt. Gen. Jack Shanahan, head of the Joint Artificial Intelligence Center, told reporters that “We owe it to the American people and our men and women in uniform to adopt AI principles that reflect our nation’s values of a free and open society. This runs in stark contrast to Russia and China, whose use of AI tech for military purposes raises serious concern about human rights, ethics and international norms.”
Supposedly, the White House will release its own vision for the use of artificial intelligence in the military this summer.
Call me crazy, but more often than not, the Venn diagram of "ethics" and "military leadership" is two completely separate circles.
I mean sure, there are applications that are pretty cut-and-dried: logistics, reducing waste, and so on. But as Myers noted in her article, Project Maven, a secret Department of Defense project, was granted access to Google software that let the military use artificial intelligence to identify objects, including people, in drone footage. Lt. Gen. Shanahan's hindsight on Project Maven leaves a bit to be desired, as he didn't really say whether it would have violated the DoD's new guidelines.
Shanahan said, "If we would have had the AI ethics principles three years ago … and our starting point with one of the big tech companies was that — and we were transparent about what we were trying to do and why we were trying to do it — maybe we would have had a different outcome."
Maybe a different outcome? Maybe?
Well, for me at least, maybe that non-committal statement makes me feel maybe a little more comfortable.
Or maybe not.