Arms Race
Modern warfare is set to undergo major changes, thanks to new technologies springing forth from the fields of artificial intelligence and robotics. As Jon Wolfsthal sees it, the US isn't doing enough to ensure that these advances are developed with proper consideration.
Wolfsthal is a non-resident fellow at Harvard University's Managing the Atom project, and at the Carnegie Endowment for International Peace. Between 2014 and 2017, he acted as the senior director for arms control and nonproliferation at the National Security Council, serving as a special assistant to President Barack Obama.
In a guest post submitted to Defense News, Wolfsthal argues that while AI and autonomous weapons stand to improve national security and mitigate the risks taken by servicemen and women, the pressure to compete with other technologically advanced nations is resulting in a lack of oversight.
Neither the government nor the general public seems interested in having a serious discussion about the ethical ramifications and the legal basis of developing these programs, says Wolfsthal. As a result, bodies like the Department of Defense are focusing on what they can create, rather than whether they should.
He suggests that the National Security Council needs a better process for assessing the technologies the US wants to pursue, and what's being investigated by other nations. He adds that Congress should be more proactive in developing policy, and that the Senate and House Armed Services committees should be fostering debate and discussion. Wolfsthal also criticizes President Trump for failing to staff the White House's Office of Science and Technology Policy, a decision he describes as "unconscionable."
Risk and Reward
"The possible advantages to the United States are endless," writes Wolfsthal. "But so too are the risks." AI and autonomous weapons aren't necessarily something that the military should shy away from — adoption of these technologies seems like something of a foregone conclusion — but they need to be implemented with care and consideration.
This stance mirrors the one taken by Elon Musk. The Tesla and SpaceX CEO has made no secret of his concerns about AI. However, last month he clarified his position, stating that the technology offers up huge benefits if we can avoid its most perilous pitfalls.
Now is the time for these discussions to take place. We're already seeing drones employed by the US Army, even if the hardware is sometimes imperfect. Meanwhile, Russia is thought to be developing missiles that make use of AI, and China is working on its own intelligent weapons systems.
It might seem like an exaggeration to compare the advent of AI and autonomous weapons to the introduction of nuclear weaponry, but there are some broad similarities. These are instruments of death that can be used at long range, reducing the risk of friendly casualties.
It is likely naive to think that there's still an opportunity to reverse course and curb the implementation of these technologies in a military context. At this point, the priority has to be making sure that we don't allow these advances to be utilized recklessly. Like nuclear armaments, these technologies stand to completely revolutionize the way nations go to war. And before a technologically augmented conflict begins in earnest, it would be wise for the government and the public to figure out where they stand on how these weapons are wielded.