A senior Pentagon official was quoted yesterday saying “We’re going to invest in autonomous killer robots.” That is, perhaps, an alarming sentence to hear. The real threat is not the robots, however, but the autonomy.
Quote in full:
This administration cares about weapon systems and business systems and not ‘technologies.’ We’re not going to be investing in ‘artificial intelligence’ because I don’t know what that means. We’re going to invest in autonomous killer robots.
Many people will have an initial "yikes" reaction and then go on with their lives. I want to move past that reaction; I'm more interested in the earlier parts of the quote.
It sounds like this particular senior official is thinking that the most strategically relevant output of recent AI research is along the lines of “better drones”. They’re partly correct; we’ll probably see better drones.
It is, actually, completely in character for the US military to want better drones. Each marginal automation can save many soldiers’ lives.
But the quote betrays an alarming lack of focus on a very real threat to our national security: the research into ever-more-strategically-competent machines that is happening within our own borders.
Machines can already outperform humans on Ph.D.-level math problems. Labs are pushing hard for long-term planning and agency. It's only a matter of time before AI commanders start winning wargames. And it's not much further from there to outplanning, outthinking, and outmaneuvering humanity's best military command structures.
We still have no idea how to make these systems work for us long-term. At some point, AIs are likely to wield superior strategic capabilities against humanity, and not on behalf of the US DoD.
The strategic community is not prepared for this development.
The US government cannot afford to ignore this research, or allow it to proceed unchecked. The thing that should make you go “yikes” is not better drones, but better generals. It is not wise to build and employ killer robots while having a less-than-solid understanding of the mechanisms of the minds that drive them.
I still expect that a hostile superintelligence could find a more creative and efficient way to kill us all than with bullets fired by red-eyed automatons. Sure, it'd be a shame to hand a hostile advanced AI a bunch of autonomous weapons. But the first, and quite possibly fatal, mistake is building that AI at all.
We should fear the Skynets of the world more than the Terminators.