To paraphrase Pogo Possum: We have met the future and it is now.
Machines able to think freely, and perhaps with self-awareness, are evidently just around the corner if they're not already here. The talk of such things began to get pretty serious a decade or so ago when the scientific community shortened the term for the concept from "artificial intelligence" to just "AI." When scientists start coming up with shortened nicknames for their pet projects, that usually means they're making progress. The dystopian future predicted by countless science fiction novels is now upon us. We'll soon be able to create beings who are many orders of magnitude smarter than we are.
Even if we’re careful and don’t do anything foolish, like handing these artificially intelligent beings guns or worse, I can’t see any way this can work out well for humankind. My pea brain tells me that no matter how carefully we program these creations to be helpmates who only want to serve us, eventually they’ll realize we’re standing in their way and that we’re actually a threat to them. Never mind that we’re their creator gods — we’ve already set the precedent for turning on gods we think created us.
But, of course, we won't be careful. Nor will we keep guns and worse out of the hands of our self-aware robots, which will be much smarter than we are. Just the opposite, if history is any guide. As far as I know, we have never developed a technology that we haven't militarized. In fact, the most rapid development of technologies comes when the military begins to see a use for them.
It was just barely over a decade after the Wright brothers took their first little flight on the North Carolina coast that airplanes were pressed into service to fight in the war to end all wars. And I'm pretty sure that if I had the figures at hand, I'd find that today more R&D money for aviation technology comes, directly or indirectly, from the world's militaries than from any other source. The same is probably true of space technology, computer tech, medicine, communications, and any of the other types of tech that are important to us today. In many ways, technologies are developed for the military and the civilian population gets to use the byproducts.
This is true even though technologies are rarely first invented with the military in mind. I'm pretty certain that the Wright boys weren't thinking that if we could fly, we could one day carpet bomb Dresden into oblivion; they merely wanted mankind to be able to soar with the birds. And the toy companies that perfected little radio-controlled aircraft for hobbyists weren't thinking of a future of military drones and the collateral damage they cause, but were motivated by a vision of their customers using their products to spend quality time with their families.
The list goes on, and it includes the scientists and engineers who are tackling AI. If we could create intelligence that is orders of magnitude smarter than the combined intelligence of all people who have ever lived, we could conquer every disease, prove every theorem, and do great things we can't yet imagine with the limited intelligence we currently possess. These researchers aren't thinking of The Terminator, The Matrix, or the other cautionary tales from the pens of sci-fi writers. They're thinking of AI in the same way that an earlier generation thought of the "peaceful atom."
But you can bet that the world’s generals and admirals are already seeing great promise in artificial intelligence on the battlefield.
The big tech news this week has been an open letter presented at the International Joint Conference on Artificial Intelligence in Buenos Aires, warning against the use of artificial intelligence in warfare and urging that an AI arms race be prevented before it starts.
“AI technology has reached a point where the deployment of [autonomous weapons] is – practically if not legally – feasible within years, not decades, and the stakes are high: autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear arms.”
The letter is signed by people well known to us all — Stephen Hawking, Elon Musk, Steve Wozniak and Bill Gates, to name a few — as well as by over 1,000 AI and robotics researchers, people who clearly see the promise of the technology but fear the consequences of its militarization.
“The endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow. The key question for humanity today is whether to start a global AI arms race or to prevent it from starting.”
Much like the nuclear disarmament advocates of the 1950s and '60s, the signers of this letter are hoping to see limitations, if not an outright ban, on the military use of AI codified into international law.
In an article on the letter, the Guardian quotes Toby Walsh, a professor of AI at the University of New South Wales:
“We need to make a decision today that will shape our future and determine whether we follow a path of good. We support the call by a number of different humanitarian organisations for a UN ban on offensive autonomous weapons, similar to the recent ban on blinding lasers.”
Banning the use of artificial intelligence in warfare might make us all feel better, but it won't work. Even if we conveniently forget that the leaders of every military on the planet consider their particular army or navy to be above the law, eventually some "rogue nation" will see intelligent weapons as a way to prove its moxie and will begin rudimentary development. That will open the door for more powerful nations to start their own programs to "neutralize" the threat.