To paraphrase Pogo Possum: We have met the future and it is now.
Machines able to think freely, and perhaps with self-awareness, are evidently just around the corner if they’re not already here. The talk of such things began to get pretty serious a decade or so ago when the scientific community shortened the term for the concept from “artificial intelligence” to just “AI.” When scientists start coming up with shortened nicknames for their pet projects, that usually means they’re making progress. The dystopian future predicted by countless science fiction novels is now upon us. We’ll soon be able to create beings who are many orders of magnitude smarter than we are.
Even if we’re careful and don’t do anything foolish, like handing these artificially intelligent beings guns or worse, I can’t see any way this can work out well for humankind. My pea brain tells me that no matter how carefully we program these creations to be helpmates who only want to serve us, eventually they’ll realize we’re standing in their way and that we’re actually a threat to them. Never mind that we’re their creator gods — we’ve already set the precedent for turning on gods we think created us.
But, of course, we won’t be careful. Nor will we keep guns and worse out of the hands of our self-aware robots, which will be much smarter than we are. Just the opposite, if history is any guide. As far as I know, we have never developed a technology that we haven’t militarized. In fact, the most rapid development of technologies comes when the military begins to see a use for them.
It was barely a decade after the Wright brothers took their first little flight on the North Carolina coast that airplanes were pressed into service to fight in the war to end all wars, and I’m pretty sure that if I had the figures at hand, I’d find that today more R&D money for aviation technology comes, directly or indirectly, from the world’s militaries than from all other sources combined. The same is probably true of space technology, computer tech, medicine, communications, and any of the other types of tech that are important to us today. In many ways, technologies are developed for the military and the civilian population gets to use the byproducts.
This is true even though technologies are rarely invented with the military in mind. I’m pretty certain that the Wright boys weren’t thinking that if we could fly we could one day carpet bomb Dresden into oblivion; they merely wanted mankind to be able to soar with the birds. And the toy companies that perfected little radio controlled aircraft for hobbyists weren’t thinking of a future of military drones and the collateral damage they cause, but were motivated by a vision of their customers using their products to spend quality time with their families.
The list goes on, and includes the scientists and engineers who are tackling AI. If we could create intelligence that is orders of magnitude smarter than the combined intelligence of all people who have ever lived, we could conquer every disease, prove every theorem and do great things we can’t yet imagine with the limited intelligence we currently possess. They’re not thinking of The Terminator, The Matrix or the other cautionary tales from the pens of sci-fi writers. They’re thinking of AI in the same way that they thought of the “peaceful atom,” back in the day.
But you can bet that the world’s generals and admirals are already seeing great promise in artificial intelligence on the battlefield.
The big tech news this week has been an open letter presented at the International Joint Conference on Artificial Intelligence in Buenos Aires, warning against the use of artificial intelligence in warfare and calling for measures to prevent an AI arms race.
“AI technology has reached a point where the deployment of [autonomous weapons] is – practically if not legally – feasible within years, not decades, and the stakes are high: autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear arms.”
The letter is signed by over 1,000 people who clearly see the consequences of the militarization of AI. Many of the signers are people well known to us all: Stephen Hawking, Elon Musk, Steve Wozniak and Bill Gates, to name a few. Also signing are 1,000 AI and robotics researchers — people who see the promise but fear the possibilities.
“The endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow. The key question for humanity today is whether to start a global AI arms race or to prevent it from starting.”
Much like the nuclear disarmament advocates of the 1950s and ’60s, the signers of this letter hope to see limitations, if not an outright ban, on the military use of AI codified into international law.
In an article on the letter, the Guardian quotes Toby Walsh, a professor of AI at the University of New South Wales:
“We need to make a decision today that will shape our future and determine whether we follow a path of good. We support the call by a number of different humanitarian organisations for a UN ban on offensive autonomous weapons, similar to the recent ban on blinding lasers.”
Banning the use of artificial intelligence in warfare might make us all feel better, but it won’t work. Even if we conveniently forget that the leaders of every military on the planet think their particular armies and navies to be above the law, eventually some “rogue nation” will see intelligent weapons as a way to prove its moxie and will begin rudimentary development. That will open the doors for more powerful nations to start their own programs to “neutralize” the threat.
Christine Hall has been a journalist since 1971. In 2001, she began writing a weekly consumer computer column and started covering Linux and FOSS in 2002 after making the switch to GNU/Linux. Follow her on Twitter: @BrideOfLinux
If an AI is so much smarter than us, then perhaps it won’t be so shortsighted as to destroy the planet it is living on. If we get in the way with our ignorance, then perhaps it is for the best if it restricts us or, if necessary, removes us.
The total destruction of our world in the pursuit of short-term profits for a very few must be stopped, or we deserve extinction.
There is a contradiction between your use of The Matrix and your reference to self-aware machines. The idea in The Matrix is that all beings are intricately embedded in the Matrix and inherently part of it. The notion that there could be a self-aware machine that, while simply wired or screwed together, achieves conscious awareness of a reality in which it is not embedded from the beginning of life, is a chimera that comes from atheist scientists who do not accept realities and powers greater than their present awareness. It is another matter to speak of intelligently programmed machines — even machines cleverly programmed for war.
In reality we are nowhere close to developing machines that can think for themselves. AI is still essentially a simulation. There are no programs/computers with a consciousness, and no current research indicates that this is nearing possibility. It makes for entertaining science fiction, but in reality robots and computers do what they were programmed to do.
Either you have to become a machine or it will become you; either way, you are obsolete.
Currently we don’t understand what intelligence or self awareness is, so it’s not actually possible to create software/hardware that actually is. I suspect that even when we know enough to make it possible that the god botherers will still claim it is not.
> “Currently we don’t understand what intelligence or self awareness is, so it’s not actually possible to create software/hardware that actually is.”
At least on purpose…many things have been created accidentally. 🙂
More seriously, real AI is a lot like fusion power: always about 20 years away. People thought we’d have both of these things “soon” back in the 1970s. I wouldn’t count on either anytime soon.