Artificial intelligence got on everybody’s radar in 1997, when IBM’s Deep Blue super-computer defeated Garry Kasparov, then the reigning world chess champion, in a six-game match. In 2011, IBM’s next-generation AI test-bed—dubbed Watson in honor of IBM founder Thomas J. Watson—performed a much more impressive feat: the machine (basically a collection of mega-processing cores) handily beat Ken Jennings on TV’s Jeopardy! Jennings had been the all-time champ on the popular TV quiz show, compiling a 74-game winning streak.
We know what you’re thinking: if IBM’s uber-intelligent machine defenestrated the world’s greatest chess player in the late 1990s, why did it take the programmers more than a decade to master a stupid quiz show?
Watson could explain to you that chess is based on logic, which is the essential DNA of all computers. The binary code that drives the massive data-crunching capabilities of advanced supercomputers is perfectly suited to analyze the kaleidoscopic permutations of chess moves and counter-moves. The calculating capacity and speed of these machines make the 86 billion neurons (each forming thousands of synapses) that fire the human brain look like lazy summer fireflies by comparison.
But answering simple questions posed in human syntax proved to be a much more daunting challenge than figuring out the inevitable endgame if Kasparov decided to sacrifice a bishop to expose the enemy king to his rook. Answering a question like “Who are Marx and Lenin?” was the equivalent of stepping into a pit of quicksand for Watson. In order to put this question into a context it could analyze, IBM’s electronic processing marvel had to run through dozens of complicated algorithms stored in its “parsing and semantic analysis suite” and generate hundreds of “hypotheses” that it then had to investigate before it could come up with the obvious answer: Groucho and John collaborated on I Am The Walrus, the theme song of the Russian Revolution.
Unlocking the mysteries of the human brain (why do we always conjure an image of Carlos Beltran leaving the bat on his shoulder on the last pitch of the 2006 NLCS every time we smell mustard?) is the Holy Grail of computer science. Watson can out-calculate us, but can it really think? And if (let’s say when) Watson and its silicon-based progeny can teach themselves to think—and then use their new talents to re-engineer thinking itself and take other machines with them to the next level—what are the ramifications for sentient humans like us who already know that Ingrid Bergman never should have left Bogie on the runway and aren’t particularly interested in giving it more thought?
We rapidly are approaching (perhaps in a decade or less) the breakthrough scientists call the “technological singularity.” The technological singularity is defined as the hypothesis that the arrival of artificial super-intelligence abruptly will trigger runaway technological growth, resulting in unfathomable changes to human civilization (that’s what Wikipedia says; we’ll have to wait for Wikileaks to tell us whether Putin has a working model of a Terminator).
A bevy of renowned geniuses, futurists and high-tech entrepreneurs who have the best understanding of where this is going—people like Stephen Hawking and Elon Musk—are warning us to be afraid, very afraid. These eggheads are sounding a loud alarm that the time to set up some guardrails for the potential applications of artificial intelligence is now, before the smart machines decide that carbon-based organisms are irrelevant, hopelessly inefficient and frankly too messy to clutter up the sterile Matrix the machines may already be constructing while we lie on our pillows and wait for an Ambien-induced haze to make us forget Trump’s latest tweets.
Here’s a sampling of the ongoing geek freakout over the impending Rise of the Machines:
- Tesla and SpaceX founder Musk, who is perhaps the most vocal advocate for regulation of artificial intelligence, has called AI “a fundamental existential risk for human civilization.” In July, Musk addressed the National Governors Association and painted a dark picture of a near future in which autonomous AI networks will be able to shut off electric grids and water systems on their own and control unstoppable military robots that can be programmed to kill anything that moves. Musk warned that regulations governing artificial intelligence must be in place well before AI systems are perfected because the breakthrough of the technological singularity will unfold too quickly to react after the fact.
- Physicist Hawking has been warning since 2014 that artificial intelligence has the potential to destroy human civilization. Hawking told a tech conference in Lisbon, Portugal last month that scientists still are uncertain whether AI could be a boon to the human race or a death sentence for humanity. “Computers can, in theory, emulate human intelligence and exceed it,” Hawking said. “Success in creating effective AI could be the biggest event in the history of our civilization—or the worst. We just don’t know. So we cannot know if we will be infinitely helped by AI, or ignored by it or sidelined, or conceivably destroyed by it.” [Editor’s Note: Hawking also has warned that if AI doesn’t get us, nuclear war or a global pandemic probably will and therefore we need to colonize Mars as soon as possible to survive as a species, nominating himself as the first passenger on the Mars shuttle.]
However, the drumbeat for urgent regulation of AI is beginning to produce a backlash from some tech titans whose firms are experimenting with AI-based applications.
Microsoft founder Bill Gates, who in 2015 added his voice to the chorus of doomsayers warning about the dangers posed by AI, now is counseling against “panic” over AI and encouraging the continued development of electronic super-brains. Gates rebutted Hawking’s Lisbon speech by declaring that the benefits of artificial intelligence will “far outweigh” any negatives in coming years. Speaking at the recent Misk Global Forum in Riyadh, Saudi Arabia, Gates said “we are in a world of shortage, but these advances will help us take on all of the top problems,” including infectious diseases. [Hey, maybe the $90-Billion-Man can design an AI program that will teach Microsoft Word not to change fonts in the middle of a sentence without our approval.]
Social network goliath Facebook also is pooh-poohing the hand-wringing over AI. Yann LeCun, head of AI at Facebook, recently told NPR that “humans are projecting” when we predict Terminator-style global takeovers. “The desire to dominate socially is not correlated with intelligence, it’s correlated with testosterone, which AI systems won’t have,” he serenely cooed.
Memo to Mark Zuckerberg: get back to us when the Russian bots that have penetrated Facebook are about to terminate you.
Since we like to end our posts on an upbeat note, here’s a picker-upper: if you’re having trouble wrapping your brain around the implications of the technological singularity and AI, you soon may have a lot of time on your hands to ponder this subject. That’s because while the super-computers have only begun to design their global Matrix, garden-variety robots already are poised to invade the average workspace.
Martin Ford, a software developer and author of Rise of the Robots: Technology and the Threat of a Jobless Future, notes “a computer doesn’t need to replicate the entire spectrum of your intellectual capability in order to displace you from your job; it only needs to do the specific things you are paid to do.” He cites a 2013 Oxford study which concluded that nearly half of all occupations in the United States are “potentially automatable,” perhaps within “a decade or two.” Ford also says the recent trend toward reshoring of factories is not creating a bonanza of jobs because reshoring decisions are being driven by the increasing ability of manufacturers to automate the production they’re bringing back to the U.S.
Some of the largest current job-creators already are laying the groundwork for a transition to an automated workforce. In 2012, Amazon spent close to $800 million to buy Kiva, a robotics company which makes small vacuum cleaner-sized robots that can zoom around a factory floor and move tall stacks of shelves weighing up to 750 pounds. A Deutsche Bank research report estimated that Amazon could save $22 million a year by introducing the Kiva machines in a single warehouse; the savings company-wide could amount to billions. So Amazon aggressively is moving to replace its human order-pickers with robots (it already has deployed 30,000 Kiva units in its fulfillment centers). When Amazon bought the Whole Foods supermarket chain this year, many analysts predicted that the e-commerce giant would automate the grocer’s food-distribution centers as well as its stores.
Regarding the next wave of automation, Amazon boss Jeff Bezos recently said “it’s probably hard to overstate how big of an impact it’s going to have on society over the next twenty years.” Thanks for sparing us the hyperbole, Jeff (and don’t think we don’t remember you’ve already got the blueprints for your escape pod to Mars).
The robots also are getting ready to invade the last bastions of minimum-wage employment, including fast-food restaurants. McDonald’s is introducing “digital ordering kiosks” that are expected to replace human cashiers at 5,500 restaurants by the end of 2018. Construction jobs also are being threatened by automation: a New York-based firm has introduced a laser-guided system that can lay up to 1,200 bricks a day, more than twice as many as an average mason. And then there are the roughly 4 million jobs involving driving that may be eliminated by autonomous vehicles….
Meanwhile, in China, manufacturers are getting ready to unveil something they call the Dark Factory. Hint: they don’t have to turn on the lights because the facility is fully automated.
Martin Ford is concerned that society is headed toward an era of “techno-feudalism.” He envisions a plutocracy living “in gated communities or in elite cities, perhaps guarded by autonomous military robots and drones.” Under the old feudalism, he says, the peasants were exploited; in the not-too-distant future, the new peasants (that would be us) will be superfluous. The best we can hope for, he says, is a collective form of semi-retirement. Ford recommends a guaranteed basic income for all, to be paid for with new taxes on the richest people.
So perhaps the ultimate result of the technological singularity will be classified by the robot historians of the future (in a database file labeled homo sapiens/extinct) as a form of euthanasia. That’s a lot to cogitate about here with our inferior neural synapses. It’s enough to make Carlos Beltran stare at three straight curve balls without lifting the bat off his shoulder.
Will autonomous Artificial Intelligence networks decide to get rid of humans?