I consider myself a techno-optimist, but Watson’s performance in Jeopardy’s IBM Challenge has definitely exceeded my expectations. While I did predict Watson would win the competition, I didn’t think it would be so dominant. This was a contest I thought machine intelligence might win by a narrow margin, but the three-day, two-game match wasn’t even close. AI has come a long way, baby.
As impressive as Watson’s final cash score was, what I think was more remarkable was its answer success rate. In the first match, out of sixty clues, Watson rang in first on 43 and answered 38 correctly, with five errors. That’s an 88.4% success rate. If only the 30 clues in the Double Jeopardy round are considered, this jumps to a whopping 96%. You’ll notice I’ve left the Final Jeopardy clue out of these calculations. This is because that clue had to be answered regardless of the machine’s low confidence level of 14%. The wrong answer mattered to the competition, but the low confidence rating itself actually indicates the success of the machine’s algorithms: Watson knew its answer was little better than a guess.
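The first-game figure is easy to verify: the success rate is simply correct answers divided by clues Watson actually rang in on. A quick sketch of that arithmetic:

```python
# Success rate = correct answers / clues Watson rang in on (game one).
correct = 38
errors = 5
attempts = correct + errors           # 43 clues answered
success_rate = correct / attempts     # ≈ 0.8837

print(round(success_rate * 100, 1))   # → 88.4
```

The Double Jeopardy figure follows the same formula, restricted to that round’s clues.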
While the second game (Day 3) wasn’t quite as impressive as the first, Watson still won by a significant margin. Considering it was competing against the two best human Jeopardy players of all time, it’s safe to say IBM met its goal and then some.
Some of the more intriguing (some would rightly say, concerning) moments in the contest were those in which Watson arrived at unfathomably wrong answers. As the lead on the project, Watson Principal Investigator Dr. David Ferrucci, commented:
“Watson absolutely surprises me. People say: ‘Why did it get that one wrong?’ I don’t know. ‘Why did it get that one right?’ I don’t know.”
The fact is, even Watson’s developers often can’t fathom how it arrives at the answers it does. Parsing millions of stored documents, Watson applies hundreds of algorithms to arrive at the answer with the highest confidence rating. (While this bears a passing resemblance to Minsky’s “society of mind” concept, it remains very different from the way humans think.) The incredible complexity of the process means we can’t fully understand it. This is the nature of emergent systems: they generate outcomes that often can’t be accurately predicted. They follow an internal logic of their own, one we can’t trace.
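The idea of many algorithms each scoring candidate answers, with the scores merged into a single confidence, can be sketched in miniature. This is only an illustration of the general pattern, not IBM’s DeepQA system; all the names, weights, and the buzz threshold below are hypothetical.

```python
# Toy sketch of confidence-rated answer merging (hypothetical names/values;
# the real Watson uses hundreds of scorers and a trained statistical merger).

def merge_confidence(scores, weights):
    """Combine one candidate's per-algorithm evidence scores into a 0..1 confidence."""
    return sum(w * s for w, s in zip(weights, scores)) / sum(weights)

def answer_clue(candidates, weights, buzz_threshold=0.5):
    """candidates maps each candidate answer to its list of per-algorithm scores.
    Returns (answer, confidence); answer is None if confidence is below threshold,
    modeling Watson declining to buzz in."""
    ranked = {ans: merge_confidence(s, weights) for ans, s in candidates.items()}
    best = max(ranked, key=ranked.get)
    confidence = ranked[best]
    return (best if confidence >= buzz_threshold else None), confidence

# Example: three hypothetical scoring algorithms, equal weights.
candidates = {"Toronto": [0.2, 0.1, 0.15], "Chicago": [0.6, 0.5, 0.4]}
answer, confidence = answer_clue(candidates, weights=[1, 1, 1])
print(answer, confidence)  # → Chicago 0.5
```

The emergent-behavior point falls out of even this toy: the final answer depends on interactions among all the scorers, so no single algorithm “explains” why a given answer won or lost.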
In Watson, we’re seeing the barest hints, the merest beginnings of this. The potential domain of future intelligences is vast. It’s possible that one day there will be as many different kinds of machine intelligence as there are biological species. And in all likelihood, we won’t understand the behaviors and motivations of a single one.
Watson is a long way from being an artificial general intelligence. It isn’t HAL 9000. But it is a huge step forward, and a step that should be making us ask serious questions about the future of AI. We face a future full of machine intelligences as smart as or smarter than we are. Some experts even speculate that recursive self-improvement will yield superintelligences vastly more intelligent than the entire human race combined. There should be no question that we’d be incapable of grasping the motivations of such a machine. And there’s no reason to expect its objectives will mesh with our own. Obviously, this could have results that are disastrous, potentially even existentially catastrophic.
We aren’t going to stop the advance of artificial intelligence or the eventual development of an artificial general intelligence. Therefore, steps will need to be taken to ensure these machines remain as benevolent as possible, and not because they would necessarily be malevolent otherwise. An indifferent superintelligence would be just as big a threat to humanity, because it could be capable of taking potentially world-altering actions without considering what they mean for us. Arguments for creating rules-based safeguards, such as Asimov’s “Three Laws of Robotics,” will likely fail, simply because rules can be misinterpreted or circumvented given sufficient motivation.
Work toward “Friendly AI”, as proposed by AI researcher Eliezer Yudkowsky, stands a much better chance of a human-positive outcome. Instilling a machine equivalent of morality protects us not only from the actions of a superintelligence, but from its self-improved progeny as well. Creating “Friendly” safeguards that motivate such a machine to do everything in its power to ensure humans do not come to harm, now or in the future, may be our best bet. As Yudkowsky states:
“Gandhi does not want to commit murder, and does not want to modify himself to commit murder.”
We can hope that a superintelligence comes to the same conclusion. But we can do more than just hope; we can work to ensure it happens.
The capabilities Watson has demonstrated using deep analytics and natural language processing are truly stunning. The technologies that develop from this will no doubt help the world with many of its significant problems, not least of which is dealing with the vast, escalating volumes of data our modern world generates. But there is the potential for significant dangers to arise from such technology too. I feel certain, though, that we can overcome these threats and continue the long legacy of building a better world with the help of our technology.
How’s that for techno-optimism?