Watson: The New Jeopardy Champion


I consider myself a techno-optimist, but Watson’s performance in Jeopardy’s IBM Challenge has definitely exceeded my expectations. While I did predict Watson would win the competition, I didn’t think it would be so dominant. This was a contest I thought machine intelligence might win by a narrow margin, but the three-day, two-game match wasn’t even close. AI has come a long way, baby.

As impressive as Watson’s final cash score was, what I think was more remarkable was its answer success rate. In the first match, out of sixty clues, Watson rang in first 43 times and answered 38 correctly, with five errors. That works out to an 88.4% success rate on the clues it attempted. If only the 30 clues in the Double Jeopardy round are considered, this jumps to a whopping 96%. You’ll notice I’ve left the Final Jeopardy clue out of these calculations. This is because that clue had to be answered regardless of the machine’s low confidence level of 14%. It matters to the competition, but it tells us little about the success of the machine’s algorithms.
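For the record, the arithmetic behind those percentages is simple. (The Double Jeopardy split of 24 right to one wrong is my inference from the 96% figure, not an official breakdown.)

```python
# Success rate = correct responses / clues actually attempted (not all 60).
correct, wrong = 38, 5
success_rate = correct / (correct + wrong) * 100   # ≈ 88.4%

# Double Jeopardy round: 24 of 25 attempts is consistent with 96%.
dj_rate = 24 / 25 * 100
```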

While the second game (Day 3) wasn’t quite as impressive as the first, Watson still won by a significant margin. Considering it was competing against the two best human Jeopardy players of all time, it’s safe to say IBM met its goal and then some.

Some of the more intriguing (some would rightly say, concerning) moments in the contest were those in which Watson arrived at unfathomably wrong answers. As Watson’s principal investigator, Dr. David Ferrucci, commented:

“Watson absolutely surprises me. People say: ‘Why did it get that one wrong?’ I don’t know. ‘Why did it get that one right?’ I don’t know.”

The fact is, even Watson’s developers often can’t fathom how it arrives at the answers it does. Parsing through millions of stored documents, Watson applies hundreds of algorithms to arrive at the answer with the highest confidence rating. (While this bears a passing resemblance to Minsky’s “society of mind” concept, it remains very different from the way humans think.) The incredible complexity of the process means we can’t fully understand it. This is the nature of emergent systems: much of the time, they generate outcomes that can’t be accurately predicted. They follow an internal logic of their own, one we can’t possibly trace.
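A toy sketch of that “highest confidence wins” idea, with made-up candidate answers and scores. (This is an illustration of the general principle, not IBM’s actual DeepQA code.)

```python
# Each candidate answer is scored by several independent algorithms,
# and a merged confidence decides both the answer and whether to buzz in.
def combined_confidence(scores):
    # A simple average stands in for DeepQA's learned score merging.
    return sum(scores) / len(scores)

# Hypothetical candidates with invented per-algorithm scores.
candidates = {
    "Toronto": [0.20, 0.10, 0.15],
    "Chicago": [0.90, 0.70, 0.80],
}

best = max(candidates, key=lambda a: combined_confidence(candidates[a]))
buzz_threshold = 0.50                 # only ring in when confident enough
should_buzz = combined_confidence(candidates[best]) > buzz_threshold
```

The interesting part is that no single algorithm decides anything; the behavior emerges from the merged scores, which is part of why even the developers can be surprised by the result.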

In Watson, we’re seeing the barest hints, the merest beginnings of this. The potential domain of future intelligences is vast. It’s possible that one day there will be as many different kinds of machine intelligence as there are biological species. And in all likelihood, we won’t understand the behaviors and motivations of a single one.

Watson is a long way from being an artificial general intelligence. It isn’t HAL 9000. But it is a huge step forward, one that should make us ask serious questions about the future of AI. We face a future full of machine intelligences as smart as or smarter than we are. Some experts even speculate that recursive self-improvement will yield superintelligences vastly more intelligent than the entire human race combined. There should be no question that we’ll be incapable of grasping the motivations of such a machine. And there’s no reason to expect its objectives will mesh with our own. Obviously, this could have results that are disastrous, potentially even existentially catastrophic.

We aren’t going to stop the advance of artificial intelligence or the eventual development of an artificial general intelligence. Therefore, steps will need to be taken to ensure these machines remain as benevolent as possible. That’s not because they will necessarily be malevolent otherwise; an indifferent superintelligence would be just as big a threat to humanity, because it could take potentially world-altering actions without ever considering what they mean for us. Arguments for creating rules-based safeguards, such as Asimov’s “Three Laws of Robotics,” will likely fail, simply because rules can be misinterpreted or circumvented given sufficient motivation.

Work toward “Friendly AI,” as proposed by AI researcher Eliezer Yudkowsky, stands a much better chance of a human-positive outcome. Instilling a machine equivalent of morality not only protects us from the actions of a superintelligence, but from its self-improved progeny as well. Creating “Friendly” safeguards that motivate such a machine to do everything in its power to ensure humans don’t come to harm, now or in the future, may be our best bet. As Yudkowsky states:

“Gandhi does not want to commit murder, and does not want to modify himself to commit murder.”

We can hope that a superintelligence comes to the same conclusion. But we can do more than just hope; we can work to ensure it happens.

The capabilities Watson has demonstrated using deep analytics and natural language processing are truly stunning. The technologies that develop from this will no doubt help the world with many of its significant problems, not least of which is dealing with the vast, escalating volumes of data our modern world generates. But there is the potential for significant dangers to arise from such technology too. I feel certain, though, that we can overcome these threats and continue the long legacy of building a better world with the help of our technology.

How’s that for techno-optimism?

What is a Milestone in Artificial Intelligence?

On January 13, 2011, IBM’s Watson supercomputer competed in a practice round of Jeopardy, the long-running trivia quiz show. Playing against the show’s two most successful champions, Ken Jennings and Brad Rutter, Watson won the preliminary match. Is this all a big publicity stunt? Of course it is. But it also marks a significant milestone in the development of artificial intelligence.

For decades, AI – artificial intelligence – has been pursued by computer scientists and others with greater and lesser degrees of success. Promises of Turing tests passed and human-level intelligence being achieved have routinely fallen far short. Nonetheless, there has continued to be an inexorable march toward more and ever more capable machine intelligences. In the midst of all this, IBM’s achievement in developing Watson may mark a very important turning point.

Early attempts at strong AI or artificial general intelligence (AGI) brought to light the daunting complexity of trying to emulate human intelligence. However, during the last few decades, work on weak AI – intelligence targeted to very specific domains or tasks – has met with considerably more success. As a result, today AI permeates our lives, playing a role in everything from anti-lock braking systems to warehouse stocking to electronic trading on stock exchanges. Little by little, AI has taken on roles previously performed by people and bested them in ways once unimaginable. Computer phone attendants capable of routing hundreds of calls a minute. Robot-operated warehouses that deliver items to packers in seconds. Pattern matching algorithms that pick out the correct image from among thousands in a matter of moments. But until now, nothing could compete with a human being when it came to general knowledge about the world.

True, these human champions may yet best Watson, a product of IBM’s DeepQA research project. (The three-day match will air February 14-16.) But we only need to think back to 1997, when IBM’s Deep Blue defeated world chess champion Garry Kasparov, to understand that it doesn’t really matter. Kasparov had handily beaten Deep Blue only a year earlier, though the 1996 match did mark the first time a computer won a single game in such a match. Today, just as then, continuing improvements in computer processing speed, memory, storage and algorithms all but ensure that any such triumph would be fleeting. We have turned a page on this once most human of intellectual feats, and the world won’t be the same again.

So what can we look ahead to now that we’ve reached this milestone? In the short term, IBM plans to market their technology and profit by their achievement. Initially, the system price will be high, probably in the millions of dollars, but like so much computer technology, the price will plummet over the coming decade. As the technology becomes more widely used, a range of tasks and jobs previously considered safe from AI will no longer be performed by human workers. Protectionist regulations may attempt to save these jobs but these efforts will probably be short-lived. The resulting large-scale unemployment will require a rethinking of government institutions and safety nets, as well as corporate business models.

At the same time, this type of general knowledge AI (it’s far too early to call it AGI) will contribute to greater and more rapid advances in machine intelligence. Such technology could bootstrap the Semantic Web into broad usage. In all likelihood, it will be used to create personal intelligent agents, giving users the virtual equivalent of a staff of assistants. And eventually, it could facilitate the development of a true artificial general intelligence or at least contribute to the education of such an AGI.

Will such an intelligence be conscious? Will it be self-improving, leading to a positive feedback loop that brings about a powerful and hopefully benign superintelligence? Only time will tell. But perhaps one day, on a future holographic version of Jeopardy, we’ll be presented with clues to which the correct response will be, “What was the Singularity?”

The Supercomputer Race

That China is barreling ahead in its development of supercomputers should give the U.S. considerable cause for concern. China has devoted significant resources to its supercomputer program in recent years, resulting in its ranking earlier this year at the number two spot on the TOP500 list. TOP500.org ranks the world’s 500 fastest supercomputers according to their performance on a dense system of linear equations. These tests yield a score based on the computer’s speed measured in double precision floating point operations per second (flops).

To give a little perspective: China didn’t have a single supercomputer ranked in the TOP500 until the mid-1990s. By June 2004, they had their first ranking ever in the top ten. In May 2010, their Nebulae system became the second fastest in the world with a performance of 1.271 petaflops. (A petaflop is 10^15 floating point operations per second.) While the Chinese still have only one tenth the number of TOP500 supercomputers the U.S. has, they’ve been quickly catching up on this metric as well. (Note: TOP500.org ranks the world’s most powerful, commercially available, non-distributed computer systems. There are numerous military and intelligence agency supercomputers in many countries not included in this list.)
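The units are easy to mix up, so here is a quick sanity check on the scales involved, using the Nebulae figure above:

```python
# SI scales for floating point operations per second (flops).
PETA = 10**15
EXA = 10**18

nebulae_flops = 1.271 * PETA        # Nebulae's Linpack score
petaflops_per_exaflop = EXA / PETA  # an exascale machine = 1,000 petaflops
```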

China’s Nebulae system operates from the newly built National Supercomputing Centre in Shenzhen. This is also the site of some very recent and very extensive construction which will presumably house some very serious supercomputing power in the near future. “There clearly seems to be a strategic and strong commitment to supercomputing at the very highest level in China,” stated Erich Strohmaier, head of the Future Technology Group of the Computational Research Division at Lawrence Berkeley National Laboratory.

The next major goal for supercomputers is the building of an exascale system sometime between 2018 and 2020. Such a system would be almost a thousand times faster than the Jaguar supercomputer at Oak Ridge National Laboratory, currently the world’s fastest. The U.S. Exascale Initiative is committed to developing this technology, which brings with it many challenges of scale. At the same time, Europe and China have accelerated their investment in high-performance systems, with Europe on a faster development track than the U.S. There are concerns the U.S. could be bypassed if it doesn’t sustain the investment needed to stay ahead.

This isn’t just about who has the highest ranking on a coveted list – it’s not a sporting event with a big fanfare for the winner. These computers are crucial for modeling, simulation, and large-scale analysis – everything from modeling complex weather systems to simulating biological processes. As our understanding of highly complex systems grows, the only way we’re going to be able to keep moving forward is with more and ever more computing power. At the same time, exascale computing is anticipated to be a highly disruptive technology, not only because of what it will be able to do, but because of the technologies that will be created in the course of developing it. Ultimately, these technologies will end up in all kinds of new products, not unlike what happened with the Apollo space program. Falling behind at this stage of the game would put the U.S. at a big disadvantage in almost every aspect of science and product development.

Just as concerning, I believe, is what this would mean for developing an AGI or artificial general intelligence. There’s been a lot of speculation by experts in the field of AI as to when (if ever) we might develop a human-level artificial intelligence. A recent survey of AI experts indicates we could realize human-level AI or greater in the next couple of decades. More than half of the experts surveyed thought this milestone would occur by mid-century. While there are many different avenues which may ultimately lead to an AGI, it’s a good bet that most of these will require some pretty serious computing power both for research and potentially for the substrate of the AGI itself.

It’s been speculated that there are considerable risks in developing a computer with human-level or greater intelligence, but there are a number of risks in not doing so as well. Whoever builds the first AGI will very probably realize an enormous competitive advantage, both economically and politically. Additionally, the world faces a growing number of existential threats which AGIs could play a critical role in helping us to avoid.

During this time of budget deficits and spending cuts, it would be very easy to decide that Big Science programs such as the Exascale Initiative aren’t crucial to the nation’s well-being. This would be a grave mistake. The question isn’t how we can afford to commit ourselves to this research, but how we can afford not to.

(NOTE: Beginning with this entry, I’ll be cross-posting my blog at the World Future Society – www.wfs.org.)

The Intelligence Revolution (Part 1)

In the intelligent future, positions, roles, entire fields will disappear from the jobs market. In many cases, the tasks themselves will still exist, but increasingly they’ll be done by ever more intelligent technologies instead of by humans. As in the past, these changes will be motivated by market forces and the ongoing incentive to improve efficiency and productivity. And like the Digital Revolution and the Industrial Revolution before it, the Intelligence Revolution will transform the jobs landscape for still another reason: it will lead to increasingly rapid innovation and product creation.

In the past, machines have eliminated jobs because they could be designed to be faster, stronger and more tireless than their flesh-and-blood counterparts. During the Industrial Revolution, this meant lower-skilled, often highly repetitive labor was displaced. Spinning, weaving and sewing machines transformed the work environment. Vast numbers of tradespeople saw their work taken away by what would eventually become a worldwide clothing industry, one that would itself one day create millions of jobs. The same happened to unskilled manual labor. Roads and railways were once built on the backs of thousands of workers; today, with the aid of machines, dozens can do the work previously performed by hundreds, if not thousands. These technologies (along with many others that came after them) built the cities, highways and factories that led to still more jobs.

The Digital Revolution had similar impacts. Office functions have seen such increases in efficiency that a single person can perform the work once performed by many. At the same time, that worker has seen a huge increase in available skills and technologies. These changes also took away a lot of jobs but led to the creation of many new ones as well. Desktop publishing, digital photography and digital video production utterly transformed their fields. At the same time these new technologies eliminated jobs, they put the ability to create mass media into the hands of an unprecedented number of people.

This antenna was created for a NASA communication satellite using evolutionary algorithms. It was designed to be efficient, not pretty.

But I suspect these examples will pale beside the changes brought about by the Intelligence Revolution. Two fields poised to transform entire industries are automated invention and rapid prototyping. Using techniques such as genetic programming (algorithms that utilize fitness selection), computers are increasingly able to invent devices far more efficient than anything that could be created by a person. For instance, evolutionary antenna synthesis is leading to better, more cost-effective antenna designs. In another field, evolvable hardware has improved microprocessor design, creating circuits that are denser, more complex and more fault-tolerant than anything that could be created by a team of unaided humans.
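To make “fitness selection” concrete, here is a minimal genetic algorithm in Python. The toy fitness function, which just rewards bit strings for containing more ones, stands in for what would be a simulated antenna-gain measurement in real evolutionary design; everything here is illustrative, not NASA’s actual code.

```python
import random

def fitness(genome):
    return sum(genome)  # toy objective: number of 1 bits

def evolve(length=20, pop_size=30, generations=60, seed=0):
    rng = random.Random(seed)
    # Start from a random population of bit strings.
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]           # keep the fittest half
        children = []
        while len(children) < pop_size:
            a, b = rng.sample(parents, 2)        # pick two parents
            cut = rng.randrange(1, length)       # single-point crossover
            child = a[:cut] + b[cut:]
            child[rng.randrange(length)] ^= 1    # one-point mutation
            children.append(child)
        pop = children
    return max(pop, key=fitness)

best_design = evolve()
```

The striking thing about this loop is that nothing in it knows what a good design looks like; selection pressure alone drives the population toward solutions, which is how evolved antennas end up efficient but strange-looking.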

As they develop, 3-D printer rapid-prototyping technologies will allow us to implement these inventions and designs far faster than we ever could before. The creation of open source rapid-prototypers, such as the RepRap and MakerBot projects, will make these technologies widely available, leading to a democratization of this process as well. 3-D designs for these machines are already widely shared and distributed via the internet and the concept will likely lead to downward price pressure as these technologies become increasingly sophisticated.

All of this is leading to a sea change for job markets and how we approach our careers and education. As always, such change will have consequences both good and bad. I’ll elaborate on this further in Part 2 of this series.

Printable electronics

Xerox silver ink is a breakthrough for printable electronics

Xerox announced this week that they’ve developed several breakthroughs in the printing of low-cost electronic circuits. Most notably, this includes their development of a silver-based conductor ink and significant improvements to their previously developed semiconductor ink. These advances will make circuits possible on a wide range of materials, including plastics, film and textiles. The process is scalable, so it could be used to create everything from low-cost miniature circuits to large video displays. Because the new silver ink can be deposited at a sufficiently low temperature (140°C), it won’t melt plastics, which was a previous obstacle. Finally, the process uses ink-jet technology and doesn’t require clean-room facilities such as those needed for silicon chip manufacturing.

Xerox has already begun aggressively marketing the new technology, which I believe will make a significant impact, both in established industries and in entirely new uses. The ability to print on fabrics should make wearable electronics really start to take off. Custom large-scale, flexible video screens and signage will become feasible. Sensors capable of detecting all manner of airborne pathogens and toxins will eventually become widespread.

Given the relatively inexpensive setup costs this will bring about in time, I’d expect a significant open source movement to build around printable electronics. Not only will this yield all sorts of innovations, but it will eventually make many consumer devices more affordable. Think RepRap for electronics.

But to me, by far the most exciting aspect of this breakthrough is the promise of low-cost RFID tags and sensors. Once these are in widespread use, every item in a warehouse, every book in a store, every piece of furniture in a home will be able to convey information about its location and state. Your refrigerator will be able to poll its contents, generating a shopping list or identifying items past their use-by date. The possessions in your house could be easily inventoried and recorded for insurance purposes, in case of theft or fire. Misplaced items could be located in an instant.
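As a hypothetical sketch of the refrigerator example, the tag fields and item data below are invented for illustration; real tags would carry only an ID that a home server resolves against product data.

```python
from datetime import date

# Invented sample of RFID-tagged items a "smart fridge" might report.
tagged_items = [
    {"name": "milk", "use_by": date(2011, 2, 10)},
    {"name": "eggs", "use_by": date(2011, 3, 1)},
]

def past_use_by(items, today):
    # Poll every tagged item and flag anything past its use-by date.
    return [item["name"] for item in items if item["use_by"] < today]

shopping_list = past_use_by(tagged_items, today=date(2011, 2, 14))
```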

In a word, everything in our environment would become more intelligent. And that’s only the beginning. Wireless communication with a central server or internet service could allow a failing device to arrange its own repair. (Obviously, with override options for the owner.) Sensors along roads and buildings could assist in everything from collision avoidance to giving directions to tourists. Gradually, additional features and processing would be added to each device, creating an ecosystem of interacting, inter-communicating electronics.

Of course, there will no doubt be considerable privacy and security issues to contend with, not to mention all kinds of new scams involving counterfeit RFID tags, false information, misdirection and so forth. But to be honest, I don’t see us foregoing the functional and marketable benefits because of such concerns. We’ll just have to build methods and measures and legislation to deal with the possible downsides of the technology. Because in the end, smarter really is better.