Blog

Watson: The New Jeopardy Champion


I consider myself a techno-optimist, but Watson’s performance in Jeopardy’s IBM Challenge has definitely exceeded my expectations. While I did predict Watson would win the competition, I didn’t think it would be so dominant. This was a contest I thought machine intelligence might win by a narrow margin, but the three-day, two-game match wasn’t even close. AI has come a long way, baby.

As impressive as Watson’s final cash score was, what I think was more remarkable was its answer success rate. In the first match, out of sixty clues, Watson rang in first and answered 38 correctly, with five errors, a success rate of 88.4%. If only the 30 clues in the Double Jeopardy round are considered, this jumps to a whopping 96%. You’ll notice I’ve left the Final Jeopardy clue out of these calculations. That’s because Final Jeopardy must be answered regardless of confidence, and Watson’s confidence in its response was a mere 14%. The result matters to the competition, but the low confidence rating actually indicates the success of the machine’s algorithms: Watson knew that it didn’t know.
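(For the arithmetically inclined, the headline figure is easy to verify. Here is a minimal check in Python using the game-one numbers above; the Double Jeopardy breakdown isn’t given in this post, so only the overall rate is computed.)

```python
# Sanity check of the success rate quoted above, using the game-one
# figures from this post: 38 correct, 5 errors on the clues Watson
# rang in on first (Final Jeopardy excluded).
correct = 38
errors = 5

rate = correct / (correct + errors)
print(f"Success rate: {rate:.1%}")  # prints: Success rate: 88.4%
```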

While the second game (Day 3) wasn’t quite as impressive as the first, Watson still won by a significant margin. Considering it was competing against the two best human Jeopardy players of all time, it’s safe to say IBM met its goal and then some.

Some of the more intriguing (some would rightly say concerning) moments in the contest were those in which Watson arrived at unfathomably wrong answers. As Watson’s principal investigator, Dr. David Ferrucci, commented:

“Watson absolutely surprises me. People say: ‘Why did it get that one wrong?’ I don’t know. ‘Why did it get that one right?’ I don’t know.”

The fact is, even Watson’s developers often can’t fathom how it arrives at the answers it does. Parsing millions of stored documents, Watson applies hundreds of algorithms to each clue and selects the candidate answer with the highest confidence rating. (While this bears a passing resemblance to Minsky’s “society of mind” concept, it remains very different from the way humans think.) The sheer complexity of the process puts it beyond our full understanding. This is the nature of emergent systems: much of the time they generate outcomes that can’t be accurately predicted, following an internal logic all their own.
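To make that opacity a bit more concrete, here is a deliberately toy sketch in Python of the general pattern: many independent scorers rate each candidate answer, the scores are pooled into a single confidence, and the system only rings in when its best candidate clears a threshold. Everything below (the scorers, the pooling, the threshold) is invented for illustration and reflects nothing of IBM’s actual DeepQA implementation.

```python
# Toy illustration of confidence-based answer selection. Both scorers
# are made up for this example; DeepQA's real evidence-scoring
# algorithms are vastly more sophisticated, and far more numerous.

def keyword_overlap(clue: str, candidate: str) -> float:
    """Fraction of the clue's words that also appear in the candidate."""
    clue_words = set(clue.lower().split())
    cand_words = set(candidate.lower().split())
    return len(clue_words & cand_words) / max(len(clue_words), 1)

def brevity_prior(clue: str, candidate: str) -> float:
    """Weakly prefer shorter answers, since most Jeopardy responses are short."""
    return 1.0 / (1.0 + len(candidate.split()))

SCORERS = [keyword_overlap, brevity_prior]  # the real system used hundreds
BUZZ_THRESHOLD = 0.50  # don't ring in unless confidence clears this bar

def choose_answer(clue, candidates):
    """Return (answer, confidence), with answer=None if the machine should pass."""
    ranked = [
        (sum(score(clue, c) for score in SCORERS) / len(SCORERS), c)
        for c in candidates
    ]
    confidence, answer = max(ranked)
    return (answer if confidence >= BUZZ_THRESHOLD else None, confidence)
```

Even in this cartoon version, the opacity is visible: the final confidence is an aggregate of many signals, and no single scorer “explains” why one answer beat another. Scale that up to hundreds of interacting algorithms running over millions of documents, and Dr. Ferrucci’s shrug becomes easy to understand.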

In Watson, we’re seeing the barest hints, the merest beginnings of this. The potential domain of future intelligences is vast. It’s possible that one day there will be as many different kinds of machine intelligence as there are biological species. And in all likelihood, we won’t understand the behaviors and motivations of a single one.

Watson is a long way from being an artificial general intelligence. It isn’t HAL 9000. But it is a huge step forward, one that should be making us ask serious questions about the future of AI. We face a future full of machine intelligences as smart as or smarter than we are. Some experts even speculate that recursive self-improvement will yield superintelligences vastly more intelligent than the entire human race combined. There should be no question that we’d be incapable of grasping the motivations of such a machine, and there’s no reason to expect its objectives would mesh with our own. Obviously, this could have disastrous, potentially even existentially catastrophic results.

We aren’t going to stop the advance of artificial intelligence or the eventual development of an artificial general intelligence. Therefore, steps will need to be taken to ensure these machines remain as benevolent as possible, and not because they would necessarily be malevolent otherwise. An indifferent superintelligence would be just as great a threat to humanity, because it could take potentially world-altering actions without ever considering what they mean for us. Rules-based safeguards, such as Asimov’s “Three Laws of Robotics,” will likely fail, simply because rules can be misinterpreted or circumvented given sufficient motivation.

Work toward “Friendly AI”, as proposed by AI researcher Eliezer Yudkowsky, stands a much better chance of a human-positive outcome. Instilling a machine equivalent of morality would protect us not only from the actions of a superintelligence but from those of its self-improved progeny as well. Creating “Friendly” safeguards that motivate such a machine to do everything in its power to ensure humans do not come to harm, now or in the future, may be our best bet. As Yudkowsky states:

“Gandhi does not want to commit murder, and does not want to modify himself to commit murder.”

We can hope that a superintelligence comes to the same conclusion. But we can do more than just hope; we can work to ensure it happens.

The capabilities Watson has demonstrated using deep analytics and natural language processing are truly stunning. The technologies that will develop from this will no doubt help the world with many of its significant problems, not least of which is dealing with the vast, escalating volumes of data our modern world generates. But there is the potential for significant dangers to arise from such technology too. I feel certain, though, that we can overcome these threats and continue the long legacy of building a better world with the help of our technology.

How’s that for techno-optimism?

The Future of Communications

Lascaux hunters

One of the themes I continually find myself returning to is the growing symbiotic relationship between human beings and their technologies. Every device, machine and factory needs us to invent it and build it, while at the same time we become increasingly dependent on these very technologies. We need them in order to thrive and often to survive; they need us to give them form and existence. If that’s not symbiotic, I don’t know what is.

Hopefully, you’ll get a little sense of this symbiosis in my latest article about the future of communications, “Technically Speaking”. To me, communications is one of the great technological threads woven through the tapestry of human history. From the first moment someone left notches on a piece of wood to record an event or painted on the walls of some now-forgotten cave, we embarked on a tremendous journey. A journey made hand in hand with the many technologies of communications. And as with all the best journeys, this one is likely to continue for a very long time.

Don’t Win the Future


If, like me, you find President Obama’s already overused “Win the Future” catchphrase catching in your throat, you might also be wondering how he decided on this feel-good but nonsensical slogan.* It seems incredible that an administration that so readily talks about future technologies doesn’t give better consideration to the strategies behind their promotion. Reducing the dialog to the metaphor of competition diminishes it before it has even gotten started. The future isn’t a prize, a thing to be won; it’s a process, a never-ending unfolding of the possible. As futurist Jamais Cascio recently wrote, in encouraging us to “Win the Future”, President Obama “is not just asking us to do something that simply cannot be done, he’s asking us to accept a meager, ephemeral sense of triumph, when we could do so much more.”

It should also make us wonder about the government’s collective grasp of the concepts and processes essential to futures thinking. As complex as our challenges are and will be in the coming decades, we need to be using all of the tools at our command.

I’ve wondered from time to time about the idea of some sort of federal “Department of the Future” or “Office of Foresight”. Part of me rebels against such an Orwellian-sounding governmental agency, but on the other hand, we need to be making policy decisions with a much longer-term, systems-oriented view than we currently do.

Certainly, there are other departments and agencies that incorporate futures methodology – the intelligence agencies and the military, for example. But there can be little doubt these entities have a particular focus and are therefore limited by their own filters. Would we be better served by a nonpartisan futures equivalent of, say, the Congressional Budget Office, one that could provide an assessment of the potential impacts of a particular piece of legislation? Could we reduce wasted tax dollars, not to mention avoid unintended consequences, especially those that could have been readily foreseen?

Remember the tax credit for ethanol production? Because a significant percentage of corn crops were diverted to energy production, food prices around the world skyrocketed. (Commodity speculation was also a contributing factor, though it can be argued this was exacerbated by the policy.) People in some parts of the developing world suffered considerable hardship and many starved. Was such an outcome so impossible to anticipate?

Despite this, I’m not saying I’m entirely convinced an “Office of Foresight” is the right way to go. But I do think it’s worthy of exploration and dialog. It’s not as if there aren’t already precedents. In the UK, the government’s Foresight Programme was established to help policymakers think systematically about the future and apply that thinking to developing policy and strategy.

Of course, there are already futures organizations that inform and advise government, but could we be better served by a more fundamental integration of these disciplines into our policy making process?

Maybe this is a good idea. Maybe it isn’t. What do you think? As for me, I know we can do better than to approach the future with the same mentality we bring to a basketball game.

*(Full disclosure: I was and still am an Obama supporter and contributed to his 2008 presidential campaign.)

What is a Milestone in Artificial Intelligence?

On January 13, 2011, IBM’s Watson supercomputer competed in a practice round of Jeopardy, the long-running trivia quiz show. Playing against the show’s two most successful champions, Ken Jennings and Brad Rutter, Watson won the preliminary match. Is this all a big publicity stunt? Of course it is. But it also marks a significant milestone in the development of artificial intelligence.

For decades, AI – artificial intelligence – has been pursued by computer scientists and others with greater and lesser degrees of success. Promises of Turing tests passed and human-level intelligence achieved have routinely fallen far short. Nonetheless, there has been an inexorable march toward ever more capable machine intelligences. In the midst of all this, IBM’s achievement in developing Watson may mark a very important turning point.

Early attempts at strong AI or artificial general intelligence (AGI) brought to light the daunting complexity of trying to emulate human intelligence. However, during the last few decades, work on weak AI – intelligence targeted to very specific domains or tasks – has met with considerably more success. As a result, today AI permeates our lives, playing a role in everything from anti-lock braking systems to warehouse stocking to electronic trading on stock exchanges. Little by little, AI has taken on roles previously performed by people and bested them in ways once unimaginable. Computer phone attendants capable of routing hundreds of calls a minute. Robot-operated warehouses that deliver items to packers in seconds. Pattern matching algorithms that pick out the correct image from among thousands in a matter of moments. But until now, nothing could compete with a human being when it came to general knowledge about the world.

True, these human champions may yet best Watson, a product of IBM’s DeepQA research project. (The three-day match will air February 14–16.) But we only need to think back to 1997, when IBM’s Deep Blue defeated world chess champion Garry Kasparov, to understand that it doesn’t really matter. Kasparov had handily beaten Deep Blue only a year earlier, though the 1996 match did mark the first time a computer won a single game in such a match. Today, just as then, the continuing improvements in computer processing speed, memory, storage and algorithms all but ensure that any such triumph would be fleeting. We have turned a page on this once most human of intellectual feats, and the world won’t be the same again.

So what can we look ahead to now that we’ve reached this milestone? In the short term, IBM plans to market their technology and profit by their achievement. Initially, the system price will be high, probably in the millions of dollars, but like so much computer technology, the price will plummet over the coming decade. As the technology becomes more widely used, a range of tasks and jobs previously considered safe from AI will no longer be performed by human workers. Protectionist regulations may attempt to save these jobs but these efforts will probably be short-lived. The resulting large-scale unemployment will require a rethinking of government institutions and safety nets, as well as corporate business models.

At the same time, this type of general knowledge AI (it’s far too early to call it AGI) will contribute to greater and more rapid advances in machine intelligence. Such technology could bootstrap the Semantic Web into broad usage. In all likelihood, it will be used to create personal intelligent agents, giving users the virtual equivalent of a staff of assistants. And eventually, it could facilitate the development of a true artificial general intelligence or at least contribute to the education of such an AGI.

Will such an intelligence be conscious? Will it be self-improving, leading to a positive feedback loop that brings about a powerful and hopefully benign superintelligence? Only time will tell. But perhaps one day, on a future holographic version of Jeopardy, we’ll be presented with clues to which the correct response will be, “What was the Singularity?”

Retirement Planning for Boomers

January 1, 2011 marked yet another milestone for the Baby Boomers, the massive post-war generation born between 1946 and 1964. As of this New Year’s Day, the first of the boomers turned 65, with an additional ten thousand becoming senior citizens every single day. As many have long observed, this will create pressures and challenges that will ripple throughout our society. Pensions, health care, housing and jobs are only a few of the areas that will be impacted by this outsized demographic shift.

At the same time, many among the youngest of the adult generations – Generation Y, or the Millennials – now find their career opportunities considerably more limited than their parents once did. Currently, adults from 18 to 24 years old are experiencing two to three times the unemployment rate of the rest of the working population. (Bureau of Labor Statistics, Nov. 2010) With the Great Recession and its glacially slow recovery likely to hold unemployment at these levels for at least several more years, the Millennials have a rough road ahead of them. And let us not forget, it’s a road we all travel on together.

On this scale, stagnated careers and lost opportunities impact more than just the lives of individuals and their families. Federal, state and local tax revenues, as well as Social Security and Medicare, would all be affected by the resulting reduction in earnings. The inability of a large fraction of this demographic to participate in home ownership would also depress home values as demand withers relative to supply. But perhaps most concerning is the potential for civil unrest.

Sociologists have often noted a strong correlation between high levels of unemployment among young adults, particularly young men, and the prevalence of war, gang activity and crime in general. Extreme disparities between different segments of society tend to lead to greater levels of discontent, particularly when that segment is disaffected youth. Given the high cost to society of such behavior, stepping up our investment in programs that facilitate education, re-training, job creation and placement would be money wisely spent.

Of course, technology is having a considerable impact on employment as well. Productivity gains due to computerization have been responsible for progressive job loss in some sectors for years. This is probably also one of the reasons employers have been slow to rehire during the current recovery. (Significant job growth has also resulted from the computer revolution. Whether this has led to a total net gain or net loss of jobs is beyond the scope of this post.) Computerized supply chain management, high-speed communications and other technology advances have made global capitalism possible and contributed to outsourcing, another reason for fewer jobs at the local level. Looking ahead, as systems become more intelligent and robotics become more adept, far more jobs are likely to disappear in the coming years.

It’s important, too, to remember this is not an exclusively American phenomenon. Today, Japan’s young adults experience unemployment or underemployment at twice the national rate. In Europe, the disparity is even worse. A recent New York Times article on the lack of employment opportunities for young adults in southern Europe reported “an epic brain drain of college graduates” as they seek work elsewhere.

The economic balance of the world is shifting. The old powerhouses of the U.S., Europe and Japan are rapidly being outstripped by the BRICs (Brazil, Russia, India and China), with the CIVETS (Colombia, Indonesia, Vietnam, Egypt, Turkey and South Africa) and other acronyms coming up fast. In nature, systems tend to adjust according to differentials in potential, and the same is true of nations and economies. If we’re not careful, the confluence of demographic, technological and economic shifts could quickly lead to a future in which the Baby Boomers find themselves in a precarious and underfunded retirement.

The Inevitability of Invention


What do nuclear technology, embryonic stem cells, synthetic life and molecular nanotechnology have in common? For many people, these are strange and frightening concepts which conjure erroneous, often very dystopic visions of the future. They’re also technologies with enormous potential; they could seriously damage our world or they could be immensely beneficial. But perhaps most importantly, all of them are inevitable.

Change means risk, and through the ages our brains have evolved to be wary of big changes. Because of this, some of us are inclined to want to stop progress altogether, or at least to slow it down. Some new technology or knowledge has the potential to be dangerous, and so it’s argued that it should be proscribed, banned, halted. But of course, it’s never that simple. The fact is, when the time comes, we can’t stop a technology from coming into existence any more than we can stop a freight train with our bare hands.

In his new book, “What Technology Wants”, Kevin Kelly makes the argument that technology is autonomous and has its own distinct direction and momentum. He details (what many have long known or suspected) that most inventions are made not because of someone’s singular genius, but because the time is right.

Logarithms. Calculus. Oxygen. Evolution. Photography. Steamboats. Telegraphs. Telephones. Incandescent bulbs. Typewriters. Transistors. Nuclear bombs. All of these, and so very many more, were independently discovered or invented at nearly the same time in history. The prevalence of these “simultaneous inventions” strongly suggests that when the time is right, a particular technology will be thrust upon us, whether we want it or not.

This isn’t to say that any of this is predetermined; only that once a particular set of conditions, capabilities and knowledge is in place, the next technological step is probably going to occur. While we can’t say the flux capacitor will be invented on August 23, 2029, we can make a reasonable estimate of when certain technologies are likely to be feasible. This can aid us in preparing for their arrival and in our endeavors to ensure their impact is as beneficial as possible.

Efforts to ban knowledge and the technologies it makes possible are doomed to failure. Stop research in one country and it will almost certainly continue somewhere else. Drive it underground and it will still go on, only without adequate regulation and oversight. Prohibiting emerging technologies will ensure you fall behind the competition. It will probably also mean not having a say in how that technology is developed or what direction it ultimately takes.

New technology is inevitable. Each new addition is just waiting its turn on the timeline of possibility.