Are You Smarter Than a Sixth-Generation Computer?


My latest article, “Are You Smarter Than a Sixth-Generation Computer?”, leads off this month’s issue of The Futurist magazine (September/October 2012). The article explores the need for a standard metric for AIs and is based on my paper, “Toward a Standard Metric of Machine Intelligence”, which I recently published in the World Future Review. The purpose of the article (and the paper) can be summed up as follows:

As our world becomes increasingly filled with technological intelligence, it will serve us well to know exactly how smart our machines are and in what ways. Given that we try to measure almost every other aspect of our world, it seems only prudent that we accurately measure the intelligence of our machines as well — especially since, by some projections, they’re expected to surpass us in the coming decades.

During the next few decades we’re going to see significant gains in the field of strong AI, potentially giving rise to artificial general intelligences (AGIs). Universal Intelligence tests such as those described here would seem a crucial tool as we prepare for the changes this will bring.

The Supercomputer Race, Revisited

Nine months ago, I wrote a post called “The Supercomputer Race” about China’s then top-ranked supercomputer, the Tianhe-1A, and what it meant for the U.S. The Tianhe-1A achieved 2.57 petaflops (a petaflop is 10^15 floating point operations per second) on LINPACK, a benchmark based on solving a dense system of linear equations. In comparison, the world’s next fastest system at the time was Oak Ridge National Laboratory’s Jaguar, clocking in at 1.76 petaflops. (Rankings are based on TOP500’s semi-annual list of the world’s five hundred fastest supercomputers.)
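
To make the measurement concrete: LINPACK-style benchmarks time the solution of a dense system of linear equations, Ax = b, and convert the elapsed time into floating point operations per second using a standard operation count. The sketch below is a minimal single-machine illustration of that idea in Python; the real HPL benchmark used for TOP500 runs is a heavily tuned, distributed code, and the matrix size here is just an example.

```python
import time
import numpy as np

def linpack_style_gflops(n: int = 4000, seed: int = 0) -> float:
    """Solve a random dense system Ax = b and report GFLOP/s.

    Illustrative only: real TOP500 runs use enormous distributed
    matrices and highly optimized solvers, but the operation count
    (2/3)n^3 + 2n^2 is the same one HPL uses.
    """
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((n, n))
    b = rng.standard_normal(n)

    start = time.perf_counter()
    np.linalg.solve(A, b)          # LU factorization plus triangular solves
    elapsed = time.perf_counter() - start

    flops = (2.0 / 3.0) * n**3 + 2.0 * n**2
    return flops / elapsed / 1e9   # giga-flops

if __name__ == "__main__":
    print(f"~{linpack_style_gflops():.1f} GFLOP/s on this machine")
```

A petaflop system sustains 10^6 GFLOP/s on this same kind of computation, just at vastly larger scale.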

Today TOP500 released their latest rankings, which put Japan’s K Computer in the number one spot with 8.162 petaflops (PFLOPS), more than three times the performance of the now number-two Tianhe-1A. How was such a sharp increase realized, and what does it mean for supercomputing in the future?

A little history: TOP500 has been ranking the world’s supercomputers since 1993. During this time, individual and cumulative performance has followed a steady pattern of exponential growth. As with this latest ranking, the number one system has shown a significant jump in some years (e.g., 1997, 2002), followed by years of more modest improvement. Cumulative totals, on the other hand, have been very consistent because of the broader data set, and are probably a better indicator of where the trend stands overall. (Interestingly, RIKEN’s K Computer represents a jump not seen since Japan’s last number one, the Earth Simulator, in 2002.) Not surprisingly, the plot points for the performance growth of the number one spot approximate a classic series of sigmoid growth curves, as technologies reach their limits and are superseded by others.

TOP500 June 2011: Projected Performance (chart)

The substantial leap forward last year by the Tianhe-1A can mostly be attributed to one significant improvement: the implementation of radically faster interconnects. Rather than chasing the latest step up in CPU technology, the designers of the Tianhe-1A focused on the biggest bottleneck in the system. Interconnects are the networking chipsets that coordinate the data continually being moved between processors, in this case thousands of Intel Westmere and Nvidia Fermi processors. China’s homegrown Galaxy interconnects delivered a huge performance improvement, running at roughly twice the speed of the InfiniBand interconnects used in many other systems.
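
A rough way to see why attacking the bottleneck pays off: when a large fraction of a run is spent waiting on the network, speeding up the interconnect helps more than speeding up the processors. The toy model below makes that point with invented numbers; the 40/60 compute-to-communication split is purely illustrative, not a measurement of the Tianhe-1A.

```python
def overall_speedup(compute_frac: float, comm_frac: float,
                    compute_boost: float = 1.0, comm_boost: float = 1.0) -> float:
    """Amdahl's-law-style estimate: speed up the compute and communication
    portions of a run independently and return the overall speedup."""
    new_time = compute_frac / compute_boost + comm_frac / comm_boost
    return 1.0 / new_time

# Hypothetical workload that spends 60% of its time on communication.
print(f"2x faster interconnect: {overall_speedup(0.4, 0.6, comm_boost=2.0):.2f}x overall")
print(f"2x faster processors:   {overall_speedup(0.4, 0.6, compute_boost=2.0):.2f}x overall")
```

Under that assumption the interconnect upgrade wins, which is the logic of targeting the bottleneck rather than the processors.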

This latest ranking saw improvements that are due to a related trend: the transition away from monolithic CPU-based systems to heterogeneous platforms. (Heterogeneous platforms utilize a variety of different types of computational units, including CPUs, GPUs, interconnects, etc.) Looking at the trend line, the Tianhe-1A represented a roughly 50% increase over Oak Ridge’s Jaguar, while Japan’s K Computer improves on the Tianhe-1A by more than 200%. During the next year, two U.S. systems are slated to become operational with peak performances in the 20 PFLOPS range, a further gain of around 150%.
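
Those relative gains follow directly from the figures quoted in these posts; a quick check:

```python
# Relative gains between successive top systems, using the figures
# cited in this post (PFLOPS). The 20 PFLOPS entry is a peak target
# for planned U.S. systems, not a measured LINPACK score.
systems = [
    ("Jaguar (ORNL)", 1.76),
    ("Tianhe-1A", 2.57),
    ("K Computer", 8.162),
    ("Planned U.S. systems", 20.0),
]

for (prev_name, prev_pflops), (name, pflops) in zip(systems, systems[1:]):
    gain = (pflops - prev_pflops) / prev_pflops * 100
    print(f"{name}: {gain:.0f}% faster than {prev_name}")
```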

So does this point to a long-term increase in the rate of improvement in supercomputing performance? I’d say, probably not. The elimination of bottlenecks and the transition to new approaches will likely be a blip on the trend line. As the industry moves toward the target of exascale supercomputing later this decade, we’re likely to see improvements slow at various points as we deal with some very considerable challenges of scale. It’s been said that while the move from terascale to petascale computing was evolutionary, the leap from petascale to exascale will be revolutionary. The solutions used in the earlier systems simply won’t scale up without significant changes being made.

A common question among the general public is “why do we even need more powerful supercomputers? Can’t we get by with what we have already?” The simple answer is ‘No’. If the U.S. wants to remain a leading technological and economic force in the world, it will be necessary to invest in a future in which supercomputers play a central role. If we’re to see the nascent technologies of the 21st century realized, we’ll need the vast processing power of exascale systems and beyond. Likewise, we’ll need next-generation supercomputers if we’re to overcome many of the challenges the world now faces. Our digital world is generating enormous quantities of data, data that is itself growing exponentially. Bioinformatics, proteomics and brain simulation are but a few of the fields that will require continuing improvements in supercomputing to deal with their immense data sets. For similar reasons, we’ll need these computers for complex analytic systems such as IBM’s DeepQA Project, more commonly known as Watson. The ability to create tremendously detailed climate models will also be essential as we deal with human-caused climate change, whether to predict its consequences or to implement solutions. In short, to abandon advances in supercomputing is to abandon our place in the future.

(The future of information management is explored in my recent article, “Treading in the Sea of Data”, in the July/August 2011 issue of The Futurist. The article is an abridged version of my paper, which will be published this summer in the WorldFuture 2011 conference volume, “Moving From Vision to Action”, edited by Cynthia G. Wagner.)

Exascale Supercomputers: The Next Frontier

Cray Jaguar Supercomputer (photo)

The last few years finally saw the arrival of supercomputers capable of petascale performance. In all, seven systems from the U.S., China, Japan, and France had achieved the milestone of processing a million billion (10^15) floating point operations per second (flops) by the end of 2010. But even before this target was reached, computer scientists and engineers were setting their sights on an even loftier goal: exascale computing.

The supercomputer has become the mainstay of both theoretical and applied science. Climate modeling, genome analysis, protein folding, nuclear fusion research and many other fields all benefit from the continuing gains in processing power. Now, with a range of exascale initiatives, the U.S. and Europe have set a goal of building a supercomputer one thousand times more powerful than any today. And they want to do it sometime between 2018 and 2020.

At first glance, this seems like it should definitely be achievable. After all, three orders of magnitude in seven to nine years certainly falls within the scope of Moore’s Law. But whereas the move from terascale to petascale processing was considered evolutionary, the jump to exascale supercomputers will require advances that are revolutionary. Simply scaling up current technology won’t work. For instance, the Cray Jaguar supercomputer at Oak Ridge National Laboratory has more than a quarter of a million processor cores, over 360 terabytes of memory, and uses 7.8 megawatts at peak power. Its combination of air and liquid cooling removes enough waste heat to potentially warm several large buildings. Scaling such a system up a thousand-fold just isn’t feasible.
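
To see just how infeasible, multiply Jaguar’s figures by a thousand. A back-of-the-envelope sketch, using the round numbers quoted above:

```python
# Naive 1000x scale-up of Jaguar's rough published specs (figures as quoted above).
SCALE = 1000

jaguar = {
    "cores": 250_000,     # "more than a quarter of a million processor cores"
    "memory_tb": 360,     # terabytes of memory
    "power_mw": 7.8,      # megawatts at peak
}

scaled = {key: value * SCALE for key, value in jaguar.items()}

print(f"Cores:  {scaled['cores']:,}")                    # 250,000,000 cores
print(f"Memory: {scaled['memory_tb'] / 1000:,.0f} PB")   # ~360 petabytes
print(f"Power:  {scaled['power_mw'] / 1000:.1f} GW")     # ~7.8 gigawatts
```

A machine drawing nearly 8 gigawatts would need the output of several large power plants to itself, which is why the power figure, more than any other, drives the search for fundamentally more efficient designs.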

So new technologies will have to be developed. We’ll need processors with considerably lower power requirements and heat generation, faster optical interconnects and improved algorithms that provide better utilization of available processing cycles. And that’s just for starters.

Will we be able to achieve this goal in the timeframe that’s been set? Possibly, but only just. Peter Kogge, a professor of computer science and engineering, recently wrote in IEEE Spectrum about his concerns regarding the realization of exascale computing. Kogge was editor and study lead for the Exascale Computing Study initiated by the Defense Advanced Research Projects Agency (DARPA). The study illuminated a number of obstacles that will have to be overcome. (Note: the DARPA study was conducted in 2007 to determine the feasibility of exascale computing by 2015.)

But the dream of exascale supercomputers is important enough that DARPA, the U.S. Department of Energy, and private industry are forging ahead despite such concerns. Last year, Intel opened three new centers dedicated to exascale research: the Exascale Computing Research Center in Paris, France; the ExaCluster Laboratory in Juelich, Germany; and the ExaScience Lab in Leuven, Belgium.

Why is exascale supercomputing so important? The world faces significant challenges in the coming decades. Dealing with climate change, peak oil and a multitude of engineering challenges will require tremendous computing resources. At the same time, we’ve entered an era of massive data sets. Everything from genomics and proteomics to molecular modeling to nanotechnology will benefit from these advances. In short, much of the science of the 21st century will be impacted by exascale supercomputing.

The great thing about a grand challenge like this is that even if it takes longer than expected to achieve, all kinds of research and innovation will yield benefits along the way. New processor architectures, improvements in energy efficiency, and advances in parallel algorithms are but a few of the gains we can expect to eventually trickle down to other, more publicly accessible uses.

But the U.S. and Europe aren’t the only players pursuing the exascale dream. China has made very clear its goal of continuing to build the world’s fastest supercomputers. As of November 2010, its Tianhe-1A supercomputer was ranked the world’s fastest by TOP500.org. (TOP500.org ranks the world’s 500 fastest supercomputers according to their performance on a dense system of linear equations.) China is also building its third National Supercomputing Center in Changsha, Hunan Province, a massive complex that is expected to be completed by the end of 2011. China has set a goal of building an exascale supercomputer sometime between 2016 and 2020, which should give us cause for concern. Given its focus and the speed with which it moved to the top rank with the Tianhe-1A, China could definitely jump far ahead if we don’t make this a priority.

Fortunately, the Obama administration has asked for $126 million in the 2012 budget for the development of next-generation supercomputers. Whether it will receive approval from the new Congress remains to be seen. In my opinion, a decision not to fund such important technology could have far-reaching consequences for our competitiveness in the world and would definitely show a lack of foresight.

Watson: The New Jeopardy Champion


I consider myself a techno-optimist, but Watson’s performance in Jeopardy’s IBM Challenge has definitely exceeded my expectations. While I did predict Watson would win the competition, I didn’t think it would be so dominant. This was a contest I thought machine intelligence might win by a narrow margin, but the three-day, two-game match wasn’t even close. AI has come a long way, baby.

As impressive as Watson’s final cash score was, what I think was more remarkable was its answer success rate. In the first match, out of sixty clues, Watson rang in first and answered 38 correctly, with five errors. This is an 88.4% success rate. If only the 30 clues in the Double Jeopardy round are considered, this jumps to a whopping 96%. You’ll notice I’ve left the Final Jeopardy clue out of these calculations. This is because that question had to be answered regardless of the machine’s low confidence rating of 14%. The forced guess matters to the competition, but the low confidence rating itself actually indicates the success of the machine’s algorithms.
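
The success rates above follow directly from the clue counts; for anyone who wants to check the arithmetic:

```python
# First-game answer statistics as quoted in this post.
correct, wrong = 38, 5
attempted = correct + wrong

print(f"Clues Watson rang in on first: {attempted}")
print(f"Success rate: {correct / attempted:.1%}")   # 88.4%

# The Double Jeopardy figure of 96% is computed the same way,
# restricted to the clues Watson answered in that round.
```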

While the second game (Day 3) wasn’t quite as impressive as the first, Watson still won by a significant margin. Considering it was competing against the two best human Jeopardy players of all time, it’s safe to say IBM met its goal and then some.

Some of the more intriguing (some would rightly say, concerning) moments in the contest were those in which Watson arrived at unfathomably wrong answers. As Dr. David Ferrucci, the project lead and Watson’s principal investigator, commented:

“Watson absolutely surprises me. People say: ‘Why did it get that one wrong?’ I don’t know. ‘Why did it get that one right?’ I don’t know.”

The fact is, even Watson’s developers often can’t fathom how it arrives at the answers it does. Parsing millions of stored documents, Watson applies hundreds of algorithms to arrive at the answer with the highest confidence rating. (While this bears a passing resemblance to Minsky’s “society of mind” concept, it remains very different from the way humans think.) The incredible complexity of the process means we can’t fully understand it. This is the nature of emergent systems: much of the time they generate outcomes that can’t be accurately predicted. They follow an internal logic of their own, one we can’t possibly follow.
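
To make the idea of confidence-weighted answer selection a little more concrete, here is a deliberately simplified sketch of that general approach: several independent scoring functions each rate every candidate answer, their scores are combined into a single confidence, and the system only answers when that confidence clears a threshold. This is a conceptual illustration, not IBM’s DeepQA architecture; the scorers, weights, and threshold are invented for the example.

```python
from typing import Callable, Dict, List, Optional, Tuple

# A scorer inspects a candidate answer against the clue and returns a score
# in [0, 1]. Real question-answering systems combine hundreds of such signals.
Scorer = Callable[[str, str], float]

def keyword_overlap(clue: str, candidate: str) -> float:
    clue_words = set(clue.lower().split())
    cand_words = set(candidate.lower().split())
    return len(clue_words & cand_words) / max(len(cand_words), 1)

def brevity_prior(clue: str, candidate: str) -> float:
    # Toy prior that mildly prefers shorter answers.
    return 1.0 / (1.0 + 0.1 * len(candidate.split()))

def pick_answer(clue: str, candidates: List[str],
                scorers: Dict[Scorer, float],
                threshold: float = 0.5) -> Tuple[Optional[str], float]:
    """Return (answer, confidence); answer is None if confidence is too low to 'buzz in'."""
    total_weight = sum(scorers.values())
    best, best_conf = None, 0.0
    for cand in candidates:
        conf = sum(w * score(clue, cand) for score, w in scorers.items()) / total_weight
        if conf > best_conf:
            best, best_conf = cand, conf
    return (best if best_conf >= threshold else None), best_conf

if __name__ == "__main__":
    clue = "In June 2011 this computer from RIKEN in Japan became the fastest in the world"
    candidates = ["the K Computer", "Tianhe-1A", "Jaguar"]
    answer, confidence = pick_answer(clue, candidates,
                                     {keyword_overlap: 0.7, brevity_prior: 0.3})
    print(answer, f"(confidence {confidence:.2f})")
```

The low-confidence path matters as much as the high-confidence one: just as Watson’s 14% confidence on the Final Jeopardy clue signaled that its answer was a forced guess, a system like this uses the confidence figure to decide whether to answer at all.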

In Watson, we’re seeing the barest hints, the merest beginnings of this. The potential domain of future intelligences is vast. It’s possible that one day there will be as many different kinds of machine intelligence as there are biological species. And in all likelihood, we won’t understand the behaviors and motivations of a single one.

Watson is a long way from being an artificial general intelligence. It isn’t HAL 9000. But it is a huge step forward, a step that should be making us ask serious questions about the future of AI. We face a future full of machine intelligences as smart as or smarter than we are. Some experts even speculate that recursive self-improvement will yield superintelligences vastly more intelligent than the entire human race combined. There should be no question that we’d be incapable of grasping the motivations of such a machine. And there’s no reason to expect its objectives will mesh with our own. Obviously, this could have results that are disastrous, potentially even existentially catastrophic.

We aren’t going to stop the advance of artificial intelligence or the eventual development of an artificial general intelligence. Therefore, we will need to take steps to ensure these machines remain as benevolent as possible, and not because they would necessarily be malevolent otherwise. An indifferent superintelligence would be just as big a threat to humanity, because it could take potentially world-altering actions without considering what they mean for us. Arguments for creating rules-based safeguards, such as Asimov’s “Three Laws of Robotics”, will likely fail, simply because rules can be misinterpreted or circumvented given sufficient motivation.

Work toward “Friendly AI”, as proposed by AI researcher Eliezer Yudkowsky, stands a much better chance of producing a human-positive outcome. Instilling a machine equivalent of morality protects us not only from the actions of a superintelligence but also from those of its self-improved progeny. Creating “Friendly” safeguards that motivate such a machine to do everything in its power to ensure humans do not come to harm, now or in the future, may be our best bet. As Yudkowsky states:

“Gandhi does not want to commit murder, and does not want to modify himself to commit murder.”

We can hope that a superintelligence comes to the same conclusion. But we can do more than just hope; we can work to ensure it happens.

The capabilities Watson has demonstrated using deep analytics and natural language processing are truly stunning. The technologies that develop from this will no doubt help the world with many of its significant problems, not least of which is dealing with the vast, escalating volumes of data our modern world generates. But there is also the potential for significant dangers to arise from such technology. I feel certain, though, that we can overcome these threats and continue the long legacy of building a better world with the help of our technology.

How’s that for techno-optimism?

What is a Milestone in Artificial Intelligence?

On January 13, 2011, IBM’s Watson supercomputer competed in a practice round of Jeopardy, the long-running trivia quiz show. Playing against the show’s two most successful champions, Ken Jennings and Brad Rutter, Watson won the preliminary match. Is this all a big publicity stunt? Of course it is. But it also marks a significant milestone in the development of artificial intelligence.

For decades, AI – artificial intelligence – has been pursued by computer scientists and others with greater and lesser degrees of success. Promises of Turing tests passed and human-level intelligence being achieved have routinely fallen far short. Nonetheless, there has continued to be an inexorable march toward more and ever more capable machine intelligences. In the midst of all this, IBM’s achievement in developing Watson may mark a very important turning point.

Early attempts at strong AI or artificial general intelligence (AGI) brought to light the daunting complexity of trying to emulate human intelligence. However, during the last few decades, work on weak AI – intelligence targeted to very specific domains or tasks – has met with considerably more success. As a result, today AI permeates our lives, playing a role in everything from anti-lock braking systems to warehouse stocking to electronic trading on stock exchanges. Little by little, AI has taken on roles previously performed by people and bested them in ways once unimaginable. Computer phone attendants capable of routing hundreds of calls a minute. Robot-operated warehouses that deliver items to packers in seconds. Pattern matching algorithms that pick out the correct image from among thousands in a matter of moments. But until now, nothing could compete with a human being when it came to general knowledge about the world.

True, these human champions may yet best Watson, a product of IBM’s DeepQA research project. (The three-day match will air February 14-16.) But we only need to think back to 1997, when IBM’s Deep Blue defeated world chess champion Garry Kasparov, to understand that it doesn’t really matter. Kasparov had handily beaten Deep Blue only a year earlier, though the 1996 match did mark the first time a computer won a single game in such a match. Today, just as then, the continuing improvements in computer processing speed, memory, storage, and algorithms all but ensure that any such triumph would be fleeting. We have turned a page on this once most human of intellectual feats, and the world won’t be the same again.

So what can we look ahead to now that we’ve reached this milestone? In the short term, IBM plans to market their technology and profit by their achievement. Initially, the system price will be high, probably in the millions of dollars, but like so much computer technology, the price will plummet over the coming decade. As the technology becomes more widely used, a range of tasks and jobs previously considered safe from AI will no longer be performed by human workers. Protectionist regulations may attempt to save these jobs but these efforts will probably be short-lived. The resulting large-scale unemployment will require a rethinking of government institutions and safety nets, as well as corporate business models.

At the same time, this type of general knowledge AI (it’s far too early to call it AGI) will contribute to greater and more rapid advances in machine intelligence. Such technology could bootstrap the Semantic Web into broad usage. In all likelihood, it will be used to create personal intelligent agents, giving users the virtual equivalent of a staff of assistants. And eventually, it could facilitate the development of a true artificial general intelligence or at least contribute to the education of such an AGI.

Will such an intelligence be conscious? Will it be self-improving, leading to a positive feedback loop that brings about a powerful and hopefully benign superintelligence? Only time will tell. But perhaps one day, on a future holographic version of Jeopardy, we’ll be presented with clues to which the correct response will be, “What was the Singularity?”

The Supercomputer Race

That China is barreling ahead in its development of supercomputers should give the U.S. considerable cause for concern. China has devoted significant resources to its supercomputer program in recent years, resulting in its ranking earlier this year at the number two spot on the TOP500 list. TOP500.org ranks the world’s 500 fastest supercomputers according to their performance on a dense system of linear equations. These tests yield a score based on the computer’s speed measured in double precision floating point operations per second (flops).

To give a little perspective: China didn’t have a single supercomputer ranked in the TOP500 until the mid-1990s. By June 2004, they had their first ranking ever in the top ten. In May 2010, their Nebulae system became the second fastest in the world with a performance of 1.271 petaflops. (A petaflop is 10^15 floating point operations per second.) While the Chinese still only have one tenth the number of TOP500 supercomputers the U.S. has, they’ve been quickly catching up based on this metric as well. (Note: TOP500.org ranks the world’s most powerful, commercially available, non-distributed computer systems. There are numerous military and intelligence agency supercomputers in many countries not included in this list.)

China’s Nebulae system operates from the newly built National Supercomputing Centre in Shenzhen. This is also the site of some very recent and very extensive construction which will presumably house some very serious supercomputing power in the near future. “There clearly seems to be a strategic and strong commitment to supercomputing at the very highest level in China,” stated Erich Strohmaier, head of the Future Technology Group of the Computational Research Division at Lawrence Berkeley National Laboratory.

The next major goal for supercomputers is the building of an exascale system sometime between 2018 and 2020. Such a system would be more than five hundred times faster than the Jaguar supercomputer at Oak Ridge National Laboratory, currently the world’s fastest. The U.S. Exascale Initiative is committed to developing this technology, which brings with it many different challenges of scale. At the same time, Europe and China have accelerated their investment in high-performance systems, with the Europeans on a faster development track than the U.S. There are concerns the U.S. could be bypassed if it doesn’t sustain the investment needed to stay ahead.

This isn’t just about who has the highest ranking on a coveted list – it’s not a sporting event with a big fanfare for the winner. These computers are crucial for modeling, simulation, and large-scale analysis – everything from modeling complex weather systems to simulating biological processes. As our understanding of highly complex systems grows, the only way we’re going to be able to keep moving forward is with more and ever more computing power. At the same time, exascale computing is anticipated to be a highly disruptive technology, not only because of what it will be able to do, but because of the technologies that will be created in the course of developing it. Ultimately, these technologies will end up in all kinds of new products, not unlike what happened with the Apollo space program. Falling behind at this stage of the game would put the U.S. at a big disadvantage in almost every aspect of science and product development.

Just as concerning, I believe, is what this would mean for developing an AGI or artificial general intelligence. There’s been a lot of speculation by experts in the field of AI as to when (if ever) we might develop a human-level artificial intelligence. A recent survey of AI experts indicates we could realize human-level AI or greater in the next couple of decades. More than half of the experts surveyed thought this milestone would occur by mid-century. While there are many different avenues which may ultimately lead to an AGI, it’s a good bet that most of these will require some pretty serious computing power both for research and potentially for the substrate of the AGI itself.

It’s been speculated that there are considerable risks in developing a computer with human-level or greater intelligence, but there are a number of risks in not doing so as well. Whoever builds the first AGI will very probably realize an enormous competitive advantage, both economically and politically. Additionally, the world faces a growing number of existential threats which AGIs could play a critical role in helping us to avoid.

During this time of budget deficits and spending cuts, it would be very easy to decide that Big Science programs, such as the Exascale Initiative, aren’t as crucial to the nation’s well-being as they really are. This would be a grave mistake. The question isn’t how we can afford to commit ourselves to this research, but how we can afford not to.

(NOTE: Beginning with this entry, I’ll be cross-posting my blog at the World Future Society – www.wfs.org.)