Posts Tagged Disruptive tech

The Supercomputer Race

That China is barreling ahead in its development of supercomputers should give the U.S. considerable cause for concern. China has devoted significant resources to its supercomputer program in recent years, earning it the number two spot on the TOP500 list earlier this year. TOP500.org ranks the world’s 500 fastest supercomputers according to their performance on a dense system of linear equations. The benchmark yields a score based on the computer’s speed, measured in double-precision floating point operations per second (flops).
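To make the metric concrete, here is a minimal sketch of how a LINPACK-style score is computed. The standard operation count for solving a dense n-by-n linear system is (2/3)n³ + 2n², and the reported score is simply operations divided by wall-clock time; the problem size and timing below are invented for illustration.

```python
# Sketch: a TOP500 LINPACK score is (floating point operations) / (time).
# For a dense n x n solve, the standard operation count is 2/3*n^3 + 2*n^2.

def linpack_flops(n: int, seconds: float) -> float:
    """Approximate sustained flop/s for a dense n x n linear solve."""
    operations = (2.0 / 3.0) * n**3 + 2.0 * n**2
    return operations / seconds

# Hypothetical run: solving a system with n = 1,000,000 in about 667 s
# corresponds to a sustained rate of roughly one petaflop.
rate = linpack_flops(1_000_000, 667.0)
print(f"{rate:.3e} flop/s  (~{rate / 1e15:.2f} petaflops)")
```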

To give a little perspective: China didn’t have a single supercomputer ranked in the TOP500 until the mid-1990s. By June 2004, they had their first ranking ever in the top ten. In May 2010, their Nebulae system became the second fastest in the world with a performance of 1.271 petaflops. (A petaflop is 10^15 floating point operations per second.) While China still has only one tenth as many TOP500 supercomputers as the U.S., it has been quickly catching up on this metric as well. (Note: TOP500.org ranks the world’s most powerful, commercially available, non-distributed computer systems. There are numerous military and intelligence agency supercomputers in many countries not included in this list.)

China’s Nebulae system operates from the newly built National Supercomputing Centre in Shenzhen. This is also the site of some very recent and very extensive construction which will presumably house some very serious supercomputing power in the near future. “There clearly seems to be a strategic and strong commitment to supercomputing at the very highest level in China,” stated Erich Strohmaier, head of the Future Technology Group of the Computational Research Division at Lawrence Berkeley National Laboratory.

The next major goal for supercomputers is the building of an exascale system sometime between 2018 and 2020. Such a system would be almost a thousand times faster than the Jaguar supercomputer at Oak Ridge National Laboratory, currently the world’s fastest. The U.S. Exascale Initiative is committed to developing this technology which brings with it many different challenges of scale. At the same time, Europe and China have accelerated their investment in high-performance systems, with Europeans on a faster development track than the U.S. There are concerns the U.S. could be bypassed if it doesn’t sustain the investment to stay ahead.
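The prefix arithmetic behind that goal is worth spelling out: an exaflop is 10^18 operations per second, a thousand times a petaflop. A tiny sketch:

```python
# Sketch of the scale involved (prefix arithmetic only).
PETA = 10**15   # petaflop/s: 10^15 floating point operations per second
EXA = 10**18    # exaflop/s:  10^18 floating point operations per second

# An exascale machine performs 1000x the operations per second
# of a petascale machine.
print(EXA // PETA)   # prints 1000

# In a single second, an exascale system does 10^18 operations --
# more than a person doing one per second could do in 30 billion years.
print(EXA)           # prints 1000000000000000000
```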

This isn’t just about who has the highest ranking on a coveted list – it’s not a sporting event with a big fanfare for the winner. These computers are crucial for modeling, simulation, and large-scale analysis – everything from modeling complex weather systems to simulating biological processes. As our understanding of highly complex systems grows, the only way we’re going to be able to keep moving forward is with more and ever more computing power. At the same time, exascale computing is anticipated to be a highly disruptive technology, not only because of what it will be able to do, but because of the technologies that will be created in the course of developing it. Ultimately, these technologies will end up in all kinds of new products, not unlike what happened with the Apollo space program. Falling behind at this stage of the game would put the U.S. at a big disadvantage in almost every aspect of science and product development.

Just as concerning, I believe, is what this would mean for developing an AGI or artificial general intelligence. There’s been a lot of speculation by experts in the field of AI as to when (if ever) we might develop a human-level artificial intelligence. A recent survey of AI experts indicates we could realize human-level AI or greater in the next couple of decades. More than half of the experts surveyed thought this milestone would occur by mid-century. While there are many different avenues which may ultimately lead to an AGI, it’s a good bet that most of these will require some pretty serious computing power both for research and potentially for the substrate of the AGI itself.

It’s been speculated that there are considerable risks in developing a computer with human-level or greater intelligence, but there are a number of risks in not doing so as well. Whoever builds the first AGI will very probably realize an enormous competitive advantage, both economically and politically. Additionally, the world faces a growing number of existential threats which AGIs could play a critical role in helping us to avoid.

During this time of budget deficits and spending cuts, it would be very easy to decide that Big Science programs, such as the Exascale Initiative, aren’t as crucial to the nation’s well-being as they really are. This would be a grave mistake. The question isn’t how we can afford to commit ourselves to this research, but how we can afford not to.

(NOTE: Beginning with this entry, I’ll be cross-posting my blog at the World Future Society – www.wfs.org.)

Smart dust for making smarter decisions

Imagine if you didn’t have a nervous system.  Your body would have no way of regulating any of its other systems.  It wouldn’t know if it was too hot or too cold.  It couldn’t register dangers and harmful conditions.  Every aspect of the environment would be shut off to it.  In short, without a nervous system, you wouldn’t survive very long.

The fact is, in order to know how to respond to conditions you need to know what those conditions are.  This is some of the thinking behind “smart dust” – very small, very cheap networked sensors for measuring all kinds of different aspects of our environment. It’s also the idea behind HP’s “Central Nervous System for the Earth” project or CeNSE.  By developing sensors that can detect motion, vibration, light, temperature, air pressure, air flow and humidity, HP hopes to see them deployed throughout the environment.  These will be able to keep watch over the structural integrity of buildings, bridges and other infrastructure.  Chemical sensors will be able to detect dangerous conditions in our air, food and water.  They’ll eventually be capable of alerting us in the event of a terrorist attack using biological agents.  In short, they’ll be our eyes, ears, noses and much more.  They’ll become a new kind of nervous system.

This would be tremendously useful for monitoring manmade structures.  A US DOT 2008 survey of over 600,000 bridges found nearly 27% to be structurally deficient or functionally obsolete.  There is simply no way we have a sufficient number of trained people to adequately monitor all of these.  And that’s only the bridges.  A system of inexpensive sensors that watches for excessive vibration and structural deformation in order to avert catastrophic collapse would be money very well spent.
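The monitoring logic involved can be very simple. As a hedged sketch (every name, window size, and threshold here is invented for illustration, not taken from any real deployment), a bridge-mounted sensor node might flag trouble by comparing the root-mean-square of its recent vibration samples against a safe limit:

```python
# Hypothetical sketch: flag "excessive vibration" when the RMS of the
# most recent samples exceeds a preset limit. All values are invented.
import math
from collections import deque

WINDOW = 8         # number of recent samples to consider
RMS_LIMIT = 5.0    # hypothetical safe-vibration threshold

def excessive_vibration(samples, window=WINDOW, limit=RMS_LIMIT):
    """Return True if the RMS of the last `window` samples exceeds `limit`."""
    recent = deque(samples, maxlen=window)   # keep only the newest readings
    rms = math.sqrt(sum(s * s for s in recent) / len(recent))
    return rms > limit

print(excessive_vibration([0.5, 0.8, 1.1, 0.7]))       # → False (quiet)
print(excessive_vibration([6.0, 7.5, 8.2, 9.0, 7.1]))  # → True (alert)
```

A real system would of course add calibration, frequency analysis, and network reporting, but the core idea is just this kind of threshold test run continuously and cheaply.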

Obviously, the environment already has its own kind of feedback loops through which it adapts to changing conditions.  But humanity has imposed itself so thoroughly onto the environment that we need better ways of gauging our effect on it.  Hopefully, this kind of data will allow us to make better, more informed decisions about mitigating environmental impact.  Certainly, this would be preferable to making knee-jerk, expensive, politically feel-good decisions that often do more harm than good.  (e.g., subsidizing the conversion of food crops to ethanol crops and thereby exacerbating food shortages in parts of the world.)

Obviously, there will be downsides to this kind of technology, most notably in terms of its potential use in surveillance.  As with most technological developments, the answer is not in trying to prohibit it but to adapt our laws and institutions to deal with our changing world.  A world we will very shortly know much more about.

Empowering the people

Change frightens us. The uncertainty of the new and the potential for disruption are among the reasons our species seeks to anticipate the future – so we can avoid, or at least survive, the worst it has to throw at us.

Self-replicated RepRap parts

But change also has a huge potential to improve our lives and empower us. The recent accomplishments of the RepRap project are a case in point. RepRap – short for replicating rapid prototyping machine – is headed by Dr. Adrian Bowyer of the University of Bath’s Center for Biomimetics. In use by industry for about a quarter century now, prototypers are essentially 3D inkjet printers capable of creating parts by laying down thin layers of resin following a computer-driven design. Almost any basic object can be created using this technique, from dinnerware to engine parts. What makes RepRap different is that it’s the first prototyper capable of copying itself. And it’s open source.

While there are still a handful of its own parts that RepRap can’t copy, and the machine still has to be hand-assembled, the potential for a self-replicating replicator is enormous. Distributed to people in the developing world, such a technology could quickly raise their standard of living, providing necessities many of the rest of us have long taken for granted. Of course, such a technology would be tremendously disruptive to industry, but that can hardly be justification for billions to continue living and dying in unnecessary poverty.

Like RepRap, many other new and developing technologies have the potential to heal, to enable, to lift up vast numbers of people. DEKA’s water purification system, the Kurzweil-National Federation of the Blind Reader, solar energy solutions and worldwide immunization programs are but a few of the recent implementations of technology that have the ability to change the lives of millions for the better.

A changing world can be a frightening place, but it can be a very hopeful place as well.