Blog

A World of Data


Information. There is probably nothing so important to our lives, to our culture, to our world. Information, and the ability to communicate it, has been at the foundation of humanity’s rise since before we used fire or stone tools. It has allowed us to grow from nomadic clans to villages, to city-states, to nations, and finally into a world-spanning society.

As our society and technologies have grown, so too has our world of information. Its depth, breadth and sheer volume have expanded exponentially. This has occurred for millennia, probably throughout our entire history. Yet now, as we find ourselves in the midst of the Digital Age, we discover we’ve reached a point when the volume of data we generate threatens our very ability to utilize it.

Data grows exponentially. According to market research and analysis firm IDC, the world’s digital output is doubling every one and a half years. For 2010, IDC estimated that the world created and replicated a record 1.2 zettabytes of data. That’s over a trillion billion bytes, or a stack of DVDs reaching to the Moon and back. By 2020, IDC expects this number to grow to 35 zettabytes, or enough DVDs to reach halfway to Mars. But there are reasons to believe this estimate may fall woefully short.
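
For readers who like to check the arithmetic, here is a minimal Python sketch of that extrapolation, using only the figures quoted above (1.2 zettabytes in 2010, doubling every year and a half); the function and the year range are purely illustrative.

```python
# Back-of-the-envelope projection of global data volume, using the figures
# quoted above: 1.2 zettabytes created and replicated in 2010, doubling
# roughly every 1.5 years (both per IDC).

def projected_zettabytes(year, base_year=2010, base_zb=1.2, doubling_years=1.5):
    """Compound the stated doubling period out to the given year."""
    return base_zb * 2 ** ((year - base_year) / doubling_years)

for year in (2010, 2015, 2020):
    print(f"{year}: ~{projected_zettabytes(year):.0f} ZB")

# A strict 1.5-year doubling puts 2020 above 100 ZB -- well beyond IDC's own
# 35 ZB projection, which implies a slower doubling rate. It's one
# illustration of how sensitive these long-range estimates are to the
# assumed growth rate.
```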

How we address the issues surrounding the information explosion and the overload that accompanies it will directly impact how we develop our society and technologies in the future. My new article “Treading in the Sea of Data” is the cover story for the latest issue of The Futurist (July/August 2011); it explores what this “data deluge” could mean for our world. The challenges created by Big Data are significant not only from a technological perspective, but from personal and societal ones as well. However, there are a number of approaches we can take in dealing with what will only become a more pervasive condition of life in this time of accelerating progress. I welcome your comments.

(This article draws from my more in-depth paper in the World Future Society’s 2011 conference volume, Moving from Vision to Action, which may be preordered from www.wfs.org.)

An Argument For The Singularity


Earlier this week, Charles Stross posted his thoughts on why he doesn’t think the Technological Singularity will happen. If you’re not familiar with the concept, the Technological Singularity is defined as that point in the future when a self-improving artificial general intelligence (AGI) achieves superintelligence. As Stross recommends, if you’re not familiar with the topic, you’d be wise to read the following first:

I’m going to take it as read that you’ve read Vernor Vinge’s essay on the coming technological singularity (1993), are familiar with Hans Moravec’s concept of mind uploading, and know about Nick Bostrom’s Simulation argument. If not, stop right now and read them before you continue with this piece. Otherwise you’re missing out on the fertilizer in which the whole field of singularitarian SF, not to mention posthuman thought, is rooted. It’s probably a good idea to also be familiar with Extropianism and to have read the posthumanism FAQ, because if you haven’t you’ll have missed out on the salient social point that posthumanism has a posse.

First, let me say that Stross is a first-class writer who brings serious thought to bear on a complex and controversial subject. I completely agree with many of his points and I can definitely see how the Singularity may never happen. But as I was reading his arguments, one thought popped out at me. To sum up why I think there’s a reasonable chance the Singularity will happen:

“Human-equivalent AI does not equal human-level AI.”

Early on, Stross makes the argument against human-equivalent AI, the building of an intelligence that thinks like us. This is an accomplishment that may never happen, due to a number of issues I won’t repeat here. (Re-read Stross.) But then, based on my reading of it anyway, he proceeds with his argument as though human-equivalent AI were the same as human-level AI, and they’re not.

We stand on the cusp of a technological explosion that may (or may not) be unprecedented in the history of the universe. Authors such as Ray Kurzweil (The Singularity is Near), James M. Gardner (Biocosm) and Kevin Kelly (What Technology Wants) have discussed this at length. Read them. Based on the long history of self-organizing principles in the universe – what Kelly refers to as exotropy and Max More calls extropy – this technological explosion may well result in an explosion of intelligence as well. Now this may not occur as early as 2045, as Kurzweil has forecast. And potentially, it could happen in the next decade, though I’m skeptical of that time frame. But in geological and cosmological terms, if it happens, it will be in a relative eye blink from now. The resulting growth of intelligence would be comparable to the Cambrian Explosion, that era in Earth’s history when complex life underwent rapid and extensive diversification into many of the morphologies we see today.

My point is that technology needn’t emulate humans in order to be intelligent. We’re one accident in the history of evolution that managed to get it right. (From our perspective, anyway.) Technology is different. Unlike the long, arduous, vertical exchange of information that occurs through sexual recombination, technology moves its best solutions around in a much more free-form, horizontal manner. It’s not unlike horizontal gene transfer (HGT), the process that preceded complex life on Earth and has been explored by microbiologist Carl Woese and others. Historically, this process of technological recombination has required human intelligence as an intermediary, but recently this has started to change.

This, I believe, will eventually lead to a vast array of intelligences. Some will be smarter than us in certain ways, some won’t. Some might seem vaguely familiar; others will be utterly unfathomable. But ultimately these many intelligences will span the domain of possible intelligences to fill every niche in the information ecosphere. The extent of this domain is potentially very large and in it, human intelligence would be a very, very small subset.

Does this mean the Singularity will happen? I don’t know. The Singularity has come to represent different things to different people. Some, who seem to need it to fulfill some religious circuit in their brains, see it in quasi-spiritual terms – the so-called “Rapture of the Nerds.” Others believe it will result in a “fast take-off,” leading to an uplift of our own species (and potentially others as well). To me and others, it’s “just” the development of a superintelligence, which may possibly be followed by an explosion of intelligences within our light-cone of the universe. Ultimately, there’s no reason to expect it will result in anything like an entity that cares one iota about us. This is why ideas such as Eliezer Yudkowsky’s “Friendly AI” are really important. Within this domain of possible intelligences, whether vast monolithic superintelligences or distributed networked intelligences or bio-digital amalgams, some will inevitably have enough volition to present an existential threat to the human race unless safeguards are put in place. And even these are no guarantee.

As Vernor Vinge stated in an interview I did with him a few years ago, he thinks “the Singularity is the most likely non-catastrophic event for this century and [he’d] be surprised if it doesn’t happen by 2030.” But whether the Singularity happens or not, I think we have to be prepared for a world in which we are far from the only form of advanced, tool-using, concept-manipulating intelligence. As I stated earlier, “Human-equivalent AI does not equal human-level AI.”

To which I would add: “Nor does it need to be.”

The Supercomputer Race, Revisited

Nine months ago, I wrote a post called “The Supercomputer Race” about China’s then top-ranked supercomputer, the Tianhe-1A, and what it meant for the U.S. The Tianhe-1A achieved 2.57 petaflops (one petaflop being 10^15 floating point operations per second) on LINPACK, a benchmark based on solving a dense system of linear equations. In comparison, the world’s next fastest system at the time was Oak Ridge National Laboratory’s Jaguar, clocking in at 1.76 petaflops. (Based on TOP500’s semi-annual ranking of the world’s five hundred fastest supercomputers.)
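
To make those LINPACK numbers a bit more concrete, here is a small Python sketch that converts a sustained petaflops rating into an approximate solve time for a dense n-by-n linear system. The roughly (2/3)n^3 operation count is the standard figure for LU-based solvers, and the matrix size is an arbitrary example, not the size used in any actual TOP500 run.

```python
# Approximate time to solve a dense n x n linear system at a sustained rate,
# using the standard LU operation count of (2/3)*n**3 + 2*n**2 flops.
# The matrix size below is illustrative only.

PETAFLOP = 1e15  # floating point operations per second

def linpack_solve_seconds(n, sustained_pflops):
    ops = (2.0 / 3.0) * n**3 + 2.0 * n**2
    return ops / (sustained_pflops * PETAFLOP)

for name, pflops in [("Jaguar", 1.76), ("Tianhe-1A", 2.57)]:
    seconds = linpack_solve_seconds(1_000_000, pflops)  # one million unknowns
    print(f"{name}: ~{seconds:.0f} s to solve a 1,000,000 x 1,000,000 system")
```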

Today TOP500 released their latest rankings, which put Japan’s K Computer in the number one spot with 8.162 petaflops (PFLOPS), a jump of more than three times the performance of the now number two Tianhe-1A. How was such a sharp increase realized and what does it mean for supercomputing in the future?

A little history: TOP500 has been ranking the world’s supercomputers since 1993. During this time, individual and cumulative performance has followed a steady pattern of exponential growth. As with this latest ranking, individual rankings have shown a significant jump in some years (e.g., 1997, 2002), followed by years of more modest improvement. On the other hand, cumulative totals have been very consistent due to the broader data set and are probably a better indicator of where the trend stands overall. (Interestingly, RIKEN’s K Computer represents a jump not seen since Japan’s last number one, the Earth Simulator in 2002.) Not surprisingly, the plot points for the performance growth of the number one spot approximate a classic series of sigmoid growth curves, as technologies reach their limits and are superseded by others.

TOP500 June 2011-Projected Performance
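
As a purely illustrative aside, the Python snippet below sketches how a chain of overlapping S-curves (each standing in for one generation of technology) produces a roughly exponential envelope overall; none of the parameters are fitted to actual TOP500 data.

```python
import math

# Each logistic curve stands in for one technology generation; successive
# generations saturate at roughly 30x the previous ceiling. Parameters are
# invented for illustration, not fitted to the TOP500 list.

def logistic(t, midpoint, ceiling, steepness=1.5):
    return ceiling / (1.0 + math.exp(-steepness * (t - midpoint)))

def stacked_performance(t, generations=5):
    return sum(logistic(t, midpoint=4 * g, ceiling=30.0 ** g)
               for g in range(1, generations + 1))

# The log of the summed curves climbs almost linearly with time, i.e. the
# combined envelope looks roughly exponential even though each generation
# individually flattens out.
for t in range(0, 21, 4):
    print(f"t={t:2d}  log10(performance) ~ {math.log10(stacked_performance(t)):.1f}")
```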

The substantial leap forward last year by the Tianhe-1A can mostly be attributed to one significant improvement: the implementation of radically faster interconnects. Rather than chasing the latest step up in CPU technology, the designers of the Tianhe-1A targeted the biggest bottleneck in the system. Interconnects are networking chipsets that coordinate the data continually being moved between processors, in this case thousands of Intel Westmere and Nvidia Fermi processors. China’s homegrown Galaxy interconnects delivered roughly double the speed of the InfiniBand interconnects used in many other systems, a huge improvement in performance.

This latest ranking saw improvements that are due to a related trend: the transition away from monolithic CPU-based systems to heterogeneous platforms, which combine different types of computational units, including CPUs, GPUs and specialized interconnects. Looking at the trend line, the Tianhe-1A represented a roughly 50% increase over Oak Ridge’s Jaguar, and Japan’s K Computer improves on the Tianhe-1A by more than 200%. During this next year, two U.S. systems are slated to become operational with peak performances in the 20 PFLOP range, a further gain of roughly 150%.
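
Here is a quick Python check of those generation-over-generation gains, computed directly from the figures quoted in this post (note that the planned U.S. systems are quoted at peak rather than measured LINPACK performance):

```python
# Percentage gains between successive top systems, from the figures quoted
# in this post (petaflops). The planned U.S. systems are a peak estimate,
# not a measured LINPACK result.

systems = [
    ("Jaguar (ORNL)", 1.76),
    ("Tianhe-1A", 2.57),
    ("K Computer", 8.162),
    ("Planned U.S. systems", 20.0),
]

for (prev_name, prev_pf), (name, pf) in zip(systems, systems[1:]):
    gain = (pf / prev_pf - 1.0) * 100.0
    print(f"{name}: {pf:g} PFLOPS, ~{gain:.0f}% over {prev_name}")
```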

So does this point to a long-term increase in the rate of improvement in supercomputing performance? I’d say, probably not. The elimination of bottlenecks and the transition to new approaches will likely be a blip on the trend line. As the industry moves toward the target of exascale supercomputing later this decade, we’re likely to see improvements slow at various points as we deal with some very considerable challenges of scale. It’s been said that while the move from terascale to petascale computing was evolutionary, the leap from petascale to exascale will be revolutionary. The solutions used in the earlier systems simply won’t scale up without significant changes being made.

A common question among the general public is “Why do we even need more powerful supercomputers? Can’t we get by with what we have already?” The simple answer is no. If the U.S. wants to remain a leading technological and economic force in the world, it will be necessary to invest in a future in which supercomputers play a central role. If we’re to see the nascent technologies of the 21st century realized, we’ll need the vast processing power of exascale systems and beyond. Likewise, we’ll need next-generation supercomputers if we’re to overcome many of the challenges the world now faces. Our digital world is generating enormous quantities of data, data that is itself growing exponentially. Bioinformatics, proteomics and brain simulation are but a few of the fields that will require continuing improvements in supercomputing to deal with their immense data sets. For similar reasons, we’ll need these computers for complex analytic systems such as IBM’s DeepQA Project, more commonly known as Watson. The ability to create tremendously detailed climate models will also be essential as we deal with human-caused climate change, whether to predict its consequences or to implement solutions. In short, to abandon advances in supercomputing is to abandon our place in the future.

(The future of information management is explored in my recent article, “Treading in the Sea of Data,” in the July/August 2011 issue of The Futurist. The article is an abridged version of my paper, which will be published this summer in the WorldFuture 2011 conference volume, “Moving From Vision to Action,” edited by Cynthia G. Wagner.)

The Path to Future Intelligence


Often when people think about intelligence and the future, they have visions of monolithic supercomputers, Terminator scenarios and Skynet. But there’s an entirely different path to future intelligence. Intelligence augmentation (IA, not to be confused with AI) involves supplementing our own brain’s abilities using a range of different technologies. While some people may find such developments disconcerting, evidence suggests this will be a viable technology in the future. And it’s one that may be considerably closer than you think.

In some ways the augmentation of intelligence already has a long history. From the first time we cut notches into sticks or painted on cave walls, we were augmenting our memories by creating a tangible record. The written word developed this concept even further. More recently, the internet and search engines have given us access to a vast subset of human knowledge, effectively extending our memory by many orders of magnitude.

Now a number of fields stand at the threshold of augmenting human intelligence directly. Pharmacological methods include drugs called nootropics, which enhance learning and attention. Among these are ampakines, which have been tested by DARPA, the research arm of the Defense Department, in an effort to improve the attention span and alertness of soldiers in the field, as well as to facilitate their learning and memory.

Biotechnological and genetic approaches are also being explored in order to identify therapeutic strategies that promote neuroplasticity and improve learning ability. A 2010 European Neuroscience Institute study found memory and learning in elderly mice restored to youthful levels when a cluster of genes was activated using a single enzyme. Several stem cell research studies offer hope not only for degenerative mental pathologies but also for restoring our ability to learn rapidly. In another study, mice exposed to the natural soil bacterium Mycobacterium vaccae showed significantly improved learning rates and retention, possibly as the result of an immune response. All of these suggest we’ve only begun to scratch the surface when it comes to improving or augmenting intelligence.

Brain-computer interfaces, or BCIs, are another avenue currently being explored. A BCI gives a user the ability to control a computer or other device using only their thoughts. BCIs already exist that allow the operation of computer interfaces and wheelchairs, offering hope of a more interactive life to quadriplegics and patients with locked-in syndrome. Systems are even being developed to replace damaged brain function and aid in the control of prosthetic limbs. Cochlear implants are restoring hearing, and considerable progress has been made in developing artificial retina implants. Work has also been done on an artificial hippocampus, and it is likely there will be a number of other brain prostheses as the brain becomes better understood. All of these point to a day when tying our minds in to enhanced or external resources could become a reality.

Of course, as with many new technologies, there will be those who believe intelligence augmentation should be restricted or banned altogether. But as we’ve seen in the past, this is a response that is doomed to failure. Even if draconian measures managed to prohibit R&D in one country, there will always be others who believe the benefits outweigh the costs. For instance, China is currently sequencing the genomes of 1,000 Chinese adults with IQs of 145 or higher and comparing these to the genomes of an equal number of randomly selected control subjects. Since a substantial proportion of intelligence is considered to be heritable, the project has interesting potential. Even if this method fails to identify the specific genes that give rise to high intelligence, important information is sure to be garnered. And regardless of the result, it tells us that China, and probably others, are already committing significant resources to this matter.

The augmentation of human intelligence is likely to be a mixed blessing, yielding both benefits and abuses. Regardless of our feelings about it, we would be wise to anticipate the kind of future such enhancements could one day bring.

Intelligence augmentation is just one of the approaches to information overload and the data explosion explored in my upcoming article in the July/August 2011 issue of The Futurist. The article is an abridged version of my paper “Treading Data: Staying Afloat in a Sea of Information,” which will be published this summer in the WorldFuture 2011 conference volume, “Moving From Vision to Action,” edited by Cynthia G. Wagner.
[This introduction to intelligence augmentation was previously published in “The Intelligence Report” at Psychology Today.]

Vaccinations: The Intelligent Choice?

Vaccinations save lives

As I mentioned in a previous post, I began blogging for Psychology Today earlier this month. My latest post there is “Vaccinations: The Intelligent Choice?”, which addresses some of the misconceptions and disinformation surrounding vaccine safety. I decided to write about this because the Washington State legislature is currently attempting to improve vaccination rates by requiring that parents receive better information before being allowed to opt out. (Personally, I think we still need to do more than this, but it’s a step in the right direction.)

Vaccinations are one of the major health achievements of the 20th century. Without them, our citizens would still be ravaged by smallpox, polio, tuberculosis and many other terrible diseases. The decision to better educate people about the actual risks and benefits of immunization is a wise one. Ultimately, the decision to vaccinate becomes not just a question of personal responsibility, but of social responsibility as well.

The post has been selected by Psychology Today for their “Essential Reads: Editor’s Picks” section, which, of course, I was very happy to hear.

Addendum: The Seattle Times also recently invited me to be a guest columnist, publishing this article as a “Special to the Times”. (4/10/2011)

Exascale Supercomputers: The Next Frontier

Cray Jaguar Supercomputer

The last few years finally saw the arrival of supercomputers capable of petascale performance. In all, seven systems from the US, China, Japan and France achieved the milestone of processing a million billion floating point operations per second (flops) by the end of 2010. But even before this target was reached, computer scientists and engineers were setting their sights on an even loftier goal: Exascale computing.

The supercomputer has become the mainstay of both theoretical and applied science. Climate modeling, genome analysis, protein folding, nuclear fusion research and many other fields all benefit from the continuing gains in processing power. Now, with a range of exascale initiatives, the U.S. and Europe have set a goal of building a supercomputer one thousand times more powerful than any today. And they want to do it sometime between 2018 and 2020.

At first glance, this seems like it should definitely be achievable. After all, three orders of magnitude in seven to nine years certainly falls within the scope of Moore’s Law. But whereas the move from terascale to petascale processing was considered evolutionary, the jump to exascale supercomputers will require advances that will be revolutionary. Simply scaling up current technology won’t work. For instance, the Cray Jaguar supercomputer at Oak Ridge National Laboratory has more than a quarter of a million processor cores, over 360 terabytes of memory and uses 7.8 megawatts at peak power. A combination of air and liquid cooling removes enough waste heat to potentially warm several large buildings. Scaling such a system up a thousand-fold just isn’t feasible.
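
As a rough illustration of why brute-force scaling fails, here is the arithmetic on a naive thousand-fold scale-up of Jaguar’s published figures in Python (rounding “more than a quarter of a million cores” to 250,000 for the sake of the sketch):

```python
# Naive thousand-fold scale-up of Jaguar's approximate specs, to show why
# "more of the same" won't reach exascale. Figures are the rounded ones
# quoted above; the linear scaling itself is deliberately simplistic.

jaguar = {
    "cores": 250_000,     # "more than a quarter of a million"
    "memory_tb": 360,
    "power_mw": 7.8,
}

SCALE = 1_000  # petascale -> exascale, roughly three orders of magnitude

print(f"Cores:  {jaguar['cores'] * SCALE:,}")
print(f"Memory: {jaguar['memory_tb'] * SCALE / 1_000:,.0f} petabytes")
print(f"Power:  {jaguar['power_mw'] * SCALE / 1_000:,.1f} gigawatts "
      "(on the order of several large power plants)")
```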

So new technologies will have to be developed. We’ll need processors with considerably lower power requirements and heat generation, faster optical interconnects and improved algorithms that provide better utilization of available processing cycles. And that’s just for starters.

Will we be able to achieve this goal in the timeframe that’s been set? Possibly, but only just. Peter Kogge, a professor of computer science and engineering, recently wrote in IEEE Spectrum about his concerns regarding the realization of exascale computing. Kogge was editor and study lead for the Exascale Computing Study initiated by the Defense Advanced Research Projects Agency (DARPA), which was commissioned in 2007 to determine the feasibility of exascale computing by 2015. The study illuminated a number of obstacles that will have to be overcome.

But the dream of exascale supercomputers is important enough that DARPA, the U.S. Department of Energy and private industry are forging ahead despite such concerns. Last year, Intel opened three new centers dedicated to exascale research: the Exascale Computing Research Center in Paris, France, the ExaCluster Laboratory in Juelich, Germany and the ExaScience Lab in Leuven, Belgium.

Why is exascale supercomputing so important? The world faces significant challenges in the coming decades. Dealing with climate change, peak oil and a multitude of engineering challenges will require tremendous computing resources. At the same time, we’ve entered an era of massive data sets. Everything from genomics and proteomics to molecular modeling to nanotechnology will benefit from these advances. In short, much of the science of the 21st century will be impacted by exascale supercomputing.

The great thing about a grand challenge like this is that even if it takes longer than expected to achieve, all kinds of research and innovation will yield benefits along the way. New processor architectures, improvements in energy efficiency and advancements in parallel algorithms are but a few of the advances we can expect to eventually trickle down to other, more publicly accessible uses.

But the US and Europe aren’t the only players pursuing the exascale dream. China has very evident goals of continuing to build the world’s fastest supercomputers. As of November 2010, its Tianhe-1A supercomputer was ranked the world’s fastest by TOP500.org. (TOP500.org ranks the world’s 500 fastest supercomputers according to their performance on a dense system of linear equations.) China is also currently building its third National Supercomputing Center in Changsha, Hunan Province, a massive complex that is expected to be completed by the end of 2011. China has set a goal of building an exascale supercomputer sometime between 2016 and 2020, which should give us cause for concern. Given their focus and the speed with which they moved to the top rank with the Tianhe-1A, China could definitely jump far ahead if we don’t make this a priority.

Fortunately, the Obama administration has asked for $126 million in the 2012 budget for the development of next generation supercomputers. Whether it will receive approval from the new Congress remains to be seen. In my opinion, a decision not to fund such important technology could have far-reaching consequences for our competitiveness in the world and would definitely show a lack of foresight.