One Response to the Eight Grand Challenges

The closing plenary for the WorldFuture conference, held earlier this month in Vancouver, BC, was given by Thomas Frey of the DaVinci Institute. In his presentation, Frey offered up Eight Grand Challenges for humanity. It was a thought-provoking presentation, though not without its critics.

To recap, Frey stated that much of our time and resources are taken up dealing with the existing problems of the day. We are continually responding to the problems and conditions we’ve created in the past and by doing so, we trap ourselves there. So why not establish a group of competitive challenges for the purpose of advancing humanity in the future?

Because of the scale of the challenges and the financial and physical resources needed, the competition would be limited to countries. The prize would be national prestige along with the creation of entirely new industries and markets. While the possible applications of the resulting technologies were considered very open-ended, the challenges themselves had set parameters. The challenges are:

  1. Race to the Core: First team to build a probe that makes it all the way to the center of the earth with a communication system capable of sending real-time sensory data to the surface.
  2. Viewing the Past: Create a technology capable of replaying an unrecorded event that happened no less than 20 years earlier in actual-size, in holographic form.
  3. Disassembling Matter: First team to reduce a solid block of granite (2’ cube) to particles no larger than molecules in less than 10 seconds, using less than 500 watts of power without causing an explosion or physical damage to objects more than 10′ away.
  4. The Gravity Challenge: Demonstrate gravitational control over an object weighing no less than 2,000 lbs. by doubling the force of gravity to 4,000 lbs., reducing the force of gravity by 50% to 1,000 lbs., and creating negative gravity by lifting the object 1,000 ft and returning it back to the original position with no explosions and in less than 10 minutes.
  5. The Ultimate Small Storage Particle: Create an electron-based data storage system no larger than 10 millimeters cubed that can be manufactured for less than $1 per 100 terabytes and is capable of uploading, storing, and retrieving a volume of information equal to the U.S. Library of Congress in less than 10 minutes using less than 1 watt per TB/month.
  6. Travel at the Speed of Light: Create a scientific probe capable of traveling at the speed of light for a distance no less than the Earth to Saturn with information sensors to capture stresses, impacts, and details along the way.
  7. Swarm-Bots: Create a swarm of 10,000 synchronized micro drones no larger than 10 millimeters across (height, width, and depth) capable of lifting a 250-pound person to a height of 100 feet and gently returning him/her to the ground.
  8. The 10-Second Interface: Create a direct-to-the-mind interface that will allow 25 average people to answer a series of questions within 10 seconds with no harmful side effects to the user.
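
Some of the challenge parameters above can be made concrete with quick back-of-envelope arithmetic. The sketch below estimates the sustained transfer rate Challenge 5 demands and the payload each micro drone in Challenge 7 must lift. The ~15 TB figure for the digitized Library of Congress is my own assumption (a commonly cited ballpark, and estimates vary widely), not a number from Frey's talk:

```python
# Back-of-envelope arithmetic for Challenges 5 and 7.
# Assumption (mine): ~15 TB for the digitized Library of Congress.
LOC_BYTES = 15e12            # ~15 TB
TRANSFER_WINDOW_S = 10 * 60  # "in less than 10 minutes"

# Challenge 5: sustained throughput needed to move the LoC in 10 minutes.
rate_gb_per_s = LOC_BYTES / TRANSFER_WINDOW_S / 1e9
print(f"Challenge 5 throughput: {rate_gb_per_s:.0f} GB/s")

# Challenge 7: payload per drone for 10,000 drones lifting a 250 lb person.
person_kg = 250 * 0.4536     # 250 lb in kilograms
payload_g = person_kg / 10_000 * 1000
print(f"Challenge 7 payload per drone: {payload_g:.1f} g")
```

Under those assumptions, Challenge 5 requires roughly 25 GB/s of sustained throughput, and each 10-millimeter drone in Challenge 7 must carry about 11 grams, both far beyond anything comparable at that scale today.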

Some attendees took issue with several of the challenges, noting that they violated established laws of physics. Now, I’m only an armchair physicist (that is, I’m as much a physicist as most football fans are professional quarterbacks), but I’m of the opinion we need to be open to these ideas. At first glance, some of the challenges do appear to violate relativity and the First Law of Thermodynamics, but I think they’re actually open-ended enough to have some potential work-arounds.

For instance, disassembling matter would essentially involve breaking the covalent bonding between atoms. Though the limitation of 500 watts seems impossibly low, could a kind of cascade effect be initiated that would do the work without violating the law of conservation of energy?

Special relativity tells us that a massless particle can travel at light speed, but anything having mass cannot. As an initial thought experiment: Would it ever be possible to isolate a chargeless particle, such as a neutron, in such a way that its mass wouldn’t be affected by acceleration? Using current technology, no, but I’m not sure this will always be the case.
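
To see why mass is the obstacle, a quick sketch of the Lorentz factor helps: the kinetic energy of anything with mass grows without bound as its speed approaches c. (The neutron mass and speed of light below are standard physical constants; the sample velocities are arbitrary illustration points.)

```python
import math

C = 2.998e8             # speed of light, m/s
NEUTRON_KG = 1.675e-27  # neutron rest mass, kg

def kinetic_energy_j(beta: float, mass_kg: float) -> float:
    """Relativistic kinetic energy KE = (gamma - 1) * m * c^2 at speed beta*c."""
    gamma = 1.0 / math.sqrt(1.0 - beta**2)
    return (gamma - 1.0) * mass_kg * C**2

# The energy required diverges as beta -> 1; light speed itself is unreachable.
for beta in (0.9, 0.99, 0.9999):
    print(f"v = {beta}c -> KE = {kinetic_energy_j(beta, NEUTRON_KG):.2e} J")
```

Even for a single neutron, each additional "nine" of light speed multiplies the energy cost, which is the barrier any work-around to Challenge 6 would somehow have to sidestep.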

Certain experiments with quantum entanglement – Einstein’s ‘spooky action at a distance’ – suggest that information may be able to travel backward in time. This is far from proven, but offers potential worth exploring.

Many cosmologists believe that dark energy and repulsive gravity may drive the expansion of the universe. Could advances in our understanding of these forces one day give us the means of controlling gravity?

The impossible is always impossible until it’s not. This doesn’t mean we can expect to violate primary laws of physics, but we need to be open to the possibility of what can be achieved.

For me, the bigger issue is one of motivation. Some of these challenges could take a century or more to be realized. National prestige and the dream of potential markets and industries are all very good, but is this sufficient to sustain participants for the long haul? Could they weather the political and ideological swings that will occur during even a fraction of this timeframe?

Two of the largest, long-term, technically advanced projects ever undertaken were the Manhattan Project and the Apollo Program. Both of these marshaled enormous physical and intellectual resources over many years. However, they weren’t launched to create new markets, but as a response to a perceived existential threat. For me, therein may lie the answer.

The world faces any number of known and unknown existential threats in the coming century and beyond. British Astronomer Royal Sir Martin Rees has put the probability of human extinction sometime during the 21st century at 50%. The risks are considerable. Some we can influence; others we have absolutely no control over. Bio-terrorism. Nanotechnology run amok. Nuclear war. Extreme climate change. Asteroid collisions. Radiation from a nearby supernova. Any one of these could destroy millions of years of evolution and thousands of years of civilization.

The possible uses for technologies arising from the Eight Grand Challenges are vast. But within them I see the necessary ingredients for finally expanding humanity beyond this one small planet. Moving into space, both within and beyond this solar system, is our best chance of continuing as a species.

As physicist and cosmologist Stephen Hawking observed, “The long-term survival of the human race is at risk as long as it is confined to a single planet.”

So my interpretation of the Eight Grand Challenges is this: We need to treat humanity the same way we treat any other irreplaceable, highly valued, absolutely critical system.

Let’s make a backup.


A World of Data

Information. There is probably nothing so important to our lives, to our culture, to our world. Information, and the ability to communicate it, has been at the foundation of humanity’s rise since before we used fire or stone tools. It has allowed us to grow from nomadic clans to villages to city-states to nations to become a world-spanning society.

As our society and technologies have grown, so too has our world of information. Its depth, breadth and sheer volume have expanded exponentially. This has occurred for millennia, probably throughout our entire history. Yet now, as we find ourselves in the midst of the Digital Age, we discover we’ve reached a point when the volume of data we generate threatens our very ability to utilize it.

Data grows exponentially. According to market research and analysis firm IDC, the world’s digital output is doubling every one and a half years. In 2010, they expect the world to create and replicate a record 1.2 zettabytes of data. That’s over a trillion billion bytes, or a stack of DVDs reaching to the Moon and back. By 2020, IDC expects this number to grow to 35 zettabytes, or enough DVDs to reach halfway to Mars. But there are reasons to believe this estimate may fall woefully short.
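
Those DVD comparisons are easy to sanity-check. The sketch below assumes a standard single-layer DVD (4.7 GB capacity, 1.2 mm thick) and decimal zettabytes (10^21 bytes); both are my assumptions rather than IDC's stated parameters:

```python
# Sanity-checking the DVD-stack comparisons for IDC's data-volume estimates.
# Assumptions (mine): single-layer DVD, 4.7 GB capacity, 1.2 mm thick.
DVD_BYTES = 4.7e9
DVD_THICKNESS_M = 1.2e-3
ZETTABYTE = 1e21  # decimal zettabyte

def dvd_stack_km(zettabytes: float) -> float:
    """Height in km of a stack of DVDs holding the given data volume."""
    discs = zettabytes * ZETTABYTE / DVD_BYTES
    return discs * DVD_THICKNESS_M / 1000.0

print(f"1.2 ZB -> {dvd_stack_km(1.2):,.0f} km")  # hundreds of thousands of km
print(f"35 ZB  -> {dvd_stack_km(35):,.0f} km")   # millions of km
```

Under these assumptions, 2010's 1.2 ZB stacks to roughly 300,000 km of discs and 2020's projected 35 ZB to nearly 9 million km, which at least confirms the astronomical scale these analogies are reaching for.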

How we address the issues surrounding the information explosion and the overload that accompanies it will directly impact how we develop our society and technologies in the future. My new article “Treading in the Sea of Data” is the cover story for the latest issue of The Futurist (July/August 2011) and it explores what this “data deluge” could mean for our world. The challenges created by Big Data are significant not only from a technological perspective, but from personal and societal ones as well. However, there are a number of approaches we can take in dealing with what will only continue to be a growing condition of life in this time of accelerating progress. I welcome your comments.

(This article draws from my more in-depth paper in the World Future Society’s 2011 conference volume, Moving from Vision to Action, which may be preordered from

An Argument For The Singularity

Earlier this week, Charles Stross posted his thoughts on why he doesn’t think the Technological Singularity will happen. If you’re not familiar with the concept, the Technological Singularity is defined as that point in the future when a self-improving artificial general intelligence (AGI) achieves superintelligence. As Stross recommends, if you’re not familiar with the topic, you’d be wise to read the following first:

I’m going to take it as read that you’ve read Vernor Vinge’s essay on the coming technological singularity (1993), are familiar with Hans Moravec’s concept of mind uploading, and know about Nick Bostrom’s Simulation argument. If not, stop right now and read them before
you continue with this piece. Otherwise you’re missing out on the fertilizer in which the whole field of singularitarian SF, not to mention posthuman thought, is rooted. It’s probably a good idea to also be familiar with Extropianism and to have read the posthumanism FAQ, because if you haven’t you’ll have missed out on the salient social point that posthumanism has a posse.

First, let me say that Stross is a first-class writer who brings serious thought to bear on a complex and controversial subject. I completely agree with many of his points and I can definitely see how the Singularity may never happen. But as I was reading his arguments, one thought popped out at me. To sum up why I think there’s a reasonable chance the Singularity will happen:

“Human-equivalent AI does not equal human-level AI.”

Early on, Stross makes the argument against human-equivalent AI, the building of an intelligence that thinks like us. This is an accomplishment that may never happen due to a number of issues I won’t repeat. Re-read Stross. But then, based on my reading of it anyway, he proceeds with his argument as though human-equivalent AI were the same as human-level AI and they’re not.

We stand on the cusp of a technological explosion that may (or may not) be unprecedented in the history of the universe. Authors such as Ray Kurzweil (The Singularity is Near), James N. Gardner (Biocosm) and Kevin Kelly (What Technology Wants) have discussed this at length. Read them. Based on the long history of self-organizing principles in the universe – what Kelly refers to as exotropy and Max More calls extropy – this technological explosion may well result in an explosion of intelligence as well. Now this may not occur as early as 2045, as Kurzweil has forecast. And potentially, it could happen in the next decade, though I’m skeptical of that time frame. But in geological and cosmological terms, if it happens, it will be in a relative eye blink from now. The resulting growth of intelligence would be comparable to the Cambrian Explosion, that era in Earth’s history when complex life underwent rapid and extensive diversification into many of the morphologies we see today.

My point is that technology needn’t emulate humans in order to be intelligent. We’re one accident in the history of evolution that managed to get it right. (From our perspective, anyway.) Technology is different. Unlike the long, arduous, vertical exchange of information that occurs through sexual recombination, technology moves its best solutions around in a much more free-form, horizontal manner. It’s not unlike the idea of horizontal gene transfer (HGT) which preceded complex life on Earth, explored by microbiologist Carl Woese and others. Historically, this process of technological recombination has required human intelligence as an intermediary, but recently this has started to change.

This, I believe, will eventually lead to a vast array of intelligences. Some will be smarter than us in certain ways, some won’t. Some might seem vaguely familiar; others will be utterly unfathomable. But ultimately these many intelligences will span the domain of possible intelligences to fill every niche in the information ecosphere. The extent of this domain is potentially very large and in it, human intelligence would be a very, very small subset.

Does this mean the Singularity will happen? I don’t know. The Singularity has come to represent different things to different people. Some, needing it to fulfill some religious circuit in their brains, see it in quasi-spiritual terms – the so-called “Rapture of the Nerds.” Others believe it will result in a “fast take-off,” leading to an uplift of our own species (and potentially others as well). To me and others, it’s “just” the development of a superintelligence, which may possibly be followed by an explosion of intelligences within our light-cone of the universe. Ultimately, there’s no reason to expect it will result in anything like an entity that cares one iota about us. This is why ideas such as Eliezer Yudkowsky’s “Friendly AI” are really important. Within this domain of possible intelligences, whether vast monolithic superintelligences or distributed networked intelligences or bio-digital amalgams, some will inevitably have enough volition to present an existential threat to the human race unless safeguards are put in place. And even these are no guarantee.

As Vernor Vinge stated in an interview I did with him a few years ago, he thinks “the Singularity is the most likely non-catastrophic event for this century and [he’d] be surprised if it doesn’t happen by 2030.” But whether the Singularity happens or not, I think we have to be prepared for a world in which we are far from the only form of advanced, tool-using, concept-manipulating intelligence. As I stated earlier, “Human-equivalent AI does not equal human-level AI.”

To which I would add: “Nor does it need to be.”

The Supercomputer Race, Revisited

Nine months ago, I wrote a post called “The Supercomputer Race” about China’s then top-ranked supercomputer, the Tianhe-1A, and what it meant for the U.S. The Tianhe-1A achieved 2.57 petaflops (10^15 floating-point operations per second) on LINPACK, a benchmark based on solving a dense system of linear equations. In comparison, the world’s next fastest system at the time was Oak Ridge National Laboratory’s Jaguar, clocking in at 1.76 petaflops. (Based on TOP500’s semi-annual ranking of the world’s five hundred fastest supercomputers.)
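
For readers unfamiliar with the benchmark: LINPACK scores a machine by timing the solution of a dense linear system and dividing the standard operation count by the elapsed time. The sketch below is my own illustrative toy in pure Python, not the actual HPL benchmark code used for the TOP500:

```python
import random
import time

def solve(a, b):
    """Solve Ax = b by Gaussian elimination with partial pivoting --
    the same class of dense linear algebra work LINPACK measures."""
    n = len(b)
    a = [row[:] for row in a]  # work on copies; leave inputs intact
    b = b[:]
    for k in range(n):
        # Pivot: swap in the row with the largest entry in column k.
        p = max(range(k, n), key=lambda i: abs(a[i][k]))
        a[k], a[p] = a[p], a[k]
        b[k], b[p] = b[p], b[k]
        for i in range(k + 1, n):
            m = a[i][k] / a[k][k]
            for j in range(k, n):
                a[i][j] -= m * a[k][j]
            b[i] -= m * b[k]
    # Back-substitution on the resulting upper-triangular system.
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(a[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / a[i][i]
    return x

n = 200
random.seed(1)
a = [[random.gauss(0, 1) for _ in range(n)] for _ in range(n)]
b = [random.gauss(0, 1) for _ in range(n)]
t0 = time.perf_counter()
x = solve(a, b)
elapsed = time.perf_counter() - t0
flops = (2 / 3) * n**3 + 2 * n**2  # standard LINPACK operation count
print(f"{flops / elapsed / 1e6:.1f} MFLOPS (pure Python, n={n})")
```

A script like this manages megaflops; the K Computer's 8.162 petaflops is roughly a billion times faster, which gives a feel for what these rankings are actually measuring.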

Today TOP500 released their latest rankings, which put Japan’s K Computer in the number one spot with 8.162 petaflops (PFLOPS), a jump of more than three times the performance of the now number two Tianhe-1A. How was such a sharp increase realized and what does it mean for supercomputing in the future?

A little history: TOP500 has been ranking the world’s supercomputers since 1993. During this time, individual and cumulative performance has followed a steady pattern of exponential growth. As with this latest ranking, individual rankings have shown a significant jump in some years (e.g., 1997, 2002), followed by years of more modest improvement. Cumulative totals, on the other hand, have been very consistent due to the broader data set and are probably a better indicator of where the trend stands overall. (Interestingly, RIKEN’s K Computer represents a jump not seen since Japan’s last number one, the Earth Simulator, in 2002.) Not surprisingly, the plot points for the performance growth of the number one spot approximate a classic series of sigmoid growth curves, as technologies reach their limits and are superseded by others.

[Chart: TOP500 June 2011 projected performance]

The substantial leap forward last year by the Tianhe-1A can mostly be attributed to one significant improvement: the implementation of radically faster interconnects. Rather than focusing on the latest step up in CPU technology, the designers of the Tianhe-1A focused on the biggest bottleneck in the system. Interconnects are networking chipsets that coordinate the data continually being moved between processors, in this case thousands of Intel Westmere and Nvidia Fermi processors. China’s homegrown Galaxy interconnects were a huge improvement in performance at double the speed of the Infiniband interconnects used in many other systems.

This latest ranking saw improvements driven by a related trend: the transition away from monolithic CPU-based systems to heterogeneous platforms. (Heterogeneous platforms utilize a variety of different types of computational units, including CPUs, GPUs, interconnects, etc.) Looking at the trend line, the Tianhe-1A represented a nearly 50% increase over Oak Ridge’s Jaguar. Japan’s K Computer improves on the Tianhe-1A by more than 200%. During this next year, two U.S. systems are slated to become operational with peak performances in the 20 PFLOPS range, a further gain of nearly 150%.
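
The round numbers in the paragraph above can be checked against the raw LINPACK scores with a few lines of arithmetic:

```python
def pct_increase(new: float, old: float) -> float:
    """Percentage improvement of `new` over `old`."""
    return (new / old - 1.0) * 100.0

# LINPACK scores in petaflops, as reported by TOP500.
jaguar, tianhe, k_computer = 1.76, 2.57, 8.162

print(f"Tianhe-1A over Jaguar:   {pct_increase(tianhe, jaguar):.0f}%")      # ~46%
print(f"K Computer over Tianhe:  {pct_increase(k_computer, tianhe):.0f}%")  # ~218%
print(f"20 PFLOPS over K:        {pct_increase(20.0, k_computer):.0f}%")    # ~145%
```

So the Tianhe-1A's edge over Jaguar was about 46%, the K Computer's jump over the Tianhe-1A about 218%, and a 20 PFLOPS system would be about 145% beyond the K Computer.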

So does this point to a long-term increase in the rate of improvement in supercomputing performance? I’d say, probably not. The elimination of bottlenecks and the transition to new approaches will likely be a blip on the trend line. As the industry moves toward the target of exascale supercomputing later this decade, we’re likely to see improvements slow at various points as we deal with some very considerable challenges of scale. It’s been said that while the move from terascale to petascale computing was evolutionary, the leap from petascale to exascale will be revolutionary. The solutions used in the earlier systems simply won’t scale up without significant changes being made.

A common question among the general public is “why do we even need more powerful supercomputers? Can’t we get by with what we have already?” The simple answer is ‘No’. If the U.S. wants to remain a leading technological and economic force in the world, it will be necessary to invest in a future in which supercomputers play a central role. If we’re to see the nascent technologies of the 21st century realized, we’ll need the vast processing power of exascale systems and beyond. Likewise, we’ll need next-generation supercomputers if we’re to overcome many of the challenges the world now faces. Our digital world is generating enormous quantities of data, data that is itself growing exponentially. Bioinformatics, proteomics and brain simulation are but a few of the fields that will require continuing improvements in supercomputing to deal with their immense data sets. For similar reasons, we’ll need these computers for complex analytic systems such as IBM’s DeepQA Project, more commonly known as Watson. The ability to create tremendously detailed climate models will also be essential as we deal with human-caused climate change, whether to predict its consequences or to implement solutions. In short, to abandon advances in supercomputing is to abandon our place in the future.

(The future of information management is explored in my recent article, “Treading in the Sea of Data”, in the July/August 2011 issue of The Futurist. The article is an abridged version of my paper which will be published this summer in the WorldFuture 2011 conference volume, “Moving From Vision to Action,” editor, Cynthia G. Wagner.)

The Path to Future Intelligence

Often when people think about intelligence and the future, they have visions of monolithic supercomputers, Terminator scenarios and Skynet. But there’s an entirely different path to future intelligence. Intelligence augmentation (IA, not to be confused with AI) involves supplementing our own brain’s abilities using a range of different technologies. While some people may find such developments disconcerting, evidence suggests this will be a viable technology in the future. And it’s one that may be considerably closer than you think.

In some ways the augmentation of intelligence already has a long history. From the first time we cut notches into sticks or painted on cave walls, we were augmenting our memories by creating a tangible record. The written word developed this concept even further. More recently, the internet and search engines have given us access to a vast subset of human knowledge, effectively extending our memory by many orders of magnitude.

Now a number of fields stand at the threshold of augmenting human intelligence directly. Pharmacological methods include drugs called nootropics, which enhance learning and attention. Among these are ampakines, which have been tested by DARPA, the research arm of the Defense Department, in an effort to improve the attention span and alertness of soldiers in the field, as well as facilitate their learning and memory.

Biotechnological and genetic approaches are also being explored to identify therapeutic strategies that promote neuroplasticity and improve learning ability. A 2010 European Neuroscience Institute study found memory and learning in elderly mice restored to youthful levels when a cluster of genes was activated using a single enzyme. Several stem cell research studies offer hope not only for degenerative mental pathologies but also for restoring our ability to learn rapidly. In another study, mice exposed to the natural soil bacterium Mycobacterium vaccae showed significantly improved learning rates and retention, possibly the result of an immune response. All of these suggest we’ve only begun to scratch the surface when it comes to improving or augmenting intelligence.

Brain-computer interfaces, or BCIs, are another avenue currently being explored. A BCI gives a user the ability to control a computer or other device using only their thoughts. BCIs already exist that allow the operation of computer interfaces and wheelchairs, offering hope of a more interactive life to quadriplegics and patients with locked-in syndrome. Systems are even being developed to replace damaged brain function and aid in the control of prosthetic limbs. Cochlear implants are restoring hearing and considerable progress has been made in developing artificial retina implants. Work has also been done on an artificial hippocampus and it is likely there will be a number of other brain prostheses as the brain becomes better understood. All of these point to a day when our ability to tie in to enhanced or external resources could become a reality.

Of course, as with many new technologies, there will be those who believe intelligence augmentation should be restricted or banned altogether. But as we’ve seen in the past, this is a response that is doomed to failure. Even if draconian measures managed to prohibit R&D in one country, there will always be others who believe the benefits outweigh the costs. For instance, China is currently sequencing the genomes of 1,000 Chinese adults with IQs of 145 or higher and comparing these to the genomes of an equal number of randomly selected control subjects. Since a substantial proportion of intelligence is considered heritable, the project has interesting potential. Even if this method fails to identify the specific genes that give rise to high intelligence, important information is sure to be garnered. And regardless of the result, the project tells us that China, and probably others, are already committing significant resources to this question.

The augmentation of human intelligence is likely to be a mixed blessing, yielding both benefits and abuses. Regardless of our feelings about it, we would be wise to anticipate the kind of future such enhancements could one day bring.

Intelligence augmentation is just one of the approaches to information overload and the data explosion explored in my upcoming article in the July-August issue of The Futurist. The article is an abridged version of my paper “Treading Data: Staying Afloat in a Sea of Information” which will be published this summer in the WorldFuture conference volume, “Moving From Vision to Action,” editor, Cynthia G. Wagner.
[This introduction to intelligence augmentation was previously published in “The Intelligence Report” at Psychology Today.]

Vaccinations: The Intelligent Choice?

Vaccinations save lives

As I mentioned in a previous post, I began blogging for Psychology Today earlier this month. My latest post there is “Vaccinations: The Intelligent Choice?”, which addresses some of the misconceptions and disinformation surrounding vaccine safety. I decided to write about this because the Washington State legislature is currently attempting to improve vaccination rates by requiring that parents receive better information before being allowed to opt out. (Personally, I think we still need to do more than this, but it’s a step in the right direction.)

Vaccinations are one of the major health achievements of the 20th century. Without them, our citizens would still be ravaged by smallpox, polio, tuberculosis and many other terrible diseases. The decision to try to educate people about the correct risks and benefits of immunization is a wise step in the right direction. Ultimately, the decision to vaccinate becomes not just a question of personal responsibility, but of social responsibility as well.

The post has been selected by Psychology Today for their “Essential Reads: Editor’s Picks” section, which, of course, I was very happy to hear.

Addendum: The Seattle Times also recently invited me to be a guest columnist, publishing this article as a “Special to the Times”. (4/10/2011)