
The Path to Future Intelligence


Often when people think about intelligence and the future, they have visions of monolithic supercomputers, Terminator scenarios and Skynet. But there’s an entirely different path to future intelligence. Intelligence augmentation (IA, not to be confused with AI) involves supplementing our own brain’s abilities using a range of different technologies. While some people may find such developments disconcerting, evidence suggests this will be a viable technology in the future. And it’s one that may be considerably closer than you think.

In some ways the augmentation of intelligence already has a long history. From the first time we cut notches into sticks or painted on cave walls, we were augmenting our memories by creating a tangible record. The written word developed this concept even further. More recently, the internet and search engines have given us access to a vast subset of human knowledge, effectively extending our memory by many orders of magnitude.

Now a number of fields stand at the threshold of augmenting human intelligence directly. Pharmacological methods include drugs called nootropics, which enhance learning and attention. Among these are ampakines, which have been tested by DARPA, the research arm of the Defense Department, in an effort to improve the attention span and alertness of soldiers in the field, as well as to facilitate their learning and memory.

Biotechnological and genetic approaches are also being explored to identify therapeutic strategies that promote neuroplasticity and improve learning ability. A 2010 European Neuroscience Institute study found that memory and learning in elderly mice were restored to youthful levels when a cluster of genes was activated using a single enzyme. Several stem cell studies offer hope not only for degenerative mental pathologies but also for restoring our ability to learn rapidly. In another study, mice exposed to the natural soil bacterium Mycobacterium vaccae showed significantly improved learning rates and retention, possibly the result of an immune response. All of this suggests we’ve only begun to scratch the surface when it comes to improving or augmenting intelligence.

Brain-computer interfaces, or BCIs, are another avenue currently being explored. A BCI gives a user the ability to control a computer or other device using only their thoughts. BCIs already exist that allow the operation of computer interfaces and wheelchairs, offering hope of a more interactive life to quadriplegics and patients with locked-in syndrome. Systems are even being developed to replace damaged brain function and to aid in the control of prosthetic limbs. Cochlear implants are restoring hearing, and considerable progress has been made in developing artificial retina implants. Work has also been done on an artificial hippocampus, and a number of other brain prostheses are likely to follow as the brain becomes better understood. All of this points to a day when tying our minds into enhanced or external resources could become a reality.

Of course, as with many new technologies, there will be those who believe intelligence augmentation should be restricted or banned altogether. But as we’ve seen in the past, this is a response that is doomed to failure. Even if draconian measures managed to prohibit R&D in one country, there will always be others who believe the benefits outweigh the costs. For instance, China is currently sequencing the genomes of 1,000 Chinese adults with IQs of 145 or higher and comparing them to the genomes of an equal number of randomly selected control subjects. Since a substantial proportion of intelligence is considered to be heritable, the project has interesting potential. Even if the method fails to identify the specific genes that give rise to high intelligence, important information is sure to be gleaned. And regardless of the result, the project tells us that China, and probably others, are already committing significant resources to this matter.

The augmentation of human intelligence is likely to be a mixed blessing, yielding both benefits and abuses. Regardless of our feelings about it, we would be wise to anticipate the kind of future such enhancements could one day bring.

Intelligence augmentation is just one of the approaches to information overload and the data explosion explored in my upcoming article in the July-August issue of The Futurist. The article is an abridged version of my paper “Treading Data: Staying Afloat in a Sea of Information,” which will be published this summer in the WorldFuture conference volume, “Moving From Vision to Action,” edited by Cynthia G. Wagner.
[This introduction to intelligence augmentation was previously published in “The Intelligence Report” at Psychology Today.]

Vaccinations: The Intelligent Choice?

Vaccinations save lives

As I mentioned in a previous post, I began blogging for Psychology Today earlier this month. My latest post there is “Vaccinations: The Intelligent Choice?”, which addresses some of the misconceptions and disinformation surrounding vaccine safety. I decided to write about this because the Washington State legislature is currently attempting to improve vaccination rates by requiring that parents receive better information before being allowed to opt out. (Personally, I think we still need to do more than this, but it’s a step in the right direction.)

Vaccinations are one of the major health achievements of the 20th century. Without them, our citizens would still be ravaged by smallpox, polio, tuberculosis and many other terrible diseases. The effort to educate people about the actual risks and benefits of immunization is a wise one. Ultimately, the decision to vaccinate becomes not just a question of personal responsibility, but of social responsibility as well.

The post has been selected by Psychology Today for their “Essential Reads: Editor’s Picks” section, which, of course, I was very happy to hear.

Addendum: The Seattle Times also recently invited me to be a guest columnist, publishing this article as a “Special to the Times”. (4/10/2011)

Exascale Supercomputers: The Next Frontier

Cray Jaguar Supercomputer

The last few years finally saw the arrival of supercomputers capable of petascale performance. In all, seven systems from the US, China, Japan and France achieved the milestone of processing a million billion (10^15) floating-point operations per second (flops) by the end of 2010. But even before this target was reached, computer scientists and engineers were setting their sights on an even loftier goal: exascale computing.

The supercomputer has become the mainstay of both theoretical and applied science. Climate modeling, genome analysis, protein folding, nuclear fusion research and many other fields all benefit from the continuing gains in processing power. Now, with a range of exascale initiatives, the U.S. and Europe have set a goal of building a supercomputer one thousand times more powerful than any today. And they want to do it sometime between 2018 and 2020.

At first glance, this seems like it should be achievable. After all, three orders of magnitude in seven to nine years falls roughly within the scope of Moore’s Law. But whereas the move from terascale to petascale processing was considered evolutionary, the jump to exascale will require revolutionary advances. Simply scaling up current technology won’t work. For instance, the Cray Jaguar supercomputer at Oak Ridge National Laboratory has more than a quarter of a million processor cores, over 360 terabytes of memory and uses 7.8 megawatts at peak power. A combination of air and liquid cooling removes enough waste heat to potentially warm several large buildings. Scaling such a system up a thousand-fold just isn’t feasible; the power draw alone would approach eight gigawatts, roughly the output of several large power plants.
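For a sense of scale, here’s a quick back-of-envelope sketch using the Jaguar figures above. It’s a rough illustration only: the inputs are the approximate published specs, and the thousand-fold multiplication is deliberately naive.

```python
# Back-of-envelope scale-up of the Jaguar figures quoted above, showing why
# naively multiplying a petascale machine by 1,000 is a dead end.

cores = 250_000      # "more than a quarter of a million processor cores"
memory_tb = 360      # terabytes of memory
power_mw = 7.8       # megawatts at peak

scale = 1_000        # petascale to exascale

print(f"Cores:  {cores * scale:,}")                    # 250,000,000 cores
print(f"Memory: {memory_tb * scale / 1_000:,.0f} PB")  # 360 petabytes
print(f"Power:  {power_mw * scale / 1_000:.1f} GW")    # 7.8 gigawatts
```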

So new technologies will have to be developed. We’ll need processors with considerably lower power requirements and heat generation, faster optical interconnects and improved algorithms that provide better utilization of available processing cycles. And that’s just for starters.

Will we be able to achieve this goal in the timeframe that’s been set? Possibly, but only just. Peter Kogge, a professor of computer science and engineering, recently wrote in IEEE Spectrum of his concerns about realizing exascale computing. Kogge was editor and study lead for the Exascale Computing Study initiated by the Defense Advanced Research Projects Agency (DARPA). That study, commissioned in 2007 to determine the feasibility of exascale computing by 2015, illuminated a number of obstacles that will have to be overcome.

But the dream of exascale supercomputers is important enough that DARPA, the U.S. Department of Energy and private industry are forging ahead despite such concerns. Last year, Intel opened three new centers dedicated to exascale research: the Exascale Computing Research Center in Paris, France, the ExaCluster Laboratory in Juelich, Germany and the ExaScience Lab in Leuven, Belgium.

Why is exascale supercomputing so important? The world faces significant challenges in the coming decades. Dealing with climate change, peak oil and a multitude of engineering challenges will require tremendous computing resources. At the same time, we’ve entered an era of massive data sets. Everything from genomics and proteomics to molecular modeling to nanotechnology will benefit from these advances. In short, much of the science of the 21st century will be impacted by exascale supercomputing.

The great thing about a grand challenge like this is that even if it takes longer than expected to achieve, all kinds of research and innovation will yield benefits along the way. New processor architectures, improvements in energy efficiency and advances in parallel algorithms are but a few of the gains we can expect to eventually trickle down to other, more publicly accessible uses.

But the US and Europe aren’t the only players pursuing the exascale dream. China has made very clear its goal of continuing to build the world’s fastest supercomputers. As of November 2010, its Tianhe-1A supercomputer was ranked the world’s fastest by TOP500.org. (TOP500.org ranks the world’s 500 fastest supercomputers according to their performance solving a dense system of linear equations, the LINPACK benchmark.) China is also currently building its third National Supercomputing Center in Changsha, Hunan Province, a massive complex expected to be completed by the end of 2011. And China has set a goal of building an exascale supercomputer sometime between 2016 and 2020, which should give us cause for concern. Given their focus and the speed with which they moved to the top rank with the Tianhe-1A, China could definitely jump far ahead if we don’t make this a priority.
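For the curious, the idea behind that benchmark can be sketched in a few lines. This toy version times a dense solve with NumPy on a single machine and applies the standard LU operation count of roughly (2/3)n^3 + 2n^2 flops; the real HPL benchmark distributes a vastly larger problem across an entire supercomputer.

```python
# Toy LINPACK-style measurement: solve a dense n-by-n system and estimate
# flops from the standard operation count for an LU solve. Illustrative only;
# the actual TOP500 figure comes from HPL run across a whole machine.

import time
import numpy as np

n = 2000
A = np.random.rand(n, n)
b = np.random.rand(n)

start = time.perf_counter()
x = np.linalg.solve(A, b)  # dense LU factorization and back-substitution
elapsed = time.perf_counter() - start

flops = ((2 / 3) * n**3 + 2 * n**2) / elapsed
print(f"{flops / 1e9:.1f} gigaflops")
```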

Fortunately, the Obama administration has asked for $126 million in the 2012 budget for the development of next-generation supercomputers. Whether it will receive approval from the new Congress remains to be seen. In my opinion, a decision not to fund such important technology could have far-reaching consequences for our competitiveness in the world and would definitely show a lack of foresight.

The Intelligence Report

I’m excited to announce I was recently invited to write a blog for Psychology Today. As with this blog, “The Intelligence Report” will explore the evolving relationship between intelligence and technology. It will include new posts as well as some cross-posting from this blog when the topic is a good fit. As an introduction, the initial post, “Can Machines Be Intelligent?” explores the possibility that computers may be closer to achieving true intelligence than many people think. Be sure to check out both blogs regularly for the latest postings.

Watson: The New Jeopardy Champion


I consider myself a techno-optimist, but Watson’s performance in Jeopardy’s IBM Challenge definitely exceeded my expectations. While I did predict Watson would win the competition, I didn’t think it would be so dominant. This was a contest I thought machine intelligence might win by a narrow margin, but the three-day, two-game match wasn’t even close. AI has come a long way, baby.

As impressive as Watson’s final cash score was, what I think was more remarkable was its answer success rate. In the first match, out of sixty clues, Watson rang in first and answered 38 correctly, with five errors. That’s an 88.4% success rate (38 of 43 attempts). If only the 30 clues in the Double Jeopardy portion are considered, this jumps to a whopping 96%. You’ll notice I’ve left the Final Jeopardy question out of these calculations. This is because that question had to be answered regardless of the machine’s low confidence level of 14%. The forced answer matters to the competition, but the low confidence rating itself actually indicates the success of the machine’s algorithms: Watson knew its answer was probably wrong.
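For anyone checking my math, the arithmetic is below. Note that the Double Jeopardy split is my inference from the 96% figure; the breakdown wasn’t published in that form.

```python
# Quick check of the success rates quoted above.
correct, errors = 38, 5
print(f"Overall: {correct / (correct + errors):.1%}")  # 88.4% of 43 attempts

# The Double Jeopardy breakdown is inferred, not quoted: 24 correct out of
# 25 attempts is the split consistent with the 96% figure above.
dj_correct, dj_errors = 24, 1
print(f"Double Jeopardy: {dj_correct / (dj_correct + dj_errors):.0%}")  # 96%
```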

While the second game (Day 3) wasn’t quite as impressive as the first, Watson still won by a significant margin. Considering it was competing against the two best human Jeopardy players of all time, it’s safe to say IBM met its goal and then some.

Some of the more intriguing (some would rightly say, concerning) moments in the contest were those in which Watson arrived at unfathomably wrong answers. As Watson’s principal investigator, Dr. David Ferrucci, commented:

“Watson absolutely surprises me. People say: ‘Why did it get that one wrong?’ I don’t know. ‘Why did it get that one right?’ I don’t know.”

The fact is, even Watson’s developers often can’t fathom how it arrives at the answers it does. Parsing through millions of stored documents, Watson applies hundreds of algorithms to arrive at the answer with the highest confidence rating. (While this bears a passing resemblance to Minsky’s “society of mind” concept, it remains very different from the way humans think.) The incredible complexity of the process means we can’t fully understand it. This is the nature of emergent systems: they generate outcomes that often can’t be accurately predicted. They follow an internal logic of their own, one we can’t hope to trace.
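To make the pattern concrete, here’s a toy sketch of the general approach: several independent scorers judge each candidate answer, their evidence is merged into a single confidence, and the highest-confidence candidate wins. Everything here is invented for illustration; the scorer functions, weights and clue are stand-ins, and IBM’s actual system combines hundreds of learned scorers, not two hard-coded ones.

```python
# Toy illustration of evidence scoring and confidence ranking. All scorers
# and weights are hypothetical stand-ins for Watson's hundreds of algorithms.

def passage_match(clue: str, candidate: str) -> float:
    """Hypothetical: how well retrieved passages support the candidate."""
    return 0.8 if candidate == "Toronto" else 0.6

def type_check(clue: str, candidate: str) -> float:
    """Hypothetical: does the candidate fit the expected answer type?"""
    return 0.2 if "U.S. city" in clue and candidate == "Toronto" else 0.9

SCORERS = [passage_match, type_check]
WEIGHTS = [0.5, 0.5]  # real systems learn these weights from training data

def confidence(clue: str, candidate: str) -> float:
    return sum(w * s(clue, candidate) for w, s in zip(WEIGHTS, SCORERS))

clue = "U.S. city whose largest airport is named for a World War II hero"
candidates = ["Chicago", "Toronto"]
best = max(candidates, key=lambda c: confidence(clue, c))
print(best, round(confidence(clue, best), 2))  # Chicago 0.75
```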

In Watson, we’re seeing the barest hints, the merest beginnings of this. The potential domain of future intelligences is vast. It’s possible that one day there will be as many different kinds of machine intelligence as there are biological species. And in all likelihood, we won’t understand the behaviors and motivations of a single one.

Watson is a long way from being an artificial general intelligence. It isn’t HAL 9000. But it is a huge step forward, one that should make us ask serious questions about the future of AI. We face a future full of machine intelligences as smart as or smarter than we are. Some experts even speculate that recursive self-improvement will yield superintelligences vastly more intelligent than the entire human race combined. There should be no question that we’d be incapable of grasping the motivations of such a machine. And there’s no reason to expect its objectives would mesh with our own. Obviously, this could have results that are disastrous, potentially even existentially catastrophic.

We aren’t going to stop the advance of artificial intelligence or the eventual development of an artificial general intelligence. Therefore, steps will need to be taken to ensure these machines remain as benevolent as possible, and not because they would necessarily be malevolent otherwise. An indifferent superintelligence would be just as big a threat to humanity, because it could take potentially world-altering actions without considering what they mean for us. Arguments for creating rules-based safeguards, such as Asimov’s “Three Laws of Robotics,” will likely fail, simply because rules can be misinterpreted or circumvented given sufficient motivation.

Work toward “Friendly AI,” as proposed by AI researcher Eliezer Yudkowsky, stands a much better chance of a human-positive outcome. Instilling a machine equivalent of morality not only protects us from the actions of a superintelligence, but from those of its self-improved progeny as well. Creating “Friendly” safeguards that motivate such a machine to do everything in its power to ensure humans do not come to harm, now or in the future, may be our best bet. As Yudkowsky states:

“Gandhi does not want to commit murder, and does not want to modify himself to commit murder.”

We can hope that a superintelligence comes to the same conclusion. But we can do more than just hope; we can work to ensure it happens.

The capabilities Watson has demonstrated using deep analytics and natural language processing are truly stunning. The technologies that develop from this will no doubt help the world with many of its significant problems, not least of which is dealing with the vast, escalating volumes of data our modern world generates. But there is the potential for significant dangers to arise from such technology too. I feel certain, though, that we can overcome these threats and continue the long legacy of building a better world with the help of our technology.

How’s that for techno-optimism?

The Future of Communications

Lascaux hunters

One of the themes I continually find myself returning to is the growing symbiotic relationship between human beings and their technologies. Every device, machine and factory needs us to invent it and build it, while at the same time we become increasingly dependent on these very technologies. We need them in order to thrive and often to survive; they need us to give them form and existence. If that’s not symbiotic, I don’t know what is.

Hopefully, you’ll get a little sense of this symbiosis in my latest article about the future of communications, “Technically Speaking”. To me, communications is one of the great technological threads woven through the tapestry of human history. From the first moment someone left notches on a piece of wood to record an event or painted on the walls of some now-forgotten cave, we embarked on a tremendous journey. A journey made hand in hand with the many technologies of communications. And as with all the best journeys, this one is likely to continue for a very long time.
