Archive for the Intelligence Category

Review: Superintelligence by Nick Bostrom


Since the advent of the computer age, and quite possibly before, popular media has seized on the idea of a superintelligent machine and turned it into a clichéd cautionary tale. Too often, a monolithic mind with an ill-defined need for global destruction or domination is eventually thwarted because a chisel-jawed protagonist identifies and exploits some flimsy flaw in the machine’s operations or logic, saving the world at the last possible moment. Of course, few of these movies or books move beyond the trope to adequately consider the difficulties of building such a machine, nor do they really explore just how alien such an intellect could actually be.

For many years, various artificial intelligence aficionados – AI professionals, cognitive scientists, philosophers and even hobbyists – have discussed the realities of just such a development in a much more serious vein. As with any debate about so speculative a subject, this has produced a number of valuable ideas and papers, as well as a great deal of frivolous speculation. Some of the better work to come from this ongoing discussion has been a series of papers by Oxford professor of philosophy Nick Bostrom. Now Bostrom has expanded considerably on this work in his new book, “Superintelligence.” He takes the reader step by step, from the potential methods of and difficulties in achieving a superintelligence through to the likelihood of it actually occurring within a specific time frame. Given that several expert surveys converge on a point near the middle of this century, it seems prudent that we begin to give the matter critical consideration as soon as possible.

One of the primary dangers Bostrom discusses (as have others) is the potential for an intelligence explosion – that point when an artificial intelligence has acquired sufficient ability to direct its own self-improvement. This potentially creates a positive feedback loop in which a human-level machine intelligence (HLMI) rapidly develops into a true superintelligence. Such a superintelligence is often defined as an “intellect that greatly exceeds the cognitive performance of human beings in virtually all domains of interest” and may surpass the sum intellect of the world’s entire human population. Such a mind would be completely alien to us, potentially with motives and goals very much in conflict with our own. This creates a very worrying scenario, one that could in fact bring about the end of humanity.

Bostrom explores the potential severity of the existential threat such an entity could present, along with the considerable difficulty, if not impossibility, of containing it. The strategies for containment differ greatly depending on the kinetics of the intelligence explosion – that is, whether the intelligence achieves criticality in a fast run-up taking hours to days, a medium one taking months to years, or a slow run-up occurring over decades or even centuries. Interestingly, Bostrom later argues that under certain global conditions, a fast run-up might actually be the preferred option. If we have any say in the matter, that is.
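Bostrom frames these kinetics with a simple relation: the rate of improvement equals the optimization power applied to the system divided by the system’s recalcitrance, its resistance to improvement. The toy simulation below is my own numerical sketch of that relation, not a model from the book; the recalcitrance values, threshold and time units are arbitrary assumptions, chosen only to show how the same feedback loop yields fast, medium or slow run-ups.

```python
# Toy sketch of Bostrom's takeoff relation: dI/dt = optimization power / recalcitrance.
# All numbers are invented for illustration.

def takeoff_time(recalcitrance, threshold=100.0, dt=1.0):
    """Step a crude recurrence until 'intelligence' passes a threshold.

    Once the system directs its own improvement, optimization power grows
    with intelligence itself -- the positive feedback loop described above.
    """
    intelligence = 1.0   # arbitrary human-level baseline
    t = 0.0
    while intelligence < threshold:
        optimization_power = intelligence          # self-improvement feedback
        intelligence += (optimization_power / recalcitrance) * dt
        t += dt
    return t

# Low recalcitrance -> fast run-up; high recalcitrance -> slow run-up.
for r, label in [(1.0, "fast"), (50.0, "medium"), (2000.0, "slow")]:
    print(f"{label:>6} takeoff (R = {r}): ~{takeoff_time(r):.0f} time steps")
```

The point of the exercise is that nothing about the feedback loop itself dictates the pace; the kinetics turn entirely on how stubbornly the remaining problems resist the system’s growing optimization power.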

As Bostrom points out repeatedly, we need to deal with this issue well before an AI actually achieves take-off. Any sufficiently powerful superintelligence is extremely likely to have motives and goals that conflict with our own and will strive to thwart any of our efforts to change or control it. More importantly, it will have the ability to do so. Basically, we have one opportunity to get this right. There won’t be any chance for a do-over. This is the point Elon Musk was making recently when he said, “with artificial intelligence we are summoning the demon.” Supernatural references aside, developing a superintelligence is playing with superheated fire.

It should be noted that “Superintelligence,” published by Oxford University Press, is an academic book, not a trade book. While well reasoned and clearly explained, this is no narrative romp through the subject, and the casual reader may find the material arduous at times. For instance, unless you have some familiarity with the methods used to create AI algorithms, it’s unlikely you’ll appreciate the benefits of Bayesian inference. But really you don’t need to. Far more important are concepts such as Bostrom’s explication of Yudkowsky’s idea of providing a seed AI with humanity’s “coherent extrapolated volition,” or CEV – essentially a means of coding a humanity-compatible value system into an AI, absent the many problems culture-bound morality has historically caused for our species.
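For readers who do want a taste, here is a minimal sketch of a Bayes update, the inference step that underlies many modern AI methods. The scenario and numbers are invented for illustration: a test that is 90% accurate for a condition with a 1% base rate.

```python
# A minimal Bayes update. All numbers are hypothetical.
prior = 0.01                 # P(condition)
p_pos_given_cond = 0.90      # P(positive test | condition)
p_pos_given_none = 0.10      # P(positive test | no condition)

# Law of total probability: P(positive test)
p_pos = p_pos_given_cond * prior + p_pos_given_none * (1 - prior)

# Bayes' theorem: P(condition | positive test)
posterior = p_pos_given_cond * prior / p_pos
print(f"posterior = {posterior:.3f}")   # ~0.083, far below the naive 90%
```

The counterintuitive result – a positive result from a 90%-accurate test still leaves a better-than-90% chance the condition is absent – is exactly the kind of reasoning a rational agent must perform, which is why the method figures in so much AI work.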

The book finishes off with an exploration of possible strategies in light of the challenges that have been identified. Because the entire future existence of humanity may be at stake, this is a decision process that cannot be taken lightly. Far from being the solution to this challenging dilemma, “Superintelligence” forces us to understand just how complex a task this really will be. As important a work as Bostrom’s book is, it isn’t a book of answers. Rather, it is a clarion call to those who will one day find the solutions to this very considerable problem. That is, if we are lucky.

An Argument For The Singularity


Earlier this week, Charles Stross posted his thoughts on why he doesn’t think the Technological Singularity will happen. If you’re not familiar with the concept, the Technological Singularity is defined as that point in the future when a self-improving artificial general intelligence (AGI) achieves superintelligence. As Stross recommends, you’d be wise to read the following first:

I’m going to take it as read that you’ve read Vernor Vinge’s essay on the coming technological singularity (1993), are familiar with Hans Moravec’s concept of mind uploading, and know about Nick Bostrom’s Simulation argument. If not, stop right now and read them before you continue with this piece. Otherwise you’re missing out on the fertilizer in which the whole field of singularitarian SF, not to mention posthuman thought, is rooted. It’s probably a good idea to also be familiar with Extropianism and to have read the posthumanism FAQ, because if you haven’t you’ll have missed out on the salient social point that posthumanism has a posse.

First, let me say that Stross is a first-class writer who brings serious thought to bear on a complex and controversial subject. I completely agree with many of his points, and I can definitely see how the Singularity might never happen. But as I was reading his arguments, one thought jumped out at me, one that sums up why I think there’s a reasonable chance the Singularity will happen:

“Human-equivalent AI does not equal human-level AI.”

Early on, Stross makes the argument against human-equivalent AI – the building of an intelligence that thinks like us. This is an accomplishment that may never happen, for a number of reasons I won’t repeat here. Re-read Stross. But then, based on my reading of it anyway, he proceeds with his argument as though human-equivalent AI were the same as human-level AI, and they’re not.

We stand on the cusp of a technological explosion that may (or may not) be unprecedented in the history of the universe. Authors such as Ray Kurzweil (The Singularity is Near), James M. Gardner (Biocosm) and Kevin Kelly (What Technology Wants) have discussed this at length. Read them. Based on the long history of self-organizing principles in the universe – what Kelly refers to as exotropy and Max More calls extropy – this technological explosion may well result in an explosion of intelligence as well. It may not occur as early as 2045, as Kurzweil forecasts; it could potentially even happen in the next decade, though I’m skeptical of that time frame. But in geological and cosmological terms, if it happens, it will occur in a relative eye blink from now. The resulting growth of intelligence would be comparable to the Cambrian Explosion, that era in Earth’s history when complex life underwent rapid and extensive diversification into many of the morphologies we see today.

My point is that technology needn’t emulate humans in order to be intelligent. We’re one accident in the history of evolution that managed to get it right. (From our perspective, anyway.) Technology is different. Unlike the long, arduous, vertical exchange of information that occurs through sexual recombination, technology moves its best solutions around in a much more free-form, horizontal manner. It’s not unlike horizontal gene transfer (HGT), the process explored by microbiologist Carl Woese and others, which preceded complex life on Earth. Historically, this technological recombination has required human intelligence as an intermediary, but recently this has started to change.

This, I believe, will eventually lead to a vast array of intelligences. Some will be smarter than us in certain ways, some won’t. Some might seem vaguely familiar; others will be utterly unfathomable. But ultimately these many intelligences will span the domain of possible intelligences, filling every niche in the information ecosphere. The extent of this domain is potentially very large, and within it human intelligence would be a very, very small subset.

Does this mean the Singularity will happen? I don’t know. The Singularity has come to represent different things to different people. Some, who need it to fulfill some religious circuit in their brains, see it in quasi-spiritual terms – the so-called “Rapture of the Nerds.” Others believe it will result in a “fast take-off,” leading to an uplift of our own species (and potentially others as well). To myself and others, it’s “just” the development of a superintelligence, which may possibly be followed by an explosion of intelligences within our light-cone of the universe. Ultimately, there’s no reason to expect it will result in anything like an entity that cares one iota about us. This is why ideas such as Eliezer Yudkowsky’s “Friendly AI” are so important. Within this domain of possible intelligences – whether vast monolithic superintelligences, distributed networked intelligences, or bio-digital amalgams – some will inevitably have enough volition to present an existential threat to the human race unless safeguards are put in place. And even those are no guarantee.

As Vernor Vinge stated in an interview I did with him a few years ago, he thinks “the Singularity is the most likely non-catastrophic event for this century and [he’d] be surprised if it doesn’t happen by 2030.” But whether the Singularity happens or not, I think we have to be prepared for a world in which we are far from the only form of advanced, tool-using, concept-manipulating intelligence. As I stated earlier, “Human-equivalent AI does not equal human-level AI.”

To which I would add: “Nor does it need to be.”

The Path to Future Intelligence


Often when people think about intelligence and the future, they have visions of monolithic supercomputers, Terminator scenarios and Skynet. But there’s an entirely different path to future intelligence. Intelligence augmentation (IA, not to be confused with AI) involves supplementing our own brain’s abilities using a range of different technologies. While some people may find such developments disconcerting, evidence suggests this will be a viable technology in the future. And it’s one that may be considerably closer than you think.

In some ways the augmentation of intelligence already has a long history. From the first time we cut notches into sticks or painted on cave walls, we were augmenting our memories by creating a tangible record. The written word developed this concept even further. More recently, the internet and search engines have given us access to a vast subset of human knowledge, effectively extending our memory by many orders of magnitude.

Now a number of fields stand at the threshold of augmenting human intelligence directly. Pharmacological methods include drugs called nootropics, which enhance learning and attention. Among these are ampakines, which have been tested by DARPA, the research arm of the U.S. Department of Defense, in an effort to improve the attention span and alertness of soldiers in the field, as well as to facilitate their learning and memory.

Biotechnological and genetic approaches are also being explored in order to identify therapeutic strategies that promote neuroplasticity and improve learning ability. A 2010 European Neuroscience Institute study found memory and learning in elderly mice restored to youthful levels when a cluster of genes was activated using a single enzyme. Several stem cell studies offer hope not only for degenerative mental pathologies but also for restoring our ability to learn rapidly. In another study, mice exposed to the natural soil bacterium Mycobacterium vaccae showed significantly improved learning rates and retention, possibly the result of an immune response. All of this suggests we’ve only begun to scratch the surface when it comes to improving or augmenting intelligence.

Brain-computer interfaces, or BCIs, are another avenue currently being explored. A BCI gives a user the ability to control a computer or other device using only their thoughts. BCIs already exist that allow the operation of computer interfaces and wheelchairs, offering hope of a more interactive life to quadriplegics and patients with locked-in syndrome. Systems are even being developed to replace damaged brain function and aid in the control of prosthetic limbs. Cochlear implants are restoring hearing, and considerable progress has been made in developing artificial retina implants. Work has also been done on an artificial hippocampus, and there will likely be a number of other brain prostheses as the brain becomes better understood. All of these point to a day when our ability to tie into enhanced or external resources could become a reality.
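To make the principle concrete, here is a minimal sketch of one common noninvasive BCI approach: translating the power in a particular EEG frequency band into a simple control command. This is my own toy illustration, not any specific system’s method; the signal is synthetic and the threshold is an arbitrary assumption, where a real system would use calibrated classifiers and artifact rejection.

```python
import numpy as np

FS = 250            # sampling rate in Hz (typical for research/consumer EEG)
ALPHA = (8, 13)     # alpha band in Hz; its power rises when a user relaxes

def band_power(samples, fs, band):
    """Average spectral power of `samples` within `band` (Hz)."""
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / fs)
    power = np.abs(np.fft.rfft(samples)) ** 2
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return power[mask].mean()

def control_signal(samples, threshold=1e6):
    """Map alpha-band power to a toy on/off command (threshold is invented)."""
    return "GO" if band_power(samples, FS, ALPHA) > threshold else "STOP"

# Synthetic one-second "recording": a 10 Hz rhythm buried in noise.
t = np.arange(FS) / FS
rng = np.random.default_rng(0)
eeg = 40 * np.sin(2 * np.pi * 10 * t) + rng.normal(0, 10, FS)
print(control_signal(eeg))   # alpha power dominates here, so this prints "GO"
```

Real systems layer calibration and trained classifiers on top of this idea, but the pipeline – acquire a signal, extract a feature, map it to a command – is the same.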

Of course, as with many new technologies, there will be those who believe intelligence augmentation should be restricted or banned altogether. But as we’ve seen in the past, such a response is doomed to failure. Even if draconian measures managed to prohibit R&D in one country, there will always be others who believe the benefits outweigh the costs. For instance, China is currently sequencing the genomes of 1,000 Chinese adults with IQs of 145 or higher and comparing these to the genomes of an equal number of randomly selected control subjects. Since a substantial proportion of intelligence is considered to be heritable, the project has interesting potential. Even if this method fails to identify the specific genes that give rise to high intelligence, important information is sure to be garnered. Regardless of the result, though, the project tells us that China, and probably others, are already committing significant resources to this matter.
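To see what such a comparison involves, here is a minimal sketch of the case/control test at the heart of that kind of study: asking whether a particular genetic variant is more common among the high-IQ group than among controls. The allele counts below are invented purely for illustration.

```python
# Toy case/control association test at a single genetic variant.
# Counts are invented; 1,000 people contribute 2,000 alleles per group.
from scipy.stats import chi2_contingency

#         allele A  allele B
table = [[1150, 850],    # cases (IQ >= 145)
         [1050, 950]]    # controls

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p_value:.4f}")

# A genome-wide study repeats this test across millions of variants, so the
# significance threshold must be corrected accordingly (commonly p < 5e-8).
```

With a trait as polygenic as intelligence appears to be, any single variant will carry a tiny effect, which is why such projects need large cohorts and stringent statistical corrections.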

The augmentation of human intelligence is likely to be a mixed blessing, yielding both benefits and abuses. Regardless of our feelings about it, we would be wise to anticipate the kind of future such enhancements could one day bring.

Intelligence augmentation is just one of the approaches to information overload and the data explosion explored in my upcoming article in the July-August issue of The Futurist. The article is an abridged version of my paper “Treading Data: Staying Afloat in a Sea of Information,” which will be published this summer in the WorldFuture conference volume, “Moving From Vision to Action,” edited by Cynthia G. Wagner.
[This introduction to intelligence augmentation was previously published in “The Intelligence Report” at Psychology Today.]

The Intelligence Report

I’m excited to announce I was recently invited to write a blog for Psychology Today. As with this blog, “The Intelligence Report” will explore the evolving relationship between intelligence and technology. It will include new posts as well as some cross-posting from this blog when the topic is a good fit. As an introduction, the initial post, “Can Machines Be Intelligent?” explores the possibility that computers may be closer to achieving true intelligence than many people think. Be sure to check out both blogs regularly for the latest postings.

Future of Intelligence Article

“Get Smart,” my new article about the future of intelligence, is out in this month’s Mensa Bulletin.

The domain of sapient entities may one day encompass not only humans, but transhumans, machine intelligences, augmented animals, distributed networks, group minds, even uploaded personalities.  If so, our definition of what is intelligent and sentient would need to change, along with the legal and social institutions under which we’d coexist.  There can be little doubt that it would be a very strange and different world – a world in which new forms of intelligence appear much more rapidly than has ever occurred in the past.

The article was originally titled “Evolution, Technology and the Future of Intelligence,” but the Bulletin decided to go with the snappier “Get Smart,” presumably unaware of Jamais Cascio’s Atlantic Monthly article of the same name from two months before. I’ll try to include a link to the full article in the near future.
