Posts Tagged AI

Review: Superintelligence by Nick Bostrom


Since the advent of the computer age, and quite possibly before, popular media has seized on the idea of a superintelligent machine and turned it into a clichéd cautionary tale. Too often a monolithic mind with an ill-defined need for global destruction or domination is eventually thwarted because a chisel-jawed protagonist identifies and exploits some flimsy flaw in the machine’s operations or logic, saving the world at the last possible moment. Of course, few of these movies or books move beyond the trope to adequately consider the difficulties of building such a machine, nor do they really explore just how alien such an intellect could actually be.

For many years, various artificial intelligence aficionados – AI professionals, cognitive scientists, philosophers and even hobbyists – have discussed the realities of just such a development in a much more serious vein. As with any debate about so speculative a subject, this has produced a number of good ideas and papers, as well as a great deal of frivolous speculation. Some of the better work to come from this ongoing discussion has been in the form of several papers by Oxford professor of philosophy Nick Bostrom. Now Bostrom has expanded considerably on this work in his new book, “Superintelligence.” Bostrom takes the reader step by step from the potential methods and difficulties of achieving a superintelligence through to the likelihood of it actually occurring within a specific time frame. Given that several expert surveys converge on a point near the middle of this century, it seems prudent that we begin to give the matter critical consideration as soon as possible.

One of the primary dangers Bostrom discusses (as have others) is the potential for an intelligence explosion – that point when an artificial intelligence has acquired sufficient ability to direct its own self-improvement. This potentially leads to a positive feedback loop, one that could see a human-level machine intelligence (HLMI) rapidly develop into a true superintelligence. Such a superintelligence is often defined as an “intellect that greatly exceeds the cognitive performance of human beings in virtually all domains of interest” and may surpass the sum intellect of the world’s entire human population. Such a mind would be completely alien to us, potentially with motives and goals very much in conflict with our own. This creates a very worrying scenario, one that could in fact bring about the end of humanity.

As the potential severity of the existential threat such an entity could present is explored, so too is the considerable difficulty, if not impossibility, of containing it. The strategies for containment differ greatly depending on the kinetics of the intelligence explosion, that is, whether the intelligence achieves criticality in a fast run-up taking hours to days, a medium one taking months to years, or a slow run-up unfolding over decades or even centuries. Interestingly, Bostrom later makes the argument that under certain global conditions, a fast run-up might actually be the preferred option. If we have any say in the matter, that is.
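
To make the kinetics a little more concrete, here is a minimal, purely illustrative sketch in Python – my own toy model, not anything taken from the book – in which the only thing separating a slow run-up from a fast one is how strongly each round of self-improvement feeds the next:

```python
# Toy illustration only (not from Bostrom's book): capability compounds each
# step in proportion to the capability already attained. The 'feedback'
# parameter stands in for how effectively an AI can reinvest its current
# ability into further self-improvement.

def run_up(feedback, steps=50, capability=1.0):
    """Simulate a run-up; returns the capability trajectory over 'steps'."""
    history = [capability]
    for _ in range(steps):
        capability += feedback * capability  # the self-improvement loop
        history.append(capability)
    return history

for label, feedback in [("slow", 0.01), ("medium", 0.1), ("fast", 0.5)]:
    final = run_up(feedback)[-1]
    print(f"{label:>6} run-up: capability grows roughly {final:,.0f}x in 50 steps")
```

The numbers themselves mean nothing; the point is simply that under compounding self-improvement, modest differences in the feedback term are the difference between a run-up measured in centuries and one measured in days.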

As Bostrom points out repeatedly, we need to deal with this issue well before an AI actually achieves take-off. Any sufficiently powerful superintelligence is extremely likely to have motives and goals that conflict with our own and will strive to thwart any of our efforts to change or control it. More importantly, it will have the ability to do so. Basically, we have one opportunity to get this right. There won’t be any chance for a do-over. This is the point Elon Musk was making recently when he said, “with artificial intelligence we are summoning the demon.” Supernatural references aside, developing a superintelligence is playing with superheated fire.

It should be noted that “Superintelligence,” published by Oxford University Press, is an academic book, not a trade book. While well reasoned and clearly explained, this is no narrative romp through the subject, and the casual reader may find the material arduous at times. For instance, unless you have some familiarity with the methods used to create AI algorithms, it’s unlikely you’ll appreciate the benefits of Bayesian inference. But really, you don’t need to. Far more important are concepts such as Bostrom’s explication of Eliezer Yudkowsky’s idea of providing a seed AI with humanity’s “coherent extrapolated volition,” or CEV, essentially a means of coding a humanity-compatible value system into an AI while avoiding the many problems culture-bound morality has historically caused for our species.

The book finishes off with an exploration of possible strategies in light of the challenges that have been identified. Because the entire future existence of humanity may be at stake, this is a decision process that cannot be taken lightly. Far from being the solution to this challenging dilemma, “Superintelligence” forces us to understand just how complex a task this really will be. As important a work as Bostrom’s book is, it isn’t a book of answers. Rather, it is a clarion call to those who will one day find the solutions to this very considerable problem. That is, if we are lucky.

Are You Smarter Than a Sixth-Generation Computer?


My latest article, “Are You Smarter Than a Sixth-Generation Computer?”, leads off this month’s issue of The Futurist magazine (Sept/Oct 2012). The article explores the need for a standard metric for AIs and is based on my paper, “Toward a Standard Metric of Machine Intelligence,” which I recently published in the World Future Review. The purpose of the article (and the paper) can be summed up as follows:

As our world becomes increasingly filled with technological intelligence, it will serve us well to know exactly how smart our machines are and in what ways. Given that we try to measure almost every other aspect of our world, it seems only prudent that we accurately measure the intelligence of our machines as well — especially since, by some projections, they’re expected to surpass us in the coming decades.

During the next few decades we’re going to see significant gains in the field of strong AI, potentially giving rise to artificial general intelligences (AGIs). Universal Intelligence tests such as those described here would seem a crucial tool as we prepare for the changes this will bring.
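
For readers wondering what a “universal” intelligence test actually formalizes, one widely cited definition from the research literature – Legg and Hutter’s universal intelligence measure, offered here purely as an example of the genre rather than as the specific metric argued for in the paper – scores an agent by its expected performance across all computable environments, weighting simpler environments more heavily:

```latex
% Legg & Hutter's universal intelligence measure (an example of the genre only):
% an agent \pi is scored by the value V_\mu^\pi it achieves in each computable
% environment \mu in the class E, weighted by 2^{-K(\mu)}, where K(\mu) is the
% Kolmogorov complexity of \mu, so that simpler environments count for more.
\Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V_\mu^\pi
```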

Alan Turing: A Unique Human Being

Statue of Turing at Bletchley Park

In futures thinking, as in life, it’s often important to look back in order to look ahead. This week marks the 100th anniversary of the birth of Alan Turing, offering the perfect opportunity to do both.

Given the scope and scale of Turing’s achievements, it’s easy and natural to want to focus on his many accomplishments: the cryptanalyst who, with others at Bletchley Park, succeeded in cracking the seemingly unbreakable codes generated by the Nazis’ Enigma machines; the computer scientist who, along with John von Neumann, conceived the stored-program computer architecture, the model on which so much of modern computing is based; the originator of the Turing test, the first exploration of what might constitute machine intelligence and whether it could ever achieve parity with the human mind. And so much more.

With so many technical triumphs, it’s easy to overlook the fact that Turing had a private, personal life as well, just like any other man. In his case, he was a gay man living at a time when homosexuality was illegal in Great Britain. As a result, Turing was hounded, prosecuted, stripped of his security clearance and made to undergo estrogen injections that resulted in chemical castration. Plunging into a severe depression, Turing eventually took his own life at the age of 41. Even if he hadn’t been a national hero, this would have been a terrible injustice. But in light of all Turing contributed to the world, it was tragic beyond measure. (Note: Turing’s contribution to the war effort wasn’t made public until the seventies, when this information was declassified.)

It’s difficult to grasp just how different that era was for gays and lesbians, even though it was less than sixty years ago. While full equality still hasn’t been attained, the western world has come a very long way from those dark days. Within another generation, it seems likely the battle will be behind us and people will wonder what all the fuss over sexual orientation was about. With any luck, this will apply to race, creed and color as well.

But that doesn’t mean equality for all will have been realized or that discrimination will have completely disappeared. As a species we have a very xenophobic streak; at a certain level we’re hard-wired to be wary of anyone we perceive as too different. Anyone we see as Other.

Given the rapid progress we can expect in a host of different fields, the world is about to get a lot stranger, and it’s probably going to be filled with a lot more people who could potentially be viewed as Other. Transhumans. Posthumans. Cyborgs and human-machine symbiotes. Group minds. Uploaded digital minds. There’s every likelihood each of these groups will have to pass through the same hoops and fires as every other group that has faced discrimination, before eventually, hopefully, being recognized as having the same inalienable rights as all other human beings.

So on this centenary of Alan Turing’s birth, let us give a moment of thought to what kind of world we want to live in. Let’s strive to make it one in which we value every life and every mind for what it truly is: Unique.

Rise of the Intelligent Machines


I’m beginning a short series at Psychology Today about the ongoing advances being made in machine intelligence. I’d originally thought about using “The Ascent of Machine” for the series title — after Jacob Bronowski’s wonderful work, “The Ascent of Man”, which I found so inspiring when it first came out. But I thought it sounded a bit kludgy and so I ultimately decided on the Cameron-esque “Rise of the Intelligent Machines”.

Step by step, we’re being equaled and, more importantly, surpassed. As we’ve seen in both biology and technology, this is a march that is taking place with ever more rapid strides. Complexity breeds still further complexity, cross-fertilizing itself into previously unimagined combinations. We are quickly approaching a time when humanity may come to share the world with an equal or greater intelligence. One that will have been of our own making.

This multi-part series will explore the rise of machine intelligence, the research and advances that will shape its development, and what this may mean for the future of human intelligence. Check back at Psychology Today for future installments. Next, in Part 2: How to Build a Brain.
 

An Argument For The Singularity


Earlier this week, Charles Stross posted his thoughts on why he doesn’t think the Technological Singularity will happen. If you’re not familiar with the concept, the Technological Singularity is defined as that point in the future when a self-improving artificial general intelligence (AGI) achieves superintelligence. As Stross recommends, if you’re not familiar with the topic, you’d be wise to read the following first:

I’m going to take it as read that you’ve read Vernor Vinge’s essay on the coming technological singularity (1993), are familiar with Hans Moravec’s concept of mind uploading, and know about Nick Bostrom’s Simulation argument. If not, stop right now and read them before you continue with this piece. Otherwise you’re missing out on the fertilizer in which the whole field of singularitarian SF, not to mention posthuman thought, is rooted. It’s probably a good idea to also be familiar with Extropianism and to have read the posthumanism FAQ, because if you haven’t you’ll have missed out on the salient social point that posthumanism has a posse.

First, let me say that Stross is a first-class writer who brings serious thought to bear on a complex and controversial subject. I completely agree with many of his points and I can definitely see how the Singularity may never happen. But as I was reading his arguments, one thought popped out at me. To sum up why I think there’s a reasonable chance the Singularity will happen:

“Human-equivalent AI does not equal human-level AI.”

Early on, Stross makes the argument against human-equivalent AI, the building of an intelligence that thinks like us. This is something that may never be achieved, for a number of reasons I won’t repeat here. (Re-read Stross.) But then, based on my reading of it anyway, he proceeds with his argument as though human-equivalent AI were the same as human-level AI, and they’re not.

We stand on the cusp of a technological explosion that may (or may not) be unprecedented in the history of the universe. Authors such as Ray Kurzweil (The Singularity is Near), James M. Gardner (Biocosm) and Kevin Kelly (What Technology Wants) have discussed this at length. Read them. Based on the long history of self-organizing principles in the universe – what Kelly refers to as exotropy and Max More calls extropy – this technological explosion may well result in an explosion of intelligence as well. Now this may not occur as early as 2045, as Kurzweil has forecast. And potentially, it could happen in the next decade, though I’m skeptical of that time frame. But in geological and cosmological terms, if it happens, it will be in a relative eye blink from now. The resulting growth of intelligence would be comparable to the Cambrian Explosion, that era in Earth’s history when complex life underwent rapid and extensive diversification into many of the morphologies we see today.

My point is that technology needn’t emulate humans in order to be intelligent. We’re one accident in the history of evolution that managed to get it right. (From our perspective, anyway.) Technology is different. Unlike the long, arduous, vertical exchange of information that occurs through sexual recombination, technology moves its best solutions around in a much more free-form, horizontal manner. It’s not unlike horizontal gene transfer (HGT), the process that preceded complex life on Earth and has been explored by microbiologist Carl Woese and others. Historically, this process of technological recombination has required human intelligence as an intermediary, but recently this has started to change.

This, I believe, will eventually lead to a vast array of intelligences. Some will be smarter than us in certain ways, some won’t. Some might seem vaguely familiar; others will be utterly unfathomable. But ultimately these many intelligences will span the domain of possible intelligences to fill every niche in the information ecosphere. The extent of this domain is potentially very large and in it, human intelligence would be a very, very small subset.

Does this mean the Singularity will happen? I don’t know. The Singularity has come to represent different things to different people. Some, who need it to fulfill some religious circuit in their brains, see it in quasi-spiritual terms – the so-called “Rapture of the Nerds.” Others believe it will result in a “fast take-off”, leading to an uplift of our own species (and potentially others as well). To me and others, it’s “just” the development of a superintelligence, which may possibly be followed by an explosion of intelligences within our light-cone of the universe. Ultimately, there’s no reason to expect it will result in anything like an entity that cares one iota about us. This is why ideas such as Eliezer Yudkowsky’s “Friendly AI” are really important. Within this domain of possible intelligences, whether vast monolithic superintelligences or distributed networked intelligences or bio-digital amalgams, some will inevitably have enough volition to present an existential threat to the human race unless safeguards are put in place. And even these are no guarantee.

As Vernor Vinge stated in an interview I did with him a few years ago, he thinks “the Singularity is the most likely non-catastrophic event for this century and [he’d] be surprised if it doesn’t happen by 2030.” But whether the Singularity happens or not, I think we have to be prepared for a world in which we are far from the only form of advanced, tool-using, concept-manipulating intelligence. As I stated earlier, “Human-equivalent AI does not equal human-level AI.”

To which I would add: “Nor does it need to be.”

The Intelligence Report

I’m excited to announce that I was recently invited to write a blog for Psychology Today. As with this blog, “The Intelligence Report” will explore the evolving relationship between intelligence and technology. It will include new posts as well as some cross-posts from this blog when the topic is a good fit. As an introduction, the initial post, “Can Machines Be Intelligent?”, explores the possibility that computers may be closer to achieving true intelligence than many people think. Be sure to check out both blogs regularly for the latest postings.