Archive for the Machine Intelligence Category

Review: Superintelligence by Nick Bostrom


Since the advent of the computer age, and quite possibly before, popular media has seized on the idea of a superintelligent machine and turned it into a clichéd cautionary tale. Too often a monolithic mind with an ill-defined need for global destruction or domination is eventually thwarted because a chisel-jawed protagonist identifies and exploits some flimsy flaw in the machine’s operations or logic, saving the world at the last possible moment. Of course, few of these movies or books move beyond the trope to adequately consider the difficulties of building such a machine, nor do they really explore just how alien such an intellect could actually be.

For many years, various artificial intelligence aficionados – AI professionals, cognitive scientists, philosophers and even hobbyists – have discussed the realities of just such a development in a much more serious vein. As with any good debate about so speculative a subject, this has produced a number of good ideas and papers, as well as a great deal of frivolous speculation. Some of the better work to come from this ongoing discussion has been in the form of several papers by Oxford professor of philosophy Nick Bostrom. Now Bostrom has expanded considerably on this work in his new book, “Superintelligence.” He takes the reader step by step from the potential methods of achieving a superintelligence, and the difficulties involved, through to the likelihood of it actually occurring within a specific time frame. Given that several expert surveys converge on a point near the middle of this century, it seems prudent that we begin to give the matter critical consideration as soon as possible.

One of the primary dangers Bostrom discusses (as have others) is the potential for an intelligence explosion: the point at which an artificial intelligence has acquired sufficient ability to direct its own self-improvement. This potentially leads to a positive feedback loop in which a human-level machine intelligence (HLMI) rapidly develops into a true superintelligence. Such a superintelligence is often defined as an “intellect that greatly exceeds the cognitive performance of human beings in virtually all domains of interest” and may surpass the sum intellect of the world’s entire human population. Such a mind would be completely alien to us, potentially with motives and goals very much in conflict with our own. This creates a very worrying scenario, one that could in fact bring about the end of humanity.

Bostrom explores the potential severity of the existential threat such an entity could present, along with the considerable difficulty, if not impossibility, of containing it. The strategies for containment differ greatly depending on the kinetics of the intelligence explosion, that is, whether the intelligence achieves criticality in a fast run-up taking hours to days, a medium one taking months to years, or a slow run-up unfolding over decades or even centuries. Interestingly, Bostrom later argues that under certain global conditions, a fast run-up might actually be the preferred option. If we have any say in the matter, that is.
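Bostrom frames these kinetics with a simple schematic relation; rendered here in LaTeX (the notation is mine, but the terms “optimization power” and “recalcitrance” are his):

```latex
\frac{dI}{dt} \;=\; \frac{D(t)}{R(t)}
```

Here \(I\) is the system’s intelligence, \(D\) is the optimization power being applied to improving it, and \(R\) is its recalcitrance, that is, how hard the system is to improve. The crucial point is that once the AI itself supplies most of the optimization power, \(D\) grows with \(I\), and unless recalcitrance rises just as fast, growth becomes explosive.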

As Bostrom points out repeatedly, we need to deal with this issue well before an AI actually achieves take-off. Any sufficiently powerful superintelligence is extremely likely to have motives and goals that conflict with our own and will strive to thwart any of our efforts to change or control it. More importantly, it will have the ability to do so. Basically, we have one opportunity to get this right. There won’t be any chance for a do-over. This is the point Elon Musk was making recently when he said, “with artificial intelligence we are summoning the demon.” Supernatural references aside, developing a superintelligence is playing with superheated fire.

It should be noted that “Superintelligence,” published by Oxford University Press, is an academic book, not a trade book. While well reasoned and clearly explained, this is no narrative romp through the subject, and the casual reader may find the material arduous at times. For instance, unless you have some familiarity with the methods used to create AI algorithms, it’s unlikely you’ll appreciate the benefits of Bayesian inference. But really you don’t need to. Far more important are concepts such as Bostrom’s explication of Yudkowsky’s idea of providing a seed AI with humanity’s “coherent extrapolated volition,” or CEV – essentially a means of coding a humanity-compatible value system into an AI, absent the many problems culture-bound morality has historically caused for our species.

The book finishes off with an exploration of possible strategies in light of the challenges that have been identified. Because the entire future existence of humanity may be at stake, this is a decision process that cannot be taken lightly. Far from being the solution to this challenging dilemma, “Superintelligence” forces us to understand just how complex a task this really will be. As important a work as Bostrom’s book is, it isn’t a book of answers. Rather, it is a clarion call to those who will one day find the solutions to this very considerable problem. That is, if we are lucky.

Connect to the Future



“Connecting with Our Connected World” is my latest article and explores the coming era of the Internet of Things. It’s also the lead article for The Futurist magazine – my third cover story for them, which obviously I’m pleased about. While much has been written about the Internet of Things, or IoT, in recent years, this article takes a somewhat different tack.

We’re creating a world in which ever more aspects of our natural and man-made environments are becoming interconnected, capable of communicating with human operators as well as with each other. This will allow objects to do everything from keeping us apprised of their precise location to handling their own restocking and servicing, and so much more.

However, there is a darker side to all this. For instance, there are issues of personal privacy and self-determination, not to mention some less well-behaved aspects of the technology:

“Once a system reaches a particular threshold of complexity, we can no longer be certain about specific cause-and-effect relationships; rather, we must think in terms of probabilities. Instead of being 100% certain that A will lead to B, we might assign a likelihood of, say, 99.98%.

For some events, this probabilistic approach works fine, but for others it could be disastrous: Power plants, automated weapons systems, and freeways full of self-driving cars all could experience catastrophes if operating on erroneous information. So these and other systems will need to be designed with greater safeguards and redundancies than they have today.”

My point is we’re at a stage in our relationship with our technology when we’ll need to start thinking in more and more nondeterministic ways. But in many respects, isn’t this how a great deal of our world has always been? So fortunately, we have lots of practice.
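To see how quickly near-certainty erodes at scale, consider a toy calculation (my own illustration, not from the article): if each interaction in a complex system independently goes as expected with probability p, the chance that all n interactions do is p to the power n.

```python
# Toy illustration (mine, not from the article): near-certainty erodes at scale.
# If each interaction independently succeeds with probability p, then the
# probability that ALL n interactions go as expected is p ** n.

def all_go_right(p: float, n: int) -> float:
    """Probability that n independent interactions all succeed."""
    return p ** n

p = 0.9998  # the 99.98% per-event likelihood from the quoted passage
for n in (1, 1_000, 10_000, 100_000):
    print(f"{n:>7,} interactions: P(no surprises) = {all_go_right(p, n):.6f}")

# Roughly: 1 -> 0.9998, 1,000 -> 0.8187, 10,000 -> 0.1353, 100,000 -> ~2e-9
```

At 99.98% per event, a thousand interactions already leave nearly a one-in-five chance of at least one surprise.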

Are You Smarter Than a Sixth-Generation Computer?


My latest article, “Are You Smarter Than a Sixth-Generation Computer?”, leads off this month’s issue of The Futurist magazine (Sept/Oct 2012). The article explores the need for a standard metric for AIs and is based on my paper, “Toward a Standard Metric of Machine Intelligence,” which I recently published in the World Future Review. The purpose of the article (and the paper) can be summed up as follows:

As our world becomes increasingly filled with technological intelligence, it will serve us well to know exactly how smart our machines are and in what ways. Given that we try to measure almost every other aspect of our world, it seems only prudent that we accurately measure the intelligence of our machines as well — especially since, by some projections, they’re expected to surpass us in the coming decades.

During the next few decades we’re going to see significant gains in the field of strong AI, potentially giving rise to artificial general intelligences (AGIs). Universal intelligence tests such as those described here would seem to be a crucial tool as we prepare for the changes this will bring.
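For readers curious what a universal intelligence test can look like formally, one well-known formalization is Shane Legg and Marcus Hutter’s universal intelligence measure, shown here purely as a representative example (not necessarily the exact metric of the paper):

```latex
\Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)} \, V_{\mu}^{\pi}
```

An agent \(\pi\) is scored by its expected performance \(V_{\mu}^{\pi}\) across every computable environment \(\mu\), with simpler environments (those with lower Kolmogorov complexity \(K(\mu)\)) weighted more heavily.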

Alan Turing: A Unique Human Being

Statue of Turing at Bletchley Park

In futures thinking as in life, often it’s important to look back in order to look ahead. This week marks the 100th anniversary of the birth of Alan Turing, offering the perfect opportunity to do both.

Given the scope and scale of Turing’s achievements, it’s easy and natural to want to focus on his many accomplishments: the cryptanalyst who, with others at Bletchley Park, succeeded in cracking the seemingly unbreakable codes generated by the Nazis’ Enigma machine; the computer scientist who, along with John von Neumann, conceived the stored-program computer architecture, the model on which so much of modern computing is based; the originator of the Turing test, the first exploration of what might constitute machine intelligence and whether it could ever achieve parity with the human mind. And so much more.

With so many technical triumphs, it’s easy to overlook the fact that Turing had a private, personal life as well, just like any other man. In his case, he was a gay man living at a time when this was illegal in Great Britain. As a result, Turing was hounded, prosecuted, stripped of his security clearance and made to undergo estrogen injections resulting in chemical castration. Plunging into a severe depression, Turing eventually took his own life at the age of 41. Even if he hadn’t been a national hero, this would have been a terrible injustice. But in light of all Turing contributed to the world, it was tragic beyond measure. (Note: Turing’s contribution to the war effort wasn’t made public until the seventies, when this information was declassified.)

It’s difficult to grasp just how different that era was for gays and lesbians, even though it was less than sixty years ago. While full equality and rights still haven’t been completely attained, the western world has come a very long way from those dark days. Within another generation, it seems likely the battle will be behind us and people will wonder what all the fuss over sexual orientation once was. With any luck, this will apply to race, creed and color as well.

But that doesn’t mean equality for all will have been realized or that discrimination will have completely disappeared. As a species we have a very xenophobic streak; at a certain level we’re hard-wired to be wary of anyone we perceive as too different. Anyone we see as Other.

Given the rapid progress we can expect in a host of different fields, the world is about to get a lot stranger, and it’s probably going to be filled with a lot more people who could potentially be viewed as Other. Transhumans. Posthumans. Cyborgs and human-machine symbiotes. Group minds. Digitally uploaded minds. There’s every likelihood each of these groups will be forced to jump through the same hoops and fires as every other discriminated-against group before eventually, hopefully, being recognized as having the same inalienable rights as all other human beings.

So on this centenary of Alan Turing’s birth, let us give a moment of thought to what kind of world we want to live in. Let’s strive to make it one in which we value every life and every mind for what it truly is: Unique.

Rise of the Intelligent Machines


I’m beginning a short series at Psychology Today about the ongoing advances being made in machine intelligence. I’d originally thought about using “The Ascent of Machine” for the series title — after Jacob Bronowski’s wonderful work, “The Ascent of Man”, which I found so inspiring when it first came out. But I thought it sounded a bit kludgy and so I ultimately decided on the Cameron-esque “Rise of the Intelligent Machines”.

Step by step, we’re being equaled and, more importantly, surpassed. As we’ve seen in both biology and technology, this is a march that is taking place with ever more rapid strides. Complexity breeds still further complexity, cross-fertilizing itself into previously unconceived-of combinations. We are quickly approaching a time when humanity may come to share the world with an equal or greater intelligence. One that will have been of our own making.

This multi-part series will explore the rise of machine intelligence, research and advances that will impact its development and what this may mean for the future of human intelligence. Check back at Psychology Today for future installments. Next in Part 2: How to Build a Brain.

Watson: The New Jeopardy Champion


I consider myself a techno-optimist, but Watson’s performance in Jeopardy’s IBM Challenge definitely exceeded my expectations. While I did predict Watson would win the competition, I didn’t think it would be so dominant. This was a contest I thought machine intelligence might win by a narrow margin, but the three-day, two-game match wasn’t even close. AI has come a long way, baby.

As impressive as Watson’s final cash score was, what I think was more remarkable was its answer success rate. In the first match, out of sixty clues, Watson rang in first on 43 and answered 38 of those correctly, with five errors – an 88.4% success rate. If only the 30 clues in the Double Jeopardy portion are considered, this jumps to a whopping 96%. You’ll notice I’ve left the Final Jeopardy question out of these calculations. This is because that question had to be answered regardless of the machine’s low confidence level of 14%. The miss mattered to the final score, but the low confidence rating actually indicates the success of the machine’s algorithms: Watson correctly recognized how uncertain it was.
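For clarity, the 88.4% figure counts only the clues Watson actually attempted:

```latex
\text{success rate} \;=\; \frac{38}{38 + 5} \;=\; \frac{38}{43} \;\approx\; 88.4\%
```

(The quoted 96% for Double Jeopardy would be consistent with, for example, 24 correct out of 25 attempts; the exact attempt count there isn’t given above.)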

While the second game (Day 3) wasn’t quite as impressive as the first, Watson still won by a significant margin. Considering it was competing against the two best human Jeopardy players of all time, it’s safe to say IBM met its goal and then some.

Some of the more intriguing (some would rightly say concerning) moments in the contest were those in which Watson arrived at unfathomably wrong answers. As Watson’s principal investigator, Dr. David Ferrucci, commented:

“Watson absolutely surprises me. People say: ‘Why did it get that one wrong?’ I don’t know. ‘Why did it get that one right?’ I don’t know.”

The fact is, even Watson’s developers often can’t fathom how it arrives at the answers it does. Parsing millions of stored documents, Watson applies hundreds of algorithms to arrive at the answer with the highest confidence rating. (While this bears a passing resemblance to Minsky’s “society of mind” concept, it remains very different from the way humans think.) The incredible complexity of the process means we can’t fully understand it. This is the nature of emergent systems – they generate outcomes that much of the time can’t be accurately predicted. They follow an internal logic of their own, one we can’t possibly trace.
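To make the shape of that pipeline concrete, here is a minimal sketch of my own in Python (emphatically not IBM’s actual DeepQA code): many independent scorers each rate every candidate answer, and a weighted combination decides both the answer and whether the confidence is high enough to buzz in.

```python
# Schematic sketch (not IBM's DeepQA) of a many-scorers answer pipeline:
# every scoring algorithm rates each candidate, the scores are combined
# into one confidence value, and the top candidate wins only if its
# confidence clears a buzz-in threshold.

from typing import Callable

Scorer = Callable[[str, str], float]  # (clue, candidate) -> score in [0.0, 1.0]

def best_answer(clue: str,
                candidates: list[str],
                scorers: list[Scorer],
                weights: list[float],
                buzz_threshold: float = 0.5) -> str | None:
    """Return the top candidate, or None if confidence is too low to buzz in."""
    def confidence(candidate: str) -> float:
        # Weighted average of every scorer's opinion of this candidate.
        total = sum(w * s(clue, candidate) for s, w in zip(scorers, weights))
        return total / sum(weights)

    top = max(candidates, key=confidence)
    return top if confidence(top) >= buzz_threshold else None
```

None of this toy code captures the emergent unpredictability described above; with hundreds of interacting scorers, tracing why one answer won becomes genuinely hard.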

In Watson, we’re seeing the barest hints, the merest beginnings of this. The potential domain of future intelligences is vast. It’s possible that one day there will be as many different kinds of machine intelligence as there are biological species. And in all likelihood, we won’t understand the behaviors and motivations of a single one.

Watson is a long way from being an artificial general intelligence. It isn’t HAL 9000. But it is a huge step forward. A step that should be making us ask serious questions about the future of AI. We face a future full of machine intelligences as smart as or smarter than we are. Some experts even speculate that recursive self-improvement will yield superintelligences vastly more intelligent than the entire human race combined. There should be no question that we’d be incapable of grasping the motivations of such a machine. And there’s no reason to expect its objectives will mesh with our own. Obviously, this could have results that are disastrous, potentially even existentially catastrophic.

We aren’t going to stop the advance of artificial intelligence or the eventual development of an artificial general intelligence. Therefore, steps will need to be taken to ensure these machines remain as benevolent as possible – and not because they would necessarily be malevolent otherwise. An indifferent superintelligence would be just as big a threat to humanity, because it could take potentially world-altering actions without considering what they mean for us. Arguments for creating rules-based safeguards, such as Asimov’s “Three Laws of Robotics,” will likely fail, simply because rules can be misinterpreted or circumvented given sufficient motivation.

Work toward “Friendly AI,” as proposed by AI researcher Eliezer Yudkowsky, stands a much better chance of a human-positive outcome. Instilling a machine equivalent of morality not only protects us from the actions of a superintelligence, but from those of its self-improved progeny as well. Creating “Friendly” safeguards that motivate such a machine to do everything in its power to ensure humans do not come to harm, now or in the future, may be our best bet. As Yudkowsky states:

“Gandhi does not want to commit murder, and does not want to modify himself to commit murder.”

We can hope that a superintelligence comes to the same conclusion. But we can do more than just hope; we can work to ensure it happens.

The capabilities Watson has demonstrated using deep analytics and natural language processing are truly stunning. The technologies that develop from this will no doubt help the world with many of its significant problems, not least of which is dealing with the vast, escalating volumes of data our modern world generates. But there is the potential for significant dangers to arise from such technology too. I feel certain, though, that we can overcome these threats and continue the long legacy of building a better world with the help of our technology.

How’s that for techno-optimism?
