Archive for the AI Category

Rise of the Intelligent Machines

I’m beginning a short series at Psychology Today about the ongoing advances being made in machine intelligence. I’d originally thought about using “The Ascent of Machine” as the series title — a nod to Jacob Bronowski’s wonderful “The Ascent of Man”, which I found so inspiring when it first came out. But it sounded a bit kludgy, so I ultimately decided on the Cameron-esque “Rise of the Intelligent Machines”.

Step by step, we’re being equaled and, more importantly, surpassed. As we’ve seen in both biology and technology, this is a march that proceeds with ever more rapid strides. Complexity breeds further complexity, cross-fertilizing itself into previously unconceived-of combinations. We are quickly approaching a time when humanity may come to share the world with an equal or greater intelligence. One that will have been of our own making.

This multi-part series will explore the rise of machine intelligence, the research and advances that will shape its development, and what this may mean for the future of human intelligence. Check back at Psychology Today for future installments. Next in Part 2: How to Build a Brain.
 

An Argument For The Singularity

Earlier this week, Charles Stross posted his thoughts on why he doesn’t think the Technological Singularity will happen. If you’re not familiar with the concept, the Technological Singularity is defined as that point in the future when a self-improving artificial general intelligence (AGI) achieves superintelligence. As Stross recommends, if you’re not familiar with the topic, you’d be wise to read the following first:

I’m going to take it as read that you’ve read Vernor Vinge’s essay on the coming technological singularity (1993), are familiar with Hans Moravec’s concept of mind uploading, and know about Nick Bostrom’s Simulation argument. If not, stop right now and read them before you continue with this piece. Otherwise you’re missing out on the fertilizer in which the whole field of singularitarian SF, not to mention posthuman thought, is rooted. It’s probably a good idea to also be familiar with Extropianism and to have read the posthumanism FAQ, because if you haven’t you’ll have missed out on the salient social point that posthumanism has a posse.

First, let me say that Stross is a first-class writer who brings serious thought to bear on a complex and controversial subject. I completely agree with many of his points and I can definitely see how the Singularity may never happen. But as I was reading his arguments, one thought popped out at me. To sum up why I think there’s a reasonable chance the Singularity will happen:

“Human-equivalent AI does not equal human-level AI.”

Early on, Stross argues against human-equivalent AI, the building of an intelligence that thinks like us. This is an accomplishment that may never happen, for a number of reasons I won’t repeat here. (Re-read Stross.) But then, based on my reading of it anyway, he proceeds with his argument as though human-equivalent AI were the same as human-level AI, and they’re not.

We stand on the cusp of a technological explosion that may (or may not) be unprecedented in the history of the universe. Authors such as Ray Kurzweil (The Singularity is Near), James M. Gardner (Biocosm) and Kevin Kelly (What Technology Wants) have discussed this at length. Read them. Based on the long history of self-organizing principles in the universe – what Kelly refers to as exotropy and Max More calls extropy – this technological explosion may well result in an explosion of intelligence as well. Now this may not occur as early as 2045, as Kurzweil has forecast. And potentially, it could happen in the next decade, though I’m skeptical of that time frame. But in geological and cosmological terms, if it happens, it will be in a relative eye blink from now. The resulting growth of intelligence would be comparable to the Cambrian Explosion, that era in Earth’s history when complex life underwent rapid and extensive diversification into many of the morphologies we see today.

My point is that technology needn’t emulate humans in order to be intelligent. We’re one accident in the history of evolution that managed to get it right. (From our perspective, anyway.) Technology is different. Unlike the long, arduous, vertical exchange of information that occurs through sexual recombination, technology moves its best solutions around in a much more free-form, horizontal manner. It’s not unlike horizontal gene transfer (HGT), the process that preceded complex life on Earth and that microbiologist Carl Woese and others have explored. Historically, this technological recombination has required human intelligence as an intermediary, but recently that has started to change.
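To make the analogy concrete, here is a toy sketch that is entirely my own illustration (the genome length, block size, and fitness function are invented for the example, not drawn from Woese’s work): a population of bitstring “designs” is improved either by vertical, parent-to-offspring crossover or by horizontally copying a block from any individual into any other.

```python
import random

GENOME_LEN = 32
BLOCK = 8  # size of a transferable "module"

def fitness(genome):
    # Toy objective: count the 1-bits (a stand-in for "design quality").
    return sum(genome)

def vertical_recombination(a, b):
    # Sexual-style crossover: an offspring mixes two parents at one cut point.
    cut = random.randrange(1, GENOME_LEN)
    return a[:cut] + b[cut:]

def horizontal_transfer(donor, recipient):
    # HGT-style move: copy a whole block from any donor straight into the
    # recipient -- no common ancestor or generational hand-off required.
    start = random.randrange(0, GENOME_LEN - BLOCK)
    child = recipient[:]
    child[start:start + BLOCK] = donor[start:start + BLOCK]
    return child

def evolve(step, generations=500, pop_size=30):
    pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
           for _ in range(pop_size)]
    for _ in range(generations):
        a, b = random.sample(pop, 2)
        child = step(a, b)
        worst = min(range(pop_size), key=lambda i: fitness(pop[i]))
        if fitness(child) > fitness(pop[worst]):
            pop[worst] = child
    return max(fitness(g) for g in pop)

random.seed(1)
print("vertical:  ", evolve(vertical_recombination))
print("horizontal:", evolve(horizontal_transfer))
```

The point is the mechanism, not the benchmark: in the horizontal case, any useful solution can propagate to any other lineage immediately, which is exactly the freedom technology enjoys.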

This, I believe, will eventually lead to a vast array of intelligences. Some will be smarter than us in certain ways, some won’t. Some might seem vaguely familiar; others will be utterly unfathomable. But ultimately these many intelligences will span the domain of possible intelligences to fill every niche in the information ecosphere. The extent of this domain is potentially very large and in it, human intelligence would be a very, very small subset.

Does this mean the Singularity will happen? I don’t know. The Singularity has come to represent different things to different people. Some, who need it to fulfill some religious circuit in their brains, see it in quasi-spiritual terms – the so-called “Rapture of the Nerds.” Others believe it will result in a “fast take-off”, leading to an uplift of our own species (and potentially others as well). To me and others, it’s “just” the development of a superintelligence, which may possibly be followed by an explosion of intelligences within our light-cone of the universe. Ultimately, there’s no reason to expect it will result in anything like an entity that cares one iota about us. This is why ideas such as Eliezer Yudkowsky’s “Friendly AI” are really important. Within this domain of possible intelligences, whether vast monolithic superintelligences or distributed networked intelligences or bio-digital amalgams, some will inevitably have enough volition to present an existential threat to the human race unless safeguards are put in place. And even these are no guarantee.

As Vernor Vinge stated in an interview I did with him a few years ago, he thinks “the Singularity is the most likely non-catastrophic event for this century and [he’d] be surprised if it doesn’t happen by 2030.” But whether the Singularity happens or not, I think we have to be prepared for a world in which we are far from the only form of advanced, tool-using, concept-manipulating intelligence. As I stated earlier, “Human-equivalent AI does not equal human-level AI.”

To which I would add: “Nor does it need to be.”

What is a Milestone in Artificial Intelligence?

On January 13, 2011, IBM’s Watson supercomputer competed in a practice round of Jeopardy, the long-running trivia quiz show. Playing against the show’s two most successful champions, Ken Jennings and Brad Rutter, Watson won the preliminary match. Is this all a big publicity stunt? Of course it is. But it also marks a significant milestone in the development of artificial intelligence.

For decades, AI – artificial intelligence – has been pursued by computer scientists and others with varying degrees of success. Promises of Turing tests passed and human-level intelligence achieved have routinely fallen far short. Nonetheless, the march toward ever more capable machine intelligences has continued inexorably. In the midst of all this, IBM’s achievement in developing Watson may mark a very important turning point.

Early attempts at strong AI or artificial general intelligence (AGI) brought to light the daunting complexity of trying to emulate human intelligence. However, during the last few decades, work on weak AI – intelligence targeted to very specific domains or tasks – has met with considerably more success. As a result, today AI permeates our lives, playing a role in everything from anti-lock braking systems to warehouse stocking to electronic trading on stock exchanges. Little by little, AI has taken on roles previously performed by people and bested them in ways once unimaginable. Computer phone attendants capable of routing hundreds of calls a minute. Robot-operated warehouses that deliver items to packers in seconds. Pattern matching algorithms that pick out the correct image from among thousands in a matter of moments. But until now, nothing could compete with a human being when it came to general knowledge about the world.

True, these human champions may yet best Watson, a product of IBM’s DeepQA research project. (The three-day match will air February 14-16.) But we only need to think back to 1997, when IBM’s Deep Blue defeated world chess champion Garry Kasparov, to understand that it doesn’t really matter. Kasparov had handily beaten Deep Blue only a year earlier, though the 1996 match did mark the first time a computer won a single game in such a match. Today, just as then, continuing improvements in computer processing speed, memory, storage and algorithms all but ensure that any such triumph would be fleeting. We have turned a page on this once most human of intellectual feats, and the world won’t be the same again.

So what can we look ahead to now that we’ve reached this milestone? In the short term, IBM plans to market its technology and profit from its achievement. Initially, the system price will be high, probably in the millions of dollars, but like so much computer technology, the price will plummet over the coming decade. As the technology becomes more widely used, a range of tasks and jobs previously considered safe from AI will no longer be performed by human workers. Protectionist regulations may attempt to save these jobs, but these efforts will probably be short-lived. The resulting large-scale unemployment will require a rethinking of government institutions and safety nets, as well as corporate business models.

At the same time, this type of general knowledge AI (it’s far too early to call it AGI) will contribute to greater and more rapid advances in machine intelligence. Such technology could bootstrap the Semantic Web into broad usage. In all likelihood, it will be used to create personal intelligent agents, giving users the virtual equivalent of a staff of assistants. And eventually, it could facilitate the development of a true artificial general intelligence or at least contribute to the education of such an AGI.

Will such an intelligence be conscious? Will it be self-improving, leading to a positive feedback loop that brings about a powerful and hopefully benign superintelligence? Only time will tell. But perhaps one day, on a future holographic version of Jeopardy, we’ll be presented with clues to which the correct response will be, “What was the Singularity?”

Advances in AI

A number of recent stories illustrate the rapid progress that’s being made in many different fields linked to artificial intelligence:

Europe’s four-year AMARSi project (Adaptive Modular Architecture for Rich Motor Skills) could lead to humanoid robots that quickly acquire skills from human co-workers as well as their environments.
AMARSi project could see robots learn from co-workers

Also from Europe is HUMAVIPS (Humanoids with Auditory and Visual Abilities in Populated Spaces) which seeks to improve the ways humans and robots communicate and interact.
Humanoid robots to gain advanced social skills

A research team at the University of Michigan, Ann Arbor has demonstrated how memristors can mimic the behavior of brain synapses.
Electronics ‘missing link’ brings neural computing closer
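To give a feel for why a memristor behaves like a synapse, here is a minimal sketch assuming the linear ion-drift model of Strukov et al. (2008); the parameter values are illustrative only, not taken from the Michigan group’s devices.

```python
# Minimal memristor-as-synapse sketch (linear ion-drift model).
# All parameter values below are illustrative, not measured device data.
R_ON, R_OFF = 100.0, 16_000.0  # ohms: fully doped vs. undoped resistance
D = 10e-9                      # device thickness (m)
MU = 1e-14                     # dopant mobility (m^2 / (V*s))
DT = 1e-4                      # integration time step (s)

w = 0.1 * D  # state variable: width of the doped (low-resistance) region

def resistance():
    x = w / D
    return R_ON * x + R_OFF * (1.0 - x)

def apply_pulse(voltage, duration):
    """Apply a voltage pulse, integrating the state with Euler steps.
    The state w drifts with the charge that flows, so past activity
    changes future conductance -- the memory effect that mimics a
    synaptic weight."""
    global w
    for _ in range(int(duration / DT)):
        i = voltage / resistance()
        w = min(max(w + MU * (R_ON / D) * i * DT, 0.0), D)
    return voltage / resistance()

# Repeated "spikes" of the same polarity strengthen the connection,
# much as repeated activity strengthens a biological synapse.
for spike in range(1, 6):
    i = apply_pulse(1.0, duration=0.1)
    print(f"pulse {spike}: current at 1 V = {i * 1e6:.1f} uA")
```

Each identical pulse draws a slightly larger current than the last, because the device “remembers” the charge that has already passed through it; that history-dependent conductance is the synapse-like property referred to above.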

Natural responses to technological change

This music was composed by a prolific young composer named Emi. During a brief career, Emi created thousands of works, many of them in the style of famous composers such as Bach, Beethoven, and Chopin. These compositions have been described as moving, soulful, emotional. In many ways, Emi may prove to be one of the most important composers of the age. Emi is also a computer program.

When many listeners first learn the nature of these compositions, particularly after believing they were created by a person, their opinion of the work changes dramatically. Music they previously called soulful or inspiring becomes calculating, stilted, even mechanistic. This points to a human chauvinism about the growing intelligence of our machines, one we would be wise to examine. As with so many developments in artificial intelligence, there seems to be resistance against any encroachment into what we regard as the realm of human intellect. Just as chess masters once denigrated the abilities of chess-playing computers, each new hurdle in AI will probably be met with similar resistance.

Though it may sound a little strange, I’m going to suggest that this response follows a pattern similar to Kübler-Ross’s five stages of grief: denial, anger, bargaining, depression and acceptance. Grief is a normal response to loss, and it may be that many people feel they are losing something crucial when technology begins to take on functions previously exclusive to humans. Obviously, disbelief is a common enough reaction to new and developing technology, but it’s hard to maintain for long in the face of direct evidence. Emi’s creator, University of California, Santa Cruz professor David Cope, has reported considerable anger from listeners in response to Emi’s work, as well as to his later program, Emily Howell.

It’s almost as if listeners felt tricked or cheated once they discovered the music was created by an artificial intelligence rather than a natural one. As for bargaining and depression, I suspect these are not unfamiliar to people who have seen their career plans radically altered by the continually shifting technological landscape.

Then there’s acceptance. So much AI has been accepted that most of us no longer categorize it as AI at all. When was the last time you used spell check, navigated a voice menu, or played a video game and acknowledged the leaps in artificial intelligence that made them possible? Typically, we don’t. They simply become another piece of the background of our technologically enhanced lives.

All of these are probably very natural responses. We humans have considered ourselves to stand at the pinnacle of intelligence for so very long. Nothing in the animal kingdom comes close to the height of human achievement, and for the most part our machines are even further down the scale. Many will argue that the types of intelligence exhibited by these machines are very limited and domain-specific. That they are examples of weak AI – sets of rules, knowledge bases and Bayesian pattern recognition algorithms. There’s no way, the argument goes, that any of these could ever develop into something that would rival our marvelous minds.
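For concreteness, this is the sort of “mere” Bayesian pattern recognition the skeptics have in mind. The sketch below is a toy naive Bayes classifier; the training sentences and labels are invented for the example.

```python
import math
from collections import Counter, defaultdict

# Invented training data: route a request to "phone" or "warehouse" handling.
TRAINING = [
    ("check availability and route the call", "phone"),
    ("route caller to billing department", "phone"),
    ("pick item from shelf and pack order", "warehouse"),
    ("deliver items to the packing station", "warehouse"),
]

word_counts = defaultdict(Counter)
label_counts = Counter()
for text, label in TRAINING:
    label_counts[label] += 1
    word_counts[label].update(text.split())

VOCAB = {w for counts in word_counts.values() for w in counts}

def classify(text):
    best, best_score = None, -math.inf
    for label in label_counts:
        # log P(label) + sum of log P(word | label), Laplace-smoothed.
        score = math.log(label_counts[label] / sum(label_counts.values()))
        total = sum(word_counts[label].values())
        for word in text.split():
            score += math.log((word_counts[label][word] + 1)
                              / (total + len(VOCAB)))
        if score > best_score:
            best, best_score = label, score
    return best

print(classify("route this call"))  # -> phone
print(classify("pack the items"))   # -> warehouse
```

Nothing here understands telephony or warehouses; it is just counting words. And yet, scaled up and stacked together, such pieces deliver the performances described above.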

But more and more evidence is indicating that our brains are themselves composed of myriad subsystems which together make up the sum total of our intellect. Marvin Minsky’s “Society of Mind” describes the concept very well. Why shouldn’t an artificial general intelligence be composed of modules, agents and subsystems too?
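As a minimal sketch of that modular picture (my own toy illustration, not anything from Minsky’s book): each “agent” below is a narrow, dumb specialist, and the appearance of broader competence comes only from the arbitration among them.

```python
# Toy "society of mind": narrow specialist agents plus a trivial arbiter.
# Agent names and the bidding scheme are invented for this illustration.

def arithmetic_agent(task):
    try:
        # Toy only: evaluate simple arithmetic with builtins disabled.
        return 0.9, str(eval(task, {"__builtins__": {}}))
    except Exception:
        return 0.0, None

def greeting_agent(task):
    if task.lower().strip("!?. ") in {"hello", "hi"}:
        return 0.8, "Hello yourself."
    return 0.0, None

def fallback_agent(task):
    return 0.1, "I don't know what to do with: " + task

AGENTS = [arithmetic_agent, greeting_agent, fallback_agent]

def society(task):
    # Every agent bids a confidence; the arbiter simply takes the highest bid.
    _, answer = max((agent(task) for agent in AGENTS), key=lambda bid: bid[0])
    return answer

print(society("2 + 3 * 4"))           # arithmetic agent wins -> 14
print(society("hello"))               # greeting agent wins
print(society("compose me a waltz"))  # fallback agent answers
```

No single agent is remotely intelligent; whatever competence the whole exhibits lives in the division of labor, which is the gist of Minsky’s argument.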

With each new leap in machine intelligence, we come a little closer to slipping from our pinnacle on Mount Intellect. Lightning-fast calculation, expert diagnostic systems, face and image recognition, real-time voice translation. Exponential improvements in both hardware and software are rapidly driving us into a new era. It will be an era in which we’ll likely share the stage of superior intelligence with many other players. That is, we will if we’re lucky.