Archive for the AI Category

New York Times Interview

There’s something very satisfying about waking up in the morning to find you’ve been featured in a New York Times article. Writer, consultant, and workplace expert Alexandra Levit spoke with me earlier this summer about some of the ways increasingly emotionally aware technology will change our work environment.


Given the preponderance of doom-and-gloom about vanishing jobs, we focused on the idea that, more and more, we’re going to see “co-work” situations in which AIs and robots work alongside people to enhance their capabilities and productivity. Such scenarios will only benefit from these technologies’ growing ability to be emotionally sensitive to their human co-workers. Since this inevitably contributes to the growth of machine intelligence, it’s a win-win for both our tribes.

“Heart of the Machine” coming to a store near you!

I haven’t been writing articles or posting for a good few months because I’ve been head down completing my new book. Heart of the Machine is now complete and going through final edits with my publisher, Skyhorse Publishing, one of the fastest-growing independent publishers in the country.


Toys that change based on children’s emotional responses. Smart homes and digital assistants that sense what kind of day you had and interact with you accordingly. Devices that can artificially generate a specific feeling for you. Perhaps even the foundations for true machine consciousness. Heart of the Machine: Our Future in a World of Artificial Emotional Intelligence explores the coming era of affective computing, social robotics and other emotionally aware technologies. These systems are destined to transform our world and our lives over the next few decades. Already, the field is forecast to grow from a U.S. market of $9.35 billion in 2015 to $42.51 billion by 2020. That’s a heck of a growth rate for a market that scarcely existed a decade ago!
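For what it’s worth, those two figures pencil out to a compound annual growth rate of roughly 35 percent. A quick back-of-the-envelope check in Python:

    # Back-of-the-envelope check of the growth figures cited above:
    # $9.35 billion in 2015 growing to $42.51 billion by 2020.
    start, end, years = 9.35, 42.51, 5
    cagr = (end / start) ** (1 / years) - 1
    print(f"Implied compound annual growth rate: {cagr:.1%}")  # ~35.4%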

Heart of the Machine is being released in March 2017 and is now available for pre-order at Amazon and Barnes & Noble. For other outlets, ask your friendly neighborhood bookstore when they plan to get it on their shelves. To learn more about this incredible future, please visit this page.


Review: Superintelligence by Nick Bostrom

Since the advent of the computer age, and quite possibly before, popular media has seized on the idea of a superintelligent machine and turned it into a clichéd tale of warning. Too often a monolithic mind with an ill-defined need for global destruction or domination is eventually thwarted because a chisel-jawed protagonist identifies and exploits some flimsy flaw in the machine’s operations or logic, saving the world at the last possible moment. Of course, few of these movies or books move beyond the trope to adequately consider the difficulties of building such a machine, nor do they really explore just how alien such an intellect could actually be.

For many years, various artificial intelligence aficionados – AI professionals, cognitive scientists, philosophers and even hobbyists – have discussed the realities of just such a development in a much more serious vein. As with any good debate about such a speculative subject, this has resulted in a number of good ideas and papers, as well as a great deal of frivolous speculation. Some of the better work to come from this ongoing discussion has been in the form of several papers by Oxford professor of philosophy Nick Bostrom. Now Bostrom has expanded considerably on this work in his new book, “Superintelligence.” Bostrom takes the reader step by step, from the potential methods and difficulties of achieving a superintelligence through to the likelihood of it actually occurring within a specific time frame. Given that several expert surveys converge on a point near the middle of this century, it seems prudent that we begin to give the matter critical consideration as soon as possible.

One of the primary dangers Bostrom discusses (as have others) is the potential for an intelligence explosion – that point when an artificial intelligence has acquired sufficient ability to direct its own self-improvement. This potentially leads to a positive feedback loop in which a human-level machine intelligence (HLMI) rapidly develops into a true superintelligence. Such a superintelligence is often defined as an “intellect that greatly exceeds the cognitive performance of human beings in virtually all domains of interest” and may surpass the sum intellect of the world’s entire human population. Such a mind would be completely alien to us, potentially with motives and goals very much in conflict with our own. This creates a very worrying scenario, one that could in fact bring about the end of humanity.

Bostrom explores the potential severity of the existential threat such an entity could present, along with the considerable difficulty, if not impossibility, of containing it. The strategies for containment differ greatly depending on the kinetics of the intelligence explosion – that is, whether the intelligence achieves criticality in a fast run-up taking hours to days, a medium one taking months to years, or a slow one unfolding over decades or even centuries. Interestingly, Bostrom later makes the argument that under certain global conditions, a fast run-up might actually be the preferred option. If we have any say in the matter, that is.
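To get an intuitive feel for why those kinetics matter so much, consider a deliberately simple toy model (my own illustration, not Bostrom’s math): treat capability as compounding on itself each improvement cycle, with a single feedback coefficient controlling how strongly each gain feeds the next.

    # Toy model of recursive self-improvement (illustrative only, not from the book).
    # Each cycle, the system improves itself in proportion to a feedback
    # coefficient k; the value of k sets the "kinetics" of the run-up.

    def cycles_to_threshold(k, start=1.0, threshold=1000.0):
        """Count improvement cycles until capability crosses a threshold."""
        capability, cycles = start, 0
        while capability < threshold:
            capability *= 1 + k   # self-improvement compounds on current level
            cycles += 1
        return cycles

    for k in (1.0, 0.1, 0.01):    # strong, moderate, weak feedback
        print(f"k={k}: {cycles_to_threshold(k)} cycles")
    # Prints 10, 73, and 695 cycles: modest changes in feedback strength
    # swing the takeoff from a handful of cycles to hundreds.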

As Bostrom points out repeatedly, we need to deal with this issue well before an AI actually achieves take-off. Any sufficiently powerful superintelligence is extremely likely to have motives and goals that conflict with our own and will strive to thwart any of our efforts to change or control it. More importantly, it will have the ability to do so. Basically, we have one opportunity to get this right. There won’t be any chance for a do-over. This is the point Elon Musk was making recently when he said, “with artificial intelligence we are summoning the demon.” Supernatural references aside, developing a superintelligence is playing with superheated fire.

It should be noted that “Superintelligence,” published by Oxford University Press, is an academic book, not a trade book. While well-reasoned and clearly explained, this is no narrative romp through the subject, and the casual reader may find the material arduous at times. For instance, unless you have some familiarity with the methods used to create AI algorithms, it’s unlikely you’ll appreciate the benefits of Bayesian inference. But really you don’t need to. Far more important are concepts such as Bostrom’s explication of Yudkowsky’s idea of providing a seed AI with humanity’s “coherent extrapolated volition,” or CEV – essentially a means of coding a humanity-compatible value system into an AI while avoiding the many problems culture-bound morality has historically caused for our species.
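For readers who do want a taste, here is a minimal, self-contained example of a Bayesian update; the numbers are invented for illustration and are not drawn from the book.

    # Minimal Bayesian update (numbers invented for illustration).
    # We start with a prior belief in a hypothesis H and revise it after
    # observing a piece of evidence E, using Bayes' rule:
    #   P(H|E) = P(E|H) * P(H) / P(E)
    prior = 0.01            # P(H): initial belief that H is true
    p_e_given_h = 0.90      # P(E|H): chance of seeing E if H is true
    p_e_given_not_h = 0.05  # P(E|~H): chance of seeing E if H is false

    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    posterior = p_e_given_h * prior / p_e
    print(f"Belief after one observation: {posterior:.1%}")  # ~15.4%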

The book finishes off with an exploration of possible strategies in light of the challenges that have been identified. Because the entire future existence of humanity may be at stake, this is a decision process that cannot be taken lightly. Far from being the solution to this challenging dilemma, “Superintelligence” forces us to understand just how complex a task this really will be. As important a work as Bostrom’s book is, it isn’t a book of answers. Rather, it is a clarion call to those who will one day find the solutions to this very considerable problem. That is, if we are lucky.

Affective Computing and the Future of Marketing

My new article “How Your Computer Will Read You Like A Book – And Then Sell You Stuff” is up at Fast Company’s Futurist Forum. In it, I talk about ‘affective computing’ – systems that read, interpret and even simulate human emotion. This is going to change our relationship with technology in ways you’ve probably never imagined. To my mind, one of its more interesting applications will be in the field of marketing. As human beings, we communicate volumes of information to each other via nonverbal cues – facial expressions, posture, gestures, gait. Yet for the most part, these have been inaccessible to computers. Until now. Imagine what will happen to marketing when your responses can be instantly and accurately interpreted, allowing ads to be altered on the fly to target you as never before. Get ready for a Brave New Shopping Experience.
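To make that concrete, here is a hypothetical sketch of such an adaptive-ad loop. Everything in it – the cue names, the emotion labels, the ad variants – is invented for illustration; a real affective computing system would infer emotional state from trained models over facial expression, posture, and voice.

    # Hypothetical sketch of an emotion-adaptive ad loop. All names and
    # thresholds here are invented for illustration.

    AD_VARIANTS = {
        "bored":      "high-energy teaser",
        "interested": "detailed product demo",
        "frustrated": "simpler message with a discount",
    }

    def classify_emotion(cues: dict) -> str:
        """Stand-in for a trained affect classifier."""
        if cues.get("gaze_on_ad_seconds", 0) > 3:
            return "interested"
        if cues.get("brow_furrowed"):
            return "frustrated"
        return "bored"

    def next_ad(cues: dict) -> str:
        """Pick the ad variant matching the viewer's inferred state."""
        return AD_VARIANTS[classify_emotion(cues)]

    print(next_ad({"gaze_on_ad_seconds": 5}))  # -> detailed product demo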

Rise of the Intelligent Machines

I’m beginning a short series at Psychology Today about the ongoing advances being made in machine intelligence. I’d originally thought about using “The Ascent of Machine” for the series title — after Jacob Bronowski’s wonderful work, “The Ascent of Man”, which I found so inspiring when it first came out — but it sounded a bit kludgy, so I ultimately decided on the Cameron-esque “Rise of the Intelligent Machines”.

Step by step, we’re being equaled and, more importantly, surpassed. As we’ve seen in both biology and technology, this march is taking place with ever more rapid strides. Complexity breeds still further complexity, cross-fertilizing itself into combinations never before conceived. We are quickly approaching a time when humanity may come to share the world with an equal or greater intelligence. One that will have been of our own making.

This multi-part series will explore the rise of machine intelligence, the research and advances shaping its development, and what this may mean for the future of human intelligence. Check back at Psychology Today for future installments. Next in Part 2: How to Build a Brain.

An Argument For The Singularity

Earlier this week, Charles Stross posted his thoughts on why he doesn’t think the Technological Singularity will happen. If you’re not familiar with the concept, the Technological Singularity is defined as that point in the future when a self-improving artificial general intelligence (AGI) achieves superintelligence. As Stross recommends, if you’re not familiar with the topic, you’d be wise to read the following first:

I’m going to take it as read that you’ve read Vernor Vinge’s essay on the coming technological singularity (1993), are familiar with Hans Moravec’s concept of mind uploading, and know about Nick Bostrom’s Simulation argument. If not, stop right now and read them before
you continue with this piece. Otherwise you’re missing out on the fertilizer in which the whole field of singularitarian SF, not to mention posthuman thought, is rooted. It’s probably a good idea to also be familiar with Extropianism and to have read the posthumanism FAQ, because if you haven’t you’ll have missed out on the salient social point that posthumanism has a posse.

First, let me say that Stross is a first-class writer who brings serious thought to bear on a complex and controversial subject. I completely agree with many of his points and I can definitely see how the Singularity may never happen. But as I was reading his arguments, one thought popped out at me. To sum up why I think there’s a reasonable chance the Singularity will happen:

“Human-equivalent AI does not equal human-level AI.”

Early on, Stross makes the argument against human-equivalent AI, the building of an intelligence that thinks like us. This is an accomplishment that may never happen, due to a number of issues I won’t repeat here. (Re-read Stross.) But then, based on my reading of it anyway, he proceeds with his argument as though human-equivalent AI were the same as human-level AI, and they’re not.

We stand on the cusp of a technological explosion that may (or may not) be unprecedented in the history of the universe. Authors such as Ray Kurzweil (The Singularity is Near), James M. Gardner (Biocosm) and Kevin Kelly (What Technology Wants) have discussed this at length. Read them. Based on the long history of self-organizing principles in the universe – what Kelly refers to as exotropy and Max More calls extropy – this technological explosion may well result in an explosion of intelligence as well. Now this may not occur as early as 2045, as Kurzweil has forecast. And potentially, it could happen in the next decade, though I’m skeptical of that time frame. But in geological and cosmological terms, if it happens, it will be in a relative eye blink from now. The resulting growth of intelligence would be comparable to the Cambrian Explosion, that era in Earth’s history when complex life underwent rapid and extensive diversification into many of the morphologies we see today.

My point is that technology needn’t emulate humans in order to be intelligent. We’re one accident in the history of evolution that managed to get it right. (From our perspective, anyway.) Technology is different. Unlike the long, arduous, vertical exchange of information that occurs through sexual recombination, technology moves its best solutions around in a much more free-form, horizontal manner. It’s not unlike horizontal gene transfer (HGT), the process that dominated early life on Earth, as explored by microbiologist Carl Woese and others. Historically, this process of technological recombination has required human intelligence as an intermediary, but recently this has started to change.

This, I believe, will eventually lead to a vast array of intelligences. Some will be smarter than us in certain ways, some won’t. Some might seem vaguely familiar; others will be utterly unfathomable. But ultimately these many intelligences will span the domain of possible intelligences to fill every niche in the information ecosphere. The extent of this domain is potentially very large and in it, human intelligence would be a very, very small subset.

Does this mean the Singularity will happen? I don’t know. The Singularity has come to represent different things to different people. Some, who need it to fulfill some religious circuit in their brains, see it in quasi-spiritual terms – the so-called “Rapture of the Nerds.” Others believe it will result in a “fast take-off”, leading to an uplift of our own species (and potentially others as well). To me and others, it’s “just” the development of a superintelligence, which may possibly be followed by an explosion of intelligences within our light-cone of the universe. Ultimately, there’s no reason to expect it will result in anything like an entity that cares one iota about us. This is why ideas such as Eliezer Yudkowsky’s “Friendly AI” are really important. Within this domain of possible intelligences – whether vast monolithic superintelligences, distributed networked intelligences or bio-digital amalgams – some will inevitably have enough volition to present an existential threat to the human race unless safeguards are put in place. And even these are no guarantee.

As Vernor Vinge told me in an interview a few years ago, he thinks “the Singularity is the most likely non-catastrophic event for this century and [he’d] be surprised if it doesn’t happen by 2030.” But whether the Singularity happens or not, I think we have to be prepared for a world in which we are far from the only form of advanced, tool-using, concept-manipulating intelligence. As I stated earlier, “Human-equivalent AI does not equal human-level AI.”

To which I would add: “Nor does it need to be.”
