Blog

Future of intelligence article

“Get Smart”, my new article about the future of intelligence, is out in this month’s Mensa Bulletin.

The domain of sapient entities may one day encompass not only humans, but transhumans, machine intelligences, augmented animals, distributed networks, group minds, even uploaded personalities.  If so, our definition of what is intelligent and sentient would need to change, along with the legal and social institutions under which we’d coexist.  There can be little doubt that it would be a very strange and different world – a world in which new forms of intelligence appear much more rapidly than has ever occurred in the past.

The article was originally titled “Evolution, Technology and the Future of Intelligence”, but the Bulletin decided to go with the snappier “Get Smart”, presumably unaware of Jamais Cascio’s Atlantic Monthly article of the same name from two months before.  I’ll try to include a link to the full article in the near future.

WorldFuture looks ahead

Mark Twain once wrote, “Everyone talks about the weather, but nobody does anything about it.” The same might be said about the future – with one significant exception: I don’t think people are talking about it nearly enough.

“But what about the energy crisis?” comes the imagined reply. “What about global warming? What about the water shortages affecting significant parts of the world? Surely that shows we’re looking ahead to the future?”

No, it doesn’t.

Each of these examples cites a present-day response to a problem that could have been anticipated and acted upon decades ago. The information was available; the technology was feasible. What was lacking was the will to look beyond present-day motivations and the immediate future in order to change how our actions affect the world.

WorldFuture 2008 conference

Of course, there are some people who do want to look ahead. In July, over 1,000 futurists from nearly forty countries attended WorldFuture 2008 in Washington, DC. During this annual five-day conference of the World Future Society, attendees took part in courses, presentations, debates and discussions addressing future economic, educational, political, social and technological trends. It was an exciting event, full of ideas, inspiration and hope.

These futurists recognize the importance of foresight and planning in a world that is changing in profound and increasingly rapid ways. They’ll be the first to tell you that the future can’t be predicted – not in specific terms anyway. But they also know there are methods and tools to point the way. Trends can be analyzed and extrapolated; scenarios can be created to anticipate best, worst and preferred possibilities; roadmaps and models can be built; systems-wide thinking can be applied. All so that we can be better prepared for the changes and challenges that lie ahead.
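To make the simplest of those tools concrete, here’s a toy trend extrapolation in a few lines of code. The data points and units are invented, purely for illustration – real foresight work layers scenarios and models on top of naive lines like this one, precisely because simple extrapolation breaks when conditions change:

```python
# Toy example of the simplest futurist tool: extrapolating a trend.
# The data points below are invented, purely for illustration.
data = {2000: 361, 2002: 587, 2004: 813, 2006: 1018, 2008: 1262}

years = list(data)
values = list(data.values())
n = len(years)

# Ordinary least-squares slope and intercept, computed by hand.
mean_x = sum(years) / n
mean_y = sum(values) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(years, values))
         / sum((x - mean_x) ** 2 for x in years))
intercept = mean_y - slope * mean_x

for year in (2010, 2015, 2020):
    print(f"{year}: ~{slope * year + intercept:,.0f} (naive linear extrapolation)")
```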

If there’s one thing we can learn from this, it’s that we should all be talking more about the future. Our future. Perhaps then, and only then, can we start to do something about it.

Space elevator needs a boost

2008 Space Elevator Conference

I’ve been attending the 2008 Space Elevator Conference being held at Microsoft’s Redmond campus this weekend.  The many talks and papers given there clearly demonstrate the tremendous dedication and creative engineering that can be found in this nascent field.

For those of you who are fuzzy on the concept, the general idea of the space elevator involves running a tether from the surface of the earth to geosynchronous orbit and beyond.  A mechanical “climber” then ascends and descends the tether, delivering payloads into orbit.  If this sounds like science fiction, that’s because until recently, it was.  Independently conceived by a Russian (Artsutanov) and an American (Pearson), the space elevator concept was popularized by the late science fiction writer Arthur C. Clarke in his 1979 novel, “The Fountains of Paradise”.  At that time, no material was light enough and strong enough to make it possible.  But with the developing field of nanotechnology and the discovery of carbon nanotubes (CNTs), a number of people have begun to take the idea more seriously.  (For the record, Clarke is also credited with originating the idea of using geosynchronous satellites as telecommunications relays back in 1945, a concept that completely revolutionized communications.)
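For a sense of the scale involved, the altitude of geosynchronous orbit falls straight out of basic orbital mechanics – a quick back-of-the-envelope calculation using standard physical constants, nothing specific to any particular elevator design:

```python
# Back-of-the-envelope: altitude of geosynchronous orbit.
# A satellite whose period matches one sidereal day (~86,164 s) orbits at
# r = (GM * T^2 / 4 pi^2)^(1/3), measured from Earth's center.
import math

GM = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
T = 86164.1           # one sidereal day, seconds
R_EARTH = 6.378137e6  # Earth's equatorial radius, meters

r = (GM * T**2 / (4 * math.pi**2)) ** (1 / 3)
altitude_km = (r - R_EARTH) / 1000
print(f"GEO altitude: ~{altitude_km:,.0f} km")  # ~35,786 km of tether, minimum
```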

Now while the general concepts behind the space elevator are simple, the implementation is anything but.  There are numerous technical issues to be worked out and an enormous initial investment to be made.  But the potential payoff is huge.  Currently, payload constitutes only about 3.5% of the total launch weight of a rocket bound for geosynchronous orbit, and delivery costs run on the order of $5,000–$10,000 per pound of payload.  Payload efficiency on the space elevator could reach 90% or more, depending on the method used.  Payload envelopes wouldn’t be limited to the size and shape of cylindrical payload bays and nose cones.  Pollution from rockets would be eliminated.  Most importantly, the cost per pound would plummet.  As a result, a new space era would be born.
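To put those figures in perspective, here’s a rough sketch of the arithmetic. The rocket numbers are the ones cited above; the elevator cost-per-pound is a purely illustrative placeholder, since no real figure exists yet:

```python
# Rough arithmetic using the figures cited above.
rocket_payload_fraction = 0.035    # ~3.5% of total launch weight is payload
elevator_payload_fraction = 0.90   # claimed efficiency for some climber designs

# Lifted mass needed to deliver 1,000 lb of payload:
for name, frac in [("Rocket", rocket_payload_fraction),
                   ("Elevator", elevator_payload_fraction)]:
    print(f"{name}: lift ~{1000 / frac:,.0f} lb to deliver 1,000 lb of payload")

# Cost side: $5,000-10,000 per payload pound by rocket today.
rocket_low, rocket_high = 5_000, 10_000
elevator_cost_per_lb = 100  # ILLUSTRATIVE ONLY -- no real figure exists yet
print(f"Rocket:   1,000 lb costs ${rocket_low * 1000:,}-{rocket_high * 1000:,}")
print(f"Elevator: 1,000 lb costs ~${elevator_cost_per_lb * 1000:,} at the assumed rate")
```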

Why is this important?  New materials and manufacturing methods that are only possible in zero gravity could be developed.  Off-world mining would allow us to supplement our diminishing resources.  Solar power beamed from space could meet the needs of our increasingly energy-hungry world.  And whether the space elevator is built by one country or becomes a multi-national effort, it will be a huge stimulus for the world economy, particularly for the key players involved.

But it’s not going to happen without support and financial commitment.  True, NASA currently offers as much as four million dollars in prizes through the Spaceward Games, a competition designed to stimulate progress in the field.  But the planning, the engineering analyses and the proof-of-concept work are all being done on a shoestring as thin as the carbon nanotubes themselves.  Uncounted hours are being volunteered by engineers and enthusiasts the world over, people who know this will one day become a reality.  But such dedication can take the space elevator only so far.  The day is quickly coming when we’ll have to make a greater commitment if we want to participate in what will surely be one of the greatest engineering feats humankind has ever seen.

Driving forces

Fuel prices continue to climb, steadily increasing the cost of every mile we travel.  Meanwhile, traffic congestion gets worse with each passing year.  And tragically, car accidents kill around 43,000 people in the US annually.

What if all of these statistics could be improved with a single technology?

Autonomous driving – driverless cars – may eventually do just that.  Okay, to be fair, the concept involves a family of technologies: video, infra-red, laser and radar sensors; GPS navigation; AI-controlled motion planning; and a variety of mechanical control systems.  But isn’t that what current-day cars are, anyway: an amalgam of systems?

The idea of driverless vehicles has been explored for decades, but it’s only recently that the supporting technologies have reached a sufficient maturity to really be able to capture the attention of the media and the imagination of the public.  I think recent advances are very indicative of how quickly this technology is going to mature during the next few years.

2007 Urban Challenge winner

In 2004, DARPA (the Defense Advanced Research Projects Agency) held its first Grand Challenge, run over a 150-mile desert course that none of the twenty-one contestants finished.  In fact, the longest distance covered by any vehicle was just 7.36 miles.  In the second Challenge, held in 2005, five vehicles finished a 132-mile off-road course, and all but one of the twenty-three entrants surpassed the prior year’s top distance.

Last year, DARPA held its 2007 Urban Challenge.  Of the eleven finalists, six completed the 55-mile urban course, three of them within the six-hour time limit.  The rules required obeying all California state driving laws while negotiating other traffic and obstacles and merging into traffic.  The $2 million prize was won by Tartan Racing, a collaborative effort by Carnegie Mellon University and General Motors Corporation.  Their vehicle, a Chevy Tahoe, covered the course in 4 hours 10 minutes, an average speed of roughly 13 mph.

Last month, GM and Carnegie Mellon announced a new Collaborative Research Lab and a $5 million commitment to work jointly on technologies that will accelerate the emerging field of autonomous driving.  This follows an announcement by GM in January that the company plans to test driverless car technology by 2015 and have cars on the road around 2018.  I wouldn’t be at all surprised if competitive pressures and AI advances moved this forward by a couple of years. 

In the end, regulatory issues and public acceptance of the systems’ safety may delay wide-scale use by several years, but ultimately these vehicles will become the norm.  A properly designed machine can easily react to a detected condition many times faster than a human being.  On-board transmitters and signaling systems could warn of intended actions, giving adjacent vehicles plenty of time to respond.  Combine this with AI swarming algorithms and vehicles will eventually be able to interact with each other with great speed and safety.  (How many collisions have you seen between flocks of birds recently?)
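As a toy illustration of that last point, here’s a minimal sketch of a boids-style “separation” rule – one of the classic ingredients of swarming algorithms. All names and thresholds are hypothetical; a real vehicle system would blend many such terms with live sensor data, recomputed many times per second:

```python
# Toy sketch of a boids-style "separation" rule: each vehicle steers away
# from neighbors that get too close. Names and thresholds are hypothetical.
from dataclasses import dataclass

@dataclass
class Vehicle:
    x: float
    y: float

def separation_steer(me: Vehicle, others: list[Vehicle],
                     min_gap: float = 10.0) -> tuple[float, float]:
    """Return a steering vector pushing `me` away from close neighbors."""
    steer_x = steer_y = 0.0
    for other in others:
        dx, dy = me.x - other.x, me.y - other.y
        dist = (dx * dx + dy * dy) ** 0.5
        if 0 < dist < min_gap:
            # Closer neighbors push harder (weight by 1/dist).
            steer_x += dx / dist / dist
            steer_y += dy / dist / dist
    return steer_x, steer_y

# A full swarm controller would blend this with alignment, cohesion and
# route-following terms -- reacting far faster than human reflexes allow.
print(separation_steer(Vehicle(0, 0), [Vehicle(3, 0), Vehicle(0, -4)]))
```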

Given the enormous benefits this technology promises (fuel savings, better utilization of existing roads and lives saved), $5 million seems a trifling sum.  If a more substantial commitment resulted in autonomous vehicles being embraced just one year sooner, how much could we truly save?

Science, abstraction and magic

I was in a pub a while back having a pint with a British friend when the topic turned briefly to computing.  “What I don’t understand,” he said, “is how programmers make something from nothing.”

I thought for a moment, then briefly explained levels of abstraction, from transistors through machine language up to higher-level languages.  I figured this might show him how, by controlling certain electrical properties at the micro-scale, we’re able to create a panoply of macro-effects.  Well, this obviously wasn’t the right approach, since he began to glaze over after only a few minutes.  To be fair, we were a few pints in, but he had asked, after all.  He was a university grad, so I’d assumed this was the kind of answer he’d been looking for.  Thinking back, I’m not entirely sure he wasn’t winding me up (pulling my leg) – a favorite English pastime – but I suspect that wasn’t the case.  I think he genuinely considered programming to involve the creation of something from nothing.
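For what it’s worth, the explanation I was fumbling toward can be compressed into a few lines of code – a toy model of the abstraction stack, building ordinary addition out of nothing but on/off switches:

```python
# A toy model of the abstraction stack: from transistor-like switches,
# to logic gates, to an adder, to the '+' we type without a second thought.

# Level 1: "transistors" -- all we really have is on/off.
def nand(a: bool, b: bool) -> bool:
    return not (a and b)

# Level 2: gates built from NAND alone.
def xor(a, b): return nand(nand(a, nand(a, b)), nand(b, nand(a, b)))
def and_(a, b): return nand(nand(a, b), nand(a, b))
def or_(a, b): return nand(nand(a, a), nand(b, b))

# Level 3: a full adder -- one column of binary addition.
def full_adder(a, b, carry_in):
    s = xor(xor(a, b), carry_in)
    carry_out = or_(and_(a, b), and_(carry_in, xor(a, b)))
    return s, carry_out

# Level 4: ripple-carry addition of two 8-bit numbers, bit by bit.
def add8(x: int, y: int) -> int:
    result, carry = 0, False
    for i in range(8):
        s, carry = full_adder(bool(x >> i & 1), bool(y >> i & 1), carry)
        result |= int(s) << i
    return result

print(add8(19, 23), 19 + 23)  # same answer -- not something from nothing
```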

This got me thinking about the oft-quoted Arthur C. Clarke observation that “any sufficiently advanced technology is indistinguishable from magic.”  It’s been written about frequently enough, but I’ve tended to think of it in terms of eras and generations.  The space probes, imaging technologies and wireless communications of today would seem like magic to a citizen of the 19th Century.  The locomotives, magic lanterns and telegraphs of the 19th Century would probably be quite magical to someone from the 15th Century Renaissance.  Likewise for the engines of da Vinci, the perspective paintings and the printing presses of that era, if you could show them to someone of a sufficiently earlier time.

But just as the rate of technological advance is increasing, the time frames in these examples are shrinking.  I don’t think it’s necessarily a direct inverse relationship, but it does look like a fairly solid trend.  The fact is that a growing proportion of our technology is moving beyond the realm of common knowledge.

When we flick a switch to turn on a light, we’re using an interface that separates us from the more complex processes involved.  Those processes are abstracted in order to make them more accessible.  I think it’s fair to say that most people don’t understand the fundamental principles that cause a light bulb to glow when they flick that switch, and that’s even more true of more advanced devices.  As devices become increasingly sophisticated, more levels of abstraction will have to be created between what they do and how we use them – between what we perceive as cause and effect – in order for us to be able to use them at all.  Because of this, the world is going to appear ever more “magical”.

Consider a lighting system.  We can already walk into some homes and activate the lights by our motion or with a verbal command.  In a few years, we’ll be able to operate much more complex systems using simple gestures or eye tracking.  And maybe a bit more than a decade beyond that by just thinking about it.  The simple interfaces will be there, the underlying technology will be there, but the conceptual distance between them will be so great that for many of us our understanding of the processes will become distorted. 
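A sketch of how those layers might stack up – every class and method name here is invented, purely for illustration:

```python
# Hypothetical sketch: many front-end interfaces, one hidden action.
# Every name here is invented for illustration.
class Light:
    def turn_on(self):
        # Behind this one call: switching circuits, drivers, the physics
        # of the emitter itself -- none of it visible to the user.
        print("light on")

class HomeController:
    def __init__(self, light: Light):
        self.light = light

    # Each "interface" is just a different trigger for the same command.
    def on_motion_detected(self):
        self.light.turn_on()

    def on_voice_command(self, phrase: str):
        if "lights on" in phrase.lower():
            self.light.turn_on()

    def on_gesture(self, gesture: str):
        if gesture == "swipe_up":
            self.light.turn_on()
    # ...and someday, perhaps: def on_thought(self, intent): ...

ctrl = HomeController(Light())
ctrl.on_voice_command("Lights on, please")  # the user sees cause and effect;
ctrl.on_gesture("swipe_up")                 # everything between is abstracted
```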

This creates the potential for some very negative consequences.  A number of people, including futurologist Ian Pearson (see my recent interview with him in FutureNovo), have suggested that we may be moving toward a technological dark age in which, among other things, superstitions arise around certain devices and processes because of just such a lack of understanding.  The ramifications of such a disconnect are alarming, since it creates an environment begging for backlash and exploitation.  A greater gulf will open between those with very specialized, esoteric knowledge and the end-users who reap the benefits.  If only a rare few understand how to build, operate or repair a technology, all kinds of abuse become possible.  Finally, should something happen to those few, we could easily find ourselves with no access to the technology at all.

And then we’ll really have to build something out of nothing.

Empowering the people

Change frightens us. The uncertainty of the new and the potential for disruption are among the reasons our species seeks to anticipate the future – so we can avoid, and hopefully survive, the worst it has to throw at us.

Self-replicated RepRap parts

But change also has a huge potential to improve our lives and empower us. The recent accomplishments of the RepRap project are a case in point. Headed by Dr. Adrian Bowyer of the University of Bath’s Centre for Biomimetics, RepRap is short for “replicating rapid prototyping machine”. In use by industry for about a quarter century now, prototypers are essentially 3D inkjet printers capable of creating parts by laying down thin layers of resin following a computer-driven design. Almost any basic object can be created this way, from dinnerware to engine parts. What makes RepRap different is that it’s the first prototyper capable of copying itself. And it’s open source.
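The core idea behind that layer-by-layer process can be sketched in a few lines – a toy “slicer” that turns a solid shape into the stack of thin layers the machine deposits. Real slicing software, including RepRap’s own toolchain, handles meshes, infill and toolpaths and is far more involved; this just shows the principle, with arbitrary numbers:

```python
# Toy "slicer": turn a solid into the stack of thin layers a prototyper
# deposits one at a time. Shape and numbers are arbitrary, for illustration.
import math

def slice_cone(base_radius: float, height: float, layer_height: float = 0.3):
    """Yield (z, radius) for each deposited layer of a simple cone."""
    z = 0.0
    while z < height:
        # The cone narrows linearly as we go up.
        radius = base_radius * (1 - z / height)
        yield round(z, 2), round(radius, 2)
        z += layer_height

for z, r in slice_cone(base_radius=5.0, height=2.0, layer_height=0.5):
    # Each layer becomes a circular path for the print head to trace.
    path = 2 * math.pi * r
    print(f"layer at z={z}mm: circle r={r}mm, path length {path:.1f}mm")
```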

While there are still a handful of its own parts that RepRap can’t yet copy, and the machine still has to be assembled by hand, the potential of a self-replicating replicator is enormous. Distributed to people in the developing world, such a technology could quickly raise their standard of living, providing necessities many of the rest of us have long taken for granted. Of course, such a technology would be tremendously disruptive to industry, but that can hardly be justification for billions to continue living and dying in unnecessary poverty.

Like RepRap, many other new and developing technologies have the potential to heal, to enable, to lift up vast numbers of people. DEKA’s water purification system, the Kurzweil-National Federation of the Blind Reader, solar energy solutions and worldwide immunization programs are but a few of the recent implementations of technology that have the ability to change the lives of millions for the better.

A changing world can be a frightening place, but it can be a very hopeful place as well.
