Space elevator needs a boost

2008 Space Elevator Conference

I’ve been attending the 2008 Space Elevator Conference being held at Microsoft’s Redmond campus this weekend.  The many talks and papers given there clearly demonstrate the tremendous dedication and creative engineering that can be found in this nascent field.

For those of you who are fuzzy on the concept, the general idea of the space elevator involves running a tether from the surface of the earth to geosynchronous orbit and beyond.  A mechanical “climber” then ascends and descends the tether, delivering payloads into orbit.  If this sounds like science fiction, that’s because until recently, it was.  Independently conceived by a Russian (Artsutanov) and an American (Pearson), the space elevator concept was popularized by the late science fiction writer Arthur C. Clarke in his 1979 novel, “The Fountains of Paradise”.  At that time, no material was light enough and strong enough to make it possible.  But with the developing field of nanotechnology and the discovery of carbon nanotubes (CNTs), a number of people have begun to take the idea more seriously.  (For the record, Clarke is also credited with originating the idea of using geosynchronous satellites as telecommunications relays back in 1945, a concept that completely revolutionized communications.)

Now while the general concepts behind the space elevator are simple, the implementation is anything but.  There are numerous technical issues to be worked out and an enormous initial investment to be made.  But the potential payoff is huge.  Currently, payloads to geosynchronous orbit constitute only about 3.5% of total launch weight and cost on the order of $5,000-$10,000 per pound.  Payload efficiency on the space elevator could reach 90% or more, depending on the method used.  Payload envelopes wouldn’t be limited to the size and shape of cylindrical payload bays and nose cones.  Pollution from rockets would be eliminated.  Most importantly, the cost per pound would plummet.  As a result, a new space era would be born.
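To get a feel for what those numbers imply, here’s a back-of-the-envelope sketch.  The inverse-scaling assumption is mine, purely for illustration; real launch costs depend on far more than payload fraction:

```python
# Back-of-the-envelope comparison of rocket vs. elevator cost per pound.
# Assumes, for illustration only, that cost per pound scales inversely
# with payload fraction -- a simplification, not an engineering estimate.

rocket_payload_fraction = 0.035      # ~3.5% of launch weight reaches GEO
rocket_cost_per_lb = 10_000          # upper end of the $5,000-$10,000 range

elevator_payload_fraction = 0.90     # 90% payload efficiency

elevator_cost_per_lb = rocket_cost_per_lb * (
    rocket_payload_fraction / elevator_payload_fraction
)

print(f"Implied elevator cost: ${elevator_cost_per_lb:,.0f} per pound")
# roughly $389 per pound under these assumptions
```

Even if the true savings were an order of magnitude smaller, the economics would still be transformative.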

Why is this important?  New materials and manufacturing methods could be developed that are only achievable in zero gravity.  Off-world mining would allow us to supplement our diminishing resources.  Solar power beamed from space could meet the needs of our increasingly energy-hungry world.  Whether the space elevator is built by one country or becomes a multi-national effort, it will be a huge stimulus for the world economy, particularly for the key players involved.

But it’s not going to happen without support and financial commitment.  True, NASA currently offers as much as four million dollars in prizes for the Spaceward Games, a competition designed to stimulate progress in the field.  But the planning, the engineering analyses and the proof-of-concept work are all being done on a shoestring as thin as the carbon nanotubes themselves.  Uncounted hours are being volunteered by engineers and enthusiasts the world over, people who know this will one day become a reality.  But such dedication can take the space elevator only so far.  The day is quickly coming when we’ll have to make a greater commitment if we want to participate in what will surely be one of the greatest engineering feats humankind has ever seen.

Driving forces

Fuel prices continue to climb, steadily increasing the cost of every mile we travel.  Meanwhile, traffic congestion gets worse with each passing year.  And tragically,  car accidents kill around 43,000 people in the US annually. 

What if all of these statistics could be improved with a single technology?

Autonomous driving – driverless cars – may eventually do just that.  Okay, to be fair, the concept involves a family of technologies: video, infra-red, laser and radar sensors; GPS navigation; AI-controlled motion planning; and a variety of mechanical control systems.  But isn’t that what current-day cars are, anyway: an amalgam of systems?

The idea of driverless vehicles has been explored for decades, but it’s only recently that the supporting technologies have reached a sufficient maturity to really be able to capture the attention of the media and the imagination of the public.  I think recent advances are very indicative of how quickly this technology is going to mature during the next few years.

2007 Urban Challenge winner

In 2004, DARPA (the Defense Advanced Research Projects Agency) held its first Grand Challenge, a 150-mile desert course that none of the twenty-one contestants finished.  In fact, the longest distance covered by any vehicle was only a little over seven miles.  In the second Challenge, held in 2005, five vehicles finished a 132-mile off-road course, and all but one of the twenty-three entrants surpassed the prior year’s top distance of 7.36 miles.

Last year, DARPA held its 2007 Urban Challenge.  Of the eleven finalists, six completed the 55-mile urban course, three of them within the six-hour time limit.  Rules included obeying all California state driving laws while negotiating other traffic and obstacles and merging into moving traffic.  The $2 million prize was won by Tartan Racing, a collaborative effort by Carnegie Mellon University and General Motors Corporation.  Their vehicle, a Chevy Tahoe, covered the course in 4 hours 10 minutes, for an average speed of nearly 14 mph.

Last month, GM and Carnegie Mellon announced a new Collaborative Research Lab and a $5 million commitment to work jointly on technologies that will accelerate the emerging field of autonomous driving.  This follows an announcement by GM in January that the company plans to test driverless car technology by 2015 and have cars on the road around 2018.  I wouldn’t be at all surprised if competitive pressures and AI advances moved this forward by a couple of years. 

In the end, regulatory issues and public acceptance of the systems’ safety may delay wide-scale use by several years, but ultimately these vehicles will become the norm.  A properly designed machine can easily react to a detected condition many times faster than a human being.  On-board transmitters and signaling systems could warn of intended actions, giving adjacent vehicles plenty of time to respond.  Combine this with AI swarming algorithms and vehicles will eventually be able to interact with each other with great speed and safety.  (How many collisions have you seen between flocks of birds recently?)
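The swarming idea can be illustrated with the separation rule from classic flocking ("boids") algorithms: each vehicle steers away from any neighbor that gets too close.  A minimal sketch, with distances and gains invented for illustration:

```python
import math

def separation_vector(me, neighbors, min_gap=10.0):
    """Sum of repulsive vectors pushing 'me' away from any neighbor
    closer than min_gap.  Positions are (x, y) tuples in meters."""
    sx, sy = 0.0, 0.0
    for nx, ny in neighbors:
        dx, dy = me[0] - nx, me[1] - ny
        dist = math.hypot(dx, dy)
        if 0 < dist < min_gap:
            # Repulsion grows as the gap shrinks.
            strength = (min_gap - dist) / dist
            sx += dx * strength
            sy += dy * strength
    return sx, sy

# A vehicle at the origin with one neighbor 4 m ahead on the x-axis
# gets pushed in the -x direction:
print(separation_vector((0, 0), [(4, 0)]))  # (-6.0, 0.0)
```

Real systems would combine this with alignment and cohesion rules, sensor fusion and vehicle dynamics, but the core of collision-free swarming really is this simple.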

Given the enormous benefits this technology promises (fuel savings, improved utilization of existing roads, and lives saved), $5 million seems a trifling sum.  If a more substantial commitment resulted in autonomous vehicles being embraced just one year sooner, how much could we truly save?

Science, abstraction and magic

I was in a pub a while back having a pint with a British friend when the topic turned briefly to computing.  “What I don’t understand,” he said, “is how programmers make something from nothing.”

I thought for a moment, then started briefly explaining levels of abstraction, from transistors through machine language to higher-level languages.  I figured this might show him how, by controlling certain electrical properties at the micro scale, we’re able to create a panoply of macro effects.  Well, this obviously wasn’t the right approach, since he began to glaze over after only a few minutes.  To be fair, we were a few pints in, but he had asked, after all.  He was a university grad, so I’d assumed this was the kind of answer he’d been looking for.  Thinking back, I’m not entirely sure he wasn’t winding me up (pulling my leg) – a favorite English pastime – but I suspect that wasn’t the case.  I think he genuinely considered programming to involve the creation of something from nothing.
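One of those layers is easy to demonstrate: Python’s standard `dis` module shows the bytecode instructions a line of source compiles to, one level of abstraction below what the programmer writes.

```python
import dis

# A trivial function, so the bytecode listing stays short.
def add(a, b):
    return a + b

# One level down: the stack-machine instructions the Python
# interpreter actually executes for "a + b".
dis.dis(add)
```

The exact instruction names vary by Python version, but the listing always shows loads of the two operands, an addition, and a return: the "something" the programmer made is just a new arrangement of operations that already existed a layer below.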

This got me thinking about the oft-quoted Arthur C. Clarke observation that “any sufficiently advanced technology is indistinguishable from magic.”  It’s been written about frequently enough, but I’ve tended to think of it in terms of eras and generations.  The space probes, imaging technologies and wireless communications of today would seem like magic to a citizen of the 19th Century.  The locomotives, magic lanterns and telegraphs of the 19th Century would probably be quite magical to someone from the 15th Century Renaissance.  Likewise for the engines of da Vinci, perspective paintings and printing presses of that era, if you could show them to someone of a sufficiently earlier time.

But just as the rate of technological advance is increasing, the time frames in these examples are shrinking.  I don’t think it’s necessarily a direct inverse relationship, but it does look like a fairly solid trend.  The fact is that a growing proportion of our technology is moving beyond the realm of common knowledge.

When we flick a switch to turn on a light, we’re using an interface that separates us from the more complex processes involved.  Those processes are abstracted in order to make them more accessible.  I think it’s fair to say that most people don’t understand the fundamental principles that cause a light bulb to glow when they flick that switch.  This is even more the case with more advanced devices.  As devices become increasingly sophisticated, more levels of abstraction will have to be created between what they do and how we use them, between what we perceive as cause and effect, in order for us to be able to use them.  Because of this, the world is going to appear ever more “magical”.

Consider a lighting system.  We can already walk into some homes and activate the lights by our motion or with a verbal command.  In a few years, we’ll be able to operate much more complex systems using simple gestures or eye tracking.  And maybe a bit more than a decade beyond that by just thinking about it.  The simple interfaces will be there, the underlying technology will be there, but the conceptual distance between them will be so great that for many of us our understanding of the processes will become distorted. 

This creates the potential for some very negative consequences.  A number of people, including futurologist Ian Pearson (see my recent interview with him in FutureNovo), have suggested that we may be moving toward a technological dark age in which, among other things, superstitions will arise around certain devices and processes because of just such a lack of understanding.  The ramifications of such a disconnect are alarming, since it creates an environment begging for backlash and exploitation.  A greater gulf will open between those with very specialized, esoteric knowledge and the end-users who reap the benefits.  If only a rare few understand how to build, operate or repair a technology, all kinds of abuse become possible.  Finally, should something happen to those few, we could easily find ourselves with no access to the technology at all.

And then we’ll really have to build something out of nothing.

Empowering the people

Change frightens us.  The uncertainty of the new and the potential for disruption are among the reasons our species seeks to anticipate the future – so we can avoid, and hopefully survive, the worst it has to throw at us.

Self-replicated RepRap parts

But change also has a huge potential to improve our lives and empower us. The recent accomplishments of the RepRap project are a case in point. Headed by Dr. Adrian Bowyer of the University of Bath’s Centre for Biomimetics, RepRap is short for replicating rapid prototyping machine. In use by industry for about a quarter century now, prototypers are essentially 3D inkjet printers capable of creating parts by laying down thin layers of resin following a computer-driven design. Almost any basic object can be created using this technique, from dinnerware to engine parts. What makes RepRap different is that it’s the first prototyper capable of copying itself. And it’s open source.
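The layer-by-layer idea is simple enough to sketch: a slicer cuts a 3D model into horizontal cross-sections, and the printer deposits each one in turn. A toy example for a cone, with all dimensions invented:

```python
def slice_cone(base_radius, height, layer_height):
    """Return the cross-section radius at each layer of a cone,
    bottom to top -- the shapes a simple slicer would hand to the
    print head one layer at a time."""
    layers = []
    z = 0.0
    while z < height:
        # A cone's radius shrinks linearly with height.
        radius = base_radius * (1 - z / height)
        layers.append(round(radius, 2))
        z += layer_height
    return layers

print(slice_cone(base_radius=10.0, height=5.0, layer_height=1.0))
# [10.0, 8.0, 6.0, 4.0, 2.0]
```

Real slicers output toolpaths rather than bare radii, but the principle is the same: reduce an arbitrary solid to a stack of 2D layers a machine can draw.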

While there are still a handful of its own parts that RepRap can’t copy, and the machine still has to be assembled by hand, the potential for a self-replicating replicator is enormous. Distributed to people in the developing world, such a technology could quickly raise their standard of living, providing necessities many of the rest of us have long taken for granted. Of course, such a technology would be tremendously disruptive to industry, but that can hardly be justification for billions to continue living and dying in unnecessary poverty.

Like RepRap, many other new and developing technologies have the potential to heal, to enable, to lift up vast numbers of people. DEKA’s water purification system, the Kurzweil-National Federation of the Blind Reader, solar energy solutions and worldwide immunization programs are but a few of the recent implementations of technology that have the ability to change the lives of millions for the better.

A changing world can be a frightening place, but it can be a very hopeful place as well.

Vanishing computers

Computers are disappearing. 

Now before you panic (or in a few cases, jump for joy), what I mean to say is computers are disappearing from view.  They’ll still be here, more powerful and in greater numbers than ever.  We’ll just not be seeing a growing proportion of them.


Intel 80 core research chip
  • Last year Intel unveiled a postage-stamp-sized 80-core research chip as powerful as a 1996 supercomputer that took up 2,000 square feet.  The new chip requires about 1/10,000th as much power as that supercomputer did.
  • Wireless technology is available in more of our environment at continually increasing transmission speeds.  The recent auction of 700 MHz spectrum will allow for the delivery of a wide range of new software services via wireless.
  • GPS and other positioning technologies are being developed with greater degrees of accuracy and granularity at ever-lower cost.
  • RFID is becoming increasingly capable.  Identification, sensor integration, data storage, firewalled access and encrypted communication are just some of its current features.  Chips so small they qualify as powder can be embedded in just about anything imaginable.  Even under your skin.
  • Cloud computing is taking off.  With its growth, more and more of our processing needs can be off-loaded to distant, unseen servers, which will provide processing-on-demand and greatly reduce wasted processing cycles.
  • Display technology is shrinking.  Texas Instruments recently demonstrated a prototype DLP pico-projector which is small enough to fit in a cell phone.  Wearable displays and retinal projection technology will become increasingly available in the near future. 
  • Emotiv headset

    Several companies have recently demonstrated the ability to translate thoughts into commands that can be used to control games and other applications.  Emotiv Systems plans to ship its first-gen neuroheadsets in late 2008.

All of these technologies are becoming increasingly capable even as their cost is plummeting.  This is how technology works.  Many of us can remember when a not very sophisticated calculator cost as much as a current-day PS3.  And that’s not even in adjusted dollars.

So how does this change the way we’ll use computers?  Well, for one, they’ll soon be with us everywhere, all the time.  If you have enough computing power in your pocket or woven into your clothes or embedded under your skin to control basic I/O functions, various forms of wireless, GPS and cloud computing can do the rest.  Clunky old keyboards, mice and monitors will be a thing of the past.  RFID in clothing, jewelry or even under your fingertips will make gesture recognition input possible.  Wearable displays have the potential to provide heads-up information anywhere you go, augmenting your environment with different layers and levels of information.  Contextual overlays will be driven by a mix of geographic data and proximity detection, while being controlled and modified by personal preference filters.
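That last idea, overlays driven by proximity and personal preference, reduces to a simple filtering pipeline. A sketch with entirely invented data:

```python
import math

def nearby_overlays(user_pos, pois, interests, max_dist=200.0):
    """Return points of interest within max_dist meters of the user
    that match the user's preference filter, nearest first.
    All names and coordinates here are invented for illustration."""
    results = []
    for name, tag, (x, y) in pois:
        dist = math.hypot(x - user_pos[0], y - user_pos[1])
        if dist <= max_dist and tag in interests:
            results.append((round(dist), name))
    return sorted(results)

pois = [
    ("Cafe Luna",  "food",   (50.0, 40.0)),
    ("Shoe World", "retail", (30.0, 10.0)),    # filtered out by preference
    ("Book Nook",  "books",  (400.0, 10.0)),   # filtered out by distance
]

print(nearby_overlays((0.0, 0.0), pois, interests={"food", "books"}))
# [(64, 'Cafe Luna')]
```

A wearable display would run something like this continuously, redrawing the overlay as you move and as your preference filters change.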

And brainwave I/O is only just getting started.

A new era is coming.  Get ready to say goodbye to your computer.