Strategies for a Troubled Economy

From the very first time one of our primate ancestors poked a stick into an anthill to retrieve a few tasty morsels, we’ve been on a persistent track of technological betterment. Today, we’ve reached the point where hardly a day passes without news of yet another advancement that will change and improve our lives. Machines that can perform tasks both mundane and complex. Computer programs that duplicate the expertise of the highly skilled. Processes that allow one person to do the job of ten.

Unfortunately, such advances can also have a very negative impact on the welfare of the people and society they should be improving. As we’ve seen repeatedly over the years, such progress can displace not only individual workers, but entire fields and industries as well. While the argument can be made that new technologies create new jobs and opportunities, the reality is often just the opposite: net job losses. Bad as this is, there are indications that this trend will not only continue but is, in fact, accelerating. The coming decades hold considerable potential for catastrophic levels of job loss. Losses that will be tremendously detrimental to both society and the corporate world unless we can adapt.

In the decades ahead, we face a legion of disruptive technologies that will transform the world. Open source 3-D printing that will allow us to cheaply print almost any part or object. Designs and inventions generated by evolutionary algorithms. Molecular nanotechnology that will eventually allow us to build almost anything we can imagine atom by atom. Artificial intelligences that will be increasingly capable of performing even the most human of tasks.

All of these technologies are based on information. Easily accessible, readily replicable information. As each technology develops and matures, there will be tremendous incentive for businesses and individuals to utilize it for maximum benefit and maximum profit. This, in spite of the long-term impact on the economy and on society.

We can already see the beginnings of this in many of the world’s economies. The efficiency improvements promised by computers and the information revolution are being realized. But we’ve been unprepared for many of the negative aspects of this progress. The still bigger problem, of course, is that this is just the beginning. The job loss we see today may be only the tip of tomorrow’s iceberg. If this is allowed to proceed unchecked, the world’s economies could grind to a halt. Following a long, downward spiral, we would eventually reach a point where few businesses are selling because so few people are capable of buying.

It’s often been said that new technologies generate new jobs, and this is true up to a point. But at our current stage of technological development, it’s difficult to see these keeping pace with the rapid rise in job losses. New industries, retraining programs and adapting institutions all take far longer to build than it takes to realize the fruits of exponential progress once that progress reaches a certain level relative to human time scales.

So what is the solution? Do we somehow impose restrictions or regulations that prevent the advances that will eventually cause these losses? Do we create incentives to retain employees despite their “jobsolescence”?

Such approaches are unlikely to work in a global market governed by competitive forces. The reality is we probably won’t be able to resolve these issues using our existing tools and institutions. In order to fix this, it may be necessary to reinvent commerce itself.

Exactly what form this will take is difficult to say. It may be that businesses will be forced to change their revenue models by moving away from the standard buy-to-own concept. As development times shrink and production costs plummet, it may be both possible and necessary to move toward something entirely different. A subscription model could be one possibility: paying a fee to obtain regular updates of a product or line, just as we do with software. As furniture, food, clothing, even buildings come to be governed by the laws and forces of information and accelerating change, we’ll need to adapt the revenue models that support them.

In addition to this, social programs will likely need to be restructured to better deal with periods of large-scale, extended unemployment. In the current U.S. political climate, this may seem impossible, but I believe it will eventually be deemed necessary, if we are to avoid truly catastrophic downturns. A smarter, more adaptable social safety net would act as a buffer, providing time for new industries to mature and workers to be retrained, while keeping the economy primed and flowing.

Extensive development of lifelong learning will also be essential to ensure workers can meet the needs of emerging fields. It’s been observed that many of yesterday’s jobs have disappeared, just as many of the jobs of a decade from now don’t even exist yet. Adapting worker schedules and funding to support such continuing education will pay big dividends in a rapidly changing world.

It’s very possible we’ll need to rethink our approach to economics and business models many times during the coming century. Such adaptation and restructuring could allow both producers and consumers to more fully realize the benefits of technological progress, while maintaining an economy that protects the existence of both.

Playing the Long Game

It’s interesting what inspires or motivates an article. It might be a conversation, a good book, a song from days gone by. Sometimes there’s just a deadline waiting to be met. In the case of “Playing the Long Game”, my latest article at H+ Magazine, it was a recent tweet by its editor, Michael Anissimov.

This is a sentiment I definitely share, and it got me to thinking: Here we are, the one animal that’s capable of planning days, weeks, even years in advance. What is it that keeps us locked into short-term thinking? Why aren’t we making better use of this unique and powerful ability? As it turns out, I came up with quite a few factors. How we work, how we play, how we interact — a whole lot of our modern lifestyle encourages us to avoid long-term thinking and planning.

All of this perpetuates a shallowness of thought that impacts our decision making in so many spheres. Too often, our political system is hobbled by thinking that expects simplistic, slogan-ready solutions to increasingly complex problems in an increasingly complex world. Higher education teaches yesterday’s skills for jobs that soon won’t exist, instead of developing critical thinking and other adaptive skills in anticipation of the needs of tomorrow. In short, all this living for the moment makes us forget there is a world that lies beyond the fifteen-minute horizon, and we are suffering for it.

While we face some very big challenges in the coming century, I do think we have it in us to deal with them. But it might mean making some changes to how we think about today and tomorrow.

Rise of the Intelligent Machines

I’m beginning a short series at Psychology Today about the ongoing advances being made in machine intelligence. I’d originally thought about using “The Ascent of Machine” for the series title — after Jacob Bronowski’s wonderful work, “The Ascent of Man”, which I found so inspiring when it first came out. But I thought it sounded a bit kludgy and so I ultimately decided on the Cameron-esque “Rise of the Intelligent Machines”.

Step by step, we’re being equaled and, more importantly, surpassed. As we’ve seen in both biology and technology, this is a march that is taking place with ever more rapid strides. Complexity breeds still further complexity, cross-fertilizing itself into previously unconceived-of combinations. We are quickly approaching a time when humanity may come to share the world with an equal or greater intelligence. One that will have been of our own making.

This multi-part series will explore the rise of machine intelligence, research and advances that will impact its development and what this may mean for the future of human intelligence. Check back at Psychology Today for future installments. Next in Part 2: How to Build a Brain.

One Response to the Eight Grand Challenges

The closing plenary for the WorldFuture conference, held earlier this month in Vancouver, BC, was given by Thomas Frey of the DaVinci Institute. In his presentation, Frey offered up Eight Grand Challenges for humanity. It was a thought-provoking presentation, though not without its critics.

To recap, Frey stated that much of our time and resources are taken up dealing with the existing problems of the day. We are continually responding to the problems and conditions we’ve created in the past, and by doing so, we trap ourselves there. So why not establish a group of competitive challenges for the purpose of advancing humanity in the future?

Because of the scale of the challenges and the financial and physical resources needed, the competition would be limited to countries. The prize would be national prestige along with the creation of entirely new industries and markets. While the possible applications of the resulting technologies were considered very open-ended, the challenges themselves had set parameters. The challenges are:

  1. Race to the Core: First team to build a probe that makes it all the way to the center of the earth with a communication system capable of sending real-time sensory data to the surface.
  2. Viewing the Past: Create a technology capable of replaying an unrecorded event that happened no less than 20 years earlier in actual-size, in holographic form.
  3. Disassembling Matter: First team to reduce a solid block of granite (2’ cube) to particles no larger than molecules in less than 10 seconds, using less than 500 watts of power without causing an explosion or physical damage to objects more than 10′ away.
  4. The Gravity Challenge: Demonstrate gravitational control over an object weighing no less than 2,000 lbs. by doubling the force of gravity to 4,000 lbs., reducing the force of gravity by 50% to 1,000 lbs., and creating negative gravity by lifting the object 1,000 ft and returning it back to the original position with no explosions and in less than 10 minutes.
  5. The Ultimate Small Storage Particle: Create an electron-based data storage system no larger than 10 millimeters cubed that can be manufactured for less than $1 per 100 terabytes and is capable of uploading, storing, and retrieving a volume of information equal to the U.S. Library of Congress in less than 10 minutes using less than 1 watt per TB/month.
  6. Travel at the Speed of Light: Create a scientific probe capable of traveling at the speed of light for a distance no less than the Earth to Saturn with information sensors to capture stresses, impacts, and details along the way.
  7. Swarm-Bots: Create a swarm of 10,000 synchronized micro drones no larger than 10 millimeters across (height, width, and depth) capable of lifting a 250-pound person to a height of 100 feet and gently returning him/her to the ground.
  8. The 10-Second Interface: Create a direct-to-the-mind interface that will allow 25 average people to answer a series of questions within 10 seconds with no harmful side effects to the user.

Some attendees took issue with several of the challenges, noting that they violated established laws of physics. Now, I’m only an armchair physicist (that is, I’m as much a physicist as most football fans are professional quarterbacks), but I’m of the opinion we need to be open to these ideas. At first glance, some of the challenges do appear to violate relativity and the First Law of Thermodynamics, but I think they’re actually open-ended enough to have some potential work-arounds.

For instance, disassembling matter would essentially involve breaking the covalent bonding between atoms. Though the limitation of 500 watts seems impossibly low, could a kind of cascade effect be initiated that would do the work without violating the law of conservation of energy?
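
As a rough sanity check on that intuition, here is a minimal back-of-envelope sketch in Python. The ~30 MJ/kg atomization figure is an approximation for silicate minerals rather than a measured value for granite, so the result should be read as order-of-magnitude only.

    # Back-of-envelope: energy needed to atomize a 2-foot granite cube
    # versus the challenge's budget of 500 watts for 10 seconds.
    # The ~30 MJ/kg figure is a rough approximation for silicates.

    side_m = 2 * 0.3048                  # 2-foot cube edge, in meters
    mass_kg = (side_m ** 3) * 2700       # granite ~2,700 kg/m^3 -> ~612 kg

    energy_needed = mass_kg * 30e6       # ~1.8e10 J to break every bond
    energy_budget = 500 * 10             # 500 W x 10 s = 5,000 J

    print(f"Energy needed: {energy_needed:.1e} J")
    print(f"Energy budget: {energy_budget:.1e} J")
    print(f"Shortfall:     ~{energy_needed / energy_budget:.0e}x")

The shortfall is roughly six orders of magnitude, which is exactly why the cascade question matters: the allotted 500 watts could only ever serve as a trigger, with the disassembly energy drawn from some other reservoir.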

Special relativity tells us that a massless particle can travel at light speed, but anything having mass cannot. As an initial thought experiment: Would it ever be possible to isolate a chargeless particle, such as a neutron, in such a way that its mass wouldn’t be affected by acceleration? Using current technology, no, but I’m not sure this will always be the case.

Certain experiments with quantum entanglement – Einstein’s ‘spooky action at a distance’ – suggest that information may be able to travel backward in time. This is far from proven, but offers potential worth exploring.

Many cosmologists believe that dark energy and repulsive gravity may drive the expansion of the universe. Could advances in our understanding of these forces one day give us the means of controlling gravity?

The impossible is always impossible until it’s not. This doesn’t mean we can expect to violate primary laws of physics, but we need to be open to the possibility of what can be achieved.

For me, the bigger issue is one of motivation. Some of these challenges could take a century or more to be realized. National prestige and the dream of potential markets and industries are all very good, but is this sufficient to sustain participants for the long haul? Could they weather the political and ideological swings that will occur during even a fraction of this timeframe?

Two of the largest, long-term, technically advanced projects ever undertaken were the Manhattan Project and the Apollo Program. Both of these marshaled enormous physical and intellectual resources over many years. However, they weren’t launched to create new markets, but as a response to a perceived existential threat. For me, therein may lie the answer.

The world faces any number of known and unknown existential threats in the coming century and beyond. British Astronomer Royal Sir Martin Rees has put the probability of human extinction sometime during the 21st century at 50%. The risks are considerable. Some we can influence; others we have absolutely no control over. Bio-terrorism. Nanotechnology run amok. Nuclear war. Extreme climate change. Asteroid collisions. Radiation from a nearby supernova. Any one of these could destroy millions of years of evolution and thousands of years of civilization.

The possible uses for technologies arising from the Eight Grand Challenges are vast. But within them I see the necessary ingredients for finally expanding humanity beyond this one small planet. Moving into space, both within and beyond this solar system, is our best chance of continuing as a species.

As physicist and cosmologist Stephen Hawking observed, “The long-term survival of the human race is at risk as long as it is confined to a single planet.”

So my interpretation of the Eight Grand Challenges is this: We need to treat humanity the same way we treat any other irreplaceable, highly valued, absolutely critical system.

Let’s make a backup.


A World of Data

Information. There is probably nothing so important to our lives, to our culture, to our world. Information, and the ability to communicate it, has been at the foundation of humanity’s rise since before we used fire or stone tools. It has allowed us to grow from nomadic clans to villages to city-states to nations to become a world-spanning society.

As our society and technologies have grown, so too has our world of information. Its depth, breadth and sheer volume have expanded exponentially. This has occurred for millennia, probably throughout our entire history. Yet now, as we find ourselves in the midst of the Digital Age, we discover we’ve reached a point where the volume of data we generate threatens our very ability to utilize it.

Data grows exponentially. According to market research and analysis firm IDC, the world’s digital output is doubling every one and a half years. In 2010, they expected the world to create and replicate a record 1.2 zettabytes of data. That’s over a trillion billion bytes, or a stack of DVDs reaching to the Moon and back. By 2020, IDC expects this number to grow to 35 zettabytes, or enough DVDs to reach halfway to Mars. But there are reasons to believe this estimate may fall woefully short.
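
Those comparisons are easy enough to sanity-check. Below is a minimal back-of-envelope sketch in Python, assuming single-layer 4.7 GB DVDs, each 1.2 mm thick; IDC’s own assumptions may differ, so treat the output as order-of-magnitude only.

    # Sanity-check the DVD-stack comparisons for IDC's figures.
    # Assumes single-layer 4.7 GB discs, each 1.2 mm thick; the
    # published comparisons may have used different assumptions.

    DVD_BYTES = 4.7e9                  # capacity of one disc, in bytes
    DVD_THICKNESS_KM = 1.2e-6          # 1.2 mm expressed in kilometers
    EARTH_MOON_KM = 384_400            # mean Earth-Moon distance, km

    def stack_km(total_bytes):
        """Height of a stack of DVDs holding total_bytes, in km."""
        return (total_bytes / DVD_BYTES) * DVD_THICKNESS_KM

    ZB = 1e21                          # one zettabyte, in bytes
    print(f"1.2 ZB -> {stack_km(1.2 * ZB):,.0f} km of DVDs "
          f"(Earth-Moon: {EARTH_MOON_KM:,} km)")
    print(f"35 ZB  -> {stack_km(35 * ZB):,.0f} km of DVDs")

Under these assumptions the 1.2-zettabyte stack comes out at roughly 306,000 kilometers, on the order of the Earth-Moon distance, which suggests the published comparisons rest on slightly different disc or capacity figures. Either way, the scale of the numbers speaks for itself.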

How we address the issues surrounding the information explosion and the overload that accompanies it will directly impact how we develop our society and technologies in the future. My new article “Treading in the Sea of Data” is the cover story for the latest issue of The Futurist (July/August 2011) and it explores what this “data deluge” could mean for our world. The challenges created by Big Data are significant not only from a technological perspective, but from personal and societal ones as well. However, there are a number of approaches we can take in dealing with what will only continue to be a growing condition of life in this time of accelerating progress. I welcome your comments.

(This article draws from my more in-depth paper in the World Future Society’s 2011 conference volume, Moving from Vision to Action, which may be preordered from www.wfs.org.)

An Argument For The Singularity

Earlier this week, Charles Stross posted his thoughts on why he doesn’t think the Technological Singularity will happen. If you’re not familiar with the concept, the Technological Singularity is defined as that point in the future when a self-improving artificial general intelligence (AGI) achieves superintelligence. As Stross recommends, if you’re not familiar with the topic, you’d be wise to read the following first:

I’m going to take it as read that you’ve read Vernor Vinge’s essay on the coming technological singularity (1993), are familiar with Hans Moravec’s concept of mind uploading, and know about Nick Bostrom’s Simulation argument. If not, stop right now and read them before you continue with this piece. Otherwise you’re missing out on the fertilizer in which the whole field of singularitarian SF, not to mention posthuman thought, is rooted. It’s probably a good idea to also be familiar with Extropianism and to have read the posthumanism FAQ, because if you haven’t you’ll have missed out on the salient social point that posthumanism has a posse.

First, let me say that Stross is a first-class writer who brings serious thought to bear on a complex and controversial subject. I completely agree with many of his points and I can definitely see how the Singularity may never happen. But as I was reading his arguments, one thought popped out at me. To sum up why I think there’s a reasonable chance the Singularity will happen:

“Human-equivalent AI does not equal human-level AI.”

Early on, Stross makes the argument against human-equivalent AI, the building of an intelligence that thinks like us. This is an accomplishment that may never happen due to a number of issues I won’t repeat here. Re-read Stross. But then, based on my reading of it anyway, he proceeds with his argument as though human-equivalent AI were the same as human-level AI, and they’re not.

We stand on the cusp of a technological explosion that may (or may not) be unprecedented in the history of the universe. Authors such as Ray Kurzweil (The Singularity Is Near), James N. Gardner (Biocosm) and Kevin Kelly (What Technology Wants) have discussed this at length. Read them. Based on the long history of self-organizing principles in the universe – what Kelly refers to as exotropy and Max More calls extropy – this technological explosion may well result in an explosion of intelligence as well. Now, this may not occur as early as 2045, as Kurzweil has forecast. And potentially, it could happen in the next decade, though I’m skeptical of that time frame. But in geological and cosmological terms, if it happens, it will be in a relative eye blink from now. The resulting growth of intelligence would be comparable to the Cambrian Explosion, that era in Earth’s history when complex life underwent rapid and extensive diversification into many of the morphologies we see today.

My point is that technology needn’t emulate humans in order to be intelligent. We’re one accident in the history of evolution that managed to get it right. (From our perspective, anyway.) Technology is different. Unlike the long, arduous, vertical exchange of information that occurs through sexual recombination, technology moves its best solutions around in a much more free-form, horizontal manner. It’s not unlike the idea of horizontal gene transfer (HGT), which preceded complex life on Earth and has been explored by microbiologist Carl Woese and others. Historically, this process of technological recombination has required human intelligence as an intermediary, but recently this has started to change.

This, I believe, will eventually lead to a vast array of intelligences. Some will be smarter than us in certain ways, some won’t. Some might seem vaguely familiar; others will be utterly unfathomable. But ultimately these many intelligences will span the domain of possible intelligences to fill every niche in the information ecosphere. The extent of this domain is potentially very large and in it, human intelligence would be a very, very small subset.

Does this mean the Singularity will happen? I don’t know. The Singularity has come to represent different things to different people. Some, needing it to fulfill some religious circuit in their brains, see it in quasi-spiritual terms – the so-called “Rapture of the Nerds.” Others believe it will result in a “fast take-off”, leading to an uplift of our own species (and potentially others as well). To me and others, it’s “just” the development of a superintelligence, which may possibly be followed by an explosion of intelligences within our light-cone of the universe. Ultimately, there’s no reason to expect it will result in anything like an entity that cares one iota about us. This is why ideas such as Eliezer Yudkowsky’s “Friendly AI” are really important. Within this domain of possible intelligences, whether vast monolithic superintelligences or distributed networked intelligences or bio-digital amalgams, some will inevitably have enough volition to present an existential threat to the human race unless safeguards are put in place. And even these are no guarantee.

As Vernor Vinge stated in an interview I did with him a few years ago, he thinks “the Singularity is the most likely non-catastrophic event for this century and [he’d] be surprised if it doesn’t happen by 2030.” But whether the Singularity happens or not, I think we have to be prepared for a world in which we are far from the only form of advanced, tool-using, concept-manipulating intelligence. As I stated earlier, “Human-equivalent AI does not equal human-level AI.”

To which I would add: “Nor does it need to be.”