
The Inevitability of Invention


What do nuclear technology, embryonic stem cells, synthetic life and molecular nanotechnology have in common? For many people, these are strange and frightening concepts which conjure erroneous, often very dystopic visions of the future. They’re also technologies with enormous potential; they could seriously damage our world or they could be immensely beneficial. But perhaps most importantly, all of them are inevitable.

Change means risk, so through the ages, a part of our brain has evolved to avoid big changes. Because of this, some of us are inclined to want to stop progress altogether or at least to slow it down. Some new technology or knowledge has the potential to be dangerous, and so it’s argued that it should be proscribed, banned, halted. But of course, it’s never that simple. The fact is, when the time comes, we can’t stop a technology from coming into existence any more than we can stop a freight train with our bare hands.

In his new book, “What Technology Wants”, Kevin Kelly makes the argument that technology is autonomous and has its own distinct direction and momentum. He details (what many have long known or suspected) that most inventions are made not because of someone’s singular genius, but because the time is right.

Logarithms. Calculus. Oxygen. Evolution. Photography. Steamboats. Telegraphs. Telephones. Incandescent bulbs. Typewriters. Transistors. Nuclear bombs. All of these, and so very many more, were independently discovered or invented at nearly the same time in history. The prevalence of these “simultaneous inventions” strongly suggests that when the time is right, a particular technology will be thrust upon us, whether we want it or not.

This isn’t to say that any of this is predetermined; only that once a particular set of conditions, capabilities and knowledge is in place, the next technological step is probably going to occur. While we can’t say the flux capacitor will be invented on August 23, 2029, we can make a reasonable estimate of when certain technologies are likely to be feasible. This can aid us in preparing for their arrival and in our endeavors to ensure their impact is as beneficial as possible.

Efforts to ban knowledge and the technologies it makes possible are doomed to failure. Stop research in one country and it will almost certainly continue somewhere else. Drive it underground and it will still go on, only without adequate regulation and oversight. Prohibiting emerging technologies will ensure you fall behind the competition. It will probably also mean not having a say in how that technology is developed or what direction it ultimately takes.

New technology is inevitable. Each new addition is just waiting its turn on the timeline of possibility.

The Supercomputer Race

That China is barreling ahead in its development of supercomputers should give the U.S. considerable cause for concern. China has devoted significant resources to its supercomputer program in recent years, resulting in its capture earlier this year of the number two spot on the TOP500 list. TOP500.org ranks the world’s 500 fastest supercomputers according to their performance on a dense system of linear equations. These tests yield a score based on the computer’s speed measured in double precision floating point operations per second (flops).
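To make the benchmark concrete, here’s a minimal sketch, in Python with NumPy, of how such a score is derived: solve a dense system of linear equations, time it, and convert the standard operation count into flops. The matrix size and timing approach are illustrative choices, not the official HPL benchmark configuration, and the comparison against Nebulae’s published figure is just for scale.

```python
# Minimal sketch of a LINPACK-style measurement: solve a dense linear system,
# time it, and estimate double-precision flops from the ~(2/3)n^3 operation
# count. Problem size and method are illustrative, not the official benchmark.
import time
import numpy as np

n = 2000                                   # problem size (illustrative)
A = np.random.rand(n, n)                   # dense coefficient matrix
b = np.random.rand(n)                      # right-hand side

start = time.perf_counter()
x = np.linalg.solve(A, b)                  # LU factorization plus solve
elapsed = time.perf_counter() - start

flop_count = (2.0 / 3.0) * n**3 + 2.0 * n**2   # approximate operations performed
flops = flop_count / elapsed

print(f"~{flops / 1e9:.1f} gigaflops on this machine")
print(f"Nebulae's 1.271 petaflops is roughly {1.271e15 / flops:,.0f}x faster")
```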

To give a little perspective: China didn’t have a single supercomputer ranked in the TOP500 until the mid-1990s. By June 2004, they had their first ranking ever in the top ten. In May 2010, their Nebulae system became the second fastest in the world with a performance of 1.271 petaflops. (A petaflop is 10^15 floating point operations per second.) While the Chinese still only have one tenth the number of TOP500 supercomputers the U.S. has, they’ve been quickly catching up based on this metric as well. (Note: TOP500.org ranks the world’s most powerful, commercially available, non-distributed computer systems. There are numerous military and intelligence agency supercomputers in many countries not included in this list.)

China’s Nebulae system operates from the newly built National Supercomputing Centre in Shenzhen. This is also the site of some very recent and very extensive construction which will presumably house some very serious supercomputing power in the near future. “There clearly seems to be a strategic and strong commitment to supercomputing at the very highest level in China,” stated Erich Strohmaier, head of the Future Technology Group of the Computational Research Division at Lawrence Berkeley National Laboratory.

The next major goal for supercomputers is the building of an exascale system sometime between 2018 and 2020. Such a system would be almost a thousand times faster than the Jaguar supercomputer at Oak Ridge National Laboratory, currently the world’s fastest. The U.S. Exascale Initiative is committed to developing this technology which brings with it many different challenges of scale. At the same time, Europe and China have accelerated their investment in high-performance systems, with Europeans on a faster development track than the U.S. There are concerns the U.S. could be bypassed if it doesn’t sustain the investment to stay ahead.

This isn’t just about who has the highest ranking on a coveted list – it’s not a sporting event with a big fanfare for the winner. These computers are crucial for modeling, simulation, and large-scale analysis – everything from modeling complex weather systems to simulating biological processes. As our understanding of highly complex systems grows, the only way we’re going to be able to keep moving forward is with more and ever more computing power. At the same time, exascale computing is anticipated to be a highly disruptive technology, not only because of what it will be able to do, but because of the technologies that will be created in the course of developing it. Ultimately, these technologies will end up in all kinds of new products, not unlike what happened with the Apollo space program. Falling behind at this stage of the game would put the U.S. at a big disadvantage in almost every aspect of science and product development.

Just as concerning, I believe, is what this would mean for developing an AGI or artificial general intelligence. There’s been a lot of speculation by experts in the field of AI as to when (if ever) we might develop a human-level artificial intelligence. A recent survey of AI experts indicates we could realize human-level AI or greater in the next couple of decades. More than half of the experts surveyed thought this milestone would occur by mid-century. While there are many different avenues which may ultimately lead to an AGI, it’s a good bet that most of these will require some pretty serious computing power both for research and potentially for the substrate of the AGI itself.

It’s been speculated that there are considerable risks in developing a computer with human-level or greater intelligence, but there are a number of risks in not doing so as well. Whoever builds the first AGI will very probably realize an enormous competitive advantage, both economically and politically. Additionally, the world faces a growing number of existential threats which AGIs could play a critical role in helping us to avoid.

During this time of budget deficits and spending cuts, it would be very easy to conclude that Big Science programs, such as the Exascale Initiative, aren’t crucial to the nation’s well-being. This would be a grave mistake. The question isn’t how we can afford to commit ourselves to this research, but how we can afford not to.

(NOTE: Beginning with this entry, I’ll be cross-posting my blog at the World Future Society – www.wfs.org.)

WorldFuture 2010 Recap

WorldFuture 2010 has come to a close, but the ideas and inspirations it generated will carry on well into the future. Held last week in Boston, the annual futurist conference was often profound, consistently thought-provoking, and even occasionally unsettling. With nearly a hundred presentations, workshops, tours, seminars and keynote speeches, over 900 attendees from around the world had plenty to think and talk about. This year’s conference theme was “Sustainable Futures, Strategies and Technologies”, made all the more relevant given the economic and environmental challenges the world has recently had to face.

The sustainability theme ran through a broad range of fields and topics. A small sampling of these presentations included “Global Efforts to Develop Sustainable Public Health Initiatives”, “Achieving Low-Carbon Economic Growth”, and “Sustainability and Future Human Evolution.”

While sustainability was the official conference theme, accelerated growth could easily have been designated the unofficial one. Technology ethicist Wendell Wallach addressed it in his opening speech, “Navigating the Future: Moral Machines, Techno Sapiens, and the Singularity”. Inventor and author Ray Kurzweil revisited the concept repeatedly in his keynote presentation, “Building the Human Mind.” (Kurzweil mentioned exponential growth enough times that some attendees later joked about turning it into a drinking game.) Many of the other presenters also talked about how the nature of technological progress, especially the convergence of previously unrelated fields, is driving this acceleration. For me, it was truly exciting to be among so many people who readily accept and incorporate this important concept.

Given my own inclinations, my favorite sessions tended toward the more technical. Among these were “Technology Futures and Their Massive Potential Societal Impacts”, “Humans in 2020: The Next 10 Years of Personal Biotechnology”, “Challenges and Opportunities in Space Medicine” and “The Human-Computer Interface.” Unfortunately, I couldn’t attend every presentation I wanted to see. That’s the downside of a conference of this scale: there’s no way to do it all. But then on the plus side, there’s definitely something for everyone.

For me, the best thing about WorldFuture is that while the conference themes and presentations may change from year to year, there’s always a strong belief in the need to look ahead. The world faces many serious environmental, technical and social challenges in the coming decades. We’re going to need serious foresight and planning if we want to make it a positive, sustainable future that’s supportive of our citizens, our economies and our planet.

Why the Future Isn’t Evenly Distributed


How often do we hear people ask, “Why isn’t the future here yet? Where are the gleaming cities connected by suspended walkways that look disturbingly like Habitrails? Where are the moon colonies and twinkling domed cities? Where are the robot servants dressed in 1940s maid uniforms?” And most ubiquitously, “Where, oh where, are the flying cars?”

For nearly all of human history, there have been people who looked to the future with an almost utopian fervor. Too often they’ve foreseen it as an earthly nirvana, where dirt, crime and social ills fall away, unable to adhere to its highly polished non-stick surface.

But somehow it hasn’t seemed to work out that way. Most people’s perception of our present is that it’s pretty much like our past, save for a few new gadgets and a few different problems. The future hasn’t been the universally transformative event many had hoped for.

So what happened? I think author William Gibson succinctly answered the question when he said, “The future is already here. It just isn’t evenly distributed.” With that pithy observation, Gibson summed up the entire issue. It’s here, it just isn’t everywhere. Yet.

The reality is we’re already capable of some pretty amazing things, stuff that belonged to the future not all that long ago. We’ve put a dozen men on the moon and maintained a significant presence in space for nearly four decades. We can sequence a person’s complete genome. We’ve created humanoid robots that are improving by literal leaps and bounds. We even have flying cars. So why aren’t these wonders more common, more available? Well, economics is certainly one significant factor. Given enough effort, we probably could have had a moon colony by now, but what a price tag! Likewise, domed cities, yet to what real purpose? Flying cars? Even if you get past the costs, you’re in a vehicle where even a minor failure is catastrophic. Combine the potentially high death-to-accident ratio with regulatory issues, astronomical insurance costs, the nightmare of air traffic control and the fact few of us have the necessary three-dimensional spatial skills and it’s doubtful we’ll see too many of these babies zipping across our skies anytime soon.

But there are many developing technologies that will almost certainly be embraced in the not-so-distant future, first by the wealthy, then over time by the rest of us. Biotech wonder drugs tailored to our unique DNA. Space tourism. Nanocomputers embedded in our clothing and throughout the environment. Artificial organs which our bodies won’t reject because they’re created from our own cells. Robot assistants to aid us in various tasks, hopefully minus the maid uniforms.

Technology adoption is also a question of inertia. Civic. Social. Political. Psychological. We don’t want to tear out all of the buildings and transportation infrastructure every quarter century just because there are newer, perhaps better ways to construct them. So we end up with 21st Century buildings set amidst 20th Century structures, maybe with a number of 19th Century ones scattered around and so on. Which is a good thing because it gives us continuity and makes us feel better psychologically. While parts of our minds have come to crave change, for other aspects continuity equals security. (Or inversely, change equals threat.) So we carry on moving forward along our timeline, letting the new mix in with the old, hanging onto legacy systems, products and methods, either because of cost, convenience or sentimentality.

So how do we get to a more evenly distributed future? We give it time to merge with the present, at which point, of course, it’s not the future anymore. A little over a century ago, no one had electricity or a telephone. It took decades before a quarter of the populace had these scientific marvels. Cell phones took a fraction of that time to reach a similar level of market saturation. Now there are nearly five billion cell phones in the world, which is fast approaching one hundred percent saturation. Put another way, there are tribal nomads walking around today with far more computer processing power and digital storage in their pocket than was on board the Apollo command modules. That’s a very thoroughly distributed technology.
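For illustration, the adoption pattern described above can be sketched as a simple logistic S-curve. The growth rates and midpoints in the snippet below are made-up parameters chosen only to contrast a slow, decades-long rollout with a much faster, cell-phone-style one; they aren’t fitted to historical data.

```python
# An illustrative logistic diffusion model of technology adoption. All
# parameters below are hypothetical, chosen only to contrast a slow rollout
# with a fast one; they are not fitted to real adoption statistics.
import math

def adoption(years_after_launch: float, midpoint: float, rate: float) -> float:
    """Fraction of the population that has adopted by a given year (S-curve)."""
    return 1.0 / (1.0 + math.exp(-rate * (years_after_launch - midpoint)))

# Hypothetical parameters: one technology takes decades to spread,
# the other saturates within roughly a decade and a half.
slow = {"midpoint": 40, "rate": 0.10}   # e.g., a landline-telephone-style rollout
fast = {"midpoint": 12, "rate": 0.45}   # e.g., a cell-phone-style rollout

for year in (10, 25, 50):
    s = adoption(year, **slow)
    f = adoption(year, **fast)
    print(f"year {year:2d}: slow rollout {s:5.1%} adopted, fast rollout {f:5.1%} adopted")
```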

Today’s present has so many wonders that yesterday’s present couldn’t even dream of: vaccines that have all but eradicated many lethal illnesses; vehicles that hurl us around the globe quickly and in comfort; buildings that soar high into the clouds; instant communication with anyone, anywhere, anytime; the sum of human knowledge available at our fingertips.

Make no mistake, the future’s here and it will be with you shortly. But by that time, it will be your present overlaid onto your past. Which, of course, was once someone else’s future. So just be patient.

And don’t expect the skies to be filled with flying cars anytime soon.

Who Will Watch the Watchers?

Technological advances bring all sorts of change to our world. Such change often brings with it the need for new rules and legislation. The trick is in establishing laws that create a level playing field – one which can provide us adequate protection without putting a stranglehold on new developments. For many generations, we’ve looked to our institutions for such regulation and protection. But as the pace of change quickens, these same institutions are quickly falling behind. In many respects, it can even be argued that they’re starting to generate more problems than they’re resolving.

A recent article in The Economist (“Why the rules on copyright need to return to their roots”) recaps the original intent of copyright law as a means of balancing “the incentive to create with the interest that society has in free access to knowledge and art”. But changes over the last fifty years have dramatically increased the period of copyright protection, significantly shifting this balance. Increasing the length of copyright doesn’t encourage the creation of new work, but rather limits its “dissemination, impact and influence.” This has contributed to a business model that is far more interested in profit than in the creation of art or the fostering of knowledge. As a result, today we have individuals receiving excessive fines for DRM violations and onerous battles over what is and isn’t fair use or in the public domain. This was hardly the original intent when the concept of copyright was established.

Location of the BRCA1 gene on chromosome 17.

The patent process is dealing with similar shortcomings that have even more serious repercussions for the public good. For nearly thirty years, human genes have been considered patentable based on nothing more than their isolation and identification. This locking up of something that is so obviously a part of our natural world should never have been allowed to happen. Awarding these patents has hogtied research, slowing advancement in fields that have directly impacted an untold number of lives. The striking down of patents on two genes by a federal judge in March will hopefully open the floodgates and lead to the challenging of thousands more human gene patents. The two genes in question in this recent case, BRCA1 and BRCA2, are closely associated with breast and ovarian cancers. The idea that women have died because of legal wranglings over these genes is repellent.

In the summer of 2008, the world saw the beginning of a massive financial crisis. The exact details are still being uncovered, but it’s evident that much of the blame can be laid at the feet of the systematic deregulation that occurred in the U.S. over the prior decade. The institutions responsible for this loosening did not give adequate consideration to the relationships and interdependencies between the different players, not to mention the increasingly automated, electronic trading made possible by the internet.

On a slightly more amusing, yet no less frightening note, a post in the Wall Street Journal Law Blog this week describes a few gaps in the technical grasp of some of our Supreme Court Justices. In the course of a current constitutional rights case, Justices had to inquire about the difference between “email and a pager” and whether a text sent to someone who was in the midst of texting someone else would get through. “Does it say: ‘Your call is important to us, and we will get back to you?’” one Justice asked. Another had difficulty with the concept of a service provider, asking “You mean the text doesn’t go right to me?” For a body hearing cases that will increasingly be dependent on technical knowledge, such a lack of basic understanding is very disturbing.

Whether we’re talking about copyright law, gene patents or financial regulation, the point I’m trying to make is this: In these accelerating times, our institutions, our courts, our regulatory bodies need to become as nimble, informed and responsive as the industries and technologies they seek to govern. So what can we do to ensure that they’re up to the challenge?

The Age of the Interface

My latest article “The Age of the Interface” is the cover story for the May/June issue of The Futurist which is out this week.


A properly designed and implemented interface not only facilitates system-to-system communication, but it also simplifies and automates control of otherwise complex functions. Interfaces let us operate on things that we can’t otherwise deal with and peer into regions where we couldn’t otherwise see. From steering aircraft carriers to moving atoms with atomic force microscopes, interfaces rescale our actions. They translate digital signals and invisible radiation into media that are readily accessible to our senses. In essence, they become our eyes, ears, hands, and even extensions of our minds.
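As a loose software analogy for that rescaling and translation, here’s a minimal sketch. The AtomicStage and StageInterface classes and every number in them are hypothetical, standing in for any instrument that an interface makes operable, and legible, at human scale.

```python
# A loose analogy in code: an interface that rescales coarse human input into
# nanometre-scale moves and translates the machine's state back into readable
# units. AtomicStage, StageInterface and all numbers here are hypothetical.
class AtomicStage:
    """Hypothetical nanoscale positioning stage, controlled in nanometres."""
    def __init__(self) -> None:
        self.position_nm = 0.0

    def move_nm(self, delta_nm: float) -> None:
        self.position_nm += delta_nm


class StageInterface:
    """Maps millimetres of joystick travel onto nanometres of stage travel."""
    def __init__(self, stage: AtomicStage, demagnification: float = 1e-6) -> None:
        self.stage = stage
        self.demagnification = demagnification   # 1 mm of hand motion -> 1 nm of stage motion

    def nudge(self, joystick_mm: float) -> str:
        nanometres = joystick_mm * 1e6           # convert millimetres to nanometres
        self.stage.move_nm(nanometres * self.demagnification)
        # Translate the machine's state into something a person can read at a glance.
        return f"stage position: {self.stage.position_nm:.1f} nm"


iface = StageInterface(AtomicStage())
print(iface.nudge(5.0))   # a 5 mm hand movement becomes a 5 nm stage move
```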

As astounding and varied as our interfaces are today, they’re on track to become much more so in the near future. Under development now are a range of new methods for interacting with our devices in ways that would have been inconceivable only a few years ago. With so many advances now on the horizon, we may someday look back on this period as the Golden Age of the Interface.

This article grew out of some of my observations about the profusion of interfaces that are currently under development. Some of these will be coming onto the market this year, others five to ten years from now. Overall, the trend is toward more natural ways of interfacing with one (or more) of our senses and an increasingly immediate integration with our bodies.

I’m very pleased with how the article turned out and welcome your responses and critiques.
