The Future Era of Space Mining


An astronaut secures the “bag” holding a huge space rock in an illustration of NASA’s proposed Asteroid Redirect Mission
Credit: NASA

The new era of space mining is the topic of my latest article for Scientific American. The recent passage of the Space Act of 2015 by Congress will finally eliminate a lot of uncertainty for this nascent industry, assuming President Obama signs it into law. Given the enormous cost of lifting materials and supplies into orbit and beyond, the ability to draw from off-planet resources is critical for the continuing development of space exploration and colonization. As Eric Anderson, co-founder and co-chairman of Planetary Resources, stated following Congressional passage of the Act:

“Many years from now, we will view this pivotal moment in time as a major step toward humanity becoming a multi-planetary species. This legislation establishes the same supportive framework that created the great economies of history, and it will foster the sustained development of space.”

Given the continuing progress being made by the developing commercial space industry, including yesterday's first successful VTOL (vertical take-off and landing) rocket flight by Blue Origin, it seems to me we may be witnessing the start of one of the great transformational periods in human history.

Cybercrime 2025 and Beyond

San Francisco was host to WorldFuture 2015 earlier this month. With around 700 attendees and professional futurists from all over the globe, it remains the world’s oldest and best known futures conference.

On Saturday, the conference’s first full day, I had the opportunity to give my session, “Cybercrime 2025 and Beyond,” to a full house, and it seemed to be very well received. I chose to take a slightly different tack with this talk and explored the development of cybercrime and the search for its solutions as part of the ongoing coevolution between humanity and technology. I ended with a call for building a biologically inspired cyber-immune system that could protect not only our future digital assets, but potentially our physical infrastructure as well. While there is research being done in this area, there isn’t a focused program for bringing it to fruition. Yet.

Given the growing threat of cybercrime in recent years, it’s no surprise there were several speakers at the conference who addressed the subject. “Future Crimes” author Marc Goodman opened the morning with an excellent talk that drew from the wide range of topics he covers in his new book. Later in the afternoon, Roey Tzezana, a nanotechnology scientist and futurist from Tel Aviv University, spoke about his team’s cybercrime forecast model, which looks at the threat profiles that are generated when different technologies are combined. Fascinating stuff.

Optocapacitance in Scientific American

Gold nanoparticles refract light differently based on their size
My latest Scientific American article “Optocapacitance Shines New Light on the Brain” explores an exciting new technology, tentatively named optocapacitance. While I’ve been excited about its better-known cousin, optogenetics, for years, I think this has greater potential for therapeutic application and augmentation uses in living human beings. Treatment of macular degeneration and certain other retinal diseases has been one suggested application. It could also offer an effective method for integrating robotic and neural prosthetics with our bodies. Direct connections between various devices and our nervous system could become possible, providing improved sensory feedback and control. As far as affective computing goes, I could see it one day leading to direct communication with our emotions for diagnostic, therapeutic and entertainment purposes.

Is Time Running Out for Smartwatches?

Ask any number of technology analysts, pundits, and experts, and they’ll tell you that smartwatches are the future. Definitely. The technology is a sure thing. The demand is obvious.

I think it’s a terrible idea – and I always have.

Professional futurists will routinely tell you their work isn’t about making predictions. What we do is much more about exploring a range of potential futures in order to help clients reduce uncertainty in an ever-changing world. Nevertheless, prediction can’t entirely be avoided, which was the case over a year ago when I wrote about why I think smartwatches are a total nonstarter. (“Who Will Watch the Watches?” 8/14/13)

Many of the reasons I gave are now reflected in a recent Business Insider global survey (12/16/14) of over 2,000 readers. Quite simply, most of us just don’t see the point.

Smartwatches have definitely been a case of an idea chasing a market. As I discussed in my post last year, this isn’t just an expense without a purpose, it’s an implementation that’s actually more kludgy and less user-friendly than the technology it’s seeking to supplement. Note that I say supplement, because these devices don’t replace or eliminate smartphones, but are instead a further addition to their ever-extending ecology. Had this technology been able to entirely replace our phones rather than just be an add-on to them, this might have been another matter. But they’re not phones or email clients, or calendars or contact managers. They’re proxies. Quite simply, smartwatches don’t accomplish what they aspire to do on so many different levels, and they never will.

By the time smartwatches work the way we need them to, a range of other technologies will be able to accomplish the same tasks and they’ll do them so much better. While I don’t think this will necessarily be in the form of Google Glass, a range of similar and related interfaces will hit the market in the coming years that should do a much better job integrating with our smartphones and the ways we use them.

The primary point I was making in my earlier post is that every technology comes about in its own time or not at all. We cannot force it or make it do our bidding beyond a certain point. Ultimately, it’s the current conditions that will determine its fate, deciding whether or not it’s viable.

Let me add that I completely recognize there’s enormous market potential for wearables in the broader sense. This is a nascent field, one that will grow by leaps and bounds during this decade.

To support this, the IEEE 802.15.6 standards committee has set the stage for BANs, or Body Area Networks, which will eventually evolve into truly sophisticated wearables – from implantable medical devices such as programmable insulin pumps, to dermal telemetric monitors, to smart tattoos. Many of these will create and encourage new markets that will be embraced by consumers because they generate real value and are actually justified.

Unfortunately for smartwatches, they’re likely to remain little more than a market waiting to happen.

Cybercrime in Scientific American

My new article for Scientific American explores the rising threat of cybercrime, including the near-term potential for the world’s first “online murder.” By this, I don’t mean streaming a video of someone’s death – as horrific as that is – but rather the remote manipulation of data to assassinate a specifically targeted victim.

The forecast that spurred me to write about this was included in a recent report from Europol’s European Cybercrime Centre. As awful as this internet milestone would be, I’m of the opinion there are considerably bigger threats in store for us due to recent changes in the business models many of these criminals are using.

According to the report, the underground economy has been developing a “Crime-as-a-Service” (CaaS) business model in which skilled specialists create a wide range of products and services. These cover many different forms of criminal activity and are allowing those with few technical resources to readily enter the world of cybercrime. Unfortunately for most of us, this wasn’t a bar that needed to be lowered.

Press release: Intelligent Future Cybercrime Article in Scientific American

Review: Superintelligence by Nick Bostrom

Since the advent of the computer age, and quite possibly before, popular media has seized on the idea of a superintelligent machine and turned it into a clichéd tale of warning. Too often a monolithic mind with an ill-defined need for global destruction or domination is eventually thwarted because a chisel-jawed protagonist identifies and exploits some flimsy flaw in the machine’s operations or logic, saving the world at the last possible moment. Of course, few of these movies or books move beyond the trope and adequately consider the difficulties of building such a machine, nor do they really explore just how alien such an intellect could actually be.

For many years, various artificial intelligence aficionados – AI professionals, cognitive scientists, philosophers and even hobbyists – have discussed the realities of just such a development in a much more serious vein. As with any good debate about such a speculative subject, this has resulted in a number of good ideas and papers, as well as a great deal of frivolous speculation. Some of the better work to come from this ongoing discussion has been in the form of several papers by Oxford professor of philosophy Nick Bostrom. Now Bostrom has expanded considerably on this work in his new book, “Superintelligence.” Bostrom takes the reader step-by-step from the potential methods and difficulties in achieving a superintelligence through to the likelihood of it actually occurring within a specific time frame. Given that several expert surveys converge on a point near the middle of this current century, it seems prudent that we begin to give the matter critical consideration as soon as possible.

One of the primary dangers Bostrom discusses (as have others) is the potential for an intelligence explosion – that point when an artificial intelligence has acquired sufficient ability to direct its own self-improvement. This potentially leads to a positive feedback loop that could result in a human-level machine intelligence (HLMI) rapidly developing into a true superintelligence. Such a superintelligence is often defined as an “intellect that greatly exceeds the cognitive performance of human beings in virtually all domains of interest” and may surpass the sum intellect of the world’s entire human population. Such a mind would be completely alien to us, potentially with motives and goals very much in conflict with our own. This creates a very worrying scenario, one that could in fact bring about the end of humanity.

As the potential severity of the existential threat such an entity could present is explored, so too are the considerable difficulties if not impossibility of containing it. The strategies for containment differ greatly depending on the kinetics of the intelligence explosion, that is, whether the intelligence achieves criticality in a fast run-up taking from hours to days, or a medium one that takes months to years, or a slow run-up that occurs over decades or even centuries. Interestingly, Bostrom later makes the argument that under certain global conditions, a fast run-up might actually be the preferred option. If we have any say in the matter, that is.

As Bostrom points out repeatedly, we need to deal with this issue well before an AI actually achieves take-off. Any sufficiently powerful superintelligence is extremely likely to have motives and goals that conflict with our own and will strive to thwart any of our efforts to change or control it. More importantly, it will have the ability to do so. Basically, we have one opportunity to get this right. There won’t be any chance for a do-over. This is the point Elon Musk was making recently when he said, “with artificial intelligence we are summoning the demon.” Supernatural references aside, developing a superintelligence is playing with superheated fire.

It should be noted that “Superintelligence,” published by Oxford University Press, is an academic book, not a trade book. While well-reasoned and clearly explained, this is no narrative romp through the subject, and the casual reader may find the material arduous at times. For instance, unless you have some familiarity with the methods used to create AI algorithms, it’s unlikely you’ll appreciate the benefits of Bayesian inference. But really you don’t need to. Far more important are concepts such as Bostrom’s explications of Yudkowsky’s idea of providing a seed AI with humanity’s “coherent extrapolated volition,” or CEV – essentially a means of coding a humanity-compatible value system into an AI, absent the many problems culture-bound morality has historically caused for our species.

The book finishes off with an exploration of possible strategies in light of the challenges that have been identified. Because the entire future existence of humanity may be at stake, this is a decision process that cannot be taken lightly. Far from being the solution to this challenging dilemma, “Superintelligence” forces us to understand just how complex a task this really will be. As important a work as Bostrom’s book is, it isn’t a book of answers. Rather, it is a clarion call to those who will one day find the solutions to this very considerable problem. That is, if we are lucky.
