Ask any number of technology analysts, pundits, and experts and they’ll tell you that smartwatches are the future. Definitely. The technology is a sure thing. The demand is obvious.
I think it’s a terrible idea – and I always have.
Professional futurists will routinely tell you their work isn’t about making predictions. What we do is much more about exploring a range of potential futures in order to help clients reduce uncertainty in an ever-changing world. Nevertheless, prediction can’t be avoided entirely, as was the case over a year ago when I wrote about why I think smartwatches are a total nonstarter. (“Who Will Watch the Watches?” 8/14/13)
Many of the reasons I gave are now reflected in a recent Business Insider global survey (12/16/14) of over 2,000 readers. Quite simply, most of us just don’t see the point.
Smartwatches have definitely been a case of an idea chasing a market. As I discussed in my post last year, this isn’t just an expense without a purpose; it’s an implementation that’s actually more kludgy and less user-friendly than the technology it’s seeking to supplement. Note that I say supplement because these devices don’t replace or eliminate smartphones, but are instead a further addition to their ever-extending ecology. Had this technology been able to entirely replace our phones rather than merely be an add-on to them, this might have been another matter. But they’re not phones or email clients, or calendars or contact managers. They’re proxies. Quite simply, smartwatches don’t accomplish what they aspire to do on so many different levels, and they never will.
By the time smartwatches work the way we need them to, a range of other technologies will be able to accomplish the same tasks and they’ll do them so much better. While I don’t think this will necessarily be in the form of Google Glass, a range of similar and related interfaces will hit the market in the coming years that should do a much better job integrating with our smartphones and the ways we use them.
The primary point I was making in my earlier post is that every technology comes about in its own time or not at all. We cannot force it or make it do our bidding beyond a certain point. Ultimately, it’s the current conditions that will determine its fate, deciding whether or not it’s viable.
Let me add that I completely recognize there’s enormous market potential for wearables in the broader sense. This is a nascent field that is only just getting started and which will grow by leaps and bounds during this decade.
To support this, the IEEE 802.15.6 standards committee has set the stage for BANs, or Body Area Networks, which will eventually evolve into truly sophisticated wearables – from implantable medical devices, such as programmable insulin pumps, to dermal telemetric monitors and smart tattoos. Many of these will create and encourage new markets that will be embraced by consumers because they generate real value and are actually justified.
Unfortunately for smartwatches, they’re likely to remain little more than a market waiting to happen.
My new article for Scientific American explores the rising threat of cybercrime, including the near-term potential for the world’s first “online murder.” By this, I don’t mean streaming a video of someone’s death – as horrific as that is – but rather the remote manipulation of data to assassinate a specifically targeted victim.
The forecast that spurred me to write about this was included in a recent report from Europol’s European Cybercrime Centre. As awful as this internet milestone would be, I’m of the opinion there are considerably bigger threats in store for us due to recent changes in the business models many of these criminals are using.
According to the report, the underground economy has been developing a “Crime-as-a-Service” (CaaS) business model in which skilled specialists create a wide range of products and services. These cover many different forms of criminal activity and are allowing those with few technical resources to readily enter into the world of cybercrime. Unfortunately for most of us, this wasn’t a bar that needed to be lowered.
Press release: Intelligent Future Cybercrime Article in Scientific American
Since the advent of the computer age, and quite possibly before, popular media has seized on the idea of a superintelligent machine and turned it into a clichéd tale of warning. Too often a monolithic mind with an ill-defined need for global destruction or domination is eventually thwarted because a chisel-jawed protagonist identifies and exploits some flimsy flaw in the machine’s operations or logic, saving the world at the last possible moment. Of course, few of these movies or books move beyond the trope to adequately consider the difficulties of building such a machine, nor do they really explore just how alien such an intellect could actually be.
For many years, various artificial intelligence aficionados – AI professionals, cognitive scientists, philosophers and even hobbyists – have discussed the realities of just such a development in a much more serious vein. As with any debate about so speculative a subject, this has produced a number of solid ideas and papers, as well as a great deal of frivolous conjecture. Some of the better work to come from this ongoing discussion has been in the form of several papers by Oxford professor of philosophy Nick Bostrom. Now Bostrom has expanded considerably on this work in his new book, “Superintelligence.” Bostrom takes the reader step-by-step from the potential methods and difficulties in achieving a superintelligence through to the likelihood of it actually occurring within a specific time frame. Given that several expert surveys converge on a point near the middle of the current century, it seems prudent that we begin to give the matter critical consideration as soon as possible.
One of the primary dangers Bostrom discusses (as have others) is the potential for an intelligence explosion – that point when an artificial intelligence has acquired sufficient ability to direct its own self-improvement. This potentially leads to a positive feedback loop that could result in a human-level machine intelligence (HLMI) rapidly developing into a true superintelligence. Such a superintelligence is often defined as an “intellect that greatly exceeds the cognitive performance of human beings in virtually all domains of interest” and may surpass the sum intellect of the world’s entire human population. Such a mind would be completely alien to us, potentially with motives and goals very much in conflict with our own. This creates a very worrying scenario, one that could in fact bring about the end of humanity.
As the potential severity of the existential threat such an entity could present is explored, so too are the considerable difficulties if not impossibility of containing it. The strategies for containment differ greatly depending on the kinetics of the intelligence explosion, that is, whether the intelligence achieves criticality in a fast run-up taking from hours to days, or a medium one that takes months to years, or a slow run-up that occurs over decades or even centuries. Interestingly, Bostrom later makes the argument that under certain global conditions, a fast run-up might actually be the preferred option. If we have any say in the matter, that is.
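To make those kinetics concrete, here is a deliberately toy feedback model – my own sketch for this review, not anything from the book; the function, the growth rate, and the exponent k are all invented for illustration. Capability compounds on itself with each improvement cycle, and the strength of that feedback determines whether the run-up looks fast, medium, or slow:

```python
# Toy model of intelligence-explosion kinetics (illustrative only, not
# Bostrom's mathematics). Each cycle, capability grows in proportion to
# the system's current capability raised to a feedback exponent k:
#   k > 1  -> superlinear feedback, a fast run-up
#   k = 1  -> exponential growth, a medium run-up
#   k < 1  -> sublinear feedback, a slow crawl

def cycles_to_goal(k, start=1.0, goal=1000.0, rate=0.01, max_cycles=10**7):
    """Count self-improvement cycles until capability reaches `goal`."""
    capability, cycles = start, 0
    while capability < goal and cycles < max_cycles:
        capability += rate * capability**k  # improvement feeds back on itself
        cycles += 1
    return cycles

for k in (0.5, 1.0, 1.2):
    print(f"feedback exponent k={k}: {cycles_to_goal(k)} cycles")
```

Nothing here is predictive; the point is only that a small change in how strongly self-improvement feeds back on itself yields wildly different takeoff timescales, which is why the containment strategies Bostrom surveys differ so much from scenario to scenario.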
As Bostrom points out repeatedly, we need to deal with this issue well before an AI actually achieves take-off. Any sufficiently powerful superintelligence is extremely likely to have motives and goals that conflict with our own and will strive to thwart any of our efforts to change or control it. More importantly, it will have the ability to do so. Basically, we have one opportunity to get this right. There won’t be any chance for a do-over. This is the point Elon Musk was making recently when he said, “with artificial intelligence we are summoning the demon.” Supernatural references aside, developing a superintelligence is playing with superheated fire.
It should be noted that “Superintelligence,” published by Oxford University Press, is an academic, not a trade book. While well-reasoned and explained, this is no narrative romp through the subject, and the casual reader may find the material arduous at times. For instance, unless you have some familiarity with the methods used to create AI algorithms, it’s unlikely you’ll appreciate the benefits of Bayesian inference. But really you don’t need to. Far more important are concepts such as Bostrom’s explications of Yudkowsky’s idea of providing a seed AI with humanity’s “coherent extrapolated volition,” or CEV – essentially a means of coding a humanity-compatible value system into an AI, absent the many problems culture-bound morality has historically caused for our species.
The book finishes off with an exploration of possible strategies in light of the challenges that have been identified. Because the entire future existence of humanity may be at stake, this is a decision process that cannot be taken lightly. Far from being the solution to this challenging dilemma, “Superintelligence” forces us to understand just how complex a task this really will be. As important a work as Bostrom’s book is, it isn’t a book of answers. Rather, it is a clarion call to those who will one day find the solutions to this very considerable problem. That is, if we are lucky.
My article on a recent and important mathematical proof regarding the beginnings of our universe was published at Scientific American today. While it may seem odd that a futurist is writing about something that happened 13.8 billion years ago, in fact I think it’s both justified and applicable. Our understanding of the origins of our universe tells us quite a bit about its future and its eventual end. As we refine our models and they become increasingly accurate, our predictions about other cosmological phenomena also improve. Not to mention it’s great fun!
Today’s announcement of the detection of primordial gravitational waves is huge. The Harvard-Smithsonian Center for Astrophysics gave a news conference in which it described the first-ever detection of these waves, which provide a window onto the very earliest stages of our universe. Gravitational waves were the last untested prediction of Einstein’s general theory of relativity. Using a specialized telescope at the South Pole, the research group on the BICEP2 (Background Imaging of Cosmic Extragalactic Polarization) project gathered data on a signal that should only be producible by an event as massively violent as the Big Bang.
These primordial gravitational waves would’ve been generated a trillionth of a trillionth of a trillionth of a second after the Big Bang, during a period known as cosmological inflation. The inflation period would’ve expanded the nascent universe many, many times faster than the speed of light and led to an extremely, yet not perfectly, smooth and uniform cosmos.
This evidence isn’t only a direct window onto the very earliest stages of the formation of our universe; it also gives us new tools for studying it. Additionally, it provides major support for the standard cosmological model. This indicates our understanding of the cosmos is very much on track, even though there is still much for us to learn.
So why is this important to futurists? After all, it occurred at an unfathomably distant time in the past. But because this evidence allows us to more accurately model our universe’s past, it will also let us better understand its most distant future. As a result of today’s discovery, the evolution and ultimately the fate of our universe can be far better anticipated than ever before. It’s a discovery that many consider worthy of a Nobel Prize.
(I explore the BICEP2 project and cosmic inflation in greater detail in my upcoming article, “Making Waves in the Cosmos” in the July-August 2014 issue of The Futurist Magazine.)