Cybercrime in Scientific American

My new article for Scientific American explores the rising threat of cybercrime, including the near-term potential for the world’s first “online murder.” By this, I don’t mean streaming a video of someone’s death – as horrific as that is – but rather the remote manipulation of data to assassinate a specifically targeted victim.

The forecast that spurred me to write about this was included in a recent report from Europol’s European Cybercrime Centre. As awful as this internet milestone would be, I’m of the opinion there are considerably bigger threats in store for us due to recent changes in the business models many of these criminals are using.

According to the report, the underground economy has been developing a “Crime-as-a-Service” (CaaS) business model in which skilled specialists create a wide range of products and services. These cover many different forms of criminal activity and are allowing those with few technical resources to readily enter the world of cybercrime. Unfortunately for most of us, this wasn’t a bar that needed to be lowered.

Press release: Intelligent Future Cybercrime Article in Scientific American

Rise of the Intelligent Machines

I’m beginning a short series at Psychology Today about the ongoing advances being made in machine intelligence. I’d originally thought about using “The Ascent of Machine” for the series title — after Jacob Bronowski’s wonderful work, “The Ascent of Man”, which I found so inspiring when it first came out. But I thought it sounded a bit kludgy and so I ultimately decided on the Cameron-esque “Rise of the Intelligent Machines”.

Step by step, we’re being equaled and, more importantly, surpassed. As we’ve seen in both biology and technology, this is a march that is taking place with ever more rapid strides. Complexity breeds still further complexity, cross-fertilizing itself into previously unconceived-of combinations. We are quickly approaching a time when humanity may come to share the world with an equal or greater intelligence. One that will have been of our own making.

This multi-part series will explore the rise of machine intelligence, the research and advances that will shape its development, and what this may mean for the future of human intelligence. Check back at Psychology Today for future installments. Next in Part 2: How to Build a Brain.

The Inevitability of Invention

What do nuclear technology, embryonic stem cells, synthetic life and molecular nanotechnology have in common? For many people, these are strange and frightening concepts that conjure erroneous, often deeply dystopian visions of the future. They’re also technologies with enormous potential; they could seriously damage our world or they could be immensely beneficial. But perhaps most importantly, all of them are inevitable.

Change means risk, so through the ages, a part of our brain has evolved to avoid big changes. Because of this, some of us are inclined to want to stop progress altogether, or at least to slow it down. Some new technology or knowledge has the potential to be dangerous, and so it’s argued that it should be proscribed, banned, halted. But of course, it’s never that simple. The fact is, when the time comes, we can’t stop a technology from coming into existence any more than we can stop a freight train with our bare hands.

In his new book, “What Technology Wants”, Kevin Kelly makes the argument that technology is autonomous and has its own distinct direction and momentum. He details what many have long known or suspected: that most inventions are made not because of someone’s singular genius, but because the time is right.

Logarithms. Calculus. Oxygen. Evolution. Photography. Steamboats. Telegraphs. Telephones. Incandescent bulbs. Typewriters. Transistors. Nuclear bombs. All of these, and so very many more, were independently discovered or invented at nearly the same time in history. The prevalence of these “simultaneous inventions” strongly suggests that when the time is right, a particular technology will be thrust upon us, whether we want it or not.

This isn’t to say that any of this is predetermined; only that once a particular set of conditions, capabilities and knowledge is in place, the next technological step is probably going to occur. While we can’t say the flux capacitor will be invented on August 23, 2029, we can make a reasonable estimate of when certain technologies are likely to be feasible. This can aid us in preparing for their arrival and in our endeavors to ensure their impact is as beneficial as possible.

Efforts to ban knowledge and the technologies it makes possible are doomed to failure. Stop research in one country and it will almost certainly continue somewhere else. Drive it underground and it will still go on, only without adequate regulation and oversight. Prohibiting emerging technologies will ensure you fall behind the competition. It will probably also mean not having a say in how that technology is developed or what direction it ultimately takes.

New technology is inevitable. Each new addition is just waiting its turn on the timeline of possibility.