Archive for the Interfaces Category

Swearing Computers – AI’s Challenge of Toxic Degeneration

Neural toxic degeneration research at AI2 and UW

Allen Institute for AI and University of Washington researchers, top row from left, Samuel Gehman, Suchin Gururangan, Maarten Sap, and bottom row from left, Yejin Choi, Noah A. Smith. (AI2 Photo)

After defeating the all-time human champions of the game show Jeopardy! in 2011, IBM’s Watson supercomputer was scheduled for an upgrade. Wanting to give it access to a broader vocabulary, its programmers introduced the AI to Urban Dictionary, a web-based collection of cultural phrases and slang. Unfortunately, Watson was soon swearing profusely as a result.

Something similar happened in 2016, when Microsoft introduced its new Twitter chatbot, Tay, to the world. Built around the persona of a teenage girl, the program devolved into a racist neo-Nazi after less than 24 hours on the internet and had to be taken down.

Why do natural language AIs frequently degenerate in this way, cursing and spouting biased, racist, and otherwise toxic messages? I recently explored these issues for GeekWire, speaking with a team of researchers from the Allen Institute for AI and the University of Washington who are studying this problem. While it is a very challenging one, their work may help address an issue that grows ever more important as we increasingly rely on AIs to communicate with us in natural language.

Who Will Watch the Watches?

I love technology.

Good. Now that that’s out of the way, allow me to clarify. I love good technology. Jetpacks. Flying cars. Meal-in-a-pill. Not good. These won’t fly (some of them literally) because they don’t meet all of the criteria of good technology.

Consider that in order to move all the way from concept to prototype to marketable product, every idea has to pass through a succession of filters. Is the idea possible within the laws of physics as they’re currently understood? If not, then forget retro-causality (time machines), perpetual motion, faster-than-light travel and communication, and the like. Do our existing, or soon-to-exist, engineering capabilities, materials, and tolerances allow us to realize the idea, or will it remain on the drawing board for centuries, as did Leonardo da Vinci’s flying machines or Charles Babbage’s Difference Engine? Can a need be established? That is, can consumers, corporations, or the military be convinced this is something they must have? Because without a perceived need, it will surely go the way of the Edsel.

And what of other institutions? Regulatory bodies, insurers, political organizations, and others must be persuaded to support, or at least tolerate, the new tech. And ultimately, is this an idea that is right for its time? An invention must fit within the established mores, accepted behaviors, and realities of user understanding and functionality. Without all of these, the idea will be stillborn. Given all this, it may seem a miracle that any new tech ever comes to life and gets the opportunity to walk the earth, even if only for a few years.

So all this said, I’m going to forecast that the revisited concept of a wrist-worn computer in the form of a smartwatch, be it an iWatch, a Pebble, or what have you, will be very short-lived. Don’t get me wrong. These are going to be pushed into the market because so many corporations have jumped on the bandwagon and they all smell profit. But really, most of this Dick Tracy two-way-radio concept was technologically feasible over a decade ago. That’s when these companies should’ve been pushing it. Not now, just when all manner of head-mounted displays (HMDs), such as Google Glass, are coming to market.

Think about it: Why would anyone want to conspicuously look down at their wrist instead of surreptitiously glancing at an eye-level monitor? Why block your view and call attention to yourself when snapping a picture, when you could do it by subtly turning your head and issuing an eye-tracked command or vocal instruction – and soon a subvocal one? Why pass up the ability to have virtual images and data superimposed over the real world as augmented reality comes online? And WHY would you want to squint at an image so small it can fit on your wrist (or else be encumbered by an oversized wearable), when considerably better, larger display interfaces will be available? Short of projecting an image (which creates all kinds of stabilization challenges), watches will remain limited in their usable display size because of our eyes’ typical minimum focusing distance. In the meantime, HMDs, retinal displays, and, in the next decade, active contact lenses will provide ever-larger, more immersive display experiences.

Of course, HMDs will have their own challenges. We can’t ignore that, for a time, there will be some resistance to the new headwear. There are always those who reject the new because they feel threatened by its arrival. That’s natural, and at times it may even be appropriate. But this isn’t one of those times. Turning to inferior technologies and interfaces is rarely a winning strategy. Ignoring the future never is.

This is cross-posted to my blog at the World Future Society.