The world around us is continually changing, especially when it comes to our livelihoods. Automation, global competition, changing business models, artificial intelligence, the Fourth Industrial Revolution, robotics, big data — all of it is driving increasingly rapid transformations in the kinds of work we do and the ways we do it.
The COVID pandemic only added to this disruption. Now, as the economy strives to ramp back up, we face an ironic disconnect: millions of jobs are going unfilled while millions of would-be workers remain unemployed. According to the U.S. Labor Department, in May 2021 there were 9.3 million available jobs, even as 9 million Americans were looking for work.
While some pundits point to the unemployment supplements initiated during the pandemic as a major cause, this is only one of many factors and probably not the most significant. Skill mismatches; burnout among essential workers in many fields, including healthcare, education and food preparation; and pent-up discontent over diminishing real wages over the past 40 years are all contributing to the problem — and these are only a few of the causes behind the current situation.
These issues will not be resolved with a single solution, but in looking ahead at what it will take to meet our future labor needs, I think one approach stands out. A comprehensive transition to a lifelong learning model across society could go far in addressing many of these challenges, particularly in continually building the new skills that will be needed to power tomorrow’s economy. The one-job-for-life ideal of the mid-20th century is long gone (if it ever truly existed), calling for a much-needed shift in expectations in our world of accelerating change. To be sure, new approaches to upskilling and certification models, post-secondary funding and even labor laws will be required to realize this crucial reimagining of our education and career tracks. These are all matters I continue to explore with my clients so we may all be better prepared to build our preferred futures.
As a futurist, I’m continually researching emerging technologies and trends, speaking and corresponding with the scientists, labs and businesses that are helping to build our future.
Recently, I thought it might be fun to share some of what I learn via different channels. You can check out some of these articles at GeekWire, the online technology news site, where I’ve been exploring the latest developments in quantum computing, natural language processing, machine learning solutions that protect the environment, deepfakes, coding for the common good and more. I hope you enjoy them!
Back in 2018 and 2019, when I was writing Future Minds, I coined a phrase I’d never run across before: neureality. (I’m not saying no one had ever thought of it, just that I’d never heard or read it.) My idea at the time was that our technologies were rapidly approaching the stage when they would be able to directly alter our perceived reality in much the same way virtual reality does today. However, because this would be done using brain-computer interfaces and other technologies, it had the potential to be far more convincing and immersive. This, I thought, would be neureality.
As I’ve had time to mull this idea, however, I’ve realized that much of this anticipated development has already arrived. To borrow from William Gibson, “it’s just not evenly distributed yet.” While I’m sure there will be many beneficial uses of neureality in the future, it’s already being implemented in ways that are concerningly detrimental. I’ll be exploring ideas about neureality in the coming months and look forward to sharing more with you soon.
After defeating the all-time human champions of the game show Jeopardy! in 2011, IBM’s Watson supercomputer was scheduled for an upgrade. Wanting to give it access to a broader vocabulary, its programmers introduced the AI to Urban Dictionary, a web-based collection of cultural phrases and slang. Unfortunately, as a result, Watson was soon swearing profusely.
Something similar happened in 2016, when Microsoft introduced its new Twitter chatbot, Tay, to the world. Built around the persona of a teenage girl, the program devolved into a racist neo-Nazi after less than 24 hours on the internet and had to be taken down.
Why do natural language AIs so frequently degenerate in this way, cursing and spouting biased, racist and otherwise toxic messages? I recently explored these issues for GeekWire, speaking with a team of researchers from the Allen Institute for Artificial Intelligence and the University of Washington who are studying this problem. While it remains very challenging, their work may help address an issue that grows ever more important as we increasingly rely on AIs to communicate with us using natural language.
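One way to see why this problem is so hard: the simplest defense, a blocklist of banned words, catches only exact matches and misses toxicity expressed without them. The tiny sketch below is purely illustrative (the word list and function are my own invention, not the researchers’ method), but it shows how a literal filter passes an insult that uses no banned vocabulary at all.

```python
# Illustrative sketch: a naive blocklist filter for model output.
# This is NOT the Allen Institute team's approach -- just a toy example
# showing why word-level filtering is insufficient for detecting toxicity.

BLOCKLIST = {"idiot", "stupid"}  # tiny hypothetical word list

def is_blocked(text: str) -> bool:
    """Return True if any blocklisted word appears in the text."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    return not BLOCKLIST.isdisjoint(words)

# Caught: contains an exact blocklisted word.
print(is_blocked("You absolute idiot!"))            # True
# Missed: clearly insulting, but uses no banned words.
print(is_blocked("You are not very bright, are you?"))  # False
```

Because toxicity so often lives in phrasing, context and implication rather than in individual words, researchers have to train models that judge meaning, not just vocabulary — which is exactly what makes the problem difficult.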
In writing my latest book, Future Minds: The Rise of Intelligence from the Big Bang to the End of the Universe, it was essential to properly define intelligence in an appropriate context. However, my research turned up more than 200 different definitions of this nebulous word, most of them either overly restrictive or naively broad. This is due in large part to intelligence being what AI pioneer Marvin Minsky called a “suitcase word,” his term for words carrying many meanings and associations. In a 1998 interview with Edge, Minsky said of suitcase words (like “intelligence,” “intuition” or “consciousness”): “all of us use these to encapsulate our jumbled ideas about our minds. We use those words as suitcases in which to contain all sorts of mysteries that we can’t yet explain.”
Because of this and for reasons I expand on throughout my book, I finally arrived at a broad definition that encompasses the ideas I’d been exploring.
Intelligence: An emergent system’s ability to respond to its environment in order to improve its conditions, perpetuate itself and maximize its future freedom of action.
This framing allows intelligence to be seen as a nearly inherent property of the universe: something not limited to a single species or substrate, but an ongoing optimization in any system able to successfully perpetuate itself into the future. On this view, while it may take considerable time, it seems likely that advanced technological intelligences will one day become a reality.