Artificial intelligence debates tend to lead us down two separate, but related, rabbit holes. The first is the scary, science fiction-esque future in which machines become smarter than we are and rule over humankind. The second is more realistic and likely much more imminent: economic displacement.
In our interviews with individuals at the top of their respective fields both inside and outside of academia, we discussed the validity of fear about economic displacement, and what the role of the humanities might be in an increasingly automated world. We wanted to know how artificial intelligence (AI) will affect humankind, and how humans might drive the future of artificial intelligence.
How worried should we be about robot overlords and economic displacement?
If you need to choose one of the above to panic about, go with economic displacement. We aren’t anywhere close to a future in which super-intelligent machines make humans their slaves. John Markoff, longtime technology and science reporter for The New York Times and an expert in the AI space, addressed the concept of the Singularity, the idea that an exponential increase of machine intelligence will lead to machines that are more intelligent than humans.
“The fact is, machines are not ‘learning’ in a human sense of understanding a concept,” said Markoff. “It’s not happening yet. Maybe at some point it will. I can’t say no, but I see no evidence. And most of the computer scientists who are serious practitioners of the field dismiss that model until it can be proven.”
Economic displacement is an entirely different story. According to a study conducted at the University of Oxford in 2013, 47 percent of all job categories are at risk of automation over a two-decade period. Fortunately, the doom-and-gloom outlook on the future that this estimate implies isn’t universally shared. “McKinsey has done the same kind of analysis, and they have a radically different view,” Markoff said. “They say the kind of automation that is happening is more task automation than job automation, meaning jobs will change as much as they’re going to be replaced. Jobs by and large involve people doing diverse things. You might be able to automate part of the job, but automating all of the job is more difficult.”
The fact that the “nature and composition” of the workforce, as Markoff framed it, will change due to increased automation seems inevitable. The rising fear that all jobs will disappear, however, is hyperbole. The other good news? It’s possible to develop skill sets and to pursue courses of study that increase one’s chances of surviving, and thriving, in a highly automated economy.
Humanities skills are necessary for success in an automated world
It turns out that the “new” skills needed to succeed in the 21st century and beyond aren’t so new. The great irony, says theoretical neuroscientist Vivienne Ming, is that the skills that will make a person “robot-proof”—problem-solving skills like general cognitive ability, working memory, and attention—are the exact same skills that have been predictive of success throughout history. Even in the tech industry today, employers are looking for craftspeople who can solve problems in the technology space, not people who have mastered one fixed set of skills. Problem-solving, according to Ming, is an umbrella term that captures qualities that are not only predictive of life outcomes, but are also intervenable in young children. Through her company, Socos LLC, Ming creates technology that sends personalized recommendations to parents of young children via text to try to facilitate the development of these problem-solving skills.
As automation continues to permeate our economy and our lives, flexibility and diversified skill sets will become increasingly important. Tim Kobe, founder and CEO of design firm Eight Inc., worked closely with Steve Jobs for 12 years, beginning in 1997 when Eight Inc. was hired to produce Apple product launches. Kobe believes that the way in which Jobs effectively integrated the left and right sides of his brain was integral to his astronomical success.
“I think the humanities are essential,” Kobe said in reference to building his Eight Inc. team. Potential employees of Eight Inc., which aims to create meaningful human experiences through design, need to understand that everything they do is a reflection of the wider culture, Kobe said. “Particularly today, there’s an emphasis on STEM and the development of the more analytical way of thinking about things, but we haven’t seen that in the most successful people. In the most successful people, we’ve actually seen an incredible balance in the ability to move between right and left brain capabilities.”

This balance suggests a comprehensive approach to education that often falls by the wayside as humanities departments face looming budget cuts and pressure to demonstrate their practicality. Yet paradoxically, what’s practical in the short term may be far from practical in the long term.
“I’ve been traveling the country having a debate with Jerry Kaplan, who wrote Humans Need Not Apply,” Markoff said. “We both have come to the conclusion that one of the most valuable things to have in this economy is a liberal arts education…We’re seeing a world where people do different things every two to five years.” Students in the 21st century need, above all, to learn how to learn. The humanities, with their emphasis on critical thinking and problem-solving, instill this drive and skill set in their students.
Studying English and art history has served Soraya Darabi, co-founder of lifestyle e-commerce platform Zady, extremely well in her own career. “The humanities have allowed me to float between disciplines,” said Darabi. And no matter what our automated future looks like, individuals who are adaptive, resilient, and intellectually curious will be at an advantage. “The smartest people I’ve met in my life are academics,” Darabi said, “and academics tend to be lifelong learners.”
Using the humanities to determine the future of AI
When determining what the future of AI should look like, we must take into account what we want the future of our society to look like. “As a society, we demonstrate who we are as a human species based upon how we spend our time and where we devote other resources,” said Elñora Tena Webb, president of Laney College in Oakland. “The degree to which we will experience greater levels of health and wellbeing as a society is highly correlated, and arguably causal, with the degree to which we invest in the humanities.”
An exclusive emphasis on STEM education elevates analytical skills over everything else that shapes our values and our culture. Norberto Grzywacz, Dean of the Graduate School of Arts and Sciences at Georgetown University, illustrated this principle with a story from the book The Meaning of Human Existence by Edward O. Wilson: if an alien race that had achieved our level of scientific progress one hundred million years ago landed on Earth, these aliens would care only about our culture and creativity, not our science or technology. The technological advancement we pride ourselves on should never take precedence over everything else that makes our society unique.
“I think science has contributed a lot to our progress as a species,” said Grzywacz, “but like Wilson, I feel that humanities are the soul of humankind.”
Preserving this “soul” is a central question in the development of AI. If we divorce philosophy and ethics from the discussion of how scientists create AI, we could end up with a future that appeals to no one. “While I don’t think it’s likely that these machines are going to be self-aware anytime soon, I think the conversation they’ve started is a really important one,” said Markoff. “Increasingly, these machines won’t be self-aware, but they will be autonomous. That means that the values that designers have, which go into the making of the machines, become increasingly important.”
Once machines are autonomous, it becomes too late to input these values. This humanist conversation about ethics is one that needs to be happening now, and it should supersede the fearmongering and robot panic that too often dominate AI conversations. “I think it’s possible to design these systems so that the human is at the center of the system and the system has human values,” said Markoff. “That’s why I’m still optimistic. Humans are still in the loop at this point, and they’re going to stay in the loop. That gives us the ability to design these machines in ways that actually improve humans, rather than put them on the sidelines.”
Perhaps one way to ensure that humans remain in the loop is to focus on the potential of machines to enhance wellbeing and expand human potential, rather than replace humans altogether. “I think AI of the future is not artificial intelligence; it’s augmented intelligence,” said Ming. “When I describe myself, I am not someone that builds AI. I build glasses. I enhance people’s vision…We will have super-intelligent people. I realize it sounds like science fiction, but over the next 20-30 years, the definition of what it means to be human will fundamentally change.”
Who better to analyze and reflect on what it means to be human than humanists? After all, it’s what they’ve been doing all along.