Interview with Dr. Kate Darling, researcher at the MIT Media Lab
The year is 2020. Society is overrun by machines. They are everywhere: transporting packages, cleaning floors, even ordering groceries. They are in hospitals saving lives, while also on battlefields sowing death.
Is this it? After thousands of years of dominance, has the human race met its match? Is the era of machines upon us?
According to Dr. Kate Darling, leading robot ethicist and researcher at the MIT Media Lab, we still have plenty of time before any potential robot apocalypse. As ubiquitous as AI has become, it's still nowhere near capable of taking all our jobs and forcing the human race into servitude.
We are however at a critical junction. The decisions we make about what kind of technology we develop and how we use it will have long-lasting consequences on the economy and society. Kate was gracious enough to hop on a video call and explain those choices and consequences to us.
It's not the tech that's disruptive. It's how we use it.
Alex: What got you interested in AI and robots?
Kate: The honest answer is that I read way too much science fiction as a young adult. My dad had this big collection of sci-fi from the 1960s and 70s, and I've always been fascinated by robots and artificial intelligence.
I studied law originally, but I later somehow managed to find a job where I get to work with robots.
Alex: Do you have a favorite science fiction author?
Kate: Ursula K. Le Guin. Her work was less about technology and more about how societies change, which is similar to how I think about technology. I'm interested in how technology influences the way societies interact.
Alex: Great, because that's exactly what I'd like to talk about today! Broadly speaking, when it comes to the future of technology, are you an optimist or a pessimist? Will we live in harmony with machines, or will they destroy us all?
Kate: I'm both an optimist and a pessimist. I have concerns, but not necessarily about the technology itself. I'm more worried about the choices that we humans make when using technology. At the same time, I'm also really excited about the potential of technology.
I do have a pet peeve when it comes to both how we talk about robots and AI, and how we're designing and trying to integrate them. We're constantly comparing robots and artificial intelligence to humans and human intelligence, when really, AI has an entirely different skill set!
Dr. Kate Darling is a leading expert in robot ethics and MIT Media Lab's intellectual property policy advisor. She graduated from law school with honors and holds a doctorate from the Swiss Federal Institute of Technology (ETH Zurich). Her passion for social robotics has led her to explore the emotional connection between people and lifelike machines. Kate is also a writer whose work on human-robot relationships has been featured in leading newspapers, journals and magazines from around the world.
Kate's first book, The New Breed: What Our History with Animals Reveals about Our Future with Robots, is scheduled for release in April 2021. When she isn't writing cutting-edge essays, giving speeches or organizing workshops, Kate can be found tweeting at @grok_ about eating Cheerios for dinner.
Kate: This limitation in the way we think about AI is what fuels our fears about robots taking all our jobs, or taking over the world. Instead, we should be thinking about AI with other, more useful analogies.
Right now, I'm writing a book called The New Breed that compares the integration of robots to how we used animals in the past. Each species has its own unique skill set, which humans have used for all kinds of things, from work, to weaponry, to companionship. We used animals to supplement our abilities, and we partnered with them to achieve our goals. That's a far better analogy for our future relationship with AI and robotics.
Alex: Moving from human labor to animal labor still meant major changes for our economy and way of life. The same thing happened when we transitioned from animal labor to machine labor. Do you think AI will be just as disruptive?
Kate: Yes. AI is a transformative technology; it will cause major disruptions, the way other technologies have in the past.
That being said, I think public discourse focuses too much on AI itself being the replacement for human workers. Instead, we should reconsider how, under our unbridled capitalist system, we have chosen to treat workers like a disposable commodity. And that is a choice!
At the end of the day, it's our economic system and the decisions humans make on how we use technology that lead to harmful disruptions—not the technology itself.
Alex: Do you have examples of systemic reform that we should implement to blunt the disruptive nature of AI?
Kate: We know from past experience that we should do everything we can to soften the harm done to workers who are victims of disruption. That can mean retraining people. It can mean making conscious choices to use technology to help rather than replace workers. It can mean not relegating workers to unskilled, highly monitored jobs when some of their tasks are inevitably automated.
All of these are choices that companies and governments can make. It's hard to give a more specific answer, because the way these disruptions happen is very industry-specific, and therefore requires industry-specific responses.
Alex: A concern we often hear from experts is that the industries that benefit from the disruption aren't the same as those that suffer most from it. How do we get the Silicon Valley tech crowd—who is making a lot of money off of these innovations—to be more mindful of the concerns of more vulnerable sectors of the economy?
Kate: That gap between the winners and losers of this disruption is indeed cause for concern. The people making money off of these innovations will often say things like, "From a bird's eye perspective, AI won't take any jobs because of how many jobs it's creating!" That may be true, but those new jobs are in a different industry.
To be more mindful of the vulnerable parts of the economy, those who design technology should try to be more creative about what they're building and what it's intended to do. Design decisions go a long way; they entrench the way we use things.
For example, it really annoys me that everyone is so focused on developing humanoid robots. Do a Google image search for the word "robot," or even for "AI" and you'll see lots of robots with a torso, a head, two arms and two legs. In some areas that makes sense: We live in a world built for humans, with stairs, narrow passageways, buttons to press, and so on.
But that kind of thinking really limits us. We could be thinking much more creatively about cheaper, better robots that function in those spaces. We could be thinking about recreating the spaces themselves. Perhaps if we design spaces in ways that are accessible to a wide variety of robots, those spaces would also become more accessible to a wider variety of people.
Everyone could benefit from these industries starting to think more creatively about how robots could assist rather than replace humans.
Alex: So no Wall-E like future where we all get to float around and watch TV while the robots do all the work?
Kate: As much as I love that movie, the potential of AI and robotics isn't to take away all the work and let us spend all day on lounge chairs. If we use it correctly, AI should help us find more fulfilling work.
We want robots to take over the jobs people don't want to do: jobs that are dangerous, or just very dull and easily automated. Replacing those kinds of jobs would make sense, as long as we can buffer the damage that comes with that type of disruption.
Robots that do no harm, but get in harm's way
Alex: What do you see as the greatest success of AI, that we should use as inspiration moving forward?
Kate: For me, the most valuable successes that we've had and will have are in reducing harm to people. Being able to reduce risk and danger. Think about nuclear plants, or search and rescue; any area where we've been able to use AI to do better risk assessment, or use physical robots to shield people from harm. That's really important.
Alex: Do you worry about the weaponization of robotics?
Kate: Absolutely. I'm very worried that right now, we're technically able to create automated weapon systems that use facial recognition and are prone to bias and error. I also worry about removing human responsibility from crimes by taking humans out of the decision-making processes.
If a war crime is committed by a robot due to an unforeseeable error, under current laws nobody can be held accountable for that. If a human were to commit the same crime, they would be held accountable.
I worry that the existing legal framework under which society operates is not equipped to deal with the disruptions caused by AI, and nowhere is that scarier than with the weaponization of robots.
A spoonful of AI supplement
Alex: I'd like to focus on how companies use AI. For some, AI is a tool to monitor employees and extract as much productivity as possible. Others use AI to supplement human ability and automate menial tasks, giving employees more time for creative thinking. Which of the two approaches do you think will prevail?
Kate: There is a lot of evidence to suggest that using AI as a supplement to human ability is more successful than trying to roboticize workers.
A canonical example we see often is how ATMs allowed banks to expand their teller services. Machines took over the simpler requests so that the tellers could spend more time dealing with a broader range of more complicated services.
Companies are making a variety of different choices. Some invest in technologies that promote people to more fulfilling jobs. Others demote their employees to lower skilled, lower paid, highly monitored jobs. Some even do both at the same time. These are all choices; they are not set in stone.
Alex: What should we look out for when applying for a job at a company that uses AI?
Kate: We should ask ourselves if the companies we work for—or want to work for—are thoughtful in their use of AI. I think the red flag is when a company takes a technology and tries to brute-force its deployment, without thinking about integrating it into existing systems, or how it will affect workers.
Surveillance and monitoring are very tempting for companies to use, because AI needs data to learn. However, I would want to see companies being a little bit more thoughtful about how they use data. It's unethical to just say, "We're going to monitor everything and collect as much data as possible, regardless of what that means for workers' privacy and data security."
Alex: It seems like the most successful companies are the ones that behave unethically; that do blanket monitoring and data collection. Is that how it's going to be? Or is there any hope for improvement?
Kate: The big problem with data collection is that there's no incentive to curb it. When we talk about AI supplementing or replacing humans, there is evidence to show that supplementation is better for companies. When it comes to data collection, it's hard to make an argument against people collecting as much data as they can.
Data helps technology work better. Even at the consumer level, the more I talk to my smart speaker, the more it knows about me, the better it will perform.
The only way to curb the unethical use of data collection is through governments and regulation.
Alex: Do you think that's going to happen?
Making the most of diverse skill sets
Alex: What do you think will be the greatest opportunity that AI will provide to companies in the near future?
Kate: The greatest opportunity is how effective these technologies can be when we think outside the box. They're so much more than just a cost saving measure—they can actually improve lives.
A small but interesting example is patent offices. Patent offices around the world, including in the U.S. and Japan, are exploring how artificial intelligence can be used to help patent examiners. However, they're not going down the road of replacing all the patent examiners with machines.
Instead, they use AI to perform one of the most difficult aspects of the job, which is searching for prior art. Then, the human examiners can focus on what they're good at, which is the delicate task of evaluating whether or not the unearthed material applies to the patent application at hand.
The key is thinking about where AI can assist humans by performing tasks for which it's better equipped than we are. The entire patent system may start working better with this new, smart distribution of tasks between people and machines.
Alex: I'm assuming that cutting-edge artificial intelligence wasn't developed by the patent officers themselves. In order to effectively deploy AI, will every company need a small army of software engineers?
Kate: Probably. Right now it's often outsourced. But artificial intelligence is becoming the new electricity. It will be needed everywhere, and we'll require skilled people to put it in place.
Alex: Does this shortage of developers mean we're moving toward a future where AI will create its own AI? Or is that the beginning of the robot uprising?
Kate: First, we should focus on creating better artificial intelligence. People overestimate where we currently are. They read these stories about how robots now beat humans at complicated games like chess and Go, and assume that robots are therefore smarter than people.
Really, machines have a very limited set of intelligence. Even if the super-smart AI that plays Go could produce another AI that's even better at Go, Siri still isn't going to understand half of what I'm asking her.
Alex: Sounds like we have time before the robots take over.
Kate: Lots of time.
Diverse teams make better technology
Alex: Do you personally have a particular piece of AI that you would like to see developed in the future?
Kate: I don't know if this counts as AI, but I personally would really like some sort of smart technology for breast pumping. The breast pumps that mothers have now are even cruder than the technology we have for milking cows.
Alex: That seems like the kind of technology that we should already have by now! It illustrates that the people developing these technologies are not necessarily representative of society as a whole.
Kate: Absolutely. Right now, we need more interdisciplinarity and diversity in the field. As AI comes to play such a big role in society, we can't have just young, high-income white men working on it. Not that white men are bad, just that we need a wide variety of different perspectives in order to create technology that's ultimately socially beneficial.
And companies are aware of this. They know they should be focusing on diversity. It's very clear that diversity leads to better tech development.
Article by Alex Steullet. Edited by Ade Lee and Mina Samejima. Photographs courtesy of Kate Darling.