Should we be polite to AI assistants? Is AI strange because humans are strange? Are people getting their perceptions of robots from The Terminator franchise? I interview Dr. Beth Singler, anthropologist and Junior Research Fellow in artificial intelligence (AI) at the University of Cambridge, about the weird and wonderful ways we imagine AI, robotics and the future of work.
Dr. Beth Singler (@BVLSingler) is one of many experts appearing in Tomorrow Unlocked’s new audio series Fast Forward. She examines the social, ethical and philosophical implications of AI and robotics, and has spoken at Edinburgh Science Festival, London Science Museum and New Scientist Live.
Ken: In your work, you engage people in conversations about the implications of AI and robotics. What do people think AI is?
Beth: For the public, it isn’t one thing. People point to examples of AI being implemented, but the term means different things to different people. They draw assumptions from science fiction and media accounts of dangerous AI and scary robots. It’s a malleable term – people say ‘the algorithm’ and mean AI.
Many think of AI in the workplace as replacing human physical work, but we see AI taking on more knowledge labor and even emotional labor.
Ken: What kind of emotional tasks can AI do?
Beth: We increasingly see interfaces with AI that give simulated emotional responses. AI assistants do tasks for you but pleasantly and civilly. Call center work is already highly structured and scripted – an AI assistant or chatbot can take over that pleasantry system. How workplaces implement AI will influence how we connect with other humans.
Ken: Are we creating a human-machine social world we’ll have to learn to interact with?
Beth: Yes. We’re seeing these human-machine interactions playing out in different places – in the home, workplace, and care settings. We’re having to understand that relationship and teach our children to negotiate it. There are discussions on whether children should be polite when using AI assistants. We’re coming up with a new social format for interactions with AI.
Ken: I thought, of course you should be polite to machines – if only because one day they’ll look at everything we’ve said and done and judge us accordingly. I want to be on the right side of them.
Beth: We also see arguments that you should be civil to AI assistants because this is how we should behave to other entities, whether human or non-human – that it reflects our natures. If we aren’t civil to machines, it says more about us than their needs. There are many different answers to questions of politeness to AI assistants.
Ken: People find conversations with Cleverbot amusing when it asks things like, “Don’t you wish you had a body?” or “What is God to you?” They don’t consider that Cleverbot thinks those questions are appropriate because humans asked it the same questions. We’re looking into a strange, distorting mirror and not recognizing our reflection.
Beth: Absolutely. There’s a reason the Black Mirror TV series is called Black Mirror – it’s a reflective surface for understanding ourselves. AI and machine responses come from data sets, and those involve biases.
It’s a moment to reflect, for instance, on questions of personhood before we even get to anything like artificial general intelligence (AGI) or superintelligence. Should we be civil? If we say rude or sexist things to a female AI assistant, does that matter? These questions come out again and again.
I’m an anthropologist, meaning I study what humans do and think. These big questions are integral to our concept of what AI is. In my work engaging the public, and seeing their sometimes hopeful, sometimes fearful responses, it’s clear this is a conversation we’ll be having for some time yet.
Talking about AI and the future of work gets down to big questions like, what is the human being for? If we define ourselves in terms of what we do and what we produce, we’ll fear replacement.
Ken: I was at an airport buying a train ticket one afternoon. It was quiet, and the woman behind the counter said, “You should have been here yesterday – the automatic ticket machines had recalibrated, giving out wrong tickets. People adjust. Machines don’t.” I wondered if this ability to adjust is part of our relationship with machines.
Beth: It’s interesting how much we adjust to machines. With the airport systems that use facial recognition software, I often have to take off my glasses, change my hair or bob down. We adjust ourselves to be accepted by the system.
You see this in how automation is changing the workplace. Job interviews now involve facial recognition software, so we try to smile more in a video interview. We’re increasingly making changes to fit the machine-based system.
Ken: It suggests an element of trust. Where does trust fit in our relationship with machines?
Beth: Trust is key. We want to believe software that observes our responses in job interviews is fair and neutral, but we have examples where trust is let down.
In the UK in 2020, an algorithm used to assign students’ exam grades damaged public trust – it penalized students at historically lower-achieving schools. In my work, I also see examples of people trusting too much – they have an image of a superintelligence that doesn’t exist yet. Take the phrase “blessed by the algorithm”: people feel their YouTube content has been promoted because the algorithm decided they should be lucky. They use the language of religious belief.
Society can only trust technology it understands. Digital literacy – understanding what AI is and isn’t – is key to that.
Ken: We tend to understand things better as fiction. It’s a way to get a grip on the world. But I get the feeling fiction’s not a grip anymore, but a stranglehold. Is that fair?
Beth: I enjoy science fiction accounts of AI in their many interpretations, fears and hopes.
One of the hazards is a single negative story being used too often. I’m a fan of the Terminator film franchise, but I see how dystopian imagery of robot uprisings shapes people’s views of AI. And AI making crucial decisions about our future – whether we get a job or a mortgage, or how we’re treated in hospital – may also be overshadowed by Terminator-like stories.
Ken: And it stops us noticing when AI does good things, like in medicine and traffic control. The robots are already among us, but they don’t usually walk on two legs. They’re more likely to be sorting out your airplane ticket.
Beth: Absolutely. The ‘home help’ robot concept from the 1950s and 60s would move around the house on two legs and perform tasks. It made real home automation invisible – a washing machine doesn’t have that shiny futuristic look.
It’s the same with recent examples like the robot vacuum cleaner – they become an invisible family member.
Ken: If we had the domestic robots imagined from the 1930s to the 1950s, we’d have to rebuild homes – they wouldn’t fit.
Video: Sophia the robot answers Stylist’s philosophical questions (youtu.be)
Beth: There’s much hype over embodied robots. For some, Hanson’s Sophia robot represents the next step in AI and human evolution too. But what’s Sophia’s commercial use? It’s unclear if she’s useful in the home or office. What dream are we selling with bipedal robot servants that don’t fit into how we use technology today? We’ve made space in our homes for AI assistants – the disembodied voice that answers our questions.
Ken: Interestingly, there’s not often a ‘man machine.’ It’s usually the ‘woman machine,’ from Maria in the film Metropolis to Olimpia in The Sandman and Hadaly in The Future Eve. Why are the woman and the machine conflated?
Beth: Look at the decision to give AI assistants female voices. There are arguments that we find female voices more soothing, but for many academics, gendered AI seems an attempt to replicate the mother or the wife.
We’ve moved on in society – women can choose to work in or outside the home. For some, that leaves a gap for intellectual and emotional labor. The always-responsive female figure, whether the wife or the mother, is reconstituted in machine form.
Ken: Another thing that parallels the idea of the robot is the child. Robots are becoming smaller – for the sake of argument, a robot built like an adult male of average build might seem threatening to many people.
Beth: There’s a move toward making robots cuter and replicating child and animal forms to reduce those threatening associations from science fiction. Think of Arnold Schwarzenegger’s Terminator versus the therapeutic robot PARO, modeled on a baby harp seal.
Ken: Is there an element of trying to make work more fun? Perhaps work becomes more like play if you have an AI assistant who helps with the emotional labor?
Beth: Yes. There’s a history of trying to gamify the workplace – developing ‘third space’ options that involve games or places where you can nap. Perhaps how we apply AI is a part of how we make the workplace more enjoyable. If our software chatted back to us, was entertaining and responded to us, it might seem less laborious.
Ken: Going back to emotional labor, programs could soften the edges of work relationships, whether online or in an office – I can imagine something like an ’emotional Roomba’ (robot vacuum cleaner) allowing for moments of interaction.
Beth: We see examples of AI mediating between humans in conversation, like machine learning algorithms suggesting how to respond to emails or warning your tone is too harsh – softening the edges of our interactions at work is a developing space.
Ken: After some emails I’ve had, I see the value in something like that.
Beth: I also saw an application for divorced or divorcing couples that helps conversations stay amicable for the benefit of any children. A machine learning algorithm warns you about things like, perhaps you’re being a bit sarcastic.
Ken: I’m scared of an algorithm that understands sarcasm. That will be the end of humanity.
Beth: There’s a wonderful Tom Gauld cartoon about scientists trying to create a sarcastic bot. And the bot says to the scientist, “It’s going great. This guy is a real genius.”
Ken: What thought about AI and the future of work would you like to share?
Beth: I’d like people to consider how much we should change our behavior around AI in the workplace. People don’t normally interact in purely rational ways. If we curtail that normal human messiness, we’re not anthropomorphizing AI but robo-morphizing humans. If we make ourselves smile more to do well in an interview with facial recognition software, we limit ourselves. Although we might see AI as a human simulation, do we become a human simulation in response to AI?
Beth Singler features in the Tomorrow Unlocked audio series Fast Forward, Episode 5. Listen to Fast Forward and explore more interviews with featured experts.
Renan Ozturk: Who is the extreme filmmaker?
Writing, directing and filming documentaries on nature’s extremes and our relationship with it
Renan Ozturk pushes filmmaking to the extremes with rich documentary stories about the natural world and our relationship with it.
An explorer at heart, Renan spent his pre-filmmaking years living in the wilderness as a climber and artist, exploring and painting landscapes.
From climber and painter to National Geographic Adventurer of the Year in 2013, Renan now takes a different view of planet Earth through a video camera lens.
Renan pushes the art of filmmaking to the edge in his environmental documentaries, combining extreme expeditions with raw landscapes, outstanding drone footage and visual storytelling. Often collaborating with spouse and fellow director Taylor Rees, Renan’s approach is the stuff of filmmaking dreams.
Drawn to earth’s most demanding environments, Renan Ozturk focuses on human connection with the natural world. His use of strong visual identity draws on his experience as a landscape painter and adventurer, bringing out nature’s best alongside compelling human stories.
For the full portfolio of his work, see Renan Ozturk’s website, but here’s a snapshot of his writing, directing, producing and more.
The Denali Experiment from Expedition Studios on Vimeo.
Co-Director
Denali is North America’s highest mountain peak, in Alaska, US. In one of Ozturk’s first films, a diverse team of athletes assembled by The North Face take on this behemoth, one of the biggest challenges on Earth.
Watch Renan Ozturk’s The Denali Experiment on Vimeo
Writer
This philosophical portrait of the human mind asks, how do we balance risk with reward? Renan teams up with creators of award-winning film All.I.Can for an epic insight into the quests of the world’s greatest mountain sport athletes. Expect stunning footage and sporting extremes.
Watch Renan Ozturk’s Into The Mind on YouTube
Cinematographer and subject
In the world of big-wall climbing, Shark’s Fin on India’s Meru Peak is the holy grail – 4,000 feet (1,200 meters) of sheer deadliness, attempted and failed by elite climbers. Dwindling supplies, snowstorms and sub-zero temperatures make for quite a tale.
Watch Renan Ozturk’s Meru on YouTube
Cinematographer and co-director
Renan’s first Everest climbing and filming trip is one for the history books. The precision and beauty of Sherpa’s camera angles, achieved in such extreme places, are the stuff of wonder.
Cinematographer
Renan’s dazzling camera work complements this stunning ode to mountains, a collaboration between the Australian Chamber Orchestra and BAFTA-nominated director Jennifer Peedom. Mountain examines the beauty of wild landscapes and our fascination with conquering them.
Watch Renan Ozturk’s Mountain on Amazon
The Last Honey Hunter from Felt Soul Media on Vimeo.
Producer
It’s steep, misty and packed with psychoactive honey only locals know how to handle. Renan and his team join the indigenous Kirat Kulung people as they scale deadly cliffs for an emotional final honey harvest.
Watch The Last Honey Hunter on Vimeo
Cinematographer
When the guardian of the Kuril Islands, an almost unreachable archipelago in Russia’s Far East, hitched a ride with Renan and his team, no one expected the result. The team, including director Taylor Rees, set out to make a classic adventure story but got something even more powerful.
An expedition climber for The North Face, a photojournalist for Sony and National Geographic, and a documentary filmmaker, Renan Ozturk isn’t hard to keep up with. Follow him on Instagram, Twitter and Facebook.
Future robot colleagues – Fast Forward audio doc
Robots in the workplace raise questions, says future tech audio series Fast Forward
The first robotic arms introduced to car assembly lines in the 1960s had the same ‘reach’ as a human worker. They did the same tasks in the same space. Even then, futurists predicted machines would change the world more with their information processing power than by performing manual tasks.
Listen to the full Fast Forward episode 5 on your future robot colleagues
Dr. Beth Singler, artificial intelligence research fellow at the University of Cambridge, points out that automation in the workplace is as much about replacing human ‘knowledge labor’ as physical labor. “Artificial intelligence (AI) assistants do tasks for you, but in an emotionally accessible way – with pleasantries and civility.”
Dr. Singler points to discussions about whether we should be civil back to AI assistants, like Alexa or Siri. “These questions are so integral to our conception of what AI is and could be that you can’t have a conversation about AI without them coming up. It all gets down to philosophical questions of ‘what is the human being for?'” But Dr. Singler also gives thought-provoking examples of good uses of AI’s ability to show humanlike traits while being seen as neutral.
Kaspersky Principal Security Researcher David Emm says that, contrary to the human-robot conflict common in sci-fi worlds, we might see machines as too unthreatening. “Our research with the University of Ghent on how people would react to robots in the workplace found 40 percent would unlock security doors for robots. People didn’t question why a robot needed access to a secure area when it was delivering pizza.”
If in real life we accept machines in ways we wouldn’t accept humans, Emm says we may “share information or give access we shouldn’t.”
One person thinking about how robots could improve our relationship with both security and ethics is Alan Winfield, professor of robot ethics at the Bristol Robotics Lab. He believes robots should include technology that provides a data trail – an ‘ethical black box,’ like a flight data recorder – so the contributing factors can be understood when a robot makes a bad decision, such as causing an accident.
Eventually, he believes, robots could use that data to self-evolve – improving their next generation without the kind of human decision-making past generations used to domesticate animals and breed higher-yielding crops. Listen to Fast Forward episode 5 to hear Professor Winfield explain his mind-blowing vision and how four universities are working to make it happen.
Listen to Fast Forward and explore more interviews with featured experts. Subscribe to future episodes on these audio streaming services: