Celebrating today’s young bright minds in tech
The tech innovators showing age is no barrier to achievement
The final instalment of our series hacker:HUNTER Olympic Destroyer examines how the Pyeongchang Winter Olympics hackers put up a smokescreen to misdirect cybersecurity analysts. But through the fog, analysts realized the culprit wasn't who you might expect.
If successful, the 2018 Pyeongchang cyberattack could have cost billions of dollars, leaving a canceled Olympics and a geopolitical disaster in its wake. Their deceptive methods meant the cybercriminals nearly got away with it. Why did they want to point the analysts at another group? And who was behind it all?
Cybercriminals don't leave a calling card, but they do leave evidence. The art of finding and using that evidence to find the culprit is known as threat attribution.
Threat attribution is forensic analysis for advanced persistent threats (APTs). It analyzes the attackers' 'fingerprints,' such as the style of their code, where they attack and what kinds of organizations they target. Attacks can be matched with the fingerprints of other attacks attributed to specific groups.
Hackers have their own set of tactics, techniques and procedures. Cybersecurity experts can identify threat actors by studying these elements.
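The matching step can be sketched as a toy example: compare an attack's observed tactics, techniques and procedures (TTPs) against stored profiles of known groups and pick the closest fit. The group names, TTP labels and similarity measure below are made up for illustration; they are not real threat intelligence or an actual attribution tool.

```python
# Toy illustration of threat attribution by TTP overlap.
# Group names and TTP labels are made up for illustration only.

KNOWN_GROUPS = {
    "Lazarus": {"watering-hole", "bank-targets", "custom-wiper", "swift-abuse"},
    "OtherAPT": {"spear-phishing", "credential-dump", "custom-wiper", "vpn-exploit"},
}

def jaccard(a, b):
    """Similarity of two TTP sets: |intersection| / |union|."""
    return len(a & b) / len(a | b)

def attribute(observed_ttps):
    """Rank known groups by how closely their fingerprints match the attack."""
    scores = {group: jaccard(observed_ttps, ttps)
              for group, ttps in KNOWN_GROUPS.items()}
    best = max(scores, key=scores.get)
    return best, scores

best, scores = attribute({"spear-phishing", "custom-wiper", "vpn-exploit"})
print(best)  # OtherAPT
```

Note how a single shared element (here, the wiper code) is not enough: the overall pattern of behavior decides the match, which is exactly the reasoning analysts applied to Olympic Destroyer.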
In February 2016, hackers attempted to steal US$851 million, and successfully siphoned US$81 million, from the Central Bank of Bangladesh. The attack was linked to the notorious cyber-espionage and sabotage group Lazarus. Lazarus attacks casinos, financial institutions, and investment and cryptocurrency software developers.
Lazarus has characteristic targets and ways of attacking: infecting a website that employees of a target organization often visit, or finding a vulnerability in one of its servers. These are the 'fingerprints' used in threat attribution.
Crucially, Lazarus Group has long been thought to be linked to North Korea. Olympic Destroyer included a piece of Lazarus's malware code, but the type of attack didn't fit. Its fingerprints better matched a cluster of attacks by another group with a very different agenda.
Watch the full video to see if you knew who the hacker was all along.
This APT might not have worked, but over the years, others have. To see what a successful APT looks like, watch Chasing Lazarus: A hunt for the infamous hackers to prevent big bank heists.
Security researchers described the code used to attack the 2018 Pyeongchang winter Olympics as 'Frankenstein-like.' In part two of our video series, hacker:HUNTER Olympic Destroyer, they explain how the malware was designed to point in multiple directions.
The designer of an extraordinary piece of code lodged it in a system where it remained undetected for months. Part two of hacker:HUNTER Olympic Destroyer explores the nature of the attack, its process and why 'Frankenstein-like' code made it one of the most mysterious advanced persistent threat (APT) attacks in history.
Olympic Destroyer was the perfect example of an APT. What are they, and why are they so harmful?
APTs are sophisticated hacks that often wait for the perfect time to strike to create maximum damage. They lodge themselves in a system and steal critical data over weeks, months or years. Those behind these attacks build complex software for intentional damage – from espionage and sabotage to data theft.
APTs are notoriously associated with highly organized groups. They attack high-status targets like countries or large corporations, notably in manufacturing and finance, aiming to compromise high-value information like intellectual property, military plans and sensitive user data.
Their high-profile targets will have secure networks and defenses, so threats must stay undetected as long as possible. The longer the attack goes on, the more time attackers have to map the system and plan to steal what they want.
Motives behind attacks vary, from harvesting intellectual property to gaining advantage in an industry, to stealing data for use in fraud. One thing is clear: APTs cause severe damage.
Olympic Destroyer was the perfect APT. A highly-organized group attacked a national Olympic committee, and it worked.
The 'confusion bomb' sat undetected in the computer system for four months, biding its time. That foothold gave the attackers time to find weak spots and pain points and make the attack more devastating. When it finally surfaced, all hell broke loose.
By directly attacking the Olympics' data centers in Seoul, South Korea, Olympic Destroyer cut employees' access to networked computers. With the Wi-Fi out, security gates at Olympic venues stopped working, coverage stopped, and the whole infrastructure went offline. The Pyeongchang IT team was staring down the barrel of a potential geopolitical disaster.
Stay tuned for episode three, where we unravel the IT team's ingenious response and find out who did it. Any guesses? Go to hacker:HUNTER to stay up to speed.
Looking forward to watching the Olympic Games in Tokyo? Here's a reminder of what happened at the opening ceremony of the 2018 Winter Olympics in Pyeongchang
Barely noticed by the public, an elaborate hacking attack hit the stadium, starting a cyber-political puzzle.
It's February 9, 2018. The stage is set for the Pyeongchang Winter Olympics' opening ceremony. But the organizers don't realize one of the most deceptive cyberattacks in history is afoot.
This three-part series looks at the background to the Pyeongchang cyberattack, the Olympics IT team's stunning response and why it was so hard (and so risky) to find out who did it.
On September 9, in a hospital in Düsseldorf, Germany, a patient died from a virus. It wasn't what you might think: the hospital was hit by ransomware that infected 30 servers before causing a total system shutdown, leading to the loss of her life. And it was a random act of chaos: the hackers misfired; they intended to infiltrate a nearby university.
This attack was fatal, but not unexpected. Attacks on hospitals and other health organizations have dramatically increased during the pandemic, and when they hit, they can cost lives. Hospitals often have limited cybersecurity, making them vulnerable to attacks. In March, University Hospital Brno in the Czech Republic faced a similar attack, fortunately with no casualties.
For the latest hacker:HUNTER episode, we spoke to hospital staff to understand how ransomware attacks could harm patients.
During the peak of pandemic information overload, COVID-19-themed cyberattacks spiked to a million a day in early March. Attacks targeting people accessing systems remotely – such as phishing, malicious websites and malware – increased by a staggering 300 times during 2020.
Craig Jones, Director of Cybercrime at Interpol, explains: "Since March, the levels of work have ramped up. I've never known a period like it, not just at Interpol but also during my law enforcement experience." Check out Interpol's advice to protect yourself against Covid-19 cyberthreats.
So what can we do in a world where cybercriminals seem to be one step ahead of us? Hunting down the hackers is no easy task, but as the heroes in the second season of hacker:HUNTER show, we can protect everyone by taking a stand against cybercrime.
How hackers are exploiting the pandemic
"Cybercriminals were quick to realize many years ago that people fall prey to hot topics," says Costin Raiu, Director of Global Research & Analysis, Kaspersky. And today's hottest topic is the pandemic.
Chapter 2 of hacker:HUNTER ha(ck)c1ne explores COVID-related targeted phishing attacks, known as spear-phishing. These attacks skyrocketed by nearly seven times between February and March this year.
Cybercriminals published fake news saying Facebook would be handing out free money to everyone affected by COVID-19. On a site cleverly disguised to look like Facebook, you fill out a form that shares personal data like your address, social security number or a photo of your ID. You get a confirmation message that your application has been accepted and sit back and wait for the money to arrive. It never will.
It's not just people like us whom criminals are targeting – organizations are hit too. At work, you get an email you think is from someone you know, or from your manager. But when you click a link or open an attachment, it downloads malicious software, opening the door for hackers to access the corporate network. They download data to sell on the dark web, or encrypt it with ransomware and force the business to pay up to stop it from being leaked.
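The lookalike-site trick described above can be illustrated with a toy check: flag a domain that closely resembles a trusted brand's domain without matching it exactly. The trusted-domain list and the similarity threshold below are illustrative assumptions, not a real anti-phishing implementation.

```python
# Toy phishing check: flag a domain that closely resembles a trusted
# brand's domain without matching it exactly. The trusted list and the
# similarity threshold are illustrative assumptions.
from difflib import SequenceMatcher

TRUSTED_DOMAINS = {"facebook.com", "google.com", "example-bank.com"}

def looks_like_phish(domain, threshold=0.8):
    """True if the domain is a near-miss of a trusted domain."""
    if domain in TRUSTED_DOMAINS:
        return False  # exact match: the genuine site
    return any(SequenceMatcher(None, domain, trusted).ratio() >= threshold
               for trusted in TRUSTED_DOMAINS)

print(looks_like_phish("faceb00k.com"))   # True: near-miss of facebook.com
print(looks_like_phish("facebook.com"))   # False: the genuine domain
```

Real mail filters combine many more signals (sender reputation, link redirection, attachment analysis), but the near-miss domain is the hallmark of the scams this episode covers.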
Criminals have the resources to hit everyone, from society's most vulnerable people to lucrative targets like big businesses and government. "Clearly the world is not as safe as we would like it to be. We're surrounded by all kinds of new and different threats," explains Zak Doffman, Founder and CEO of Digital Barriers. "The access to COVID treatments is a nation-state wide competitive advantage."
In the face of this influx of threats, more kudos to the people keeping us and our data safe, like the Cyber Volunteers 19. To keep yourself safe, Kaspersky Daily serves up advice on spotting and protecting yourself from the Facebook grants scam.
How WannaCry hit the world and how it suddenly stopped
One day in May 2017, computers all around the world suddenly shut down. Malware called WannaCry demanded a ransom. Then the epidemic suddenly stopped, because a young British researcher found a kill switch, by accident.
From the Web:
What is WannaCry ransomware, how does it infect, and who was responsible?
WannaCry cyber attack cost the NHS £92m as 19,000 appointments cancelled
A star is born - and soon arrested
His random act of heroism makes security researcher Marcus Hutchins famous overnight. Celebrated by media around the world, he spends a week in Las Vegas. When he wants to leave, the FBI arrests him: they suspect him of creating malware.
From the Web:
FBI arrest of Marcus Hutchins (@MalwareTechBlog) has chilling effect
Bad news for WannaCry slayer Marcus Hutchins
What Happens When a Hacker Hero is Arrested by the FBI? | Freethink Coded
Jail forever or a free man?
Stuck in the US, free on bail, Marcus Hutchins considers his options and decides to plead guilty. He faces up to 10 years in jail.
From the web:
Marcus "MalwareTech" Hutchins Pleads Guilty to Writing, Selling Banking Malware
Marcus Hutchins spared US jail sentence over malware charges
hacker:HUNTER Cashing In, Episode One
"ATMs hold cash, and that makes them attractive for criminals." The opening statement of this episode sums up what the whole mini-series is about. While criminals around the world try to get to the money in cash-machines with hammers, explosives, excavators or other heavy gear, the Carbanak gang found a more elegant and stealth way. They would hack into bank networks and monitor the activities there until they understood how to trigger the machines remotely to spill out all the money.
Episode 1 explains how security researchers were alerted to it, how they brought international police forces into the investigation and why the method of attacking ATMs is called Jackpotting after a researcher named Barnaby Jack.
hacker:HUNTER Cashing In, Episode Two
The Carbanak Group attacks a bank in Taiwan and sends 22 money mules into the country. What they didn't anticipate: within a few hours the Taiwanese police publish surveillance pictures of all the money mules. The hunt begins.
hacker:HUNTER "Cashing In" Episode Three
19 money mules flee Taiwan; the rest are left in Taipei with several million dollars. The police get closer and closer.
hacker:HUNTER Cashing In, Episode Four
The Taiwanese police find clues to the whereabouts of the head of the Carbanak group and coordinate with Europol. Can the group be stopped?
hacker:HUNTER goes into Season 2 with a look at how cybercriminals attacked healthcare during the pandemic
The next episode of hacker:HUNTER reveals the shocking surge in cyberattacks on healthcare during the Covid-19 global pandemic. We take our audience to the front lines of targeted cyberattacks on vaccine researchers, hospitals and the World Health Organisation, which has reported a fivefold increase in attacks on its systems since March.
Launching on September 25th on YouTube!
Stalkerware is making headlines, for all the wrong reasons
How could her partner know where she is every day? Why does his girlfriend know who he's messaging? The answer could be a disturbing new technology that's fuelling a new wave of violence across the globe. The worst thing? It's as easy as downloading an app.
This app can install violence
Consumer surveillance and privacy are hot topics: not only are they a cornerstone of our human rights, but digital tech is pressing fast-forward on spying techniques. One dangerous type of software, fuelling a new wave of gender violence, is stalkerware.
Spouseware. Legal spyware. Stalkerware. It has many names, but the premise is simple: software used to secretly spy on another person's private life via a smart device. Usually installed covertly on mobile phones by so-called 'friends', paranoid partners (hence the spouseware title) and others, stalkerware tracks everything from the victim's physical location and internet activity to text messages and phone calls with friends. Even though it's sometimes described as legal spyware, it's anything but. Victims usually can't even spot it. Meanwhile, stalkers access a wide range of personal data, and it's having serious repercussions for today's relationships.
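The "know the signs" advice later in this piece can be sketched as a toy indicator check: score installed apps by how many sensitive permissions they hold and whether they hide their icon. The app names, permission labels and scoring threshold below are hypothetical; real stalkerware detection is far more involved and belongs to dedicated security products.

```python
# Toy indicator check in the spirit of "know the signs": score installed
# apps by sensitive permissions and hidden icons. App names, permission
# labels and the threshold are hypothetical, not real detection logic.

SENSITIVE_PERMISSIONS = {"read_sms", "fine_location", "record_audio", "accessibility"}

def risk_score(app):
    """Count sensitive permissions held; apps that hide their icon score extra."""
    score = len(set(app["permissions"]) & SENSITIVE_PERMISSIONS)
    if app.get("hidden_icon"):
        score += 2
    return score

installed = [
    {"name": "weather", "permissions": ["fine_location"], "hidden_icon": False},
    {"name": "sysmonitor",
     "permissions": ["read_sms", "fine_location", "record_audio"],
     "hidden_icon": True},
]
flagged = [app["name"] for app in installed if risk_score(app) >= 3]
print(flagged)  # ['sysmonitor']
```

The point of the sketch is the pattern, not the code: an app that combines broad access to messages, location and the microphone with attempts to stay invisible is exactly what victims are advised to look for.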
A new form of relationship violence
Stalkerware is on the rise. In 2019, Kaspersky detected a 67 percent year-on-year increase in stalkerware on users' mobile devices globally; Germany, Italy and France were the hardest-hit countries in Europe. Perhaps more shockingly, early data suggests the situation didn't improve in 2020. Alessandra Pauncz, Executive Director of the WWP European Network, highlights how dangerous this threat is: "The effects of cyber-violence on women and girls are devastating because they're part of a continuum of violence that deprives them of their freedom."
It's fuelling a new type of violence across the globe, in which people take away their partner's freedom - in more ways than one. But aside from violating privacy, for victims and their abusers there are other dangers. As a form of malware – short for "malicious software," a type of computer program designed to infect a user's computer and inflict harm – these programs can expose victims' data and undermine protective tech, increasing the chances of the device being infected by other malware.
It might sound like attackers are predominantly men, but Eva Galperin, Director of Cybersecurity at the Electronic Frontier Foundation, notes that "this is not just a 'men spying on women' issue." From former spouses spying on people through internet-connected thermostats to men being outed as gay, Eva and her team have seen all manner of stalkerware victims and cases.
Fighting for online privacy and data protection
Before stalkerware hit the mainstream, Eva was leading the fightback. She was outraged by a hacker who abused women then threatened to compromise their devices if they spoke out. Now she's an influential voice in the fight against stalkerware, helping thousands of victims get their privacy back. Watch her story, featured in our series Defenders of Digital.
Education can combat stalkerware
Stalkerware isn't the easiest enemy to spot, but it starts with education. As Alfonso Ramirez, General Manager at Kaspersky Spain, says, "It's quite hard to fight against stalkerware using only tech tools. However, it would really help if practitioners and users are aware stalkerware exists, know how to recognize the signs of this software being installed on their devices and what to do next."
As awareness rises, with Eva and others leading the charge, new global initiatives to fight back are cropping up, like DeStalk. Created by Kaspersky and NGO partners, DeStalk is an EU-wide project designed to educate people, professionals and the government on how to spot stalkers and deal with them.
Chicago's tiny not-for-profit taking on powerful institutions.
The history of surveillance is one of control. As monitoring technologies accelerate, one not-for-profit noticed a concerning rise in unethical police cell phone tracking. Its objections led to new, stronger digital rights legislation.
Smartphones have improved our lives more than we could have imagined. We work on them, use them to take and store private photos and they know where we are at any moment. But with advanced surveillance techniques, phones have become a powerful way for law enforcement to observe and identify us, ethically or not.
Last year's change to remote life made us all digital. Are we now in danger of trading private digital data for convenient digital services? Check out Kaspersky's privacy predictions for 2021 and learn how this year is going to affect our privacy in cyberspace.
One Chicago not-for-profit, Lucy Parsons Labs, is demanding government agencies like the police and Immigration and Customs Enforcement (ICE) be more transparent about how and why they track people through their phones. Defenders of Digital episode three speaks with Lucy Parsons Labs' Executive Director Freddy Martinez about how law enforcement use technologies to covertly observe people, what it means for digital rights and how his team made US legal history.
Kira Rakova and her team help you regain control of your personal data.
Safe Sisters fight harassment and 'revenge porn' with education
Online abuse and cyber-harassment mean a disproportionate number of women remove themselves from crucial discussions. One not-for-profit is making a change for women in East Africa.
In the digital age, we don't just send videos to friends and sing online karaoke with people we've never met; many are also using social media to fight for equality. But online harassment, image-based sexual abuse (also called 'revenge porn') and cyberattacks can stop women especially from being part of the conversation that leads to real change. These cowardly acts also leave victims feeling embarrassed, ashamed and alone.
Safe Sisters is a fellowship program empowering girls and women, especially human rights activists, journalists and those in the media, to fight online abuse. In Defenders of Digital season two, episode five, Safe Sisters' Immaculate Nabwire explains a landmark Ugandan image-based sexual abuse case that inspires her, the digital threats women in East Africa face and how her team are fighting for change.
Gamers are using their voice to overcome accessibility problems
By 2023, there could be over three billion gamers worldwide. But for some people with disabilities, taking part in this wildly popular passion can be frustrating to impossible. Now, one piece of tech is out to make slaying dragons and building civilizations accessible to all. Will it change the future of gaming?
Since 'Tennis for Two', arguably the world's first video game, appeared in 1958, gaming has evolved in ways never imagined. But game accessibility is still a problem for as many as 30 million people in the US, whose impairments mean they come up against accessibility barriers when gaming.
Fridai is changing all that. The voice-activated, AI-powered assistant gives advice on anything gamers with disabilities may need, from hands-free options to being reminded of the game's objective. In Defenders of Digital series two, episode four, Mark Engelhardt, Fridai's Co-founder and CEO, talks about how the technology uses AI to create a new interface between humans and machines.
The world of digital privacy is changing.
Algorithms are everywhere, but they are trained on the beliefs of their developers. In episode two of our second season of Defenders of Digital, we learn about Homo Digitalis' work to expose algorithmic bias that impedes digital rights for millions. The first corporation they catch might surprise you.
Algorithms can improve our experience online. But one not-for-profit is going beyond the code for the greater good. Founded in 2018, Homo Digitalis has over 100 members. They promote transparency in algorithmic programming and safeguards against discrimination by algorithm.
Because programmers – as humans – have biases, algorithms learn from those biases. When we hand power over to the algorithm, it may erode digital rights and impinge freedom of expression without us knowing.
Homo Digitalis has already called out one tech giant for their moderation process. It could have impacted millions. Who was it? Find out in Defenders of Digital season two, episode two.
Meet Susie Hargreaves and her team.
Internet Watch Foundation (IWF) hunts down child sexual abuse images online and helps identify children involved so that law enforcement can intervene. While the recent pandemic has triggered greater numbers of child abuse images, CEO Susie Hargreaves and her team are fighting back with a new piece of tech.
COVID-19 has fuelled a disturbing increase in child sex abuse material online. Our latest Defenders of Digital series begins by introducing Susie Hargreaves's team at Internet Watch Foundation (IWF) and explores their mission to make children safer. It also looks at how the pandemic has moved the goalposts and the new tech making a difference.
Formed in 1996 in response to a fast-growing number of online child abuse cases, IWF's 155 members include tech's biggest names, such as Microsoft and Google. They're united by the common goal to rid the internet of child sexual abuse images and videos.
The pandemic has made the issue of online child sexual abuse material more acute. During lockdown in the UK alone, IWF says 300,000 people were looking at online child sexual abuse images at any one time. What's worse, the material is always changing.
IWF has recently seen a worrying rise in self-generated sexual abuse material, chiefly among girls age 11 to 13. The victim is groomed or coerced into photographing or filming themselves, which the sexual predator captures and distributes online. In the past year alone, the proportion of online content they're removing that is self-generated has risen from 33 to 40 percent.
There are encouraging developments helping IWF with their work. Microsoft's PhotoDNA analyzes known child exploitation images, finds copies elsewhere on the internet, and reports them for removal. It helped IWF remove 132,700 web pages showing child sexual abuse images in 2019. How does it work?
First, PhotoDNA creates a unique digital fingerprint of a known child abuse image, called a 'hash.' It compares that fingerprint against other hashes across the internet to find copies. It reports copies it finds to the site's host. It's a fast and ingenious way to shut down child exploitation.
Internet users who have stumbled across suspected child abuse images and reported them to IWF have been instrumental in starting a process that's led to many children in abusive situations receiving help. If you see an image or video you think may show child sexual exploitation, report it anonymously to IWF.
Want to learn how to better protect your kids when they're online? A free training course, based on the Skill Cup mobile app and developed with Kaspersky, is now available for parents to understand the challenges children face today.
Explore the course to better protect your kids online.
The Defenders of Digital video series profiles tech experts who guard the digital world. We'll soon launch season two, but for now, these are the five people whose stories started it all. They're critical to our future: they make our digital world safe, free, open and functional. Who are they, and what motivates them?
Eva Galperin was outraged by a hacker who abused women then threatened to compromise their devices if they spoke out. She has since become the most powerful voice in the fight against stalkerware, and in doing so, helped thousands of victims get their privacy back.
Security specialist Einar Otto Stangvik wanted to use his programming skills to do more than make money. He developed software to identify hackers stealing and sharing private photos from iCloud backups. One hacker turned out to be a prominent public figure.
Now Stangvik is onto an even more ambitious project that will help vulnerable children.
Salvi Pascual knows the heavily censored Cuban media and internet well. When he moved to the US, friends started asking him to send them online content they wanted. It turned into a business, but getting around government controls had Pascual's team always on their toes. Soon, they'd developed a solution that's uncensored the internet for thousands of Cubans.
Giorgio Patrini is fighting back against the constant threat of fake news.
'Deepfakes' is the disturbing phenomenon of videos or audio that use AI-based algorithms to substitute one person for another. Nearly indistinguishable from the real thing, they're used to harass, blackmail and commit fraud. But Patrini knew when technology creates a problem, it can also create a solution.
Kira Rakova believes our digital footprint is like a private journal. A breach of private online information is like publishing someone's diary without their consent. While there is increasing concern over personal data being used to manipulate and defraud, not everyone understands the risks and what they can do about them.
That's where Rakova and her team come in. They use privacy auditing to help people regain control of their data.
You've seen the first series of Defenders of Digital. Soon, we'll bring you a new series with changemakers from around the globe.
Subscribe to Tomorrow Unlocked on YouTube for the latest episodes.
Stalkerware is making headlines, for all the wrong reasons
How could her partner know where she is every day? Why does his girlfriend know who he's messaging? The answer could be a disturbing new technology that's fuelling a new wave of violence across the globe. The worst thing? It's as easy as downloading an app.
This app can install violence
Consumer surveillance and privacy are hot topics, not only because they're a cornerstone of our human rights, but because digital tech is fast-forwarding spying techniques. One dangerous type of software, fuelling a new wave of gender-based violence, is stalkerware.
Spouseware. Legal spyware. Stalkerware. It has many names, but the premise is simple: software used to secretly spy on another person's private life via a smart device. Usually installed covertly on mobile phones by so-called 'friends', paranoid partners (hence 'spouseware') and others, stalkerware tracks everything from the victim's physical location and internet activity to text messages and phone calls to friends. Though sometimes described as legal spyware, it's anything but. Victims usually can't spot it, while stalkers access a wide range of personal data – with serious repercussions for today's relationships.
A new form of relationship violence
Stalkerware is on the rise. In 2019, Kaspersky detected a 67 percent year-on-year increase in stalkerware on users' mobile devices globally; Germany, Italy and France were the most affected countries in Europe. Perhaps more shockingly, early data suggests the situation didn't improve in 2020. Alessandra Pauncz, Executive Director of WWP European Network, highlights how dangerous this threat is: "The effects of cyber-violence on women and girls are devastating because they're part of a continuum of violence that deprives them of their freedom."
It's fuelling a new type of violence across the globe, in which people take away their partner's freedom - in more ways than one. But aside from violating privacy, for victims and their abusers, there are other dangers. As a form of malware – short for "malicious software," a type of computer program designed to infect a user's computer and inflict harm – these programs can expose the victims' data and breach protection tech, increasing the chances of their device getting infected by other malware.
It might sound like attackers are predominantly men, but Eva Galperin, Director of Cybersecurity at the Electronic Frontier Foundation, notes that "this is not just a 'men spying on women' issue." From former spouses spying on people through internet-connected thermostats to men being outed as gay, Eva and her team have seen all manner of stalkerware victims and cases.
Fighting for online privacy and data protection
Before stalkerware hit the mainstream, Eva was leading the fightback. She was outraged by a hacker who abused women then threatened to compromise their devices if they spoke out. Now she's an influential voice in the fight against stalkerware, helping thousands of victims get their privacy back. Watch her story, featured in our series Defenders of Digital.
Education can combat stalkerware
Stalkerware isn't the easiest enemy to spot, but fighting it starts with education. As Alfonso Ramirez, General Manager at Kaspersky Spain, says: "It's quite hard to fight against stalkerware using only tech tools. However, it would really help if practitioners and users are aware stalkerware exists, know how to recognize the signs of this software being installed on their devices and what to do next."
As awareness rises, with Eva and others leading the charge, new global initiatives to fight back are cropping up, like DeStalk. Created by Kaspersky and NGO partners, DeStalk is an EU-wide project designed to educate people, professionals and the government on how to spot stalkers and deal with them.
Kira Rakova and her team help you regain control of your personal data.
Chicago's tiny not-for-profit taking on powerful institutions.
The history of surveillance is one of control. As monitoring technologies accelerate, one not-for-profit noticed a concerning rise in unethical police cell phone observation. Their objections led to new, stronger digital rights legislation.
Smartphones have improved our lives more than we could have imagined. We work on them, use them to take and store private photos and they know where we are at any moment. But with advanced surveillance techniques, phones have become a powerful way for law enforcement to observe and identify us, ethically or not.
Last year's change to remote life made us all digital. Are we now in danger of trading private digital data for convenient digital services? Check out Kaspersky's privacy predictions for 2021 and learn how this year is going to affect our privacy in cyberspace.
One Chicago not-for-profit, Lucy Parsons Labs, is demanding government agencies like the police and Immigration and Customs Enforcement (ICE) be more transparent about how and why they track people through their phones. Defenders of Digital episode three speaks with Lucy Parsons Labs' Executive Director Freddy Martinez about how law enforcement use technologies to covertly observe people, what it means for digital rights and how his team made US legal history.
Safe Sisters fight harassment and 'revenge porn' with education
Online abuse and cyber-harassment mean a disproportionate number of women remove themselves from crucial discussions. One not-for-profit is making a change for women in East Africa.
In the digital age, we not only send videos to friends and sing online karaoke with people we've never met – many are also using social media to fight for equality. But online harassment, image-based sexual abuse (also called 'revenge porn') and cyberattacks can stop women especially from joining the conversation that leads to real change. These cowardly acts also leave victims feeling embarrassed, ashamed and alone.
Safe Sisters is a fellowship program empowering girls and women, especially human rights activists, journalists and those in the media, to fight online abuse. In Defenders of Digital season two episode five, Safe Sisters' Immaculate Nabwire explains a landmark Ugandan image-based sexual abuse case that inspires her, the digital threats women in East Africa face and how her team are fighting for change.
Gamers are using their voice to overcome accessibility problems
By 2023, there could be over three billion gamers worldwide. But for some people with disabilities, taking part in this wildly popular passion can be frustrating to impossible. Now, one piece of tech is out to make slaying dragons and building civilizations accessible to all. Will it change the future of gaming?
Since 'Pong', one of the world's first video games, appeared in 1972, gaming has evolved in ways never imagined. But gaming still presents accessibility barriers to as many as 30 million people in the US living with an impairment.
Fridai is changing all that. The voice-activated, AI-powered assistant gives advice on anything gamers with disabilities may need, from hands-free options to being reminded of the game's objective. In Defenders of Digital series two, episode four, Mark Engelhardt, Fridai's Co-founder and CEO, talks about how the technology uses AI to create a new interface between humans and machines.
The world of digital privacy is changing.
Algorithms are everywhere, but they reflect the beliefs of the developers who train them. In episode two of our second season of Defenders of Digital, we learn about Homo Digitalis' work to expose algorithmic bias that impedes digital rights for millions. The first corporation they caught might surprise you.
Algorithms can improve our experience online. But one not-for-profit is going beyond the code for the greater good. Founded in 2018, Homo Digitalis has over 100 members. They promote transparency in algorithmic programming and safeguards against discrimination by algorithm.
Because programmers – as humans – have biases, algorithms learn those biases. When we hand power over to the algorithm, it may erode digital rights and impinge on freedom of expression without us knowing.
Homo Digitalis has already called out one tech giant for their moderation process. It could have impacted millions. Who was it? Find out in Defenders of Digital season two, episode two.
Meet Susie Hargreaves and her team.
Internet Watch Foundation (IWF) hunts down child sexual abuse images online and helps identify the children involved so that law enforcement can intervene. With the recent pandemic driving an increase in child abuse imagery, CEO Susie Hargreaves and her team are fighting back with a new piece of tech.
COVID-19 has fuelled a disturbing increase in child sex abuse material online. Our latest Defenders of Digital series begins by introducing Susie Hargreaves's team at Internet Watch Foundation (IWF) and explores their mission to make children safer. It also looks at how the pandemic has moved the goalposts and the new tech making a difference.
Formed in 1996 in response to a fast-growing number of online child abuse cases, IWF's 155 members include tech's biggest names, such as Microsoft and Google. They're united by the common goal to rid the internet of child sexual abuse images and videos.
The pandemic has made the issue of online child sexual abuse material more acute. During lockdown in the UK alone, IWF says 300,000 people were looking at online child sexual abuse images at any one time. What's worse, the material is always changing.
IWF has recently seen a worrying rise in self-generated sexual abuse material, chiefly among girls aged 11 to 13. The victim is groomed or coerced into photographing or filming themselves, and the sexual predator captures and distributes the material online. In the past year alone, the proportion of self-generated content among the material they remove has risen from 33 to 40 percent.
There are encouraging developments helping IWF with their work. Microsoft's PhotoDNA analyzes known child exploitation images, finds copies elsewhere on the internet, and reports them for removal. It helped IWF remove 132,700 web pages showing child sexual abuse images in 2019. How does it work?
First, PhotoDNA creates a unique digital fingerprint of a known child abuse image, called a 'hash.' It compares that fingerprint against other hashes across the internet to find copies. It reports copies it finds to the site's host. It's a fast and ingenious way to shut down child exploitation.
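PhotoDNA itself is proprietary, but the hash-and-compare idea it relies on can be illustrated with a much simpler stand-in. The sketch below uses a basic 'average hash' over a tiny grayscale image (represented as a list of pixel rows) and a Hamming distance between fingerprints – everything here is an illustrative assumption, not PhotoDNA's actual algorithm.

```python
# Illustrative sketch only: PhotoDNA's real algorithm is proprietary and far
# more robust. This shows the general hash-and-compare idea with a simple
# "average hash" over a tiny grayscale image given as a list of pixel rows.

def average_hash(pixels):
    """Reduce an image to bits: 1 where a pixel is brighter than the mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return tuple(1 if p > mean else 0 for p in flat)

def hamming_distance(h1, h2):
    """Count differing bits; a small distance suggests a near-identical image."""
    return sum(a != b for a, b in zip(h1, h2))

original = [[10, 200], [30, 220]]
slightly_edited_copy = [[12, 198], [33, 215]]  # brightness tweaked
unrelated = [[200, 10], [220, 30]]

h_orig = average_hash(original)
print(hamming_distance(h_orig, average_hash(slightly_edited_copy)))  # 0: likely a copy
print(hamming_distance(h_orig, average_hash(unrelated)))             # 4: different image
```

The key property is that a near-duplicate with small edits still lands close to the original fingerprint, while an unrelated image lands far away; PhotoDNA's real hash is engineered to survive resizing, cropping and re-encoding.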
Internet users who have stumbled across suspected child abuse images and reported them to IWF have been instrumental in starting a process that's led to many children in abusive situations receiving help. If you see an image or video you think may show child sexual exploitation, report it anonymously to IWF.
Want to learn how to better protect your kids when they're online? A free training course, based on the Skill Cup mobile app and developed with Kaspersky, is now available for parents to understand the challenges children face today.
Explore the course to better protect your kids online.
The Defenders of Digital video series profiles tech experts who guard the digital world. We'll soon launch season two, but for now, these are the five people whose stories started it all. They're critical to our future: they make our digital world safe, free, open and functional. Who are they, and what motivates them?
Eva Galperin was outraged by a hacker who abused women then threatened to compromise their devices if they spoke out. She has since become the most powerful voice in the fight against stalkerware, and in doing so, helped thousands of victims get their privacy back.
Security specialist Einar Otto Stangvik wanted to use his programming skills to do more than make money. He developed software to identify hackers stealing and sharing private photos from iCloud backups. One hacker turned out to be a prominent public figure.
Now Stangvik is onto an even more ambitious project that will help vulnerable children.
Salvi Pascual knows the heavily censored Cuban media and internet well. When he moved to the US, friends started asking him to send them online content they wanted. It turned into a business, but getting around government controls kept Pascual's team on their toes. Soon, they'd developed a solution that's opened up an uncensored internet to thousands of Cubans.
Giorgio Patrini is fighting back against the constant threat of fake news.
'Deepfakes' is the disturbing phenomenon of videos or audio that use AI-based algorithms to substitute one person for another. Nearly indistinguishable from the real thing, they're used to harass, blackmail and commit fraud. But Patrini knew when technology creates a problem, it can also create a solution.
Kira Rakova believes our digital footprint is like a private journal. A breach of private online information is like publishing someone's diary without their consent. While there is increasing concern over personal data being used to manipulate and defraud, not everyone understands the risks and what they can do about them.
That's where Rakova and her team come in. They use privacy auditing to help people regain control of their data.
You've seen the first series of Defenders of Digital. Soon, we'll bring you a new series with changemakers from around the globe.
Subscribe to Tomorrow Unlocked on YouTube for the latest episodes.
#fromkurilswithlove is raising funds for the conservation of the Kuril Islands
Interviewing Cambridge University Junior Research Fellow in AI, Dr. Beth Singler, about the future of work.
Should we be polite to AI assistants? Is AI strange because humans are strange? Are people getting their perceptions of robots from The Terminator franchise? I interview Dr. Beth Singler, anthropologist and Junior Research Fellow in artificial intelligence (AI) at the University of Cambridge, on the weird and wonderful ways we imagine AI, robotics and the future of work.
Dr. Beth Singler (@BVLSingler) is one of many experts appearing in Tomorrow Unlocked's new audio series Fast Forward. She examines the social, ethical and philosophical implications of AI and robotics, and has spoken at Edinburgh Science Festival, London Science Museum and New Scientist Live.
Ken: In your work, you engage people in conversations about the implications of AI and robotics. What do people think AI is?
Beth: For the public, it isn't one thing. People point to examples of AI being implemented, but it has different definitions for people. They draw presumptions from science fiction and media accounts of dangerous AI and scary robots. It's a malleable term – people say 'the algorithm' and mean AI.
Many think of AI in the workplace replacing human physical work, but we see AI taking on more knowledge labor and even emotional labor.
Ken: What kind of emotional tasks can AI do?
Beth: We increasingly see interfaces with AI that give simulated emotional responses. AI assistants do tasks for you but pleasantly and civilly. Call center work is already highly structured and scripted – an AI assistant or chatbot can take over that pleasantry system. How workplaces implement AI will influence how we connect with other humans.
Ken: Are we creating a human-machine social world we'll have to learn to interact with?
Beth: Yes. We're seeing these human-machine interactions playing out in different places – in the home, workplace, and care settings. We're having to understand that relationship and teach our children to negotiate it. There are discussions on whether children should be polite when using AI assistants. We're coming up with a new social format for interactions with AI.
Ken: I thought, of course you should be polite to machines – if only because one day they'll look at everything we've said and done and judge us accordingly. I want to be on the right side of them.
Beth: We also see arguments that you should be civil to AI assistants because this is how we should behave to other entities, whether human or non-human – that it reflects our natures. If we aren't civil to machines, it says more about us than their needs. There are many different answers to questions of politeness to AI assistants.
Ken: People find conversations with Cleverbot amusing when it asks things like, "Don't you wish you had a body?" or "What is God to you?" They don't consider that Cleverbot asks those questions because humans once asked it the same things. We're looking into a strange, distorting mirror and not recognizing our reflection.
Beth: Absolutely. There's a reason the Black Mirror TV series is called Black Mirror – it's a reflective surface for understanding ourselves. AI and machine responses come from data sets, and those involve biases.
It's a moment to reflect, for instance, on questions of personhood before we even get to anything like artificial general intelligence (AGI) or superintelligence. Should we be civil? If we say rude or sexist things to a female AI assistant, does that matter? These questions come out again and again.
I'm an anthropologist, meaning I study what humans do and think. These big questions are integral to our concept of what AI is. I've seen in my work engaging the public and seeing their sometimes hopeful, sometimes fearful responses that this will be a conversation we'll have for some time yet.
Talking about AI and the future of work gets down to big questions like, what is the human being for? If we define ourselves in terms of what we do and what we produce, we'll fear replacement.
Ken: I was at an airport buying a train ticket one afternoon. It was quiet, and the woman behind the counter said, "You should have been here yesterday – the automatic ticket machines had lost their calibration and were giving out wrong tickets. People adjust. Machines don't." I wondered if this ability to adjust is part of our relationship with machines.
Beth: It's interesting how much we adjust to machines. With the airport systems that use facial recognition software, I often have to take off my glasses, change my hair or bob down. We adjust ourselves to be accepted by the system.
You see this in how automation is changing the workplace. Some job interviews now involve facial recognition software, so candidates try to smile more on video. We're increasingly making changes to fit the machine-based system.
Ken: It suggests an element of trust. Where does trust fit in our relationship with machines?
Beth: Trust is key. We want to believe software that observes our responses in job interviews is fair and neutral, but we have examples where trust is let down.
In the UK in 2020, an algorithm that helped grade student exam papers damaged public trust – it penalized students at lower-achieving schools. In my work, I see examples of people trusting too much – they have an image of a superintelligence that doesn't exist yet. The phrase "blessed by the algorithm" captures it: people feel their YouTube content is promoted because the algorithm decided they should be lucky. They use the language of religious belief.
Society can only trust technology it understands. Digital literacy – understanding what AI is and isn't – is key to that.
Ken: We tend to understand things better as fiction. It's a way to get a grip on the world. But I get the feeling fiction's not a grip anymore, but a stranglehold. Is that fair?
Beth: I enjoy science fiction accounts of AI in their many interpretations, fears and hopes.
One of the hazards is a strict, negative story used too often. I'm a fan of the Terminator film franchise, but I see how dystopian imagery of robot uprisings shapes people's views of AI. And AI making crucial decisions about our future – whether we get a job or a mortgage, or how we're treated in hospital – may also be overshadowed by Terminator-like stories.
Ken: And it stops us noticing when AI does good things, like in medicine and traffic control. The robots are already among us, but they don't usually walk on two legs. They're more likely to be sorting out your airplane ticket.
Beth: Absolutely. The 'home help' robot concept from the 1950s and 60s would move around the house on two legs and perform tasks. It made real home automation invisible – a washing machine doesn't have that shiny futuristic look.
It's the same with recent examples like the robot vacuum cleaner – they become an invisible family member.
Ken: If we had the domestic robots imagined from the 1930s to the 1950s, we'd have to rebuild homes – they wouldn't fit.
Sophia the robot answers philosophical questions
Beth: There's much hype over embodied robots. For some, Hanson's Sophia robot represents the next step in AI and human evolution too. But what's Sophia's commercial use? It's unclear if she's useful in the home or office. What dream are we selling with bipedal robot servants that don't fit into how we use technology today? We've made space in our homes for AI assistants – the disembodied voice that answers our questions.
Ken: Interestingly, there's not often a 'man machine.' It's usually the 'woman machine,' from Maria in the film Metropolis to Olimpia in The Sandman and Hadaly in The Future Eve. Why is the woman and the machine conflated?
Beth: Look at the voices involved in choosing to make AI assistants female. There are arguments we find female voices more soothing, but for many academics, gendered AI seems an attempt to replicate the mother or wife.
We've moved on in society – women can choose to work in or outside the home. For some, that leaves a gap for intellectual and emotional labor. The always-responsive female figure, whether the wife or the mother, is reconstituted in machine form.
Ken: Another parallel with the robot is the child. Robots are becoming smaller – after all, a robot built like a male of average build might seem threatening to many people.
Beth: There's a move toward making robots cuter and replicating child and animal forms to reduce those threatening associations from science fiction. Think of Arnold Schwarzenegger's Terminator versus the therapeutic robot PARO, modeled on a baby harp seal.
Ken: Is there an element of trying to make work more fun? Perhaps work becomes more like play if you have an AI assistant who helps with the emotional labor?
Beth: Yes. There's a history of trying to gamify the workplace – developing 'third space' options that involve games or places where you can nap. Perhaps how we apply AI is a part of how we make the workplace more enjoyable. If our software chatted back to us, was entertaining and responded to us, it might seem less laborious.
Ken: Going back to emotional labor, programs could soften the edges of work relationships, whether online or in an office – I can imagine something like an 'emotional Roomba' (robot vacuum cleaner) allowing for moments of interaction.
Beth: We see examples of AI mediating between humans in conversation, like machine learning algorithms suggesting how to respond to emails or warning your tone is too harsh – softening the edges of our interactions at work is a developing space.
Ken: After some emails I've had, I see the value in something like that.
Beth: I also saw an application for divorced or divorcing couples helping conversations be more amicable for the benefit of any children. A machine learning algorithm warns you things like, perhaps you're being a bit sarcastic.
Ken: I'm scared of an algorithm that understands sarcasm. That will be the end of humanity.
Beth: There's a wonderful Tom Gauld cartoon about scientists trying to create a sarcastic bot. And the bot says to the scientist, "It's going great. This guy is a real genius."
Ken: What thought about AI and the future of work would you like to share?
Beth: I'd like people to consider how much we should change our behavior around AI in the workplace. People don't normally interact in purely rational ways. If we curtail that normal human messiness, we're not anthropomorphizing AI but robo-morphizing humans. If we make ourselves smile more to do well in an interview with facial recognition software, we limit ourselves. Although we might see AI as a human simulation, do we become a human simulation in response to AI?
Beth Singler features in the Tomorrow Unlocked audio series Fast Forward, Episode 5. Listen to Fast Forward and explore more interviews with featured experts.
Writing, directing and filming documentaries on nature's extremes and our relationship with it
Renan Ozturk pushes filmmaking to the extremes with rich documentary stories about the natural world and our relationship with it.
An explorer at heart, Renan spent his pre-filmmaking years living in the wilderness as a climber and artist, exploring and painting landscapes.
From climber and painter to National Geographic Adventurer of the Year in 2013, Renan now takes a different view of planet Earth through a video camera lens.
Renan pushes the art of filmmaking to the edge in his environmental documentaries, combining extreme expeditions with raw landscapes, outstanding drone footage and visual storytelling. Often collaborating with spouse and fellow director Taylor Rees, Renan's approach is the stuff of filmmaking dreams.
Drawn to earth's most demanding environments, Renan Ozturk focuses on human connection with the natural world. His use of strong visual identity draws on his experience as a landscape painter and adventurer, bringing out nature's best alongside compelling human stories.
For the full portfolio of his work, see Renan Ozturk's website, but here's a snapshot of his writing, directing, producing and more.
The Denali Experiment from Expedition Studios on Vimeo.
Co-Director
Denali is North America's highest mountain peak, in Alaska, US. In one of Ozturk's first films, a diverse team of athletes assembled by The North Face take on this behemoth, one of the biggest challenges on Earth.
Writer
This philosophical portrait of the human mind asks, how do we balance risk with reward? Renan teams up with creators of award-winning film All.I.Can for an epic insight into the quests of the world's greatest mountain sport athletes. Expect stunning footage and sporting extremes.
Cinematographer and subject
In the world of big-wall climbing, Shark's Fin on India's Meru Peak is the holy grail – 4,000 feet (1,200 meters) of sheer deadliness that has turned back elite climbers. Dwindling supplies, snowstorms and sub-zero temperatures make for quite a tale.
Cinematographer and co-director
Renan's first Everest climbing and filming trip is one for the history books. The precision and beauty of Sherpa's camera angles, achieved in such extreme places, are the stuff of wonder.
Cinematographer
Renan's dazzling camera work complements this stunning ode to mountains, a collaboration between the Australian Chamber Orchestra and BAFTA-nominated director Jennifer Peedom. Mountain examines the beauty of wild landscapes and our fascination with conquering them.
The Last Honey Hunter from Felt Soul Media on Vimeo.
Producer
It's steep, misty and packed with psychoactive honey only locals know how to handle. Renan and his team join the indigenous Kirat Kulung people as they scale deadly cliffs for an emotional final honey harvest.
Cinematographer
When the guardian of the Kuril Islands, an almost unreachable archipelago in Russia's Far East, hitched a ride with Renan and his team, no one expected the result. The team, including director Taylor Rees, set out to make a classic adventure story but got something even more powerful.
An expedition climber for The North Face, a photojournalist for Sony and National Geographic, and a documentary filmmaker, Renan Ozturk isn't hard to keep up with. Follow him on Instagram, Twitter and Facebook.
Robots in the workplace raise questions, says future tech audio series Fast Forward
The first robotic arms introduced to car assembly lines in the 1960s had the same 'reach' as a human worker. They did the same tasks in the same space. Even then, futurists predicted machines would change the world more with their information processing power than by performing manual tasks.
Listen to the full Fast Forward episode 5 on your future robot colleagues
Dr. Beth Singler, an artificial intelligence research fellow at the University of Cambridge, points out that automation in the workplace revolves around replacing human 'knowledge labor' as much as physical labor. "Artificial intelligence (AI) assistants do tasks for you, but in an emotionally accessible way – with pleasantries and civility."
Dr. Singler points to discussions about whether we should be civil back to AI assistants, like Alexa or Siri. "These questions are so integral to our conception of what AI is and could be that you can't have a conversation about AI without them coming up. It all gets down to philosophical questions of 'what is the human being for?'" But Dr. Singler also gives thought-provoking examples of good uses of AI's ability to show humanlike traits while being seen as neutral.
Kaspersky Principal Security Researcher David Emm says that, contrary to the human-robot conflict common in sci-fi worlds, we might see machines as too unthreatening. "Our research with University of Ghent on how people would react to robots in the workplace found 40 percent would unlock security doors for robots. People didn't question why a robot needed access to a secure area when it was delivering pizza."
If in real life we accept machines in ways we wouldn't accept humans, Emm says we may "share information or give access we shouldn't."
One person thinking about how robots could improve our relationship with both security and ethics is Alan Winfield, professor of robot ethics in the Bristol Robotics Lab. He believes robots should include technology that gives a data trail to understand contributing factors when they make a bad decision like causing an accident – an 'ethical black box,' like a flight data recorder.
Eventually, robots could use the data to self-evolve, he believes – improving their next generation without the human decision-making that past generations used to domesticate animals and breed higher-yielding crops. Listen to Fast Forward episode 5 to hear Professor Winfield explain his mind-blowing vision and how four universities are working to make it happen.
Listen to Fast Forward and explore more interviews with featured experts.

Subscribe to future episodes on these audio streaming services: