How ProPublica Became Big Tech’s Scariest Watchdog

The nonprofit is fighting fire with fire, developing algorithms and bots that hold Facebook and Amazon accountable.

Facebook is a political battleground where Russian operatives work to influence elections, fake news runs rampant, and political hopefuls use ad targeting to reach swing voters. We have no idea what goes on inside Facebook’s insidious black box algorithm, which controls the all-powerful News Feed. Are politicians playing by the rules? Can we trust Facebook to police them? Do we really have any choice?

One emerging way to hold tech companies like Facebook accountable is to use similar technology to figuratively poke at that black box, gathering data and testing hypotheses about what might be going on inside, almost like early astronomers studying the solar system.

It’s a tactic being pioneered at the nonprofit news organization ProPublica by a team of reporters, programmers, and researchers led by award-winning reporter Julia Angwin. Angwin’s team specializes in investigating algorithms that impact people’s lives, from the Facebook News Feed to Amazon’s pricing models to the software determining people’s car insurance payments and even who goes to prison and for how long. To investigate these algorithms, they’ve had to develop a new approach to investigative reporting that uses technology like machine learning and chatbots.

“The one thing that’s been so interesting about the algorithms project that I would never have guessed is that we’ve ended up having to build algorithms all the time,” says Angwin, who has been writing about data and surveillance for more than a decade. It’s a resource-intensive, deeply challenging task in a media landscape where few are willing to invest in large projects, but Angwin views her team’s reporting as essential to holding big tech companies accountable and providing lawmakers with concrete evidence of wrongdoing. “We’re going to get police hats for our New Year’s presents,” she jokes.

[Image: Lucas Waldron/ProPublica]
ProPublica didn’t start off using technology as an investigative tool. The team got its foothold in 2016 with a blockbuster story about criminal risk scores. Their report revealed that these scores, which are generated by an algorithm and used by judges to make decisions about bail and prison sentences, are rife with systemic racism: Black men were often rated as higher risk than white men with very similar criminal histories. (Independent researchers have disputed the results.) The reporting was done the old-fashioned way, through Freedom of Information Act requests. Angwin’s team first dipped its toes into building algorithms for a story on Amazon’s pricing algorithm. A former programmer for ProPublica ran tests, ironically using AWS servers, on all kinds of hypotheses–such as whether Amazon charged more on mobile than on desktop, or more for Prime members than for non-Prime members–ultimately finding that Amazon’s listings prioritize products the company itself sells rather than those that offer customers the best price.
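
For readers curious what that kind of hypothesis testing can look like in practice, here is a minimal sketch–not ProPublica’s actual code–that compares prices recorded for the same products under two browsing conditions. The product IDs and prices are made up for illustration:

```python
# Illustrative sketch only -- not ProPublica's actual methodology or code.
# Assumes prices for the same products have already been scraped under two
# conditions (e.g., browsing as a Prime member vs. logged out).

def compare_prices(prices_a, prices_b, label_a="Prime", label_b="non-Prime"):
    """Report products whose price differs between the two browsing conditions."""
    diffs = []
    for product_id, price_a in prices_a.items():
        price_b = prices_b.get(product_id)
        if price_b is None:
            continue  # product not observed under both conditions
        if abs(price_a - price_b) > 0.01:  # ignore rounding noise
            diffs.append((product_id, price_a, price_b))
    print(f"{len(diffs)} of {len(prices_a)} products priced differently "
          f"({label_a} vs. {label_b})")
    return diffs

# Hypothetical observations for three products
prime     = {"B0001": 19.99, "B0002": 8.49, "B0003": 34.00}
non_prime = {"B0001": 19.99, "B0002": 8.99, "B0003": 34.00}
compare_prices(prime, non_prime)
```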

Inevitably, the reporters turned their gaze toward Facebook. “Honestly, it was getting a little awkward doing a series on algorithms and not having written anything on Facebook,” Angwin says. “The algorithm that most people encounter every day that they’re most intuitively in touch with is actually the News Feed.”

But cracking the News Feed is no easy task. It’s something no one has done outside of Facebook, which keeps the inner workings of the influential algorithm secret. Like the rest of us, Angwin and her team don’t have access to the reams of data Facebook has on its users or what information the News Feed algorithms prioritize, so they looked for other ways in.

Their big break came when a source leaked documents laying out Facebook’s secretive censorship rules. After their story on the rules came out, readers reached out to Angwin to say they had seen Facebook fail to enforce those rules. Their accounts gave the ProPublica team a hypothesis to investigate, and to collect more evidence about whether the company’s censors were following their own rules, the team built its own Facebook Messenger bot.

[Screenshot: courtesy ProPublica]
The idea was to crowdsource people’s stories about Facebook’s behavior, from posts that had been taken down by censors to posts that had been reported as offensive and yet remained. Then, they planned to assess each example and see if it really did violate Facebook’s censorship rules–and, hopefully, determine whether the company was following its own guidelines.

“We chose to let people directly contact us through Facebook,” says Madeline Varner, a programmer who works with the team and who built the team’s chatbot. “It made the most sense because it was the platform they were already on and wanted to talk about.”

Varner, along with ProPublica engagement reporter Ariana Tobin, carefully crafted the bot’s questions, many of which were multiple choice. They went through rounds of user testing and finally published the bot on Facebook Messenger. (To reach people who had been kicked off the platform or who no longer used it, they also built a stand-alone survey form.)
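
The mechanics of such a bot are fairly simple. Here is a minimal sketch–not ProPublica’s code–of a webhook that asks one multiple-choice question using Messenger’s quick replies. The page token, API version, and question wording are all assumptions for illustration, and the verification handshake is omitted:

```python
# Minimal sketch of a survey-style Messenger bot; not ProPublica's code.
# Assumes a Facebook Page, a page access token, and a webhook already
# registered with the Messenger Platform; field names follow the platform's
# webhook/Send API format and may differ across API versions.
import os
import requests
from flask import Flask, request

app = Flask(__name__)
PAGE_TOKEN = os.environ.get("PAGE_ACCESS_TOKEN", "")
SEND_URL = "https://graph.facebook.com/v2.6/me/messages"

# One multiple-choice question, offered as Messenger "quick replies"
QUESTION = {
    "text": "Was your post taken down, or did a reported post stay up?",
    "quick_replies": [
        {"content_type": "text", "title": "Taken down", "payload": "TAKEN_DOWN"},
        {"content_type": "text", "title": "Stayed up", "payload": "STAYED_UP"},
    ],
}

def send(recipient_id, message):
    """Send a message back to the user via the Send API."""
    requests.post(
        SEND_URL,
        params={"access_token": PAGE_TOKEN},
        json={"recipient": {"id": recipient_id}, "message": message},
        timeout=10,
    )

@app.route("/webhook", methods=["POST"])
def webhook():
    body = request.get_json()
    for entry in body.get("entry", []):
        for event in entry.get("messaging", []):
            sender = event["sender"]["id"]
            if "message" in event:
                # Record the answer (here: just print it) and ask the question
                print("response from", sender, event["message"].get("text"))
                send(sender, QUESTION)
    return "ok"
```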

They got thousands of responses. About 900 were usable, and the team narrowed those down to 49 of the most egregious examples of Facebook not following its own rules. When ProPublica took its findings to the company, Facebook said that it does sampling to ensure that censors are following the rules–but it also admitted that it had made a mistake on 22 of the 49 posts. In response to the investigation, Facebook also added age as a protected category in its censorship rules, which means the group “black children” is now protected from hate speech. However, “Muslim immigrants” is still treated as a subset of a protected category rather than a protected group in its own right, so slurs against them are still not deemed hate speech.

[Screenshot: courtesy ProPublica]
“They say they do their own sampling, they have all these service centers where people work hourly, looking at images around the world and making quick judgments,” Angwin says. “They say they do their own sampling of those places, where they’ll pick a bunch and see how well they’re following the rules. But like, no one’s ever seen these data sets. This is a secret, extralegal judicial system that governs speech. And they can say whatever they want about how they enforce it, but this is our ability to try and see how they actually do.”

Next, the team branched out to tackle advertisements on the platform. Instead of trying to reverse engineer which people see certain ads on Facebook, they started buying ads themselves. This simple insight led them to discover, starting in 2016, that Facebook allowed advertisers to exclude people by race from housing ads, to target “Jew Haters” specifically, and to prevent older people from seeing job listings.

In the aftermath of ProPublica’s explosive stories, Facebook said it had built a system to reject discriminatory ads (though when Angwin tested it again, she found the platform was still letting advertisers exclude people by race from housing ads). The company removed anti-Semitic categories like “Jew Haters” and promised to do a better job of monitoring ad targeting categories. It has denied that showing job postings only to people of a certain age is against the law; lawsuits over the practice have already begun.

“[That] so much of this personal data is being used to make decisions about which ads you get sounds really innocuous until you realize there’s a job ad you’re not going to see because you’re too old,” Angwin says.

[Screenshot: courtesy ProPublica]
The team’s current project is a global investigation into how political ads function on Facebook. But instead of buying ads, this time they built a browser extension called the Political Ad Collector that scrapes the ads from your Facebook feed and uses a machine learning algorithm to determine which are political and which are not. “We actually built machine learning AI to determine what is a political ad and what is not a political ad,” Angwin says. “It turns out that’s how you police an algorithm. You need some algorithms sometimes to do that kind of accountability.”
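
Classifying ad text is a standard machine learning task, and a rough version can be built with off-the-shelf tools. The sketch below is illustrative only–it is not ProPublica’s model–and the handful of labeled example ads are invented:

```python
# Illustrative sketch of a political-ad text classifier; not ProPublica's model.
# Assumes a labeled corpus of ad texts; the few examples below are made up.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

ads = [
    "Vote for Jane Doe for Senate on November 6",     # political (hypothetical)
    "Tell Congress to protect our health care",       # political (hypothetical)
    "50% off running shoes this weekend only",        # not political
    "Our new fall collection has arrived in stores",  # not political
]
labels = [1, 1, 0, 0]  # 1 = political, 0 = not political

# Bag-of-words features plus a linear classifier: a common text baseline
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(ads, labels)

# Probability that a new ad is political
print(model.predict_proba(["Join our rally to get out the vote"])[:, 1])
```

In practice, a classifier like this would need thousands of labeled ads per country, since the vocabulary and style of political advertising differ from place to place.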

This week, they published some of their first findings on political ads in the U.S. Many of the ads don’t include the “I approve this message” disclaimers that are mandatory for television and print ads.

The Political Ad Collector, or PAC for short, is the team’s main initiative in the run-up to the 2018 midterms. The extension, which is designed to collect the ads themselves without gathering users’ personal information, debuted in Germany before last September’s elections and is now used in newsrooms in a total of eight countries, including Switzerland, Italy, and Denmark. (ProPublica built custom classifiers for each country so the tool can recognize the distinct style of political ads in each.)

Meanwhile, Angwin’s team is testing hypothesis after hypothesis–a crucial part of the process, even if few ideas turn into a story. Even more important? Having specific evidence of wrongdoing–something that lawmakers can use to act. “Our goal is to be as concrete as possible about the problems,” Angwin says. “There’s plenty of people writing: Tech platforms! They have too much power! But that’s not concrete enough for a policymaker to do anything about it. So we’re trying to bring more data to the table.”

She calls it “small data.” It’s nothing in comparison to the kind of data that companies like Facebook, Google, and Amazon have, but it’s more data than many tech journalists have access to. That’s something Angwin hopes will change, but holding tech giants accountable is resource- and time-intensive. “All of these companies deserve the same amount of scrutiny. It’s just there’s only a few of us,” she says. “But there’s no reason that every tech company shouldn’t be under this type of scrutiny, and also non-tech companies, because tech invades everything.”

[Screenshot: courtesy ProPublica]
Part of that also means educating people about how algorithms work and what kind of insidious effect they can have, which might then inspire more leads for new stories. “We hope that plugging some of that in will at least inspire people to talk to us more,” Tobin, the engagement reporter, says.

“And get mad,” Angwin adds. “Outrage is the new porn. That was my line for 2017. One thing we found through looking through political ads, a lot of them were just frauds and malware and scams because people have realized that’s the only effective thing that people will click on. So it’s being used for everything.”

It’s a depressing picture, with Angwin and her team–along with a small but growing number of researchers and journalists doing similar work–standing as bulwarks against the automated decision-making that already shapes the way people live. When we’re confronted with findings like ProPublica’s, about the duplicitous ways these algorithms manipulate what we see and understand, it’s hard not to feel disillusioned. What do we do? Can we effect any real change?

“I guess I always console myself with the environmental analogy: Our rivers were on fire for 50 years, and then we were like, you know what, maybe rivers shouldn’t catch fire from pollution,” Angwin says. “I just think it takes a while for people to wake up.”

About the author

Katharine Schwab is the deputy editor of Fast Company's technology section. Email her at [email protected] and follow her on Twitter @kschwabable