During the week of March 25, the European Parliament will hold the final vote on the Copyright Directive, the first update to EU copyright rules since 2001. Normally this would be a technical affair watched only by a handful of copyright wonks and industry figures, but the Directive has become the most controversial issue in EU history, literally: the petition opposing it has attracted more signatures than any other petition in change.org's history.
How did we get here?
European lawmaking is a marathon affair, and the Copyright Directive is no exception: it had been debated and refined for years, and as of spring 2017, it looked like all the major points of disagreement had been resolved. Then all hell broke loose. Under the leadership of German Member of the European Parliament (MEP) Axel Voss, acting as "rapporteur" (a sort of legislative custodian), two incredibly divisive clauses in the Directive (Articles 11 and 13) were reintroduced in forms that had already been discarded as unworkable after expert advice. Voss's insistence that Articles 11 and 13 be included in the final Directive has been a flashpoint for public anger, drawing criticism from the world's top technical, copyright, journalistic, and human rights experts and organizations.
Why can no one agree on what the Directive actually means?
"Directives" are rules made by the European Parliament, but they aren't binding law—not directly. After a Directive is adopted at the European level, each of the 28 countries in the EU is required to "transpose" it by passing national laws that meet its requirements. The Copyright Directive has lots of worrying ambiguity, and much of the disagreement about its meaning comes from different assumptions about what the EU nations do when they turn it into law: for example, Article 11 (see below) allows member states to ban links to news stories that contain more than a word or two from the story or its headline, but it only requires them to ban links that contain more than "brief snippets"—so one country might set up a linking rule that bans news links that reproduce three words of an article, and other countries might define "snippets" so broadly that very little changes. The problem is that EU-wide services will struggle to present different versions of their sites to people based on which country they're in, and so there's good reason to believe that online services will converge on the most restrictive national implementation of the Directive.
What is Article 11 (The "Link Tax")?
Article 11 seeks to give news companies a negotiating edge with Google, Facebook, and the few other Big Tech platforms that aggregate headlines and brief excerpts from news stories and refer users to the news companies' sites. Under Article 11, text that contains more than a "snippet" from an article is covered by a new form of copyright and must be licensed and paid for by whoever quotes it. Each country can define "snippet" however it wants, and the Directive does not stop countries from passing laws that cover the use of as few as three words from a news story.
What's wrong with Article 11/The Link Tax?
Article 11 has a lot of worrying ambiguity: it has a very vague definition of "news site" and leaves the definition of "snippet" up to each EU country's legislature. Worse, the final draft of Article 11 has no exceptions to protect small and noncommercial services, which means it reaches Wikipedia and even your personal blog. The draft doesn't just give news companies the right to charge for links to their articles; it also gives them the right to ban linking to those articles altogether (where the link includes a quote from the article), so sites can threaten critics who write about their stories. Article 11 will also accelerate market concentration in news media, because giant companies will license the right to link to one another but not to smaller sites, which will then be unable to point out deficiencies and contradictions in the big companies' stories.
What is Article 13 ("Censorship Machines")?
Article 13 is a fundamental reworking of how copyright works on the Internet. Today, online services are not required to check everything that their users post to prevent copyright infringement, and rightsholders don't have to get a court order to remove something they view as a copyright infringement—they just have to send a "takedown notice" and the services have to remove the post or face legal jeopardy. Article 13 removes the protection for online services and relieves rightsholders of the need to check the Internet for infringement and send out notices. Instead, it says that online platforms have a duty to ensure that none of their users infringe copyright, period. Article 13 is the most controversial part of the Copyright Directive.
What's a "copyright filter?"
The early versions of Article 13 were explicit about what online service providers were expected to do: they were supposed to implement "copyright filters" that would check every tweet, Facebook update, shared photo, uploaded video, and every other upload to see if anything in it was similar to items in a database of known copyrighted works, and block the upload if they found anything too similar. Some companies have already made crude versions of these filters, the most famous being YouTube's "ContentID," which blocks videos that match items identified by a small, trusted group of rightsholders. Google has spent $100m on ContentID so far.
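For the technically curious, here is a deliberately simplified sketch of the kind of matching such a filter performs: reduce each upload to a fingerprint, compare it against a database of fingerprints registered by rightsholders, and block anything that looks too similar. Everything in it (the hashing scheme, the 10% threshold, the function and class names) is an illustrative assumption; real systems like ContentID use far more sophisticated perceptual fingerprinting, but the basic shape is the same.

```python
import hashlib
from dataclasses import dataclass


@dataclass
class Claim:
    rightsholder: str
    fingerprint: set  # hashed shingles of one registered work


def fingerprint(data: bytes, window: int = 64) -> set:
    """Hypothetical fingerprinting: hash overlapping windows of the upload.
    Real filters use perceptual audio/video fingerprints, not raw byte hashes."""
    return {
        hashlib.sha1(data[i:i + window]).hexdigest()
        for i in range(0, max(len(data) - window, 1), window // 2)
    }


def check_upload(upload: bytes, claims: list[Claim], threshold: float = 0.10) -> str:
    """Block the upload if it is 'too similar' to any registered work.
    Note what is missing: no notion of quotation, parody, or incidental use."""
    up_fp = fingerprint(upload)
    for claim in claims:
        overlap = len(up_fp & claim.fingerprint) / max(len(up_fp), 1)
        if overlap >= threshold:
            return f"BLOCKED: matches work claimed by {claim.rightsholder}"
    return "ALLOWED"
```

Even this toy version shows the core problem: a filter can only measure similarity, not context, so a lawful quotation, parody, or incidental capture looks exactly like an infringing copy.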
Why do people hate filters?
Copyright filters are very controversial. All but the crudest filters cost so much that only the biggest tech companies can afford to build them, and most of those are US-based. What's more, filters are notoriously inaccurate: they are prone to overblocking legitimate material and lack checks and balances, which makes it easy for censors to remove material they disagree with. Filters also assume that the people claiming copyright are telling the truth, encouraging laziness and sloppiness that catches a lot of dolphins in the tuna net.
Does Article 13 require "filters?"
Axel Voss and other proponents of Article 13 removed the explicit references to filters from the Directive's text in order to win a vote in the European Parliament. But the new text of Article 13 still demands that the people who operate online communities somehow examine and make copyright assessments about everything their users post: hundreds of billions of social media posts, forum posts, and video uploads. Article 13's advocates say that filters aren't required, but when challenged, not one has been able to explain how to comply with Article 13 without using filters. Put it this way: if I pass a law requiring you to produce a large African mammal with four legs, a trunk, and tusks, I don't have to use the word "elephant" for everyone to know what the law demands.
Will every online service need filters?
Europe has a thriving tech sector, composed mostly of "small and medium-sized enterprises" (SMEs), and the politicians negotiating the Directive have been under enormous pressure to protect these Made-In-Europe firms from a rule that would wipe them out and hand permanent control over Europe's Internet to America's Big Tech companies. The political compromise that was struck makes a nod to protecting SMEs but ultimately dooms them. The new rules grant partial limits on copyright liability only for the first three years of an online service's existence, and even those limits are mostly removed once a firm attains more than 5 million unique visitors (an undefined term) in a given month; once a European company hits annual revenues (not profits!) of €10 million, it has all the same obligations as the biggest US platforms. That means the 10,000,001st euro a company earns comes with a whopping bill for copyright filters. There are other, vaguer exemptions for not-for-profit services, but with no clear description of what they would mean.

As with the rest of the law, much will depend on how each individual country implements the Directive. France's negotiators, for example, made it clear that they believe no Internet service should be exempted from the Article's demands, so we can expect their implementation to provide the narrowest possible exemptions. Smaller companies and informal organizations will have to prepare to lawyer up in those jurisdictions, because that's where rightsholders will choose to sue. A more precise, and hopefully more equitable, reading could eventually be settled by the European Court of Justice, but such suits will take years to resolve. The major rightsholders and Big Tech will strike their own compromise licensing agreements outside the courts, and both will have an interest in limiting these exceptions, so it will fall to those same not-for-profit services and small companies to bear the costs of winning those cases and to live with legal uncertainty until they are decided.
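To make the thresholds easier to follow, here is a minimal sketch of how they interact, based on my reading of the compromise text as described above; the function name, the tier labels, and the exact boundary behaviour are illustrative assumptions, not the Directive's wording.

```python
def article13_obligations(age_years: float,
                          annual_turnover_eur: int,
                          monthly_unique_visitors: int) -> str:
    """Rough sketch of the SME carve-out described above; the precise
    boundaries and the definition of "unique visitors" are among the
    ambiguities left to national implementations."""
    if age_years >= 3 or annual_turnover_eur > 10_000_000:
        return "full obligations (effectively: build or buy copyright filters)"
    if monthly_unique_visitors > 5_000_000:
        return "partial limits mostly removed"
    return "reduced obligations (best efforts to license, takedown on notice)"


# The cliff-edge described above: one extra euro of turnover flips the rule.
print(article13_obligations(2, 10_000_000, 1_000_000))  # reduced obligations
print(article13_obligations(2, 10_000_001, 1_000_000))  # full obligations
```

The point of the example is the cliff-edge: a single additional euro of turnover, or a single additional year of operation, moves a service into the full-filtering category.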
What about "licenses" instead of "filters"?
Article 13 only requires companies to block infringing uses of copyrighted material, and its advocates argue that online services won't need to filter if they simply license the catalogues of the big entertainment companies. But almost all creative content put online (from this FAQ to your latest tweet) is instantly and automatically copyrighted. Despite what EU lawmakers seem to believe, we don't live in a world where a few large rightsholders control the copyright to the majority of creative works. Every Internet user is a potential rightsholder. All three billion of them. Article 13 doesn't just require online services to police the copyrights of a few giant media companies; it covers everyone, meaning that a small forum for dog fanciers would have to show it had made "best efforts" to license photos from other dog fancier forums that its own users might repost. Every copyright holder is covered by Article 13. Even if an online platform could license all the commercial music, books, comics, TV shows, stock art, news photos, games, and so on (and assuming that media companies would sell it those licenses), it would still somehow have to make "best efforts" to license other users' posts or stop its users from reposting them.
Doesn't Article 13 say that companies shouldn't overblock?
Article 13 has some language directing European countries to make laws that protect users from false copyright takedowns, but while EU copyright law sets out financial damages for people whose copyrights are infringed, you aren't entitled to anything if your legitimate posts are censored. So if a company like Facebook, which sees billions of posts a day, wrongly blocks even one percent of those posts, tens of millions of posts will be censored every day, and even if only a fraction of the affected users appeal, Facebook would have to screen and rule on millions of appeals every single day. If Facebook makes those users wait days or weeks or months or years for a ruling, or hires moderators who make hasty, sloppy judgments, or both, Article 13 gives those users no right to demand better treatment. Even the minimal protections in Article 13 can be waved away by platforms simply by declaring that a user's speech was removed for a "terms of service violation" rather than as copyright enforcement.
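A quick back-of-the-envelope calculation shows the scale; the daily post volume and appeal rate below are illustrative assumptions, not Facebook's actual figures.

```python
posts_per_day = 2_000_000_000   # assumed: "billions of posts a day"
false_block_rate = 0.01         # the 1% error rate used above
appeal_rate = 0.10              # assumed: only 1 in 10 affected users appeals

wrongly_blocked = posts_per_day * false_block_rate  # 20,000,000 posts per day
appeals = wrongly_blocked * appeal_rate             # 2,000,000 appeals per day
print(f"{wrongly_blocked:,.0f} wrongful blocks and {appeals:,.0f} appeals per day")
```

Even with conservative assumptions, the appeal queue alone dwarfs the moderation capacity of any platform smaller than the very largest.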
Do Article 13's opponents only want to "save the memes?"
Not really. It's true that filters, and even human moderators, would struggle to figure out when a meme crosses the line from "fair dealing" (a suite of European exceptions to copyright for things like parody, criticism, and commentary) into infringement, but "save the memes" is mostly a catchy way of talking about all the things that filters struggle to cope with, especially incidental use. If your kid takes her first steps in your living room while music is playing in the background, the "incidental" sound could trigger a filter, meaning you couldn't share an important family moment with your loved ones around the world. Or if a news photographer takes a picture of police violence at a demonstration, or the aftermath of a terrorist attack, and that picture captures a bus ad featuring a copyrighted stock photo, that incidental image might be enough to trigger a filter and block this incredibly newsworthy image in the days (or even weeks) following an event, while the photographer waits for a low-paid, overworked moderator at a big platform to review their appeal. Article 13 also affects independent creators whose content is used by established rightsholders: current filters frequently block original content, uploaded by the original creator, because a news service or aggregator subsequently used that content and then asserted copyright over it. (Funny story: MEP Axel Voss claimed that AI can distinguish memes from copyright infringement, on the basis that a Google image search for "memes" displays a bunch of memes.)
What can I do?
Please contact your MEP and tell them to vote against the Copyright Directive. The Copyright Directive vote is practically the last thing MEPs will do before they head home to start campaigning for EU elections in May, so they're very sensitive to voters right now! And on March 23, people from across Europe are marching against the Copyright Directive. The pro-Article 13 side has the money, but we have the people!