21 Nov 2018

ENDitorial: Facebook can never get it right

By Bits of Freedom

In 2017, a man posted live footage on Facebook of a murder he was committing. The platform decides whether you get to see this shocking footage or not – an incredibly tricky decision to make. And not really the kind of decision we want Facebook to be in charge of at all.

I didn’t actually see the much-discussed footage of the murder – and I really don’t feel the need to see it. The footage is undoubtedly shocking, and watching it would certainly leave me feeling very uncomfortable. When I close my eyes, I unfortunately have no trouble conjuring up the picture of Michael Brown after he had just been shot. Or the footage of the beheading of journalist James Foley. The thought of it is enough to make me sick.

Should these kinds of images be readily available? I certainly can’t think of a straightforward answer to this question. I would even argue that those who claim to know the answer are overlooking a lot of the finer nuances involved. The images will, of course, leave the bereaved family and friends reeling in pain. Every time one pops up somewhere, they have to go through it all over again. You wouldn’t want people to accidentally come across such images either: not everyone will be affected equally, but the images are distressing nonetheless. No one remains indifferent.

That said, I still have to admit that visuals are sometimes essential in getting a serious problem across. A while back I offered a journalist some information that we both agreed was newsworthy, and we also agreed it was important to bring it to people’s attention. Even so, his words were: “You’ve got a smoking gun, but where is the dead body?” I didn’t realise then that this sometimes needs to be taken quite literally. Sometimes, the photographs or footage of an awful event can act as a catalyst for change.

Without iconic images such as the one of Michael Brown’s body, discrimination by police in the United States might not have been given much attention. And we would probably never have seen reports on the countless mass demonstrations that ensued. The fact that we haven’t forgotten the Vietnam War has something to do with a single seminal photograph. Had we never seen these images, the terrible events behind them could never have made such a lasting impact, and they would not be as fresh in our collective memory today.

I have no doubt that these images sometimes need to be accessible – the question is when. When is it okay to post something? Should images be shared straight away or not for a while? With or without context, blurred or in high definition? And perhaps most importantly: who gets to decide? Right now, an incredible amount of power lies with Facebook. The company controls the availability of news items for a huge group of users. That power comes with an immense responsibility. I wouldn’t like to be in Facebook’s shoes, as Facebook can never get it right. There is always going to be someone among those two billion users who will take offence, and for legitimate reasons.

But there, for me at least, lies part of the problem – and maybe also part of the solution. Facebook decides for its users what they get to see. Many of the questions floating around about Facebook’s policy would weigh less heavily on people’s minds if Facebook weren’t making important decisions on behalf of its users, and if instead users themselves were in control. The problem would be far less worrisome if users were actually given a choice.

One way to make that possible is to go back to a system where you can choose between a large variety of providers of similar services. Not one Facebook, but dozens of Facebooks, each with its own profile. Compare it with the newspapers of the past. Some people were satisfied with a subscription to The New York Times while others felt more at home with The Sun. And where the editors of one newspaper would include certain images on its front page, the editors of another newspaper would make a different choice. As a reader, you could choose what you subscribed to.

But even without such fundamental changes to the way the internet is set up, users might be able to get more of a say – for instance if they can do more to manage their flood of incoming messages. Get rid of certain topics, or favour messages with a different kind of tone. Or prioritise messages from a specific source if they are the only ones writing about something. Users may not even have to make all those decisions by themselves if instead they can rely on trusted curators to make a selection for them. And even though that sounds quite straightforward, it really isn’t. That one interface has to accommodate those same two billion users, and shouldn’t create any new problems – like introducing a filter bubble.

So what are we supposed to do about that shocking murder footage? I really wouldn’t know. There is no straightforward and definite answer to that question. But one thing is very clear: it is not a good idea to leave those kinds of decisions to a major tech company that holds too much power and does not necessarily share our interests. One way out is to give users more choice, and consequently more control, over what they see online.

Facebook can never get it right (20.11.2018)
https://www.bitsoffreedom.nl/2018/11/20/facebook-can-never-get-it-right/

Bits of Freedom
https://www.bitsoffreedom.nl/

(Contribution by Rejo Zenger, EDRi member Bits of Freedom, the Netherlands; translation from Dutch by Marleen Masselink)

21 Nov 2018

Whom do we trust with the collective good?

By Bits of Freedom

Wittingly and unwittingly, we increasingly leave the care of society to tech companies. This trend will prove detrimental to us.

In search of gentrification

Gentrification is a process in which a neighbourhood attracts more and more well-to-do residents, gradually driving out the less affluent. The Dutch designer Sjoerd ter Borg, collaborating with Radboud University Nijmegen among others, is researching whether we can use technology to recognise this urban process from large quantities of visual information.

One of the first projects originating from this research makes use of Google Street View archives. While looking for indications of gentrification in Seoul, South Korea, Ter Borg came across the beach umbrella. Street vendors use these umbrellas to stand out while staying in the shade. Beach umbrellas have long dominated the streets of Seoul, but they are disappearing from gentrified neighbourhoods. Because Google photographs streets at regular intervals and Street View offers the option to go “back in time”, the service provides a perfect visualisation of this phenomenon. This enabled Ter Borg to create the intriguing film Beach Umbrella, which touches on urgent questions concerning access to and use of data.

Data on the present means control over the future

Whether you want to virtually walk the streets of Seoul or check out the gardens of a Yorkshire village, for the best results you’ll have to go to a US company. Of course there are other initiatives, such as the data portal Amsterdam City Data made by the City of Amsterdam, that collect and connect data and make it available. The scale on which this is done, however, is incomparable. The Google Street View archives are unequalled. Moreover, Google enhances the data collected by its Street View cars with, for instance, satellite images. Local authorities provide the company with information on the design of public space; public transport companies provide timetables and real-time information on disruptions. The mobile phones we all carry with us serve as useful sensors for Google.

And the result? If Google were to go into real-estate development, it would have a head start. If it were to make a bid for Amsterdam’s public-transport service, the company could without a doubt run it more efficiently than the city’s own transport company. Thanks to its access to immense quantities of data, combined with a virtually unlimited capacity for complex analyses, Google can make better-founded assumptions about future developments. And that’s the most painful bit: as a logical consequence of our blind faith in big data, the future will be shaped by the parties with the most data and the greatest computing power at their disposal.

Increasingly, public data is in private hands

More and more, those powerful parties are private companies. It’s a disturbing thought that Google has more data on Amsterdam’s urban development than the city itself. Not only do other companies find it increasingly difficult to compete with Google, but the public sector is being left behind as well. We are moving toward a situation in which more and more of our public data is privately owned – and leased back to us under commercial terms.

This is an untenable situation, once you realise that Google gets to decide which data is collected and which part of it is disclosed to whom and under what conditions. Should Google be unhappy with the direction a certain research project is taking, or with the products and services being built on the basis of “its” data, it can simply shut down access to those data. If Google’s interests turn out to conflict with citizens’ interests, we have no democratic means to hold it accountable or to influence its plans, as we would with local government. If we are not careful, collective means to cultivate the collective good will become increasingly scarce.

The future is nearer than you think

Do you think these are problems of the distant future? Think again. In Canada right now there is a vehement discussion about the development of Toronto, where Google’s sister company Sidewalk Labs is developing a “smart neighbourhood”.

Everything that’s been going wrong for years wherever Google steps in is also going wrong in Toronto. One after another, privacy experts are leaving the project disillusioned; inhabitants don’t get a say in what is happening, nor will they be able to opt out once it’s done; and all the data generated for the project seems to become the property of Sidewalk Labs, enhancing the Google family’s hegemony even further.

The promise of data blinds us to its shortcomings

A little digression. During his stay in Seoul, Ter Borg picked the beach umbrella as the symbol of a non-gentrified street view: an easily recognisable object that appeals to the imagination of people outside the South Korean capital as well. However, it of course strongly simplifies a complex and layered process. Collected data, and the data deduced from it, may approximate reality, but they are not reality. In the context of Ter Borg’s project that’s not a problem; in the context of decisions on urban problems and developments, it is. Do we really trust the Googles of the world to build our future on the basis of an illusion?

What are the consequences for our society?

Having more data, or “as much data as Google”, is not the solution. What is needed is a broader vision of data collection and use – for what purpose, under which conditions, by whom, and of what data – and of the special nature of public data. If we believe that there are public matters where the interests of society outweigh commercial interests, then we must also protect the data related to those public matters.

Whom do we trust with the collective good? (only in Dutch, 15.11.2018)
https://www.bitsoffreedom.nl/2018/11/15/wie-vertrouwen-we-het-collectieve-goed-toe/

Google’s “Smart City of Surveillance” Faces New Resistance in Toronto (13.11.2018)
https://theintercept.com/2018/11/13/google-quayside-toronto-smart-city/

City of Amsterdam dataportal (only in Dutch)
https://data.amsterdam.nl

Bits of Freedom
https://www.bitsoffreedom.nl/

(Contribution by Evelyn Austin, EDRi member Bits of Freedom)

21 Nov 2018

Terrorist content regulation – prior authorisation for all uploads?

By Joe McNamee

The European Commission’s proposal for a Regulation on Terrorist Content Online is a complex and multi-layered instrument. On the basis of an “impact assessment” that fails to provide meaningful justification for the measures, it proposes:

  • Obligations to take down content on the basis of removal orders within one hour
  • An arbitrary system of referrals of arbitrarily designated potentially dangerous (but not illegal) content to internet service providers. The removal of this content is decided by the provider, based on their terms of service and not the law
  • Unclear, arbitrary “proactive” measures to be imposed on an unclear subset of service providers to remove unspecified content.

However, buried deep in this chaotic legal drafting are explanatory notes (“recitals”) that go further than what is explicit in any of the headline measures in the proposal:

  • Recital 18 introduces the notion that “reliable technical tools” (artificial intelligence software, in other words) may be used by service providers to “identify new terrorist content”.
  • Recital 19 goes on to say that a “decision to impose such specific proactive measures should not, in principle, lead to the imposition of a general obligation to monitor”. “In principle” means that it should not, but it may. In this context, this is the only way these words can be interpreted. Moreover, this recital also gives the national “competent authority” the option to force the use of technical measures upon service providers.

The text goes on to explain that, in unspecified circumstances, Member States may derogate from their obligation (under the E-Commerce Directive) not to impose a general monitoring obligation on internet service providers. When doing so, they should “provide appropriate justification” (to whom is not explained).

What does this mean? It means that the proposal explicitly tells European Member States that they have the option to require not “just” the monitoring of all uploads to filter out known terrorist content, but the use of algorithms to review all content while it is being uploaded. Permission for the upload would be algorithmically granted or denied. If the algorithms involved deny permission, personal data may be stored and made available to law enforcement authorities.

The proposal is ill-drafted and lacks evidence to justify the extreme measures it contains. It is unclear, on the basis of the data provided in the Impact Assessment, how such measures would, even in theory, address or resolve the problem at stake. What is clear is that the proposal will give big tech companies more power to scan and delete information online without accountability. Will the European Parliament be able to fix this? Time – and our elected representatives – will tell.

(Contribution by Joe McNamee, EDRi)

21 Nov 2018

The TERR Committee votes on its irreparable draft Report

By Chloé Berthélémy

The draft Report of the rather secretive work carried out by the European Parliament’s Special Committee on Terrorism (TERR), released in June 2018, raised major concerns, as previously reported in the EDRi-gram. On 13 November 2018, the members of the TERR Committee voted on the amendments to the draft.

If they had been adopted, some of the proposed amendments could have improved the text, and we would be telling a different story today. Unfortunately, the Report seems to be beyond salvation. Despite its non-binding nature, the text sets dangerous precedents and does not even respect the mandate of the Committee “to assess the impact of the EU anti-terrorism legislation and its implementation on fundamental rights”. Instead, it maps current initiatives related (or not) to counter-terrorism and recommends considering proposals that further erode fundamental rights online and offline in the European Union. The Report will soon be tabled for a plenary vote in the European Parliament. Its chances of being significantly amended or rejected are, however, slim.

Fundamental rights deserve more than a preamble

The draft Report originally included a section dedicated to fundamental rights, which proved to be very problematic. For example, it claimed that the right to security was more important than the right to privacy – suggesting that we should somehow surrender our fundamental rights in exchange for security. It was also narrowly focused on the rights of a limited category of people. Unfortunately, references to fundamental rights were so scarce that Committee members chose to add a preamble to the Report introducing vague considerations for the respect of fundamental rights and freedoms, instead of mainstreaming fundamental rights throughout the text. For instance, the Report calls for the alignment of European counter-terrorism policies with the Charter of Fundamental Rights of the European Union, while proposing measures that contradict this very principle.

Accepted amendments merely establish a “shopping list” of individual rights. None of the Report’s fundamental rights provisions is substantiated with concrete proposals on how to end violations of fundamental rights or how to better enforce and respect these rights. For instance, the Committee missed the opportunity to advocate for human rights-focused impact assessments or for the systematic consultation of organisations focused on civil liberties and fundamental freedoms, including civil society and Data Protection Authorities (DPAs). Compensating repressive and freedom-restricting measures with superficial references to fundamental rights (such as removing more content online “but without endangering freedom of speech” or balancing interoperability with “fundamental rights of the data subjects”) is utterly insufficient at best and duplicitous at worst. Fundamental rights deserve stronger protection.

TERR Committee, the Commission’s yes-man

It is easy to see that the Report praises the European Commission’s rhetoric and legislative action. In terms of institutional framework, the text calls for maintaining a Commissioner for the Security Union. It ignores the fact that this function created multiple overlaps and frictions with other Commissioners’ portfolios and shifted the focus of many legislative files (such as the draft Terrorism Regulation) to a purely law enforcement perspective, moving them away from consideration of fundamental rights and civil liberties requirements.

Another example is that the Report calls for the swift adoption of the Commission’s proposals for cross-border access to electronic evidence, despite the many concerns repeatedly expressed by several stakeholders, including the European Data Protection Board, and despite the fact that democratic scrutiny of this instrument has hardly started in the Parliament.

Worse still, it exacerbates the initial provisions on encryption by requesting the development of a decryption hub – including decryption tools and expertise – within Europol, to access data obtained in the course of criminal investigations. Weakening encryption to supposedly support law enforcement services actually creates vulnerabilities and increases security risks.

Is there something that could save the day?

Fortunately, one can also dig out a few positive elements. For example, the text recommends keeping counter-terrorism within the remit of the European Parliament Committee on Civil Liberties, Justice and Home Affairs (LIBE). In addition, the Committee advocates for the integration of media and information literacy into national education systems, in order to teach how to use the internet responsibly. Elaborating on the right to privacy, the text also calls on the Commission and the European Data Protection Supervisor (EDPS) to further develop innovative “privacy by design” solutions.

Overall, however, the Report sends a very bad signal to citizens: that – even in a non-binding report – giving up liberty in return for security (even if perfect security were possible) is a deal worth making.

The final version is expected to be tabled for a vote in European Parliament plenary in the coming months.

EU Parliament’s anti-terrorism draft Report raises major concerns (10.10.2018)
https://edri.org/eu-parliaments-anti-terrorism-draft-report-raises-major-concerns/

Draft report on findings and recommendations of the Special Committee on Terrorism (21.06.2018)
http://www.europarl.europa.eu/sides/getDoc.do?pubRef=-//EP//NONSGML+COMPARL+PE-621.073+01+DOC+PDF+V0//EN&language=EN

Amendments to the Draft report on findings and recommendations of the Special Committee on Terrorism (12-18.09.2018)
http://www.europarl.europa.eu/committees/en/terr/amendments.html

EU’s flawed arguments on terrorist content give big tech more power (24.10.2018)
https://edri.org/eus-flawed-arguments-on-terrorist-content-give-big-tech-more-power/

Independent study reveals the pitfalls of “e-evidence” proposals (10.10.2018)
https://edri.org/independent-study-reveals-the-pitfalls-of-e-evidence-proposals/

Opinion on Commission proposals on European Production and Preservation Orders for electronic evidence in criminal matters (08.10.2018)
https://edpb.europa.eu/our-work-tools/our-documents/opinion-art-70/opinion-commission-proposals-european-production-and_en

(Contribution by Chloé Berthélémy, EDRi intern)

21 Nov 2018

Greece: Clarifications sought on human rights impacts of iBorderCtrl

By Homo Digitalis

On 5 November 2018, EDRi observer Homo Digitalis filed a petition with the Greek Parliament about the pilot implementation of the iBorderCtrl project at the Greek border. The Minister in charge has 25 days to reply to it.

iBorderCtrl (Intelligent Portable Control System) is a project that claims to enable faster and more thorough border control for third-country nationals crossing the land borders of EU Member States. It includes software and hardware technologies ranging from portable readers and scanners for biometric verification to automated deception detection, document authentication, and risk assessment. The project has received funding from the European Union’s Horizon 2020 research and innovation programme (EU contribution: 4 501 877 euro, Project No. 700626).

As a pilot project implemented on the Hungarian, Greek, and Latvian borders, iBorderCtrl is not an authorised law enforcement system, and it works on a voluntary basis. It consists of a two-stage procedure. The first stage is a pre-screening step in which travellers upload pictures of their passport, visa and proof of funds to an online platform, and are interviewed by a computer-animated border guard via a webcam. Based on the travellers’ micro-gestures, the system claims to be able to detect deception and determine whether or not the interviewees are lying. The second stage takes place at the actual border. There, travellers who have been flagged as low risk during the pre-screening stage go through a short re-evaluation of their information, while individuals categorised as higher risk undergo a more detailed check.

Homo Digitalis is alarmed by the introduction of such artificial intelligence (AI) systems into different aspects of our lives, even on a voluntary basis. In the European Union, people enjoy a high level of human rights protection based on the provisions of the EU Treaties and the Charter of Fundamental Rights of the European Union. It is unlikely that an AI system could reliably and without errors detect deception based on facial expressions. In addition, the fact that the technical reports and the legal and ethical evaluations that accompany this system are kept confidential gives even more reason for doubt.

The petition filed by Homo Digitalis underlines the lack of transparency in the implementation of the technology and expresses mistrust regarding the true capabilities of the AI system used in the context of the iBorderCtrl project. In addition, the petition stresses that there is a high risk of discrimination against natural persons on the basis of special categories of personal data. Therefore, the petition demands that the Minister in charge state whether a Data Protection Impact Assessment and a consultation with the Greek Data Protection Authority (DPA) took place prior to the implementation of this pilot system at the Greek border. It also requests clarification on why the technical reports and the legal and ethical evaluations accompanying the project are being kept confidential, even though iBorderCtrl is not an authorised law enforcement system.

Smart lie-detection system to tighten EU’s busy borders (24.10.2018)
https://ec.europa.eu/research/infocentre/article_en.cfm?artid=49726

iBorderCtrl: a success story! (29.10.2018)
https://www.iborderctrl.eu/news/iborderctrl-success-story

Homo Digitalis files a petition to the Hellenic Parliament about the pilot implementation of the iBorderCtrl system to the Greek borders (only in Greek, 05.11.2018)
https://www.homodigitalis.gr/posts/2771

Homo Digitalis
https://www.homodigitalis.gr/

(Contribution by Eleftherios Chelioudakis, EDRi observer Homo Digitalis, Greece)

21 Nov 2018

#TeleormanLeaks: Privacy vs freedom of expression

By ApTI

The first big General Data Protection Regulation (GDPR) privacy case broke out in Romania at the beginning of November 2018 in connection with an article about a corruption scandal involving a politician and his relationship with a company investigated for fraud.

The Romanian Data Protection Authority (ANSPDCP) sent a series of questions to the journalists who published information about the scandal, asking for their “sources” and mentioning a possible penalty – the biggest one since the entry into force of the GDPR in May 2018: up to 20,000,000 euro. ANSPDCP claims that it acts independently, without any political interference, and that its entire raison d’être ever since its establishment in 2005 has been to “ensure a balance between the right to the protection of personal data, the freedom of expression and the right to information”.

#TeleormanLeaks is the name of the press story uncovering the link between Tel Drum, a road construction company based in Teleorman county, Romania (currently under investigation for fraud with European funds, based on a complaint sent by the European Anti-Fraud Office – OLAF), and Liviu Dragnea, the president of the Social Democratic Party and president of the Chamber of Deputies. The first part of the investigation was published on 5 November 2018 by RISE Project, a Romanian investigative journalism outlet. A Facebook post was also published as a teaser, to promote the investigation.

On 8 November, ANSPDCP sent a notice to RISE Project asking eight questions about the personal data included in the material posted on Facebook, including “the sources from where the personal data was obtained”. This sparked international outrage: the Organized Crime and Corruption Reporting Project (OCCRP), the European Commission, and dozens of journalists and media outlets reacted strongly.

One day before the authority’s letter, Romanian media reported that one of the key people involved in the scandal, currently the commercial director of Tel Drum and former head of the financial department in the same company, had filed a “right to be forgotten” claim with ANSPDCP. It is important to note that ANSPDCP’s notice to RISE Project was apparently not based on this employee’s complaint: as the authority’s clarifications underline, the letter was issued on the basis of a notice from a third party not directly affected by the case.

ANSPDCP considered that it was entitled to invoke Articles 57(1)(f) and 58(1) of the GDPR and ask for information about the source of the information published in the Facebook post. However, in both “clarifications” published on its website, the authority fails to explain why it considers that the situation does not fall under the derogations of Article 7 of law 190/2018 (which implements Article 85 of the GDPR), nor does it share its analysis on how to reconcile the fundamental rights in question.

In law no. 190/2018 implementing the GDPR, Romania opted to limit the exceptions of Article 85 to the following alternative scenarios in which data processing activities can be performed for journalistic purposes (Article 7):

  • if it concerns personal data which was clearly made public by the data subject;
  • if the personal data is closely connected to the data subject’s status as a public figure;
  • or if the personal data is closely connected to the public character of the acts in which the data subject is involved.

This national implementation of the GDPR is questionable because it allows derogations from the GDPR for data processing for journalistic purposes only in one of these three alternative scenarios, which are extremely limited. Personal data processing for journalistic activities is usually much broader than this. Restricting derogations for journalistic purposes to the three listed options falls short of what is required to protect freedom of expression, failing to respect, in particular, journalistic freedom and the relevant human rights jurisprudence. Such an approach will also not lead to a uniform application of the GDPR at the European level.

From the correspondence with RISE Project, it can be assumed that ANSPDCP considered the case not to be covered by Article 7 of law 190/2018, on the grounds that:

  • either the Facebook post was not written for journalistic purposes;
  • or the situation is not covered by any of the narrow exceptions in Article 7.

Speculating more broadly, perhaps ANSPDCP never intended to look into the journalistic activity itself, but was rather interested in whether there had been an underlying abuse of personal data, seeking to find out who had failed to adequately protect the personal data now in the hands of the journalists.

That the data protection authority chose not to apply the exception in Article 7 is in itself questionable. However, even if it had applied Article 7, the deficiencies in this exception outlined above mean that adequate protection for freedom of expression and journalistic sources would not have been guaranteed.

ApTI, together with Privacy International, EDRi, and 15 other digital rights NGOs, sent a letter to the European Data Protection Board, with ANSPDCP and the European Commission in copy, asking for the GDPR not to be misused to threaten media freedom in Romania.

At the national level, ApTI together with 11 other local human rights and media organisations sent an open letter calling on ANSPDCP to carefully analyse GDPR cases that might endanger freedom of expression, and demanding that an urgent and transparent mechanism be put in place for assessing claims involving data processing operations for journalistic purposes.

This case demonstrates that it is essential that data protection authorities work to reconcile fundamental rights. Data protection law should be used to protect rights, and not as a tool to silence or intimidate journalists and public interest reporting.

#TeleormanLeaks explained: privacy, freedom of expression, and public interest (21.11.2018)
https://privacyinternational.org/blog/2456/teleormanleaks-explained-privacy-freedom-expression-and-public-interest

Data protection law is not a tool to undermine freedom of the media (21.11.2018)
https://privacyinternational.org/advocacy-briefing/2455/data-protection-law-not-tool-undermine-freedom-media

Legislative and Jurisprudential Guidelines for Internet Freedom – Best practices study
https://cases.internetfreedom.blog/index.html#sapte

Asociația pentru Tehnologie și Internet (ApTI)
https://www.apti.ro

(Contribution by Valentina Pavel, Mozilla Fellow and EDRi member ApTI, Romania)

20 Nov 2018

Letter to the EU Council: Stand for citizens’ rights and the European digital economy in the copyright negotiations!

By EDRi

On 19 November 2018, EDRi, together with 53 other NGOs, sent a letter to the Council of the European Union. The letter draws attention to the ongoing concerns regarding the proposal on copyright in the Digital Single Market, ahead of a crucial meeting on 23 November.

You can read the letter here (pdf) and below:

Your Excellency Deputy Ambassador,

We, the undersigned, are writing to you ahead of the 23 November COREPER 1 meeting, at which copyright in the Digital Single Market will be discussed by the Austrian Council Presidency. We consider that it is too early at this stage to give a renewed mandate to the Austrian Presidency.

Representing human rights, privacy, civil rights and media freedom organisations, software developers, creators, journalists, radio stations, higher education institutions and research institutions, we would like to draw your attention to our ongoing concerns regarding the proposal. We believe that both the Council and the Parliament texts risk creating severe impediments to the functioning of the Internet and to everyone’s freedom of expression. In previous open letters of April 26 (here) and July 2 (here), we urged European policymakers to deliver a reform that upholds the fundamental right of all to freedom of expression, as well as core principles such as the limitation of internet intermediaries’ liability (which is essential to ensure the balance of rights repeatedly required by CJEU rulings) and access to knowledge.

In the current negotiations, these values are severely threatened, most importantly due to:

  • Art. 13 (upload filters): Changing or reinterpreting the liability regime for platforms and making them directly liable is a threat to fundamental rights, as more than 70 Internet luminaries, the UN Special Rapporteur on Freedom of Expression, NGOs, programmers and academics have stated repeatedly. The resulting upload filters (“content recognition technologies”) would push internet intermediaries to rely on technologies that are error-prone, intrusive and legally questionable [1]. Relying on very imperfect algorithms to regulate free speech online will put the diversity of opinions and creative content at risk. The legal uncertainty created for European companies means they will never know how much filtering will be considered “enough” under 27 national transpositions of the Directive. The only option left will be to block legal content.
  • Art. 11 (press publishers’ right): As analysed by a plethora of academics (see here and here, for example), a press publishers’ right is not needed and will have only harmful outcomes. Furthermore, as a result of this proposal, user-shared links on social media, news aggregation websites and search engines will no longer show extracts or will become unavailable, limiting citizens’ freedom to seek and impart information. Media plurality will suffer, as new or innovative news sources will no longer be treated equally in the display of results on the internet. Additionally, user-created content on platforms will no longer be able to include extracts of licensed works, as quotation rules among European countries are not harmonised.

For the ongoing trilogue negotiations, we urge you to reject obligatory or “voluntary” coerced filters and to keep the current liability regime intact. Enforcement of copyright must not become a pre-emptive, arbitrary and privately-enforced censorship of legal content.

Moreover, we ask you to heed the voice of academic research: a press publishers’ right will not have the intended effect and will instead lead to a less informed European society.

For all of the above reasons, we call on you to take a firm stance for citizens’ rights and the European digital economy in the ongoing trilogue negotiations. We call on you to stand up for a copyright framework that respects the foundations of a free, innovative and open digital society and delivers a vibrant, open marketplace for artists and their works.

Best regards,

EUROPE
1. Civil Liberties Union for Europe (Liberties)
2. European Digital Learning Network (DLEARN)
3. European Digital Rights (EDRi)
4. European Network for Copyright in Support of Education and Science (ENCES)
5. Knowledge Ecology International Europe (KEI Europe)
6. Free Knowledge Advocacy Group EU

AUSTRIA
7. epicenter.works – for digital rights
8. Freischreiber – Verein zur Förderung des freien Journalismus

BELGIUM
9. KlasCement.net

BULGARIA
10. Bulgarian Helsinki Committee

CROATIA
11. Digital DemoCroatia

CZECH REPUBLIC
12. EDUin

DENMARK
13. IT-Political Association of Denmark

ESTONIA
14. Estonian Human Rights Centre

FRANCE
15. Wikimédia France

GERMANY
16. Chaos Computer Club
17. Digitale Gesellschaft e.V.
18. Freischreiber
19. Initiative gegen ein Leistungsschutzrecht (IGEL)
20. Verbraucherzentrale Bundesverband e.V.
21. Wikimedia Deutschland

GREECE
22. Open Technologies Alliance – GFOSS (Greek Free Open Source Software Society)

ITALY
23. Hermes Center for Transparency and Digital Human Rights

LUXEMBOURG
24. Frënn vun der Ënn

NETHERLANDS
25. Bits of Freedom (BoF)
26. Kennisland

NORWAY
27. Elektronisk Forpost Norge

POLAND
28. Centrum Cyfrowe
29. ePaństwo Foundation

PORTUGAL
30. Associação D3 – Defesa dos Direitos Digitais (D³)
31. Associação Nacional para o Software Livre (ANSOL)

ROMANIA
32. ActiveWatch
33. APADOR-CH (Romanian Helsinki Committee)
34. Association for Technology and Internet (ApTI)
35. Centrul pentru Inovare Publică (Center for Public Innovation)
36. Digital Citizens Romania

SLOVENIA
37. Digitas Institute
38. Forum za digitalno družbo (Digital Society Forum)
39. Intellectual Property Institute

SPAIN
40. Asociación de Internautas
41. Plataforma en Defensa de la Libertad de Información (PDLI)
42. Rights International Spain
43. Xnet

SWEDEN
44. Wikimedia Sverige

UNITED KINGDOM
45. Open Rights Group (ORG)
46. Statewatch

GLOBAL
47. Association for Progressive Communications (APC)
48. Center for Democracy & Technology (CDT)
49. COMMUNIA Association
50. Creative Commons
51. Electronic Frontier Foundation (EFF)
52. Open Knowledge International
53. OpenMedia
54. Wikimedia

LIST OF ADDITIONAL SIGNATURES

Update on 20 November 2018

55. World Wide Web Foundation (Global)


[1] See CJEU cases Scarlet v SABAM (C-70/10) on filtering, and Promusicae v Telefónica (C-275/06) on the obligations of Member States regarding the balance of rights.

12 Nov 2018

Job alert: EDRi is looking for a Senior Policy Advisor

By EDRi

European Digital Rights (EDRi) is an international not-for-profit association of 39 digital human rights organisations from across Europe and beyond. We defend and promote rights and freedoms in the digital environment, such as the right to privacy, personal data protection, freedom of expression, and access to information.

EDRi is looking for a talented and dedicated Senior Policy Advisor to join EDRi’s team in Brussels. This is a unique opportunity to be part of a growing and well-respected NGO that is making a real difference in the defence and promotion of online rights and freedoms in Europe and beyond. The deadline to apply is 2 December 2018. This full-time, permanent position is to be filled as soon as possible.

Key responsibilities:

As a Senior Policy Advisor, your main tasks will be to:

  • Monitor, analyse and report on the human rights implications of EU digital policy developments;
  • Advocate for the protection of digital rights, particularly but not exclusively in the areas of platform regulation, surveillance and law enforcement, telecommunications and digital trade;
  • Provide policy-makers with expert, timely and accurate input;
  • Draft policy documents, such as briefings, position papers, amendments, advocacy one-pagers, letters, blogposts and EDRi-gram articles;
  • Provide EDRi members and the general public with information about relevant EU legislative processes and EDRi’s activities, coordinate working groups, and help develop campaign messages;
  • Represent EDRi at European and global events;
  • Organise and participate in expert meetings;
  • Maintain good relationships with policy-makers, stakeholders and the press;
  • Support and work closely with other staff members, including policy, communications and campaigns colleagues, and report to the Executive Director;
  • Contribute to the policy strategy of the organisation.

Desired qualifications and experience:

  • Minimum 3 years of relevant experience in a similar role or in an EU institution;
  • A university degree in law, EU affairs, policy, human rights or related field or equivalent experience;
  • Demonstrable knowledge of, and interest in, privacy, net neutrality, digital trade, surveillance and law enforcement, freedom of expression, as well as other internet policy issues;
  • Knowledge and understanding of the EU, its institutions and its role in digital rights policies;
  • Experience in leading advocacy efforts and creating networks of influence;
  • Exceptional written and oral communications skills;
  • IT skills;
  • Strong multitasking abilities and ability to manage multiple deadlines;
  • Experience of working with and in small teams;
  • Experience of organising events and/or workshops;
  • Ability to work in English and French. Other European languages an advantage.

What EDRi offers:

  • A permanent, full-time contract;
  • A dynamic, multicultural and enthusiastic team of experts based in Brussels;
  • A competitive NGO salary with benefits;
  • A high degree of autonomy and flexibility;
  • An international and diverse network;
  • Internal career growth;
  • Networking opportunities.

How to apply:

To apply, please send a maximum one-page cover letter and a maximum two-page CV in English and in .pdf format to julien.bencze(at)edri.org by 2 December 2018.

We are an equal opportunities employer with a strong commitment to transparency and inclusion. We strive to have a diverse and inclusive working environment. We encourage individual members of groups at risk of racism or other forms of discrimination to apply for this post.

Please note that only shortlisted candidates will be contacted.

07 Nov 2018

NGOs urge Austrian Council Presidency to finalise e-Privacy reform

By Epicenter.works

EDRi member epicenter.works, together with 20 NGOs, is urging the Austrian Presidency of the Council of the European Union to take action towards finalising the e-Privacy reform. The group, which includes some of the biggest civil society organisations in Austria, such as Amnesty International, as well as two labour unions, demands in an open letter sent on 6 November 2018 an end to the apparently never-ending deliberations among the EU member states.

Today marks 666 days since the European Commission launched its proposal. The e-Privacy Regulation is an essential element of the future of Europe’s digital strategy and a necessity for protecting modern democracies from ubiquitous surveillance networks. Echoing European citizens’ rightful demands for the protection of their online privacy, the organisations ask the Austrian Presidency to lead the way into a new privacy era by concluding the e-Privacy dossier by 2019.

The letter comes in a context in which a parliamentary inquiry by the Austrian Social Democratic Party is trying to shed light on the Austrian government’s lobby connections regarding the hampering of secure communications for its citizens. Right now, the Austrian government’s position is closely aligned with the interests of internet giants like Facebook and Google, big telecom companies, and the advertising industry.

The Austrian government has recently fast-tracked negotiations on the controversial e-evidence proposal, which would weaken the rule of law and foster further surveillance of citizens’ online behaviour. This stands in stark contrast to the meagre effort Austrian representatives have put into negotiations on legislative proposals that aim to protect the fundamental right to privacy – a topic missing from the Austrian Council Presidency’s agenda.

In order to ensure that e-Privacy laws are not used as an excuse for the establishment of new repressive instruments, epicenter.works demands a clear commitment to the prohibition of data retention. Data retention has been found unconstitutional in several European countries, and epicenter.works was a plaintiff in the 2014 proceedings in which the European Court of Justice (ECJ) annulled the Data Retention Directive. A circumvention of the ECJ’s ban through the e-Privacy Regulation could expose EU citizens to indiscriminate mass surveillance and severely undermine trust in EU institutions.

Open Letter sent to Austrian Government (in German only, 06.11.2018)
https://epicenter.works/content/offener-brief-wir-brauchen-eprivacy

Parliamentary inquiry from the Austrian Social Democratic Party (in German only, 29.10.2018)
https://www.parlament.gv.at/PAKT/VHG/XXVI/J/J_02174/index.shtml

Council continues limbo dance with the ePrivacy standards (24.10.2018)
https://edri.org/council-continues-limbo-dance-with-the-eprivacy-standards/

ePrivacy: Public benefit or private surveillance? (24.10.2018)
https://edri.org/eprivacy-public-benefit-or-private-surveillance/

ECJ: Data retention directive contravenes European law (09.04.2014)
https://edri.org/ecj-data-retention-directive-contravenes-european-law/

(Contribution by Thomas Lohninger, EDRi member epicenter.works)

07 Nov 2018

UN Special Rapporteur analyses AI’s impact on human rights

By Chloé Berthélémy

In October 2018, the United Nations (UN) Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression, David Kaye, released his report on the implications of artificial intelligence (AI) technologies for human rights. The report was submitted to the UN General Assembly on 29 August 2018 but was only published recently. The text focuses in particular on freedom of expression and opinion, privacy, and non-discrimination. In the report, Kaye first clarifies what he understands by artificial intelligence and what using AI entails for the current digital environment, debunking several myths. He then provides an overview of the human rights potentially affected by these technological developments, before laying down a framework for a human rights-based approach to them.

1. Artificial intelligence is not a neutral technology

David Kaye defines artificial intelligence as a “constellation of processes and technologies enabling computers to complement or replace specific tasks otherwise performed by humans” through “computer code […] carrying instructions to translate data into conclusions, information or outputs.” He states that AI is still highly dependent on human intervention, as humans need to design the systems, define their objectives and organise the datasets for the algorithms to function properly. The report points out that AI is therefore not a neutral technology, as the use of its outputs remains in the hands of humans.

Current forms of AI systems are far from flawless: they demand human scrutiny and sometimes even correction. The report considers the automated character of AI systems, the quality of the data analysis, and the systems’ adaptability to be sources of bias. Automated decisions may produce discriminatory effects, as they rely exclusively on specific criteria, without necessarily balancing them, and they undermine scrutiny of and transparency over the outcomes. AI systems also rely on huge amounts of data of questionable origin and accuracy. Furthermore, AI can identify correlations that can be mistaken for causation. David Kaye points to the main problem of adaptability in the absence of human supervision: it poses challenges to ensuring transparency and accountability.

2. Current uses of artificial intelligence interfere with human rights

David Kaye describes three main applications of AI technology that pose important threats to several human rights.

The first problem raised is AI’s effect on freedom of expression and opinion. On the one hand, “artificial intelligence shapes the world of information in a way that is opaque to the user” and conceals its role in determining what the user sees and consumes. On the other, the personalisation of information display has been shown to reinforce biases and “incentivize the promotion and recommendation of inflammatory content or disinformation in order to sustain users’ online engagement”. These practices affect individuals’ self-determination and their autonomy to form and develop personal opinions based on factual and varied information, thereby threatening freedom of expression and opinion.

Secondly, similar concerns can be raised in relation to the right to privacy, in particular with regard to AI-enabled micro-targeting for advertising purposes. As David Kaye states, profiling and targeting users foster the mass collection of personal data and lead to inferring “sensitive information about people that they have not provided or confirmed”. The limited possibilities for controlling the personal data collected and generated by AI systems call the respect of privacy into question.

Third, the Special Rapporteur highlights AI as an important threat to the rights to freedom of expression and non-discrimination, due to the growing role allocated to AI in the moderation and filtering of online content. Despite some companies’ claims that artificial intelligence can step in where human capacities are exceeded, the report sees the recourse to automated moderation as impeding the exercise of human rights. In fact, artificial intelligence is unable to resist discriminatory assumptions or to grasp sarcasm and the cultural context of each piece of content published. As a result, freedom of expression and the right not to be discriminated against can be severely hampered when complex censorship exercises are delegated to AI and private actors.

3. A set of recommendations for both companies and States

Recalling that “ethics” is not a cover for companies and public authorities to neglect binding and enforceable human rights-based regulation, the UN Special Rapporteur recommends that “any efforts to develop State policy or regulation in the field of artificial intelligence should ensure consideration of human rights concerns”.

David Kaye suggests that human rights should guide the development of business practices and the design and deployment of AI, and calls for enhanced transparency, disclosure obligations and robust data protection legislation, including effective means of remedy. Online service providers should make clear which decisions are made with human review and which by artificial intelligence systems alone. This information should be accompanied by explanations of the decision-making logic used by the algorithms. Further, the “existence, purpose, constitution and impact” of AI systems should be disclosed, in an effort to improve individual users’ education on this topic. The report also recommends making available and publicising data on the “frequency at which AI systems are subject to complaints and requests for remedies, as well as the types and effectiveness of remedies available”.

States are identified as key actors responsible for creating a legislative framework hospitable to a pluralistic information landscape, preventing technology monopolies and supportive of network and device neutrality.

Lastly, the Special Rapporteur provides useful tools to oversee AI development:

  1. human rights impact assessments performed before, during and after the use of AI systems;
  2. external audits and consultations with human rights organisations;
  3. individual choice enabled through notice and consent;
  4. effective remedy processes to end human rights violations.

UN Special Rapporteur on Freedom of Expression and Opinion Report on AI and Freedom of Expression (29.08.2018)
https://freedex.org/wp-content/blogs.dir/2015/files/2018/10/AI-and-FOE-GA.pdf

Civil society calls for evidence-based solutions to disinformation (19.10.2018)
https://edri.org/civil-society-calls-for-evidence-based-solutions-to-disinformation/

(Contribution by Chloé Berthélémy, EDRi intern)
