16 Jul 2020

A victory for us all: European Court of Justice makes landmark ruling to invalidate the Privacy Shield

By EDRi

Today, 16 July 2020, the Court of Justice of the European Union (CJEU) invalidated the EU-US Privacy Shield. The ruling is a major victory for EU residents regarding how their personal data is processed and used by platforms like Facebook. The decision underlines the need for strong privacy legislation in the US and, more generally, for close scrutiny of the data protection systems in place, in order to avoid the misuse and unnecessary handling of the private data of EU residents.

The sweeping powers of US intelligence services, as disclosed by Edward Snowden in 2013, showed that the data protection and privacy rights of EU residents are not sufficiently protected. We cannot allow any foreign agency to track and surveil our communities with such disregard for fundamental rights.

“Today’s European Court of Justice ruling is a victory for privacy against mass surveillance”, says Diego Naranjo, Head of Policy at EDRi. “This is a win for Europeans, whose personal data will be better protected, and a call for US authorities to reform the way intelligence services operate,” he adds.

At its core, this case is about a conflict between US surveillance laws, which demand surveillance, and EU data protection laws, which require privacy. The CJEU has today decided to bin the Privacy Shield and to reinforce that Standard Contractual Clauses (SCCs) – one of the ways in which companies can make data transfers – need very close scrutiny and should be suspended if protections in the third country cannot be ensured. As noyb notes in its first reaction, Facebook and similar companies may also not use SCCs to transfer data, as the Irish Data Protection Commissioner (DPC) must stop transfers made under this instrument. The ruling is great news for all of those defending human rights online.

The background

In 2013, Edward Snowden publicly disclosed that US intelligence agencies use surveillance programs such as PRISM to access the personal data of Europeans. The disclosed documents listed several US companies, such as Apple, Microsoft, Facebook, Google and Yahoo, as sharing data with the US government for surveillance programs.

Based on this whistleblowing case, Mr Max Schrems (currently of EDRi member noyb) filed a complaint against Facebook Ireland Ltd before the Irish Data Protection Commissioner (DPC). The complaint argued that, under the EU-US Safe Harbor Decision 2000/520/EC, Mr Schrems’ (and therefore any European platform user’s) personal data should not be sent from Facebook Ireland Ltd (serving Facebook users outside of the US and Canada) to Facebook Inc. (the US parent company), given that Facebook has to grant the US National Security Agency access to such data.

Next steps

Today’s CJEU ruling is just the beginning. It is now up to the EU to start negotiating a new framework with the US and ensure deep reforms in order for the new framework to be valid and respectful of fundamental rights.

Read more:

CJEU invalidates “Privacy Shield” in US Surveillance case. SCCs cannot be used by Facebook and similar companies (16.07.20)
https://noyb.eu/en/cjeu

CJEU Media Page (Background, FAQ & other resources)
https://noyb.eu/en/CJEU-Media-Page

(In German) EU-US-Datenabkommen gekippt (16.07.20)
https://digitalcourage.de/blog/2020/eu-us-datenabkommen-gekippt

In a victory for privacy, the EU Court of Justice bins EU-US Privacy Shield (16.07.20)
https://www.accessnow.org/in-a-victory-for-privacy-the-eu-court-of-justice-bins-eu-us-privacy-shield/

08 Jul 2020

Europol: Non-accountable cooperation with IT companies could go further

By Chloé Berthélémy

There is an ongoing mantra among law enforcement authorities in Europe that private companies are indispensable partners in the fight against “cyber-enabled” crimes, as they are often in possession of personal data relevant to law enforcement operations. For that reason, police authorities increasingly attempt to lay their hands on data held by companies – sometimes in disregard of the safeguards imposed by long-standing judicial cooperation mechanisms. Several initiatives at European Union (EU) level, like the proposed regulation on European Production and Preservation Orders for electronic evidence in criminal matters (the so-called “e-evidence” Regulation), seek to “facilitate” that access to personal data by national law enforcement authorities. Now it’s Europol’s turn.

The Europol Regulation entered into force in 2017, authorising the European Police Cooperation Agency (Europol) to “receive” (but not directly request) personal data from private parties like Facebook and Twitter. The goal was to enable Europol to gather personal data, feed it into its databases and support Member States in their criminal investigations. The Commission was supposed to specifically evaluate this practice of receiving and transferring personal data with private companies after two years of implementation (in May 2019). However, there is no public information on whether the Commission actually conducted such an evaluation, what its modalities were, or what its results showed.

Despite the absence of this assessment’s results and of a fully-fledged evaluation of Europol’s mandate, the Commission and the Council consider the current legal framework too limiting and have therefore decided to revise it. The legislative proposal for a new Europol Regulation is planned for release at the end of this year.

One of the main policy options foreseen is to lift the ban on Europol’s ability to proactively request data from private companies or query databases managed by private parties (e.g. WHOIS). However, disclosures by private actors would remain “voluntary”. Just as the EU Internet Referral Unit operates without any procedural safeguards or strong judicial oversight, this extension of Europol’s executive powers would barely comply with the EU Charter of Fundamental Rights, which requires that restrictions of fundamental rights (here, the right to privacy) be necessary, proportionate and “provided for by law” (rather than based on ad hoc “cooperation” arrangements).

This is why, in light of the Commission’s consultation call, EDRi shared the following remarks:

  • EDRi recommends first carrying out a full evaluation of the 2016 Europol Regulation before expanding the agency’s powers, in order to base the revision of its mandate on proper evidence;
  • EDRi opposes the Commission’s proposal to expand Europol’s powers in the field of data exchange with private parties, as it goes beyond Europol’s legal basis (Article 88(2));
  • The extension of Europol’s mandate to request personal data from private parties promotes the voluntary disclosure of personal data by online service providers, which goes against the EU Charter of Fundamental Rights and national and European procedural safeguards;
  • The procedure by which Europol accesses EU databases should be reviewed and include the involvement of an independent judicial authority;
  • The Europol Regulation should grant the Joint Parliamentary Scrutiny Group real oversight powers.

Read our full contribution to the consultation here.

Read more:

Europol: Non-transparent cooperation with IT companies (18.05.16)
https://edri.org/europol-non-transparent-cooperation-with-it-companies/

Europol: Delete criminals’ data, but keep watch on the innocent (27.03.18)
https://edri.org/europol-delete-criminals-data-but-keep-watch-on-the-innocent/

Oversight of the new Europol regulation likely to remain superficial (12.07.16)
https://edri.org/europol-delete-criminals-data-but-keep-watch-on-the-innocent/

(Contribution by Chloé Berthélémy, EDRi policy advisor)

08 Jul 2020

Web browser privacy: ARTICLE 19 welcomes initiatives to protect users

By Article 19

There are widespread web tracking practices that undermine users’ human rights. However, safeguards against web tracking can be, and are being, deployed by various service providers. EDRi member ARTICLE 19, and EDRi as a whole, support these initiatives to protect user privacy and anonymity as part of a wider shift toward a more rights-respecting sector.

Web browsers are our guide across the internet. We use them to connect with others around the globe, orient ourselves, and find what we need or want online. The resulting trail of data that we generate about our preferences and actions has been exploited by the increasingly interdependent business models of the online advertising industry and web browsers. As advertising publishers, agencies, and service providers aim to maximise profit from advertisers by delivering increasingly personalised content to users, web browsers have strong incentives to collect as much data as possible about what each user searches, visits, and clicks on to feed into these targeted advertising models.

These practices not only threaten users’ right to privacy, but can also undermine other fundamental rights, such as freedom of expression, access to information, and non-discrimination.

How we are tracked online

A number of mechanisms used by web browsers for ad targeting and tracking can also be used to cross-reference and track users, block access to websites, or discriminate among users based on profiles generated about them from their online activities and physical location. These mechanisms include:

  • Web usage mining, where the underlying data, such as pages visited and time spent on each page, is collected as clickstreams;
  • Fingerprinting, where information such as a user’s OS version, browser version, language, time zone, and screen settings is collected to identify the device (see the sketch after this list);
  • Beacons, which are graphic images placed on a website or email to monitor the behaviour of the user and their remote device; and
  • Cookies, which are small files holding client and website data that can remain in browsers for long periods of time and are often used by third parties.
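
As a rough illustration of the fingerprinting mechanism above, the sketch below (in TypeScript, with invented names) reads the kind of signals a fingerprinting script can collect from standard browser APIs; this is not code from any real tracker.

```ts
// A minimal sketch of the signals a fingerprinting script can read from
// standard browser APIs. All names here are illustrative.
interface FingerprintSignals {
  userAgent: string;   // browser and OS version
  language: string;    // preferred language
  timeZone: string;    // time zone
  screenSize: string;  // screen settings
  colorDepth: number;
}

function collectSignals(): FingerprintSignals {
  return {
    userAgent: navigator.userAgent,
    language: navigator.language,
    timeZone: Intl.DateTimeFormat().resolvedOptions().timeZone,
    screenSize: `${screen.width}x${screen.height}`,
    colorDepth: screen.colorDepth,
  };
}

// Concatenating and hashing these values yields an identifier that is
// often stable enough to recognise the same device across sites.
console.log(collectSignals());
```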

Being subject to these practices should not be the non-negotiable price of using the internet. An increasing number of service providers are developing and implementing privacy-oriented approaches to serve as alternatives – or even the new default – in web browsing. These changes range from stronger, more ubiquitous encryption of data to the configuration and use of trusted servers for different tasks. These safeguards may be deployed as entirely new architectures and protocols by browsers and applications, and are being deployed at different layers of the internet architecture.

Encrypting the Domain Name System (DNS)

One advancement has been the development and deployment of internet protocols that support greater and stronger encryption of the data generated by users when they visit websites, redressing historical vulnerabilities in the Domain Name System (DNS). Encrypted Server Name Indication (eSNI) encrypts each domain’s identifiers when multiple domains are hosted by a single IP address, so that it is more difficult for Internet Service Providers (ISPs) and eavesdroppers to pinpoint which sites a user visits. DNS-over-HTTPS (DoH) sends encrypted DNS traffic over the Hypertext Transfer Protocol Secure (HTTPS) port and looks up encrypted queries made in the browser using the servers of a trusted DNS provider. These protocols make it difficult to detect, track, and block users’ DNS queries and therefore introduce needed privacy and security features to web browsing.
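
As a rough illustration, the sketch below performs a DNS lookup over HTTPS using the JSON query format that some public resolvers expose (Cloudflare’s resolver is used here purely as an example). It assumes a runtime with a global fetch, such as Node 18+.

```ts
// A minimal sketch of a DNS-over-HTTPS lookup via a resolver's JSON API.
async function dohLookup(domain: string): Promise<string[]> {
  const url =
    `https://cloudflare-dns.com/dns-query?name=${encodeURIComponent(domain)}&type=A`;
  // The query and response travel inside ordinary HTTPS traffic on port 443,
  // so an on-path observer sees only an encrypted connection to the resolver,
  // not which domain is being looked up.
  const res = await fetch(url, { headers: { accept: "application/dns-json" } });
  const body: { Answer?: { data: string }[] } = await res.json();
  // Each answer record carries the resolved IP address in its `data` field.
  return (body.Answer ?? []).map((a) => a.data);
}

dohLookup("example.org").then((ips) => console.log(ips));
```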

Privacy-oriented web browsers

Another shift is in the architectures and advertising models of web browsers themselves. Increasingly popular privacy browsers such as Tor and Brave help protect user data and identity. Tor encrypts and anonymises users’ traffic by routing it through the Tor network, while Brave anonymises user authentication by using the Privacy Pass protocol, which allows users to prove that they are trusted without revealing identifying information to the browser. Brave’s efforts to develop a privacy-centric model for web advertising, including a protocol that confirms when a user observes an ad without revealing who they are and an anonymised, blockchain-based system to compensate publishers, have been closely followed by Apple and Google. Both aim to standardise their own web architectures, including Apple WebKit’s ad click attribution technology and Google Chrome’s Conversion Measurement API.

Although there are some differences, Brave’s, Apple’s, and Google’s advertising models all include mechanisms to limit the amount of data passed between parties and the amount of time this data is kept in their systems, disallow data such as cookies for reporting purposes, delay reports randomly to prevent identifiability through timestamp cross-referencing, and prevent arbitrary third parties from registering user data. As such, they not only protect users’ privacy and anonymity, but also prevent cross-site tracking and user profiling.
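
As a toy illustration of the randomised-delay idea, the sketch below (with invented names; this is not any browser’s actual API) strips a conversion report down to a coarse payload and sends it after a random delay, so its arrival time cannot be cross-referenced with the original ad click.

```ts
// A toy sketch of two of the mechanisms described above: the report
// carries no user identifier, and its send time is randomised.
interface ConversionReport {
  campaignId: number;  // coarse ad campaign identifier, no user identifier
  converted: boolean;  // whether the ad led to the desired action
}

function scheduleReport(
  report: ConversionReport,
  send: (r: ConversionReport) => void,
): void {
  // Random delay of up to 24 hours (an arbitrary bound chosen here) breaks
  // the link between click timestamp and report timestamp.
  const delayMs = Math.random() * 24 * 60 * 60 * 1000;
  setTimeout(() => send(report), delayMs);
}

scheduleReport({ campaignId: 42, converted: true }, (r) =>
  console.log("reported", r),
);
```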

Despite protocols such as eSNI and DoH and recent privacy advances in web browser advertising models and architectures, tracking of online activities continues to be the norm. For this reason, service providers that are working toward industry change are advocating for the widespread adoption of secure protocols and the standardisation of web browsing privacy models to redress existing vulnerabilities that have been exploited to monetise users’ data without their knowledge, monitor and profile them, and restrict the availability of content.

If privacy-oriented protocols and privacy-respecting web browsing models are standardised and widely adopted by the sector, respect for privacy will become an essential parameter for competition among not only web browsers, but also ISPs and DNS servers. This change can stimulate innovation and provide users with the choice between more and better services that guarantee their fundamental rights.

Challenges for privacy-enhancing initiatives

While these protocols and models have been welcomed by a number of stakeholders, they have also been challenged. Critics claim that these measures make it more difficult, if not impossible, to perform internet blocking and filtering. They claim that, as a result, privacy models undermine features such as parental controls and thwart the ability of ISPs and governments to identify malware traffic and malicious actors. These challenges rest on the assumption that there is a natural trade-off between the power of parties who retain control of the internet and the privacy of individual users.

In reality, however, technological advancement occurs as a whole; updated models lead to updated tools and mechanisms. Take DoH and its impact on parental controls as an example. DoH encrypts DNS queries, rendering most current DNS-filtering mechanisms used for parental controls obsolete; these mechanisms rely on DNS packet inspection that cannot be done on encrypted data without intercepting and decrypting the stream first. In response, both browsers and DNS servers are developing new technologies and services. Mozilla launched its “canary domain” mechanism, through which a network whose resolver restricts certain domains can signal this to the browser, triggering DoH to be disabled. DoH-compatible DNS server providers like cleanbrowsing.org implement their own filtering policies at the resolver level. While these responses do not remove the need to ensure users’ privacy and access to information rights through strong legal and regulatory protections, accountability and transparency of service providers to users, and meaningful user choice, they demonstrate that the real benefits of browser privacy and security measures should not be thwarted on the basis of perceived threats to the status quo.
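
The sketch below illustrates the canary-domain idea, modelled on Mozilla’s documented use of the domain use-application-dns.net: if the local network’s resolver refuses to answer for that domain, the browser takes it as a signal (for example, from parental-control filtering) to fall back from DoH to the system resolver. This is a simplified sketch, not Firefox’s actual implementation.

```ts
// A minimal sketch of a canary-domain check using Node's dns module.
import { promises as dns } from "node:dns";

async function networkOptsOutOfDoH(): Promise<boolean> {
  try {
    await dns.resolve4("use-application-dns.net");
    return false; // canary resolves normally: no opt-out signal from the network
  } catch {
    return true;  // NXDOMAIN or lookup failure: network signals a DoH opt-out
  }
}

networkOptsOutOfDoH().then((optOut) => console.log({ optOut }));
```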

Leadership opportunity for the EU

In the European Union, the adoption of the General Data Protection Regulation (GDPR) has obliged all stakeholders in the debate to recognise and comply with data protection and privacy-by-design principles. Moreover, the Body of European Regulators for Electronic Communications (BEREC), whose main task is to contribute to the development and better functioning of the EU internal market for electronic communications networks and services, has identified user empowerment among its priorities. These dynamics create an opportunity for EU actors to take global leadership in efforts toward a privacy-oriented internet infrastructure.

Recommendations 

ARTICLE 19 strongly supports initiatives to advance browser privacy, including the implementation of protocols such as eSNI and DoH that facilitate stronger, more ubiquitous encryption of the Domain Name System and privacy-centric web advertising models for browsers. We believe these initiatives will lead to greater respect for privacy and human rights across the sector. In particular, we recommend that:

  • ISPs must help decentralise the encrypted DNS model by deploying their own DoH-compatible servers and encrypted services, taking advantage of the relatively low number of devices currently using DoH and the easy adoption curve this implies;
  • Browsers and DNS service providers should not override users’ configurations regarding when to enable or disable encryption services and which DNS service provider to use. Meaningful user choice should be facilitated by clear terms of service and accessible and clearly defined default, opt-in, and opt-out settings and options;
  • Browsers must additionally ensure that, even as they build privacy-friendly revenue generation schemes and move away from targeted ad models, all of these practices are transparent and clearly defined for users, both in the terms of service and codebase;
  • Internet standards bodies should encourage the inclusion of strong privacy and accountability considerations in the design of protocol specifications themselves, acknowledging the effects of these protocols in real-life testing and deployment; and
  • Civil society must promote the widespread adoption of secure tools, designs, and protocols through information dissemination to help educate the community and empower users’ choices.

Finally, ARTICLE 19 urges internet users to demand from their service providers products that better protect their privacy, and to support the development and application of privacy-based tools that do not monetise their data.

Read more:

Ethical Web Development booklet:
https://edri.org/files/ethical_web_dev_web.pdf

US companies to implement better privacy for website browsing (29.08.2018)
https://edri.org/us-companies-to-implement-better-privacy-for-website-browsing/

Internet protocol community has a new tool to respect human rights (15.11.2017)
https://edri.org/internet-protocol-community-has-a-new-tool-to-respect-human-rights

(Contribution from Maria Luisa Stasi & Joey Salazar, from EDRi member ARTICLE 19)

08 Jul 2020

Spain: Catalan government agrees to improve privacy in schools

By Xnet

The Catalan Department of Education has signed an agreement accepting the plan proposed by Xnet, EDRi member from Spain, titled “Privacy and Democratic Digitization of Educational Centers”, to guarantee data privacy and the democratic digitisation of schools. The plan foresees the creation of a software pack and protocols to ensure that educational establishments have alternatives to what until now seemed the only option: technological dependence on Google and its attached services, with worrying consequences for personal data.

Things can be different. With this plan, Xnet seeks to create an organic system in educational institutions that guarantees the use of auditable and accessible technologies and that said technologies contribute to preserving the rights of the educational community.

The key points of the project are:

  • Safe and human rights-compliant servers;
  • Auditable tools already in use, added in a stable pack;
  • Training that updates the culture in educational centers in favor of the use of digital technologies that respect human rights.

In Spain and in many other places, COVID-19 has shown how far behind institutions are in digitisation and in their will to understand it. Digital-related public policies often range from carefree technosolutionism to technophobic neo-luddism. The result of these policies, in which the educational community is lectured about the dangers of technology while being forced to bend to the will of large digital corporations, is that dominant platforms already control the vast majority of educational establishments … and therefore the behaviour of students, their families and teachers.

To be a society fit for the digital age in which we live, it is not necessary to know about technology, nor to be more afraid of it than of any other tool. Digitisation should therefore be undertaken in a way that is accessible and rational for everyone; a truly democratic digitisation that improves society. Books have served to build our societies, yet nobody expects whoever wants to use or teach them to know bookbinding. Perhaps this is where the initial problem arises: if other subjects are taught by experts in those subjects, why, in the field of digitisation, do we so often resort only to “technicians” and security officers warning of its dangers?

The notions of network and connectivity allow us to operate in an agile way, with the ability to start processes that have a huge impact even with few people, and without the need for advanced technological knowledge.

Read more:

(In Spanish) Propuesta para la excelencia en la privacidad de datos y la digitalización democrática de los centros educativos (03.06.2020)
https://xnet-x.net/privacidad-datos-digitalizacion-democratica-educacion-sin-google/

(In Spanish) El encuadernador y el exorcista: sobre el futuro de la digitalización en la educación (y en todo lo demás) (10.06.2020)
https://blogs.publico.es/dominiopublico/33360/el-encuadernador-y-el-exorcista-sobre-el-futuro-de-la-digitalizacion-en-la-educacion-y-en-todo-lo-demas/

(Contribution by Simona Levi, from EDRi member Xnet).

24 Jun 2020

Massive political data leak in Malta

By noyb

After a massive leak of the voters’ list exposing the voting preferences, addresses, phone numbers and dates of birth of a majority of the Maltese population, EDRi member noyb.eu will assist the Daphne Foundation and Repubblika in their class action and file complaints about the data breach in various EU Member States.

Colossal privacy violations of voters’ data

At the end of March 2020, independent Maltese media reported that a database containing 337,384 records of Maltese voters’ personal information had been freely accessible online for at least a year. The data included not only the fields available in the published electoral register but also mobile and fixed telephone numbers, dates of birth, polling booth and polling box numbers, and a numerical identifier indicating an individual’s political affiliation.

How could this happen?

Maltese voters are enrolled in the Maltese electoral register, which is maintained by the Electoral Commission – a body set up by the Maltese Constitution whose role it is to maintain the register and organise local, national and European Parliament elections. Around the end of March, it was discovered that C-Planet IT Solutions, an IT company connected to the Labour Party, had stored a copy of the electoral register in an open directory, which was indexed by Google. The database was unprotected and accessible to anyone with a web browser, the Times of Malta reported.

Data protection and democracy

After the Cambridge Analytica scandal, everyone understands the fundamental role of data protection in a democracy, especially when the data at stake includes political opinions. As a principle, the GDPR prohibits the processing of data revealing political opinions. What is even more worrying is the total lack of protection of these data, which were publicly accessible to everyone.

“In a democracy, we cannot accept the processing of political data spiraling out of control. Political parties in particular should not be using voters’ information for purposes other than what the law permits them to do. Could you imagine your political preferences being used to deny you access to a public service or an employment opportunity?”

– Romain ROBERT, data protection lawyer at noyb

Civil society in Malta reacts

Against this backdrop, two NGOs – the Daphne Foundation and Repubblika – have teamed up and organised a platform that allows citizens affected by this data breach to sue C-Planet IT Solutions Limited and any other entity involved. An investigation has been launched by the Maltese DPA, but the class action targets civil damages, including moral damages. The Daphne Caruana Galizia Foundation has set up a tool that allows everyone to check what information was collected about them. They invite everyone wanting to join the collective action to visit the FAQ. Also, if you want to join a complaint filed by noyb outside Malta, please contact them at [email protected]

Read more:

Investigation after huge data leak leaves 337,000 voters’ records exposed (01.04.2020)
https://timesofmalta.com/articles/view/massive-data-leak-leaves-more-than-377000-voting-records-exposed.782483

Collective action against C-Planet data breach (03.04.2020)
https://www.daphne.foundation/en/2020/04/03/collective-action-data-breach

IDPC launches investigation after over 330,000 voters’ personal data leaked in security breach (01.04.2020)
https://www.maltatoday.com.mt/news/national/101403/over_330000_voters_personal_data_leaked_in_security_breach#.Xuh6ABMzbGI

Labour Party distances itself from massive data breach (02.04.2020)
https://timesofmalta.com/articles/view/labour-party-holds-emergency-meeting-over-data-breach.782906

(Contribution by Ala Krinickytė, from EDRi member noyb)

10 Jun 2020

UK: Stop social media monitoring by local authorities

By Privacy International

Would you like your local government to judge you by your Facebook activity? In a recent study, we investigated how local authorities (Councils) in Great Britain are looking at social media accounts as part of their investigation tactics on issues such as benefits, debt recovery, fraud, environmental investigations, and children’s social care.

Social media platforms are a vast trove of information about individuals and collectives, including their personal preferences, political and religious views, physical and mental health and the identity of their friends and families. Social media monitoring or social media intelligence (SOCMINT) are the techniques and technologies that allow the monitoring and gathering of information on social media platforms such as Facebook and Twitter.

Life-changing decisions could be made on the basis of this intelligence, yet no quality check on the effectiveness of this form of surveillance is currently in place. This has particular consequences and a disproportionate negative impact on certain individuals and communities.

What PI found out

In October 2019, Privacy International sent a Freedom of Information request to every local authority in Great Britain, asking not only whether they had conducted an audit, but also seeking to uncover the extent to which ‘overt’ social media monitoring in particular was being used, and for which local authority functions.

We have analysed 136 responses to our Freedom of Information requests, specifically those that were received by November 2019. All responses are publicly available on the platform “What Do They Know”.

Our investigation has found that:

  • A significant number of local authorities are now using ‘overt’ social media monitoring as part of their intelligence gathering and investigation activities. This substantially outpaces the use of ‘covert’ social media monitoring.
  • If you don’t have good privacy settings, your data is fair game for overt social media monitoring.
  • There is no quality check on the effectiveness of this form of surveillance on decision making.
  • Your social media profile could be used by a local authority, without your knowledge or awareness, in a wide variety of their functions; predominantly intelligence gathering and investigations.

The UK Surveillance Commissioner’s Guidance defines overt social media monitoring as looking at ‘open source’ data, that is, publicly available data, and data where privacy settings are available but not applied. This may include: “List of other users with whom an individual shares a connection (friends/followers); Users’ public posts: audio, images, video content, messages; “likes”, shared posts and events”. According to the Guidance, “[r]epetitive examination/monitoring of public posts as part of an investigation” constitutes instead ‘covert’ monitoring and “must be subject to assessment.”

Who is being targeted?

Everyone is potentially targeted: at some point in our lives, we all interact with local authorities as we go through some of the processes listed above. The difference, however, is that we are all affected differently.

As in many other instances of digitalisation and the use of new technologies, those belonging to already marginalised and precarious groups, who are already subject to additional monitoring and surveillance, are once again bearing the brunt of such practices.

Particular groups of the population are being dramatically impacted by the use of such techniques because they depend on, and are subject to, the functions of local authorities – for example, individuals receiving social assistance or welfare, as well as migrants.

We have seen similar developments in the migration sector, where governments are resorting to social media intelligence for immigration enforcement purposes. Some of these activities are undertaken directly by governments themselves, but in some instances governments are calling on companies to provide them with the tools and/or know-how to undertake these sorts of activities.

How to protect those most vulnerable

As local authorities in Great Britain and elsewhere seize the opportunity to use this treasure trove of information about individuals, their use of social media is set to rise, and in the future we are likely to see more sophisticated tools used to analyse this data, automate decision-making, and generate profiles and assumptions.

The collection and processing of personal data obtained from social media as part of local authority investigations and intelligence gathering must be strictly necessary and proportionate to make a fair assessment of an individual. There needs to be effective oversight over the use of social media monitoring, both overt and covert, to ensure that particular groups of people are not disproportionately affected and that, where violations of guidance and policies do occur, they are effectively investigated and sanctioned.

It is urgent to ensure that the necessary and adequate safeguards are in place to protect those in the most vulnerable and precarious positions, where such information could lead to tragic, life-altering decisions such as the denial of welfare support.

Therefore, we urge local authorities to:

  • Refrain from using social media monitoring, and avoid it entirely where they do not have a clear, publicly accessible policy regulating this activity

When exceptionally used:

  • Local authorities should use social media monitoring only if and when in compliance with their legal obligations, including data protection and human rights.
  • Every time a local authority employee views a social media platform, the viewing should be recorded in an internal log (a structural sketch of such a record follows these recommendations) including, but not limited to, the following information:
    • Date/time of viewing, including duration of viewing of a single page
    • Reason/justification for viewing and/or relevance to internal investigation
    • Information obtained from social platform
    • Why it was considered that the viewing was necessary
    • Pages saved and where saved to
  • Local authorities should develop internal policies creating audit mechanisms, including:
    • The availability of a designated staff member to address queries regarding the prospective use of social media monitoring, as well as her/his contact details;
    • A designated officer to review the internal log at regular intervals, with the power to issue internal recommendations.
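
A structural sketch of the kind of log record these recommendations imply is given below; all field names are invented here for illustration and do not come from any official guidance.

```ts
// A sketch of a structured audit-log entry for a single viewing of a
// social media platform by a local authority employee.
interface SocialMediaViewingLogEntry {
  viewedAt: string;            // date/time of viewing (ISO 8601)
  durationSeconds: number;     // duration of viewing of a single page
  justification: string;       // reason for viewing and relevance to the investigation
  informationObtained: string; // what was obtained from the platform
  necessity: string;           // why the viewing was considered necessary
  savedPages: { url: string; storedAt: string }[]; // pages saved and where
}

// Illustrative example record (all values invented).
const example: SocialMediaViewingLogEntry = {
  viewedAt: "2020-06-10T14:32:00Z",
  durationSeconds: 90,
  justification: "Relevance to benefits investigation (hypothetical case)",
  informationObtained: "Public post about current address",
  necessity: "No less intrusive source of the information was available",
  savedPages: [{ url: "https://example.social/profile", storedAt: "case-file" }],
};
console.log(example);
```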

Whilst we may post publicly, we don’t expect local authorities to look at our photos, screenshot our thoughts, and use this without our knowledge to make decisions that could have serious consequences for our lives.

The growing intrusion by government authorities – without a public and parliamentary debate – also risks impacting what people say online, leading to self-censorship, with a potentially deleterious effect on free speech. We may have nothing to hide, but if we know our local authority is looking at our social media accounts, we are likely to self-censor.

Social media platforms should not be reframed as spaces for the state to freely gather information about us and treat people as suspects.

Read more:

When Local Authorities aren’t your Friends (24.05.2020)
https://privacyinternational.org/long-read/3586/when-local-authorities-arent-your-friends

Social Media Monitoring Freedom of Information Act Request to Local Authorities (24.05.2020)
https://privacyinternational.org/long-read/3585/social-media-monitoring-freedom-information-act-request-local-authorities

The use of social media monitoring by local authorities – who is a target? (24.05.2020)
https://privacyinternational.org/explainer/3587/use-social-media-monitoring-local-authorities-who-target

Is your Local Authority looking at your Facebook likes? (01.05.2020)
https://privacyinternational.org/sites/default/files/2020-05/Is%20Your%20Local%20Authority%20Looking%20at%20your%20Facebook%20Likes_%20May2020_0.pdf

Social Media Monitoring – a batch request (07.10.2019)
https://www.whatdotheyknow.com/info_request_batch/858

Social Media Intelligence (23.10.2017)
https://privacyinternational.org/explainer/55/social-media-intelligence

Security Through Human Rights: New Liberties Report (18.10.2017)
https://www.liberties.eu/en/news/security-through-human-rights-liberties-report/13238

(Contribution by Antonella Napolitano from EDRi member Privacy International)

27 May 2020

COVID-Tech: Surveillance is a pre-existing condition

By Guest author

In EDRi’s series on COVID-19, COVID-Tech, we explore the critical principles for protecting fundamental rights while curtailing the spread of the virus, as outlined in the EDRi network’s statement on the virus. Each post in this series tackles a specific issue about digital rights and the global pandemic in order to explore broader questions about how to protect fundamental rights in a time of crisis. In our statement, we emphasised that “measures taken should not lead to discrimination of any form, and governments must remain vigilant to the disproportionate harms that marginalised groups can face.” In this third post of the series, we look at surveillance – situating the measures in their longer-term trajectory – particularly of marginalised communities.

One minor highlight in this otherwise bleak public health crisis is that privacy is trending. Now more than ever, conversations about digital privacy are reaching the general public. This is a vital development as states and private actors pose ever greater threats to our digital rights in their responses to COVID-19. The more they watch us, the more we need to watch them.

One concern, however, is that these debates have siphoned this new attention to privacy into a highly technical, digital realm. The debate is dominated by the mechanics of digital surveillance: whether we should have centralised or decentralised contact tracing apps, and how Zoom tracks us as we work, learn and do yoga at home.

Although important, this is only a partial framing of how privacy and surveillance are experienced during the pandemic. Less prominently featured are the various other privacy infringements being ushered in as a result of COVID-19. We should not forget that for many communities, surveillance is not a COVID-19 issue – it was already there.

The other sides of COVID surveillance

Very real concerns about digital measures proposed as pandemic responses should not overshadow the broader context of mass-scale surveillance emerging before our eyes. Governments across Europe are increasingly rolling out measures to physically track the public, via telecommunications and other data, without explicit reference to how this will impede the spread of the virus, or when the use and storage of this data will end.

We are also seeing the emergence of bio-surveillance dressed in a public health response’s clothing. From the Polish government’s app mandating the use of geo-located selfies, to talk of using facial biometrics to create immunity passports to facilitate the return of workers in the UK, governments have used, and will continue to use, the pandemic as a cover to get into our homes, and closer to us.

Yet physical surveillance techniques feature less prominently in media coverage. Such measures are – in many European countries – coupled with heightened punitive powers for law enforcement. Police have deployed drones in France, Belgium and Spain, and communities in cities across Europe are feeling the pressure of increased police presence in their neighbourhoods. Heightened measures of physical surveillance cannot be accepted at face value or ignored. Instead, they must be viewed in tandem with new digital developments.

Who can afford privacy?

These measures do not harm everyone equally. In unequal societies, surveillance will always target racialised [1] people, migrants, and the working classes. These people bear the burden of heightened policing powers and punitive ‘public health’ enforcement, being more likely to need to leave the house for work, take public transport, live in over-policed neighbourhoods, and in general be perceived as suspicious, criminal, and as necessitating surveillance.

This is a privacy issue as much as it is about inequality. For some, the consequences of intensified surveillance under COVID-19 mean heightened exposure to the virus through direct contact with police, increased monitoring of their social media, the anxiety of constant sirens and, in the worst cases, the real bodily harm of police brutality.

In the last few days, Romani communities in Slovakia have reported numerous cases of police brutality, some against children playing outside. Black, brown and working-class communities across Europe are experiencing the physical and psychological effects of being watched even more than normal. In Brussels, where EDRi is based, a young man died in an encounter with the police during raids.

This vulnerability is economic, too – for many, privacy is a scarce commodity. It is purchased by those who live in affluent neighbourhoods and by those with ‘work from home’ jobs. Those who cannot afford privacy in this more basic sense will, unfortunately, not be touched by debates about contact tracing. For many, digital exclusion means that measures such as contact-tracing apps are completely irrelevant. Worse, if future measures in response to COVID-19 are designed with the assumption that we all use smartphones, or have identity documents, they will be immensely harmful.

These measures are being portrayed as ‘new’, at least in our European ‘liberal’ democracies. But for many, surveillance is not new. Governmental responses to the virus have simply brought to the general public a reality reserved for people of colour and other marginalised communities for decades. Even before COVID-19, European governments had deployed technology and other data-driven tools to identify, ‘risk-score’ and experiment on groups at the margins, whether by predicting crime, forecasting benefit fraud, or assessing from their facial movements whether or not asylum applicants are telling the truth.

We need to integrate these experiences of surveillance into the mainstream privacy debate. These conversations have been sidelined or explained away with the logic of individual responsibility. For example, last year, in a public debate on technology and surveillance of marginalised communities, one participant swiftly moved the conversation away from police profiling and toward privacy literacy. They asked the room of anti-racist activists “does everybody here use a VPN?”

Without a holistic picture of how surveillance affects people differently – the vulnerabilities of communities and the power imbalances that produce them – we will easily fall into the trap of believing that quick-fix solutions can guarantee our privacy, and that surveillance can be justified.

Is surveillance a price worth paying?

If we don’t root our arguments in people’s real life experiences of surveillance, not only do we devalue the right to privacy for some, but we also risk losing the argument to those who believe that surveillance is a price worth paying.

This narrative is a direct consequence of an abstract, technical and neutral framing of surveillance and its harms. Through this lens, infringements of privacy are minor, necessary evils. As a result, privacy will always lose in the false ‘privacy vs health’ trade-off. We should challenge the trade-off itself, but we can also ask: who will really pay the price of surveillance? How do people experience breaches of privacy?

Another question we need to ask is: who profits from surveillance? Numerous companies have shown their willingness to enter public-private alliances, using COVID-19 as an opportunity to market surveillance-based ‘solutions’ to issues of health (often with dubious claims). Yet, again, this is not new – companies like Palantir, contracted by the UK government to process confidential health data during COVID-19, have a much longer-standing role in the surveillance of migrants and people of colour, and in facilitating deportations. Other large tech companies will use COVID-19 to continue their expansion into areas like ‘digital welfare’. Here, deeply uneven power relationships will be further cemented by the introduction of digitalised tools, making them harder to challenge and posing ever greater risks to those who rely on the state. If unchallenged, this climate of techno-solutionism will only increase the risk of new technologies being tested on, and data extracted from, marginalised groups for profit.

A collective privacy

There is a danger in viewing surveillance as exceptional, a feature of COVID-19 times. It suggests that protecting privacy is only newsworthy when it concerns ‘everyone’ or ‘society as a whole’. What that means, though, is that we don’t actually mind if a few don’t have privacy.

Surveillance measures and other threats to privacy have countless times been justified for the ‘public good’. Privacy – framed in abstract, technical and individualistic terms – simply cannot compete, and ever greater surveillance will be justified. This surveillance will be digital and physical and everything in between, and profits will be made. Alternatively, we can fight for privacy as a collective vision – something everybody should have. Collective privacy is not exclusive or abstract – it means looking further than how individuals might adjust their privacy settings, or how privacy can be guaranteed in contact tracing apps.

A collective vision of privacy means contesting ramped-up police monitoring and the use of marginalised groups as guinea pigs for new digital technologies, as well as ensuring new technologies have adequate privacy protections. It also requires us to ask: who will be the first to feel the impact of surveillance, and how do we support them? To answer these questions, we need to recognise surveillance in all its manifestations, including from well before the outbreak of COVID-19.

Original illustration by Miguel Brieva, licensed under CBNA 2020, La Imprenta, included in “Que No Haya Sido en Vano”.

Read more:

Telco data and Covid-19: A primer (21.04.20)
https://privacyinternational.org/explainer/3679/telco-data-and-covid-19-primer

Slovak police officer said to have beaten five Romani children in Krompachy settlement and threatened to shoot them (29.04.20)
http://www.romea.cz/en/news/world/slovak-police-officer-said-to-have-beaten-five-romani-children-in-krompachy-settlement-and-threatened-to-shoot-them

Amid COVID-19 Lockdown, Justice Initiative Calls for End to Excessive Police Checks in France (27.03.20)
https://www.justiceinitiative.org/newsroom/amid-covid-19-lockdown-justice-initiative-calls-for-end-to-excessive-police-checks-in-france

Digital divide ‘isolates and endangers’ millions of UK’s poorest (28.04.20)
https://www.theguardian.com/world/2020/apr/28/digital-divide-isolates-and-endangers-millions-of-uk-poorest

The EU is funding dystopian Artificial Intelligence projects (22.01.20)
https://www.euractiv.com/section/digital/opinion/the-eu-is-funding-dystopian-artificial-intelligence-projects

A Price Worth Paying: Tech, Privacy and the Fight Against Covid-19 (24.04.20)
https://institute.global/policy/price-worth-paying-tech-privacy-and-fight-against-covid-19

COVID-Tech: Emergency responses to COVID-19 must not extend beyond the crisis (15.04.20)
https://edri.org/emergency-responses-to-covid-19-must-not-extend-beyond-the-crisis/

COVID-Tech: COVID infodemic and the lure of censorship (13.04.2020)
https://edri.org/covid-infodemic-and-the-lure-of-censorship/

Footnotes

  1. This term refers to racial, ethnic and religious minorities, emphasising that racialisation is a structural process inflicted on people, groups and communities.

(Contribution by Sarah Chander, EDRi senior policy advisor)

27 May 2020

Competition law: Big Tech mergers, a dominance tool

By Laureline Lemoine

This is the third article in a series dealing with competition law and Big Tech. The aim of the series is to look at what competition law has achieved when it comes to protecting our digital rights, where it has failed to deliver on its promises, and how to remedy this. Read the first article on the impact of competition law on your digital rights here, and the second article on what to do against Big Tech’s abuse here.

One way Big Tech has been able to achieve a dominant position in our online life is through mergers and acquisitions. In recent years, the five biggest tech companies (Amazon, Apple, Alphabet (Google’s parent company), Facebook and Microsoft) have spent billions to strengthen their position through acquisitions that shaped our digital environment. Notorious acquisitions which made headlines include Facebook/WhatsApp, Facebook/Instagram, Microsoft/LinkedIn, Google/YouTube and, more recently, Amazon/Twitch.

Beyond infamous social media platforms and big deals, Big Tech companies also acquire lesser-known companies and start-ups, which also greatly contribute to their growth. While not making big newsworthy acquisitions, Apple still “buys a company every two to three weeks on average”, according to its CEO. Since 2001, Google-Alphabet has acquired over 250 companies, while Facebook has acquired over 90 since 2007. Big Tech’s intensive acquisition policy applies particularly to artificial intelligence (AI) startups. This is worrying because reducing competitors also means reducing diversity, leaving Big Tech in charge of developing these technologies at a time when AI technologies are increasingly used in decisions affecting individuals and are known to be susceptible to bias.

Big Tech’s intensive acquisition policy can pursue different goals, sometimes at the same time. These companies acquire competitors who could have offered, or were offering, consumers an alternative, in order to eliminate or shelve them (“killer acquisitions”), to consolidate a position in the same or a neighbouring market, or to acquire their technical or human skills (“talent acquisitions”). See for example this overview of Google and Facebook’s acquisitions.

And in times of economic trouble, Big Tech is even more on the prowl. In the US, Senator Warren wants to introduce a moratorium on COVID-era acquisitions.

Big Tech’s mergers are mostly unregulated

While mergers and acquisitions are part of business life, the issue is that most of Big Tech’s acquisitions are not subject to any control, and the few that are reviewed have been authorised without conditions. This has led to debates on the state of competition law: are the current rules fit for today’s age of data-driven acquisitions and technology takeovers?

While some have already called for a ban on acquisitions by certain companies, others are discussing the thresholds set in competition law that trigger review by competent authorities, but also, more fundamentally, the criteria used to review mergers.

The issue with thresholds is that they depend on monetary turnover, which many companies and startups do not reach, either because they have not yet monetised their innovations or because their value is not reflected in their turnover but, for example, in their data trove. Despite low turnovers, Facebook was still willing to spend USD 1 billion and USD 19 billion for, respectively, Instagram and WhatsApp. These data-driven mergers allowed these companies’ data sets to be aggregated, increasing Facebook’s (market) power.

The French competition authority suggests, for example, introducing an obligation to inform the EU and/or national competition authorities of all mergers implemented in the EU by “structuring” companies. These “structuring” companies would be clearly defined according to objective criteria, and in cases of risk, the authorities would ask these players to notify their mergers for review.

However, although the acquisition of WhatsApp by Facebook was reviewed by the European Commission thanks to a referral from national competition authorities, the operation was still authorised. This raises another issue: the place of data protection and privacy in merger control. Competition authorities assume that, since there is a data protection framework, data protection rights are respected and individuals are exercising their rights and choices. But this assumption does not take into account the reality of the power imbalance between users and Big Tech. In this regard, some academics, such as Orla Lynskey, suggest solutions such as increased cooperation between competition, consumer and data protection authorities to understand and examine the actual barriers to consumer choice in data-driven markets. Moreover, where it is found that consumers value data privacy as a dimension of quality, the competitive assessment should reflect whether a given operation would deteriorate that quality.

A wind of change might already be blowing from the US: last February, the Federal Trade Commission issued “Special Orders” to the five Big Tech companies, “requiring them to provide information about prior acquisitions not reported to the antitrust agencies”, including how acquired data has been treated.

Google/Fitbit: the quest for our sensitive data

The debate recently resurfaced when Google’s proposed acquisition of Fitbit was announced. Immediately, a number of concerns were raised, both in terms of competition and of privacy (see, for example, the concerns of the European Consumer Organisation (BEUC) and the Electronic Frontier Foundation (EFF)). From a fundamental rights perspective, the most worrying issue lies in the fact that Google would be acquiring Fitbit’s health data. As Privacy International warns, “a combination of Google / Alphabet’s potentially extensive and growing databases, user profiles and dominant tracking capabilities with Fitbit’s uniquely sensitive health data could have pervasive effects on individuals’ privacy, dignity and equal treatment across their online and offline existence in future.”

Such concerns are also shared beyond civil society, as the announcement led the European Data Protection Board to issue a statement, warning that “the possible further combination and accumulation of sensitive personal data regarding people in Europe by a major tech company could entail a high level of risk to the fundamental rights to privacy and to the protection of personal data.”

It is a fact that Google cannot be trusted with our personal data. On top of a long history of competition and data protection infringements, Google is making questionable moves into the healthcare market and is already breaking patients’ trust.

Beyond these concerns, this operation will be an opportunity for the European Commission to adopt a new approach after the Facebook/WhatsApp debacle. Google is acquiring Fitbit for its data, and the competitive assessment should reflect that. Moreover, the Commission should use this case as an opportunity to consult with consumer and data protection authorities.

Read more:

Google wants to acquire Fitbit, and we shouldn’t let it! (13.11.19)
https://privacyinternational.org/news-analysis/3276/google-wants-acquire-fitbit-and-we-shouldnt-let-it

GOOGLE-FITBIT MERGER: Competition concerns and harms to consumers (07.07.20)
http://www.beuc.eu/publications/beuc-x-2020-035_google-fitbit_merger_competition_concerns_and_harms_to_consumers.pd

Considering Data Protection in Merger Control Proceedings (06.06.18)
https://one.oecd.org/document/DAF/COMP/WD(2018)70/en/pdf

Competition law: what to do against Big Tech’s abuse? (01.04.2020)
https://edri.org/competition-law-what-to-do-against-big-tech-abuse/

The impact of competition law on your digital rights (19.02.2020)
https://edri.org/the-impact-of-competition-law-on-your-digital-rights/

(Contribution by Laureline Lemoine, EDRi senior policy advisor)

27 May 2020

France: First victory against police drones

By La Quadrature du Net

Since the beginning of the COVID-19 crisis, the French police have been using drones to watch people and make sure they respect the lockdown. Drones had been used before by the police for the surveillance of protests, but the COVID-19 crisis represented a change of scale: all over France, hundreds of drones have been used to broadcast audio messages with sanitary instructions, but also to monitor and capture images of people in the street who may or may not be respecting the lockdown rules.

On 4 May 2020, EDRi observer La Quadrature du Net (LQDN) and their ally La Ligue des Droits de l’Homme used information published by the newspaper Mediapart to file a lawsuit against the Parisian police to force them to stop using drones for surveillance. They based their appeal in particular on the absence of any legal framework concerning the use of images captured by these drones.

On 18 May 2020, the Conseil d’État, France’s highest administrative court, issued its decision on the case. It declared illegal the use of any drone equipped with a camera and flying low enough to allow the police to identify individuals by their clothing or a distinctive sign. This decision is a major victory against drone surveillance.

Indeed, according to the Conseil d’État, only a ministerial decree reviewed by the CNIL (National Commission on Informatics and Liberty) could allow the police to use such drones. As long as no such decree has been issued, the French police can no longer use their drones. Notably, the decision concerns the COVID-19 health crisis, a much more compelling purpose than those usually invoked by the police to deploy drones; if drone surveillance is unlawful even here, it will be all the harder to justify elsewhere.

This action was part of the Technopolice campaign, developed by La Quadrature du Net. Other devices are still being used without a legal framework: automated CCTV, sound sensors, predictive policing… With Technopolice, LQDN aims to collectively highlight and combat the deployment of new police technologies without the necessary legal safeguards. This decision proves they are on the right track.

Read more:

La Quadrature Du Net and La Ligue des Droits de l’Homme public letter (18.05.20)
https://www.laquadrature.net/wp-content/uploads/sites/8/2020/05/440442-440445-quadrature-du-net-et-ldh.pdf

French Covid-19 Drones Grounded After Privacy Complaint (18.05.2020)
https://www.bloomberg.com/news/articles/2020-05-18/paris-police-drones-banned-from-spying-on-virus-violators

Why COVID-19 is a Crisis for Digital Rights (29.04.20)
https://edri.org/why-covid-19-is-a-crisis-for-digital-rights

Strategic litigation against civil rights violations in police laws (24.04.19)
https://edri.org/strategic-litigation-against-civil-rights-violations-in-police-laws/

Data retention: “National security” is not a blank cheque (29.01.20)
https://edri.org/data-retention-national-security-is-not-a-blank-cheque

(Contribution by Martin Drago, La Quadrature Du Net)

25 May 2020

Open Letter: EDRi urges enforcement and actions for the 2 year anniversary of the GDPR

By EDRi

On 25 May 2020, the second anniversary of the General Data Protection Regulation (GDPR), EDRi sent a letter to Executive Vice-President Jourová and Commissioner Reynders to highlight the GDPR's vast enforcement gap and to urge action to tackle it.

EDRi and its members widely welcomed the increased protections and rights enshrined in the GDPR. Two years later, we call for urgent action by the European Commission, the European Data Protection Board (EDPB) and the national data protection authorities (DPAs) to ensure strong enforcement and implementation of the GDPR and to make these rights a reality.

EDRi is especially concerned by the way many Member States have implemented the GDPR and by the misuse of the GDPR by some DPAs. Finally, while we urge the European Commission not to reopen the GDPR, we highlight the need for complementary and supporting legislation, such as the upcoming Digital Services Act (DSA) and a strong and clear ePrivacy Regulation.

You can read the letter here (PDF) and below:

Dear Executive Vice-President Jourová,
Dear Commissioner Reynders,

European Digital Rights (EDRi) is an umbrella organisation with 44 NGO members with representation in 19 countries that promotes and defends fundamental rights in the digital environment.

For the second anniversary of the GDPR’s entry into application, we wish to highlight and urge action to tackle the vast enforcement gap. The GDPR was designed to address information and power asymmetries between individuals and entities that process their data, and to empower people to control it. Two years since it was introduced, this is unfortunately still not the case. Effectiveness and enforcement are two pillars of the EU data protection legislation where national data protection authorities (DPAs) have a crucial role to play.

“Business as usual” should urgently be put to an end

In our experience as the EDRi network, we have observed numerous infringements of the very principles of the GDPR, yet controllers are not being sufficiently held to account. The most striking infringements include:

  • Abuse of consent

Consent for processing data for marketing purposes is notoriously obtained through deceptive design (“dark patterns”)1, bundled into terms of service, or forced on individuals under economic pressure, and used to “legitimise” unnecessary and invasive forms of data processing, including profiling based on sensitive data. Two years into the GDPR, internet platforms and other companies that rely on monetising information about people still conduct “business as usual”, and users’ weaknesses and vulnerabilities continue to be exploited. In this respect, our members have also found that the data minimisation principle is often not fully enforced in the Member States, leading to abusive collection of personal data by both private and public entities.2

  • Failure to provide access to behavioural profiles

While internet platforms generate more and more profit from monetising knowledge about people’s behaviours, they are notorious for ignoring the fact that observations and inferences made about users are personal data as well, and are subject to all safeguards under the GDPR. However, individuals still do not have access to their full behavioural profiles or effective means of controlling them. These infringements not only further exacerbate the opacity surrounding the online data ecosystem, but also constitute a major obstacle to the effective exercise of data subjects’ rights, undermining both the protection afforded by the Regulation and citizens’ trust in the EU to protect their fundamental rights.

Please see the following articles for further elaboration of this problem:

“Uncovering the Hidden Data Ecosystem” by Privacy International; “Your digital identity has three layers, and you can only protect one of them” by Panoptykon Foundation.

Urgent action by DPAs is needed to make the protections in GDPR a reality

Many national DPAs do not have the financial and technical capacity to effectively tackle cases against big online companies. They should therefore be properly equipped with resources, staff, technical knowledge and IT specialists, and they must use these to take action. In this regard, we urge the European Commission to start infringement procedures against Member States that do not provide DPAs with enough resources.

Moreover, our experience as a network, through GDPR and AdTech complaints3, illustrates the urgent need for enforcement, as well as problems with the lack of coordination, the slow pace and the sometimes evasive approach of national DPAs.

Please see the following materials for further elaboration of this problem: Response to the roadmap of the European Commission’s report on the GDPR by Open Rights Group, Panoptykon Foundation and Liberties EU and “Two years under the GDPR” by Access Now.

The role of the European Commission and of the European Data Protection Board (EDPB) in applying the cooperation and consistency mechanisms is crucial. The EDPB is an essential forum for DPAs to exchange relevant information regarding enforcement of the GDPR. Even if we understand that not every aspect of the one-stop-shop mechanism is handled at the EDPB level, cooperation between DPAs is essential to complete procedures and handle complaints appropriately and promptly, in order to offer individuals effective redress, in particular in cross-border cases.

Furthermore, full transparency should be afforded to the complainant, including information on the investigation made by the DPAs, copies of the reports and the possibility to take part in the procedure if appropriate.

When necessary, we urge DPAs to consider invoking Article 66 of the GDPR and triggering the urgency procedure to adopt temporary measures, or to compel other authorities to act where there is an urgent need to do so. We regret that this possibility has not yet been explored.

Derogations by Member States and DPAs

EDRi is deeply concerned by the way most Member States have implemented the derogations, undermining the GDPR’s protections, and by the misuse of the GDPR by some DPAs.

Please see Access Now’s 2019 report “One year under the GDPR” for more details.

Our concerns relate to the introduction of wide and over-arching exemptions under Article 23, removing the protections of the GDPR from vast amounts of processing, with consequences for people’s rights.4 Moreover, Member States have been stretching the interpretation of the conditions set out in Article 6 and introducing broad conditions for processing special category personal data under Article 9 which are open to exploitation, including, for example, loopholes that can be abused by political parties.5

The majority of Member States also decided not to implement the provision in Article 80(2) of the GDPR allowing for collective complaints. Many of the infringements we see are systemic, vast in scale and complex, yet without Article 80(2) there is no effective redress in place, since only individuals, and not associations acting independently, are able to lodge complaints.

Moreover, there are serious concerns as to the political independence of DPAs in some countries. In Slovakia6, Hungary7 and Romania8, DPAs are abusing the law to go after journalists and/or NGOs. In Poland, the DPA has presented interpretations of the GDPR that support the government’s agenda9. Not only are such interpretations incorrect, they also risk being political and undermine the GDPR by giving the false impression that the law infringes on free expression and media freedom. Disparities in the (lack of) implementation of Article 85 are also concerning10.

Need for complementary and supporting legislation

The GDPR does not and cannot operate in a silo. Just as the right to data protection interacts with other rights, it is essential that other legal frameworks bolster the protections of the GDPR. We urge the Commission not to reopen the GDPR, but we emphasise the need for complementary and supporting legislation11.

Decisions affecting individuals that involve algorithms or AI but are not fully automated, or are not based on personal data, are not covered by Article 22 GDPR, despite being potentially harmful. To address this gap, some of our members highlight the need for complementary and comprehensive legislation on such decisions.

Moreover, the upcoming Digital Services Act (DSA) is an opportunity for the European Union to make the necessary changes to fix some of the worst outcomes of the advertisement-driven and privacy-invading economy, including the lack of transparency of users’ marketing profiles and users’ lack of control over their data in the context of profiling and targeted advertising.

Finally, as EDRi and our members have repeatedly stated12, we believe that a strong and clear ePrivacy Regulation is urgently needed to further advance Europe’s global leadership in the creation of a healthy digital environment, providing strong protections for citizens, their fundamental rights and our societal values.

In May 2018, EDRi and our members widely and warmly welcomed the increased protections and rights enshrined in the GDPR. Now, two years on, we call on the EU Commission, the EDPB and DPAs to move forward with the enforcement and implementation of the GDPR to make these rights a reality.

Footnotes

  1. Please see “Deceived by design” report by Norwegian Consumer Council for examples of this practice.
  2. See for example Xnet’s report on Privacy and Data Protection against Institutionalised Abuses in Spain.
  3. See our members’ complaints: https://privacyinternational.org/legal-action/challenge-hidden-data-ecosystem; https://noyb.eu/en/projects; https://en.panoptykon.org/complaints-google-iab; https://www.openrightsgroup.org/campaigns/adtech-data-protection-complaint
  4. A deeply concerning example is the immigration exemption introduced in the UK’s Data Protection Act 2018. See also the Homo Digitalis complaint regarding Greek Law 4624/2019: https://www.homodigitalis.gr/en/posts/4603
  5. See for example https://edri.org/apti-submits-complaint-on-romanian-gdpr-implementation/
  6. See https://www.europarl.europa.eu/doceo/document/E-9-2020-001520_EN.html
  7. See https://ipi.media/court-orders-recall-of-forbes-hungary-following-gdpr-complaint/
  8. See https://www.gdprtoday.org/gdpr-misuse-in-romania-independence-of-dpa-and-transparency-keywords-or-buzzwords/
  9. See https://edpb.europa.eu/news/news/2020/edpb-adopts-letter-polish-presidential-elections-data-disclosure-discusses-recent_sv
  10. See for example https://xnet-x.net/en/complaints-ec-data-protection-spanish-legislation/
  11. See part III of the report “Who (really) targets you? Facebook in Polish election campaigns” by Panoptykon Foundation (https://panoptykon.org/political-ads-report) for specific recommendations on changes that should be introduced in the Digital Services Act.
  12. See https://edri.org/open-letter-to-eu-member-states-deliver-eprivacy-now/