26 Feb 2020

Click here to allow notifications in cross-border access to data

By Chloé Berthélémy

From a fundamental rights perspective, it’s essential that the proposal enabling cross-border access to data for criminal proceedings (“e-evidence”) includes a notification mechanism. However, such a notification requirement seems to be out of the question for those advocating for the “efficiency” of cross-border criminal investigations, even if that means abandoning the most basic procedural safeguards that exist in European judicial cooperation. Another argument against notifying at least one second authority is that the system would be “too complicated”.

To solve these intricacies, others (with similar goals in mind) have proposed to restrict the scope of the future legal instrument to “domestic” cases only. This means that the Production and Preservation Orders would only be used in cases where the person whose data is being sought resides in the country of the issuing authority. For example, if a French prosecutor is investigating a case involving a French resident as a suspect, she would not need to request the approval of another State. The executing State – where the online service provider is established – would only be required to intervene in cases where the provider refuses to execute the Order.

“Domestic” cross-border cases?

Traditionally, EU Member States apply their own national laws to summon service providers established on their territory to hand over their customers’ data for criminal matters. However, when the evidence being sought is located in another Member State, the European Union’s rules for judicial cooperation kick in. The new regime, as proposed by the European Commission, allows Member States to extend their judicial jurisdiction beyond their territorial boundaries. This affects the sovereignty of other states – and this is why we talk about cross-border access to data and cannot refer to “purely domestic cases”.

The notification of a second authority, notably the executing authority, is essential for the new instrument to be compatible with EU primary law (the EU treaties) and secondary law. For the principle of mutual recognition – which is the legal basis chosen for the proposal – to apply, it is indeed crucial that the executing State is first made aware that an order is being executed on its territory, before it is able to recognise it.

Notification to the executing authority

Some stakeholders in the e-evidence debate grumble about the administrative burden that a notification procedure would entail. They further underline the problematic situation it would create for Irish judicial authorities, since Ireland hosts so many of the major service providers. However, there are several counter-arguments to these claims:

  1. The proposal does not solely cover circumstances in which Ireland will be involved but all cross-border cases in the EU;
  2. It is vital to design policies that are future-proof. The current lack of European harmonisation in the field of taxation policies impacts the efficiency of judicial cooperation instruments. This doesn’t, however, mean it will always be that way. It is also not a satisfactory justification for bypassing fundamental rights safeguards;
  3. Service providers should not be required to execute orders that are unlawful under the law of the country where they are located;
  4. In cases where the affected person’s residence is unknown, a notification mechanism to access identifying information would be imperative, because there would be no indication at this stage of the investigation as to whether there is a third State concerned or not.

There are cases in which the affected person resides somewhere other than in the issuing State. Excluding these cases from the e-evidence proposal’s scope would relieve law enforcement authorities of a couple of additional legal checks – in particular, it would allow them to avoid verifying immunities or other specific protections that are granted by national laws and that restrict access to certain categories of personal data. Nonetheless, excluding those cases would not suffice to meet the obligation to respect fundamental rights and rule of law standards provided in the EU legal system when using mutual recognition instruments. Instead, a notification mechanism with a genuine review opportunity for both the executing and the affected States should feature in the final text.

Double legality check in e-evidence: Bye bye “direct data requests” (12.02.2020)
https://edri.org/double-legality-check-in-e-evidence-bye-bye-direct-data-requests/

“E-evidence”: Repairing the unrepairable (14.11.2019)
https://edri.org/e-evidence-repairing-the-unrepairable/

Independent study reveals the pitfalls of “e-evidence” proposals (10.10.2018)
https://edri.org/independent-study-reveals-the-pitfalls-of-e-evidence-proposals/

EU “e-evidence” proposals turn service providers into judicial authorities (17.04.2018)
https://edri.org/eu-e-evidence-proposals-turn-service-providers-into-judicial-authorities/

(Contribution by Chloé Berthélémy, EDRi)

26 Feb 2020

Swedish law enforcement given the permission to hack

By Dataskydd.net

On 18 February 2020, the Swedish parliament passed a law that enables Swedish law enforcement to hack into devices such as mobile phones and computers that the police think a suspect might use. As with the recent new data retention law, only one party (and one member of another party) voted against the resolution (286-26, with 37 absent). The previous data retention law was struck down, and given the direction of the recent Court of Justice of the European Union (CJEU) Advocate General (AG) Opinions on data retention, the current data retention law is likely to be struck down as well.

What capabilities does this give law enforcement agencies?

For crimes that “under the circumstances” can reasonably give at least a two-year prison sentence, law enforcement agencies (LEAs) can request a court warrant to hack into the suspect’s device. This warrant can be given to gather information (for example from encrypted messaging apps) or even in some cases to stop information from being sent from that device.

The law has a number of serious issues that have been pointed out to lawmakers over the several years during which the law went through the public inquiry phase. For example, the law does not require a minimum sentence of two years in prison: if the prosecutor merely believes that the suspected crime might carry two or more years in prison, that already gives LEAs the legal basis to ask for a court warrant.

Even more worryingly, even citizens who are not suspected of having committed any crime, but are associated with a suspect, might be targets of hacking by the police. The law gives the LEAs a mandate to hack devices that they reasonably think a suspect might primarily use. So if a suspect uses their mother’s phone, for example, that device is open to hacking. If you are someone the police think their suspect will call or message, your phone might also be in danger of being hacked, just because you happen to know someone the police suspect of a crime. The police can also be allowed to use hacking to find a suspect – this means that simply being in the wrong place at the wrong time could be enough for your devices to be hacked.

The law also includes a clause stating that if the prosecutor feels the courts will be too slow to issue a warrant, he or she can issue it instead, with the warrant then reviewed by the court afterwards. If the court finds that the warrant was wrongly issued, any evidence gathered cannot be used against the suspect. Of course, the person whose device was hacked (who might not even be suspected of a crime) has already had their privacy breached, and the law doesn’t provide any recourse for such abuses.

The new law enters into force on 1 April 2020 and will be valid for five years, after which the Swedish parliament will decide whether or not to make it permanent.

Dataskydd.net
https://dataskydd.net/english/

AG’s Opinion: Mass retention of data incompatible with EU law (29.01.2020)
https://edri.org/ag-opinion-mass-retention-of-data-incompatible-with-eu-law/

Proposal for the police hacking law 2019/20:64 (only in Swedish)
https://data.riksdagen.se/fil/8AB041AD-9F29-4602-8630-1AB528FA4673

Dataskydd.net’s statement on the report proposing the new law (only in Swedish)
https://www.regeringen.se/493a2f/contentassets/32d970c3c63140d68350d964dccffb51/39.-dataskydd.net.pdf

(Contribution by Eric Skoglund, EDRi observer Dataskydd.net)

26 Feb 2020

Romania: Mandatory SIM registration declared unconstitutional, again

By ApTI

On 18 February 2020, the Romanian Constitutional Court unanimously declared unconstitutional a new legislative act, adopted in September 2019, introducing mandatory SIM card registration. The act in question was an emergency ordinance issued by the Government, which wanted to introduce this obligation as a measure “to improve the operation of the 112 emergency service number”. This is the second time the court has issued an unconstitutionality decision on mandatory SIM card registration proposals.

The court dismissed the law on procedural grounds, as the Government failed to demonstrate the urgency of adopting the ordinance. It also highlighted in its press release that the application of the SIM card registration provision had actually been postponed twice, and that the urgency required to issue an emergency ordinance was therefore non-existent.

Although this is the sixth attempt to introduce legislation on mandatory SIM card registration in Romania, the battle is far from over as, this time, the court did not go into a substantive analysis.

Coincidentally or not, two days later, a false bomb threat (the first in years) was reported in the media. A man called the 112 emergency number claiming that he had placed a bomb inside a shopping mall. The call was made from a prepaid SIM card, a fact that the 112 service specifically highlighted in its press release, while emphasising the number of staff involved in responding to the call, subtly suggesting that the implications of this type of call are massive and that, if such calls prove to be false, nobody can be held responsible because prepaid SIM card calls can’t be traced back to an individual.

The Constitutional Court’s decision is a welcome victory for now, but given the track record of a new legislative proposal on this topic appearing every year or two, we can expect similar attempts in the future.

Background

After a tragic failure by the police to save a teenage girl who was abducted but managed to call the 112 emergency number three times before she was murdered, the Romanian Government adopted an Emergency Ordinance which introduced the obligation to register prepaid SIM cards.

EDRi member ApTI, together with the Association for the Defence of Human Rights in Romania – the Helsinki Committee (APADOR-CH), asked the Romanian Ombudsman to send the law to the Constitutional Court. The Ombudsman challenged the law before the court, and ApTI submitted an amicus curiae brief in support of the unconstitutionality claims, showing that:

  1. there was no reason to justify the adoption of an emergency ordinance instead of going through the normal parliamentary procedure;
  2. restricting individual freedoms by requiring the registration of prepaid SIM cards is a measure disproportionate to the intended goal – to limit fake calls to the 112 emergency number;
  3. no data protection impact assessment has been carried out and the national data protection authority did not support the law;
  4. the Constitutional Court already decided in 2014 that mandatory SIM card registration limits fundamental rights and freedoms, and that such measures can only be introduced if they are necessary and proportionate.

The 6th attempt to introduce mandatory SIM card registration in Romania (23.10.2019)
https://edri.org/the-sixth-attempt-to-introduce-mandatory-sim-registration-in-romania/

Constitutional Court press release (only in Romanian, 18.02.2020)
http://www.ccr.ro/download/comunicate_de_presa/Comunicat-de-presa-18-februarie-2020.pdf

Timeline of legislative initiatives to introduce mandatory SIM card registration (only in Romanian)
https://apti.ro/Ini%C5%A3iativ%C4%83-legislativ%C4%83-privind-%C3%AEnregistrarea-utilizatorilor-serviciilor-de-comunica%C5%A3ii-electronice-tip-Prepay

Fake bomb threat at a shopping mall in Romania (only in Romanian, 20.02.2020)
https://www.hotnews.ro/stiri-esential-23674657-sts-cifrele-alarmei-false-bomba-veranda-mall-8-secunde-apel-5-institutii-alerta-100-specialisti-apelul-fost-facut-cartela-prepay.htm

Constitutional Court decision nr. 461/2014 (only in Romanian)
https://privacy.apti.ro/decizie-curtea-constitutionala-prepay-461-2014/

(Contribution by Valentina Pavel, EDRi member ApTI, Romania)

26 Feb 2020

Copyright stakeholder dialogues: Compromise, frustration, dead end?

By Laureline Lemoine

The second phase of the stakeholder dialogues on Article 17 of the Copyright Directive finished in December 2019. The two meetings of the third phase, focusing on the provisions of Article 17, were held on 16 January and 10 February 2020.

The concepts of authorisation and best efforts were discussed in the first meeting on 16 January. The second meeting on 10 February covered users’ safeguards and redress mechanisms.

Can we still fix Article 17?

On 10 February, for the first time, the dialogue gave the opportunity to discuss the central issue of Article 17 (former Article 13): the conflict between platforms’ obligations and users’ rights.

Article 17(4) requires platforms to make best efforts to prevent the availability of copyright-protected content, which in practice forces the implementation of upload filters. On the other side, Article 17(7) requires that the cooperation between rightsholders and platforms does not lead to the blocking or removal of legitimate content, such as content covered by copyright exceptions. It has been demonstrated, even during these dialogues, that filters cannot understand context and therefore cannot recognise when users make use of copyright exceptions such as criticism or parody. Stakeholders discussed how this conflict could be addressed.

Users’ organisations presented concrete scenarios to avoid the automated blocking of content

How could this conflict be solved in practice? EDRi supports a scenario in which content flagged by filters would stay online during the whole review process, from the moment it is uploaded until human review, as seen in the image below.

Image: Communia

Communia, an NGO promoting policies that expand the public domain, presented a slightly different scenario, based on recommendations from academics. They differentiate between two types of copyright infringement flagged by filters:

  • First, a “prima facie” copyright infringement, meaning an identical or equivalent match to protected content, would lead to the content being automatically blocked. However, users would still be able to complain and would be entitled to the safeguards of Article 17(9). Because of the potential for abuse, the proposal also includes possible sanctions for rightsholders who repeatedly make wrongful ownership claims.
  • Second, in cases of partial matches to protected content, users would be able to make declarations of lawful use, allowing their content to stay up during the whole process, until human review (see the sketch below).
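To make the proposed flow easier to follow, here is a minimal, purely illustrative sketch of the decision logic described above. The function name, match categories and outcome labels are our own simplification of Communia’s presentation, not part of any legal text or real filtering system.

```python
# Illustrative only: a simplified approximation of the Communia-style flow
# described above. Names and categories are invented for this sketch.

def handle_flagged_upload(match_type: str, declares_lawful_use: bool) -> str:
    """Return what happens to an upload that a filter has flagged."""
    if match_type == "prima_facie":
        # Identical or equivalent match to protected content:
        # blocked automatically, but the uploader keeps the complaint
        # and redress safeguards of Article 17(9).
        return "blocked, complaint and redress available"
    if match_type == "partial":
        # Partial match: a declaration of lawful use (e.g. quotation or
        # parody) keeps the content online until a human reviews it.
        if declares_lawful_use:
            return "stays online until human review"
        return "blocked, complaint and redress available"
    # No match with protected content: nothing to block.
    return "published"


print(handle_flagged_upload("prima_facie", False))
print(handle_flagged_upload("partial", True))
```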

The European Consumer Organisation BEUC also suggested that users should be educated about copyright to allow them to fully make use of this declaration.

Proposed scenario by Communia, taken from their presentation during the stakeholder dialogue meeting of 10 February.

Surprisingly, Studio 71, a global media company owned by broadcasters, presented a similar model based on the same academic paper. The difference from Communia’s scenario is the introduction of a “red button” that rightsholders would be able to use to immediately block content, even after a user makes a declaration of lawful use. Studio 71’s proposal also includes sanctions for both users and rightsholders in case of repeated abuse. They even provided evidence of their own overblocking, admitting that they once wrongfully blocked 50 000 pieces of content.

The European Commission showed interest and kept asking for more details, especially regarding the concrete implementation of a prima facie rule. However, it quickly became clear that most rightsholders had no interest in joining such a discussion, and instead insisted that their rights should prevail and that the complaint and redress mechanism was the only solution to overblocking.

This is not the first time that stakeholders have refused to meaningfully engage in the dialogue process. On 16 January, most of them refused to provide useful input to help define important concepts of Article 17. Rightsholders have shown no interest in guidelines that would limit their blocking capabilities, and big platforms already manage their own blocking tools.

European Commission to the rescue?

Faced with the rightsholders’ attitude, the European Commission explained that their role in this dialogue was to ensure the effectiveness of the exceptions. They believe that an interpretation of Article 17 in which the only remedy available to users is the redress mechanism might not ensure such effectiveness. They quoted the Court of Justice’s case law, saying that exceptions are particularly important for freedom of expression. They then strongly encouraged rightsholders to suggest workable, concrete solutions, instead of shutting down any attempt at compromise.

However, despite the Commission’s intervention, we are still in the dark regarding the content of the guidelines and whether they will actually be able to safeguard users’ rights against powerful commercial interests. Moreover, will the guidelines be followed by EU Member States when they implement the Directive? Some Member States have already drafted their implementation laws, and users’ rights safeguards are missing from them. The Commission must actively ensure that national implementations of the Directive effectively protect users’ rights.

What’s next?

The European Commission said it will reflect on the input received during these six meetings with stakeholders, but wants to meet again around 30 March. This final meeting would be the opportunity to present its initial view on the content of the guidelines. The Commission will also launch a written and targeted stakeholder consultation, details of which will be provided at the meeting in March.

We hope that this means the whole draft guidelines will be shared with us and that the purpose of this consultation will be to seek feedback on whether the document can be further improved to ensure compliance with the Charter of Fundamental Rights of the EU, as we requested in January 2020.

Article 17 stakeholder dialogue (day 6): Hitting a brick wall
https://www.communia-association.org/2020/02/13/article-17-stakeholder-dialogue-day-6-hitting-brick-wall/

Copyright stakeholder dialogues: Filters can’t understand context
https://edri.org/copyright-stakeholder-dialogues-filters-cant-understand-context/

EU copyright dialogues: The next battleground to prevent upload filters (18.10.2019)
https://edri.org/eu-copyright-dialogues-the-next-battleground-to-prevent-upload-filters/

NGOs call to ensure fundamental rights in copyright implementation (20.05.2019)
https://edri.org/ngos-call-to-ensure-fundamental-rights-in-copyright-implementation/

Copyright: Open letter asking for transparency in implementing guidelines (15.01.2020)
https://edri.org/copyright-open-letter-asking-for-transparency-in-implementing-guidelines

(Contribution by Laureline Lemoine, EDRi)

26 Feb 2020

ECtHR: UK Police data retention scheme violated the right to privacy

By Laureline Lemoine

On 13 February 2020, the European Court of Human Rights (ECtHR) issued its judgment in the case Gaughran v. The United Kingdom (UK), on the indefinite retention of personal data (DNA profile, fingerprints and photograph) of a man who had a spent conviction. The Court ruled that in the case of the applicant, the retention at issue constituted a disproportionate interference and therefore a violation of his right to respect for private life (Art. 8 of the European Convention on Human Rights) since the interference could not be regarded as necessary in a democratic society.

The Court based much of its reasoning on the S. and Marper v. the UK (2008) case, where it found that the blanket and indiscriminate nature of the powers of retention of the fingerprints, cellular samples and DNA profiles of the applicants, who were suspected but not convicted of offences, as set out in UK law, breached their right to respect for private life. Subsequently, the relevant legislation was amended in England and Wales, but not in Northern Ireland, where the applicant in the Gaughran case committed the criminal offence for which he was convicted.

Indefinite retention regime and lack of necessary and relevant safeguards

According to the ECtHR case law, the Court does not necessarily or solely focus on the duration of the retention period to assess the proportionality of a measure. The Court will rather look at whether the regime takes into account the seriousness of the offending and the need to retain the data, as well as the safeguards available to the individual.

In this case, because the UK chose to put in place a regime of indefinite retention, the Court argues that there is therefore “a need for the State to ensure that certain safeguards were present and effective for the applicant” (para. 94).

The Court pointed out in this regard the lack of any relevant safeguards. First, the biometric data and the photograph of the applicant were retained “without reference to the seriousness of his offence and without regard to any continuing need to retain that data indefinitely”. Moreover, the Court noted the absence of any real review available to the individual, as the police can only delete the data in exceptional circumstances (para. 94).

The UK overstepped the acceptable margin of appreciation

Part of the proportionality assessment also consists in looking at the state’s margin of appreciation. Similarly to the national security argument, a wide margin of appreciation is often invoked by governments to justify measures interfering with fundamental rights.

The Court found the margin to be considerably narrower than what the UK claimed, and still not wide enough to conclude that the retention was proportionate. Contrary to what the UK stated, the Court notes (para. 82-84) that the majority of states have regimes with a defined retention period. Furthermore, it states that the UK is one of the few Council of Europe jurisdictions to permit indefinite retention of DNA profiles, fingerprints and photographs of convicted persons.

Moreover, the UK claimed that indefinite retention measures were relevant and sufficient as the more data is retained, the more crime is prevented. This dangerous and false narrative was challenged by the Court as this would justify the “storage of information on the whole population and their deceased relatives, which would most definitely be excessive and irrelevant” (para. 89).

Beware of evolving technologies

One of the UK’s arguments regarding the proportionality of the measure was that the applicant’s photograph was held on a local database and could not be searched against other photographs. However, the technology has developed since then, and the police are now able to apply facial recognition and facial mapping techniques to the applicant’s photograph by uploading it to a national database.

The potential use of facial recognition was influential in determining whether there had been an interference with the right to privacy. The Court also highlighted the risk of such evolving technologies in relation to state powers. In this regard, the Court stresses the need to examine compliance with the right to privacy when “obscure powers” are vested in a state, “creating a risk of arbitrariness, especially where the technology available is continually becoming more sophisticated” (para. 88).

Because the applicant’s personal data had been retained indefinitely without consideration of the seriousness of his offence, without regard to the need for indefinite retention, and without any real possibility of review, the Court held, unanimously, that there had been a violation of Article 8 (right to respect for private and family life) of the European Convention on Human Rights.

This judgment is especially relevant because it shows that blanket data retention policies without any safeguards breach the right to privacy of individuals, even when measures are considered to fall within the state’s discretion. This judgment could also impact ongoing discussions in the EU around future data retention legislation, as well as ongoing cases in the Court of Justice of the EU.

Judgment – Case of Gaughran v. The United Kingdom
https://hudoc.echr.coe.int/fre#{%22itemid%22:[%22001-200817%22]}

Press release: Indefinite retention of DNA, fingerprints and photograph of man convicted of drink driving breached his privacy rights (13.02.2020)
https://hudoc.echr.coe.int/fre#{%22itemid%22:[%22003-6638275-8815904%22]}

Data retention: “National security” is not a blank cheque (29.01.2020)
https://edri.org/data-retention-national-security-is-not-a-blank-cheque/

Indiscriminate data retention considered disproportionate, once again (15.01.2020)
https://edri.org/indiscriminate-data-retention-considered-disproportionate-once-again/

(Contribution by Laureline Lemoine, EDRi)

26 Feb 2020

Immigration, iris-scanning and iBorderCTRL

By Petra Molnar

Technologies like automated decision-making, biometrics, and unpiloted drones increasingly control migration and affect millions of people on the move. This second blog post in our series on AI and migration highlights some of these uses, to show the very real impacts on people’s lives, exacerbated by a lack of meaningful governance and oversight mechanisms for these technological experiments.

What can happen to you in your migration journey when bots are involved?

Before the border: Want to eat? Get your eyes scanned!

Before you even cross a border, you will be interacting with various technologies. Unpiloted drones are surveilling the Mediterranean corridor under the guise of border control. However, if similar technologies, like the so-called “smart border” surveillance at the US borders, are any indication, this may lead to more people dying as they are seeking safety. Biometrics such as iris scanning are increasingly being rolled out in humanitarian settings – where refugees, on top of their already difficult living conditions, are required to get their eyes scanned in order to eat, for example. And now, not even the personal posts you intended to share only with your friends and family are safe – social media scraping to screen your immigration applications is becoming common practice.

What is happening with all this data? Various international organisations like the United Nations (UN) are using Big Data, or extremely large data sets, to predict population movements. However, data collection is not an apolitical exercise, especially when powerful actors like states or international organisations like the UN collect information on vulnerable populations. In an increasingly anti-immigrant global landscape, migration data has also been misrepresented for political ends, to affect the distribution of aid dollars and resources and support hardline anti-immigration policies.

What is also concerning is the growing role of the private sector in the collection, use, and storage of this data. The World Food Programme recently signed a 45 million USD deal with Palantir Technologies, a company that recently joined the EU lobby register and that has been widely criticised for providing technology supporting the detention and deportation programmes run by US Immigration and Customs Enforcement (ICE). What will happen with the data of 92 million aid recipients shared with Palantir? What data accountability mechanisms are in place during this partnership, and can data subjects refuse to have their data shared?

At the border: Are you lying? A bot can tell!

When you arrive at the border, more and more machines are there to scan, surveil, and collect information about you. Increasingly, these machines rely on automated decision-making. However, instances of bias in automated decision-making are widely documented. Pilot projects have emerged to monitor your face for signs of lying, and if the system becomes more “skeptical” through a series of increasingly complicated questions, you will be selected for further screening by a human officer. However, can this system account for cross-cultural differences in the way we communicate? What if you are traumatised and unable to recall details clearly? Discriminatory applications of facial or emotion recognition technologies have far-reaching consequences for people’s lives and rights, particularly in the realm of migration.

Beyond the border: Automating decisions

When you are applying for a visa or want to sponsor your spouse to join you, how do you feel about algorithms making decisions on your applications? A variety of countries have begun experimenting with automating decisions in immigration and refugee applications, visas, and even immigration detention. This use of technology may seem like a good idea, but many immigration applications are complicated. Two human officers looking at the same set of evidence can already make two completely different determinations. How will an automated system be able to deal with the nuances? And what if you want to challenge in court a decision you don’t agree with? Right now it is unclear who is responsible when things go wrong – is it the coder who creates the algorithm, the immigration officer using it, or even the algorithm itself?

Mistakes in migration decisions can have lasting repercussions – you can be separated from your family, lose your job, or even be wrongfully deported. This happened to over 7000 students who were deported from the UK based on a faulty algorithm that accused them of cheating on a language test. Where are these students now?

What about your human rights?

A number of your internationally protected rights are already engaged by the increasingly widespread use of new technologies to manage migration. These include equality rights and freedom from discrimination; life, liberty, and security of the person; freedom of expression; and privacy rights. When public entities are making decisions about you, your rights to due process are also affected, including the right to an impartial decision-maker, the right to appeal, and the right to know the case against you. These rights are particularly important to think about in high-risk contexts, such as decisions on your unemployment benefits or on whether you will be selected for a job interview, where the repercussions of getting decisions wrong can separate families, ruin careers, or, in extreme circumstances, impact your life and liberty.

However, currently there is no integrated regulatory global governance framework for the use of automated technologies, and no specific regulations in the context of migration management. Much of the global conversation centres on ethics without clear enforceability mechanisms and meaningful accountability. The signing and ratification of Convention 108+ should be a priority for states around the globe, as well as the strong enforcement of the protection it envisages.

Our next blog post will explore some of these rights and suggest a way towards context-specific accountability and governance mechanisms for migration control technologies.

One of the UN’s largest aid programmes just signed a deal with the CIA-backed data monolith Palantir (12.02.2020)
https://privacyinternational.org/news-analysis/2712/one-uns-largest-aid-programmes-just-signed-deal-cia-backed-data-monolith

The U.S. border patrol and an Israeli military contractor are putting a native American reservation under “persistent surveillance” (25.08.2019)
https://theintercept.com/2019/08/25/border-patrol-israel-elbit-surveillance/

Data protection, immigration enforcement and fundamental rights: What the EU’s Regulations on interoperability mean for people with irregular status
https://picum.org/wp-content/uploads/2019/11/Data-Protection-Immigration-Enforcement-and-Fundamental-Rights-Exec-Summary-EN.pdf

Your body is a passport
https://www.politico.eu/article/future-passports-biometric-risk-profiles-the-codes-we-carry/

(Contribution by Petra Molnar, Mozilla Fellow, EDRi)

26 Feb 2020

Can we rely on machines making decisions for us on illegal content?

By Access Now


Most of us like to discuss our ideas and opinions on silly and serious issues, share happy and sad moments, and play together on the internet. And it’s a great thing. We all want to be free to learn about new things, get in touch with our friends, and reach out to new people. Every minute, photos, videos, and ideas are being shared. Every single minute, 527 760 photos are shared on Snapchat, 4 146 600 videos are watched on YouTube, 456 000 tweets are shared, and around 46 740 photos are posted on Instagram. Do you know how many minutes we have in one day? 1440.

These pieces of information are different in nature. Some of them are home videos, and the law has nothing to do with them. But there is content that clearly breaches the law, such as child abuse material or incitement to violence. And between legal and illegal content, there is a third group, which some people find harmful while others have no problem with it. For example, pornography is not illegal, but some parents would like to prevent their 12-year-old children from accessing it. It is not easy to define, let alone categorise, what is harmful and for whom. It depends on culture, age, circumstances, and many other factors.

Because a large quantity of internet content is hosted by online platforms, they have to rely on automated tools to find and tackle different categories of illegal or potentially harmful content. In particular, dominant players such as Facebook and Google have been using monitoring and filtering technologies to identify and remove content. Do we agree that child abuse material should be removed and that the police should be able to investigate those crimes? Certainly. Are we against the permanent deletion of content that would otherwise serve as vital evidence and documentation of gross human rights abuses and war crimes? Absolutely.

The EU, together with some Member States, has been continuously pushing online platforms to swiftly remove illegal or potentially harmful content, such as online hate speech or terrorism, often under the threat of fines if they don’t act fast enough. To meet these demands, tech companies have to rely on automated tools to filter out information that should not go online.

While automation is necessary for handling the vast amount of content shared by users, it makes mistakes that can have far-reaching consequences for your rights and the well-being of society.

Contextual blindness of automated measures silences legitimate speech

Automated decision-making tools lack an understanding of linguistic and cultural differences. Content recognition technologies are unable to accurately assess the context of expressions. Even in straightforward cases, they make false matches. In 2017, the pop star Ariana Grande streamed her benefit concert “One Love Manchester” via her YouTube channel. The stream was promptly shut down by YouTube’s upload filter, which wrongly flagged Grande’s show as a violation of her own copyright. On a more serious note, the same automated tools removed thousands of YouTube videos that could serve as evidence of atrocities committed against civilians in Syria, potentially jeopardising any future war crimes investigation that could bring war criminals to justice. Because of their contextual blindness, in other words their inability to understand users’ real meaning and intentions, they flag and remove content that is completely legitimate. Thus journalists, activists, comedians, artists, as well as any of us sharing opinions, videos, or pictures online, risk being censored because internet companies rely on these poorly working tools.

They’re not a silver bullet

These technologies are sometimes described as “Artificial Intelligence” (AI), a term that conjures up notions of superhuman computational intelligence. However, nothing of the sort exists, nor is it on the horizon. Instead, what this term refers to is advanced statistical models that have been trained to recognise patterns, but with no actual “understanding” or “intelligence”. Content recognition technologies cannot understand the meaning or intention of those who share a post on social media or the effect it has on others. They merely scan content for certain patterns such as visual, verbal, or audio files, which correspond to what they have been trained to identify as “hate speech” or “terrorist content”. There is no perfect, unambiguous training data, and so their ability to recognise these patterns is inherently limited to what they have been trained to recognise. Although they can achieve very high levels of accuracy in identifying unambiguous, consistent patterns, their ability to automate the very sensitive task of judging whether something constitutes hate speech will always be fundamentally limited.
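As a purely hypothetical illustration of this limitation, the sketch below shows how simple pattern matching flags a post that condemns violence in exactly the same way as the violent message itself. The pattern list and example sentences are invented; real systems are far more sophisticated, but they share the same inability to see intent.

```python
# Invented, deliberately simplistic example of "contextual blindness":
# the matcher only sees patterns, never the intent behind them.

FLAGGED_PATTERNS = ["attack the protesters", "burn the camp"]

def is_flagged(post: str) -> bool:
    """Flag any post containing a known pattern, regardless of context."""
    text = post.lower()
    return any(pattern in text for pattern in FLAGGED_PATTERNS)

threat = "Everyone should attack the protesters tomorrow."
report = "The mayor condemned online calls to attack the protesters."

print(is_flagged(threat))  # True
print(is_flagged(report))  # True - legitimate reporting is flagged too
```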

Understandably, governments want to show their citizens that they are doing something to keep us safe from terrorism, hate speech, child abuse, or copyright breaches. And companies are very happy to sell their automation technologies as a silver-bullet solution to politicians desperately digging for a simple answer. But we have to keep in mind that no automation will solve problems deeply rooted in our society. We can use these tools to lessen the burden on platforms, but we need safeguards that ensure we don’t sacrifice our human rights and freedoms because of poorly trained automated tools.

So who should decide what we see online? Read the next installment of this series in the next issue of the EDRi-gram newsletter.

Access Now
https://www.accessnow.org/

How much data do we create every day? The mind-blowing stats everyone should read (21.05.2018)
https://www.forbes.com/sites/bernardmarr/2018/05/21/how-much-data-do-we-create-every-day-the-mind-blowing-stats-everyone-should-read/

A human-centric internet for Europe (19.02.2020)
https://edri.org/a-human-centric-internet-for-europe/

Automation and illegal content: Can we rely on machines making decisions for us? (17.02.2020)
https://www.liberties.eu/en/news/automation-and-illegal-content-article-1/18746

Automation and illegal content: can we rely on machines making decisions for us? (17.02.2020)
https://www.accessnow.org/automation-and-illegal-content-can-we-rely-on-machines-making-decisions-for-us/

(Contribution by Eliška Pírková, EDRi member Access Now, and Eva Simon, Civil Liberties Union for Europe)

19 Feb 2020

A human-centric internet for Europe

By EDRi

The European Union has set digital transformation as one of its key pillars for the next five years. New data-driven technologies, including Artificial Intelligence (AI), offer societal benefits – but addressing their potential risks to our democratic values, the rule of law, and fundamental rights must be a top priority.

“By driving a human rights-centric digital agenda Europe has the opportunity to continue being the leading voice on data protection and privacy,” said Diego Naranjo, Head of Policy at European Digital Rights (EDRi). “This means ensuring fundamental rights protections for personal data processing and digitalisation, and a regulatory framework for governing the full lifecycle of AI applications.”

The EU must proactively ensure that regulatory frameworks (such as the GDPR and the future ePrivacy Regulation) are implemented and enforced effectively. Where this doesn’t suffice, the EU and its Member States must ensure that the legislative ecosystem is “fit for the digital age”. This can be done by increasing the comprehensiveness (filling gaps and closing loopholes), clarity (clear interpretation), and transparency of EU and national rules. The principles of necessity and proportionality should always be front and centre whenever there is an interference with fundamental rights.

To deal with technological developments in a thorough way, in addition to data protection and privacy legislation, we need to take a look at other areas, such as competition rules and consumer law – including civil liability for harmful products or algorithms. Adopting a strong ePrivacy Regulation to ensure the privacy and confidentiality of our communications is also crucial.

From a fundamental rights perspective, one specific concern is the deployment of facial recognition technologies – whether AI-based or not.

“It is of utmost importance and urgency that the EU prevents the deployment of mass surveillance and identification technologies without fully understanding their impacts on people and their rights, and without ensuring that these systems are fully compliant with data protection and privacy law as well as all other fundamental rights,” said Naranjo.

Facial recognition and fundamental rights 101 (04.12.2019)
https://edri.org/facial-recognition-and-fundamental-rights-101/

The human rights impacts of migration control technologies (12.02.2020)
https://edri.org/the-human-rights-impacts-of-migration-control-technologies/

A Human-Centric Digital Manifesto for Europe
https://www.opensocietyfoundations.org/publications/a-human-centric-digital-manifesto-for-europe

19 Feb 2020

The impact of competition law on your digital rights

By Laureline Lemoine

This is the first article in a series dealing with competition law and Big Tech. The aim of the series is to look at what competition law has achieved when it comes to protecting our digital rights, where it has failed to deliver on its promises, and how to remedy this.

This series will first look at how competition and privacy law interact, to then focus on how they can support each other in tackling data exploitation and other issues related to Big Tech companies. With a potential reform of competition rules in mind, this series is also a reflection on how competition law could offer a mechanism to regulate Big Tech companies to limit their increasing power over our democracies.

Our personal data is seen by Big Tech companies as a commodity with economic value, and they cannot get enough of it. They track us online and harvest our personal data, including sensitive health data. Data protection and online privacy legislation aims to protect individuals against invasive data exploitation. Even though well-enforced privacy and data protection legislation is a must-have in our connected societies, there are other avenues that could be explored simultaneously. Because of the power imbalance between individuals and companies, as well as other issues affecting our fundamental rights, there is a need for a more structural approach, involving other policies and legislation. Competition law is often referred to as one of the tools that could redress this power imbalance, because it controls and regulates market power, including in the digital economy.

During her keynote speech at the International Association of Privacy Professionals (IAPP) conference in November 2019, Margrethe Vestager, European Commissioner for Competition and Executive Vice-President for A Europe Fit for the Digital Age, argued that, “[…] to tackle the challenges of a data-driven economy, we need both competition and privacy regulation, and we need strong enforcement in both. Neither of these two things can take the place of one another, but in the end, we’re dealing with the same digital world. Privacy and competition are both fundamentally there for the same reason: to protect our rights as consumers”.

Privacy and competition law are different policies

Competition and privacy law (which includes data protection and online privacy legislation) are governed by different legal texts and overseen by different authorities with distinct mandates.

According to Wojciech Wiewiórowski, the European Data Protection Supervisor (EDPS), “the main purpose of these two kinds of oversight is […] very different, because what the competition authorities want to achieve is the well-working fair market, what we want to achieve is to defend the fundamental rights [to privacy and data protection]”.

This means that, in assessing competition infringements, competition authorities do not go beyond competition issues. They have to assume that companies are or will be in compliance with their other legal obligations, including their privacy obligations.

The Court of Justice of the European Union confirmed this difference of mandates in 2006. Later, in the Facebook/WhatsApp merger case, the European Commission concluded that privacy-related concerns “do not fall within the scope of the EU competition law rules but within the scope of the EU data protection rules”. Facebook was later fined for providing “misleading” information to the competition authority.

Since then, Europe has seen the development of a data-driven economy and its fair share of privacy scandals and data breaches, too. And despite numerous investigations into problematic behaviours, Big Tech companies keep on growing.

But this goes far beyond competition issues, as the dominant position of Big Tech companies also gives them the power and the incentive to limit our freedoms, and to infringe on our fundamental rights. Their dominance is even a threat to our democracies.

As a way to tackle these issues, more people are calling for the alignment of enforcement initiatives of data protection authorities as well as competition and consumer authorities. This has led to debates about the silos between competition and data protection law, their differences but also their common objectives.

Data protection and competition against Big Tech powers

Both competition and data protection law impact economic activities and, at EU level, both are used to ensure the further deepening of the EU single market. The General Data Protection Regulation (GDPR), as well as ensuring a high level of protection of personal data, aims to harmonise the Member States’ legislation to remove obstacles to a common European market. Similarly, competition law prevents companies from erecting barriers to trade between competitors.

Moreover, data protection can be considered as an element of competition in cases where companies compete for who can better satisfy privacy preferences. There is, in this case, a common objective of allowing the individual to have control (as a consumer or as a data subject).

In her keynote speech, Vestager explained: “competition and competition policy have an important role to play … because the idea of competition is to put consumers in control. For markets to serve consumers and not the other way around,” she said, “it means if you don’t like the deal we’re getting, we can walk away and find something that meets our needs in a better way. And consumers can also use that power to demand something we really … care about, including maybe our privacy.”

Indeed, giving consumers a genuine choice to use privacy-friendly companies would help uphold standards in terms of privacy. Although it is hard to believe now, once upon a time Facebook prioritised privacy as a way to distinguish itself from MySpace, its biggest competitor back then.

However, the issue in the world of Big Tech today is that privacy offers no such leverage. The dominant positions of the few players controlling the market leave no room for others proposing privacy-friendly products. As a result, there is no choice but to use the services of Big Tech to stay connected online – the consumer is no longer in control.

One way to remedy this power imbalance between individuals and these giant companies could be greater cooperation between regulatory authorities. Regarding Facebook’s exploitation of consumers, BEUC, the European Consumer Organisation, has called for a “coherent enforcement approach for the data economy between regulators and across Member States” and wants the “European Commission to explore – with relevant authorities – how to deal with a concrete commercial behaviour that simultaneously breaches different areas of EU law”.

In 2016, the EDPS launched the Digital Clearinghouse, a voluntary network of regulators involved in the enforcement of legal regimes in digital markets, with a focus on data protection, consumer and competition law. National competition authorities are also looking into competition and data, and in 2019 the European Commission published a report on Competition Policy for the Digital Era, to which EDRi member Privacy International contributed.

Greater cooperation between regulators, inclusion of data protection principles in competition law, and many other ideas are being discussed to redress this issue of power imbalance. Some of them will be explored in the next articles of this series.

Regarding antitrust law, we will look at discussions on new sets of rules designed especially for the Big Tech market, as well as the development of the right to portability and interoperability. As for merger control, we will focus on the extent to which privacy could be considered as a theory of harm.

Opinion 8/2016 – EDPS Opinion on coherent enforcement of fundamental rights in the age of big data (2016)
https://edps.europa.eu/sites/edp/files/publication/16-09-23_bigdata_opinion_en.pdf

Competition and data
https://privacyinternational.org/learning-topics/competition-and-data

Factsheet – Competition in the digital era (2020)
https://www.beuc.eu/publications/beuc-x-2020-007_competition_in_digital_era.pdf

Report of the European Commission – Competition Policy for the digital era (2019)
https://ec.europa.eu/competition/publications/reports/kd0419345enn.pdf

Family ties: the intersection between data protection and competition in EU Law (2017)
http://eprints.lse.ac.uk/68470/7/Lynskey_Family%20ties%20the%20intersection%20between_Author_2016_LSERO.pdf

12 Feb 2020

The human rights impacts of migration control technologies

By Petra Molnar

This is the first blogpost of a series on our new project which brings to the forefront the lived experiences of people on the move as they are impacted by technologies of migration control. The project, led by our Mozilla Fellow Petra Molnar, highlights the need to regulate the opaque technological experimentation documented in and around border zones of the EU and beyond. We will be releasing a full report later in 2020, but this series of blogposts will feature some of the most interesting case studies.

At the start of this new decade, over 70 million people have been forced to move due to conflict, instability, environmental factors, and economic reasons. As a response to increased migration into the European Union, many states are looking into various technological experiments to strengthen border enforcement and manage migration. These experiments range from Big Data predictions about population movements in the Mediterranean to automated decision-making in immigration applications and Artificial Intelligence (AI) lie detectors at European borders. However, these technological experiments often fail to consider their profound human rights ramifications and real impacts on human lives.

A human laboratory of high risk experiments

Technologies of migration management operate in a global context. They reinforce institutions, cultures, policies and laws, and exacerbate the gap between the public and the private sector, where the power to design and deploy innovation comes at the expense of oversight and accountability. Technologies have the power to shape democracy and influence elections, through which they can reinforce the politics of exclusion. The development of technology also reinforces power asymmetries between countries and influences our thinking about which countries can push for innovation, while other spaces, like conflict zones and refugee camps, become sites of experimentation. The development of technology is not inherently democratic, and issues of informed consent and the right of refusal are particularly important to think about in humanitarian and forced migration contexts. For example, under the justification of efficiency, refugees in Jordan have their irises scanned in order to receive their weekly rations. Some refugees in the Azraq camp have reported feeling that they did not have the option to refuse the iris scan, because if they did not participate, they would not get food. This is not free and informed consent.

These discussions are not just theoretical: various technologies are already used to control migration, to automate decisions, and to make predictions about people’s behaviour.

Palantir machine says: no

However, are these appropriate tools to use, particularly without any governance or accountability mechanisms in place for if or when things go wrong? Immigration decisions are often opaque, discretionary, and hard to understand, even when human officers, not artificial intelligence, are making them. Many of us have had difficult experiences trying to get a work permit, reunite with a spouse, or adopt a baby across borders, not to mention seeking refugee protection as a result of conflict and war. These technological experiments to augment or replace human immigration officers can have drastic results: in the UK, 7000 students were wrongfully deported because a faulty algorithm accused them of cheating on a language acquisition test. In the US, the Immigration and Customs Enforcement agency (ICE) has partnered with Palantir Technologies to track and separate families and to enforce deportations and detentions of people escaping violence in Central and Latin America.

Image credit: Jenny Kim, “Bots at the Gate” Report, University of Toronto September 2018

What if you wanted to challenge one of these automated decisions? Where do responsibility and liability lie – with the designer of the technology, its coder, the immigration officer, or the algorithm itself? Should algorithms have legal personality? It is paramount to answer these questions, as much of the decision-making related to immigration and refugee matters already sits at an uncomfortable legal nexus: the impact on the rights of individuals is very significant, even where procedural safeguards are weak.

Sauron Inc. watches you – the role of the private sector

The lack of technical capacity within government and the public sector can lead to potentially inappropriate over-reliance on the private sector. Adopting emerging and experimental tools without in-house talent capable of understanding, evaluating, and managing these technologies is irresponsible and downright dangerous. Private sector actors have an independent responsibility to make sure technologies that they develop do not violate international human rights and domestic legislation. Yet much of technological development occurs in so-called “black boxes,” where intellectual property laws and proprietary considerations shield the public from fully understanding how the technology operates. Powerful actors can easily hide behind intellectual property legislation or various other corporate shields to “launder” their responsibility and create a vacuum of accountability.

While the use of these technologies may lead to faster decisions and shorten delays, they may also exacerbate and create new barriers to access to justice. At the end of the day, we have to ask ourselves, what kind of world do we want to create, and who actually benefits from the development and deployment of technologies used to manage migration, profile passengers, or other surveillance mechanisms?

Technology replicates power structures in society. Affected communities must also be involved in technological development and governance. While conversations around the ethics of AI are taking place, ethics do not go far enough. We need a sharper focus on oversight mechanisms grounded in fundamental human rights.

This project builds on critical examinations of the human rights impacts of automated decision-making in Canada’s refugee and immigration system. In the coming months, we will be collecting testimonies in locations including the Mediterranean corridor and various border sites in Europe. Our next blogpost will explore how new technologies are being used before, at, and beyond the border, and we will highlight the very real impacts that these technological experiments have on people’s lives and rights as they are surveilled and as their movement is controlled.

If you are interested in finding out more about this project or have feedback and ideas, please contact petra.molnar [at] utoronto [dot] ca. The project is funded by the Mozilla and Ford Foundations.

Mozilla Fellow Petra Molnar joins us to work on AI & discrimination (26.09.2019)
https://edri.org/mozilla-fellow-petra-molnar-joins-us-to-work-on-ai-and-discrimination/

Technology on the margins: AI and global migration management from a human rights perspective, Cambridge International Law Journal, December 2019
https://www.researchgate.net/publication/337780154_Technology_on_the_margins_AI_and_global_migration_management_from_a_human_rights_perspective

Bots at the Gate: A Human Rights Analysis of Automated Decision-Making in Canada’s Immigration and Refugee Systems, University of Toronto, September 2018
https://ihrp.law.utoronto.ca/sites/default/files/media/IHRP-Automated-Systems-Report-Web.pdf

New technologies in migration: human rights impacts, Forced Migration Review, June 2019
https://www.fmreview.org/ethics/molnar

Once migrants on Mediterranean were saved by naval patrols. Now they have to watch as drones fly over (04.08.2019)
https://www.theguardian.com/world/2019/aug/04/drones-replace-patrol-ships-mediterranean-fears-more-migrant-deaths-eu

Mijente: Who is Behind ICE?
https://mijente.net/notechforice/

The Threat of Artificial Intelligence to POC, Immigrants, and War Zone Civilians
https://towardsdatascience.com/the-threat-of-artificial-intelligence-to-poc-immigrants-and-war-zone-civilians-e163cd644fe0

(Contribution by Petra Molnar, Mozilla Fellow, EDRi)
