17 Jul 2019

New privacy alliance to be formed in Russia, Central and Eastern Europe

By EDRi

Civil society advocates from Russia and Central and Eastern Europe have joined forces to form a new inter-regional NGO to promote privacy in countries bordering the EU.

The initiative also involves activists from the post-Soviet countries, the Balkans and the EU accession candidate countries. One of its primary objectives is to build coalitions and campaigns in countries that have weak or non-existent privacy protections. The project emerged from a three-day regional privacy workshop held earlier in 2019 at the Nordic Non-violence Study Group (NORNONS) centre in Sweden. The workshop concluded that public awareness of privacy in the countries represented was at a dangerously poor level, and that better collaboration between advocates is one part of the solution.

There has been a pressing need for such an alliance for many years. A vast arc of countries from Russia through Western Asia and into the Balkans has been largely overlooked by international NGOs and intergovernmental organisations (IGOs) concerned with privacy and surveillance.

The initiative was convened by Simon Davies, founder of EDRi member Privacy International and the Big Brother Awards. He warned that government surveillance and abuse of personal information have become endemic in many of those countries:

“There is an urgency to our project. The citizens of places like Azerbaijan, Kazakhstan, Kyrgyzstan, Turkmenistan, and Armenia are exposed to wholesale privacy invasion, and we have little knowledge of what’s going on there. Many of these countries have no visibility in international networks. Most have little genuine civil society, and their governments engage in rampant surveillance. Where there is privacy law, it is usually an illusion. This situation applies even in Russia.”

A Working Group has been formed involving advocates from Russia, Serbia, Georgia, Ukraine and Belarus, and its membership includes Danilo Krivokapić from EDRi member SHARE foundation in Serbia. The role of this group is to steer the legal foundation of the initiative and to approve a formal Constitution.

The initiative’s Moderator is the former Ombudsman of Georgia, Ucha Nanuashvili. He too believes that the new NGO will fill a critical void in privacy activism:

“In my view, regions outside the EU need this initiative. Privacy is an issue that is becoming more prominent, and yet there is very little regional collaboration and representation. Particularly in the former Soviet states there’s an urgent need for an initiative that brings together advocates and experts in a strong alliance.”

Seed funding for the project has been provided by the Public Voice Fund of the Electronic Privacy Information Center (EPIC). EPIC’s president, Marc Rotenberg, welcomed the initiative and said he believed it would “contribute substantially” to the global privacy movement:

“We have been aware for some time that there is a dangerous void around privacy protection in those regions. We appreciate the good work of NGOs and academics to undertake this important collaboration.”

The Working Group hopes to formally launch the NGO in October in Albania. The group is presently considering several options for a name. Anyone interested in supporting the work of the initiative or wanting more information can contact Simon Davies at simon <at> privacysurgeon <dot> org.

The Nordic Nonviolence Study Group
https://www.nornons.org/

SHARE Foundation
https://www.sharefoundation.info/en/

EPIC’s Public Voice fund
https://epic.org/epic/publicvoicefund/

Mass surveillance in Russia
https://en.wikipedia.org/wiki/Mass_surveillance_in_Russia

Ucha Nanuashvili, Georgian Human Rights Centre
http://www.hridc.org/

17 Jul 2019

The first GDPR fines in Romania

By ApTI

The Romanian Data Protection Authority (DPA) has recently announced the first three fines applied in Romania as a result of the enforcement of the EU General Data Protection Regulation (GDPR).

On 27 June 2019, a Romanian bank was fined approximately 130 000 euro (613 912 RON) for revealing too much personal information such as the national identification number and the postal address of the payment issuers to the payment recipients. According to the Romanian DPA, 337 042 individuals were affected between February and December 2018.

The Romanian DPA based its decision on Article 5(1)(c) of the GDPR on data minimisation, and also mentioned Recital 78. Inadequate technical and organisational measures, together with a failure to design processes that reduce the personal information collected to the minimum necessary, meant that appropriate safeguards for protecting individuals’ data were never integrated.

One could ask why the DPA did not fine the bank for breaching Article 5(1)(b) on purpose limitation and Article 5(1)(f) on integrity and confidentiality of the data. The national identification number and the address of individuals were collected for internal identification purposes, not for disclosure to third parties. By revealing the data to the beneficiaries of the payments, the bank failed to ensure its security and confidentiality, exposing individuals’ personal data to potential unauthorised or unlawful processing.

Another fine, of approximately 15 000 euro (71 028 RON), followed on 2 July 2019. It was imposed on a hotel for breaching the security of its clients’ personal information. A list with information about 46 guests who were having breakfast at the hotel was photographed by an unauthorised person and published online. The hotel filed a data security breach notification with the DPA, and after the investigation, the DPA fined the hotel based on Article 24 of the GDPR for failing to implement appropriate technical and organisational safeguards to protect personal data. The hotel did not take measures to secure the data against accidental or unlawful disclosure and against unauthorised processing. The DPA’s decision recalls Recital 75, which mentions the risks and types of damage associated with the processing of personal data.

A third GDPR fine was announced on 12 July 2019. It was applied to a website that, due to improper security measures after a platform migration, allowed public access via two links to a list of files containing details of several business contacts, including name, surname, postal address, email, phone, workplace and transaction details. The company was fined 3 000 euro.

The first GDPR fine (04.07.2019)
https://www.dataprotection.ro/index.jsp?page=Comunicat_Amenda_Unicredit&lang=en

The second GDPR fine (only in Romanian, 08.07.2019)
https://www.dataprotection.ro/index.jsp?page=O_noua_amenda_GDPR&lang=ro

The third GDPR fine (only in Romanian, 12.07.2019)
https://www.dataprotection.ro/?page=2019%20A%20treia%20amenda%20in%20aplicarea%20RGPD&lang=ro

(Contribution by Valentina Pavel, EDRi member ApTI, Romania)

17 Jul 2019

The digital rights of LGBTQ+ people: When technology reinforces societal oppressions

By Chloé Berthélémy

Online surveillance and censorship impact everyone’s rights, and particularly those of already marginalised groups such as lesbian, gay, bisexual, transgender, queer and other (LGBTQ+) people. The use of new technologies usually reinforces existing societal biases, making those communities particularly prone to discrimination and security threats. As a follow-up to Pride Month, here is an attempt to map out what is at stake for LGBTQ+ people in digital and connected spaces.

The internet has played a considerable role in the development and organisation of the LGBTQ+ community. It represents an empowering tool for LGBTQ+ people to meet with each other, to build networks and join forces, to access information and acquire knowledge about vital health care issues, as well as to express, spread and strengthen their political claims.

We’ve got a monopoly problem

The centralisation of electronic communications services around a few platforms has created new barriers to LGBTQ+ people exercising their digital rights. Trapped in a network effect – whereby leaving the platform would represent a big loss for the user – most of them have only one place to go to meet and spread their ideas. The content they post is moderated arbitrarily by these privately owned platforms, following their own standards and “community guidelines”.

Powerful platforms’ practices result in many LGBTQ+ accounts, posts and themed ads being taken down, while homophobic, transphobic and sexist content often remains untouched. In practice, these double standards for reporting and banning content mean that when queer and transgender people use typical slurs to reclaim them and take pride in them, social media reviewers often disregard the intent and block them, whereas attackers use identical offensive terms without fearing the same punishment. Moreover, automating the process only worsens the injustice, as algorithms are incapable of telling the difference between the two cases. This leaves the LGBTQ+ community disenfranchised, without reasonable explanations or the possibility to appeal the decisions.

Community standards apply both on the open parts of social media and on the related private chats (such as Facebook Messenger). Since those networks play an essential role in discussing queer issues, dating and sexting, LGBTQ+ people are highly dependent on the platforms’ tolerance for sexual expression and nudity. Sometimes sudden changes in community guidelines are carried out without any user consultation or control. For example, the LGBTQ+ community was particularly harmed when Tumblr decided to no longer allow Not Safe For Work (NSFW) content and Facebook banned “sexual solicitation” on its services.

Another example of companies’ policies affecting transgender people specifically is the rising trend of applying strict real-name policies online. The authentication requirement based on official ID documents prevents transgender people from using their new name and identity. For many of them, notably those living in repressive countries, it is difficult to have their name and gender markers legally changed. As a consequence, their accounts are regularly deleted after a few months of use, and they lose all their content and contacts. With little chance of retrieving their accounts, their freedoms online are severely hindered.

There is no such thing as a safe space online

Even when LGBTQ+ people leave the social media giants, they cannot necessarily turn to a safer platform online. Grindr, the biggest social networking app for gay, bi, trans, and queer people, was used by Egyptian authorities to track down and persecute LGBTQ+ people. Using fake profiles, the police were able to collect evidence, imprison, torture and prosecute people for illegal sexual behaviour. This has had a chilling effect on the community, which has become reluctant to engage in new encounters.

Other dangerous practices involve the outing of LGBTQ+ people online. For instance, a Twitter account was purposely set up in Paraguay to expose people’s sexual orientation by extracting revealing content, such as nude pictures posted on Grindr, and posting it publicly. Despite many appeals made against the account, it disseminated content for six weeks before the platform finally deleted it. The damage to the victims is long-term and irreparable. This is particularly the case in countries where there is no hate crime legislation, or where such legislation is not fully implemented, resulting in impunity for State and non-State actors’ homophobic and transphobic violence.

Technology is not neutral

The poor security levels with which those services and apps are built reflect their Western-centric, heteronormative and gender-biased nature. This endangers already vulnerable LGBTQ+ communities when the services spread globally and go viral, especially in the Global South. Technologies, in particular emerging ones, can be misused to discriminate. For instance, a facial recognition system has been trained to recognise homosexual people based on their facial features. Not only is the purpose of this technology dubious, but it would also be dangerous if it were scaled up and landed in the hands of repressive governments.

The main problem is that communities are not involved in the production stages. It is hard to incentivise profit-driven companies to adapt their services to specific needs while keeping them free and accessible for all, and marginalised groups can usually not afford additional premium security features. Furthermore, the developer community remains majority white, middle-aged and heterosexual, with little understanding of the local realities and dangers in other regions of the world. Encouraging LGBTQ+ people with diverse regional backgrounds to join this community would noticeably improve the offer of community-led, free, open and secure services. A lot remains to be done to push companies to engage with affected communities in order to develop tools that are privacy-friendly and inclusive by design.

A good example to follow is the Grindr initiative by EDRi member ARTICLE 19, which includes the ability to change the app icon’s appearance and the addition of a password lock to better protect LGBTQ+ users.

This article is based on an interview of Eduardo Carrillo, digital LGBTQI+ activist in Paraguay and project director at TEDIC. TEDIC applies a gender perspective to its work on digital rights and carries out support activities for the local LGBTQ+ community to mitigate the discrimination it encounters.

In this article, we use the term LGBTQ+ to designate lesbian, gay, bisexual, transgender and queer people, and all the other gender identities and sexual orientations that do not correspond to heterosexual and cisgender norms (a cisgender person being one whose gender identity matches the sex assigned at birth).

Women’s rights online: tips for a safer digital life (08.03.2019)
https://edri.org/womens-rights-online-tips-for-a-safer-digital-life/

How to retrieve our account on Facebook: Online censorship of the LGBTQI community (02.05.2018)
https://www.tedic.org/como-recuperar-nuestra-cuenta-en-facebook-censura-en-linea-hacia-colectivo-lgbtqi/

App Security Flaws Could Create Added Risks for LGBTQI Communities (17.12.2018)
https://cyborgfeminista.tedic.org/app-security-flaws-could-create-added-risks-for-lgbtqi-communities/

No, Facebook’s updated sex policy doesn’t prohibit discussing your sexual orientation (06.12.2018)
https://www.wired.com/story/facebooks-hate-speech-policies-censor-marginalized-users/

Designing for the crackdown (25.4.2018)
https://www.theverge.com/2018/4/25/17279270/lgbtq-dating-apps-egypt-illegal-human-rights

(Contribution by Chloe Berthélémy, EDRi)

17 Jul 2019

“SIN vs Facebook”: First victory against privatised censorship

By Panoptykon Foundation

In an interim measures ruling on 11 June 2019, the District Court in Warsaw has temporarily prohibited Facebook from removing fan pages, profiles, and groups run by Civil Society Drug Policy Initiative (SIN) on Facebook and Instagram, as well as from blocking individual posts. SIN, a Polish non-profit organisation promoting evidence-based drug policy, filed a lawsuit in May 2019 against Facebook, with the support of the Polish EDRi member Panoptykon Foundation.

In the lawsuit, SIN argued that the blocking of its content restricted, in an unjustified way, the organisation’s ability to disseminate information, express opinions and communicate with its audience. Concerned about further censorship, SIN was not able to freely carry out its educational activities. Moreover, the removal of content suggested that the organisation’s activity on the platforms was harmful, thus undermining SIN’s credibility. By granting the request for interim measures, the court decided that SIN substantiated its claims. Although this is only the beginning of the trial, it is an important first step in the fight against excessive and opaque content blocking practices on social media.

The interim measures ruling from 11 June implies that – at least until the final judgement in the case – SIN’s activists may carry out their activities on drug policy without fearing that they will suddenly lose the ability to communicate with their audience. The court has furthermore obliged Facebook to store the profiles, fan pages and groups deleted in 2018 and 2019, though not to restore them. Storing them would allow SIN – if it were to win the case – to have them quickly restored, together with all the published content, comments by other users, as well as followers and people who liked the fan page. This is not the only good news: the court also confirmed that Polish users can enforce their rights against the tech giant in Poland. Unfortunately, the court did not grant, at this stage, the request to restore the deleted fan pages, profiles and groups for the duration of the trial. The court argued that this would be a far-reaching measure which would, in practice, amount to recognising the principal claim expressed in the lawsuit.

In June 2019, educational posts in which SIN’s educators cautioned against the use of some substances during hot weather were again blocked, this time on Instagram. SIN received a warning that “subsequent infringements of the community standards” might result in the removal of the entire profile. Now, after the interim measures ruling, the activists will be able to catch their breath and continue their social media activity without worrying that they may be blocked again at any time. This “private censorship” is one of the modern-day threats to freedom of speech. Platforms such as Facebook and Instagram have become “gatekeepers” of online expression, and, as SIN’s case shows, there is no viable alternative to them. Getting blocked on these platforms is a significant limitation on disseminating information.

The court’s interim decision means that, for now, Facebook will not be able to arbitrarily block content published by SIN. By issuing this decision, the court also recognised its jurisdiction to hear the case in Poland under Polish law. This is great news for Polish users and possibly users from other EU Member States. In cases against global internet companies, the possibility to claim one’s rights before a domestic court is a condition for viable access to justice – if the only option were to sue them in their home countries, the costs, the language barrier and a foreign legal system would make it very difficult, if not impossible, for most citizens to exercise their rights.

However, the court’s decision is not final – after its delivery, Facebook Ireland will have the right to appeal it before the Court of Appeal. The decision was made ex parte, solely on the basis of the position presented by SIN, without the participation of the other party; it only implements a temporary measure and does not prejudge the final verdict of the entire trial – the main proceedings are only about to begin.

Panoptykon Foundation
https://panoptykon.org/

SIN vs Facebook
https://panoptykon.org/sinvsfacebook

SIN v Facebook: Tech giant sued over censorship in landmark case (08.05.2019)
https://edri.org/sin-v-facebook/

(Contribution by Anna Obem and Dorota Glowacka, EDRi member Panoptykon Foundation, Poland)

17 Jul 2019

Microsoft Office 365 banned from German schools over privacy concerns

By Jan Penfrat

In a bombshell decision, the Data Protection Authority (DPA) of the German Land of Hesse has ruled that schools are banned from using Microsoft’s cloud office product “Office 365”. According to the decision, the platform’s standard settings expose personal information about school pupils and teachers “to possible access by US officials” and are thus incompatible with European and local data protection laws.

The ruling is the result of several years of domestic debate about whether German schools and other state institutions should be using Microsoft software at all, reports ZDNet. In 2018, investigators in the Netherlands discovered that the data collected by Microsoft “could include anything from standard software diagnostics to user content from inside applications, such as sentences from documents and email subject lines” – all of which contravenes the General Data Protection Regulation (GDPR) and potentially local laws for the protection of the personal data of underage pupils.

While Microsoft’s “Office 365” is not a new product, the company has recently changed its offer in Germany: Until recently, it provided customers with a special German cloud version hosted on servers run by the German telecoms giant Deutsche Telekom. Deutsche Telekom served as a kind of infrastructure trustee, putting customer data outside the legal reach of US law enforcement and intelligence agencies. In 2018, however, Microsoft announced that this special arrangement would be terminated in 2019 and that German customers would be offered a migration to Microsoft’s standard cloud offer in the EU.

Microsoft insists that nothing changes for customers because the new “Office 365” servers are also located in the EU or even in Germany. However, legal developments in the US have put the Hesse DPA on high alert: The newly enacted US CLOUD Act empowers US government agencies to request access to customer data from all US-based companies, no matter where their servers are located.

To make things even worse, Germany’s Federal Office for Information Security (BSI) recently expressed concerns about the telemetry data that the Windows 10 operating system collects and transmits to Microsoft. So even if German (or European) schools stopped using the company’s cloud office suite, its ubiquitous Windows operating system would still leak data to the US, with no way for users to control or stop it.

School pupils are usually not able to give consent, Max Schrems from EDRi member noyb told ZDNet. “And if data is sent to Microsoft in the US, it is subject to US mass surveillance laws. This is illegal under EU law.” Even if that was legal, says the Hesse DPA, schools and other public institutions in Germany have a “particular responsibility for what they do with personal data, and how transparent they are about that.”

It seems that fulfilling those responsibilities has not been possible while using Microsoft Office 365. As a next step, it is crucial that European DPAs discuss these findings within the European Data Protection Board to arrive at an EU-wide rule that protects children’s personal data from unregulated access by US agencies. Otherwise, European schools would be well advised to switch to privacy-friendly alternatives such as Linux, LibreOffice and Nextcloud.

Statement of the Commissioner for Data Protection and Freedom of Information of the Land of Hesse regarding the use of Microsoft Office 365 in schools in Hesse (only in German, 09.07.2019)
https://datenschutz.hessen.de/pressemitteilungen/stellungnahme-des-hessischen-beauftragten-f%C3%BCr-datenschutz-und

Microsoft Office 365: Banned in German schools over privacy fears (12.07.2019)
https://www.zdnet.com/article/microsoft-office-365-banned-in-german-schools-over-privacy-fears

Microsoft offers cloud services in new German data centers as of 2019 in reaction to changes in demand (only in German, 31.08.2018)
https://news.microsoft.com/de-de/microsoft-cloud-2019-rechenzentren-deutschland/

(Contribution by Jan Penfrat, EDRi)

11 Jul 2019

E-Commerce review: Technology is the solution. What is the problem?

By Kirsten Fiedler

This is the second article in our series on Europe’s future rules for intermediary liability and content moderation. You can read the introduction here.

When it comes to tackling illegal and “harmful” content online, there’s a major trend in policy-making: Big tech seems to be both the cause of and the solution to all problems.

However, hoping that technology will solve problems that are deeply rooted in our societies is misguided. Moderating the content that people post online can only be a partial answer to much wider societal issues. It might help us deal with some of the symptoms, but it will not tackle the root causes of the problems.

Secondly, giving in to hypes and trying to find “quick fixes” for trending topics occupying the news cycle is not good policy-making. Rushed policy proposals rarely allow for an in-depth analysis of the full picture, or for the consideration and mitigation of potential side-effects. Worse, such proposals are often counter-productive.

For instance, an Oxford Internet Institute study revealed that the problem of disinformation on Twitter during the EU elections had been overstated. Less than 4% of sources circulating on that platform during the researchers’ data collection period qualified as disinformation. Overall, users shared far more links to established news outlets than to suspicious online sources.

Therefore, before launching any review of the EU’s e-Commerce Directive, policy-makers should ask themselves: What are the problems we want to address? Do we have a clear understanding of the nature, scale, and evolution of those problems? What can be done to efficiently tackle them? Even though the Directive’s provisions on the liability of online platforms also impact content moderation, the upcoming e-Commerce review is too important to be hijacked by the blind ambition to eradicate all objectionable speech online.

In Europe, the decision about what is illegal is part of the democratic process in the Member States. Defining “harmful online content” that is not necessarily illegal is much harder and there is no process or authority to do it. Therefore, regulatory efforts should focus on illegal content only. The unclear and slippery territory of attempting to regulate “harmful” (but legal) content puts our democracy, our rights and our freedoms at risk. When reviewing the E-Commerce Directive, the EU Commission should follow its Communication on Platforms from 2016.

Once the problems are properly defined and policy-makers agree on what kind of illegal activity should be tackled online, any regulation of online platforms and uploaded content should take a closer look at the services it attempts to regulate, as well as assess how content spreads and at what scale. Regulating the internet as if it consisted only of Google and Facebook will inevitably lead to an internet that does consist only of Google and Facebook. Unfortunately, as we have seen in the debate around upload filters during the copyright reform, political thinking on speech regulation is focused on a small number of very dominant players (most notably Facebook, YouTube and Twitter). This political focus paradoxically turned out to reinforce the dominant market position of existing monopolies. It would be very unfortunate to repeat those mistakes in the context of legislation with consequences as far-reaching as those of the EU’s e-Commerce Directive.


European Commission Communication on Online Platforms and the Digital Single Market Opportunities and Challenges for Europe (25.05.2016)
https://ec.europa.eu/digital-single-market/en/news/communication-online-platforms-and-digital-single-market-opportunities-and-challenges-europe

Junk News during the EU Parliamentary Elections (21.05.2019)
https://comprop.oii.ox.ac.uk/research/eu-elections-memo/

09 Jul 2019

Join EDRi as policy intern!

By EDRi

European Digital Rights (EDRi) is an international not-for-profit association of 42 digital human rights organisations from across Europe. We defend and promote rights and freedoms in the digital environment, such as the right to privacy, freedom of expression, and access to information.

Join EDRi now and become a superhero for the defence of our rights and freedoms online!

The EDRi office in Brussels is currently accepting applications for an intern to support our policy team. This is your opportunity to get first-hand experience in EU policy-making and contribute to a change in favour of digital rights and freedoms across Europe. The internship will run from 15 September (or 1 October) to 31 March and is remunerated with a minimum of 750 EUR per month (according to the “convention d’immersion professionnelle”).

Key tasks:

  • Conducting research and analysis on topics such as data protection, privacy, net neutrality, intermediary liability and freedom of expression, encryption, cross-border access to data and digital trade
  • Drafting regular internal policy updates for the EDRi network
  • Monitoring international, EU and national policy developments
  • Organising and participating in meetings and events
  • Supporting the creation of the EDRi-gram newsletter
  • Assisting in the preparation of draft reports, presentations and other internal and external documents
  • Supporting EDRi’s day-to-day office management
  • Developing public education materials

Qualifications:

  • Demonstrated interest in and enthusiasm for human rights and technology-related legal issues
  • Good understanding of European policy-making
  • Excellent research and writing skills
  • Fluent command of spoken and written English
  • Computer literacy
  • Experience in the fields of data protection, privacy, copyright, net neutrality, intermediary liability and freedom of expression, surveillance and law enforcement, or digital trade is an asset

Read about previous internship experiences at EDRi here.

How to apply:

To apply please send a maximum one-page cover letter and a maximum two-page CV in English and only as pdf files (other formats such as doc and docx will not be accepted) to jan >dot< penfrat >at< edri >dot< org.

The closing date for applications is 22 July 2019. The interviews and written assignments will take place between 26-30 August 2019. Please note that due to limited resources only shortlisted candidates will be contacted.

We are an equal opportunities employer with a strong commitment to transparency and inclusion. People from all backgrounds are encouraged to apply and we strive to have a diverse and inclusive working environment.

04 Jul 2019

Real Time Bidding: The auction for your attention

By Andreea Belu

The digitalisation of marketing has introduced novel industry practices and business models. Some of these new systems have developed into serious threats to people’s freedoms. A particularly alarming one is Real Time Bidding (RTB).

When you visit a website, you often encounter content published by the website’s owner/author, and external ads. Since a certain type of content attracts a certain audience, the website owner can sell some space on their website to advertisers that want to reach those readers.

In the early years of the web, ads were contextual: a website would sell its ad space to an advertiser in its own field, so ads on a website about cars would typically relate to cars. Later, ads became personalised, focusing on the individual website reader – so-called “programmatic advertising”. The website still sells its space, but now it sells it to advertisement platforms, “ad exchanges”. Ad exchanges are digital marketplaces that connect publishers (like websites) to advertisers by auctioning off the attention you give that website. This automated auction process is called Real Time Bidding (RTB).

How does Real Time Bidding work?

Imagine auctions, stock exchanges, traders, big screens, noise, graphs, percentages. Similarly, RTB systems facilitate the auction of a website’s ad space to the highest-bidding advertiser. How does it work?

A website rents its advertising space to one (or many) ad exchanges. In the blink of an eye, the ad exchange creates a “bid request” that can include information from the website: what you’re reading, watching or listening to on the website you are on, the categories into which that content goes, your unique pseudonymous ID, your profile’s ID from the ad buyer’s system, your location, device type (smartphone or laptop), operating system, browser, IP address, and so on.

From their side, advertisers tell the ad exchange who they want to reach, sometimes providing detailed customer segments. These categories are built by combining the advertisers’ own data about (potential) customers with the personal profiles generated by data brokers such as Cambridge Analytica, Experian, Acxiom or Oracle. The ad exchange now has a complex profile of you, made of information from the website supplying the ad space and information from the advertiser demanding it. When a bid request matches an advertiser’s desired customer segment, a Demand Side Platform (DSP) acting on behalf of thousands of advertisers starts placing bids for the website’s ad space. The highest bid wins, its ad is placed in front of that particular website visitor, and the rest is history.
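The matching-and-auction step can be sketched as follows. This is a deliberately simplified toy model, assuming made-up advertiser segments and prices; real DSPs and exchanges use far richer protocols:

```python
def run_auction(bid_request, advertiser_segments):
    """Toy model of an RTB auction: advertisers whose target segment
    matches the bid request place bids; the highest bid wins."""
    bids = []
    for advertiser, (segment, bid_price) in advertiser_segments.items():
        # An advertiser bids only if its target segment matches the request.
        if segment.issubset(set(bid_request["content_categories"])):
            bids.append((bid_price, advertiser))
    if not bids:
        return None  # no match: the slot may fall back to an untargeted ad
    bid_price, winner = max(bids)  # highest bid wins the impression
    return winner

request = {"content_categories": ["automotive", "consumer reviews"]}
advertisers = {
    "CarBrand": ({"automotive"}, 0.40),  # bids 0.40 per impression
    "ShoeShop": ({"fashion"}, 0.90),     # segment does not match this page
    "TyreCo":   ({"automotive"}, 0.25),
}
print(run_auction(request, advertisers))  # → CarBrand
```

All of this happens in the milliseconds between your click and the page finishing loading.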


TL;DR

Every time you visit a website that uses RTB, your personal data is broadcast to potentially thousands of companies ready to target you with their ads. Whenever this happens, you have no control over who has access to your personal data. Whenever this happens, you have no way of objecting to being traded. Whenever this happens, you cannot object to being targeted as a “Jew hater”, an incest or abuse victim, impotent, or a right-wing extremist. Whenever this happens, you have no idea whether you are being discriminated against.

Whenever this happens, you have no idea where your data flows.

EDRi members taking legal action against RTB

Real Time Bidding poses immense risks to our human rights in the digital space, specifically to the rights recognised in the EU General Data Protection Regulation (GDPR). Moreover, it puts you at high risk of being discriminated against. For these reasons, several EDRi members and observers have taken action and filed complaints against RTB in different EU countries. Privacy International, Panoptykon Foundation, Open Rights Group, Bits of Freedom, Digitale Gesellschaft, digitalcourage, La Quadrature du Net and Coalizione Italiana per le Libertà e i Diritti civili are taking part in a wider campaign that urges the ad tech industry to #StopSpyingOnUs.

Support their effort in fighting for your rights and spread the word!

Read More:

Privacy International full timeline of complaints
https://privacyinternational.org/adtech-complaints-timeline

GDPR Today: Ad Tech GDPR complaint is extended to four more European regulators
https://www.gdprtoday.org/ad-tech-gdpr-complaint-is-extended-to-four-more-european-regulators/

Prevent the Online Ad Industry from Misusing Your Data – Join the #StopSpyingOnUs Campaign
https://www.liberties.eu/en/campaigns/stop-spying-on-us-fix-ad-tech-campaign/307

The Adtech Crisis and Disinformation – Dr Johnny Ryan
https://vimeo.com/317245633

Blogpost series: Your privacy, security and freedom online are in danger (14.09.2016)
https://edri.org/privacy-security-freedom/

03 Jul 2019

EDRi is looking for a Communications Intern

By EDRi

European Digital Rights (EDRi) is an international not-for-profit association of 42 digital human rights organisations. We defend and promote rights and freedoms in the digital environment, such as the right to privacy, personal data protection, freedom of expression, and access to information.

Join EDRi now and become a superhero for the defence of our rights and freedoms online!

The EDRi Brussels office is currently looking for an intern to support the Senior Communications Manager, the Campaigns and Communications Manager, and the Community Coordinator. The internship will focus on social media, publications, campaigning, press work, and the production of written materials. The intern will also assist in tasks related to community coordination.

The internship will begin in September 2019 and last four to six months. You will receive a monthly remuneration of at least 750 EUR (under a “convention d’immersion professionnelle”).

Key tasks:

  • maintaining social media accounts: drafting posts, creating visuals, engaging with followers, monitoring
  • contributing to drafting and editing of press releases and briefings, newsletter articles, and supporter mailings
  • maintaining mailing lists for press distribution, newsletter and supporter mailings
  • layouts and editing of visual (and audiovisual) materials
  • updating and analysing communications statistics and visibility in media
  • assisting in event organisation

Must-have skills and qualifications:

  • experience in social media community management and publications
  • photo editing skills
  • excellent skills in writing and editing
  • fluent command of spoken and written English

Desired skills:

  • video editing skills
  • experience in journalism, media or public relations
  • interest in online activism and campaigning for digital human rights

How to apply:

To apply, please send a maximum one-page cover letter and a maximum two-page CV (only PDFs are accepted) by email to heini >dot< jarvinen >at< edri >dot< org. The closing date for applications has been extended to 22 July 2019. Interviews with selected candidates will take place during the last two weeks of July.

We are an equal opportunities employer with a strong commitment to transparency and inclusion. We strive to have a diverse and inclusive working environment. We encourage individual members of groups at risk of racism or other forms of discrimination to apply for this post.

03 Jul 2019

Fighting online hate speech: An alternative to mandatory real names

By Gesellschaft für Freiheitsrechte

The internet facilitates debates: People around the globe can connect at almost zero cost, and information and opinions that would otherwise hardly be noticed can go viral through social media. However, services like Twitter and Facebook can also be used for targeted defamation. Especially people who belong to minorities or endorse views outside the mainstream have described grave verbal attacks. Women who are active in politics often face rape threats. Such abuses of online communication should not be tolerated in a democracy.

An obligation for real names is not a solution

In response, “number plates” for the internet have been proposed: people would be required to disclose their real names before they can participate in forums and on social media. However, such a “real name obligation” would achieve very little in terms of protection against verbal abuse online, while causing serious collateral damage.

The arguments against an obligation for real names are manifold: For example, its supporters fail to notice that there has been an obligation for real names on Facebook for many years, which many users simply ignore. It’s doubtful whether such an obligation would even be admissible under European law. In any case, such a policy would only apply at the national level. Should platforms simply hide all posts by users from other countries where real names are not required by law?

Everyday experience and recent studies show that a remarkable number of users do not shy away from criminal online activities, even if they are acting under their real names. This is because the problem with pursuing crimes online is not the anonymity of the offenders; it is the irritatingly low level of engagement from the responsible authorities. If it’s possible to commit such crimes without any risk of consequences, this will impact the popular sense of right and wrong.

The biggest disadvantage of a real name obligation is that it would silence those who depend on anonymous or pseudonymous communication. Conservatives often assume that such a need only exists in authoritarian states. However, even in a democracy many people have comprehensible reasons why they would not or cannot communicate openly. For example, people who engage against Nazis can hardly make this public in some regions of Germany without facing significant risk of physical harm. Interestingly, even almost all German judges and prosecutors who actively use Twitter prefer to do so under a pseudonym.

Better: Target the accounts

Introducing a real name obligation would be a dangerous error of judgement, but legislators do need to act. Because online bullies cannot always be identified, the focus should be on their weapons: the accounts they use to commit verbal acts of violence. A judicial process should be introduced in which victims or victim protection organisations can request that accounts abused for unlawful speech be blocked. Courts could block individual accounts for a certain period of time, or permanently, especially in recurrent cases. The platforms would be barred from showing these accounts to users in a specific geographical location.

Such a judicial process would have many advantages: The identity of the people behind an account would not matter anymore. This would also be an effective course of action against account holders who are known but out of reach, for example because they are located abroad. Contrary to the approach of the Network Enforcement Act (NetzDG), it would not be the platforms who decide, often in dubious ways, which posts are illegal; that decision would be left to an independent court. Courts have demonstrated that they are capable of making such decisions: in particular, there are courts that specialise in press law and are accustomed to ruling even on delicate freedom of speech questions within a few hours.

The NetzDG made social media platforms “addressable”

Of course, such a judicial process would raise questions: Who would be the subject of such a request if the responsible person is not known? With a bit of creativity, those details can be resolved. In the US a judicial petition against “John Doe” is filed in such cases. This anonymous party would be represented in court by the platform that would be responsible to implement any blockages.

Each of the large platforms has already registered a point of contact in Germany pursuant to § 5 NetzDG, so that they are always reachable for courts of law. This procedure could also ensure that the people behind an affected account can be heard in court, if the law obliged platforms to forward the petition to them (via email, for example). This would give the account holder the option to reveal their identity and take over the judicial process under their own name.

Legislative competence probably with the Federal Government

The law to create such a judicial process could be enacted by the German Federal Government. This is not about a new regulation on which content would be admissible online – this would be for the Federal States to enact and would require an arduous update of the Interstate Broadcasting Treaty (Rundfunkstaatsvertrag). The Federal Government could base this law on its competences to regulate judicial procedures as well as telemedia law. The Federal Government should urgently take this opportunity and create a “Protection against Digital Violence Act”, allowing for accounts that publish unlawful content to be blocked. The onus is still on the Federal States to become more effective in pursuing supposedly lesser online offences, which is within their legal purview.

A German version of this article was first published at https://background.tagesspiegel.de/statt-klarnamen-digitales-gewaltschutzgesetz

EU action needed: German NetzDG draft threatens freedom of expression (23.05.2017)
https://edri.org/eu-action-needed-german-netzdg-draft-threatens-freedomofexpression/

(Contribution by Ulf Buermeyer, EDRi member Gesellschaft für Freiheitsrechte – GFF, Germany; translation from German into English by EDRi volunteers Stefan and Sebastian)
