
An attempt at a fundamental-rights-based proposal

Dominant internet platforms like Facebook, Amazon and Google are increasingly becoming the arena of social and legal conflicts. We are witnessing a worldwide debate about potential new rules for dominant social media platforms (a so-called new “platform regulation”). These debates are highly complex in a law-based society because they require us to resolve conflicts between fundamental rights and risk delegating essential tasks to private actors. Still, the negative effects of harmful behaviour by these actors increase the political appetite for regulation.

To navigate the upcoming debate, we want to propose, collect, and evaluate concrete policy solutions within the fundamental rights framework of the European Union. These proposals have been reviewed by a group of experts from academia and civil society. The project aims at broad acceptance of the developed positions among European civil society stakeholders. Given the complexity and novelty of the underlying problems, this proposal cannot be considered the solution to all questions in this field; instead, it aims to further the debate with a concrete proposal that also addresses enforcement processes. Importantly, this proposal does not tie enforcement to liability, as such an approach would inherently create an incentive for over-blocking on the part of platforms.

This is a policy proposal in the form and in the spirit of a request for comments. We invite everybody to participate in the discussion, to provide feedback, and to propose amendments to any of the proposals outlined below on this website: feedback@platformregulation.eu

0 Definitions and Basic Concepts

0.1 MUST Types of Recommendations

MUST This word means that the proposal is an absolute requirement of the recommendation.

MUST NOT This phrase means that the proposal is an absolute prohibition of the recommendation.

RECOMMENDED This word means that there may exist valid reasons in particular circumstances to ignore a particular item, but the full implications must be understood and carefully weighed before choosing a different course.

NOT RECOMMENDED This phrase means that there may exist valid reasons in particular circumstances when the particular behaviour is acceptable or even useful, but the full implications should be understood and the case carefully weighed before implementing any behaviour described with this label.

DISCUSS This label means that the proposal is worth discussion within the community and requires further evaluation.

0.2 MUST Scope Limitations

These policy recommendations and discussions are limited in their scope to democratic countries with a stable rule of law and strong fundamental rights protections.

0.3 MUST Online Platforms

By online platform, we indicate a service that provides an intermediary function in the access to information, goods or services that reside on systems or networks at the direction of users.

0.9 MUST API Accessibility

By API accessible, we understand a computer information system that gives access to content via a unique identifier. This requires that data be downloadable in bulk, by day, week and year, and per country. New data shall be accessible via the system within a day of being published. APIs should be designed to sustain independent research and long-term studies.
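
To make the definition concrete, here is a minimal Python sketch of the API surface it implies. The base URL, endpoint paths and parameter names are invented for illustration and are not part of the proposal.

```python
# Hypothetical client for an "API accessible" platform as defined above.
# All endpoint names and parameters are illustrative assumptions.
import requests

BASE = "https://platform.example/api/v1"  # hypothetical base URL

def get_content(content_id: str) -> dict:
    """Fetch a single piece of content via its unique identifier."""
    response = requests.get(f"{BASE}/content/{content_id}")
    response.raise_for_status()
    return response.json()

def bulk_export(period: str, start: str, country: str) -> bytes:
    """Download data in bulk for a period ('day', 'week' or 'year')
    and a country; new data must appear within a day of publication."""
    response = requests.get(
        f"{BASE}/export",
        params={"period": period, "start": start, "country": country},
    )
    response.raise_for_status()
    return response.content
```

Stable identifiers and bulk exports of this kind are what allow independent researchers to run long-term studies against consistent data.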

0.10 MUST Content Provider

By content provider, we understand the person or entity that published or created the content in question.

0.11 MUST Political Accounts

By political accounts, we indicate those accounts run by, or acting on behalf of, political parties, associations affiliated with political parties, or politically exposed persons as defined by Article 2 of EU Directive 2006/70/EC.

1 Content Regulation

1.1 MUST Procedural Safeguards for Content Notifications

A central pitfall of the current notification-and-action regime is the lack of procedural safeguards in the notification procedure. Every online platform needs to present easily accessible, user-friendly and contextual notification options to the user. These options should be available without an obligation to sign in or sign up with the service itself if the content in question is publicly available.

Notifications should offer categories of different types of violations, ranging from various classes of illegal content to legal content that might be in breach of the Terms of Service or other rules of the platform. Different notification categories should trigger different procedures, which take into account the fundamental rights of all parties in question, meaning that procedures with stricter safeguards cannot be substituted by procedures with less strict ones. For example, a notification of illegality with the possibility of legal redress cannot be circumvented by deletion of the content in question under the Terms of Service of the platform.

A valid notification should be sufficiently precise and adequately substantiated. This should include 1) the location of the content (URL); 2) the reason for the complaint (potentially including the legal basis under which the content has to be assessed); 3) evidence of the claim and, potentially, of legal standing; 4) a declaration of good faith that the information provided is accurate; and 5) considerations of limitations, exceptions, and defences available to the content provider. Only for notifications of violations of personality rights or intellectual property rights is identification information of the notifier mandatory. In all other cases, identification and contact information of the notifier are optional.
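
The five required elements can be read as a simple record type. The following Python sketch is one possible encoding; all field names are assumptions for illustration, not part of the proposal.

```python
# Illustrative record for a valid notification as listed above.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Notification:
    url: str                            # 1) location of the content
    reason: str                         # 2) reason for the complaint
    legal_basis: Optional[str]          # 2) legal basis, where applicable
    evidence: str                       # 3) evidence of the claim
    legal_standing: Optional[str]       # 3) legal standing, where relevant
    good_faith_declaration: bool        # 4) accuracy declared in good faith
    defences_considered: Optional[str]  # 5) limitations, exceptions, defences
    # Mandatory only for personality-rights and IP notifications,
    # optional in all other cases:
    notifier_identity: Optional[str] = None
    notifier_contact: Optional[str] = None
```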

For purposes of procedural fairness and to increase the quality of content moderation, the content provider should be informed about a notification concerning his or her content, the reason for the notification, the subsequent process, and possible ways to appeal or file a counter-notification. The content provider should be informed immediately once the platform has received the notification, not just after a decision has been taken. Exceptions from this obligation to notify the content provider may apply only if sending the notification would hamper ongoing law-enforcement investigations.

The content provider should be offered the possibility of a counter-notification, to respond to the claim of the original notifier with evidence and arguments to the contrary. This counter-notification should be an option even before the platform takes a decision. Both the original notification and the counter-notification should apply the same standards in terms of declarations of good faith. A counter-notification can also be filed after the content has already been removed and can also challenge the categorisation of the content in question.

Online platforms have to inform the parties involved in a notification about the outcome of the decision the platform has taken in their case. This communication is always sent to content providers, and to notifiers if they have provided contact details in their notification. It needs to include 1) the platform's reasoning for its decision; 2) the circumstances under which the decision was made, and whether it was made by a human or an automated decision agent; and 3) information about the possibility for either party to appeal the decision with the platform, courts or other entities. This communication should also be sent for counter-notifications.

Online platforms need to publish information about their procedures and time frames for intervention by interested parties. This information should include 1) the time before a notification is sent to the content provider; 2) the time for the content provider to respond with a counter-notification; 3) the average and maximum time for a decision by the platform, per category of case; 4) the time at which the platform will inform both parties about the result of the procedure; and 5) the time limits for different forms of appeal against the decision.
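
As an illustration, such published information could look like the following Python sketch; all keys and values are invented examples, not prescribed by this proposal.

```python
# Hypothetical published time frames for the notification procedure.
PROCEDURE_TIMEFRAMES = {
    "notify_content_provider_within": "24h",          # 1)
    "counter_notification_window": "7d",              # 2)
    "decision_time_per_category": {                   # 3)
        "imminent_danger": {"average": "2h", "maximum": "24h"},
        "other_notifications": {"average": "3d", "maximum": "14d"},
    },
    "inform_parties_of_result_within": "24h",         # 4)
    "appeal_windows": {"platform": "14d", "court": "per national law"},  # 5)
}
```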

1.2 RECOMMENDED Notification and Fair Balance for Illegal Content

The current notification-and-takedown system should be replaced by a notification-and-fair-balance procedure, which mandates different types of action depending on the content that is notified and the fundamental rights affected. To allow for transparency and sufficient oversight within the rule of law, the procedure required after a notification of illegal content (notification of illegality) that raises suspicion of a criminal offence cannot be preempted by deleting the content based on the platform's Terms of Service or Community Guidelines. Otherwise, the economic incentives of the dominant platform would encourage overblocking of potentially legal content in order to avoid a more burdensome procedure.

Efficient notification and takedown for dangerous threats and calls for violence: A comprehensive ruleset at the national level shall define a clear set of cases of dangerous threats and calls for violence against individuals or protected groups (hate speech). There is no general monitoring obligation for the platform, but once a platform receives a notification of illegality it has to assess it in a timely manner. If the content poses an imminent danger to life or existence, the time frame for concluding the content moderation is shorter. Once knowledge of content in conflict with these rules is established, the dominant platform needs to take immediate action to temporarily block it and report the case to the judicial authorities. The platform must inform the complainant and the person affected by such a deletion of the outcome of, and justification for, the case. Both parties have the possibility to appeal the decision, or the notification process, with a judicial authority. This authority can overturn the decision of the platform and/or permanently delete blocked content. If the judicial authority decides that the platform's decision regarding the notified content was not justified, or finds that a user's report was not exhaustively pursued, a proportionate fine must be imposed on the online platform.

State examination of contentious or non-severe content: For content that does not fall under the aforementioned offences, there must be a government body that can swiftly issue injunctions to clarify whether the platform operator is required to delete it. Platforms need to notify the state authorities and must be given a set deadline to handle such cases.

1.3 RECOMMENDED Enforcement via European Platform Regulator

A competent regulatory authority in the form of a European agency is tasked with the enforcement and supervision of the obligations of online platforms. The regulatory authority is tasked with ensuring compliance with the rules on notification and action procedures, reporting and information requirements, advertisement transparency, service interoperability, and the platform's legally mandated cooperation with other competent authorities. Furthermore, the regulator has to approve ex ante the Terms of Service of any dominant social media platform (including other documents relevant to content moderation and account suspension, such as Community Guidelines and Codes of Conduct). It is explicitly outside the scope of the regulator's supervision and enforcement duties to make any decision about the legality or permissibility of individual content or classes of content. The regulator can impose penalties of up to 2% of annual global revenue. The organisation shall follow the Common Approach of the European Union and has to publish bi-annual reports on all of its activities to fulfill its mandate.

1.4 RECOMMENDED Social Media Oversight Council

To curtail the risk of Terms of Service becoming a vehicle for human rights infringements, their application should be governed by a co-regulatory approach. Given the strong influence of dominant platforms on the exercise of fundamental rights, we argue that freedom of expression in particular cannot be enjoyed if it is not upheld by the large companies whose infrastructure is essential to dominant public debates. A considerable degree of content regulation on the internet takes place outside the scope of court decisions about specific pieces of content. This has created a grey area of content regulation in which political pressure and private power reign supreme. The current model, in which platforms regulate in favour of whoever shouts loudest, is not a sustainable form of governance, nor does it promote freedom of expression. At the same time, any government regulation risks even more government interference in takedowns of legal content. To limit such attempts, we propose a governance model based on press councils, with strong incentives for platforms to participate in such a governance model. The EU shall lay down rules to create independent social media councils with strong conflict-of-interest policies.

The primary function of the council is to oversee the adherence of content moderation practices to the Terms of Service and other rules governing the content moderation of relevant and dominant online platforms. Its mandate is focused on the moderation of legal content and excludes the handling of illegal content.

The primary aim of the council is to highlight systemic and relevant cases of content moderation decisions that are discriminatory, harmful or in conflict with the rules of the platform. The council can receive complaints from the general public, but is independent in its selection of the cases and topics it investigates.

Once the council has decided to start an investigation into the handling of certain types of content, it can request assistance from the platform regulator to obtain relevant information and data. The regulator is obliged to give assistance if the request is within the mandate of the social media council. The regulator can obtain this data from the platform within its supervision mandate, but cannot base any other proceedings on knowledge obtained thereby.

Relevant and dominant platforms are obliged to join the social media council.

The council must comprise a diverse group of experts, representing a range of different views and experiences. Members of the council should be selected in a transparent and independent manner. The council must have a clear and public code of ethics and must be fully independent and able to make genuine, impartial and recognized decisions. The oversight council shall conduct itself as transparently as possible and seek input from the general public via open consultations on all relevant aspects of its work.

To inform the public and enable informed user decisions, the council has to publish bi-annual reports about its work and findings.

The social media council and the platform regulator jointly develop minimum standards for transparency and accountability of content moderation practices. These minimum standards are based on international standards and human rights law, as well as existing content regulation best practices among online platforms. The platform regulator shall take the utmost account of these standards in its assessment of the content moderation frameworks of relevant and dominant platforms.

Currently, the entire public debate and policy implementation focuses on content and account deletion. Platforms should be encouraged to use such measures only as a last resort and instead explore other measures that are less invasive for freedom of expression but may have a similar effect. These may include, but are not limited to: content curation and community management, changing the operating style and design of forums away from maximising screen time, and actively setting explicit speech norms within online communities. All of these measures can reduce the need for deletion in the first place.

1.7 DISCUSS Trusted Flaggers

Dominant and relevant social media platforms may appoint trusted flaggers within a country. Notifications from trusted flaggers are dealt with more expeditiously than others, but they are subject to the same safeguards as regular notifications. A list of all current and previous trusted flaggers has to be published by the platform. The application and revocation process, as well as the criteria for an organisation or an individual to be awarded trusted flagger status, must be made public. Governmental institutions should never be able to become trusted flaggers.

1.8 MUST Establish Registered Offices to Interact with Authorities

Dominant platforms have to establish registered offices in the EU countries where they conduct their business. These offices allow local law enforcement and courts to reach the platform under their jurisdiction.

1.9 MUST NOT Real Name Policy

For many marginalised groups, anonymity is a precondition for the exercise of the right to freedom of speech. Obliging all account holders on social media platforms to register with their real identity, in the name of fostering effective law enforcement, would therefore create a chilling effect.

2 Algorithmic Accountability and Disinformation

2.1 MUST Empower Users to Take Control Over Algorithmic Curation of Information

Users must have an easily accessible option to sort the content being displayed to them; dominant and relevant social media platforms have to offer this possibility. At a minimum, the setting should include a fully chronological timeline, but it would benefit from also including other factors that empower users to take control of their information diet (see the sketch below). Users can make these choices actively for the duration of individual sessions. The concrete options the platform must offer can be evaluated by the regulator, which can issue guidance on potential additions and on the design of the feature. This obligation does not exclude the potential insertion of sponsored content.

By allowing users to see how much content is otherwise hidden from them, this measure improves users' understanding of algorithms. It also enables them to understand the amount of content posted by the accounts they follow. Technically, this option should not create an undue burden for the platform provider.
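
A minimal sketch of the sort option follows, assuming a toy data model whose field names are invented for illustration.

```python
# Toy feed with two sort modes: the engagement-optimised default and
# the fully chronological timeline the user must be able to choose.
from datetime import datetime, timezone

posts = [
    {"id": 1, "published": datetime(2021, 5, 2, tzinfo=timezone.utc), "score": 0.9},
    {"id": 2, "published": datetime(2021, 5, 3, tzinfo=timezone.utc), "score": 0.4},
]

def render_feed(posts: list, mode: str = "chronological") -> list:
    if mode == "chronological":
        # Newest first, no ranking model involved.
        return sorted(posts, key=lambda p: p["published"], reverse=True)
    # Default: ranked by an engagement score the user can opt out of.
    return sorted(posts, key=lambda p: p["score"], reverse=True)
```

The point of the obligation is precisely that the second branch is no longer the only option.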

Once a negatively discriminatory effect of the algorithmic recommendations of a dominant platform has been proven, and once it is established that the platform knowingly sustained that same negative effect for a prolonged period, the platform becomes liable for the damages caused to the affected group. Associations of marginalised groups can bring class-action cases to court to establish the facts of the case and subsequently claim damages. The discriminatory effect has to be established by a court, with the possibility for both parties to appeal. (see description for detailed explanation)

Discrimination in recommendation algorithms is incidental, not intentional: recommendation engines are generally trained to optimise the commercial success of a platform (e.g. by maximising the total time a user spends on a site). The mathematical models employed by recommendation engines do not, in general, have representations of specific types of content that could be manipulated to produce results desired by a specific group; any change to a model to incorporate such features would require a categorisation of the training data that is unlikely to be available, and there is no guarantee that changing the model in favour of one group would not, as an unintended consequence, hurt a different group. As such, there should not be any liability that goes beyond current anti-discrimination legislation, which protects against intentional discrimination.

Establishment of an EU committee which receives and decides on research applications from independent academic institutions that offer a benefit to society. Approval depends on the ethical, data protection and scientific standards of the research proposal. Once a proposal is approved, the dominant social media platform has to grant access to the defined data sets. An oversight board will enforce the compliance of researchers and the platform with the agreed data protection and research standards. Data provided by the dominant platform needs to be consistent and in a standardized machine-readable format.

Social Science One is a positive attempt along these lines that got stuck because there were no sanctions when dominant platforms refused to cooperate.

Dominant platforms should provide access to their data to the researchers selected by the committee via a differential privacy interface. To protect private user data, the differential privacy mechanism introduces statistical noise into the output of every query.
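
As a sketch of what such an interface does, the standard Laplace mechanism adds noise calibrated to the sensitivity of a query: a counting query has sensitivity 1, so noise with scale 1/ε suffices. The choice of ε and the data model below are illustrative assumptions.

```python
# Differentially private count via the Laplace mechanism.
import random

def dp_count(records, predicate, epsilon: float = 0.5) -> float:
    """Return the true count plus Laplace(1/epsilon) noise.

    The difference of two Exponential(epsilon) draws is
    Laplace-distributed with scale 1/epsilon."""
    true_count = sum(1 for r in records if predicate(r))
    noise = random.expovariate(epsilon) - random.expovariate(epsilon)
    return true_count + noise
```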

Some systems of user-data randomization could be bypassed by requesting multiple sets of data and reverse-engineering the randomization process. The risk of large amounts of personal data being exposed would then outweigh the benefits of publicly accessible research data.

A possible option would be to provide quarterly sets of randomized data for public research, randomized only once and then published as such.

2.5 MUST Transparency Reports

Proportionate transparency obligations have to empower users to adequately assess the trustworthiness of platforms. Reporting obligations have to be fulfilled with proportionate regularity and in an openly licensed, easily understandable and machine-readable format; a sketch of one machine-readable entry follows after the list of report topics below. Platforms are required to publish such reports proportionate to their size, market share, and the potential risk for users. Transparency reports need to be published on the following topics:

Report on law-enforcement information requests for user data, containing at least the total number of requests for user data, the total number of accounts affected, the sensitivity of the data requested, the total number of requests fully complied with and, listed separately, the total number of requests with which the platform complied only partially or not at all. This data must be provided per country, per legal basis for the request and, if different security authorities are involved, also per authority.

Report on legal requests for content and account blocking, containing at least the total global number of notifications of illegal content, the total number of accounts affected by such requests, the total number of requests complied with and, listed separately, the total number of requests complied with only partially or not at all. The total number of requests for blocking accounts, the total number of accounts affected thereby, and the types of legal demands requiring content to be blocked or deleted should also be included. This data must be provided per country, per legal basis and, if different security authorities are involved, also per authority.

Report on the enforcement of the Terms of Service, containing at least the total number of account blockings or suspensions and of content deletions per category of violation. The reporting needs to include the average time elapsed between publication of the content, notification, potential counter-notification, and action. This reporting must be categorized by the different sections of the Terms of Service, the actions that were taken, and whether the decisions were partially or fully automated. Dominant platforms need to lay out how the enforcement of the Terms of Service is implemented and overseen. The report should also highlight all cases where the outcome of a content moderation decision based on the Terms of Service contradicted the outcome of a notification of illegality.

Public authorities should make comprehensive information available publicly and regularly on the number, nature and legal basis of content restriction requests sent to intermediaries and on the actions taken as a result of those requests. This information should also cover content restrictions based on international mutual legal assistance (MLA) procedures. States should publish detailed transparency reports on all content-related requests issued to intermediaries.
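
As an illustration of the machine-readable format mentioned above, one entry of the law-enforcement report could look like the following Python sketch; the schema and all values are invented for illustration.

```python
# One hypothetical entry of a machine-readable transparency report.
import json

report_entry = {
    "report": "law_enforcement_information_requests",
    "country": "AT",
    "legal_basis": "national criminal procedure",
    "authority": "federal police",
    "requests_total": 120,
    "requests_fully_complied": 80,
    "requests_partially_or_not_complied": 40,
    "accounts_affected": 150,
    "data_sensitivity": "subscriber data",
}
print(json.dumps(report_entry, indent=2))
```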

2.6 MUST Rectification of Behavioural Profiles

Users must be enabled to rectify and edit their personal advertisement profile. The user can have information changed that algorithms have derived from incorrect data, as well as information that an algorithm has incorrectly composed from correct information, without any need to prove the truthfulness of the request. The platform's user interface needs to display the option for rectification close to every targeted advertisement that is based on profiling. The user interface also has to display the criteria via which the user was targeted with this particular advertisement.
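
A rectification request under this rule could be as simple as the following sketch; all field names are illustrative assumptions, not part of the proposal.

```python
# Hypothetical payload sent when a user corrects an inferred profile entry.
rectification_request = {
    "profile_field": "inferred_interests",
    "current_value": ["gambling"],     # inferred by an algorithm, possibly wrong
    "corrected_value": [],             # user removes it; no proof of truth required
    "shown_next_to_ad": "a1b2c3",      # the ad beside which the option appeared
    "targeting_criteria_displayed": ["age 25-34", "interest: gambling"],
}
```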

2.7 MUST Advertisement Archive

Dominant platforms must make an archive of sponsored content available if the content was either displayed within the European Union or paid for by an account registered within the European Union. This archive must contain all sponsored content displayed within the last several years, with full functionality, as it was displayed to the user. The additional information stored in this archive must also be provided in a machine-readable format and be accessible via an API. Additional information that needs to be supplied within the archive includes: whether the sponsored content is currently active or inactive; the start date for active content and the timespan in which the sponsored content was active for inactive content; the name and contact details of the advertiser; the total number of impressions; the exact description of the target group; the exact amount of money paid and, while active, the estimated amount. For sponsored content that needs to be depublished due to Terms of Service violations or legal proceedings, the additional information needs to stay in the ad archive, and further information about the type of rule violation or pending lawsuits needs to be provided. Each piece of sponsored content must contain an attached info button that directly links to the content within the Advertisement Archive.
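
The required fields can be pictured as one archive record per piece of sponsored content. The following sketch uses invented names and values for illustration.

```python
# Hypothetical Advertisement Archive record, machine-readable via API.
ad_record = {
    "ad_id": "a1b2c3",
    "status": "inactive",                            # active or inactive
    "active_span": ["2020-03-01", "2020-03-14"],     # start date / timespan
    "advertiser": {"name": "Example Corp", "contact": "ads@example.com"},
    "impressions_total": 41230,
    "target_group": {"country": "DE", "age": "25-44", "interests": ["sports"]},
    "amount_paid_eur": 2500.00,                      # estimate while still active
    "depublished": None,                             # else: rule violation / lawsuit info
    "archive_url": "https://archive.example/ads/a1b2c3",  # linked from the info button
}
```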

This provides more transparency about commercial advertisement in general and, by building awareness, may also reduce public manipulation. In addition to algorithmic transparency, the possibility to understand why an advertisement is shown to you may also be an important step in understanding why people see what they see online (algorithmic content composition). For more information on the current ad libraries, see the description under “Political Advertisement Archive” and Facebook's “About Facebook Ads” page.

2.8 MUST Political Advertisement Archive

All political sponsored content needs to be centrally visible in a public advertisement archive. This archive must store all political sponsored content for several legislative terms, with its full functionality as displayed to the user. The additional information stored in this archive must also be provided in a machine-readable format and be accessible via an API. Additional information that needs to be supplied within the archive includes: whether the sponsored content is currently active or inactive; the start date for active content and the timespan in which the sponsored content was active for inactive content; the name and contact details of the advertiser; the total number of impressions; the exact description of the target group; the exact amount of money paid and, while active, the estimated amount. Following a follow-the-money approach, intermediaries have to list the ultimate client or beneficiary of the sponsored content. (Political sponsored content must be distinguishable from common sponsored content. To differentiate political sponsored content from common content, political accounts need to register with the platform and subsequently be distinguishable from common accounts.) To increase the accountability of political actors, politically sponsored content needs to contain a link referencing this content in the political advertisement archive. For sponsored content that needs to be depublished due to Terms of Service violations or legal proceedings, the additional information needs to stay in the ad archive, and further information about the type of rule violation or pending lawsuits needs to be provided. Each piece of sponsored content must contain an attached info button that directly links to the content within the Advertisement Archive.

A rectification of, or apology for, content on a dominant social media platform that a court has ruled to be election interference or defamation needs to be published by the platform on channels with an audience equivalent to that of the original content. Once a court has ordered the content provider to issue a rectification or apology statement according to national media or civil law, the content provider's obligation shall extend to the dominant social media platform, which must publish this statement at the same level, and with the same parameters, via which the original content was displayed to users. The purpose of this obligation is to reach the same or an equivalent audience. To implement this obligation, the platform is not obliged to track user behaviour or retain additional information about user interactions.

See NETPEACE: “Right to a digital counter-statement: notification of rectification in cases of false reports identified by court decision. In the field of false reports and defamation offences, the right to a digital counter-statement should be established or expanded, according to which all users who were notified of a judicially established hoax or defamation must also be notified of the counter-statement. Any notification of rectification must be sent out via all channels in which the hoax was displayed (i.e. also in the profiles of those users who shared the hoax) and to all users who interacted with the originating message (likes, comments, etc.). The obligation to correct is to be designed in such a way that there is no obligation for additional tracking.”

3 Interoperability and Competition

3.1 MUST Interoperability Obligation

The platform regulator has within its mandate the power to order, on a case-by-case basis, data transfer and service interoperability measures. Such measures can only be ordered from dominant platforms.

Criteria for the evaluation are technical feasibility, the increase in consumer choice, and competition and innovation to the benefit of smaller market participants. Orders of data transfer and interoperability measures shall not lead to increased risk for user privacy and security.

The regulator should follow a co-regulatory approach to the detailed standardisation of APIs and the semantic markup of the data in question. Regular consultations should be held with other EU institutions like ENISA and the EDPS, with industry, and with interested parties to identify potential needs for regulatory intervention.

Data transfers must entail a semantic markup of the data to be transferred; a sketch follows below. API accessibility shall, where technically feasible, be built upon decentralised technologies (e.g. OAuth) instead of intermediary data-portability platforms.
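
A hedged sketch of what semantically marked-up data could look like, here borrowing the W3C ActivityStreams vocabulary as one example; the proposal itself does not prescribe a specific vocabulary, and all identifiers below are invented.

```python
# One exported post with JSON-LD-style semantic markup.
export_item = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Note",
    "id": "https://platform.example/posts/123",
    "published": "2020-05-04T12:00:00Z",
    "content": "Hello world",
}
# Access would be authorised per user via OAuth rather than routed
# through a central data-portability intermediary.
```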

3.2 RECOMMENDED Stricter Merger Control

Mergers that create a monopoly, and dominant platforms acquiring their rivals or nascent competitors, should be prohibited. As in the telecommunications market, mergers between, and acquisitions of, dominant platforms are subject to approval by competition authorities. The public authority shall take into account the effect of the merger on consumer choice, the potential concentration of market power, the risk of gatekeeping roles in other markets, and the concentration of user data in one centralized entity. Specific models should therefore be created that take the peculiarities of data-based markets into account. Additionally, the “potential competition” test should be applied more consistently, to prevent bigger firms from absorbing small companies, such as start-ups, that could become competitors in the future. Authorities can prohibit or allow a merger and can also attach conditions to it.

3.3 RECOMMENDED Reparation of the Harm Caused to Consumers and Competition

An effective collective enforcement mechanism empowers consumers to obtain timely redress in cases of competition law infringements. While consumers are the ones ultimately affected by abusive conduct, they currently have few or no remedies at their disposal. Therefore, the scope of the Directive on Representative Actions for the protection of the collective interests of consumers should also include infringements of competition law. Further, part of the fines imposed on companies for breaches of competition law should fund projects and initiatives aimed at creating a culture of compliance and helping consumers reap the benefits of competitive markets.

3.4 DISCUSS Effective Assessment of Market Power in Digital Markets

The criteria upon which market power is assessed should include proxies such as the control of data necessary for the creation and provision of services. Abuses of market power often also entail other breaches, such as of consumer law or privacy protections. Close cooperation between the competent authorities is a key requirement for effective enforcement.