Remember that Hal Varian recently argued that data is non-rival and only partially excludable. This implies that one way to protect user data is for a silo to collect and hold it, providing authenticated access to it, which in turn implies that Google and the like are somewhat a ‘natural’ consequence.

Thanks to lock-ins and network effects, this leads to monopolistic/monopsonistic effects.

Monopolistic power in the intermediation may artificially alter the equilibrium in the value chain, making markets fail.

The vast possibility of targeting a single specific user with the “best” personalized content/offer, thanks to detailed knowledge of each specific person, has deep impacts on the economy and the socio-political sphere (think of filter bubbles and the related fake news phenomenon).

But cryptography can make data rival and excludable.

Rivalry and excludability are the basis of private property, and, as I explained in my last two books, making my data mine implies bringing private property to data. And cryptography can make private data stay private (it can be very friendly, privacy-wise).

Breaking up a company through antitrust is a last resort, and it will not prevent monopolies from quickly forming again (and there may be many other negative externalities, about which I hope to write soon).

The concept of making data private, with the underlying implicit idea of still being able to use online services, implies the need for interoperability, interconnection and portability of data.

We already have data portability provisions in the European data privacy regulation (GDPR).

We need to give regulators the power to impose interoperability and interconnection obligations on companies with Significant Market Power (along with the power to inspect the interfaces and, in case of misbehavior, sanction them with the strongest possible means), and let the market generate alternatives and innovate.

HTML5 added a “feature” to the web called hyperlink auditing. You can read the specification from the Web Hypertext Application Technology Working Group (WHATWG). Hyperlink auditing is added to a web page via the ping attribute on an HTML anchor element (<a>), i.e., a link.
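
For illustration, here is a minimal sketch of such a link, using the URLs discussed below (the markup of the original example may have differed):

    <!-- clicking navigates to the href URL and also sends a POST to the ping URL -->
    <a href="http://lapcatsoftware.com/" ping="http://underpassapp.com/">Ping Me</a>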

Notice that when you hover over the “Ping Me” link, you only see the href URL; you don’t see the ping URL, so you don’t even know the attribute exists unless you look at the HTML page source. When you click the link, it loads the page http://lapcatsoftware.com/ as expected, but it also sends an HTTP POST request to http://underpassapp.com/ without any visible indication to the user. You can only see it if you do a packet trace. It should come as no surprise that the primary use of hyperlink auditing is tracking link clicks.
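
According to the WHATWG specification, the ping request is a POST with a text/ping body, so a packet trace would show something roughly like this (headers abridged; example.com stands in for the hypothetical page hosting the link):

    POST / HTTP/1.1
    Host: underpassapp.com
    Content-Type: text/ping
    Ping-From: http://example.com/
    Ping-To: http://lapcatsoftware.com/

    PING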

Firefox disables hyperlink auditing by default, as explained in a knowledge base article. You can see this if you open about:config and look at browser.send_pings.

Prior to Safari 12.1, you could disable hyperlink auditing with a hidden preference:
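
If memory serves, it was a defaults command along these lines (the exact preference key here is an assumption and may vary by Safari version):

    # key name assumed, not verified
    defaults write com.apple.Safari com.apple.Safari.ContentPageGroupIdentifier.WebKit2HyperlinkAuditingEnabled -bool false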

This is a point I raised today at the HLEG on AI of the EC (the High-Level Expert Group on AI of the European Commission).

Just as we have the principle of “privacy by design” for systems managing personal data, we should have a principle of “redress by design” for AI-based systems that take decisions that can affect people’s lives.

The basic consideration is that even a perfectly functioning AI system will make wrong decisions. For example, a person could be denied a service because of a decision by an AI system.

Such a system is not deterministic in the way that, for example, a speeding camera is: if you exceed the speed limit with your car, the speeding camera detects it and you get a ticket. You can appeal the ticket, but you’re guilty until proven innocent, because a perfectly working (properly configured, certified and audited) deterministic system has “decided” that you are guilty.

But a perfectly working AI system is a statistical engine that necessarily produces probabilistic results: its decisions might be right 98% of the time and wrong 2% of the time (it would be inappropriate to classify these wrong decisions as mistakes). In that 2% of cases, a person is determined to be guilty even when she is not (or cannot obtain a service even though she has every right to obtain it). At scale, say one million decisions, that 2% amounts to 20,000 wrong decisions.

For the person, the wrong decision can generate spillovers that exceed the scope of the decision itself, for example by generating social reproach, negative feedback online and other consequences that may spread online and become impossible to remove.

In these wrong cases (they can be false positives or false negatives), an appeal procedure may not exist or, if it exists, it may be ineffective, its cost may be excessive, it may not be accessible to all, it may require an excessive amount of time, or it may not rectify the above-mentioned spillovers.

Redress by design is the idea of establishing, from the design phase, mechanisms to ensure redundancy, alternative systems, alternative procedures, etc., in order to be able to effectively detect, audit and rectify the wrong decisions taken by a perfectly functioning system and, if possible, improve the system.

As an example of where a redress-by-design implementation is needed, consider the recent EU Copyright Directive: an AI system will decide whether a piece of content is legitimate or is in violation of someone’s copyright.