It’s simply not true to say that Google or Facebook are selling off your data. Google and Facebook do know a lot about individuals, but advertisers don’t know anything — that’s why Google and Facebook can charge a premium! [They] are highly motivated to protect user data — their competitive advantage in advertising is that they have data on customers that no one else has.

The “you are the product” thing is pure sloganeering. It sounds convincing on first principles but doesn’t hold up to analysis. It’s essentially saying all two-sided platforms are immoral, which is daft.

I’ve always maintained that this is about a value exchange — you can use my data, as long as I get control and transparency over who sees it, and a useful service in return. But beyond that, another problem with making premium services where you pay for privacy is that you make a two-tier system. Cennydd again:

The supposition that only a consumer-funded model is ethically sound is itself political and exclusionary (of the poor, children, etc).

Two-tier social media: the rich pay to opt out of Facebook ads, the poor get targeted endlessly. Privacy becomes a luxury good.

Aside: Of course this suits Apple: if wealthier customers can afford to opt out of advertising, then advertising itself becomes less valuable — as do, in turn, Google and Facebook.

The fact that people are willing to enter into a data exchange which benefits them when they get good services in return highlights the second problem with Tim Cook’s attack: Apple are currently failing to provide good services. As Thomas Ricker says in his snappily-titled Tim Cook brings a knife to a cloud fight:

Fact is, Apple is behind on web services. Arguably, Google Maps is better than Apple Maps, Gmail is better than Apple Mail, Google Drive is better than iCloud, Google Docs is better than iWork, and Google Photos can “surprise and delight” better than Apple Photos.

Apple needs to provide best-of-breed services and privacy, not second-best-but-more-private services. Many people will and do choose convenience and reliability over privacy. Apple’s superior position on privacy needs to be the icing on the cake, not their primary selling point.

As this piece by Jay Yarow for Business Insider points out, in the age of machine learning, more data makes for better services. Facebook and Google are ahead in services because they make products that understand their users better than Apple do.

In our research, we showed how a simple, small robot could pressure people to continue a highly tedious task — even after the people expressed repeated desire to quit — simply with verbal prodding.

The tendency toward anthropomorphism, assigning a personality to a non-human object, is well known, but it’s still amusing to think of people cursing their robot co-worker:

Most surprising was not that people obeyed the robot, but the strategies they employed to try to resist the pressure. People tried arguing with and rationalizing with the robot, or appealing to an authority who wasn’t present (a researcher), but either continued their work or only gave up when the robot gave permission.

I once read something (can’t find it now) about our natural deference to authority leading us to presume infallibility in computers, even if that means the satnav leads us into the sea. I can see this happening:

One could imagine a robot giving seemingly innocuous direction such as to make a bolt tighter, change a tool setting or pressure level, or even to change which electronic parts are used. However, what if the robot is wrong (for example, due to a sensor error) and yet keeps insisting? Will people doubt themselves given robots’ advanced knowledge and sensor capability?

The very notion of a sarcastic robot with a shit-eating grin made me laugh too much:

Research has shown people feel less comfortable around robots who break social norms, such as by having shifty eyes or mismatched facial expressions. A robot’s personality, voice pitch or even the use of whispering can affect feelings of trust and comfort.

Working with a robot that always grins while criticizing you, stares at your feet while giving recommendations, stares off into space randomly or sounds sarcastic while providing positive feedback would be awkward and uncomfortable and make it hard to develop one’s trust in the machine.

I began reading this as a cute, slightly funny piece about the future, then realised that this is happening right now and it stopped being quite so funny. I, for one, welcome our new robot co-workers.

Dutch journalism experiment Blendle reflects on what they’ve learned from their first year of operation. It’s a pretty interesting idea: you buy a subscription and read the stories, but if there’s something you don’t like, you can request a refund. What they’ve found is that people don’t want to pay for what they can get for free:

We don’t sell a lot of news in Blendle. People do spend money on background pieces. Great analysis. Opinion pieces. Long interviews. Stuff like that. In other words: people don’t want to spend money on the ‘what’, they want to spend money on the ‘why’.

And they don’t want to pay for what they perceive as lacking value:

Gossip magazines, for example, get much higher refund percentages than average (some up to 50% of purchases), as some of them are basically clickbait in print. People will only pay for content they find worth their money. So in Blendle, only quality journalism starts trending.

They have 250,000 users, and some amazing analytics on how they can grow that number.

My words might not be as important as the great works of print that have survived thus far, but because they are digital, and because they are online, they can and should be preserved… along with all the millions of other words by millions of other historical nobodies like me out there on the web.

In that piece he references the marginalia of medieval scribes, but this reminded me more of Pompeii. One of the highlights of my visit to the excavated town was seeing the perfectly-preserved graffiti on the walls of streets and public buildings. There are tales of rivalry:

“Successus, a weaver, loves the innkeeper’s slave girl named Iris. She, however, does not love him. Still, he begs her to have pity on him. His rival wrote this. Goodbye.”

It’s not artistic or even particularly literate, but it provides a much more vivid impression of the people who lived there than any contemporary account ever could. And that’s why I agree with Jeremy on the value of preserving the “unimportant”.