PING - informal chairs’ summary – 26 February 2015
Many thanks to Joe Hall for acting as scribe.
Regrets from Frank Dawson and Jose M. del Alamo.
Welcome to new PING members!
There will be an informal face-to-face meeting at IETF 92 on 26 March 2015 (11:30 – 13:00) in Vista. Next call details to be advised.
Please note that feedback on privacy and security considerations for Manifest for Web Application is due 5 March 2015. Please volunteer on the wiki:
https://www.w3.org/wiki/Privacy/Privacy_Reviews
* Personas (Charles Nevile and David Singer)
David provided an overview of the “persona” idea introduced at the 2014 W3C Workshop on Privacy and User-centric Controls [1]. By way of introduction, David mentioned that, according to a paper he read recently, 25% of Internet users think “private browsing” stops servers from remembering their browsing session; this is not the case. While engaging private browsing mode initiates a separate session in the user agent and discards the history on the user’s device when the session is terminated, that data is not discarded by the web servers, search engine servers, or other servers that were part of the session. Currently, a server is not aware when a user is in private browsing mode. This means that even if a user uses private browsing mode, for example, to buy a surprise gift or research a medical condition, another user of the same device or account may subsequently see advertisements based on the first user’s search history.
David made the point that privacy is not always about secrecy: it is about context. He gave the example of a customer being willing to have a discussion about a bank overdraft with a bank employee at the bank, but not with the same employee at a party.
He proposed an idea for using an HTTP header field to allow a user to communicate to a server “at the moment, I'm using a particular persona”. In other words, a mechanism to allow the user to ask servers to keep the data associated with different personas separated (i.e. to respect context). A second component would be some sort of signal from the server acknowledging the segregation. This would improve trust, and a lie could be addressed by regulators. He and Charles observed that, without a persona signalling mechanism, servers may not know that a user wishes to keep sessions (and histories) segregated and may aggregate separate sessions through user agent identifiers such as IP address, fingerprints, etc.
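No such header exists today; purely to illustrate the shape David's idea might take (the header names and values below are invented), a request/response exchange could look something like:

```http
GET /inbox HTTP/1.1
Host: mail.example.com
Persona: gift-shopping

HTTP/1.1 200 OK
Persona-Ack: gift-shopping
```

The acknowledgment header is the second component mentioned above: the server's claim that it is segregating data for this persona, which regulators could hold it to.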
Nick Doty noted that TAG conversations regarding standardising private browsing mode included discussion of the difference between client-side clearing and server-side clearing.
Charles expressed the view that users are more concerned about how tracking data is used rather than the tracking itself. On the question of anonymity, Charles also observed that it is relatively easy for servers to de-segregate even “anonymous” users.
There was some discussion of existing profiles/persona features in browsers such as Firefox, to what extent they might address identified use cases, and what level of granularity might be possible (e.g. from a technical and usability perspective) for such a persona idea (e.g. discover what backend servers know, ask them to segregate and forget).
Action items =>
David will post a summary of his idea on the public-privacy mailing list. Please discuss the idea and consider what PING and/or other W3C working groups could and/or should do.
* Web RTC local IP address disclosure (Wendy Seltzer)
The WebRTC WG has asked for privacy and security considerations around the disclosure of a user's local IP address in WebRTC: Real-Time Communication in Browsers [2]. Wendy created a space on the wiki [3] to collect some of the considerations. This is in part a response to news items reporting that WebRTC exposes local IP addresses, not just public IP addresses. (Note: the local IP address may differ from the public IP address if, for example, the user is behind a NAT or is using a VPN or Tor.) WebRTC uses peer-to-peer technology, so local IP addresses are needed for communication. She suggested that PING could help by answering these questions:
- what risks are associated with the exposure of local IP addresses?
- in what circumstances should WebRTC have access to local IP addresses?
- what user controls should there be?
Mike O’Neill observed that an attacker can easily obtain the local IP address by running a little JavaScript and that it is a simple way to do fingerprinting. He volunteered to consider the privacy considerations of WebRTC local IP address exposure. Wendy noted that the local IP address is even available when the user is not engaged in WebRTC communications.
Nick asked whether local IP address exposure is an issue for any other APIs. He noted that the issue was discussed in the development of Network Service Discovery, but that he is not aware whether it was implemented.
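For background on what is exposed: ICE candidates carried in WebRTC signalling include a connection-address field, which for "host" candidates is the device's local IP address. A small illustrative sketch in Python (the candidate string below is invented; real ones come from RTCPeerConnection events in the browser) shows where the address sits in a candidate line:

```python
# Extract the connection-address field from an ICE candidate attribute
# (field order per the ICE candidate grammar in RFC 5245, section 15.1).
def candidate_address(candidate: str) -> str:
    """Return the connection-address field of an a=candidate line."""
    fields = candidate.split()
    # fields: foundation, component-id, transport, priority,
    #         connection-address, port, "typ", candidate-type, ...
    return fields[4]

# Invented example of a "host" candidate exposing a local (RFC 1918) address.
example = "candidate:842163049 1 udp 1677729535 192.168.1.100 53705 typ host"
print(candidate_address(example))  # → 192.168.1.100
```

A script on any page can gather such candidates without user interaction, which is why the address works as a fingerprinting signal even outside an actual call.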
Action items =>
Please review the WebRTC local IP address disclosure issue and fill in the wiki [3].
Timing – please complete before 26 March 2015
* Header enrichment (Nick Doty and Wendy Seltzer)
(postponed due to a full agenda)
* Fingerprinting guidance (Nick Doty)
Action items =>
Please review Nick’s updates to the draft Fingerprinting Guidance for Web Specification Authors [4] outlined in this email [5] and provide comments before 26 March 2015.
* Privacy reviews and guidance
Thanks to Nick and the Internationalization Working Group, we now have a wiki to track the progress of our privacy reviews [6]. Each review should have a shepherd (who is responsible for compiling the input and ensuring the final review text is submitted to the relevant working group) and one or more volunteers for each review.
* Manifest for Web Application
The Web Applications Working Group has requested feedback on privacy and security considerations of Manifest for Web Application [7] by 5 March 2015.
* W3C TAG Finding – Securing the Web
The TAG published a finding “Securing the Web” on 22 January 2015 [8]. It contains a “to-do” list [9]. It also identifies some concerns with HTTPS.
Nick advised that there has also been some discussion in the TAG about certificates and HTTPS, including HTTPS as a three-party protocol.
The Web Application Security Working Group has a draft coming out today – Upgrade Insecure Requests [10] (which defines a mechanism that allows authors to instruct a user agent to upgrade a priori insecure resource requests to secure transport before fetching them). They also have a draft – Requirements for Powerful Features [11] – for features that require privileged content. The TAG will be helping the WebAppSec WG identify features that should be allowed to access this type of content.
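For the curious: in the draft's current form the upgrade mechanism is expressed as a Content-Security-Policy directive, so a page opting in would send a response header along these lines:

```http
Content-Security-Policy: upgrade-insecure-requests
```

The user agent then rewrites http:// subresource requests on that page to https:// before fetching.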
Joe, who is “show-runner” for the confidentiality piece of the IAB Privacy and Security Program, referred to the IAB Statement on Confidentiality [12]. He emphasized that HTTPS is not only about confidentiality, but also integrity. He is interested in any differences between the IAB and the TAG documents. Mark Nottingham has been invited to the PING face-to-face meeting in March, so this may be a good opportunity to discuss these aspects.
Minutes are available here: http://www.w3.org/2015/02/26-privacy-minutes.html
Christine and Tara
[1] http://www.w3.org/2014/privacyws/
[2] http://www.w3.org/TR/webrtc/
[3] https://www.w3.org/wiki/Privacy/IPAddresses
[4] http://w3c.github.io/fingerprinting-guidance/
[5] https://lists.w3.org/Archives/Public/public-privacy/2015JanMar/0095.html
[6] https://www.w3.org/wiki/Privacy/Privacy_Reviews
[7] http://www.w3.org/TR/2015/WD-appmanifest-20150212/
[8] http://www.w3.org/2001/tag/doc/web-https
[9] http://www.w3.org/2001/tag/doc/web-https#building-a-secure-web-with-w3c-standards
[10] http://www.w3.org/TR/2015/WD-upgrade-insecure-requests-20150226/
[11] http://www.w3.org/TR/powerful-features/
[12] http://www.iab.org/2014/11/14/iab-statement-on-internet-confidentiality/

We were asked to review the app manifest document as part of its wide review. Comments are requested by this week.
I chose this opportunity to try out Mike West's questionnaire as a basis for organizing a read-through and review. I believe the idea of that questionnaire was for self-review, but I think it can still be helpful for external review, and it's a way to give it a little exercise anyway. I've also tried to apply the fingerprinting guidance best practices list (though I've not created separate sections for those). [Comments in square brackets are more meta comments for use of this questionnaire, the rest is actually about the draft under review.]
Comments on my answers to the questions below, suggestions for recommendations or comments on how this list of questions works are all welcome. I think 3.3 (persistent local state) and 3.11 (affecting UA UI) are the most important.
—Nick
So, here's the "Manifest for web application" draft:
http://www.w3.org/TR/2015/WD-appmanifest-20150212/
The super-short summary: a manifest is a little JSON file that provides some metadata about an "installable" web page, so that the name/icon etc. can be nicely bookmarked on a device and so that a standalone web app can indicate some display mode features so that it acts more like a native app.
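For readers who haven't opened the draft, a minimal manifest looks roughly like this (member names are from the draft; the values are invented):

```json
{
  "name": "Example Weather",
  "short_name": "Weather",
  "start_url": "/app/",
  "display": "standalone",
  "icons": [{ "src": "icon-192.png", "sizes": "192x192", "type": "image/png" }]
}
```

The start_url and display members are the ones that drive most of the concerns in the review below.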
Questions to Consider
3.1 Does this specification deal with personally-identifiable information?
[Not among my favorite questions. *skipping*]
3.2 Does this specification deal with high-value data?
[Also not among my favorite questions.] The spec specifically addresses this in the media type registration, albeit a little ambiguously:
> This specification does not directly deal with high-value data. However, installed web applications and their data could be seen as "high value" (particularly from a privacy perspective).
3.3 Does this specification introduce new state for an origin that persists across browsing sessions?
Likely. What effect does "installing" an app via the app manifest have on persistent state? Does the start_url parameter introduce an effective permanent cookie upon "installing" the app? How should UAs handle clearing of cookies or other local state when it comes to installed apps? Should we indicate to sites that they shouldn't customize manifests to users for this reason? (If so, could browsers effectively detect such violations and prevent it?)
I believe this should be marked as contributing to fingerprinting and creating a new cookie-like local state mechanism.
Is it clear that pages receive the information that the user has "installed" the page? (This should at least be noted as a consideration.)
While it is noted that user agents may choose not to use the start_url hint for an installed web application, we should say more about why not. For example, when I bookmark a URL in my desktop browser and come back to it, I may look at the URL and get an impression of what it contains; but if the start_url is not typically shown to the user at bookmarking time yet is used for future navigations, it could be persisting login state in an unexpected way. User agents serving users who are particularly conscious about fingerprinting are likely to deliberately not implement this feature. Are there use cases where user-specific start_url values are advantageous? Are sites sufficiently forewarned of the reasons why start_url shouldn't be relied upon?
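To make the concern concrete: nothing in the draft stops a site from serving each visitor a per-user manifest like the following (the identifier is invented), turning start_url into a persistent, cookie-like identifier that survives cookie clearing:

```json
{
  "name": "Example Store",
  "start_url": "/app/?uid=a81x94fz"
}
```

Every launch of the installed app would then re-identify the user to the site, with no obvious way for the user to see or clear it.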
3.4 Does this specification expose cross-origin persistent state to the web?
Though not required by interop requirements in this spec, installation is likely to include caching (for example, of icons) which may be observable by the server on subsequent uses of the app. Via timing attacks, is it possible for other pages or apps to detect whether an app is installed? What protections are necessary to prevent one page/app from enumerating the other installed web apps?
3.5 Does this specification expose any other data to an origin that it doesn’t currently have access to?
I don't think so.
3.6 Does this specification enable new script execution mechanisms?
I don't think so.
3.7 Does this specification allow an origin access to a user’s location?
3.8 Does this specification allow an origin access to sensors on a user’s device?
3.9 Does this specification allow an origin access to aspects of a user’s local computing environment?
No particular sensor or device access that I'm aware of.
3.10 Does this specification allow an origin access to other devices?
I don't think so.
3.11 Does this specification allow an origin some measure of control over a user agent’s native UI?
Yes! The majority of my security/privacy concerns lie in this area, and the existing security considerations text is predominantly about this as well. It seems worth emphasizing the potential challenges of implementing this installation, deep-linking and browser display mode control in a way not vulnerable to phishing and other attacks. Some concerns:
What are the use cases for unbounded scope? What are the implications for my manifest "applying" to navigations to other origins entirely? Beyond the potential security dangers (phishing, in particular) cases of fullscreen and navigating to different origins, what are the advantages? "for legacy reasons, this is the default behavior" is a parenthetical in a section describing this potential security risk. What are the legacy reasons? Are legacy reasons a sufficient reason for the specified default behavior to incur security risks for the user?
Are the security considerations of scope and display mode "fullscreen" limited to unbounded cases? I believe not. For example, if a user installs evil-weather.com to their phone and then clicks on a Google search result for evil-weather.com/app/blah and that sends the user directly to a fullscreen mode of evil-weather.com, they could show an apparent Google login screen with no indication of the URL. ("Oh, I guess I have to login to my Google account to get to this search result.")
Why is the display mode "fullscreen" orthogonal to the Fullscreen API? Do the security considerations of the Fullscreen API all apply here as well?
This is noted in the section on updates, but I think it's important beyond that:
> To avoid one application masquerading as another, it is RECOMMENDED that users be made aware of any such updates using implementation or platform specific conventions.
What are the privacy and security considerations of using site-provided names and icons when installing an app? Say I go to evil-weather.com, click install/bookmark (so that I can easily get to my weather tomorrow), and evil-weather.com provides "Bank of America" and a BofA icon in its manifest, display mode of minimal or full screen and a start_url to go to a special sub-page that mimics my bank's website. When I look at my list of icons tomorrow morning, how will I know that "Bank of America" is hosted by evil-weather.com?
3.12 Does this specification expose origin-controlled data to an origin?
It seems like caching of icons and the suggested start_url parameter do indicate to the origin that it's installed, but otherwise I don't think this adds more such methods. Customizing the manifest with an identifier could provide another cookie-like thing -- see above.
Yay for explicitly noting that developer errors may be reported or may be ignored.
3.13 Does this specification expose temporary identifiers to the web?
[We should elaborate on this question a little more. Are we talking about identifiers that a passive network attacker could see? Or for the page itself?]
3.14 Does this specification distinguish between behavior in first-party and third-party contexts?
> "This specification distinguishes between behavior in first-party and third-party contexts."
But "first-party" and "third-party" aren't defined anywhere. This paragraph could be better explained.
3.15 Does this specification have a "Security Considerations" and "Privacy Considerations" section?
Yes. In particular, security considerations are noted for display mode and navigations. The media type registration notes a long list of related privacy and security considerations. That may be helpful, but it would also be useful to elaborate on which parts are specific to implementation of this feature.

Hi all.
The call for comments has been extended to 1 April 2015.
Begin forwarded message:
Resent-From: <public-geolocation@w3.org>
From: "Mandyam, Giridhar" <mandyam@quicinc.com>
Subject: Requiring Authenticated Origins for Geolocation API's: Status
Date: 26 February 2015 12:51:53 am GMT+1
To: public-geolocation <public-geolocation@w3.org>
Cc: <public-webappsec@w3.org>, <public-web-mobile@w3.org>, <www-tag@w3.org>
As you may recall if you have been reading this list, there was an open call for comments on requiring authenticated origins for the Geoloc API. There was one detailed response to this CFC from Martin Thomson of Mozilla (see http://lists.w3.org/Archives/Public/public-geolocation/2014Nov/0008.html), and some discussion after that.
Since that time, there has been related work coming out of WebAppSec that affects this area:
a) The Mixed Content document (http://w3c.github.io/webappsec/specs/mixedcontent/) has continued to evolve.
b) The Privileged Contexts (“Powerful Features”) document (http://w3c.github.io/webappsec/specs/powerfulfeatures/) has taken shape as well, with a section on legacy features using Geoloc. as an example: see http://w3c.github.io/webappsec/specs/powerfulfeatures/#legacy. Note that there are specific guidelines for sunsetting support for insecure origins in this section.
While useful, it is hard to determine whether these documents (particularly handling of Legacy Features as described in the Privileged Contexts doc) represent strategies that user agent vendors are willing to implement specifically for Geolocation. It is also unclear whether developers who are using the Geolocation API will be able to adapt to sunsetting of support for insecure origins. The feedback received so far on the CFC has not represented enough of the affected parties. Based on this, I would like to continue the call for comments on this list until April 1.
I have CC’ed the WebAppSec group and WebMob group, as there has been similar discussion in both groups. I’ve also CC’ed the TAG.
-Giri Mandyam, W3C Geolocation Working Group Chair
From: Mandyam, Giridhar [mailto:mandyam@quicinc.com]
Sent: Wednesday, November 05, 2014 7:24 AM
To: public-geolocation
Subject: Requiring Authenticated Origins for Geolocation API's: Open Call for Comments (deadline - February 1, 2015)
As was discussed at TPAC 2014, the topic of requiring authenticated origins for geolocation is now being taken up in the form of an open call for comments on the public-geo mailing list. An overview of the issue was presented at last week’s face-to-face meeting: https://www.w3.org/2008/geolocation/wiki/images/1/12/Geolocation_-_Trusted_Origin.pdf. The definition of “authenticated origin” may be found at http://w3c.github.io/webappsec/specs/mixedcontent/. This requirement would apply to all specifications developed by the Geolocation Working Group.
As decided at that meeting, before acting upon this issue it is important to gather feedback from affected parties. This includes web service providers, developers, and browser (web runtime engine) vendors.
The following is requested from respondents:
a) If you are against requiring authenticated origins for geolocation API’s, please state so and state your reasons for objection.
b) If you are in favor of requiring authenticated origins for geolocation API’s, please state so and your reasons for support. In addition, please provide a proposal for how support for unauthenticated origins could be phased out (e.g. a schedule for developer evangelization, warning dialog boxes in browsers, hard cutoff for ending support in browsers).
After responses are received, I will do my best to compile results and provide a representative synopsis of the feedback. I hope this call for comments is clear as written above, but if not please let me know.
-Giri Mandyam, Geolocation Working Group Chair

Based on some of our discussions at TPAC, during recent calls and in using the fingerprinting doc as a guide for a review for HTML/a11y, I've made a series of updates to the Fingerprinting Guidance doc:
* switch to best practices rather than should/must requirements
* update references to highlight browser pages, especially Chromium document, and testing sites
* anticipate behavior when functionality is disabled
* describe cross-origin property of fingerprinting
* add more TODOs
As noted, there are still things to be written and revised, but I hope we're coming to the point where this can be practical advice for spec authors. Your feedback would be welcome, and illustrative examples would be particularly useful. Also, we now have a short list of those practices. Do the following sound about right to you all?
• Avoid any increase to the surface for passive fingerprinting.
• Prefer functionally-comparable designs that don’t increase the surface for active fingerprinting.
• Mark features that contribute to fingerprintability.
• Specify orderings and non-functional differences.
• Design APIs to access only the entropy necessary.
• Anticipate disabled functionality for the fingerprinting-conscious.
• Avoid new cookie-like local state mechanisms.
• Highlight any local state mechanisms to enable simultaneous clearing.
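On "access only the entropy necessary": the identifying power of independent attributes adds up in bits, which is why even low-entropy features matter in combination. A rough illustrative calculation (attribute names and population fractions are invented for illustration):

```python
import math

def surprisal_bits(fraction: float) -> float:
    """Bits revealed by an attribute value shared by `fraction` of users."""
    return -math.log2(fraction)

# Hypothetical fingerprinting surface: fraction of users sharing each value.
attributes = {
    "user-agent string": 1 / 500,
    "screen resolution": 1 / 20,
    "installed fonts": 1 / 1000,
}

# Assuming independence, bits of identifying information simply sum.
total = sum(surprisal_bits(f) for f in attributes.values())
print(round(total, 1))  # total bits of identifying information
```

Roughly 33 bits suffice to single out one person among the world's population, so each additional attribute a spec exposes eats into a small budget.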
Full document available online here:
http://w3c.github.io/fingerprinting-guidance/
As discussed on the teleconference last month, there could be some things here that could be usefully merged with the privacy considerations document or with the checklist of security/privacy questions from Mike West. I should emphasize that I'm not wedded to any particular content or format.
Cheers,
Nick

PING - informal chairs’ summary – 15 January 2015
(Apologies for lateness of summary; PING business and my vacation overlapped... - Tara)
Our next call will be on 26 February 2015 at the usual time.
Thanks very much to chaals (Charles McCathie Nevile) for acting as scribe.
Regrets from Mark Nottingham and Joe Hall.
A warm welcome to our new PING members!
Our 15 January call was originally expected to run without a formal agenda, but to instead focus on current privacy issues of general concern to PING members. Such a broad discussion did take place during the call, but there were also sufficient additional items that an agenda was drafted.
We were fortunate to have Simon Rice, Group Manager (Technology) at the UK Information Commissioner's Office, to present and discuss the Article 29 WP Opinion regarding device fingerprinting.[1] Simon explained that the Opinion was written to clarify that device fingerprinting requires consent, just as cookies do, and that the practice can be even more intrusive than cookie-based tracking given that there is little way to detect that fingerprinting is occurring or to change a device's fingerprint. The Opinion identifies some narrowly-scoped exceptions to consent, such as MAC addresses of network controllers being necessary for communications and thus exempt from consent requirements. There were a number of questions about this Opinion from PING members, such as:
Q: for embedded third parties, does the website have responsibility for getting consent for fingerprinting done by the first party? A: As with cookies, it is the party who is processing the data that has the legal requirement to get consent (generally the third party).
Q: how would the Working Party characterize Google Analytics, which uses a first party cookie that they transmit to another server. Does that require consent? A: Analytics we would view as being done by the website operator. Google Analytics may perform analysis of a single site, but if it is shared across sites we would treat it as a third party.
Q: As for the second exception - use for licensing or security purposes -- is there required disclosure to the user that the fingerprint will be used? A: It is clear in the legislation that the exemption is from the requirement for consent, but the user must still be informed that this is taking place.
Q: If fingerprinting gets through the 5.3 rule, how is consent handled? Is it sufficient to have it in general usage rules? And how will this change in the new regulation? A: In terms of practically getting consent, we don't want a banner to accept cookies, then another for a fingerprint, and another for some more fingerprinting... But there is no reason that a website cannot include device fingerprinting in the same consent step as cookies, e.g. by increasing the amount of information and scope of the existing request.
The next agenda item was discussion of Mike West's (Google) draft privacy and security questionnaire [2], which was introduced during our previous call on 4 December. Mike joined the call to present his document. This is a "strawman questionnaire" that specification authors should read in order to understand some possible privacy and security issues that their specification might run into. The idea is to try to get authors to consider these issues early in the process, so it acts as a sort of an "early review" and may pinpoint concerns well before the implementation stage. The goal is not to block features, but to help spec authors who are not privacy experts to think about issues. As an added benefit, this often obviates the need for a review, because the developers figure out the issue before we get there, and ask us the right questions in advance. Mike is hoping to collaborate on this questionnaire, perhaps coordinating it with existing PING documents. We expect to look further at this on our next call, when we turn our attentions back to the Privacy Considerations and related documents.
Next there was some discussion of the TAG draft finding "Transitioning the Web to HTTPS" [3] -- primarily to alert PING members that this document was to be concluded shortly (before the next PING call) and thus members were urged to share any comments promptly and directly with Mark Nottingham and the TAG.
The final item was the open discussion and information sharing on recent developments in privacy. Mainly this centered on news items around possible outlawing of certain types of encryption in the UK; these issues have been the subject of some heated debate, with concerns not only for confidentiality but also for integrity of communications.
[1] http://ec.europa.eu/justice/data-protection/article-29/documentation/opinion-recommendation/files/2014/wp224_en.pdf
[2] https://mikewest.github.io/spec-questionnaire/security-privacy/
[3] https://w3ctag.github.io/web-https/
Minutes are available here: http://www.w3.org/2015/01/15-privacy-minutes.html
Christine and Tara

Eric J. Bowman wrote:
>
> >
> > I encourage you to read more about cryptography and cryptographic
> > network protocols, and to try your hand at subverting HTTP and HTTPS
> > traffic (on your own systems and networks only, of course). I think
> > you'll find that the available security guarantees and
> > non-guarantees of HTTP and of HTTPS are very different from what
> > you have expressed in this thread.
> >
>
> Thanks, but I don't think you've understood what it is I'm trying to
> express.
>
Particularly, Superfish illustrates that the guarantees and non-
guarantees of HTTP and HTTPS are *exactly* what I tried to express in
this thread.
Yes, I know. You're above this list now, or at least until March 30,
while you write a book on Web security. Let's just say I'm not pre-
ordering.
-Eric

http://zitseng.com/archives/7489
*Government-Linked Certificate Authorities in OS X (zitseng.com
<http://zitseng.com>)*
From the comments on Hacker News:
"No, if they want to hack your SSL comms, they aren't going to do it by
using a MITM attack backed by a government-issued root CA, they are going
to do it by gaining access to a "neutral" CA (such as Verisign), and
obtaining the root certificate's private key. Now you would have a much
harder time of figuring out that something has gone wrong, but then, if
you're paranoid of the government spying on you, and you are using a CA
other than one you own yourself, you've already lost the battle."
I agree, no protocol or method can stop a nation state because things
ultimately come down to physical security.
But it is more reason to put the brakes on the idea that moving the whole web to https is going to make a real difference. I don't think it will.
Once the users see https as a selective spying mechanism (open for govs,
closed for petty criminals) they really won't trust the web ever again,
unless you come up with a new protocol/story and keep evolving it in major
ways to stay ahead of the inevitable.
Copying the wisdom below (via another developer):
*On Derived Values*
This, milord, is my family's axe. We have owned it for almost nine hundred
years, see. Of course, sometimes it needed a new blade. And sometimes it
has required a new handle, new designs on the metalwork, a little
refreshing of the ornamentation . . . but is this not the nine
hundred-year-old axe of my family? And because it has changed gently over
time, it is still a pretty good axe, y'know. Pretty good.
-- Terry Pratchett, The Fifth Elephant

Hi all,
We will be again organising an informal PING and friends get-together alongside IETF.
Please join us on Thursday 26 March 2015 during the lunch break.
(Precise meeting time and location to be advised)
Christine and Tara

Dear all,
We have our monthly teleconference on Thursday 26 February 2015 at 9am PT / 12pm ET / 17:00 UTC / 6pm CET.
http://www.timeanddate.com/worldclock/fixedtime.html?iso=20150226T18&p1=87&ah=1
The draft agenda for the call will be circulated shortly.
In the meantime, please let us know if you would like to add anything to the agenda.
Call details:
Zakim Bridge +1.617.761.6200, conference 7464 ("PING")
SIP/VOIP details available here: http://www.w3.org/2006/tools/wiki/Zakim-SIP
Please also join us on IRC in the #privacy room.
Server: irc.w3.org
Username: <your name>
Port: 6665 N.B.: not the default IRC port!
Channel: #privacy
Christine and Tara

I agree that an identity verification protocol based on explicit consent should be a standard component of the web platform, but I think it should be designed so there would be no need for a fixed “real-world” identity.
The third-party entities could validate an arbitrary set of attributes, some of which may identify a legal person (e.g. via a passport or birth certificate), but others could be anonymous attributes such as membership of a club, a child’s age, or an anonymous audience category: any attribute that the parties need and agree to, without the necessity of informing any of the parties, including the validating parties, of other identifying attributes.
It follows from this that the identity reference should be short lived and not linkable beyond a particular transaction, i.e. a session state protocol would be part of it. A reference associating a legal identity should not be capable of being linked beyond a “session”, outside a secure context or with another origin. This means for example a reference in a cookie or other http header should have a short expiry time and be deleted when no longer required. The user would give the UA explicit consent for the creation of a new reference without further user interaction for a limited period, for example to authenticate a login.
This could eventually replace the arbitrary use of cookies, fingerprinting, cross-origin data leakage etc., which have led to the security and privacy problems plaguing the web .
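[Editor's illustration] A minimal sketch of the short-lived, origin-scoped reference described above (all names and the encoding here are illustrative assumptions, not part of any proposal): the reference is HMAC-bound to an origin and session and carries an expiry, so it cannot be validated at another origin or after it expires.

```python
import hashlib
import hmac
import os
import time

SECRET = os.urandom(32)  # per-deployment signing key; illustrative only


def mint_reference(origin: str, session_id: str, ttl: int = 300) -> str:
    """Mint an opaque identity reference scoped to one origin and one
    session, expiring after `ttl` seconds."""
    expiry = int(time.time()) + ttl
    payload = f"{origin}|{session_id}|{expiry}"
    tag = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}|{tag}"


def check_reference(ref: str, origin: str, session_id: str) -> bool:
    """Reject references that are tampered with, expired, or presented
    outside their original origin/session."""
    payload, _, tag = ref.rpartition("|")
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(tag, expected):
        return False
    ref_origin, ref_session, expiry = payload.split("|")
    return (ref_origin == origin and ref_session == session_id
            and int(expiry) > time.time())
```

Because the reference is opaque and expires quickly, two sites comparing notes cannot link references minted for different origins or sessions.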
From: Dave Raggett [mailto:dsr@w3.org]
Sent: 13 February 2015 18:21
To: public-web-security@w3.org
Subject: [WebCrypto.Next] Linking web identities with real-world identities
The payments world has use cases for secure access to bank accounts from your browser and for installing and activating payment instruments as part of your digital wallet. Both of these require some way to bind web identities to real-world identities. An argument for an intent based approach is given in the following blog post for the Web Payments IG, see:
http://www.w3.org/blog/wpig/2015/02/13/linking-web-identities-with-real-world-identities/
Please note that this is my personal viewpoint and should not be taken as that of the Payments IG, nor of W3C.
—
Dave Raggett <dsr@w3.org>

> On 13 Feb 2015, at 21:22, Mike O'Neill <michael.oneill@baycloud.com> wrote:
>
> I agree that an identity verification protocol based on explicit consent should be a standard component of the web platform, but I think it should be designed so there would no need for a fixed “real-world” identity.
>
> The third-party entities could validate an arbitrary set of attributes, some of which may identify a legal person i.e. passport or birth certificate, but others could be anonymous attributes such as membership of a club, a child’s age, an anonymous audience category, or any attribute that the parties need and agree to without the necessity to inform any of the parties, including the validating parties, of other identifying attributes.
These refer to additional use cases, e.g. proving that I am a child in order to access a safe site for children. I would encourage you to describe the use cases, since this is important for justifying work on a standard. There are no major technical barriers to pseudo-anonymous identity verification, so this is mostly about consensus building.
I built a demo for this kind of approach some years back around a use case where you need to prove you are a current student at a given university to gain access to a site run by students for students. The demo uses a Firefox extension for idemix. More details are given at:
http://people.w3.org/~dsr/blog/?p=95
It might be easier, however, to start with work on a standard for simple comparisons against attributes, where the website/app already knows your name and address etc., and wants to verify that the web identity you are logged in with corresponds to that real-world identity. This doesn’t involve a loss of privacy since the website and the identity agent being asked to perform the verification already know your real-world identity.
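[Editor's illustration] Dave's "simple comparisons against attributes" could look something like this toy sketch (not the demo's actual mechanism): the identity agent answers only match/no-match per attribute, so the site learns nothing beyond whether the values it already holds are correct.

```python
import hmac


def verify_attributes(claimed: dict, known: dict) -> dict:
    """Return a per-attribute yes/no verdict without disclosing the
    agent's stored values: each claimed attribute is compared against
    what the identity agent already knows, in constant time."""
    return {
        name: name in known
        and hmac.compare_digest(str(known[name]), str(value))
        for name, value in claimed.items()
    }
```

A site that already holds a name and postcode would submit them and receive only booleans back; unknown attributes simply fail to verify rather than leak anything.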
—
Dave Raggett <dsr@w3.org>

Hello Dave:
This sounds interesting to me. I work on an electronic voting system
and identity verification is, as you can imagine, a very important
issue. Some thoughts:
- This kind of thing might be useful for payments, but of course it can
be very handy in many other use cases.
- How does this relate to HOBA? [2] (HOBA provides auth credentials
and implements a verification procedure.)
- In e-voting, having a somewhat standardized yet powerful/flexible
procedure would be useful. Sometimes we need to verify age, other times
we have to verify postal codes, and I can only wonder what the next
thing they might need to verify will be.
- It is worth mentioning the idea of using coordinate cards (as some
banks use) as a challenge/verification procedure.
Regards,
--
[2] https://github.com/razevedo/hoba-authentication
--
Eduardo Robles Elvira @edulix skype: edulix2
http://agoravoting.org @agoravoting +34 634 571 634
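[Editor's illustration] The coordinate-card idea Eduardo mentions can be sketched in a few lines (a hypothetical toy, not any bank's actual scheme): the verifier shares a random grid out-of-band, then challenges the user for the value at a random cell.

```python
import secrets
import string


def make_card(rows: int = 5, cols: int = 5) -> list:
    """Issue a coordinate card: a grid of random digits shared with the
    user out-of-band (e.g. printed and posted by the bank)."""
    return [[secrets.choice(string.digits) for _ in range(cols)]
            for _ in range(rows)]


def challenge(card: list) -> tuple:
    """Pick a random cell; the verifier asks for the value at (row, col)."""
    r = secrets.randbelow(len(card))
    c = secrets.randbelow(len(card[0]))
    return (r, c)


def verify(card: list, cell: tuple, response: str) -> bool:
    """Check the user's response against the issued card."""
    r, c = cell
    return card[r][c] == response
```

Each challenge exposes only one cell, so an eavesdropper on a single exchange cannot answer future challenges.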
On Sat, Feb 14, 2015 at 11:31 AM, Dave Raggett <dsr@w3.org> wrote:
> [...]

[ Bcc: public-webappsec, www-style, public-privacy, public-sysapps,
public-digipub-ig, public-pfwg, public-web-mobile, www-international,
chairs^1, public-review-announce; Reply-to: public-webapps ]
This is a Request for Comments (RfC) for WebApp's "Manifest for web
application" specification:
<http://www.w3.org/TR/2015/WD-appmanifest-20150212/>
"This specification defines a JSON-based manifest that provides
developers with a centralized place to put metadata associated with a
web application."
This Working Draft is intended to meet the wide review requirements as
defined in the 2014 Process Document. The deadline for comments is 5
March 2015 and all comments should be sent to the public-webapps@w3.org
mailing list [Archive] with a Subject: prefix of "[manifest]". The
next anticipated publication of this specification is a Candidate
Recommendation. (See [CR-Plan] for the specification's Candidate
Recommendation status.)
WebApps welcomes review and comments from all individuals and/or groups
and we explicitly ask the following groups to review the document and to
submit comments: WebAppSec, CSS WG (in particular, the "display mode"
media feature), PING, SysApps, Digital Publishing IG, WAI (PF, User
Agent, Authoring Tools), and I18N WG.
In addition to substantive comments, and to help us get a sense of how much
review the document receives, we also welcome data about "silent
reviews", e.g. "I reviewed section N.N and have no comments".
-Thanks, AB
^1 RfC is the new LCWD Transition Announcement (TransAnn)
[CR-Plan] <https://github.com/w3c/manifest/issues/308>
[Archive] <https://lists.w3.org/Archives/Public/public-webapps/>

A great use of Tor is market price hollering for controlled products.
PGP: 9425a6af

Subject: DOJ asks Google to back off on https
From: Mike O'Neill <michael.oneill@baycloud.com>
Date: 2015-01-28T11:24:13-00:00
Message-ID: <3b0101d03aec$f395ad70$dac10850$@baycloud.com>

I haven't got a subscription but this appears relevant to our discussions. Has anyone got more information?
http://www.lexisnexis.com/legalnewsroom/technology/b/newsheadlines/archive/2015/01/27/doj-atty-joins-call-for-google-to-back-off-data-encryption.aspx
Mike
> -----Original Message-----
> From: Mike O'Neill [mailto:michael.oneill@baycloud.com]
> Sent: 28 January 2015 09:12
> To: 'David Singer'
> Cc: 'Danny Weitzner'; 'Rigo Wenning'; public-privacy@w3.org
> Subject: RE: On the european response to Snowden
>
>
> David, comments to your comments inline
>
> > -----Original Message-----
> > From: David Singer [mailto:singer@apple.com]
> > Sent: 27 January 2015 14:33
> > To: Mike O'Neill
> > Cc: Danny Weitzner; Rigo Wenning; public-privacy@w3.org
> > Subject: Re: On the european response to Snowden
> >
> > Thanks Mike, comments inline
> >
> > > 1) Signalling.
> > > We saw a bit of this in the DNT discussions. How to create a signal conveying a user's explicit agreement for something, or their preferences for something, to one or more entities that may exist across multiple origins, in a secure, tamper-proof way. This may eventually be superseded by:
> >
> > A challenging problem. These signals and preferences tend to be small, and
> > padding them and then signing them digitally would seem to be using a
> > sledgehammer to crack a walnut. But maybe the walnut is growing in
> > importance. Other ideas?
>
> I meant the more general problem of signalling between entities, i.e. between the UA acting for an individual and companies which control many domains/origins. Several use cases came up in DNT, and it requires authentication of identity, which is also why it will be subsumed into point 2.
>
> >
> > > 2) Anonymity.
> > > To ensure privacy we should be able to trawl the net anonymously, but with some identity available through defined transactional processes. For example we may allow a subset of our identity to be discovered by some parties we know about and have reached agreement with. This might just be a broad audience categorisation (male, geek, whatever) or it might be more specific (MEP, a particular child's parent, member of a club). Visible identity changes with circumstances, i.e. I could anonymously apply for a loan or agree to pay for a purchase, but I would need to be accountable. My legal identity would have to be discoverable in certain agreed circumstances. We may also agree, through membership of a "rule of law" jurisdiction, that our identity is discoverable by law enforcement under agreed (by society) circumstances.
> > >
> > > This may go beyond HTTP, e.g. IPv6 anonymous auto-configuration everywhere or a new internetworking layer, with a focus on stopping fingerprinting, and it is a big one. It will need heavy guns.
> >
> > Online anonymity — secrecy — is hard, as you know. Tor is hardly an easy or universal solution. I recently did the thought experiment “what if every router was a NAT box?” — this would mean that IP addresses would be useless as proxies for identity — and the answer is that anonymity would improve but many other things (e.g. phone calls) would suffer. Again, ideas for this would be good.
>
> I think there should be an out-of-band identity exchange, non-trackable (i.e. not using UUIDs) and established below the tunnel, maybe in the https handshake or in an internetwork layer.
> The identity exchange should be under the control of both parties, but also visible to third parties in defined circumstances, for instance when accountability or law enforcement is required.
>
> >
> > > 3) Encryption.
> > >
> > > There is talk of making end-to-end encryption illegal. While this may seem silly, and is probably a shot across the bows, https everywhere stirs the hornet's nest. I think an answer involves some process whereby https is made more secure (via certificate pinning etc.) and available to anyone, but law enforcement is given the means to determine identity through an internationally agreed process, i.e. along the lines of 2).
> > >
> > > I think any backdooring process will just end up helping the bad guys, so we should have full end-to-end encryption available; but if the net can properly ensure privacy and security, only a minority will need it.
> >
> > So you envisage encryption that is end-to-end and backdoor free, but nonetheless accessible to lawful intercept. Challenging in today’s environment, but maybe there is a solution.
>
> I was thinking more that the identity would be visible to lawful intercept, not necessarily the encrypted content. But if privacy and security were guaranteed without encryption, there would be less need for it. I forgot to mention integrity: there should be a way to ensure the integrity of the data (such as JavaScript) transmitted between mutually identified parties, without having to put everything through an encrypted tunnel.
>
> >
> > David Singer
> > Manager, Software Standards, Apple Inc.
> >
>
> Mike
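[Editor's illustration] On Mike's point about integrity without a full encrypted tunnel: the direction WebAppSec has been exploring with Subresource Integrity is to pin a cryptographic hash of the resource. A minimal sketch of that style of check (illustrative; not the SRI algorithm verbatim):

```python
import base64
import hashlib


def sri_digest(body: bytes) -> str:
    """Compute a Subresource-Integrity-style digest for a resource body
    (base64 of the SHA-256 hash, prefixed with the algorithm name)."""
    return "sha256-" + base64.b64encode(hashlib.sha256(body).digest()).decode()


def integrity_ok(body: bytes, expected: str) -> bool:
    """Accept a fetched body only if it matches the pinned digest,
    regardless of whether the transport was encrypted."""
    return sri_digest(body) == expected
```

The digest is published by the party that vouches for the script; any in-transit modification makes the check fail, giving integrity even over an unencrypted channel.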

Subject: Re: DOJ asks Google to back off on https
From: Joe Hall <joe@cdt.org>
Date: 2015-01-28T13:56:30-05:00
Message-ID: <CABtrr-WeVj6CZ7KntkuGp7aNAf3_zH+YiLGXH8MVODHbFn1nzw@mail.gmail.com>

I can't read it either but the focus of FBI/DOJ has been device
encryption... if it is transport encryption (https, dtls, etc.)
someone let me know!!! best, Joe
On Wed, Jan 28, 2015 at 6:24 AM, Mike O'Neill
<michael.oneill@baycloud.com> wrote:
> [...]
--
Joseph Lorenzo Hall
Chief Technologist
Center for Democracy & Technology
1634 I ST NW STE 1100
Washington DC 20006-4011
(p) 202-407-8825
(f) 202-637-0968
joe@cdt.org
PGP: https://josephhall.org/gpg-key
fingerprint: 3CA2 8D7B 9F6D DBD3 4B10 1607 5F86 6987 40A9 A871

Subject: Re: DOJ asks Google to back off on https
From: John Erickson <olyerickson@gmail.com>
Date: 2015-01-28T14:43:46-05:00
Message-ID: <CAC1Gg8QVXmcvm0EBAf-NaLg1=UibRWEs5GuzXSD85LwDuhbD7w@mail.gmail.com>

Talk about ironic: None of us can read the story of DOJ complaining
about user encryption, because of Law360's use of encryption...
V'q funer zl gubhtugf ba gung ohg gurl ner rapelcgrq! ;)
On Wed, Jan 28, 2015 at 1:56 PM, Joe Hall <joe@cdt.org> wrote:
> I can't read it either but the focus of FBI/DOJ has been device
> encryption... if it is transport encryption (https, dtls, etc.)
> someone let me know!!! best, Joe
>
> On Wed, Jan 28, 2015 at 6:24 AM, Mike O'Neill
> <michael.oneill@baycloud.com> wrote:
>> [...]
--
John S. Erickson, Ph.D.
Director of Operations, The Rensselaer IDEA
Deputy Director, Web Science Research Center (RPI)
<http://tw.rpi.edu> <olyerickson@gmail.com>
Twitter & Skype: olyerickson

Subject: Re: DOJ asks Google to back off on https
From: Nicholas Doty <npdoty@ischool.berkeley.edu>
Date: 2015-01-28T14:20:57-08:00
Message-ID: <DB39B3C6-B884-4FB1-9C14-5D2D094D551F@ischool.berkeley.edu>

The article provides an account of Assistant Attorney General Leslie Caldwell’s remarks at State of the Net, including a “zone of lawfulness” (I think that’s sic, intended to be “zone of lawlessness”). I’m not aware of any details of the proposal, and it doesn’t seem clear whether the intention is preventing encryption of storage or encryption in transit.
http://stateofthenet2015.sched.org/event/a3ad4721a9802b089955eecb258a8600
http://www.law360.com/articles/615091
> On Jan 28, 2015, at 10:56 AM, Joe Hall <joe@cdt.org> wrote:
>
> I can't read it either but the focus of FBI/DOJ has been device
> encryption... if it is transport encryption (https, dtls, etc.)
> someone let me know!!! best, Joe
>
> On Wed, Jan 28, 2015 at 6:24 AM, Mike O'Neill
> <michael.oneill@baycloud.com> wrote:
>> -----BEGIN PGP SIGNED MESSAGE-----
>> Hash: SHA1
>>
>> I haven't got a subscription but this appears relevant to our discussions. Has anyone got more information?
>>
>> http://www.lexisnexis.com/legalnewsroom/technology/b/newsheadlines/archive/2015/01/27/doj-atty-joins-call-for-google-to-back-off-data-encryption.aspx <http://www.lexisnexis.com/legalnewsroom/technology/b/newsheadlines/archive/2015/01/27/doj-atty-joins-call-for-google-to-back-off-data-encryption.aspx>
>>
>> Mike
>>
>>
>>
>>> -----Original Message-----
>>> From: Mike O'Neill [mailto:michael.oneill@baycloud.com <mailto:michael.oneill@baycloud.com>]
>>> Sent: 28 January 2015 09:12
>>> To: 'David Singer'
>>> Cc: 'Danny Weitzner'; 'Rigo Wenning'; public-privacy@w3.org <mailto:public-privacy@w3.org>
>>> Subject: RE: On the european response to Snowden
>>>
>>> *** gpg4o | Valid Signature from 7331532E2E5E6D89 Mike O'Neill
>>> <michael.oneill@btinternet.com <mailto:michael.oneill@btinternet.com>> ***
>>>
>>> David, comments to your comments inline
>>>
>>>> -----Original Message-----
>>>> From: David Singer [mailto:singer@apple.com]
>>>> Sent: 27 January 2015 14:33
>>>> To: Mike O'Neill
>>>> Cc: Danny Weitzner; Rigo Wenning; public-privacy@w3.org
>>>> Subject: Re: On the european response to Snowden
>>>>
>>>> Thanks Mike, comments inline
>>>>
>>>>> 1) Signalling.
>>>>> We saw a bit of this in the DNT discussions. How to create a signal
>>>> conveying a user's explicit agreement for something or their preferences for
>>>> something to one or more entities that may exist across multiple origins, in a
>>>> secure untamperable way. This may eventually be superseded by:
>>>>
>>>> A challenging problem. These signals and preferences tend to be small, and
>>>> padding them and then signing them digitally would seem to be using a
>>>> sledgehammer to crack a walnut. But maybe the walnut is growing in
>>>> importance. Other ideas?
>>>
>>> I was meaning more the general problem of signalling between entities, i.e.
>>> between the UA acting for an individual and companies which control many
>>> domains/origins. There are several use-cases that came up in DNT and it
>>> requires authentication of identity which was also why it will be subsumed into
>>> point 2.
>>>
>>>>
>>>>> 2) Anonymity.
>>>>> To ensure privacy we should be able to trawl the net anonymously, but
>>>> with some identity available through defined transactional processes. For
>>>> example we may allow a subset of our identity to be discovered by some
>>> parties
>>>> we know about and have reached agreement with. This might just be a broad
>>>> audience categorisation (male, geek, whatever) or it might be more specific
>>>> (MEP, a particular child's parent, member of a club). Visible identity changes
>>> with
>>>> circumstances i.e. I could anonymously apply for a loan or agree to pay for a
>>>> purchase but I would need to be accountable. My legal identity would have to
>>> be
>>>> discoverable in certain agreed circumstances. We may also agree, through
>>>> membership of a "rule of law" jurisdiction, that our identity is discoverable by
>>>> law enforcement under agreed (by society) circumstances.
>>>>>
>>>>> This may go beyond HTTP, i.e. IPv6 anon. auto configuration everywhere or
>>> a
>>>> new internetworking layer, focus on stopping fingerprinting, and it is a big one.
>>>> It will need heavy guns.
>>>>
>>>> Online anonymity — secrecy — is hard, as you know. Tor is hardly an easy or
>>>> universal solution. I recently did the thought experiment “what if every router
>>>> was a NAT box?” — this would mean that IP addresses would be useless as
>>>> proxies for identity — and the answer is that anonymity would improve but
>>>> many other things (e.g. phone calls) would suffer. Again, ideas for this would
>>> be
>>>> good.
>>>
>>> I think there should be an out-of-band identity exchange: non-trackable, i.e. one that
>>> does not use UUIDs but is established below the tunnel. Maybe in the https handshake
>>> or in an internetwork layer.
>>> The identity exchange should be under the control of both parties, but also
>>> visible to third-parties in defined circumstances for instance when accountability
>>> or law enforcement is required.
>>>
>>>>
>>>>> 3) Encryption.
>>>>>
>>>>> There is talk about making end-to-end encryption illegal. While this may
>>> seem
>>>> silly and is probably a shot across the bows, https everywhere stirs the hornet's
>>>> nest. I think an answer involves some process whereby https is made more
>>>> secure (via certificate pinning etc.), available to anyone but that law
>>>> enforcement is given the means to determine identity through an
>>> internationally
>>>> agreed process i.e. along the lines of 2).
>>>>>
>>>>> I think any backdooring process will just end up helping the bad guys, so we
>>>> have full end-to-end encryption available but if the net can properly ensure privacy
>>> and
>>>> security only a minority will need it.
>>>>
>>>> So you envisage encryption that is end-to-end and backdoor free, but
>>>> nonetheless accessible to lawful intercept. Challenging in today’s
>>> environment,
>>>> but maybe there is a solution.
>>>
>>> I was thinking more that the identity was visible to lawful intercept, not
>>> necessarily the encrypted content. But if privacy and security are guaranteed
>>> without encryption then there would be less need for it. I forgot to mention
>>> integrity: there should be a way to ensure the integrity of the data (such as
>>> javascript) transmitted between mutually identified parties, without having to
>>> put everything through an encrypted tunnel.
>>>
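Mike's wish for integrity without a full encrypted tunnel lines up with the Subresource Integrity work then in progress at W3C: a page publishes a digest of the expected resource, and the browser checks the bytes it actually received against it. A minimal sketch of that check, assuming SHA-384 as the digest algorithm; the function names are illustrative, not from any spec:

```python
import base64
import hashlib


def sri_digest(resource_bytes: bytes) -> str:
    """Compute an SRI-style integrity value ("sha384-" + base64 digest)
    over the exact bytes of a resource such as a javascript file."""
    digest = hashlib.sha384(resource_bytes).digest()
    return "sha384-" + base64.b64encode(digest).decode("ascii")


def verify_resource(resource_bytes: bytes, expected: str) -> bool:
    """Recompute the digest over the bytes actually received and
    compare it with the published value; reject on mismatch."""
    return sri_digest(resource_bytes) == expected
```

A server could publish the digest alongside the page (in SRI, via an integrity attribute); any in-transit modification of the script then fails the check even if the transfer itself was unencrypted.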
>>>>
>>>> David Singer
>>>> Manager, Software Standards, Apple Inc.
>>>>
>>>
>>> Mike
>>
>>
>>
>
>
>
> --
> Joseph Lorenzo Hall
> Chief Technologist
> Center for Democracy & Technology
> 1634 I ST NW STE 1100
> Washington DC 20006-4011
> (p) 202-407-8825
> (f) 202-637-0968
> joe@cdt.org
> PGP: https://josephhall.org/gpg-key
> fingerprint: 3CA2 8D7B 9F6D DBD3 4B10 1607 5F86 6987 40A9 A871

Re: DOJ asks Google to back off on https | Rigo Wenning <rigo@w3.org> | 2015-01-29T21:21:20+01:00 | mid:1479530.btGUSGN4hI@hegel

The ghost of "zones of lawlessness" is haunting again. It is a mantra of
politicians and law enforcement people meaning the Internet is bad and must be
under the control of government. Once someone says "lawless zone", the
phrases that follow have a better than 90% probability of being utter rubbish. This led
a friend of mine to suggest that those politicians and law enforcement people
should wear an electronic bracelet that gives them an electric shock every
time they say "lawless zone". That would tremendously increase the quality of
the discussion and would make those conferences fun for once.
--Rigo
On Wednesday 28 January 2015 14:20:57 Nicholas Doty wrote:
> The article provides an account of Assistant Attorney General Leslie
> Caldwell’s remarks at State of the Net, including a “zone of lawfulness” (I
> think that’s sic, intended to be “zone of lawlessness”). I’m not aware of
> any details of the proposal, and it doesn’t seem clear whether the
> intention is preventing encryption of storage or encryption in transit.
>
> http://stateofthenet2015.sched.org/event/a3ad4721a9802b089955eecb258a8600
> <http://stateofthenet2015.sched.org/event/a3ad4721a9802b089955eecb258a8600>
> http://www.law360.com/articles/615091
> <http://www.law360.com/articles/615091>

Re: DOJ asks Google to back off on https | Karl Dubost <karl@la-grange.net> | 2015-01-30T10:53:02+09:00 | mid:C6123990-527B-4753-830D-3C681F8FA2E5@la-grange.net

Also sprach Rigo:
Le 30 janv. 2015 à 05:21, Rigo Wenning <rigo@w3.org> a écrit :
> The ghost of "zones of lawlessness" is haunting again. It is a mantra of
> politicians and law enforcement people meaning the Internet is bad and must be
> under the control of government.
Much agreed. My brain is full of lawlessness, my private physical space is full of lawlessness, my public physical space is full of lawlessness. I'm pretty sure these politicians never ever commit any crimes, or let's say engage in misconduct such as crossing the street at the wrong time, etc. It's also a speech that has been around since the beginning of the public Internet. It brings back memories of politicians in France 20 years ago. Rigo will remember.
So yes, this is clearly strawman weaving to distract from the real issues.
--
Karl Dubost 🐄
http://www.la-grange.net/karl/

FTC Report on Internet of Things Urges Companies to Adopt Best Practices to Address Consumer Privacy and Security Risks | David Singer <singer@apple.com> | 2015-01-27T16:36:57+01:00 | mid:47E07434-1C6D-4D58-ABA8-3610DEB3A675@apple.com

FTC Report on Internet of Things Urges Companies to Adopt Best Practices to Address Consumer Privacy and Security Risks
Report Recognizes Rapid Growth of Connected Devices Offers Societal Benefits, But Also Risks That Could Undermine Consumer Confidence
For Release: January 27, 2015
In a detailed report on the Internet of Things, released today <http://www.ftc.gov/system/files/documents/reports/federal-trade-commission-staff-report-november-2013-workshop-entitled-internet-things-privacy/150127iotrpt.pdf>, the staff of the Federal Trade Commission recommend a series of concrete steps that businesses can take to enhance and protect consumers’ privacy and security, as Americans start to reap the benefits from a growing world of Internet-connected devices.
The Internet of Things is already impacting the daily lives of millions of Americans through the adoption of health and fitness monitors, home security devices, connected cars and household appliances, among other applications. Such devices offer the potential for improved health-monitoring, safer highways, and more efficient home energy use, among other potential benefits. However, the FTC report also notes that connected devices raise numerous privacy and security concerns that could undermine consumer confidence.
“The only way for the Internet of Things to reach its full potential for innovation is with the trust of American consumers,” said FTC Chairwoman Edith Ramirez. “We believe that by adopting the best practices we’ve laid out, businesses will be better able to provide consumers the protections they want and allow the benefits of the Internet of Things to be fully realized.”
The Internet of Things universe is expanding quickly, and there are now over 25 billion connected devices in use worldwide, with that number set to rise significantly as consumer goods companies, auto manufacturers, healthcare providers, and other businesses continue to invest in connected devices, according to data cited in the report.
The report is partly based on input from leading technologists and academics, industry representatives, consumer advocates and others who participated in the FTC’s Internet of Things workshop <http://www.ftc.gov/news-events/events-calendar/2013/11/internet-things-privacy-security-connected-world> held in Washington D.C. on Nov. 19, 2013, as well as those who submitted public comments to the Commission. Staff defined the Internet of Things as devices or sensors – other than computers, smartphones, or tablets – that connect, store or transmit information with or between each other via the Internet. The scope of the report is limited to IoT devices that are sold to or used by consumers.
Security was one of the main topics addressed at the workshop and in the comments, particularly due to the highly networked nature of the devices. The report includes the following recommendations for companies developing Internet of Things devices:
build security into devices at the outset, rather than as an afterthought in the design process;
train employees about the importance of security, and ensure that security is managed at an appropriate level in the organization;
ensure that, when outside service providers are hired, those providers are capable of maintaining reasonable security, and provide reasonable oversight of those providers;
when a security risk is identified, consider a “defense-in-depth” strategy whereby multiple layers of security may be used to defend against a particular risk;
consider measures to keep unauthorized users from accessing a consumer’s device, data, or personal information stored on the network;
monitor connected devices throughout their expected life cycle, and where feasible, provide security patches to cover known risks.
Commission staff also recommend that companies consider data minimization – that is, limiting the collection of consumer data, and retaining that information only for a set period of time, and not indefinitely. The report notes that data minimization addresses two key privacy risks: first, the risk that a company with a large store of consumer data will become a more enticing target for data thieves or hackers, and second, that consumer data will be used in ways contrary to consumers’ expectations.
The report takes a flexible approach to data minimization. Under the recommendations, companies can choose to collect no data; data limited to the categories required to provide the service offered by the device; less sensitive data; or to de-identify the data collected.
FTC staff also recommends that companies notify consumers and give them choices about how their information will be used, particularly when the data collection is beyond consumers’ reasonable expectations. It acknowledges that there is no one-size-fits-all approach to how that notice must be given to consumers, particularly since some Internet of Things devices may have no consumer interface. FTC staff identifies several innovative ways that companies could provide notice and choice to consumers.
Regarding legislation, staff concurs with many stakeholders that any Internet of Things-specific legislation would be premature at this point in time given the rapidly evolving nature of the technology. The report, however, reiterates the Commission’s repeated call for strong data security and breach notification legislation. Staff also reiterates the Commission’s call from its 2012 Privacy Report for broad-based privacy legislation that is both flexible and technology-neutral, though Commissioner Ohlhausen did not concur in this portion of the report.
The FTC has a range of tools currently available to protect American consumers’ privacy related to the Internet of Things, including enforcement actions under laws such as the FTC Act, the Fair Credit Reporting Act, the Children’s Online Privacy Protection Act; developing consumer education and business guidance; participation in multi-stakeholder efforts; and advocacy to other agencies at the federal, state and local level.
In addition to the report, the FTC also released a new publication for businesses containing advice about how to build security into products connected to the Internet of Things. “Careful Connections: Building Security in the Internet of Things” <http://www.ftc.gov/tips-advice/business-center/careful-connections-building-security-internet-things> encourages companies to implement a risk-based approach and take advantage of best practices developed by security experts, such as using strong encryption and proper authentication.
The Commission vote to issue the staff report was 4-1, with Commissioner Wright voting no. Commissioner Ohlhausen issued a concurring statement <http://www.ftc.gov/system/files/documents/public_statements/620691/150127iotmkostmt.pdf>, and Commissioner Wright issued a dissenting statement <http://www.ftc.gov/system/files/documents/public_statements/620701/150127iotjdwstmt.pdf>.
David Singer
Manager, Software Standards, Apple Inc.

On the european response to Snowden | David Singer <singer@apple.com> | 2015-01-26T09:52:35+01:00 | mid:85127D00-21EA-4F42-BE6B-BE129E8FA96C@apple.com

Re: On the european response to Snowden | Rigo Wenning <rigo@w3.org> | 2015-01-26T21:52:53+01:00 | mid:1603764.HuYCCyVQ6T@hegel

On Monday 26 January 2015 9:52:35 David Singer wrote:
> interesting article
>
> <http://www.ecfr.eu/publications/summary/mass_surveillance_privacy_and_security_europes_confused_response329>
Decoding:
The privacy regulation is under way and shall be voted in 2015. But it only
touches on data protection and communications of the private sector.
There is a parallel "Directive" on data protection in government. Directive
means that it isn't directly nationally applicable (the regulation above will
be). A Directive needs a national legislative act to be effective. This
Directive touches on the issue of Pervasive Monitoring. There is debate, but
the debate is somewhat drowned out by other issues, like Greece and the war in
Ukraine. Also, freedom of speech in the face of Islamist threats is at the
forefront, as you can imagine.
BTW, while the discussion is somewhat low in France, Italy and Greece, the
debate on Pervasive Monitoring is happening in the Netherlands, Germany and
Belgium. There is certainly a tension between what governments would like to
do and what the population is willing to let them do in terms of surveillance.
--Rigo

Re: On the european response to Snowden | Danny Weitzner <djweitzner@csail.mit.edu> | 2015-01-27T01:15:00+00:00 | mid:CAM5xY4ecxW19ZcHd+j_OVr2LVOd7ZmDuEouvsN9RXOaKen+txw@mail.gmail.com

further decoding: the EU has no authority over national security matters
(i.e. foreign intelligence gathering) in its member states. The Directive Rigo
mentions will apply to law enforcement -- a good start, but not sufficient.
On Mon Jan 26 2015 at 12:54:41 PM Rigo Wenning <rigo@w3.org> wrote:
> On Monday 26 January 2015 9:52:35 David Singer wrote:
> > interesting article
> >
> > <http://www.ecfr.eu/publications/summary/mass_surveillance_privacy_and_security_europes_confused_response329>
>
> Decoding:
>
> The privacy regulation is under way and shall be voted in 2015. But it only
> touches on data protection and communications of the private sector.
>
> There is a parallel "Directive" on data protection in government. Directive
> means that it isn't directly nationally applicable (the regulation above
> will
> be). A Directive needs a national legislative act to be effective. This
> Directive touches on the issue of Pervasive Monitoring. There is debate,
> but
> the debate is somewhat silenced by other issues, like Greece, the war in
> Ukraine. Also the freedom of speech against islamist threatening is at the
> forefront as you can imagine
>
> BTW, while the discussion is somewhat low in France, Italy and Greece, the
> debate on Pervasive Monitoring is happening in the Netherlands, Germany and
> Belgium. There is certainly a tension between what governments would like
> to
> do and what the population is willing to let them do in terms of
> surveillance.
>
> --Rigo

RE: On the european response to Snowden | Mike O'Neill <michael.oneill@baycloud.com> | 2015-01-27T10:46:36-00:00 | mid:032701d03a1e$89723ea0$9c56bbe0$@baycloud.com

There is also an international dimension, with transatlantic agreements on privacy, cybersecurity and surveillance being publicly discussed, and it is clear these things are interrelated: addressing one will always involve consideration of the others.
There does not have to be a trade-off, no need to forgo privacy for the sake of security. We should be able to build a system with them all.
What is needed is a clearly expressed “statement of requirements” i.e. we want to protect privacy and security within a transparent and democratically accountable framework which, for example, allows law enforcement to do its job (using warranted surveillance if necessary), but rules out mass surveillance. Because the net knows no borders there has to be a transnational component.
The W3C could then do its part helping to create the necessary protocols and standards, while the politicians take charge of the oversight process and creating the legal environment.
Mike
From: Danny Weitzner [mailto:djweitzner@csail.mit.edu]
Sent: 27 January 2015 01:15
To: Rigo Wenning; public-privacy@w3.org
Cc: David Singer
Subject: Re: On the european response to Snowden
further decoding: the EU has no authority over national security matters (ie foreign intelligence gathering) in its member states. Directive Rigo mentions will apply to law enforcement -- a good start, but not sufficient.
On Mon Jan 26 2015 at 12:54:41 PM Rigo Wenning <rigo@w3.org> wrote:
On Monday 26 January 2015 9:52:35 David Singer wrote:
> interesting article
>
> <http://www.ecfr.eu/publications/summary/mass_surveillance_privacy_and_security_europes_confused_response329>
Decoding:
The privacy regulation is under way and shall be voted in 2015. But it only
touches on data protection and communications of the private sector.
There is a parallel "Directive" on data protection in government. Directive
means that it isn't directly nationally applicable (the regulation above will
be). A Directive needs a national legislative act to be effective. This
Directive touches on the issue of Pervasive Monitoring. There is debate, but
the debate is somewhat silenced by other issues, like Greece, the war in
Ukraine. Also the freedom of speech against islamist threatening is at the
forefront as you can imagine
BTW, while the discussion is somewhat low in France, Italy and Greece, the
debate on Pervasive Monitoring is happening in the Netherlands, Germany and
Belgium. There is certainly a tension between what governments would like to
do and what the population is willing to let them do in terms of surveillance.
--Rigo

Re: On the european response to Snowden | David Singer <singer@apple.com> | 2015-01-27T11:49:47+01:00 | mid:68CE7573-9BBB-499A-BD2C-588F5E2C1D32@apple.com

> On Jan 27, 2015, at 11:46 , Mike O'Neill <michael.oneill@baycloud.com> wrote:
>
>
> There is also an international dimension, with transatlantic agreements on privacy, cybersecurity and surveillance being publicly discussed, and it is clear these things are interrelated: addressing one will always involve consideration of the others.
>
> There does not have to be a trade-off, no need to forgo privacy for the sake of security. We should be able to build a system with them all.
>
> What is needed is a clearly expressed “statement of requirements” i.e. we want to protect privacy and security within a transparent and democratically accountable framework which, for example, allows law enforcement to do its job (using warranted surveillance if necessary), but rules out mass surveillance. Because the net knows no borders there has to be a transnational component.
>
> The W3C could then do its part helping to create the necessary protocols and standards, while the politicians take charge of the oversight process and creating the legal environment.
>
If you have even vague visions for what protocols and standards could help here, could you sketch them out?
David Singer
Manager, Software Standards, Apple Inc.

Re: On the european response to Snowden | Mathias Vermeulen <mathias.vermeulen@gmail.com> | 2015-01-27T12:17:46+01:00 | mid:CAJP3LDQnE8qYfsCU5F35Jn-L1Bv79z4XWR3sEhp+L+R47EZHDg@mail.gmail.com

In that context I'd like to draw the attention of this group to a report on
'mass surveillance' which was adopted yesterday by the Council of Europe.
http://website-pace.net/documents/19838/1085720/20150126-MassSurveillance-EN.pdf/df5aae25-6cfe-450a-92a6-e903af10b7a2
The Assembly of the Council of Europe urged Council of Europe Member States
and Observer States (which includes the U.S.) to "agree on a multilateral
“Intelligence Codex” for their intelligence services, which lays down rules
governing cooperation for purposes of the fight against terrorism and
organised crime. *The Codex should include a mutual engagement to apply to
the surveillance of each other’s nationals and residents the same rules as
those applied to their own, and to share data obtained through lawful
surveillance measures solely for the purposes for which they were
collected."*
More details on this recommendation can be found in paragraphs 115-118 of
the report. The report is in line with the proposals of the European
Council on Foreign Affairs which was initially posted in this discussion,
and this paper that was published by the Oxford Internet Institute earlier
this month: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2551164
2015-01-27 11:49 GMT+01:00 David Singer <singer@apple.com>:
>
> > On Jan 27, 2015, at 11:46 , Mike O'Neill <michael.oneill@baycloud.com>
> wrote:
> >
> > -----BEGIN PGP SIGNED MESSAGE-----
> > Hash: SHA1
> >
> > There is also a international dimension, with transatlantic agreements
> on privacy, cybersecurity and surveillance being publically discussed, and
> it is clear these things are interrelated, addressing one will always
> involve consideration of the others.
> >
> > There does not have to be a trade-off, no need to forgo privacy for the
> sake of security. We should be able to build a system with them all.
> >
> > What is needed is a clearly expressed “statement of requirements” i.e.
> we want to protect privacy and security within a transparent and
> democratically accountable framework which, for example, allows law
> enforcement to do its job (using warranted surveillance if necessary), but
> rules out mass surveillance. Because the net knows no borders there has to
> be a transnational component.
> >
> > The W3C could then do its part helping to create the necessary protocols
> and standards, while the politicians take charge of the oversight process
> and creating the legal environment.
> >
>
> If you have even vague visions for what protocols and standards could help
> here, could you sketch them out?
>
> David Singer
> Manager, Software Standards, Apple Inc.
>
>
>

Re: On the european response to Snowden | David Singer <singer@apple.com> | 2015-01-27T12:29:15+01:00 | mid:94573210-EAFC-4483-9A1B-F928906876F6@apple.com

Ambarish, Mathias,
the W3C writes voluntary specifications: protocols, formats, and so on. Mike implied that it could help by defining some protocols and standards. What I am missing in your responses is which protocols the W3C could work on. Did you have ideas?
I have no doubt that governments and so on can work on the principles, but principles are a poor fit for a technical standards group.
David Singer
Manager, Software Standards, Apple Inc.

Re: On the european response to Snowden | Joseph Alhadeff <joseph.alhadeff@oracle.com> | 2015-01-27T06:36:30-05:00 | mid:54C7783E.8000908@oracle.com

David:
Governments work on principles at OECD and in such dialogs as the High
Level Group all the time. The question is whether principles are
sufficient at this point to establish the needed foundation for progress
on these issues. Principles are in many cases the best approach, as they
provide a flexibility of application that accommodates the technology and
other developments that would otherwise make detailed regulations obsolete in short order.
Joe
On 1/27/2015 6:29 AM, David Singer wrote:
> Ambarish, Mathias,
>
> the W3C writes voluntary specifications: protocols, formats, and so on. Mike implied that it could help by defining some protocols and standards. I’m missing in your responses what protocols the W3C could work on. Did you have ideas?
>
> I have no doubt that governments and so on can work on the principles, but principles are a poor fit for a technical standards group.
>
>
> David Singer
> Manager, Software Standards, Apple Inc.
>
>

Re: On the european response to Snowden | Joseph Alhadeff <joseph.alhadeff@oracle.com> | 2015-01-27T06:30:31-05:00 | mid:54C776D7.9000606@oracle.com

Mathias:
What form does the proposed CODEX take? Is it a treaty-like instrument
or is it more like a consensus agreement? If the latter, perhaps we
should revisit the investigatory principles that were agreed on both
sides of the Atlantic by the high level group?
Joe
On 1/27/2015 6:17 AM, Mathias Vermeulen wrote:
> In that context I'd like to draw the attention of this group to a
> report on 'mass surveillance' which was adopted yesterday by the
> Council of
> Europe.http://website-pace.net/documents/19838/1085720/20150126-MassSurveillance-EN.pdf/df5aae25-6cfe-450a-92a6-e903af10b7a2
>
> The Assembly of the Council of Europe urged Council of Europe Member
> States and Observer States (which includes the U.S.) to "agree on a
> multilateral “Intelligence Codex” for their intelligence services,
> which lays down rules governing cooperation for purposes of the fight
> against terrorism and organised crime. *The Codex should include a
> mutual engagement to apply to the surveillance of each other’s
> nationals and residents the same rules as those applied to their own,
> and to share data obtained through lawful surveillance measures solely
> for the purposes for which they were collected."*
>
> More details on this recommendation can be found in paragraphs 115-118
> of the report. The report is in line with the proposals of the
> European Council on Foreign Affairs which was initially posted in this
> discussion, and this paper that was published by the Oxford Internet
> Institute earlier this month:
> http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2551164
>
> 2015-01-27 11:49 GMT+01:00 David Singer <singer@apple.com
> <mailto:singer@apple.com>>:
>
>
> > On Jan 27, 2015, at 11:46 , Mike O'Neill
> <michael.oneill@baycloud.com <mailto:michael.oneill@baycloud.com>>
> wrote:
> >
> > -----BEGIN PGP SIGNED MESSAGE-----
> > Hash: SHA1
> >
> > There is also a international dimension, with transatlantic
> agreements on privacy, cybersecurity and surveillance being
> publically discussed, and it is clear these things are
> interrelated, addressing one will always involve consideration of
> the others.
> >
> > There does not have to be a trade-off, no need to forgo privacy
> for the sake of security. We should be able to build a system with
> them all.
> >
> > What is needed is a clearly expressed “statement of
> requirements” i.e. we want to protect privacy and security within
> a transparent and democratically accountable framework which, for
> example, allows law enforcement to do its job (using warranted
> surveillance if necessary), but rules out mass surveillance.
> Because the net knows no borders there has to be a transnational
> component.
> >
> > The W3C could then do its part helping to create the necessary
> protocols and standards, while the politicians take charge of the
> oversight process and creating the legal environment.
> >
>
> If you have even vague visions for what protocols and standards
> could help here, could you sketch them out?
>
> David Singer
> Manager, Software Standards, Apple Inc.
>
>
>

RE: On the european response to Snowden | Mike O'Neill <michael.oneill@baycloud.com> | 2015-01-27T12:37:11-00:00 | mid:036101d03a2d$fc791f40$f56b5dc0$@baycloud.com

There are three areas I have been thinking about, all rather vague, but you asked for them.
1) Signalling.
We saw a bit of this in the DNT discussions. How to create a signal conveying a user's explicit agreement for something, or their preferences for something, to one or more entities that may exist across multiple origins, in a secure, tamper-proof way. This may eventually be superseded by:
2) Anonymity.
To ensure privacy we should be able to trawl the net anonymously, but with some identity available through defined transactional processes. For example we may allow a subset of our identity to be discovered by some parties we know about and have reached agreement with. This might just be a broad audience categorisation (male, geek, whatever) or it might be more specific (MEP, a particular child's parent, member of a club). Visible identity changes with circumstances, i.e. I could anonymously apply for a loan or agree to pay for a purchase, but I would need to be accountable. My legal identity would have to be discoverable in certain agreed circumstances. We may also agree, through membership of a "rule of law" jurisdiction, that our identity is discoverable by law enforcement under agreed (by society) circumstances.
This may go beyond HTTP, e.g. IPv6 anonymous auto-configuration everywhere or a new internetworking layer, with a focus on stopping fingerprinting, and it is a big one. It will need heavy guns.
3) Encryption.
There is talk about making end-to-end encryption illegal. While this may seem silly and is probably a shot across the bows, https everywhere stirs the hornets' nest. I think an answer involves some process whereby https is made more secure (via certificate pinning etc.) and available to anyone, but law enforcement is given the means to determine identity through an internationally agreed process, i.e. along the lines of 2).
I think any backdooring process will just end up helping the bad guys, so we should have full end-to-end encryption available; but if the net can properly ensure privacy and security, only a minority will need it.
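For what a tamper-evident signal of the kind point 1 asks for might look like mechanically, here is a minimal sketch. It assumes the UA and the receiving entity already share a secret, which sidesteps the key-distribution problem that point 2 is really about; the function names and the "persona:" prefix are illustrative, not from any spec:

```python
import base64
import hashlib
import hmac
from typing import Optional


def sign_preference(secret: bytes, preference: str) -> str:
    """Pair a small preference string (e.g. "DNT:1" or "persona:work")
    with an HMAC-SHA256 tag so a recipient sharing the secret can
    detect tampering in transit."""
    tag = hmac.new(secret, preference.encode("utf-8"), hashlib.sha256).digest()
    return preference + "." + base64.urlsafe_b64encode(tag).decode("ascii")


def verify_preference(secret: bytes, token: str) -> Optional[str]:
    """Return the preference if the tag verifies, otherwise None."""
    preference, _, b64tag = token.rpartition(".")
    try:
        tag = base64.urlsafe_b64decode(b64tag.encode("ascii"))
    except Exception:
        return None
    expected = hmac.new(secret, preference.encode("utf-8"), hashlib.sha256).digest()
    return preference if hmac.compare_digest(tag, expected) else None
```

A UA could carry the resulting token in a request header field; a server that fails verification would simply treat the request as carrying no preference. The token is only slightly larger than the raw signal, which speaks to David's sledgehammer-and-walnut worry above.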
> -----Original Message-----
> From: David Singer [mailto:singer@apple.com]
> Sent: 27 January 2015 10:50
> To: Mike O'Neill
> Cc: Danny Weitzner; Rigo Wenning; public-privacy@w3.org
> Subject: Re: On the european response to Snowden
>
>
> > On Jan 27, 2015, at 11:46 , Mike O'Neill <michael.oneill@baycloud.com>
> wrote:
> >
> > There is also a international dimension, with transatlantic agreements on
> privacy, cybersecurity and surveillance being publically discussed, and it is clear
> these things are interrelated, addressing one will always involve consideration of
> the others.
> >
> > There does not have to be a trade-off, no need to forgo privacy for the sake of
> security. We should be able to build a system with them all.
> >
> > What is needed is a clearly expressed “statement of requirements” i.e. we
> want to protect privacy and security within a transparent and democratically
> accountable framework which, for example, allows law enforcement to do its
> job (using warranted surveillance if necessary), but rules out mass surveillance.
> Because the net knows no borders there has to be a transnational component.
> >
> > The W3C could then do its part helping to create the necessary protocols and
> standards, while the politicians take charge of the oversight process and
> creating the legal environment.
> >
>
> If you have even vague visions for what protocols and standards could help
> here, could you sketch them out?
>
> David Singer
> Manager, Software Standards, Apple Inc.
>

Re: On the european response to Snowden
From: David Singer <singer@apple.com>
Message-ID: <6C906DF6-F70C-4B70-97E7-118CD9E2F922@apple.com>
Date: 2015-01-27T15:33:21+01:00

Thanks Mike, comments inline
> 1) Signalling.
> We saw a bit of this in the DNT discussions. How to create a signal conveying a user's explicit agreement for something or their preferences for something to one or more entities that may exist across multiple origins, in a secure untamperable way. This may eventually be superseded by:
A challenging problem. These signals and preferences tend to be small, and padding them and then signing them digitally would seem to be using a sledgehammer to crack a walnut. But maybe the walnut is growing in importance. Other ideas?
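For concreteness, a minimal sketch of what "signing a small preference signal" could look like: an HMAC over the signal, the origin it is scoped to, and a timestamp. Everything here (the shared key, the token format, the names) is a hypothetical illustration of the idea being discussed, not any proposed standard:

```python
# Hypothetical sketch: a user agent signs a tiny preference signal
# (e.g. "DNT:1" scoped to an origin) so the receiving party can
# verify it was not tampered with in transit.
import base64
import hashlib
import hmac
import time


def sign_signal(key: bytes, origin: str, signal: str) -> str:
    """Return 'payload.signature', binding the signal to an origin
    and a timestamp (the timestamp limits replay)."""
    payload = f"{origin}|{signal}|{int(time.time())}".encode()
    sig = hmac.new(key, payload, hashlib.sha256).digest()
    return (base64.urlsafe_b64encode(payload).decode() + "." +
            base64.urlsafe_b64encode(sig).decode())


def verify_signal(key: bytes, token: str) -> bool:
    """Check that the payload matches its signature."""
    payload_b64, sig_b64 = token.split(".")
    payload = base64.urlsafe_b64decode(payload_b64)
    expected = hmac.new(key, payload, hashlib.sha256).digest()
    return hmac.compare_digest(expected, base64.urlsafe_b64decode(sig_b64))


key = b"shared-secret-established-out-of-band"  # assumption: key agreed somehow
token = sign_signal(key, "tracker.example", "DNT:1")
assert verify_signal(key, token)
```

The "sledgehammer" point is visible even in this toy: the signed token is an order of magnitude larger than the one-character preference it protects, and the hard part (establishing the key between parties that may never have met) is waved away in a comment.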
> 2) Anonymity.
> To ensure privacy we should be able to trawl the net anonymously, but with some identity available through defined transactional processes. For example we may allow a subset of our identity to be discovered by some parties we know about and have reached agreement with. This might just be a broad audience categorisation (male, geek, whatever) or it might be more specific (MEP, a particular child's parent, member of a club). Visible identity changes with circumstances i.e. I could anonymously apply for a loan or agree to pay for a purchase but I would need to be accountable. My legal identity would have to be discoverable in certain agreed circumstances. We may also agree, through membership of a "rule of law" jurisdiction, that our identity is discoverable by law enforcement under agreed (by society) circumstances.
>
> This may go beyond HTTP, i.e. IPv6 anon. auto configuration everywhere or a new internetworking layer, focus on stopping fingerprinting, and it is a big one. It will need heavy guns.
Online anonymity — secrecy — is hard, as you know. ToR is hardly an easy or universal solution. I recently did the thought experiment “what if every router was a NAT box?” — this would mean that IP addresses would be useless as proxies for identity — and the answer is that anonymity would improve but many other things (e.g. phone calls) would suffer. Again, ideas for this would be good.
> 3) Encryption.
>
> There is talk about making end-to-end encryption illegal. While this may seem silly and is probably a shot across the bows, https everywhere stirs the hornet's nest. I think an answer involves some process whereby https is made more secure (via certificate pinning etc.), available to anyone but that law enforcement is given the means to determine identity through an internationally agreed process i.e. along the lines of 2).
>
> > I think any backdooring process will just end up helping the bad guys, so we have full end-to-end encryption available but if the net can properly ensure privacy and security only a minority will need it.
So you envisage encryption that is end-to-end and backdoor free, but nonetheless accessible to lawful intercept. Challenging in today’s environment, but maybe there is a solution.
David Singer
Manager, Software Standards, Apple Inc.

Re: On the european response to Snowden
From: Rigo Wenning <rigo@w3.org>
Message-ID: <16010165.6aAoy3CcfD@hegel>
Date: 2015-01-27T20:55:50+01:00

David,
On Tuesday 27 January 2015 15:33:21 David Singer wrote:
> > 1) Signalling.
>
> > We saw a bit of this in the DNT discussions. How to create a signal
conveying a user's explicit agreement for something or their preferences for
something to one or more entities that may exist across multiple origins, in a
secure untamperable way. This may eventually be superseded by:
> A challenging problem. These signals and preferences tend to be small, and
> padding them and then signing them digitally would seem to be using a
> sledgehammer to crack a walnut. But maybe the walnut is growing in
> importance. Other ideas?
The mistake is to look for a universal solution. In fact, we could start by
one useful semantic bucket. Once this is taken up, we can add other buckets.
As there is no compulsory legal obligation, the only way to get PETs accepted
is to offer benefits. This is hard in the absence of a legal system in an
environment where you can collect everything and use it for every possible
purpose. But it is meaningful in a regulated environment where it actually
opens doors as it conveys permissions.
--Rigo

RE: On the european response to Snowden
From: Mike O'Neill <michael.oneill@baycloud.com>
Message-ID: <3a1401d03ada$8dad49d0$a907dd70$@baycloud.com>
Date: 2015-01-28T09:12:23-00:00

David, comments to your comments inline
> -----Original Message-----
> From: David Singer [mailto:singer@apple.com]
> Sent: 27 January 2015 14:33
> To: Mike O'Neill
> Cc: Danny Weitzner; Rigo Wenning; public-privacy@w3.org
> Subject: Re: On the european response to Snowden
>
> Thanks Mike, comments inline
>
> > 1) Signalling.
> > We saw a bit of this in the DNT discussions. How to create a signal
> conveying a user's explicit agreement for something or their preferences for
> something to one or more entities that may exist across multiple origins, in a
> secure untamperable way. This may eventually be superseded by:
>
> A challenging problem. These signals and preferences tend to be small, and
> padding them and then signing them digitally would seem to be using a
> sledgehammer to crack a walnut. But maybe the walnut is growing in
> importance. Other ideas?
I was meaning more the general problem of signalling between entities, i.e. between the UA acting for an individual and companies which control many domains/origins. There are several use-cases that came up in DNT, and it requires authentication of identity, which is also why it will be subsumed into point 2.
>
> > 2) Anonymity.
> > To ensure privacy we should be able to trawl the net anonymously, but
> with some identity available through defined transactional processes. For
> example we may allow a subset of our identity to be discovered by some parties
> we know about and have reached agreement with. This might just be a broad
> audience categorisation (male, geek, whatever) or it might be more specific
> (MEP, a particular child's parent, member of a club). Visible identity changes with
> circumstances i.e. I could anonymously apply for a loan or agree to pay for a
> purchase but I would need to be accountable. My legal identity would have to be
> discoverable in certain agreed circumstances. We may also agree, through
> membership of a "rule of law" jurisdiction ,that our identity is discoverable by
> law enforcement under agreed (by society) circumstances.
> >
> > This may go beyond HTTP, i.e. IPv6 anon. auto configuration everywhere or a
> new internetworking layer, focus on stopping fingerprinting, and it is a big one.
> It will need heavy guns.
>
> Online anonymity — secrecy — is hard, as you know. ToR is hardly an easy or
> universal solution. I recently did the thought experiment “what if every router
> was a NAT box?” — this would mean that IP addresses would be useless as
> proxies for identity — and the answer is that anonymity would improve but
> many other things (e.g. phone calls) would suffer. Again, ideas for this would be
> good.
I think there should be an out-of-band identity exchange, non-trackable, i.e. one that does not use UUIDs but is established below the tunnel. Maybe in the https handshake or in an internetwork layer.
The identity exchange should be under the control of both parties, but also visible to third parties in defined circumstances, for instance when accountability or law enforcement is required.
>
> > 3) Encryption.
> >
> > There is talk about making end-to-end encryption illegal. While this may seem
> silly and is probably a shot across the bows, https everywhere stirs the hornet's
> nest. I think an answer involves some process whereby https is made more
> secure (via certificate pinning etc.), available to anyone but that law
> enforcement is given the means to determine identity through an internationally
> agreed process i.e. along the lines of 2).
> >
> > I think any backdooring process will just end up helping the bad guys, so we
> have full ETO encryption available but if the net can properly ensure privacy and
> security only a minority will need it.
>
> So you envisage encryption that is end-to-end and backdoor free, but
> nonetheless accessible to lawful intercept. Challenging in today’s environment,
> but maybe there is a solution.
I was thinking more that the identity was visible to lawful intercept, not necessarily the encrypted content. But if privacy and security are guaranteed without encryption then there would be less need for it. I forgot to mention integrity: there should be a way to ensure the integrity of the data (such as JavaScript) transmitted between mutually identified parties, without having to put everything through an encrypted tunnel.
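One way to picture "integrity without an encrypted tunnel" is a digest check against a value obtained from a trusted source; the web was at the time standardising a similar idea as Subresource Integrity. A rough sketch (the digest format mimics SRI's `sha256-...` notation; the details are illustrative, not Mike's actual proposal):

```python
# Sketch: detect tampering of a resource (e.g. a script) fetched over
# a plain channel by comparing it against a digest obtained from a
# trusted party. No confidentiality, but integrity is verifiable.
import base64
import hashlib


def sri_digest(resource: bytes) -> str:
    """Compute a Subresource-Integrity-style 'sha256-<base64>' digest."""
    raw = hashlib.sha256(resource).digest()
    return "sha256-" + base64.b64encode(raw).decode()


def verify(resource: bytes, expected_digest: str) -> bool:
    """True iff the fetched bytes match the trusted digest."""
    return sri_digest(resource) == expected_digest


script = b"console.log('hello');"
digest = sri_digest(script)            # obtained via a trusted channel
assert verify(script, digest)          # untampered resource passes
assert not verify(script + b"x", digest)  # any modification is detected
```

The design point: only the short digest needs a trusted channel; the bulk data can travel in the clear, which is exactly the separation of integrity from confidentiality that the message argues for.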
>
> David Singer
> Manager, Software Standards, Apple Inc.
>
Mike

Re: On the european response to Snowden
From: Ambarish S Natu <ambarish.natu@gmail.com>
Message-ID: <CAO6L_b770_NAP6xq5o4h7U1DZfn-Szdv7fy96ZG6MmXqqt+hFw@mail.gmail.com>
Date: 2015-01-27T22:17:02+11:00

Here is a list of requirements to start thinking of a framework! The problem could easily spiral out of proportion!
Privacy-Abusive Data Collection and Retention
- *Demands for User Data*
- identity data
- profile data
- contacts data
- location data
- *Enticement of the Disclosure of User Data*
- about the user
- about the user's location
- about others
- *Collection of User Data *
- about users' online behaviour
- when transacting with the particular social media service
- even when transacting with other services
- about users' reading, interests, opinions and attitudes
- about users' locations over time
- from third parties:
- without notice to the user and/or
- without meaningful consent
- *Retention of User Data*
- without meaningful consent
- without a deletion-cycle
- compiling an intensive track of users' readings, behaviours and
movements
Privacy-Abusive Service-Provider Rights
- *Terms of Service Features*
- substantial self-declared, non-negotiable service-provider rights
- a right to exploit users' data for the service-providers' own
purposes
- a right to disclose users' data to other organisations
- a right to retain users' data permanently, even if the person
terminates their account
- a right to change Terms of Service:
- unilaterally
- without advance notice to users; and/or
- without any notice to users
- *Exercise of Self-Declared Service-Provider Rights*
- in ways harmful to users' interests
- in order to renege on previous undertakings
- without notice of the action being provided to the user
- *Avoidance of Consumer Protection and Privacy Laws*
- location of storage and processing in data havens
- location of contract-jurisdiction distant from users
- ignoring of regulatory and oversight agencies
- acceptance of nuisance-value fines and nominal undertakings as 'a
cost of doing business'
Privacy-Abusive Functionality and User Interfaces
- *Privacy-Related Settings*
- non-conservative default settings, such as default-open for
profile-data, postings, and even location-data
- inadequate granularity
- complex and unhelpful user interfaces
- changes to the effects of settings
- without advance notice
- without any notice and/or
- without meaningful consent
- *'Real Names' Policies*
- denial of anonymity
- denial of pseudonymity
- denial of multiple identities
- enforced publication of 'real name' and associated profile data
- *Changes to Functionality and User Interface*
- frequent
- without advance notice to users
- without any notice to users
- without meaningful consent
- *User Access to Their Data*
- lack of clarity about whether data can be accessed
- lack of clarity about how data can be accessed
- failure to implement effective processes for user access
- unreasonable limitations on a right of access
- denial of a right of access
- *User Deletion of Their Data*
- lack of clarity about whether each category of data can be deleted
- lack of clarity about how each category of data can be deleted
- failure to implement effective processes for user-initiated deletion
- unreasonable limitations on a right of deletion
- denial of a right of deletion
Privacy-Abusive Data Exploitation
- *Exposure of User Data to Third Parties*
- wide exposure, in violation of previous Terms of Service, of:
- users' profile-data - even to the point of publishing
street-address and mobile-phone number
- users' postings
- users' advertising and purchasing behaviour
- users' declared social networks
- users' inferred social networks, based on messaging-traffic
- changes to the scope of exposure:
- without advance notice to users
- without any notice to users; and/or
- without meaningful consent
- ready access by government agencies, without demonstrated legal
authority for the demand
- *Exposure of Data about Other People*
- upload of users' address-books, including:
- their contact-points
- other personal data, such as children's names
- comments about them
- by implication, their social networks
- exploitation of non-users' interactions with users
Regards
Ambarish S Natu
This list is from one of Roger Clarke's papers:
http://www.rogerclarke.com/II/COSM-1301.html
On Tuesday, 27 January 2015, David Singer <singer@apple.com> wrote:
>
> > On Jan 27, 2015, at 11:46 , Mike O'Neill <michael.oneill@baycloud.com
> <javascript:;>> wrote:
> >
> > There is also a international dimension, with transatlantic agreements
> on privacy, cybersecurity and surveillance being publically discussed, and
> it is clear these things are interrelated, addressing one will always
> involve consideration of the others.
> >
> > There does not have to be a trade-off, no need to forgo privacy for the
> sake of security. We should be able to build a system with them all.
> >
> > What is needed is a clearly expressed “statement of requirements” i.e.
> we want to protect privacy and security within a transparent and
> democratically accountable framework which, for example, allows law
> enforcement to do its job (using warranted surveillance if necessary), but
> rules out mass surveillance. Because the net knows no borders there has to
> be a transnational component.
> >
> > The W3C could then do its part helping to create the necessary protocols
> and standards, while the politicians take charge of the oversight process
> and creating the legal environment.
> >
>
> If you have even vague visions for what protocols and standards could help
> here, could you sketch them out?
>
> David Singer
> Manager, Software Standards, Apple Inc.
>
>
>
--
अंबरीष श्रिकृष्ण नातू
Sent from Gmail Mobile

RE: On the european response to Snowden
From: KWASNY Sophie <Sophie.KWASNY@coe.int>
Message-ID: <009140F0156579499DBFD29352345F1C7D2F6BAD@V-Linguistix00.key.coe.int>
Date: 2015-01-27T11:19:01+00:00

Dear All,
Speaking about politicians, let me jump in with recent news from Strasbourg:
the Rapporteur's report on mass surveillance, adopted yesterday in the Legal Committee, which should be final after the April Plenary session where the Parliamentary Assembly as a whole should adopt it: http://website-pace.net/documents/19838/1085720/20150126-MassSurveillance-EN.pdf/df5aae25-6cfe-450a-92a6-e903af10b7a2
Interesting call for action, in particular the position regarding privacy in the Transatlantic Trade and Investment Partnership and other international agreements (paragraph 18 of the draft Resolution), and the calls to promote the wide use of encryption and to further develop user-friendly (automatic) data protection techniques capable of countering mass surveillance and any other threats to internet security.
Regarding Danny's comment on the scope of the draft EU Directive, let me point out that, on the contrary, national security is not excluded from the ECHR and Convention 108 scope, and while it is a ground for possible limitation of the right to privacy and to data protection, this is very strictly framed, as it has to be prescribed by law and be necessary (proportionality test, safeguards etc.).
Best regards,
Sophie
Sophie Kwasny
Data Protection Unit
Human Rights and Rule of Law
CONSEIL DE L'EUROPE - COUNCIL OF EUROPE
www.coe.int/dataprotection
-----Original Message-----
From: Mike O'Neill [mailto:michael.oneill@baycloud.com]
Sent: mardi 27 janvier 2015 11:47
To: 'Danny Weitzner'; 'Rigo Wenning'; public-privacy@w3.org
Cc: 'David Singer'
Subject: RE: On the european response to Snowden
There is also an international dimension, with transatlantic agreements on privacy, cybersecurity and surveillance being publicly discussed, and it is clear these things are interrelated: addressing one will always involve consideration of the others.
There does not have to be a trade-off, no need to forgo privacy for the sake of security. We should be able to build a system with them all.
What is needed is a clearly expressed “statement of requirements” i.e. we want to protect privacy and security within a transparent and democratically accountable framework which, for example, allows law enforcement to do its job (using warranted surveillance if necessary), but rules out mass surveillance. Because the net knows no borders there has to be a transnational component.
The W3C could then do its part helping to create the necessary protocols and standards, while the politicians take charge of the oversight process and creating the legal environment.
Mike
From: Danny Weitzner [mailto:djweitzner@csail.mit.edu]
Sent: 27 January 2015 01:15
To: Rigo Wenning; public-privacy@w3.org
Cc: David Singer
Subject: Re: On the european response to Snowden
further decoding: the EU has no authority over national security matters (i.e. foreign intelligence gathering) in its member states. The Directive Rigo mentions will apply to law enforcement -- a good start, but not sufficient.
On Mon Jan 26 2015 at 12:54:41 PM Rigo Wenning <rigo@w3.org> wrote:
On Monday 26 January 2015 9:52:35 David Singer wrote:
> interesting article
>
> <http://www.ecfr.eu/publications/summary/mass_surveillance_privacy_and_secur
> ity_europes_confused_response329>
Decoding:
The privacy regulation is under way and shall be voted in 2015. But it only
touches on data protection and communications of the private sector.
There is a parallel "Directive" on data protection in government. Directive
means that it isn't directly nationally applicable (the regulation above will
be). A Directive needs a national legislative act to be effective. This
Directive touches on the issue of Pervasive Monitoring. There is debate, but
the debate is somewhat drowned out by other issues, like Greece and the war in
Ukraine. Also, freedom of speech in the face of Islamist threats is at the
forefront, as you can imagine.
BTW, while the discussion is somewhat low in France, Italy and Greece, the
debate on Pervasive Monitoring is happening in the Netherlands, Germany and
Belgium. There is certainly a tension between what governments would like to
do and what the population is willing to let them do in terms of surveillance.
--Rigo

Re: On the european response to Snowden
From: Ambarish S Natu <ambarish.natu@gmail.com>
Message-ID: <CAO6L_b6B4-S1Qym6RU_QA7_Ls1St1AQ14winyr08sva5ohtPRQ@mail.gmail.com>
Date: 2015-01-27T12:50:36+11:00

Great to hear from you Danny,
Last time we spoke about the role of standards in this, you were in Canberra giving a talk at DoC.
The Australian Privacy Principles are a good guideline for organizations on maintaining privacy. These are also applicable to government agencies, apart from a few added discretions, but those rest with the minister.
Recently the Australian government proposed the two-year data retention law for service providers; that is going to be a huge privacy challenge when it comes to accessing content in the database.
http://www.oaic.gov.au/privacy/applying-privacy-law/app-guidelines/
http://parlinfo.aph.gov.au/parlInfo/download/legislation/bills/r5375_first-reps/toc_pdf/14242b01.pdf;fileType=application%2Fpdf
Links for those interested in the legal and technical interpretation.
Ambarish S Natu.
On Tuesday, 27 January 2015, Danny Weitzner <djweitzner@csail.mit.edu>
wrote:
> further decoding: the EU has no authority over national security matters
> (ie foreign intelligence gathering) in its member states. Directive Rigo
> mentions will apply to law enforcement -- a good start, but not sufficient.
>
> On Mon Jan 26 2015 at 12:54:41 PM Rigo Wenning <rigo@w3.org
> <javascript:_e(%7B%7D,'cvml','rigo@w3.org');>> wrote:
>
>> On Monday 26 January 2015 9:52:35 David Singer wrote:
>> > interesting article
>> >
>> > <http://www.ecfr.eu/publications/summary/mass_
>> surveillance_privacy_and_secur
>> > ity_europes_confused_response329>
>>
>> Decoding:
>>
>> The privacy regulation is under way and shall be voted in 2015. But it
>> only
>> touches on data protection and communications of the private sector.
>>
>> There is a parallel "Directive" on data protection in government.
>> Directive
>> means that it isn't directly nationally applicable (the regulation above
>> will
>> be). A Directive needs a national legislative act to be effective. This
>> Directive touches on the issue of Pervasive Monitoring. There is debate,
>> but
>> the debate is somewhat silenced by other issues, like Greece, the war in
>> Ukraine. Also the freedom of speech against islamist threatening is at the
>> forefront as you can imagine
>>
>> BTW, while the discussion is somewhat low in France, Italy and Greece, the
>> debate on Pervasive Monitoring is happening in the Netherlands, Germany
>> and
>> Belgium. There is certainly a tension between what governments would like
>> to
>> do and what the population is willing to let them do in terms of
>> surveillance.
>>
>> --Rigo
>
>
--
अंबरीष श्रिकृष्ण नातू
Sent from Gmail Mobile

RE: Re: On the european response to Snowden
From: Mike O'Neill <michael.oneill@baycloud.com>
Message-ID: <401501d03bc4$6747ec60$35d7c520$@baycloud.com>
Date: 2015-01-29T13:06:29-00:00

Happy New Year!
Interesting article about how HTTP Strict Transport Security can be used to
circumvent the protections in the private browsing mode. But it seems to be
fixed in firefox >34. I don't know about the other browsers.
--Rigo

Thanks Rigo - interesting article.
If I read it correctly, the offending flag is set when the browser is in “normal” mode, and the vulnerability introduced by it is that it persists if the user switches to private browsing mode without first flushing the cookies accumulated while in “normal” mode. Doing that when you switch modes is a hassle, but then, so is cleaning your teeth - and both are good hygiene ;^)
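The tracking mechanism the Ars Technica article describes can be sketched roughly as follows: the tracker controls a set of subdomains and treats the browser's per-host HSTS flag as one bit each, set by serving a Strict-Transport-Security header and read back by observing which plain-http subresource requests the browser silently upgrades to https. The domain names and bit-width below are purely illustrative:

```python
# Sketch of the "HSTS supercookie": an identifier encoded as one bit
# per tracker-controlled subdomain. Setting HSTS on subdomain i means
# bit i of the identifier is 1; the flag persists across browsing
# modes unless the state is cleared, which is the privacy leak.

def encode_id(user_id: int, n_bits: int = 16) -> list[str]:
    """Return the subdomains on which the tracker sets an HSTS header."""
    return [f"b{i}.tracker.example"
            for i in range(n_bits) if (user_id >> i) & 1]


def decode_id(upgraded_subdomains: set[str], n_bits: int = 16) -> int:
    """Reconstruct the identifier from which http:// requests the
    browser upgraded to https:// (i.e. which hosts had HSTS set)."""
    return sum(1 << i for i in range(n_bits)
               if f"b{i}.tracker.example" in upgraded_subdomains)


uid = 0b1011001110001011
assert decode_id(set(encode_id(uid)), 16) == uid  # round-trips losslessly
```

This also makes the fix legible: either partition HSTS state per browsing mode (as Firefox ≥ 34 reportedly did) or flush it when switching modes, so the bits read back in private mode carry no information from the normal session.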
Presumably another workable option is to have multiple browser instances, and to ensure that at least one is always set to private mode (so that you’re not switching from normal to private in the same browser).
R
Robin Wilton
Technical Outreach Director - Identity and Privacy
Internet Society
email: wilton@isoc.org
Phone: +44 705 005 2931
Twitter: @futureidentity
On 8 Jan 2015, at 20:56, Rigo Wenning <rigo@w3.org> wrote:
> And here is the link after popular demand :( Sorry that I missed it..
>
> http://arstechnica.com/security/2015/01/browsing-in-privacy-mode-super-cookies-can-track-you-anyway/
>
> On Thursday 08 January 2015 21:13:27 Rigo Wenning wrote:
>> Happy New Year!
>>
>> Interesting article about how HTTP Strict Transport Security can be used to
>> circumvent the protections in the private browsing mode. But it seems to be
>> fixed in firefox >34. I don't know about the other browsers.
>>
>> --Rigo

I think we might need a consensus definition of what private browsing mode is, and how it affects servers. We had some offline conversation about it at the workshop.
For example, for some people ‘private browsing’ starts a sandbox that is initialized from the regular browsing context (cookies and all), but that is discarded at the end of the private browsing session. There’s no need for supercookies to correlate the regular browsing into private browsing, as the cookies are there. Correlating the other way will simply raise the ire of users if you are not careful, as it would persist state and hence ‘leak’ from the private session back into the general one.
I have some ideas around codifying ‘private browsing mode’ and how to communicate ‘heh, I am trying to be private here!’ to servers. Is this a topic of interest to others?
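As a thought experiment, the "politely tell the server" approach might look like a simple request header plus a server acknowledgement, along the persona lines discussed at the workshop. The header names below are invented for illustration only; no such header was standardised:

```python
# Hypothetical sketch: the user agent declares the current persona in
# a request header, and the server echoes an acknowledgement that it
# will keep this session's state segregated under that persona.
# "Persona" and "Persona-Ack" are made-up header names.

def client_headers(persona: str) -> dict:
    """Headers the user agent would attach, e.g. persona = 'private'."""
    return {"Persona": persona}


def server_response_headers(request_headers: dict) -> dict:
    """A cooperating server acknowledges the requested segregation;
    a non-cooperating (or legacy) server simply sends nothing back."""
    persona = request_headers.get("Persona")
    if persona is None:
        return {}
    return {"Persona-Ack": persona}


req = client_headers("private")
assert server_response_headers(req) == {"Persona-Ack": "private"}
```

The acknowledgement is the interesting half: the signal itself costs the server almost nothing, while a false acknowledgement is exactly the kind of misrepresentation a regulator could act on, as the head of this thread suggests.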
> On Jan 8, 2015, at 12:13 , Rigo Wenning <rigo@w3.org> wrote:
>
> Happy New Year!
>
> Interesting article about how HTTP Strict Transport Security can be used to
> circumvent the protections in the private browsing mode. But it seems to be
> fixed in firefox >34. I don't know about the other browsers.
>
> --Rigo
David Singer
Manager, Software Standards, Apple Inc.

Hi David,
Regarding your query about private browsing modes -
Copying from the summary of the PING meeting in November …
“ … => TAG and private browsing mode
Mark Nottingham gave an overview of the TAG’s work on browsers “private browsing mode”. The work looks at the mode for three use cases: other users, network attacker, the website itself. The aim is to provide “best class” protection in private browsing mode while not lowering privacy standards outside privacy browsing mode.
The work can be followed on the tag email list [2]. Mark hopes to have a draft ready by the January TAG face-to-face meeting."
[2] www-tag@w3.org"
Christine
On 8 Jan 2015, at 11:39 pm, David Singer <singer@apple.com> wrote:
> I think we might need a consensus definition of what private browsing mode is, and how it affects servers. We had some offline conversation about it at the workshop.
>
> For example, for some people ‘private browsing’ starts a sandbox that is initialized from the regular browsing context (cookies and all), but that is discarded at the end of the private browsing session. There’s no need for supercookies to correlate the regular browsing into private browsing, as the cookies are there. Correlating the other way will simply raise the ire of users if you are not careful, as it would persist state and hence ‘leak’ from the private session back into the general one.
>
> I have some ideas around codifying ‘private browsing mode’ and how to communicate ‘heh, I am trying to be private here!’ to servers. Is this a topic of interest to others?
>
>> On Jan 8, 2015, at 12:13 , Rigo Wenning <rigo@w3.org> wrote:
>>
>> Happy New Year!
>>
>> Interesting article about how HTTP Strict Transport Security can be used to
>> circumvent the protections in the private browsing mode. But it seems to be
>> fixed in firefox >34. I don't know about the other browsers.
>>
>> --Rigo
>
> David Singer
> Manager, Software Standards, Apple Inc.
>
>

09.01.2015, 01:52, "Christine Runnegar" <runnegar@isoc.org>:
> Hi David,
>
> Regarding your query about private browsing modes -
>
> Copying from the summary of the PING meeting in November …
>
> “ … => TAG and private browsing mode
http://w3ctag.github.io/private-mode/ is the editors' draft.
cheers
> Mark Nottingham gave an overview of the TAG’s work on browsers “private browsing mode”. The work looks at the mode for three use cases: other users, network attacker, the website itself. The aim is to provide “best class” protection in private browsing mode while not lowering privacy standards outside privacy browsing mode.
>
> The work can be followed on the tag email list [2]. Mark hopes to have a draft ready by the January TAG face-to-face meeting."
>
> [2] www-tag@w3.org"
>
> Christine
>
> On 8 Jan 2015, at 11:39 pm, David Singer <singer@apple.com> wrote:
>> I think we might need a consensus definition of what private browsing mode is, and how it affects servers. We had some offline conversation about it at the workshop.
>>
>> For example, for some people ‘private browsing’ starts a sandbox that is initialized from the regular browsing context (cookies and all), but that is discarded at the end of the private browsing session. There’s no need for supercookies to correlate the regular browsing into private browsing, as the cookies are there. Correlating the other way will simply raise the ire of users if you are not careful, as it would persist state and hence ‘leak’ from the private session back into the general one.
>>
>> I have some ideas around codifying ‘private browsing mode’ and how to communicate ‘heh, I am trying to be private here!’ to servers. Is this a topic of interest to others?
>>> On Jan 8, 2015, at 12:13 , Rigo Wenning <rigo@w3.org> wrote:
>>>
>>> Happy New Year!
>>>
>>> Interesting article about how HTTP Strict Transport Security can be used to
>>> circumvent the protections in the private browsing mode. But it seems to be
>>> fixed in firefox >34. I don't know about the other browsers.
>>>
>>> --Rigo
>> David Singer
>> Manager, Software Standards, Apple Inc.
--
Charles McCathie Nevile - web standards - CTO Office, Yandex
chaals@yandex-team.ru - - - Find more at http://yandex.com

> On Jan 8, 2015, at 16:16 , chaals@yandex-team.ru wrote:
>
> 09.01.2015, 01:52, "Christine Runnegar" <runnegar@isoc.org>:
>> Hi David,
>>
>> Regarding your query about private browsing modes -
>>
>> Copying from the summary of the PING meeting in November …
>>
>> “ … => TAG and private browsing mode
>
> http://w3ctag.github.io/private-mode/ is the editors' draft.
Thanks
but this draft, and what I described, are almost completely different. We may need different names.
This draft attempts to achieve privacy by limiting information flow, while not explicitly saying to the servers what it is trying to do.
My suggestion is almost precisely the opposite: ask the server politely to do something for the user, that actually barely impacts its business.
>
> cheers
>
David Singer
Manager, Software Standards, Apple Inc.

09.01.2015, 03:23, "David Singer" <singer@apple.com>:
>> On Jan 8, 2015, at 16:16 , chaals@yandex-team.ru wrote:
>>
>> 09.01.2015, 01:52, "Christine Runnegar" <runnegar@isoc.org>:
>> http://w3ctag.github.io/private-mode/ is the editors' draft.
>
> but this draft, and what I described, are almost completely different. We may need different names.
>
> This draft attempts to achieve privacy by limiting information flow, while not explicitly saying to the servers what it is trying to do.
>
> My suggestion is almost precisely the opposite: ask the server politely to do something for the user, that actually barely impacts its business.
My instinct is that we should aim for a consensus on what browsers do when trying to support privacy, and that we would do better to get one spec together, although that may prove a pipe dream.
I note that there are at least two distinct aspects to private browsing - one in which the point is to provide privacy "within the browser" - i.e. so it doesn't later expose to a casual observer what you were doing - and the other is to maintain privacy from casual observers in the network or at the sites visited.
These things are already mixed. Making a search in Yandex browser will provide you with suggestions that can include things that you have looked at before in the browser, things that Yandex remembers you were interested in earlier, and so on. Hence my preference to try and cover the spectrum in a single document.
cheers
--
Charles McCathie Nevile - web standards - CTO Office, Yandex
chaals@yandex-team.ru - - - Find more at http://yandex.com

Hi David,
I am definitely interested in these ideas, can you give a summary?
Mike
> -----Original Message-----
> From: David Singer [mailto:singer@apple.com]
> Sent: 08 January 2015 22:40
> To: W3C Privacy IG
> Subject: Re: Super Cookies in Privacy Browsing mode

> On Jan 10, 2015, at 4:00 , Mike O'Neill <michael.oneill@baycloud.com> wrote:
>
> Hi David,
>
> I am definitely interested in these ideas, can you give a summary?
>
> Mike
sure. try this.
The user-agent can send an optional HTTP header ‘Persona:’ whose value is a suitable machine-generatable distinct identifier (e.g. a UUID). If the header is absent, the user is operating under their default (unlabeled) persona, which is distinct from all the identified personas, which in turn are also distinct from each other. A user and their user-agent may return to a persona at any time, or continue using a persona for any length of time. A persona identifier is expected to be universally unique, not contextualized to the current user-agent or device.
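A minimal sketch of the user-agent side, assuming the header name and UUID format proposed above; the function names (`new_persona`, `request_headers`) are invented for illustration and do not correspond to any shipping browser API.

```python
import uuid
from typing import Optional


def new_persona() -> str:
    # Mint a fresh, globally unique persona identifier (e.g. when the
    # user turns on private browsing).
    return str(uuid.uuid4())


def request_headers(persona: Optional[str]) -> dict:
    # Build request headers; the Persona header is simply absent when
    # the user is operating under the default (unlabeled) persona.
    headers = {"User-Agent": "ExampleBrowser/1.0"}
    if persona is not None:
        headers["Persona"] = persona
    return headers


# Every request in a private session carries the same persona value;
# the user-agent forgets it when the session ends.
session_persona = new_persona()
headers = request_headers(session_persona)
```

A user-agent that wants to resume a persona later (the gift-shopping case) would simply store and re-send the same UUID.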
Servers respecting this are requested to ensure that the labeled personas leave no trace or influence on each other or on the unlabeled persona. For example, activity under one persona should not affect the ads shown under a different persona; any history records that the user can see should be distinct for each persona; and so on. (Reflecting the unlabeled persona in labeled ones is permitted but optional: if servers wish, they can initialize a labeled persona from the default, unlabeled one when they first see it.)
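On the server side, honouring the request is essentially a matter of keying per-user state by (user, persona) rather than by user alone. A toy in-memory sketch, with illustrative class and method names:

```python
from collections import defaultdict


class PersonaKeyedHistory:
    # Toy store: records are keyed by (user, persona), where persona is
    # None for the default, unlabeled persona. Nothing is deleted or
    # hidden from the server; records are merely kept unlinked.
    def __init__(self):
        self._records = defaultdict(list)

    def record(self, user, persona, item):
        self._records[(user, persona)].append(item)

    def visible_history(self, user, persona):
        # Only activity under the current persona is reflected back to
        # the user (in history, ads, suggestions, and so on).
        return list(self._records[(user, persona)])


store = PersonaKeyedHistory()
store.record("alice", None, "news")
store.record("alice", "persona-1", "gift ideas")
```

Under this keying, a query for one persona never sees another persona's activity, which is the segregation the proposal asks for.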
Server implementers may choose how long they retain records relating to separate personas, just as they do for today’s default persona.
This is NOT a request to stop tracking or keeping records; that is an orthogonal question that is covered by activities such as do-not-track, cookie directives, and so on. This is about giving users control of their privacy by controlling what gets linked to what, and exposed when.
We do not think it is particularly necessary or valuable to have a machine-readable means of discovering whether servers support this feature. Any support that they provide is an improvement on today’s experience, where servers are unaware that users are trying to be private. Claims of support for this feature are probably better conveyed in advertising or other human-readable ways.
This feature might also be valuable for shared terminals; for example, in libraries, airline lounges, internet cafes and the like, a new persona can be minted each time the terminal is unlocked for a new session. Libraries might tie the persona to the library card, so returning users get re-linked to their online history and so on. It might also be a lightweight replacement for logging in on shared devices — a browser might have a simple way of saying which family member is using it right now (e.g. a pull-down menu).
* * * *
I think it’s interesting in a number of respects:
a) it’s an improvement on the status quo, where servers are completely unaware of any attempt to be private
b) it’s not asking for *secrecy* at all; servers are at liberty to remember as much as before; there are very few privacy proposals that don’t slide into trying to be secret, and this is one. Privacy is also about where information is exposed, what it is linked to, and so on.
c) it recognizes that privacy is not a binary state — it’s not an either-or (you have it or you don’t); it’s a spectrum, and it’s about perception and control and exposure as much as it is about recording and so on.
David Singer
Manager, Software Standards, Apple Inc.

The trouble with fixed UUIDs for this is that they will be used for tracking, cross-domain to boot. Being origin-invariant will make it unpopular with brand sites and publishers also, because their customer data will be leaked to any third party. I know the persona identifier will be mutable and under user control, but most people will not bother to change it, or will forget to, so it will become permanent for most.
I think making the UA responsible for personae (if that’s the plural) is interesting, because it helps user control, and maybe eventually the abolition of server initiated (and user unaware) tracking.
I do not think it would be hard to come up with a non-trackable persona, i.e. one that varies between requests, using asymmetric crypto: concatenate a per-request nonce with the UUID, then encrypt it with a user-specific key, so that only parties holding the decryption key could determine the UUID, and the header value would be different for every request. This would give absolute user control over tracking and cross-party secrecy for sites.
Mike
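The construction Mike describes could be sketched roughly as below. This is a toy: it substitutes a SHA-256 keystream for a real cipher (Mike suggests asymmetric crypto; a real design would use a proper AEAD such as AES-GCM), and all names are illustrative. It only demonstrates the property that the header value changes on every request while a key holder can still recover the underlying UUID.

```python
import hashlib
import secrets
import uuid


def _keystream(key: bytes, nonce: bytes, n: int) -> bytes:
    # Toy keystream: SHA-256(key || nonce), truncated. NOT secure;
    # for illustration of the scheme's shape only.
    return hashlib.sha256(key + nonce).digest()[:n]


def encode_persona(key: bytes, persona: uuid.UUID) -> str:
    # Header value = hex(nonce || (persona XOR keystream)); a fresh
    # nonce makes the value different on every request.
    nonce = secrets.token_bytes(16)
    ks = _keystream(key, nonce, 16)
    ct = bytes(a ^ b for a, b in zip(persona.bytes, ks))
    return (nonce + ct).hex()


def decode_persona(key: bytes, value: str) -> uuid.UUID:
    # Only a holder of `key` can recover the persona UUID.
    raw = bytes.fromhex(value)
    nonce, ct = raw[:16], raw[16:]
    ks = _keystream(key, nonce, 16)
    return uuid.UUID(bytes=bytes(a ^ b for a, b in zip(ct, ks)))


key = secrets.token_bytes(32)
p = uuid.uuid4()
v1, v2 = encode_persona(key, p), encode_persona(key, p)
```

Two encodings of the same persona are unequal on the wire, yet both decrypt to the same UUID, which is the per-request unlinkability Mike is after.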

Hi Mike
Briefly, more inline: I think you misunderstand the proposal. Not all privacy concerns secrecy, and this is an idea that explicitly doesn’t attempt to keep anything *secret* but instead asks to control its *exposure*.
> On Jan 15, 2015, at 7:03 , Mike O'Neill <michael.oneill@baycloud.com> wrote:
>
> The trouble with fixed UUIDs for this is that they will be used for tracking, cross-domain to boot.
You are assuming that UUIDs will be shared between devices and re-used; that is neither expected nor forbidden — it is something a group of user-agents may choose to do. Yes, a user’s UUID set could be shared and named, so that on another device they can say “resume searching for my partner’s birthday present”, but that is a choice. In the simple case, a new UUID would be coined each time ‘private browsing’ is enabled.
> Being origin invariant will make it unpopular with brand sites and publishers also, because their customer data will be leaked to any third-party.
Not sure what you mean here. To enable the re-use case above, my suggestion is that the IDs are not ‘within the context of a single UA’ but instead globally unique, but that’s all.
> I know the persona identifier will be mutable and under user control, but most people will not bother to change it or forget to, so it will become permanent for most.
No, exactly the opposite of what I expect: when you turn on ‘private browsing’ a new persona is made, and that persona ID is forgotten at the end of that private browsing session and not re-used.
>
> I think making the UA responsible for personae (if that’s the plural) is interesting, because it helps user control, and maybe eventually the abolition of server initiated (and user unaware) tracking.
This is completely orthogonal to whether the user is being tracked, deliberately so. This is not a request ‘please don’t track me’; it is a request ‘please keep this activity separate’. ‘Please don’t track me’ is, I believe, being worked on in another WG. :-(
>
> I do not think it would be hard to come up with a non-trackable persona, i.e. one that varies between requests, using asymmetric crypto.
Not a goal of mine. I’d rather attack the tracking question and the privacy-of-a-session question independently. Indeed, I think we tend too easily to fall into the ‘to get privacy I need secrecy’ trap, whereas there are other aspects of privacy — when is this data exposed? what and who is it linked to? — that deserve more attention.
> Concatenating a per-request nonce with the UUID then encrypting it with a user specific key, so only those party to the decryption key could determine the UUID, and making the header value different for every request. This would give absolute user control over tracking and cross-party secrecy for sites.
>
>
> Mike
David Singer
Manager, Software Standards, Apple Inc.

Hi David,
> On Jan 12, 2015, at 3:08 PM, David Singer <singer@apple.com> wrote:
>
> The user-agent can send an optional HTTP header ‘Persona:’ whose value is a suitable machine-generatable distinct identifier (e.g. a UUID). If the header is absent, the user is operating under their default (unlabeled) persona, which is distinct from all the identified personas, which in turn are also distinct from each other. A user and their user-agent may return to a persona at any time, or continue using a persona for any length of time. A persona identifier is expected to be universally unique, not contextualized to the current user-agent or device.
>
> Servers respecting this are requested to ensure that the labeled personas leave no trace or influence on each other or on the unlabeled persona. For example, activity under one persona should not affect the ads shown under a different persona; any history records that the user can see should be distinct for each persona; and so on. (It’s OK for your unlabeled persona to be reflected in labeled ones, but optional; if servers wish, they can initialize a named persona from the default, un-named one, when they first see it.)
I think it’s definitely an interesting idea. I think there may be similar thinking behind the advertising identifier proposals, although I’m not sure of the exact details of those.
I share some of Mike’s concerns. Even if some servers could use a change in Persona header to help users separate their shopping activity and help them avoid seeing ads they wouldn’t like, other servers (intentionally or unintentionally) would use the new unique and persistent user identifier to conduct tracking the user might not want. That could (*could*) undermine work done to prevent passive fingerprinting of users without their knowledge.
The use case seems very relevant though. Personally, I use private browsing modes more than anything as a way to get a new, short-term cookie jar. “What does this site look like when I’m not logged in?” “I’m using my friend’s computer but don’t want to be logged into their Facebook account while I’m browsing.” “Can I log into my email for a minute on your computer?” etc.
Are there cases where a Persona identifier header would be more useful than just clearing or separating the “cookie jars” or other stores of local state? As in the case reported in the Ars Technica article, the implemented fix was just treating HSTS records as state that shouldn’t be persisted into private browsing mode. As in previous “evercookie” cases, user agents that can clear all (or most) state mechanisms simultaneously can mitigate the concern. I think HSTS is a more difficult case because persisting the HSTS records is typically a way of increasing the user’s security against downgrade attacks.
> This feature might also be valuable for shared terminals; for example, in libraries, airline lounges, internet cafes and the like, a new persona can be minted each time the terminal is unlocked for a new session. Libraries might tie the persona to the library card, so users returning get re-linked to their online history and so on. It might also be a lightweight replacement of logging-in, for browsers on shared devices — a browser might have a simple way of saying which family member it is right now (e.g. a pull-down menu).
Yeah, I think these are good use cases. Again, I expect that some of these are implemented now by clearing cookies / local state when a new guest logs in. Firefox has a “profiles” feature that can be used for that purpose (it also separates the add-ons, bookmarks, etc. between different users of the same machine): https://developer.mozilla.org/en-US/docs/Mozilla/Multiple_Firefox_Profiles
To your earlier point:
> I have some ideas around codifying ‘private browsing mode’ and how to communicate ‘heh, I am trying to be private here!’ to servers. Is this a topic of interest to others?
Would servers see a benefit from an indication that the user is in a private browsing mode (however defined, but in this case, particularly for the mode of not persisting state on the local machine)? Maybe they could avoid downloading files or storing certain types of state — rather than asking users to check a box when they’re on a public computer, if they’re in guest/private mode the site would know that this wasn’t going to be a device with persistence for the user. Related: are private browsing modes in user agents observable by servers today?
Thanks for sharing ideas,
Nick

> On Jan 15, 2015, at 12:31 , Nick Doty <npdoty@w3.org> wrote:
>
> Are there cases where a Persona identifier header would be more useful than just clearing or separating the “cookie jars” or other stores of local state?
Yes.
Here’s an example. A couple of years ago I used ‘private browsing’ on our home computer to look for my wife’s present. Yes, all the history, cookies, etc. were cleared from the device.
But when I checked ‘search history’ on Google, of course, there was all the data! Servers are currently unaware that the user is trying to do something private; I am suggesting this as a way that they can be aware and nice, without actually impacting their business.
> As in the case reported in the Ars Technica article, the implemented fix was just treating HSTS records as state that shouldn’t be persisted into private browsing mode.
I am not trying to be anonymous when I am asking to be private; that’s secrecy and is much much harder.
> As in previous “evercookie” cases, user agents that can clear all (or most) state mechanisms simultaneously can mitigate the concern. I think HSTS is a more difficult case because persisting the HSTS records is typically a way of increasing the user’s security against downgrade attacks.
>
>> Server implementers may choose how long they retain records relating to separate personas, just as they do for today’s default persona.
>>
>> This is NOT a request to stop tracking or keeping records; that is an orthogonal question that is covered by activities such as do-not-track, cookie directives, and so on. This is about giving users control of their privacy by controlling what gets linked to what, and exposed when.
>>
>> We do not think it is particularly necessary or valuable to have a machine-readable means of discovery over whether servers support this feature. Any support that they provide is an improvement on today’s experience, where servers are unaware that users are trying to be private. Claims of support for this feature are probably better conveyed in advertising or other human-readable ways.
>>
>> This feature might also be valuable for shared terminals; for example, in libraries, airline lounges, internet cafes and the like, a new persona can be minted each time the terminal is unlocked for a new session. Libraries might tie the persona to the library card, so users returning get re-linked to their online history and so on. It might also be a lightweight replacement of logging-in, for browsers on shared devices — a browser might have a simple way of saying which family member it is right now (e.g. a pull-down menu).
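For the shared-terminal examples above, the kiosk-side bookkeeping could be a few lines. A sketch under those assumptions (the card-to-persona table and function names are hypothetical):

```python
# Hypothetical kiosk bookkeeping for the library / shared-terminal case:
# a library card maps to a stable persona (returning patrons resume their
# online history); an anonymous walk-up gets a fresh persona per unlock.
import uuid

_card_personas = {}  # card id -> persona id


def persona_for_card(card_id):
    """Persona tied to a library card, minted on first use."""
    if card_id not in _card_personas:
        _card_personas[card_id] = str(uuid.uuid4())
    return _card_personas[card_id]


def persona_for_guest():
    """Throwaway persona, minted each time the terminal is unlocked."""
    return str(uuid.uuid4())
```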
>
> Yeah, I think these are good use cases. Again, I expect that some of these are implemented now by clearing cookies / local state when a new guest logs in.
But the server can still think “same UA, same IP address, same OS, same fingerprint => same user who’s just cleared their cookies”. We want the server to think “I should segregate this”.
> Firefox has a “profiles” feature that can be used for that purpose (it also separates the add-ons, bookmarks, etc. between different users of the same machine): https://developer.mozilla.org/en-US/docs/Mozilla/Multiple_Firefox_Profiles
>
> To your earlier point:
>> I have some ideas around codifying ‘private browsing mode’ and how to communicate ‘heh, I am trying to be private here!’ to servers. Is this a topic of interest to others?
>
> Would servers see a benefit from an indication that the user is in a private browsing mode (however defined, but in this case, particularly for the mode of not persisting state on the local machine)?
The benefit is being nice to their users, and respecting their wish for privacy. The cost is an increase in the number of ‘users’ (the cheapest way to support this is to treat each persona as separate).
> Maybe they could avoid downloading files or storing certain types of state — rather than asking users to check a box when they’re on a public computer, if they’re in guest/private mode the site would know that this wasn’t going to be a device with persistence for the user. Related: are private browsing modes in user agents observable by servers today?
No, that’s the problem. At least for us, private browsing mode doesn’t “put you in a green field” or restrict what you can do. Indeed, *entry* to private browsing might do no more than snapshot the local state. It’s *exit* that discards the current state and reverts to the prior saved snapshot. For the server, exit gets you back to your ‘anonymous’ persona. Hence the permission to initialize any server state from the anonymous persona, but activity adds to the records under the named persona.
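The entry/exit semantics described here (snapshot on entry, discard-and-revert on exit) can be modelled in a few lines. A toy sketch, reducing “local state” to a cookie dictionary (real browsers snapshot far more than this):

```python
# Toy model of private-browsing state handling as described above:
# *entry* merely snapshots local state; *exit* discards what the private
# session accumulated and reverts to the snapshot.
import copy


class BrowserState:
    def __init__(self):
        self.cookies = {}       # stands in for all local state
        self._snapshot = None

    def enter_private(self):
        self._snapshot = copy.deepcopy(self.cookies)

    def exit_private(self):
        self.cookies = self._snapshot
        self._snapshot = None
```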
>
> Thanks for sharing ideas,
Thanks for discussing!
> Nick
David Singer
Manager, Software Standards, Apple Inc.

On Thursday 15 January 2015 16:35:23 David Singer wrote:
> Here’s an example. A couple of years ago I used ‘private browsing’ on our
> home computer to look for my wife’s present. Yes, all the history, cookies,
> etc. were cleared from the device.
>
> But when I checked ‘search history’ on Google, of course, there was all the
> data! Servers are currently unaware that the user is trying to do
> something private; I am suggesting this as a way that they can be aware and
> nice, without actually impacting their business.
Yes, this could be a signal that could be carried over an extended DNT
infrastructure. And you need the feedback from the server to make sure they're
actually doing it. And if they lie, let the legal system do the work...
Yep, this was also meandering through my thought garden and is an extension of
the sticky policy paradigm, if you think it through..
--Rigo

> On Jan 16, 2015, at 13:08 , Rigo Wenning <rigo@w3.org> wrote:
>
> On Thursday 15 January 2015 16:35:23 David Singer wrote:
>> Here’s an example. A couple of years ago I used ‘private browsing’ on our
>> home computer to look for my wife’s present. Yes, all the history, cookies,
>> etc. were cleared from the device.
>>
>> But when I checked ‘search history’ on Google, of course, there was all the
>> data! Servers are currently unaware that the user is trying to do
>> something private; I am suggesting this as a way that they can be aware and
>> nice, without actually impacting their business.
>
> Yes, this could be a signal that could be carried over an extended DNT
> infrastructure. And you need the feedback from the server to make sure they're
> actually doing it. And if they lie, let the legal system do the work…
Actually, I disagree.
a) It’s independent of DNT. Orthogonal.
b) Unless you are paranoid, you don’t need the feedback. Anything they do is an improvement on today, and I don’t expect there to be much in the way of conformance rules, since the details of the handling are very much specific to the nature of the service.
>
> Yep, this was also meandering through my thought garden and is an extension of
> the sticky policy paradigm, if you think it through..
>
> --Rigo
David Singer
Manager, Software Standards, Apple Inc.

On Friday 16 January 2015 13:22:20 David Singer wrote:
> > Yes, this could be a signal that could be carried over an extended DNT
> > infrastructure. And you need the feedback from the server to make sure
> > they're actually doing it. And if they lie, let the legal system do the
> > work…
> Actually, I disagree.
>
> a) It’s independent of DNT. Orthogonal.
It is yet another signal. Ok, it is not DNT, but it follows the same paradigm.
I understand the branding issue, so let's call it BND (Be Nice, Don't profile)
> b) Unless you are paranoid, you don’t need the feedback. Anything they do is
> an improvement on today, and I don’t expect there to be much in the way of
> conformance rules, since the details of the handling are very much specific
> to the nature of the service.
Nothing to do with being paranoid. "Denn nur was ihr schwarz auf weiss besitzt, könnt ihr getrost nach Hause tragen" ("for only what you possess in black and white can you confidently carry home"), says Goethe. And he is right :)
Because, without feedback, you're in non-binding hand waving. At this level and point, a cookie would do. And if you're concerned about the cookie being ephemeral, use a super-cookie. It is the feedback message that changes the nature of the protocol and the message's value, legally...
Which means feedback is the difference between the real thing and the "making of".
--Rigo

> On Jan 17, 2015, at 18:26 , Rigo Wenning <rigo@w3.org> wrote:
>
> On Friday 16 January 2015 13:22:20 David Singer wrote:
>>> Yes, this could be a signal that could be carried over an extended DNT
>>> infrastructure. And you need the feedback from the server to make sure
>>> they're actually doing it. And if they lie, let the legal system do the
>>> work…
>> Actually, I disagree.
>>
>> a) It’s independent of DNT. Orthogonal.
>
> It is yet another signal. Ok, it is not DNT, but it follows the same paradigm.
> I understand the branding issue, so let's call it BND (Be Nice Don’tprofile)
But that’s not what it is. It is NOT asking “don’t profile”; it’s asking “segregate records”.
>
>> b) Unless you are paranoid, you don’t need the feedback. Anything they do is
>> an improvement on today, and I don’t expect there to be much in the way of
>> conformance rules, since the details of the handling are very much specific
>> to the nature of the service.
>
> Nothing to do with being paranoid. "Denn nur was ihr schwarz auf weiss
> besitzt, könnt ihr getrost nach Hause tragen" says Goethe. And he is right :)
OK, I don’t mind a general statement of “we support this feature”, and you can make this machine-readable if you think it’ll result in any action by the UA. I rather suspect that having it human-readable is enough, that’s all.
>
> Because, without feedback, you're in non-binding hand waving.
There is a difference between saying that, for users to know that a server supports the feature, they need to say so somehow, and in requiring that that statement of support be machine-readable.
> At this level
> and point, a cookie would do. And if you're concerned about the cookie being
> ephemeral, use a super-cookie. It is the feedback message, that changes the
> nature of protocol and message value, legally…
Cookies are useless here; cookies are specific to a domain, and this request is quite general. One would need infinite numbers of cookies.
>
> Which means feedback is the difference between the real thing and the "making
> of".
>
> --Rigo
David Singer
Manager, Software Standards, Apple Inc.

On Monday 19 January 2015 10:35:53 David Singer wrote:
> > It is yet another signal. Ok, it is not DNT, but it follows the same
> > paradigm. I understand the branding issue, so let's call it BND (Be Nice
> > Don’tprofile)
This was a joke as BND is the acronym of the German secret service...
> But that’s not what it is. It is NOT asking “don’t profile” it’s asking
> “segregate records”.
This is much better done on the client side. We had nearly running code for
this in the PrimeLife project. You can see remains here:
http://code.w3.org/privacy-dashboard/
There, the architecture is used to track the trackers. But the underlying
architecture and ideas were basically inspired by user centric identities
management. So all this was usable in the same way for personae. And of course
there was also data handling and sticky policies that allowed for data
segregation. AFAIK SAP implemented it and you can have it as a module.
(http://www.primelife.eu/)
> >> b) Unless you are paranoid, you don’t need the feedback. Anything they do
> >> is an improvement on today, and I don’t expect there to be much in the
> >> way of conformance rules, since the details of the handling are very
> >> much specific to the nature of the service.
> >
> > Nothing to do with being paranoid. "Denn nur was ihr schwarz auf weiss
> > besitzt, könnt ihr getrost nach Hause tragen" says Goethe. And he is right
> > :)
> OK, I don’t mind a general statement of “we support this feature”, and you
> can make this machine-readable if you think it’ll result in any action by
> the UA. I rather suspect that having it human-readable is enough, that’s
> all.
If only the UA would remember where somebody said they would comply and then didn't, we could use the feedback as evidence.
As soon as you allow for human-readable declarations, you get a declaration from lawyers that they "may" offer the feature (in 22 pages, with their fingers crossed behind their backs). So the technical reduction of semantics is a feature (like having only 140 characters on Twitter).
Secondly, you have to define what "segregation" means. If it just means that my website is less stupid, so that your wife won't find out about the gifts you ordered online, then this is intelligent web design rather than a new feature. All you need is stateful interaction.
> > Because, without feedback, you're in non-binding hand waving.
>
> There is a difference between saying that, for users to know that a server
> supports the feature, they need to say so somehow, and in requiring that
> that statement of support be machine-readable.
In times when ugly cookie banners trump smart technology like DNT, you'll have to offer added value (legal certainty) in order to get anything. And I also think that hardcoding the personae into the one use case is too little.
> > At this level
> > and point, a cookie would do. And if you're concerned about the cookie
> > being ephemeral, use a super-cookie. It is the feedback message, that
> > changes the nature of protocol and message value, legally…
>
> Cookies are useless here; cookies are specific to a domain, and this request
> is quite general. One would need infinite numbers of cookies.
Why? We already have an infinite number of cookies (have you looked? :)
Because I want to be one person to one site and another person to another
site. This isn't rocket science at all AFAICT.
There should be a "forget my profile after N days", not a "don't annoy me with your stupid revelations from my profile". Data segregation alone just diminishes the annoyance factor, but doesn't add any user control or reduce the risk to democracy (the values that are behind privacy/data protection).
So having only one persona and human readable declaration is kind of 1996. But
I know that sadly enough, we are walking backwards.
--Rigo

> On Jan 19, 2015, at 15:46 , Rigo Wenning <rigo@w3.org> wrote:
>
> On Monday 19 January 2015 10:35:53 David Singer wrote:
>>> It is yet another signal. Ok, it is not DNT, but it follows the same
>>> paradigm. I understand the branding issue, so let's call it BND (Be Nice
>>> Don’tprofile)
>
> This was a joke as BND is the acronym of the German secret service...
>
>> But that’s not what it is. It is NOT asking “don’t profile” it’s asking
>> “segregate records”.
>
> This is much better done on the client side.
I fail to see how I can segregate Google’s history of me, solely on the client side.
Private Browsing DOES this on the client side; I am exploring conveying this to the servers as an addition.
>>>
>> OK, I don’t mind a general statement of “we support this feature”, and you
>> can make this machine-readable if you think it’ll result in any action by
>> the UA. I rather suspect that having it human-readable is enough, that’s
>> all.
>
> If only the UA would remember where somebody said he would follow and didn't
> and we could use the feedback as evidence.
sure, that’s part of the DNT well-known resource motivation.
> Secondly, you have to define what "segregation" means. If it just means that
> my website is less stupid so that your wife won't find out about the gifts you
> ordered online, then this is intelligent web design rather than a new feature.
> All you need is stateful interaction.
well, I roughly agree. Not sure what you mean by the last, but in general, they promise that your activity in one persona will not affect what is visible in another, except that they may initialize named persona from the anonymous one.
>>> Because, without feedback, you're in non-binding hand waving.
>>
>> There is a difference between saying that, for users to know that a server
>> supports the feature, they need to say so somehow, and in requiring that
>> that statement of support be machine-readable.
>
> In times when ugly cookie - banners trump smart technology like DNT, you'll
> have to offer an added value (legal certainty) in order to get anything. And I
> also think that hardcoding the personae into the one use case is too little.
I am not sure a nice ask (one that’s not about tracking or secrecy, but about being nice in linking data) needs legal backing.
>
>>> At this level
>>> and point, a cookie would do. And if you're concerned about the cookie
>>> being ephemeral, use a super-cookie. It is the feedback message, that
>>> changes the nature of protocol and message value, legally…
>>
>> Cookies are useless here; cookies are specific to a domain, and this request
>> is quite general. One would need infinite numbers of cookies.
>
> Why? We already have an infinite number of cookies (have you looked? :)
Because I am asking every server I visit, whether or not visited before. Cookies are set by the servers, and have a syntax that is specific to each server.
David Singer
Manager, Software Standards, Apple Inc.

On Monday 19 January 2015 16:01:07 David Singer wrote:
> >> But that’s not what it is. It is NOT asking “don’t profile” it’s asking
> >> “segregate records”.
> >
> > This is much better done on the client side.
>
> I fail to see how I can segregate Google’s history of me, solely on the
> client side.
By giving Google a different identity when shopping for gifts. This is done using another login/cookie/ID. Ok, they theoretically can correlate you via the IP address, but doing so would be clearly abusive.
>
> Private Browsing DOES this on the client side; I am exploring conveying
> this to the servers as an addition.
Private browsing is just ONE persona you're offering. In real life I see my
kids using at least 3-5 personae while surfing. They do so by remembering in
their head because browsers are too dumb to support them conveniently. It just
doesn't make ad-money to help kids and people segregate their roles online.
And of course, I'm a little too enthusiastic after over 8 years of research in
that area.
> >> OK, I don’t mind a general statement of “we support this feature”, and
> >> you
> >> can make this machine-readable if you think it’ll result in any action by
> >> the UA. I rather suspect that having it human-readable is enough, that’s
> >> all.
> >
> > If only the UA would remember where somebody said he would follow and
> > didn't and we could use the feedback as evidence.
>
> sure, that’s part of the DNT well-known resource motivation.
and I fought for it as long as I was able to. At times nobody cared... It is more important than people think it is.
>
> > Secondly, you have to define what "segregation" means. If it just means
> > that my website is less stupid so that your wife won't find out about the
> gifts you ordered online, then this is intelligent web design rather than
> > a new feature. All you need is stateful interaction.
>
> well, I roughly agree. Not sure what you mean by the last,
stateful means that they know that this is still the same visitor. This means
they can attach "forget after this session" to whatever trace they collect.
> but in general,
> they promise that your activity in one persona will not affect what is
> visible in another, except that they may initialize named persona from the
> anonymous one.
While shopping, you're not anonymous anyway. I even would say that without
using Tor you're not anonymous. But nobody wants to be anonymous. I just don't
want to be confronted with my surfing habits from 1995.
> > In times when ugly cookie - banners trump smart technology like DNT,
> > you'll
> > have to offer an added value (legal certainty) in order to get anything.
> > And I also think that hardcoding the personae into the one use case is
> > too little.
> I am not sure a nice ask, that’s not about tracking/secrecy but about being
> nice in linking data, needs legal backing.
If it didn't, we would have a different discussion. Linking those traces is real money. And the Zeitgeist is to disrespect you even without money. The
challenge is to exploit the unknown click-sheep the best one can. As I said,
DNT would have been done long ago, had it allowed continued linking that isn't
just shown to the user. But as long as the links are there, they will occur
inadvertently with gifts for your wife. Because you would need two personae to
avoid it. And here we are back. Instead of doing that server side, it is much
smarter to do that client side. In the seventies, data protection was also
about smarter computing. Here we go again.
> >>
> >> Cookies are useless here; cookies are specific to a domain, and this
> >> request is quite general. One would need infinite numbers of cookies.
> >
> > Why? We already have an infinite number of cookies (have you looked? :)
>
> Because I am asking every server I visit, whether or not visited before.
> Cookies are set by the servers, and have a syntax that is specific to each
> server.
You seem to want a general statement of the type: don't be so stupid as to reveal the gifts I've bought with those-who-bought-this-also-bought-that statements. Do we really need an HTTP header for that? And how do you switch?
In fact, what you want is a mode saying: "Hey, this should not be added to my
profile if you respect me." Again, we are in personae. You could switch DNT on
and off to do the same. Ok, we have middle states where I still want my
fidelity points for the gift I bought but I don't want this to be revealed.
This is a persona in the middle between track me and do not track me.
And this is why Matthias wanted to have more states, but then turned the
protocol so that a service could offer certain roles and you could chose with
a signal. And of course, the most basic thing would be to define a simple role
and have human-readable endorsement. The most simple of those actually is Do
not track IMHO..
--Rigo

> On Jan 20, 2015, at 4:42 , Rigo Wenning <rigo@w3.org> wrote:
>
> On Monday 19 January 2015 16:01:07 David Singer wrote:
>>>> But that’s not what it is. It is NOT asking “don’t profile” it’s asking
>>>> “segregate records”.
>>>
>>> This is much better done on the client side.
>>
>> I fail to see how I can segregate Google’s history of me, solely on the
>> client side.
>
> By giving Google a different identity when shopping gifts. This is done using
> another login/cookie/ID. Ok, they theoretically can correlate you via the IP
> address, but doing so would be clearly abusive.
So, you’re suggesting that for every server I visit, I have to log off and make a new account? I don’t think that that is practical or pleasant.
>>
>> Private Browsing DOES this on the client side; I am exploring conveying
>> this to the servers as an addition.
>
> Private browsing is just ONE persona you're offering.
No, a browser might make a new persona at the start of each private browsing session. Or it might allow you to resume a previous persona. That’s UA design.
>>> Secondly, you have to define what "segregation" means. If it just means
>>> that my website is less stupid so that your wife won't find out about the
>>> gifts you ordered online, than this is rather intelligent web design than
>>> a new feature. All you need is stateful interaction.
>>
>> well, I roughly agree. Not sure what you mean by the last,
>
> stateful means that they know that this is still the same visitor. This means
> they can attach "forget after this session" to whatever trace they collect.
And indeed a change of persona separates the previous state from the current one. Whether the server has to delete it is a separate question (that’s a different control).
>
>> but in general,
>> they promise that your activity in one persona will not affect what is
>> visible in another, except that they may initialize named persona from the
>> anonymous one.
> While shopping, you're not anonymous anyway.
I use the name ‘anonymous persona’ to identify what your persona is when you don’t send a header. I should use a different label, ‘base persona’ or ‘default persona’ or something; it’s clearly confusing. ‘Anonymous’ here means without a name, i.e. without the identifier of the persona carried in the header; it’s not about being ‘anonymous’ online (very hard to achieve).
> I even would say that without
> using Tor you're not anonymous. But nobody wants to be anonymous. I just don't
> want to be confronted with my surfing habits from 1995.
I have confused you.
>
>>> In times when ugly cookie - banners trump smart technology like DNT,
>>> you'll
>>> have to offer an added value (legal certainty) in order to get anything.
>>> And I also think that hardcoding the personae into the one use case is
>>> too little.
>> I am not sure a nice ask, that’s not about tracking/secrecy but about being
>> nice in linking data, needs legal backing.
>
> If it wouldn't we would have a different discussion. Linking those traces is
> true money.
The header does NOT ask the server to forget data or not link it to me; they are free to remember that all these personae are the same person. It’s a request to keep the data segregated, especially when presenting it or affecting the user’s experience.
> And the Zeitgeist is to disrespect you even without money. The
> challenge is to exploit the unknown click-sheep the best one can. As I said,
> DNT would have been done long ago, had it allowed continued linking that isn't
> just shown to the user. But as long as the links are there, they will occur
> inadvertently with gifts for your wife. Because you would need two personae to
> avoid it. And here we are back. Instead of doing that server side, it is much
> smarter to do that client side. In the seventies, data protection was also
> about smarter computing. Here we go again.
>
>>>>
>>>> Cookies are useless here; cookies are specific to a domain, and this
>>>> request is quite general. One would need infinite numbers of cookies.
>>>
>>> Why? We already have an infinite number of cookies (have you looked? :)
>>
>> Because I am asking every server I visit, whether or not visited before.
>> Cookies are set by the servers, and have a syntax that is specific to each
>> server.
>
> You seem to want a general statement of the type: Don't be so stupid to reveal
> the gifts I've bought with stupid those-who-bought-this-also-bought-that
> statements. Do we really need an http-header for that? And how do you switch?
You switch however the UA allows you to. Trivially, a UA might mint a new persona each time a new private browsing session starts.
> In fact, what you want is a mode saying: "Hey, this should not be added to my
> profile if you respect me.”
No, I don’t. That’s do-not-track. I am asking “please keep the records associated with this persona separate”.
> Again, we are in personae. You could switch DNT on
> and off to do the same.
No, DNT asks the server to stop recording completely. This does not.
> Ok, we have middle states where I still want my
> fidelity points for the gift I bought but I don't want this to be revealed.
> This is a persona in the middle between track me and do not track me.
Yes.
Indeed, one way a server can segregate is not to keep records at all, but it is only one way.
David Singer
Manager, Software Standards, Apple Inc.

* Rigo Wenning wrote:
>By giving Google a different identity when shopping gifts. This is done using
>another login/cookie/ID. Ok, they theoretically can correlate you via the IP
>address, but doing so would be clearly abusive.
It seems reasonable to assume they would do that for fraud detection, to
help users merge and link accounts not meant to be fully separate and so
on.
>If it wouldn't we would have a different discussion. Linking those traces is
>true money. And the Zeitgeist is to disrespect you even without money. The
>challenge is to exploit the unknown click-sheep the best one can. As I said,
>DNT would have been done long ago, had it allowed continued linking that isn't
>just shown to the user.
I argued back in 2011 that the Tracking Protection Working Group needs
to form some consensus around what it wants to accomplish, otherwise it
will not be able to produce anything of value in reasonable time. Well,
Aleecia M. McDonald, who chaired the group at the time, disagreed:
“Bjoern is correct that the charter is very broad. Several people agree
with the idea that we must figure out why we are here, what we want to
accomplish, and we should start with principles like what is privacy,
does privacy matter and if so to whom, and so forth. While I have some
sympathy for that view, I've pushed not to have those discussions.”
--
Björn Höhrmann · mailto:bjoern@hoehrmann.de · http://bjoern.hoehrmann.de
D-10243 Berlin · PGP Pub. KeyID: 0xA4357E78 · http://www.bjoernsworld.de
Available for hire in Berlin (early 2015) · http://www.websitedev.de/

> On Jan 22, 2015, at 16:09 , Bjoern Hoehrmann <derhoermi@gmx.net> wrote:
>
> * Rigo Wenning wrote:
>> By giving Google a different identity when shopping gifts. This is done using
>> another login/cookie/ID. Ok, they theoretically can correlate you via the IP
>> address, but doing so would be clearly abusive.
>
> It seems reasonable to assume they would do that for fraud detection, to
> help users merge and link accounts not meant to be fully separate and so
> on.
As I say, the ‘persona’ proposal does NOT ask the sites to ignore or not know that it’s you; it asks the sites to keep the records well enough segregated that the personae don’t affect each other in a visible way.
David Singer
Manager, Software Standards, Apple Inc.

On Fri, Jan 23, 2015 at 6:03 AM, David Singer <singer@apple.com> wrote:
>
> As I say, the ‘persona’ proposal does NOT ask the sites to ignore or not
> know that it’s you; it asks the sites to keep the records well enough
> segregated that the personae don’t affect each other in a visible way.
>
Hi David and PING folks... even after spending some quality time with this
thread, I'm not sure I fully understand the sketch of what something like
personae could facilitate (and it's still hard for me to see how this is
orthogonal to efforts like DNT in TPWG). At some point, it would be good to
see not necessarily a spec, but something that is relatively self-contained
-- maybe I should just read the thread again! -- that describes the bounds
of the proposal.
And I could very well just be clueless...
--
Joseph Lorenzo Hall
Chief Technologist
Center for Democracy & Technology
1634 I ST NW STE 1100
Washington DC 20006-4011
(p) 202-407-8825
(f) 202-637-0968
joe@cdt.org
PGP: https://josephhall.org/gpg-key
fingerprint: 3CA2 8D7B 9F6D DBD3 4B10 1607 5F86 6987 40A9 A871

Oh dear, I am clearly explaining this badly.
Let’s try again.
The problem: quite a few browsers today have what they call “private browsing mode” or the like. In this mode, all local state that is accumulated is discarded at the end of the private browsing session (when the mode is turned off). After turning it off, the local machine ideally has no trace at all of what was done in the private mode. The discarded state includes browsing history, cookies, local storage, etc. I think that browsers can/do initialize the private session from the user’s current state when they start private mode.
Advantage: if it’s a shared computer, you don’t leave any trace.
So, private browsing sort-of-looks like this, in terms of state: two private sessions are started and then ended. These sessions are initialized from the base state, which is not updated while the private sessions are in process.
              +--[private 1]--x            +--[private 2]--x
              |                            |
[base state]--+ . . . . . . . +------------+ . . . . . . . +--------
                                                             Time ->
This means that private browsing still ‘works’ on the web; cookies flow, referer headers, and so on, all as normal. The important aspect of this is whether a trace is left on the ‘permanent history’.
Problem statement: the servers are completely unaware of this mode, and so any history etc. THEY keep is still visible.
Proposal:
The servers have various means to work out who this is, and attach history (these means include cookies, fingerprinting and so on). As noted above, we don’t seek to break normal browsing by refusing to accept storage etc. (e.g. of cookies), so a simple ‘binary’ signal in an HTTP header “I am trying to be private here” doesn’t help, as the server won’t know from request to request whether this is part of the same session or not.
Hence, the idea to introduce a header that identifies which ‘private session’ the user is in. Since, in fact, this can be used for other purposes than private browsing, and it’s logically possible for the browser to have multiple windows open, or separate sessions, or to return to a private session, we thought this was essentially an indication of which ‘aspect’ of the user was being presented here, their persona. So, we needed a session — persona — identifier. Both to make it easy to generate, and to make it possible to transfer a private session from one device to another, we took the easy route of suggesting that UUIDs are a suitable identification tool.
Here is the original suggestion I sent. Note that the server is being asked to segregate state, not to stop keeping state.
* * * * *
The user-agent can send an optional HTTP header ‘Persona:’ whose value is a suitable machine-generatable distinct identifier (e.g. a UUID). If the header is absent, the user is operating under their default (unlabeled) persona, which is distinct from all the identified personas, which in turn are also distinct from each other. A user and their user-agent may return to a persona at any time, or continue using a persona for any length of time. A persona identifier is expected to be universally unique, not contextualized to the current user-agent or device.
Servers respecting this are requested to ensure that the labeled personas leave no trace or influence on each other or on the unlabeled persona. For example, activity under one persona should not affect the ads shown under a different persona; any history records that the user can see should be distinct for each persona; and so on. (It’s OK for your unlabeled persona to be reflected in labeled ones, but optional; if servers wish, they can initialize a named persona from the default, un-named one, when they first see it.)
Server implementers may choose how long they retain records relating to separate personas, just as they do for today’s default persona.
This is NOT a request to stop tracking or keeping records; that is an orthogonal question that is covered by activities such as do-not-track, cookie directives, and so on. This is about giving users control of their privacy by controlling what gets linked to what, and exposed when.
We do not think it is particularly necessary or valuable to have a machine-readable means of discovering whether servers support this feature. Any support that they provide is an improvement on today’s experience, where servers are unaware that users are trying to be private. Claims of support for this feature are probably better conveyed in advertising or other human-readable ways.
This feature might also be valuable for shared terminals; for example, in libraries, airline lounges, internet cafes and the like, a new persona can be minted each time the terminal is unlocked for a new session. Libraries might tie the persona to the library card, so users returning get re-linked to their online history and so on. It might also be a lightweight replacement of logging-in, for browsers on shared devices — a browser might have a simple way of saying which family member it is right now (e.g. a pull-down menu).
* * * *
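On the wire, the suggestion above needs very little machinery. Here is a minimal client-side sketch in Python (standard library only). The ‘Persona:’ header name and the use of UUIDs are taken from the proposal; the function names and the example URL are invented for illustration:

```python
# Client-side sketch of the proposed "Persona:" header.
import uuid
from typing import Optional
from urllib.request import Request

def new_persona() -> str:
    # A persona identifier is expected to be universally unique, so a
    # freshly generated UUID is a natural fit: easy to mint, and it can
    # be carried to another device to resume the same private session.
    return str(uuid.uuid4())

def request_as(url: str, persona: Optional[str]) -> Request:
    # Omitting the header entirely means the default (unlabeled) persona.
    req = Request(url)
    if persona is not None:
        req.add_header("Persona", persona)
    return req

# Two windows under different personae simply send different identifiers;
# reusing an identifier later returns the user to that persona.
gift_shopping = new_persona()
req = request_as("https://shop.example/gifts", gift_shopping)
```

Note that nothing here asks the server to forget anything; the header only labels which persona the traffic belongs to.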
I think it’s interesting in a number of respects:
a) it’s an improvement on the status quo, where servers are completely unaware of any attempt to be private
b) it’s not asking for *secrecy* at all; servers are at liberty to remember as much as before; there are very few privacy proposals that don’t slide into trying to be secret, and this is one. Privacy is also about where information is exposed, what it is linked to, and so on.
c) it recognizes that privacy is not a binary state — it’s not an either-or (you have it or you don’t); it’s a spectrum, and it’s about perception and control and exposure as much as it is about recording and so on.
> On Jan 23, 2015, at 13:28 , Joe Hall <joe@cdt.org> wrote:
>
>
>
> On Fri, Jan 23, 2015 at 6:03 AM, David Singer <singer@apple.com> wrote:
>
> As I say, the ‘persona’ proposal does NOT ask the sites to ignore or not know that it’s you; it asks the sites to keep the records well enough segregated that the personae don’t affect each other in a visible way.
>
>
> Hi David and PING folks... even after spending some quality time with this thread, I'm not sure I fully understand the sketch of what something like personae could facilitate (and it's still hard for me to see how this is orthogonal to efforts like DNT in TPWG). At some point, it would be good to see not necessarily a spec, but something that is relatively self-contained -- maybe I should just read the thread again! -- that describes the bounds of the proposal.
>
> And I could very well just be clueless...
>
>
>
David Singer
Manager, Software Standards, Apple Inc.

On Mon, Jan 26, 2015 at 4:33 AM, David Singer <singer@apple.com> wrote:
> Oh dear, I am clearly explaining this badly.
Thanks much for this, David. I definitely see it clearly now.
> I think it’s interesting in a number of respects:
>
> a) it’s an improvement on the status quo, where servers are completely unaware of any attempt to be private
I guess traditional client privacy tools see the servers as potential
adversaries, so leaking an indication of intent in terms of private
browsing could be a risk (e.g., server says, "ooooh, this session I
would have associated with another session seems to want me not to
link those two sessions... in fact, I'll label it as 'stuff this
person really doesn't want people to know about'"). Here I guess this
isn't clearly a leak of "I'm trying to be private, mom!!!" since it
could very well be just a different person's session using essentially
the same UA/env as a previous person. This makes me wonder if existing
tools to segregate "persona"-like elements (accounts on an OS,
profiles for something like Mozilla products) don't do that enough? or
maybe they're too heavy?
Do you see a need for a server-side personae compliance spec, David?
(Or am I thinking too far ahead or making this too complicated?)
> b) it’s not asking for *secrecy* at all; servers are at liberty to remember as much as before; there are very few privacy proposals that don’t slide into trying to be secret, and this is one. Privacy is also about where information is exposed, what it is linked to, and so on.
Interesting, would servers be at liberty to simply link all the
personas they identify as likely the same user? (e.g., using fancy
analytics like typing analysis, etc. to tell if two different persona
are in fact the same person) That would seem to be a good part of the
bargain to have here... and perhaps this isn't as complicated in terms
of server compliance as TPWG/DNT?
> c) it recognizes that privacy is not a binary state — it’s not an either-or (you have it or you don’t); it’s a spectrum, and it’s about perception and control and exposure as much as it is about recording and so on.
Forgive me again... are you saying that by being able to have as many
persona as I can keep track of that I'm "articulating" (a social
science term of art, sorry) different aspects of my being that I'd
rather servers not link together? That is rather interesting. For
example, you could have a persona for activities that you want privacy
of a certain level (say me looking at job candidate websites online)
and another persona for activities of a higher level (say, if I'm
looking at content online that I'd rather not have linked to my
not-so-private self)?
thanks again, Joe

> On Jan 26, 2015, at 19:38 , Joe Hall <joe@cdt.org> wrote:
>
> On Mon, Jan 26, 2015 at 4:33 AM, David Singer <singer@apple.com> wrote:
>> Oh dear, I am clearly explaining this badly.
>
> Thanks much for this, David. I definitely see it clearly now.
>
>> I think it’s interesting in a number of respects:
>>
>> a) it’s an improvement on the status quo, where servers are completely unaware of any attempt to be private
>
> I guess traditional client privacy tools see the servers as potential
> adversaries, so leaking an indication of intent in terms of private
> browsing could be a risk (e.g., server says, "ooooh, this session I
> would have associated with another session seems to want me not to
> link those two sessions... in fact, I'll label it as 'stuff this
> person really doesn't want people to know about'"). Here I guess this
> isn't clearly a leak of "I'm trying to be private, mom!!!" since it
> could very well be just a different person's session using essentially
> the same UA/env as a previous person. This makes me wonder if existing
> tools to segregate "persona"-like elements (accounts on an OS,
> profiles for something like Mozilla products) don't do that enough? or
> maybe they're too heavy?
yes, it would be a real pain for the same user to log out and log in again to a different account. also, the server might still conclude (wrongly) it’s the same person, if they are doing it on heuristics like IP address and didn’t set or change cookies on the first visit.
>
> Do you see a need for a server-side personae compliance spec, David?
> (Or am I thinking too far ahead or making this too complicated?)
I am not opposed to or proposing such. I have a hard time imagining what it would say, though. The ways in which a server might expose the fact that it thinks these two actions form part of the same person’s history are myriad, and it’s about exposing that.
>
>> b) it’s not asking for *secrecy* at all; servers are at liberty to remember as much as before; there are very few privacy proposals that don’t slide into trying to be secret, and this is one. Privacy is also about where information is exposed, what it is linked to, and so on.
>
> Interesting, would servers be at liberty to simply link all the
> personas they identify as likely the same user? (e.g., using fancy
> analytics like typing analysis, etc. to tell if two different persona
> are in fact the same person) That would seem to be a good part of the
> bargain to have here... and perhaps this isn't as complicated in terms
> of server compliance as TPWG/DNT?
yes, the bargain to the server is sort-of “YOU can know this is all me; just please segregate it so that that is not evident externally”. It’s all about being nice about who you expose the data to, not what you record in the first place.
>
>> c) it recognizes that privacy is not a binary state — it’s not an either-or (you have it or you don’t); it’s a spectrum, and it’s about perception and control and exposure as much as it is about recording and so on.
>
> Forgive me again... are you saying that by being able to have as many
> persona as I can keep track of that I'm "articulating" (a social
> science term of art, sorry) different aspects of my being that I'd
> rather servers not link together? That is rather interesting. For
> example, you could have a persona for activities that you want privacy
> of a certain level (say me looking at job candidate websites online)
> and another persona for activities of a higher level (say, if I'm
> looking at content online that I'd rather not have linked to my
> not-so-private self)?
yes, indeed. you do a job search, you don’t want ads for open positions appearing when you are at work. and so on.
I think we fall into either/or traps too much in privacy thinking. either the server doesn’t keep the data … or it can do whatever it likes with it. if a single event can be recorded…then everything can be. either something is secret…or it’s completely public. if I am OK with someone taking a holiday snap that includes me…then I am OK with someone following me around with a video camera. and so on. I am increasingly thinking that none of these are true. things you are happy to share with your spouse and doctor are still ‘private’. privacy is not always, or even mostly, secrecy; it’s about awareness, about control, about boundaries.
>
> thanks again, Joe
my pleasure! thanks for pushing me.
David Singer
Manager, Software Standards, Apple Inc.
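The bargain David describes in the exchange above (the server may know it is all one user, but must keep that linkage from being evident externally) can be sketched as storage keyed by user and persona. This is an illustrative model only, not anything specified in the thread; all names are hypothetical:

```python
# Server-side model of the segregation bargain: records are stored per
# (user, persona), so anything the service exposes (visible history, ads)
# never crosses persona boundaries, even though the server internally
# knows the personae belong to one user.
from collections import defaultdict
from typing import Optional

class SegregatingHistory:
    def __init__(self) -> None:
        self._events = defaultdict(list)  # (user, persona) -> events

    def record(self, user: str, persona: Optional[str], event: str) -> None:
        # None stands for the default, unlabeled persona.
        self._events[(user, persona)].append(event)

    def exposed_history(self, user: str, persona: Optional[str]) -> list:
        # Only the requesting persona's own records are ever shown.
        return list(self._events[(user, persona)])

store = SegregatingHistory()
store.record("chaals", "job-hunt-uuid", "viewed vacancy")
store.record("chaals", None, "read news")
```

The server remains free to retain everything; the only promise concerns what it exposes under each persona.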

Joe Hall schreef op 2015-01-26 19:38:
(...)>
>> c) it recognizes that privacy is not a binary state — it’s not an
>> either-or (you have it or you don’t); it’s a spectrum, and it’s about
>> perception and control and exposure as much as it is about recording
>> and so on.
>
> Forgive me again... are you saying that by being able to have as many
> persona as I can keep track of that I'm "articulating" (a social
> science term of art, sorry) different aspects of my being that I'd
> rather servers not link together? That is rather interesting. For
> example, you could have a persona for activities that you want privacy
> of a certain level (say me looking at job candidate websites online)
> and another persona for activities of a higher level (say, if I'm
> looking at content online that I'd rather not have linked to my
> not-so-private self)?
>
> thanks again, Joe
Joe, David,
If I am not mistaken, Joe's description opens up a possible
implementation of contextual integrity [Nissenbaum].
Best,
Rob

On Mon, Jan 26, 2015 at 3:24 PM, Rob van Eijk <rob@blaeu.com> wrote:
>
> Joe Hall schreef op 2015-01-26 19:38:
> (...)>
>>>
>>> c) it recognizes that privacy is not a binary state — it’s not an
>>> either-or (you have it or you don’t); it’s a spectrum, and it’s about
>>> perception and control and exposure as much as it is about recording and so
>>> on.
>>
>>
>> Forgive me again... are you saying that by being able to have as many
>> persona as I can keep track of that I'm "articulating" (a social
>> science term of art, sorry) different aspects of my being that I'd
>> rather servers not link together? That is rather interesting. For
>> example, you could have a persona for activities that you want privacy
>> of a certain level (say me looking at job candidate websites online)
>> and another persona for activities of a higher level (say, if I'm
>> looking at content online that I'd rather not have linked to my
>> not-so-private self)?
>>
>> thanks again, Joe
>
>
> Joe, David,
> If I am not mistaken, Joe's description opens up a possible implementation
> of contextual integrity [Nissenbaum].
That's a neat way to think about it!
The persona concept here is focused on the user controlling the notion
of context, when in Helen's theory contexts are more socially
constructed, I believe (so not just a product of the user's
consciousness, but of norms hammered out in messy society). For
example, in CI you can argue that secondary uses of health information
that may be a privacy violation for the individual (e.g., sharing a
positive HIV test result with a national health service) are not
problematic writ large if that data is used to provide a larger
benefit to the larger context of "health". Said differently, using a
test result to protect population health that may go against the
confidentiality desires of the individual is not a misuse because it
preserves the context of the original information interaction between
the individual and physician.
Ok, now I may have confused myself. I'll stop now!
best, Joe

Hi folks - just catching up on this very interesting thread after a few days off. I think Joe raises two important questions below, under (a) and (c).
Some comments inline...
On 26 Jan 2015, at 18:38, Joe Hall <joe@cdt.org> wrote:
> On Mon, Jan 26, 2015 at 4:33 AM, David Singer <singer@apple.com> wrote:
>> Oh dear, I am clearly explaining this badly.
>
> Thanks much for this, David. I definitely see it clearly now.
>
>> I think it’s interesting in a number of respects:
>>
>> a) it’s an improvement on the status quo, where servers are completely unaware of any attempt to be private
>
> I guess traditional client privacy tools see the servers as potential
> adversaries, so leaking an indication of intent in terms of private
> browsing could be a risk (e.g., server says, "ooooh, this session I
> would have associated with another session seems to want me not to
> link those two sessions... in fact, I'll label it as 'stuff this
> person really doesn't want people to know about'"). Here I guess this
> isn't clearly a leak of "I'm trying to be private, mom!!!" since it
> could very well be just a different person's session using essentially
> the same UA/env as a previous person. This makes me wonder if existing
> tools to segregate "persona"-like elements (accounts on an OS,
> profiles for something like Mozilla products) don't do that enough? or
> maybe they're too heavy?
>
> Do you see a need for a server-side personae compliance spec, David?
> (Or am I thinking too far ahead or making this too complicated?)
Right - David is suggesting, if I understand it correctly, that users should be able to associate an identifier with a given private browsing persona - such that any private browsing sessions initiated under that persona share the same identifier.
So - as Joe suggests below - I might use persona A when browsing a job vacancies site, and persona B when *cough* ‘looking at content online’…
My initial reaction was that adding an identifier for each persona just increases the linkability of data gathered by the server. But then, I guess, if the server is recording browser-independent identifiers like IP address, then the “per-persona” identifier does not make things much worse.
>
>> b) it’s not asking for *secrecy* at all; servers are at liberty to remember as much as before; there are very few privacy proposals that don’t slide into trying to be secret, and this is one. Privacy is also about where information is exposed, what it is linked to, and so on.
>
> Interesting, would servers be at liberty to simply link all the
> personas they identify as likely the same user? (e.g., using fancy
> analytics like typing analysis, etc. to tell if two different persona
> are in fact the same person) That would seem to be a good part of the
> bargain to have here... and perhaps this isn't as complicated in terms
> of server compliance as TPWG/DNT?
>
>> c) it recognizes that privacy is not a binary state — it’s not an either-or (you have it or you don’t); it’s a spectrum, and it’s about perception and control and exposure as much as it is about recording and so on.
>
> Forgive me again... are you saying that by being able to have as many
> persona as I can keep track of that I'm "articulating" (a social
> science term of art, sorry) different aspects of my being that I'd
> rather servers not link together? That is rather interesting. For
> example, you could have a persona for activities that you want privacy
> of a certain level (say me looking at job candidate websites online)
> and another persona for activities of a higher level (say, if I'm
> looking at content online that I'd rather not have linked to my
> not-so-private self)?
I think this kind of persona ‘articulation’ is key to online privacy. It is intimately linked with the way we understand privacy in real life. We represent ourselves differently to, say, our doctor, our employer, our spouse, our children (NB - this is not deception; it’s selectivity. Representing oneself differently according to context does not imply a lack of integrity on the part of the individual). If users cannot selectively represent subsets of their attributes online, then privacy really is dead. Or at least in a possibly-reversible coma. However, as above, we have to be mindful of the fact that persona separation at the client side can only achieve so much. If servers are able to “re-connect” personas that the user is trying to keep separate (for instance, by linking identifiers over which the user has no control), then the goal of "privacy through persona separation" is at risk.
Just as Helen Nissenbaum’s is the seminal work on contextual integrity, I think Andreas Pfitzmann’s paper on anonymity/unlinkability is the definitive work on “re-connecting” personal data…
Hope this helps - as I say, a very interesting thread…
Robin
>
> thanks again, Joe
>

> On Jan 29, 2015, at 15:33 , chaals@yandex-team.ru wrote:
>
> Basically +1… more inline
yay, I think you have it and we’re converging. Yes, the [priest+doctor | server] clearly knows that it’s Chaals under both personae; but as you say, [he it] is being respectful that in one case they are treating your body and the other your soul, and keeps those considerations separate.
Yes, it’s like encountering your shrink at a party. He knows it’s you, you know he knows; but he doesn’t expose in this context (the party) what he knows from the other context (the analysis sessions). That is respecting your privacy.
David Singer
Manager, Software Standards, Apple Inc.

On 01/29/2015 09:43 AM, David Singer wrote:
>
>> On Jan 29, 2015, at 15:33 , chaals@yandex-team.ru wrote:
>>
>> Basically +1… more inline
>
> yay, I think you have it and we’re converging. Yes, the [priest+doctor | server] clearly knows that it’s Chaals under both personae; but as you say, [he it] is being respectful that in one case they are treating your body and the other your soul, and keeps those considerations separate.
>
> Yes, it’s like encountering your shrink at a party. He knows it’s you, you know he knows; but he doesn’t expose in this context (the party) what he knows from the other context (the analysis sessions). That is respecting your privacy.
Interesting mix of norms and tech -- and yes, a different privacy threat
model from the one many of us are accustomed to considering. Here, we're
trusting the server to share our interests and want to help us enforce
the contextual boundaries we choose, even if its knowledge could span
those boundaries.
This model is a better match with the Web Origin security model -- where
an origin site is presumed to have control of the web application
security, and the end-user must choose to trust the origin (with limited
user-side overrides) or not visit the site.
I wonder what sorts of feedback could help to reinforce to end-users
that their trust was in fact merited.
--Wendy
--
Wendy Seltzer -- wseltzer@w3.org +1.617.715.4883 (office)
Policy Counsel and Domain Lead, World Wide Web Consortium (W3C)
http://wendy.seltzer.org/ +1.617.863.0613 (mobile)

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1
> Interesting mix of norms and tech -- and yes, a different privacy threat
> model from the one many of us are accustomed to considering. Here, we're
> trusting the server to share our interests and want to help us enforce
> the contextual boundaries we choose, even if its knowledge could span
> those boundaries.
>
> This model is a better match with the Web Origin security model -- where
> an origin site is presumed to have control of the web application
> security, and the end-user must choose to trust the origin (with limited
> user-side overrides) or not visit the site.
>
> I wonder what sorts of feedback could help to reinforce to end-users
> that their trust was in fact merited.
>
> --Wendy
>
It would have to include all the servers being accessed, third parties also. I think David's header would be seen by all of them, and it would only take one to ignore the contextual boundaries, decide to combine multiple personas with other data in a PII-keyed database, then broadcast it to the world (and UA-based UUIDs are far more reliably user-identifying than IP addresses, which are usually ephemeral and non-unique).
Maybe there should be an implicit web of trust that covers all the servers receiving user specific data on a page, where they all commit to a common declared level of privacy and security. The browser could then have UI to communicate that.
WebID could be used to identify all the parties (not just origins), and a manifest could define the trust relationship.
Mike
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.13 (MingW32)
Comment: Using gpg4o v3.4.19.5391 - http://www.gpg4o.com/
Charset: utf-8
iQEcBAEBAgAGBQJUyndEAAoJEHMxUy4uXm2JSeMIAMmr8UE6vjZuhQnhBfNihFsr
Tjm9k8/l0OwywckMwFadKL/sFP2SSLP8tzWnXI87UScAJXXAM9/y3bxUKLzY88+9
rnYRQYHGzEpIzuSN/rRvf8/EOiVfA2CrMQ0h4c+WofrqARNU2xhI7XPY2nI7v2Nl
sCsK0y89+cKCBDe41jkWvs+vkjrlaCcMvpold6BOPFgIcKSWlDtDKek8bQ78qxi4
sgmAr41TL6/BnBjxgUh5NDescGLh7DPDmK4/YoLjr1E3IAU2io7h1WevVzxgC+tj
H/W2oeFlU9dLASm0aFPOfQ98zWvDen94XYFd4SNFJqYgPGwMgcM+7p+ku429n/Q=
=lP8p
-----END PGP SIGNATURE-----

> On Jan 29, 2015, at 19:09 , Mike O'Neill <michael.oneill@baycloud.com> wrote:
>
>
>> Interesting mix of norms and tech -- and yes, a different privacy threat
>> model from the one many of us are accustomed to considering. Here, we're
>> trusting the server to share our interests and want to help us enforce
>> the contextual boundaries we choose, even if its knowledge could span
>> those boundaries.
>>
>> This model is a better match with the Web Origin security model -- where
>> an origin site is presumed to have control of the web application
>> security, and the end-user must choose to trust the origin (with limited
>> user-side overrides) or not visit the site.
>>
>> I wonder what sorts of feedback could help to reinforce to end-users
>> that their trust was in fact merited.
>>
>> --Wendy
>>
>
>
> It would have to include all the servers being accessed, third-parties also. I think David's header would be seen all of them, and it would only take one to ignore the contextual boundaries, decide to combine multiple personas with other data in a PII keyed database, then broadcast it to the world (and UA based UUIDs are far more reliably user-identifying than IP addresses which are usually ephemeral and non-unique).
True, but don’t forget we’re coming from a state where the servers don’t even know of the desire. I don’t mind machine-based discoverability, but it’s tricky to work out how to include transparent proxies and caches in that.
>
> Maybe there should be an implicit web of trust that covers all the servers receiving user specific data on a page, where they all commit to a common declared level of privacy and security. The browser could then have UI to communicate that.
The problem comes from elements not directly on the page, of course.
>
> WebID could be used to identify all the parties (not just origins), and a manifest could define the trust relationship.
>
> Mike
>
>
>
>
David Singer
Manager, Software Standards, Apple Inc.

> On Jan 29, 2015, at 1:27 PM, David Singer <singer@apple.com> wrote:
>
>
>> On Jan 29, 2015, at 19:09 , Mike O'Neill <michael.oneill@baycloud.com> wrote:
>>
>>
>>> Interesting mix of norms and tech -- and yes, a different privacy threat
>>> model from the one many of us are accustomed to considering. Here, we're
>>> trusting the server to share our interests and want to help us enforce
>>> the contextual boundaries we choose, even if its knowledge could span
>>> those boundaries.
>>>
>>> This model is a better match with the Web Origin security model -- where
>>> an origin site is presumed to have control of the web application
>>> security, and the end-user must choose to trust the origin (with limited
>>> user-side overrides) or not visit the site.
>>>
>>> I wonder what sorts of feedback could help to reinforce to end-users
>>> that their trust was in fact merited.
>>>
>>> --Wendy
>>
>>
>> It would have to include all the servers being accessed, third-parties also. I think David's header would be seen all of them, and it would only take one to ignore the contextual boundaries, decide to combine multiple personas with other data in a PII keyed database, then broadcast it to the world (and UA based UUIDs are far more reliably user-identifying than IP addresses which are usually ephemeral and non-unique).
>
> True, but don’t forget we’re coming from a state where the servers don’t even know of the desire. I don’t mind machine-based discoverability, but it’s tricky to work out how to include transparent proxies and caches in that.
>
>>
>
Cookies and NAT/proxy inspection are entrusted, along with explicit disclosure of user information, to the server of origin, of course. But there should be a specification for http::forbidden and alternative error-code interpretation, so that policy makers can declare such for the proxy-server connection in their terms of use.
Gabriel DB Fernandez
pgp: 9425a6af

trimming the cc - list..
On Thursday 29 January 2015 19:24:45 David Singer wrote:
> > It would have to include all the servers being accessed, third-parties
> > also. I think David's header would be seen all of them, and it would only
> > take one to ignore the contextual boundaries, decide to combine multiple
> > personas with other data in a PII keyed database, then broadcast it to
> > the world (and UA based UUIDs are far more reliably user-identifying than
> > IP addresses which are usually ephemeral and non-unique).
> True, but don’t forget we’re coming from a state where the servers don’t
> even know of the desire. I don’t mind machine-based discoverability, but
> it’s tricky to work out how to include transparent proxies and caches in
> that.
Now comes the feedback again that I mentioned earlier. On a typical site,
there can be 200 trackers or more. If you have a feedback mechanism, you
know who is making promises and who is not. The machine can work that out
while it would be overkill for the end-user. In case the feedback is that my
request won't be honored, my browser can simply block that GET request, or
fool the server or be creative by sending them the cookie from last year,
or....
--Rigo
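The feedback loop Rigo sketches could look roughly like this on the UA side. This assumes some acknowledgement signal from the server; the ‘Persona-Ack’ response header here is invented for illustration, as the thread only posits ‘some sort of signal’:

```python
# UA-side sketch of Rigo's feedback loop: remember, per host, whether the
# server acknowledged the persona request, and block requests to hosts
# that signalled they won't honor it. (Blocking is the simplest reaction;
# the UA could instead "get creative", as Rigo suggests.)
from typing import Dict

class PersonaPolicy:
    def __init__(self) -> None:
        self._honors: Dict[str, bool] = {}  # host -> last known answer

    def note_response(self, host: str, headers: Dict[str, str]) -> None:
        # Hypothetical acknowledgement: "Persona-Ack: segregated".
        self._honors[host] = headers.get("Persona-Ack") == "segregated"

    def decide(self, host: str) -> str:
        honored = self._honors.get(host)
        if honored is None:
            return "send"  # unknown host: send once and observe the feedback
        return "send" if honored else "block"

policy = PersonaPolicy()
policy.note_response("tracker.example", {})  # no acknowledgement
policy.note_response("shop.example", {"Persona-Ack": "segregated"})
```

The point is that the machine, not the end-user, keeps track of who is making promises.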

> On Jan 29, 2015, at 20:43 , Rigo Wenning <rigo@w3.org> wrote:
>
> trimming the cc - list..
>
> On Thursday 29 January 2015 19:24:45 David Singer wrote:
>>> It would have to include all the servers being accessed, third-parties
>>> also. I think David's header would be seen all of them, and it would only
>>> take one to ignore the contextual boundaries, decide to combine multiple
>>> personas with other data in a PII keyed database, then broadcast it to
>>> the world (and UA based UUIDs are far more reliably user-identifying than
>>> IP addresses which are usually ephemeral and non-unique).
>> True, but don’t forget we’re coming from a state where the servers don’t
>> even know of the desire. I don’t mind machine-based discoverability, but
>> it’s tricky to work out how to include transparent proxies and caches in
>> that.
>
> Now comes the feedback again that I mentioned earlier. On a typical site,
> there are up to 200 trackers and more. If you have a feedback mechanism, you
> know who is making promises and who is not. The machine can work that out
> while it would be overkill for the end-user. In case the feedback is that my
> request won't be honored, my browser can simply block that GET request, or
> fool the server or be creative by sending them the cookie from last year,
> or….
I think that this is interesting, but there are snags.
1. Some sites don’t, in fact, keep history data. They’d have to claim to be honoring the request even though they’ve had to do nothing to honor it. I guess that’s not a high burden.
2. UAs can probe the sites they know are involved, but a number of sites are invisible:
   - proxies and transparent caches
   - sites that receive relayed requests from the sites directly contacted
I guess we could define a ‘strong respecter’ as a top-level site that not only promises to respect the request itself, but requires the same of all third-party sites involved.
David Singer
Manager, Software Standards, Apple Inc.

On Friday 30 January 2015 9:21:31 David Singer wrote:
> > Now comes the feedback again that I mentioned earlier. On a typical site,
> > there are up to 200 trackers and more. If you have a feedback mechanism,
> > you know who is making promises and who is not. The machine can work
> > that out while it would be overkill for the end-user. In case the
> > feedback is that my request won't be honored, my browser can simply block
> > that GET request, or fool the server or be creative by sending them the
> > cookie from last year, or….
>
> I think that this is interesting, but there are snags.
Yes, but the things below are not a big issue IMHO
>
> 1. Some sites don’t, in fact, keep history data. They’d have to claim to be
> honoring the request even though they’ve had to do nothing to do that. I
> guess that’s not a high burden.
And they can claim to honor your request, as they do not keep data. You cannot determine whether they "keep" things once those things were requested and collected, as you don't have access to their systems. OK, you can guess that they do if they maintain state, or if an ad glues on you like a piece of dog shit you stepped on.
> 2. UAs can probe the sites it know are involved, but there are a number
> of sites that are invisible: proxies and transparent caches
> sites that receive relay requests from the sites directly contacted
Here you're definitely over-engineering it. It would already be a great improvement if browsers reacted to the GET requests triggered by a user-loaded page (resources fetched without an active click or a URL typed into the navigation field).
>
> I guess we could define a ‘strong respecter’ as a top-level site that not
> only promises that they respect the request, but they require that of all
> third party sites involved as well.
I don't see a need for that. I would really like to fix the other issues first.
--Rigo

One comment inline...
On 29 Jan 2015, at 18:10, "Mike O'Neill" <michael.oneill@baycloud.com> wrote:
>
>> Interesting mix of norms and tech -- and yes, a different privacy threat
>> model from the one many of us are accustomed to considering. Here, we're
>> trusting the server to share our interests and want to help us enforce
>> the contextual boundaries we choose, even if its knowledge could span
>> those boundaries.
>>
>> This model is a better match with the Web Origin security model -- where
>> an origin site is presumed to have control of the web application
>> security, and the end-user must choose to trust the origin (with limited
>> user-side overrides) or not visit the site.
>>
>> I wonder what sorts of feedback could help to reinforce to end-users
>> that their trust was in fact merited.
>>
>> --Wendy
>>
>
>
> It would have to include all the servers being accessed, third parties also. I think David's header would be seen by all of them, and it would only take one to ignore the contextual boundaries, decide to combine multiple personas with other data in a PII-keyed database, then broadcast it to the world (and UA-based UUIDs are far more reliably user-identifying than IP addresses, which are usually ephemeral and non-unique).
>
> Maybe there should be an implicit web of trust that covers all the servers receiving user specific data on a page, where they all commit to a common declared level of privacy and security. The browser could then have UI to communicate that.
>
> WebID could be used to identify all the parties (not just origins), and a manifest could define the trust relationship.
Really interesting idea. If I understand correctly, one implication of this could be that the onus is then on the website to ensure that the manifest fully reflects all the embedded content in the page. This would make it possible for a plug-in like Ghostery or Lightbeam to highlight any disparities (e.g. "I found a tracker here from spamserver.com, and there's no corresponding entry in the trust manifest"). This wouldn't immediately change the 'user bargain' - the user is still faced with a take-it-or-leave-it choice - but over time it could definitely force greater transparency and contribute to a reputation score.
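The disparity check described here is essentially a set difference. A minimal sketch, assuming a hypothetical manifest listing the hosts a page declares:

```python
# Sketch of the check a plug-in could run: compare the third-party hosts a
# page actually contacted against those declared in a (hypothetical) trust
# manifest, and surface anything undeclared.

def find_undeclared_trackers(observed_hosts, manifest_hosts):
    """Return hosts seen on the wire but absent from the trust manifest."""
    return sorted(set(observed_hosts) - set(manifest_hosts))

manifest = ["cdn.example.com", "analytics.example.com"]
observed = ["cdn.example.com", "analytics.example.com", "spamserver.com"]
assert find_undeclared_trackers(observed, manifest) == ["spamserver.com"]
```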
>
> Mike
>
>

> > Maybe there should be an implicit web of trust that covers all the servers
> receiving user specific data on a page, where they all commit to a common
> declared level of privacy and security. The browser could then have UI to
> communicate that.
> >
> > WebID could be used to identify all the parties (not just origins), and a manifest
> could define the trust relationship.
>
> Really interesting idea. If I understand correctly, one implication of this could be
> that the onus is on the website,then, to ensure that the manifest fully reflects all
> the embedded content in the page. This would make it possible for a plug-in like
> Ghostery or Lightbeam to highlight any disparities (e.g. "I found a tracker here
> from spamserver.com, and there's no corresponding entry in the trust
> manifest"). This wouldn't immediately change the 'user bargain' - the user is still
> faced with a take-it-or-leave-it choice - but over time it could definitely force
> greater transparency and contribute to a reputation score.
Content Security Policy (https://w3c.github.io/webappsec/specs/content-security-policy/) already lets a top-level site declare what other-origin resources get loaded, but this is about domains, not actual legal entities. If we also leveraged WebID, we could associate the domains with the actual companies; for example, Google Inc might have doubleclick.com, youtube.com, google-analytics.com etc. on the same web page. WebID-TLS (https://dvcs.w3.org/hg/WebID/raw-file/tip/spec/tls-respec.html) also lets you use certificates to validate the identity.
Mike
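The domain-to-entity grouping Mike suggests could look like this. The mapping below is illustrative only; real data would come from WebID profiles, not a hard-coded table:

```python
# Sketch of mapping a page's resource domains to legal entities. The
# DOMAIN_TO_ENTITY table stands in for a (hypothetical) WebID-backed registry.

DOMAIN_TO_ENTITY = {
    "doubleclick.com": "Google Inc",
    "youtube.com": "Google Inc",
    "google-analytics.com": "Google Inc",
}

def entities_on_page(resource_domains):
    """Group a page's resource domains by the legal entity behind them."""
    grouped = {}
    for domain in resource_domains:
        entity = DOMAIN_TO_ENTITY.get(domain, "unknown")
        grouped.setdefault(entity, []).append(domain)
    return grouped

result = entities_on_page(["youtube.com", "doubleclick.com"])
assert result == {"Google Inc": ["youtube.com", "doubleclick.com"]}
```

Browser UI could then present one consolidated trust decision per entity rather than one per domain.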

29.01.2015, 18:41, "Wendy Seltzer" <wseltzer@w3.org>:
> On 01/29/2015 09:43 AM, David Singer wrote:
>>> On Jan 29, 2015, at 15:33 , chaals@yandex-team.ru wrote:
>>>
>>> Basically +1… more inline
>> yay, I think you have it and we’re converging. Yes, the [priest+doctor | server] clearly knows that it’s Chaals under both personae; but as you say, [he it] is being respectful that in one case they are treating your body and the other your soul, and keeps those considerations separate.
>>
>> Yes, it’s like encountering your shrink at a party. He knows it’s you, you know he knows; but he doesn’t expose in this context (the party) what he knows from the other context (the analysis sessions). That is respecting your privacy.
>
> Interesting mix of norms and tech -- and yes, a different privacy threat
> model from the one many of us are accustomed to considering. Here, we're
> trusting the server to share our interests
Actually there is a quid pro quo. We'll give the server data, if they state that they will respect our conditions.
> and want to help us enforce
> the contextual boundaries we choose, even if its knowledge could span
> those boundaries.
Right.
> This model is a better match with the Web Origin security model -- where
> an origin site is presumed to have control of the web application
> security, and the end-user must choose to trust the origin (with limited
> user-side overrides) or not visit the site.
Yeah. And a better match with a lot of reality I think.
The idea that you will use services in private mode, without any of the convenience that tracking and cookies give, isn't one I would like a corporate security policy to rely on.
But the idea that you'll treat each other with respect, because it is in both parties' interest, is how people successfully collaborate from a scale of 1-on-1 to global geopolitics.
> I wonder what sorts of feedback could help to reinforce to end-users
> that their trust was in fact merited.
The obvious one is reputation management, which can be done by browsers, third parties, and others.
If we developed some specific terms, you could go beyond that, but I doubt we would ever agree on them.
On the other hand, using such a mode, in conjunction with a privacy-friendly regulatory environment and the sort of warning systems that currently protect us against malware, phishing and spam, might be enough for a lot of people - and a motivator for competing services to find ways of demonstrating that they too are equally deserving of people's trust…
Which would probably be a step forward.
cheers
--
Charles McCathie Nevile - web standards - CTO Office, Yandex
chaals@yandex-team.ru - - - Find more at http://yandex.com

On 29 Jan 2015, at 22:31, "chaals@yandex-team.ru" <chaals@yandex-team.ru> wrote:
> 29.01.2015, 18:41, "Wendy Seltzer" <wseltzer@w3.org>:
>> On 01/29/2015 09:43 AM, David Singer wrote:
>>>> On Jan 29, 2015, at 15:33 , chaals@yandex-team.ru wrote:
>>>>
>>>> Basically +1… more inline
>>> yay, I think you have it and we’re converging. Yes, the [priest+doctor | server] clearly knows that it’s Chaals under both personae; but as you say, [he it] is being respectful that in one case they are treating your body and the other your soul, and keeps those considerations separate.
>>>
>>> Yes, it’s like encountering your shrink at a party. He knows it’s you, you know he knows; but he doesn’t expose in this context (the party) what he knows from the other context (the analysis sessions). That is respecting your privacy.
>>
>> Interesting mix of norms and tech -- and yes, a different privacy threat
>> model from the one many of us are accustomed to considering. Here, we're
>> trusting the server to share our interests
>
> Actually there is a quid pro quo. We'll give the server data, if they state that they will respect our conditions.
>
>> and want to help us enforce
>> the contextual boundaries we choose, even if its knowledge could span
>> those boundaries.
>
> Right.
>
>> This model is a better match with the Web Origin security model -- where
>> an origin site is presumed to have control of the web application
>> security, and the end-user must choose to trust the origin (with limited
>> user-side overrides) or not visit the site.
>
> Yeah. And a better match with a lot of reality I think.
>
> The idea that you will use services in private mode without any of the convenience tracking and cookies give isn't one I would like a corporate security policy to be reliant on.
>
> But the idea that you'll treat each other with respect, because it is in both parties' interest, is how people successfully collaborate from a scale of 1-on-1 to global geopolitics.
Except that there's currently a big power imbalance between users and service providers, with the latter tending towards a "take it or leave it" approach, as noted just above, by David.
>
>> I wonder what sorts of feedback could help to reinforce to end-users
>> that their trust was in fact merited.
>
> The obvious one is reputation management, which can be done by browsers, third parties,
Right, and a key factor here is that a third party agent is not economically obliged to prioritise the service provider's interests - so their inclusion in the value chain can help re-balance power in favour of the end user community.
FYI, ISOC has a project called ToSBack/2 which we hope will form part of a similar kind of set-up; it's based on a back-end repository in which we track and highlight changes to Terms of Service over time. Our hope is that third parties will use the resulting data to produce added value, for instance by building up a reputation score that reflects service providers' adherence to the terms they offer (or alerts users when the terms change in some new and privacy-eroding way...).
It's not an identical scenario to the one we're discussing here, but there are definitely parallels in the approach.
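The core of that kind of change-tracking is a plain text diff between snapshots. A minimal sketch using the standard library (the snapshot contents are invented for illustration):

```python
# Minimal sketch of ToS change-tracking: diff two snapshots of a Terms of
# Service document and keep only the changed lines.
import difflib

old_tos = ["We keep logs for 30 days.", "We do not sell your data."]
new_tos = ["We keep logs for 90 days.", "We do not sell your data."]

changes = [line for line in difflib.unified_diff(old_tos, new_tos, lineterm="")
           if line.startswith(("+We", "-We"))]
assert changes == ["-We keep logs for 30 days.", "+We keep logs for 90 days."]
```

A reputation service could then score or alert on diffs that touch privacy-relevant clauses.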
>
> If we developed some specific terms, you could go beyond that, but I doubt we would ever agree on them.
>
> On the other hand, using such a mode, in conjunction with a privacy-friendly regulatory environment and the sort of warning systems that currently protect us against malware, phishing and spam might be enough for a lot of people - and a motivator for competing services to find ways of demonstrating that they too are equally deserving of people's trust…
>
> Which would probably be a step forward.
>
> cheers
>
> --
> Charles McCathie Nevile - web standards - CTO Office, Yandex
> chaals@yandex-team.ru - - - Find more at http://yandex.com

On Friday 30 January 2015 1:31:41 chaals@yandex-team.ru wrote:
> The obvious one is reputation management,
Yes, this is clearly missing. But so far, we only have negative reputation management, with the lists of "known bogus and dangerous sites" that my browser checks against.
But I think the Workshop in Berlin brought a lot of new insights in this
respect, especially the presentation from Opera.
--Rigo

thanks for clarifying - this persona perspective is important.
Again, it looks like (social) trust is needed that the server will honor user expectations, similar to DNT.
regards, Frederick
Frederick Hirsch
www.fjhirsch.com
@fjhirsch
> On Jan 29, 2015, at 9:43 AM, David Singer <singer@apple.com> wrote:
>
>
>> On Jan 29, 2015, at 15:33 , chaals@yandex-team.ru wrote:
>>
>> Basically +1… more inline
>
> yay, I think you have it and we’re converging. Yes, the [priest+doctor | server] clearly knows that it’s Chaals under both personae; but as you say, [he it] is being respectful that in one case they are treating your body and the other your soul, and keeps those considerations separate.
>
> Yes, it’s like encountering your shrink at a party. He knows it’s you, you know he knows; but he doesn’t expose in this context (the party) what he knows from the other context (the analysis sessions). That is respecting your privacy.
>
>
> David Singer
> Manager, Software Standards, Apple Inc.
>
>

This is basically a mildly-edited re-statement of the ideas, taking into account some of the discussion. In the discussion at this week's call, I was asked to re-post a summary.
* * * * * * * * * * * *
The problem: quite a few browsers today have what they call “private browsing mode” or the like. In this mode all local state that is accumulated is discarded at the end of the private browsing mode session (when the mode is turned off). After turning it off, the local machine has, ideally, no trace at all of what was done in the private mode. The discard includes browsing history, cookies, local storage etc. I think that browsers can/do initialize the private session from the user’s current state when they start private mode.
Advantage: if it’s a shared computer, you don’t leave any trace.
So, private browsing sort-of-looks like this, in terms of state: two private sessions are started and then ended. These sessions are initialized from the base state, which is not updated while the private sessions are in process.
                             +[private 1] - - -        +[private 2] - - -
                             |                         |
[base state] - - - - - - - - + . . . . . . . . - - - - + . . . . . . . . - - - - - -
                                                                            Time ->
This means that private browsing still ‘works’ on the web; cookies flow, referer headers, and so on, all as normal. The important aspect of this is whether a trace is left on the ‘permanent history’.
Problem statement: the servers are completely unaware of this mode, and so any history etc. THEY keep is still visible.
Proposal:
The servers have various means to work out who this is, and attach history (these means include cookies, fingerprinting and so on). As noted above, we don’t seek to break normal browsing by refusing to accept storage etc. (e.g. of cookies), so a simple ‘binary’ signal in an HTTP header “I am trying to be private here” doesn’t help, as the server won’t know from request to request whether this is part of the same session or not.
Hence, the idea to introduce a header that identifies which ‘private session’ the user is in. Since, in fact, this can be used for other purposes than private browsing, and it’s logically possible for the browser to have multiple windows open, or separate sessions, or to return to a private session, we thought this was essentially an indication of what ‘aspect’ of the user that was being presented here, their persona. So, we needed a session — persona — identifier. Both to make it easy to generate, and to make it possible to transfer a private session from one device to another, we took the easy route of suggesting that UUIDs are a suitable identification tool.
Here is the original suggestion I sent. Note that the server is being asked to segregate state, not to stop keeping state. This is about the aspect of privacy which is respecting the right context to ’say’ something: ‘why did you say that?’ not ‘why did you know/remember that?’. One of the problems with today’s net is not only that servers see and remember too much (not addressed here), but they have absolutely no sense of when it’s appropriate, or not, to reveal what they know (that is addressed here).
* * * * *
The user-agent can send an optional HTTP header ‘Persona:’ whose value is a suitable machine-generatable distinct identifier (e.g. a UUID). If the header is absent, the user is operating under their default (unlabeled) persona, which is distinct from all the identified personas, which in turn are also distinct from each other. A user and their user-agent may return to a persona at any time, or continue using a persona for any length of time. A persona identifier is expected to be universally unique, not contextualized to the current user-agent or device.
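As a concrete illustration of the above, minting a persona identifier and attaching it to a request could look like this. The 'Persona:' header name is the draft idea from this thread, not an adopted standard:

```python
# Sketch of the proposed (not standardized) 'Persona:' header: a freshly
# minted UUID identifies which persona the user is presenting.
import uuid
import urllib.request

persona_id = str(uuid.uuid4())   # universally unique, not tied to this device

# Build (but do not send) a request carrying the persona identifier.
req = urllib.request.Request("https://example.com/",
                             headers={"Persona": persona_id})
assert req.get_header("Persona") == persona_id
```

Omitting the header means the default, unlabeled persona; re-sending the same UUID later (even from another device) returns to the same persona.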
Servers respecting this are requested to ensure that the labeled personas leave no trace or influence on each other or on the unlabeled persona. For example, activity under one persona should not affect the ads shown under a different persona; any history records that the user can see should be distinct for each persona; and so on. (It’s OK for your unlabeled persona to be reflected in labeled ones, but optional; if servers wish, they can initialize a named persona from the default, un-named one, when they first see it.)
Server implementers may choose how long they retain records relating to separate personas, just as they do for today’s default persona.
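On the server side, the requested segregation amounts to keying per-user state by (user, persona) rather than by user alone. A sketch under that assumption (the storage shape is invented for illustration):

```python
# Server-side sketch: persona-segregated history. Keying state by
# (user_id, persona_id) keeps personas from leaking into each other.

class SegregatedHistory:
    def __init__(self):
        self._store = {}   # (user_id, persona_id) -> list of events

    def record(self, user_id, persona_id, event):
        # persona_id of None means the default, unlabeled persona
        self._store.setdefault((user_id, persona_id), []).append(event)

    def history(self, user_id, persona_id):
        # a persona only ever sees its own slice of the history
        return list(self._store.get((user_id, persona_id), []))

h = SegregatedHistory()
h.record("alice", None, "searched: running shoes")
h.record("alice", "b6e7a1d2", "searched: surprise gift")  # labeled persona
assert h.history("alice", None) == ["searched: running shoes"]
```

The server still knows it is the same person; it simply never reflects one persona's activity (ads, visible history) into another.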
This is NOT a request to stop tracking or keeping records; that is an orthogonal question that is covered by activities such as do-not-track, cookie directives, and so on. This is about giving users control of their privacy by controlling what gets linked to what, and exposed when.
It may be that it is not particularly necessary or valuable to have a machine-readable means of discovering whether servers support this feature. Any support that they provide is an improvement on today’s experience, where servers are unaware that users are trying to be private. Claims of support for this feature are probably better conveyed in advertising or other human-readable ways. On the other hand, machine-readable claims of support have two advantages: the browser can filter or warn about sites that don’t claim to respect it; and while not respecting it probably would not be actionable, claiming to respect it and then not doing so would be lying to users, which might be.
This feature might also be valuable for shared terminals; for example, in libraries, airline lounges, internet cafes and the like, a new persona can be minted each time the terminal is unlocked for a new session. Libraries might tie the persona to the library card, so users returning get re-linked to their online history and so on. It might also be a lightweight replacement of logging-in, for browsers on shared devices — a browser might have a simple way of saying which family member it is right now (e.g. a pull-down menu).
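The shared-terminal flow above can be sketched briefly. The library-card mapping is a hypothetical example of how a returning user could be re-linked to their persona:

```python
# Sketch of the shared-terminal use: mint a fresh persona per unlocked
# session, optionally pinned to a library card so returning users get the
# same persona back. The card mapping is hypothetical.
import uuid

card_to_persona = {}   # library-side mapping, illustrative only

def persona_for_session(library_card=None):
    if library_card is None:
        return str(uuid.uuid4())   # anonymous walk-up session, new every time
    return card_to_persona.setdefault(library_card, str(uuid.uuid4()))

p1 = persona_for_session("card-123")
p2 = persona_for_session("card-123")
assert p1 == p2   # same card, same persona across visits
```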
* * * *
I think it’s interesting in a number of respects:
a) it’s an improvement on the status quo, where servers are completely unaware of any attempt to be private
b) it’s not asking for *secrecy* at all; servers are at liberty to remember as much as before; there are very few privacy proposals that don’t slide into trying to be secret, and this is one. Privacy is also about where information is exposed, what it is linked to, and so on.
c) it recognizes that privacy is not a binary state — it’s not an either-or (you have it or you don’t); it’s a spectrum, and it’s about perception and control and exposure as much as it is about recording and so on.
* * * * * * *
What are some of the potential downsides?
1) It doesn’t treat servers as adversaries, and if they are, in fact, ‘hostile’, it might be giving them a clue: ‘look here, someone is doing something under the covers’
2) Using a UUID for the persona has advantages — they are not contextualized by the ‘main’ persona that the server knows or guesses, and they can be shared across the user’s devices — but it also provides a very explicit key, ‘this is (this aspect of) me’, which again might be an issue with adversarial servers
Note that there is no attempt to claim “this isn’t me, this is someone else” so linking personas is fine, if the server can work out they are the same person (e.g. by cookie or other means).
David Singer
Manager, Software Standards, Apple Inc.

Thanks for a great write-up!
Would it be useful to say that you are not describing browser support for anonymity but for pseudonymity?
Best regards
Pär Lannerö
CommonTerms.org
28 feb 2015 kl. 01:30 skrev "David Singer" <singer@apple.com>:
> [full text of David's summary, quoted from the message above, trimmed]

I agree in general that informing the server about the user's intention is useful, as this will improve the behavior of compliant services.
To address the public at large, this approach relies on users' concern for privacy, which we all know is quite low. Therefore, it would be beneficial if the other direction worked as well: a service that knows it contains private data should be able to spawn a private browsing session, which will automatically discard the session state after the window is closed. Use cases would include not only person-to-self scenarios such as online banking, but also enterprise applications, like SaaS HR, that provide access to third-party PII.
While this is loosely related to David's proposal, it would extend the applicability of private browsing mode by allowing sites to detect and start private browsing mode via JavaScript and/or HTTP headers.
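Rainer's server-initiated direction could be sketched as a response header the UA checks. To be clear, both the header name and the behavior below are purely hypothetical; no browser implements anything like this today:

```python
# Purely hypothetical sketch of a server-initiated private session: a
# response header by which a service asks the UA to switch into private
# browsing. Neither the 'Private-Session' name nor the behavior exists.

def wants_private_session(response_headers):
    """UA-side check for a (hypothetical) 'Private-Session: required' header."""
    value = response_headers.get("Private-Session", "").lower()
    return value == "required"

assert wants_private_session({"Private-Session": "required"}) is True
assert wants_private_session({}) is False
```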
- Rainer Hörbe
> Am 28.02.2015 um 01:28 schrieb David Singer <singer@apple.com>:
>
> This is basically a mildly-edited re-statement of the ideas, taking into account some of the discussion. I was asked to re-post a summary, in the discussion this week at the call.
>
>
> * * * * * * * * * * * *
>
>
> The problem: quite a few browsers today have what they call “private browsing mode” or the like. In this mode all local state that is accumulated is discarded at the end of the private browsing mode session (when the mode is turned off). After turning it off, the local machine has, ideally, no trace at all of what was done in the private mode. The discard includes browsing history, cookies, local storage etc. I think that browsers can/do initialize the private session from the user’s current state when they start private mode.
>
> Advantage: if it’s a shared computer, you don’t leave any trace.
>
> So, private browsing sort-of-looks like this, in terms of state: two private sessions are started and then ended. These sessions are initialized from the base state, which is not updated while the private sessions are in process.
>
>
>                  +[private 1] - - - +           +[private 2] - - - +
>                  |                  |           |                  |
> [base state] - - + . . . . . . . . + - - - - - + . . . . . . . . .+ - - - -
> Time ->
>
> This means that private browsing still ‘works’ on the web; cookies flow, referer headers, and so on, all as normal. The important aspect of this is whether a trace is left on the ‘permanent history’.
>
> Problem statement: the servers are completely unaware of this mode, and so any history etc. THEY keep is still visible.
>
> Proposal:
>
> The servers have various means to work out who this is, and attach history (these means include cookies, fingerprinting and so on). As noted above, we don’t seek to break normal browsing by refusing to accept storage etc. (e.g. of cookies), so a simple ‘binary’ signal in an HTTP header “I am trying to be private here” doesn’t help, as the server won’t know from request to request whether this is part of the same session or not.
>
> Hence, the idea to introduce a header that identifies which ‘private session’ the user is in. Since, in fact, this can be used for other purposes than private browsing, and it’s logically possible for the browser to have multiple windows open, or separate sessions, or to return to a private session, we thought this was essentially an indication of what ‘aspect’ of the user that was being presented here, their persona. So, we needed a session — persona — identifier. Both to make it easy to generate, and to make it possible to transfer a private session from one device to another, we took the easy route of suggesting that UUIDs are a suitable identification tool.
>
> Here is the original suggestion I sent. Note that the server is being asked to segregate state, not to stop keeping state. This is about the aspect of privacy which is respecting the right context to ’say’ something: ‘why did you say that?’ not ‘why did you know/remember that?’. One of the problems with today’s net is not only that servers see and remember too much (not addressed here), but they have absolutely no sense of when it’s appropriate, or not, to reveal what they know (that is addressed here).
>
> * * * * *
>
> The user-agent can send an optional HTTP header ‘Persona:’ whose value is a suitable machine-generatable distinct identifier (e.g. a UUID). If the header is absent, the user is operating under their default (unlabeled) persona, which is distinct from all the identified personas, which in turn are also distinct from each other. A user and their user-agent may return to a persona at any time, or continue using a persona for any length of time. A persona identifier is expected to be universally unique, not contextualized to the current user-agent or device.
>
> Servers respecting this are requested to ensure that the labeled personas leave no trace or influence on each other or on the unlabeled persona. For example, activity under one persona should not affect the ads shown under a different persona; any history records that the user can see should be distinct for each persona; and so on. (It’s OK for your unlabeled persona to be reflected in labeled ones, but optional; if servers wish, they can initialize a named persona from the default, un-named one, when they first see it.)
>
> Server implementers may choose how long they retain records relating to separate personas, just as they do for today’s default persona.
>
> This is NOT a request to stop tracking or keeping records; that is an orthogonal question that is covered by activities such as do-not-track, cookie directives, and so on. This is about giving users control of their privacy by controlling what gets linked to what, and exposed when.
>
> It may be that it is not particularly necessary or valuable to have a machine-readable means of discovering whether servers support this feature. Any support they provide is an improvement on today’s experience, where servers are unaware that users are trying to be private. Claims of support for this feature are probably better conveyed in advertising or other human-readable ways. On the other hand, machine-readable claims of support have two advantages: the browser can filter or warn about sites that don’t claim to respect it, and while not respecting it probably would not be actionable, claiming to respect it and then not doing so (lying to users) might be.
>
> This feature might also be valuable for shared terminals; for example, in libraries, airline lounges, internet cafes and the like, a new persona can be minted each time the terminal is unlocked for a new session. Libraries might tie the persona to the library card, so users returning get re-linked to their online history and so on. It might also be a lightweight replacement of logging-in, for browsers on shared devices — a browser might have a simple way of saying which family member it is right now (e.g. a pull-down menu).
>
> * * * *
>
> I think it’s interesting in a number of respects:
>
> a) it’s an improvement on the status quo, where servers are completely unaware of any attempt to be private
>
> b) it’s not asking for *secrecy* at all; servers are at liberty to remember as much as before; there are very few privacy proposals that don’t slide into trying to be secret, and this is one. Privacy is also about where information is exposed, what it is linked to, and so on.
>
> c) it recognizes that privacy is not a binary state — it’s not an either-or (you have it or you don’t); it’s a spectrum, and it’s about perception and control and exposure as much as it is about recording and so on.
>
>
> * * * * * * *
>
> What are some of the potential downsides?
>
> 1) It doesn’t treat servers as adversaries, and if they are in fact ‘hostile’, it might be giving them a clue: ‘look here, someone is doing something under the covers’
>
> 2) using a UUID for the persona has advantages — UUIDs are not contextualized by the ‘main’ persona that the server knows or guesses, and they can be shared across the user’s devices — but a UUID also provides a very explicit key, ‘this is (this aspect of) me’, which again, for adversarial servers, might be an issue
>
>
> Note that there is no attempt to claim “this isn’t me, this is someone else” so linking personas is fine, if the server can work out they are the same person (e.g. by cookie or other means).
>
>
> David Singer
> Manager, Software Standards, Apple Inc.
>
>
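The client side of the header exchange described above can be sketched as follows. The ‘Persona:’ header name and the use of UUIDs come from the proposal itself; the helper names and the User-Agent string are assumptions of this sketch, not part of any specification:

```python
import uuid

def new_persona_id():
    # Mint a fresh identifier when the user opens a labeled persona session.
    # UUIDs are the form suggested in the proposal: easy to generate and
    # transferable across the user's devices.
    return str(uuid.uuid4())

def request_headers(persona_id=None):
    # Build headers for an outgoing request. Per the proposal, an absent
    # 'Persona:' header means the default (unlabeled) persona, so the header
    # is only emitted while a labeled persona is active.
    headers = {"User-Agent": "example-ua/1.0"}  # hypothetical UA string
    if persona_id is not None:
        headers["Persona"] = persona_id
    return headers

pid = new_persona_id()
assert request_headers(pid)["Persona"] == pid
assert "Persona" not in request_headers()
```

Returning to the same persona later would simply mean re-sending the same identifier; omitting the header returns the user to the unlabeled persona.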

Do you really want the same ID being sent to all sites? On the one hand
we're already spewing IP addresses everywhere, and those can be used for
retargeting and/or various data combination across sites; but a stable
identifier (stable over the life of the browsing session, which could be
long) actually seems like quite a privacy hit to me.
I've also not really seen any notion of "multiple distinct browser sessions"
take off. Incognito / private mode enjoys some nontrivial use, but I'm
still amazed at how few people know it exists. The ability to have multiple
distinct profiles exists in Chrome and other browsers, but as much as we as
an industry try to push the notion, I can't say I've ever personally seen
anyone at an airport or cafe (aka not a Google or Apple office) actually
using this. I think the UI / change aversion / inertia present harder
problems than the technical problem of isolation within a profile.
My $0.02
2015-02-27 16:28 GMT-08:00 David Singer <singer@apple.com>:
> [...]

On Mon, Mar 2, 2015 at 2:22 PM, Ian Fette (イアンフェッティ) <ifette@google.com> wrote:
> Do you really want the same ID being sent to all sites? On the one hand
> we're already spewing IP addresses everywhere and this can be used to do
> retargeting and/or various data combination across sites, but now if you've
> got a stable identifier (over the life of the browsing session, which could
> be long) that actually seems like quite a privacy hit to me.
This is a really great point that I don't think we've seen raised yet
in this discussion. David (Singer): would origin-scoped identifiers
solve this problem or is the shared persona identifier a feature in
your opinion?
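One way to make the origin-scoped alternative concrete, purely as an illustrative sketch (the derivation scheme, function name, and secret handling are all assumptions, not part of the proposal): the user agent holds one secret per persona and derives a distinct but stable identifier for each origin, so no two sites ever see the same value and the raw persona secret never leaves the browser.

```python
import hashlib
import hmac
import uuid

def origin_scoped_persona(persona_secret, origin):
    # Derive a per-origin persona label from a browser-held secret.
    # Each origin gets a different, stable value, which blocks the
    # cross-site correlation Ian describes while still letting one
    # site recognize a returning persona.
    return hmac.new(persona_secret, origin.encode(), hashlib.sha256).hexdigest()

secret = uuid.uuid4().bytes  # one secret per persona, kept client-side

a = origin_scoped_persona(secret, "https://example.com")
b = origin_scoped_persona(secret, "https://example.org")
assert a != b                                               # sites cannot correlate
assert a == origin_scoped_persona(secret, "https://example.com")  # stable per origin
```

The trade-off is that an origin-scoped value can no longer be shared across the user's devices as easily as a bare UUID, which was one of the stated advantages of the original scheme.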
> I've also not really see any notion of "multiple distinct browser sessions"
> take off. Incognito / private mode enjoys some nontrivial use, but I'm still
> amazed at how few people know it exists. The ability to have multiple
> distinct profiles exists in Chrome and other browsers, but as much as we as
> an industry try to push the notion, I can't say I've ever personally seen
> anyone at an airport or cafe (aka not a Google or Apple office) actually
> using this. I think the UI / change aversion / inertia present harder
> problems than the technical problem of isolation within a profile.
I've heard grumblings that the notion of sessions altogether is
getting a bit stale in terms of how people use browser UAs... that's a
bit depressing to me (I use one locked-down browser, and open
things that need full cookies, JS, etc. in another browser that scrubs
state on close (session end)). But I suspect Ian is very correct that
making the distinction between nominal/private/persona interaction
modes clear to users is going to be very, very hard.
best, Joe
> My $0.02
>
> 2015-02-27 16:28 GMT-08:00 David Singer <singer@apple.com>:
>
>> [...]
--
Joseph Lorenzo Hall
Chief Technologist
Center for Democracy & Technology
1634 I ST NW STE 1100
Washington DC 20006-4011
(p) 202-407-8825
(f) 202-637-0968
joe@cdt.org
PGP: https://josephhall.org/gpg-key
fingerprint: 3CA2 8D7B 9F6D DBD3 4B10 1607 5F86 6987 40A9 A871

Rigo, the cookie banners are bad because they don’t work, not because they are ugly. Some of them are quite pretty. A very few of them also work.
If you want to give users control you have to have a UI somewhere.
Mike
> -----Original Message-----
> From: Rigo Wenning [mailto:rigo@w3.org]
> Sent: 19 January 2015 20:46
> To: David Singer
> Cc: public-privacy@w3.org
> Subject: Re: Super Cookies in Privacy Browsing mode
>
>
> On Monday 19 January 2015 10:35:53 David Singer wrote:
> > > It is yet another signal. OK, it is not DNT, but it follows the same
> > > paradigm. I understand the branding issue, so let's call it BND (Be Nice,
> > > Don’t profile)
>
> This was a joke as BND is the acronym of the German secret service...
>
> > But that’s not what it is. It is NOT asking “don’t profile” it’s asking
> > “segregate records”.
>
> This is much better done on the client side. We had nearly running code for
> this in the PrimeLife project. You can see remains here:
>
> http://code.w3.org/privacy-dashboard/
>
> There, the architecture is used to track the trackers. But the underlying
> architecture and ideas were basically inspired by user centric identities
> management. So all this was usable in the same way for personae. And of
> course
> there was also data handling and sticky policies that allowed for data
> segregation. AFAIK SAP implemented it and you can have it as a module.
> (http://www.primelife.eu/)
>
> > >> b) Unless you are paranoid, you don’t need the feedback. Anything they do
> > >> is an improvement on today, and I don’t expect there to be much in the
> > >> way of conformance rules, since the details of the handling are very
> > >> much specific to the nature of the service.
> > >
> > > Nothing to do with being paranoid. "Denn nur was ihr schwarz auf weiss
> > > besitzt, könnt ihr getrost nach Hause tragen" says Goethe. And he is right
> > > :)
> > OK, I don’t mind a general statement of “we support this feature”, and you
> > can make this machine-readable if you think it’ll result in any action by
> > the UA. I rather suspect that having it human-readable is enough, that’s
> > all.
>
> If only the UA would remember where somebody said they would comply and
> didn't, so we could use the feedback as evidence.
>
> As soon as you allow for human-readable declarations, you get a declaration
> from lawyers that they "may" offer the feature (in 22 pages, with their
> fingers crossed behind their backs). So the technical reduction of semantics
> is a feature (like having only 140 characters on Twitter).
>
> Secondly, you have to define what "segregation" means. If it just means that
> my website is less stupid so that your wife won't find out about the gifts you
> ordered online, then this is intelligent web design rather than a new feature.
> All you need is stateful interaction.
>
> > > Because, without feedback, you're in non-binding hand waving.
> >
> > There is a difference between saying that, for users to know that a server
> > supports the feature, they need to say so somehow, and in requiring that
> > that statement of support be machine-readable.
>
> In times when ugly cookie banners trump smart technology like DNT, you'll
> have to offer an added value (legal certainty) in order to get anything. And I
> also think that hardcoding the personae into the one use case is too little.
>
> > > At this level
> > > and point, a cookie would do. And if you're concerned about the cookie
> > > being ephemeral, use a super-cookie. It is the feedback message, that
> > > changes the nature of protocol and message value, legally…
> >
> > Cookies are useless here; cookies are specific to a domain, and this request
> > is quite general. One would need infinite numbers of cookies.
>
> Why? We already have an infinite number of cookies (have you looked? :)
> Because I want to be one person to one site and another person to another
> site. This isn't rocket science at all AFAICT.
>
> There should be a forget my profile after N days, not a "don't annoy me with
> your stupid revelations from my profile". Data segregation alone is just
> diminishing the annoyance factor, but doesn't add any user control or risk for
> democracy (the values that are behind privacy/data protection)
>
> So having only one persona and a human-readable declaration is kind of 1996. But
> I know that sadly enough, we are walking backwards.
>
> --Rigo

I agree privacy does not require secrecy. Actually, I think confusing the two could lead to a catastrophic collision, creating a bad result for everyone; see http://blogs.wsj.com/digits/2015/01/16/obama-sides-with-cameron-in-encryption-fight/
But some form of anonymity could be a useful way forward: not totally invisible anonymity, but a way to operate with user-managed identity, which is why I am interested in your Persona header.
In my opinion, police should be able to identify and monitor wrongdoers (with a proper warrant targeting individuals, not whole populations), but users should be able to present multiple identities in different contexts. You may be prepared to share a low-entropy audience identity with advertisers (e.g. "aging male geek currently resident in the UK"), but not your Facebook account or friends list with thousands of unknown companies. The user should always be in full control of who gets data about them, but not by being invisible to the rule of law.
I like the potential of the Persona concept, but not the UUID bit.
> -----Original Message-----
> From: David Singer [mailto:singer@apple.com]
> Sent: 16 January 2015 00:35
> To: Nicholas Doty
> Cc: Mike O'Neill; public-privacy (W3C mailing list)
> Subject: Re: Super Cookies in Privacy Browsing mode
>
>
> > On Jan 15, 2015, at 12:31 , Nick Doty <npdoty@w3.org> wrote:
> >
> > Hi David,
> >
> >> On Jan 12, 2015, at 3:08 PM, David Singer <singer@apple.com> wrote:
> >>
> >> The user-agent can send an optional HTTP header ‘Persona:’ whose value is a
> suitable machine-generatable distinct identifier (e.g. a UUID). If the header is
> absent, the user is operating under their default (unlabeled) persona, which is
> distinct from all the identified personas, which in turn are also distinct from each
> other. A user and their user-agent may return to a persona at any time, or
> continue using a persona for any length of time. A persona identifier is expected
> to be universally unique, not contextualized to the current user-agent or device.
> >>
> >> Servers respecting this are requested to ensure that the labeled personas
> leave no trace or influence on each other or on the unlabeled persona. For
> example, activity under one persona should not affect the ads shown under a
> different persona; any history records that the user can see should be distinct for
> each persona; and so on. (It’s OK for your unlabeled persona to be reflected in
> labeled ones, but optional; if servers wish, they can initialize a named persona
> from the default, un-named one, when they first see it.)
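The segregation the quoted proposal asks of servers can be sketched as follows (a minimal Python sketch; `PersonaStore` and its methods are hypothetical names for illustration, not part of the proposal):

```python
import uuid

UNLABELED = object()  # sentinel for the default, unlabeled persona

class PersonaStore:
    """Hypothetical server-side store keyed by the proposed 'Persona:' header.

    A missing (or malformed) header maps to the distinct unlabeled persona;
    each labeled persona gets its own records, leaving no trace on the others.
    """

    def __init__(self):
        self._records = {}  # persona key -> list of activity records

    def _key(self, headers):
        value = headers.get("Persona")
        if value is None:
            return UNLABELED
        try:
            return uuid.UUID(value)  # normalize the identifier
        except ValueError:
            return UNLABELED  # treat a malformed id as unlabeled

    def record(self, headers, item):
        # Activity is recorded only under the persona that performed it.
        self._records.setdefault(self._key(headers), []).append(item)

    def history(self, headers):
        # Each persona sees only its own history.
        return list(self._records.get(self._key(headers), []))
```

Under this sketch a server could still, as the proposal permits, initialize a named persona from the unlabeled one on first sight; that optional step is omitted here.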
> >
> > I think it’s definitely an interesting idea. I think there may be similar thinking
> behind the advertising identifier proposals, although I’m not sure the exact
> details on those.
> >
> > I share some of Mike’s concerns. Even if some servers could use a change in
> Persona header to help users separate their shopping activity and help them
> avoid seeing ads they wouldn’t like, other servers (intentionally or
> unintentionally) would use the new unique and persistent user identifier to
> conduct tracking the user might not want. That could (*could*) undermine work
> done to prevent passive fingerprinting of users without their knowledge.
> >
> > The use case seems very relevant though. Personally, I use private browsing
> modes more than anything as a way to get a new, short-term cookie jar. "What
> does this site look like when I’m not logged in?" "I’m using my friend’s computer
> but don’t want to be logged into their Facebook account while I’m browsing.”
> “Can I log into my email for a minute on your computer?” etc.
> >
> > Are there cases where a Persona identifier header would be more useful than
> just clearing or separating the “cookie jars” or other stores of local state?
>
> Yes.
>
> Here’s an example. A couple of years ago I used ‘private browsing’ on our home
> computer to look for my wife’s present. Yes, all the history, cookies, etc. were
> cleared from the device.
>
> But when I checked ‘search history’ on Google, of course, there was all the
> data! Servers are currently unaware that the user is trying to do
> something private; I am suggesting this as a way that they can be aware and
> nice, without actually impacting their business.
>
> > As in the case reported in the Ars Technica article, the implemented fix was
> just treating HSTS records as state that shouldn’t be persisted into private
> browsing mode.
>
> I am not trying to be anonymous when I am asking to be private; that’s secrecy
> and is much much harder.
>
> > As in previous “evercookie” cases, user agents that can clear all (or most) state
> mechanisms simultaneously can mitigate the concern. I think HSTS is a more
> difficult case because persisting the HSTS records is typically a way of increasing
> the user’s security against downgrade attacks.
> >
> >> Server implementers may choose how long they retain records relating to
> separate personas, just as they do for today’s default persona.
> >>
> >> This is NOT a request to stop tracking or keeping records; that is an
> orthogonal question that is covered by activities such as do-not-track, cookie
> directives, and so on. This is about giving users control of their privacy by
> controlling what gets linked to what, and exposed when.
> >>
> >> We do not think it is particularly necessary or valuable to have a machine-
> readable means of discovery over whether servers support this feature. Any
> support that they provide is an improvement on today’s experience, where
> servers are unaware that users are trying to be private. Claims of support for this
> feature are probably better conveyed in advertising or other human-readable
> ways.
> >>
> >> This feature might also be valuable for shared terminals; for example, in
> libraries, airline lounges, internet cafes and the like, a new persona can be
> minted each time the terminal is unlocked for a new session. Libraries might tie
> the persona to the library card, so users returning get re-linked to their online
> history and so on. It might also be a lightweight replacement of logging-in, for
> browsers on shared devices — a browser might have a simple way of saying
> which family member it is right now (e.g. a pull-down menu).
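The shared-terminal and library-card cases above might look like this (a Python sketch; `persona_for_session` and the card-to-persona mapping are illustrative assumptions, not part of the proposal):

```python
import uuid

# Hypothetical mapping from library card number to a stable persona id,
# so returning patrons are re-linked to their prior online history.
_card_personas = {}

def persona_for_session(card_number=None):
    """Mint a persona id when a shared terminal is unlocked for a session.

    With no card, each session gets a fresh, unlinkable UUID; with a card,
    the same persona is reused across visits.
    """
    if card_number is None:
        return str(uuid.uuid4())  # fresh persona per anonymous session
    return _card_personas.setdefault(card_number, str(uuid.uuid4()))
```

A walk-up user thus gets a new persona each time the terminal is unlocked, while a card holder is re-linked to their earlier history, as the email suggests.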
> >
> > Yeah, I think these are good use cases. Again, I expect that some of these are
> implemented now by clearing cookies / local state when a new guest logs in.
>
> But the server can still think “same UA, same IP address, same OS, same
> fingerprint => same user who’s just cleared their cookies”. We want the server
> to think “I should segregate this”.
>
> > Firefox has a “profiles” feature that can be used for that purpose (it also
> separates the add-ons, bookmarks, etc. between different users of the same
> machine): https://developer.mozilla.org/en-
> US/docs/Mozilla/Multiple_Firefox_Profiles
> >
> > To your earlier point:
> >> I have some ideas around codifying ‘private browsing mode’ and how to
> communicate ‘heh, I am trying to be private here!’ to servers. Is this a topic of
> interest to others?
> >
> > Would servers see a benefit from an indication that the user is in a private
> browsing mode (however defined, but in this case, particularly for the mode of
> not persisting state on the local machine)?
>
> The benefit is being nice to their users, and respecting their wish for privacy. The
> cost is an increase in the number of ‘users’ (the cheapest way to support this is
> to treat each persona as separate).
>
> > Maybe they could avoid downloading files or storing certain types of state —
> rather than asking users to check a box when they’re on a public computer, if
> they’re in guest/private mode the site would know that this wasn’t going to be a
> device with persistence for the user. Related: are private browsing modes in user
> agents observable by servers today?
>
> No, that’s the problem. At least for us, private browsing mode doesn’t “put you
> in a green field” or restrict what you can do. Indeed, *entry* to private
> browsing might do no more than snapshot the local state. It’s *exit* that
> discards the current state and reverts to the prior saved snapshot. For the
> server, exit gets you back to your ‘anonymous’ persona. Hence the permission
> to initialize any server state from the anonymous persona, but activity adds to
> the records under the named persona.
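The entry/exit behaviour described here can be sketched as follows (Python; `LocalState` is a hypothetical stand-in for a browser’s cookies, history, and similar local state):

```python
import copy

class LocalState:
    """Hypothetical browser-local state that is snapshotted on entering
    private browsing and restored on exit, as described above."""

    def __init__(self):
        self.data = {"cookies": {}, "history": []}
        self._snapshot = None

    def enter_private(self):
        # Entry does no more than snapshot the current local state.
        self._snapshot = copy.deepcopy(self.data)

    def exit_private(self):
        # Exit discards the current state and reverts to the prior snapshot.
        self.data = self._snapshot
        self._snapshot = None
```

Note that nothing here is visible to the server, which is exactly the gap the Persona header is meant to fill.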
>
> >
> > Thanks for sharing ideas,
>
> Thanks for discussing!
>
> > Nick
>
> David Singer
> Manager, Software Standards, Apple Inc.

> On Jan 17, 2015, at 22:58 , Mike O'Neill <michael.oneill@baycloud.com> wrote:
>
> I agree privacy does not require secrecy. Actually I think confusing them could lead to a catastrophic collision, creating a bad result for everyone; see http://blogs.wsj.com/digits/2015/01/16/obama-sides-with-cameron-in-encryption-fight/
>
> But some form of anonymity could be a useful way forward. Not totally invisible anonymity but a way to operate with user managed identity, which is why I am interested in your Persona header.
OK, I would enjoy proposals that attack this question — how to be anonymous online — but this suggestion (persona) is about control over information and where it’s visible, not about anonymity. I’d rather not mix the two.
Have you looked at the TAG draft, which (IMHO) also tries to mix ‘private browsing session’ with anonymity?
>
> In my opinion police should be able to identify and monitor wrongdoers (with a proper warrant targeting individuals, not whole populations), but users should be able to present multiple identities in different contexts. You may be prepared to share a low-entropy audience identity with advertisers (e.g. "aging male geek currently resident in the UK"), but not your Facebook account or friends list with thousands of unknown companies. The user should always be in full control of who gets data about them, but not by being invisible to the rule of law.
OK, again, my suggestion does not contain any element of secrecy or anonymity. The same information flows are happening, and the recording of information remains controlled by other factors — protocols, laws, agreements, and so on.
>
> I like the potential of the Persona concept but not the UUID bit.
OK. The UUID came from realizing that setting a boolean ‘I am in private mode’ (which has been suggested) results in unpleasant outcomes:
* perhaps you only have two personae — the private one and the public one. That’s not nice; the more private browsing you do, the larger the dataset behind that persona becomes, and it contains unrelated topics and parts of your life.
* perhaps in private browsing mode there are attempts to make you more untraceable, and so on. But that’s likely to get in the way of ‘normal operation’ on the web.
I selected UUIDs because they are easily made, and yet provide for continuity of a session. I realize that they work against trying to make me less traceable, as they are (another) unique identifier. But as I say, I think we’ll need to work on anonymous browsing separately (and it’s a much harder problem).
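On the user-agent side, this choice of a UUID per persona might look like the following (a Python sketch; `PersonaSession` is a hypothetical name, and only the ‘Persona’ header name comes from the proposal):

```python
import uuid

class PersonaSession:
    """Hypothetical user-agent session holding one persona at a time."""

    def __init__(self):
        self.persona = None  # None means the default, unlabeled persona

    def new_persona(self):
        # Easily made, yet stable for the life of the persona: a UUID.
        self.persona = str(uuid.uuid4())
        return self.persona

    def request_headers(self):
        """Headers to attach to each request under the current persona."""
        if self.persona is None:
            return {}  # unlabeled persona: send no Persona header
        return {"Persona": self.persona}
```

Because the identifier persists only as long as the user keeps the persona, continuity of a session is preserved without adding a boolean private-mode flag.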
>
>
>
>> -----Original Message-----
>> From: David Singer [mailto:singer@apple.com]
>> Sent: 16 January 2015 00:35
>> To: Nicholas Doty
>> Cc: Mike O'Neill; public-privacy (W3C mailing list)
>> Subject: Re: Super Cookies in Privacy Browsing mode
>>
>>
>>> On Jan 15, 2015, at 12:31 , Nick Doty <npdoty@w3.org> wrote:
>>>
>>> Hi David,
>>>
>>>> On Jan 12, 2015, at 3:08 PM, David Singer <singer@apple.com> wrote:
>>>>
>>>> The user-agent can send an optional HTTP header ‘Persona:’ whose value is a
>> suitable machine-generatable distinct identifier (e.g. a UUID). If the header is
>> absent, the user is operating under their default (unlabeled) persona, which is
>> distinct from all the identified personas, which in turn are also distinct from each
>> other. A user and their user-agent may return to a persona at any time, or
>> continue using a persona for any length of time. A persona identifier is expected
>> to be universally unique, not contextualized to the current user-agent or device.
>>>>
>>>> Servers respecting this are requested to ensure that the labeled personas
>> leave no trace or influence on each other or on the unlabeled persona. For
>> example, activity under one persona should not affect the ads shown under a
>> different persona; any history records that the user can see should be distinct for
>> each persona; and so on. (It’s OK for your unlabeled persona to be reflected in
>> labeled ones, but optional; if servers wish, they can initialize a named persona
>> from the default, un-named one, when they first see it.)
>>>
>>> I think it’s definitely an interesting idea. I think there may be similar thinking
>> behind the advertising identifier proposals, although I’m not sure the exact
>> details on those.
>>>
>>> I share some of Mike’s concerns. Even if some servers could use a change in
>> Persona header to help users separate their shopping activity and help them
>> avoid seeing ads they wouldn’t like, other servers (intentionally or
>> unintentionally) would use the new unique and persistent user identifier to
>> conduct tracking the user might not want. That could (*could*) undermine work
>> done to prevent passive fingerprinting of users without their knowledge.
>>>
>>> The use case seems very relevant though. Personally, I use private browsing
>> modes more than anything as a way to get a new, short-term cookie jar. "What
>> does this site look like when I’m not logged in?" "I’m using my friend’s computer
>> but don’t want to be logged into their Facebook account while I’m browsing.”
>> “Can I log into my email for a minute on your computer?” etc.
>>>
>>> Are there cases where a Persona identifier header would be more useful than
>> just clearing or separating the “cookie jars” or other stores of local state?
>>
>> Yes.
>>
>> Here’s an example. A couple of years ago I used ‘private browsing’ on our home
>> computer to look for my wife’s present. Yes, all the history, cookies, etc. were
>> cleared from the device.
>>
>> But when I checked ‘search history’ on Google, of course, there was all the
>> data! Servers are currently unaware that the user is trying to do
>> something private; I am suggesting this as a way that they can be aware and
>> nice, without actually impacting their business.
>>
>>> As in the case reported in the Ars Technica article, the implemented fix was
>> just treating HSTS records as state that shouldn’t be persisted into private
>> browsing mode.
>>
>> I am not trying to be anonymous when I am asking to be private; that’s secrecy
>> and is much much harder.
>>
>>> As in previous “evercookie” cases, user agents that can clear all (or most) state
>> mechanisms simultaneously can mitigate the concern. I think HSTS is a more
>> difficult case because persisting the HSTS records is typically a way of increasing
>> the user’s security against downgrade attacks.
>>>
>>>> Server implementers may choose how long they retain records relating to
>> separate personas, just as they do for today’s default persona.
>>>>
>>>> This is NOT a request to stop tracking or keeping records; that is an
>> orthogonal question that is covered by activities such as do-not-track, cookie
>> directives, and so on. This is about giving users control of their privacy by
>> controlling what gets linked to what, and exposed when.
>>>>
>>>> We do not think it is particularly necessary or valuable to have a machine-
>> readable means of discovery over whether servers support this feature. Any
>> support that they provide is an improvement on today’s experience, where
>> servers are unaware that users are trying to be private. Claims of support for this
>> feature are probably better conveyed in advertising or other human-readable
>> ways.
>>>>
>>>> This feature might also be valuable for shared terminals; for example, in
>> libraries, airline lounges, internet cafes and the like, a new persona can be
>> minted each time the terminal is unlocked for a new session. Libraries might tie
>> the persona to the library card, so users returning get re-linked to their online
>> history and so on. It might also be a lightweight replacement of logging-in, for
>> browsers on shared devices — a browser might have a simple way of saying
>> which family member it is right now (e.g. a pull-down menu).
>>>
>>> Yeah, I think these are good use cases. Again, I expect that some of these are
>> implemented now by clearing cookies / local state when a new guest logs in.
>>
>> But the server can still think “same UA, same IP address, same OS, same
>> fingerprint => same user who’s just cleared their cookies”. We want the server
>> to think “I should segregate this”.
>>
>>> Firefox has a “profiles” feature that can be used for that purpose (it also
>> separates the add-ons, bookmarks, etc. between different users of the same
>> machine): https://developer.mozilla.org/en-
>> US/docs/Mozilla/Multiple_Firefox_Profiles
>>>
>>> To your earlier point:
>>>> I have some ideas around codifying ‘private browsing mode’ and how to
>> communicate ‘heh, I am trying to be private here!’ to servers. Is this a topic of
>> interest to others?
>>>
>>> Would servers see a benefit from an indication that the user is in a private
>> browsing mode (however defined, but in this case, particularly for the mode of
>> not persisting state on the local machine)?
>>
>> The benefit is being nice to their users, and respecting their wish for privacy. The
>> cost is an increase in the number of ‘users’ (the cheapest way to support this is
>> to treat each persona as separate).
>>
>>> Maybe they could avoid downloading files or storing certain types of state —
>> rather than asking users to check a box when they’re on a public computer, if
>> they’re in guest/private mode the site would know that this wasn’t going to be a
>> device with persistence for the user. Related: are private browsing modes in user
>> agents observable by servers today?
>>
>> No, that’s the problem. At least for us, private browsing mode doesn’t “put you
>> in a green field” or restrict what you can do. Indeed, *entry* to private
>> browsing might do no more than snapshot the local state. It’s *exit* that
>> discards the current state and reverts to the prior saved snapshot. For the
>> server, exit gets you back to your ‘anonymous’ persona. Hence the permission
>> to initialize any server state from the anonymous persona, but activity adds to
>> the records under the named persona.
>>
>>>
>>> Thanks for sharing ideas,
>>
>> Thanks for discussing!
>>
>>> Nick
>>
>> David Singer
>> Manager, Software Standards, Apple Inc.
>
David Singer
Manager, Software Standards, Apple Inc.