tag:blogger.com,1999:blog-7267732413134453732017-08-17T11:37:33.427-07:00Scheme/Host/PortImproving web security, one post at a timeAdam Barthnoreply@blogger.comBlogger10125tag:blogger.com,1999:blog-726773241313445373.post-47072410121546687152011-12-11T17:19:00.001-08:002011-12-12T21:12:12.521-08:00RFC 6454 and RFC 6455Today, the IETF published two documents: RFC 6454, <a href="http://www.rfc-editor.org/rfc/rfc6454.txt">The Web Origin Concept</a>, and RFC 6455, <a href="http://www.rfc-editor.org/rfc/rfc6455.txt">The WebSocket Protocol</a>. &nbsp;Both these documents started out as sections in the <a href="http://www.whatwg.org/specs/web-apps/current-work/">HTML5 specification</a>, which has been a hotbed of standards activity over the past few years, but they took somewhat different paths through the standards process.<br /><br />RFC 6454's path through the IETF process was mostly smooth sailing. &nbsp;The document defines the same-origin policy, which is widely implemented and fairly cut-and-dried. &nbsp;In addition to the comparison and serialization algorithms we inherited from the <a href="http://www.whatwg.org/">WHATWG</a>, the <a href="http://tools.ietf.org/wg/websec/">websec</a> working group added a definition of the Origin HTTP header, which is used by <a href="http://www.w3.org/TR/cors/">CORS</a>, and a broad description of the principles behind the same-origin policy.<br /><br />RFC 6455's path was less smooth. &nbsp;The protocol underwent several major revisions in the WHATWG before reaching the IETF. &nbsp;The protocol was fairly mature by the time it reached the&nbsp;<a href="http://tools.ietf.org/wg/hybi/">hybi</a> working group and was implemented in WebKit and Firefox. &nbsp;Unfortunately, some details of the protocol offended HTTP purists, who wanted the protocol handshake to comply with HTTP. 
&nbsp;The working group polished up these details, leading to churn in the protocol.<br /><br />Around this time, some colleagues and I were studying the interaction between <a href="http://www.adambarth.com/papers/2009/jackson-barth-bortz-shao-boneh-tweb.pdf">DNS rebinding</a> and transparent proxies. &nbsp;It occurred to us that folks had analyzed the end-to-end security properties of WebSockets but less effort had been expended analyzing the interaction between WebSockets and transparent proxies. &nbsp;We studied these issues and found <a href="http://www.adambarth.com/papers/2011/huang-chen-barth-rescorla-jackson.pdf">an interesting vulnerability</a>. &nbsp;We presented our findings to the working group, which updated the protocol to fix the issue.<br /><br />One perspective on these events is that they are a success. &nbsp;We found and fixed a protocol-level vulnerability before the protocol was deployed widely. &nbsp;Another perspective is that <a href="http://www.xtranormal.com/watch/7991991/web-sockets-we-are-the-first">we annoyed early adopters</a> by polishing unimportant protocol details. &nbsp;My view is that this debate boils down to whether you really believe that <a href="http://www.jwz.org/doc/worse-is-better.html">worse is better</a>. &nbsp;For my part, I believe we had a net positive impact, but I hope we can be less disruptive to early adopters when we improve security in the future.Adam Barthhttps://plus.google.com/110402179355010562902noreply@blogger.com612tag:blogger.com,1999:blog-726773241313445373.post-17902650171770884232011-12-03T12:20:00.001-08:002011-12-03T13:28:14.982-08:00Timing Attacks on CSS Shaders<a href="http://dvcs.w3.org/hg/FXTF/raw-file/tip/custom/index.html">CSS Shaders</a> is a new feature folks from Adobe, Apple, and Opera have proposed to the W3C <a href="http://www.w3.org/Graphics/fx/">CSS-SVG Effects Task Force</a>. 
&nbsp;Rather than being limited to pre-canned effects, such as gradients and drop shadows, CSS Shaders would let web developers apply arbitrary OpenGL shaders to their content. &nbsp;That makes for <a href="http://blogs.adobe.com/jnack/2011/10/css-shaders-hell-yeah.html">some really impressive demos</a>. &nbsp;Unfortunately, CSS Shaders has a security problem.<br /><br />To understand the security problem with CSS Shaders, it's helpful to recall a recent security issue with <a href="http://en.wikipedia.org/wiki/WebGL">WebGL</a>. &nbsp;Similar to CSS Shaders, WebGL lets developers use OpenGL shaders in their web applications. &nbsp;Originally, WebGL let these shaders operate on arbitrary textures, including textures fetched from other <a href="http://www.schemehostport.com/2011/10/foundations-origin.html">origins</a>. &nbsp;Unfortunately, this design was vulnerable to a <a href="http://www.contextis.com/resources/blog/webgl/">timing attack</a>&nbsp;because the runtime of OpenGL shaders can depend on their inputs.<br /><br />Using the shader code below, James Forshaw built a <a href="http://www.contextis.co.uk/resources/blog/webgl/poc/index.html">compelling proof-of-concept attack</a> that extracted pixel values from a cross-origin image using WebGL:<br /><blockquote class="tr_bq"><span style="font-family: 'Courier New', Courier, monospace;">for (int i = 0; i &lt;= 1024; i += 1) {<br />&nbsp; // Exit loop early depending on pixel brightness<br />&nbsp; currCol.r -= 1.0;<br />&nbsp; if (currCol.r &lt;= 0.0) {<br />&nbsp; &nbsp; currCol.r = 0.0;<br />&nbsp; &nbsp; break;<br />&nbsp; }<br />} </span></blockquote>Timing attacks are difficult to mitigate because once the sensitive data is present in the timing channel it's very difficult to remove. &nbsp;Using techniques like bucketing, we can limit the number of bits an attacker can extract per second, but, given enough time, the attacker can still steal the sensitive data. 
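To make the bucketing idea concrete, here is a toy sketch (bucketTime is an invented name, not any real browser API): quantizing every timestamp the platform exposes up to a coarse bucket boundary hides small differences in shader runtime.

```javascript
// Toy sketch of "bucketing" a timing channel. A browser could
// quantize any timestamp it exposes so that fine-grained
// differences in rendering time are rounded away.
// bucketTime is a hypothetical helper, not a real API.
function bucketTime(rawMillis, bucketMillis) {
  // Round up to the next bucket boundary so an operation can
  // never appear faster than it actually was.
  return Math.ceil(rawMillis / bucketMillis) * bucketMillis;
}

// Two renders that differ by less than one bucket become
// indistinguishable to the attacker.
bucketTime(16.4, 50); // -> 50
bucketTime(31.9, 50); // -> 50
```

Note that, as described above, bucketing only limits the attacker's bit rate; it does not remove the sensitive data from the channel.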
&nbsp;The best solution is the one WebGL adopted:&nbsp;prevent sensitive data from entering the timing channel. &nbsp;WebGL accomplished this by requiring cross-origin textures to be authorized via <a href="http://www.w3.org/TR/cors/">Cross-Origin Resource Sharing</a>.<br /><br />There's a direct application of this attack to CSS Shaders. &nbsp;Because web sites are allowed to display content that they are not allowed to read, an attacker can use a Forshaw-style CSS shader to read confidential information via the timing channel. &nbsp;For example, a web site could use CSS shaders to extract your identity from an embedded <a href="http://developers.facebook.com/docs/reference/plugins/like/">Facebook Like button</a>. &nbsp;More subtly, a web site could extract your browsing history, bypassing <a href="http://dbaron.org/mozilla/visited-privacy">David Baron's defense against history sniffing</a>.<br /><br />The authors of the CSS Shaders proposal are aware of these issues. &nbsp;In the <a href="http://dvcs.w3.org/hg/FXTF/raw-file/tip/custom/index.html#security-considerations">Security Considerations section of their proposal</a>, they write:<br /><blockquote class="tr_bq"><i>However, it seems difficult to mount such an attack with CSS shaders because the means to measure the time taken by a cross-domain shader are limited.</i></blockquote>Now, I don't have a proof-of-concept attack, but this claim is fairly dubious. &nbsp;The history of <a href="http://crypto.stanford.edu/~dabo/papers/ssl-timing.pdf">timing attacks</a>, including <a href="http://theory.stanford.edu/~dabo/papers/webtiming.pdf">other web timing attacks</a>, teaches us that even subtle leaks in the timing channel can lead to practical attacks. &nbsp;Given that we've seen practical applications of the WebGL version of this attack, it seems quite likely that CSS Shaders are vulnerable to timing attacks.<br /><br />Specifically, there are a number of mechanisms for timing rendering. 
&nbsp;For example, <a href="https://developer.mozilla.org/en/DOM/Animations_using_MozBeforePaint">MozBeforePaint</a> and <a href="https://developer.mozilla.org/en/Gecko-Specific_DOM_Events">MozAfterPaint</a> provide a mechanism for measuring paint times directly. &nbsp;Also, the behavior of <a href="http://www.w3.org/TR/animation-timing/">requestAnimationFrame</a> contains information about rendering times.<br /><br />Without a proof-of-concept attack we cannot be completely certain that these attacks on CSS Shaders are practical, but waiting for proof-of-concept attacks before addressing security concerns isn't a path that leads to security.Adam Barthhttps://plus.google.com/110402179355010562902noreply@blogger.com682tag:blogger.com,1999:blog-726773241313445373.post-10673571661519461342011-11-20T12:50:00.001-08:002011-11-22T20:23:39.436-08:00Referer (sic)One of the more astonishing facets of the web platform is the <span style="font-family: 'Courier New', Courier, monospace;">Referer</span> header. &nbsp;Whenever you click a link from one web site to another, the request that fetches the web page from the second web site contains the URL of the first web site. &nbsp;This behavior causes both security and privacy problems: <br /><ol><li><i>Security</i>. &nbsp;Despite copious warnings, developers often include secrets in URLs. &nbsp;For example, to prevent <a href="http://en.wikipedia.org/wiki/Cross-site_request_forgery">Cross-Site Request Forgery (CSRF)</a>, developers often use <a href="http://www.adambarth.com/papers/2008/barth-jackson-mitchell-b.pdf">secret tokens</a>, which have a nasty habit of leaking into URLs and then into <span style="font-family: 'Courier New', Courier, monospace;">Referer</span> headers sent to other sites.</li><li><i>Privacy</i>. 
&nbsp;The <span style="font-family: 'Courier New', Courier, monospace;">Referer</span> header is even worse for privacy, for example, leaking search terms from <a href="http://www.google.com/">your favorite search engine</a> to the web sites you visit. &nbsp;As another example, <a href="http://www.facebook.com/notes/facebook-engineering/protecting-privacy-with-referrers/392382738919">Facebook&nbsp;accidentally&nbsp;leaked user identities to advertisers</a> via the <span style="font-family: 'Courier New', Courier, monospace;">Referer</span> header.</li></ol>The <a href="http://en.wikipedia.org/wiki/Principle_of_least_astonishment">principle of least astonishment</a> tells us we should remove this "feature", but, unfortunately, we can't just remove the <span style="font-family: 'Courier New', Courier, monospace;">Referer</span> header from the platform. &nbsp;Too many people rely on the <span style="font-family: 'Courier New', Courier, monospace;">Referer</span> header for too many different purposes. &nbsp;For example, bloggers rely on the Referer header to generate <a href="http://en.wikipedia.org/wiki/Trackback">trackback</a> links, but that's just the tip of the iceberg.<br /><br />As a first step, I wrote up a <a href="http://wiki.whatwg.org/wiki/Meta_referrer">short proposal</a> for a mechanism web sites can use to suppress or truncate the <span style="font-family: 'Courier New', Courier, monospace;">Referer</span> header: <br /><blockquote><span style="font-family: 'Courier New', Courier, monospace;">&lt;meta name="referrer" content="never"&gt;</span><br /><span style="font-family: 'Courier New', Courier, monospace;">&lt;meta name="referrer" content="origin"&gt;</span></blockquote>One subtlety in the design is including the "<span style="font-family: 'Courier New', Courier, monospace;">always</span>" option. 
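To illustrate the intended semantics of these policies, here is a hedged sketch in JavaScript (refererFor is an invented helper, not part of the proposal or of any browser):

```javascript
// Sketch of what a browser might send as the Referer header
// under the proposed policies. refererFor is hypothetical.
function refererFor(policy, documentUrl) {
  const url = new URL(documentUrl);
  switch (policy) {
    case "never":
      return null;             // suppress the header entirely
    case "origin":
      return url.origin + "/"; // truncate to scheme://host:port/
    case "always":
    default:
      return url.href;         // send the full URL
  }
}

refererFor("origin", "https://example.com/search?q=secret");
// -> "https://example.com/" (the secret query string never leaves)
```

Under "origin", the search terms and any secret tokens in the path or query are stripped before the request crosses the site boundary.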
&nbsp;The main reason to include this option is to make it easier for us to later block the <span style="font-family: 'Courier New', Courier, monospace;">Referer</span> header by default. &nbsp;The <span style="font-family: 'Courier New', Courier, monospace;">always</span> option gives sites an escape valve to turn the <span style="font-family: 'Courier New', Courier, monospace;">Referer</span> header back on, if needed.<br /><br />This mechanism is <a href="http://trac.webkit.org/changeset/100895">now implemented in WebKit</a> and will hopefully be <a href="https://bugzilla.mozilla.org/show_bug.cgi?id=704320">implemented in Firefox</a> and other browsers soon.Adam Barthhttps://plus.google.com/110402179355010562902noreply@blogger.com490tag:blogger.com,1999:blog-726773241313445373.post-47600965008908342442011-11-07T00:24:00.000-08:002011-11-07T00:24:35.518-08:00How I learned to stop worrying and embrace Content-Security-PolicyThis week, the W3C Web Application Security working group held its first face-to-face meeting at <a href="http://www.w3.org/2011/11/TPAC/">TPAC</a>, the W3C's annual technical meeting. &nbsp;The main topic of discussion was moving&nbsp;<a href="http://dvcs.w3.org/hg/content-security-policy/raw-file/tip/csp-specification.dev.html">Content-Security-Policy</a> (CSP) from an unofficial draft onto the standards track. &nbsp;I'm actually pretty excited about CSP now, but I wasn't always its biggest fan.<br /><br />CSP has a bunch of features, but the core value proposition (in my view) is that web applications can whitelist where their scripts come from. 
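As a toy illustration of that whitelisting (this is the core idea only, not CSP's actual parsing or matching algorithm):

```javascript
// Toy model of CSP's core value proposition: a policy
// whitelists the origins scripts may come from, and anything
// else is refused. The policy shape here is invented for
// illustration; real CSP policies are header strings.
const policy = { "script-src": ["https://static.example.com"] };

function scriptAllowed(policy, scriptUrl) {
  const allowed = policy["script-src"] || [];
  return allowed.includes(new URL(scriptUrl).origin);
}

scriptAllowed(policy, "https://static.example.com/app.js"); // -> true
scriptAllowed(policy, "https://evil.example.net/x.js");     // -> false
```

An injected inline script has no source URL at all, which is exactly why the browser can refuse it once the application has committed to a whitelist.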
&nbsp;The main drawback of this approach is that authors need to remove all inline script from their web application because the browser doesn't know whether an inline script is part of the application or whether it was injected as part of a cross-site scripting (XSS) attack.<br /><br />Initially, I was skeptical about CSP for two reasons:<br /><ol><li>CSP has a bunch of functionality unrelated to mitigating XSS, which makes the feature more complicated than necessary. &nbsp;Because I'm a big believer in <a href="http://en.wikipedia.org/wiki/Minimum_viable_product">minimum viable products</a>, my initial reaction was to remove all the extra functionality and focus on the core use case for the first iteration.</li><li>I was worried that removing all the inline scripts from a web application would be too hard because web applications use inline scripts frequently. &nbsp;Joel Weinberger, Dawn Song, and I even <a href="http://www.adambarth.com/papers/2011/weinberger-barth-song.pdf">wrote a paper</a> exploring that issue.</li></ol>I still view those criticisms as fair, but I worry less about those issues. &nbsp;By and large, the early adopters I've worked with have been able to use CSP effectively, which is some evidence that the extra complexity isn't the end of the world. &nbsp;I've also now built and retrofitted a number of non-trivial web applications to avoid inline script. &nbsp;It's a fair amount of work, certainly, but not an insurmountable task.<br /><br />Thanks to some great work by <a href="https://plus.google.com/113575960253398010351/posts">Thomas Sepez</a>, Chrome now uses CSP in the vast majority of its HTML-based user interfaces. &nbsp;Over the years, there has been a steady trickle of XSS vulnerabilities in these interfaces, which is problematic because these interfaces (such as the browser's settings interface) have powerful privileges. 
&nbsp;Now that Chrome uses CSP extensively, we can be confident that we've mitigated this entire class of vulnerabilities.<br /><br />My perspective on CSP now is that it makes a serious dent in one of the biggest problems in web security. &nbsp;It's definitely not a silver bullet (nothing ever is), but we should invest in CSP because we should be working on the big problems. &nbsp;If we're not willing to dream big, we should just pack up our tent and go home.Adam Barthhttps://plus.google.com/110402179355010562902noreply@blogger.com238tag:blogger.com,1999:blog-726773241313445373.post-50999444487241984562011-10-30T18:16:00.000-07:002011-10-30T18:16:23.035-07:00The Priority of ConstituenciesLawrence Lessig wrote in&nbsp;<a href="http://harvardmagazine.com/2000/01/code-is-law.html">Code is Law</a>&nbsp;that the choices we make in writing code embody our values. &nbsp;This observation is especially true when building a browser because the browser mediates interactions between many distinct entities. &nbsp;Because the browser's security policy is at the heart of mediating those interactions, we should ask ourselves what values the browser's security policy embodies.<br /><br />One key value is the&nbsp;<i>priority of&nbsp;constituencies</i>, which is enshrined in the <a href="http://www.w3.org/TR/html-design-principles/#priority-of-constituencies">HTML Design Principles</a>:<br /><blockquote class="tr_bq">In case of conflict, consider users over authors over implementors over specifiers over theoretical purity.</blockquote>To better understand this principle, let's consider a specific example: whether the browser's password manager should be enabled for a given web site.<br /><br />The password manager is a source of conflict for these competing interests. &nbsp;Implementors (myself included) believe that password managers improve security by reducing the costs of using a large number of more complex passwords. &nbsp;Many banks, however, disagree. 
&nbsp;They believe that password managers reduce security because passwords stored in password managers can be stolen by miscreants.<br /><br />How do browser vendors resolve this conflict? &nbsp;By default, we enable the password manager. &nbsp;Because users have a higher priority than implementors (i.e., browser vendors), browsers let users turn the password manager off. &nbsp;Because authors (i.e., site operators) also have a higher priority than browser vendors, browsers let authors disable the password manager on their own web sites by setting <span style="font-family: 'Courier New', Courier, monospace;">autocomplete=off</span>.<br /><br />The careful reader will have noticed that the scheme above violates the priority of constituencies in one case. &nbsp;What if the user wants to use the password manager on a web site that sets&nbsp;<span style="font-family: 'Courier New', Courier, monospace;">autocomplete=off</span>? &nbsp;Because users have a higher priority than authors, the browser should resolve this conflict in favor of the user. &nbsp;Typically, browsers handle this case via their extension system. &nbsp;For example, the <span style="font-family: 'Courier New', Courier, monospace;"><a href="https://chrome.google.com/webstore/detail/ecpgkdflcnofdbbkiggklcfmgbnbabhh">autocomplete=on</a></span> extension lets users override authors who want to disable the password manager.<br /><br />How, then, should we respond to web site operators who wish to block or override these sorts of extensions? &nbsp;As long as we believe that these extensions faithfully enact the user's will, we're hard-pressed to let authors block these extensions because that would violate the priority of constituencies. 
&nbsp;Instead, we ask authors to be humble and accept the user as sovereign.Adam Barthhttps://plus.google.com/110402179355010562902noreply@blogger.com541tag:blogger.com,1999:blog-726773241313445373.post-86982805097730551852011-10-22T20:13:00.000-07:002011-10-23T11:05:40.844-07:00X-Script-Origin, we hardly knew yeOn Thursday, Robert Kieffer filed an interesting bug in both the <a href="https://bugs.webkit.org/show_bug.cgi?id=70574">WebKit</a> and <a href="https://bugzilla.mozilla.org/show_bug.cgi?id=696301">Mozilla</a> bug trackers:<br /><blockquote>WebKit and Mozilla browsers redact the information passed to window.onerror for exceptions that occur in scripts that originate from external domains. Unfortunately this means that for large institutions (like us here at Facebook) that use CDNs to host static script resources, we are unable to collect useful information about errors that occur in production.</blockquote>Why do browsers redact this information in the first place? &nbsp;The answer is actually a combination of two factors: <br /><ol><li>Although browsers generally prevent one origin from reading information from another origin, the script element, like the image element, is a bit of a loophole: an origin is allowed to <i>execute</i> a script from any other origin. &nbsp;(This exception has wide-ranging implications on both security and commerce on the web.)</li><li>The script element ignores the MIME type of resources it loads. &nbsp;That means if a web page tries to load an HTML document or an image with the script element, the browser will happily request the resource and attempt to execute it as a script.</li></ol>At first blush, these two facts would seem to imply a serious security vulnerability. 
&nbsp;Certainly executing a script leaks a great deal of information about the script, and ignoring the MIME type means a malicious web site can cause the browser to execute any resource, regardless of the sensitivity of the resource (e.g., an attacker can execute the HTML that represents your email inbox as if it were JavaScript).<br /><br />Fortunately, we're able to snatch security from the jaws of vulnerability because of a happy coincidence: resources that contain sensitive information happen to <i>fail to parse</i> as valid JavaScript (at least usually). &nbsp;For example, your email inbox probably consists of HTML that quickly throws a SyntaxError exception when executed as JavaScript. &nbsp;(The consequences of expanding JavaScript to <a href="http://en.wikipedia.org/wiki/ECMAScript_for_XML">include HTML-like syntax</a>&nbsp;are <a href="http://scarybeastsecurity.blogspot.com/2009/05/more-plausible-e4x-attack.html">an exercise for the reader</a>.)<br /><br />Returning to our original question, we now understand that (in an attack scenario) sensitive information actually flows through the JavaScript virtual machine, where it generates an exception. &nbsp;That exception is then processed by window.onerror! &nbsp;If browsers did not redact the information they give to window.onerror, they would potentially leak sensitive information to malicious web sites.<br /><br />How, then, can we address Robert's use case? &nbsp;Certainly we would like web sites like Facebook to be able to diagnose errors in their scripts. &nbsp;Robert suggests an "X-Script-Origin" HTTP header attached to the script that would indicate which origins are authorized to see exceptions generated by the script. &nbsp;Although that would work, that solution seems overly specific to the problem at hand.<br /><br />A more general solution is for the server hosting the script to inform the browser which origins are authorized to learn sensitive information contained in the script. 
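The happy coincidence mentioned earlier, that sensitive HTML usually fails to parse as JavaScript, is easy to check for yourself. In this sketch, new Function stands in for the browser's script compilation step (failsAsScript is an invented name):

```javascript
// Does a resource body throw a SyntaxError when compiled as a
// script? A typical HTML document fails almost immediately.
function failsAsScript(body) {
  try {
    new Function(body); // compile, but don't run
    return false;       // parsed fine as JavaScript
  } catch (e) {
    return e instanceof SyntaxError;
  }
}

failsAsScript("<!DOCTYPE html><p>Your inbox</p>"); // -> true
failsAsScript("var balance = 42;");                // -> false
```

The whole cross-origin script loophole rests on this accident of syntax, which is exactly why it deserves the word "coincidence".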
&nbsp;(Typically servers would authorize every origin because scripts are usually the same for every user). &nbsp;We already have a general mechanism for servers to make such assertions: <a href="http://www.w3.org/TR/cors/">Cross-Origin Resource Sharing</a>. &nbsp;We can address Robert's use case by adding a crossorigin attribute to the script element that functions similarly to <a href="http://www.whatwg.org/specs/web-apps/current-work/#attr-img-crossorigin">the crossorigin attribute on the image element</a>. &nbsp;Once the embedding origin is authorized to read the contents of the script, there's no longer any need to redact the exceptions delivered to window.onerror.Adam Barthhttps://plus.google.com/110402179355010562902noreply@blogger.com264tag:blogger.com,1999:blog-726773241313445373.post-32216212849764918832011-10-15T19:49:00.000-07:002011-10-15T19:55:17.003-07:00Local URIs are more equal than others (Part 1)On Wednesday, Cedric Sodhi asked the WebKit development mailing list <a href="https://lists.webkit.org/pipermail/webkit-dev/2011-October/018245.html">why WebKit restricts access to local URIs</a>. &nbsp;This post describes one of the reasons why local URIs are more equal than other URIs. &nbsp;In a future post, we'll revisit this issue when we discuss how local URIs (e.g., <span style="font-family: 'Courier New', Courier, monospace;">file:///Users/abarth/tax2010.pdf</span>) don't really fit cleanly into the web security model.<br /><br />Although the web platform largely isolates different origins from each other, there are a number of "leaks" whereby one origin can extract information from another origin. &nbsp;For example, browsers let one origin embed images from another origin, leaking information such as the height and width of the images across origins. 
&nbsp;These leaks are often at the core of security vulnerabilities in the platform.<br /><br />This same leak exists, of course, between local origins (e.g., those with <span style="font-family: 'Courier New', Courier, monospace;">file</span> URIs) and non-local origins (e.g., those with <span style="font-family: 'Courier New', Courier, monospace;">http</span> or <span style="font-family: 'Courier New', Courier, monospace;">https</span> URIs). &nbsp;What kind of information could a web site extract from your local system using this leak?<br /><br />On my laptop, I have Skype installed, which means that, on my laptop, the URI below resolves to a PNG image with a particular height and width:<br /><blockquote><span style="font-family: 'Courier New', Courier, monospace;">file:///Applications/Skype.app/Contents/Resources/SmallBlackDot.png</span></blockquote>If I visit a web site and the browser doesn't address this leak, the web site could determine whether I have Skype installed by attempting to load that URI as an image. &nbsp;On my laptop, the image element would have a certain well-known height and width, but on a laptop without Skype installed, the browser would fire the error event.<br /><br />Returning to Cedric's question, why do browser vendors restrict access to local URIs but not to non-local URIs if both have the same information leak? &nbsp;I would prefer to close this leak in both cases, but many web sites embed cross-origin images, e.g. from <a href="http://en.wikipedia.org/wiki/Content_delivery_network">content delivery networks</a>. 
&nbsp;If we were adding the <span style="font-family: 'Courier New', Courier, monospace;">&lt;img&gt;</span> tag today, we would probably require servers to opt in to cross-origin embedding using the <a href="http://www.w3.org/TR/cors/">Cross-Origin Resource Sharing</a>&nbsp;protocol.<br /><br />Fortunately, very few web sites include images (or other resources) from local URIs (especially after we removed the full path from <span style="font-family: 'Courier New', Courier, monospace;">&lt;input type="file"&gt;</span>, but that's a story for another time). &nbsp;That means browsers can block all loads of local resources by non-local origins without making users sad, preventing web sites from snooping on your local file system.Adam Barthhttps://plus.google.com/110402179355010562902noreply@blogger.com339tag:blogger.com,1999:blog-726773241313445373.post-9666001735249611302011-10-09T00:13:00.000-07:002011-10-09T00:14:58.487-07:00Integrity for sessionStorageThere are many different ways to think about security. &nbsp;I prefer the following approach:<br /><ol><li>Define a set of <i>threat models</i> that describe the attacker's&nbsp;capabilities. &nbsp;For example, the "man-in-the-middle" is a classic threat model in network security that represents an attacker who has complete control over the network but who has no control over network endpoints.</li><li>Identify a set of <i>security properties</i> that we wish our system to achieve. &nbsp;Defining good security&nbsp;properties&nbsp;is a tricky business, and we're mostly going to wave our hands in this blog. &nbsp;If you'd like an example, you should imagine something like "the attacker doesn't learn the contents of the user's email."</li><li>Determine whether an attacker with the capabilities described in the threat model <i>could possibly defeat</i> any of the security properties of our system. 
&nbsp;We usually assume that the attacker knows exactly how our system works (e.g., because attackers can read W3C specifications).</li></ol><div>This approach tends to be somewhat conservative in the sense that we underestimate whether our system is secure. &nbsp;That's helpful when thinking defensively because being conservative pushes us to design systems that are robustly secure rather than systems that are secure by some happy accident.</div><div><br /></div><div>So far, this post has been very abstract, but let's get concrete. &nbsp;Recently, I've been corresponding with a number of Firefox developers about <a href="https://bugzilla.mozilla.org/show_bug.cgi?id=495337">Firefox Bug&nbsp;495337</a>. &nbsp;There are a number of technical details, but the issue boils down to the three factors above:</div><div><ol><li><b>Threat model.</b>&nbsp; We're concerned with an <i>active network attacker</i>. &nbsp;(I need to write a "foundations" post introducing the important threat models in web security, but I didn't want to write too many foundations posts in a row.) &nbsp;Essentially, an active network attacker has full control over the network (e.g., they can intercept and spoof HTTP requests and responses), but has very little power over secure network connections (e.g., they can't mess with TLS connections).</li><li><b>Security property.</b> &nbsp;Here's where things get interesting. &nbsp;What are appropriate security properties for <a href="http://www.w3.org/TR/webstorage/">sessionStorage</a> (an API for semi-persistently storing data in the browser)? 
&nbsp;I claim that the data <a href="http://www.schemehostport.com/2011/10/foundations-origin.html">an origin</a> stores in sessionStorage should have <i>confidentiality</i> and <i>integrity</i>&nbsp;(i.e., other origins should not be able to learn or to alter data stored in sessionStorage).</li><li><b>Could possibly defeat.</b> &nbsp;That leaves us with the question of whether an active network attacker could possibly defeat the confidentiality or integrity of data in sessionStorage. &nbsp;I claim that such a thing is possible in Firefox (via <a href="https://bugzilla.mozilla.org/show_bug.cgi?id=495337#c25">a somewhat elaborate sequence of steps</a>) because Firefox's behavior deviates slightly from the specification. &nbsp;Specifically, in some circumstances that an attacker can provoke, Firefox considers only the <i>host</i> portion of the origin, ignoring the <i>scheme</i> and the <i>port</i>. &nbsp;By ignoring the scheme, Firefox lets a network attacker leverage his or her ability to control HTTP to disrupt the integrity of HTTPS data in sessionStorage.</li></ol><div>Does this represent a "real" security problem? &nbsp;Well, that's a hard question to answer. &nbsp;Certainly this issue makes it harder to understand the security of systems that use sessionStorage. &nbsp;Instead of being able to use clean abstractions like confidentiality, integrity, and origin, we need to understand more details of how exactly an attacker can subtly manipulate sessionStorage.</div></div><div><br /></div><div>Ultimately, complexity is the enemy of security. 
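To see why ignoring the scheme matters, here is a toy model of origin-keyed storage (all names here are invented, and this is a simulation, not the sessionStorage API): keying entries by host alone lets a network attacker's HTTP page clobber data an HTTPS page stored, while keying by the full origin does not.

```javascript
// Toy model of origin-keyed storage. keyFn decides what part
// of the origin identifies a storage compartment.
function makeStore(keyFn) {
  const data = new Map();
  return {
    set(origin, k, v) { data.set(keyFn(origin) + "|" + k, v); },
    get(origin, k) { return data.get(keyFn(origin) + "|" + k); },
  };
}

// Buggy variant: host only, as in the Firefox behavior described above.
const byHostOnly = makeStore((o) => new URL(o).host);
byHostOnly.set("https://bank.example", "token", "secret");
byHostOnly.set("http://bank.example", "token", "evil"); // network attacker
byHostOnly.get("https://bank.example", "token");        // -> "evil"

// Per the spec: the full scheme/host/port origin.
const byFullOrigin = makeStore((o) => new URL(o).origin);
byFullOrigin.set("https://bank.example", "token", "secret");
byFullOrigin.set("http://bank.example", "token", "evil");
byFullOrigin.get("https://bank.example", "token");      // -> "secret"
```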
&nbsp;Applied judiciously, threat models and security properties can help you understand the security of your system in&nbsp;simpler&nbsp;terms.</div>Adam Barthhttps://plus.google.com/110402179355010562902noreply@blogger.com459tag:blogger.com,1999:blog-726773241313445373.post-58643095613959683282011-10-01T16:08:00.000-07:002011-10-02T08:25:39.072-07:00Foundations: OriginEvery discussion of the security architecture of the web platform should begin with the notion of an origin. &nbsp;An origin is the basic unit of isolation in the web platform. &nbsp;Every object in the browser is associated with an origin, which defines its security context. &nbsp;When a script running in one origin tries to access an object, the browser checks whether the script's origin has access to the object's origin.<br /><br />So what is an origin? &nbsp;Simply put, an origin is the scheme, host, and port of the URL associated with the object. &nbsp;(Hence the name of this blog.) &nbsp;For example, if you're viewing an article on New York Times in your browser, that article (and all of its associated objects) are in the <span style="font-family: 'Courier New', Courier, monospace;">http://www.nytimes.com</span> origin. 
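In code, the comparison is simple. This sketch (sameOrigin is an invented helper) uses the URL parser, whose origin property already folds default ports in:

```javascript
// Two URLs are same-origin when scheme, host, and port all match.
function sameOrigin(a, b) {
  return new URL(a).origin === new URL(b).origin;
}

sameOrigin("http://www.nytimes.com/article",
           "http://www.nytimes.com/other");   // -> true
sameOrigin("http://www.nytimes.com/",
           "https://www.nytimes.com/");       // -> false (scheme differs)
```

Note that an explicit default port is equivalent to no port at all: http on port 80 and plain http are the same origin.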
&nbsp;This blog exists in the <span style="font-family: 'Courier New', Courier, monospace;">http://www.schemehostport.com</span> origin, which means there is a security boundary between this blog and the New York Times.&nbsp; Of course, there are many subtleties to that security boundary, which we'll get to in due course.<br /><br />Many folks have written about the browser's origin-based security model, which is often referred to as the same-origin policy because, in the usual case, the browser allows one object to access another if the two objects are in "the same" origin.<br /><br />If you'd like to learn more about the same-origin policy, one popular reference is <a href="https://developer.mozilla.org/en/Same_origin_policy_for_JavaScript">Jesse Ruderman's wiki page</a>, but, despite origin's central role in web security, there isn't a specification explaining how the same-origin policy works! &nbsp;To fix that,&nbsp;I've been working with the <a href="http://tools.ietf.org/wg/websec/">IETF's websec working group</a> to write <a href="http://tools.ietf.org/html/draft-ietf-websec-origin">a specification of the web origin concept</a>. &nbsp;There are still a handful of issues to address, but hopefully we'll finish working through the IETF process soon.Adam Barthhttps://plus.google.com/110402179355010562902noreply@blogger.com338tag:blogger.com,1999:blog-726773241313445373.post-692737677657043932011-10-01T14:56:00.000-07:002011-10-01T14:56:59.480-07:00Welcome, dear readerI've decided to start blogging again. &nbsp;This blog is about the security architecture of the web platform, where we are today, how we got here, and where we're going tomorrow. &nbsp;My goal is to write one in-depth, technical post a week.<br /><br />I'm going to focus more on defense than offense, which means I won't be posting about the newest clever attack techniques (at least not that often). 
&nbsp;Instead, I'll be taking you behind the scenes and showing you how we make the tough calls in securing the web platform.<br /><br />Please feel encouraged to give me feedback, both about what works and what doesn't. &nbsp;I hope you enjoy reading!Adam Barthhttps://plus.google.com/110402179355010562902noreply@blogger.com