I have a VM running Ubuntu 14.04, but with fvwm rather than the normal Ubuntu desktop environment, and trying to use WebRTC runs into two problems with sandboxing: writing to /run/user/NNN/dconf/user (which seems to have something to do with WebRTC invoking the Necko proxy server component and trying to access the systemwide proxy config), and trying to create the directory /run/user/NNN/pulse (which already exists, because pulseaudio is already running). Here NNN is the uid in decimal.
I don't know what the security implications are of allowing write/create/etc. access to this /run/user directory in general or even to those two places in particular, but it's probably bad. I also don't know why it's broken for me and not lots of other users.
I still need to see if this reproduces on any version of stock Ubuntu, but I don't have an unmodified 14.04 VM lying around. It *does* reproduce with the official Nightly build as well as a local m-c build.
The test page I'm using is, as usual, https://mozilla.github.io/webrtc-landing/pc_test.html; when affected, clicking the Start button doesn't even prompt for input devices.

It's not reproducible on regular Ubuntu, and the difference is that normal Ubuntu autostarts pulseaudio with a wrapper that loads the `x11-publish` module (among others), which publishes the server info as X11 root window properties; this seems to make the pulseaudio client library connect directly instead of first trying to create the directory that already exists.
I do still see errors about dconf on my setup, and don't with regular Ubuntu, but that doesn't break WebRTC; not sure what's going on with that.
Also, this directory is available in the environment as XDG_RUNTIME_DIR, so if it does wind up needing special treatment it's probably better to use that than hard-code /run/user.
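As a sketch of that last point (hypothetical helper, not existing code in the tree), resolving the directory from the environment with the conventional fallback would look something like:

```python
import os

def runtime_dir():
    """Return the per-user runtime directory.

    Prefer XDG_RUNTIME_DIR, which the session manager sets; fall back
    to the conventional /run/user/<uid> only when it's unset, rather
    than hard-coding that path unconditionally.
    """
    path = os.environ.get("XDG_RUNTIME_DIR")
    if path:
        return path
    return "/run/user/%d" % os.getuid()
```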

Flags: needinfo?(jld)

Summary: WebRTC and PulseAudio need access to /run/user on some systems → WebRTC and PulseAudio want and/or need write access to $XDG_RUNTIME_DIR on some systems

(In reply to Mike Hommey [:glandium] from comment #4)
> So that's what it wants write access for, shared memory.
Specifically, a single bit of information stored as one byte; but even clients that only read that flag will still open the file for writing and write a 0 to it. This is to work around bugs in posix_fallocate, according to comments in https://github.com/GNOME/dconf/blob/master/shm/dconf-shm.c
Also, the leaf filename isn't necessarily "user"; it's configurable when using a non-default dconf profile. See https://developer.gnome.org/dconf/unstable/dconf-overview.html for details. It's also possible to store the database itself in $XDG_RUNTIME_DIR, but according to the source it uses the subdirectory "dconf-service", not "dconf", so it seems that we could safely allow subtree write access to $XDG_RUNTIME_DIR/dconf even in that case.
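To illustrate (a hedged sketch; `dconf_shm_path` is a made-up name, and "user" is only the default database name from the profile), the path a sandbox policy would need to cover is:

```python
import os

def dconf_shm_path(db_name="user"):
    """Compute the dconf shm file path for a given database.

    The leaf name comes from the dconf profile and is "user" only by
    default, which is why whitelisting the whole
    $XDG_RUNTIME_DIR/dconf subtree is safer than whitelisting the
    single default file.
    """
    base = os.environ.get("XDG_RUNTIME_DIR",
                          "/run/user/%d" % os.getuid())
    return os.path.join(base, "dconf", db_name)
```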
Incidentally, because I didn't actually say this when I added it to this bug's See Also: The PulseAudio part of comment #0 was fixed in bug 1335329.

(In reply to Mike Hommey [:glandium] from comment #6)
> All those things are implementation details... should the sandbox work
> around those, or should we just make our calls to dconf from sandboxed
> processes go the parent process via ipc?
They are and we should. There's a question of whether that should happen at the GSettings level or for each individual user of it; as far as I know there's currently just this and the WebRTC proxy code (which supports GConf as well as DConf, and might wind up in a different process eventually anyway; bug 1287225), but I can file a followup bug either way.
(But then sandboxing kind of inherently means intervening in implementation details, at least on OSes designed for coarse-grained ambient authority.)

I've tested the file listing use case and it seems that dconf actually does work despite the errors, in the sense that it is able to read the pref setting. What it can't do is cache the values it reads.
The shared memory files are used to signal that the file storing the actual prefs has changed: readers create it (with the first byte being 0) if needed, and the writer sets the first byte to 1 and unlinks it. This way existing readers will see the change and know to reload the data, while future readers get a fresh flag that starts at 0.
If the reader failed to open the file or commit space for it, dconf_shm_is_flagged() always returns true, so every pref read will re-read and re-parse the database file.
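The protocol described above can be sketched as follows (a Python simulation of the mechanism for illustration, not dconf's actual C code):

```python
import os

def reader_open(path):
    """Reader side: create the one-byte flag file if needed,
    initialized to zero.  Note the open is read-write even though the
    reader only reads the flag; writing the zero forces real storage
    allocation (the dconf source does this to work around
    posix_fallocate bugs)."""
    try:
        fd = os.open(path, os.O_RDWR | os.O_CREAT, 0o600)
        os.pwrite(fd, b"\0", 0)
        return fd
    except OSError:
        return None  # e.g. open blocked by a sandbox

def is_flagged(fd):
    """If we never managed to open the file, behave as permanently
    flagged: every pref read falls back to re-reading the database."""
    if fd is None:
        return True
    return os.pread(fd, 1, 0) == b"\1"

def writer_flag(path):
    """Writer side: set the flag byte, then unlink the file so that
    future readers create a fresh one that starts unflagged."""
    fd = os.open(path, os.O_RDWR | os.O_CREAT, 0o600)
    os.pwrite(fd, b"\1", 0)
    os.close(fd)
    os.unlink(path)
```

The unlink is the key step: existing readers keep their file descriptor to the old inode and so still observe the 1, while readers arriving later create a fresh file that starts at 0.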
So the result of this is to (1) make long file listings slower to load, because every directory entry involves a dconf read and therefore a full re-read and re-parse of the dconf storage, and (2) print stuff to stderr that people will probably, and quite understandably, file bugs about. (And/or it will mislead people who are investigating actual bugs; this kind of thing has happened before.)

DConf uses small memory-mapped files for the writer to signal readers
to invalidate cached data; the file is created by the first reader and
readers will write to it to force storage allocation.
If we don't allow opening the file, DConf will still work, but it will
reread the database on every pref access, and it prints messages on
stderr claiming it won't work. So we should avoid that.
Review commit: https://reviewboard.mozilla.org/r/144368/diff/#index_header
See other reviews: https://reviewboard.mozilla.org/r/144368/