Developer Hackfest status

Today is the third day of the Gnome Developer Experience hackfest. On the first day I was in the platform group, where we looked at the core gtk+ platform and what's missing from it. We ended up with a pretty large list of items, but we picked a few of them and started working on them. More details in Cosimo's blog post.

Yesterday we finally had all the people interested in the application deployment and sandboxing story here, so we started looking at that. I’ve historically had a lot of interest in this area with previous experiments like glick, glick2, and bundler, so this is my main interest this hackfest. We have some initial plans for how to approach this.

There are two fundamental, but interrelated problems in this area. One is the deployment of the application (how to create an application, bundle it, “install” it, set up its execution environment and start it) and the second is sandboxing (protecting the user session against the app).

Deployment

For deployment we’re considering a bundling model where the app ships the binary and some subset of the libraries it needs, plus a manifest that describes the kind of system ABIs the app requires. These are very coarse-grained, unlike the traditional per-library dependencies, and would be on the level of e.g.:

bare: Just the kernel ABI

system: libc, libm, and a few core libs

gnome-platform-1.0: The full gnome upstream defined set of “stable supported ABIs”
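As an illustrative sketch, such a manifest might look something like this (the format and all field names here are hypothetical, not a settled design):

```json
{
  "app": "org.example.MyApp",
  "runtime-profile": "gnome-platform-1.0",
  "bundled-libs": ["libfoo.so.2", "libbar.so.5"],
  "exec": "/usr/bin/myapp"
}
```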

There is a scale here: apps (like games) can choose to bundle more dependencies and expect more stability over time, at the cost of being less integrated, while more integrated desktop apps will be tied to specific versions of some desktop platform.

What is important here is that once an app specifies a particular profile, we guarantee that this is all the app ever sees, i.e. we support isolation of the app runtime environment from the distribution environment that the user is running. This is done with containers and namespaces, where we mount only the system files and application files into the app namespace.
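As a minimal sketch of the idea (all paths, profile names, and the helper itself are hypothetical), a launcher could compute the set of bind mounts for the app namespace from the declared profile, so that only the platform runtime and the app's own files are ever visible:

```python
# Hypothetical sketch: compute the bind mounts for an app's namespace from
# its declared platform profile. A real launcher would then create a mount
# namespace (e.g. unshare/clone with CLONE_NEWNS) and apply these binds.

# Map of profile name -> host directories that provide that profile's ABI.
PROFILES = {
    "bare": [],                          # just the kernel ABI, nothing mounted
    "system": ["/runtime/system"],       # libc, libm and a few core libs
    "gnome-platform-1.0": ["/runtime/system", "/runtime/gnome-1.0"],
}

def mount_plan(app_dir, profile):
    """Return (source, target) bind mounts for the app namespace."""
    # In a real implementation the runtime directories would be combined
    # into a single /usr view, e.g. with overlay mounts.
    binds = [(d, "/usr") for d in PROFILES[profile]]
    binds.append((app_dir, "/app"))      # the app's own bundled files
    return binds

plan = mount_plan("/apps/org.example.MyApp", "gnome-platform-1.0")
# Note: /home is deliberately absent -- a sandboxed app never even sees it.
```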

With this kind of isolation we guarantee two things. First of all, there will never be any accidental leaks of dependencies: if your app depends on anything not in the supported platform, it will simply not run, and the developer will immediately notice. Secondly, any incompatible changes in system libraries are handled, and the app will still get a copy of the older versions when it runs.

One nice thing about this setup is that the application runtime environment will look like a minimal but standard Linux install, with the app and its dependencies in the standard prefixes like /usr/lib, etc. This means that when creating an application one can very easily reuse existing .deb or .rpm files by just extracting these and putting them in the application bundle. Of course, it will also require a higher level of binary compatibility guarantees than we have previously provided for the modules in the platform profiles. For instance, internal IPC protocols that the platform libraries use absolutely have to be backwards compatible.

Sandboxing

The sandbox model goes hand-in-hand with the isolation model of app deployment, in the sense that whatever the app should not be able to do, it will not even see. So, for a fully sandboxed app we will not even mount the user’s home directory into the app namespace, rather than merely denying access rights to it. (We will of course also offer applications with unlimited sandboxes so that existing apps can run.)

In order to talk with the sandboxed app we need an IPC model that handles the domain transition between the namespaces. This implies the kernel being involved, so we have been looking (again) at getting some form of dbus routing support into the kernel. Hopefully this will work out this time.

Unfortunately we also need IPC for the X server, which is very hard to secure. We’ve decided to just ignore this for now, however, as it turns out Wayland is a very good fit here, since it naturally isolates the clients.

We also talked about implementing something similar to the Intents system in Android as a way to allow sandboxed applications to communicate without necessarily knowing about each other. This essentially becomes a DBus service which keeps a registry of which apps implement the various interfaces we want to support (e.g. file picking, get-a-photo, share photo) and actually proxies the messages for these to the right destination. We had a long discussion about the name for these and came up with the name “Portals”, reflecting the domain transition that these calls represent.
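To make the registry-and-proxy idea concrete, here is a minimal in-process sketch of such a service (pure Python, no DBus; all interface names, app IDs, and the class itself are made up for illustration):

```python
# Hypothetical sketch of a portal registry: apps register which portal
# interfaces they implement, and calls are proxied to a registered handler
# rather than sent directly to a known peer.

class PortalRegistry:
    def __init__(self):
        self._handlers = {}  # interface name -> list of (app_id, callback)

    def register(self, interface, app_id, callback):
        """An app declares that it implements a portal interface."""
        self._handlers.setdefault(interface, []).append((app_id, callback))

    def apps_for(self, interface):
        """List the app IDs that implement a given interface."""
        return [app for app, _ in self._handlers.get(interface, [])]

    def call(self, interface, *args):
        # A real implementation would let the user choose the destination
        # app; here we just proxy to the first registered handler.
        app_id, callback = self._handlers[interface][0]
        return callback(*args)

registry = PortalRegistry()
registry.register("org.example.portal.FilePicker", "org.example.Files",
                  lambda pattern: ["/home/user/a.txt"])
chosen = registry.call("org.example.portal.FilePicker", "*.txt")
```

The key property this models is that the calling app never addresses the file manager directly; it only names the interface, and the portal service performs the domain transition on its behalf.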

We’re continuing to discuss the details of the different parts, and hopefully we can start implementing parts of this soon. After the hackfest we will continue discussions on the gnome-os mailing list.

31 thoughts on “Developer Hackfest status”

I’m loving this. I hope you are going to have a look at 0install, at least for inspiration. It already makes only requested deps visible to apps (through environment variables), and there is an experimental sandboxing app to limit visibility of the filesystem as well. Moreover, it allows secure sharing of apps between users, and libraries between apps, instead of requiring each app to bundle non-standard dependencies.

Also I really like the idea of Portals. Keep up the good stuff! I’ll be following this project closely.

Disclaimer: I am not a developer of 0install, and therefore this is not a shameless plug.

So what is the advantage over using SELinux, which already does what you want to code (besides “not invented here”)?

You mentioned DBUS … why not extend the existing platform code to support communication between isolated clients? I do not know what SELinux offers here, but extending a working solution would be better than recreating all the bugs once more.

What would be the advantage for apps of not using the system libs, but bringing their own (outdated, unpatched) libs?

Process isolation is a matter for the operating system, not the desktop environment.
Enforcing file system access regulations is also a matter for the operating system.

Why not rely on the existing packaging infrastructures to share “apps”? Do they have too many barriers to ship malware or Swiss-cheese programs?

Don’t get me wrong, I just want to know the rationale for following this approach.

SELinux does not automatically imply sandboxing; it’s just a tool, like the other tools the Linux kernel provides. It may well be part of the sandbox implementation (we’ll see how it works out exactly), but by itself it’s not a solution to anything.

The current DBus daemon is a userspace process. As such it cannot really be in multiple namespaces or securely arbitrate between them, that is something only the kernel can do.

Ideally we *do* want most apps to share the system libs, but only those, not the non-core libraries of varying ABI stability and quality. This way we can work hard to make the core 100% ABI backwards compatible and of high quality, and then all apps will keep working “forever”.

The separation of “operation system” and “desktop” is not really interesting. We (Gnome) have issues we need to solve, some of them are in the higher levels of code, but some require us to make changes in the lower levels (udev, networkmanager, etc, etc). Just trying to work around things in the upper layer isn’t a good way forward, which is why the Gnome project works hard to “drain the swamp”.

It’s quite possible to reuse the packaging infrastructure by building bundled apps from existing packages. This is what I meant by .deb/.rpm above. We’d use these to construct a bundled app. There might be a few changes needed in the packaging, but not much.

I’d add “users” to the above sentence, but it seems like Gnome 3 solved that issue by actually not having users.

Using JavaScript makes this even more insane. That’s worse than Java applets.
With Java applets you had at least a more or less usable language by default with the option to use any alternative language which could compile to bytecode.

To some extent such an App Manager would either reinvent the mechanics of regular package managers, because some apps just naturally have uneasy dependencies; in which case the user has 2 places to manage applications, plus the 2 managers need to communicate if some dependencies are not installed

OR

the App Manager reuses the system package manager e.g. as PackageKit Frontend, but then it talks rpm or deb and gets its apps from Debian or RPM Repositories. The sandboxing is then harder to implement and has to be included in the system package manager, probably.
The purpose of the new lightweight format is then to act as a proxy for rpm and deb optimized for the majority of easy dependency cases and for cross-distro deployments.
The format is then never seen by users, but by repositories which “localize” the lightweight format in an automated fashion to rpm or deb, to be accessed in the regular way by the system package manager or the PackageKit-optimized frontend for “Apps”. (These intermediate/proxy repos then just act like the tool alien, but with more reliable conversion, because it’s from lightweight to heavy and not heavy to heavy.) The advantage is that you have one tool for all apps, you don’t need to know which app installs/uninstalls using which package manager, you build on giant shoulders, and the user can reuse their know-how to search and manage apps. What do you think?

sending messages to opaque interface names rather than PIDs, and triggering actions in other applications/processes (i.e. performing local RPC, which is those messages’ actual goal more often than not anyway) are both problems long solved
especially the former (with all the micro- or hybrid kernels around, plus Android with Binder and, in fact, Intents, happily implementing namespace-managed message ports) but also the latter, if one thinks of LRPC, NT’s QuickLPC, Solaris’ Doors and whatnot (or even Michael Hearn’s dissertation about fast RPC in Linux)…

please dont reinvent the wheel for the nth time…


We’re not reinventing IPC, most things in the linux desktop already use DBus for IPC. It does all we want, and apps will keep working as-is. We just need to move parts of the dbus daemon into the kernel to handle the domain transitions.

We’re not reinventing IPC, most things in the linux desktop already use DBus for IPC. It does all we want, and apps will keep working as-is.

so? does that preclude implementing a better native IPC/RPC system forever?
even though applications (more precisely, stub libraries) could be gradually ported to it, like they were ported to dbus back in the day – since we have the luxury of living on a moving platform?

We just need to move parts of the dbus daemon into the kernel to handle the domain transitions.

since it’s what makes local processes communicate with each other and with services, rather than a service itself, local IPC belongs in the kernel – having a userspace daemon was a mistake to begin with…

but apart from that, if this is just (another) effort about moving dbus functionality to the kernel, what makes it different from others who did the very same thing, succeeding at making it work but failing at reaching mainline?

I don’t think we want a doors-like blocking thread-based RPC-style IPC at all. The primary async nature of dbus is a much better fit for typical event-driven apps. Also, doors lacks some important dbus features like name registration and service activation.

We’ll obviously see what happens with upstream acceptance, but I think the new design we have is much better than previous tries, and it doesn’t affect other parts of the kernel like the networking stack, so it has a much better chance.

And no, it won’t be using sockets. DBus is not inherently a streamed system; it’s a local message-passing system, so sockets are not necessary in a kernel implementation.

But, I’m not very interested in arguing about various IPC methods, dbus is already a core part of the linux desktop landscape, and we think it is an excellent fit. If you like some other IPC mechanism, more power to you.

Check out the Linux Standard Base. LSB has the same basic idea about platform dependencies: your application depends on “lsb-base” and the OS delivers libraries backwards compatible with the standard version. “gnome-platform” could be implied by an “lsb-desktop” dependency.

You also probably mean that external IPC must be backwards-compatible, since applications will try to access it. Internal IPC doesn’t need that limitation.

That is, “Portals” would define a set of common capabilities that apps can use… and/or others that they definitely can’t… but at least the *user* would be greeted with an interface for allowing the app to do something that is not specified in its portal (set a new capability)…

or you’ll be greeted with a torrent of mails asking why my “app” doesn’t open my file anymore!??… or my file is gone!??… or I created this directory and can’t move anything to it!?? … etc etc

Users can make A LOT of mistakes, and I think the hardwired “portals” can hardly cover all the possibilities. That shouldn’t pose any considerable security hazard, but it can pose serious usability hazards… especially for a novice.


Has GNOME heard of or looked at elementary’s Contractor? It sounds *exactly* like what you described when talking about doing something similar to Android Intents, and it’s already in use on elementary OS.

From the article, “First “links” are created between Contractor and ‘destination’ apps. These ‘links’ are called ‘contracts’.

“I thought of Contractor just like a building contractor: I come to you and say ‘build this’ and then you figure out who will do what. So an app signs a ‘contract’ with Contractor stating that it can handle THISTYPE of data.

“Contractor then reads its directory of .contract files, parses them into a big dictionary and then uses that to return either a full list, or filtered list based on mimetype, to the ‘source’ application. It sends that data over dbus and the app just executes the given command.”