One noteworthy feature is Chrome's process model. Most browsers run all tabs and windows in a single process, which means they can all crash together. Chrome runs a separate process for each tab, so when a web site is a resource hog it is apparent which tab is causing the performance problem. Also, when you navigate from site A to site B, Chrome will apparently start a new process (this will make the back button a little more complex to implement).
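The isolation benefit can be sketched with a toy model. This is plain Python multiprocessing, not Chrome's actual architecture, and the site names are made up: each "tab" runs in its own process, so a crash in one renderer leaves the parent and the other tabs untouched.

```python
import multiprocessing as mp

def render_tab(site: str) -> None:
    """Toy 'renderer': crashes hard for one site, succeeds for the rest."""
    if site == "resource-hog.example":
        raise SystemExit(1)  # simulate a renderer crash
    print(f"rendered {site}")

if __name__ == "__main__":
    sites = ["news.example", "resource-hog.example", "mail.example"]
    tabs = [mp.Process(target=render_tab, args=(s,)) for s in sites]
    for t in tabs:
        t.start()
    for t in tabs:
        t.join()
    # One tab died, but the browser (this parent process) and the other
    # tabs are unaffected -- unlike a single-process browser.
    for site, t in zip(sites, tabs):
        print(f"{site}: {'crashed' if t.exitcode != 0 else 'ok'}")
```

In a single-process design the equivalent of that `SystemExit` would take every tab down with it; here the parent simply observes one non-zero exit code.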

A stated aim of the process model is to start a new process for each site to provide a clean memory address space. This is similar to the SE Linux design in which a process execution is needed to change security context, so that a clean address space is provided (preventing leaks of confidential data and attacks on process integrity). The use of multiple processes in Chrome is just begging to have SE Linux support added. Opening tabs with different security contexts based on the contents of the site in question, and keeping multiple stores of cookie data and cached passwords labeled with different contexts, is an obvious development.

Without having seen the code I can't guess how difficult such features will be to implement. But I hope that when a clean code base is written by a group of good programmers (Google has hired some really good people), the result will be an extensible program.

They describe Chrome as having a sandbox-based security model (as opposed to the Vista model, which is based on the Biba Integrity Model [3]).

It’s yet to be determined whether Chrome will live up to the hype (although I think that Google has a good record of delivering what they promise). But even if Chrome isn’t as good as I hope, they have set new expectations of browser features and facilities that will drive the market.

8 comments to Google Chrome – the Security Implications

No, I think you misread it. They said that Vista uses Biba and that they do better because they use a capability-based system, where the sandboxed process can only *respond* to requests from the non-sandboxed part and nothing else.

That was my biggest question: how are they doing that? On Linux there is a process flag (seccomp) that disables most syscalls, but I wonder how they do that on Windows…

I believe it is primarily for Windows and as such works in a slightly more integrated way than on Linux, which is more modular.
If I were to hazard a guess, I would say that they have reconfigured their process management to allow them to do what they claim to be able to do.
None shall know until it is released, and I am signed up to test it out. I personally use and endorse Fedora Core 9 with XP Professional on dual boot and appreciate both operating systems for their different qualities: Windows for its integration (and blue screens of death), Linux for its outdated drivers but rock-solid stability with the best free developments and releases.
If Google Chrome does what it claims I will use it on Windows, and keep Firefox 3 on Linux if the system integration has issues. Hoping for a quick release :)

It’s great that Google have recognised that security needs to be an important consideration with browsers. It’s a shame that this beta of Chrome shows that they haven’t been thorough enough about it to fix known security problems with the toolkits they’ve built Chrome on. But it’s a beta version and no doubt these issues will be addressed with the release version. (But then again, some Google products seem to remain as beta versions forever!)

It’s also great that Google is acknowledging the need to keep ahead of the bad guys and their rapidly evolving ways of using exploits, social engineering and other web-borne threats to harm users. The inclusion of the malware and phishing blacklists in Google Chrome is a step in the right direction. Google state that the software checks a URL against their blacklist databases of web pages/sites that are known to have delivered malware and phishing attacks in the past.
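That kind of lookup can be sketched roughly as follows. This is a simplified, hypothetical illustration of a hash-prefix style of blacklist check (the general approach Safe Browsing style lists take), with made-up entries; real clients canonicalize URLs, hash several host/path combinations, and confirm any prefix hit with a full-hash check.

```python
import hashlib

PREFIX_LEN = 4  # bytes of the SHA-256 digest kept in the local list

def url_prefix(url: str) -> bytes:
    # Real clients canonicalize the URL and hash several host/path
    # combinations; hashing the raw URL keeps the sketch short.
    return hashlib.sha256(url.encode("utf-8")).digest()[:PREFIX_LEN]

# Hypothetical local blacklist: a set of hash prefixes (entries made up).
MALWARE_PREFIXES = {url_prefix("http://malware.example/payload.exe")}

def is_blacklisted(url: str) -> bool:
    # In the real protocol a prefix hit triggers a full-hash lookup
    # against the server; here a prefix match is treated as final.
    return url_prefix(url) in MALWARE_PREFIXES

print(is_blacklisted("http://malware.example/payload.exe"))  # True
print(is_blacklisted("http://example.org/"))                 # False
```

Storing only short hash prefixes keeps the local database small and avoids shipping the full list of bad URLs to every client, at the cost of occasional false prefix hits that need server confirmation.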

Of course, that approach is mostly too slow to protect against transient threats, and most online threats today are highly transient. The bad guys register and use domains, or put their exploit payloads onto legitimate web sites they've been able to poison, for just the few days they can fly under the radar, shutting the exploit down before it makes it onto the blacklists or very soon after. So often these days, the threat is gone before it can be recorded in the blacklists. Worse, at least for the operators of legitimate sites that have been compromised, once the threats are detected and the sites added to the blacklists, the sites show up as infected even after the threats are gone.

AVG believes the best approach is real-time scanning that inspects each web page for exploits at the moment the user clicks the link to visit it. That's the approach the AVG LinkScanner technology uses, and it is more effective against transient threats. The LinkScanner Active Surf-Shield module in all paid AVG products does this real-time scanning to detect infected and potentially infected content as you browse the web, delivering protection that blacklists simply cannot provide.

I'm with Lloyd on this. As the CTO of a CMS company, the exploits our users get hit with are almost exclusively injection-related attacks, and I do mean related, as the workarounds to prevent injection attacks are obvious to developers.

It's a double bind for us. As a CMS we permit script insertion, so our users tend to get hit when they allow anonymous script insertion on their sites. Mostly we can detect it but, as these things tend to work, the exploits are usually a tad ahead of the game.

Anyway, my post is really about Chrome. It's still early days yet. Lots of (particularly) JavaScript-related functions aren't quite working yet, and their JavaScript debugger really, really sucks; it's a toy.

To name only one obvious general area that affects loads of script: pixel width and height issues. Annoyingly, scroll bars appear even when they have been explicitly disabled. Chrome is touted (at least in the local media spin from Google) as a Web 2.0 platform that is ideal for function-heavy JavaScript client-server applications. Well… not yet. It can't even run FCKeditor without stripping JavaScript from the editor source (not that we use this editor, but millions of users do). Google still has a long way to go before really function-rich applications can even think of seriously coming on board.

Having said that, it is a seriously good little (big, really) product and absolutely going in the right direction. They really need to get some of the Mozilla developers on board to sort out the bugs, though.

Yes, you're right, Lloyd, about taking the time and getting it right. Google is a remarkable company, by far the most successful advertising company on the planet.

I know that Chrome code is open source, but I really wish it wasn’t. Brendan Eich at Mozilla must be quietly spitting chips.

Which brings me to my point. I’m going to go out on a limb here and expand the concept of security momentarily.

As I have spent much of my career coding ways and means of capturing granular, user-specific data, I'm really not very comfortable with an independent (and, for most of us, foreign) advertising company using their backward-propagation technology to track so many of our moves quite so thoroughly.

It's very easy to overlook this concern, but I suspect that those who don't see a problem with their activities being exposed haven't ever had various security organisations show an interest in using their work, as occasionally happens in the course of a professional career.

So my biggest security concern is not the specifics of the various functions and objects within Chrome (after all, Google will get it right over time), but Chrome itself.