Technique also works for any data entered into a browser's search box.

Be careful what you type on your computer while surfing the Web. It very well could be funneled to a script kiddie who has appropriated a handful of lines of code and inserted it into his site.

The hack has been possible for years, but two proofs of concept published this month graphically demonstrate just how easy it is for even savvy people to fall for it. Both demonstrations use JavaScript to hijack the search command found in all standard browsers. The script is activated when a user presses the ctrl+f or ⌘+f keys, causing whatever is typed after that to be sent to a server under the control of the website operator rather than to the browser's search box.

Proofs of concept here and here show how this method could be used to trick people into divulging their password or credit card number respectively. The pages pose as lists that catalog leaked user data and invite visitors to search it to see if their information is included.

To be sure, the demos are crude. The search bars that are opened are only a rough approximation of the search bars found in Google's Chrome browser. And of course, they look nothing like the search interfaces found in Internet Explorer, Firefox, or other browsers. But as security expert Bruce Schneier once noted, exploits only get better. There's nothing stopping a determined attacker from improving the hacks so they present an authentic-looking box that's customized for whatever browser and operating system an end user happens to be using. Other browser functions, such as the ctrl+s or ⌘+s save commands, could also be intercepted and replaced with a fake dialog box that instructs users to enter their administrator password.

The "browser event hijacking" hack uses JavaScript's preventDefault method, which cancels the browser's default action for an event while still allowing the remaining handlers for that event to be executed. The code for the password-stealing demo looks like this:
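(The demo's original listing isn't reproduced here. A minimal sketch of the technique: a keydown handler that suppresses the native find box and swaps in a look-alike. The names hijackFind and showFakeSearchBar are illustrative, not taken from the demo.)

```javascript
// Returns true when a keydown event is the browser's find shortcut
// (ctrl+f on Windows/Linux, cmd+f on OS X).
function isFindShortcut(e) {
  return (e.ctrlKey || e.metaKey) && e.key.toLowerCase() === 'f';
}

// Hijack the shortcut: cancel the native find-in-page box and show a
// look-alike bar instead.
function hijackFind(doc, showFakeSearchBar) {
  doc.addEventListener('keydown', function (e) {
    if (isFindShortcut(e)) {
      e.preventDefault();        // the real search box never opens
      showFakeSearchBar();       // hypothetical look-alike widget;
                                 // whatever is typed into it can be
                                 // POSTed anywhere the page likes
    }
  });
}
```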

Neohapsis blogger Ben Toews said he raised the issue with members of Google's Chrome team and "it was labeled as a low-priority issue." He said he's not sure he disagrees with the assessment, but thinks the issue needs to be addressed.

There are at least two possible solutions to reduce threats like these. One is tweaking the user interface so search boxes are in a part of the browser that can't be confused with Web content. Browser designers who wanted to adopt this approach might be able to learn from changes Microsoft has made to recent versions of Windows that cause Web content to be shaded when sensitive system messages are being displayed. An alternate fix could involve displaying a warning when sites call preventDefault to cancel events registered as a browser key binding.

Given the frequency of posts purporting to contain passwords, credit card numbers, and other details leaked from popular websites, it's not a stretch to think plenty of people use the search feature to see if their personal information is included. If you've ever typed data into a browser search box that you wouldn't want outsiders to see, you're in good company.

"This has been possible for quite some time," said Jeremiah Grossman, CTO of Web security firm WhiteHat Security. He went on to say it would be easy for even security-savvy people to fall for such a scheme. "I couldn't tell you with any certainty I haven't."

Chrome has made [most of, apparently] its dialogs overlap with the window frame in a way that web content can't match; this should be an obvious start for any browser-provided interface elements that prompt user input. That, coupled with conditioning users to expect it, could go some way toward improving the situation without excessive intrusion or harmful limitations on web apps.

A prompt (e.g. "this site tried to use a key command usually associated with browser functionality; allow once, always allow, deny") could work, but it would be annoying to developers and users alike. Edit: and it would need to be implemented properly so that the user doesn't accidentally activate any choice while they continue typing whatever they had just been typing, which sadly still happens with some dialogs.

bazza80 wrote:

Wouldn't a better solution be to stop Javascript being able to intercept ctrl+f and other combinations? I can't think of a legitimate reason why a website would need to handle ctrl+f or ctrl+s.

Some legitimate web apps provide functionality that's roughly analogous to the action associated with the key command they're intercepting (e.g. cmd+s or ctrl+s for Save in a Google Document).

Wouldn't a better solution be to stop Javascript being able to intercept ctrl+f and other combinations? I can't think of a legitimate reason why a website would need to handle ctrl+f or ctrl+s.

Have you never used Microsoft Word and used CTRL + S to save a document? If you are writing an email in Gmail, the same function exists because it is a function people are already aware of to help save a document draft.

I agree that it shouldn't be allowed, but a very obvious example of a place where it would make complete sense to be used is something like webmail.
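The legitimate version of this interception is only a few lines. A rough sketch of how a webmail app might map ctrl+s/cmd+s to saving a draft (enableDraftSaving and saveDraft are illustrative names, not Gmail's actual code):

```javascript
// True when a keydown event is the save shortcut (ctrl+s or cmd+s).
function isSaveShortcut(e) {
  return (e.ctrlKey || e.metaKey) && e.key.toLowerCase() === 's';
}

// Map the shortcut to the app's own "save draft" action instead of
// letting the browser open its "save page as..." dialog.
function enableDraftSaving(doc, saveDraft) {
  doc.addEventListener('keydown', function (e) {
    if (isSaveShortcut(e)) {
      e.preventDefault();   // suppress the browser's save dialog
      saveDraft();          // save the draft inside the web app
    }
  });
}
```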

I was curious what happens when you use the Find from the menubar, and it still pulls up the Safari default search. Kind of interesting that this would potentially catch more experienced users than novice ones. Most people just do not remember or use key commands.

In Safari on the Mac, this looks totally wrong, but you could code it to look like the correct full width search bar.

Google Chrome provides a "themes" option to change the font/colors/background of the title bars, thus preventing websites from emulating them. That's what I use to mitigate getting confused by this stuff.

I have found this behavior of JS grabbing meta-keys to be extremely irritating in general, even ignoring the phishing potential. Many online readers, for example, seem to like to take over navigation functions, which can result in unexpected behavior like not actually being able to close a tab/window with the keyboard. Popup windows and the like can also use such tricks to be extra annoying.

I personally use a 3rd party keyboard driver, and since applications are all scriptable I solved this for browsers by intercepting keyboard entry for certain key commands (like closing a window, or editing functions including Find) and then activating the menu items directly. This prevents the proof of concept examples from being effective by keeping JavaScript out of the loop entirely. However, that's a bit of a hack for something we really shouldn't need to worry about.

gnosis wrote:

Some legitimate web apps provide functionality that's roughly analogous to the action associated with the key command they're intercepting (e.g. cmd+s or ctrl+s for Save in a Google Document).

I don't think I agree with this. Just as with any well-behaved application, a web app shouldn't have command collisions with its environment. That's not very hard either. To use your own example, Google Documents could use "alt-s" on a platform where ctl+s is the standard application save, or ctl+s when cmd+s is the standard.

At the very least though, browsers should require that allowing JavaScript to intercept and override meta-keys be subject to a whitelist, with users explicitly opting in. The value is too low in general and the potential for mischief too high.

Ultimately, this is just another example of where the greatest security hole is user manipulation / social engineering. Although I'm definitely an advocate of designing software to avoid user error, perhaps there might be some way to educate people to become more cynical about things on the internet that contain their password / personal information.

Plus, who does Ctrl+F for a password anyway? In what world is this a security issue?

"Given the frequency of posts purporting to contain passwords, credit card numbers, and other details leaked from popular websites, it's not a stretch to think plenty of people use the search feature to see if their personal information is included."

Directly covered in the article. Sniffing for the browser being used and then delivering a widget to match the native look & feel is trivial. It won't work for users doing some completely custom theming, but it doesn't need to either. Like spam, phishing is a game of large numbers: it's enough if it looks right for most browsers and a tiny, tiny fraction bite.

Quote:

Plus, who does Ctrl+F for a password anyway? In what world is this a security issue?

The same world where 'password' and '123456' are the most common passwords? It's just a proof of concept, but it's not hard to imagine ways clever people might make use of it as an additional tool in social engineering attacks. For that matter, as I said, my original irritation with this sort of thing didn't even have anything to do with phishing, but rather with jokesters/jerks (1) or just plain coding screw-ups.

1. What's more fun than getting someone to visit a goatse page? A goatse page that intercepts cmd/ctl+w and makes it spawn new windows instead!

"Given the frequency of posts purporting to contain passwords, credit card numbers, and other details leaked from popular websites, it's not a stretch to think plenty of people use the search feature to see if their personal information is included."

Ultimately, this is just another example of where the greatest security hole is user manipulation / social engineering.

You are right for many cases, but not this time.

Quote:

Although I'm definitely an advocate of designing software to avoid user error, perhaps there might be some way to educate people to become more cynical about things on the internet that contain their password / personal information.

I don't think it's a stretch that people should be able to trust their actual native interfaces. Sure, web pages can display whatever, but sites are supposed to be subject to a certain level of sandboxing, and users have a reasonable expectation of that. If someone issues a system command, and the web page can hijack that, I don't think it's reasonable to call that user error.

I don't think I agree with this. Just as with any well-behaved application, a web app shouldn't have command collisions with its environment.

I think it depends on what you consider a collision, versus what you consider expected behavior. When I'm working in a word processor, I expect cmd-s to save the document I'm editing, not to save the application in which I'm editing. And I would take that a step further and say that any well-behaved application should also be expected to adopt the key combination conventions of its environment; obviously, in the case of a web application I would prefer what you consider a collision. In that sense, Google Docs is substantially better behaved (in my opinion) than, say, any random Adobe app I can think of, which can't even be expected to get undo/redo right.

Quote:

That's not very hard either. To use your own example, Google Documents could use "alt-s" on a platform where ctl+s is the standard application save, or ctl+s when cmd+s is the standard.

I would find that to be a poor user experience. Keyboard commands are useful only if they're predictable; introducing completely foreign commands (or, in the case of a Mac, essentially borrowing the equivalent command conventions from a foreign OS) defeats that purpose and makes me fight with the app. Moreover, this wouldn't avoid collision completely, just shift it around somewhat. Many environments employ alt-[key] to activate menus or other actions, and OS X (which uses cmd-[key] for most commands) uses ctrl-[key] for many emacs bindings. It may be that you're more willing to accept those collisions, but then the argument for familiarity really grows stronger in my opinion.

Quote:

At the very least though, browsers should require that allowing JavaScript to intercept and override meta-keys be subject to a whitelist, with users explicitly opting in. The value is too low in general and the potential for mischief too high.

I might disagree on the value:mischief ratio, but as I said in my earlier comment, I think a confirmation could be okay if done right. I think that what the above exchange highlights is that people have profoundly different priorities and expectations on this issue, and a choice would be preferable to none.

Edit to add: I don't want to give you the impression that I want to convince you to abandon your preference on this, I'm just stating my own. For whatever it's worth, you have an upvote from me for proposing a thoughtfully different view.

I am a Chrome engineer who works on UI and talks to the security folks.

First, disallowing JS from handling registered browser shortcuts is a non-starter. Not only are practically all key combinations mapped to some shortcut or other, but in many cases web pages actually need to take over a command to implement it properly. For example, in a Google Doc, the loaded content may actually only be a small piece of the document -- other pieces are dynamically loaded as you move through the document. In this world, we _want_ Docs to intercept ctrl-f so it can implement it properly -- otherwise users will be very frustrated that suddenly find-in-page doesn't find most of the matches it should.
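The Docs scenario described above can be sketched roughly like this: the app searches its full in-memory model, including pages that were never rendered into the DOM, which native find-in-page cannot see. (findInModel, installCustomFind, and openFindBar are illustrative names, not Docs internals.)

```javascript
// Search the app's full in-memory model; pages not yet rendered into
// the DOM are still searched, which native find-in-page cannot do.
function findInModel(pages, query) {
  const matches = [];
  pages.forEach(function (page, i) {
    if (page.text.includes(query)) matches.push(i);
  });
  return matches;
}

// Intercept ctrl-f / cmd-f and open the app's own find UI instead.
function installCustomFind(doc, pages, openFindBar) {
  doc.addEventListener('keydown', function (e) {
    if ((e.ctrlKey || e.metaKey) && e.key.toLowerCase() === 'f') {
      e.preventDefault();   // native find would miss unloaded pages
      openFindBar(function (q) { return findInModel(pages, q); });
    }
  });
}
```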

Note that for some shortcuts like ctrl-w, we do prevent pages from seeing them to begin with, but this is more about avoiding the situation where a hung renderer prevents you from closing or changing tabs.

Second, showing a warning any time a page has a handler for a shortcut is also a non-starter. You never want to show warnings anyway -- a cardinal rule of UI design is that users ignore them and click anything that will make them go away, so you've accomplished very little other than to annoy people. And given the paragraph above, we don't want to warn you that "Google Docs has intercepted ctrl-f, do you wish to allow this?" anyway. We need to Just Do The Right Thing, which is almost always to allow pages to handle these shortcuts.

As for implementing UI in an unspoofable way, this is rarely helpful, because only the most technically-adept users notice the differences. The find box in Chrome is actually unspoofable now because when opened it presents a contiguous surface with the toolbar, which a page cannot match. But the dividing line in the latter case isn't something most people will notice. Sadly, if you do user studies, you find out that more drastic changes don't buy you very much more than this -- if something looks even remotely functional most users will blithely proceed anyway.

Frankly, the particular issue described here is WontFix in my opinion. I suspect there are many more plausible ways to convince someone to type their password than by hijacking ctrl-f. A crappy phishing page would work better. And both pages would be something we'd add to the SafeBrowsing anti-phishing blacklist anyway.

I don't think I agree with this. Just as with any well-behaved application, a web app shouldn't have command collisions with its environment.

I think it depends on what you consider a collision, versus what you consider expected behavior.

A completely fair point. It's also an odd situation, in that there aren't many cases on an OS where you're effectively running an application within another application such that you'd be having this sort of conflict in the first place. Saving actually isn't even the best example, since there probably aren't many people who use a browser's direct save functionality very often. However, what about window manipulation vs document manipulation? If I hit cmd-n, do you open a new browser window, or do you create a new Google Doc? If I hit cmd-w, do you close the document (expected behavior for a "word processor") or close the tab/window (expected behavior for a browser)? Does cmd-t create a new tab, or open options for text formatting? There are differences between web apps and native apps, and it is a tricky question I think.

Quote:

Many environments employ alt-[key] to activate menus or other actions, and OS X (which uses cmd-[key] for most commands) uses ctrl-[key] for many emacs bindings.

That environments vary might undermine your point a little. I'm perfectly OK with emacs and vim having their own schemes, for example. In general I agree with you that consistency is highly desirable in a UI, but some collisions are a lot harder to resolve than others.

Quote:

Edit to add: I don't want to give you the impression that I want to convince you to abandon your preference on this, I'm just stating my own. For whatever it's worth, you have an upvote from me for proposing a thoughtfully different view.

Not at all, and I very much appreciate your thoughts as well. I personally feel that the number of cases where it's useful (most people seem likely to use a few specific web apps) are significantly outweighed by the cases where it has potential for mischief (everywhere else on the Internet), so a whitelist might be practical. On the other hand you have a good argument, so maybe a preference really is the best answer. Or perhaps some automated solution could be developed, along the lines of "by default allow JavaScript to take over commands if coming from a domain secured by a class 2 or higher certificate". There must be some good balance to be found for having it where it's desirable but not where it isn't.

I am a Chrome engineer who works on UI and talks to the security folks.

Thanks for jumping in, your insights and experience lend a lot to the subject.

Quote:

First, disallowing JS from handling registered browser shortcuts is a non-starter. Not only are practically all key combinations mapped to some shortcut or other, but in many cases web pages actually need to take over a command to implement it properly. For example, in a Google Doc, the loaded content may actually only be a small piece of the document -- other pieces are dynamically loaded as you move through the document. In this world, we _want_ Docs to intercept ctrl-f so it can implement it properly -- otherwise users will be very frustrated that suddenly find-in-page doesn't find most of the matches it should.

That's downright fascinating. It's not something I'd thought of (I've only ever used relatively small documents, so never considered what happens with larger ones). It certainly makes sense of the fact that Google Docs implements its own Find box rather than letting the browser handle it.

Quote:

Note that for some shortcuts like ctrl-w, we do prevent pages from seeing them to begin with, but this is more about avoiding the situation where a hung renderer prevents you from closing or changing tabs.

This seems to be a pretty reasonable limitation, but I think it's worth noting that it does come with a cost: any web app that implements its own multiple-document interface (or equivalent) is prevented from using a conventional "close" command.

Quote:

Second, showing a warning any time a page has a handler for a shortcut is also a non-starter. You never want to show warnings anyway -- a cardinal rule of UI design is that users ignore them and click anything that will make them go away, so you've accomplished very little other than to annoy people.

I don't think this is necessarily right; there are ways to mitigate the "click anything that will make them go away" problem (a short delay before any action is available), and where security is a concern that kind of annoyance could be warranted. Have you done (or seen) research into whether users are more thoughtful in making choices like this when delayed?

Quote:

And given the paragraph above, we don't want to warn you that "Google Docs has intercepted ctrl-f, do you wish to allow this?" anyway. We need to Just Do The Right Thing, which is almost always to allow pages to handle these shortcuts.

If it's implemented the way "are you sure you want to leave this page?" (the window close event) is implemented, it could allow developers to provide an explanation of why they've intercepted the command (e.g. "by preventing this action, searching a large document may produce incomplete results"). I realize that this is taking the entire process further down the path of friction for users, but sometimes that's appropriate, and the consideration here should be whether this is one of those times.

Quote:

As for implementing UI in an unspoofable way, this is rarely helpful, because only the most technically-adept users notice the differences. The find box in Chrome is actually unspoofable now because when opened it presents a contiguous surface with the toolbar, which a page cannot match. But the dividing line in the latter case isn't something most people will notice. Sadly, if you do user studies, you find out that more drastic changes don't buy you very much more than this -- if something looks even remotely functional most users will blithely proceed anyway.

I wonder how much overlap there is between the "technically adept" who notice and those of us who know and use keyboard commands.

Quote:

Frankly, the particular issue described here is WontFix in my opinion. I suspect there are many more plausible ways to convince someone to type their password than by hijacking ctrl-f. A crappy phishing page would work better. And both pages would be something we'd add to the SafeBrowsing anti-phishing blacklist anyway.

Essentially, hijacking keyboard commands for malicious purposes is phishing, and should be treated as such.

The solution is to keep the browser interface and web content separated. They should never share the same area. I understand everyone going for minimum chrome nowadays (especially Chrome) by removing toolbars and the like, but "merging" the two will continue to cause problems like this (although I would consider this particular issue pretty minor so far).

I doubt this will ever be the direction taken by browser makers so the real solution is (again) in the hands of the user. Mine is NoScript. Keeping scripting disabled by default is simply the best security you can get online (along with education). If inconvenience is preventing you from using some form of JS control, give it a couple months and build up a nice big whitelist.

I don't think I agree with this. Just as with any well-behaved application, a web app shouldn't have command collisions with its environment.

I think it depends on what you consider a collision, versus what you consider expected behavior.

A completely fair point. It's also an odd situation, in that there aren't many cases on an OS where you're effectively running an application within another application such that you'd be having this sort of conflict in the first place. Saving actually isn't even the best example, since there probably aren't many people who use a browser's direct save functionality very often. However, what about window manipulation vs document manipulation? If I hit cmd-n, do you open a new browser window, or do you create a new Google Doc? If I hit cmd-w, do you close the document (expected behavior for a "word processor") or close the tab/window (expected behavior for a browser)? Does cmd-t create a new tab, or open options for text formatting? There are differences between web apps and native apps, and it is a tricky question I think.

I think these examples raise good questions, and I honestly didn't know the answers, so I tested them (and I'll include my presumptions): I was 50/50 on expecting cmd-n to open a new document; it opens a new window (I think this would be my preference). I fully expected cmd-w to close the window, because Google Docs is a single-document-per-window application, and this is how it behaved. I expected cmd-t to create a new tab, because Google Docs is restrictive in its formatting capabilities and would not use the native OS X formatting window, and that was correct as well.

An even better example, in my opinion: cmd-a to select all. In a normal webpage, I would expect it to select the entire page's contents (or the contents of a focused text field). The document is like a (contentEditable) text field, but from what I understand it's actually a custom implementation. If the browser prevented reserved keyboard commands, it would be impossible for the user to achieve the expected "select all" outcome in Google Docs.

Quote:

Quote:

Many environments employ alt-[key] to activate menus or other actions, and OS X (which uses cmd-[key] for most commands) uses ctrl-[key] for many emacs bindings.

That environments vary might undermine your point a little. I'm perfectly OK with emacs and vim having their own schemes, for example. In general I agree with you that consistency is highly desirable in a UI, but some collisions are a lot harder to resolve than others.

Well, environments differ, but variation within an environment is (thankfully) rare. My point about emacs binding is that it's actually (nearly) system-wide in OS X, so ctrl-[key] is full of collisions. There are valid cases for diverging (emacs, vim being among them), but simply being a web app doesn't seem like a good reason for requiring that, to me, particularly as web apps grow in capability and usage.

Quote:

Quote:

Edit to add: I don't want to give you the impression that I want to convince you to abandon your preference on this, I'm just stating my own. For whatever it's worth, you have an upvote from me for proposing a thoughtfully different view.

Not at all, and I very much appreciate your thoughts as well. I personally feel that the number of cases where it's useful (most people seem likely to use a few specific web apps) are significantly outweighed by the cases where it has potential for mischief (everywhere else on the Internet), so a whitelist might be practical. On the other hand you have a good argument, so maybe a preference really is the best answer. Or perhaps some automated solution could be developed, along the lines of "by default allow JavaScript to take over commands if coming from a domain secured by a class 2 or higher certificate". There must be some good balance to be found for having it where it's desirable but not where it isn't.

I think the useful:potentially-harmful ratio will change (and indeed has been changing for some time) as web apps become more powerful and prevalent. Like it or not (and I'm increasingly inclined to like it), web apps are becoming less and less of a computing ghetto and more and more a part of normal computing, and arbitrarily limiting them compared to their native counterparts seems counterproductive; but it does sort of recursively provide more justification for continuing to treat them as lesser apps the longer they remain lesser apps.

Quote:

In the meantime, technical users can at least override it themselves.

While I don't share your preference on this, I do have many customizations to make the web... act more the way I want it to, so I'll say: amen to that.

The solution is to keep the browser interface and web content separated. They should never share the same area. I understand everyone going for minimum chrome nowadays (especially Chrome) by removing toolbars and the like, but "merging" the two will continue to cause problems like this (although I would consider this particular issue pretty minor so far).

Where are they being "merged"? This doesn't appear to be caused by any kind of "merging" of browser UI and web content.

We're worried about protecting people who are not able to be protected.

If you type every URL you go to into a search box because you have no idea what an address bar is. You're already too stupid for solutions to resolve.

I'm always impressed with the correlation between rants on the Internet about how other people are stupid and the poor proofreading skills of the people posting those rants.

The whole purpose of usability work is to socially engineer positive outcomes for users with poorer skills in the relevant problem area. When those users access networks, usability and security have many overlapping areas. Protecting those people from harmful social engineering is largely a matter of successful usability work.

"Given the frequency of posts purporting to contain passwords, credit card numbers, and other details leaked from popular websites, it's not a stretch to think plenty of people use the search feature to see if their personal information is included."

It's a stretch to think that's common.

It's not a stretch to imagine a less-experienced user being lured into a "helpful tool" site, which offers to search a "hacker's database" to see if your password is in it. Or even to trap less sensitive data, such as all your user IDs and your real name, which is all the opening a malicious person or criminal organization needs.

We're worried about protecting people who are not able to be protected.

If you type every URL you go to into a search box because you have no idea what an address bar is. You're already too stupid for solutions to resolve.

I disagree. In fact, I almost always treat anything I type into the omnibox that's more than about 10 characters long as a search because it's far too easy to make a typing error and end up at a phishing site. By doing a search I get a confirmation that I didn't make a typo and also that I had the right site name even if I didn't make a typo.

I guess there are other features in JavaScript that you would want to allow only for sites you deem secure.

So we should have a global setting like "allow web application access for any/selected/none" and a way for web sites to request access from the user if it's set to 'selected'. Choices would be remembered per site, of course.

(There's already a similar feature built into Safari where web sites can request to use the system's notification center.)

The usability arguments regarding warnings and dialogs are utter nonsense. Web browsers should be following the security example set by modern operating systems, not starting over with ideas that were thrown out with the mainstream use of Windows 98. In an era where putting something on my phone requires me to accept a list of permissions, to say that web applications trying to act like a real application should be exempt from the security policy that real applications have to follow just because they're written in JavaScript and HTML is absurd.

I've never used ctrl+f in that way, and every time I enter sensitive information I make sure the site is at least verified. I suppose you can never be completely safe, but I won't start getting paranoid over this issue (just yet).

As for implementing UI in an unspoofable way, this is rarely helpful, because only the most technically-adept users notice the differences. The find box in Chrome is actually unspoofable now because when opened it presents a contiguous surface with the toolbar, which a page cannot match. But the dividing line in the latter case isn't something most people will notice. Sadly, if you do user studies, you find out that more drastic changes don't buy you very much more than this -- if something looks even remotely functional most users will blithely proceed anyway.

Browser interfaces change. Between browsers, browser versions, OS versions... the idea that users - who have been conditioned to frequently (and automatically) updated browsers - should be intimately familiar with the 'official' interface is naive.

The functionality being exploited isn't going away, whether we like it or not. Users aren't going away either, so while we can of course sit back and say they're stupid (not that I believe that is the case; ignorant, maybe), that means compromising our own security - unless you believe only personal passwords are threatened by phishing (in which case, join the ranks of the ignorant).

Defining safe browsing habits and educating people about them continues to be the general solution - valid for social engineering/phishing in general and not just this particular exploit. And it has to be done continually to maintain vigilance.

Since migrating to an SSD, I was lazy and just didn't install Java. I took the approach that "I'll install it when I need it." Guess what? After 5 months, I still haven't had a need for Java in any manner.

Congratulations on not needing Java, but you're still vulnerable to the attack since it uses JavaSCRIPT, which is interpreted by most modern browsers by default.

...Just as with any well behaved application, a web app shouldn't have command collision with its environment....

This is the root of the problem, right here. Another example is the ability of an element within a web browser to "pop up" a window outside the bounds of the browser itself- this should simply never be allowed.


One problem with tweaking the display of the search box is that I think the constant, rapid update cycles of browsers has probably conditioned people to not notice slight changes in the interface. Yesterday your tabs were square, today they have rounded corners, tomorrow they'll probably be all flat "metro" style. So if you see a spoofed part of the browser UI, I'd imagine a lot of naive users would be unfazed by any slight differences.

...Just as with any well behaved application, a web app shouldn't have command collision with its environment....

This is the root of the problem, right here. Another example is the ability of an element within a web browser to "pop up" a window outside the bounds of the browser itself- this should simply never be allowed.

Why?

To keep an obvious separation between trusted browser and untrusted website. If a website operator could insert/manipulate icons or graphics in the browser chrome they could, for instance, alter the state of the SSL lock graphic, or the contents of the address bar - letting them display www.yourbank.com when you were actually at www.yourhijacker.com, etc.

There is a level of trust in separating the content from the browser that should never be compromised.