Posted by timothy on Thursday July 08, 2010 @03:13PM from the acknowledge-over-repeat dept.

snydeq writes "InfoWorld's Peter Wayner takes a first look at Firefox 4 Beta 1 and sees several noteworthy HTML5 integrations that bring Firefox 4 'that much closer to taking over everything on the desktop.' Beyond the Chrome-like UI, Firefox 4 adds several new features that 'open up new opportunities for AJAX and JavaScript programmers to add more razzle-dazzle and catch up with Adobe Flash, Adobe AIR, Microsoft Silverlight, and other plug-ins,' Wayner writes. 'Firefox 4 also adds an implementation of the Websockets API, a tool for enabling the browser and the server to pass data back and forth as needed, making it unnecessary for the browser to keep asking the server if there's anything new to report.'"
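To make the WebSockets point concrete, here's a minimal browser-side sketch of the pattern Wayner describes: the server pushes data when it has something new, so the page no longer polls. The endpoint URL, message shape, and function names here are invented for illustration, not part of Firefox's own code.

```javascript
// Pure helper: turn a pushed message (JSON text) into an update object.
// The {type, payload} shape is a made-up convention for this sketch.
function parseUpdate(rawData) {
  const msg = JSON.parse(rawData);
  return { type: msg.type, payload: msg.payload };
}

// Browser usage: instead of repeatedly asking the server "anything new?",
// the server sends a message whenever it has one.
function connect(url) {
  const ws = new WebSocket(url); // e.g. "ws://example.com/updates" (hypothetical)
  ws.onopen = () => ws.send(JSON.stringify({ subscribe: "news" }));
  ws.onmessage = (event) => {
    const update = parseUpdate(event.data);
    console.log("server pushed:", update.type);
  };
  return ws;
}
```

The interesting part is what's absent: no `setInterval` firing off XMLHttpRequests, which is the polling overhead WebSockets is meant to eliminate.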

Merging the address and search fields is a big drawback. It further confuses people about what a URL is, and it encourages them and others (especially advertisers) to give out directions to web sites as if keywords were addresses. (Hey, just like AOL!)

If this trend continues, we'll have shenanigans and lawsuits claiming that "squatters" are using keywords on their pages that "belong to us". It will open another "IP" can of worms.

Encouraging people to rely on keywords also opens them up to phishing big time. It's like having them clean their teeth with their enema: Very semantically dirty!

URIs have become cumbersome. Making the net content-addressable is a big efficiency measure.

You can still give out a key that will only map to you, and return a URI that is clearly you. Or at least as clearly as happens now when someone does a Google search.

But now you're not constrained to identifying yourself with some bogus FQDN with a limiting TLD stuck on it.

As for Phishing, banks have moved to authentication systems that use graphics on the page to tell you that the password-entry box you're looking at is legit. If you don't see your predetermined secret glyph, you don't enter your password. And the glyph isn't sent until your browser and the server are connected by SSL, so it can't be sniffed and hacked into a phishing site. And it isn't sent unless your browser already has a cookie identifying it as having been validated previously, using a secret-question protocol. If you deleted the cookie, you go through the secret-question routine again.
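The flow described above can be sketched as a simple decision function. This is a toy model of the "sitekey"-style scheme, not any bank's actual implementation; the session fields and step names are invented.

```javascript
// Decide the next step of a sitekey-style login, per the flow above:
// cookie check first, then TLS, and only then reveal the secret glyph.
function nextLoginStep(session) {
  // No trusted-device cookie: repeat the secret-question round.
  if (!session.hasDeviceCookie) return "ask-secret-question";
  // Cookie present but the SSL/TLS channel isn't up yet:
  // never send the glyph where it could be sniffed.
  if (!session.tlsEstablished) return "wait-for-tls";
  // Glyph is shown; the user should refuse to type a password
  // until they recognize their predetermined secret image.
  return "show-secret-glyph";
}
```

Note the ordering matters: the glyph is the last thing revealed, only after both the device cookie and the encrypted channel check out.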

Short of adding more layers of such things, or using in-person pre-validated biometrics over secure links, you're not getting much more security than that on the internet. Using simple, recognizable URIs won't help you, and really, just invites social engineering based on URIs that look almost legit.

Hmm. Have ten million users each running the same calculation on different data on the server, or have those ten million users download their data and do the calculations on their own machines... which one will complete faster?

Server-side scripting is a massive bottleneck if the page has any complexity at all.
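A trivial illustration of the tradeoff: the server ships the raw data once, and each client computes its own summary locally, instead of the server repeating that work for every one of the ten million requests. The data, URL, and stats chosen here are made up.

```javascript
// Work that would otherwise run server-side once per request:
// each browser computes its own summary of the raw numbers.
function summarize(prices) {
  const total = prices.reduce((sum, p) => sum + p, 0);
  return {
    mean: total / prices.length,
    max: Math.max(...prices),
  };
}

// In the browser this would follow a single fetch of the raw data, e.g.
//   fetch("/data/prices.json").then(r => r.json()).then(summarize);
// (hypothetical endpoint). Here we just run it on sample data:
const stats = summarize([10, 20, 30, 40]); // mean 25, max 40
```

The server's cost drops to serving static data, which caches and scales far better than per-request computation.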

What you should be complaining about is the disastrous state of the code sent to the client side. Most of it is painfully bad.

Your post defines two distinct categories: URLs and search terms. Most people don't think of those as separate ideas. They're both just ways of telling the internet to show you a website.

The key distinction between a URL and a search term is that URLs are hard to remember and prone to typos. Search terms are far easier (and tend to be helpful even if you spell them wrong). Why would I want to type in "http://krugman.blogs.nytimes.com/" when I can just type "krugman [google.com]" (or even "krugrman [google.com]") and get my daily Keynesian economic analysis that way?

For the browser, the URL and the search term are completely distinct. For an engineer or a software programmer, it's clear why they would have separate fields for entry of one or the other.

But for a user (even a technically savvy user) semantic cleanliness doesn't make any sense and causes more problems than benefits.

Uh, the method you described does almost nothing to stop phishing. Doing a man-in-the-middle attack on it is trivial, so really all it does is require the phisher to handle each bank separately... which they probably have to do anyway in order to make their sites look the same. The only tip-off to the user would be an extra security question being asked, which no one will notice because banks ask those security questions at random anyway.

Yeah, and it'll also reduce the incentive for people to squat and typo-squat domain names.

I'm frankly tired of all that crap: if ICANN wants to deal with the rampant squatting, I'll start supporting "address bar for addresses only" thinking. Until then, I'd rather Google hijack me to a meaningful result than accidentally direct myself to some damn squatter site.

You do realize that Flash internally manages a display object hierarchy not unlike the DOM? There isn't much difference between writing apps in Flex/Flash and writing apps in JavaScript with something like the ExtJS toolkit. All the rich app frameworks I know of, on any platform, use the HTML-like approach of an element hierarchy plus a set of layout rules that are constantly recalculated.
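That shared pattern, a tree of display objects plus layout rules that get re-run on change, can be shown in a few lines. This is a deliberately toy model; the node shape and the single layout rule are invented, not taken from the DOM, Flex, or ExtJS.

```javascript
// A display object: intrinsic width plus child nodes, like a DOM
// element or a Flash DisplayObjectContainer (shape is made up here).
function makeNode(width, children = []) {
  return { width, children };
}

// One "layout pass": a container grows to fit its widest child.
// Real frameworks run many such rules over the whole tree on change.
function layout(node) {
  for (const child of node.children) layout(child);
  if (node.children.length > 0) {
    node.width = Math.max(node.width, ...node.children.map((c) => c.width));
  }
  return node;
}

const root = layout(makeNode(100, [makeNode(250), makeNode(80)]));
// root.width is now 250; mutating any child and re-running layout()
// re-applies the same rule, which is the recalculation loop described.
```

Whether the tree is called the DOM, the display list, or a component hierarchy, the architecture is the same: mutate nodes, re-run layout.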

HTML may be ill-suited to rich app development, but so is everything else. Win32 and X11 are both truly horrible APIs, arguably much worse than HTML+JS+CSS, yet combined they hold the majority share of native apps.

And by the way, the browsers of today are designed for rich applications. They have been for a few years now. Cars were originally designed to make it up to a brisk walking pace at best. Things change.

Definitely. I love having them separate. Besides, even my netbook has a resolution of 1366x768. Who needs an address bar that's over a thousand pixels wide? I mean, really. So much of their efforts go into optimizing screen space usage, but I feel that a unified bar that's mostly blank really defeats this purpose.

Because the hypertext transfer protocol was designed to transfer hypertext documents. It was not designed to be a remote application protocol.

Irrelevant. If it can be evolved to work well enough for people, then it is suitable. The bacterial flagellum is thought to be related to the Type III secretion system; it arose without any design, but it happened to work well enough to survive, and so it did.

I think the opposite. DNS has gone to shit because of the squatters, to the point that it's pretty much useless now.

And with all the phishing sites... well, we should be discouraging people from typing in $COMPANY_NAME.com to get the information they need. If they make one typo, or if the site they want is under a TLD other than .com, then at best they're going to be inconvenienced by loading the wrong page, and at worst they've entered their banking login into a phishing site.

It's far better for people to simply enter a reasonable approximation into a search bar and have a search engine return the site that's most likely what they wanted. Google is much more forgiving of typos than DNS.

And if you actually know the exact URL, then the functionality is still there for you to bypass the search engine and go directly there. I don't really see a downside.