Towards Next Generation URLs

Changes are afoot in both development practices and Web server technology that should help advance URLs to the next generation.

Introduction

For many years we have heard about the impending death of URLs that are difficult to type, remember and preserve. The use of URLs has actually improved little thus far, but changes are afoot in both development practices and Web server technology that should help advance URLs to the next generation.

Dirty URLs

Complex, hard-to-read URLs are often dubbed dirty URLs because they tend to be littered with punctuation and identifiers that are at best irrelevant to the ordinary user. URLs such as http://www.example.com/cgi-bin/gen.pl?id=4&view=basic are commonplace in today's dynamic Web. Unfortunately, dirty URLs have a variety of troubling aspects, including:

Dirty URLs are difficult to type.

The length, punctuation, and complexity of these URLs make typos commonplace.

Dirty URLs do not promote usability.

Because dirty URLs are long and complex, they are difficult to repeat or remember and provide few clues for average users as to what a particular resource actually contains or the function it performs.

Dirty URLs are a security risk.

The query string, which follows the question mark (?) in a dirty URL, is often modified by hackers in an attempt to perform a front-door attack on a Web application. The file extensions used in complex URLs, such as .asp, .jsp, .pl, and so on, also give away valuable information about the implementation of a dynamic Web site that a potential hacker may utilize.

Dirty URLs impede abstraction and maintainability.

Because dirty URLs generally expose the technology used (via the file extension) and the parameters used (via the query string), they do not promote abstraction. Instead of hiding such implementation details, dirty URLs expose the underlying "wiring" of a site. As a result, changing from one technology to another is a difficult and painful process filled with the potential for broken links and numerous required redirects.

Why Use Dirty URLs?

Given the numerous problems with dirty URLs, one might wonder why they are used at all. The most obvious reason is simply convention -- using them has been, and so far still is, an accepted practice in Web development. This fact aside, dirty URLs do have a few real benefits, including:

They are portable.

A dirty URL generally contains all the information necessary to reconstruct a particular dynamic query. For example, consider how a query for "web server software" appears in Google -- http://www.google.com/search?hl=en&ie=UTF-8&oe=UTF-8&q=Web+server+software. Given this URL, you can rerun the query at any time in the future. Though difficult to type, it is easily bookmarked.

They can discourage unwanted reuse.

The negative aspects of a dirty URL can be regarded as positive when the intent is to discourage the user from typing a URL, remembering it, or saving it as a bookmark. The intimidating look and length of a dirty URL can be a signal to both user and search engine to stay away from a page that is bound to change. This is often simply a welcome side effect, rather than a conscious access control policy -- frequently nothing is done to prevent actual use of the URL by means of session variables or referring URL checks.

Cleaning URLs

The disadvantages of dirty URLs far outweigh their advantages in most situations. If the last 30 or 40 years of software development history are any indication of where development for the Web is headed, abstraction and data hiding will inevitably increase as Web sites and applications continue to grow in complexity. Thus, Web developers should work toward cleaner URLs by using the following techniques:

Keep them short and sweet.

The first path to better URLs is to design them properly from the start. Try to make the site directories and file names short but meaningful. Obviously, /products is better than /p, but resist the urge to get too descriptive. Having www.xyz.com/productcatalog doesn't add much meaning (if a user looks for a product catalog, they might well expect to find it at or near the top-level products page), but it does needlessly restrict what the page can reasonably contain in the future. It's also harder to remember or guess at. Shoot for the shortest identifiers consistent with a general description of the page's (or directory's) contents or function.

Avoid punctuation in file names.

Often designers use names like product_spec_sheet.html or product-spec-sheet.html. The underscore is often difficult to notice and type, and these connectors are usually a sign of a carelessly designed site structure: they are only required because the previous rule wasn't followed.

Use lower case and try to address case sensitivity issues.

Given the last tip, you might instead name a file ProductSpecSheet.html. However, casing in URLs is troubling because, depending on the Web server's operating system, file names and directories may or may not be case sensitive. For example, http://www.xyz.com/Products.html and http://www.xyz.com/products.html are two different files on a UNIX system but the same file on a Windows system. Add to this the fact that www.xyz.com and WWW.XYZ.COM are always the same domain, and the potential for confusion becomes apparent. The best solution is to make all file and directory names lowercase by default and, in a case-sensitive server operating environment, to ensure that URLs will be correctly processed no matter what casing is used. This is not easy to do under Apache on Unix/Linux systems, although URL rewriting and spell checking can help (discussed below).
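As a rough illustration of the normalization policy, a server-side filter could lowercase each incoming path and issue a redirect whenever the requested casing differs. This is a sketch only: the function name is hypothetical, and a real filter would run before the file-system lookup.

```python
def canonical_redirect(path):
    """Return the lowercase redirect target for a mixed-case path,
    or None if the path is already in canonical (lowercase) form."""
    lower = path.lower()
    return None if path == lower else lower

# A request for /Products.html is redirected to /products.html;
# a request already in canonical form passes through untouched.
print(canonical_redirect("/Products.html"))
print(canonical_redirect("/products.html"))
```

Redirecting (rather than silently serving) the lowercase form also keeps search engines and bookmarks converging on a single canonical URL.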

Do not expose technology via directory names.

Directory names commonly or easily associated with a given server-side technology unnecessarily disclose implementation details and discourage permanent URLs. More generic paths should be used. For example, instead of /cgi-bin or /javascript, use a /scripts directory; instead of /css, use /styles; and so on.

Plan for host name typos.

The reality of end-user navigation is that around half of all site traffic comes from directly typed URLs or bookmarks. If users want to go to Amazon's Web site, they know to type www.amazon.com. However, accidentally typing ww.amazon.com or wwww.amazon.com is fairly easy if a user is in a hurry. Adding a few entries to a site's domain name service to map w, ww, and wwww to the main site, in addition to the usual www.site.com and site.com, is well worth the few minutes required to set them up.
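In BIND-style zone file terms, the extra entries might look like the following sketch. The zone and host names are hypothetical, and the Web server must also be configured to answer for these hostnames (for example, via ServerAlias directives in Apache).

```
; hypothetical additions to the xyz.com zone file
w     IN  CNAME  www.xyz.com.
ww    IN  CNAME  www.xyz.com.
wwww  IN  CNAME  www.xyz.com.
```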

Plan for domain name typos.

If possible, secure common "fat finger" typos of domain names. Given the proximity of the "z" and "x" keys on a standard computer QWERTY keyboard, it is no wonder Amazon also has contingency domains like amaxon.com. Google allows for such variations as gooogle.com and gogle.com. Unfortunately, many Web traffic aggregators will purchase the typo domains for common sites, but most organizations should find some of their typo domains readily available. Organizations with names that are difficult to spell, like "Ximed," might want to have related domains like "Zimed" or "Zymed" for users who know the name of the organization but not the correct spelling. The particular domains needed for a company should reveal themselves during the course of regular offline correspondence with customers.

Support multiple domain forms.

If an organization has many forms to its name, such as International Business Machines and IBM, it is wise to register both forms. Some companies will register their legal form as well, so XYZ, LLC or ABC, Inc. might register xyzllc.com and abcinc.com alongside their primary domains. While it seems like a significant investment, if you use one of the new breed of low-cost registrars (like itsyourdomain.com), the price per year for numerous domains is quite reasonable. Given alternate domain extensions like .net, .org, .biz and so on, the question arises -- where to stop? Anecdotally, the benefits drop off significantly with the newer alternate domain forms (like .biz, .cc, and so on), so it is better to stick with the common domain form (.com) and any regional domains that are appropriate (e.g. .co.uk).

Add guessable entry point URLs.

Since users guess domain names, it is not a stretch for users -- particularly power users -- to guess directory paths in URLs. For example, a user trying to find information about Microsoft Word might type http://www.microsoft.com/word. Mapping multiple URLs to common guessable site entry points is fairly easy to do. Many sites have already begun to create a variety of synonym URLs for sections. For example, to access the careers section of the site, the canonical URL might be http://www.xyz.com/careers. However, adding in URLs like http://www.xyz.com/career, http://www.xyz.com/jobs, or http://www.xyz.com/hr is easy and vastly improves the chances that the user will hit the target. You could even go so far as to add hostname remapping so that http://investor.xyz.com, http://ir.xyz.com, http://investors.xyz.com, and so on all go to http://www.xyz.com/investor. The effort made to think about URLs in this fashion not only improves their usability, but should also promote long term maintainability by encouraging the modularization of site information.
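Under Apache, for instance, the synonym mappings described above can be expressed with a single redirect rule. This is a sketch using the hypothetical paths from the example; mod_alias's RedirectMatch sends each synonym to the canonical URL.

```apache
# Map synonym entry points onto the canonical careers URL
RedirectMatch permanent ^/(career|jobs|hr)$ /careers
```

Using a permanent (301) redirect, rather than serving the same content at every synonym, keeps one canonical URL in bookmarks and search engine indexes.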

Where possible, remove query strings by pre-generating dynamic pages.

Often, complex URLs like http://www.xyz.com/press/releasedetail.asp?pressid=5 result from an inappropriate use of dynamic pages. Many developers use server-side scripting technologies like ASP/ASP.NET, ColdFusion, PHP, and so on to generate "dynamic" pages that are actually static. In the previous URL, for example, the ASP script pulls press release content from a database using a primary key of 5 and generates a page. However, in nearly all cases, this type of page is static both in content and presentation. Generating the page dynamically at view time wastes precious server resources, slows the page down, and adds unnecessary complexity to the URL. Some dynamic caches and content distribution networks will alleviate the performance penalty here, but the unnecessarily complex URLs remain. It is easy to pre-generate such a page to its static form and clean its URL. Thus, http://www.xyz.com/press/releasedetail.asp?pressid=5 might become http://www.xyz.com/press/pressrelease5 or something much more descriptive, like http://www.xyz.com/press/03-02-2003 -- or better yet, http://www.xyz.com/press/newproduct. The question of when to generate a page, at request time or beforehand, is not much different from the question of whether a program should be interpreted or compiled.
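A minimal sketch of the pre-generation idea, assuming a hypothetical `releases` dictionary standing in for the database query the ASP script would otherwise run on every request:

```python
# Hypothetical data source standing in for the press release database
releases = {5: ("New Product Launch", "Full text of the release...")}

def pregenerate(pressid, outdir="press"):
    """Render one press release to static HTML at build time,
    so no script runs when the page is later requested."""
    title, body = releases[pressid]
    html = f"<html><head><title>{title}</title></head><body>{body}</body></html>"
    # The clean path replaces press/releasedetail.asp?pressid=5
    return f"{outdir}/pressrelease{pressid}.html", html

path, html = pregenerate(5)
print(path)
```

Run over the whole data set whenever content changes, a script like this produces plain files the server can deliver with no per-request scripting at all.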

Rewrite query strings.

In the cases where pages should be dynamic, it is still possible to clean up their query strings. Simple cleaning usually remaps the ?, &, and + symbols in a URL to more readily typeable characters. Thus, a URL like http://www.xyz.com/presssearch.asp?key=New+Robot&year=2003&view=print might become something like http://www.xyz.com/presssearch.asp/key/New-Robot/year/2003/view/print. While this makes the page "look" static, it is indeed still dynamic. The look of the URL is a little less intimidating to users and may be more search engine friendly as well (search engines have been known to halt at the ? character). In conjunction with the next tip, this might even discourage URL parameter manipulation by potential site hackers who can't tell the difference between a dynamic page and a static one. The challenge with URL rewriting is that it takes some significant planning to do well, and the primary tools used for these purposes -- rule-based URL rewriters like mod_rewrite for Apache and ISAPI Rewrite for IIS -- have daunting rule syntax for developers unseasoned in the use of regular expressions. However, the effort to learn how to use these tools properly is well worth it.
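With mod_rewrite, a rule along these lines could map the clean path form back to the real query string internally. This is a sketch: the script name and parameters follow the example above, and translating the hyphen in New-Robot back to a space is left to the script itself.

```apache
RewriteEngine On
# Internally rewrite /presssearch.asp/key/X/year/Y/view/Z to the
# original query-string form; the browser never sees the dirty URL
RewriteRule ^/presssearch\.asp/key/([^/]+)/year/([^/]+)/view/([^/]+)$ /presssearch.asp?key=$1&year=$2&view=$3 [L]
```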

Remove extensions from files in URL and source.

Probably the most interesting URL improvement that can be made involves the concept of content negotiation. Despite being a long-supported part of the HTTP specification, content negotiation is rarely used on the Web today. The basic idea of content negotiation is that the browser transmits information about the resources it wants or can accept (MIME types preferred, language used, character encodings supported, etc.) to the server, and this information is then used, along with server configuration choices, to dynamically determine the actual content and format that should be transmitted back to the browser. Metaphorically, the browser and the server hold a negotiation over which of the available representations of a given resource is the best one to deliver, given the preferences of each side. What this means is that a user can request a URL like http://www.xyz.com/products, and the language of the content returned can be determined automatically -- resulting in the content being delivered from either a file like products-en.html for English-speaking users or one like products-es.html for Spanish speakers. Technology choices such as file format (PNG or GIF, XHTML or HTML) can also be determined via content negotiation, allowing a site to support a range of browser capabilities in a manner transparent to the end user.

Content negotiation not only allows developers to present alternate representations of content but has a significant side effect of allowing URLs to be completely abstract. For example, a URL like http://www.xyz.com/products/robot, where robot is not a directory but an actual file, is completely legal when content negotiation is employed. The actual file used, be it robot.html, robot.cfm, robot.asp, etc., is determined using the negotiation rules. Abstracting away from the file extension details has two significant benefits. First, security is significantly improved as potential hackers can't immediately identify the Web site's underlying technology. Second, by abstracting the extension from the URL, the technology can be changed by the developer at will. If you consider URLs to be effectively function calls to a Web application, cleaned URLs introduce the very basics of data hiding.

URLs can be cleaned server-side using a Web server extension that implements content negotiation, such as mod_negotiation for Apache or PageXchanger for IIS. However, getting a filter that can do the content negotiation is only half of the job. The underlying URLs present in HTML or other files must have their file extensions removed in order to realize the abstraction and security benefits of content negotiation. Removing the file extensions in source code is easy enough using search and replace in a Web editor like Dreamweaver MX or HomeSite. Some tools, like w3compiler, are also being developed to improve page preparation for negotiation and transmission. One word of assurance: don't jump to the conclusion that your files won't be named page.html anymore. Remember that, on your server, the precious extensions are safe and sound. Content negotiation only means that the extensions disappear from source code, markup, and typed URLs.
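On Apache, the simplest form of this is mod_negotiation's MultiViews option, which lets a request for an extensionless path be matched against the files actually on disk. The directory path here is a hypothetical example.

```apache
# A request for /products/robot is negotiated against the available
# representations on disk: robot.html, robot.en.html, robot.es.html, etc.
<Directory "/var/www/products">
    Options +MultiViews
</Directory>
```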

Automatically spell check directory and file names entered by users.

The last tip is probably the least useful, but it is the easiest to implement: spell check your file and directory names. On the off chance that a user spells a file name wrong, makes a typo in an extension or path, or encounters a broken link, recovery is easy enough with a spelling check. Since the typo will generate a 404 on the server, a spelling module can step in and try to match the file or directory name most likely intended. If file and directory names are relatively unique within a site, this last-ditch effort can match correctly for numerous typos. If not, you get the 404 as expected. Creating simple "Did you mean X?"-style corrections requires only the installation of a server filter like mod_speling for Apache or URLSpellCheck for IIS. The performance hit is not an issue, given that the correction filter is only invoked on a 404 error, and it is better to deliver the proper page than to save a minor amount of processing on error page delivery. In short, there is no reason this shouldn't be done, and it is surprising that this feature is not built into all modern Web servers.
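The matching itself is just fuzzy string comparison. As a rough sketch of what such a filter does (the page list is hypothetical), Python's standard difflib can find the closest known path for a 404'd request:

```python
import difflib

# Hypothetical list of the site's real paths
pages = ["/products", "/careers", "/press", "/investor"]

def suggest(bad_path, known_paths, cutoff=0.6):
    """Return the known path closest to a 404'd request, or None
    if nothing is similar enough to offer as a correction."""
    matches = difflib.get_close_matches(bad_path, known_paths, n=1, cutoff=cutoff)
    return matches[0] if matches else None

print(suggest("/prodcuts", pages))   # -> /products
print(suggest("/zzz", pages))        # -> None
```

The cutoff keeps the filter honest: a request with no plausible match falls through to the ordinary 404 page rather than being redirected somewhere arbitrary.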

Conclusions

Most of the tips presented here are fairly straightforward, with the partial exception of URL cleaning and rewriting. All of them can be accomplished with a reasonable amount of effort. The result of this effort should be cleaned URLs that are short, understandable, permanent, and devoid of implementation details. This should significantly improve the usability, maintainability, and security of a Web site.

The potential objections that developers and administrators might raise against next generation URLs will probably concern performance problems with the server filters used to implement them, or search engine compatibility. As to the former, many of the required technologies are quite mature in the Apache world, and their newer IIS equivalents are usually explicitly modeled on the Apache exemplars, which bodes well. As to the search engine concerns, Google has so far shown no issue at all with cleaned URLs. At this point, the main thing standing in the way of the adoption of next generation URLs is the simple fact that so few developers know they are possible, while some who do are too comfortable with the status quo to explore them in earnest. This is a pity because, while these improved URLs may not be the mythical URN-style keyword always promised to be just around the corner, they can substantially improve the Web experience for users and developers alike in the long run.

Further Resources

Articles

Numerous articles have been written about the need for clean URLs. A few of the more prominent ones are cited here.

The authors would encourage submission of other tools and articles to improve the article’s resource listing.

License

This article has no explicit license attached to it but may contain usage terms in the article text or the download files themselves. If in doubt please contact the author via the discussion board below.

About the Author

Port80 Software, Inc. is an innovative developer of software products for Web site administrators, developers, and owners focused on Microsoft Internet Information Services (IIS). Port80 products enhance IIS functionality, security, performance, and user experience, augmenting IIS with features on par with or better than those provided by the Apache server. Port80 also develops the w3compiler desktop tool for next generation markup and code optimization. Port80 Software is a Microsoft Certified Partner located in San Diego, CA. Additional information about the company is available on the Internet at www.port80software.com.

Good article and I certainly think your intentions are spot on. However, you should distinguish between web-applications and web-sites.

A site such as Amazon should use clean URLs, perfectly right. I should be able to type www.amazon.com/books/sciencefiction/asimov/irobot and be taken to that particular book. Amazon are shocking for this though. However when it comes to their checkout procedure (the web-app bit really) then I should not be bothered about the URL or trying to type in URLs.

I also disagree about querystrings not being abstract. The alternative is to submit form data, which is far more technology-specific and harder to handle than a very easy-to-cut-and-paste URL string. Plus you don't get the annoying "Do you want to resubmit the form data?" when you refresh.

Anyway, good article, lots of good points.

Paul Watson, Bluegrass, Cape Town, South Africa

brianwelsch wrote: I find my day goes by more smoothly if I never question other people's fantasies. My own disturb me enough.

Good article and I certainly think your intentions are spot on. However, you should distinguish between web-applications and web-sites.

REPLY: Yes, that is true. However, we do think that when using a URL as a UI, regardless of site purpose or style, it should be well thought out. For a Web application, the entry points should be well defined and regular, and everything else hidden. For "Web sites," where controlling the entry point isn't as important, the clean URL is more important than ever, as you note. Unfortunately, in most cases the distinction between Web app and site isn't as clear as it could be.

A site such as Amazon should use clean URLs, perfectly right. I should be able to type www.amazon.com/books/sciencefiction/asimov/irobot and be taken to that particular book. Amazon are shocking for this though. However when it comes to their checkout procedure (the web-app bit really) then I should not be bothered about the URL or trying to type in URLs.

I also disagree about querystrings not being abstract. The alternative is to submit form data, which is far more technology-specific and harder to handle than a very easy-to-cut-and-paste URL string. Plus you don't get the annoying "Do you want to resubmit the form data?" when you refresh.

REPLY: Agreed on the POSTing issue and how it is handled, but I think you will see that you could rewrite the query string to abstract things more and still use GET. Imagine a query string simply going from userid=4&view=6 to /userid/4/view/6 or something. The value is maybe not as huge as some of the other points made, but given the amount of mod_rewrite hacking going on, there are clearly proponents for this.

On a related note, in the future we may see, with the rise of Rich Internet Applications (e.g. Flash MX), that people will have one entry point URL and within that app you don't pass state data the traditional way. If this works out, the query string (and, to some extent, the POST method) will certainly diminish in usage significantly. However, so far this has not come to pass and may never. Obviously the Web application people are pushing this model more than the "Web site" folks.

Anyway, good article, lots of good points.

REPLY: Thanks. If we just get people to question the current status quo in Web app development and quality, then we did our job.

Chris Neppes wrote: On a related note, in the future we may see with the rise of Rich Internet Applications (e.g. FlashMX) that people will have one entry point URL and within that app you don't pass state data the traditional way

For pure web-applications that is fine. But I sincerely hope the RIA method is not applied to content and resources.

The very fundamental principle of the internet is the URL. As soon as you start hiding it behind one non-changing URL and preventing resources from being accessed directly via specific URLs, you kill off the net.

I personally see web-applications dying off and internet enabled applications picking up from there. The web will go back to being a linked content datastore.

Paul Watson, Bluegrass, Cape Town, South Africa


Although I agree with good design and your general point, I find some of your article impractical and/or wrong. Let me give you some examples:

>Dirty URLs are difficult to type.

Users don't ever type them anyway. They use the web pages, they don't mess about in the address bar. That's obvious.

>Dirty URLs are a security risk.

No they aren't. Bad programmers are the risk. You do not offer an alternative that is better than a query string!

>and the parameters used (via the query string)

But they do promote abstraction. The query string has a standard format (I have used PHP, C# and Java). So you can use any language on the server. I really cannot understand where you are coming from on this.

Users don't ever type them anyway. They use the web pages, they don't mess about in the address bar. That's obvious.

REPLY: Is it? Have you never received a truncated or wrapped URI in an email and had to reconstruct it? Have you never seen a complex URI referenced in a PowerPoint presentation or hard copy materials? Have you never had someone repeat one over the phone to you?

>Dirty URLs are a security risk.

No they aren't. Bad programmers are the risk. You do not offer an alternative that is better than a query string!

REPLY: First, you neglect to mention file extensions, which are a major source of "noise" in traditional URLs, and which almost always betray the backend technology. This is a principal reason we suggest removing them. As to query string alternatives, we make two suggestions in the article: 1) pregenerate static pages where appropriate (which has additional performance and searchability benefits) and 2) rewrite them to a more innocuous, harder-to-guess-at format (an old mod_rewrite trick, as detailed in some of the links at the bottom of the article).

>and the parameters used (via the query string)

But they do promote abstraction. The query string has a standard format (I have used PHP, C# and Java). So you can use any language on the server. I really cannot understand where you are coming from on this.

REPLY: First, while a query string is not bound to the technology used to parse it (though, as we point out, the file extension normally is), it does nothing to abstract the URL from a key implementation detail, namely, whether the page is dynamic or static. A clean URI, by contrast, has the same form regardless of how, or when, or by what kind of agent, a given resource was composed.

Second, doing without the query string format is one way to encourage developers to stop thinking of the URL as part of a backend system, and to start thinking of it as what it is -- a public interface. Maybe this in turn will help discourage such practices as exposing database field names in the URL -- the SQL injector's delight.

Finally, to take the idea still further, imagine getting rid of parameter names altogether. Think of how modern programming languages work. Imagine having to do this to call a function:

foo(parm0=value0, parm1=value1, parm2=value2)

Silly looking, isn't it? Why should someone calling this function need to know (and accurately repeat) the internal parameter names? Why should its implementation be bound to those names forever? Why not abstract such details from the interface?

Well, exposing parameter names in a function signature makes no more sense than exposing them in a URL's query string. This illustrates a general principle of abstraction: anything internal to the application that _can_ be taken out of its public interface, should be. Obviously, the traditional query string isn't there yet.

Chris Neppes (for the authors) wrote: Finally, to take the idea still further, imagine getting rid of parameter names altogether. Think of how modern programming languages work. Imagine having to do this to call a function:

foo(parm0=value0, parm1=value1, parm2=value2)

Silly looking, isn't it? Why should someone calling this function need to know (and accurately repeat) the internal parameter names? Why should its implementation be bound to those names forever? Why not abstract such details from the interface?

Well, exposing parameter names in a function signature makes no more sense than exposing them in a URL's query string. This illustrates a general principle of abstraction: anything internal to the application that _can_ be taken out of its public interface, should be. Obviously, the traditional query string isn't there yet.

I can fully understand your point, as with the point about Amazon and the /sciencefiction/asimov querystring.

However, Visual Basic uses this syntax, and if you ever use ADO or ADO.NET with parameters, you will also know that this kind of functionality is used. The reason it is used is that it allows you to omit optional parameters. Similarly, if you could go to amazon.co.uk/author=asimov&category=sciencefiction, or amazon.co.uk/title=irobot&author=asimov, then surely this would be more useful than trying to second-guess whether you needed to enter the category then the author.

I can see how this kind of functionality can be useful (especially if you have a good search engine for your site, so that going to http://www.foo.com/thing returns a search result for "thing" rather than a 404), but simply putting in place another method of passing parameters to the site isn't going to help, as users are rarely going to think of the same random URL as the programmer.

I think you are confusing two different cases here, each of which demonstrates a different advantage of clean URIs:

Case 1: You have www.foo.com/thing and the /thing is an interface to a resource that is reasonably represented by that term. This is like saying www.microsoft.com/word and getting the main Word homepage, for example. Here, the advantage of a clean URI is primarily for the end user seeking to find a resource, without necessarily knowing where it is in the site -- as you point out, it is a kind of search mechanism.

Case 2: You have actual URL parameters, some of which happen to be optional. In this case, if one wants to keep the URL as abstract as possible, optional arguments could easily be accommodated the way they are accommodated in most programming languages, namely as something that can be added to a call when necessary. Thus, just as both of these function calls will work:

foo(arg0, arg1, optarg0)
foo(arg0, arg1)

...so would both of these URLs:

www.foo.com/arg0/arg1/optarg0/
www.foo.com/arg0/arg1/

Here, the abstraction is not for the sake of end users, but rather for programmers -- those who build and maintain the application, and those who make use of it -- and the main benefit is not searchability or guessability, but interface persistence. Even VB, as verbose as it is compared to a terse language like C, still does not require that arguments be named when they are passed.