Ajax has given developers the ability to transform the way mortals use the Web, from simply reading content to interacting with websites. There is an explosion of applications on the web that were previously available only in desktop environments. While many factors contributed to this, like increasing bandwidth, faster chips, mature tools, and a better understanding of how people use the web, Ajax made websites more alive and pleasant to use. But Ajax can also do the reverse. And since Ajax is a new thing to many developers (though the underlying technology has been around for years), it can be used in ways that break the conventions that made the Web easy to use in the first place.

Imagine you’re looking for videos of Paris Hilton and Nicole Richie. As always, you use Google. You click the first item in the search results. The link takes you to a page with tons of links vying for your attention. Every link on the page is transmitting a telepathic signal that says “Click me and I will be yours”. You’re undecided for a while but eventually you choose one, and 2048 milliseconds later you see another page with a bunch of links that say “Click me”. Hoping you’ll find the videos of Paris and Nicole, you click another link, another page, another link, and so on until the page asks for your credit card. Damn! Anyway, you simply click the Back button again and again until you’re back where you started.

Going back is possible because the browser remembers the location of the pages you recently visited. Every time you click a link, the browser loads the page and the history gets updated. That has been the convention since the Web’s inception. But Ajax loves breaking conventions. When you click a link and the page is loaded using Ajax, the history is untouched. So when it’s time to go back, clicking the Back button will not show the previous page. Instead, in our example, what you will see are the search results from Google.

To get around the back button dilemma, one proposal is to just remove the back button. Hmmn. As an experiment, hide the navigation buttons in your browser and then track how long you can browse the web without the back button. I know that’s a silly proposal. Trash it.

Dojo, an Ajax framework, provides a way to preserve the back button behavior. But thinking beyond the technical solution, what if breaking the behavior could be avoided in the first place? If an update results in a page that looks very different from the previous one, then users will believe they are on a different page already. In that case, you need to rethink your Ajax solution. If you use Ajax to update small and discrete areas of the page, there is less chance of problems arising from the back button. Of course, when in doubt, you can always do user testing to find out what works better. Actually, you should always do user testing.
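The usual technical fix, and the idea behind back-button support in frameworks like Dojo, is to record each Ajax state in the URL fragment so every update creates a real history entry the Back button can return to. Here is a minimal sketch of the idea, not Dojo’s actual API; the function name and fragment format are made up for the example:

```javascript
// Decode a fragment like "#page=2&sort=date" back into application
// state, so the page can be re-rendered when the user hits Back.
function restoreState(hash) {
  const state = {};
  for (const pair of hash.replace(/^#/, "").split("&")) {
    const [key, value] = pair.split("=");
    if (key) state[key] = decodeURIComponent(value || "");
  }
  return state;
}

// In a browser you would wire this to history navigation (sketch):
// window.addEventListener("hashchange", () => {
//   render(restoreState(location.hash));
// });
// ...and set location.hash after each Ajax update, so the browser
// records a history entry to go back to.
```

The key point is that only the fragment changes, so the browser records history without reloading the page.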

The back button is a symbol of freedom. It assures us that wherever our desire takes us, we can always count on it to take us back to where we’ve been before. It’s like going deep into a forest without needing to leave breadcrumbs, because retracing our steps is as simple as clicking the back button.

Did I say Ajax loves breaking conventions?

Not too long ago, you could always bookmark a page and return to it the next day. Well, you can still do that today for sites that have not yet been ajaxified (is that a word?). For ajaxified sites, bookmarking is a problem. The problem happens because when you bookmark a page, what is saved is the URL in the address bar. When you click a link and the page is loaded using Ajax, the address doesn’t change — the same root cause as the back button problem, where the history is never updated.

To be fair, the bookmarking woes are not exclusive to Ajax. Frames suffer the same problem. Whatever link you follow and no matter how deep you are inside a frame, the address does not change. Come to think of it, you can update the contents of one frame without affecting the others. So frames do have some Ajax-like behavior, and the same problem.

There are cases where leaving the address as is makes sense. You should be able to bookmark your search results in Amazon, but you shouldn’t bookmark a page halfway through a checkout process. The challenge for developers, then, is not how to bookmark everything but when to treat an operation as bookmarkable and when not to.
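For the states you do decide to make bookmarkable, the common trick is to serialize them into the URL fragment, so the address bar changes even though the page never reloads. A hedged sketch; the helper name and the state keys are assumptions for the example:

```javascript
// Serialize bookmark-worthy state (a search, not a checkout step)
// into a URL fragment the user can save and return to later.
function buildHash(state) {
  return (
    "#" +
    Object.entries(state)
      .map(([key, value]) => key + "=" + encodeURIComponent(value))
      .join("&")
  );
}

// In a browser (sketch):
// location.hash = buildHash({ q: "paris videos", page: "3" });
```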

Feedback has always been a usability challenge, even before Ajax arrived. Pre-Ajax, at least we had the status bar and the spinning logo to remind us that an action was under way (assuming you noticed it). In some ways, when you click a link and it shows a blank page, you get feedback, because the browser has changed and you can no longer do anything until the browser is done. This is the synchronous world of the pre-Ajax web.

But the ‘A’ in Ajax means asynchronous, and it causes your browser to behave differently. With an Ajax request, the browser does not give automatic feedback. You need to write code that tells the user an action is underway or is done. The asynchronous way also means the user can continue interacting with the page while the Ajax request is being processed. Unlike before, where the user had to wait for the current request to finish before proceeding to the next, your user may now have clicked a dozen links or pressed several buttons by the time the first request is finished. If you’re lucky, the results of these actions are mutually exclusive, but if they change some state in your application (e.g. a delete operation), then you may run into problems.
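One defensive pattern for the state-changing case is to refuse (or queue) new requests while one is still in flight. A minimal sketch, assuming promise-returning actions such as a `fetch` call; the function names are illustrative, not a specific library’s API:

```javascript
// Wrap state-changing actions so only one runs at a time. A second
// click while a request is in flight is ignored instead of firing
// a duplicate (and possibly destructive) request.
function createRequestGuard() {
  let busy = false;
  return async function run(action) {
    if (busy) return "ignored"; // user clicked again too soon
    busy = true;
    try {
      return await action(); // e.g. fetch("/delete", { method: "POST" })
    } finally {
      busy = false; // accept new actions once this one settles
    }
  };
}
```

An alternative is to queue the extra actions instead of dropping them; which is right depends on whether repeating the operation is safe.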

Just because you can sprinkle every Ajax technique imaginable on your website doesn’t mean that you should. I have a friend who got so excited with inline editing that he implemented it on almost every piece of information in his project. For a while it was fun, but as development went on, the code became unmanageable and he eventually dropped inline editing. Good thing he realized it before his end users had to use it.

Every time you’re thinking of using Ajax, ask yourself:

Will this improve my application or just lengthen my resume?

Will it help the users accomplish their tasks faster or help me get more bragging rights?

When we looked at the actual download speeds of the sites we tested, we found that there was no correlation between these and the perceived speeds reported by our users. About.com, rated slowest by our users, was actually the fastest site (average: 8 seconds). Amazon.com, rated as one of the fastest sites by users, was really the slowest (average: 36 seconds).
— The Truth About Download Time

Was Jakob Nielsen, the usability guru, wrong when he concluded that to avoid annoying users, pages should load in less than 10 seconds? I think he’s still right because it is no fun staring at the hourglass for 5 minutes. But when we tell our friends that Flickr is faster than Picasa, we don’t say uploading a 1MB JPEG takes 2.35 seconds in Flickr while it takes 4.86 seconds in Picasa. We just say Flickr is faster.

What is missing from Nielsen’s conclusion is that when users say a website is slow, they talk about their feelings and not what they see on the stopwatch. This does not mean that programmers should abandon measuring website performance. We still need to make that slow function run faster, and there is no way to tell whether we are progressing if we don’t know the score.

Browsers follow a fetch-parse-flow-paint process to load web pages. Given a URL, the fetch engine finds it and stores the page in a cache. The parse engine discovers the various HTML elements and produces a tree that represents the internal structure of the page. The flow engine handles the layout, while the paint engine’s job is to display the web page on the screen. Nothing unusual, except that when the parse engine sees an image, it stops and asks the fetch engine to read the image. The parse engine continues only after it has determined the image’s size. The end result is that the browser waits until all the elements of the page have been processed before it shows the page. During this processing, all the user sees is a blank page. This is how things worked in the first widely used web browser, Mosaic.

Netscape Navigator 1.0 took a different approach. When the parse engine sees an image, it still asks the fetch engine to load the image. The difference is that the parse engine puts a placeholder in the internal structure of the page to mark where the image is, and lets the flow and paint engines do their job. When the image is loaded and analyzed, the paint engine repaints the screen. This can happen several times if the page has a lot of images. If you measure the overall time it takes to finalize the page display, Mosaic was faster than Netscape. But users would say otherwise. Mosaic lacked any sign of progress, while the appearance of text, then an image, then another image made users think that Netscape was faster.

My first encounter with the world wide web was in April 1996, while working as a student assistant at the Advanced Science Technology Institute. Back then, web pages consisted mostly of text. Nowadays, it is not uncommon for a page to contain lots of big images, embedded videos, and references to several CSS and JavaScript files. So while computing power and bandwidth have improved over the years, content has also bloated, keeping performance an issue.

The common opinion is that if you want to improve a website’s performance, you focus on the database, web server, and other back-end stuff. But Yahoo! engineers found that most optimization opportunities are present after the web server has responded with the page. When a URL is entered into the browser, 62% to 95% of the time is spent fetching the images, CSS, and JavaScript contained in the page. It is clear that reducing the number of HTTP requests will also reduce response time.

Another cool product from the team is YSlow (nice name). It analyzes a web page and tells you why it is slow based on the 13 rules. YSlow is a Firefox plugin and works alongside another of the web developer’s indispensable tools, Firebug. When I first used YSlow with SchoolPad, my initial grade was F (58). I first set out to address Rule #10 – Minify JavaScript. Since SchoolPad is written on Ruby on Rails, the asset_packager plugin came in handy for merging all my CSS and JavaScript files, so that each can be loaded with a single reference. The asset_packager is also smart. In development mode, where CSS and JavaScript files are often updated, the plugin references the original script files, but in production mode it uses the minified versions of your CSS and JS files. A few changes to your Capistrano file, and you can make the minification (is that a word?) process automatic every time you deploy your application. After a few more tweaks, my overall grade is now C (78), with an F on rules #2 (Use a CDN) and #3 (Add an Expires header). I can’t address rule #2 because that requires money, and #3 has to wait because it requires more Googling.

It might happen that you have an A in YSlow yet users complain that your website is slow. Talk about an unlucky day. Don’t despair. Maybe, it is time to focus on managing expectations instead of performance.

I wrote my first GUI-based program using Visual Basic on Windows 3.1. (I’m sure Evan would argue that Basic is not a programming language). Any Visual Basic book would tell you to change the mouse pointer to an hourglass before a lengthy operation such as a database query, and change it back to the default pointer afterwards.

I think many people got so addicted to the mouse pointer that, along with the release of Windows 95, came a variety of themes that replaced the default mouse pointer icon with a rocketship, a barking dog, or a wiggling clownfish.

Anyway, back to the web. Wait! The reason the hourglass is widely used in desktop applications is because it tells the user that the application has accepted the action, and it is now working on it. In your web apps, you should do the same.

Traditionally, when you click a link in a website, the browser sends a request to the server, receives a page, and repaints the content. Feedback in most browsers is in the form of a spinning logo at the top right part or an expanding strip at the bottom or at the status bar.

Here comes Ajax. With Ajax you can do away with refreshing everything and update only a portion of your page. Unlike in the page-based model, the browser gives no built-in feedback that something is going on after an Ajax-based action. No more spinning logo. No more expanding strip bar. We went backward in interface design.

Fortunately, there is a way to give feedback, though it requires writing JavaScript. It could be as simple as writing a “Loading…” message on the page or using an animated GIF. One implementation would be to show a previously hidden text or image, or to create an ‘img’ element and append it where you want the indicator displayed (usually inside a ‘div’ element). Many web applications nowadays use various styles of animation, but the key idea is the same: a smoothly looping animation to indicate activity.
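A hedged sketch of that pattern, assuming a promise-returning request function (for instance one wrapping XMLHttpRequest or fetch) and an indicator element you have already placed in the page; the element id in the usage comment is made up:

```javascript
// Show the indicator before the request starts, hide it when the
// request settles, whether it succeeded or failed.
function withIndicator(indicator, request) {
  indicator.style.display = "inline"; // reveal the "Loading…" text or spinner GIF
  return request().finally(() => {
    indicator.style.display = "none"; // hide it again, success or failure
  });
}

// Usage in a browser (sketch):
// withIndicator(document.getElementById("spinner"),
//               () => fetch("/messages").then(r => r.json()));
```

Hiding the indicator in a `finally` matters: a failed request that leaves the spinner running forever is worse feedback than no spinner at all.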

Animated activity indicators are useful for tasks that incur short delays. If the delay takes too long, the user may think the application is going in circles or has become a zombie. For longer delays, it is best to show how much progress has been made, an estimate of the time remaining, or a sequence of messages telling the user what’s happening at the moment. This is very useful but tricky to implement, because HTTP requests don’t give tracking information at regular intervals. If you are making a sequence of 4 requests, you can guesstimate that progress is 25% done when the 1st call has completed. Flickr uses the best activity indicator I’ve seen so far when uploading images. Image files are usually big, and Flickr does a great job of giving progress feedback to its users — an overall progress and a per-file progress indicator.
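The guesstimate for a fixed sequence of requests can be sketched like this; the task list and callback are assumptions for the example, not a real upload API:

```javascript
// Run a known sequence of promise-returning tasks and report
// progress as completed/total after each one finishes, e.g.
// 25, 50, 75, 100 for four sequential requests.
async function runWithProgress(tasks, onProgress) {
  let done = 0;
  for (const task of tasks) {
    await task(); // e.g. one Ajax request per file
    done += 1;
    onProgress(Math.round((done / tasks.length) * 100));
  }
}
```

This only approximates progress between requests; per-request progress (like Flickr’s per-file bar) needs upload-level events rather than request counting.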

In the modem days, it was acceptable for many sites to be slow, given the hardware limitations. But now in the broadband era, amplified by tons of marketing, users expect websites to be lightning fast.

Nobody wants a slow website. But when users say it is slow, oftentimes it is because nothing is happening on the screen.

Sounds easy, right? That is until time pressure, individual preference, market forces and other factors come in and conspire against the design.

The most common business model for software is to regularly release versions of a product, where each release contains more features than the previous one. The new features give customers a reason to open their wallets and upgrade their software. Ideally, the new version keeps the good features and adds more good features.

Ideally.

When a new version is released, the design of the next version has already started. From a business perspective, this is a good strategy because the company can keep releasing new features. From a design perspective, gathering and understanding feedback from actual users seldom happens, even though it is a necessary step to improve the product. Without feedback, the designers would not know that users are having problems printing in landscape, or that they can’t find the spell-checking tool anymore. What used to be a simple thing to do becomes a source of confusion.

Making features available, it seems, is not a difficult thing to do. There is no physical limit to how many items a menu group can contain, how deeply nested your menus are, or how many buttons you use. If you’re designing a web application, you can include a hundred links on a page and Firefox won’t complain.

The difficult part is deciding what goes where. Which menu group should this be included in? Should this be part of the standard toolbar? Should it be introduced in a splash screen? Should this be at the top, middle, or bottom of the list? Even though there is unlimited space for every feature, the features compete with each other for the user’s attention. An item at the top of a menu list is more accessible than the items at the bottom. It is not uncommon for users to suggest or complain that the feature they often use should be at the top of the menu list.

More features also mean more options for the users. If the additional features are for accomplishing different tasks, it does not present a problem to the users, for example, saving a document and printing it. But if you give the users many options to accomplish a single task, it would require extra brain processing. In the case of printing, possible options include page size, number of pages in a sheet, print range, orientation, and color. Every time you give users a choice, they have to think about something and make a decision.

Features not only affect design but also the integrity of the software. As developers add more features, they also put in new code. The new code not only has the responsibility to implement the new features but also not to damage existing ones. As the code grows, complexity rises exponentially thus making it difficult to add new code and preserve the integrity at the same time.

Business strategy is not the only factor that can work against user-friendly software. Sometimes, the designers themselves work against good design. Making things better is a noble goal, but sometimes it gets confused with simply making things different. A designer involved in several iterations of a product feels an innate pressure to make things different from the previous version. When you hear designers suggest a redesign, it should only be done if it is aligned with making things better, like improving navigation, decreasing task completion time, or improving the quality of search results.

Design is often confused with aesthetics. If you survey the job requirements from companies looking for web designers, the primary requirement is often an expertise in Adobe Photoshop. Aesthetics will make the design pleasing to the eye but may make it less comfortable to use. Usability can make the interface comfortable to use but can be uglier. New technology, like Ajax, can make the software more responsive but could be at the expense of aesthetics and usability.

Making aesthetics a priority is not confined to interface design. It is everywhere. We buy alarm clocks that look beautiful but have unreadable numbers, or phones with vibrant colors but small keypads. When we moved to our new office, an admin staffer asked what I thought about our work area. I said the work area was too crowded and our developers would always hit the guy behind them when they stretched their arms. Puzzled, she was actually asking if I liked the colors of the wall. Because if not, they would change it. But the allotted space for each developer would remain.

It is our nature to be biased towards attractive things — Nokia cellphones, the iPod, BMWs, and Paris Hilton. It is no secret that movie actors and actresses receive more attention and that, all other variables being equal, attractive people are preferred in hiring decisions. Attractive things also foster positive attitudes like affection, loyalty, and patience, making people more tolerant of design problems. Even when attractive software is not user-friendly, it can still become successful, because once we like something, our natural talent to adapt kicks in. When we are hooked, forcibly or by choice, we find ways to adjust to the unusable interface. Over time, the interface becomes natural to us, even though a 5-step task could have been redesigned to take 2.

When users are already conditioned to do things a certain way, design improvements can have negative results if not managed properly. In the case of web applications, the ability to make quick updates immediately available to users is a double-edged sword. Adding inline validation to the order-entry form will improve the user experience, but reorganizing the flow of the pages will present problems to users, even if the change, as the designers see it, is an improvement.

The problem is that designers often think of themselves as typical users. Even though design discussions revolve around “what if the user wants to…” and “I think the user will be confused if…”, it is our nature to project our own beliefs onto others. What designers think of as improvements to the user’s experience are often just a result of their own biases. While the designers may be correct, designs that are the product of internal discussions should only ship if they match the results of studies with actual users. There are ways to involve users as early as possible, like releasing prototypes or chunking the interface into several iterations.

There is always the desire to make users happy. Still, designs can go wrong. One reason designs go wrong is that the people involved have become so proficient in what they are building that they can no longer perceive the areas that can cause trouble for the user. Designers know too much and are so accustomed to the software that, no matter how hard they try, they can no longer put themselves in the role of a newcomer. Predicting all the problems users will have, or the errors that will get made, is an impossible task. The only way to know these problems and errors is to observe users and learn what they do. While designers are experts in the software they are working on, users are experts in the task they are trying to perform with the software.

Sometimes, teams are not allowed to talk to end users. The most common reason is fear of the designers telling the users too much. Sometimes, teams work for clients who may be concerned about price and schedule but not usability. Often, designers depend only on requirements that have been filtered by people from marketing, support, and the offices of the CTO and CEO, who all believe they have a better understanding of what the users want. Sometimes, these people also have opinions on how things should be designed, which aggravates the situation the designers are in.

Design failures also happen when it is done not by designers but by programmers (sometimes by managers). Programmers know the value of simplicity in implementation and all things being equal, they would rather do the simplest solution that would work. This is a sound principle because as code grows, complexity rises exponentially. By implementing simple solutions, code becomes less costly to maintain. When one design option requires more code than the other, it is just natural for programmers to choose the design that requires less code to implement.

The factors working against good design are not hurdles that can be solved by a technical solution. Designing user-friendly software starts with the right mindset. It is very common for technical people to call users stupid every time they receive a complaint about what is seemingly a trivial task. In this kind of environment, no user-friendly software will ever get produced.

Some think good design is just common sense. While it is true that knowledge of quantum mechanics is not a prerequisite for good design, thinking that design is just common sense leads us to believe that we know what users want. We can guess and try as hard as we can, but only by asking and observing users will we know what they want and the problems they are facing.

The right mindset knows that users are different, often the exact opposite of you. What is easy for you is hard for them. What is trivial for you takes a lot of time for them. Users don’t think the same, don’t act the same, and don’t have the same experience as you do.