So, we don’t care about performance or optimization much anymore. Except in one place: JavaScript running on browsers in AJAX applications. And since that’s the direction almost all software development is moving, that’s a big deal.

Using the history of DOS and Windows applications as his guide, he draws three conclusions from this point:

First, the observation that a slow application today may not be slow tomorrow:

[Historically, the] developers who ignored performance and blasted ahead adding cool features to their applications will, in the long run, have better applications.

Second, we’re likely to see some sort of uber-Ajax toolkit emerge. It will be slow, but see the previous point:

If history repeats itself, we can expect some standardization of Ajax user interfaces to happen in the same way we got Microsoft Windows. Somebody is going to write a compelling SDK that you can use to make powerful Ajax applications with common user interface elements that work together. And whichever SDK wins the most developer mindshare will have the same kind of competitive stronghold as Microsoft had with their Windows API.

Third, once this happens, the applications with custom-rolled UI toolkits will be obsolete:

And while you’re not paying attention, everybody starts writing NewSDK apps, and they’re really good, and suddenly businesses ONLY want NewSDK apps, and all those old-school Plain Ajax apps look pathetic and won’t cut and paste and mash and sync and play drums nicely with one another. And Gmail becomes a legacy. The WordPerfect of Email. And you’ll tell your children how excited you were to get 2GB to store email, and they’ll laugh at you. Their nail polish has more than 2GB.

Time will tell whether Joel’s naive application of history to the present bears out, but it’s hard to dispute that in a year or two, JavaScript will be much, much faster.

I did smirk a bit when reading this bit:

You can follow the p-code/Java model and build a little sandbox on top of the underlying system. But sandboxes are penalty boxes; they’re slow and they suck, which is why Java Applets are dead, dead, dead… Sandboxes didn’t work then and they’re not working now.

Someone tell that to Microsoft and Adobe, quick! ;-)

UPDATE:

A commenter complained about my use of the word “naive” without supporting comments. I don’t want to bore the community with my own essay, but in brief:

Every Ajax/Web 2.0 play and its dog is trying to repeat the DOS/Windows history, but it’s an entirely different marketplace, with perhaps three orders of magnitude more players. Despite a few hundred years of business precedent when Microsoft came around, no one predicted the DOS/Windows monopoly, but plenty of pundits were off predicting other interesting futures based on their analysis of the past.

I think it’s a touch naive to assume that:

The NewSDK will be the second coming of Microsoft Windows; this is exactly how Lotus lost control of the spreadsheet market. And it’s going to happen again on the web because all the same dynamics and forces are in place. The only thing we don’t know yet are the particulars, but it’ll happen.

Any web developer knows that an imperfect free market with imperfect bickering standards bodies can perpetuate wildly inefficient and incomplete platforms for an awfully long time–despite a clear motivation from a lot of big companies to create a large, high-level standard set of cross-platform APIs. Throw in the open-source wild card, where large numbers of developers create amazing software for no particularly compelling financial motive, and the traditional predictable dynamics of the marketplace are further disturbed.

If recent history is any guide, it may be that instead of some “NewSDK” taking over, we’re more likely to continue to see incremental improvements throughout the web stack and gradual cooperation amongst browser vendors. Cross-browser-OS-compatible-across-every-web-app drag-and-drop and copy-and-paste? Yeah, maybe, but if that’s going to happen, it’s probably more likely to emerge via HTML5-like efforts than a new high-level GWT-like platform.

Am I wrong in thinking Joel’s history lesson a bit naive? Where do you think things are headed?

I find it hard to believe that only one toolkit will come out on top. I think each toolkit has its slow areas to begin with, and each toolkit will change as necessary to match the browser’s capabilities.

Naive?? It all sounds pretty well-reasoned to me. Maybe, if you’re going to inject personal judgments into posts that are primarily nothing more than quotes from TFA, you could do your readers the service of saying why you feel that way?

Ryan, I totally agree. One of the primary reasons that Ajax exists is performance. We were tired of full page refreshes every time we wanted to talk to the server, so we started using XHR to transmit just the necessary information. Clients I have worked with certainly seem to think performance is critical as well, and I don’t think the need for performance is going to go away.
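That performance argument is easy to see in code. Here is a minimal sketch of the classic XHR pattern the comment alludes to; the `/stock-price` endpoint and its parameters are invented for illustration, not a real API:

```javascript
// Build a query string from only the fields we need to send --
// the point of XHR is that this handful of bytes replaces a full page refresh.
function buildQuery(params) {
  return Object.keys(params)
    .map(function (k) {
      return encodeURIComponent(k) + "=" + encodeURIComponent(params[k]);
    })
    .join("&");
}

// Classic 2007-style Ajax: fetch a fragment and hand it to a callback,
// which updates a single element instead of re-rendering the whole page.
// The "/stock-price" endpoint is hypothetical.
function fetchFragment(params, onDone) {
  var xhr = new XMLHttpRequest();
  xhr.open("GET", "/stock-price?" + buildQuery(params), true);
  xhr.onreadystatechange = function () {
    if (xhr.readyState === 4 && xhr.status === 200) {
      onDone(xhr.responseText);
    }
  };
  xhr.send();
}
```

In a browser you would call `fetchFragment({ symbol: "GOOG" }, callback)` and swap the result into a single element; the full-page round trip never happens.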

Apples and oranges. Comparing desktop projects to web applications is still a bit off. Of course we’re all trying to work towards that goal, but the fact of the matter is that the integration Microsoft Office managed to achieve with the OS (also built by Microsoft) isn’t exactly a great comparison to web applications and integration; there are fundamental differences between the two that you can’t ignore.

As for the performance issue, of course that matters. Luckily, a large part of the performance issue can be handled by having the foresight to address matters of speed. Relying on the technology to ramp up to meet your requirements is a bit foolhardy and most likely the quickest way to ensure your startup fails: no one can use it.

As for interoperability, you can achieve such things today, but not without manually writing specialized (most likely server-side) code to handle the lion’s share of the legwork. Is there a better solution? Probably. Are there inherent obstacles? Certainly. Browsers, cross-domain restrictions, websites supporting it, etc.: all these things need to come together. The issue isn’t that we haven’t been trying; it’s that in most instances it’s extremely taxing and sometimes not worth it.

Tom Trenka is very into “agree to disagree” and letting everyone continue on their own path. Seems to me the only ones to ever do this (in any context) are those convinced of their superiority… As long as “we are the best, and our way is the ONLY way” continues, Joel’s second point will not come to pass. Welcome to Browser Wars II: Attack of the Toolkits.

Comment by Wade Harrell — September 24, 2007

Perhaps Joel has yet to personally glimpse into the cross browser CSS abyss.

It’s not that browsers violate CSS with abandon, or that they implement it only partially (they do), or even that the specs themselves allow a complete pixel-perfect description of a particular layout (they don’t, allowing for “interpretation”). It’s that CSS ignores a large number of common layout scenarios. There are simply certain things that you can’t do, no matter how much incredible hackery exists within your toolkit layer.

And since it’s obvious the Mozilla Foundation is uncomfortable with deep-level CSS enhancements and fixes, preferring to stick to additions and surface changes (inline-block, anyone?), not only is IE not going to improve on its own, but it’s not going to improve by being dragged kicking and screaming either. That is, without a cultural shift within Mozilla from code hackers to designers and CSS hackers. Which could happen. Who knows. Either way, Joel’s predictions will probably come to pass, although not to the degree he envisions. And at that point perhaps the dominant toolkits will possess enough leverage to drive CSS changes.

We believe that the “biggest percentage of developer mindshare” will be realized not by introducing a behemoth that precludes the already hard-earned skills of “X-toolkit” devs, but by providing an intuitive environment where X-toolkit can be used seamlessly alongside Y-toolkit for those who want that level of control, and XYZ-toolkit transparently applied where applicable (automatically) for those who don’t care which is used, but only that ABC-functionality is achieved. Good visual tools and a complete deployment configuration layer will be important in the end as well.

Let’s say you want to use the Grid from Extjs, and the Fisheye from Dijit. You’ll be able to simply drag the two widgets onto your screen, and the system will handle lazily pulling in the (minimum) required files, and it will package everything together when you click “save as” or “deploy” or something similar. And your new creation is available for others to find and pull into _their_ app (ad infinitum).

I do think that his application of the story is not as relevant as he thinks it is. Rarely are things that simple, especially considering how different things are today from the time he speaks of. Generally, I think fragmentation is the future of things, not consolidation.

One of the flaws in Joel’s analogy is the holes in its history. OS/2’s failure against Windows demonstrates that it is, in fact, possible to fail by being too slow for the hardware of the time in spite of feature superiority.

Hmm, how many Windows apps these days don’t look like Win32 apps? And look at the difference between Office 95 up to Office 4515. There has rarely been a consistent look. But we can give the Windows platform credit that copy and paste works on that platform. It’s still an issue on Linux.

And on *suddenly* everyone buying new hardware: that might have worked in the past, but aren’t computers fast enough these days for what most people do? More than 50% are still using IE6.

On second thought about the whole issue, it’d be pretty neat to see something client-side, Google Gears perhaps, that would facilitate a sort of persona as you move around the web. Of course, there’d have to be a “public” table or database on your client, and you’d have to allow it to be used on certain sites, but it’d be awfully great to visit a website and have it go, for example: “I see your MySpace login, would you like to have me blog these photos for you?” without ever really requiring the two systems to interact in the fashion we’re used to today, with manually coding everything.

I guess it’d be a bit like DDE for Windows. MySpace (example, again) would say: “I can do these things” and other sites would program themselves and allow that functionality and when you’re using those sites they could query the “servers” and their “functions” that they could use.

Comment by Joe — September 24, 2007

Joel is a great thinker, but he’s been out of touch with web applications for far too long. He is truly the Microsoft of application development. This sounds a lot like the former hipster 12 years on who shows up at the latest club, trying to stay “in”, but just looks like the 30 year old in a room of 18 year olds. Spolsky doesn’t have the experience or the perspective to make intelligent predictions about the future of web application development. His commentary is in fact detrimental.

1. The idea that a bigger SDK will win you “the space” is such utter nonsense that I’m surprised he even penned it. Note how “the internet” isn’t even *mentioned* in his article. He’s talking about internet enabled applications, and has done exactly zero analysis of how a massive interconnected network wrapping your software changes the ideas of that software, of its distribution, of its featureset, of its consistency, its resilience, of its users, of its cost, etc. This needs an article of its own, but I’ll sum up — massive toolkits don’t win you anything on the internet; more massive toolkits win you even less — and the reason isn’t that there aren’t enough cycles or slots or pipes yet. The reason is low barriers to entry and easy interoperation (not “easy if everyone has the same toolkit”; easy as in “out of the box” easy). Or, another way: the internet has proven that ideas rule, not frameworks.

2. This NewSDK will be so compelling that anyone using, let’s say, html/css/js will be impossibly hobbled and crushed, as they won’t easily interoperate with NewSDK. This comes from an ancient mindset: “software” = “business software (MSOFF)” + video games + development tools. Each of these “softwares” dominates a market, and competitors must comply or be damned.

Nytimes.com is a piece of software. It is software that gives you free access to news and some of the best writing in the world (pace O’Reilly), and lets you share with other people your opinions of that news and writing. So is Amazon.com. And my home page. “Software is Undefinable” is the lesson of internet software. “Good Software” is what person x uses to perform action y successfully. What toolkit is it, exactly, that will make clicking a graphic and immediately having the depicted product sent to me in 3 days a tedious and obsolete process? Is it difficult to go to pownce.com and type something and click a button and have it published to all my friends? Is it really a problem that I can’t drag a picture from flickr into my message? And if I could -- is that something compelling? Business software will not dominate internet software, or even a particular segment, and as such, all those old canards about “interoperability with the market leader” can be safely ignored.

3. Standardization makes better software. Really? Is the idea that, from Great Leap Forward Day 1, every new “important” release of software on the internet will use the same interface and same features and same process to solve the same problems as previous software? Does that sound right to you? Or like something so incredibly dumb that you’re surprised someone actually said it? Or look at it this way: how important have innovation and novelty been to the growth and success of the internet, and directly, to your continued enjoyment of the medium? When flickr added the ability to annotate images by selecting an area and just starting to type, was that an unfortunate requirement that would have been so much easier had there been The One Right Way That Everyone Must Use Because You Must Use A Standard Mindset If Software Is To Grow? Or another way: does everyone have to annotate images like flickr does in order to be taken seriously?

4. Quote: “If you’re a web app developer, and you don’t want to support the SDK everybody else is supporting, you’ll increasingly find that people won’t use your web app, because it doesn’t, you know, cut and paste and support address book synchronization and whatever weird new interop features we’ll want in 2010.”

– “Everybody Else”: who is that, exactly? Who is this cabal who is using the same SDK, and in fact so dominates the world that new developers must use the same SDK or fail? And how did they get rid of myspace, amazon, flickr, google, ebay, delicio.us, fark, youtube, cnn, boingboing, wikipedia, rememberthemilk, linkedin… Or is the idea that some essential software (let me guess: a word processor) is built in the new SDK, and then, magically, that company is unassailable by anyone not using the same SDK? Sounds like a wishlist for a monopolist, not a justifiable trend analysis.

1. prior to using my service, research competitors, discover that one or the other is using a “superior SDK”, and will therefore not even try mine, even if it is cheaper, or faster, or more easily accessible, or — gasp — has better features.
2. after using my service and being happy with the service, the user will discover that they can “cut and paste from flikr” and will therefore leave my service. Or: after using my service, the user finds that *the way* that my service cuts and pastes from flickr isn’t *standard*, and will therefore leave my service.

– “and whatever … we’ll want in 2010”: An extraordinary statement. On the one hand we have Spolsky FUD about competing with the SDK to rule all SDKs, and on the other, I suppose, Frodo, this annoying everyman who keeps coming up with ideas the SDK didn’t anticipate. Or is it: The Eye of the SDK will see all new ideas while they exist in the minds of inventors and absorb them into itself before they are created, growing ever more powerful. Which is an interesting argument: anticipatory function creation (feature creep) will improve the SDK.

Comment by jimbob — September 24, 2007

@JimBob
If you have been around long enough you’ve come to appreciate the fact that History has a way of repeating itself. To believe that just because this is a “new dawn”, History will not repeat itself is ignorant… ;)
I don’t agree with everything Joel says, e.g. the Copy Paste feature is really ridiculous, but to believe that the diversity of platforms/libraries we have today will still exist in 5 years is truly, amazingly optimistic… ;)
Consolidation…!!
It happens in every single SW sector once you take it past its infancy; look at CMS systems today as an example of that…
Sure, new CMS systems come out every year, but the number of big players is about 50% of what it was 5 years ago…
@everybody
I am notoriously recognized as speaking my truth, and thanx to that fact quite unpopular in some circles (especially the Flex ones ;) but I have a habit of speaking my mind whenever I can, especially when it’s something I feel I know something about. And the funny thing is that in five years’ time, when NONE of the pure JavaScript libraries exist anymore, I can point to this post and say, “I told you so!”
To believe that JavaScript libraries will prevail in competition with the “Hijax” Frameworks like GWT and (ours) Gaia Ajax Widgets is pure insanity. I mean, seriously, who wants to fiddle with JSON or XML serialization themselves?
JavaScript is just like x86 CISC Assembly programming, which is a great language to write C++ compilers in. JavaScript, though, doesn’t write C++ compilers; it writes “Hijax Frameworks”…
5 years from now, JavaScript will be entirely abstracted away, just like Assembly programming was 5 years after ANSI C came out, and used only by Framework developers… ;)
My guess is that GWT will have 90% of the Java camp, some other Hijax Ajax library for PHP will have 90% of those guys, and Gaia will have 90% of the .Net camp. Unless some other Hijax vendor comes around and turns the tables. Anyway, pure JavaScript libraries are a thing of infancy, not something that’ll last…
It’s inevitable, basically because it’s a hundred times easier to deliver what the customer wants, and in addition all the “Juniors” can be productive from day ONE…
The Time2Market is a hundred times faster and the maintainability a hundred times better; sure, you’ll have to sacrifice a couple of CPU cycles and a couple of bits on the wire in the end product, but then again, who cares? Hardware and bandwidth both follow Moore’s law… ;)

Joel hit the nail on the head by combining two laws of nature:
1. Moore’s law: nobody ever lost money betting that next year’s computers will run faster and have more memory.
2. Innovator’s dilemma: no existing player (e.g. Microsoft, Adobe) can ever believe that the next generation of tool users doesn’t just want a bigger, better version of their hammer (e.g. Silverlight, Flex)

“But Ajax apps can be inconsistent, and have a lot of trouble working together -- you can’t really cut and paste objects from one Ajax app to another, for example, so I’m not sure how you get a picture from Gmail to Flickr. Come on guys, Cut and Paste was invented 25 years ago.”

Actually, this makes me wonder if he knows what he’s talking about.
nate

Comment by Nate Grover — September 24, 2007

@ Thomas Hansen: http://en.wikipedia.org/wiki/Interchangeable_parts
That is the reason tightly bound client and server applications will be stuck in the mud and fail. As long as you continue to build server-side components targeting a specific display (technology, layout, functionality) you are stuck.
.
When your server is only concerned with generating/ingesting a data exchange format (JSON, XML, etc.), your display (technology, layout, functionality) is liberated. A Flex dev team should be able to use the same server interface as an Ajax team, an AIR team, etc. Forcing only one display type by generating your display code on the server is a dead end. A client-side application should be able to change layout and functionality an infinite number of times without a single change to the server-side application. The server is about moving around raw data. Raw data has no value in itself; it is the perspective with which you view it that gives it worth.
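A minimal sketch of that separation; the record shape and both “display technologies” below are invented for illustration. The server function only moves raw JSON, and either renderer can change freely without touching it:

```javascript
// The server's whole job: emit raw data in a neutral exchange format.
// (The record shape here is made up for the example.)
function serverResponse() {
  return JSON.stringify({
    items: [
      { id: 1, name: "Widget" },
      { id: 2, name: "Gadget" }
    ]
  });
}

// Two independent clients consuming the same interface. Swapping layouts,
// or adding a third client (Flex, AIR, ...), requires no server change.
function renderAsList(json) {
  return JSON.parse(json).items
    .map(function (i) { return "* " + i.name; })
    .join("\n");
}

function renderAsTable(json) {
  return JSON.parse(json).items
    .map(function (i) { return i.id + " | " + i.name; })
    .join("\n");
}
```

Both renderers consume the exact same payload, which is the point: the display layer, not the server, decides what the data looks like.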

Comment by Wade Harrell — September 25, 2007

@Wade
Interesting wikipedia article, though I don’t understand how that’s an argument against tightly coupled server/client-side Frameworks…?
If what I think you’re claiming is correct, you mean that the RichEdit control in the Windows API should have been “de-coupled” from the Windows API, in addition to the Button control, the TreeView control, and just about every other Windows API control, for the Windows API to become a success…?
Well, History definitely offers a lot of proof that you’re pretty wrong on this… ;)

@Wade
Ohh yeah…!
I just forgot: if you look at the server as a “database” (“moving raw data around”), you’re going to have some really funny things happening to your BUSINESS logic…!
Ask yourself whether you’d like to have the entire business logic running on the SERVER (in some strongly typed language) or on the CLIENT (in JavaScript).
I’ve written quite a few blogs about that problem in fact…
Hint; follow my signature… ;)

@Thomas: if you cannot distinguish between client-server applications and desktop PCs, then it seems pretty obvious you just want a flame war.

I will know better than to respond to anything you have to say in the future.

Comment by Wade Harrell — September 25, 2007

@Wade
Huuh…???
I am not after a flame war in any way…
All I want is to put a little bit of focus on the difference between “JavaScript inclusion files” and Ajax Frameworks…
I am, to be quite honest, quite a bit confused by the fact that Ajaxian writes 15 articles a week about every single “JavaScript inclusion library” in the world, but only twice a year about ours, which happens to be the second-largest Ajax library in the world for a platform that runs about 37% of all the web servers in the world (plus those running Mono on Linux, that is)
I am sorry if I hurt somebody’s feelings, but that’s my honest opinion…
BTW, the “definition” of a flame war is somebody who throws in opinions he or she often doesn’t even mean, or makes impossible-to-prove claims. I think my rhetoric does not qualify under either of those two criteria…

@Gavin
You’re entitled to have your opinion but realize first who you’re talking about…
Joel was a senior project manager at Microsoft before most of the Ajaxian readers could read; in addition, he’s been doing the startup dance with a LOT of employees, and he started programming back in the (almost) 70s…
Though I do agree that Joel sometimes comes out with very “brute force” statements, I’m also way too deep into one of his books at the moment to dismiss him as egotistical and arrogant… ;)
In fact, I’ve yet to see ANY trace of those two qualities in his writings. Especially compared to lots of the other “process/methodology” writers today…

Leo: OS/2 did not fail because it was too slow, but because Microsoft released Windows 95 and Win32 apps did not run on OS/2. In short, it failed because it was underfeatured.

I agree with Joel: desirable features always win over performance. This is only logical, as being able to do something is more important than being able to do it fast.

Comment by Joeri — September 26, 2007

@Joeri
Just as a curiosity, Joel even explains this problem in general terms…
He calls it the “Chicken and Egg” problem…
For those that haven’t read his books I’d SERIOUSLY recommend them…!
BRILLIANT guy…! :)