Parsing the difference between the Internet and the Web according to Alan Kay

Kay thinks the Internet was built better than the Web. Is he right?

This Q&A is part of a weekly series of posts highlighting common questions encountered by technophiles and answered by users at Stack Exchange, a free, community-powered network of 100+ Q&A sites.

What did digital pioneer Alan Kay mean by, “The Internet was done so well, but the Web, in comparison, is a joke. It was done by amateurs”?

When Kay speaks, programmers listen. But like anyone who puts forward an opinion, he opens himself up to being misinterpreted. That was the case last July in an interview with the long-running publication *Dr. Dobb's Journal*. At one point, one of the fathers of object-oriented programming chimed in:

The Internet was done so well that most people think of it as a natural resource like the Pacific Ocean, rather than something that was man-made. When was the last time a technology with a scale like that was so error-free? The Web, in comparison, is a joke. The Web was done by amateurs.

Stack Exchange user kalaracey can't have been the only programmer confused about the meaning of Kay's quote, but he's the one who asked about it. Several devs answered.

Read further

Kay actually elaborates on that very topic on the second page of the interview. It isn't the technical shortcomings of the protocol he's lamenting; it's the limited vision of the people who designed the Web browser. As he put it:

You want it to be a mini-operating system, and the people who did the browser mistook it as an application.

He gives some specific examples, like the Wikipedia page on a programming language being unable to execute any example programs in that language, and the lack of WYSIWYG editing, even though WYSIWYG was available in desktop applications long before the Web existed. Twenty-three years later, we're only just starting to work around the limitations imposed by the original Web browser design decisions.
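
To get a feel for how small the remaining gap is, here is a minimal sketch in plain browser JavaScript (the element IDs are made up for the example) of a page that lets the reader edit a code sample in place and run it, roughly the two abilities Kay says browsers should have had from the start:

```javascript
// Minimal sketch: an editable code sample the page itself can run.
// Assumes the page contains <pre id="sample" contenteditable="true">,
// <button id="run">, and <pre id="output"> -- hypothetical IDs, not from the article.
document.getElementById('run').addEventListener('click', () => {
  const source = document.getElementById('sample').textContent;
  let result;
  try {
    // Evaluate the reader's edited example. Fine for a demo; a real site would
    // sandbox this (an iframe, a Worker, or a server-side runner).
    result = Function('"use strict"; return (' + source + ');')();
  } catch (err) {
    result = err.message;
  }
  document.getElementById('output').textContent = String(result);
});
```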

Kay doesn’t get how messy lower-level protocols are

I read this as Kay being unfamiliar enough with the lower-level protocols to assume they're significantly cleaner than the higher-level Web. The "designed by professionals" era he's talking about still had major problems with security (spoofing is still too easy), reliability, and performance, which is why there's still new work being done tuning everything for high-speed or high-packet-loss links. Go back just a little further and hostnames were resolved by searching a text file that people had to distribute!

Both are complex, heterogeneous systems with significant backwards-compatibility challenges any time you want to fix a wart. It's easy to spot problems and hard to fix them, and as the array of failed competitors to both shows, it's surprisingly hard to design something equivalent without going through the same learning curve.

As a biologist might tell an intelligent design proponent, if you look at one and see genius design, you're not looking closely enough.

There’s some truth to the claim

In a sense he was right. The original (pre-spec) versions of HTML, HTTP, and URL were designed by amateurs (not standards people). And there are aspects of the respective designs and the subsequent (original) specs that are (to put it politely) not as good as they could have been. For example:

HTML did not separate structure/content from presentation, and it has required a series of revisions—and extra specs (CSS)—to remedy this.

HTTP 1.0 was very inefficient, requiring a fresh TCP connection for each "document" fetched.

The URL spec was actually an attempt to reverse-engineer a specification for something that was essentially ad hoc and inconsistent. There are still holes in the area of scheme definitions, and the syntax rules for URLs (e.g. what needs to be escaped where) are baroque; see the sketch below.
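
To see just how baroque, compare JavaScript's two standard encoding functions, which disagree about what must be escaped depending on whether you are encoding a whole URL or a single component of one:

```javascript
// Which characters need escaping depends on where in the URL they appear.
const query = 'a&b=c d/e?';

// Escapes anything that would be ambiguous inside a single query component:
console.log(encodeURIComponent(query)); // "a%26b%3Dc%20d%2Fe%3F"

// Leaves &, =, / and ? alone because they are legal delimiters in a full URL:
console.log(encodeURI('https://example.com/path?q=' + query));
// "https://example.com/path?q=a&b=c%20d/e?"
```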

And if there had been more "professional" standards people involved earlier on, many of these missteps might not have been made. (Of course, we will never know.)

However, the Web has succeeded magnificently despite these things. And all credit should go to the people who made it happen. Whether or not they were "amateurs" at the time, they are definitely not amateurs now.

102 Reader Comments

His statement that we accepted web browsers as applications when what we really wanted was "operating systems" is spot on. Most people still don't realize that, it seems to me. And truly, the development model could have been tightened up from the beginning. Those engineers and developers had carte blanche, of sorts, and no one cared enough to run with it.

Going off on that last point, I see a difference between a certain style of engineer who is content to cobble together tools to get the job done--and to leave those tools cobbled together. Another kind of engineer is always attempting to refine and distill his tools and to make newer tools of even higher quality. Of course, there is a bit of this in the first engineer too, but something else is missing to let the process carry itself away.

Sadly, none of the browser makers have been able to fully express that second engineer. Chrome came close, but they stopped. Where's my in-browser code editor, Google? Why'd you start so strong with Chrome and then just stop innovating on some meaningful levels? Yes, printing web pages works a lot better than at launch, but I still can't create a JavaScript/HTML project in the browser.

Of course, some people would disagree that this is what a web browser is about. I disagree with them.

Ok, so I'm probably showing my ass here, but I have to admit that I wasn't aware of a distinction between "the internet" and "the web". Color me curious though. Could someone either explain or point me in the direction of a good resource?

The original scope of web pages was simply a means of getting access to documents. It was not a means of "programming applications." It was originally a means of collaborating between "content-makers," not programmers. The collaborative nature of HTML was the focus, and all were invited in to help make it happen.

The problem with waiting for a standards committee to write specifications is exactly that: waiting. You have large companies involved now that see some advantage in pushing or delaying one spec or another, and as a result NOTHING of consensus gets decided. And presumably they have programmers working on their "favorite" implementation of HTML or, more appropriately, HTML5, the latest semi-failure. Consensus breeds mediocrity; it's not going to be the "best" at everything.

Yet there is nothing preventing a clique of programming "geniuses" from coming up with their own web-application/operating system/programming IDE that sits in a webpage. To think otherwise is bullshit.

BTW, this rant was written in an editor window on the Ars Technica website. It's not perfect, but it works well enough.

Ok, so I'm probably showing my ass here, but I have to admit that I wasn't aware of a distinction between "the internet" and "the web". Color me curious though. Could someone either explain or point me in the direction of a good resource?

The internet is a collection of computer networks communicating via the internet protocols, TCP/IP being the most common. The web uses this networking to send its data around. The web is characterized by its two main building blocks: HTTP (Hypertext Transfer Protocol), the protocol it speaks, and HTML (Hypertext Markup Language), the format of the documents it carries.
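
A small Node.js sketch makes the layering concrete: the web part is nothing more than text written over an ordinary TCP connection the internet provides (example.com is just a stand-in host):

```javascript
// The web riding on the internet: HTTP is just text sent over a TCP socket.
const net = require('net');

const socket = net.connect(80, 'example.com', () => {
  // A bare HTTP/1.0-style request; the server answers with headers plus HTML,
  // then closes the connection.
  socket.write('GET / HTTP/1.0\r\nHost: example.com\r\n\r\n');
});

socket.on('data', chunk => process.stdout.write(chunk));
socket.on('end', () => console.log('\n-- connection closed by server --'));
```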

However, the Web has succeeded magnificently despite these things. And all credit should go to the people who made it happen. Whether or not they were "amateurs" at the time, they are definitely not amateurs now.

One need only look at what people manage to do with JavaScript to see the triumph of duct-tape-and-spit solutions to poorly (or at least hastily) designed systems.

Ok, so I'm probably showing my ass here, but I have to admit that I wasn't aware of a distinction between "the internet" and "the web". Color me curious though. Could someone either explain or point me in the direction of a good resource?

The Internet is like a freeway. The WWW is like a car. Cars ride on the freeway, but they aren't the freeway.

Additionally, other types of traffic that aren't WWW can travel on the Internet. Things like email, DNS, ftp, etc.

Mr. Kay needs a bit more bandwidth, because I can surely say that at certain speeds the internet (TCP/IP) does start breaking...

Also I'm pretty sure Bill Gates thought Netscape was an immediate threat to his operating system.

Mostly because his original OS solution was meant to ride on Netware! The default install of Windows 3 installed Netware IPX/SPX modules. Networking in the stone age wasn't considered a part of the operating system, it was an add-on.

The threat was he was going to have to double up his programming staff to keep up with the other OS options on the market, including Netware. TCP/IP was still considered academic/military in a lot of IT offices. But it had scale that the other options didn't easily have. Anybody remember Banyan?

Ok, so I'm probably showing my ass here, but I have to admit that I wasn't aware of a distinction between "the internet" and "the web". Color me curious though. Could someone either explain or point me in the direction of a good resource?

Consider the Internet to be like an operating system doing transportation tasks. The Web (the HTTP protocol) rides on top of the Internet. Other applications are email, DNS (Domain Name System, it's what tells the web browser where to send its packets), and FTP (File Transfer Protocol, used to handle the give and take necessary to transfer files from one place to another).

As to resources, type a networking term into a search engine window in any web browser and press Enter. What you will get back is the result of another "Internet-riding" application. The responses from Wikipedia are most likely to be somewhat free of bias. Those from manufacturers and Internet-related companies will be less so, but still useful. The RFCs from www.ietf.org are the Bible of the Internet and, like any religious text, are practically unreadable by humans.

Have fun. You're not going to be tested! (A personal note: when I first got on TCP/IP, there wasn't anything called DNS. DNS is possibly the ONLY thing that made web browsing possible. Learning what DNS does will lead you both up and down the Internet rabbit hole.)
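
If you want to start down that rabbit hole, Node.js exposes the same name-to-address step a browser performs before it can send a single packet (example.com is just a placeholder name):

```javascript
// DNS in one call: turn a name humans remember into addresses routers can use.
const dns = require('dns').promises;

async function whereIs(hostname) {
  const addresses = await dns.resolve4(hostname); // ask the resolver for IPv4 records
  console.log(`${hostname} ->`, addresses);
}

whereIs('example.com'); // logs something like: example.com -> [ '93.184.216.34' ] (varies)
```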

Mr. Kay needs a bit more bandwidth, because I can surely say that at certain speeds the internet (TCP/IP) does start breaking...

Also I'm pretty sure Bill Gates thought Netscape was an immediate threat to his operating system.

Mostly because his original OS solution was meant to ride on Netware! The default install of Windows 3 installed Netware IPX/SPX modules. Networking in the stone age wasn't considered a part of the operating system, it was an add-on.

The threat was he was going to have to double up his programming staff to keep up with the other OS options on the market, including Netware. TCP/IP was still considered academic/military in a lot of IT offices. But it had scale that the other options didn't easily have. Anybody remember Banyan?

I think it's fair to say most people were caught off guard. Kind of like that old IBM story where Watson Sr. asked how many computers we could possibly sell, and the answer came back as six. Outside of BBS users, who could possibly ever have wanted to use the internet, really? Even after Mosaic launched, did people understand the impact of that right away? Everything needs a killer app; email and the web are one thing, but I think the true killer apps for the internet were Napster and, of course, Google.

Ok, so I'm probably showing my ass here, but I have to admit that I wasn't aware of a distinction between "the internet" and "the web". Color me curious though. Could someone either explain or point me in the direction of a good resource?

A network is the physical structure that allows computers to talk to each other. The computers use a protocol (the way in which they talk to each other), usually TCP (Transmission Control Protocol), though some use UDP (User Datagram Protocol). We use an addressing system called IP (Internet Protocol). Collectively, we call this the internet.

Using the internet we can use e-mail, FTP, Usenet, and HTTP (the protocol used for the World Wide Web, or web). The main language used is generally HTML (Hypertext Markup Language), but it can be JavaScript, ASP, ASP.NET, Java, or any number of other languages that allow data to be organized as it is received and transmitted.

There was very little design to the internet. The primary design goal was to minimize the function as much as possible so that a composite internetwork could be built on top of as wide a variety of networking equipment as easily as possible. The biggest success of the Internet has been DNS. Both of the main routing protocols, BGP and OSPF, have worked well.

The biggest failure of the Internet has been security. The Internet's design goal to be open is in essential conflict with a network that is designed for security. During the 1990s, when I attended the IETF, the security area floundered endlessly without achieving any consensus on the infrastructure needed to overlay on top of the open Internet and make the best of a bad situation. Not much seems to have happened since. The other major failure of the Internet was the failure to design in an adequate address capability to begin with.

I am not familiar with Alan Kay. But he sounds like the garden variety of ideological fool. The web browser established the major public value of the Internet. Before that it was primarily an infrastructure to support academic science. The browser arrived about the same time as the cost of bandwidth dropped to the point that the general public could afford Internet access.

Like any other real-world computer application, the browser has developed iteratively. First, it was an interface to obtain information. Soon, it was also a means to enter form-based data in the same pattern as the 1980s IBM 3270 data stream. Clearly that interface is still evolving along with computer technology. It may be that applications will generally migrate from the desktop to the server. If so, JavaScript, whether within a browser or within a runtime that serves more or less the function of a browser, may become the primary user API. The design of that API is still evolving and still fairly primitive compared with current state-of-the-art desktop windowed GUIs. The applications built on those APIs will not be one-size-fits-all. Nor will anybody get the applications right without several cycles of trying and discovering from users' experiences what works and what does not.

His statement that we accepted web browsers as applications when what we really wanted was "operating systems" is spot on. Most people still don't realize that, it seems to me. And truly, the development model could have been tightened up from the beginning. Those engineers and developers had carte blanche, of sorts, and no one cared enough to run with it.

Going off on that last point, I see a difference between a certain style of engineer who is content to cobble together tools to get the job done--and to leave those tools cobbled together. Another kind of engineer is always attempting to refine and distill his tools and to make newer tools of even higher quality. Of course, there is a bit of this in the first engineer too, but something else is missing to let the process carry itself away.

Sadly, none of the browser makers have been able to fully express that second engineer. Chrome came close, but they stopped. Where's my in-browser code editor, Google? Why'd you start so strong with Chrome and then just stop innovating on some meaningful levels? Yes, printing web pages works a lot better than at launch, but I still can't create a JavaScript/HTML project in the browser.

Of course, some people would disagree that this is what a web browser is about. I disagree with them.

That seems a perspective heavily reliant on the benefit of 30 years of hindsight. I think the *idea* that the browser might act as mini-os is something that has evolved with time. The useful functions of the web were originally built largely upon ideas and activities based in a non-digital world - as that was the only thing available for context.

Hypertext started out as a variant of the layout tools being used by publishers to generate printed documents (or in the case of Berners-Lee, technical documentation at CERN). It was grounded in a paper-centric world with what can charitably be described as serious display constraints and *CONSIDERABLY* less computing power to throw at design elements ... let alone to throw at trying to act as a mini OS. Browsers were created to browse hypertext.

The thinking started moving in the direction of the browser as OS-agnostic application host with the buzzword "middleware" [edit: h/t Java for moving thinking in this direction] ... which resulted in the whole IE shenanigans and subsequent browser wars of the mid-late 90s. But by then there was a whole Web 1.0 out there being used by early adopters ... who, obviously, were the entirety of the user base at the time. It's not a matter of agreeing or disagreeing - the nature of the browser is a product of the evolution of the browser.

I don't think the reductionist dichotomy of two engineering types you present does the development community proper justice. From Netscape Navigator to Chrome it has been a pretty exciting evolution to watch ... and IMO, nothing shy of brilliant.

Not behaving as a JavaScript/HTML IDE doesn't seem to be a shortcoming of the conceptual design of Chrome-as-OS (I prefer "Application Platform"). Being properly designed as an OS should imply the ability to support an ecosystem of competing IDEs, all of which run under Chrome, to suit a range of tastes and desires - not that Chrome provides the locked-in stock feature you want. I don't have much context to rate the Chrome API or anything, but with a quick search it sure looks like there's a pretty healthy Chrome dev-app ecosystem.

Maybe I'm missing something. Is there a reason you can't install a tool under Chrome that does what you want it to? Picked at random, this looks like a good place to get ideas for setting your browser up for development (and kind of makes me think I'll play with Chrome a bit this weekend).

Exactly. We all pine for the Golden Age of MSN and AOL as the bringers of universal light and wisdom.

To hop off of the sarcasm train for a second, I think that the biggest turning point for the Internet was when the first home cable/DSL systems made it a violation of contract to plug their pipe into more than one computer. Instead of picking up a $50 NAT router at K-Mart, you were supposed to pay another $70/mo contract for every computer plugged into the Webway, and Wifi was already made impossible before its invention. Today, every household still has one ethernet cable that is traded around to mom's computer, dad's laptop, and the PS3 whenever somebody wants to compose some packets.

Oh wait, we totally ignored that horrible restriction, bought the damn router, and used the service the way we wanted. The cable co didn't have the tools to deep-inspect what we were doing, and eventually realized that their original business plan was unworkable with the general public. But now we have Cellular networking with the tools to properly protect against abuses like using a 3.5" device plan on a 7" screen, and when we can't stop that cold, we can automatically bill such villains enough that it hurts to even think of doing so again.

We have no idea where the next tablet revolution is coming from, maybe it's micro-servers in the home. Nope, that's already been vetoed in your cable contract, and the tools to stop it are already in place and paid for. How many other possible tech destinies are being killed before they can even begin?

Makes sense to me; hardware-wise, the internet works really well. It scales well (IPv4 aside), it adopts additional technologies well (from phone to cable to fibre to LTE to whatever), it's reliable, and short of taking down every backbone in one hit, it's difficult to actually destroy. There are some security issues, but that exists with natural resources too (ocean acidification, in the example of the Pacific).

HTML is moronic. And all we're doing is building new code that works on top of HTML, and browsers don't let me edit anything. There's no easy way to change things and upgrade things; it's all standards agreed upon by everyone, and if someone decides to do things differently, it's a mess. Imagine if the internet worked by having to get a new browser every time one of the tier-one providers decided to change how they move data.

Prove it. Make something on the web that's better. Then get a bunch of people to use it or at least "Like it".

The point isn't to make something on the web that's better; it's to make something on the Internet that's better than the web. There actually is something arguably better than the web on the Internet, although it is an evolution of something developed for the web. It sort of pains me to say it, because it is still kind of garbage: HTTP REST web services. I know, it's called web services, so it is pretty obviously part of the web, but it has transcended the browser and web sites. Here you can use the lowest layer of the web to provide almost universally understood data and make relatively standardized remote procedure calls, keeping the valuable part (the data) in a format that is generally not too terrible (XML or JSON or BLOBs, as opposed to HTML), without the formatting or logic (no need for CSS or JavaScript; you can render the data however you want with a native application).

The Browser as an OS is a bad idea. We already have operating systems in the form of operating systems. We have far better tools with which to build and display functionality on them than we ever will inside a single application (a browser) built on top of a real operating system.

Basically, what we have is an idea, which evolved from the web, for uniform remote procedure calls and the way they should be structured (statelessly) to allow better scalability, with the current working implementation (HTTP REST) being fairly awful. Replace that with something better, and forget about the awful little web technologies that have layered garbage on top of garbage because they've been stuck with the outdated ideas that interface and function should be transmitted along with the data, that HTML is a good generalized data format, and that tooling needed to be built around those things, and the Internet will be much better off. It'll be a browserless future, with real, efficient, high-quality microapplications running on top of real, efficient operating systems natively leveraging the Internet more directly. Right now, I think a better future is being held back by its reliance on HTTP, and the continued obsession with the idea that transmitting interface and functionality to be rendered by a browser is somehow a generally good one.
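
The commenter's point can be made concrete in a few lines: a client pulls plain JSON from a REST-style endpoint and decides entirely for itself how to present it, with no HTML, CSS, or browser in the loop (the URL and field names below are made up for illustration):

```javascript
// A REST-style call: just data over HTTP, rendered however the client likes.
// https://api.example.com/orders/42 and the field names are made up for illustration.
async function showOrder(id) {
  const res = await fetch(`https://api.example.com/orders/${id}`, {
    headers: { Accept: 'application/json' },
  });
  if (!res.ok) throw new Error(`HTTP ${res.status}`);
  const order = await res.json();              // the valuable part: the data
  console.log(`${order.item} x${order.qty}`);  // presentation is entirely up to the client
}

showOrder(42);
```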

I listen to thought leaders who have built strong reputations in programming. When Bjarne Stroustrup, Herb Sutter (and many others) talk, I pause whatever I'm doing and listen.

No doubt Mr. Kay has a following, but I haven't paid any attention to him since the '90s.

OK, let's see:

Herb Sutter: brilliant guy who knows quite a lot about one programming language and can be considered one of the experts on memory models and concurrency in general.

Bjarne Stroustrup: tacked object-oriented features onto an existing language and then spent the next decades marketing it to everybody, increasing its popularity by basically accepting every feature anybody wanted.

Alan Kay: one of the fathers of OOP, graphical user interfaces, educational uses of computers, and other HCI stuff.

Yes, I can see why you'd pay much closer attention to what the first two say than the latter; grand ideas are much scarier than technical details about move semantics (not that I don't love the new GotW stuff from Herb, but that's just not the same league...).

A bit more on-topic: he certainly has a point; the internet has scaled greatly from its humble beginnings to now. Sure, we're running into some limitations (mostly with respect to IPv4 and routing), but if the internet had been planned the way the web was, we'd be in real trouble by now. We're pretty much throwing band-aid over band-aid and using things in ways they were never intended to be used (HTML being the worst offender; look at modern websites and then compare them to the original ideas that spawned HTML - not a very forward-looking or graceful transition there).

PS: To be fair, if the original creators of the internet failed in one regard, it's security; several problems we're still fighting today could have been avoided there. Yes, I know that in the situation back then security wasn't important, but if we give them credit for their forethought we also have to point out the parts where they failed.

His statement that we accepted web browsers as applications when what we really wanted was "operating systems" is spot on. ... Sadly, none of the browser makers have been able to fully express that second engineer. Chrome came close, but they stopped. ...

...That seems a perspective heavily reliant on the benefit of 30 years of hindsight. I think the *idea* that the browser might act as mini-os is something that has evolved with time. ... Maybe I'm missing something. Is there a reason you can't install a tool under Chrome that does what you want it to?

There are some cool extensions that I haven't heard about. In general, I can easily understand the difficulty of gaining perspective on early development. But really, in the past 5-7 years the "browser as an OS" idea has been out there big time. Google has certainly picked up on that. Yet, even still, I am simply not impressed with the progress of Chrome in the past 4 years. I get the feeling that Chrome Developer Tools were brought to parity with Firebug, with a couple of pet peeves addressed, and that's it.

As a developer, I have a good sense for when functionality sits still.

As a user with some kind of vision, it can be frustrating to see that environment sit still.

His statement that we accepted web browsers as applications when what we really wanted was "operating systems" is spot on. ...

...That seems a perspective heavily reliant on the benefit of 30 years of hindsight. I think the *idea* that the browser might act as mini-os is something that has evolved with time.

There are some cool extensions that I haven't heard about. In general, I can easily understand the difficulty of gaining perspective on early development. But really, in the past 5-7 years the "browser as an OS" idea has been out there big time. Google has certainly picked up on that. Yet, even still, I am simply not impressed with the progress of Chrome in the past 4 years. I get the feeling that Chrome Developer Tools were brought to parity with Firebug, with a couple of pet peeves addressed, and that's it.

As a developer, I have a good sense for when functionality sits still.

As a user with some kind of vision, it can be frustrating to see that environment sit still.

I guess my point is that you haven't really established that Chrome or any of the other modern browsers released in recent years are failing to deliver on a properly robust "Browser-as-OS" capability. It seems to be a case of dissatisfaction with the quality of applications that are available on the platform.

That's probably a product of the fact that the developer community is stretched pretty thin right now ... everybody and their pet goat seems to be pooping out a new language, platform, or browser with some snazzy new framework or API these days. There are only so many teams out there.

The Web was conceived as a distributed filesystem for hypertext documents. Clients can GET a document, PUT a document, POST to a document, or DELETE a document referenced by a URL describing the hostname of the remote server and the document's path on that server. The Web was really not much more sophisticated than FTP in its original scope.
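
In modern terms, that original vocabulary maps directly onto a handful of fetch calls; the document URL below is hypothetical, but the shape of the design is all there:

```javascript
// The original web vocabulary: four verbs applied to a document's URL.
const doc = 'https://example.org/notes/readme.html'; // hypothetical document URL

async function demo() {
  await fetch(doc);                                             // GET: read the document
  await fetch(doc, { method: 'PUT', body: '<p>new text</p>' }); // PUT: store/replace it
  await fetch(doc, { method: 'POST', body: 'a comment' });      // POST: submit data to it
  await fetch(doc, { method: 'DELETE' });                       // DELETE: remove it
}
demo();
```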

Two key elements of the Web hindered progress as it evolved from a store of documents to a store of applications: the Document Object Model (DOM) and the client-server paradigm of the HTTP protocol.

Modern web applications work by running JavaScript to manipulate the DOM. They are not so much applications as self-modifying documents. The DOM is a clumsy hierarchical object store which is transformed into a visual layout using CSS. JavaScript could never really evolve as a general-purpose programming language because of its tight coupling with the DOM.
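
"Self-modifying document" is a fair description of what even the simplest web app does; a minimal illustration (the element IDs are invented for the example):

```javascript
// A "web application" at its core: script rewriting the document it lives in.
// Assumes the page contains <ul id="items"> and <button id="add"> (hypothetical IDs).
document.getElementById('add').addEventListener('click', () => {
  const li = document.createElement('li');            // build a new DOM node...
  li.textContent = `Added at ${new Date().toLocaleTimeString()}`;
  document.getElementById('items').appendChild(li);   // ...and splice it into the document
});
```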

Furthermore, HTML incorporates some legacy application-type features which were deemed essential to hypertext documents in the pre-JavaScript era. Forms and file upload are two examples of legacy HTML features that don't fit nicely into modern web application paradigms.

Most native GUI applications revolve around some variation of the Model-View-Controller (MVC) pattern. Most modern web frameworks claim to be MVC-like, but they all lack the most important characteristic of a true MVC pattern: changes in the Model layer should trigger corresponding changes in the View layer.

Web applications can't (readily) implement the MVC pattern because there's no ubiquitous way for HTTP servers to notify HTTP clients that a model event has occurred which may affect their views. HTTP is a client-server protocol: servers listen for requests from clients, not the other way around.

Some bleeding-edge web applications will attempt to use HTML5 WebSockets (if the client supports them) to listen for messages pushed from the server. Otherwise, clients must poll the server periodically to check whether there have been any pertinent events since the last time they polled.
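
Both workarounds look roughly like this in browser JavaScript; the endpoint names are placeholders, and updateView stands in for whatever view code a real app would have:

```javascript
let lastSeen = 0;
function updateView(event) {              // stand-in for real view-updating code
  console.log('model changed:', event);
  if (typeof event.id === 'number') lastSeen = Math.max(lastSeen, event.id);
}

// Option 1: a WebSocket lets the server push model changes as they happen.
const ws = new WebSocket('wss://example.com/events'); // placeholder endpoint
ws.onmessage = e => updateView(JSON.parse(e.data));

// Option 2: without server push, the client has to keep asking.
setInterval(async () => {
  const res = await fetch('/events?since=' + lastSeen); // placeholder polling URL
  (await res.json()).forEach(updateView);
}, 5000); // poll every five seconds, whether or not anything has changed
```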

The Web wasn't really designed to do what it does today. It works pretty well as a distributed filesystem in the client-server paradigm (as compared to peer-to-peer distributed filesystems like BitTorrent or 9P), but its evolution into an application platform was somewhat unforeseen and rather clumsy.

Kay is absolutely right, of course. And the reason isn't very mysterious: the web was originally designed by an engineer, and the internet was designed by computer scientists.

This is not to disparage engineers, but engineers often style themselves programmers, when they don't understand the deeper and/or longer-term theoretical issues that computer scientists worry about. The original web browser was designed as a quick and dirty project for use by a group of engineers and scientists, with no thought given to larger concerns, and that origin has haunted the abominable architecture of the web ever since.

On the other hand, the theoretical work put in by the designers of the internet to make it ad hoc, dynamically routing, self-healing, and robust, as well as nicely factored out in a way that made it very general and independent of particular applications, has paid off in a remarkable way. All this was unneeded at the time, but because they thought like computer scientists, it has proven remarkably scalable and robust.

The internet will live a long time, but the web will have a much shorter lifetime. In our lifetimes the web will be gone or vestigial, but the internet will remain.

Once again, this is not a criticism of engineers, just a reminder that an engineer is no more qualified to design a world-wide software infrastructure than a programmer is qualified to build a bridge.

The point isn't to make something on the web that's better; it's to make something on the Internet that's better than the web. There actually is something arguably better than the web on the Internet ... HTTP REST web services. ...

...The Browser as an OS is a bad idea. ... Right now, I think a better future is being held back by its reliance on HTTP, and the continued obsession with the idea that transmitting interface and functionality to be rendered by a browser is somehow a generally good one.

Applications are always going to have an interface and data/functionality (for a while, anyway). As the interface is always evolving because developers are always working on it, it is going to have to be transferred. Good applications usually enforce some separation between "functionality" and "interface". So we have layered applications. The GUI, for the web, is rendered through HTML/CSS and is easily manipulable via JavaScript. I like this setup and I find it the most flexible GUI I can work with. I far prefer it to "WinForms". And nothing I've seen commonly from any other OS looks like it is any better. I've played with WPF, but didn't want to waste time learning it in place of HTML/CSS/JS. So, personally, I find the web stack pretty good for interface.

As for what you are saying about web services, I am in total agreement. The web application that I am currently writing is backed by WCF and is just a collection of endpoints. Currently, index.html is an actual HTML file that loads up JavaScript and CSS, but I have imagined just generating it as well.

As for how the interface is built, I've actually created a tiny DSL that is parsed by JavaScript and describes a structured document of JavaScript/jQuery form widgets. This way, the DSL itself can be generated and customized by the user either in the browser (CodeMirror) or through drag-and-drop WYSIWYG based on domain objects.
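
The commenter doesn't share the DSL itself, but the general idea, a declarative description that script turns into live form widgets, might look something like this entirely hypothetical sketch (field names and structure invented for illustration):

```javascript
// Entirely hypothetical sketch of such a widget DSL: a declarative description
// of a form that script renders into live controls. Field names are invented.
const formSpec = [
  { widget: 'text',   name: 'customer', label: 'Customer name' },
  { widget: 'select', name: 'priority', label: 'Priority', options: ['low', 'high'] },
];

function render(spec, parent) {
  spec.forEach(field => {
    const label = document.createElement('label');
    label.textContent = field.label + ' ';
    const input = field.widget === 'select'
      ? document.createElement('select')
      : document.createElement('input');
    if (field.widget === 'select') {
      field.options.forEach(opt => input.add(new Option(opt)));
    }
    input.name = field.name;
    label.appendChild(input);
    parent.appendChild(label);
  });
}

render(formSpec, document.body);
```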

I really don't like the "old" model of a website being a collection of documents whose routing is managed by a server application on top of a file system. I prefer just thinking in terms of a program. Then I get to think about the layers. Most of the layers are going to exist as abstractions between code and a database of some sort.

So, like many applications, there is going to be a user interface and a cluster of backend services that either operate independently or in relation to the user interface. I'm content to write a JavaScript application that takes care of the user interface while using C# (or whatever) to code interface-agnostic domain logic that runs independently from the client machine.

When I think of the browser as an OS, I guess I don't really think of the OS in the standard way. We tend to think of Linux, Windows, OS X, etc., when we think of OSes. I tend to think of the web as an OS, as a system that stitches together systems. So, for me, when I think of the browser as an OS, I think of a browser that is really intelligently embedded in this "matrix" of programs such that it can represent it, since it is the "browser".

But now "browser" is starting to sound a lot like "key maker", so maybe I'm just putting too much on the browser. But, on the other hand, as our understanding of programs evolves, I think our IDEs will continue, too. I think there's a lot more that could be "integrated" into the development environment.

Honestly I'm not sure exactly what Kay is talking about here. Superficially, I would have interpreted what he was saying in the context of hypertext-as-language / browser-as-OS interpreter. Along the lines of "Wow. HTML/CSS/Javascript sure are kind of jumbled .... and none of these *@#&! browsers have ever worked exactly the same. Good lord." Which, IMO, is a combined fact of evolution and corporate assholedom more than a case of having been engineered by amateurs.

But, based on the first answer (Karl Bielefeldt), it sounds like Kay is conflating the choices Wikipedia has made not to take advantage of browser-based capabilities, which have been implemented for many years in other web implementations, with an inability for browsers to accomplish the features he imagined. That doesn't really even make sense.

I dunno. This seems like a throw-away comment that doesn't really lead much of anywhere productive.

So. The highway is more solidly engineered than the fleet of automobiles that drive on the highway? OK. Now what?

He's absolutely right, of course. And the reason isn't very mysterious: the web was originally designed by an engineer, and the internet was designed by computer scientists.

The web was designed as a way to share scientific papers. These papers, once written, would never change, so no need to keep a connection. Also, they are never updated, so no need to worry about that. And they didn't have animation or sound, so that wasn't needed either. For what it was designed for, it does a very good job. The problem is that people started to tack things onto it without considering for a moment that it might be best just to chuck it and come up with new protocols to handle the things they wanted to do. The web is a flimsy house of cards and it is surprising that it keeps working at all.

This is not to disparage engineers, but engineers often style themselves programmers, when they don't understand the deeper and/or longer-term theoretical issues that computer scientists worry about. The original web browser was designed as a quick and dirty project for use by a group of engineers and scientists, with no thought given to larger concerns, and that origin has haunted the abominable architecture of the web ever since.

HTML was beautifully designed to be device independent. Not printer-independent like LaTeX, nor screen-independent (like modern web pages are, unless your screen is not the same size as the designers'), and not only reflowable: Tim Berners-Lee didn't even assume you could fit both images and text on the screen simultaneously. In fact, the syntax for hyperlinks was so general that it doesn't even need separate elements for images, videos, music, and 3D models. Then Mosaic came and added <icon>, nah, <img>, and implemented image rendering in-browser, setting the precedent for the current audio/video mess of Flash and the only recently implemented <audio controls> and <video controls>. Although, to be fair, that's also because some operating systems refuse to ship VLC (or even just an AAC decoder).

HTML was designed to be a terrific document format. And it is, but popular browsers made short-sighted modifications to ship shiny features faster, resulting in a mess. And then popular web sites came adding fancy features even faster than browsers, without caring for dull things like blind people. In fact it's so bad that browsers don't even implement some of the stuff that has been strongly recommended since HTML 2.0 (arguably the first HTML standard). For example, Links seems to be the only browser around that implements general purpose <link>s properly.

Quote:

The internet will live a long time, but the web will have a much shorter lifetime. In our lifetimes the web will be gone or vestigial, but the internet will remain.

We will not go back from document fetching to manually emailing documents. No more than email ever went away. Nor can we expect hypertext to go away in our lifetimes.

P.S. The first web browser was also a WYSIWYG web page editor. The web was intended to be a wiki. Hence HTTP PUT.

He's absolutely right, of course. And the reason isn't very mysterious: the web was originally designed by an engineer, and the internet was designed by computer scientists.

The web was designed as a way to share scientific papers. These papers, once written, would never change, so no need to keep a connection. Also, they are never updated, so no need to worry about that. And they didn't have animation or sound, so that wasn't needed either. For what it was designed for, it does a very good job. The problem is that people started to tack things onto it without considering for a moment that it might be best just to chuck it and come up with new protocols to handle the things they wanted to do. The web is a flimsy house of cards and it is surprising that it keeps working at all.

Agreed 100%... I did MS XAML long before I ventured into HTML... and I can honestly say, as complex as XAML is, HTML is even more confusing, unless of course I want to write an academic paper.