UPDATE: Some folks think that saying "JavaScript is Assembly Language for the Web" is a totally insane statement. So, I asked a few JavaScript gurus like Brendan Eich (the inventor of JavaScript), Douglas Crockford (the inventor of JSON), and Mike Shaver (Technical VP at Mozilla). Their comments are over in this follow-up blog post.

I was talking to Erik Meijer yesterday and he said:

JavaScript is an assembly language. The JavaScript + HTML generated is like a .NET assembly. The browser can execute it, but no human should really care what’s there. - Erik Meijer

This discussion started because I was playing with Google+ and, as with most websites that I'm impressed with, I immediately did a View Source to see what was underneath. I was surprised. I saw this:

Let's just say that this went on for about 1300 lines. It is tight and about 90k. This is just the first half. It's mostly minified JavaScript. The middle part of the page is all spans and divs and generated class ids like this:
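To make concrete what minified output looks like next to the source a human would write, here's an invented before-and-after (not Google's actual code, just an illustration of what a typical minifier does):

```javascript
// Readable source, as a developer would write it.
function formatGreeting(userName) {
  var trimmed = userName.trim();
  return "Hello, " + trimmed + "!";
}

// The same function after a typical minifier pass: whitespace stripped,
// identifiers shortened to single letters. Behavior is identical.
function f(a){var b=a.trim();return"Hello, "+b+"!"}
```

Multiply the second form by 1300 lines and you get the wall of code I saw.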

It works, and it works great. Many of Google's best properties have GWT behind them. Would you be more impressed if you did a View Source and found that it was not only pretty on the outside but also inside?

This seems a little ironic, because it was just a few years ago that ASP.NET developers were railing against ViewState. "It's so heavy" really means "I don't understand what it does." ViewState was (and is) a powerful enabler for a development methodology that gets folks developing on the web faster than before. In that respect it's not unlike other toolkits such as Google Web Toolkit (GWT); GWT isn't completely unlike Web Forms in its philosophy. From the GWT website:

Google Web Toolkit (GWT) is a development toolkit for building and optimizing complex browser-based applications. Its goal is to enable productive development of high-performance web applications without the developer having to be an expert in browser quirks, XMLHttpRequest, and JavaScript.

That seems like a very admirable philosophy, no? You could even say (with apologies and tongue in cheek):

"ASP.NET WebForms" is a development toolkit for building and optimizing complex browser-based applications. Its goal is to enable productive development of high-performance web applications without the developer having to be an expert in browser quirks, XMLHttpRequest, and JavaScript.

The intent of this post isn't to shine a light on WebForms or be a WebForms apologist. It's great for certain kinds of apps, just as GWT is great for certain types of apps. What I want to focus on is that working with server-side toolkits could be argued to go against the alternate philosophy that the real joy of developing on the new web comes from clean jQuery JavaScript and clean, clear markup à la Razor or HAML. It all comes down to what level of abstraction you choose to play at.

Semantic markup will still be buried in there, and things like http://schema.org are still very important; just don't expect the source of your favorite website to read like a well-indented haiku anymore.

To be clear, minification and compression are orthogonal optimizations. I'm talking about simply not caring if the markup and script emitted to the client are pretty. If you don't care about the markup sent to the browser, only the result, how might this free us to develop in new ways that aren't confined to slinging markup and JS? Ultimately, if it works great, who cares?

My question to you, Dear Reader, is why do you care what View Source looks like? Are HTML5 and JavaScript the new assembly language for the Web?

UPDATE for clarity:

The point is, of course, that no analogy is perfect. Of course JavaScript as a language doesn't look or act like ASM. But as an analogy, it holds up.

JavaScript is ubiquitous.

It's fast and getting faster.

JavaScript is as low-level as a web programming language gets.

You can craft it manually or you can target it by compiling from another language.

If the tools - as a developer OR a designer - give you the control and the results you want, what do you care? I propose that neither Rails, nor ASP.NET, nor GWT is 100% there. Each has its issues, but I think the future of the web is a diminished focus on clean markup and instead a focus on compelling user experiences combined with languages and tools that make the developer's work enjoyable and productive.

What do you think, Dear Reader...Do you want your HTML and JavaScript abstracted away more? Or less?

UPDATE: I want to say this again to make sure folks really understand. There are two separate issues here. There's minification and general obfuscation of source, sure. But that's just the first. The real issue is JavaScript as a target language for other languages. GWT is a framework for writing Web Applications in *JAVA* where the resulting bytecode is *JAVASCRIPT.* GWT chooses a high-level designed language (Java) over an organically grown one (HTML+JS) and treats the whole browser as a VM. The question is: do we write assembly language, or something higher level? Also, I realize now that Google+ was written with Closure, but the point remains valid.
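As a toy illustration of "the browser as a VM" (this is invented output, not what GWT actually emits), a compiler might lower a small Java-style class to JavaScript like so:

```javascript
// Hypothetical input, shown as a comment:
//   class Counter { private int n; int increment() { return ++n; } }
//
// Plausible compiler output: a constructor function with mangled names.
// The JS engine plays the role the JVM would normally play.
function C_0() { this.n_0 = 0; }
C_0.prototype.m_inc = function () { return ++this.n_0; };

var counter = new C_0();
counter.m_inc(); // 1
counter.m_inc(); // 2
```

You debug, refactor, and type-check the Java; the JavaScript is just the artifact that ships.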

About Scott

Scott Hanselman is a former professor, former Chief Architect in finance, now speaker, consultant, father, diabetic, and Microsoft employee. He is a failed stand-up comic, a cornrower, and a book author.

I suppose I don't care about View Source anymore. If I'm not mistaken, isn't the whole point of the single-line HTML/JavaScript to save bandwidth and give a better user experience? If you can save 5 KB on each request and you get 1,000,000 hits, you'll save nearly 5 GB of bandwidth. That's not chump change.

And with the frameworks that are available today, the days of using View Source to learn how to do something on the web are quickly fading away.

wooster11

Wednesday, 06 July 2011 23:26:02 UTC
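wooster11's arithmetic holds up; using binary units it comes out just under 5 GB (with decimal units it's exactly 5 GB):

```javascript
// 5 KB saved on each of a million requests:
var savedBytes = 5 * 1024 * 1e6;                  // 5,120,000,000 bytes
var savedGB = savedBytes / (1024 * 1024 * 1024);  // ≈ 4.77 GB
```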

I see that as an optimization that makes sense on big, highly engineered web applications, but is a poor tradeoff for websites. There's a big difference between an application like Facebook / Google+ / Twitter (especially when you look at all the work they're pushing to the client) and your average website. If you give up semantic HTML, you lose a lot of flexibility in development and design, and I don't think that tradeoff makes sense for the huge majority of websites.

Scott "Asbjørn" Hanselman - I think if you can express the content as a document, sticking to document semantics gives you a lot of benefits, because that's what HTML / CSS / JavaScript were designed around. It's pretty hard to style HTML that was generated by angry machines who care little for us humans, unobtrusive JavaScript is hard to adapt to machine-generated code, search engines have a harder time with it, etc.

So I guess if those things - modifying style, flexible JavaScript, searching and working with content - are important, it's a document. If those things are unimportant, it's probably an application. Twitter/Facebook/Google+ show a rapidly moving stream of information in an HTML shell that's really just a client for webservices accessed over AJAX and is indexed via external services, so not a document.

I can see what Jon is saying with the Applications != Sites statement. The delineation for me is a function of how important design is to the final product.

WebForms caused us great joy circa 2002 by giving us state and cleaning up a mess of spaghetti ASP. WebForms caused us pain circa 2006 when the Web started accelerating again and all of the cool kids began using CSS to make dropdown menus and start doing fun things with Prototype, and we wanted to start incorporating these techniques in our Web sites.

These Ruby & Firefox hipsters (bah! get a job!) took a step back to simplicity while we were doing voodoo sacrifices to page event lifecycle charts and chain smoking black box controls that spewed out markup in tables and mangled ID names and built up control trees that got confused by dynamically added elements. Not to mention our huge table-based layouts that designers shot out of Photoshop export that were hard to maintain because there was no equivalent design tool that worked well enough to maintain them.

So then came along CSS adapters and updated markup and ID naming options and control customization, but at some point we sat back and said "hey wait a minute--all of this nonsense just to get and send a string! WTF are we doing?!" It was suddenly easier to get fast results for many types of Web sites if we just typed the stupid HTML that we wanted ourselves. So we became a little (over) disillusioned with WebForms.

Now we're at the "plateau of productivity" stage of the Gartner hype cycle in regards to this type of technology. Web sites are becoming more complex than ever, so maybe a state-based Web model is beginning to make sense again, especially coupled with the app-like APIs coming with the HTML5 bandwagon. The browser is becoming a runtime.

WebForms was a good idea and continues to work well for many people, but perhaps the concept was just a smidge ahead of its time. After all, we wouldn't have dreamt of using this much JavaScript in 2001--if MS had introduced a GWT equivalent then our eyes and processors would have melted. This is the second round of the stateful paradigm--and the second attempt at any paradigm usually does end up working a bit better, hindsight being 20/20 and all.

Hi Scott, this is an interesting discussion and I also share the point of view of Jon Galloway.

As a side note, do you know about Lift (http://www.liftweb.net)? There is also a demo site, http://demo.liftweb.net/, which shows many of the neat things you can do with it and how simple and clean they are. Honestly, I have been using that framework for a while, and every time I need to go back to ASP.NET or even Rails, I find myself less and less willing to put up with all the work it requires.

Couldn't agree more. We're about to ship a mobile incarnation of our browser UI written 100% in ScriptSharp. We ended up rolling our own template compiler that looks a lot like jQuery templates, but targets ScriptSharp at build time instead of generating JS at runtime like jQuery templates do. Combined with a hand-rolled T4 WCF client generator that also spits out ScriptSharp, we have 100% compile-time type checking of *everything*, plus the ability to refactor, run FxCop, and "Find All References" throughout our browser app, but it all compiles down to JS at the end of the day. Nobody on our team wants to touch raw JS ever again. Nikhil's been a busy guy the last few months, and the results, when combined with some internal tooling, are amazing.

Also, bear in mind that the above GWT produced HTML is horrible for SEO. The ratio of HTML to content is awful.

Alistair

Thursday, 07 July 2011 00:16:25 UTC

Kinda. I actually like to know what's happening underneath. The fact is that when your site starts getting complex and you want or need to do that specific, unanticipated thing, you'll end up having to get your hands dirty, and then, my friend, the cleaner it is the better.

Alistair - Agreed, minification is different from not caring about the DOM. However, I suspect that GWT spits out different HTML for robots, given this IS Google. I know that it gives different HTML for IE vs. Chrome.

I think you're making an assumption that the minified output is what exists on the server side before it's sent across the wire. Your entire point is moot if the output is minified only when it's written to the response stream.

sliderhouserules

Thursday, 07 July 2011 00:24:27 UTC

sliderhouse - Perhaps my post was unclear. I'll look for ways to make it clearer. I'm talking about development techniques that don't involve deep HTML and JS work. GWT hides a LOT. Yes, the result is minified, but even if it wasn't, it still isn't classic tidy source. It's machine code.

The problem, I think, is that any abstraction over HTML/JS is probably not as flexible, and in the case of WebForms you so often need to do more than what the abstraction easily affords that you spend more time fighting it than benefiting from it. Knowing and embracing HTML/JS is simpler in those cases. Why would you use an abstraction that only makes things harder? If that isn't the case for your situation, more power to you -- but it helps to know both so you can tell the difference and make the right decision.

Machine-coded HTML (or whatever you want to call it) is fine if you never have to work with said source code. It's like the .designer files of web/win forms. Who cares what the designer tool is producing, as long as you can get at the controls through properties or some widget to set values and attach events? It's when you have to crack open the HTML and work with it that it falls down. The same problem exists with XAML. It's a bugger to edit sometimes, but the designers don't always expose everything in a way that's conducive to what you want (or you have to dig deep into the designer to find something you know is just a simple attribute).

I agree with the statement that the browser is becoming the runtime shell, in that applications and functional sites should do everything they can to minimize the content going down the pipe. Frankly, if the entire page could be gzipped and unzipped on the client on the fly, fast enough to not incur a performance penalty for the user, that would be ideal, and who cares what the layout of the markup looks like to the human eye as long as it works. From a maintenance perspective, though, the markup needs to be human-readable so I can jump in anywhere and not have to spend brain cells trying to decipher a complex tag hierarchy.

I hope we can find a way, as we move to more integrated client-and-server stacks, to continue to let people look "under the hood." I've learned a lot from View Source.

Also recall the web tenet that the end-user's wishes should override the designer's, one motivation for semantic markup: a user should be able to say things like "I want all lists to appear in reverse order." That's why we want to write ul and li tags and leave the rest up to the browser, instead of rendering a bitmap. Of course there has always been a tension here, with designers winning over time (minus some exceptions, like the Readability bookmarklet).

Complexity seems like a natural enemy of this user-first viewpoint, because even if the markup makes it possible, the more complex a site is, the less likely any user will be able to make meaningful changes to it.

Re: ViewState, I think it's a great example of how a technology can get a bad reputation that it can't shake. Nothing really wrong with the idea, but a couple of design decisions ended up causing too many headaches to ignore. Java in the 90s falls into this category too.

PS, I wasn't able to post with name/email/url, had to create an OpenID.

Scott - Yeah good point. I would be interested to see the HTML produced for robots (and for people on screen readers for that matter).

Still, I don't like sending out those overweight HTML pages even to regular users, although once it is minified and gzipped it probably makes little difference.

Alistair

Thursday, 07 July 2011 00:44:04 UTC

I suppose Google has the engineers to make sense of the spaghetti when they need to add features or fix bugs. For most other development shops, developer productivity trumps hardware and bandwidth costs every time. Clean, readable markup and code pay big dividends on a small development team when every (human) cycle is precious.

To put it another way, the benefits of HTML and JavaScript compression and minification only kick in after you reach massive scale. For most of us, it's premature optimization.

I also have an ideological attachment to view source. The fact that anyone and everyone can see the source code makes HTML and JavaScript the easiest programming environment to learn -- examples are everywhere! It also means that anyone and everyone can verify that the code does what it's meant to. Compiled binaries and obfuscated scripts don't provide these added benefits.

One point of note is that Google+ is not coded in GWT, but uses the Closure compiler and JS library.

Still, your larger point that web programming is moving away from a scripting language towards something compiled is hard to argue, and I think is unavoidable. With increasing complexity there is also increasing opportunity to optimize, but doing so makes further development hard - necessitating the use of a compiler or minimizer. Additionally, if the network is one of the key bottlenecks in delivering your content, optimizing for size will be a natural reaction.

Scott, I totally agree the browser is a runtime, and tools like GWT, Script#, etc. are all about allowing apps to be authored in a manner optimized for development and deployed in a manner optimized for runtime. However, it doesn't have to come at the cost of sacrificing the patterns around semantic markup that result in good practices for bringing together content, behavior, and look and feel.

Matt Davis - Sounds like you have a very cool system in the works using script# - would love to hear from you and chat about things some more. Mind contacting me over at nikhilk.net AT gmail.com?

I greatly prefer pretty markup. It's easier to learn from it that way. View Source and FireBug are my primary methods of learning how to do new, nifty stuff with web pages. If all the markup is fuglified, err.. minified, it's exceedingly difficult to learn from it.

I think the only real practical value of readable View Source is in debugging, especially in IE.

ben

Thursday, 07 July 2011 01:00:54 UTC

They probably have a published version, uncompressed, available on the dev-servers.

And Firebug makes the markup look perfectly readable.

It's probably safe to say that this is the future for the very large, corporate sites. And with tools like Firebug, we needn't worry about code being indented properly on corporate sites. They can just publish the compressed version and should we want to take a look, Firebug will make it readable for us.

I think that private websites and such have no real reason to go to these lengths to lower bandwidth use and load times. It'd be sad if even the smallest of sites gave up on semantics like this. But for those huge websites with millions of visitors, it's simply a must.

Koen De Groote

Thursday, 07 July 2011 01:06:00 UTC

I think nice markup and tight CSS is really only required for development - going live the code should be minified. Some Rails View Engines do this by default if they see you're running in Production mode - HAML is one of them. It will squeeze out "reasonable" whitespace.

Rails goes even further and if you have a :cache => true directive on your CSS or javascript files, it will lump them completely into one file. I don't know if they get minified (maybe).

The idea here is that production is a whole different animal than development. In development you WANT your HTML properly formatted so you can see your way around. No doubt the same is true for JavaScript and CSS.

As for WebForms - I seriously had to check the date on this post :).

Rob Conery

Thursday, 07 July 2011 01:23:39 UTC

I'm of the mind that the source code that arrives at the browser is not the same stuff we wrote. Minification and any other number of treatments (like Strangeloop's Site Accelerator) turn your source code into the smallest, fastest code it can be... a lot like compilation, really.

Assuming the JavaScript is stored in a static file, and that HTTP compression is turned on for static files, minification makes little difference due to the "entropy" of the JavaScript file.

Put another way: let's assume a 100k non-minified JavaScript file becomes 10k after HTTP compression. Theoretically, an 80k _minified_ version of the same JavaScript file will _also_ be about 10k after HTTP compression. So minification is not really gaining you anything.

I guess the moral of the story is: use the best tool for the job. In this case, that would be HTTP compression, not minification.

Mike

Thursday, 07 July 2011 01:35:52 UTC

I care about semantics and clean View Source, but I know that I shouldn't. It doesn't make sense.

We are writing code for machines, not people. If software can allow us to write clean code in a single language and translate it to server and client code, why fight it? Unfortunately, we are not there yet. All we have is a ton of unnecessary code being sent to the client just in case (WebForms), or useful but not cached, reusable code (GWT).

Remember, you're looking at the production side of this code. I'd bet that every one of those sites uses JavaScript and CSS minification. I'm not sure if there are any HTML minifiers out there, but it wouldn't surprise me if they exist and are being used on those sites. I think we're at the point where the final product doesn't need to be nice, neat, and clean, but it does need to be tight and easy to download. I think the minification process is becoming a sort of "compiling" for the web, similar to compiling your release binaries instead of leaving in the debug information.

But as far as that layer of abstraction offered by WebForms and GWT goes, I think part of it has moved on to the JavaScript layer with libraries such as jQuery, Prototype, MooTools, Backbone.js, etc. Obviously they aren't functionally equivalent, but a lot of the same functionality can be gained from those JavaScript libraries and is as easy as or easier to code. Not to mention that you can take those JavaScript/HTML/CSS client-side skills and apply them to any technology you happen to be using on the server side. The other big reason those sites are going with client-side code is performance and scalability; for example, client-side validation is a lot faster than doing a postback in Web Forms, and if you have enough traffic you can potentially knock your server out with the postback approach. Anything you can get running on the client side takes that much more stress off of your servers. If you're working on a high-traffic site where performance is an issue, this is the way to go right now.

I don't think there's a right answer, but your comparison to assembly language is probably apt. I'll take it a step up.

There was a time a programmer with no knowledge of C wasn't going to get anywhere. Then we created C++, VB, C#, Ruby, and a plethora of other languages that serve to put more layers of indirection between you and the assembly. These layers of indirection introduce performance costs, and occasionally you lose the ability to make a really killer optimization. But most applications have tighter schedules than performance requirements so the higher-level the language the better. When you need something to run fast and don't care about maintenance, C's still the shining star.

You can use HTML/CSS/JS in the same way. Google+ and Facebook are high-end web applications. It's more important that they deliver a cutting-edge experience and as lean a download as possible than that they remain readable for their developers. Using HTML this way is choosing C or assembly. What about your blog? It was important for the blog developers and your theme developer to ensure end users could understand and maintain the code. So it looks more or less readable, and the code isn't as lean as it could be. Using HTML this way is choosing C# or Ruby.

I don't think computers are the sole intended audience of all websites, but the developer is free to choose.

(I typed out a comment, then remembered to log in with my Open ID, and I was disappointed to find my comment had been lost. Is there a way to address this? Here's a shorter version of the post.)

Application developers' choice of language isn't coincidental. Assembly and C are close to the metal. The programs are harder to understand and harder to write, but the developers are able to squeeze every ounce of performance out of the machine. Ruby and C# have many layers between the developer and the metal. These applications pay a performance penalty and occasionally aren't able to perform a required task (.NET shell extensions were a no-no for a long time!) But the higher-level languages are easier to understand and help meet aggressive schedules.

HTML can be wielded the same way. Google+ and Facebook are web applications used by millions; their developers chose to aggressively optimize their HTML at the cost of readability. Your blog and its theme were intended to be edited; the HTML was optimized for readability at the cost of performance. Developers have to choose the right tool for the job.

The only sad part is my memory of self-learning HTML by viewing the source of nearly every web page I visited. I wonder how many people will be curious about web development but turned off by minified source? I don't think it's an argument for abandonment, but it's still a concern.

You have to differentiate between human-readable view-source and semantic markup. It's hard to tell out of context, but that jumble of markup in your second screenshot looks like semantic enough markup, and there's no obtrusive/inline JavaScript in the markup. Add whitespace and it's probably about what someone would write by hand.

I think the future of semantic markup is great and has only continued getting brighter with HTML5.

The importance of human-readable JS/HTML is an interesting question. If you'd asked me a decade ago, it would have been a no-brainer to say that it was very important. The web could never have evolved as well as it did without meaningful view-source.

Even with the excellent in-browser tooling we currently have, I do spend quite a bit of development time loading pages and immediately hitting Ctrl-U to see what the heck was actually rendered. Digging through the markup tree in Firebug/Chrome is an alternative to that, but nowhere near as quick/easy, and it doesn't provide the same high-level overview of the size/structure/layout of the document. In my own development, minified markup would definitely add unwanted friction.

Once you factor in gzip/deflate HTTP compression, I'm skeptical that the difference between human-readable and minified markup is worth the obfuscation on any but the most heavily trafficked sites.

The difference is each of those sites is a personal aggregator of other content. They use a personal context (terms, network, location) to aggregate information or snippets elsewhere often sourced from other private silos (e.g. protected tweets, fb updates and photos).

Many of us don't write those sorts of sites but write the content-rich personalization-light sites they are discovering and linking to.

We don't need tons of JavaScript, AJAX and hash fragments because we're writing rich content that needs to be search indexed and perma-linked open to wide audiences.

"5 GB of bandwidth. That's not chump change." <= Really? This is 2011. I checked Amazon S3 transfer prices: 5 GB costs 60 cents. That's quite the definition of chump change. I also did a Google search for 1 Mbit colocation prices and found offers below $30 -- 1 Mbit is 320 GB per month, so 1 GB is below 10 cents for colo as well...
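Both figures in that comment are easy to check (assuming the ~$0.12/GB transfer rate the 60-cent figure implies; actual 2011 S3 pricing was tiered):

```javascript
// A 1 Mbit/s link saturated for a 30-day month:
var bytesPerSecond = 1e6 / 8;                            // 125,000 B/s
var gbPerMonth = bytesPerSecond * 30 * 24 * 3600 / 1e9;  // 324 GB, close to the 320 quoted

// 5 GB of transfer at ~$0.12/GB:
var transferCost = 5 * 0.12;                             // ≈ $0.60
```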

Like many here, I completely agree with Erik Meijer's comment "Javascript is the assembly language...".

However, having built a large system using Nikhil's Script#, there are incredible benefits and yet still some challenges.

One of the biggest problems with such tools, whether ViewState, GWT, etc., is that you have to debug in the browser. This means that you have to set breakpoints and map the generated code back to what you wrote. Same with HTML and perhaps even CSS. So the more the tool does, the bigger the gap. Which is why Less, Sass, and CoffeeScript seem more palatable for many; they are more like macros over their respective abstractions, much like C++ was over C originally.

Could you imagine writing C# and debugging IL? It's possible, sure, but it doesn't sound like a good time.

Once debuggers are created and a few other tooling problems are solved, JavaScript+HTML can truly become the assembly language.

Are you cunningly preparing us for a Silverlight to HTML/JavaScript compiler? So if I say "yes, I don't care about what is downloaded to the client, it is all about abstraction layers," I can't go back, because I said so? :-)

The sheer number of jQuery plugins proves it is all about abstraction. At times I teach jQuery, JavaScript, and HTML, but it does feel like teaching assembly: basic stuff, unhelpful editors and debugging, low productivity, repeating the same constructs...

Thursday, 07 July 2011 04:35:03 UTC

Minifying HTML is simply an extra build step to reduce bandwidth, and it could be totally transparent to the user, like compression. There's nothing stopping browsers from auto-formatting the contents of 'view-source' to (partially) restore the semantics of the minified HTML.

Dave Barone

Thursday, 07 July 2011 04:54:58 UTC
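The build step Dave describes can be sketched in a few lines (a deliberately naive minifier; real tools must also preserve whitespace-sensitive content like <pre> and <textarea>):

```javascript
// Collapse whitespace between tags and squeeze any remaining runs of spaces.
function minifyHtml(html) {
  return html
    .replace(/>\s+</g, "><")   // drop whitespace separating tags
    .replace(/\s{2,}/g, " ")   // collapse remaining whitespace runs
    .trim();
}

var pretty = "<ul>\n  <li>one</li>\n  <li>two</li>\n</ul>";
minifyHtml(pretty); // "<ul><li>one</li><li>two</li></ul>"
```

View-source pretty-printing in the browser would then just be the inverse of this step, as Dave suggests.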

Like many others, I think there's a difference between minifying your scripts and non-semantic HTML. I encourage peers and mentor people to own their output - whether HTML, CSS, JS, whatever - and make that the best it can be. I can't be the only web dev here who would only use Repeaters because the HTML rendered by DataGrids et al. was not up to par. It's great to see that MVC has helped to reverse that trend, but I still think we need to make sure our output makes sense.

I've had to deal with lots of unwieldy code created by great developers who felt their only responsibility was the server-side code. Hard to maintain, difficult to plug in other components, etc.

I think it is ridiculous that developers have to do this kind of "hacking"/compiling of JavaScript to produce web applications. Let's face the truth: HTML and JavaScript were created to be a document language, not an application language. So now developers are using the wrong tools to create web apps. Developing apps in HTML/DOM/JavaScript is like developing apps on top of Word or PDF. In other words, it does not make any sense.

Just to make a web app, people have to use a lot of JavaScript libraries (which are essentially hacks around a dysfunctional DOM to make it a little bit useful). For example, jQuery: Scott in some blog post was praising how nicely and easily he can do DOM manipulations with it, but compare jQuery code with some Silverlight code and the difference is enormous. Silverlight is much more readable, maintainable, and much easier to understand.

It is about time for developers to understand that hacking on top of a dysfunctional DOM/HTML/JavaScript will not make an evolution of the web. It is counterproductive. It makes developers forget about strict code, about the Gang of Four, about readability. What kind of new generation of developers are we making? Is using the "$" shortcut for development really such a big achievement?

The web needs new thinking; it needs a new platform. Silverlight, for example, would be a much better platform for the web than HTML in all aspects of development. Obviously Google will never implement Silverlight, because they are so anti-Microsoft. And that is not good, because Google is one of the primary forces that could drive the web platform forward and not backwards. Microsoft created Silverlight with good intentions, but Microsoft cannot be alone in defining the new web, and the others just do not care enough about developers to really drive the change (and probably are not smart enough). Google is even trying to make the browser an OS (Chromebook), and God help developers if that picks up steam.

IMHO, the server response will become like executable code to the browser only when developers are capable of checking a bigger part of their work server-side, like some form of compilation (or at least the checks involved) of the JavaScript. And this is where WebForms, and ASP.NET in general, aren't that developed yet. Google Web Toolkit provides a "Java API", which makes compilation and syntactic and semantic checks (through server-side testing of the client code) possible. The HTML and CSS are not that problematic - we already control rendering through compiled code, which can easily be tested, and on the client side all the browser developer tools display the code indented and properly colored.

In other words, if it's machine code, why do we still write in it? Shouldn't there be a higher abstraction? :)

A K, couldn't agree more... I've been using JavaScript/jQuery/jQuery UI/jqGrid for some time now, and although thanks to these libraries programming in JavaScript has become less painful than before, it still requires a lot of work, and maintaining the code still requires too much effort. The jQuery and jQuery UI libraries release so often that it's not easy to keep up to date, and I've seen some releases with subtle breaking changes that are hard to detect... I've rewritten some very complex pages in Silverlight, and the difference IS huge: far more maintainable (even more so once we started using MVVM in SL), and I didn't experience any breaking changes going from SL2 to SL3 to SL4. Debugging in SL is a joy compared with JavaScript debugging, and there is no mismatch between server-side and client-side development.

I think one of the things that has made the web so huge is its "openness." Traditionally, it has been relatively easy for any Joe Schmoe to look at a site's source, learn from really smart people, and maybe even improve upon and extend their code. The whole minification business somewhat hampers the "openness" of the web in that regard.

JavaScript is assembly language for the web just like PostScript is assembly language for a printer. Once upon a time, high-end graphic designers crafted elegant PostScript to make laser printers jump through hoops. Once Adobe Illustrator (and better drivers) came along, few cared to inspect the PostScript directly.

So it goes with the Web. Heck, maybe the "big Surprise" at the Build conference about Silverlight is that it's going to target HTML5 + JavaScript + node.js + jQuery as a first-class runtime for Silverlight output. That would be in keeping with this trend.

I think we have to separate two completely independent things: 1) minification, and 2) compiling HTML+JavaScript versus hand-coding them. The first is not interesting: you can turn it on or off, and it has at most a minimal influence on the structure of your source code. But the second is very, very important, because it will define the way we do our jobs as developers. Maybe some day we will code only in a "C++ of the web" (meaning any higher-level language) with only a few bits of "inline assembly of the web".
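To make the commenter's first point concrete, here is a tiny, hypothetical sketch (both the function and its minified form are invented for illustration) of the same code before and after a minifier runs over it:

```javascript
// Readable development source: what the developer writes and maintains.
function calculateTotal(prices, taxRate) {
  var subtotal = prices.reduce(function (sum, price) {
    return sum + price;
  }, 0);
  return subtotal * (1 + taxRate);
}

// Roughly what a minifier might emit: identical behavior, no names worth reading.
function a(b,c){return b.reduce(function(d,e){return d+e},0)*(1+c)}

// Both produce the same result; only the first is meant for human eyes.
console.log(calculateTotal([10, 20, 30], 0)); // 60
console.log(a([10, 20, 30], 0));              // 60
```

Which supports the comment's distinction: minification is a deployment switch, while compiling to JavaScript changes the language you actually work in.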

Konstantin

Thursday, 07 July 2011 08:39:51 UTC

I tend to agree that there's a significant distinction (for both users and developers) between Web Apps and Web Sites. If you look at Wikipedia or Blogspot, the majority of their pages don't use much JavaScript (except for setting user preferences or registering some header/footer stuff). Only the "editor" pages are script-rich.

Thinking about the difference between Web Apps and Web Sites, I tend to define it like this: "A web app is made up of pages that do something even when you don't provide any input, and tend to modify their DOM a lot. A web site is made up of pages that don't change after they're loaded." If you leave Facebook or Gmail pages open and go away, when you get back you see very different pages, with new posts/alerts, new open chats, and new mail in the inbox. If you do the same thing with Wikipedia, after loading the Superman page, when you get back you still see Clark Kent's glasses, and you don't see any alert saying that you may also be interested in Batman.

That said, I spend quite some time to make my HTML markup clean... and I usually fail :-P

I'm going to go back to McConnell's comment in Code Complete: Write code for people not computers.

My personal opinion is in line with Nicholas and Jon above. If you are creating an application that is using the web as an API, fine, use a framework that sits on top of JS and HTML and generates things as needed, in a semi-unreadable clump. But then keep in mind that you are no longer looking for a web developer for your site maintenance, but a very specialized (insert frameworks here) developer who might have some web development experience but probably isn't using it much.

If you are creating a site that doesn't need every last byte of space squeezed out of it just to fit across the wire, then write for maintainability. I'm on the fence about minimization of HTML now that we have developer tools for all the major browsers, but the original HTML/markup needs to be readable, as does the original JS. And it needs to be clean, well formatted, and well described.

I like HTML, I like JS. Maybe it's because I've been using them for over a decade and I'm getting set in my ways, but I'm not ready to slap a compiler on top of them and forget playing with them.

Unfortunately I think that is exactly where we are going. I think we are going to be asked for more and more complexity, and our two options are going to be either frameworks that generate everything for us or updated browser standards. And given the historical ability of browser vendors to implement standards in a timely fashion and get people to upgrade to newer versions of browsers...

There is definitely a difference between a web app and a web page. For your site you want clean, semantic HTML. Does it need to be readable? No. I don't care if it's minified. But if you take away semantic, clean, minimal HTML and replace it with a huge mess of machine code, you rob search engines of the capacity to understand your site.

Actually, on second thought - tell all my competitors they should build their sites like this :)

This kind of question becomes even less important in these days of Single Page Applications, where a minimal page is used just to get the browser started, and then everything is generated on the fly by JavaScript/jQuery. This renders View Source completely useless, but that doesn't matter because all the browsers have got Developer Tools built in to view the actual DOM and other loaded assets as they exist currently, including manipulated content.

Even the purists are adopting "compiler" solutions like CoffeeScript and Sass to make their JavaScript and CSS more serviceable, and it's standard practice to minify these resources in production, so why should the HTML itself be a special case?
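As a concrete (hand-written, illustrative) example of that "compiler" workflow: a one-line CoffeeScript function such as `square = (x) -> x * x` compiles to JavaScript along these lines:

```javascript
// Approximately what the CoffeeScript compiler emits for:
//   square = (x) -> x * x
// (transcribed by hand for illustration; exact output varies by compiler version)
var square;

square = function(x) {
  return x * x;
};

console.log(square(5)); // 25
```

The developer edits the CoffeeScript, and the JavaScript that ships is just build output, much as the GWT-generated source discussed above.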

I know there are those in Redmond who don't want to hear this from so many, but when you want to do something that actually matters, it gets done via Silverlight...

Chance

Thursday, 07 July 2011 11:58:55 UTC

"It's so heavy" really means "I don't understand what it does."

Wow.

David Fauber

Thursday, 07 July 2011 12:10:04 UTC

I'm a little unclear on the point of this post. You don't seem to be complaining "just" about minification, but why complain about it at all? I make web sites for web site users, not for people who want to scrape my code. I'm seriously not going to double the size of all the script downloads just so it's easier for 1 in 10,000 users to look at my source code. No thanks, I will continue to minimize.

As for abstractions: it would have been difficult or impossible for a human being to, say, code Angry Birds directly in JavaScript in any reasonable length of time. So why bemoan that you can't easily understand the code that GWT produced? If GWT hadn't produced it, it most likely wouldn't exist. Instead, try to understand GWT, if that interests you.

Thursday, 07 July 2011 12:45:49 UTC

I call this inner beauty, which is as important as outer beauty.

M Bala

Thursday, 07 July 2011 12:55:25 UTC

This is very interesting. Last year at the YUI conference there was a panel on "The Future of Frontend Engineering" in which Tantek Çelik attributed the popularity of JavaScript to view source ("I have a hypothesis developed for that. I think the reason JavaScript has won can be said in two words: view source.")

One of the people in the audience (Dion Almaer) brought up the point that you make on this blog and even said "the new view source is on code repositories rather than in live code, because most of our websites are actually rendered according to performance standards or actually rendered for different browsers".

I wonder what the ramifications of this would be. As long as there are plenty of open source projects where people can actually view the source code in code repositories this might not be a big deal but only time will tell :)

You can see the video (and a transcript) here: http://developer.yahoo.com/yui/theater/video.php?v=yuiconf2010-panel

Application Javascript no doubt should be unreadable and minified. It saves bandwidth and deters decompilation.

I'll not be surprised if Javascript eventually turns out to be a form of intermediate code, where people code in other languages, run it through a compiler/translator, and it becomes HTML/JS. Javascript depends so heavily on libraries to be cross-compatible, and its syntax is so lacking, that a more advanced wrapper language will definitely boost programmer productivity.

And this is possible because the JS engines are all getting faster. JS needs to evolve much faster if we don't want to end up with a wrecked web full of lousy JS.

Thursday, 07 July 2011 13:21:09 UTC

There are so many great tools out there to minify and combine your web application, and recently there is even a build system inside the HTML5 Boilerplate that makes it easier still...

The idea is to keep the development code as clean and formatted as possible with good variable names, class names, etc... but when you are ready to go to production there is a build process that will compress all your work down into something very small so it can be delivered at an optimal speed. As for the HTML minification you can argue that it may or may not help you due to gzip compression, but I would guess it doesn't hurt. In some cases the minification step also deters people from stealing your work as easily. It is still possible of course, but makes it harder.

Interesting article and I've had somewhat similar thoughts. Therefore I do agree with you ;-) When we write native apps do we care about the readability of the output? We became accustomed to viewing the markup and js in our sites in part because there has not been as great of a separation between build (design, code, debug - especially debug)-time and run-time as there has been when building native apps. In fact, we have often used the run-time container (browser) as a debugging tool. These facts seem to have contributed to why we have cared about readability in output. And of course we view markup to see how others do things :-)

As UX continues to become richer in browser-based apps there will no doubt be an increase in the amount of client code required - whether that code is in the form of native browser capability, js libraries, or app-specific code. I expect that we will care less and less about readability in the final output.

I will say though that we will continue to care about performance, and tight, fast code will be preferred by developers and users over bloated output. That makes it still useful for developers to at least have some sense for what is going on under the covers as levels of abstraction rise. Just as it has always been.

Bill Draper

Thursday, 07 July 2011 13:47:30 UTC

Hi, Scott,

Are you trying to leak the September BUILD contents here? Or are you 'testing the water'? :)

I keep the code clean and formatted for ME. If I have to use a third party library that dumps out machine generated code into my page, that's fine, but MY stuff will be well structured. And it's not because Jon Galloway will come along and view my source and judge me, it's so that when something unexpected happens, I can get back in and see what went wrong.

The code generated by third parties is a mess, but if I trust it, then I trust it. Even though something might be open source, or I can view the code and see what I think the problem is, I will rarely, if ever, modify the code that generates the code. I know that the people who wrote it have solved problems that I don't even know exist, and me jumping in and mucking with things is likely to end poorly. Go read Raymond Chen's blog about how people misuse(d) the Win32 API, "fixing bugs" that were "caused by Microsoft."

My code needs to be debuggable. Theirs does not. Mine will be pretty. Theirs will be whatever they want it to be.

Matt Dawdy

Thursday, 07 July 2011 14:06:46 UTC

Coding in "pretty" HTML+CSS+JS and minifying when you deploy is not the same thing as developing in WebForms/ViewState and you know it. You're not fooling anyone.

There's only one person to whom the readability of the markup should matter, and that's the developer. By "the developer", I mean the human being who creates and maintains the application. For him (the male embraces the female), it matters utterly - but if it's desirable to run that source through an obfuscator or a minifier, then that seems fine to me.

But the readability of the development copy of the site is paramount. It is second only to "works" in the measures of software quality, especially in a situation where different developers work on it from day to day.

Our app was built in GWT. As has been mentioned, there are pros and cons to not caring about the generated output. I've never cared about the JavaScript that gets generated and leave it obfuscated even in development mode. The JavaScript aspect of this is moot for me. I can't think of a single case where there was an issue with the generated JavaScript in any browser.

The HTML and CSS on the other hand is a different story. Browser differences still pop up and not having the same control over HTML and CSS makes it harder to deal with them. Twice, and in different ways, we've had the IE issue outlined here bite us: http://codebetter.com/kylebaley/2011/03/31/how-to-strip-away-the-super-powers-of-borders-in-ie-2/

With the HTML and CSS being generated, it's harder to play around with the HTML and figure out: a) what's wrong, then b) how to fix it.

A larger concern for me is testing. With GWT, unit testing these applications is much easier, but UI testing is more difficult. With some other entity taking over generation of the HTML and CSS, it's harder to use HTML IDs and CSS selectors to locate elements on the page for UI testing frameworks.

I think the end-user experience is the most important thing. It is certainly nice to have good-looking markup, but it is far more important that the client gets an excellent experience; after all, they are the reason we're building the system in the first place. That experience should also include assistive technologies (like screen readers); as long as we support such devices, I don't care if all the markup is on one big single line.

"Working software is the primary measure of progress." If you deliver what the customer wants, and it works and performs, who cares how... and "view source" can be as ugly as it appears... if the customer is happy.

As other individuals have pointed out, there are two issues here. The first is minification. HTML, CSS, etc. 100% should be minified and compressed before being sent across the wire. At this point in time there are tools that can do it during the build process, or you can use something like the project I created to just do it on the fly (I actually used Scott's site as the guinea pig; he laughed, so I was happy). Anyway, if you're not doing this because you want to be able to do "View Source" for debugging purposes, you're using the wrong tools (for HTML/CSS, most developer tools have an "Inspect Element" option that gives more info anyway). With JavaScript, things get a bit tougher, but that's why you keep a debug version around for testing purposes.

The second issue is that JavaScript, etc. are the assembly of the web. We're already seeing languages built on top of JavaScript (CoffeeScript, Script#, etc.). The issue at present is the tooling: it's rather difficult to map back to the original language (CoffeeScript, etc.) and debug from there, so until that improves, they're not that useful to me. As far as HTML/CSS goes, if a tool can come along and generate decent markup based on some layout mechanism (I don't care if it's drag/drop or whatever), I'm all for that. The issue currently is that the vast majority of those tools suck... and not in a good way. However, as things move along, I would hope that we can abstract away the underlying languages a bit.

I think Jon Galloway's got it right WRT the difference between app and doc (and yes, there is some grey area in between), but I'm worried about the level of insouciance toward opaque View Source. There's even outright hostility toward View Source (obfuscation)! This is a reality, but View Source is why the web became successful, and when we lose that, we lose much of the introduction and self-training that made the web accessible to a generation of developers.

I want to say this again to make sure folks really understand: there are two separate issues here. There's minification and general obfuscation of source, sure. But that's just the first. The real issue is JavaScript as a target language for other languages. GWT is a framework for writing web applications in *JAVA* where the resulting bytecode is *JAVASCRIPT.*

This article got me thinking... I wonder if Google will come to regret GWT. I mean, Google makes its living by parsing HTML and indexing it. What if in the future all web pages are nothing more than JavaScript that renders a view using the HTML5 canvas? Is Google going to be able to cope with that?

Maybe GWT spits out different HTML for robots. However, this article made me reconsider Microsoft's newfound interest in HTML5 for Windows 7. Maybe M$ is making the switch to JS-driven HTML5 as a strategy for cutting off Google's air supply.

ted stockwell

Thursday, 07 July 2011 16:56:47 UTC

I think that, as always, I want both: I want abstraction and also the ability to go inside and tweak things as needed, to cover the aspects the tooling/abstraction can't deal with, or can't evolve fast enough to deal with.

One of the things I liked best while working heavily with MonoRail some years ago was that it had lots of documented extension points, and if you needed to, you could even swap in customized (or totally new) versions of the core pieces to achieve what was demanded. In retrospect, being open source helped a lot to that end.

So I want tooling that evolves fast enough to keep up with changes in the "assembly language" behind it (HTML X / JS Y), and that allows me to replace or add new pieces in the "composition/optimization pipeline", easily tailoring it to the needs of each project we work on.

jinushaun - You're missing the point. WebForms is a layer of abstraction over HTML and JS. ViewState solved a problem that still isn't quite solved. GWT chooses a high-level designed language (Java) over an organically grown one (HTML+JS) and treats the whole browser as a VM. The question: do we write assembly language or something higher level?

Scott - Sorry for neglecting the larger observation, that JavaScript is the assembly language of the web, which is a great insight, if a tiny bit misleading since one doesn't typically compile *down* to an FP language (personally, I'd like to see F# or some other strong multi-paradigm/FP language supported client side, but I don't see that happening).

It would be great to have WebForms "upgraded" to the next level, where you get automatic optimization, minification, and compression of what is sent to the browser, plus detection of, and the necessary changes to, control event handlers to take advantage of the latest technologies (jQuery/AJAX/JSON/HTML5/CSS3/etc.), all without sacrificing the developer side.

IL->decompiled back to JavaScript. It's a clever concept and works with compiled assemblies. That site above shows the XNA SDK Samples running on a browser as HTML5/Canvas/ECMAScript5 code. Calls out to native externals (PInvokes if you like) are fixed-up to javascript code, so in the case of XNA these are redirected to Canvas calls in HTML5.

After that show, maybe a follow-up show about "whatever happened to Volta" from the cloud programmability team....

Joe Wood

Thursday, 07 July 2011 18:35:15 UTC

I really agree with what Erik said.

If you review the history of HTML and the web, I think you could have logically deduced that the browser was going to become a "VM" and "HTML + JS" was going to become your "bytecode". Early on, HTML and the web were used to share documents. Eventually, WYSIWYG applications began sprouting up because they made life easier and faster. Some time thereafter, the web began to explode, and someone made the bold decision to build their application on the web using HTML. Sadly, HTML was created to support documents, not applications, and when push came to shove, JavaScript eventually won out as the tool that helps HTML support applications on the web. And what do we want now? Tools to make our jobs easier and faster. This is what John Resig did with jQuery. This is what Google created in GWT, except they didn't create their own language; they used an already popular one.

We have become so dependent on HTML and JS that it is near impossible to scrap them and use something more supportive of web applications. It would be nice to see Microsoft change Silverlight to make it more like GWT: something that builds HTML and JS (with the help of C#, of course). I don't mind having my JS and HTML abstracted away from me. Do I care that C# is compiled into MSIL? Or that MSIL is compiled to native code? Not one bit, as long as it works! :)

Higher level, of course. What seems compelling is a runtime that accepts a sufficiently expressive intermediate language as its input, along with appropriate abstractions for each of the various aspects of the distributed problem (presentation, web/presentation server, back end, etc.). XAML/.NET has given us that. Whether that intermediate language happens to be JS becomes a detail (wouldn't it be nice if one day the browser supported a more abstract IL and JS were but one language that targeted it?)

We really need better tooling and debugging, and if this helps us get there it will be welcome. Another advantage for some might be the ability to gain a little more consistency in programming across the various layers, which is maybe why node.js has gained a certain interest. But we also have to keep in mind other factors, such as the ones that made Rails and ASP.NET MVC so popular so fast: some abstractions actually make building real applications harder rather than easier.

Bill Draper

Thursday, 07 July 2011 18:58:24 UTC

"Ultimately, if it works great, who cares?" This is exactly what I've been saying about Ruby and Python for years. Dynamic languages like JavaScript eliminate much of the ceremony associated with coding and once you get it right, it's right. For systems that have to scale or be collaborated on across large teams, compile time type safety is helpful. But it's only helpful for each build. After that, who cares?

I got into software development before the internet was a thing. Back then, as now, I developed applications to be used by end users. Web sites ARE applications and are designed to do something, whether rendering pictures and text or performing complex UI interaction. Nowadays the application's UI is rendered in a browser, but we also live under the tyranny of Google and SEO.

A search engine should be just that, a search engine, and not an excuse to plaster advertising on every square inch of surface area, as in the movie Idiocracy.

Our tools can and should generate one big, long string of optimized code that the browser (interpreter) can execute. There will someday be some new means of allowing users to find applications, sites, and information (I refuse to call it "content") without us having to develop so that everyone in the search engine/SEO/web advertising ecosystem gets fed. Have you had your SEO snake oil today?

When I came up, page markup and any language that ended in "script" wasn't considered programming or application development. If my controls generate JavaScript when the code executes server-side and gets served, so what. I don't write JavaScript. I don't need to.

In my not humble at all opinion, HTML, JavaScript, and browsers are the biggest kludge ever to exist in the world of information technology. And thanks, MS, Firefox, Safari, Chrome, et al. for building so many versions of a better "standard" mousetrap. Really, thanks an effing lot - I'm looking at you, IE.

It's been over 10 years since I needed to View Source to teach myself some new browser UI trick. There's nothing I can't do with web forms and Telerik or Infragistics or any number of third-party controls.

Building apps is building apps. Software is software. Web sites are software. There is nothing new under the sun. Plus ça change, plus c'est la même chose.

</manifesto>

Fred Thekat

Thursday, 07 July 2011 19:03:58 UTC

While this discussion presents two issues there are a few details to each of them that are lacking.

------------------------------

1) JavaScript is not an evil language; it's not a poor language; it's not a weak language. These are issues people have with it because it is sometimes difficult to understand, and people attack and fear what they do not fully understand. Treating JavaScript as an assembly language, with a compiler that takes another language and boils it down to JavaScript, can be handy. It can also be a very bad idea if that compiler does not produce effective JavaScript, exactly as if a C++ compiler generated inefficient assembly code.
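To illustrate that point about compilers emitting poor JavaScript, here is a hypothetical sketch (both functions are invented for illustration): two routines with identical behavior, one the kind of needlessly indirect code a careless compiler might generate, the other what an optimizing compiler or a human would write.

```javascript
// What a careless compiler might emit: a helper call for every access.
function sumNaive(arr) {
  function get(obj, key) { return obj[key]; } // needless indirection
  var total = 0;
  for (var i = 0; i < get(arr, "length"); i++) {
    total += get(arr, i);
  }
  return total;
}

// The straightforward version a better compiler (or a person) would produce.
function sumDirect(arr) {
  var total = 0;
  for (var i = 0, n = arr.length; i < n; i++) {
    total += arr[i];
  }
  return total;
}

console.log(sumNaive([1, 2, 3, 4]));  // 10
console.log(sumDirect([1, 2, 3, 4])); // 10
```

Both return the same answer; the difference shows up only in readability and, on the JS engines of the day, in performance.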

2) There is no definitive difference between a website and a web application. Both are still a dance of HTML, JavaScript, and Cascading Style Sheets that presents information. We defined those terms in the 90s, when ASP, PHP, and JSP were born. Once systems started becoming something more than just a few little static documents and grew into more organic, dynamic objects, there was no difference anymore. Yes, I can still write a classic web page. Yes, I can still write a classic application. For the most part, though, the distinction is nostalgia, not technology. I can write a website with all the latest and greatest and make it look and feel just like those pages too. So there is no longer any real division between the page and application definitions.

3) View Source is a legacy browser feature that today is used mostly by developers harvesting features they want to add to their own pages, by developers debugging a page when they want to see the exact file the browser received, and by hackers looking for a vulnerability (hackers aside, since the resourceful people of that world will never be completely hindered). There are plenty of other reasons View Source is used; these are just three that I often see.

Messy source can be semantic. A semantic web means that information is represented in a contextual way: menu content is identified as menu content; headers, body content, subjects, etc. are all marked accordingly. Clean source can fail to be semantic; just because it is human-readable doesn't mean it properly represents the information. This goes for JavaScript, HTML, and CSS. When was the last time you opened the hood of your car and said, "OMG, that is the most beautiful set of hardware," in reference to how things appeared? While the car guy in me would say "all the time, good sir," it still looks like a tightly wrapped package of wires and pipes and other hardware, much like a tightly compressed HTML file.

4) ViewState was the worst idea ever. I am sorry to say it, but it is true. There is never a need to send the user content that can be exposed to a harmful application and then presented back to a system that claims to be secure. ViewState was the end-all, catch-all data store for many ASP.NET WebForms pages, containing data that, if modified between the client and the server, could wreak havoc on supposedly secure systems. We got along well without it before WebForms, and we get along well without it now with ASP.NET MVC. As I see it, ViewState was the solution for making web development feel more like VB programming, providing state in a stateless environment (at least the web as defined in 2002, which was claimed to have a stateless model).

I see websites as a very stateful environment. A page has a beginning, a middle, and an end, and even after it has been received by the user, we can use AJAX to dynamically request content outside of that standard pattern, each request with its own stateful behaviors. This does not differ in the slightest from any application you examine: it has a beginning, a middle, and an end. While the end of an application can imply that the application has completed, go a little further and think of the states of the application as the same events of a web page. Startup, loading, moving from state (screen) to state (screen), exiting: all can be read as the statefulness of a website.

5) Clean markup is important: for maintenance, for development, for "overall, this is how it works" purposes. I know for a fact that my C++ application does not have 800MB of whitespace characters in it after I compile it and send it to someone to execute. Why should I send the user of a website all this whitespace that means absolutely nothing to them?

I shouldn't. Effective developers will write clean code and semantic markup and deliver efficient user experiences. That means the HTML file has all the proper markup it needs to correctly represent the content it presents, the JavaScript is written and delivered in an efficient and concise manner, and the CSS is likewise efficient and delivered concisely.

------------------------------

We need to focus on creating excellent user experiences while maintaining effective, clean code. Effective code should also adhere to a "simple is as simple does" technique: you should not need 80 technologies to present three to an end user, and you should never confuse the difference and purpose between any of your technologies. If you write ASP.NET MVC, you should never have to write a WebForms page just to get some dynamic behavior. ASP.NET MVC is just as good as any of the PHP or Java MVC frameworks, and somewhat weaker only in that it is younger and tied more to a business direction than to the progressive technology advancement common in the OSS community.

I have been thinking about this question off and on throughout the day, and I think the statement made by Erik has some merit but isn't exactly accurate. [Most of us] don't write in assembly language or MSIL because we feel it is too difficult to understand and therefore doesn't let us concentrate on the problem; for that reason we raised the level of abstraction. JavaScript, on the other hand, isn't that hard to understand; the tools around it just suck.

That lack of tooling is one of the reasons we use other tools, like Java with Google Web Toolkit and .NET with Script#, to generate the script. The languages available in Java and .NET aren't much different from JavaScript; it is the frameworks and the integrated development environments that make them the developer's choice.

Think of something as easy as creating a data structure with properties in C# using Visual Studio, and how much more difficult the same task is in JavaScript. Part of the issue is that everything in the browser is in the global namespace by default, so we have to rely on frameworks like jQuery. But jQuery isn't a standard either, and there is no intrinsic support for it in the IDE.
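As a sketch of that gap (using the ES5 `Object.defineProperty` API; the `Person` type and its validation rule are invented for illustration), here is what a simple data structure with one checked property looks like in 2011-era JavaScript, where C# would give you roughly `public string Name { get; set; }` plus a one-line check:

```javascript
// A hypothetical Person type with one validated property, pre-ES6 style.
function Person(name) {
  var _name; // backing field, kept private via the closure
  Object.defineProperty(this, "name", {
    get: function () { return _name; },
    set: function (value) {
      if (typeof value !== "string" || value.length === 0) {
        throw new Error("name must be a non-empty string");
      }
      _name = value;
    }
  });
  this.name = name; // run the setter's validation at construction time
}

var p = new Person("Ada");
console.log(p.name); // "Ada"
```

The pattern works, but compared with the C# one-liner it is exactly the kind of ceremony the commenter is pointing at.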

One of the reasons for the lack of tooling is that the browser was essentially a document reader that was hacked into an application platform (Thank you Microsoft and IXmlHttpRequest). I imagine this is why I feel like the JavaScript support in Visual Studio was hacked in as well. Some vendor needs to rethink what the "browser as a platform" is and design the proper tools to support it.

No, I do not think we should aim to abstract the client side too much. The client side is the closest you get to touching your end users and giving them the experience you desire. As said, complexities are growing on the web. Yes, that is true. Even more, I believe the ever more complex web is THE reason we won't be able to abstract the front-end code like an assembly, because assembly implies something much more static than the web is or will be. While things like HTML5 take a relatively long time to be released, the general development community and innovation move quicker. Hence: jQuery movement + web forms = I'm screwed.

You mentioned a WebForms + Script type of thing. While I don't know exactly what you're thinking, I might think of it backwards, just to stir the pot for you:

Script + WebForms. Oh no he didn't. Yeah, I put the "Script" first, baby. Boom!... Well, all I mean is the scripting world needs to take a higher priority in our minds as we build a framework. I think you're thinking we need to let the two play nicer together... as in "stop touching me" (I have little kids). That would be great. Let script live happily and let WebForms use its power. How about the whole thing, though: Script/HTML/CSS + WebForms. Separate them all out as much as possible. Maybe it's like View + WebForms. Sorry, I'm thinking out loud... or writing out thought... typing as I think... whatever. Got it... View + C#. Simple concept. Just separate the script/HTML/CSS/everything-front-end from the chaos of WebForms, but allow some C# to "bind" or give it stuff. No implied single form or ViewState or faux-HTML with runat="server". Just real markup that allows C# to script it... if it's nice. Really, take the simplicity of classic ASP, suggest a separation (via a code-behind-like setup) but don't enforce a structure, and allow developers to choose to implement the specific components that WebForms brought us. No drag-and-drop, "it just works" crap. But show me how and where things go to have something like a ViewState setup, with only the deeper object pieces not in my face. Everything in and out of the page is revealed and implemented by me, the developer. All the niceness of Membership works if I want it, but I implement the front end completely and make the Membership calls to the back end myself. A blank-canvas framework for .NET. Yes...

Stuff like that only works if you guys trust the developers more. I'm trying to trust you guys. We do need to strip out a lot of stuff. So my request: Please prune the framework, and leave the front-end to us. And I'll choose the events, thank you. If you have or know of a project or solution like that, please let me know.

Ben

Friday, 08 July 2011 02:48:48 UTC

ViewState is the reason I have gone to ASP.NET MVC. After all, WebForms really increases productivity - but for small sites, fast loading and network traffic count (I remember your blog post about optimizing the favicon for that...).

On ViewState - I think this is a dead end and a hangover from a less-than-capable client stack. Calls between the server and the client should be explicit and meaningful. Sending the state of the GUI over the wire with each interaction is wasteful - it doesn't use the capability of the client, wastes bandwidth, and bottlenecks the server.
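A sketch of what "explicit and meaningful" could look like on the client, assuming a hypothetical `/api/cart` endpoint: one interaction sends a couple of fields, not a serialized snapshot of every control on the page.

```javascript
// Build the minimal payload for one interaction. This is all that needs
// to cross the wire - compare with a __VIEWSTATE blob carrying the whole
// control tree on every postback.
function buildCartUpdate(itemId, quantity) {
  return JSON.stringify({ itemId: itemId, quantity: quantity });
}

// In the browser you would then POST it explicitly, e.g.:
//   var xhr = new XMLHttpRequest();
//   xhr.open("POST", "/api/cart");               // hypothetical endpoint
//   xhr.setRequestHeader("Content-Type", "application/json");
//   xhr.send(buildCartUpdate(42, 3));
console.log(buildCartUpdate(42, 3)); // {"itemId":42,"quantity":3}
```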

Joe Wood

Friday, 08 July 2011 16:38:11 UTC

Alistair nailed it: it's the accessibility, stupid. The danger in thinking of web development as writing software for a browser runtime is that you ignore all the non-browser HTTP user agents: speech browsers, indexers/search engines, command-line text browsers. Eventually developers focusing on the runtime abstraction forget the document centricity of HTTP and go about layering on complexity and extra protocols to recreate the interoperability that accessible HTTP resources already provide.

Practically, only startups that find a way to survive without SEO can get away with this. So be it. As a web dev and good internet citizen, I couldn't care less what these private walled gardens are doing on their property. Their client stacks and frameworks and enigmatic HTML machinations won't affect the broader SEO/public-driven web dev culture at large.

"As I see ViewState, it was the solution to make web development feel more like VB programming and provide a state to a stateless environment. At least the web as defined in 2002, which was claimed to have a stateless model."

Exactly.

David Fauber

Friday, 08 July 2011 21:24:36 UTC

I personally like pretty code because...

a) It's easier to debug our own crap. (Don't lie... you all know this is true.)
b) It's easier to say "hey... how the f* is Facebook doing this?", look at their code, and adapt it for our own so I can go play my next round of Mario Kart with my sons.

To others' points: yeah, we've done this to shave time off the download, and we use GZip compression on the server side to do more. But thankfully, bandwidth gets cheaper and wider, so eventually it doesn't matter.

And it wasn't that ViewState was unreadable, but sometimes if you used the same user control over and over in a page, you ended up generating a 4MB ViewState. Works fine internally, not so well in Argentina.

ViewState was the worst idea ever. I am sorry to say that, but it is true. There is never a need to send the user content that can be tampered with by a harmful application and then presented back to a system that claims to be secure. ViewState was the end-all-catch-all data center for many ASP.NET WebForms pages, containing data that, if modified between the client and server, could wreak havoc on secure systems. We got along well without it before WebForms; we get along well without it now with ASP.NET MVC. As I see ViewState, it was the solution to make web development feel more like VB programming and provide state to a stateless environment. At least the web as defined in 2002, which was claimed to have a stateless model.

1. I don't think it was the worst idea ever. You apparently never used Windows ME.
2. It's fairly secure. There were a number of safety mechanisms in place to prevent the type of attacks you are referring to. It wasn't meant to replace SSL, nor to excuse irresponsible programming. And if such an attack happened, there was usually more than one cause beyond just the ViewState.
3. We got along because it didn't exist. We got along without the internet, MP3s, and Blu-ray too.
4. It feels more like WinForms programming, not VB itself.
5. When did HTTP itself become stateful? Last I checked it is still request/response/end. We use tools to communicate via sockets over web interfaces, but they themselves do not use the HTTP protocol to keep a persistent connection unless you hack it and never end the response. That has its own shortcomings.

Well, if all my application (C#) code can be compiled into JavaScript, then why not also compile all (XAML) controls to "code" that makes them render on the HTML5 Canvas? In this scenario a developer would develop in Visual Studio with whatever technology du jour and would not need to know JavaScript, CSS, or HTML; applications would cover the whole browser area with a single canvas; Microsoft would make sure all pixels on that canvas have the right color; and View Source would just show stuff that almost nobody cares to understand.

Marc Schluper

Friday, 08 July 2011 23:19:55 UTC

@Scott + @Matt: I was not aware of ScriptSharp until now; it looks great to me. I have never really taken to JavaScript and would prefer to abstract it away like this. A show on it would be great.

As for ViewState, I think it got a bad rap because of its misuse; a lot of people left it enabled on all their pages and controls when it wasn't required. .NET 4 makes the situation much better because you can have ViewState disabled by default at the page level but enable it for just a single control within the page.
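For readers who haven't seen it, the .NET 4 mechanism being referred to is the `ViewStateMode` property. A minimal sketch (the control ID here is illustrative):

```aspx
<%@ Page Language="C#" ViewStateMode="Disabled" %>

<%-- Everything on this page contributes nothing to __VIEWSTATE
     except this one control, which opts back in. --%>
<asp:TextBox ID="SearchBox" runat="server" ViewStateMode="Enabled" />
```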

The problem with JS as an assembly is that it's so terribly flawed and inconsistent. Most source-to-JS compilers are incomplete or lack tooling. GWT is probably the most advanced out there; for .NET folks there's Script# in the works, but it is missing lots of language features. There is also the pretty nice WebSharper (F# -> JS) framework.

I actually think the web should be all based around the CLR, so we'd have security and language independence. Maybe Google Native Client could help solve the problem.

But most immediately we need much better tooling. I want Visual Studio to integrate with some core JS libraries that provide things like namespaces, requiring JS files, some form of widgets, client + server HTML templating, data binding... all powered by great tools for developers and designers, like Blend for Silverlight. I like jQuery for hacking, but not for something *professional*, as this always tends to get hacky. (I think I have not ever seen a cleanly written JS application.)

It doesn't trouble me when I don't easily understand raw CLR for an app I don't own, especially if it is a client-server app. The fact that a plain View Source presents gibberish is to be expected. Use Firebug if you want a more complete understanding of the client side of an app.

JavaScript+DOM has pulled off a powerful enough platform for distributed computing that the underlying OS/runtime/machine instruction set/microarchitecture is now largely irrelevant for most web app/site developers. At some point, someone will build a new paradigm on top of it and it will be abstracted away. Only developers need pretty code and markup. Better not to rely on reverse engineering other people's private code to learn how to build for the web. Look to open source instead.

JavaScript is already the VM of the web. It has been for some time now. The problem is that, as some have already mentioned, JavaScript makes a poor substitute for a REAL bytecode/virtual machine. I predict that in less than 10 years we will have an actual bytecode standard for the web to use in place of JavaScript, because if we don't, the web's current rate of evolution will really start to slow down, if not stop completely.

I suggest Microsoft recommend the CIL standard for this purpose when the time comes, seeing as it's already a standard anyway. Some people say that javascript engines in some browsers are already competitive with the .net vm, but I guarantee if you had five browser teams competing for the fastest CIL vm, the fastest vm in the group would be orders of magnitude faster than the fastest javascript engine is today.

SleepyDaddy

Sunday, 10 July 2011 16:40:47 UTC

Javascript is still just script. It is NOT anything like assembly language, and it is really garbage to have such trash floating around on the Web, but it is apparently the platform of choice for many.

It is abhorrently slow, and being script it requires translation, which is part of the problem. The "optimization" of putting it all on one line is like someone bragging about their BASIC optimizations: omitting comments, using single-letter variables, and other crap tricks to make the wrong choice of language run something like 30% faster.

Use Java! Or create a new language using a byte-coded VM that will actually do the jobs that are needed without all the verbiage.

My take on this whole Javascript thing is that the programmers don't want to learn how to choose the right tool for the job. Maybe I'm wrong. YMMV!


ExBASICProgrammer

Sunday, 10 July 2011 21:02:20 UTC

DISCLAIMER: I don't know DOTNET and WebForms programming.

Comparing WebForms to GWT??

WebForms: the FrontPage of the '00s. That is, Microsoft tells developers: who the hell needs to be a web developer? Just point and click and you have a full-featured site. You all know how _that_ ended.

GWT: something that MSFT doesn't have yet (we'll get to Script# in a moment), and all the .NET shops are screaming for.

Re Script#: please compare the GWT team (and the development time invested) with the Script# team.

fanbaby

Monday, 11 July 2011 00:31:20 UTC

With respect to the question of whether or not View Source matters, I don't really see why you can't have both when it comes to pretty HTML/JS... There are common rules (or at least it's a reasonably simple task to define some) for how HTML/JS can be displayed, properly spaced and indented. It doesn't seem unreasonable that the View Source window could properly stylize the output, or at least have an option to, if it's seen as important. Basically, Visual Studio's ctrl-k...

What that potentially leaves you with is the opportunity to work at multiple levels of abstraction depending on the need.

I've heard a lot about the whole "js is the IL of the web" lately, and it's not an altogether absurd idea. Ideologically, however, it seems like a messy workaround. I suspect that getting "the world" to adopt an efficient, consistent and uniform intermediate language that all browsers implement is all but impossible. So in that regard, the idea is phenomenal.

Then again, I do little-to-no web development... so wth do I know?

snorfys

Monday, 11 July 2011 10:44:26 UTC

It depends, I think. If I'm working on simple layouts where I don't need to dive into JS with Firebug, everything is fine and I don't give a .. about JS (really, I hate it). But if I'm working together with my designer I have to dive into this stuff, because he wants features that seem impossible at the beginning of the work. Later on they tend to get more and more possible, but it ends in heavy JS cascades. Maybe I should say: for "Release" it is OK, for "Debug" not.

Well, I think we are yet to see the final conclusion. We have had the debate of JavaScript vs. other languages for some time. Lots of frameworks like GWT, Objective-J, etc. are trying to create an extra abstraction layer over JS. These frameworks are possible due to the advancement in speed and improvement of JS engines (at least, it is one of the reasons). But this is just one side of the story. Improvement in JS engines and the language has also resulted in this "assembly language" being used on the server side. JavaScript is now invading the other side (Node.js and webOS). It will be interesting to watch this tussle. BTW, it would be interesting to have a GWT-like framework in JavaScript itself.

EmptyString

Tuesday, 12 July 2011 16:42:25 UTC

One cannot really compare ViewState with GWT or optimized Javascript in general. I don't even understand how one could start to compare the two given that one is entirely passive on the client and meant for storing state and the other is optimized application logic meant to run on the client.

From personal experience, the most obvious reason to hate ViewState is that it's per-view and cannot be cached on the client the same way that a .js file can be. I don't mind big, complicated Javascript because no matter how big it is, I know it will be cached after the first request.

I've seen 20MB (yes, MB) ViewState being used by developers using a grid control who didn't know any better. They simply thought the page was slow because of the amount of data and not because they were sending 20MB back and forth with each request.

More importantly, ViewState enables a sort of weird emulation of event driven programming but for a platform that's inherently "client-server".

This reminds me of 'Varsity in the 90's. DOS was still big, and being 733T, you coded in assembly if you were smart enough and all the other students were in awe of your assembly-fu. Then I started some real work and quickly switched to Delphi for all my Win32 development; it had the power of C++, but creating applications with layouts etc. was so much easier than in C++.

Fast forward to today, and now I am an independent contractor, so spending one hour on something that could be done in 20 minutes really makes a huge difference. I quickly started using GWT, since I can do more in less time. Personally, I agree: I see the JavaScript result as assembly or some other lower-level language; my higher-level language is Java. And not only that, the components available in GWT really speed up my development, which is really important when you get paid per deliverable rather than being paid to just warm the seat. If I don't deliver, I don't eat, so the faster I can build applications, the better for me. Also note I have a full SLA on the deliverables, so no amount of corner cutting will help me; I need higher-level languages to speed up my development time while still keeping the quality high.

1. I don't care about the View Source any more! Now I like to see something like Google+ for my web apps.
2. Yes, I think HTML5 + JavaScript is the new low-level language for the web.
3. I want full control over the low levels, but I don't expect it to be very easy. It is very nice to support many common scenarios as default behavior easily. The abstraction is more important to me.
4. Yes, I want my HTML & JavaScript abstracted away more. But of course I want to be able to create controls/components which can use all of the low-level web features. They should be able to inherit from, and be inherited by, other controls/components easily.

As you said, (HTML + JavaScript) = web low-level assembly. Although I like to know everything about the low-level languages, I don't like to create even an average web app with these low-level languages. I think we need more high-level languages and tools. While these tools give us full control over the low levels (as a low-level component developer), they should observe the abstraction. I especially like real component-oriented development, where we can easily reuse the components in new projects or upgrade a single part of the software without changes in other parts.

What do you think, Dear Reader...Do you want your HTML and JavaScript abstracted away more? Or less?

More please!

JavaScript-Hater

Monday, 18 July 2011 19:19:12 UTC

Hi. I think it depends on your needs, time frame, budget, and the type of resources you have on your team. There are many large-scale applications that run great with ASP.NET, such as bing.com, hotmail.com, chase.com, and Office 365. At the end of the day, it depends on how you want to use the technology. I am sure if you give GWT to a lazy, inexperienced developer, the result will be poor.

Chirag Nirmal

Monday, 18 July 2011 19:44:03 UTC

This article is ridiculous. The output of an optimized site in production does not have to have been generated from a server-side technology. Clean, highly readable HTML and JavaScript and CSS can and should be reinforced in any framework, if drag-and-drop development (i.e. "I don't understand JavaScript") is not the first priority. Once a site is ready for production, the thousands upon thousands of lines of otherwise highly readable markup/CSS/script can be processed through a minifier on a developer's workstation before being uploaded to a server. The end result might look like the gobbledygook that you found.

So in other words, seeing gobbledygook when invoking "View Source" on web sites has ABSOLUTELY NOTHING to do with developers' insistence upon clean markup and code. Very seldom do developers debug a site running in production.

I might add, it's highly offensive to suggest that HTML and Javascript are the "assembly language for the web". Javascript is a highly readable language and is more like C# than assembly language; *minified Javascript*, on the other hand, which is a whole other beast, is more like MSIL. But not assembly. Assembly is assembly. Just don't go there. You lose credibility as a technical authority when you suggest such parallels.

I have to agree with Chirag, it's going to depend on a lot of factors, and there is no one size fits all answer ...yet. I prefer to write the HTML + JS myself, but that's nothing more than a preference, an opinion. It doesn't mean I wouldn't choose WebForms or GWT for a quick cheap app. I don't care what the browser's view-source looks like in production, as long as I can read the development source code and know what's going on. To create quality software, that's really what a developer needs to be able to do.

Right now though, I still think that writing HTML + JS offers more flexibility. It's more deliberate too. WebForms doesn't have a "compiler" that converts C# into HTML+JS the same way C is compiled into assembly. It works with server control components, and you end up having to work within the boundaries of such components. The second you want to break those boundaries, I believe it's much easier and straightforward to write HTML+JS than to write yet another server control that "ends up as" HTML+JS.

Scott - I don't think comparing assembly to JavaScript is the right comparison. I think the correct discussion here is whether you really want full control over your HTML and JavaScript or not. If yes, then ASP.NET MVC is the perfect solution that gives you full control. ASP.NET WebForms is the inefficient way of doing things, and it doesn't give you full control over your client side. In GWT, you're developing in Java and JavaScript gets produced as a result. Again, you don't have full control, and you'll end up doing much more work solving corner cases and fighting the unnecessary abstraction layer. I don't think an abstraction layer is needed when it comes to web development. However, wonderful tools like jQuery always make it a lot more fun and easier to develop.

I also agree. JavaScript is nothing more than an assembly language for more abstract languages. I also use ScriptSharp instead of writing JavaScript, and it sounds like my style of development is very similar to what Matt Davis describes above. This, in turn, gives me compile-time checking, refactoring, and everything else a modern IDE has to offer.

@Marc Schluper - I actually tried to make a XAML to HTML compiler years back as a pet project. The problem at the time was that HTML+CSS couldn't handle some of the most basic layout patterns that XAML didn't have to think twice about. So to support certain layouts, more and more hacks and javascript had to be added. And then eventually, the HTML site was slow as hell. However, if you severely limited the subset of XAML you supported, it would be possible.

I do not agree. Too one-sided, too short-sighted, if I may say so. Markup is not dead, but it is so easy to ignore. I have been a server-side programmer for years, but underestimating the power of markup is a common mistake.

In fact HTML5 is coming back to the markup semantics: dropping div and adding footer and header are just examples of it.

Also, I would never compare GWT with WebForms. Was WebForms not the framework where the client's button click was handled on the server side?! WebForms was completely based on the flawed abstraction that we can reduce web development to a windows-forms-esque solution...

So to start, let me admit that I mostly scrolled to the bottom, only skimming the other comments. That's because from what I gleaned, no one is saying what I'm about to say:

The current situation is a coping mechanism and in no way ideal or "good".

The real issue is that HTML+JS have not kept pace with how they are being used.

Semantic HTML is a great idea. Unfortunately, the vocabulary we're working with is far too limited. We should not be forced to resort to extremely complex machinations to bend HTML and JS to our will. The basic things that we use every day should be simple and easily expressed through declarative markup. JS is there to compensate for all the things that you can't declare in HTML.

I'm with Erik on this point. This is no different from any other language that is generated, or compiled, where you never have to look at the output. It's a problem if we need to go back frequently to the generated output to understand what was intended. For a while now, I've taken the approach that JS should not be hand written by the masses. Introducing that as a design constraint has resulted in a much nicer design overall, with cleaner abstractions that I would not have explored otherwise. For example, when designing a rather domain-specific UI, I opted for specific but XHTML-valid markup that had a tiny little JS post-processor. While I wrote the JS post-processor, it was a fraction of the volume of the code that was written in the domain-specific markup.

Taking this further, I cannot see any legitimate reason why tiny grammars are not built specific to narrow situations that generate JS. This is different from generalised, broad frameworks like GWT, which cater for general-purpose programming needs. Even in my example above, and with hindsight, I could well have written the post-processor in any language that was convenient.
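To illustrate the "tiny grammar that generates JS" idea, here is a toy sketch: each rule is a one-line declaration (`event selector -> handlerName`), and a small generator emits the jQuery-style wiring code. Both the grammar and the emitted calls are invented here purely for illustration, not taken from the commenter's project.

```javascript
// Compile declarative event-wiring rules into JS source. The input is a
// tiny domain-specific notation; the output is ordinary jQuery-style code.
function compileRules(rules) {
  return rules
    .map(function (rule) {
      // e.g. "click #save -> onSave" -> ["click", "#save", "->", "onSave"]
      var parts = rule.split(/\s+/);
      return '$("' + parts[1] + '").on("' + parts[0] + '", ' + parts[3] + ');';
    })
    .join("\n");
}

console.log(compileRules(["click #save -> onSave", "submit #form -> onSubmit"]));
// $("#save").on("click", onSave);
// $("#form").on("submit", onSubmit);
```

The generator itself is trivial; as the comment argues, the real work is in designing the input notation.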

Lastly, treating JS as an output also changes testability. In my limited experience, the generation phase is the smallest effort compared to getting the grammar for the input language designed and built. Testing the grammar may well be simpler, but I guess it depends on the grammar itself. In any case, if testing the JS is important in itself, I'd consider generating the equivalent JS tests from the generator itself.

Troubleshooting in Firebug would be difficult, I'd think, because the output and the written source would be different. That is actually the reason I've held off plunging into CoffeeScript.

Imagine a client side debugger that converts the js back to the coffeescript while debugging :)
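For what it's worth, that wish is roughly what the source map proposal (emerging around this time) aims at: the compiled/minified file points back at its original source so a debugger can show CoffeeScript instead of the generated JS. A minimal sketch, with one caveat: the `mappings` value below is a placeholder, not a real Base64-VLQ encoding.

```javascript
// A compiled app.js would end with a pragma comment such as
//   //@ sourceMappingURL=app.js.map   (the syntax at the time; later //#)
//
// The .map file is JSON that ties generated positions back to the
// original source files.
var exampleMap = {
  version: 3,
  file: "app.js",
  sources: ["app.coffee"],
  names: [],
  mappings: "AAAA" // placeholder; real maps encode line/column offsets here
};
console.log(exampleMap.sources[0]); // "app.coffee"
```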

Steve Gentile

Tuesday, 19 July 2011 12:45:06 UTC

DLL hell in the 90s, XML hell in the early 21st century, and now behold JS hell for the next 10 years. It seems like the cycle runs every 10 years.

Tuesday, 19 July 2011 14:14:46 UTC

It seems to me that these autogenerating mechanisms were created because large software companies waged browser wars designed to make it nearly impossible to write browser-compatible JavaScript.

Now these same companies want us to use their code generating controls/plugins/CMS.

I prefer to use RAD tools like jQuery, which give developers a compatible event-driven environment for developing web applications with minimal code.

bitFlinger

Tuesday, 19 July 2011 18:50:15 UTC


There is a big difference between a ton of carefully written JavaScript that has been rendered unreadable by minification and the garbage spat out by a WebForms app. I thought we were moving away from WebForms and this attitude toward web development as a community? Shouldn't JavaScript be considered a first-class web language in its own right, rather than just the byproduct of toolkits x, y, and z? With all of the great things going on with JavaScript right now on both the client and the server side, I think your thesis is becoming less true all the time.

Tony

Monday, 25 July 2011 02:25:34 UTC

I have to say a few more things about WebForms first, since it was brought up. The code generated by a WebForms app is an abomination. The ViewState is a bloated disaster, and the fact that the whole thing works by putting every single "page" inside a form is terrible. WebForms was an abstraction created to disguise the way the web works from Microsoft developers so they didn't have to learn new things to get stuff done. They could go right from writing their crappy thick-client VB apps to crappy bloated web apps. Why learn about how GETs, POSTs, responses, and requests actually work? We've got this completely arbitrary thing called the "page life cycle" and that's all you need to understand. Oh, you want to be able to make something happen without the page refreshing? Don't bother learning JavaScript! It doesn't work that well with WebForms anyway, since it hijacks all your element IDs. Instead we've got this thing called the AJAX Control Toolkit! It's another thing we've created that keeps you from having to learn anything!

Tony

Wednesday, 04 January 2012 20:07:03 UTC

Seems I'm late to the game again... my two cents:

I think the analogy/point is well made that JavaScript (in modern, responsive web implementations) is really starting to feel like alien machine code, and the browser more like a "runtime".

I agree with both Jon and Nicholas on the whole App != Site comparison for the same reasons. I often find myself stuck for a moment here, as it is sometimes hard to decide on an appropriate approach because my project's content is somewhere between the two paradigms... where some chunks are simply document-like and a carefully crafted bit of semantic markup makes a lot of sense. Other parts (CMS-like functionality, animated and/or same-page UIs) seem to fit the App approach better.

It would certainly be nice to have some higher level abstractions available to program easier and compile down to something like JS, but it becomes yet another platform/approach for developers to consider - paralysis stemming from vast opportunity.

And (cheeky), wouldn't that get the hipsters and corporate coders all up in a fit, knives out, if we all wrote .Net C# that compiled down to Node.js and client-side JS anyway? Scary.

Interestingly enough, John Resig is taking an entirely different approach: at the Khan Academy they are starting to teach JavaScript as a first language, which is an intriguing concept I was speaking to Scott Allen about a bit. Just like back in the college days of learning assembly language and C before ever touching anything else.

Is it better to learn the guts and gristle then in the name of better understanding and good craftsmanship? Or do we just provide people the tools to swiftly weave a larger web, without worrying about them knowing what is underneath the sheets?

As a code-on-sleeve nerd who likes to know the heart of stuff, who often spends a ridiculous amount of time learning rather than getting 'er done, I am as yet undecided.