Thursday, December 15, 2011

I recently stumbled over a thread on the jQuery mailing list about how to modularize jQuery, which keeps getting bigger with every version even though not everybody uses every feature. Some argued for changing jQuery to support "dead code elimination" via Google Closure Compiler's advanced optimizations, which would eliminate unused code from people's projects; others wanted to use AMD/require.js modules instead, which enable loading only the required dependencies.
Having just done a little project on closure compiler at work, I figured it might be possible to support both of those ideas equally. And so I got coding…

How it works

With my change, Closure Compiler (CC) gets experimental first-class support for both CommonJS and AMD modules. This means that CC knows about these types of modules and performs special transformations and optimizations for them.
The high level goals are:

Concatenate all modules into a single large file.

Automatically order modules so that dependencies are fulfilled.

Make it really easy for CC to apply its built in optimizations.

Step 1: Transform AMD to Common JS modules

Add --transform_amd_modules to the command line options of CC to transform AMD modules to CommonJS modules. In this first step, a module like

define(['foo'], function(foo) {
  return {
    test: function() {}
  };
});

gets transformed to:

var foo = require('foo');
module.exports = { test: function() {} };

From now on we don't have to worry about the peculiarities of asynchronous AMD anymore. This step by itself might be useful to some people, e.g. if you want to use AMD code in Node.js directly.

Step 2: Process Common JS modules

Add --process_common_js_modules to the command line options of CC to enable specific processing of CommonJS modules.
Most CommonJS implementations (such as Node.js) work by wrapping all code of a module in a closure like this:

(function(require, exports, module) { /* your module code */ })(…)

The problem is that this module pattern is really hard for Closure Compiler to optimize: with function calls and scopes involved, everything becomes very dynamic and hard to understand statically.
This is why I implemented a transformation for CommonJS modules that allows them to be safely concatenated into a single JS file without the use of closures. It works by renaming all global symbols in a module so that they never conflict with those of a different module.
Take a CommonJS module named "example/baz" that requires the module "foo". After the transformation, exports becomes module$example$baz, while require('foo') is turned into module$foo. As you see, both exports (and by proxy module.exports) as well as require are converted into direct references to the specific module. All global variables and function names are suffixed with the module name, so that they can no longer conflict with any other module.
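As an illustration, here is a hand-written sketch of the renaming idea; the exact generated names are up to the compiler, and the stand-in module "foo" is made up for this example:

```javascript
// Hand-written sketch of the renaming scheme; names are illustrative,
// not the compiler's actual output.

// A stand-in for the already-transformed module "foo":
var module$foo = {};
module$foo.test = function() { return 42; };

// The module "example/baz", with all globals suffixed by the module name:
var module$example$baz = {};                        // was: exports / module.exports
var foo$$module$example$baz = module$foo;           // was: var foo = require('foo');
function helper$$module$example$baz() {             // was: function helper() { … }
  return foo$$module$example$baz.test();
}
module$example$baz.test = helper$$module$example$baz;
```

Because everything is now a plain top-level variable, concatenating many such modules into one file is safe without any wrapper closures.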
Note that while these sources seem really verbose, Closure Compiler will, of course, make all variable names really short later in the compilation process.

Step 3: Managing dependencies

Add --common_js_entry_module=foo/bar.js to your command line options to specify your "base" or "main" module. Starting from this module, the system figures out the dependencies and only includes those in the final output, in the right order.

How to use it
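Putting the flags from the steps above together, a hypothetical invocation might look like this (file names and paths are placeholders):

```shell
# Sketch of a combined invocation; adjust paths to your project.
java -jar compiler.jar \
  --compilation_level ADVANCED_OPTIMIZATIONS \
  --transform_amd_modules \
  --process_common_js_modules \
  --common_js_entry_module=foo/bar.js \
  --js foo/bar.js \
  --js lib/foo.js \
  --js_output_file compiled.js
```

The entry module plus everything it transitively requires ends up in compiled.js; modules nothing depends on are dropped.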

On Performance

I'd argue that if you need to load some JS, doing it in a single request usually wins. Having *ALL* your JS in one file is, however, usually not a good idea. You want to load things incrementally. How to do that within the framework of what I described above is left as an exercise to the reader.

Friday, November 25, 2011

Google Closure Compiler supports a compilation mode called "ADVANCED_OPTIMIZATIONS". There are official docs, but I wanted to highlight a couple of interesting aspects of what activating this mode means in practice. These features set Closure Compiler apart from other minifiers such as UglifyJS and can potentially help to substantially decrease code size. They do, however, come with trade-offs that limit your flexibility in using JavaScript syntax. It is a common myth that advanced optimizations require using JSDoc types (which are then checked by Closure Compiler). This is not true.

Property renaming

Closure Compiler can rename properties: foo.longName becomes foo.a. Of course, this will break once you call foo['longName'], so be careful :)

Property collapsing

Closure Compiler can collapse property chains (which are often used to represent namespaces in JS): foo.bar.baz becomes foo$bar$baz (and will eventually be minified with standard variable renaming). There are various exit conditions when this optimization cannot be applied: never assign to a global object twice, and don't alias objects in the property chain as in a = {b: 0}; c = a; c.b = 5;. This optimization reduces code size and can increase execution speed because fewer lookups have to be performed.
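A minimal sketch of why aliasing defeats collapsing: if the compiler rewrote a.b into a flat variable a$b, a write through the alias c would no longer be visible.

```javascript
// Aliasing into a property chain: the pattern the compiler must reject.
var a = { b: 0 };
var c = a;   // alias to the object inside the chain
c.b = 5;     // after collapsing a.b into a variable a$b, this write would be missed

console.log(a.b); // 5 here, but collapsed code would still see 0
```

This is exactly why the optimization has to bail out as soon as it cannot prove that no alias exists.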

The catch

Don't use the module pattern. Code that includes the module pattern usually will not minify well with Closure Compiler. Yeah, sorry about that :) This is not because the Closure Compiler engineers do not like the module pattern (and similar constructs), but rather because the invariants that have to exist for the optimizations described above (and others) cannot be (easily) shown for code with very complex runtime semantics. We recently had an awesome conversation on G+ about how to fix this. The idea is to transform special cases of idiomatic JavaScript (in this case using AMD) to a form that can be efficiently compiled by Closure Compiler. Let's do this!

Thursday, October 6, 2011

Last weekend was the third JSConf EU. Again it was both an extremely challenging and rewarding experience as an organizer. We are honored to welcome such amazing attendees and speakers to create the ultimate nerd-heaven weekend. Thanks for coming from Jan, Holger and me.

People seem to like our little conference. Here are a couple paragraphs about how we put it on. The world still needs more JSConfs. Come to one of ours and then talk to us about how to do one in your part of the world.

Done for love, not profit

We spend every dime that we take in on the conference and don't aim for profit. Often sponsors are signed at the last minute when all vendors are already paid for, so it is actually hard to spend more money. In this case we'll just increase the party and food budget. Not going for profit makes a whole different conference experience possible. While conferences usually optimize for creating the best possible value for the companies that pay for attendees' tickets or for the sponsors, we can make seemingly irrational decisions, like having the best espresso maker in town at our venue, that make a big difference for our attendees.

Parties

We always do 3 parties: one before the conference, a big one after the first day (because everyone will have time and be in town) and another one after the conference. As we are in Berlin, we try to bring the original Berlin party experience to our attendees. That means torn down venues in abandoned homes and lots of techno. Drinks are always free for all, which certainly helps in making the parties great. We also invite the local Berlin developer scene and friends to the parties. We want to show the people who travelled here how awesome this community is, and we want to give the community a chance to meet all the great people who come. This usually works out great.

Speakers dinner and other amenities

Can't have one because there is a big party for everyone :) 10-20% of our overall attendees are actually speakers. Another large percentage of attendees have spoken at previous installations of the conference. This is another reason why, e.g., having a special area of the venue that is exclusive to speakers does not make sense. In conjunction with the no-stage rule below, this ensures that speakers and attendees freely mix and speakers don't camp out hiding. And attendees don't get to feel inferior.

Stage

We do not have a stage. We feel that part of what makes our conference special is that speakers and attendees are on one level, and not having a stage to speak on emphasizes this point.

Speaker travel and hotel

We pay for our speakers' travel. On the other hand, we are happy if a speaker's employer offers to pay for travel; as we are not-for-profit, we immediately turn around and use the budget to make something else more awesome. We also pay for their hotel: we add at least two nights around the conference (for all, not just international speakers), we make it a fancy hotel, take care of the wifi and do everything else to make sure speakers are treated like rock stars. We don't quite pick them up from the airport individually, but we are thinking about it. If you treat speakers nicely, they are more likely to bring their A-game talks.

Technical depth of talks

We encourage our speakers to aim their talks at an audience of experts in their particular field. Thus there won't likely be any "Introduction to X" talks. If one does not happen to be an expert in a given field, it is very likely that one will reach a point in a talk where one does not understand everything. At this point, if you are not a fan of mind-blowing brain massages, JSConf is not for you. On the other hand, this actually enables the conference to push attendees to new levels of understanding and interest. One can always read up on details after the conference and maybe eventually become an expert.

Talk length

Most of our talks are 30 minutes long. The reason is that we want to invite the people who actually created the things we talk about. Not all of them are necessarily experienced public speakers. Doing a 30 minute talk is significantly easier than longer formats, which helps keep the quality very high. And if you can't convey your awesome topic in 30 minutes, the attendees are likely to get bored.

Timing

We do the conference on the weekend. While this might not be ideal for families, we make this tradeoff because we are looking for attendees who are willing to sacrifice a weekend to come. This is also the reason why we start our ticket sales on Sundays.

Size

We did 350 people this year. My personal experience was that this was already a little too big, as such a number of people requires executing everything with great precision, which is super unlikely for amateurs like us (professionals can't do this either, but they don't care, because lots of attendees means lots of money). 250 people seems like a great number for a conference; smaller doesn't hurt.

Round tables

Round tables at a conference make it look more like a wedding, and they don't force everyone to face the same direction. They also encourage conversations, which is our ultimate goal. See also: Building Serendipity

Power under every table

We are geeks and we love our laptops, so we make sure we can charge them whenever we want. Next year we should have power in the lounges as well.

Food

When a caterer says they can provide you with great food, we assume they are lying. Collecting references and testing the food is a must. It is important to have multiple food lines to make sure everybody gets their lunch quickly - even if not everyone understands this within milliseconds. In Berlin, make sure they stock up on Club Mate, and beer.

Venue

The first year we did JSConf EU we got lucky by finding a venue that was perfect for our size, yet very much provided a unique Berlin experience that you can't easily find anywhere else in the world. Ever since, we have been looking for raw, industrial venues that don't even need to have electricity, let alone an internet connection, and then bring in everything to create a truly unique experience. Compare that with doing a conference in a business hotel that makes you bored even before the first talk has started. We learned that it is key to have great light artists who can turn every shithole into something very special.

Wifi

Impeccable wifi is a must. Venues usually don't know, don't care, or even lie about their capabilities. 16 Mbit DSL over two routers is not enough for 200 people. Heck, it isn't enough for 20 people. See my article about this topic for suggestions. Do not trust venues unless you have tried their wifi with a crowd as big as yours.

Sound

We always hire professional sound engineers for every track. They do things such as turning down the volume when a speaker sneezes, which is a small thing but quite valuable when you sit in the audience. Make sure you have headset microphones, as they work much better for inexperienced speakers, and that you have a monitor box for the speaker.

Video

We rent the most expensive and awesome professional projectors money can buy (even if we then have to save budget elsewhere). We'll use DVI next year; using VGA was a mistake (although not a bad one). Make sure you have VGA-to-DVI-I equipment and a pack of all possible converters on the speaker desk. (You'll also want an Apple power adapter there, as usually 95% of speakers use Apple hardware.)

Selling tickets

Conferences only sell tickets when they are sold out. This is a fact of life. Should you ever consider running a conference, count your friends who will definitely buy a ticket. This is the number of tickets you sell in early bird round one. Make sure your friends buy their tickets the second sales open and then tweet about it immediately. The rest of the tickets will sell like a charm. Fortunately we don't have to pull tricks like this anymore, as we usually sell out all tickets in seconds. We have three ticket categories: 1. a special rate for attendees of the previous US conference (if you come to both you are truly bad ass), 2. an early bird with a fixed percentage of total tickets, say 20-40%, and 3. regular tickets. There are no student discounts, no press tickets, no community tickets. Here is why: since all money goes into making the conference, we can make the tickets as cheap as possible, and we feel it is fairer for everybody to pay a low price rather than having some pay more so a few can pay less. This means some won't have the funds to attend, but that's how it is. We usually have a number of sponsors running contests around tickets, which gives you another chance to get in.

Sponsors

Sponsors appreciate sponsoring great conferences. Great conferences do not let their sponsors influence their decision-making. The way we make sure this happens is that we generally follow this procedure while budgeting: the amount of money taken in from the attendees covers all base costs (venue, speakers, food, everything), but it might mean that we have 3-star food and only a limited amount of free drinks at the parties. Sponsors then come in and upgrade the food, create a larger drink tab, or pay for another DJ set or something. But they can never interfere with the base conference setup. That said, we do not feel it is bad to have sponsors speak; in fact, we love it. But it is important to be very explicit about the fact that product advertisements are not acceptable. If in doubt, we check the slides in advance and work with sponsors to make sure they don't miss a chance to totally delight the finest people in our industry.

The unexpected

We always try to do something no one in the audience expects. This might be a musical performance or a talk about nuclear physics.

Thursday, June 16, 2011

Node.js has a special API called process.nextTick which schedules a function to be executed immediately the next time the event loop is idle.

Web applications often try to achieve a similar result using setTimeout. The problem is that setTimeout with a zero time parameter does not schedule for immediately, but rather for some time a little later. This *can* lead to real performance problems.

I benchmarked a couple of ways to do postMessage between workers, iframes and on the current window itself. It turns out doing postMessage on your current window might be a really good way to implement process.nextTick in browsers.
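A sketch of that technique (the factory and message key are my own names; real implementations vary): post a message to your own window and run the queued callback when it arrives, which happens much sooner than a setTimeout(fn, 0) would fire.

```javascript
// Sketch of a browser nextTick built on window.postMessage.
// makeNextTick and the KEY string are hypothetical names;
// in a real browser you would call makeNextTick(window).
function makeNextTick(global) {
  var queue = [];
  var KEY = 'next-tick-sketch';
  global.addEventListener('message', function(e) {
    // Only react to our own messages, posted to ourselves.
    if (e.source === global && e.data === KEY) {
      var fn = queue.shift();
      if (fn) fn();
    }
  }, false);
  return function nextTick(fn) {
    queue.push(fn);
    global.postMessage(KEY, '*');
  };
}

// In a browser: var nextTick = makeNextTick(window);
```

Note that other scripts also listening for message events will see these messages, which is why a distinctive key (or a dedicated MessageChannel) is a good idea.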

return Bailout("reference to a variable which requires dynamic lookup");
return Bailout("Object literal with complex property");
if (!Smi::IsValid(i)) return Bailout("Non-smi key in array literal");
return Bailout("unsupported const compound assignment");
return Bailout("compound assignment to lookup slot");
return Bailout("invalid lhs in compound assignment");
return Bailout("non-initializer assignment to const");
return Bailout("assignment to const context slot");
if (proxy->IsArguments()) return Bailout("assignment to arguments");
return Bailout("assignment to LOOKUP or const CONTEXT variable");
return Bailout("invalid left-hand side in assignment");
Bailout("arguments access in inlined function");
Bailout("Function.prototype.apply optimization in inlined function");
return Bailout("call to a JavaScript runtime function");
Bailout("delete with global variable");
Bailout("delete with non-global variable");
return Bailout("invalid lhs in count operation");
return Bailout("unsupported count operation with const");
return Bailout("lookup variable in count operation");
return Bailout("Unsupported non-primitive compare");
return Bailout("unsupported declaration");
return Bailout("inlined runtime function: IsNonNegativeSmi");
return Bailout(
return Bailout("inlined runtime function: ClassOf");
return Bailout("inlined runtime function: SetValueOf");
return Bailout("inlined runtime function: RandomHeapNumber");
return Bailout("inlined runtime function: GetFromCache");
return Bailout("inlined runtime function: SwapElements");
return Bailout("inlined runtime function: MathSqrt");
return Bailout("inlined runtime function: IsRegExpEquivalent");
return Bailout("inlined runtime function: FastAsciiArrayJoin");

I was also pleased to learn that my assumption that modern JS engines de-optimize when they see use of "arguments" is no longer true. E.g. uses like fn.apply(this, arguments), which is often used in AOP-style function wrapping, are able to run in optimized code. I benchmarked various uses of arguments in this JSPerf testcase. The idea is to have a loop in each test which should be easily optimized but which takes so long to run that the actual usage of "arguments" in the first statement should not influence the benchmark itself. Suggestions on how to make this better are appreciated :)
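For reference, this is the AOP-style wrapping pattern in question (wrap and add are made-up names for illustration):

```javascript
// AOP-style wrapper that forwards all arguments to the wrapped function.
// The fn.apply(this, arguments) line is the usage that engines used to
// punish with deoptimization.
function wrap(fn) {
  return function() {
    return fn.apply(this, arguments);
  };
}

function add(a, b) { return a + b; }
var wrappedAdd = wrap(add);
console.log(wrappedAdd(2, 3)); // 5
```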

Does anyone know of similar ways to find out about which elements of JS will trigger running in non-optimized mode for other (open source) engines?

Thursday, June 2, 2011

Just wanted to say how proud I am of my mom for quitting her job after some 40 years of working, going back to college, finishing that in no time and now opening her own practice as a psychologist: Psychologische Beratung Elmshorn

Sunday, May 15, 2011

Good WIFI at tech conferences is hard. Very hard. Usually it doesn't work.

At JSConf 2011, Meno Abels and I tried to make it work. All the credit really goes to networking guru and all-things-software Meno. Also thanks to Stephouse.net for their awesome work on the connectivity and the access points. In the following paragraphs I will walk through the challenges one faces when it comes to WIFI at conferences. This article will stay quite shallow technically; if I ever have more time I will dig deeper.

0. Basics

Dear conference organizer, this is the part that can be easily fixed: you need a person who takes care of the WIFI as a core priority. The WIFI will break, and you will need someone who is willing and capable to put on the wetsuit and jump into the shit. E.g. at least one of your access points will be overwhelmed at any given time. Many things can be done to fix this. Somebody will have to do it.

1. Never trust anyone

Now you booked a venue and they say that they can handle the WIFI for you. Chances are, they are lying. In any case, if you want to go this route, ask for references and call the references. Purespace wanted to handle the Nodeconf traffic over a 2 Mbit/s down, 300 kbit/s up DSL connection. That will not work: chances are you have more than 300 kbit/s in TCP ACK packets for your downstream traffic alone. At Nodeconf, Stephouse saved the day by coming in on 5 minutes' notice and installing a microwave link in about 45 minutes (remember the wetsuit thing from above).

2. Number of attendees and number of devices

Calculate 2.3 devices per attendee.

The primary problem this creates is overwhelmed access points. What you can do to fix it:

Monitor the number of people attached to each access point.

Add another access point in zones where more people come together than you expected.

Use as many access points as possible. Lower the antenna power as much as possible to decrease the range of each AP.

Never have two APs close to each other on adjacent channels.

Don't use encryption. This will be painful, but it really relaxes the CPU of your APs.

People sometimes move in groups but stay attached to their old AP, while others move in and attach to that AP because it is now the closest one. This creates an uneven distribution of people over the APs. Easy fix: throw everyone off the AP. The devices will then try to connect to the closest AP again.

3. Unknown Building, Temporary Setup

Conferences are by definition temporary setups. You will have no time to tune your system under real world conditions. But still, you will need to update the configuration throughout the conference to make things work better under real load.

4. Bandwidth

Calculate 100KBit/s per attendee in both directions.

You can live with a little less uplink traffic, but don't go with consumer-level DSL. In case you cannot get that kind of bandwidth from a single provider, take all you can get from multiple vendors and use the software that Meno built for JSConf.eu to aggregate the links. (Warning: only an option if you have a black belt in networking kung fu.)
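Plugging the rules of thumb from sections 2 and 4 into a quick back-of-the-envelope calculation, using this year's JSConf EU size of 350 attendees:

```javascript
// Back-of-the-envelope capacity estimate from the rules of thumb above.
var attendees = 350;
var devices = Math.ceil(attendees * 2.3);   // ~2.3 devices per attendee
var mbitEachWay = attendees * 100 / 1000;   // 100 kbit/s per attendee, per direction

console.log(devices + ' devices, ' + mbitEachWay + ' Mbit/s each way');
// → 805 devices, 35 Mbit/s each way
```

Which makes it obvious at a glance why a consumer DSL line was never going to cut it.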

5. Bandwidth in the Air

Effective bandwidth per WIFI channel is about 20 Mbit/s, and WIFI has 11 channels. That means you will never get more than 220 Mbit/s in the air, ever. This bandwidth has to be shared by everybody in the room. If you put 5000 people in a relatively small room, your WIFI will be slow. There is no way around that; it is simply physics. (You may be able to add the 5 GHz channels, but we recently experienced problems with some Apple devices.)

6. People

People are the primary problem for conference WIFI. Actually, not people, but the software running on their computers which they did not disable. This includes bittorrent clients and backup software. One person at JSConf uploaded 9.6 GB in the first 30 minutes of the conference. This means that one person used almost 80% of all bandwidth. Identifying these "power users" will get your conference a long way towards good WIFI. Usually you will only have an IP address, so your only option is to either block the person entirely or to block the remote IP where the traffic is going. At JSConf we introduced "social traffic", which links all traffic to Twitter identities. This way we could use Twitter @-messages to ask people to disable rogue services.

7. JSConf learnings

All of the above we learned at previous confs. This is what we learned at JSConf:

With a few hundred people at a conference there will be a couple of stupid persons in the audience. Redirecting all traffic from http://twitter.com to https://twitter.com goes a long way towards fixing this problem.

Using an auth service such as social traffic requires whitelisting of IP addresses. Make sure you hijack all DNS traffic (everything on port 53, regardless of the DNS server used) so that you are able to control e.g. the IP address you whitelisted for twitter.com.
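One possible way to do that DNS hijacking on a Linux gateway; the interface name and resolver address are placeholders, and this is a sketch of the idea, not the exact setup we used:

```shell
# Rewrite all DNS traffic (UDP and TCP port 53) arriving on the
# conference LAN to our own resolver, no matter which DNS server
# the client has configured:
iptables -t nat -A PREROUTING -i eth1 -p udp --dport 53 \
  -j DNAT --to-destination 10.0.0.1:53
iptables -t nat -A PREROUTING -i eth1 -p tcp --dport 53 \
  -j DNAT --to-destination 10.0.0.1:53
```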

Wednesday, May 11, 2011

OK, actually I like micro libs. The title of this post is only part of my Hacker News Optimization (HNO) strategy, developed by my PR agency.

There has been some heated discussion in my Twitter feed over the last few days, following #jsconf, about the benefit and usability of micro libs.

Now one can make many good points pro and contra using micro libs, but one stood out to me the most:

Micro libs suck because they have weak cross browser compatibility.

As should be obvious, this is a classic bullshit argument. I can prove it wrong easily: the ultimate micro lib, Vapor.js, has the best cross-browser compatibility of all JavaScript code that has ever been written.

Coming back to the original argument, even joking aside, it is still wrong. There might be individual micro libs with certain bugs, but that is true for all environments. Roughly 99% of all jQuery modules are badly written and contain massive bugs. The situation might be better in big toolkits such as qooxdoo, dojo and Ext, but why is that really?

Right, these guys actually test their stuff.

Testing

Cross browser or interoperability problems between micro libs are actually a problem of insufficient testing. Now the good thing is that we, as the JavaScript community have one pretty unique feature that will help us solve the testing dilemma:

Thus we may copy everything that made CPAN successful without falling prey to the not-invented-here syndrome that made gems, pips and all the others only sub-par competitors.

Isaac showed great vision at #nodeconf when he described the future of npm using TAP in the testing layer. TAP, which comes from the Perl community but really has nothing Perl-specific about it, is a protocol for test-runner output. It is really easy to produce, human readable and fairly easy to consume by machine. With TAP, everyone can use their favorite testing framework, even cucumberish natural language comprehension using regexes, as long as it supports TAP output. On the other side, we are able to build awesome tools that process the TAP output of your tests and do awesome things with it.
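To make the format concrete, here is a toy TAP producer (the tap helper is made up for this sketch; real runners emit the same line shapes):

```javascript
// Toy TAP producer: runs simple test functions and returns TAP output.
function tap(tests) {
  var lines = ['TAP version 13', '1..' + tests.length];
  tests.forEach(function(t, i) {
    var ok;
    try { ok = !!t.fn(); } catch (e) { ok = false; }
    lines.push((ok ? 'ok' : 'not ok') + ' ' + (i + 1) + ' - ' + t.name);
  });
  return lines.join('\n');
}

console.log(tap([
  { name: 'addition works', fn: function() { return 1 + 1 === 2; } },
  { name: 'always fails', fn: function() { return false; } }
]));
```

The plan line (1..2) plus one "ok"/"not ok" line per test is essentially the whole protocol, which is why nearly any tool can produce or consume it.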

See for example this page that shows automated test results gathered from people installing Archive::Zip. It mostly passes everywhere, but there are some weird outliers that go wrong. Does that sound familiar? Yes, this is just like what happens in browsers.

Using our unified testing infrastructure, each library would be tested in all possible browser environments. A select box then allows you to choose which environment is important to you, and only modules compatible with that environment will be shown. Remember, it is perfectly valid that zepto.js chooses not to run in IE. If that is important to you, update your selection and it will no longer be recommended.

Closing words

Let's build this infrastructure. I, for one, donate my free time starting now.

Sunday, April 10, 2011

I don't really know why I'm writing about this now, but it is on my mind and I want to let it go :)

I like jQuery. Boom. I'm sorry if that hurts your feelings. I like it as a neat DOM access layer and a basic building block of desktop websites and applications. I'm saying building block because jQuery alone will not get you far, but that is another story.

There is one rule that you have to keep in mind when you are using jQuery and you want your code to scale beyond a few lines:

You can never use jQuery modules.

jQuery modules (as in methods and properties hanging off jQuery's prototype object or, to a lesser extent, things hanging off the jQuery function itself) are practically always the wrong design.

They all live in a single namespace, so there is risk of collision.

They force a DOM centric application design.

It almost never looks nice as an API if you are not implementing actions like "show" and "hide".

Instead, for almost all use cases where people use jQuery modules, it would be better to use a "stratified" API, which in this case just means that you pass the jQuery object you want to work on to a function, rather than calling the function on the jQuery object. While jQuery's beauty derives from chaining its basic methods, this usually doesn't make sense for more complex modules.
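A sketch of the difference (renderBadge and the jQuery-object stub are made up for illustration):

```javascript
// Plugin style, what this post argues against:
//   jQuery.fn.renderBadge = function(count) { return this.text('(' + count + ')'); };
//   $('.badge').renderBadge(5);

// Stratified style: a plain function that takes the jQuery object
// as an argument, living in whatever namespace you choose.
function renderBadge($el, count) {
  return $el.text('(' + count + ')');
}

// Tiny stand-in for a jQuery object so this sketch is self-contained:
var $badge = { value: '', text: function(v) { this.value = v; return this; } };
renderBadge($badge, 5);
console.log($badge.value); // (5)
```

The function no longer competes for a slot on jQuery's shared prototype, and nothing about it forces a DOM-centric design on the rest of your code.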

jQuery.sub() has been introduced to somewhat reduce the risk of namespace collision. I think it will go down in history as the most blatant example of a leaky abstraction because you end up never knowing the exact type of a given jQuery object unless you instantiated it yourself.

What about all those nice jQuery modules you can copy and paste into your application to make fancy things? My personal experience is that most are of very bad quality, so it makes sense to rewrite them in a stratified API style (usually this only takes a few minutes) and then assume ownership of the code and fix bugs as they appear.

Wednesday, February 16, 2011

There has been a lot of discussion in my Twitter peer group lately about the right level of website performance optimization and how to go about it.

Premature optimization is the root of all evil.

This sentence is true, but it might also seem like an excuse to never optimize, so let's elaborate a little on when and how to optimize your code.

Let's start by defining optimization for the purpose of this blog entry:

When optimizing for performance, one chooses between two or more implementation strategies and picks the fastest one, based on the reason that it is faster than the other strategies.

Now that we have defined performance optimization as a choice between multiple implementation strategies, it becomes clear that it is just a subset of the more general concept of software design. When picking between implementation strategies in software design, one regularly has to pick the lesser of several evils to achieve the best possible outcome. The trade-offs are:

Maintainability

Scope

Quality

Performance

You never get more than 3 of those right with limited resources. This also means that if we want great performance, we have to be cool with worse maintainability, reduced scope and/or worse quality. As software designers we need to be aware of this and should make a very conscious decision when we opt for great performance.

Now, Steve Souders and friends have proven at great length that in the special case of website and web application load time, performance can have a great positive impact on the overall success of a project; thus it will often be the right thing to opt for performance and e.g. decide to reduce scope.

Now if performance optimization can be a good thing, then why is premature optimization always evil? Because "premature" refers to the fact that the performance optimization was not done as a conscious design decision but for other reasons: e.g. because we can, or because when writing that ray tracer the other day something proved to be faster.

To differentiate mature from premature optimization, I already laid down that conscious decisions are the key factor. What is needed for a conscious decision? Data.

When thinking about data and performance, profiling comes to mind. Yes, profiling is very, very important, but I was also referring to data in terms of how the optimization will impact the scope, quality and maintainability of your application.

Unfortunately, profiling is not the solution to all performance problems. Often you have to make decisions early in the software design process, when little code is written and you cannot really profile anything (you could micro-profile, but you don't know yet whether that code is relevant to your overall performance). Thus, besides profiling, you also want to use experience to get some things right in the first place. If you don't have that experience, buy some books. Experience is often also the only way to assess how an optimization will impact things like quality or maintainability. I guess you need a lot of experience to write good software :)

The good thing is: Computers are fast. This leads to an important rule:

When things aren't too slow, optimizing them will always be a bad idea, because the trade-offs in quality, maintainability and scope will result in a net loss.

When designing your software, you should never pick an implementation strategy because it is faster unless you have data (through profiling or experience) to support your decision.

In concrete terms for JavaScript, when picking between:

C-style loop vs. forEach -> use forEach.

String builder vs. concatenation -> use concatenation.

Script tag vs. script loader -> use a script tag.

etc.
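The readable-first choices from the list above look like this in practice:

```javascript
// Prefer the clear, idiomatic versions until profiling proves
// they are a bottleneck in your application.
var names = ['alice', 'bob', 'carol'];

// forEach instead of a C-style index loop:
var upper = [];
names.forEach(function (name) {
  upper.push(name.toUpperCase());
});

// Plain string concatenation instead of a string builder:
var greeting = '';
names.forEach(function (name) {
  greeting += 'hi ' + name + '! ';
});

console.log(upper.join(','));  // ALICE,BOB,CAROL
console.log(greeting.trim());  // hi alice! hi bob! hi carol!
```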

If you ever pick the faster alternative, that is cool as well; just make sure, when you work on my team, that you are able to back up your decision with data, or I'll make you change it to the slower code with a really embarrassing commit message :)

PS: Points 1 and 2 in the list above are examples where you will only ever opt for the faster version after profiling. Point 3 is a good example where your experience might tell you that the trade-off of using a script loader is well worth the effort.

Tuesday, February 15, 2011

Full Disclosure: I work for Google but Streamie is my private just-for-fun project. Your results may vary.

I will do a post on Streamie's overall analytics soon, but let's start off with the numbers related to the Chrome webstore.

This data is from the first week of 2011. Streamie was used by a hardcore scene of enthusiastic users, and the numbers weren't big. At this time Streamie was already in the webstore, but it wasn't actually available to users of the stable version of Chrome.

The Chrome webstore launched to all users on January 7th and on January 12th Streamie became a featured application. This is what happened:

I will put more data on what happened after that in a later post, but as far as the Chrome webstore is concerned it continued to send a significant amount of traffic to the site (both as direct referrers and from the newtab page).

Some final numbers:

Users who have Streamie installed as an app in Chrome have

- spent 300,029 minutes using Streamie

- sent about 10,000 tweets with Streamie

I'm personally somewhat surprised by the overall traffic that Streamie is receiving. It is nice that people like it. The Chrome webstore definitely helped push the traffic to a new level, which, of course, triggered a lot of blog posts and tweets and then more traffic :)

Sunday, February 13, 2011

Streamie was always a very selfish project. I wanted to build a Twitter client that is optimized for my personal usage: consuming tweets from about 350 people on my main account.

I have to be brutally honest: I have always used a second Twitter client (Tweetie aka Twitter for Mac) in parallel. That is because Streamie does not support multiple accounts, which is a feature that I need. Why did I not implement this? Because it is not important for my consumption habits. On all the other accounts (@jsconfeu, @hhjs, @streamie and some others) I never actually read the tweets but only watch for mentions and DMs.

That said, Streamie is moving forward as a get-out-of-your-way tweet-consumption application: all information that is not strictly necessary to read a tweet has been removed from the default view. You can't see the name of the author (only their avatar; this works for me even though I suck at remembering faces), the age of the tweets, or any buttons at all. Just the avatar and the text.

I was recently experimenting with integrating embedly and a two-pane view to view content without navigating away. People hated it, and it went against the simplicity of Streamie. People actually like opening background tabs for later reading.

In addition, several small refinements have recently landed in Streamie:

The ugly t.co links shortened by Twitter are replaced with a short version of the actual URL. Clicking the link does not go through t.co. This is good for your privacy and also much faster.

You can now choose for which type of tweet (all, mentions, DMs) you want to be notified. Differently colored favicons are used for notification via the tab bar.

When somebody deletes a tweet, its content is displayed with a strikethrough, and you can now actually delete your own tweets.
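The t.co replacement from the first refinement above can be sketched roughly like this (the helper name and truncation logic are my assumptions for illustration, not Streamie's actual code):

```javascript
// Sketch: link directly to the expanded URL (skipping the t.co redirect)
// while showing a trimmed, human-readable display version of it.
function displayUrl(expandedUrl, maxLen) {
  // Drop the protocol for display purposes.
  var short = expandedUrl.replace(/^https?:\/\//, '');
  // Truncate long URLs with an ellipsis.
  if (short.length > maxLen) short = short.slice(0, maxLen - 1) + '…';
  return short;
}

console.log(displayUrl('http://ab.cd/x', 20));  // ab.cd/x
```

The anchor's href would still be the full expanded URL, so the click never goes through t.co.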

I'm very open to more ideas about information that can be thrown away or hidden behind a hover or click (please comment). I'm not completely happy with the current hover behavior. At least for touch devices this needs to be changed to a click, and maybe it should be a click everywhere (opinions?).

Update: I changed the rollover info to never resize the tweet. The layout jumping was too much.

Wednesday, January 19, 2011

I recently thought about building a WebSocket demo for html5rocks.com, and I wanted to build something that could not easily be replicated with any of the workarounds such as long polling, but that would require a real streaming socket connection.

The server uses Node.js and has two modes: it can either talk to a local QuickTime installation to stream the video, or load a file into memory that contains all the text that makes up the frames of the video. For the production setup that runs on Joyent's no.de infrastructure, the in-memory solution is used to avoid the dependency on QuickTime and to allow simultaneous access by multiple users.

The collaborative part comes into play with the scrub bar that scrubs the video for all users that are currently watching. This also means that the demo will not really be enjoyable with many concurrent watchers :)

I really love the simplicity of the client. It receives every frame as a single WebSocket message and just puts it into a <pre> tag as soon as it is received. That's it.
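The whole client logic fits in a few lines. A sketch (the endpoint URL is hypothetical; the frame handler is factored out so the core logic is visible):

```javascript
// Each WebSocket message carries one complete ASCII frame; rendering a
// frame means replacing the text content of a <pre> element.
function makeFrameHandler(pre) {
  return function (event) {
    pre.textContent = event.data; // one message == one frame
  };
}

// Wiring it up in the browser (ws:// URL is a made-up placeholder):
// var ws = new WebSocket('ws://example.com/video');
// ws.onmessage = makeFrameHandler(document.querySelector('pre'));
```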

2 Warnings:

Most browsers recently disabled their WebSocket implementation, so it might not work for you.

Updates:

Chrome should work.

To view in Firefox 4, go to about:config and set network.websocket.override-security-block to true.

For Opera: opera:config#UserPrefs|EnableWebSockets

Friday, January 14, 2011

Wrong education about web technologies has seriously hurt web development as a whole. W3Schools is the worst offender in this area, and because they have been around for so long, they tend to dominate search results. The PromoteJS initiative tries to fix this, and now we have a new site in the ring. Help spread the word about W3Fools, an intervention for W3Schools.