Wednesday, December 31, 2008

I don't know what exactly causes it but some things are hard to finish.

Consider programming projects. Have you ever been on a project where... Actually, maybe I should stop right there. Have you ever been on a programming project before? Oh.. Well, in that case I'm going to pick another analogy.

Have you ever written a story or book or something? Well, as you have probably observed, getting the first draft completed, while nice, doesn't mean you're finished. Depending on how much you care about the final version, you'll go through many other drafts. The text will be written and rewritten, getting better each time. It may take more time to go from the first draft to the final copy than it took to write the first draft in the first place.

From the point of view of someone who has never written, this may seem weird. How can it take more time to go from the first draft to the final version than it took to get the first draft? Well, for professional writing it can. It depends on how good you want the end product to be.

This sort of thing happens on software projects too. In fact, this sort of thing almost always happens on software projects; the bigger the project, the more likely. What happens is that there is a big difference between having all the features in a project working and having all the features working to the point where you can sell it as a product.

Here are the usual things that need work before you can release it to the general public:

1) Error Handling.

Error handling is one of those things that doesn't add any obvious features, that no one wants to plan or think about, but that people yell about when it's missing.

When you're working on the code, you'll notice that some function or other can fail. Any network call or disk IO function can fail, for instance. What do you do if there's an error? The most common thing to do is just to put up a dialog box that says "I can't do that because something bad happened," but this is rarely the right thing to do. The right thing to do is to try to recover.

For example, if you talk to the central server and there's an error, can you try another server? Can you simply reconnect to the server and restart the communication? This sort of thing is hard to plan out and implement but the users love it. It means that the software can automatically recover from errors without the user having to do anything.
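As a sketch of what "try to recover" can look like in practice, here's a minimal retry-with-fallback loop in Python. The server list, the `fetch` callback and the error type are all made up for illustration; the point is only the shape of the recovery logic:

```python
import time

def fetch_with_recovery(servers, fetch, retries_per_server=2, delay=0.0):
    """Try each server in turn, retrying transient failures, instead of
    giving up with an error dialog on the first problem.

    `fetch` is whatever operation can fail (a network request, say);
    `servers` is an ordered list of fallbacks.
    """
    last_error = None
    for server in servers:
        for _attempt in range(retries_per_server):
            try:
                return fetch(server)
            except IOError as e:      # transient network/disk errors
                last_error = e
                time.sleep(delay)     # back off a little before retrying
    # Only after exhausting every server and every retry do we give up
    # and surface the error to the user.
    raise last_error
```

The user never sees the first two failures; the software just quietly tries the next server, which is exactly the behaviour users love.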

2) "Minor" UI tweaking.

When finishing up an application, "minor" UI tweaking often gets dropped by the wayside. The reason for this is that once you have a bare-bones basic UI to access a feature, everything else just adds time to the schedule. What adds to this is that either nobody cares about the UI or everybody gets to add their two cents to it. In both cases the UI will end up sucking.

3) Bugs!

The difference between the number of bugs you find with one, ten, one hundred, one thousand and one million users is impressive. We've seen this happen with our product at Intelerad. The more places we deploy our product to, the more bugs we get back. What's impressive is that this product has been on the market and in use for over five years. The bugs we get back now are ones that were introduced five years ago; they've lurked in the code all that time. The reason we are only getting them now is that the larger the user base, the more people there are using the product in different ways.

People don't report bugs as a rule. When they do report bugs it's because either they're the kind of person that reports bugs or the bug is blocking their work. The more people use your product, the more likely it is that someone will find a bug that blocks their work. Also, the more likely it is that you'll stumble upon someone who will just happen to report a bug. People who report bugs for the fun of it are extremely rare; I would say they're much less than one in a thousand. As a result, you are still gaining many of these people as the total number of people using your product goes past a hundred thousand.

A product you could release when there were only a thousand people using it can be much more buggy than one you could release to a million people.

A product that is usable for demoing purposes can have many more very-hard-to-find bugs than a product that has to be released to the general public.
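To see why the user count matters so much, here's a back-of-envelope calculation in Python. Every probability in it is invented for illustration (not real data from our product); the point is just how fast the odds of getting a report climb with users:

```python
def chance_of_a_report(users, p_hit, p_report):
    """Probability that at least one user both hits a rare bug and
    bothers to report it, assuming users act independently.

    The inputs are made-up illustrative numbers, not measurements.
    """
    p = p_hit * p_report                # one user hits it AND reports it
    return 1 - (1 - p) ** users         # at least one such user

# A bug only one user in 10,000 ever hits, reported one time in 20:
for users in (1_000, 100_000, 1_000_000):
    print(f"{users:>9} users -> {chance_of_a_report(users, 1e-4, 0.05):.1%}")
```

With these made-up numbers, a bug that is essentially invisible at a thousand users becomes a near-certain bug report at a million. That's the five-year lurker effect in miniature.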

4) Scalability:

It is far easier to build a web site/service for use by a tiny number of people than for a large number of people.

Thursday, December 11, 2008

Okay, so it's finally all gone. I have finally gotten rid of all my cookies. I am very happy, as I thought I would never give them all away. You see, it all started on Saturday when I decided that I'd make a triple batch of chocolate chip cookies for the company charity auction. Every year Intelerad, the company I work for, sets up an auction site on the company LAN and employees can put things on the auction site and sell them to other employees. All proceeds go to charity. It's quite a lot of fun, although it can be quite distracting at times. I don't have any scientific data but I'm pretty sure most employees spend their time simply watching auctions and trying to snipe one another on various goods.

I don't like to buy things in the charity auction so much as sell things. I think this is partly because I don't like to buy things, period. At any rate, what I usually do is cook up a couple of batches of chocolate chip cookies and sell those on the auction. I can't figure out why exactly, but everyone seems to like them almost to the point of farce. They sometimes act as if my chocolate chip cookies are some kind of drug. It's quite flattering, but also really silly. The recipe I use is the one you find on the back of the Nestlé chocolate chip package. Many people have tried to equal my cookies but apparently all have failed. I'm not entirely sure how.

The weekend before the auction I decided to make a triple batch of cookies. I like making a large number of batches at once because it takes almost the same amount of time to make a triple or quadruple batch of cookies as it does to make a single batch. Making larger batches is better because eventually you end up with so many cookies you can start to make little houses out of them or put them in long lines all over your furniture (what? Only I do that?). After you finish playing with them you can then give them away and make lots of friends.

In the end, I couldn't make any cookies over the weekend and so I made them late on Monday night. On Tuesday, I took in the cookies I'd made the previous night and started putting them up on the auction site. I amused myself way too much writing cute, little, nonsensical descriptions of the cookies on the website. I would sell them in units of two or three cookies each. This worked well; however, I was putting so much work into coming up with cute descriptions that I got tired and stopped. I then realized that all my auctions had end times approximately seven minutes apart. This guaranteed that I would be bothered every few minutes by somebody coming to collect their cookies and chat. While fun, it made doing code reviews much more difficult. On Tuesday I got through about two thirds of the first half of the three batches of cookies. (For extra points, what fraction of the total number of cookies was that?)

On Wednesday I somehow tickled a bug that stopped me from making more than one auction at a time. Every time I created a new auction, all it did was modify the last auction I'd created. I started selling cookies in larger batches, feeling a slight twinge of panic that I might never be able to get rid of them all. On the plus side, I got slightly more done that day. I say slightly more because Wednesday was chili day and I got involved helping others get everything ready. Chili day is a great day. It's an almost spontaneous outpouring of food. I love spontaneous outpourings of things.

Today, Thursday, I managed to figure out how I had tickled the bug and started making multiple auctions again. This time I omitted the cute descriptions and placed the ends of the auctions at least fifteen minutes apart, often up to thirty minutes apart. This made creating multiple auctions very quick; it only took me a fraction of the time it did on Tuesday. What's more, the ends of the auctions were far enough apart that they weren't a perpetual distraction. I managed to complete the work I wanted to do that week during that day and so it was very productive. (I say this in case there are any Intelerad employees reading this.)

Thursday also marks the last of the cookies. I am now officially out of cookies. I hope the cookie junkies don't come after me now.

Saturday, October 18, 2008

You know how sometimes you're dealing with a computer problem and you find yourself trying a bunch of things, almost randomly, in a desperate attempt to get it to work? There's a name for that. If you're doing it while writing a program it's called voodoo chicken coding. If you're doing it while trying to debug some sort of operating system problem then I suppose it's voodoo chicken troubleshooting. It's called voodoo chicken coding (or troubleshooting) because you don't actually know what the problem is, so you try a bunch of random things to get it to work. It's the equivalent of mumbling things to the gods while waving some kind of voodoo chicken over the computer in a vain attempt to get it to work.

This is not a good strategy. Even if you do manage to solve the problem this way it will probably recur, since whatever you did probably fixed the problem by accident. (I say this coming from a background of trying to debug heavily threaded software.) What is not often stated, however, is that knowing how a system works can sometimes be of no help either. I recently had a problem on my PC where the computer would freeze early in the startup process while the BIOS was still scanning for IDE devices. I ran into this issue while trying to switch my SATA bus over to AHCI. AHCI stands for Advanced Host Controller Interface. It's a newer protocol for talking to serial ATA devices that offers more features than the legacy parallel ATA protocol, such as hot swapping and native command queuing. I wanted to enable this on my internal hard drive for three reasons: 1) I thought I had already enabled it at some point in the past. 2) I wanted to enable native command queuing because it sounds cool. 3) I need to enable it in order to run the Mac OS, which I've been trying, unsuccessfully, to get running on my computer since I bought it about a year and a half ago. At least part of the reason I haven't managed to get it running is that I haven't managed to get my computer running with AHCI.

Anyway, when I turned on AHCI in my BIOS I started getting this freezing problem while the BIOS was scanning the SATA bus looking for new devices. I was a bit confused because, as far as I know, there's absolutely no way that anything I had done to the hard drive, in terms of formatting or partitioning or installing software, could cause this problem. Scanning for new devices shouldn't be reading anything on the hard drive. That's just weird. Well, it turned out that the drive's contents were indeed the problem.

After trying everything I could think of, I decided simply to wipe the drive in a voodoo-chicken-debugging attempt to get the system to recognize the hard drive without freezing. Amazingly, after doing a low-level format of the drive and rebooting, it worked fine. I still don't understand why. Why the heck is it reading off the drive during the device detection routine? Well, I don't know. If anyone wants to explain this to me, feel free. Anyway, I'm just happy it's working now. Native command queuing is indeed cool!

So, I'm not sure what the moral of this story is. I think it's that once you've eliminated everything as impossible, the only thing that's left is the impossible, which is impossible. This in turn means you have absolutely no clue what you're doing and might as well start trying a whole bunch of random stuff that shouldn't work.

Now if you'll excuse me, I have some chicken stew to eat. yummy!

Sidenote: Switching from parallel ATA to AHCI requires installing drivers under Windows. Unfortunately, installing those drivers requires AHCI to be enabled, and enabling AHCI renders Windows unbootable unless it already has the drivers for the AHCI controller installed. This is a very fun situation, as it means you can't install AHCI drivers until AHCI is enabled and you can't load Windows if AHCI is enabled. I managed to get around this problem on my machine by using my two SATA controllers: I enabled AHCI on the second controller, installed the drivers in Windows for this controller, then manually moved the hard drive over to this second controller. I then booted Windows from the second controller and turned on AHCI on the primary controller. Finally, I moved my hard drive back to the primary controller and started up my machine again. The net effect of all these shenanigans is that I have AHCI enabled on both controllers and the drivers for both AHCI controllers installed and enabled. There are apparently ways of installing the drivers by booting up from a CD-ROM or other startup disk and then inserting a floppy disk with the drivers at some point. I didn't bother reading up on how to do that as the above method seemed far simpler to me.

Thursday, October 9, 2008

Arggh! This is driving me crazy! Is there any way I can actually get any work done without being thwarted by stupid, little computer issues?

Yesterday I was trying to set up an FTP server on my Windows XP machine. This is pretty much a single-click operation on Mac OS X (not including the usual NAT shenanigans); on Windows it took forever.

The user manual was wrong.

The IIS FTP component needed to be installed separately.

There's a hidden button for setting the file sharing permissions into "simple" (read: useless) mode. For some reason, the documentation doesn't mention that certain menu items in the MMC sharing controls don't show up in simple mode. grrr..

The sharing controls are hideous and there's at least one hideous violation of standard UI widget behavior that blew my mind. For the curious, it's a checkbox on a settings dialog that doesn't represent a setting but an action to perform when you apply the settings. Someone apparently figured that since they were writing code that did actions (like saving settings) when the user clicked OK, the dialog represented the actions to take when hitting OK instead of the state of the preferences that gets applied when the user presses OK.. grrr.

Today it's OpenOffice. I wanted to finally start fixing up one of my long text documents - adding things like style information and a table of contents. I opened up the file and spent three hours or so adding style information and a table of contents. Just a few moments ago I reopened the document to start adding content again. Hmm, all the style information was gone. Did I open the wrong file? Nope, all the textual changes were still there, just not the style information. I looked at the file type: txt!

You have got to be kidding me.

Apparently, OpenOffice was fine with me adding the style information but had no intention of actually saving said information.. or even pointing out that I was adding style information to a plain text document. Which makes no sense!

The last time I used Microsoft Word (version 5.0 for Mac) it didn't do this. The behavior was to stubbornly insist that if you wanted to save to something other than Word's default format, you manually go through the save-as process each and every time. It insisted on warning you that you might lose formatting information each and every time. In the end one tended to give up, write the thing in Word's native file format and save-as something else before sending.

I believe that the modern version of Word will actually warn you that specific things inside your current document can't be saved to whatever format you've chosen. I might be wrong on this, though; I haven't been able to get past the modern version of Word's interface. Where are the keyboard equivalents listed???

Photoshop won't silently save a file with layers or other non-savable info to a PNG (ie: it won't lose your layers silently). I can't think of any program that would have allowed me to just waste time like this in all my years of using a computer. This is a first: a completely novel way of destroying my data.

Thank you, OpenOffice. Thank you, you stupid, useless application. May the idiot responsible for destroying my data suffer some sort of misfortune... like losing his data while using OpenOffice in the same way.

Wednesday, October 1, 2008

I just finished watching a Frontline documentary about the US healthcare situation. Basically, the documentary looked at how healthcare is done in five different countries and contrasted that against how healthcare is provided in the US.

I don't live in the US, however I get the impression that the health care debate is incredibly politicized there. It looks like this is making it difficult for them to make any progress with their healthcare system. That's a shame.

I live in Canada. Our healthcare system has had its share of problems recently. From what I gather, most of these problems have stemmed from underfunding. This underfunding was in turn caused by the need to service a large national debt. Now that the debt situation is under control, funding is increasing again. It looks like we're doing pretty well.

Monday, September 29, 2008

A few weeks.. or was it months?.. or maybe it was yesterday.. I can't tell time.. Anyway, at some point in the past I ranted about the lost opportunity that was the Iraq war and how the money could have been used on several other projects. That amount was 500 billion dollars. Now they want to spend 700 billion.

For comparison, Canada's national debt is now 467 billion dollars. Aiii!

To be fair they say that they'll get the money back.. Well, most of it.. probably.

It's only ~$2,000 per man, woman and child in the US. Canada's national debt is ~$15,000 per person, which is still pretty bad given that Canada's population is about 33 million vs the US's 300 million or so.

Congress, at this writing, is balking at this amount. Honestly, I don't blame them. This is a ridiculously large amount of money and, as much as I can appreciate the occasional need to prevent contagion, this is on a greater scale than.. well.. anything I've ever heard of. Honestly, I'd give this a miss too without some really convincing evidence that they know exactly what they're doing.

Wednesday, September 10, 2008

Human beings work on two levels. The first is the emotional level. This system is very good at making very quick decisions based on the data but doesn't think very deeply. The second level is the rational level. This is the level that can do mathematics and understand software design. Psychologists think of these two levels as being different systems in the brain: they call the first level system one and the second level system two. Given that there's absolutely no way of fully understanding a very complex piece of software, like an operating system, when someone tries to explain to you what's special about the newest version of Windows or Linux or Mac OS X, a little technical data may be transmitted, but the bulk of the information will be directed at system one, the emotional system.

Very often, explaining the positive and negative aspects of a complex design doesn't involve trying to make people understand the whole system, even when the system itself is really only understandable as a whole. It's possible to transmit one's excitement about the system, or its elegance, through one's own enthusiasm and a few choice examples. You need to transmit your excitement because otherwise your words come off as disingenuous. You need to use a few examples because this is something the brain can understand. In antiquity we weren't always able to prove things mathematically, so what we did instead was use anecdotal evidence. Examples are like anecdotes. They don't have the advantage of being associated with a different person, but if you can make your example personal, that's almost as good.

When it comes to engineering, one of the greatest dangers an engineer faces is misunderstanding a system. Simply not understanding a system is not as much of a problem, because you typically know that you don't understand it, so you seek out knowledge and advice and otherwise treat the system with the respect one would give a potentially dangerous black box. When an engineer misunderstands the system, he feels free to tinker with it, to change it and then put it into production. Many bugs in software exist because an engineer changed the system, either to add a feature or to fix a bug, but didn't understand how the existing system worked. As a result of their tinkering they introduced a subtle problem. Consequently, software engineers, in fact all engineers, tend to become progressively more paranoid about misunderstanding concepts as they get older.

This sort of paranoia about misunderstanding a system does not exist in the general population. In fact, it may not even exist in engineers with respect to non-technical matters. When people not in technical roles interact in ways that can influence the design and manufacture of complex engineering systems, they will almost certainly misunderstand the system.

Working on a complex piece of software requires holding a lot of state in your head. It requires understanding the software system in detail. In a typical day a software engineer will make many decisions that will affect how long it takes to build a piece of software, how robust that piece of software will be and whether or not a feature gets implemented. He will make these decisions either explicitly or implicitly based on the design he chooses to implement. While requirements suggest design, design also suggests requirements. A good engineer will optimize the time it takes to write the software, the quality of the software, and the number of features in the software. Frustratingly for managers, the only person who actually has enough information to make these trade-offs effectively is the software engineer. Frustratingly for software engineers, the only person with enough information to understand whether the system should be optimized for speed of development, quality or features is the manager.

From the manager's perspective, it is impossible for a manager to know everything they need to know in order to make a design decision that will influence the schedule, quality and capability of the software their team is building. It might be possible for managers to do this for certain high-level features; in practice, however, a good designer will understand the whole system in its entirety and, for any given set of feature, schedule and quality objectives, will be able to optimize the design as a whole. A common manager mistake is to try to exert control over the software team by withholding important prioritization information.

From the software engineer's perspective, it's impossible to know exactly which of features, schedule or quality is most important given the current political climate. This includes pressure from clients, budget pressures and maintenance duties. A common software engineer mistake is to build the wrong thing to a ridiculously high standard.

Communication between management and software engineers is tricky. From the software engineer's point of view, he can't give the whole picture because it would simply take too long; in fact, if he were to give the whole picture, the manager would know as much about the system as the software engineer does. Nevertheless, the software engineer doesn't need to give a complete picture of how the system works. All he needs to do is give some idea of the emotional landscape of the solution space. Essentially, any combination of quality and features will result in a schedule with some risk parameter. Negotiating a combination of quality, features and schedule is the process of understanding the solution landscape and then picking a solution with an agreeable combination of factors and a tolerable risk.

A manager can help speed this process along by communicating the political climate, as much as it relates to the priority of features, satisfaction with current quality and schedule pressure, to the engineer. By doing this the manager gives the engineer context as to what sort of environment the software is being built in. This process is very similar to visiting an on-site client to find out what sort of workplace pressures the client is under and what sort of environment the software is expected to run in. Short of dragging the engineer around with him to get yelled at by executives, the manager can try to communicate the current priorities as much as possible.

Managers need to be able to trust their software engineers to make the right design decisions. This is a matter of professional trust and competency, but also a question of having the right information.

Software engineers need to try to poll their managers and other members of the organization for what the priorities are, what the priorities are likely to become, and how satisfied everyone is with the current state of affairs.

Monday, September 1, 2008

Well, you learn something everyday. This is especially true if you're working with computers. Today I learned the following:

1- Windows file sharing permissions are weird.
2- Entourage X (Mac Outlook) will irreversibly corrupt its database if you push it over 2 gigs.
3- Sylpheed doesn't know how to import an mbox file if said mbox file uses Mac-style line endings.
4- Sylpheed crashes most spectacularly if you try to import a 1.6 gig mbox file with Mac line endings.
5- Practically no text editors will work with 1.6 gig files.
6- Knowing how to program in Java and having a development environment ready to go has its advantages.
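For what it's worth, the throwaway converter hinted at in item 6 doesn't need to be fancy. Here's a sketch in Python (rather than Java, which is what I'd actually have reached for) of a streaming Mac-CR-to-LF converter that never holds more than one chunk of a 1.6 gig mbox in memory:

```python
def convert_mac_line_endings(src_path, dst_path, chunk_size=1 << 20):
    """Convert classic-Mac CR line endings to Unix LF, streaming the
    file in fixed-size chunks so a multi-gigabyte mbox never has to
    fit in memory. Safe because we replace single bytes, so a chunk
    boundary can never split a line ending.
    """
    with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
        while True:
            chunk = src.read(chunk_size)
            if not chunk:
                break
            dst.write(chunk.replace(b"\r", b"\n"))
```

Twenty lines in whatever language you have ready to go beats fighting a mail client's importer for an afternoon.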

Tune in next week when I learn that beating oneself over the head with a pan is a good stand-in for using a computer.

Sunday, August 24, 2008

I've been following recent developments regarding the new C-61 copyright bill. There are many things I don't like about it. Here are three.

1) It becomes illegal to copy DVDs for backups or for playing on another device.

I have recently started to move my DVDs onto a separate hard disk so that I can play them from my computer without going through the bother of finding the physical disk first. Essentially I have made a sort of crude movie jukebox. I find this to be a great way of watching movies.

I also make temporary copies of DVDs to my laptop for use on long flights or bus rides. Playing the digital copies doesn't use up as much battery life as playing them from the disc. Also, I don't need to carry around the DVD drive, not to mention the disc itself. This is especially useful since my laptop doesn't have a DVD drive.

I am concerned for people making copies of DVDs for use on their iPod-style movie players. While I don't do it myself, I don't think it should be made illegal. I can see a time in the near future where it will be possible to put every movie I own onto one of these devices. I would hate to see this become illegal.

I am also concerned about the parents who want to make backup copies of their children's DVDs because their children tend to destroy them. I think this is a reasonable, fair use.

2) The anti circumvention clauses.

All free and open source DVD players on Linux are, to my knowledge, based on DeCSS. This code was reverse engineered to allow DVD playback on Linux. This would be made illegal.

From my understanding, the development of this code would be illegal. I'm not even sure that the use of this code is legal, and therefore I'm not sure whether there's any way of legally playing DVDs on Linux. I think this is a bad thing.

Putting Linux aside for a moment, the breaking of CSS has opened up the possibility for me to make copies of DVDs for the uses I mentioned above. In a very real way I owe these new capabilities to the breaking of the encryption. It looks like bill C-61 makes format shifting in general illegal and breaking DRM to do so doubly illegal.

I believe that DRM and encryption are examples of how digital technology can be used to create new business models. Digital technology, and the use of encryption, can allow content producers to control how their content is consumed and paid for. Historically, this has been defeated by others who break the encryption and reverse engineer the formats. Anti-circumvention legislation removes the ability of third parties to do this and tilts the balance of power in favour of content producers.

With DRM, piracy is a red herring. DRM certainly doesn't help stop piracy, since all you need is one non-DRM copy to begin to circulate for all piracy to be possible. It is, however, a great way of getting people to pay extra for the ability to record programs for later viewing... or to pay to re-buy tracks they already own because those tracks all use a DRM scheme for a type of player that doesn't exist any more.

Finally, I would like to mention the Sony rootkit incident. Sony's CD copy-protection DRM was obnoxious and invasive. It used a rootkit-style attack more commonly seen in trojan horse (computer virus) cracking attempts. Its buggy modifications to Windows caused me personally to spend time fixing machines broken by its implementation (an episode known commonly as the Sony rootkit fiasco). In my opinion this sort of drive-by, virus-like behaviour from software should be illegal, not any attempt to circumvent it!

Tuesday, August 12, 2008

Google's calendar, spreadsheet and mail applications have started to displace desktop applications. Why? IMHO they suck. They try to be desktop applications but are nasty, buggy, pale imitations. They do, however, have a few things that desktop applications can learn from.

1) No dang installation step. I've always hated installing applications. Is anyone here over 30? Can anyone remember installing applications on Macs circa 1992? The correct answer to that question, with a few exceptions, is no. You just dragged the application from the floppy to wherever you wanted on your hard disk. The only reason you didn't run the dang program directly from the floppy was that it ran slowly. I'm using Windows XP and every little thing has an installer. Step one to making desktop applications suck less: get rid of installers. Let's get a standard where I can run desktop applications from the web and cache them locally, please.

2) No load time. Web applications don't need to load. To be honest I'm still not sure why desktop applications have a load time... and I've been writing them for years! While writing Myster I tried to reduce the amount of time it took to load. In the end I managed to get it down to some reasonable fraction of what it took to load the Java virtual machine, but really it should have been even shorter.

Just what the heck is happening during a program launch anyway? The answer is that the machine reads a block of computer instructions from the hard disk and starts executing them. This is actually quite fast... even on Windows. The trouble comes when these initial instructions start loading libraries, building tables and constants, loading and parsing the preferences, then loading all the icon resources and displaying those. The list goes on and on. In the end, desktop applications take a long time to load.

If your desktop application takes longer to load than my perception of instantaneous, you should be making it faster. If your application feels the need to present a splash screen, it's taking too long to load. If your app takes longer to load than a typical web page, it's too slow.

3) Web pages can be accessed from anywhere. I'm not entirely sure why I can't access my home documents or application settings from another location. Part of this problem is that applications require an installer, and I don't want to go through that heavy install process just to access my information from another PC. The other part is that I have to find someplace to store my documents or settings in order to access them.

Web sites don't have this problem. In one of the weirdest examples of this I have ever seen, my web browser of choice has an option to store its settings remotely. The idea is that when I use my web browser on a different machine, the settings I usually use follow me there. Hurray! Now the three machines I use daily will be in sync. The thing is, in order to use this feature I have to enter a server to connect to to store my settings. Are you kidding me? I got this browser from a website... a website that appears to have no problem handling a bazillion downloads of said browser every month, not to mention other page hits, etc. But it won't let me store my settings anywhere on its servers. Is this desktop application group-think?

4) Platform compatibility. The three machines I use daily are all on different platforms. I have a Mac, a PC and a Linux box. I can view the same web sites on all of them. Desktop apps? Yeah, there are some ports, but I would have expected that we'd have cross-platform code by now. Java has been around for some time now and does it fine.

grumble grumble grumble..

So at work I'm currently working on a brand new desktop application product. I want to give demo/beta applications to people, but I don't want to keep sending out installers to everyone. What can I do?

Well, the application is in Java, so I use Java Web Start.

With Java Web Start you go to a web page, click on a button or link, and the application is downloaded to your machine (if it's not cached there already) and run. The whole process is a bit quirky in practice, since you have to click on a box acknowledging that you're downloading an application by someone named whatever... but it works. Want to run it offline? Yep, you can do that too. It's like having a desktop application available from a web page. This application also has roaming user preferences, so if you go to a different machine the preferences can follow you around via your login. Basically, it nails three out of the four things above. The startup time isn't the best... I mean, this is Java, but it's still faster than OpenOffice, for example, so it's not bad.
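For the curious, the web-page side of this is just a link to a JNLP descriptor file. A minimal one looks roughly like the sketch below; the URLs, names and version numbers are made-up placeholders, and the exact details vary by Java version:

```
<?xml version="1.0" encoding="UTF-8"?>
<jnlp spec="1.0+" codebase="http://www.example.com/myapp" href="myapp.jnlp">
    <information>
        <title>My App</title>
        <vendor>Example Inc.</vendor>
        <!-- lets the cached app run with no network connection -->
        <offline-allowed/>
    </information>
    <resources>
        <j2se version="1.5+"/>
        <jar href="myapp.jar"/>
    </resources>
    <application-desc main-class="com.example.myapp.Main"/>
</jnlp>
```

The `offline-allowed` element is what makes the run-it-offline trick work.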

Sunday, June 29, 2008

Repetitive strain injury is a serious risk for all coders and for many office workers too. Many of my colleagues at my workplace display symptoms of RSI and it worries me.

I once went through a period of about 2 months where I couldn't type at all. I could barely move my wrists; opening doors was difficult, etc... all the classic RSI symptoms. On the advice of my family I went to see a physiotherapist, and over 2 months I managed to get the use of my fingers back. In a very real way I'm still recovering now. While I can use the computer all week for the normal amount, I still sometimes get pains in my wrists... and this is 3 years later.

RSI is bad news and represents one of those few times in life where panic and anxiety are reasonable emotional reactions. Untreated RSI can destroy careers and lives. It can cripple you for life. A friend of mine had to abandon his Ph.D. in computer science and pretty much change his entire career plan. This same friend has written a short article about what he's learned battling RSI.

One big tip is stop typing! Don't type through the pain!

Being unable to type for 2 months is much less of an issue than being unable to type for the rest of your life.

Saturday, May 31, 2008

Many years ago, when I lived with my parents, I would plant a garden in our backyard. I'd grow things like beans and tomatoes. I quite liked doing it because it was relaxing.

When I moved into the city I wanted to grow tomatoes in pots on the balcony. My first year wasn't very successful. My second year wasn't either. I figured my balcony wasn't getting enough sun or something.

I switched apartments a few years ago and this new apartment had a balcony that was in the sun for most of the day. The tomato plants still didn't do well. In fact, they didn't grow at all. All they did was turn purple.

Eventually I figured out what was wrong. It turns out that I was using the wrong soil.

For whatever reason, "black earth" has, in my family, always been a by-word for rich soil. So, when I went looking for soil to plant my tomato plants in I went with the big bag labeled black earth. Unfortunately, the black earth I got didn't help the tomato plants grow. I'm still not clear on why, although I've been told that tomatoes like something more organic. Great. Tomatoes are hippies.

Anyway, the second time I planted my tomatoes on the sunny balcony of my new apartment, I used a mixture of soils. I actually suspected that my they-aren't-getting-enough-sun hypothesis was bogus, so I planted 6 tomato plants in a mixture of different soils. The soil that did best was regular potting soil made for house plants. I can't remember the name of the soil but it was something ridiculous like "Mr. Magic's incredible miracle soil".

(for the experienced tomato growers: No, I didn't have compost in my selection of soils.. I'm getting to that :-) )

They did fine in the miracle soil (by fine I mean they actually grew. This had not happened before). Then I went to visit my parents. My parents still grow tomatoes and their tomatoes were twice the size of mine. I was annoyed but curious. Apparently, there was still room for improvement.

A little while later I noticed my plants had stopped growing. I became very frustrated but hypothesized that they might need more fertilizer. I had a jar of liquid houseplant fertilizer, and I decided that since my tomato plants weren't producing anything anyway, I might as well amuse myself by testing this hypothesis by trying to over-fertilize them.

I started by giving them the weekly recommended dose every day. Their response was to turn an incredibly deep green then start growing by about an inch per day. It was really impressive.

I went to ask my local plant person about this and they said that tomatoes are always hungry. They really like nutrients and completely sap the soil of them. The houseplant stuff I used was nicely balanced but not rich enough in the stuff the tomato plants wanted. I should plant them in pure compost. This made me relieved, as I half expected them to say they needed blood! Fresh blood and plenty of it! I dodged a bullet there.

So this year I have planted them in a mostly-compost mix. They are bright green but a bit tiny, though I'm betting it's because they are cold. After bringing in a plant for a week and seeing its size double, my hypothesis seems to be confirmed. Well, June is coming so that shouldn't be much of a problem anymore (June always likes the heat turned up to full. She's a funny girl, is June.).

Growing tomato plants on your balcony is fun and results in a comical number of tomatoes at the end of the summer (if you've done it right).

The things you need are a balcony that gets plenty of direct sun from about 10am to 2pm. You need pots that are about 17 to 22 L (10 inch pots; deeper is better). You need pure compost soil (maybe mixed with something - opinions on the internet differ. I'm mixing in various amounts of regular house plant soil, in ratios of two thirds to seven eighths compost, the rest regular house plant soil).

Cherry tomato plants seem to be more resilient and less finicky.

Anti-squirrel gun installations may be needed depending on where you live.

Since my parents have a large compost pile where they compost just about everything, I went down there and picked up some real, organic compost. You can also buy compost at the store. I've been told seashell compost is quite good since tomatoes like the calcium. There are other composts as well. If you buy your soil at a real plant place you can ask the guy. If you're visiting a downtown botanist, make sure you make it clear you're not using "tomato plants" as a euphemism for pot plants. Man, the misunderstandings I've had... wow. As long as you use some type of organic compost soil you're probably OK. I'm still looking for soil tips to try out next year. Post pictures of your plants and soil tips in the forums :-)... I'd love to see them.

Tomato plants are fun to grow and fun for kids too since once they start growing you can actually see the amount they grow per day. This is growth on a kid-friendly time scale.

Oh and tomatoes taste so much stronger straight off the plant. Don't put them in the fridge because it destroys the taste.

My jaw dropped to the floor when I saw JavaFX.com. They really worked hard on it. It's so sad. The site is a total nightmare.

JavaFX.com is the kind of site that gives rise to inside-the-box thinking.

Here are the problems I have with it:

1) The "next page" icon things are inscrutable.
2) The amount of information per "page" is tiny.
2b) The content is laughably superficial as well.
3) The "floating" thing, hereafter called the WTF windoid, is so tiny as to make it useless.
4) The information presentation is noisy, filled with decoration compared with the amount of information displayed.
5) The minimized WTF windoids are hard to read.
6) The current WTF windoid hides other minimized WTF windoids behind it.
7) The transition effects don't work smoothly on my machine; everything just jumps around.
8) My font size settings break its assumptions about font sizes, so text flies off the end of the WTF windoids' "titlebars".
9) There's no "throw a brick at the author" button.
10) Ugly as sin... And black with white or sharp contrasting colors is my favorite color scheme too! That's how my website is done. :-)

Sunday, April 20, 2008

For years, now, I've maintained that trying to use Linux was about as much fun as a visit to the dentist. I think I'm finally starting to get it, though.

You see, Linux, in general, has a terrible UI. Not only that, but it has been incredibly slow to improve. I fully expected, back in 1999, that this problem would be completely solved by now. With the massive rush of ex-Apple programmers and general interest, it shouldn't have been too much effort to fix up Linux to be usable by people unwilling to dedicate several years of their life to learning it. Yeah...

So anyway, today I managed to install a piece of software that was obviously programmed by some unix-type. I'm so proud of myself. It only took me two tries. The second try, I approached it with the idea that the programmer 1) expects me to know how everything works and 2) put no thought into the user interface.

Fun facts:

1) You needed to tinker with a config file that came with its own syntax.
2) The client and server concepts were backwards (*my machine* was the server and I had to ssh into the other machine and tell it to connect to *me* so I could use it).
3) Security by insanity - do I know your login name? Yes? OK, I'll talk to you. I mean, at least you could pretend it was a password; that's what it acts like.
4) The configuration was such that you needed to add two configuration entries to do something that really only needs one (for the most common use case). If you only added one configuration line, your mouse cursor became trapped on the other screen. Oh no!
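To give a flavor of that config file (and of why one link takes two entries), here's roughly what a minimal synergy.conf looks like; this is a sketch from memory, the screen names are made up, and the exact syntax may differ between Synergy versions:

```
section: screens
    mypc:
    mymac:
end

section: links
    mypc:
        right = mymac
    mymac:
        left = mypc
end
```

Note that the link has to be declared in both directions; forget the second entry and the cursor can cross over but never come back.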

The thing that really gets me, though, is that once it was configured it actually works. I mean, there's nothing wrong with it! No stupid bugs or misfeatures; it does what it was supposed to do. It's really a shame that this great software is hiding behind such a terrible interface.

Want to try it yourself? (The software or the configuration odyssey; I'm not picky.) The software is called Synergy. Its purpose is to allow you to use a second computer, with a second screen, as if it were really just an extra screen for your main computer.

You see, I run a PC most of the time, but I also have a mac that I want to use. I don't want to keep switching keyboards and mice, though. I'd rather just be able to mouse over to the mac screen and use it like that.

Those familiar with VNC can think of it as setting up a VNC session on a monitor hooked up to a different computer. Synergy works better than that because, instead of using VNC to display the remote screen on your computer, you just use the physical monitor attached to the other computer, and your mouse and keyboard commands are sent to it.

Now if you'll excuse me, I have to flick the mouse over to the Mac's screen because the screen saver has come on again..

Ok, so it has one (known) bug:"The Mac OS X port is incomplete. It does not synchronize the screen saver, only text clipboard data works (i.e. HTML and bitmap data do not work), the cursor won't hide when not on the screen, and there may be problems with mouse wheel acceleration. Other problems should be filed as bugs."

Sunday, April 13, 2008

I was at last year's SD Best Practices conference in Boston. One thing I remember was the two times I heard programmers complain about Alan Cooper's views on UI design.

The complaints were both by programmers, directed at the aspects of Interaction Design that are incompatible with extreme programming.

First off, I've never been a big fan of extreme programming.

I believe that a great number of programmers have a code-first-think-about-it-later approach to programming. I believe this mentality is a big barrier to creating large, scalable, long-lived software. I find it disturbing that someone would actually advocate a process that actually *encourages* more of this.

I met 3 programmers at SD Best Practices who took extreme programming to mean absolutely no design, as if thinking ahead were banned outright. Talking to them brought to mind scenes from 1984, except instead of being banned from remembering, you were banned from thinking ahead. I don't buy it. To plan is to think ahead is to avert disaster. Imagine trying to build a building or plan a trip or save for retirement without thinking ahead! The extremist extreme programming mentality is that the future is so uncertain that all planning is futile. This is only a very coarse approximation of what extreme programming actually advocates, but even what it actually advocates is complete overkill for most projects anyway.

At any rate, Interaction Design (as explained by Cooper) conflicts with XP (extreme programming, as explained by Kent Beck) in at least two ways.

1. XP advocates user involvement in the design process. You ask the user, you give them a prototype, they tell you what's wrong, repeat until convergence.

Interaction Design says "don't ask the user!". The rationale is that users don't know what they want! UI design is hard and should be done by a trained, professional interaction designer.

2. XP advocates short iterations and an iterative methodology that takes mistakes into account as they are learned over the course of the project.

Interaction Design mandates that the interaction designer do the design more or less in one shot, up front.

These two methodologies are not directly comparable. Not only are they trying to solve different parts of the software engineering problem, but they are coming from different sets of assumptions about what sort of environment they are operating in.

Interaction Design is all about user interface design. It doesn't mention how to go about developing software, except where software development intersects user interface design. Doing design up front is a good idea with user interfaces because 90% of GUI design is optimizing for the most common cases. Either you know what the common cases are, or you have to go out into the field and observe the users at work.

The net result of this process is usually a design that is highly optimized for how people work. As with any highly optimized system, GUIs can have a large number of cross-cutting features, that is, features that are interdependent. As a result, a small change somewhere can change the nature of the GUI completely. Adding one extra click, a pause, or an extra step can have a huge impact if that case is common. Consequently, user interfaces should be thought of as one holistic piece.

Consider this example: radiologists look at X-rays and diagnose problems like broken feet and arrows piercing the head. Their workflow (simplified) is 1) load image 2) dictate image 3) next image. I've watched them at work and they work fast. They average about 2 minutes per case. If we had a workflow that added 1 extra click somewhere, we would have added somewhere around 7 * 60 / 2 =~ 200 clicks per day. Multiply this by the number of radiologists at work, and that tiny little design decision has single-handedly caused hundreds of cases of repetitive strain injury! You can do a similar time-based calculation if your extra click requires the user to think about what to do next.
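The arithmetic above, spelled out as a quick sanity check (the 7-hour reading day is an assumption; the 2 minutes per case is the observed average):

```python
hours_per_day = 7       # assumed hours spent reading cases per day
minutes_per_case = 2    # observed average time per case

cases_per_day = hours_per_day * 60 // minutes_per_case

# One extra click per case adds roughly this many clicks
# per radiologist, per day.
print(cases_per_day)
```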

This may seem trivial to some, but we've actually had certain minimum workflow standards written into our contracts with some of our clients because they are sick of vendors pushing an extra click on them for no reason.

I've always viewed XP's desire to bring the user into the design process and have them practically design the UI as a first-order approximation of how to design a GUI. XP's assumption of continuous access to a user (or group of users) is very atypical, in my experience. Also, users are very bad at vocalizing what they want or need:

My favorite users-don't-know-what-they-want story comes from when I was developing a piece of image-burning software. The idea was that this software would be for burning X-ray (or other modalities') series to CD. We knew what sort of users we wanted to target. We thought we knew what they wanted in the software, since we had this big feature list. The only question remaining was "what should the GUI look like?". I looked at the feature list and the target user and I couldn't match them up in my head. I just couldn't believe that the users we were targeting really wanted this big complicated GUI. I asked for an on-site visit. Before the visit I prepared a list of questions that I wanted answered. These questions were about whether certain features were desired and about the relative frequencies of various occurrences.

I was curious about what they'd tell me if I just asked them my questions point blank. Did you need this feature? Yes! My test users wanted every feature, except the ones they didn't understand. Did this edge case happen frequently? No! How often? Once every 6 months or so.

I then observed them for three hours. During the three hours, each one of the edge conditions happened at least once. They also never needed the extra features. It turned out that the software they were using had the features I was asking about. However, even in cases where they should have logically used these features, they didn't, because it wasn't worth the effort to configure them! AGH! I came away from this with a better, streamlined GUI and the notion that users (at least these users) had no clue what they wanted.

The second major point of disagreement between ID (Interaction Design) and XP (extreme pornography.. I mean programming) is the use of iterations.

As I've mentioned, ID isn't about how to build the thing; it's about the GUI. Nothing is stopping you from using iterations to build the big-bang GUI. In fact, in my practical experience, you have to start building before the GUI design is completely ready because it takes so-freaking-long to design the GUI. Yes, by all means do iterations. Not doing iterations is silly. You might want to stretch out your iterations longer than the ones suggested by XP, though. XP mandates uber-tiny iterations (1 week). With such iterations you're actually under-valuing planning.

Planning is good! Over-planning is not so good. Being forced to over-plan and then stick with that plan is extremely bad. In my experience, excellent planners (or planner *teams*, rather) can plan two months in advance with a good degree of accuracy. Good planners are good for about a month, and ordinary, mere mortals can plan a week or two if they practice at it. All these estimates depend on context, of course. Your planning horizon can vary tremendously depending on the problem, the degree of familiarity and the skill of the planner.

My favorite planning story is the time we planned out our "roaming preferences" feature. Essentially, we managed to plan out about three months of work and have that plan executed with little or no change, within 5% of the estimated number of hours. Hurrah! A hole in one! Then, in the next iteration, we tried to plan out 1 month of work and both of the major aspects I was responsible for were completely wrong. Luckily, the two things we got wrong were wrong in opposite directions: one took 4 times longer than I thought and the other took about 1/10 the estimated time. The reason for this was that we had to make a modification that changed where the bulk of the work went, so it's not really a coincidence. We actually knew this was high risk going into it, since we were modifying old code we didn't know very well.

Remember to include the risk component in your time estimates and proposed designs!

So, iterations are a way of mitigating the risk of planning errors. They do this by scheduling points where the design and progress of a project can be re-evaluated and adjusted. The downside of short iterations is that they can artificially limit you to a fixed design horizon. Limiting your design horizon can introduce costly refactoring or code scarring. For a real-life analogy of the dangers of under-planning, consider the joys of mountain biking with your eyes closed. Ouch!

Using super short iterations on a project with no risk is silly. Using super long iterations on a project with high risk is silly.

None of this has to do with GUI design. Even if your GUI design is low risk, other aspects of implementing that design might be high risk. The length of iterations for a project is a programmer thing. Interaction designers do their GUI thing, and programmers mitigate the designers' potential errors and their own by having short iterations. The two do not conflict because they are tackling different aspects of the problem. The only major change for XP people is that the GUI design is done by someone else and becomes lower risk. (In practice, politics and errors affect this somewhat.)

I like to say that even if you're not doing XP, the best way of actually implementing these big designs is by making use of iterations.

In my mind, ID (Intelligent Design... I mean Interaction Design) and XP do not conflict. ID is a way of creating a better GUI. XP is a set of techniques for mitigating the trouble brought by changes, by embracing them.

Wednesday, March 12, 2008

Recently, at a party, I got into a conversation about lucid dreaming. A lucid dream is a dream in which you know you're dreaming.

I first became fascinated by this subject when I started to wonder why you always know you're not dreaming when you're awake, but don't know you are dreaming when you are. Then I actually experienced a dream where I realized that I was dreaming, and that really spooked me.

Why aren't all dreams lucid?

I didn't feel like I was dreaming when I was dreaming. Maybe I'm doing something equivalent to dreaming all the time - spending my life in a daze and not really knowing it. I decided that I was going to work on making all my dreams lucid dreams.

Talking to the guy at the party, I was reminded that many people don't remember their dreams at all. Others have never had a lucid dream. Others don't believe it's possible.

Yeah, lucid dreaming is possible. It's not as fun as you might think but it's possible.

After practicing for a few years, I managed to have lucid dreams about once every other night.

Many people seem to think that if they knew they were dreaming they'd be able to do all sorts of stuff they've always wanted to do. In practice, my experience is that the dream already has an agenda, and if you don't play along you wake up. That's not to say you can't do some cool things in dreams; it's just that a lucid dream isn't your own personal holodeck.

Dreams are manifestations of your expectations.

Let's say you built a computer. This computer makes up models of the world. It tries to find patterns and guess what's going to happen. This computer is very good at it and after some amount of training develops a whole bunch of models of how things work. It knows when lunch is. It knows you like coffee at the start of the day. It knows if you let go of something in mid-air it falls. It's got a fairly good view of the world.

Now you want to take it to the next level. You want to make this computer interact with the world very quickly; without having to think about what's going to happen. You essentially want it to figure out that if I drop something fragile it should attempt to catch it! Fast! There's no time to think about what's going to happen when you let go then figure out it's going to fall then figure out that it's fragile then figure out that it will break and that you will get mad and yell at it. It needs to know the rule that: If fragile item falls you (the computer) catch it.

How do you do that?

Well, as I said, the computer knows how the world works. So if you bring up its memory of the lab and then ask it, "What sort of things could happen in the lab?", it will know: the same few people tend to walk in, in the morning. Lots of these people have coffee. Coffee is always in cups. The sun shines through the window in the morning. People occasionally brush past things and push things onto the floor. Etc... blah blah.

Really, you don't need to teach the computer about the world; you can simply run its models of the world against each other and it will figure out (eventually) most of what it needs to know. So long as you take more or less random branches through the possible scenarios, you'll end up producing experiences that have never happened. This is useful to train against.

The human brain is a bit like this. It makes models of the world and builds up expectations. Dreams just run these expectations against one another... presumably to help you react better if this situation should come up in real life.

Actually, there's another important factor: emotion.

If dreams were only about expectations you would have really boring dreams. Most of the stuff we expect to happen is fairly dull. The thing with dreams is they often choose which expectation wins based on emotional response.

Here's an example. Let's say you were in high school and you're standing in the cafeteria. Consider these events:
- Someone you're friends with waves at you from across the crowd.
- Someone drops their tray.
- There's a good seat available for you a few feet away.

Not much emotional response. How about these:
- You buy some lunch but discover that your pocket has a hole in it and all the money is gone!
- All the seats are somehow taken.
- A teacher walks up and casually mentions you just got an A on a test you were worried about.
- Terrorists!

Dreams go for the big emotional responses.

If you become lucid, you can't suddenly start trying to make things happen that are just "neat". "Neat" doesn't play well in dreams. "Dangerous" is fine. "Angry" is fine. "Really totally awesome" is fine... "neat" isn't.

.. and don't try things just to find out what will happen. It's a dream! Trying stuff in a dream just to see what will happen is like having the following conversation with yourself:

"So... what will happen if I do this?"
"I dunno. What do you think?"
"No, I'm asking you."
"Well, what do you think the answer should be?"
"Actually, I'm just curious about what you think should happen."
"Well, what do you think should happen?"

If you do this enough, your dream will go "ah, screw this" and wake you up.

In a dream your job is to react to things not make stuff happen. You can make stuff happen but you have to go along with the flow of the dream. Try to make it a reaction to something.

Oh no, a tiger! Good thing I'm in a dream, since that means I can fly away! (Your model of how things work goes "yeah, OK"... so you fly. Good thing your instinctual brain never took physics.)

That's about the long and short of it.

In the end I started having weird semi-lucid dreams. In these semi-lucid dreams I know on some level that I'm in a dream, because I know that all the things I used to do in lucid dreams will work... I also know I can just escape by waking up. Well, unless I have a really bad nightmare, in which case I am usually so focused on the moment that I don't get that strange "this place follows dream logic" feeling. What I think happened is that my behavior adapted to include the fact that, in dreams, the rules are different.

Sometimes I think my brain has built its own parallel universe. I have a "dream Montreal" that doesn't look like the real Montreal but is more-or-less consistent every night. I have a map of my home town that includes things which are completely wrong but very consistent between dreams. They aren't spatially correct, but they are emotionally correct. That is, the distance between two buildings is a "short" or "long" walk. The main street contains an "annoying" amount of traffic. The buildings in the new part of town are "taller" and "newer" than in the old part. Some buildings are "tall" while others are "scary tall". It's hard to describe. It's not Montreal. It's obviously not Montreal... but it feels like it.

.. oh and all the elevators are dangerous and unreliable. I know that if the elevator doesn't work it's because I'm dreaming :-)..

The most important thing is to remember that you can never be sure you're not dreaming! Especially if you've just woken up from a dream! Your body doesn't like waking you up for no reason, so it will tend to fake waking you up. When waking up from lucid dreaming, always make sure you're not still dreaming.

... Checking for dream signs requires excellent self-control. Actually, I think the ability to extract yourself from the moment is the primary benefit of pursuing lucid dreams.

So read all that stuff and you will have a lucid dream tonight. Seriously. (Go flying in reaction to something. It's scary fun.) Good luck having a second lucid dream, but the first one usually comes easy. :-)

Good book: The Head Trip. It deals with various states of consciousness, including lucid dreaming. No LSD or drugs here; all these states are natural ones.

If you like lucid dreaming, you might like to experience "the watch": another fun altered state of consciousness, easily accessible by messing with your sleep patterns. :-)

Saturday, February 9, 2008

OK... so I'm reading a chapter about David Heinemeier Hansson of 37signals. He's the dude who wrote Ruby on Rails. In it he says the following:

"You need to innovate on behalf of your customers, but they don't often know what they want. And it's the same thing for programmers. If you went around and asked them what they wanted in a framework, you wouldn't get a good product out of that. You need to be able to source input from a lot of sources, and then have your vision of what it's going to be and then drive that."

(emphasis mine)

There's this idea floating around that the reason software sucks is that the people who make it don't listen to the customer. All the layers of bureaucracy between the programmer and the user cause a disconnect. If we could just talk to the customer directly, we'd be able to ask them what they want and just program that.

This is partly true but there's a problem: users don't know what they want.

I have a feeling that many who have not actually done a software project with real users see this as an arrogant statement. It's not.

Users are not programmers. They don't know interaction design. They probably don't have very good aesthetic tastes. If you let your users build your specification for you, you'll end up hearing the dreaded "This is what I asked for but not what I want". This is why extreme programming, which advocates the above, has a short iteration time.

On the subject of framework design.. Every once in a while someone will post a bit about how painful it is to do something or other in Java vs Python. Most of the time it's something trivial that's not really a Java-as-a-language issue but a Java-as-an-API issue.

Java's API can be irritating at times. My favorite example of this is reading a file.

Let's say you were in Python and wanted to read a file as text. Here's what your code would look like ->
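A minimal sketch of what that Python version looks like (the file name `hello.txt` and its contents are just for illustration; the sketch writes the file first so it's self-contained):

```python
# Create a sample file so the example actually has something to read.
with open("hello.txt", "w") as f:
    f.write("Hello, world!")

# The whole "read a file as text" job in Python: one line.
text = open("hello.txt").read()
print(text)
```

That's it. Now compare with what Java makes you do.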

The problem with InputStreams in general is that 1) you can't get a String out of them and 2) if you ask one to fill an array of bytes, it won't necessarily fill the array. It might fill the array to completion, but it probably won't. It's your responsibility to loop over the "read" method in FileInputStream and accumulate all the bytes you want.
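The accumulation loop described above might look like this sketch (class and method names are mine, not from any standard API):

```java
import java.io.ByteArrayOutputStream;
import java.io.FileInputStream;
import java.io.IOException;

public class ReadLoop {
    // read() may return fewer bytes than the buffer holds, so we must
    // loop, accumulating chunks until read() returns -1 (end of stream).
    public static byte[] readAllBytes(String path) throws IOException {
        FileInputStream in = new FileInputStream(path);
        try {
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            byte[] buf = new byte[4096];
            int n;
            while ((n = in.read(buf)) != -1) {
                out.write(buf, 0, n);  // append only the n bytes actually read
            }
            return out.toByteArray();
        } finally {
            in.close();  // always release the file handle
        }
    }
}
```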

To heck with that! DataInputStream can read a full byte array so let's use that!
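With DataInputStream.readFully() the loop disappears, because readFully loops internally until the array is filled. A sketch (again, the class name is illustrative):

```java
import java.io.DataInputStream;
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;

public class ReadFully {
    // Size the array from the file's length, then let readFully()
    // block until every byte of the array has been filled in.
    public static byte[] readAllBytes(String path) throws IOException {
        File file = new File(path);
        byte[] bytes = new byte[(int) file.length()];
        DataInputStream in = new DataInputStream(new FileInputStream(file));
        try {
            in.readFully(bytes);  // no manual accumulation loop needed
        } finally {
            in.close();
        }
        return bytes;
    }
}
```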

DataInputStream can wrap ANY InputStream and add functionality to it. This means you can use it to read bytes from a network connection, a file on disk or anything else. It's very cute.

Notice that we had to add the "throws FileNotFoundException" to the end of the method there. This is because we're doing IO, which could fail. If it fails, it throws an exception. Java won't let you compile until you've told it how you want the exception handled. Most languages don't do this and just let the exception bubble up and kill the program. We're just going to tell Java to throw it up to the caller and let someone else take care of it.

Ok, now we have a DataInputStream but we can't read a String using it. It actually has a method called "readUTF()" which returns a String, but it doesn't do what we want because readUTF() expects the data in the file to be in a specific format. That format is only written by the corresponding method DataOutputStream.writeUTF()..

Notice the exception changed. This is because DataInputStream.readFully() throws its own exceptions. Since both exceptions are the same kind (IOException), we're just going to tell Java that all IOExceptions should be dealt with by the caller.

Yay, we have the file's bytes! But bytes aren't a String, so we still need to convert them...
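Rolled together, the whole dance might look like this sketch. The final conversion uses the String constructor that decodes bytes with a named charset (the method and class names are mine):

```java
import java.io.DataInputStream;
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;

public class FileUtil {
    // Size the array, readFully() the bytes, then decode them
    // as UTF-8 to finally get a String out of the file.
    public static String readFile(String path) throws IOException {
        File file = new File(path);
        byte[] bytes = new byte[(int) file.length()];
        DataInputStream in = new DataInputStream(new FileInputStream(file));
        try {
            in.readFully(bytes);
        } finally {
            in.close();
        }
        return new String(bytes, "UTF-8");  // bytes -> String, at last
    }
}
```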

The thing is.. If this function existed in the API you could call it to print out the contents of a file by doing this ->

System.out.println(readFile("hello"));

This line does the same thing as the python version. They could have put this function in the core API. They didn't.

I'm betting that what python has to do to provide that functionality is very close to what we did here.

Java-as-an-API doesn't have a very good file-reading / string-manipulation toolbox. This is an API issue. It can still do everything but it's a huge pain. Python rocks for this.

On the other hand, Java-as-an-API has a massive framework for making GUIs. Python-as-an-API has comparatively little.. you generally end up using a third party library.

Just one more thing before I sum all this up: Java, both as a language and as a toolkit, was totally the wrong tool for doing applets.

People wanted to make banners and cute animated graphics and such.. Look at what people are using Flash for today and that's what people wanted to do with Java applets.

Java applets should have had an API for doing animation. It should have had an animation studio attached to it.

... oh yeah and it had to have drop dead easy deployment..

... oh yeah and it shouldn't have caused the web browser to freeze up solid for 30 seconds whenever it was used...

.. I could go on but this is all well known.

Sun didn't do their research. Don't be like Sun.

Designing frameworks is hard. You need to balance a host of things, from flexibility to speed to ease of use to power, and it just goes on. If you don't know whom you're building for and what they need, you're doomed to failure. They can't tell you either, because they haven't thought about the problem very deeply.

The process you need to use to build a good API is the same process as the one you need to use to build a good interface.

Wee, I just got back from playing hockey in Lafontaine Park in Montreal.. It's about 0 degrees outside today so it's perfect weather for skating.. Not cold enough that you start to worry about the consequences of breathing.. not warm enough to melt the ice. Excellent.

heh heh.. While trying to stop someone from scoring I tripped and fell into the net.. I scored on myself.. Luckily I wasn't hurt. I realized it was about to happen so I put my hands out in front of me just as I fell. Those nets are heavy and you can seriously hurt yourself if you bang your head against one. We don't play with equipment either so you've got to be careful.

I'm still trying to finish reading "Founders at Work" which is a great book that interviews the founders of major tech companies about what it was like to start up a company. These guys are crazy. They work crazy hours on practically no sleep.

anyways... I'm having trouble finishing the book because after a few pages I want to throw it aside and go found a startup company. It's just silly.. I'll be sitting there reading, then I'll stop and wander around the apartment thinking about what sort of thing I'd like to invent and how I'd market it etc.. Eventually I calm down and sit back down with the book, where the cycle begins again.. It makes reading the book very slow going.

Anyway, I'm going to make some hot chocolate, sit down by the fire for an hour or two and read.. about 5 or 6 pages.. thereabouts.. yeah.. bye.

Magnetic levitation / vacuum trains from London to New York. These puppies can get you from London to New York in 1 hour. They fly at 4000 mph on magnetic tracks through tubes under the ocean that are kept at a near-total vacuum. Much faster than flying.

Sunday, January 13, 2008

Hello, I can't come to the phone right now. If you leave me a message I'll get back to you as soon as possible.

Hi there. Umm.. It's Andrew. You know, I hate answering machines. They defeat the purpose of calling someone. I mean I might as well email or leave a message on any one of the billions of message-board style communication mediums. Frankly those are better because I'm not put in the unusual situation of having to come up with something without a backspace key.. Also, I've heard what I sound like on tape. I wouldn't wish that voice on anyone. This has been a long message. I'm surprised I haven't been cut off. Some of those cheap answering machines have so little space on them that you can barely get a message out. Have you noticed you always get cut off at a point that makes it sound like you're insane or insulting:

"Hi there, well, I've been thinking and I've come to the conclusion that you're just as much an asshole as.." beep!

"Hi there, it's me again. I was about to say just as much an asshole as I've been, so I really shouldn't be mad.. Ok, well if you.." beep!

So, you know what, I'm not going to tell you why I called. I'm going to leave you my number and get you on the phone so I can talk to you. I've got more to ask you than a question. I want more from you than a reply. So call me back. My number is

Thursday, January 10, 2008

When trying to predict something, you often need progressively more processing power as you increase the precision of your predictions. There's a point, however, at which the effort of creating the perfect prediction runs into a hard truth: either your model or your reading of the system's initial state limits your prediction's accuracy. Past this point there's no reason to invest more effort, because any extra precision in your answer is smaller than the answer's margin of error. This is the fuzz point. It is the point of diminishing returns.

It can show up in interesting places. My favorite is the classic, intractable argument over aesthetics: if you just worked at it a little more, it would look better. This isn't always the case. Consider your ability to predict what people find aesthetically pleasing. Consider any data you have on the topic and how much error there's likely to be. Consider the amount of time you've spent arguing about whether the arrow should be green or blue. You've passed the fuzz point.

Aesthetics aren't unique. A special case of the fuzz point shows up when prioritizing bug fixes and features in software.

How accurately can you predict how long something is going to take to fix? How accurate can you be in predicting how important a feature is to implement? How long are you going to argue about it?

The motto is, past the fuzz point, flipping a coin is actually cheaper in the long run.

The fuzz point is very small for small bug fixes. So small, in fact, that you get multiple different sorts of penalties.

In many shops all bug reports must be prioritized before it's decided whether or not they are worth doing. For bug fixes under 4 hours, weird things start to happen.

The cost of figuring out whether the bug is severe becomes more important. The cost of tracking down the cause of the bug tends to be much more important. The cost of the bureaucracy around fixing the problem becomes very important. The cost of merely context switching away from the bug for long enough for it to be prioritized becomes important. And the difficulty of measuring the relative importance of all these things increases.

Small bug fixing is fuzz point land. If a bug takes a short amount of time to fix, there's no point in prioritizing it; the time you've spent just trying to figure out its true severity and cause dominates. If the fix is quick, don't prioritize, do it now, on the main branch, and deal with the risk portion of the bug fix separately. (Essentially, review the severity and risk of each fix and decide whether it must be back ported to the old branch. Also decide whether it's worth running it by QA. The answer is almost certainly yes.)

If you do this, however, you will notice that your feature development stops. This isn't good. The way around this is to allocate a fixed amount of resources to the task and prioritize bug fixing in its entirety against the adding of new features.

If you must prioritize fixes, then poll the list of bugs looking for important ones. Don't force everything to be run through the bureaucracy before anyone can get a time budget for it.

If you implement this, make sure you clearly state how long to spend in the various stages of bug tracking before giving up (how much time trying to ascertain the severity, vs how much time investigating for each level of severity, vs how much time trying to implement the fix). This is a heuristic, but it works fairly well, because bug fixes show up in timesheets so you can see violations.

Tuesday, January 1, 2008

I would like to wish all my friends a happy new year.. but I can't. I can't send them all one email because that would be SPAM.. and I can't send them each an individual email because that would be an easily automated job and it's against computer programmer ethics not to be lazy in situations like this. I can't say HAPPY NEW YEAR! in my blog because only two people read it and I really want to tell everyone. I can't put "Andrew is wishing everyone a happy new year!" in my facebook status because no one actually reads those things anyways. They exist only for self amusement and occasionally as an existential venting mechanism, as in "Andrew is a spoon". Hummm.. I can't think of any way of doing it, so I'll stick it in all these places and send personal notes to a few people arbitrarily. It's not perfect but it's the only way to avoid annoying myself.