My little corner of the internet.

Category Archives: Uncategorized

Here is an excerpt from an email I sent last night. If you don’t know me and just want to know how to save your phone, feel free to jump over it:

So while I was boiling my pasta tonight, apparently my body had the urge to include some additional ingredients for once, as I found myself inadvertently adding my Blackberry to the mix.

Whilst boiling, it cried out to me “Heathenous traitor! Spare me!” … Whatever, don’t call me names then expect my help. Also, I eat cows and chickens with a clear conscience, I can eat a phone too. Then in a last-ditch effort it blurted “Without me, thou shalt never text cute girls again!” I immediately decided that blackberries might not go very well with pasta anyway, so I removed it and pulled the battery out.

It is currently getting the Lazarus Treatment in a closed up shoebox with 14 ounces of desiccant (a 60 day supply for a closet). Hopefully I made the right choice, because after the desiccant, the Blackberry will be all dry and flavorless anyway.

Like most other fruit, I would imagine that Blackberries are fat-free and low in calories, but I have a hard time envisioning that they contain many vitamins or antioxidants.

Do you have any idea of the Nutrition Facts or whether I made the right decision?
– Sean

In hopes that it may help others out, here are some things you can do to save your phone if you drop it in water…

If you are fortunate enough to read this BEFORE your phone goes for a swim: remember to take out the battery as quickly as possible. Don’t even bother shutting the phone off if you have a removable battery. In the case of iPhones or other devices where the battery isn’t removable, turn those off as fast as possible. If you remove the battery or at least get the phone off before it dies (short circuits) on its own, you have a good chance of saving it. If you have a sim card, it’s best to yank that too, but it can wait until the battery is out.

Alright, assuming your phone is chilling without a battery – the next step is to dry it off, shake it to get the water out, etc.

The next part is a great idea that I was given via twitter from my friend Nick: go find a desiccant to suck all of the moisture out of the phone. You can grab some at Home Depot or Lowes (which is where I found it). An example of a desiccant is DampRid (link has a picture).

Using a desiccant to dry a cellphone (click to enlarge)

Leave the phone open (battery out and cover off) and put it in a small container with the desiccant for as long as you can comfortably stand not having your phone. In my case, the phone was in the box for 8 hours and it came out just fine. If your phone was submerged longer or didn’t shake out as well, it might take longer. If you’re patient, I’d recommend 24 hours but I just made that number up so take it with a grain of salt. ;) I had happened to buy some new shoes yesterday so I had the box right there… since the DampRid I got was a hanging variety, I hung it up in there (see pic to the right) and then closed the lid and let it sit.

Pop that battery back in and cross your fingers!

I was fortunate enough to have my phone survive. The $9 for the desiccant might have been the factor that saved me from having to replace a $400 phone (I’m all out of free-upgrades!).

Although my BlackBerry, like most phones, is fortunate enough to have a removable battery, don’t despair if you have an iPhone: the unmistakable John Hann reports having tried to season his baby-stew with his iPhone, and it too survived after being shut off and allowed to dry.

I noticed a dearth of Google results about this, so I figured I’d post my solution…

If you need to generate a random number inside of a MySQL query (which is useful for making fake test-data), you can do it as follows:

SELECT floor(rand() * 10) as randNum;

This generates a random integer from 0 to 9 inclusive. Just change the 10 to one higher than the largest number you want to generate. The important part is the “floor(rand() * THE_EXCLUSIVE_UPPER_BOUND)”.

The full explanation is that rand() will generate a random floating point number that is greater than or equal to 0 but less than 1. After you multiply that number by your upper-bound, floor() gets rid of everything after the decimal point.
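The same idea extends to an arbitrary inclusive range. Here is a quick sketch using only standard MySQL functions (the bounds 3 and 8 are just placeholders):

```sql
-- Random integer from 3 to 8 inclusive:
-- rand() is in [0, 1), so rand() * (8 - 3 + 1) is in [0, 6);
-- floor() maps that to 0..5, and adding 3 shifts it into 3..8.
SELECT floor(rand() * (8 - 3 + 1)) + 3 AS randNum;
```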

“What gets measured gets managed” is a relatively common business adage which is attributed to a ton of different people – so many that I’ll just consider it public domain. The reason for this post is just that I’d like to draw attention to it, and maybe as you read this post, you can think of how it can be used to help your own productivity and success.

As with any business advice, this tends to be more digestible with some anecdotes so I’ll give a couple of examples of how this has helped me lately with running LyricWiki.org.

Example Uno: Server Uptime

Without a doubt, the biggest problem with LyricWiki up until recently had been uptime. The site was constantly slow or completely unavailable from its inception. The reason is that we were always short on servers since the company had minimal capital, and setting up new servers took a great deal of time and fell on my shoulders during a time in my life where I always had at least one fulltime responsibility other than LyricWiki.

When I cut back my other job a bit to give me more time to devote to LyricWiki, one of the first things I set out to fix was the reliability of the site. In order to know if it was improving, I would need to know how long the site was down each month so I could track whether the number of minutes was going up or down.

I created a small spreadsheet to track outages, and each time the site went down I logged when the outage was, the duration (in minutes) of the outage, which server(s) had the problems, the apparent cause and whether or not the cause was resolved. Much to my surprise, I didn’t really need to graph this over the months. After the first couple of days, it became very apparent that there was one huge problem still lingering and it would be worth my time to automate a fix to it instead of responding ad-hoc each time the problem cropped up.

I whipped up the code to solve the problem while I was on a layover in Philadelphia, uploaded it when I got home – and the site stayed up for almost a month straight! That’s pretty huge. I don’t think that had ever happened in LyricWiki history until just now. Very cool.

Example Dos: Intractable To-do Lists

A major personal time-management problem I had recently was that I couldn’t tell if I was just spinning my wheels or if I was actually making progress on my backlog of LyricWiki tasks. It felt as though I was getting emailed so much work that I barely ever got to break out into the tasks on my talk page let alone the “Mid-term Actions” list I made which was my conscious plan of how to make LyricWiki rock your socks off in the mid-term.

To solve this problem of not having a grasp on my tasks, I whipped up a little widget which you can see in the side-column of the blog, near the bottom. The results were much better than I expected: I actually instantly felt like I was in more control, I felt comfortable removing tasks that were duplicated across lists, and I can finally tell when I’m moving fast enough to make forward progress.

The widget tracks the size of all of my various to-do lists and updates that number hourly. There are two reasons for the caching: 1) checking the lists requires hitting the sites, which takes at least 3 seconds total; 2) if it updated in real-time, I’d sit there refreshing the thing all day!
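I won’t reproduce the widget’s actual source here, but the hourly-cache idea can be sketched in a few lines of Python (all names are hypothetical; the real widget scrapes my actual lists):

```python
import re
import time

CACHE_TTL = 60 * 60  # refresh at most once per hour

_cache = {"count": None, "fetched_at": 0.0}

def count_tasks(page_html):
    """Naively count list items on a to-do page by counting <li> tags."""
    return len(re.findall(r"<li\b", page_html))

def total_tasks(fetch_pages, now=time.time):
    """Return the cached task total, refetching only when the cache expires.

    fetch_pages is a callable returning the HTML of each to-do list page.
    """
    if _cache["count"] is None or now() - _cache["fetched_at"] > CACHE_TTL:
        _cache["count"] = sum(count_tasks(html) for html in fetch_pages())
        _cache["fetched_at"] = now()
    return _cache["count"]
```

The design choice is the same as the widget’s: stale-but-fast reads, with a single slow refresh per hour instead of a multi-second fetch on every page load.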

The widget actually has some neat hidden features. Here is the to-do list widget on its own page which also tracks the number of tasks I’ve had at the end of each day and charts my progress (using the Google Charts API). If anyone is interested, maybe I’ll write another post where I give out the source-code to the widget.

Conveniently, the widget is narrow enough to both fit in a blog sidebar and be displayed on a smartphone. It’s currently the homepage on my blackberry and I don’t see that changing any time soon!

Conclusion

The quote “what gets measured gets managed” always seems to ring true for me. I think of it again and again – usually after-the-fact when I’ve just saved time by being OCD about something. I strongly recommend that you take a second (right now) and think of an area of your life or your work which you feel isn’t getting sufficient attention and consider tracking meaningful statistics about it. Please share any similar successful experiences in the comments!

Today I upgraded the blog to the newest version of WordPress because it was so old it couldn’t even run Akismet (the anti-spam plugin). In the process, it deleted the names of all of my categories. If you see numbers where words should be, or blank spaces where categories should be – this is hopefully just temporary.

Last night during a long coding-session, I stumbled upon an old site of mine. It was beautiful. Everything looked exactly how it should in order to be the most usable, useful, beautiful application possible. This was before I got suckered into divs and validation by the standards-Gestapo. I was never given a direct or logical answer from the masses as to why I should be using standards-compliant code when the browsers aren’t standards-compliant, when a great deal of things can only be done well in a non-standard way, and when a great many more take hours to be standard and mere seconds to just work. “But everyone is doing it!” I figured that this meant that maybe the people I talked to just didn’t have the low-down, but that there really were advantages, and maybe some sort of advantage would come out of all of this wasted time making sure I complied with standards and used the horribly-implemented divs with as-poorly-observed CSS instead of the beloved tables of old.
It’s been over a year now, and I still haven’t once said to myself “wow, I’m really glad I made that site valid” for any of my projects. Then it occurred to me… there is no payoff coming! Making your code standards-compliant and cross-browser now isn’t going to help you port to new browsers (since every page out there that is cross-browser now would have to be re-written for a browser that actually works how it is supposed to). This is just the typical human response of trying to create conformity. This isn’t helping fix the problems with browsers, it’s just giving coders another thing to waste their time on. That beautiful page of mine from before used tables anywhere that made sense given what tables do. Now this may not be “semantically optimal”… but divs just don’t work like they’re supposed to. I’ve been using them for a LONG time now and have gotten to know them quite well inside and out. The truth is, their behavior is so wildly implemented that the same code will do drastically different things in many browsers. ‘Table’ is not the right word for the content many times, but they are a lot closer to working than divs are. Divs just DO NOT FAIL GRACEFULLY, which is a major flaw in any programming system. And I’m not just talking about the borders/floating/wrapping issues that we are all familiar with… divs will make your browser act like a jittery crack-addict.

As an example, I’ll mention two bugs off the top of my head (that will probably get their own articles later): the FireFox heuristic machine and the IE ghost-footer.

In FireFox (yes, even 1.5.0.1, etc.), I have a fully valid page on Projectories with a two-columned UL (they wrap automatically… according to standards anyway) and sometimes it will render with randomly-different wrapping. You heard me right… the EXACT SAME HTML will randomly render different ways. So this two-column list will have maybe one item in the first row, 2 in the second, 2 in the third, 1 in the fourth, etc.. There is no reasoning with FireFox on this matter. Another coder and I have revisited this bug many times and have been unable to convince Fox to listen. Thanks Fox, I didn’t need a “computer” anyway… I probably only needed a heuristic machine. Oh, I forgot… FireFox is open source, so it’s probably my own fault since I haven’t fixed their bug myself, right? My bad.

In Internet Explorer, there is a page (on an as-yet-unreleased site) with a simple footer like you’d see on many pages. The main layout of the page is done with divs in a completely-valid, cross-browser way. In IE 6 and IE 7, however, there is one of the most insane and unacceptable bugs I have ever seen a browser pull. It seriously writes a random sub-string of the footer… again. Allow me to clarify: the last item in the footer is “Privacy”, and the page will sometimes display an extra “y” on a separate line (also properly linked as if it were the same link as before), and it will sometimes display “vacy”, or “Privacy”, etc.. Now the most obvious thought when seeing this would be “wow, I must have really messed up outputting that footer”. I checked though, then I double-checked. I rubbed my eyes and triple-checked. I had my co-worker check it, then we stared confused and checked it again. We looked at the source in 2 different browsers (because possibly the “view source” command could be messed up?) and we could come to only one conclusion. IE was written by handicapable children who don’t speak English and have bad eyesight. The source code NEVER showed these ghost-messages. The source code is very simple at that footer, so it’s not hard to verify. I know this sounds unbelievable, so this will get its own entry sometime with the server and client code, an example page to reproduce it, and screenshots of what was rendered.

It all just got to be too much. I realized that making some standards-crusaders – and maybe Tim Berners-Lee (whom I now hate) – happy is just not worth the expense of additional features, portability, and design that I could be giving to my users (whom I love). So that’s it. No more mister nice-tool. I’m gonna go make a table. :-P

==Intro==
There is a new wiki-system called WetPaint that I recently had a chance to preview before its official launch (it then launched on June 19th, 2006). Before I get into this further, I want to point out a couple of things:

I run the world’s largest wiki next to Wikipedia: LyricWiki.org [1, 2], and this has given me quite a bit of experience dealing with and thinking about wikis.

I have no vested interest in promoting WetPaint. I get pretty complimentary in this article, so just remember that they’re not paying me or giving me any incentive. If I say their code is good, it’s because… it’s good.

==The Scoop==
Although I still like the MediaWiki software that I use to run LyricWiki.org – the same software used on Wikipedia – I think it is safe to say that it’s been trumped. WetPaint uses MS-Word-like editing to remove the otherwise-substantial learning curve from contributing to a wiki. It’s sprinkled with AJAX to make the process faster, and it fixes a number of problems that come with MediaWiki and many other popular wiki tools. The way WetPaint works is that they host wikis, and they get revenue from some Google AdWords in the bottom of the right-hand column of the page. In the first two weeks after their launch, over 6,000 sites were created [3]. A couple of these sites (set up during the private beta) appear to have been set up by Wetpaint themselves to show off the system – and presumably because the sites are useful anyway – including wikiCancer and wikiPregnancy.

==Advantages==
I noticed several interesting things when poking around Wetpaint. Many of these features I only noticed because they were unfortunately absent in MediaWiki. Anyway, here’s the beef…

The links to add a new page are very prominent, and it’s really easy to create sub-pages also. When creating a page, it will warn you as you are writing if the page name is a duplicate.

Wetpaint makes very good use of tags. They’re easy to add, and they have a tag-cloud on the side of the page to make it easy for people to get intrigued and bounce around your site.

Although I have no benchmarks and there don’t seem to be any available online yet, Wetpaint seems significantly faster than the bloated MediaWiki. This is partially because Wetpaint uses AJAX so that only parts of the page need to be sent back and forth, and partially because MediaWiki is significantly slower than it should be – but I digress.

The site is fairly cross-browser compatible. It supports Firefox 1.0.7 and higher (PC & Mac) and IE 6.0 for PCs [4]. That’s not phenomenal for a wiki, but it’s really good as far as most AJAX sites go. The much-lauded ajaxWrite receives all sorts of attention even though it requires Firefox which abandons the vast majority of web users.

Editing is intuitive and intentionally similar to Microsoft Word, effectively eliminating the learning curve for most users. As a person who runs a wiki (or four), I can’t emphasize enough how important this is. Not only have I spent countless hours correcting the pages created by people who don’t know how to deal with the proprietary wikitext-markup of each system, but I’ve also had to waste hours of coding-time making forms to make data-entry more intuitive.

The comment system they made has the potential to be very viral. This may have existed in some wiki engines before – since there are so many of them – but I sure haven’t seen it. The only potential drawback I see with it is that visitors might get lost amongst the flame-wars and forget to come back to the content.

There is an auto-generated site map. This is good for search engines and for lost visitors. On the flip side, I checked out a larger sitemap, though, and it appears that it doesn’t scale itself yet. Therefore, if you have a humongous wiki (50,000 pages or so), this page may get to be a bit ridiculous.

==How Wetpaint can become the dominant wiki design==
Wetpaint is one sweet pile of code. There is no reason that a system as well-created as Wetpaint can’t become the dominant wiki design. Since I’ve had some good luck in the wiki-industry, I’ll wax prophetic for a moment on how Wetpaint can make sure they become the dominant design.

Develop importing from other wikis. This decreases switching costs, allowing many of the other well-established wikis to abandon their former engines. WordPress did this with a very high level of success in blogging-software.

Make a developer network! This is important. If they make a SOAP webservice and encourage developers to make plugins, skins, and utilize their API for other programs and mashups, there is a great deal of potential for really cool things to be created. Things that would further Wetpaint’s grasp on the market. In addition to the basics, they should have a very strong platform for letting wiki-owners easily create bots. Wikis with bots are far more effective than those without bots. Since wikis have one structure and many different types of sites are crammed into them, normal maintenance through the database is hard. In the case of Wetpaint it would actually be impossible since it is a hosting solution (it just wouldn’t be safe). If a wiki-admin can create a powerful bot, they can keep their site from becoming messy.

Think about alternate business models (sell a CD with an archive in a specially browseable format, possibly publish books from the content). The only reason I can think of that the majority of wikis aren’t going to jump onto Wetpaint like geese on June bugs (that was lame) is that they are only a hosted solution. See below for more on this.

Bonus: Some sort of anti-spam solution. Spamming is one of the biggest problems with wikis today. If they could think of a way to combat it, they would have a strong competitive advantage.

==Hosted versus installed==
Right now, Wetpaint has joined the ranks of JotSpot, pbwiki, Wikia, and Atlassian as a hosted-wiki provider. I have full confidence that they can dispatch those companies fairly well. It will take some time, though, because they all seem to have some Bubble 2.0 cash to burn through. If they are happy enough with this, then so be it. But if they want to be the only standard for wikis, they have to figure out a way to compete with installed wiki software. When companies have their own servers, they are often more than happy to use installed wikis even for their enterprise wikis, which eats at Wetpaint’s potential market. The reason I run MediaWiki instead of switching over to Wetpaint is that by installing an open source package, you have full control. If LyricWiki does decide to put Google AdWords on the side of the page, LyricWiki will get the money. In addition, I have the option (which I have exercised quite a bit) to change the code around to make it better suit my needs. Of course, I have to concede to the hosted sites that upgrading MediaWiki is a pain, especially when you have modified some of the code in a way other than an official extension.

==Conclusion==
Wetpaint is awesome, there is no doubting that. The code is top-notch and the management seem to know what they’re doing. This site is going one direction… up! In times when the 2.0 bubble is spawning countless new businesses based solely on buzzwords, it’s nice to see a company do it right and use the new tools available to build a solid, easy-to-use product. Good job guys.