Friday, September 29, 2006

I was reading this article from the San Jose Mercury News suggesting that the San Francisco Giants should not re-sign Barry Bonds for next year. I was driving to San Francisco on Sunday morning, listening to talk radio, and many of the callers were saying the same thing. The crux of the argument is that the Giants will have to pay Bonds a lot of money, and if they do so, then they will not be able to make the many upgrades they need to turn the team into a contender. The pundits go on to say that if they spend the Bonds money on free agents, then the team will be back in the playoffs. However, the pundits fret that the Giants ownership will be afraid that ticket sales will fall if Bonds is breaking the home run record for another team. Of course, these guys are all wrong.

First off, the Giants are not going to contend next year any more than they contended this year. They are 11th in the NL in runs scored, 11th in OPS, 9th in runs allowed, and 11th in ERA. They are not good at anything. The Giants are not an Alfonso Soriano away from contending.

But it gets worse. If the Giants add Soriano, they are letting go of Bonds. Now Soriano ranks third in the NL in runs created, just behind Ryan Howard and Albert Pujols. However, if you look at runs created per 27 outs, then he falls to 13th. That's still pretty good. Bonds is 24th in the NL in runs created, but is 4th in runs created per 27 outs. So when Bonds plays, he is a much better offensive player than Alfonso Soriano. Obviously he doesn't play as much because of his age, thus Soriano winds up creating more runs over the course of a season.

Now, will Soriano be a much cheaper player to sign than Bonds? That seems unlikely. Of course he will be around longer, so one could argue the Giants will be better in 2008 with Soriano on the team. However, it's hard to argue that they would be better in 2007. They wouldn't be able to improve in other areas with Soriano instead of Bonds, because they'd spend as much money on Soriano as they would on Bonds. Soriano will play in more games, but will probably not be as productive as Bonds on a per-game basis.

The fact is that Bonds is the best player on the Giants. Yes, he's (finally) starting to show his age and is not as good as he was a couple of years ago. But he's still an elite player when he plays. It takes a lot of money to replace an elite player.

Now of course the Giants could sign some other players instead of Soriano; I just picked on him because he's the guy being targeted by a lot of the anti-Bonds crowd. So maybe several other players together could more than compensate for Bonds. It's still not going to be cheap.

Once you realize that Bonds is still the best player on the Giants, then you realize that the decision to sign or not sign is not just about him breaking the home run record. The reason the Giants have struggled is they have not produced any quality players from their farm system in recent years. There is Matt Cain, who looks like he's on the brink of becoming an elite pitcher. That's it though. Their best everyday players are all free agent signings, thus they are mostly very old: Ray Durham, Moises Alou, Omar Vizquel.

You can't buy good young players; you must produce them. Look at the NL's top run-producing teams: Philadelphia, Atlanta, and New York. Philly has Ryan Howard and Chase Utley, both home-grown. Atlanta's old vets, Chipper and Andruw Jones, happen to be homegrown too, but they also have several young homegrown hitters: Brian McCann, Adam LaRoche, and Jeff Francoeur. The Mets are in some ways the exception, with highly paid free agent pickups like Carlos Beltran and Carlos Delgado, but they also have homegrown talent in David Wright.

Thursday, September 28, 2006

According to former president Bill Clinton: all you need is ubuntu. I actually installed the latest version of Ubuntu on my home PC this week. It was probably the least painful Linux install I've ever done on that computer. The only weird things I had to do were install the Nvidia driver (apt-get install nvidia-glx) and then reconfigure X. I only had to do that so I could use my monitor's native 1280x1024 resolution instead of the 1024x768 Ubuntu came up with at first. Maybe I should try it on my laptop, since there appear to be a lot of other people installing it on the same laptop I have. Of course the volume of info on this subject also suggests that there are a lot of problems with doing it!

Monday, September 25, 2006

For about a week I've been running Windows Vista RC1 on my laptop. I had used Beta 2 on my desktop machine. I was unable to use the Aero theme on it, because my graphics card (GeForce4 Ti 4400) does not support DirectX 9. However, my laptop's modest Intel 945GM integrated graphics does support DirectX 9, thus my laptop can run Aero quite nicely.

Aero is some nice eye candy. I remember the first time I saw the XP graphics, and I thought "ok that's just silly." Aero is a more impressive evolution.

Overall RC1 is much, much faster than Beta 2. It has some bugs. For example, I can't seem to use my mouse to scroll through contextual menus in Explorer (arrow keys work fine though.) There still seem to be issues around long filenames/paths and with compressed folders. Also, Windows Defender keeps trying to tell me that it is blocking some startup program, but when I open the list of startup programs, all of them are active.

Now will an upgrade be worth it? Probably not at first, but it will become necessary eventually. XP was a bigger upgrade in terms of stability, at least when compared to Windows 98 or ME. A Windows 2000 user could get by without XP for a long time, at least until they needed some specialized hardware driver. It seems like Vista will be a tougher sell for XP users.

As for me, I have decided to finally upgrade my desktop's graphics card. The GeForce4 Ti 4400 has been an awesome card. I bought it back when I played games (before I had kids, basically.) It was great on games like Medal of Honor and Unreal Tournament. I don't play games anymore, so I will be happy with a cheap, Aero-capable card.

Saturday, September 16, 2006

I decided this week that it was time to buy new running shoes. Most folks say to buy new shoes every six months or every 500 miles, whichever comes first. My six-month mark was last month, but I had a couple of weeks where I could not run for health reasons. So I waited. This past week, I could really tell my shoes had lost their effectiveness. They were allowing me to pronate way too much, especially late in my run and on my right foot. So, time for new shoes.

I went to my favorite running store, The Runner's Factory in Los Gatos. This is where I had bought my previous shoes. They had spent a lot of time with me trying on different shoes, running in them, and watching how my feet and ankles reacted to the shoes. My old shoes were Brooks Addiction 6. They had really worked well for me, and I thought I would probably get the same kind today. Not surprisingly, Brooks has replaced the Addiction 6 with (what else) the Addiction 7.

I was going to try out the new Addiction 7, but unfortunately The Runner's Factory did not have any in my size (11EE.) They offered to order them for me. I figured I would give some other ones a try, and if nothing else seemed good, then I would order the Addiction 7.

When I bought the Addiction 6, I had also tried Brooks' top shoe for overpronators like me, The Beast. I didn't like how heavy it felt on my feet. Today I tried on the newer version of The Beast. It felt a lot lighter than the old Beast. Even better, it seemed to have a lot more cushioning than either the old Beast or the Addiction 6. So I decided to go with The Beast.

I gave them a good run on my usual route later this afternoon. I was very pleased with them. They definitely prevent overpronation, and the extra cushioning made me feel like I had some extra spring in my step. They still feel a little heavier than the Addiction 6, but I think it's a good trade-off.

Wednesday, September 13, 2006

I don't use IE6 much, I mostly use Firefox and Flock. I'm not religious about it, though, so I've been trying out the various revisions of IE7. RC1 is definitely a huge improvement over beta 1 and 2. It is much faster and smoother. It's still slow on my favorite JavaScript test, mostly because it is very slow with opening new windows. The UI has continued to be tweaked, and I think it looks pretty nice. I would not be surprised to see open source competitors adopt some of the UI paradigms seen in IE7.

Now would I use it instead of Firefox/Flock? Probably not. The extensions offered by Firefox still make it a better browser for me. Plus there are still some issues with IE7. For example, it still seems to screw up CSS -- probably because of their incorrect box model. It is horribly slow on scrolling on Slashdot, and has rendering problems on it. Again this is probably because of poor CSS implementation by Microsoft.

iTunes 7.0

I could talk about the new menus, UI tweaks, etc., but it's the album artwork-flipping view that is the coolest. Of course this requires artwork, and Apple is kind enough to provide it for free. I tried to perform this operation on my entire library and had mixed results. iTunes requires an exact match on both artist name and album name. Many of my CDs were ripped using EAC+LAME or CDex, so these names don't always exactly match with iTunes. For example, I had "On and On" by Jack Johnson, and iTunes could not come up with any artwork for it. I changed the title of the CD to "On And On" and then iTunes found the artwork.

So the artwork lookup is a little buggy. iTunes also seems a little bit slower, maybe because of the extra amount of metadata it's dealing with. I like the menu re-org, and don't mind the other UI changes. I've seen some people comment that it looks more like a Java Swing application, which I find amusing.

Of course iTunes was only a small part of the big announcements from Apple. Yes, my Nano is now officially "old" as it has been replaced by smaller, metallic Nanos. It's nice that they have an 8GB option, and I think it is cool that it only comes in black -- a little extra icing on the iPod status symbol. I was most impressed with its extended battery life and quicker charge time. Those are actually compelling features to me, but definitely not enough for me to replace my nano. I was a little interested by the lower priced, better performing iPods.

The really big news was the widely anticipated iTunes movies for sale. Yeah, they only have Disney (and Disney subsidiary) movies, but you know that will change. They used to only have ABC and Disney TV shows, and look how that has evolved. The pricing seems pretty good, and it's good that they've gone to near DVD quality.

But the real kick is the so-called "iTV." It is unheard of for Apple to show a product that is not finished yet, which makes me assume that it actually is finished, and will be available for the holiday season. Forget the "Q1 2007" -- Jobs is just toying with us.

The idea is great by the way. Forget putting a whole computer in your living room. Nobody wants to surf the web and read email on their TV. They do want to listen to music on their home theater sound system, watch videos and look at their huge photo collection. This little box is going to do all that perfectly. And... with $10 movie downloads off iTunes, it could become a very popular way to buy movies. Pre-order your movie for $10 and it downloads while you're asleep one night. Wake up, and watch it on your home theater system, or copy it to your iPod to watch on the plane or whatever.

Monday, September 11, 2006

One of my co-workers came to me with a familiar problem. He had a class that was being rendered as XML. The class itself could have different renderings, so the XML creation was done using a classic builder pattern. This was good, but he still fretted about how his class would probably change (new features, imagine that) and he would have to change the XML builders so they would output the new properties.

It's a classic model-view coupling problem. I had suggested the builder architecture, in part because I remembered a classic article by Holub on this kind of problem. I started thinking and came up with a possible solution. This solution may have its own design flaws, but it was interesting.

My idea was to use Java's annotations to decide what parts of an object need to be present in its representation. Thus the annotation was a clue to the builder saying "show me." A reusable builder could thus be built to simply obey the hints provided by the object it was building a representation for. In this case, everything was XML.

I created an XmlViewType Enum. Basically there could be one value in the Enum for each representation needed. Actually, value() should really return an array of XmlViewTypes, since you will often want to show the same data in different representations. Or for maximum reusability, just have it return an array of Strings. You can pass in the value to match against later.

The builder returns a dom4j Element, but one could easily switch XML technologies for this; it would not be too hard to do it with pure DOM. Anyways, what does this code do? It takes in an object and a view type. It iterates over all the methods of the object's class, looking for getters. If it finds a getter, then it looks to see if it has been annotated with the XmlView annotation. If so, it compares the annotation's value to the value passed into the method. If they match, then it takes the name of the property and invokes the getter. If the return type is a primitive or a String, then it creates a corresponding node for it. If it is a list or array, it creates a surrounding element, then creates a child element for each item in the array or list. If the return type is some other kind of object, then it descends the object graph using recursion. So if your class had a getFirstName method and an instance of this class happened to return "Michael" for it, then you would get something like <FirstName>Michael</FirstName> in your XML. If your object had a getCar() method that returned a Vehicle object, and Vehicle had a getMake() method, you might get something like <car><make>Volkswagen</make></car>. Pretty straightforward.
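Since the actual code isn't shown here, the idea above can be sketched roughly as follows. The names (XmlViewType, XmlView, XmlViewBuilder) and the Person/Vehicle classes are my own invented examples, and this sketch uses the JDK's built-in DOM instead of dom4j so it is self-contained:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;
import java.util.List;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;

// One enum value per representation needed.
enum XmlViewType { SUMMARY, DETAIL }

// The "show me" hint. value() is an array so the same property
// can appear in several representations.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface XmlView { XmlViewType[] value(); }

// Invented domain classes matching the example in the text.
class Vehicle {
    @XmlView(XmlViewType.SUMMARY)
    public String getMake() { return "Volkswagen"; }
}

class Person {
    @XmlView(XmlViewType.SUMMARY)
    public String getFirstName() { return "Michael"; }

    @XmlView(XmlViewType.SUMMARY)
    public Vehicle getCar() { return new Vehicle(); }
}

// A reusable builder that obeys the @XmlView hints via reflection.
class XmlViewBuilder {
    private final Document doc;

    XmlViewBuilder() throws Exception {
        doc = DocumentBuilderFactory.newInstance().newDocumentBuilder().newDocument();
    }

    // Iterate over the getters, keep the ones annotated for this view,
    // and turn each matching property into an element.
    Element build(Object obj, XmlViewType view, String name) throws Exception {
        Element parent = doc.createElement(name);
        for (Method m : obj.getClass().getMethods()) {
            if (!m.getName().startsWith("get") || m.getParameterCount() != 0) continue;
            XmlView hint = m.getAnnotation(XmlView.class);
            if (hint == null || !List.of(hint.value()).contains(view)) continue;
            String prop = m.getName().substring(3);   // getFirstName -> FirstName
            Object value = m.invoke(obj);
            if (value != null) parent.appendChild(node(prop, value, view));
        }
        return parent;
    }

    private Element node(String name, Object value, XmlViewType view) throws Exception {
        if (value instanceof String || value instanceof Number || value instanceof Boolean) {
            Element e = doc.createElement(name);        // leaf: primitive-ish value
            e.setTextContent(value.toString());
            return e;
        }
        if (value instanceof List<?> list) {            // list: wrapper plus one child per item
            Element wrapper = doc.createElement(name);
            for (Object item : list) wrapper.appendChild(node("item", item, view));
            return wrapper;
        }
        return build(value, view, name);                // other object: recurse
    }
}
```

Building the SUMMARY view of a Person with this sketch yields a Person element containing a FirstName element and a nested Car/Make subtree, along the lines of the example above.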

I read this interesting article about Windows Vista. Vista may well be more significant than the author realizes. It may well be the greatest piece of shrink-wrapped software ever produced, and perhaps the final hurrah for shrink-wrapped software.

For years people have talked about moving applications online. Proponents of on-demand applications tried to argue that they would win out for a variety of reasons, such as portability, ease of maintenance, lower cost, etc. The truth is that maybe they will win out just because it will become too hard to do things any other way.

It still probably seems absurd to suggest that it is easier to write something like MS Word or Adobe Photoshop as a web application than as a desktop application, but the initial creation of the software is only part of the cost. The history of Windows may prove prophetic for other desktop applications. They grow and grow, and have all kinds of "baggage" (working with other apps, backwards compatibility, OS compatibility, etc.) Perhaps this weight is even worse for an OS, especially one that offers so many integrated applications with it. But maybe it will prove to be true for lots of other software too.

So why are web applications any different? For one, there is no such thing as backwards compatibility. There is no such thing as an "upgrade" for a web application. Thus you never have to support multiple versions or worry about upgrade rates. The use of web standards also makes many of the other issues easier to deal with. Releases can be much smaller and incremental for a web application, and thus more frequent.

All of these things may make desktop applications, especially complex ones, just too expensive to build and maintain. It may only make sense to do them online. Desktops may become the realm of small, simpler applications that don't change much (think notepad.)

I've been doing some research lately on SMS. At PocoPay, we had technology for sending SMS to our customers. For example, if you signed up online you were sent a PIN via SMS to verify your cell phone.

For sending SMS, we paid for a service from Clickatell. We would send them an email and they would then generate an SMS based on the contents of the email. Kind of primitive, but very cheap. PocoPay never made any money so cheap was important!

My latest research has been on receiving SMS. If you're a cell phone user, it is free to receive an SMS, though you must pay to send them. However, for an application provider, receiving SMS is a lot trickier.

SMS is very fragmented, especially in the United States. In the U.S., people like to use short codes for sending SMS. Thus if you want to receive SMS, you need a short code. Only problem is that short codes belong to service providers. So you must pay for the short code from each provider.

Next you still need to receive the SMS. Various service providers will relay the SMS to you either by email or by HTTP POST. Other providers give access to SMPP networks where you can poll for your messages.
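As a rough sketch of the HTTP POST style, here's a toy endpoint using the JDK's built-in HttpServer. The URL path and the parameter names ("from", "text") are invented for illustration; every provider defines its own callback format, so you'd check your provider's docs:

```java
import com.sun.net.httpserver.HttpExchange;
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.URLDecoder;
import java.nio.charset.StandardCharsets;
import java.util.HashMap;
import java.util.Map;

// A toy HTTP endpoint for receiving relayed SMS messages.
public class SmsReceiver {
    // Parse an application/x-www-form-urlencoded body into key/value pairs.
    static Map<String, String> parseForm(String body) {
        Map<String, String> params = new HashMap<>();
        for (String pair : body.split("&")) {
            String[] kv = pair.split("=", 2);
            if (kv.length == 2) {
                params.put(URLDecoder.decode(kv[0], StandardCharsets.UTF_8),
                           URLDecoder.decode(kv[1], StandardCharsets.UTF_8));
            }
        }
        return params;
    }

    public static void main(String[] args) throws IOException {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/sms", (HttpExchange ex) -> {
            String body = new String(ex.getRequestBody().readAllBytes(), StandardCharsets.UTF_8);
            Map<String, String> sms = parseForm(body);
            // Hand the message off to the application here.
            System.out.println("SMS from " + sms.get("from") + ": " + sms.get("text"));
            ex.sendResponseHeaders(200, -1);  // acknowledge so the provider doesn't retry
            ex.close();
        });
        server.start();
    }
}
```

The acknowledgment matters: providers typically retry delivery until they get a 2xx response.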

A more primitive route is to do things the way cell phone users do. This involves hooking your application directly to either a cell phone or a cell modem. Then you can simply receive the SMS on your phone, like an end user would, and then download the SMS to your server. This seems like a good "getting started" option. It's not as expensive, but it does not scale (that's ok for getting started.) Of course you have to connect your phone to a server and need code for interfacing with it, but there are lots of options for this, including some open source ones.

Friday, September 01, 2006

My last post reminded me of a comment a former colleague of mine once made. He accused me of being "anti-XSD." I found this an unusual statement since I have written dozens of XML schemas, maybe more than a hundred. I have made good use of JAXB for generating parsing/binding code based on an XSD, from when it was still a proposal up until the present annotation-based versions. So I don't think it's that I'm anti-XSD. It's that I'm anti-XSD proponents, or at least some of them.

You see, when I meet somebody who is a huge XSD fan and wants to use it to describe the world, I think to myself "I know who you are." These are the same people who used to want to describe the world using relational database schemas. They used to say "build all the data validation into the database; it is faster and better, and it has to be right anyways." When they realized that SQL schema definitions were unable to properly describe all real-world data, they turned to stored procedures.

I see the same thing happening with XSD. It was obvious that DTDs were insufficient to describe everything, so XSD was born. It is several orders of magnitude more complex than DTDs, but it still cannot describe everything. It does not allow for dependencies between data. Thus the datatype of one element or attribute cannot depend on the value of another element or attribute (or on the values of four elements and two attributes, etc.)
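To make that concrete, here's the kind of cross-field rule XSD can't express, written in plain Java. The Payment class and its rule are invented for illustration: one field is required only when another field has a certain value.

```java
// A cross-field rule XSD cannot express: whether cardNumber is
// required depends on the value of method.
class Payment {
    String method;      // "card" or "cash"
    String cardNumber;  // required only when method is "card"

    Payment(String method, String cardNumber) {
        this.method = method;
        this.cardNumber = cardNumber;
    }

    boolean isValid() {
        if ("card".equals(method)) {
            return cardNumber != null && !cardNumber.isEmpty();
        }
        return true;  // cash payments don't need a card number
    }
}
```

In XSD you could declare cardNumber optional or required, but not "required if method is 'card'"; that conditional logic has to live in code.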

I've long mused that the existence of the "xs:any" wildcard was partially because of this shortcoming. I've just picked one of XSD's shortcomings, but that's enough to see that at the end of the day neither XSD nor SQL can actually describe the whole world.

So how oh how can we write software for real-world problems when these data modeling languages cannot describe all real-world data? Ahh, that's why we have programming languages. That's why the database guys started pushing stored procedures with their own languages like PL/SQL. Heck, Oracle eventually even put a JVM inside their database so you could write Java for your stored procedures. I wouldn't be shocked to see some kind of language added to XSD.

The truth is you don't need it. There are plenty of programming languages already out there. They can even be used to allow for a declarative approach to data definition. Maybe one of these declarative approaches could become "standardized" or somehow baked into XSD. Maybe one day web browsers (and databases, etc.) will become smart enough to understand "super-XSD," and "middleware" and the programming languages used to build it will shrink away. I have my doubts though. Those database guys never really pulled it off, and they've been trying since the 80's.