Sunday, December 11, 2011

Friday was my last day at Bump Technologies. It's been a little shorter ride there than I expected, but still a very rewarding one. Before Bump I had been at eBay for four years, so you might think that going to a startup was a big change for me. Actually Bump was the sixth startup that I have worked at. One of the non-startups that I worked at was a very small consulting company that was much more like a startup than not. Still, I'll leave the number at six. How have those startups fared?

Two burned out pretty quickly. One of those never got more than angel money, while the other got an $8M series A that it burned through more rapidly than I can understand.

One was bought by Oracle and is now part of their business intelligence suite.

One turned profitable, but was shut down by its board because they no longer thought "hockey stick" growth was possible.

One labored for many years on minimal funding before selling its intellectual property and going out of business.

The other of course is Bump.

[Note: While all of the following is based on my experiences in aggregate, I had no one specific in mind while writing this little rant. In particular my comments on 'modern' startups do not reflect Bump.]

My first two startups were in 2000, almost twelve years ago and during the dot-com bubble. A lot has changed in that time. The capital needed for hardware has shrunk amazingly. Commodity hardware, plus the rise of virtualized hosting a la Amazon Web Services, has led to a world where a startup's big spend is on engineering talent. The value placed on engineering is huge. Engineering is a scarce resource, as the supply of engineers is shrinking while the demand is rising. Further, the notion of "10X engineers" is widely accepted, and this essentially lifts any kind of ceiling on the value of an engineer. Now as an engineer this is mostly a Very Good Thing, at least from my perspective. However there are some weird things that this leads to.

The high value of engineering can make for a strange situation in companies with non-technical founders. Just imagine a situation where you have to work hard to get something and continuously spend a lot just to keep that something, and you don't really understand what that something does. That can be hard to swallow and keep down. Now multiply that times five or ten or twenty "somethings." Yeah.

Of course startups have always loved young engineering talent fresh out of school. There has always been huge appeal in such engineers because they are very cheap and have much less of a life outside of work, so they are more likely to happily work the long hours necessary in a startup. With the value of engineering going up so much, the appeal of young engineers is much greater than it was twelve years ago. Go back and read the previous paragraph and consider that such an awkward situation can be avoided with younger engineers. The net result is you have "younger" startups in general where more is asked of young, inexperienced engineers. This can lead to some amazing opportunities for talented young engineers. It can also lead to Engineering Disasters like we've never seen before.

Another interesting change in the world of startups is the value of design. I attribute most of this to the amazing success of Apple and the high value they place on design. Now in Apple's case, I would say they place a higher value on user experience and design follows from that, but more on that subtle difference momentarily. The value of design has obviously led to a greater role for designers. They are more empowered than ever before to insist on certain aspects of a product. However the value of design has also led to a lot of "armchair designers". Everyone thinks design is important, and design is the kind of thing that everyone can form an opinion on. Of course we all think that our opinion is very valuable, so we feel strongly about our opinions on design.

Think about how often you hear somebody -- doesn't matter what their job is -- talk about how some website or app or whatever is awful or looks terrible or "broken." The bigger and more popular the product is, the more likely it is to elicit such responses. I hear developers constantly talking about how something is ugly or beautiful. Even better, I hear developers constantly talk about how most other developers don't care about design and that they are special because they do. It's hilarious.

It's at least as bad if not worse among so-called product people in startups. When I was working for my first couple of startups, our product managers were people with some kind of unique experience and thus theoretically some unique perspectives into what our customers would want. Not so much today. The qualifications for being a product person in a startup are pretty minimal as best I can tell. That doesn't stop them from waxing poetic about the aesthetics of websites and apps.

These product managers and developers may all think that they have a lot in common with Steve Jobs, but as I mentioned earlier, I think Apple's approach is a little different than the approach taken by most startups. I think Apple stresses user experience first and realizes that this is not the same thing as product or design. In a startup people have to wear many hats, so a lot of these competing principles get rolled together into one product person who may have some dubious credentials.

The worst sin of startup product people is not their strong opinions on what looks good or bad. It's a tendency to base product decisions on their personal preferences. They think that a product should be exactly what they would find most useful or most cool. Now this is not some new and unique thing. Product managers at startups ten years ago were guilty of this too, as are product managers at big companies today. It's just that the circumstances of product managers at modern startups make this situation worse, as does the form-over-function corollary that comes from the emphasis on visual design. It's a perfect storm that leads to products being designed around users with a thousand Facebook friends and 5 tumblr blogs who spend more time at speakeasies than at Target. Of course that might really be your user, but it is statistically unlikely.

I've probably said too much about the current state of startups. This is my blog though, so what about me? I seriously doubt that I'll stop at six startups, though of course I hope my current job lasts for a long time. Like so many other people working in Silicon Valley, I want to work on a product that can change the world. That's what has led me to startups six times in my career. But startups aren't the only kind of companies that can change the world.

Thursday, December 08, 2011

This is the second post about mobile architecture. In the first post, I talked about how the evolution from web applications has led to architectures centered on asynchronous access to remote resources using HTTP. I also mentioned how Android applications are able to benefit from cloud-to-device messaging (C2DM) to push new data to apps. The most common use for this is push notifications, but C2DM is not limited to push notifications. The key to pushing data via C2DM is that it uses a persistent socket connection to the C2DM servers. So whenever any new data is sent to those servers, they can send it over this connection to mobile apps. Now you might think that there is something magical, or even worse, proprietary, about using sockets within a mobile application. This is not the case at all. Any application can make use of sockets, not just ones written by Google. Actually that's not quite true: any native application on iOS or Android can use sockets, and socket support was recently added to Windows Phone 7 as well. For web apps there is the Web Socket API, but it is not yet implemented across platforms.
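Nothing Android-specific is needed to see the socket model in action. Here's a plain-JDK sketch where a throwaway local ServerSocket plays the pushing role: it writes two events over one persistent connection, and the "app" side just listens. No request precedes either event, and all the names here are invented for the example.

```java
import java.io.*;
import java.net.*;
import java.util.*;

// Server push over one long-lived socket, with no HTTP and no requests.
public class PushDemo {
    // Starts a throwaway local "server" that pushes two events, then returns
    // whatever the client side received over its single persistent connection.
    public static List<String> listen() throws IOException {
        final ServerSocket server = new ServerSocket(0); // any free port
        Thread pusher = new Thread(new Runnable() {
            public void run() {
                try {
                    Socket client = server.accept();
                    PrintWriter out = new PrintWriter(client.getOutputStream(), true);
                    out.println("event: new-bid 42");       // pushed, not requested
                    out.println("event: friend-message hi"); // pushed, not requested
                    client.close();
                } catch (IOException e) { e.printStackTrace(); }
            }
        });
        pusher.start();

        // The app pays the connection cost once, then just reads events.
        Socket socket = new Socket("localhost", server.getLocalPort());
        BufferedReader in = new BufferedReader(
                new InputStreamReader(socket.getInputStream()));
        List<String> received = new ArrayList<String>();
        String line;
        while ((line = in.readLine()) != null) received.add(line);
        socket.close();
        server.close();
        return received;
    }

    public static void main(String[] args) throws IOException {
        System.out.println(listen());
    }
}
```

The one-time connection cost up front is exactly the trade discussed below: every subsequent event rides the existing socket for free.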

So what are the benefits of using sockets? Well there is the data-push capability that you see in C2DM. In general you can make a much faster app by using sockets. There is a lot of wasted time and overhead in making HTTP connections to a server. This wasted time and overhead gets multiplied by the number of HTTP calls used in your application. There is also overhead in creating a socket connection to a server, but you only pay it one time. The extra speed and server push-iness allows for very interactive apps. Sounds great, but how does this affect the application architecture? Here's an updated diagram.

The big difference here is that instead of requests and corresponding responses, we have events being sent from client to server and back again. Now obviously some of these events may be logically correlated, but our communication mechanism no longer encodes this correlation. The "response" to an event from the app does not come immediately and may not be the next event from the server. Everything is asynchronous. Now in our traditional architecture we also had "asynchronous things," but it was different. Each request/response cycle could be shoved off on its own thread or in its own AsyncTask, for example. This is a lot different.

Let's take a simple example. One of the first things that many applications require a user to do is to login, i.e. send a combination of name and password and get some kind of authenticated materials in response (token). In a traditional architecture you would send a login request to the server, wait for the response, and then either get the token or display an error if there was a problem with the username+password combination. In an event driven architecture, your application fires a login event that contains the username+password. There is no response per se. Instead at some point in the future the server will fire an event indicating a successful login that includes a token, or an error that may contain a reason as well. You don't know when this will happen, and you may receive other events from the server before getting a "login response" event.
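That login flow can be sketched in plain Java. Everything here is invented for illustration (LoginListener, fireLoginEvent, and a single-thread executor standing in for the remote server); it is not any real framework API, just the shape of the interaction:

```java
import java.util.concurrent.*;

// Event-driven login: fire an event, get the answer later as another event.
public class LoginDemo {
    interface LoginListener {
        void onLoginSuccess(String token);
        void onLoginError(String reason);
    }

    static class Client {
        // A background executor stands in for the remote server.
        private final ExecutorService server = Executors.newSingleThreadExecutor();
        private final LoginListener listener;
        Client(LoginListener listener) { this.listener = listener; }

        // Fire-and-forget: no return value, no blocking. The answer arrives
        // later as a separate event, possibly after other unrelated events.
        void fireLoginEvent(final String user, final String password) {
            server.submit(new Runnable() {
                public void run() {
                    if ("secret".equals(password)) {
                        listener.onLoginSuccess("token-for-" + user);
                    } else {
                        listener.onLoginError("bad credentials");
                    }
                }
            });
        }

        void shutdown() { server.shutdown(); }
    }

    public static void main(String[] args) {
        Client client = new Client(new LoginListener() {
            public void onLoginSuccess(String token) {
                System.out.println("logged in: " + token);
            }
            public void onLoginError(String reason) {
                System.out.println("failed: " + reason);
            }
        });
        client.fireLoginEvent("ann", "secret");
        client.shutdown();
    }
}
```

Note that fireLoginEvent returns immediately; the caller's code path ends there, and the listener picks up the story whenever the success or error event lands.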

Logging in is just one example. There are numerous other things that are easy to think of in terms of request/response. The whole idea of REST architecture is built around this request/response cycle and HTTP. Doing a search, getting user information, posting a comment, the list goes on and on. Now of course it is possible to recreate the illusion of a request/response cycle -- just fire your request event and block until you get the response event. But clearly that is Doing It Wrong.
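For completeness, here's what that anti-pattern looks like in plain Java: fire the "request" event, then block on a rendezvous until the "response" event shows up. The names and the faked server are made up for the example. It works, but it ties up a thread and throws away everything the event model buys you:

```java
import java.util.concurrent.*;

// Faking request/response on top of events by blocking: Doing It Wrong.
public class BlockingAntiPattern {
    public static String searchAndWait(ExecutorService server, final String query)
            throws InterruptedException {
        final SynchronousQueue<String> response = new SynchronousQueue<String>();
        // Fire the "search" event; the fake server answers with its own event.
        server.submit(new Runnable() {
            public void run() {
                try {
                    response.put("results-for-" + query); // the "response event"
                } catch (InterruptedException ignored) { }
            }
        });
        return response.take(); // block until it arrives: the Wrong part
    }

    public static void main(String[] args) throws Exception {
        ExecutorService server = Executors.newSingleThreadExecutor();
        System.out.println(searchAndWait(server, "androids"));
        server.shutdown();
    }
}
```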

So maybe these event driven architectures are not right for all applications. Yes they are sexier, but don't try to put a square peg in a round hole. This kind of architecture is best suited for apps where you are interacting with somebody else. Let's say you are bidding on an auction. Somebody else bids on the same auction and ideally you will see this happen very quickly. Or maybe you sent a message to a friend on some type of social network, and you can't wait to see their response. Social games are also a great example of the types of apps that would benefit from an event driven architecture. Even if the game doesn't have to be real-time, it would probably still benefit from "lower latency" between moves.

Hopefully you have a better idea now on if your application should use a traditional architecture or an event driven one. Of course there is a lot of room in between. You could easily have an architecture that uses both, picking the right interaction model depending on the use case within your app. Maybe most things use a classical model, but then on certain screens you switch to an event-driven one. Or vice versa. Either way you'll want to read my next post where I will talk about the nuances of implementing an event driven architecture.

Friday, December 02, 2011

The boom in mobile applications has been fueled by excellent application frameworks from Apple and Google. Cocoa Touch and Android take some very different approaches on many things, but both enable developers to create applications that end users enjoy. Of course "excellent" can be a relative term, and in this case the main comparison is web application development. For both experienced and novice developers, it takes less effort to be more productive developing a native mobile application than a web application (mobile or not.) This despite the fact that you really have to deal with a lot more complexity in a mobile app (memory, threads, etc.) than you do for web applications.

Despite having significant advantages over web development, mobile applications often have some similar qualities, and this often leads to similar limitations. Most mobile applications are network based. They use the network to get information and allow you to interact with other users. Before mobile applications, this was the realm of web applications. Mobile apps have shown that it is not necessary to use a browser to host an application that is network based. That's good. However the way that mobile apps interact with servers over the network has tended to resemble the way that web applications do this. Here's a picture that shows this.

This diagram shows a traditional mobile application's architecture. In particular it shows an Android app, hence that little red box on the right saying C2DM, which we will talk more about later. Most of this is applicable to iOS as well though. Most apps get data from servers or send data to servers using HTTP, the same protocol used by web browsers. HTTP is wonderfully simple in most ways. Significantly, it is a short-lived, synchronous communication. You make a request and you wait until you get a response. You then use the response to update the state of your application, show things to the user, etc.

Now since HTTP is synchronous and network access is notoriously slow (especially for mobile apps), you must inevitably banish HTTP communication to a different thread than the one where you respond to user interactions. This simple paradigm becomes the basis for how many mobile applications are built. Android provides some nice utilities for handling this scenario like AsyncTask and Loaders. Folks from the Android team have written numerous blogs and presentations on the best way to set up an application that follows this pattern, precisely because this pattern is so prevalent in mobile applications. It is the natural first step from web application to mobile application as you can often re-use much of the backend systems that you used for web applications.
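AsyncTask and Loaders are Android-specific, but the pattern underneath is just "do the slow thing on a background thread, hand the result back through a callback." Here's a plain-JDK sketch of that shape; the HTTP fetch is faked so the example stays self-contained, and FetchTask and Callback are names invented for it:

```java
import java.util.concurrent.*;

// Background work plus callback: the same shape as AsyncTask, minus Android.
public class FetchTask {
    interface Callback { void onResult(String body); }

    private static final ExecutorService POOL = Executors.newFixedThreadPool(2);

    static void fetch(final String url, final Callback cb) {
        POOL.submit(new Runnable() {
            public void run() {
                // Real code would open an HttpURLConnection to `url` here;
                // the response is faked to keep the sketch self-contained.
                String body = "<html>contents of " + url + "</html>";
                // On Android you'd post this back to the UI thread
                // (AsyncTask.onPostExecute does exactly that for you).
                cb.onResult(body);
            }
        });
    }

    public static void main(String[] args) throws Exception {
        fetch("http://example.com/items", new Callback() {
            public void onResult(String body) { System.out.println(body); }
        });
        POOL.shutdown();
        POOL.awaitTermination(2, TimeUnit.SECONDS);
    }
}
```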

Before we go any further, take another look at the C2DM box in the diagram. I promised to talk more about it. C2DM is the cloud-to-device messaging system that is part of Android (or at least Google's version of Android; it's not available on the Kindle Fire, for example). It provides a way for data to get to the application without the usual HTTP. Your server can send a message to the C2DM servers, which will then "push" the message to the device and your app. This is done through a persistent TCP socket between the device and the C2DM servers. Now these messages are typically very small, so often it is up to your application to retrieve additional data -- which could very well go back to using HTTP. Background services on Android can make this very simple to do.

Notice that in the previous paragraph we never once used the word "notification." C2DM was seen by many as Android's equivalent to Apple's Push Notifications. It can definitely fulfill the same use cases. Once your service receives the message, it can then choose to raise a notification on the device for the user to respond to. But C2DM is not limited to notifications, not by a long shot. Publishing a notification is just one of many things that your application can do with the message that it receives from C2DM. What is most interesting about C2DM is that it leads us into another type of application architecture not built around HTTP. In part 2 of this post we will take a deeper look at event-driven architectures.

Tuesday, November 29, 2011

I've commented a little on some of my thoughts about the Occupy Wall Street movement. As someone who values personal freedom and liberty, I'm at odds with many of the OWS ideas, natch. Still I can empathize on many of their complaints and desires to make our nation better. However recently someone made the statement to me that to describe OWS as class warfare was laughable. I don't think that was a very intelligent statement. Sure many Republican pundits like to use the term class warfare way too much, but that doesn't mean the term is always invalid.

So is class warfare applicable to OWS? Well let's imagine a hypothetical OWS protester and a hypothetical modern Marxist. Why a Marxist? Because class struggle is a key component of Marxism. What would a Marxist think of statements like "We are the 99%" and "we are getting nothing while the 1% is getting everything"? I think the Marxist would agree with these statements. Once the Marxist realized that proletariat was an outdated term with a lot of negative connotations, I think he would be quick to switch over to saying "the 99%" instead. Who would be more likely to say "capitalism has failed democracy", the OWS protester or the Marxist? Seems like both. Who would encourage participation in general strikes? Who would be more likely to say "Join the global revolution?" Ok maybe that's too easy and meaningless. Who would favor using the force of the government to take from one group (the 1%) to give to the other group (the 99%)? Now don't get too angry about that statement. I'm not saying that this should or should not be done, just that it is a statement that I think both an OWS protester and a Marxist would agree upon.

My point is that on many issues, it is impossible to distinguish between an OWS protester and a Marxist. Does that mean that OWS protesters favor class warfare? No, it does not. However it does suggest that there is plenty of room for a reasonable person to think that OWS does indeed favor class warfare. Of course OWS is notorious for its fractured message, but I've tried to pull all of the above from its main websites and events. It seems to me that everything I've quoted would be things that the majority of OWS protesters would agree with. Those just happen to be the same things that a Marxist would agree with.

Sunday, October 16, 2011

One of the cool features of the Android SDK is the MediaStore ContentProvider. This is basically a database with metadata about all of the photos on your device. If you are going to display some or all of the photos on a device, you will probably want to show a thumbnail version of each photo. Once again Android's got you covered. The MediaStore.Images.Thumbnails.getThumbnail function can get you a thumbnail for a given picture. But the devil's in the details here. This method will create a thumbnail if one does not already exist. That's a blocking call, but it will return quickly if the thumbnail has already been created. This is an API that any app could call, so if another app has called it for a particular image, then that thumbnail will (probably) already exist. In addition, some phones automatically create thumbnails when you take a picture. However none of these things are guaranteed, so it made me wonder just how many pictures on my phone already had thumbnails made for them. So I wrote a little app to determine this. Here's a screenshot showing the result.

Tapping the first button causes all of the pics in the MediaStore to be counted. Tapping the second button causes all of the thumbnails in the MediaStore to be counted. Tapping the third button counts the thumbs and puts their image ids into a HashSet, then iterates over all of the pics to check whether each has a corresponding thumb. You can find all of the code on GitHub.
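The logic behind that third button, stripped of the Cursor and ContentProvider plumbing, is a simple set-membership check. This sketch runs on plain lists of made-up ids rather than the real MediaStore, but the HashSet trick is the same: one pass over the thumbs, one pass over the pics, O(pics + thumbs) total instead of a nested scan.

```java
import java.util.*;

// Count pictures that have no corresponding thumbnail, given the two id lists.
public class ThumbCheck {
    static int countMissing(List<Long> picIds, List<Long> thumbImageIds) {
        // One pass to index the thumbs by the picture id they belong to.
        Set<Long> thumbs = new HashSet<Long>(thumbImageIds);
        int missing = 0;
        // One pass over the pics, each lookup O(1).
        for (Long id : picIds) {
            if (!thumbs.contains(id)) missing++;
        }
        return missing;
    }

    public static void main(String[] args) {
        List<Long> pics = Arrays.asList(1L, 2L, 3L, 4L);
        List<Long> thumbs = Arrays.asList(1L, 3L, 9L); // 9 is an orphaned thumb
        System.out.println(countMissing(pics, thumbs)); // prints 2
    }
}
```

The orphaned thumb (id 9) in the example is the same phenomenon described below: thumbnails that outlive their photos.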

The results on my phone were a bit surprising. First, the biggest number on the screen is the number of thumbnails. There are more thumbnails than photos! My guess is that photos get deleted but their thumbnails persist. Going back to the original question, it looks like about 90% of the photos on my phone have thumbnails already. So any app that uses the MediaStore thumbnails (as opposed to creating their own) will probably be very snappy. At least on my phone. I'm curious what the results would be on your phone. So here's the APK; you can install it on your phone and let me know what the results are.

Tuesday, October 11, 2011

The Occupy Wall Street movement is an interesting one to me. I have a lot of empathy for the people involved in the movement. They make a lot of great points. They are correct that the wealthiest 1% have an incredible amount of influence with our government. This is particularly true of corporations in the top percentile. The amount of influence (control) wielded is obviously disproportionate and flies in the face of a country that has a republic style of government.

The Tea Party movement is also interesting to me. I have a lot of empathy for the people in the movement. They make a lot of great points. They are correct that the government is too powerful and in so many cases does much more harm than good. The amount of power wielded by the US government flies in the face of a limited government as described by The Constitution.

Both of these groups are outraged by the status quo and for good reason. However these two groups seem to be at opposite ends of the political spectrum. Most consider OWS to be "radically progressive." I've heard some people call them communists or anarchists. On the other hand, the Tea Party is considered to be "radically conservative." I've also heard people call them anarchists.

As you've probably guessed by now, I think both groups have way more in common than they would care to admit. Now demographically they are probably quite different, but that doesn't have to matter. However we have seen the Tea Party get eaten up by the Republican Party. Similarly we are already seeing the Democratic Party eat up the OWS group. President Obama wasted no time getting in on this, just as an incumbent President coming up on an election year should.

So in the end these two groups will be consumed by the very powers that be that they oppose. This is part of how the two party system works. In the end both of these groups just become instruments of the divisive, "rally the base" politics that have been the norm for decades.

I'd love to see an alliance between the two groups, which would be a nightmare for both the Republican and Democratic Parties. Of course there is a fundamental difference between the two groups. As I see it, the OWS folks think that government can be fixed and can "do the right thing." They seem to want to use the very instrument of their misery as its own remedy. The Tea Party philosophy is that the government cannot be fixed, so the only way to lessen the damage that it inflicts is to limit its powers. One could say that the OWS folks are optimistic and the Tea Party folks are pessimistic.

Personally I don't totally agree with either group (surprise surprise!) My philosophy certainly leans toward the Tea Party. I think that in a republic or representative democracy, those with the most wealth will wield more influence. I don't think this can be prevented without undermining democracy, freedom of speech, or freedom of the press. So you are best off by limiting that power. Further, I think that if you make the power of government less attractive (eliminate corporate taxes, subsidies, tariffs etc.), you will also decrease the corruption. It's sort of a chain reaction. Government is corrupt, so make it less powerful. A less powerful government in turn attracts less corruption. It doesn't solve the problem because this problem can't be solved.

However I don't subscribe to the "no new taxes" mantra of the Tea Party either. I do think that taxes are too high, and no I don't care that taxes were once higher or that taxes are higher in other countries. Both of those arguments are logical fallacies. However we have built up a huge debt. If we don't pay down that debt, then our children will have to. We splurged, it's our debt. We should pay it down. And please don't tell me that I can pay extra taxes if I choose to. That's a classic prisoner's dilemma, only on a much larger scale.

Less XML. There are a lot of places where XML is really unnecessary in Android. Top among those things is having to declare Activities, and to a lesser degree Services and BroadcastReceivers. I don't know about your code, but the vast majority of the Activity declarations that I've ever made simply stated the class and maybe some options about config changes and screen orientation. Does this have to be in the manifest XML? I'd prefer to not have to declare Activities in XML and still be able to navigate to them without my app crashing. I'd also like to be able to handle most of their options in Java. This could be as annotations, or maybe some kind of getConfiguration() method that you override, or maybe just as APIs to invoke during onCreate. I think I could live with any of those things. Now if you need to do something more complicated like adding an IntentFilter, then that can be in XML. In fact I'm OK with XML always being allowed, just optional for the simpler cases that make up the majority of the Activities that developers write. You could apply similar logic to Services. If you have a Service that you only communicate with by binding to it and it runs in the same process as your main application, then it seems like you should not need any XML for it.
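For reference, this is the kind of manifest boilerplate in question: a declaration that states little more than the class plus a couple of options (SettingsActivity is a made-up name for the example):

```xml
<!-- A typical AndroidManifest.xml Activity entry: nothing here that
     couldn't live in Java as an annotation or an onCreate-time API call. -->
<activity
    android:name=".SettingsActivity"
    android:configChanges="orientation|keyboardHidden"
    android:screenOrientation="portrait" />
```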

Functional programming. If you've read my blog for very long then this is probably no surprise. This probably requires closures to come to Java, but it seems like we are pretty close to that happening. I think that once that syntax is settled on, then Android should not only get this language feature, but the app framework should be dramatically upgraded to take advantage of it. Technically some of this can be smoothed over by the compiler allowing closures to substitute for single-method interfaces. I'd rather be more explicit about it. And by the way, I don't see why Android should have to wait for Java 8 to be finished by Oracle. Android needs to maintain language and bytecode compatibility, but that doesn't mean it has to use somebody else's javac...
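To make the single-method-interface point concrete, here's the kind of ceremony closures would collapse. The Predicate interface and filter helper are hand-rolled for the example (this predates java.util.function):

```java
import java.util.*;

// Filtering a list with a single-method interface, Java 6/7 style.
public class FilterDemo {
    interface Predicate<T> { boolean apply(T t); }

    static <T> List<T> filter(List<T> in, Predicate<T> p) {
        List<T> out = new ArrayList<T>();
        for (T t : in) if (p.apply(t)) out.add(t);
        return out;
    }

    public static void main(String[] args) {
        List<Integer> nums = Arrays.asList(1, 2, 3, 4, 5);
        // Five lines of anonymous-class ceremony for one line of logic...
        List<Integer> evens = filter(nums, new Predicate<Integer>() {
            public boolean apply(Integer n) { return n % 2 == 0; }
        });
        System.out.println(evens); // prints [2, 4]
        // ...which a closure syntax could collapse to something like
        // filter(nums, n -> n % 2 == 0) once the language supports it.
    }
}
```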

No more Parcelables. Yeah I know that Parcelables are a faster serialization/deserialization solution. Cool. In most cases this requires a developer to write a lot of mindless code. On one hand tooling can help with this, but why not just have the compiler handle it? Then we could get rid of Parcelable. If a class implements good ol' Serializable, then the compiler could easily generate the Parcelable code that tooling could generate, but that is currently generated by annoyed developers. Of course it's cool to let developers override this if they want to do something clever. If a class has fields that are not Serializable, then the compiler could generate a warning or even an error.
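Here's good ol' Serializable doing what Parcelable makes you hand-write: the class just declares its fields and the runtime handles the encoding. (On Android you'd still want the generated Parcelable for speed; this plain-JDK round trip just shows the zero-boilerplate contrast. The User class and helpers are made up for the example.)

```java
import java.io.*;

// Serializable round trip: no writeToParcel, no CREATOR, no field-by-field code.
public class User implements Serializable {
    private static final long serialVersionUID = 1L;
    final String name;
    final int score;
    User(String name, int score) { this.name = name; this.score = score; }

    static byte[] marshal(User u) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        ObjectOutputStream out = new ObjectOutputStream(bytes);
        out.writeObject(u); // the runtime walks the fields for us
        out.close();
        return bytes.toByteArray();
    }

    static User unmarshal(byte[] data) throws IOException, ClassNotFoundException {
        return (User) new ObjectInputStream(new ByteArrayInputStream(data)).readObject();
    }

    public static void main(String[] args) throws Exception {
        User copy = unmarshal(marshal(new User("ann", 42)));
        System.out.println(copy.name + " " + copy.score); // prints ann 42
    }
}
```

The Parcelable version of this class would need a writeToParcel method, a CREATOR field, and a Parcel-reading constructor, all of which a compiler could mechanically derive from the two field declarations above.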

MVC. Now this one is a lot more controversial for me. I'm not a big believer in MVC, so I've always liked that Android wasn't either. However I think that a lot of developers would have an easier time if Android followed a more pure MVC pattern. I think that's one of the ways that Cocoa Touch is easier for new developers to pick up. I've had more than one new developer ask me if an Activity was Android's version of a controller. It is not. But maybe it should be? There's room for things beyond MVC as well. With ICS, Fragments should become the norm for app developers. Developers will have to decide for themselves about the best way to coordinate and communicate fragment-to-fragment, fragment-to-activity, etc. Direct method invocation? Shared memory? Messages/handlers? Event listeners? That's too many choices, and too many choices make life especially difficult for newbie programmers. Provide a "standard" mechanism, while not ruling out the other choices so more experienced developers can still do things the way that they think is best. The same issue exists for activity-to-activity communication.

Better memory management. I'd love to have gobs of RAM and an amazing GC. In the absence of those just do me some favors like make sure all my soft/weak references are GC'd before an OOM. Always fire Application.onLowMemory before an OOM, and give me 20ms to react and prevent the OOM. Kill all non-visible/foreground processes and re-allocate their memory before an OOM. An OOM is a crash and is the worst possible thing for users. I know lots of snooty engineers will look down their nose and say "your app needs to use less memory", but that kind of attitude makes Android seem like a ghetto to end users.

No more sync. This is one of those features in Android that sounds great on paper but is more trouble than it is worth at this point. By adding your app to sync, you let the system schedule a time for your app to synchronize with your server. Sounds great. However your app won't be the only one getting busy at that time; that's part of the point of sync. Worst of all, if the user decides to launch some other app while a sync is in progress, then that app has a good chance of being hosed. Tons of CPU & I/O will be hogged by the sync, and that app will be sluggish to say the least and may even crash. All of this because of sync. What's more, there is no real need for sync anymore. With C2DM, a server can send an event to the phone to let your app know that it needs to sync state with the server. Your server can decide how often this needs to happen and can make sure there are no unnecessary syncs.

Friday, September 02, 2011

Wikipedia describes offshoring as "the relocation of a business process from one country to another." There once was a time in Silicon Valley that offshoring was a dirty word. I was here (in the Valley) at the time and I remember it well. Let me take you back a few years in time and describe what I've seen of offshoring over the years.

I think there is still a general fear and bitterness associated with offshoring in the United States. It was much the same when I first heard about it in the Valley around 2002. It had all started several years before, but it was in the midst of the Valley's worst recession that it started to take full effect. The big companies of the Valley were under tremendous pressure to cut costs. They had offshored things like call centers, and so it only made sense to move up the value chain. There was plenty of programming talent to be found in India and China. Soon it wasn't just big companies, but even startups. It wasn't uncommon to hear about a startup where the founder was non-technical and shipped the programming tasks to a firm in

For me personally, I heard a lot about this kind of offshoring, but it did not affect me until 2004. That was when I was working for Vitria Technology. Vitria had been a shining star in the dot com era, and made a lot of money selling the enterprise integration software known as BusinessWare. However by 2004, Vitria's glory days were long past. They had been searching for their second hit for years, but were still being buoyed by recurring revenue from BusinessWare. I joined to work on one of their new ideas, what was eventually known as Resolution Accelerator or RA for short. I worked on a small team of engineers at Vitria's office in Sunnyvale. We developed RA, but our QA was in India. Vitria had moved its QA functions to India the year before we built RA.

RA turned out to be a success. We had lots of paying customers in healthcare and telecom. RA was built on top of BusinessWare, as that was part of our strategy. However a lot of the technical leadership behind BusinessWare felt that the RA team had built a lot of reusable features that belonged in BusinessWare. So even as we were selling RA to happy customers, we started a new project to migrate many of RA's key components to BusinessWare and then rebuild RA on top of this new version of BusinessWare as it was being developed at the same time. Great idea, right?

I was working with the BusinessWare team, who were essentially re-inventing my wheels, but within the mammoth piece of software that was BusinessWare. However this team turned out to be very different from the team that built RA. There were no developers. There were multiple architects assigned to BusinessWare, as it was the key piece of software in the company. Then there was a tech lead from the BusinessWare team. However this tech lead did not code. Instead he wrote specs that were then implemented by a development team in China and then tested by our QA team in India. This was the model they had used for BusinessWare for a couple of years already. I was essentially a consultant to the tech lead and was not supposed to interact with the developers in China -- even though they were largely writing code that was functionally equivalent to code I had already written. Meanwhile I was the tech lead of RA, and we were now supposed to "graduate" to the same development process as BusinessWare. So now I had a development team in China who I was supposed to direct. My job had "evolved" into writing a spec for developers in China and coordinating with a QA team in India. Sunday through Thursday you could find me on the phone from 9 to 11 PM every night. Good times.

Meanwhile all of the developers on the RA team took off. One went to Google and worked on GMail. One went to Symantec. Finally I could take no more and split for a startup… That startup burned through cash like crazy, and went out of business a year later. All I've got is this patent to show for it, along with the valuable experience of writing my own programming language. In 2007 I went to work for eBay and ran into offshoring at a much larger scale.

Development organizations typically consisted of engineers in San Jose plus teams from our Shanghai office. Plus they were heavily augmented with contractors from two firms based in India. The contractors would have a tech lead in San Jose who interacted with a tech lead from eBay. In addition there would be a development manager who "owned" the application and who was usually the manager of eBay's tech lead on the project. The tech lead's job was similar to what it had been at Vitria. He was in charge of a spec and coordinated with the contractors plus engineers in San Jose and Shanghai. As you might guess, most of the actual coding was being done either in Shanghai or by contractors in India. The tech lead usually didn't interact with the contractors in India though; instead they worked with the contractors' tech lead/liaison. Finally, in addition to the development manager for the application, there might also be an architect -- if the application was important enough. The tech lead worked with the architect on the design that would become the spec that would be sent off to India and China. The tech lead would also interact with a data architect to design database schemas and an operations architect to design the physical deployment and requirements of the application. The point is that the tech lead had almost no chance of actually coding, and this was just accepted and understood.

Just before I left eBay, things started to change a bit. In particular eBay brought in some new blood from Microsoft of all places, who took over the search organization. This was the organization that trailblazed offshoring at eBay. It had been a search VP who had "opened" our Shanghai office, and most of the engineers and QA there were part of the search organization. New management decided that this was not working and sought to move all search engineering to San Jose. I'm not sure how the Indian contractors would play in this new vision, but it sounded like they would be squeezed out too. The Shanghai engineers were unilaterally "reorged" to another organization in the company (an organization that had already been gutted and whose VP was being pushed out, but that's another long story.)

Ok, so what's the point of all of this? I'm not sure to be honest. From my perspective, I was very dissatisfied with how we used offshoring at Vitria. When I joined eBay we were using it in a similar way. If anything, the eBay way seemed even more inefficient. There was an unwritten rule at eBay that if a San Jose tech lead estimated that a technical task would take N days, then you would need to set aside 2*N days if the development was being done in China and 2.5*N days if the development was being done by contractors in India. That might seem harsh, but a big part of it was simply the operational overhead of offshoring. Further, this system was judged a failure by the largest organization at eBay.

At the same time it would be foolish to say that offshoring has been a failure. There is a lot of awesome software development being done via offshoring. I'm not so sure about the practice of "design it in America, develop it offshore" when it comes to software. At the very least I have not seen this work well in person. Then again perhaps the problem is simply one of tech leads who don't code, and that just happened to coincide with my offshoring experiences. Whatever the case, one thing is for certain. There was palpable fear about offshoring a decade ago, and it turned out to be a false fear.

Monday, August 15, 2011

As someone who develops Android apps for a living and who has worked on two Android apps that have been downloaded 10M+ times each, I know about fragmentation. I can tell you all about device differences for cameras, address books, and accelerometers. However when most pundits talk about fragmentation, they usually rely on exactly one example: screen sizes. I'm here to tell you that this is a myth.

If you look at the Screen Sizes and Densities page from the Android developers site, you will see a chart that looks something like the one to the right. There are several slices to this pie, but two of them are much larger than the rest. One is "Normal hdpi" at 74.5% (as of August 2011) and the other is "Normal mdpi" at 16.9%. What does that mean? It means that 91.4% of all devices have a "Normal" sized screen, roughly 3.5 - 4.3". The hdpi ones have a higher resolution of course. So for those devices you will want higher quality images and whatnot.

Of course for anything like this, it is natural to compare to the iPhone. On the iPhone all devices have a 3.5" screen. However you once again have different resolutions between the iPhone 3G/3GS and iPhone 4. So similar to Android, you will probably want to include higher resolution artwork to take advantage of the iPhone 4's higher resolution screen. However as a developer you don't get much of an advantage from the higher resolution screen since the overall size is the same. It's not like you're going to keep your buttons the same number of pixels and thus fit more buttons on the screen, etc.
No wait a minute, there is a difference between 91.4% and 100%. A good chunk of that difference is because of tablets, with their "Xlarge mdpi" screens. You can expect that segment to continue to grow in the future. But again, this is a similar situation to iOS. An iPad has a larger screen than an iPhone. If you care about this, you either make a separate app for the iPad, or you incorporate the alternate layouts within your existing app and make a "universal" app. This is the same deal on Android.
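Handling both density buckets and tablet screens mostly comes down to Android's resource qualifiers. Here's a minimal sketch of a project's res/ tree (the file names are made up; the directory qualifiers are the real Android ones):

```
res/
  drawable-mdpi/icon.png      baseline artwork for Normal mdpi screens
  drawable-hdpi/icon.png      1.5x artwork for Normal hdpi screens
  layout/main.xml             phone layout
  layout-xlarge/main.xml      alternate layout for tablets
```

The framework picks the right resources at runtime, so a single "universal" APK can cover all of these buckets.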

To really make a fair comparison, you would have to exclude the xlarge screens on Android. Then the percentage of phones that fall in the combined Normal mdpi/hdpi bucket is even higher. It's still not 100% but it is pretty close. My point is that if somebody wants to talk to you about Android fragmentation and they use screen size as an example, some healthy skepticism is appropriate.

Sunday, July 24, 2011

At the beginning of this year, I bought a MacBook Air. I bought a maxed-out one, with a 13" screen, 2.13 GHz CPU, 4 GB RAM, and a 256 GB SSD. This past week Apple refreshed the MacBook Air line and I've seen a lot of people asking the question "Could I use a MacBook Air for ___?" So here's what all I use it for, along with a comparison to my work computer, a maxed-out 15" MacBook Pro.

Web browsing. I run Chrome on it and it screams. Actually the MBA set an expectation for me about how ChromeOS and in particular how my Samsung Chromebook should perform. I basically expected the Chromebook to perform exactly like the Chrome browser on my MBA, which is awesome. I was very disappointed, as the Chromebook is nowhere close. Anyways, browsing on the MBA is fantastic. I can notice a slight difference in performance on super-JS heavy sites, with my MBP being slightly smoother. I think a non-engineer would have difficulty spotting these differences.

Word processing. I'm not just talking about rudimentary word processing either. When I got the MBA, I had a couple of chapters and appendices that I was still working on for Android in Practice. I used the MBA to write/complete these chapters. This involved using Microsoft Word along with the Manning Publications template for Word. The chapters were usually in the 30-50 page range, and often with tons of formatting (code, sidebars, etc.) and large graphics. I cannot tell any difference between my MBP and MBA for these massive Word docs.

Programming. Speaking of AIP, part of finishing it meant writing code for those chapters. I've run pretty much all of the code from AIP on my MBA with no problems at all. For smaller apps like I have in AIP, there is no appreciable difference between my MBA and MBP. Building a small app is an I/O bound task, and the SSD on the MBA shines. Now I have also built Bump on my MBA, and there is a very noticeable difference between it and my MBP. There are two major phases to the Bump build. The first is the native compilation, which is single-threaded. You can definitely notice a major CPU speed difference here, even though the raw clock speeds of the two machines are close. The second phase is the Scala/Java compilation phase. This is multi-threaded, and the four cores on my MBP obviously trump the two cores on the MBA. Still, I would say the MBA compares favorably to the late-2008 era MacBook Pro that I used to use for work.

Photography. I used to use Aperture on my old MBP. On the MBA I decided to give Adobe Lightroom a try. So it's not an apples-to-apples comparison. However, Lightroom on my MBA blows away Aperture on my old MBP. I haven't tried either on my current MBP. Obviously the SSD makes a huge difference here. Nonetheless, post-processing on the MBA is super smooth. When it comes to my modest photography, Lightroom presents no problem for my MBA.

Watching video. I haven't watched too much video on my MBA, mostly just streaming Netflix. It is super smooth, though it will cause the machine's rarely used fan to kick in (as will intense compilations, like building Bump.) Next week I am going on a cruise to Mexico, and I will probably buy/rent a few videos from iTunes to watch on the cruise. Based on previous experiences, I expect this to be super smooth and beautiful, though I have considered taking the MBP just for its bigger, high resolution screen.

Presentations. I've done a couple of talks using the MBA. For creating content in Keynote, I do sometimes notice a little sluggishness on the MBA that is not present on my MBP. For playback, it seems very smooth. However, I did have an animation glitch during one presentation. This never happened when previewing just on my MBA's screen, it only happened on the projected screen. Nobody seemed to notice during the presentation, except for me of course.

So there you go. What's a MacBook Air good for? Pretty much everything. If you are a professional developer who compiles software, then I think your money is best spent buying the most powerful computer available. This has always been true. Maybe this is also the case for professional photographers/videographers as well. However my MBA is way more than enough for most folks. Finally keep in mind that the new MBAs introduced this week are a good bit faster than my MBA, at least in terms of CPU speed.

Friday, July 22, 2011

Today I was working on some code that needed to invoke a JavaScript function on a web page that was being displayed in a WebView within an Android application. As part of that function invocation a JSON object is passed in. It's actually a pretty complex JSON object. Of course it must be passed in simply as a string and then parsed by the JavaScript to an object. Android includes the JSON "reference implementation", so naturally I wanted to use this instead of relying on some 3rd party JSON library that would fatten up my APK. The standard way to do this with the reference implementation is to create an instance of org.json.JSONObject and use its toString method. You can create an empty instance and programmatically build up a JSON data structure, or you can give it a Java Map to build from. I chose to go the latter route.
When my web page choked, I wasn't too surprised. I'm never surprised when code I write has bugs initially. I added a log statement to show the data being passed in and was surprised to see that it looked like this:

{"a": "{b={c=d}}", "e":"f"}

This was not correct. This should have looked like this:

{"a": {"b": {"c": "d"}}, "e": "f"}

To create the JSONObject that I was passing in, I passed in a HashMap whose keys were all strings but whose values were either strings or another HashMap. So to create the above structure there would be code like this:
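Here's a sketch of that construction (the variable names are mine; the nesting mirrors the JSON above):

```java
import java.util.HashMap;
import java.util.Map;

import org.json.JSONObject;

// Values are either Strings or nested HashMaps
Map<String, Object> inner = new HashMap<String, Object>();
inner.put("c", "d");

Map<String, Object> outer = new HashMap<String, Object>();
outer.put("b", inner);

Map<String, Object> data = new HashMap<String, Object>();
data.put("a", outer);
data.put("e", "f");

// toString() should emit properly nested JSON, but instead the
// inner maps came out flattened via Map.toString()
String json = new JSONObject(data).toString();
```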

It seemed that the JSONObject code was flattening out my objects and producing incorrect JSON (WTF equal signs!) as a result. I put together a quick workaround to recursively replace any HashMap values with JSONObjects like this:
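Here's a sketch of that workaround (the method name is mine); the idea is to pre-convert nested Maps so JSONObject never falls back on Map.toString():

```java
import java.util.Map;

import org.json.JSONException;
import org.json.JSONObject;

@SuppressWarnings("unchecked")
static JSONObject toJsonObject(Map<String, Object> map) throws JSONException {
    JSONObject json = new JSONObject();
    for (Map.Entry<String, Object> entry : map.entrySet()) {
        Object value = entry.getValue();
        if (value instanceof Map) {
            // Recurse: a nested Map becomes a proper JSONObject
            value = toJsonObject((Map<String, Object>) value);
        }
        json.put(entry.getKey(), value);
    }
    return json;
}
```

With this in place, calling toString() on the result should produce the properly nested JSON.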

Saturday, July 16, 2011

So far I've been pretty disappointed with the Samsung Chromebook that Google gave me for attending I/O this year. That sounds pretty ungrateful for a free computer, but I am intrigued by the idea of ChromeOS. I'd like to see it work, just as a technical achievement. I think that perhaps the Chromebook that I have falls short because of its underpowered hardware, but perhaps its shortcomings are inherent to the OS.
Anyways, obviously the main idea with ChromeOS is that people spend all of their time on the web. However, one of the other common tasks that people do is use their computers to share digital photographs that they took with their camera. This might seem like a hard thing for ChromeOS to handle, but Google claimed that they had this figured out. So I decided to give it a spin. Here are the tools that I used.
I had taken a few dozen pictures at a Giants baseball game I went to last weekend with family. I took the SD card from my camera (a modest Nikon D3000) and plopped it into the SD card slot on the Chromebook. ChromeOS immediately opened a new tab with a file browser that allowed me to navigate and view my pictures on the SD card. Very nice!
You could preview and even delete photos. The preview was a little slow to load. You could select photos and start an upload to ... Picasa, of course. The upload takes place in the background with a little notification window letting you know about the progress. Again, very nice. Once the pics are uploaded, browsing/previewing is much smoother. I assume this is because you are browsing downsized versions of the pics, whereas the file manager on ChromeOS has you browsing the full versions.
One of the things that I often do with my photos is edit them. On my MacBook Air, I use Adobe Lightroom. I didn't expect to have something as sophisticated as Lightroom, but I did expect to be able to do simple things like rotate and crop. I would also expect red-eye removal to be available, since this is a pretty common need. Anyways, the editing tool on the Picasa website is Picnik. I've used it before, and it is great. However, it had significant problems loading on ChromeOS:
I thought that maybe this memory error was because of the size of the photo (2.5 MB). That would imply that Picnik is purely client-side? I don't think so. I would assume that the full size photo was on the server, in which case the memory problem is purely from the Picnik tools loading and has nothing to do with the size of the picture. Either way, I don't think this photo is much above average. Megapixels are cheap these days.

So I couldn't edit the pics once uploaded to Picasa. I actually tried just using Picnik directly, not through Picasa, but it had the same problem. The nice thing is that any web app that allows you to upload pictures from your computer works great with ChromeOS. Here's Facebook for example:
You could essentially upload from your SD card to Facebook this way. I would think that if a tool like Picnik worked, you could edit and then upload. You could probably sell a Chrome app that did that (it should allow tagging of your Facebook friends too) and make a few bucks, assuming Chromebooks started selling.
I suppose a pretty common use case will be to simply upload all of the pics off of your SD card to Facebook. It seems like ChromeOS handles that pretty well. Put in your SD card, open up Facebook, and start uploading. If you use Picasa and Google+, it is even a little simpler. Editing seems to be a problem right now. Much like the general performance of the Chromebook, it might be purely a function of subpar hardware. Maybe if Samsung put more memory in it, then Picnik wouldn't have choked? Hopefully the shortcomings can be addressed through software, since I don't think Google can update the RAM on my Chromebook.

Note: The "screenshots" were taken with my Nexus S. Why? Because the screenshot tool for ChromeOS didn't work for me. It works great on Chrome browser, but it would only capture part of the screen on ChromeOS and then freeze up. Sigh.

Thursday, July 14, 2011

Today was the much anticipated US launch of Spotify. I've been using Rdio for several months, and really love it. Not surprisingly, there are a lot of posts out there comparing the two services. That's all good stuff to read. Here's my summary:

Spotify has more music. If you stick to stuff from the last 10 years that you can find on iTunes, then you probably won't find much difference. But go further back, or go "more obscure," and you will notice the differences.

Spotify has a desktop app, but it is an iTunes clone. The Rdio desktop app is more of a wrapper on their website. So big advantage to Spotify, despite being an ugly iTunes clone. Also, Spotify will play music on your computer, so it tries to be a replacement for iTunes.

The Rdio mobile app is way better, at least on Android. Subjectively, it has a cleaner design and is easier to use. The Spotify mobile app looks like I designed it. Objectively, the Rdio app's sync is far superior. Spotify requires your mobile device to be on the same wifi network as your computer that is running Spotify. On Rdio, you can easily send any song, album, playlist, etc. to all of your mobile devices with no requirements except that you've got an Internet connection.

Tuesday, July 12, 2011

Remember when Netflix started making us wait two weeks to get new releases? It was supposed to be a tradeoff to get more streaming content from Hollywood. Now here we are with higher prices than ever, less streaming content, and still no new releases on DVD.

Why can't we have a Rdio or Spotify for movies and TV? At some price, this should be possible, right? I know that is what I want, and I don't think I'm the only one. How much would you pay for it? I think my max price is somewhere north of $35/month, assuming I can use it on all of my devices. Doesn't it seem like there's a very profitable business out there for this kind of service at that price? I'm guessing backwards thinking in Hollywood is to blame here, along with continued paranoia over piracy. At least HBO seems to be getting this right.

Monday, July 04, 2011

So you might have heard this week that there's this new social networking thing (*yawn*) called Google+. That's right, it's from Google. So it's gonna suck, right? I was skeptical at first, as were many others. After all, nobody roots for The Big Company who clones popular products made by smaller companies, and Google has had a well known poor track record in this space. But after a few days of using Google+, I'm a believer -- in its potential. Here's why.

Google+ is a chance to hit the reset button on social networking. For me, when I first started using Facebook it was mostly to catch up with old college classmates. Two big events happened later. Facebook was opened up to everybody, and Twitter showed up. When Facebook was opened up to everyone, I pretty much accepted friend requests from anyone I knew at all. I still didn't use Facebook that much. Meanwhile I really fell in love with Twitter when it showed up. On there I connected mostly with people in the same or related professions as me. Most of my tweets were around things that had to do with my job (programming, technology.)

Meanwhile on Facebook, I had more and more family members join. Suddenly Facebook became a great place to share family related things, especially pictures. Then I hooked up my Twitter account to Facebook. Now I could just post things to Twitter, and they would show up on Facebook. Then I would occasionally post pictures on Facebook as well. However, most of my tweets were geeky stuff. I did have some co-workers and college classmates who found such things interesting, but more and more most of my friends on Facebook (lots of family and high school classmates) found this useless. Eventually I cut off the Twitter-Facebook connection.

My story here is certainly different from a lot of folks, but I imagine there are a lot of similarities too. Google+ seems to offer the possibility to do things over and get it right this time. The key is its grouping feature, Circles. You have to put people in Circles, so you are motivated to organize your friends. This is important. Facebook and Twitter have both had similar features for a while, and they just aren't used. Twitter's lists aren't really comparable since you still send tweets to everyone. Facebook's groups are more comparable, so why aren't they used?

First and foremost, I don't think Facebook really wants anyone to use them. They have a pretty strong history of trying to decrease privacy on their network. Obviously Facebook benefits if everything posted on their network can be searched and viewed by others on Facebook. It seems like one of those features that they added because some users wanted it, but it did not benefit Facebook The Business. Within a couple of days of Google+'s debut, reports came out of a Facebook engineer easily hacking together the same interface to use with Facebook groups. So clearly it would have been pretty easy for Facebook to let users organize their friends with groups and incorporate those groups into content published on Facebook, but instead Facebook chose not to do this.

This raises the question of why the heck Google+ is doing it. If I had to guess, I doubt that Google really wants to do this either. However, this is one of many places where Google+ feels like something designed around the strengths and weaknesses of its competition, Facebook and Twitter. Privacy was an obvious weakness of Facebook and so Google+ takes advantage of that. It's the kind of thing you do to get market share, whereas Facebook has been doing just the opposite because they are trying to monetize existing users and content.

Privacy is not the only place where Google+ feels like a product that has been cleverly designed around its competition. In fact it reminds me a lot of embrace, extend, extinguish era Microsoft. I think they have realized that they don't necessarily have to come up with something that Facebook and Twitter don't do at all, they can just do a lot of the same things and do them a little better. Some other examples of this are viewing pictures and allowing rich markup in status updates. So they make a slightly better product, then they play their own monopoly card by putting the G+ bar on top of all Google pages, including search results and GMail...

Anyways, going back to privacy... The creation of Circles is only half the battle. The other half is picking which of your Circles to publish to. G+ has made this easy to do, and it is a feature that I want to use. However, I don't know if others will do the same. Right now it still seems like most posts that I read are public. This may change as more people start to use G+, but maybe not.

If it doesn't change, then G+ seems like it will be more of a competitor to Twitter than to Facebook. It already has a lot of similarities, since it uses an asymmetric friendship model like Twitter. I definitely noticed a drop in tweets by those I follow on Twitter since G+ came out. If people don't use the privacy features, then the most it could become is a better Twitter. There have been other better Twitters before, so I don't know if that is good enough. Features like hangouts (group video chat) and huddles (group messaging) seem like they could appeal to Facebook users, but it's hard to say right now. For me, the kind of folks who I use Facebook to communicate with, but would not use Twitter to communicate with, have not even heard of Google+. Yet.

Wednesday, June 29, 2011

There's never any shortage of people debating mobile web apps vs. native apps. I don't want to waste your time with yet another rehash of this debate. Personally I sum it up as a choice between user experience and cost. But obviously I am biased -- I build native apps and even wrote a book on how to do it. So you shouldn't believe anything I have to say about this topic, it must be wrong. Anyways, one of the interesting lines of reasoning that often comes up in this debate is how web apps replaced desktop apps. I want to examine this a little bit closer.

I'm actually writing this blog post from a Chromebook that I got for attending the Google I/O conference last month. This device is perhaps the logical conclusion of the web apps replacing desktop apps axiom. However, I have a problem with that axiom. It is often based on the emergence of popular websites like Yahoo, Google, Amazon, and eBay. The argument is that these apps were web based and the fact that they ran on servers that could be rapidly updated is a key to why they succeeded. The long update cycle of desktop software would have made it impossible for these apps to be anywhere but in a browser.

There is some truth in this, but it's misleading. The most important factor in the success of those apps was that their data was in the cloud. They brought information and interactions that could not exist simply as part of a non-connected desktop app. They had business models that were not based on people paying to install the software. These were the keys. The fact that end users interacted through a web browser was secondary. It's pretty much the same story for newer super popular web apps like Facebook and Twitter.

Going back to the world of mobile for a minute... One of the things that mobile has taught us is that users don't care so much about how they interact with your "web-based" application. Hence the popularity of native apps for web giants like Amazon and Facebook. In fact, some might even argue that users prefer to use native apps to interact with traditional web properties. I won't argue that, but I would definitely disagree with any claims that users prefer to use a browser.

Anyways, the point is that the notion that web apps replaced desktop apps is dubious. In fact, if you look at places where web apps tried to exactly replace desktop apps, such as word processing, they have had limited success. Currently we see a movement to replace music players like iTunes with web apps. These web apps have some distinct advantages, but it is not at all clear that they will prove popular with users. Apple has taken the approach of adding more network based features (store and download your music from their servers) to iTunes instead of going with a web app -- at least for now.

Connected desktop apps have a lot to offer. I often find myself using desktop apps like Twitter, Evernote, Sparrow (for GMail), and Mars Edit (for Blogger.) They provide a better experience than their browser based cousins. Apple's Mac App Store has definitely made such apps more popular on the Mac platform, as they have made it easier to discover, purchase, and update such apps. Speaking of updates, these apps update frequently, usually at least once a month. Again, I think that our expectations have adjusted because of mobile apps.

So will desktop apps make a comeback? Are mobile web apps doomed? I don't know. I think it is very unclear. It's not a given that a computer like this Chromebook that I'm using is "the future." We can haz network without a browser.

Saturday, June 18, 2011

Last month I wrote a bit about cloud music. Since then Apple got in the game with iTunes in the iCloud. I'm not a fan of it because you have to manage what songs are downloaded to your devices before you can listen to them. Of course having to download a song before you can listen to it is not surprising from a company that sells hardware priced by storage capacity. If you didn't have to download to use your media, why would you spend an extra $100 on the 32gb iWhatever instead of the 16gb version? Still you gotta give Apple props on the price point. I am optimistic that competition around price and features between Apple, Google, and Amazon will be very beneficial for consumers.

Anyways, while the music-in-the-cloud market is obviously a very hot market being contested by three of the biggest technology companies in the world, there is a lot more interesting innovation around music going on in technology. This blog post is about two of my favorite music startups/services: Rdio and Turntable.fm. There's tons of tech press on each of these companies, so I won't go into that. I'm just going to talk about why I like them.

Rdio launched in late 2010, and I've been a paying customer since January 2011. It's conceptually similar to subscription services like fake Napster and Rhapsody. What I like about it is how great it is to use on mobile devices. I have it on my Nexus S, on my Nexus One that I listen to when I run/work-out, and on the iPod Touch that I have plugged in to my car. Now you might be thinking, how can I use this on an iPod in my car with no internet connection? Well you can mark songs/albums/playlists to sync to your devices and be usable with no connection. So I can use the Rdio Mac app, mark some songs, and then just connect my iPod or Nexus One to a wifi network, and the songs get synced. Then I can listen to them anytime. I regularly read reviews of new music on Metacritic, listen to some of the music I find there on Rdio, and then sync it to my devices. Then I can listen to it while running or driving to work.

Speaking of discovering new music, that is the sweet spot of Turntable. It's pretty new, I only heard about it earlier this month and started using it this week. The idea of being a DJ for the room may sound lame at first -- and maybe it is -- but it is also awesome. However I will warn you to resist being a DJ. You will be drawn in and waste massive amounts of time. It's completely unnecessary too. Like I said, Turntable is awesome for music discovery. Go to a room that sounds interesting (avoid the Coding Soundtrack unless you really enjoy electronica/dance AND don't mind witnessing pissing contests measured by playing obscure remixes) and enjoy. There's a good chance that you will hear familiar music, but an even better chance you will hear something you've never heard before. The DJs try hard to impress, and you win because of it. It's like Pandora, but better, with no ads, and ... you always have the option to enter the chat or even take your turn at DJ'ing.

Sunday, June 12, 2011

When people talk about smartphones, they often mean iPhones and Android phones. Sure there are still Blackberries out there, I think I once saw a guy using a webOS phone, and Microsoft has 89,000 employees who have to use Windows Phone 7, but it's mostly an Android and iPhone world right now. If you have a phone running Android or iOS, life is pretty good. You've probably got a really powerful phone, with a huge number of apps and games to choose from. You've also got a web browser that's more advanced than the one used by most desktop computer users. So given all of those things to be happy about, why is it that smartphone owners are so hostile to each other?

iPhone users love to look down their noses and make derisive comments about Android users. Not to be outdone, Android users never miss an opportunity to mock iPhone users. There is an obvious parallel here to Mac vs. Windows, but I think it's actually much nastier than Mac vs. Windows ever was. Here's a list of my theories (in no particular order) on why there is such animosity.

It's so easy to do. The truth is that both iOS and Android are deeply flawed. Going back to the Mac vs. Windows comparison, think about how much less mature these smartphone OS's are compared to Windows and OSX. So if you have any inclination to make fun of either OS, it's embarrassingly easy to do. This is where things really differ from Mac vs. Windows, where there was not much to debate. If you wanted to say "Mac sucks," there was only a short list of things that you could point to. Not true for iOS or Android; the material is plentiful on both sides.

Social networking. In this day and age of social networks, your "friends" shove what they are doing in your face constantly. Smartphone apps make heavy use of this as a way to spread virally. But what happens when there is an impedance mismatch caused by apps available on one OS but not the other? The folks with the unsupported OS get a little annoyed, and the other folks get an artificial feeling of l33tness.

It starts at the top. Apple and Google both frequently take shots at each other. They do it at developer conferences. They do it when reporting quarterly earnings. It doesn't stop there. Motorola, Verizon, T-Mobile and others have all taken shots at Apple in commercials seen by millions.

Big decisions. You only have one smartphone (unless you are weird like me), and in the US you usually buy it on a two-year contract. So you have to pick just one, and then you are stuck with it for two years. Thus if anybody suggests that you made a bad decision, you are likely to defend it vehemently.

The Apple effect. So this really is straight out of Mac vs. Windows. Apple wants their products to seem elite and luxurious. They want the owners to feel like they have purchased far superior products, and to feel that they (the users) are superior because of this. It's brilliant marketing, and it totally works. The air of superiority hangs over any Apple product owner, but especially iPhone users. So naturally any non-iPhone users are going to react negatively to the haute attitudes of iPhone users.

Friday, June 10, 2011

This week was WWDC 2011. Last year I was lucky enough to attend what appears to be the final stevenote. This year I followed along online, like much of Silicon Valley. There are a lot of reasons why so many of us who work in the tech world pay such close attention to the WWDC keynote. This is the place where Apple typically unveils innovative hardware and software. However, this year's event reminded me of another watershed moment in recent US history: John McCain choosing Sarah Palin as his VP candidate back in 2008.

These two events were similar because they were both examples of rallying the base. In 2008, McCain decided against trying to appeal to moderate Republicans, Democrats, and independents who were either inclined to vote for Obama or undecided. Instead he went with Palin, a candidate who did not appeal to those people. The idea was to appeal to the most conservative elements of the Republican party and get those people to vote instead of staying home for whatever reason. Obviously this did not work.

So how was WWDC 2011 a rallying-the-base tactic? Apple did not try to introduce new software or hardware that would get non-iPhone owners to go out and buy an iPhone, or to more strongly consider buying an iPhone the next time they were in the market for a new phone. Instead they did their best to make sure that current iPhone owners continued to buy iPhones. The strategy was two-fold.

First, they needed to shore up the places where they were weak and others were strong. Now let's be honest here: by others we are talking about Android/Google. There were a couple of glaring problems with iOS 4. First was notifications, so Apple essentially adopted Android's model here. Second was the dependency on iTunes, the desktop software application. They introduced wireless sync and their iCloud initiative to address this weakness. Apple did not break any new ground in any of these areas; they simply removed some obvious reasons for people to buy an Android device over an iPhone.

Phase two of rallying the base was to increase lock-in. If you are an iPhone user, you already experience lock-in. Buying an Android phone means losing all of your apps and games. Depending on what you use for email, calendar, etc. you might also lose that data too. Of course this is true to some degree about any smartphone platform. However, with the expansion of the Android Market (I've seen many projections that it will be bigger than the App Store soon), pretty much every app or game you have on your iPhone can be found on Android. Further, there's a good chance that it will be free on Android, even if you had to pay for it on the iPhone. Further, with the popularity of web based email, especially GMail, you probably would not lose any emails, calendar events, etc. So the lock-in was not as high as Apple needed it to be. Enter iCloud and iMessaging.

As many have noted, iCloud/iMessaging does not offer anything that you could not get from 3rd party software. Syncing your docs, photos, email, calendar, etc. is something that many of us already do, and that includes iPhone users. Further, many folks already have IM clients that do everything that iMessaging does. The big difference is that all of those existing solutions are not tied to iOS or OSX. Thus they are cross-platform (no lock-in), but that also means that you have to add this software to your devices. It's very nice for users not to have to worry about installing Dropbox, Evernote, or eBuddy. But the obvious win for Apple here is the lock-in. If you start relying on Pages for writing docs and syncing them across devices, you are going to be very reluctant to buy anything other than an iPhone (and a Mac for that matter.) If you get used to using iMessaging to chat with your other iPhone-toting friends, same thing.

Apple is keeping the cost of using all of their new offerings very low. It's a classic loss leader strategy. It's ok if iCloud and iMessaging both lose a lot of money for Apple. If they can just lock-in existing iPhone users, they can continue to make huge profits. In that scenario, it's ok for Google/Android to have N times the number of users as Apple. The Apple users won't be going anywhere, and they spend a lot of money. Seems like a smart strategy by Apple. It should work out much better than it did for the Republican party in 2008.

Saturday, June 04, 2011

For nearly a decade now technology pundits have been talking about the end of Moore's Law. Just this week, The Economist ran an article about how programmers are starting to learn functional programming languages to make use of the multi-core processors that have become the norm. Indeed inventors of some of these newer languages like Rich Hickey (Clojure) and Martin Odersky (Scala) love to talk about how their languages give developers a much better chance of dealing with the complexity of concurrent programming that is needed to take advantage of multi-core CPUs. Earlier this week I was at the Scala Days conference and got to hear Odersky's keynote. Pretty much the second half of his keynote was on this topic. The message is being repeated over and over to developers: you have to write concurrent code, and you don't know how to do it very well. Is this really true, or is it just propaganda?

There is no doubt that the computers we buy are now multi-core. Clock speeds on these computers have stopped going up. I am writing this blog post on a MacBook Air with a dual-core CPU running at 2.13 GHz. Five years ago I had a laptop with a 2.4 GHz processor. I'm not disputing that multi-core CPUs are the norm now, and I'm not going to hold my breath for a 4 GHz CPU. But what about this claim that it is imperative for developers to learn concurrent programming because of this shift in processors? First let's talk about which developers. I am only going to talk about application developers, by which I mean developers who are writing software that is directly used by people. Maybe I'll talk about other types of developers later, but I will at least start off with application developers. Why? I think most developers fall into this category, and I think these are the developers who are often the target of the "concurrency now!" arguments. It also allows me to take a top-down approach to this subject.

What kind of software do you use? Since you are reading this blog, I'm going to guess that you use a lot of web software. Indeed a lot of application developers can be more precisely categorized as web developers. Let's start with these guys. Do they need to learn concurrent programming? I think the answer is "no, not really." If you are building a web application, you are not going to do a lot of concurrent programming. It's hard to imagine a web application where one HTTP request comes in and a dozen threads (or processes, whatever) are spawned. Now I do think that event-driven programming like you see in node.js will become more and more common. It certainly breaks the assumption of a 1-1 mapping between request and thread, but it most certainly does not ask/require/suggest that the application developer deal with any kind of concurrency.

The advancements in multi-core processors have definitely helped web applications. Commodity app servers can handle more and more simultaneous requests. When it comes to scaling up a web application, Moore's Law has not missed a beat. However it has not required all of those PHP, Java, Python, and Ruby web developers to learn anything about concurrency. Now I will admit that such apps will occasionally do something that requires a background thread, etc. However this has always been the case, and it is the exception to the type of programming usually needed by app developers. You may have one little section of code that does something concurrent, and it will be tricky. But this has nothing to do with multi-core CPUs.

Modern web applications are not just server apps though. They have a lot of client-side code as well, and that means JavaScript. The only formal concurrency model in JavaScript is Web Workers. This is a standard that has not yet been implemented by all browsers, so it has not seen much traction yet. It's hard to say if it will become a critical tool for JS development. Of course one of the most essential APIs in JS is XMLHttpRequest. This does indeed involve multiple threads, but again this is not exposed to the application developer.

Now one can argue that in the case of both server side and client side web technologies, there is a lot of concurrency going on but it is managed by infrastructure (web servers and browsers). This is true, but again this has always been the case. It has nothing to do with multi-core CPUs, and the most widely used web servers and browsers are written in languages like C++ and Java.

So is it fair to conclude that if you are building web applications, then you can safely ignore multi-core CPU rants? Can you ignore the Rich Hickeys and Martin Oderskys of the world? Can you just stick to your PHP and JavaScript? Yeah, I think so.

Now web applications are certainly not the only kind of applications out there. There are desktop applications and mobile applications. This kind of client development has always involved concurrency. Client app developers are constantly having to manage multiple threads in order to keep the user interface responsive. Again this is nothing new, and it has nothing to do with multi-core CPUs. It's not as if app developers used to do everything in a single thread and only now, with the arrival of multi-core CPUs, need to figure out how to manage multiple threads (or actors or agents or whatever.) Now perhaps functional programming can be used by these kinds of application developers. I think there are a lot of interesting possibilities here. However, I don't think the Hickeys and Oderskys of the world have really been going after developers writing desktop and mobile applications.

So if you are a desktop or mobile application developer, should you be thinking about multi-core CPUs and functional programming? I think you should be thinking about it at least a little. Chances are you already deal with this stuff pretty effectively, but that doesn't mean there isn't room for improvement. This is especially true if language/runtime designers start thinking more about your use cases.

I said I was only going to talk about application developers, but I lied. There is another type of computing that is becoming more and more common, and that is distributed computing. Or is it called cloud computing? I can't keep up. The point is that there are a lot of software systems that run across a cluster of computers. Clearly this is concurrent programming, so bust out the functional programming or your head will explode, right? Well, maybe not. Distributed computing does not involve the kind of shared mutable state that functional programming can protect you from. Distributed map/reduce systems like Hadoop manage shared-state complexity despite being written in Java. That is not to say that distributed systems cannot benefit from languages like Scala; it's just that the benefit is not necessarily the concurrency/functional programming angle that is often the selling point of these languages. I will say that Erlang/OTP and Scala/Akka do have a lot to offer distributed systems, but those frameworks address different problems than multi-core concurrency.

It might sound like I am an imperative-programming-loving curmudgeon, but I actually really like Scala and Clojure, as well as other functional languages like Haskell. It's just that I'm not sure that the sales pitch being used for these languages is accurate/honest. I do think the concurrency/functional programming angle could have payoffs in the land of mobile computing (desktop too, but there's not much future there.) After all, tablets have already gone multi-core and there are already a handful of multi-core smartphones. But these languages have a lot of work to do there, since there are already framework features and common patterns for dealing with concurrency in mobile. Event-driven programming for web development (or the server in client/server in general) is the other interesting place, but functional languages have more to offer framework writers than application developers in that arena. My friend David Pollak recently wrote about how the current crop of functional languages can hope for no more than to be niche languages like Eiffel. I think that he might be right, but not just because functional programming has a steep learning curve. If all they can offer is to solve the concurrency problem, then that might not be enough of a problem for these languages to matter.

Friday, June 03, 2011

One of the exciting technologies being shown off at Google's I/O conference this year was near field communication or NFC. It certainly got my interest, so I attended an excellent talk on NFC. Here's a video of the talk:

One of the things mentioned in the talk was that you did not want to use NFC for any kind of long-running, verbose communication. Its range was too short and its transfer speed was too slow. Bluetooth was the way to go for such data exchange, so what you really wanted to do was an NFC-to-Bluetooth handoff. It was even mentioned that the popular Fruit Ninja game did this, or would do this in the future. Earlier this week at Bump we had our second hackathon. I decided that local communication using NFC and Bluetooth would make for an interesting hackathon project. So based on what I had learned from the I/O presentation, the examples in the Android SDK, and a tip from Roman Nurik, here's some code on how to do the NFC-to-Bluetooth handoff to set up a peer-to-peer connection between two phones and exchange data between them.
We'll start with the NFC pieces. You want the phone to do two things. First, it needs to broadcast an NFC "tag". This tag can have whatever information you want in it. In this case we will have it send all of the information needed to setup a Bluetooth connection: the Bluetooth MAC address for our phone plus a UUID for our app's connection. You can add more stuff to the tag as well, but these two parts are sufficient. Technically you could do without the UUID, but you'll want this in case other apps are using a similar technique. Here is some simplified code for generating an NFC text record:
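A minimal sketch of what that helper might look like (the class and method names are my own; the payload layout follows the NDEF well-known text record format, and this version assumes ASCII-only text):

```java
import java.nio.charset.Charset;

import android.nfc.NdefRecord;

public class NfcUtil {
    // Build an NDEF "Text" record (TNF_WELL_KNOWN / RTD_TEXT).
    // Simplified: assumes English/ASCII text only.
    public static NdefRecord newTextRecord(String text) {
        byte[] lang = "en".getBytes(Charset.forName("US-ASCII"));
        byte[] body = text.getBytes(Charset.forName("US-ASCII"));

        // Payload = status byte + language code + text.
        // The status byte holds the language code length (UTF-8 flag clear).
        byte[] payload = new byte[1 + lang.length + body.length];
        payload[0] = (byte) lang.length;
        System.arraycopy(lang, 0, payload, 1, lang.length);
        System.arraycopy(body, 0, payload, 1 + lang.length, body.length);

        return new NdefRecord(NdefRecord.TNF_WELL_KNOWN, NdefRecord.RTD_TEXT,
                new byte[0], payload);
    }
}
```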

This code only handles English/ASCII characters. Take a look at the Android samples for a more generic approach. Next we need to get the Bluetooth MAC address to pass in to the above function. That is simply: BluetoothAdapter.getDefaultAdapter().getAddress(). Now we can create the text record to broadcast using NFC. To do this, you need to be inside an Android Activity:
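A sketch of the broadcasting side, using the foreground NDEF push API that shipped with Android 2.3.3 (the Activity and field names are mine, and the text-record helper is the one described above):

```java
import android.app.Activity;
import android.nfc.NdefMessage;
import android.nfc.NdefRecord;
import android.nfc.NfcAdapter;
import android.os.Bundle;

public class HandoffActivity extends Activity {
    private NfcAdapter nfcAdapter;
    private String msg; // Bluetooth MAC + app UUID (+ timestamp), assembled elsewhere

    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        nfcAdapter = NfcAdapter.getDefaultAdapter(this);
    }

    @Override
    public void onResume() {
        super.onResume();
        // Wrap our text record in an NDEF message and broadcast it
        // while this Activity is in the foreground.
        NdefMessage message = new NdefMessage(
                new NdefRecord[] { NfcUtil.newTextRecord(msg) });
        nfcAdapter.enableForegroundNdefPush(this, message);
    }

    @Override
    public void onPause() {
        super.onPause();
        nfcAdapter.disableForegroundNdefPush(this);
    }
}
```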

In this code there is a String called msg that I didn't show being generated. It would have the Bluetooth MAC address, as well as the UUID for your app, plus whatever else you want to include in the NFC broadcast. Now when your app loads, it will use NFC to broadcast the info needed for the Bluetooth handoff. The app needs to not only broadcast this, but also listen for this information as well:
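The listening side might look roughly like this, using NFC foreground dispatch. This is a sketch: the helper method name is mine (called from the Activity's onResume, alongside the NDEF push setup), and the text/plain filter is an assumption for the text record used here:

```java
import android.app.PendingIntent;
import android.content.Intent;
import android.content.IntentFilter;
import android.nfc.NfcAdapter;

// Inside the same Activity; call this from onResume.
private void enableNfcDispatch() {
    // Route matching NFC intents back to this very Activity.
    PendingIntent pendingIntent = PendingIntent.getActivity(this, 0,
            new Intent(this, getClass())
                    .addFlags(Intent.FLAG_ACTIVITY_SINGLE_TOP), 0);

    // Only listen for NDEF tags carrying plain text records.
    IntentFilter ndef = new IntentFilter(NfcAdapter.ACTION_NDEF_DISCOVERED);
    try {
        ndef.addDataType("text/plain");
    } catch (IntentFilter.MalformedMimeTypeException e) {
        throw new RuntimeException("bad mime type", e);
    }

    nfcAdapter.enableForegroundDispatch(this, pendingIntent,
            new IntentFilter[] { ndef }, null);
}
```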

This code configures an NFC listener using an IntentFilter and a type of NFC tag (there are many.) It uses a PendingIntent for this same Activity. So when an NFC tag matching our criteria (based on the IntentFilter and tag type) is discovered, an Intent will be fired and routed to our Activity (because that's the Activity we put in the PendingIntent.) Now we just need to override the onNewIntent method of our Activity, since that is what will be invoked when an NFC tag is encountered:
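A sketch of that override. The parsing assumes the ASCII text record built earlier, and handleHandoffMessage is a hypothetical method that would parse out the MAC address, UUID, and timestamp and kick off the Bluetooth threads:

```java
import java.nio.charset.Charset;

import android.content.Intent;
import android.nfc.NdefMessage;
import android.nfc.NdefRecord;
import android.nfc.NfcAdapter;
import android.os.Parcelable;

@Override
public void onNewIntent(Intent intent) {
    if (NfcAdapter.ACTION_NDEF_DISCOVERED.equals(intent.getAction())) {
        Parcelable[] raw = intent.getParcelableArrayExtra(NfcAdapter.EXTRA_NDEF_MESSAGES);
        NdefMessage message = (NdefMessage) raw[0];  // exactly one message...
        NdefRecord record = message.getRecords()[0]; // ...with one text record

        // Undo the text record layout: status byte, language code, then text.
        byte[] payload = record.getPayload();
        int langLength = payload[0] & 0x3F;
        String text = new String(payload, 1 + langLength,
                payload.length - 1 - langLength, Charset.forName("US-ASCII"));

        handleHandoffMessage(text); // hypothetical: parse MAC/UUID/timestamp, connect
    }
}
```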

For our example, there should only be one NdefMessage received, and it should have exactly one NdefRecord, the text record we created earlier. Once we get the message from the NFC tag, it's time to start the Bluetooth connection. Bluetooth uses sockets and requires one device to act as a server while the other acts as a client. So if we have two devices setting up a peer-to-peer Bluetooth connection, which one is the server and which is the client? There are a lot of ways to make this decision. What I did was have both phones include a timestamp as part of the NFC tag they broadcast. If a phone saw that its timestamp was smaller than the other's, then it became the server. At this point you will want to spawn a thread to establish the connection. Here's the Thread I used:
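The server-side thread might look something like this (class and helper names are mine; CommThread stands in for whatever thread does the actual socket I/O):

```java
import java.io.IOException;
import java.util.UUID;

import android.bluetooth.BluetoothAdapter;
import android.bluetooth.BluetoothServerSocket;
import android.bluetooth.BluetoothSocket;

public class AcceptThread extends Thread {
    private final BluetoothServerSocket server;

    public AcceptThread(UUID appUuid) throws IOException {
        BluetoothAdapter adapter = BluetoothAdapter.getDefaultAdapter();
        server = adapter.listenUsingRfcommWithServiceRecord("handoff", appUuid);
        // Turning off discovery lets the client connect much more quickly.
        adapter.cancelDiscovery();
    }

    @Override
    public void run() {
        try {
            // Blocks until the client connects; never call this on the UI thread.
            BluetoothSocket socket = server.accept();
            server.close();
            new CommThread(socket).start(); // hypothetical read/write thread
        } catch (IOException e) {
            // listening failed or was cancelled
        }
    }
}
```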

This Thread uses the device's BluetoothAdapter to open up an RFCOMM socket. Once you start listening, you'll want to immediately turn off Bluetooth discovery. This will allow the other device to connect much more quickly. The server.accept() call will block until another device connects (which is why this can't be in the UI thread.) Here's the client thread that will run on the other device:
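And the client side, again as a sketch (ConnectThread and CommThread are my names; the MAC address and UUID come from the NFC tag):

```java
import java.io.IOException;
import java.util.UUID;

import android.bluetooth.BluetoothAdapter;
import android.bluetooth.BluetoothDevice;
import android.bluetooth.BluetoothSocket;

public class ConnectThread extends Thread {
    private final BluetoothSocket socket;

    public ConnectThread(String serverMac, UUID appUuid) throws IOException {
        BluetoothAdapter adapter = BluetoothAdapter.getDefaultAdapter();
        // Look up the server device by the MAC address from the NFC tag.
        BluetoothDevice device = adapter.getRemoteDevice(serverMac);
        socket = device.createRfcommSocketToServiceRecord(appUuid);
        // Discovery slows down connect(), so make sure it's off.
        adapter.cancelDiscovery();
    }

    @Override
    public void run() {
        try {
            socket.connect(); // blocks until the RFCOMM connection is up
            new CommThread(socket).start(); // hypothetical read/write thread
        } catch (IOException e) {
            // unable to connect
        }
    }
}
```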

On the client thread, you find the other device by using its MAC address (not the client's.) Then you connect to it using the shared UUID. On both client and server, we started another thread for communication. From here on out this is just normal socket communication. You can write data on one end of the socket, and read it from the other.

Monday, May 23, 2011

I've played ESPN fantasy sports for many years. ESPN also creates one of my favorite mobile apps, their ScoreCenter app (though it could be sooo much better.) However their fantasy baseball app is one of the most frustrating apps out there. It provides access to your fantasy baseball teams. The other way to access your teams is through the website. The website is what sets your expectation of course. When you login to the website, you are first presented with a list of your teams. Once you choose a team, you are shown your team's stats for the day:

This not only sets one's expectations for interacting with your fantasy baseball team, but it really is quite useful. You want to see how your team is doing today when you login. We should expect something similar from the corresponding mobile app. Instead you get this:

This is from the Android app, but the iPhone one is almost identical (which is also a sad state of things.) You start off having to pick your team. However when you pick your team, you are presented with the season stats for all of your players. That's not what you want. To see how your team did today, you have to open up this crazy selector, then change the option whose default value is "Season" (we only see the value, not the name of the property) to "Day." This brings us back to the crazy selector, where you select Done and then you get today's stats for your team.

What's even worse is that this is the 2011 version of the app. There was a (different) app for 2010. To get the 2011 one, you had to buy it. The 2010 app had the same frustrating UI. Last year at WWDC I talked about this to the developer of the iPhone app (which is exactly as bad as the Android one.) I told them about how this sucks. So not only did the 2010 app not get fixed, but they didn't fix it for the 2011 app which was a "new" app that required one to