On a drive from Colorado to Las Vegas this past week, my daughter and son were in the back seat of our car using my daughter’s netbook. She recently turned 7, so I bought her a netbook and am starting to teach her how to code. My son wanted her to change the video they were watching, and she began to explain to him how the internet works.

She told him that all of her stuff was on the internet ( emphasis mine ) and that the movie they were watching was the only one on her netbook. She explained that her computer was barely useful without the internet, that the internet came from the sky, and that her computer needed a clear view of the sky to receive it. And since we were in the car and the roof was obscuring said view, they couldn’t get the internet, and couldn’t change the movie.

Listening to this conversation gave me a bit of pause as I realized that to my children, the internet is an ethereal cloud that is always around them. To me it is a mess of wires, switches, and routers with an endpoint that has limited wireless capabilities. When I thought it through, however, I realized that my kids had never seen a time when someone had to plug in their computer to get to the web. Plugging in an ethernet cable is as old school as dial-up.

Once that sank in, I understood that the Cr-48, Google’s Chrome OS netbook, is a step in the right direction. But while I am very enthusiastic about several aspects of Google’s vision of a web-based future, and in all fairness others’ visions too, I do not feel that the current approach will work.

A centralized system where all of users’ data lives, and through which all communications flow, is not an architecturally sound approach. As the number of devices each user carries goes up, the amount, size, and variety of connections is going to multiply the stress on the servers.
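A quick back-of-envelope sketch of the shape of this problem. The numbers below are entirely made up, purely illustrative: hold the user base constant, increase only devices per user and connections per device, and watch the connection count a central service must terminate balloon.

```python
def concurrent_connections(users, devices_per_user, conns_per_device):
    """Total simultaneous connections a central service must hold open."""
    return users * devices_per_user * conns_per_device

# Circa-2010 shape of things: one laptop per user, a couple of sockets each.
before = concurrent_connections(users=100_000_000, devices_per_user=1, conns_per_device=2)

# Phones, tablets, netbooks, with each app keeping its own push channel open.
after = concurrent_connections(users=100_000_000, devices_per_user=4, conns_per_device=5)

print(before)  # 200 million connections
print(after)   # 2 billion connections: a 10x jump with the same user base
```

The user count never changed; only the per-user device behavior did, and the central servers bear the whole multiplied load.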

It is already incredibly difficult to keep servers running at internet scale; we need entire redundant data centers to keep even small and simple web-scale endeavors running. Step back and you realize that a system like Facebook is barely working: it takes constant vigilance and hands-on attention to keep it running. It isn’t like a body, where each additional bit adds structural soundness to the overall system; instead, each additional bit makes the system more unwieldy and pushes it closer to breaking.

Google is another example of a system near the breaking point. They are obviously struggling to keep their physical plant serving their users, and like Facebook they have been clever enough to meet each challenge so far. But looking at the economics, the only reason this approach has been endorsed is how wildly lucrative it has been to mine usage patterns and the data users generate.

I don’t think this will continue to be the case as the web reaches ever larger groups of people. I don’t think any particular centralized infrastructure can scale to every person on the globe, with each individual generating and sharing petabytes of data each year, which is where we are headed.

From a security and annoyance perspective, spam, malware, and spyware are going to be ever-increasing, and ever more dangerous, threats. With so much data centralized in so few companies with such targeted reach, it is pretty easy to send viruses to specific people, or to gain access to specific individuals’ data. If an advertising company can use a platform to show an ad to you, why can’t a hacker or virus writer?

The other problem, currently affecting Google severely with Facebook next, is content spam: the parking pages you come across when you mistype something in Google. Google should have removed these pages ages ago, but their policy allows them to exist. Look at all of the Stack Overflow clones out there; they add no real value, serving nothing but Google AdSense off of Creative Commons content. What is annoying is that, because of the ads, they take forever to load. With a search engine like Duck Duck Go things are better, but likely only because it is still small. DDG also says it will not track its users, which is awesome, but how long will that last?

It is possible for a singularly altruistic person to algorithmically remove the crap from the web in their search engine, but eventually, it seems, everyone bows to commercial pressure and lets it in, in one fashion or another.

Concentrating all of the advertising, the content aggregation, and the content itself in a couple of places seems nearsighted as well. The best way to make data robust is to distribute it. Making Facebook, or Google, or Apple for that matter, the only place you keep your pictures is probably a bad idea. Maybe it makes sense to use all three, but that is a nuisance, and these companies are not likely to ever really cooperate.

It seems to me that something more akin to Diaspora, with a little bit of Google Wave, XMPP, the iTunes App Store, and BitTorrent, is a better approach. Simply put, content needs to be pushed out to the edges, into small, federated private clouds.

This destroys most of the value concentrated by the advertising-based incumbents, but it creates the opportunity for the free market to bring its forces to bear on the web. If a particular user has content that is valuable, they can make it available for a fee. As long as a directory service can be created that allows people to find that content, and the ACLs for that content exist on, and remain under the control of, the creator’s own node, that individual’s creation cannot be stolen.
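To make the directory-plus-ACL idea concrete, here is a toy sketch in Python. Everything in it (the `Node` and `Directory` classes and their method names) is my own hypothetical invention; the point is only the shape of the design: content and its ACLs live on the creator’s own node, while the directory knows just where things are, never what they are.

```python
class Node:
    """A creator's private node: content plus creator-controlled ACLs."""
    def __init__(self, owner):
        self.owner = owner
        self.content = {}   # content_id -> payload
        self.acl = {}       # content_id -> set of users allowed to fetch

    def publish(self, content_id, payload, allowed):
        self.content[content_id] = payload
        self.acl[content_id] = set(allowed)

    def fetch(self, content_id, requester):
        # The ACL check happens on the creator's own node, so access
        # can never be granted by some central aggregator.
        if requester in self.acl.get(content_id, set()):
            return self.content[content_id]
        return None

class Directory:
    """Federated lookup: maps content ids to the node hosting them."""
    def __init__(self):
        self.index = {}  # content_id -> Node

    def register(self, content_id, node):
        self.index[content_id] = node

    def locate(self, content_id):
        return self.index.get(content_id)

# Usage: Alice publishes a photo that only Bob may see.
alice = Node("alice")
directory = Directory()
alice.publish("photo-1", b"<jpeg bytes>", allowed={"bob"})
directory.register("photo-1", alice)

host = directory.locate("photo-1")
print(host.fetch("photo-1", "bob"))      # the payload: Bob is on the ACL
print(host.fetch("photo-1", "mallory"))  # None: the creator never granted access
```

The design choice worth noticing is that the directory could be run by anyone, or replicated everywhere, without ever becoming a honeypot of user data, because it holds only pointers.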

Once the web is truly pervasive, this sort of system can be built. It will, however, require new runtimes, new languages, protocols, and operating systems. This approach is so disruptive that none of the existing large internet companies are likely to pursue it. I intend to work on it, but I’m so busy that it is difficult. Fortunately, my current endeavor has aspects that are helping me build skills that will be useful for this later, such as working with the BEAM (Erlang/OTP) VM.

The benefit is to individuals more than to companies. It is similar to the concept of a decentralized power grid: each node is a self-sufficient generator, and the system is nearly impossible to destroy as long as there is more than one node.

This weekend, on a bike ride, I was thinking through the Apple vs Google situation, as well as paid vs non-paid and the whole concept of open systems vs closed, and I came to the conclusion that it is really just about geeks vs non-geeks.

For about the past 20 years, computer stuff, anything digital really, has been produced primarily by the geeks at Microsoft, and later by various open source geeks around the world. It reflected their world view: that everyone ought to be able to tinker, and that they might want to. This caused the severe confusion that people have had for years.

It would appear that now that consumers have a clear and viable choice in Apple and the iPhone, they are choosing, in droves really, the closed, app-store-based system. It would appear that consumers prefer an app store to the open web, a single coherent vision to an assembly of different developers’ visions of the optimal way to do x. As Apple likes to put it, they want an appliance, in which applications are just another type of content and every way of doing anything is consistent.

I would say that consumers have chosen that, but not because Apple always provides a superior method, or because they like being closed and limited. I would say it is because we, as geeks, have not done a good job of providing clear and usable alternatives. For developers and geeks, configuration and making tons of choices are just table stakes for getting our devices and software working exactly the way we want them to work. We have a difficult time creating things that foreclose the ability to choose a different way. Part of that is that most of us never have the hubris to think we can decide for others how to do a given thing, or which thing to choose. But that is exactly what makes Apple more powerful than Google to the consumer. Google is catching on, but in a way, at the same time, they just don’t get it.

I, personally, understand and prefer many choices. I like Mac OS X and Linux particularly because there are so many different ways to set things up, and the third-party developer community, around the Mac especially, has done an amazing job of filling in the usability gaps that Apple has left. Should users choose these productivity enhancers, Apple has wisely seen fit to let the third-party devs keep doing their thing. The problem with Android, and the internet in general, is that most people are not like us. They don’t want to seek out and try five different text editors, window managers, and text-expanding solutions before finding the right one. They want to just use the thing most of the time, and they would prefer that the base implementation didn’t suck.

Geeks, and Google, would prefer to just let the base interfaces and systems suck, since our partners are either going to replace them or augment them. That is exactly what shouldn’t happen. Technical solutions should be like European socialism: the government provides a generally acceptable set of services that everyone pays for, but it is possible to get better ones. This provides something of a floor for service providers. Likewise, if you are developing a music solution, for example, provide a playback solution that works with it first, then give users the ability to plug into other services if they prefer. That way, they aren’t left hanging initially.

Where I get frustrated with Apple, and why I continue to choose Google’s services even though they are less usable, is that Apple does not give me that latter option. They provide a kick-ass initial implementation, but when I want to replace or augment it, particularly around the iPhone ecosystem, there are no options; in fact, they go out of their way to defeat any other option. If I wanted to use Apple’s music purchasing service without the iTunes application, I am SOL. Apple feels that they make the best music playback solution as well as the best service. For some they may, but I would much rather use Amarok or something else to manage my music, inferior or not. If I chose the other way, I might want to use Amazon’s MP3 service for buying but iTunes for managing. Apple should make that easy for me.

At some point, geeky companies like Google ( and to their credit, they are starting to ) need to create good baseline solutions that run up to, but stop short of, competing with the products and services that are auxiliary to their primary product. Apple needs to accept that people may occasionally choose to do their own thing, and allow them to.

I do not buy the assertion that in order to provide a cohesive solution you have to block all others. I feel that a system can be aesthetically pleasing and useful, as well as permissive. Karmic Koala, I think, gets really close to being there, but there are still too many places I can get into in the OS where regular users would go WTF?!!?

This is why I am continually working on a new OS whose ambition is to combine the completeness and ease of use of the Mac OS while honoring the internet, as well as user choice. They are not mutually exclusive, and the only way to prove it is to build something that shows it. It is a huge amount of work, which is why the only way to do it is open source, but since you have to make clear choices for the user, at least in the initial state, some stuff just couldn’t be committed.

Basically, end users won’t realize the cost of the choices they are making until those choices are gone. In a balkanized, app-store-ized internet, choices will be limited, prices will be high, and satisfaction will be generally low. That is where we are going; that is the choice users are making because they can’t wrap their heads around the internet. It is our fault as geeks, and we are the only ones who can fix it. The average user is going to pick the shiniest and easiest widget. There is no reason we can’t make that.

I hear a lot of prognostication about who will buy Palm now that they are officially up for grabs. People suggest that HTC, Lenovo, or even Apple would be the most likely buyers; however, I don’t think any of them will get Palm. I think Google will get Palm for around one billion dollars, and here is why.

The main reason is that Palm’s WebOS falls directly in line with Google’s philosophy of web first, native second. That, combined with the Google Native Client, could make for a compelling addition to Android. One could argue that Android is lacking only in UI, and WebOS has a UI second only to the iPhone’s. Secondarily, buying Palm would give Google patent ammunition to use in assisting HTC in its legal battle with Apple, especially since it is Google’s Android OS that is causing the issue.

It doesn’t make sense for Apple to get Palm even if they are in the bidding, because Google has shown in the past that it is willing to go way above a company’s valuation to snag it. This makes too much sense not to happen; that is my prediction. It is sort of hopeful, because I like WebOS and Palm and would like to see them continue, albeit in a purer HTML5 sense.

This past weekend, I was thinking more about the iPad. One of the thoughts that kept coming back was about the iPhone / Cocoa Touch development ecosystem as a platform, and how it compares to existing platforms. The conclusion I came to was, as a developer, a bit disturbing concerning the future of application development in general.

It is somewhat useful to quickly recap the development environments of the past to contrast them with today’s. First we need to talk about Microsoft and what a platform meant to them. To Microsoft, the computer was a tool for technical users. Even if their stated goal was to put a computer on every desk, the engineers clearly had, and still have, difficulty putting themselves in the place of their users.

As computers’ abilities increased, so did their complexity, and the complexity of the OS. Simple tasks, like taking a piece of text from your word processor and putting it into your spreadsheet, were ridiculously complicated in DOS. Windows made things a bit easier initially, but only for the most technical users. What should have been simple tasks were still difficult; DOS was still around and necessary for many common tasks, and the thing booted from DOS, which created no end of problems. It just wasn’t an optimal solution for the mass of computer consumers out there. This was evinced by a proliferation of “computer” classes, which were supposed to take the burden of designing something easy off of the engineers who designed the system. That it did, and they proceeded to make a system that was even more of a tangle.

For those who would say that the Macintosh is much easier, I take issue with the word “much.” In reality, Unix / Linux / Mac OS X is not terribly easy to use. To someone with a good understanding of computers and their conventions, it is much simpler and more straightforward to use and manage. Apple does a fantastic job of making most things that normal people want to do easy without preventing technical users from doing complicated things, but the underlying complexity is not without its cost to the typical end user.

Now, if you were designing a platform today, for millions of people worldwide with different levels of technical ability, with the issue of computer and operating system security looming large, and with the ever-increasing ability of developers to make computers do insanely complex things in the blink of an eye, how would you develop it? Would it be like Windows, putting the burden of learning, understanding, and protecting themselves on the user? Would it be like Unix / Linux, putting the burden of everything on the user, but exposing incredible levels of customization?

What you would do would depend on what your goal was, but if your goal was to provide the best possible user experience, you would likely ( I know that I would ) take it upon yourself to protect your users from viruses, phishing, hacking, malware, etc… You would likely make it difficult or impossible for developers working on your platform to make choices that would negatively impact the usability of the platform. You might choose a somewhat difficult language combination for development to make a barrier to entry for developers, to make sure that the developers that did create for your platform were of a caliber such that they could actually make compelling content for your devices.

You might establish a certification board of some sort to determine whether the applications being developed for your platform met your requirements for ease of use, stability, and security. You might come to the conclusion that the only way to enforce your vision of the platform, and to be the ultimate consumer advocate, would be to make sure that every application went through this board before it was available on your platform. Once an application was available, you might make the installation and configuration experience as painless as possible for the user, even if it meant imposing further implementation complexity on the developer.

Does any of this sound familiar? When I went through designing a platform as a consumer advocate, what I ended up with was pretty much what Apple has for the iPhone / iPad / iPod Touch development environment, with one exception: I was actually more stringent, in that I wouldn’t allow wapletts ( web application applets ) on the platform. I would require those developers to just build a web application customized for the experience.

The funniest thing, or strangest if you don’t like that colloquialism, was that when I designed the platform as a developer, it didn’t look anything like this; in fact, it looked much more like the development experience around Ubuntu Linux. Where I ended up is that perhaps, as developers, we are heaping too much responsibility on the average user of the platform. I think that Apple has the right mix with the App Store experience for the types of devices that are running the Cocoa Touch framework on Objective-C.

That being said, I don’t like it. However, I understand it, and the UX / UI designer in my heart rejoices at the emergence of this paradigm, where the responsibility for security and workflow consistency falls on the developer, not the user. But the programmer in me rebels at having someone tell me how to design and implement what I want on my device. Having someone lord over me what counts as an acceptable software application is irritating, to say the least. I think the UX designer and consumer advocate in me wins, and there are platforms like Mac OS X where I can satisfy my programmer urges.

I predict, however, that Apple will do away with the existing Mac OS X on the MacBook, the iMac, and the MacBook Air. I think they will start running this Cocoa Touch OS with all of the same restrictions and HIG guidelines as the iPhone. I think there will be an app store for these devices, and I think it will be the only way to install software. Seeing iWork on the iPad is the first example of the migration of Cocoa Touch to a full-fledged computer operating system.

Apple will probably keep the Mac Pro and MacBook Pro lines, perhaps adding an iMac Pro running Mac OS X in the way we have always come to expect, and that line will likely become even geekier than it already is. The most floor-slapping, hilarious thing is that Apple has come full circle to an old Microsoft idea that was right on but, big surprise, improperly executed.

Originally Microsoft had its professional and home lines: Windows 2000 for business and Windows 98 for home users. The concept was to have a much simpler OS for normal consumers and a much more complicated, and powerful, platform for businesses. Apple has slightly turned this on its head: they, in my humble opinion, want one platform that is awesome for media consumers and general consumers, and another for the programmer geeks who have made Apple what it is. It is for that reason that I anticipate an iPad Pro soon after the launch of the iPad, perhaps even as soon as WWDC ’10. The iPad Pro would likely run a Cocoa Touch OS that was less restricted and more like Mac OS X.

Ultimately, I think Apple wants to, and will, make everyone happy, but we are at the beginning of this incredible consumer platform, and I think that for its stated goals, the App Store, the “awful policies,” et cetera, are the best possible way to get there. For its perception among geeks, however, Apple needs to communicate its strategy as soon as possible. If they intend to make all of their devices like the iPod Touch, then we have a problem; however, this is extremely unlikely. I can’t wait until WWDC this year!

I have had a Wave account for some time, but I never really got it. I understood it as a communication platform and all of that, but I didn’t really understand what was in it for Google. Then I thought a bit more about it and remembered something that Yahoo! said a long time ago: “email is the social network.” That didn’t make sense to me at all, until now.

Most people use email for a large chunk of their interaction with other people. By saying that email is the social network, Yahoo! was indicating that most of what Facebook does is overglorified email. Pre-Facebook, people typically shared photos, music, and videos over email. The biggest complaint was that email didn’t allow large enough attachments. Enter YouTube and Flickr, which let people embed links to larger content and then email them.

Enter Facebook. Facebook allowed people to control who could see what. It allowed for semi-private posting, plus all of the features of YouTube and Flickr with email. It became the ultimate communication platform. Once apps were added, it was over: runaway success.

Google initially tried to build a social network with Orkut, but that was never going to have the traction Facebook had obtained, and Google wisely stopped pushing it. When Wave was announced, I thought it was aimed specifically at Outlook in the enterprise, and maybe some minor aspects of personal communication, but nothing significant. However, with its plugin system and its federated nature, it starts to pretty much become a better Facebook than Facebook.

The first aspect of Google’s attack on Facebook with Wave is that it is private by default. Waves are only available to the specific people or groups that you explicitly choose. You have a wave status that you can update, you can attach pretty large files or URIs, or even embed content into the wave, and there is commenting. It really feels like a social network, and the plugins are just genius. This will eventually challenge Facebook, since anyone can run a wave server. It also tackles Ning and pretty much any other social network out there. All it takes is for Google to flip a switch giving users the option to publish a public wave, or a wave that all your contacts can see, and it starts seriously eyeing content management systems.

It attacks Twitter in that it is immediate, and it is optional. I can follow or unfollow waves as I wish, so I can jump in and out of conversations, something I have desperately wanted for some time. That is what makes Twitter and Yammer awesome: I don’t always have to pay attention to them. Email is too immediate, and important stuff is always mixed up with unimportant stuff. Wave lets me discriminate. Wave will always scale better and carry more history, and therefore more data-mining value, than Twitter. It is federated and peered, from what I understand of the spec, and therefore should be more resilient than anything a single company, save Google, could build. Also, since it is an open standard, more people should get behind it. If I were Twitter, I would be looking at how to merge my service with the standard.
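From what I understand of the federation idea, it works roughly like email or XMPP: each server only needs to know how to reach its peers, so no single company has to carry the whole network. Here is a toy sketch of that shape. The class and method names are my own invention and bear no relation to the actual Wave protocol; the point is just that two independently run servers can carry one shared conversation.

```python
class WaveServer:
    """A federated server: holds local inboxes, relays to peer domains."""
    def __init__(self, domain):
        self.domain = domain
        self.peers = {}       # peer domain -> WaveServer
        self.inboxes = {}     # local user -> list of received messages

    def add_user(self, user):
        self.inboxes[user] = []

    def federate_with(self, other):
        # Peering is symmetric, like server-to-server XMPP.
        self.peers[other.domain] = other
        other.peers[self.domain] = self

    def send(self, msg, participants):
        # Deliver locally, and relay to each remote domain involved.
        for p in participants:
            user, domain = p.split("@")
            if domain == self.domain:
                self.inboxes[user].append(msg)
            elif domain in self.peers:
                self.peers[domain].deliver(msg, user)

    def deliver(self, msg, user):
        self.inboxes[user].append(msg)

# Two independently run servers carrying one conversation:
a = WaveServer("acme.example")
a.add_user("ann")
b = WaveServer("bolt.example")
b.add_user("ben")
a.federate_with(b)

a.send("status: on a bike ride", ["ann@acme.example", "ben@bolt.example"])
print(a.inboxes["ann"])  # ann got it locally
print(b.inboxes["ben"])  # ben got it via federation, on a different server
```

Take either server down and the other keeps serving its own users, which is exactly the resilience argument above.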

Wave destroys Yahoo! Mail, period. I would imagine Yahoo! has something up their sleeve since they killed 360, but they are hurting so badly for cash right now that I’m not sure. I think a federated Wave could hurt a lot of web email providers.

Finally, Microsoft. Exchange has hammered everyone for a decade with its expensive licensing and limited feature set, and Wave easily destroys it on features and usability. Hopefully Google will unleash Wave into Google Docs, including the enterprise edition; I think savvy IT managers and most engineers would jump nearly immediately. That would also be mostly the end of Yammer. Although Twitter and Yammer have features that Wave is missing, the standards body could just add them, everyone could implement their UI for the features, and be done with it. Microsoft Exchange and Outlook never really understood why anyone would need additional features and media types, so I don’t expect them to live long past Wave’s proper launch, once enterprise wave server and client providers appear. The costs would be so cheap that it would be difficult for enterprises not to look at it, especially since most are still running very old versions of Exchange.

Microsoft has such a tarnished reputation in the enterprise now that most people seriously weigh whether to upgrade to the latest Microsoft thing at all, mostly trialing it for extremely long periods before committing the update to the masses. Since waves can persist, Wave can even replace SharePoint, and it does so with a metaphor that people are very comfortable with: email.

For the past few months I have actively quested against using anything that is free, asking difficult questions of each product and often choosing a paid alternative when the answers were not forthright enough. I have been noticing similar tension on Twitter and the other social media places that I haunt, as well as in casual encounters with friends and family. Why have I been trying to move off of the free ecosystem? What reason could there possibly be? I mean, who doesn’t want stuff for free? Well, the answer is complicated; to fully understand it, I think we have to look at some of the things the “free” ecosystem has brought us.

The first, and most significant, negative thing that the expectation of free software and services has brought us is a huge proliferation of spyware and malware. There are a few reasons the amount of spyware and malware increased dramatically around the time software became available for free; it is largely a case of the law of unintended consequences. First, fast internet became widely available at reasonable cost. In fact, for a while ISPs played around with a free price point, but that faded away quickly, as capital-intensive enterprises are incompatible with the gift economy.

The next is a series of unsustainable business models driven by advertising, with ever-declining value delivered to the sponsoring companies as consumers become advertised out. This in turn has driven many to stop consuming content at all, and has destroyed once-vibrant businesses such as newspapers, music, and movies. And what is the answer to the decline? To increase the ads, of course: to make up for a clear downtrend by increasing volume, driving down margins, and lowering the quality of the product to keep the same profitability. Does this sound familiar? It should; it’s the same thing that happened in the housing market to prop up an unsustainable business model. Instead of innovating out of the crisis, the advertising companies are clinging stupidly to the old systems.

Once fast internet became widely available, the GNU / GPL-driven software model, with distributed version control systems, became possible. People were now able to collaborate on software across countries where labor costs were cheaper, driving down the price of development for large projects in general. The GPL began with a powerful intent: to make software and its source code available, to facilitate learning, and to improve the quality of all software. It has largely achieved this end; however, it also got end users accustomed to downloading high-quality software for free. At first this was all gravy, but eventually these same people got tired of giving away their hard work. Some graduated college and needed to make money, others just wanted to improve their standard of living; the reasons are too numerous to go into, but the result is that “alternative” business models started to spring up around software that at its core was free. The service / support model was the first to appear, along with closed-source software given away for free but with malicious software embedded in it. The idea behind the latter was simple but powerful: by installing covert software on millions of remote PCs, you could send spam email advertising whatever you wanted, and no technology ( at the time ) could stop you.

This was the beginning of the advertising ecosystem. Yes it basically came from malware.

TANSTAAFL: there ain’t no such thing as a free lunch. Truthfully, nothing is free. Businesses saw what was happening in the malware / spam / zombie email space and wanted to find legitimate ways to do the same thing, since millions of dollars were being made off the spam networks. What the malware / spam networks were getting from users, in addition to their IP addresses, were profiles of their behavior online: which sites were trending, where people went when they were looking for a product, where they went after the first click. Crack cocaine for marketing executives. It always surprises me how many people do not understand what is happening when they use Google, Bing, Yahoo!, etc., and why those services are free. These are hugely expensive enterprises, with costs that could almost never be covered by charging people to use them. I don’t know what Google would cost if they didn’t advertise, but I would imagine it would take thousands of dollars a year per user for Google to be profitable in the same way.

Webmasters often don’t think about why Google gives away analytics when Omniture has built such a profitable business selling web analytics for years. The reason is simple: Google makes more money from AdWords when it can trace users from the Google search page along their path from site to site. By including Google’s tracking code, a site lets any user authenticated into Google’s services be followed.
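A simplified sketch of the mechanics, with made-up names and sites throughout: once many sites embed the same vendor’s snippet, one shared identifier is enough for the vendor to stitch a user’s path across all of them, even though each site only ever sees its own traffic.

```python
from collections import defaultdict

class AnalyticsVendor:
    """Toy model of a third-party analytics service."""
    def __init__(self):
        # visitor id -> ordered list of (site, page) visits
        self.paths = defaultdict(list)

    def beacon(self, visitor_id, site, page):
        # Every embedding site fires this on page load. The visitor id is
        # the same across sites because it belongs to the vendor, not to
        # any one site.
        self.paths[visitor_id].append((site, page))

vendor = AnalyticsVendor()
# One user's afternoon, as seen by three unrelated sites that all embed
# the same free snippet:
vendor.beacon("u-42", "search.example", "results?q=running+shoes")
vendor.beacon("u-42", "shoes.example", "/product/trailblazer")
vendor.beacon("u-42", "reviews.example", "/trailblazer-review")

# The vendor now holds the full search-to-purchase click path:
print(vendor.paths["u-42"])
```

That cross-site path is the product; the free analytics dashboard is just the bait that gets the snippet embedded everywhere.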

This has benefits for the user in Google’s case, since Google has so far shown that it can be trusted with the vast amounts of user behavior data it has amassed, and it frequently shows ads that are highly relevant. Google’s business is gathering data points about your behavior and using them to present you with the most appropriate ads; every application and service that Google builds serves this end.

Yahoo! and Microsoft are desperately trying to copy this business model, as are many smaller vendors, and that is the problem. While Google can be trusted, I do not believe the others can be, and frequently I am not 100% certain even about Google. The problem is that Google behaves as though it were the only company out there doing this, and they seem oblivious to the fact that people don’t want to see ads, even good ads. I keep hearing that poor targeting is the culprit, but I am not so sure that is true anymore. There is a rapidly growing class of people who just don’t care what type of ad it is; they are simply tired of the cognitive noise. I would include myself in that class.

With so many different ad networks trying to copy Google, end-users are inundated with ads. Everywhere they go, behavioral ad networks are trying to determine which ads to show them, with varying success and quality. They are all clamoring for data, trying to convince site owners to put their little tracking code into their pages. Unfortunately this hasn’t stopped at the web: iPhone apps, Blackberry and other mobile apps, even desktop apps are showing little ads in order to compensate the developer, whose time is extremely valuable, for their hard work.

The problem for a company like Google that is interested in doing the right thing, or at least trying to, is that the lesser companies are producing ad fatigue in users, which has led to Adblock Plus and other ad-blocking solutions as end-users try to reduce the noise around them. These companies, realizing that their ad-driven dreams are beginning to fade, have moved to making ads look like content, in the old 1930s radio business model. The funny thing is that those old tactics led to the FCC getting involved and setting guidelines for how advertising could be embedded into programs. It is a vicious cycle reproducing itself in every medium.

The embedding tactics range from “independent” product blogs, to product shills on Twitter, to television programs designed specifically and only to show you a car gratuitously. Again, not all of these are bad; I follow several businesses on Twitter that do not annoy me, and actually behave more like a partner than like someone trying to cheat me out of my money with a product that I don’t want and can’t use. But some of the ad-sponsored “apps” on the iPhone, for example, are so thin as to be a punch-the-monkey game with a Batman logo. What is the point of that? It is just noise.

So what’s the problem? Everyone is getting paid.

The overriding problem is this: there is too much sponsored content in general. Everyone seems oblivious to this, and I’m not sure why. It could be the same thing that led to the housing crash: everyone was making way too much money to look at the obvious. People are tired of being advertised to. Everyone is touting some kind of free future where everything is free and companies make money in “other” ways. Typically these “other” ways are not specified, but I can fill in what “other” means: increasingly nefarious and opaque ways of capturing your behavior and data, then using that information to influence your behavior, usually resulting in you buying stuff without being able to remember why. This is bad, and it is not a proper way to run a business. It can only end with massive data leaks and a public so unhappy that government legislation is required.

I don’t think it will come to that, though. I believe the public is smarter than this: saturated with ads, they will start to back away from free software and begin to embrace paid software from companies with clear agendas and business models. I think the VC money will follow suit, heading instead to companies with models that a 5-year-old could understand, as opposed to models that only a PhD in macroeconomics can comprehend: we make a product ( content ), and then we charge more for it than it cost us to make.

Another problem with the ad model is that where it once liberated artists to develop art without needing to think about how they were going to get paid for it, it is now doing the opposite. Companies are hiring artists to make movies, television, plays, books, video games, you name it, just to push some product. Artists are now slaves to the master they were once masters over. I would argue that the newspapers have it right: they just need to start charging for content. It is critical, however, that they get their pricing right. I think PayPal and micro-payments will be the Visa of the future; if Visa gets their act together and drops their rates, perhaps they could be the one. Perhaps newspapers’ circulation will drop, but they would be more profitable and healthy. One company has demonstrated that this is a sound business model, and they are standing astride the world right now as a colossus.

Apple is poised to do very well in this system. They have always chosen to provide high-quality products and charge top dollar for them, and the public has shown it is more than willing to pay for quality software and hardware. MobileMe may have had its issues, but Apple’s motive in making it is simple: they want to sell more iPhones and Macs, on which they make 50% profit or more. There is no ulterior motive, they are not selling my data, there are no ads, period. They make money in a way that I can explain to my daughter in one sentence. They could put ads in the iLife suite and give it away for free, but why? They have proven that people will pay not only for the Mac to run the software, but a reasonable amount for the software on top of that.

Microsoft and Adobe are as guilty of creating the free / illicit software market as anyone: by charging ridiculous amounts for what their software does, they forced people to figure out alternative means to get their work done. That in turn furthers the dependence on these opaque, difficult-to-understand business models. If you make a solid product and charge a reasonable sum, even a high-but-reasonable sum, people will pay. Otherwise, they will pirate, or find ways to cannibalize the standard method of doing business.

To sum up: the free era is over, Google’s business model is in danger, and Apple and the content companies that create quality product and are willing to charge for it stand poised to make a comeback. Microsoft and the others following Google are lemmings headed off a cliff. I think the advertising bubble is about to pop.

I have been poking around with ARM chips via the BeagleBoard for a while now, and I have to say that at a far lower clock speed they are much more energy efficient than the Intel Atom, and I have a hard time finding the difference in performance. After Intel’s tirade at that conference, I was sure there was something to the threat from ARM, and now we are seeing rumors about the possible fruits of the Apple / PA Semi acquisition: AppleInsider exclusive: Apple Tablet Early Next Year.

While that is interesting in its own right, I think there is more at stake here than which chips power the coolest devices. I have been waiting, as has most everyone else, for this conceptual tablet. I don’t want to have to carry a Kindle, an iPhone, and my laptop. It would be awesome to have a single device that used wireless HDMI to connect to my screen and speakers, Bluetooth to communicate with my keyboard and mouse, and 3G for phone calls and mobile data. This mythical tablet is the closest thing to that. If it runs full OS X, and can run iPhone apps as well as native OS X apps, it will complete the hat trick. It should have a virtual keyboard for when I am not near my Bluetooth keyboard, and when the keyboard is in proximity it should use it, without dialogs or configuration. Likewise, when the monitor is away it should display on-device; otherwise it should use my monitor.

So even if this device is only part of what I dream of, it will be enough to get me to buy it, and probably most everyone else too. That makes for an interesting shift in the consumption of applications. People will start developing for mobile first and desktop second, which means they will be developing for ARM first and Intel second.

From the server room to our pockets, power is a concern. One of the things I can’t wait for is a server that runs 100 ARM Cortex-A9 cores at 1 GHz instead of 8 CPUs at 3.4 GHz. The former would consume far less power and perform far better as a web application server, thanks to the extreme threading that would be possible. Desktop machines would follow directly behind: 50-core desktops with 50 PowerVR GPU-on-a-chip parts, with the monitor divided up into a grid so each GPU drives its own region ( this will take some work ). This could be a very thin box with only one very silent fan and insane performance. Not to mention that the same machine could be a laptop that runs way cooler than my MacBook Pro, which hovers around 135°F while just playing iTunes, and it could get 16 hours of battery life using the same battery I use today.
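To make the core-count comparison concrete, here is a back-of-the-envelope sketch. The per-core wattage figures are assumptions plugged in purely for illustration, not measured numbers for any real chip:

```python
# Back-of-the-envelope throughput-per-watt comparison.
# Power figures below are illustrative assumptions, not measurements.

def aggregate_ghz(cores, ghz_per_core):
    """Total clock throughput, ignoring memory and scaling overheads."""
    return cores * ghz_per_core

# Hypothetical many-core ARM server: 100 Cortex-A9 cores at 1 GHz,
# assuming roughly 0.5 W per core under load.
arm_ghz = aggregate_ghz(100, 1.0)   # 100 GHz aggregate
arm_watts = 100 * 0.5               # 50 W

# Conventional server: 8 cores at 3.4 GHz, assuming roughly 30 W per core.
x86_ghz = aggregate_ghz(8, 3.4)     # 27.2 GHz aggregate
x86_watts = 8 * 30                  # 240 W

print(f"ARM : {arm_ghz:.1f} GHz @ {arm_watts:.0f} W -> {arm_ghz / arm_watts:.2f} GHz/W")
print(f"x86 : {x86_ghz:.1f} GHz @ {x86_watts:.0f} W -> {x86_ghz / x86_watts:.2f} GHz/W")
```

On these assumed numbers the many-core box is far ahead in GHz per watt, and the bigger win for a web server is that each of those 100 cores can be servicing a request concurrently.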

In this world, Apple is far better positioned than Microsoft, with their / the Khronos Group’s OpenCL; Snow Leopard will be in a great place to benefit from this type of architecture. Not to mention the App Store: Apple has the DRM, distribution, and signing infrastructure all in place, and hundreds of thousands of developers know how to use it. Don’t think they aren’t thinking about pushing this model to the Macintosh for application distribution; it just makes too much sense.

The future is clearly mobile, but who will lead that charge is an open question. Apple has made moves to secure their superiority for the next few years; Microsoft appears to be going backwards. Intel just can’t seem to break into the ultra-low-power CPU space without an acquisition; I think the Wind River purchase was meant to put a dent in the number of ARM customers. Clearly the future is not dominated by WinTel. I am shocked that AMD hasn’t abandoned its platform and moved to ARM computer-on-a-die chips; imagine how awesome it would be to have a multi-core ARM chip with an ATI GPU on a die. Intel would say that ARM doesn’t perform as well as their Atom CPU, but that was the mistake that let AMD back into the game before: Intel kept sticking to the performance argument while the market was telling them that the current speeds were fast enough, and that what it wanted was better performance per watt. If Intel hadn’t had that R&D group in Israel pushing ahead with the Mobile Pentium, the computer industry would look very different today.

I think that the Apple tablet will be a game changer, and will ultimately be their most successful computer launch, even more so than the iMac, which brought them back. I am afraid that if Microsoft and Intel can’t answer, the one-two punch of Steve Jobs and Google will finally have felled the giant duopoly.

A few months ago, I got banned from the Android dev Google group. You might think it was because I was being a troll, or because I got into an inflamed argument with a moderator, but it was neither. I got booted, as best I can tell, because I broke protocol and commented on a post that the moderator had said was closed.

I say “as best I can tell” because I received no warnings, no email saying, “hey, what you did was not OK and this is a warning; next time you will be banned.” What was in the comment, you ask? I was responding to a thread about why Google had chosen not to use Jazelle, the Java accelerator built into the ARM chips in the G1 and the iPhone: hardware support in the core for speeding up interpretation of Java bytecode. The poster was railing against Google for not using it, claiming that Android was not fast enough and that the only reason they weren’t using the acceleration was that they didn’t want to license Java, etc. The amazing part is that my response defended Google. I said that Android was only at 1.1, that I was sure there was a bunch of optimization left to be done, and that the Dalvik bytecode was likely not compatible with Jazelle anyway. I reminded them what the V8 team had done for JavaScript with their assembly-language VM optimizations, and suggested that perhaps those kinds of enhancements would make it into Android.

Later in the day when I went back, it was like, bam, the moderator has banned you from the group. My first thought was to get mad, turn off my G1, go back to the iPhone and be done with it. But then I remembered that the moderator’s responses had been pretty terse; maybe they were overworked and angry, and banned me for posting to a closed / moved thread. I honestly didn’t know where I was in the maze of Google’s groups; I couldn’t tell whether I had been redirected to the discussion section or not. It is frustrating to have a company like Google, who I normally associate with free speech and open discourse, censor me in that way. By contrast, I have never been banned from an Apple discussion group, or from anything else for that matter. Most of us associate Apple with secrecy and with killing off free speech and discourse, but I have found their position on what can and cannot be said to be clear and reasonable, and they are consistently good about it on their boards. If a post moves into an area it shouldn’t, the moderator deletes it, says why, and moves on. I doubt they permanently ban their board members.

I have been wanting to port Mides to Android and to build several applications for it. I have been a huge supporter and advocate of Android in my workplace, where I have some ( very small ) influence over which platforms we support, and having Google shut me down this way makes it difficult for me to keep convincing other developers to build for Android. I would think that the battle for developers is where platforms succeed or fail, and for a company steeped in that battle to carelessly piss off developers makes no sense to me. I like Android, and I want to see it succeed, but sometimes I just don’t know.

I do hope you will pardon the hyperbole a bit, but if someone had told me a few months ago that we would have JavaScript threading, which I have been begging for for years, built into the HTML standard, I would have thought they were crazy. Now we have a situation where Safari 4, Firefox 3.1, Chrome ( via Gears ), and IE 8 ( all in beta ) support it.

Let’s look into my crystal ball for a minute. Browser-based apps are becoming more and more capable all the time. Arguably the most efficient method of developing for mobile devices is to use web technologies, and Chrome has given us an insanely awesome JavaScript engine, V8, available for embedding in any programming system. Looking down the line, I can see JavaScript becoming the primary development language once we start seeing implementations of HTML 5 Web Sockets. They may already be out there; I just haven’t checked yet…
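The JavaScript threading the browsers adopted ( Web Workers ) is a pure message-passing model: the page posts a message to a background worker, the worker computes and posts a result back, and no state is shared. For readers who haven’t seen it, here is a rough Python analogue of that postMessage / onmessage flow, just to show the shape of the model:

```python
# A sketch of the Web Worker message-passing model, rendered in Python.
# postMessage/onmessage become queues; the worker shares nothing with
# the "page" besides the messages themselves.
import threading
import queue

inbox = queue.Queue()    # messages to the worker (the page's postMessage)
outbox = queue.Queue()   # messages back (the worker's postMessage)

def worker():
    """Background 'worker script': receives numbers, replies with squares."""
    while True:
        msg = inbox.get()
        if msg is None:          # sentinel: shut the worker down
            break
        outbox.put(msg * msg)    # reply with the computed result

t = threading.Thread(target=worker)
t.start()

for n in (2, 3, 4):
    inbox.put(n)                 # page -> worker

results = [outbox.get() for _ in range(3)]   # worker -> page, in order
inbox.put(None)
t.join()
print(results)   # [4, 9, 16]
```

The appeal is the same in both worlds: the main thread ( the UI ) never blocks on the computation, and because communication is message-only there are no locks for the application programmer to get wrong.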

If you have Safari 4 or the WebKit nightlies, you’ve got to check out this link:

The speed of JavaScript is now up there with any of the other interpreted languages; in fact, Firefox 3.1, Chrome, and Safari 4 are wicked fast. Soon we may not need desktop apps at all, and Microsoft’s bungled ActiveX dream may just come to pass. What an exciting time to be a developer!

There are a lot of cool things about the G1, but most people don’t talk about the hardware very often. It has a pretty snappy CPU, a good amount of RAM, and expandable storage. The coolest thing about the CPU is that it is a CPU/GPU dual-core combo unit. That gave me some ideas.

I know that Apple is working on OpenCL, which will allow applications on Mac OS X Snow Leopard to leverage the GPU for tasks it is suited to, like floating-point math. It abstracts away all of the lower-level coding one would normally have to do for this type of functionality. Nvidia has APIs for this type of programming, and I believe Intel does as well. One of the spaces where I see this being especially beneficial is mobile, for phones and the like. The reason is simple: unlike their x86 counterparts, these chips usually have no math coprocessor, and while most software wouldn’t use the GPU, some workloads, say speech recognition and image processing, could get a huge boost out of it at a modest power cost.

What would be awesome is if Qualcomm published some sort of GPGPU language or spec to the Android foundation, and that made its way into the Android framework. Then when the GPU wasn’t being used, or was only being used lightly, code could be handed to it the way one would spawn a thread, but instead of running on the main CPU it would run on the GPU, freeing the CPU to spend time doing something else.
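No such Android API existed, so the following is purely a hypothetical sketch of the dispatch idea: a spawn-like call that routes work to the GPU when it is idle and falls back to the CPU otherwise. Every name here ( `Device`, `spawn`, the load threshold ) is invented for illustration, and the “GPU” is simulated with an ordinary thread:

```python
# Hypothetical sketch of GPU/CPU work dispatch: route a task to the
# GPU when it is lightly loaded, otherwise run it on the CPU.
# Nothing here is a real Android API; both devices are simulated.
from concurrent.futures import ThreadPoolExecutor

class Device:
    """A stand-in for a compute device with a load reading."""
    def __init__(self, name, load):
        self.name = name
        self.load = load          # 0.0 (idle) .. 1.0 (saturated)

def spawn(task, args, gpu, pool, threshold=0.3):
    """Run `task` on the GPU if it is lightly loaded, else on a CPU thread.

    Returns (device_name, future) so the caller can see where it ran.
    """
    if gpu.load < threshold:
        gpu.load += 0.1           # pretend the task occupies the GPU a bit
        return gpu.name, pool.submit(task, *args)   # simulated "GPU" run
    return "cpu", pool.submit(task, *args)

pool = ThreadPoolExecutor(max_workers=2)
gpu = Device("gpu", load=0.0)

where, fut = spawn(lambda x: x + 1, (41,), gpu, pool)
print(where, fut.result())   # gpu 42

gpu.load = 0.9               # GPU is busy now: work falls back to the CPU
where, fut = spawn(lambda x: x * 2, (21,), gpu, pool)
print(where, fut.result())   # cpu 42
pool.shutdown()
```

The interesting design question a real framework would have to answer is the one glossed over by `gpu.load`: who measures GPU occupancy, and how a half-finished GPU task migrates back when the screen suddenly needs the hardware.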

I’m sure this has already been thought of and discussed by the Android brain trust, but it just occurred to me, and it would be 100% awesome if it made it into the framework.