At ReadWrite, we can't stop talking about HBO's “Silicon Valley” and its uncanny depiction of the tech world we cover. So we're going to start offering recaps of the show.

For those of you just catching on to the show, HBO’s “Silicon Valley,” created by Mike Judge, focuses on the trials and tribulations of Pied Piper, a fictional startup working on compression technologies.

Richard Hendricks is the CEO and founder of the company; Erlich Bachman is an entrepreneur who owns 10 percent of Pied Piper from serving as Hendricks' landlord; Dinesh Chugtai and Bertram Gilfoyle are engineers; and Jared Dunn, a former employee at tech giant Hooli, now runs business development for the startup. They're backed by Peter Gregory, a billionaire who’s loosely based on Facebook investor Peter Thiel. Gregory's nemesis is Gavin Belson, the CEO of Hooli, an all-encompassing tech giant modeled after Google.

"Signaling Risk” deals with fundamental questions of identity: What will be the company’s logo—and why does the company even exist in the first place?

The episode sees Erlich negotiating ineptly with a Bay Area graffiti artist he hopes to employ to create Pied Piper’s new logo. We also get to see a quiet brunch showdown between moguls Peter Gregory and Gavin Belson. Their conflict, combined with a casual decision months ago by Peter, pushes up Pied Piper’s timeframe to launch.

Dinesh is locked outside of the Aviato-branded car.

The first scene begins with Erlich (T.J. Miller), Dinesh (Kumail Nanjiani), and Gilfoyle (Martin Starr) driving in a car flamboyantly wrapped with the logo of Aviato, the name of a startup Erlich sold for a figure “in the low seven digits” a couple of years ago. Erlich takes his two coworkers to a rough neighborhood in order to find a graffiti artist named Chuy Ramirez, whom Erlich hopes will design a new Pied Piper logo.

“Every company in the valley has lowercase letters,” Erlich says. “Why? Because it’s safe. We aren’t going to do that. We’re going to go with Chuy.”

It’s a perfect play on the startup attitude of self-important individuality. Erlich’s focus on getting the perfect logo demonstrates a willingness to throw arbitrary amounts of money at superficial aspects of the business while ignoring the actual product completely—all in an effort to play Pied Piper up as different.

The storyline references the reported $200 million windfall real-life graffiti artist David Choe made by decorating Facebook’s headquarters in exchange for equity. Erlich stumbles through an agreement with Chuy for a logo on his garage door for $10,000. His clear discomfort striking deals with the artist is unfortunately paired with his lack of direction for the logo, so Chuy is left with a blank canvas.

Jared presents his prediction of Pied Piper's future without a clear company culture.

Back at Pied Piper headquarters, Jared is also stressing “clear lines of communication” to Richard (Thomas Middleditch), because without boundaries, protocol, or a company culture, Jared believes the startup will go downhill. But it's also clear that Jared, whose previous experience was in the highly structured world of Hooli, is uncomfortable with the looseness of a new venture.

If Erlich represents a delusionally image-obsessed aspect of startup culture, then Jared is a caricature of big-tech-company process management. Jared, who gave up a position as Gavin Belson’s director of special projects at Hooli, can't let go of his business jargon, charts, and Scrum software-development methodology.

Jared and Richard have a discussion at the Pied Piper headquarters. Jared wears a button up underneath a sweater, while Richard chooses comfort in a hoodie.

The characters’ wardrobe speaks to the distance between them as well. Jared is still hanging onto his Hooli roots—and, before that, his time spent working in politics—with a smart haircut to match his button-ups and sweaters. Compare that to Dinesh’s track jackets and polos, Erlich’s boho sweaters, and Richard’s Zuckerberg-inspired hoodies.

Monica (Amanda Crew), Peter Gregory’s associate, gets a telling scene of her own. In one shot filmed through glass doors, she’s framed in a way that divides her from Richard and Erlich, highlighting her otherness as she confronts them about Pied Piper being entered into TechCrunch Disrupt, a startup competition—a move that risks embarrassing her boss. Monica stands as Richard and Erlich sit casually on a couch. Through her body language and wardrobe, Monica’s character represents business reality—a foil to Richard's builderly cluelessness.

Gavin Belson trying to work out the glitches on the TeleHuman hologram.

The only other woman in this episode appears when Big Head (Josh Brener), a former Pied Piper employee who’s been promoted at Hooli, meets with Gavin Belson (Matt Ross). She's a functionary, part of a three-person Hooli team facilitating the conversation. This scene hilariously goes down the ladder of technological ingenuity rung by rung, as Big Head first meets with Gavin through a TeleHuman hologram to chat about Pied Piper’s TechCrunch Disrupt debut.

Gavin moves over to Hooli Chat after a failed TeleHuman connection.

After the hologram begins to glitch and Gavin screams obscenities at his IT person, the team decides to move over to Hooli Chat, the company's video chat system. In one of my favorite quotes of the episode, Gavin opens up Hooli Chat and says, “Ah, that’s better. Sorry. The TeleHuman is a great piece of technology. Unfortunately the broadband isn’t that great out here in rural Wyoming. That presents a great business opportunity.” The Hooli CEO moves smoothly from belligerence to upside-seeking.

The Hooli team looking uncomfortable while Big Head takes a phone call from Gavin.

Hooli Chat breaks up too, and Gavin ends up calling Big Head on the phone, with the Hooli team looking undeniably uncomfortable in the background, unable to listen in on the conversation.

In the end, even the phone’s audio cannot hold up—the irony of technology not functioning in its own heartland.

Gavin Belson and Peter Gregory meet face to face.

Although Monica can exercise her power with Pied Piper’s team, the limits of her role show when Peter Gregory encounters Gavin Belson. Monica and Peter are out to lunch, where Peter, asked by the waiter whether he’s enjoying his asparagus, informs him that he eats it only for the nutrients, not for enjoyment.

During their lunch, Monica alerts Peter that Gavin has just come through the door. After a failed attempt to slink away, Peter comes face to face with Gavin in front of a sitting Monica. Peter and Gavin fumble through pleasantries and conversation about Jackson Hole, as Monica watches silently.

Monica watches on as Gavin and Peter exchange pleasantries.

With those examples of women standing by mute as men have conversations in front of them, this episode neatly fails the Bechdel Test, a standard proposed by cartoonist Alison Bechdel, which requires that two women in a work of fiction talk to each other about something other than a man. It’s hard to see how future episodes will do better, since Monica is the only female regular on the show.

Gavin mentions that he is going to be the keynote speaker at TechCrunch Disrupt, and that he will unveil Nucleus, a technological rival to Pied Piper’s compression product, at the event. This makes Pied Piper’s launch at the event all the more crucial. Pied Piper is becoming a plaything of billionaires more interested in embarrassing each other than building new technology.

Chuy Ramirez unveils his updated logo for Pied Piper.

After Chuy reveals an unspeakably obscene mural on the garage door of Pied Piper’s suburban-ranch-house headquarters, Erlich finally gets the logo he was looking for. Chuy creates a simple green block with two lowercase “p”s overlapping in the middle—just like Jared had proposed, but for an extra $10,000. Chuy takes back the original mural, and Erlich gets the bragging rights of telling people that Chuy Ramirez designed the logo.

At the end of the episode, we see Gavin looking out of his Hooli headquarters windows toward Chuy’s original Pied Piper mural from Erlich’s garage door. We learn he spent $500,000 to buy it—a last lesson in how Silicon Valley values appearances over substance.

The challenge of making business intelligence (BI) easier to use and more pervasive has been widely debated for the last five years. During that time, BI has stalled at an estimated penetration of between 10% and 20% of enterprise users. Every year sees a new analytical technology, a new analytical tool, a new process that promises more analytical power to the business analysts, but none of them have been able to move the needle toward widespread adoption, or "consumerization" of BI.

How Many Business Analysts Do We Really Need?

But is it reasonable to expect that more tools for business analysts will increase BI's enterprise penetration? How many business analysts does a business really need?

Instead, we should be thinking about delivering BI to operational employees, suppliers and partners. For every business analyst, there are thousands of other employees who could benefit from the timely information BI can provide. To jump beyond BI's current adoption rate, the needs and skills of those stakeholders must drive BI's technology and usability considerations.

Apple vs. Microsoft And Apps vs. Tools

When we look at BI through the eyes of end-users as well as business analysts, we can see two different approaches, roughly comparable to the differing philosophies of Apple and Microsoft. While Microsoft has always tailored itself to the business world, Apple aimed its software at the consumer, creating an epic battle between tools and apps.

Microsoft offers a relatively limited set of tools packed into its Office productivity suite, designed to satisfy every business need. But of Excel's approximately 30,000 different functions, guess how many the average Excel user actually touches? Most use fewer than 5% of them. Only a few know how to use pivot tables, and IT departments have to build thousands of macros to simplify Excel templates.

Apple, meanwhile, created an app store with 500,000 mostly single-purpose apps designed to meet the broadest possible set of wants and needs, many of which you didn't even know you had!

When asked whose paradigm is better, the vast majority of BI stakeholders would likely agree that their end-users would prefer apps over tools.

Fighting Functionality Overload

This is because knowledge workers suffer not only from information overload, but also from functionality overload. End-users are not analysts. When individuals need to check the weather, they do not perform a detailed analysis of the weather patterns. They trust what the weather app says. Similarly, business users want apps that deliver them the trusted information they need to do their jobs.

From this perspective, the consumerization of BI can only be driven by technologies that turn the classic enterprise BI portal into a BI app store, where end users can go and select targeted, specific apps that address their concrete questions.

Two Kinds Of BI Tools

Of course, the simplicity of end-user info apps should be complemented with higher-end tools to help professional analysts learn to perform new and more complex analyses and derive even better business insights.

Rather than striving to turn end-users into analysts, we have to give those users info apps that let them focus on their primary job skills. And vice versa: Rather than making simplistic BI tools for analysts, let's help them learn new methods and methodologies to maximize the insights they can derive. Analysts are coping with new data sources, new types of data and new forms of interaction with consumers, all of which provide plenty of opportunities for analysis but also require significant skills development.

How to "consumerize" business intelligence may not yet be completely clear, but one thing is certain: a one-size-fits-all approach won't do the job. BI-related apps could meet the varying needs of end-users more efficiently than the all-encompassing tools analysts require, and help make BI a core part of enterprise decision making.

On Monday, IDC predicted that PC sales will fall 1.3% in 2013, and that smartphone sales will continue their explosive growth, topping 50% of all mobile phone shipments and displacing the legacy feature phone as the dominant mobile phone platform.

Although IDC released the two reports separately, they're best considered together, for context. What IDC predicts merely reflects the conventional wisdom: that the age of the PC is ending, and that the smartphone is the dominant platform. And, if the Apple iWatch is real, and Google Glass becomes a viable platform, then we have the past, present, and future of the computing market: the PC, the phone, and wearable computing.

The Windows 8 Phenomenon

The 2012 performance of the PC market could be written off as a consequence of Windows 8: the pause in sales before the launch, followed by what might be called a "mild" reception by the market. PC sales fell 3.7% for the year, IDC found, with an 8.3% drop in fourth-quarter shipments. U.S. PC sales fell 6.5% for the fourth quarter and 7.6% for the year.

"The PC market is still looking for updated models to gain traction and demonstrate sufficient appeal to drive growth in a very competitive market," said Loren Loverde, an analyst for IDC, in a statement. "Growth in emerging regions has slowed considerably, and we continue to see constrained PC demand as buyers favor other devices for their mobility and convenience features. We still don't see tablets (with limited local storage, file system, lesser focus on traditional productivity, etc.) as competitors to PCs – but they are winning consumer dollars with mobility and consumer appeal nevertheless."

Gartner hasn't yet released its 2013 PC forecasts, but has already said that PC sales dropped 4.9% in the fourth quarter, as it seems consumers just didn't really care about them any more.

Smartphones: A Worldwide Phenomenon

Smartphones, meanwhile, have worked their way through "mature" markets like the United States and into the high-volume, lucrative BRIC (Brazil, Russia, India, China) countries, IDC reports. As smartphones begin selling in high volume in those regions, look for even higher shipment numbers: IDC predicts that more than 1.5 billion smartphones will be shipped worldwide by the end of 2017, or more than two-thirds of the phone market. In India, for example, less than half of the phones sold in 2017 will be smartphones, IDC predicted - and yet it will be the world's third-largest smartphone market.
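
For context, here's a quick back-of-the-envelope check in Python of what those two IDC figures imply when taken together; the total-market number is derived from the stated share, not a figure IDC itself reported.

```python
# Back-of-the-envelope math from the IDC figures cited above.
# Assumption: "more than two-thirds of the phone market" refers to unit shipments.

smartphones_2017 = 1.5e9   # IDC forecast: smartphones shipped in 2017 (units)
smartphone_share = 2 / 3   # IDC forecast: smartphones' share of all phone shipments

implied_total_phones = smartphones_2017 / smartphone_share
print(f"Implied total phone shipments: {implied_total_phones / 1e9:.2f} billion")
# -> about 2.25 billion handsets, the overall market size these two numbers imply
```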

Gartner, meanwhile, said that sales of mobile phones actually fell 1.7% during 2012 - not because of lack of demand, but due to consumers turning to smartphones instead of feature phones.

Meanwhile, IDC reported earlier this month that tablet sales reached record levels, 52.5 million units, during the fourth quarter.

PC sales may yet rebound - Microsoft seems to believe that, and it still maintains close ties to enterprises and consumers. But, increasingly, the PC seems to be a legacy device of interest to a slowly declining number of users.

After backing out of plans to compete with Netflix, Blockbuster is all but done. That's not great news for the streaming-video space, and Netflix is in a rough spot. But Blockbuster's latest stumble toward oblivion isn't necessarily the final nail in Netflix's coffin.

On October 4, Dish Network scrapped its plans to revamp the Blockbuster brand and launch a subscription-based streaming-only product to compete directly with Netflix. Dish ran the numbers, evaluated its options, and (correctly) concluded it didn't have the assets to make a Netflix competitor work.

That leaves Blockbuster on the ropes again, with just 900 of its former 3,300 retail stores and no clear digital strategy. But don't assume that the math will work the same way for Netflix.

The Bad News For Netflix

Dish's decision confirms what I've been saying for some time: the flat-rate streaming market isn't a very profitable place. As I noted in Netflix Deathwatch over the summer: expensive bandwidth, second-rate content and strained relationships with content providers are par for the course for the entire industry.

Paul Sweeting, Principal at Concurrent Media Strategies, told E-Commerce Times that "…studios have long been leery of subscription-based streaming of movies because it produces the lowest per-view/per-capita return for the rights holder of any business model, and it cannibalizes higher margin businesses like pay-per-view rentals and even purchases." In the same article, another analyst predicted that flat-rate streaming may have only another five or six years of life.

The market is obviously sick, and it needs to change.

The Good News For Netflix

Troubled or not, Netflix still owns the streaming video market, and that brings advantages Dish and Blockbuster couldn't match. Most importantly, Netflix has existing content relationships that, while strained, put it in a better position than a startup.

In an October 8 analyst note, Morgan Stanley's Scott Devitt estimated that Amazon, which already has relationships with most studios, would need to spend an additional $1 billion to $1.2 billion in licensing rights to launch a similar service.

If that price is too steep for Amazon, it's probably beyond most competitors. Barriers to entry don't validate the streaming-video business model, but they do buy time for Netflix to try to sort out its problems.

Netflix is also developing original content with headliners like Kevin Spacey to hedge against expiring contracts and differentiate itself from competitors. Netflix's margins per customer may not be fantastic, but with all those users, it has cash to invest in programming.

Eventually, though, Netflix needs to balance cheap back-catalog offerings with enough premium and custom content to create a profitable offering "good enough" to justify its prices. It also needs to keep an eye on Hulu, HBO and other content providers looking to ramp up their streaming businesses.

Put it all together, and I wouldn't want to be in Netflix's shoes. Blockbuster's implosion is a reminder of how tough things have gotten in Netflix's core business, but at least Netflix still controls its own destiny.

It has been a little less than a year since Google officially announced Android 4.0 Ice Cream Sandwich. The first device to roll out with version 4.0, the Samsung Galaxy Nexus, hit store shelves in December. Since then, 4.0 has reached 23.7% of all Android devices - with 4.1, a.k.a. Jelly Bean, waiting in the wings. What is the holdup?

Android 4.0's penetration has been slow, even by Android standards. But if top manufacturers are to be believed, updates for many devices should be available before the end of the year.

The problem is that it is difficult to trust the manufacturers. For instance, Motorola said it would upgrade any devices capable of receiving 4.1 Jelly Bean by the end of the year. Other devices across its product portfolio were supposed to receive updates as well, moving smartphones from 2.3 Gingerbread up to 4.0 Ice Cream Sandwich. On Monday, Motorola updated its upgrade list to show that many devices scheduled to receive 4.0, such as the original Atrix, would not be getting new software. Overall, version 2.3 will remain on 13 of the 23 Motorola smartphones released in the U.S. over the last couple of years.

Samsung is notoriously slow in rolling out Android updates and HTC is not much better. The top manufacturers would much rather sell a new smartphone running the latest software than perform extensive (and expensive) updates to phones that already have been bought and paid for. The carriers share some of this blame, as the data for the updates goes over their pipes and they drag their feet right along with the device makers.

When it comes to 4.0, we are about to hit an inflection point where the majority of new users and many existing users will see Android 4.0 as their default version of the operating system. Almost all new phones from top Android manufacturers are now shipping with either 4.0 or 4.1, and Samsung and HTC have promised the 4.1 upgrade by the end of the year. Motorola even promised during the announcement of its new Droid Razr devices that if an older device was not upgradeable to 4.1, Motorola would give the owner a $100 credit toward a new Motorola device.

That so few Android devices are running 4.0 a year after its release seems absurd, but that is the nature of Android territory. The first 4.0 devices outside of the Galaxy Nexus were not widely available in the U.S. until April and May 2012, six months after the official release. Most manufacturers were not ready to issue 4.0 updates for older devices until that time either.

Since Android began rising in popularity, Google's announcements of new versions have run four to six months ahead of the moment its manufacturing partners actually launch devices carrying them. The official Google announcement is akin to a soft launch, where a product is announced but does not actually hit the market until months down the line. Microsoft is notorious for the soft launch, showing off its newest Windows versions as much as a year ahead of the official release date.

Google wants to close the gap between the announcement and arrival of new Android versions for top devices, as well as upgrades to existing devices. That is why it announced its so-called platform development kit (PDK) at Google I/O in June. The PDK is supposed to help manufacturers create updates earlier and release new smartphones with the new operating system on accelerated timelines.

It appears to be working.

Since the 4.1 Jelly Bean announcement at Google I/O, manufacturers have announced and begun rolling out updates to top devices. Both the HTC One X and Samsung Galaxy S III will receive 4.1 by the end of October. New devices, such as the Samsung Galaxy Note II and HTC One X+, will ship with it.

As of now, only 1.8% of Android devices are running 4.1, mostly devices like the Nexus 7 tablet, Galaxy Nexus and Motorola Xoom that have already received the upgrade. It will be interesting to see how quickly 4.1 grows in the coming months against the sea of devices that will not be upgraded from earlier versions.

Device upgrades are one of the biggest pain points for Android users, and Google can do only so much to force the manufacturers and carriers to issue timely updates. It looks as though upgrades will come quicker than they have in the past, but the desire to sell more devices with the newest Android version will always outweigh consumers' pleas for updates for their older devices.

When Amazon announced its Appstore for Android last year, a lot of people were left scratching their heads. Despite the seeming strangeness of Amazon running an Android app store, mobile app developers were cautiously optimistic. The Android Market (now Google Play) at the time was a disorganized and difficult-to-monetize quagmire, and a curated, third-party app store with the might of Amazon behind it seemed to offer a unique opportunity. But the e-commerce giant had to learn to navigate the world of mobile apps just like everybody else.

App Of The Day And Gaming Nightmares

From the beginning, it was not all roses between Amazon and developers. It became evident fairly quickly after the Appstore launched that Amazon did not know how to handle the mobile developer community. Shortly after the March 2011 launch, the International Game Developers Association (IGDA) issued a warning for mobile game developers to steer clear of the Appstore. The group's concerns were over pricing, promotion and distribution - issues that Amazon’s developer agreement made confusing and opaque:

“The IGDA has significant concerns about Amazon's current Appstore distribution terms and the negative impact they may have on the game development community... we are not aware of any other retailer having a formal policy of paying a supplier just 20% of the supplier’s minimum list price without the supplier’s permission,” the IGDA said in a letter to its developers in April 2011.

This response was a big concern for Amazon because games continue to be the most popular single category of mobile apps.

Later in 2011, Amazon had a different mess on its hands, again tied to distribution and developer compensation. An independent app developer had agreed to be part of Amazon’s popular “free app of the day” program. The agreement between Amazon and developers held that even though the app was free for the day, publishers would still make 20% of the list price on downloads of their apps. The developer, Shifty Jelly, found that not to be the case.

“That’s right, Amazon gave away 101,491 copies of our app! At this point, we had a few seconds of excitement as well, had we mis-read the email and really earned $54,800 in one day? We would have done if our public agreement was in place, but we can now confirm that thanks to Amazon’s secret back-door deals, we made $0 on that day. That’s right, over 100,000 apps given away, $0 made,” the company wrote at the time.

Problems, Solutions And The Kindle Fire

Organizational problems between Amazon and developers were relatively common in the early days. Developers noted that review times for apps (Amazon pre-approves apps on its Appstore, just like Apple does for the iOS App Store) were lengthy, apps were not filtered for different screen sizes and it was difficult for customers to contact developers with problems.

“We definitely didn’t have everything buttoned up appropriately,” acknowledged Aaron Rubenson, the director of the Amazon Appstore, in an interview with ReadWriteWeb. “We’ve learned a lot.”

In October 2011, Amazon announced the Kindle Fire, its first full-featured, Android-based tablet. All of a sudden, the Amazon Appstore for Android made a whole lot more sense. The e-commerce king could funnel Fire tablet users to its own curated app repository and make money off of it. Fire users were blocked from accessing Google’s Android Market, even from the browser.

Perhaps just as important, Amazon started refining its developer experience.

The Appstore team has grown significantly (and Amazon is still actively hiring Android developers, managers and engineers), and the company has expanded the program to include more layers of developer support. That means extra app testing and more marketing experts to help developers spread awareness and increase distribution. For the Free App of the Day, the terms are clearer, and more people are involved to help create an integrated campaign for developers and prepare them for the deluge of downloads associated with the promotion.

Amazon has also redesigned its developer portal for the Appstore and added new tools and software development kits. The biggest addition came in April with Amazon’s in-app purchasing SDK, which lets developers integrate the company’s “1-Click” purchasing for in-app goods.

Easing that process can mean big bucks for app makers. “The average in-app purchase is two-times that of a paid app,” Rubenson noted.

With the newly announced Kindle Fires, Amazon also released a new maps SDK, powered by Nokia’s navigation suite. Games have also been given a higher priority in the new Kindle Fires, with a dedicated menu icon to the games section in the Appstore.

Developers Respond

To Amazon’s credit, there have not been any significant developer flare-ups in 2012.

“We are really pleased with our overall relationship with Amazon,” said David Tyler, director of product development at NatureShare, in an email. “Working with our contact there has been great. He was always quick to respond and took a genuine interest in our apps. Amazon promoted our app when it was first released and our Audubon Birds app was even featured in an email.”

Yet, Amazon remains the underdog. Developers go where the eyeballs are - and the overwhelming majority of eyeballs are on non-Kindle Android and iOS devices. Android’s Google Play store has 675,000 apps; Apple’s App Store has 700,000. The Amazon Appstore has 51,000. Android’s installed base is near 500 million, while Apple has activated some 400 million iOS devices. The Kindle Fire cannot come close to matching those numbers.

“Many of the mobile game developers on our network report higher ARPU (Average Revenue Per User) on Amazon's Kindle Fire devices than on either iOS or Android devices,” said Maria Alegre, CEO of Chartboost in an email. “However, there is simply a much higher volume of iOS and Android devices on the market than Kindle Fire devices. So mobile game developers and publishers, while they can make more money per Kindle user, are making more money overall on the more popular iOS and Android platforms. It's a volume issue for Amazon, and we're interested to see how the new Kindle Fire HD line of devices changes the landscape."

It is almost safe to say that Amazon has finally solved its developer issues. Amazon’s team is now bigger, more efficient and more receptive to mobile developers and their needs. Developers know what to expect from Amazon and its services so they shouldn't see more of the types of surprises that rocked Shifty Jelly in 2011.

The trick for Amazon now is to figure out how to get out from under the thumb of Google and Apple. The release of two new Kindle Fires is a good start, but far from enough. An Amazon smartphone has long been rumored to be in the works, but would an Amazon-branded smartphone sell in big enough numbers to make a difference?

But give Amazon its props. The company may have stumbled when it first entered the mobile app space - and it took plenty of lumps along the way. But now Amazon looks like it has found its stride.

Apple sold 5 million iPhone 5s in its first weekend - a company record. But some people expected more, and now have to explain why Apple didn't top their predictions. In reality, this isn't that big a deal. The rest of the year matters a lot more.

Compared to last year's iPhone 4S launch, when Apple sold 4 million phones, 25% growth may seem disappointing. This is Apple, after all, and Wall Street is used to seeing the company blow past expectations, not come up short. (The iPhone 4S launch, if you recall, was more than twice as big as the iPhone 4 launch.) This number isn’t 10 million or even 8 million or 6 million, so some are saying that it’s “WORSE THAN EXPECTED” or “very disappointing”.

So, yes, 5 million sales is below some estimates. But that really doesn’t mean much in the long run. Why not?

First, for the real story, you need to think about supply and demand, and once again, demand surpassed supply. Apple stopped taking launch-weekend pre-orders after only a short period of time, and many stores were sold out of various iPhone models throughout the weekend. We still don’t know how many iPhones Apple could have sold over the first weekend with unlimited supply, and we may never know.

Next, mobile is a complicated industry, where two-year contracts often dictate purchase decisions. Apple sold almost twice as many phones over the past four quarters as it did in the four quarters before that. Few of those buyers are already eligible to buy an iPhone 5 at a subsidized rate -- it'll be months before they can justify buying one. Anecdotally, I’ve also seen mentions that AT&T was being stingier about early upgrades this year than last year.

Further, first-weekend sales just aren’t that important relative to an iPhone’s lifetime sales. For example, the 4 million iPhone 4S units Apple sold on launch weekend last year represent just 3.5% of all iPhone shipments over the past four quarters. The 1.7 million iPhone 4 units Apple sold on its opening weekend represented just 2.2% of the phones it would sell over the five quarters before the iPhone 4S launched. It's nice for Apple to sell 5 million iPhones in a weekend, but it'll be more impressive if it can sell 200 million phones over the next year.
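
Here's a minimal sketch of that arithmetic in Python, using only the figures cited in this piece; the "implied" period totals are back-calculated from the stated percentages rather than reported by Apple.

```python
# Launch-weekend sales as a share of longer-run shipments,
# using only the figures cited in this article.

def launch_share(launch_units, period_units):
    """Launch-weekend units as a percentage of a longer period's shipments."""
    return 100 * launch_units / period_units

# iPhone 4S: 4 million on opening weekend, stated as ~3.5% of four quarters of shipments.
implied_4s_period = 4.0e6 / 0.035   # ~114 million iPhones over those four quarters

# iPhone 4: 1.7 million on opening weekend, stated as ~2.2% of five quarters of sales.
implied_4_period = 1.7e6 / 0.022    # ~77 million iPhones over those five quarters

print(f"Implied four-quarter total around the 4S: {implied_4s_period / 1e6:.0f} million")
print(f"Implied five-quarter total around the 4:  {implied_4_period / 1e6:.0f} million")
print(f"Sanity check: {launch_share(4.0e6, implied_4s_period):.1f}% and "
      f"{launch_share(1.7e6, implied_4_period):.1f}%")
```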

Long story short: It’s a fun press release to see from Apple every year, but the iPhone 5’s real sales performance will be measured in months, quarters, and maybe even years, not weekends. It’s still crucial for Apple to sell a lot of them, but late-December sales will be a lot more important than mid-September sales.

Microprocessor vendors have begun moving away from describing their chips with the sort of nerdy “speeds and feeds” metrics that have dominated computing for the past few decades. It’s part of a dramatic sea change in how PCs, tablets and smartphones are evaluated, bought and sold.

In fact, the notion of computer “performance” is being completely redefined. Instead of chip vendors worrying about tweaking their processor and graphics performance to eke out a few more frames per second on the latest games, they’re worrying about how to make trackpads more responsive and how to make a laptop start up faster after being shut down.

Tablets and phones are equally dependent on microprocessors, but they’ve never really been sold on the basis of chip benchmarks - or even what chips are in them!

A New Look At Power

This week, Intel hosted thousands of developers at its Intel Developer Forum in San Francisco, discussing everything from software-defined radios to touch and voice control to the evolution of the data center.

And - oh yes - there was “Haswell,” Intel’s fourth-generation of what it calls the Core chips for desktop PCs.

Normally, the Haswell presentation would have been packed with roadmap slides, celebrations of clock speeds, a picture of the processor die and the “wafer shot,” where an Intel executive would triumphantly hold aloft the first circular wafer containing the bare processor dice. A year ago, Intel touted a level-3 cache, memory and string performance enhancements, and a dedicated random number generator within its new “Ivy Bridge” chips (the third-generation Core chips, out now).

Forget that. This week, Intel’s technical braggadocio boiled down to offering the same performance as Ivy Bridge, at half the power. There were no clock speeds, no cache sizes or instruction accelerators. Instead, Intel executives positioned Haswell as the foundation, not the focus. “Delivering a processor is not enough; at the end of the day, it’s about the software,” said Dadi Perlmutter, the head of Intel’s chip group.

Redefining Performance

A few blocks away, executives at AMD were saying similar things. There, Leslie Sobon, AMD’s corporate vice-president of marketing and head of product, quickly flipped through a presentation on the company’s APUs (accelerated processing units), which integrate a microprocessor with graphics processing. Instead, she was eager to talk about how much of the world, with the exception of the U.S., Japan, and Germany, is moving away from worrying about whether or not a chip runs at 2.8GHz or 2.9GHz.

“Low power - that’s good performance,” Sobon said. “Good battery life - that’s now considered performance.”

At the same time, also in San Francisco, Apple was launching its iPhone 5. And what do we know of its processor, the A6, except that it’s better than the A5 in the iPhone 4S? Very little. According to Apple, the new A6 promises “up to” two times the CPU and graphics performance; the A6 is 22% smaller than the A5; and the battery life of the new iPhone is slightly better than that of the 4S (how much of that is due to changes in the battery itself is unclear). Not a hard spec to be seen.

Designing For Consumers, Not Engineers

A key reason for the change is the attempt to market devices in ways that are meaningful to consumers. When everyone was running productivity software on PCs, there were common benchmarks that did a decent job of explaining how fast a machine would perform common tasks.

Today, there are so many different applications running on so many different platforms that most benchmarks don’t make much sense anymore. (The possible exception: PCs for gamers, who have very specific needs around graphics performance.)

So it’s a new world in the computer market, described in the more abstract language of “experiences,” rather than in bar charts and graphs.

“I think that people’s performance expectations are different than what the computer market is selling,” said Mike Feibus, principal analyst at TechKnowledge Strategies. “And they were slapped in the face with this with the iPad in 2010. This is how people perceive performance: if [the system] comes up right away, if things move quickly.”

Basically, the message is this: chip clock speed doesn’t matter. Power usage matters. How fast a computer boots up matters. The responsiveness of a user’s touchpad matters. For the majority of users, just about any computer can run all the software that a user needs at an adequate performance level. That’s “good-enough” computing, and it’s evolved from the desktop to find a home on all kinds of devices.

“That’s not to say raw [processor] performance isn’t important; it still is,” Feibus said, especially with applications like photo editing or voice recognition. But the industry needs new ways to sell the traditional “good, better, best” comparison, he said.

This goes beyond mere marketing. Intel internally believes that the company may have hit a plateau in processing needs, which means that its engineers have no choice but to focus on lowering power and other aspects of the computing experience. The company’s internal market researchers have produced data that shows potential customers will start to issue poorer ratings to a notebook whose trackpad’s latency, or “lag,” goes beyond 250 milliseconds, a source at the company said.

So Intel has rethought its design approach. “For a long time, we worked from the inside out,” the Intel source said. “We developed the best [microprocessor] engine we could and said, here you go. We don’t know what you’re going to do with it, but go party. Now, we recognize that if you don’t work from the outside in - if you’re not thinking about experience from, well, the first moment, you’re setting yourself up to, well, possibly to fail.”

Fortunately for chip makers, the same forces that pushed them to faster and faster clock speeds also help them lower power consumption. That would be the famous Moore’s Law, which holds that the number of transistors on a given chip doubles roughly every two years. In reality, Moore’s Law gives chip designers a range of choices: improve the chip’s computational performance, lower the power, or some combination of the two.
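
As a rough illustration, here's a minimal sketch of that rule of thumb in Python, assuming the two-year doubling cadence described above; the starting transistor count is arbitrary, and the point is simply that the growing budget can be spent on speed, on power, or on some mix of the two.

```python
# Moore's Law as a rule of thumb: transistor budgets double roughly every two years.
# Designers can spend that budget on performance, on lower power, or on some mix.

def transistor_budget(start_count, years, doubling_period=2.0):
    """Projected transistor budget after `years`, doubling every `doubling_period` years."""
    return start_count * 2 ** (years / doubling_period)

start = 1.0e9  # arbitrary starting point: a 1-billion-transistor chip
for years in (2, 4, 6, 8, 10):
    budget = transistor_budget(start, years)
    print(f"After {years:2d} years: ~{budget / 1e9:.0f} billion transistors")

# Holding performance flat and spending the budget on efficiency instead is roughly
# what Intel promised with Haswell: Ivy Bridge-level performance at half the power.
```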

Burying “Intel Inside”

Intel made that shift a couple of years ago, rebranding its Core line into the Core i3, Core i5 and Core i7 families. And then there’s the Atom, an x86 chip that can fit inside smartphones, tablets or netbooks. But Intel’s “i” designations still hide a great degree of variation among the individual processors within each family.

Years ago, PC makers trumpeted exactly what chip was powering each PC, even going so far as to disclose details like the memory speed. The race to 1-gigahertz chips was a major event. With today’s ultrabooks and tablets, hardware makers seem reluctant to divulge details beyond the processor family.

Buyers still have to know the difference between an x86 chip and an ARM processor, although the line is blurring with the launch of Windows RT, which runs on the cheaper, lower-power ARM chips, and Windows 8, which uses x86 silicon. But it’s increasingly doubtful that consumers will give a hoot about the differences between a dual-core Core i5 3470-T and a quad-core i5 3330.

Device makers seem perfectly happy about that. Phone and tablet makers instead choose to focus on qualities like screen size, talk time, battery life and operating system. And even PC builders are following suit. Two years ago, AMD’s “Fusion” program, Sobon’s brainchild, pulled the AMD-branded stickers off most PCs and let the manufacturer sell the product. This week, at a behind-the-scenes look at one big PC maker’s Windows 8 tablets and ultrabooks, there were no “Intel Inside” stickers to be seen.

The Future Of Chips?

What does this mean for the future? At this point, we just don’t know. PC manufacturers are reluctant to talk about how they plan to price or market their new products before launch.

But one thing seems clear: “Intel Inside” may not go away entirely, but it’s likely to get harder and harder to find out which processor powers a particular product. A decade ago, PC buyers bought the best collection of parts. Now, the new era of tablets, ultrabooks and convertibles is making individual components - including processors and their specs - increasingly irrelevant.

After all, if chip vendors don’t want to talk in terms of gigahertz, who does?

This week, at long last, the Federal Communications Commission explained in court why telco criticisms of its Net neutrality regulations are "baseless." Nonetheless, it has become crystal clear that the FCC's rules against online discrimination - perhaps the signature technology policy move of Barack Obama's presidency - are in the industry's crosshairs.

The Net neutrality regulations adopted by the FCC on a party-line vote just before Christmas 2010 represented the administration's attempt to find middle ground. Chairman Julius Genachowski had floated an idea variously called "The Third Way" or "Title II Lite." His plan proposed a historic, black-and-white reclassification of broadband Internet service as a telecommunications service under the Communications Act of 1934, but with caveats: the FCC would "forbear" from using all the regulatory muscle that it generally holds over common carriers, like the ability to impose sharing requirements. But Genachowski, facing a tsunami of industry disapproval, retreated to a far more modest jurisdiction over broadband. That's what Verizon now dismisses in court as the FCC's attempt to "conjure a role for itself."

Genachowski's Net neutrality rules were a tenuous play from the start, considering the Comcast v. FCC decision on BitTorrent throttling some months earlier, which challenged the commission's "ancillary authority" to regulate broadband. Verizon said it would go to court. It has.

Meanwhile, AT&T responded in public with a what's-done-is-done air. In a hearing last March, a company executive quietly seconded a member of Congress who suggested the rules would "require no change in the business plans of AT&T." We're beginning to see why. In the run-up to this week's expected release of iOS 6, AT&T has said that it will disable FaceTime, the iPhone's video chat feature, over its cellular networks except for subscribers to its pricey Mobile Share plans. Why? Uncertainty about data load, the company said. And if the FCC can make up the rules as it goes along, AT&T seems to be arguing, then so can we.

Blocking FaceTime doesn't violate Net neutrality regs, a company rep wrote, because the app is "preloaded." That's a distinction not found within the four corners of the FCC's neutrality rules. But it buys the company a little wiggle room.

Genachowski's Christmas surprise earned him the ire of critics, some of whom see an inevitability to today's challenges. "This is a mess of the commission's own making," said Derek Turner, research director of Free Press, a vociferous proponent of Net neutrality regulations. Congress, it's worth noting, wasn't able to grant the FCC any clearer authority. But rather than establishing that the Internet is both the digital bits that make up its content and the (highly regulable) pipes that those bits travel along, Genachowski tried to make do with a far less coherent jurisdiction. And prodded by industry, he carved out exemptions for mobile Internet, which is exactly how more and more Americans are going online. Companies can't block competitive applications, and they have to be transparent about what they do do. But that leaves gaps big enough for AT&T to drive its FaceTime policy through.

That the FCC would claim jurisdiction over broadband, today's dominant communications medium, scares the bejeebus out of some people. Same goes for the idea that it wouldn't. The agency tried to calm roiling waters with a tempered approach to Net neutrality. But that produced only a momentary peace. Verizon is challenging it in court. AT&T is challenging it in the marketplace. What is the government's role in regulating broadband networks? More unsettled than ever. And that doesn't benefit much of anyone.

Amazon did not unveil a smartphone Thursday, despite speculation to the contrary. But its new Kindle Fire tablets give us some clues about an Amazon phone, reportedly in the pipeline. We see a $200 (almost) loss leader that makes buying anything from or through Amazon beyond easy.

An All-Amazon-Controlled Experience

The idea that Amazon will put a few of its apps on a generic Android phone and call it a day is misplaced. Amazon will control every aspect of its phone, from the way the home screen looks to the way it ties into Amazon's Kindle bookstore, video-streaming service and music cloud locker.

Bezos explained, "Kindle Fire is a service. What does it mean for a hardware device to be a service? It greets you by name. It comes out of the box with your content preloaded. You can choose from 22 million different items. It makes recommendations for you. [...] A hardware device as a service. That's what people want."

Amazon is all about the experience, front to back -- much like Apple. Expect total Amazon control over any phone it makes.

Aggressive Prices So Amazon And Its Customers Win

Pricing and profit models for mobile devices set Amazon and Apple apart. Apple makes money selling iPhones at tremendous profit and breaks even on media and app sales. Amazon prices its hardware closer to breakeven and hopes to profit on media sales.

Bezos gave some context to Amazon's strategy this week, after introducing the moderately priced $299 large-screen Kindle Fire HD.

"Above all else, align with customers. Win when they win. Win only when they win," he said.

How does this apply to hardware pricing?

"We want to make money when people use our devices, not when they buy our devices," Bezos said. "If someone buys one of our devices and puts it in a desk drawer and never uses it, we don't deserve to make any money."

Jumping from there to a phone, Amazon is probably preparing one that's inexpensive by industry standards, one that's designed so that buyers will keep using it to buy Amazon media, products and services.

Depending on how Amazon distributes its phone — directly or through telcos — Amazon could offer a no-contract phone for $200 or a carrier's standard contract phone for free.

Expect Amazon to use some of its Kindle Fire tricks, too, such as ad-supported subsidies from its special-offers program.

The Boldest Move: Going Around The Carriers?

One of Amazon's interesting announcements Thursday was a 4G LTE wireless version of the Kindle Fire HD, with a special (low!) price for data service. Unlike other tablets -- including the iPad -- which require a relationship with a wireless carrier, Amazon seems to be stepping in front of the carriers here, billing customers directly for 4G service -- at least for the basic package, which includes 250 MB of monthly service for a $50 annual fee. ("Customers can also choose to upgrade to 3 GB or 5 GB data plans from AT&T directly from the device," notes Amazon.)

It would be an especially bold move for Amazon to apply this model to phones: Bypassing the typical carrier service-plan requirements, buying wholesale data capacity directly from AT&T (as it's doing here, and as it does for the 3G Kindle), and charging a lot less for an entry-level smartphone plan than its competition.

Imagine, for instance, an Amazon phone with no monthly voice-plan requirement, fair pricing on data plans and unlimited text messaging. It could conceivably cost half what an iPhone does per month, running on the same AT&T LTE network. And this would help Amazon win with its customers and turn gadgets into services.

But: This would be risky and challenging, even for Amazon. And Bezos may decide that the distribution power that carriers have -- especially domestically -- is too great to fight. (That's the lesson Google learned with the Nexus One.) Perhaps a halfway-there, hybrid approach?

Then again, don't put anything past Jeff Bezos: If any company is ballsy enough to try an end run on the carriers, it's Amazon.

Either way, if this week's Kindle Fire announcement is any lesson, expect aggressive, ad-subsidized pricing and a service-focused approach with an Amazon phone. And maybe -- just maybe -- a bold move to disintermediate the phone companies.

If there’s one thing the DeathWatch knows, it’s that all things must come to an end. So we’re pausing to review the fortunes of our first 13 unlucky inductees. The fates of some of them may surprise you.

In reverse chronological order, here’s a look at the initial baker’s dozen and what they’ve been up to since joining the DeathWatch over the last three months (updated October 6, 2012).

Zynga

It’s only been a week since the casual gaming company hit the DeathWatch on August 27th, but Zynga shares have dropped again on news that Chief Creative Officer Mike Verdu was leaving his post, along with other high-profile execs. This kind of churn is probably inevitable among staffers looking for a quick upside, since most Zynga stock options will be underwater for some time, but it should eventually level off.

On the upside, Zynga’s first Partners for Mobile game just shipped, and it’s completely different than any other Zynga title. Mobile gaming control options still kind of suck for first-person games, so the gameplay suffers, but that should get better over time. If Zynga can become the go-to software development platform for mobile gaming, it has a shot at reinventing the company and reversing its fortunes.

Motorola

It’s official. Google is selling off Motorola’s Home Division. That’s good news for Google in the short term, but it could really hamstring plans for expansion into the market for TV set-top boxes.

Despite the loss of potential toys, Motorola keeps working with what’s available. Since its August 20th induction, it looks like Motorola will be making a push with a new device line in September. The rumor mill seems pretty confident that the devices will include a Medfield-powered unit with rip-roaring specs, and marketing copy about “taking it to the edge” implies an edge-to-edge display. With a decent form factor and battery, the phone could put Moto back in contention for #4 in the handset market. But is #4 really good enough for the long term?

Best Buy

After Best Buy hired CEO Hubert Joly to reject the Schulze buyout and swing the axe of austerity, investor confidence plummeted and the dreaded stock downgrade arrived. Investor hopes (and the stock price) got a bit of a bounce as the Schulze buyout got another chance, but experts think it’s unlikely to go through. As the Wall Street Journal asked last week, the bigger question is: “Can Electronics Stores Survive?”

EA

After laying off staff in its newly acquired PopCap unit, EA has readjusted its free-to-play focus, venturing away from Facebook and attempting to cast a broader, multi-device net. It’s a good and necessary goal, but we’ll have to see how well EA can execute. Meanwhile, Madden 13 has the football franchise back on the map with excellent reviews and record sales. It seems EA has bought some more time to figure out its social strategy. It will need it, as the company’s overall challenges haven’t softened since August 6th.

Netflix

Netflix hasn’t made any major blunders or advances since joining the DeathWatch on July 30th, but the rest of the industry hasn’t stood still. HBO fired a shot across the bow with HBO Nordic, a streaming movie service available only in Scandinavia. Despite the abrupt exit of Blockbuster from the space, more direct competition is inevitable, so Netflix may have to do something bold. Perhaps acquiring what’s left of OnLive to leapfrog GameFly?

T-Mobile

Since being inducted into the DeathWatch on July 23rd, T-Mobile has done nothing to stop the bleeding. Earlier this month, it lost more than a half million contract-based subscribers. If Deutsche Telekom’s infrastructure investments really happen, the company could be a technical competitor, but without subscribers, all that capacity could prove a liability. Here’s hoping that the all-hands announcement scheduled for the day everyone else gets the iPhone 5 is a game changer.

Sony

Changing the course of a behemoth as large as Sony takes more than the couple of months that have passed since the company was inducted into the DeathWatch on July 6th. So far, the best thing to happen to Sony has been the OnLive debacle, which makes Sony’s unrelated decision to jettison the service in favor of its own on-demand game competitor look downright brilliant. Product releases have been a mixed bag, including ho-hum smartphones, a respectable consumer camera, an affordable streaming music service, and a gutsy new lap-pad that shows Sony might be willing to take some risks. What’s missing? A convincing living-room attack plan. The PS4 needs to be the crux of any recovery strategy, so DeathWatch is withholding judgment on any turnaround until we see a demo.

Barnes & Noble

Barnes & Noble - inducted June 29th - is making a necessary and aggressive push for the Nook overseas. The company is starting with the UK, but it’s not alone. Building sales channels is half the battle. The rest involves filling that channel with the best possible hardware and content. To that end, the DeathWatch is waiting for something big to emerge from Barnes & Noble’s Microsoft deal before predicting a reversal of fortune. In the meantime, profits remain out of reach.

38 Studios

In early August - just over a month after 38 Studios joined the DeathWatch on June 22 - the Rhode Island Economic Development Corporation officially took hold of 38 Studios' assets, including the game Kingdoms of Amalur: Reckoning and the remains of Project Copernicus. All that’s left now is to see whether some of that amazing intellectual property winds up in the hands of another publisher (DeathWatch is betting on EA). Until then, while we mourn the loss of a lot of good work, we’re grabbing some popcorn and waiting for the latest round of It’s Not My Fault.

Nokia

Samsung has just beaten Nokia to the punch with a Windows Phone 8 smartphone. On the surface, it’s not a huge deal, but it showcases Nokia’s weakness. Windows is Nokia’s only gig going forward, but Microsoft isn’t throwing the Finnish phone-maker any bones. Nokia’s stock bump from the Samsung/Apple verdict was short-lived. If Nokia hopes to lose its junk status, it will have to crawl out of that hole on its own – one smartphone at a time. That’s going to take a lot more than price cuts. Things are arguably worse now for Nokia than they were on June 15th, when it became a DeathWatch victim.

HP

HP remains committed to the PC and server markets, even as those businesses wither on the vine. Still, while there’s cash on hand and customers who answer the phone, there’s hope. A new tablet division looks like no more than a shot in the dark, but at least it displays a willingness to push the envelope a little. The new Envy X2 hybrid device shows some interest in redefining “PC” as well. But as promising as these developments seem, baby steps aren’t going to turn around decades-old thinking at a company the size of HP, which recently suffered a disastrous earnings report that included an $8 billion writedown of its Enterprise Services business and the biggest quarterly loss in the company’s history. And regardless of new products, Whitman will have to win back investors' hearts and minds, after they met her last announcement of weak earnings with a 13% drop in value.

Apple has requested an injunction against the sale of eight Samsung devices. The move follows its patent-infringement victory over Samsung.

According to The Verge, Apple is requesting injunctions against sales of eight Samsung devices (see the court document here). Apple is going after some of Samsung’s most popular 2011 products.

Apple’s list includes iterations of the Galaxy S II, which was widely considered to be the best Android smartphone of 2011. The S II came in a variety of flavors as Samsung tweaked the device for U.S. mobile carriers.

According to The Verge, the list includes:

Galaxy S 4G

Galaxy S2 (AT&T)

Galaxy S2 (Skyrocket)

Galaxy S2 (T-Mobile)

Galaxy S2 Epic 4G

Galaxy S Showcase

Droid Charge

Galaxy Prevail

In the just-concluded patent-infringement suit, 25 Samsung devices were found to infringe on one or more of Apple's patents. Many of those devices are older or generate marginal sales (such as the original Galaxy S, Fascinate and Captivate). But the S II is a popular phone globally. In June, Samsung said it had sold 28 million S IIs worldwide. (Note: Samsung says sales, but the figure actually reflects units shipped.) Overall, 50 million Galaxy S and S II units had been sold as of June.

Apple's patent suit did not challenge Samsung products launched after the case was filed, such as the Galaxy S III and Galaxy Note. Apple’s main target with the injunction is the profitable long tail of Samsung’s mobile-product line. Every Galaxy S II sold is one fewer iPhone sold.

The fallout from Apple’s win over Samsung in a California patent court has been an extension of the rhetoric that took place inside the courtroom. Apple, smug after its billion-dollar verdict, claims the whole case was about values. Samsung still holds to the line that Apple’s design patents are frivolous and that the real loser is the consumer. Neither side is wrong.

As much as Apple and Samsung want everybody to believe that one is on the side of good while the other is completely evil, neither story holds up. It is possible to not be right without being precisely wrong.

Apple’s “Values”

Apple’s CEO Tim Cook called the victory a triumph of values.

“For us this lawsuit has always been about something much more important than patents or money. It’s about values. We value originality and innovation and pour our lives into making the best products on earth. And we do this to delight our customers, not for competitors to flagrantly copy,” Cook wrote in a memo leaked to 9to5 Mac.

Cook is not wrong, but he is not entirely right, either. Apple is right to defend itself against copying. But it is not as if Apple was defending the invention of fire. It was defending design patents based on the size and shape of the iPad and iPhone, as well as utility patents used in iOS.

None of the patents that Apple fought tooth and nail over in the name of values are particularly innovative.

The utility patents may cover some functions specific to iOS, but the Android manufacturers have already figured out a way around most of them, because what Apple patented was not the function itself so much as how the function is performed. Companies like HTC, Samsung and Motorola have been working to circumvent those patents through design and functional updates to their devices, and Apple will have little grounds in court to sue the Android manufacturers over the same functions again.

The patents themselves are just weapons against Samsung and other Android manufacturers.

The damages award is also of no real concern to Apple. This is one of the most valuable companies in the history of the world, sitting on $100 billion in liquid assets. Still, taking a billion dollars from Samsung was a reward in itself.

Cook’s comments about values are public relations. Most journalists, analysts and tech enthusiasts understand Apple’s motivations beneath the surface: its two biggest were to set a precedent for its upcoming patent cases and to slow the Android ecosystem's growth. The more Apple can hobble Android, the more iPhones and iPads it can sell. With Apple’s extraordinarily high margins, there is a lot of money on the table.

The effect on Samsung is marginal in the short term. This case was mostly about Samsung’s long product tail - devices that had been on the market a year or more, running software that has since been overhauled to avoid these specific Apple patents.

Samsung will likely appeal the judgment, mostly to avoid the precedent that the case sets. This is not the last time these two companies will meet in court over patents. Apple’s win makes it more likely that its similar patent cases against Samsung and other Android manufacturers will result in injunctions against Android devices. Samsung needs to negate that precedent.

Samsung: “Loss for the American Consumer”

Samsung’s official statement after the verdict framed it as exactly that kind of loss: “Today’s verdict should not be viewed as a win for Apple, but as a loss for the American consumer. It will lead to fewer choices, less innovation, and potentially higher prices. It is unfortunate that patent law can be manipulated to give one company a monopoly over rectangles with rounded corners, or technology that is being improved every day by Samsung and other companies.”

It is difficult to believe both companies. Samsung says that Apple’s win is bad for innovation. Apple says it is good for innovation. Again, neither company is right, but neither is wrong.

When Apple speaks of innovation, it is not talking about the broad scope of technology innovation. Apple is talking about its own innovation - innovation that has been called into question many times over the years. Apple is seen as a company that makes technologies better and sexier, and prices its devices higher than the competition to pad its margins.

Samsung is essentially saying that Apple’s designs and its legal claims are frivolous. It is implying that if Apple can improve on technologies without being found guilty of copying, then so can Samsung.

Samsung certainly has a high opinion of itself. By calling the verdict “a loss for the American consumer” it is saying that its products are so good that the U.S. consumer will suffer for the loss. It is the same tactic that Samsung has used in most of its court cases against Apple across the world. “This bully is bad for us, bad for you, bad for everybody.”

Samsung itself is a bit of a bully. It has the manufacturing might to flood the mobile market with so many devices at so many price points that it is squeezing not just Apple, but the other Android manufacturers. Motorola’s market presence is almost non-existent at this point and HTC is flailing. Samsung, not Apple, is the biggest culprit behind Nokia’s fall from grace. Samsung’s shotgun strategy works, and it cannot easily be replicated by any other Android manufacturer.

Samsung’s own rhetoric is as hypocritical as Apple’s. While Samsung claims it did not copy Apple in the slightest way (and it has a case for that, despite the jury’s verdict), there is no question that some of Samsung’s smartphones do look very similar to the iPhone.

The Winner? Nobody

In the end, the outcome was predictable. Did anyone really believe Samsung could win a case before a California jury in the shadow of Cupertino? Samsung never stood a chance.

The battle of rhetoric does neither company justice. Apple comes off with a morality play that is almost laughable. Samsung sounds like a whining, arrogant twit that insists it did nothing wrong. With this decision, all Android manufacturers lose, not just Samsung. In the end, that is how the American consumer loses too.

Wi-Fi in the sky is a rare bright spot in an industry that engenders ever lower customer expectations. Five years after Gogo launched its in-flight Wi-Fi service, most passengers still don't pay for Internet in the sky, but there's every reason to believe they're beginning to see the value of staying connected en route. As Gogo disclosed in an updated IPO filing last week, its sales and installations have been growing, and it is inching toward profitability.

Gogo's in-flight Wi-Fi service was installed on 1,565 planes at the end of June and available to about 65.5 million passengers in the June quarter. Both of those stats have grown by about one-third in the past year.

That growth has allowed Gogo's revenue and number of Wi-Fi sessions to increase, even as the percentage of passengers who pay for service (known as the take rate) and the amount of money people spend on service have remained flattish.

Some 5.3% of potential Gogo passengers connected during the June quarter. That's up significantly from a 4% take rate a year ago, but down a bit from 5.6% in the March quarter and 5.5% in the December quarter.

Take rates vary by airline and flight, of course, and air travel is a seasonal business. In April, Virgin America's CEO boasted of Gogo usage rates in the low to mid 20% range, with its San Francisco-to-Boston route regularly passing 50%. But not every airline is Virgin America, and not every flight is so full of techies.

Still, Gogo's filing suggests that the company powered about 3.5 million Wi-Fi sessions in the June quarter, up about 75% from last year and up more than 10% from the March quarter.
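A quick back-of-the-envelope check shows where that session figure comes from: it is roughly the disclosed passenger count multiplied by the take rate. A minimal sketch in Python, using the 65.5 million and 5.3% figures from the filing (the rounding is ours):

```python
# Rough sanity check of Gogo's June-quarter session count from its own numbers.
passengers = 65_500_000   # passengers on Gogo-equipped flights, June quarter
take_rate = 0.053         # share of those passengers who bought a session

sessions = passengers * take_rate
print(f"Estimated Wi-Fi sessions: {sessions / 1e6:.1f} million")
# Prints roughly 3.5 million, in line with the figure implied by the filing.
```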

That's pretty solid growth, and faster than Gogo's overall sales, which grew 51% year-over-year during the June quarter. Operating loss fell to $4.9 million in the June quarter from $7.1 million a year ago. And a $135 million credit line, announced in late June, will keep things moving.

The filing notes that Gogo has contracts to install the service on another 415 aircraft, mostly before the end of next year. That could provide roughly 25% more capacity. Take rate will almost certainly increase as more people bring aboard iPads and smartphones and as they become accustomed to paying for in-flight connectivity. And Gogo just started its international expansion efforts this year. (It also generates almost half its revenue from the "business aviation" market, serving private planes.)

But Gogo faces hurdles, too. Network capacity, for example: It's already frequently slow to connect, and Gogo warns that it must upgrade many planes to new technology to meet capacity demands. Satellite-based competitors could win deals, too. And then there's the overall instability of the airline industry: American Airlines, whose customers generated 23% of Gogo's commercial-aviation revenue in the first half of the year, filed for bankruptcy and may shed planes or lose control to another airline.

Still, for many passengers, Wi-Fi has become a crucial part of flying. It looks like Gogo has plenty of runway left for growth.

Research In Motion has nearly finished developing its BlackBerry 10 operating system. New smartphones from the Canadian manufacturer are expected at the beginning of 2013, but they may not be the only BlackBerry 10 devices. According to reports, RIM is open to licensing BlackBerry 10 to other manufacturers. Such a move would have been unthinkable only two years ago, but now it seems to be a real possibility. But would any other manufacturer go along with it?

Sizing up BlackBerry in the Smartphone Ecosystem

To understand what companies might license BlackBerry 10, it is important to understand the dynamics of the smartphone ecosystem. Specifically, where does the operating system that runs your smartphone come from?

Apple designs the iPhone and iPad, and the operating system that runs them - iOS - in-house. The devices are assembled at factories in China (you may have heard of Foxconn) and shipped to destinations across the world. Historically, this was the model RIM followed: for all intents and purposes, RIM alone designed the hardware and software and managed the manufacturing of BlackBerry devices.

In-house design and production used to be the standard throughout the cellphone industry. Motorola, Samsung, Nokia and Palm either made or still make their own operating systems. Yet, that approach is no longer the default. Internal production takes a wealth of resources and expertise. If a company aims for the top of the market and its OS falls flat, it can be set back several years and risk its livelihood in the process. This happened to both RIM and Nokia in recent years as they fell behind the market leaders in iOS and Android.

Google and Microsoft do not follow the internal-design-and-build model. Instead, they build the operating system (Android for Google; Windows Mobile and, more recently, Windows Phone for Microsoft) and license it to manufacturers that wish to build their own variations. Their approaches are not identical, however. Microsoft charges a fee for a Windows Phone license, while Google provides Android to manufacturers for free (with stipulations if Google services are used).

This strategy explains why Android and Windows Phone devices are available from a variety of manufacturers including LG, Sony and HTC. Microsoft has employed the same strategy in the PC market for decades.

Research In Motion cannot give away BlackBerry 10 in the way that Google does Android. That avenue would essentially lead to the end of the company. It will have to employ the same strategy that Microsoft does with Windows Phone and charge manufacturers per license.

Possible BlackBerry Partners

There is one obvious company RIM could turn to for manufacturing BlackBerry 10 devices: Samsung.

The South Korean manufacturer is the perfect candidate to build BlackBerry devices. It is the world’s largest smartphone maker and does not seem to discriminate in what it builds. Essentially, Samsung will try just about anything to see if it catches fire. Its primary revenue driver is Android and its Galaxy series smartphones. But Samsung also builds devices for Microsoft’s Windows Phone (though they do not sell particularly well) and builds its own low-end operating system called Bada. Samsung is also linked to Tizen, the bastard child of the OS that was once called MeeGo. The company will likely produce a Tizen device once that platform is ready for the market.

Samsung is such an obvious choice to build BlackBerry devices that, if for some reason it declines, RIM may be in serious trouble. Few other manufacturers are poised to take on new operating systems right now. Samsung and Apple have squeezed the smartphone and tablet market so tightly (between the two, they take up about 90% of mobile hardware revenues) that almost all other manufacturers are just trying to keep their heads above water.

HTC is having a down year despite critical success with its Android-based One series devices. The company does not have its own operating system, and it has made Windows Phone devices in the past. As Samsung’s little sister in the smartphone ecosystem, HTC is the next logical choice, but only if the company can put together the resources for a new product launch.

The same applies to other second- and third-tier device manufacturers. Sony Ericsson has never been able to make a serious dent in the market with Android, and LG is performing much better in the waning feature-phone market than with any of its smartphones or tablets. Chinese manufacturers like ZTE and Huawei might be interested in BlackBerry 10 if the price is right. Both companies have an expanding footprint in international markets that RIM would love to reclaim.

The problem is that all these manufacturers are doing just fine with Android. Android is free and manufacturers can do just about anything they want with it. The design of Windows Phone is inflexible in comparison and it costs manufacturers money to license. If RIM is to follow Microsoft’s Windows Phone plan, it will have trouble convincing these manufacturers to play its game.

In addition, partnering with RIM would constitute an alliance with a competitor. RIM is not like Google and Microsoft, which do not make their own devices (overlooking Google's Nexus and Microsoft's Surface). RIM will build its own BlackBerry 10 smartphones and tablets, devices that will be on store shelves next to any partner's offerings.

Reaching Beyond Smartphones and Tablets

One area of potential growth for BlackBerry 10 is in devices that aren't smartphones or tablets. BlackBerry 10 is built on QNX, an operating system the company acquired in April 2010. QNX runs many different kinds of embedded computers, such as those found in airplanes and cars. RIM will definitely be looking to non-traditional partners to license BlackBerry 10.

Looking beyond the smartphone could be RIM's best bet. At the company’s BlackBerry Jam in Orlando in May, CEO Thorsten Heins showed off a car that had BlackBerry 10 integrated into almost every aspect of its computing system. RIM could also push its new operating system into other infrastructure-based industries such as healthcare and utilities (electric and water systems, for instance).

All this boils down to one simple fact: If BlackBerry 10 fails, so does RIM. We will know by this time next year if any strategy RIM pursues pays off or if it's time to write the obituary of a once-great technology company.

What can you do with a ubiquitous metropolitan gigabit Ethernet connection? Google has recently gotten lots of attention with the metro fiber network that it is beginning to build in Kansas City. Welcome to Chattanooga, Tenn. The city has laid its fiber network just about everywhere, and is beginning to reap the rewards of ultra-fast Internet service. What lessons can Google and others learn from the experience?

Chattanooga's gigabit fiber network wasn't installed in the name of civic progress, or as a calling card to attract IT-related entrepreneurs, or to improve city services or to encourage telecommuting - all things that are happening as a result of the network.

Instead, it began as a project from the municipal electric utility, EPB, to improve power delivery to its customers. Chattanooga suffers many violent storms that can knock out its power grid for hours or days. The utility wanted to increase the reliability of its operations through having a smarter grid that could minimize these outages.

As part of the effort, EPB automated 1,200 power switches and added technology capable of anticipating potential transformer overloads by measuring power flows every 15 minutes using the fiber network. This smarter grid has cut the number of power outages by more than 40%. The utility says it has also saved money.
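EPB hasn't published the details of that analysis here, so the following is only a hedged sketch of the idea, with made-up names and thresholds: given a transformer's 15-minute load readings, extrapolate the recent trend and flag units that look headed past their rating.

```python
# Illustrative only - not EPB's actual algorithm. Flags a transformer whose
# recent 15-minute load readings are trending toward its rated capacity.
from typing import List

def projected_overload(readings_kw: List[float], rating_kw: float,
                       lookahead_intervals: int = 4) -> bool:
    """Extrapolate the recent linear trend and compare it with the rating."""
    if len(readings_kw) < 2:
        return False
    recent = readings_kw[-4:]                        # last few 15-minute samples
    slope = (recent[-1] - recent[0]) / (len(recent) - 1)
    projected = recent[-1] + slope * lookahead_intervals
    return projected >= rating_kw

# A transformer rated at 500 kW, with load climbing about 20 kW per interval:
print(projected_overload([400, 420, 445, 465], rating_kw=500))  # True
```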

But the same infrastructure that provides the control network for the utility can also be used to deliver Internet connectivity, and once the fiber network was in place, the utility became a fast Internet service provider.

Take Me To The (Digital) River

Chattanooga Mayor Ron Littlefield says, "Think of what we did as putting in place a digital equivalent of the Tennessee River." That's an apt analogy for the city.

Chattanooga has always leveraged the Tennessee to its advantage. Back in the early 1900s, for example, it used its place on the Tennessee River to attract the first bottling plant for Coca-Cola as well as smokestack industries. The fiber network is just a different kind of river.

The utility's smart-grid efforts have made the area more of an employment magnet and given it new ways to attract talent. The city's IT department, for example, has filled its past 10 jobs with out-of-towners, a post-fiber development. Major employers are encouraging telecommuting.

"We now have a very balanced economy between industrial and clean jobs," said Littlefield. "We have something no one else in North America has, and something that will sustain our future development."

New Uses for Fast Internet

The city has continued to build on its gigabit fiber network. For example, it put together a series of initiatives to monitor and control downtown areas. At one downtown park, the police can adjust the lighting to discourage flash mobs from gathering, as well as scan license plates on cars that are parked in the lot. This helps increase the perception of safety, not to mention discourage potential criminals. "People now know not to park in the park if they have a stolen car," says the mayor.

Speaking of street lighting, city engineers are in the process of replacing the 28,000 traditional halogen lights with LED lights and sensors that adjust their output based on ambient light. And traffic signals can be controlled by the police or first responders to move emergency vehicles through the city.

In the works is the installation of more than 400 wireless road sensors. In the past, the city needed to send out construction crews to dig up the road and install the wire loops seen in cities around the world. The newer battery-powered sensors are the size of hockey pucks and take just minutes to bury.

All told, the city has built more than 50 apps to use the fiber connections, and more are on the way. "Fiber makes bring-your-own-device strategies possible," says Mark Keil, the city's CIO. "We will have three times more devices on our network next year than before we had the fiber, and we have made it easier to monitor and manage them, too."

Gigabit Takeaways

Here are five lessons to be learned from the gigabit experience of Chattanooga:

Don't build a fiber network just for Internet connections. What made Chattanooga's gigabit fiber network work was the backing of its electric utility. Once this physical plant was in place, the utility was able to offer gigabit service for $350 a month to residential and business customers.

Symmetrical service is key. Gigabit speeds both upstream and downstream matter for applications built around user-generated content, which need uploads to be as fast as downloads. A local group of radiologists built their own app so that doctors could view digitized scans whenever and wherever. That wouldn't have been possible without a symmetrical network.

Focus on both big and small employers. The region was able to attract a new Volkswagen auto assembly plant and an Amazon.com distribution warehouse, but these success stories were matched with smaller firms. The mayor is effusive in his support of the region's various entrepreneurial efforts to bring in smart, tech-savvy people. City CIO Keil mentions that the city asked for some programming help from several Google developers from Atlanta. By the time the project was finished, at least one of them was packing up to move to Chattanooga because of the gigabit network. And this summer several private companies put together the city's first Demo Day to feature eight tech companies that agreed to move to the city in exchange for a chance to win a $100,000 grant. One of them moved from Ireland to participate in the program. Banyan, the ultimate winner, provides integrated productivity tools.

Find or create a university-based commercialization partner. Chattanooga was fortunate in having a branch of the University of Tennessee, and was able to establish a supercomputing center and a non-profit commercialization entity to help license the technologies developed by academia. Several of the resulting apps are being used in disaster management and large-scale urban planning simulations, for example.

Finally, don't rule out unexpected benefits. "We got into robotics and energy development when they were popular many years ago. But our fiber network is like having the first city that discovered fire," says Littlefield. The city is just beginning to see lots of new apps on its network and is still discovering new uses for the universal connectivity.

The 10th annual student software contest, Microsoft's Imagine Cup, is wrapping up in Sydney, Australia, and there are some important lessons that all entrepreneurs, young and old, can glean from the process. The contest challenges hundreds of thousands of people - mostly college students - from around the world to come up with a new idea, code it using various Microsoft products, and pitch it in a series of judging rounds that culminates with winners in several categories, including software and game design.

I was fortunate to be selected as one of this year's judges for the contest. I got to see more than a dozen of the teams as part of the process and meet dozens more students during my stay in Sydney. The teams that advanced from round to round all had several things in common:

Basic English communication skills. The contest was conducted in English. Given that many of the contestants didn't speak English as their native language, this presented a challenge, and some of the teams relied on their best English speakers to be presenters and translate the questions from the judges. If the developer wasn't fluent in English, some things got lost in the translation. If founders have an accent or aren't comfortable with speaking in front of an audience, they should make sure to get lots of practice.

Great presentation skills. Each team had just minutes to present its slides and demonstrate its solution. The better teams structured their presentation to match the judging requirements and also rehearsed their speeches to make sure they could deliver them in the allotted time. On the other end of the spectrum, some presenters sat in their chairs when addressing the judges. Entrepreneurs who aren't polished presenters should go to their local Toastmasters branch or take a course in public speaking at a community college.

They got to the point, quickly. Some of the losing teams took too long to set up their solution, focusing on matters that weren't germane to the judging criteria. Founders need to be ruthless when trimming slide decks to make them as crisp as possible. When you are pitching an investor or potential partner, make sure you hone your own presentations so that they are succinct and on-point. Think Twitter: If you can't formulate your message in less than 140 characters, work on another message.

Solid video production skills. Video is very compelling and should be a part of any startup's marketing effort. But a bad video is worse than no video at all. The first judging round had each team submit a short video that explained their solution. Some of the videos were very slick - almost too slick: They didn't really explain the actual solution and focused on pretty images and annoying background music that drowned out the narration. Don't get so enamored with video production that you lose sight of what you are trying to accomplish.

They understood how to put together code. Some of the teams put an architecture diagram in their slide decks that didn't make any sense whatsoever. Others took the time to show their code when questioned by the judges, and prove that their demos weren't all smoke and mirrors. Don't be afraid to dive in if your audience wants to know the bits and the bytes.

They knew what business they were in. One team that didn't make it into the finals couldn't decide what business they were in: Were they going to sell their solution directly or use a reseller? Another team didn't understand what a balance sheet was or how they were going to make money. I have seen lots of entrepreneurs who make these same mistakes. Make sure to clearly state your financial assumptions and what you are asking from your audience.

They had fewer moving parts. Many of the teams put together some very elaborate demonstration systems involving a laptop PC, a Kinect motion sensor, a mobile phone and code running in the cloud, which may be intellectually interesting but also quite fragile if something breaks or if Internet access goes wonky. Resist the urge to add nonessential pieces and follow Thoreau's advice to simplify your solution.

Watching all these brainy kids was a real treat and a great learning experience in itself. Here is a video of the Singapore team that is working on a way to help people with dementia:

Disclosure: As a judge, my travel expenses to the event were covered by Microsoft.

An unlikely place to look for the latest trend in the Internet of Things is inside the sewers of South Bend, Indiana. For the past six years, South Bend's city managers have been working with a group of consultants from IBM, the nearby University of Notre Dame and others to instrument the city's sewers as a means of delivering better service and saving hundreds of millions of dollars in capital improvements.

South Bend was facing more than $600,000 in potential government Superfund fines to bring its system up to par and had also experienced a series of regular overflows. Rather than build expensive new capacity, the city embarked several years ago on a project to do a better job monitoring its sewer conditions in both dry and wet weather. To do this, it needed to invent cheaper and better sensor technology that it could literally insert into the pipes and connect to a real-time monitoring system.

"We needed sensors which were more economical and higher-definition than our traditional systems," said Gary Gilot, a member of South Bend's Board of Public Works (BPW). The city eventually built a monitoring system of more than 100 sensors, conceptualized by city engineers and developed in Notre Dame's engineering school, and deployed them throughout South Bend's 500 miles of sewers.

Like many sewer districts, the South Bend BPW had been using 50-year-old mechanical valve technology to operate the system and direct water flows through the city's pipes. The new technology (pictured above) lets city managers understand demand and actual usage and flows in real time. "At a glance, we can see in real time what is happening across our entire system," Gilot said. "We are also able to examine how our system behaved in previous years when we had an inch of rain, so we can be better prepared now."

Spend $6 Million, Save $120 Million

The annual sewer operating budget is about $30 million; South Bend invested about $6 million in the monitoring project and estimates it has saved $120 million in infrastructure improvements. Not a bad return on investment! The city is now able to do a better job predicting and responding to basement backups in low-lying areas; using its new residential basement “heat map,” South Bend can direct utility cleaning crews to areas where they are most likely to be needed. And through the new monitoring capabilities, the city has also been able to reduce the flow of water through its treatment plants by up to 10 million gallons per day.

The city didn't just decide to instrument its sewers overnight. "We had to convince the mayor, and it took some time," Gilot said. "We first set up our sensors in the lab and then next tried in the lakes near Notre Dame." When these demonstration projects were successful, the sewer department set up a trial at one place in the system that had some overflow problems.

The trials allowed South Bend to work out problems before a full deployment. For instance, placing the sensors inside pipes buried in the ground meant that it was hard to get radio signals out of the sewers. "We had to use our manhole covers as transmitters so we could get the sensor data out of our pipes," Gilot said. The city also needed to work on parsing all of the sensor data and creating visualizations to make the information useful and actionable.

The South Bend project represents the next stage of the Internet of Things - individual sewer pipe valves that can be tracked and controlled, with the added layer of data visualization to make it manageable and actionable. The city government is now looking beyond its sewers and seeing what else it can instrument to save money and deliver better services to its residents. That has certainly caught the eye of many other sewer and water districts facing similar circumstances.

Since we wrote about Palo Alto Networks' applications study in January, the company has continued to track corporate networking trends among its customers, and today it is releasing a new data visualization tool as well. Somewhat surprisingly, it is seeing very large jumps in the use of peer-to-peer file sharing and video streaming services at the enterprise level.

Palo Alto conducted network traffic assessments in 2,036 organizations worldwide between November 2011 and May 2012, and found that streaming video bandwidth consumption increased more than 300%. Since its last report covering the time period from April to November 2011, "total bandwidth consumed by streaming video quadrupled to 13% of all bandwidth on enterprise networks and now represents a more significant infrastructure challenge to organizations." And this jump doesn't even take into account this summer's Olympics, which may push the numbers for streaming video even higher.

The issue is pervasive, too. Palo Alto found streaming video use across 97% of its customers, spanning both Netflix and P2P video streaming services. In the Americas, YouTube, Netflix and generic HTTP video were the top three consumers of bandwidth. Almost half of this traffic was found on nonstandard ports (other than ports 80 and 443), making it potentially more difficult to block. This is probably not because users understand how to hop among ports, but because more sophisticated streaming software can find unblocked pathways to the Internet, as Skype and other file-sharing programs currently do. That means monitoring and blocking tools will have to improve to keep track of this usage.
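To make the "nonstandard ports" point concrete, here is a hedged sketch using hypothetical flow records (this is not Palo Alto's classifier): it simply tallies how many bytes of video traffic travel over ports other than 80 and 443.

```python
# Hypothetical flow records: (application, destination port, bytes observed).
flows = [
    ("youtube", 443, 1_200_000),
    ("netflix", 80, 900_000),
    ("p2p-video", 6881, 2_500_000),
    ("http-video", 8080, 400_000),
]

STANDARD_PORTS = {80, 443}  # the ports a naive filter would watch

nonstandard_bytes = sum(b for _, port, b in flows if port not in STANDARD_PORTS)
total_bytes = sum(b for _, _, b in flows)
print(f"Share of video bytes on nonstandard ports: {nonstandard_bytes / total_bytes:.0%}")
```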

Palo Alto also found a similar, major jump in P2P file-sharing usage: "P2P filesharing bandwidth consumption jumped 700% to represent 14% of overall bandwidth observed, growing more than any other application category."

At least one browser-based file-sharing application was detected on 89% of the participating organizations’ networks, and an average of 13 different file-sharing apps were found on each customer's network. What is even more sobering is that the takedown of popular file-sharing site MegaUpload in January 2012 didn't really put a dent in this kind of traffic. Palo Alto found that Putlocker, Rapidshare and Fileserve each benefited from MegaUpload's demise, with big jumps in usage after the takedown.

Palo Alto's new visualization tool lets you navigate to various screens showing "radar" plots of different protocols and data types from the data the company has collected. The screenshot above shows the rise in P2P traffic it has observed.

What does this mean for enterprise network admins? They will have to get better and more efficient at running their networks and squeezing as much bandwidth as they can from existing connections. Video streaming and peer-to-peer sharing aren't going away, and the use of both is only going to grow. And admins who don't already employ tools like Palo Alto's probably need to start gaining some experience with them very soon.

When the 10-minute video of middle-school students cruelly taunting an elderly school bus monitor went viral, people responded with outpourings of anger and kindness. But what is the Internet’s role in this incident?

Basically, the Facebook video entitled “Making The Bus Monitor Cry” (eventually posted on YouTube as well) took an all-too-common incident and made it an exceedingly public issue. The four boys and their families have received threatening messages, and a fundraising effort for victim Karen Klein neared $500,000.

A Perfect Storm of Bullying

The incident itself, though, has nothing to do with the Net. John Grohol, a doctor of psychology and the founder and CEO of PsychCentral, says a number of factors were likely behind the opportunistic bullying.

First, Klein was alone and vulnerable. And many children have no respect for seniors, Grohol says. “Whether it’s because they were never taught it, or believe the older people have nothing of interest to offer them or relevance to their lives, it’s not clear.”

Second, kids are increasingly quick to exploit an adult with no authority over them, Grohol says. “Just plopping an adult into a moving room of 60 kids isn’t going to have the same effect it might have had 30 or 40 years ago.”

Parental control is also an issue, as parents increasingly side with their children rather than with schools in disputes. As of this writing, none of the boys had been taken by their parents to Klein to apologize in person.

Why Did the Video Get Posted Online?

But why would these kids post a video of the incident online? A mob mentality can lead bullies to believe that they can’t be fingered individually, Grohol explains - no matter how many people know of their actions.

The incident took place in Greece, N.Y., and the town’s Central School District is considering the proper punishment for the boys. Many people on the Internet believe it should be harsh. But no matter the repercussions here, incidents like this - and their dissemination on the Net - are merely symptoms of larger changes in technology and society.

Those four kids may learn their lesson, and they aren’t likely to put their misbehavior online again anytime soon. But you can bet plenty of other kids won’t get the message.

A study of social media usage by the top MBA programs in the United States shows that while all of them use Facebook for recruiting and marketing their programs, most don't do any ROI assessment of the social media tools they employ to bring in prospective students. Nor do most tap what may be the most effective channel: just a few schools offer downloadable mobile apps, even though these are rated among the most effective tools studied.

In phone interviews, study author Barnes and her researchers spoke to 70 of the top B-school directors or deans in charge of their programs. Missing were some of the top 10 schools, but the sample was still statistically valid across the more than 400 MBA programs in the U.S. Here are some of the interesting results:

All 70 schools are using Facebook, and most are also using Twitter and LinkedIn to market their programs. Three-quarters also maintain a blog. More than half of the schools use five or more social media tools. The Thunderbird School of Global Management in Glendale, Arizona, is a real social media butterfly: it uses 14 different social media tools!

While only 16% of schools are using downloadable mobile apps, these are rated among the most effective tools studied. You can see the schools' judgments on the effectiveness of each social network in the chart above. Interestingly, LinkedIn - which might be thought to have the closest ties to career aspirations of any of the social media tools - isn't near the top.

The majority (65%) of schools don't track the number of prospective applicants who found out about their programs through social media connections.

And perhaps most surprisingly, 94% report recruitment is the No. 1 goal of their social media efforts, yet the top measures of effectiveness do not include tracking prospective applicants. Instead, they are looking at the numbers of fans or followers, or other metrics such as page views or the number of comments.

Clearly, social media is in a state of transition for business schools. Many said they would increase their involvement or expand to additional social networks in the coming year, with a third planning to buy additional software and nearly as many investing in new training or new hires. Still, as the study states: "Being able to measure whether these prospects actually apply to the program is something schools may be looking to do, but have not yet mastered. Without this piece of information it is difficult to really assess the effectiveness of the social media plan and to know where future investments should be made."

Google has been busy adding features to its BigQuery service in the six weeks since it became available. There are new visualization dashboards, the ability to process more concurrent queries and additional commands. Clearly, Google is trying to make this a go-to service for ad hoc data processing.

Let's look at the more notable new features. First is the ability to bring in up to 20 different data sources and run queries on them concurrently, as long as you're crunching no more than 200GB of data in a single pass. This enables a lot more analysis, and two vendors (QlikView and Bime) have already stepped up to provide more visualizations. Take a look at QlikView's infographic first, which interactively examines American birth statistics from more than 100 million public records dating from 1969 onward. You can click on various query parameters, such as viewing all California births or the ratio of married to unmarried mothers by age, and the display updates in seconds. You could use this to answer questions such as "What's the average age of a mother in New York vs. in California?"
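If you would rather poke at the underlying numbers yourself instead of going through QlikView's interface, the same natality sample is queryable directly in BigQuery. Here is a minimal sketch using the google-cloud-bigquery Python client (which postdates this article) against the public bigquery-public-data.samples.natality table; it assumes you have a Google Cloud project and credentials already configured.

```python
# Sketch: average mother's age in New York vs. California, from BigQuery's
# public natality sample. Requires google-cloud-bigquery and a GCP project.
from google.cloud import bigquery

client = bigquery.Client()  # picks up your default project and credentials

sql = """
    SELECT state, AVG(mother_age) AS avg_mother_age
    FROM `bigquery-public-data.samples.natality`
    WHERE state IN ('NY', 'CA')
    GROUP BY state
"""

for row in client.query(sql).result():
    print(f"{row.state}: average mother's age {row.avg_mother_age:.1f}")
```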

Bime is the other vendor working with Google, and it has built a slick UI on top of the BigQuery platform that lets users slice and dice 432 million rows of business data along with the birth dataset. No hand-coded SQL is required, and it is also simple to explore the relationships in these huge data collections.

If your enterprise is shopping around for an internal social media provider, chances are that you have thought about putting together your own request for proposals (RFP). A number of organizations have put together templates and suggestions over the years, and the latest one comes in the form of a Slideshare document from Sprinklr that outlines six things any social media RFP should include.

The oldest and perhaps most linked-to template, the Social Media RFP Template and Bill of Rights, comes from Maggie Fox's Social Media Group. My one-time podcasting partner and current social media guru Paul Gillin had this to say about it: "The Bill of Rights makes for interesting reading. It provides guidance for marketers to consider in publishing RFPs that are fair to the bidding agencies. I get the sense that this guidance is born of some painful experience, which makes its teachings all the more relevant."

Sprinklr provides social media management tools for enterprises, so it isn't a completely disinterested party in the RFP process. Still, the company's document - which isn't a template in the sense of Fox's, but more a collection of requirements - has a load of great suggestions for questions to ask prospective social media providers, including:

How can the tool manage inbound and outbound communications on all of the primary social platforms from a single place?

How does the tool accelerate response times with automated, customizable and flexible rules, filters, actions and alerts?

How can the tool be used to manage a distributed or global staff, and to work across departments?

How does the tool integrate with existing analytics tools?

How does the tool present dashboards and other metrics?

RFPs are most useful if you are willing to take the time to assemble the right document and to be as specific as you can about your enterprise's needs and requirements. Part of the problem here, though, is that because social media management is still new to many companies, you don't necessarily know what you don't know. In those cases, it may be prudent to try out a few tools first to see what they measure and how they do it before going into a more formal RFP process.

It's been a year since Jobs' last 'Stevenote'. And the Apple team is running smooth and strong.

It's going to be a few years before we really see how Tim Cook does things his way at Apple. But what the company showed off Monday at its annual Worldwide Developers Conference seemed as impressive and consistent as it ever was under Jobs.

No, Apple didn't unveil any major surprises. But that's not fair to expect: Those only happen every few years.

What Apple did do at WWDC was show that it's at the top of its game in every aspect: Software, hardware, design, and efficiency. And that's a great sign.

Take the new MacBook Pro, for example. It's not a once-per-decade device like the first iPhone. But it shows that Apple is pushing the limits of display resolution, hardware design, value and quality like no other company in the computer industry. While the PC industry is still copying last year's MacBooks, Apple is pushing ahead, bit by bit, with a new high-end notebook that costs only a few hundred dollars more than its entry-level version.

Or iOS 6. There's no holy-crap mega-feature this year. But expecting something like that is fundamentally misunderstanding how Apple works. Instead, it's a solid update across the board, with highlights including Apple taking maps into its own hands and dipping its toes into the mobile payments field with the forthcoming Passbook app. And it's set to launch this fall, a year after the last version.

Oh, and another new version of OS X for Mac - just a year after the last one. Remember when Apple had to delay versions of Mac OS X because it couldn't make them and iOS at the same time? As John Gruber describes at Daring Fireball, "Apple has — dare I say finally — become a company that can walk and chew gum at the same time." That's crucial, because things are only going to get busier and more challenging at Apple, especially as it plans to increase its efforts in TV and entertainment.

The skeptic's take might be that most of what Apple announced today was probably already planned while Steve Jobs was running Apple, and that the company has a few years left before it really has to start thinking on its own. But that's not really true or fair. Under Tim Cook, Apple has already patched up one shaky but important relationship: Facebook. And it's shipping, delivering quality products on time. You can't really plan that in advance.

Apple now has to continue to execute, and will probably have to work harder than ever. Of course, no one can predict the future, or how the company will change over time. But at least based on today's performance at WWDC, things are looking good.

The ownCloud project is adding features fast and furiously. The open-source file synchronization and sharing project announced the Milestone 4 release earlier this week, taking ownCloud in an interesting direction for corporate users. Forget Dropbox killer - ownCloud could be something even better, someday.

We all know that where the data is, the money is. What ownCloud is doing, then, is sort of surprising. The project (and the company behind it) is all about helping users and companies keep control of their data. That means giving up control of the software, and hoping that money comes from services and support.

Understanding ownCloud

Like Dropbox and others, ownCloud has a client piece that synchronizes data from your desktop to a server. The big difference here is that ownCloud also provides a server that's free software (under the Affero GPL), and ownCloud isn't in the business of storing user data at all.

Instead, it's up to third-party providers to offer hosting, or for companies to provide hosting for their employees.

The project provides a server and clients for Windows, Mac OS X, Linux, Android and (eventually) iOS. You can also access ownCloud via the Web to get to files and use its collaboration features.
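Because ownCloud serves files over standard WebDAV, scripted access doesn't need a dedicated SDK. Here is a minimal sketch with Python's requests library; the server URL and credentials are placeholders for your own installation.

```python
# Upload a file to an ownCloud server over WebDAV, then fetch it back (sketch).
import requests

BASE = "https://owncloud.example.com/remote.php/webdav"  # placeholder server
AUTH = ("alice", "secret")                               # placeholder account

# Push a local file into the user's root folder.
with open("notes.txt", "rb") as f:
    resp = requests.put(f"{BASE}/notes.txt", data=f, auth=AUTH)
resp.raise_for_status()

# Download it again to confirm the round trip.
resp = requests.get(f"{BASE}/notes.txt", auth=AUTH)
print(resp.status_code, len(resp.content), "bytes")
```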

What's New in Milestone 4?

The project is growing by leaps and bounds. The fourth milestone release includes versioning, encryption and drag-and-drop uploading from the Web client. Versioning and encryption are a big deal for business users, and something that the competition has had for a while.

The v4 release also includes useful collaboration features. ownCloud now has a tasks application, and this release also improves its calendaring features. For individuals, the release includes improvements to the gallery features, so users can not only sync photos - they can also create a Web-based gallery via ownCloud.

Perhaps most importantly, this release includes publicly defined APIs - stabilizing the server side should make it much easier for third-party developers to create applications against ownCloud. Now the company just needs a compelling developer program.

Finally, the Milestone 4 release offers migration and backup features so organizations that are deploying ownCloud can develop an effective strategy for their users' backups.

Not Quite There Yet

The ownCloud folks are making impressive progress, but there are still a few rough edges around the project. If you ask the ownCloud folks, they'll say they're not a Dropbox competitor. But Dropbox is still the gold standard for easy file sharing and syncing.

The lack of a LAN sync option, which Dropbox has had for years, is a problem. The ownCloud clients are also a bit primitive compared to Dropbox and not entirely stable. In our testing of the ownCloud client on Linux, it repeatedly shut down with a segfault.

The opportunity is large, and ownCloud is something the market really needs - an open-source set of tools that allow users and companies to keep full control of their data and the ability to modify and extend the tools as needed. The question now is whether the ownCloud team can build a sufficient community and do the necessary development to get ownCloud to the stage where it's ready for adoption.

With Facebook's lackluster first day of trading on Friday, we thought we would take a moment to look at memorable tech IPOs of the past and see how they have fared over the years. While the first-day "pop" can generate news, what matters more is the longer-term performance of the stock - say, after three years of trading. The chart above shows some of these percentage gains – and losses.

One of the more memorable first-day increases was the doubling of share price for Netscape Corp. when it went public back in 1995 to raise the then-unheard-of sum of $1.6 billion. That's less than 2% of what Facebook raised on Friday. Akamai raised twice what Netscape did in 1999 and had an increase of more than 450% in its first-day share price, only to fall to Earth three years later (thanks to the tech bust) and trade at 1% of its offering price. Ouch. Even Apple had "only" a 31% increase when it raised $3.4 billion in 1980 at its public offering. Three years after its IPO, it was down 25%. (Now is another story, of course.) And PayPal, which made billions for its owners and spawned an entire ecosystem of startups, was trading flat three years after its IPO.

The biggest three-year percentage increases among popular tech stocks were Yahoo, with a 3,500% gain from its IPO, and Amazon, which rose more than 2,700%. Both benefitted from being on the right side of the tech bubble when they went public. Yet Yahoo has had its problems, as we have documented in the past. The last time its stock price broke $100 was at the end of 1999, and it hasn't been anywhere near that neighborhood since (right now, it is trading in the teens). Certainly, tech stocks are hot right now, and many – apart from Cisco and Microsoft – are close to their all-time highs, including IBM and Amazon, both trading around $200.

One interesting trend not shown in the numbers is that all of the recent tech IPOs have come to the public markets with dual classes of stock, meaning the public shares carry fewer voting rights (or, in some cases, none at all) when it comes time for the annual shareholder meeting. Google will have three classes of shares next month: one for the founders, one for the public, and one with no voting power whatsoever that it will issue for dividends, employee incentive plans and acquisitions. LinkedIn, Zynga, Groupon and Yelp all have dual-class shares. James Surowiecki, writing in the New Yorker, says this may "make the stock market less central to American capitalism." A sobering thought indeed.

So what can we learn from this trip down memory lane? Just this: The first three years of a public tech company can be very chaotic times with regard to its stock. Some of the most successful companies didn't make much money for normal shareholders. So if you are going to invest in a startup, start early, in its pre-money stage. Of course, that is pretty risky, too.

HP's Itanium debacle provides plenty of lessons for anyone willing to pay attention. For the past decade, HP has been making a valiant, if extremely misguided, attempt to support the high-end Itanium chip architecture and the HP-UX Unix implementation that runs on it. Oracle's open letter and the documents disclosed as part of the companies' legal battle show just how much HP has been keeping from customers in order to prop up the good ship Itanic in the face of disinterest even from Intel, which actually makes the Itanium chip! Things are getting ugly.

Last year, when HP filed suit against Oracle, Oracle claimed that HP had been lying to customers. According to Oracle's statement, "HP issued numerous public statements in an attempt to mislead and deceive their customers and shareholders into believing that these plans to end-of-life Itanium do not exist. But they do. Intel's plans to end-of-life Itanium will be revealed in court now that HP has filed this utterly malicious and meritless lawsuit against Oracle."

In 2009, the documents show, HP was considering buying Sun (PDF) to take over the Solaris "franchise" and deal with the fact that "HP-UX is on a death march due to inevitable Itanium trajectory." More documents from 2009 discuss the "impending end of life" of Itanium, while HP hoped to keep the "Itanium situation" as "one of our most closely guarded secrets." (PDF)

To keep Itanium afloat, HP worked out a deal with Intel to pay for the development of Itanium (PDF) and fork over money to ensure that Intel wouldn't lose money producing Itanium chips. There's nothing inherently wrong in HP paying Intel to make the Itanium, by the way. Trying to drag other vendors along for the ride, and being dishonest with customers about what was going on, is another story.

An email in March 2011 from Martin Fink, senior VP and GM of HP business critical systems (BCS) (PDF), complained that HP could not say that Intel "at no time communicated to Oracle a change in commitment to the future of the Itanium processor family." In April 2011, an email from Dong Wei to HP's Kirk Bresniker said Intel "specifically told them [Huawei] that the Itanium line is at end of life with 2 more generations to go."

Oracle may be happy to lure customers away from HP-UX and Itanium to Solaris and SPARC (or Linux and x86), but it seems it had plenty of good reasons to abandon the Itanic sooner rather than later.

Lessons Learned

Aside from the corporate drama, what does all this add up to? The short of it is that companies need to be very careful when they're committing to expensive platforms like HP-UX and Itanium.

All of HP's bluster about sticking with Itanium for the customers is belied by the fact that the company has gone to great lengths to obscure from customers the dim future for Itanium and how much HP has had to prop up the ailing platform.

Despite obvious signs to the contrary, HP has spent years pushing Itanium and trying to convince customers that the platform has a long and healthy life ahead of it. Remember that Itanium was supposed to be Intel's next generation, and there wasn't supposed to be a 64-bit line for x86 systems. Intel was forced to jump into the 64-bit race on x86 after AMD led the way and demonstrated that, yes, customers wanted to stay on x86.

Red Hat announced it would drop Itanium support in 2009. Microsoft announced the same in 2010. Intel evidently wanted to abandon Itanium back in 2007. Companies that made new or additional investments in Itanium and HP-UX after that should be rethinking their IT practices - and how much they trust what their vendors tell them.

It also, once again, demonstrates why companies should seek commodity and open source systems. Companies that have adopted HP-UX on Itanium have paid a premium for those systems, and now find themselves at a dead end. They'll get support from Oracle on current products, but will have to deal with expensive migrations (one way or another) when Oracle's support commitment ends or when they need features in later releases. Meanwhile, customers that chose Xeon-based x86 systems and commodity operating systems are ticking along just fine.

On LinkedIn's blog today is a post about the top 10 most sought-after engineering startups in Silicon Valley. And no, Facebook and Google didn't make the cut because this was a list of companies with fewer than 500 employees. (Pinterest was number 6.) To compile the list, the company looked at nearly a quarter million engineer profiles on its service and tracked where they were searching for jobs.

LinkedIn did its analysis by tracking people "visiting profiles of employees looking for common connections, checking out LinkedIn Company Pages, and following companies using the LinkedIn Company Follow button." There aren't many surprises here; these are some of the hottest, best-known new companies in the Valley. If you're trying to hire engineers for your own startup, these companies are your competition. And of course, LinkedIn is looking to hire data scientists of its own.

Email, instant messaging, forums, code forges and other collaboration tools make it possible for distributed teams to get work done - but they're not great tools for making decisions. The team behind Loomio wants to solve that with a new Web-based tool for focused, concise discussions that allow all team members to be heard.

If you've ever worked with a distributed team, you know how difficult it can be to make decisions as a group. Discussions are unstructured, rambling affairs with dozens of messages flying about and no good way to track consensus. Even worse, requests for feedback can go without comment entirely, or with only a few stakeholders raising a voice.

Agree, Disagree, Abstain, Block

A discussion in Loomio starts with a topic and a specific proposal, and members have the option of voting on the proposal. A group can define the options (the defaults are yes/no, abstain and block), and each member can add a short summary of their view. As votes are tallied, everyone can see a chart that shows how many people are in agreement, how many aren't, how many have abstained, and so on.
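
To make those mechanics concrete, here's a rough sketch of tallying the four default positions. It's purely illustrative Python - Loomio itself is a Rails app, and this isn't its code:

    from collections import Counter

    # Positions mirror the default options described above; purely illustrative.
    POSITIONS = ("agree", "disagree", "abstain", "block")

    def tally(votes):
        # votes maps a member's name to one of the allowed positions;
        # the counts are what a chart like Loomio's would summarize.
        counts = Counter({p: 0 for p in POSITIONS})
        for member, position in votes.items():
            if position not in POSITIONS:
                raise ValueError("unknown position: " + position)
            counts[position] += 1
        return counts

    votes = {"alice": "agree", "bob": "agree", "carol": "block", "dave": "abstain"}
    print(tally(votes))  # agree: 2, block: 1, abstain: 1, disagree: 0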

This sounds pretty simple, but most of today's collaboration tools don't provide a good way to focus a discussion. The key to Loomio is that it provides a central tool for discussions and (if used properly) narrows them down to decisions that are easy to vote on. Centralization is the point: it helps a lot to confine activity to one tool rather than making users look all over for information.

A lot of online teams communicate in several ways, including email, IM, IRC, over the phone and face to face. Stakeholders who prefer one medium (like email) lose out if discussions are held in IRC, or vice versa. Even worse, stakeholders may be totally unaware a decision is being made at all. If a group settles on Loomio, it can say "decisions are made here and nowhere else." If something isn't put up in Loomio (or another approved tool), the decision isn't legitimate.

Settling on a decision tool like Loomio should also help cut down on noise in other communication channels. It's popular to have discussions in email and CC everyone who might have an opinion or might need to vote on something. An active team can inspire email fatigue pretty quickly with never-ending discussions. Loomio would allow users to visit, vote and get back to work.

Actually, Loomio isn't only for distributed teams. There's no reason it couldn't be used in any organization, but it's especially appropriate for situations where team members or stakeholders are far-flung.

Can Loomio Solve the Problem?

Like any tool, Loomio will only be effective if used properly. The early design could probably do with some modification - a more obvious start and end date for votes, for example - but the foundation is solid. The Loomio team says it's already in use by some organizations. New Zealand companies and organizations like Enspiral and BuckyBox are among the first adopters - though no one seems to be providing a public instance that we can point to.

If you want to help, the group is looking for contributions from Ruby on Rails developers, as well as a little extra cash (NZ $5,000) to help the volunteer team devote more time to Loomio development. The project is sort-of open source and already on GitHub. It's "sort-of" open source because the site says it's open source, but if you look at the license text on GitHub it's basically a stump saying: "We need to add the license. GPLv2?" The pledge drive (through the Pledge Me platform) ends on May 18th. The developers have already raised more than their target, but more money might mean more time spent on development.

If adopted a bit more widely, Loomio might help take distributed teams to a new level - much like GitHub has helped with development. It is a simple concept, but bringing order to decision-making could help teams communicate better and make better decisions, no matter where they happen to be located.

You have just returned from a corporate retreat or some other business event that was well documented by several amateur photographers. Now you want to share all of these pictures amongst your co-workers. The challenge is that you want to keep them private to the participants and not plaster them all over the Internets. What to do?

Assume that your requirements are to satisfy the ultra-paranoid in the group and also find something that is dirt simple to use. You don't want to make everyone join a new social network just to see the photos; most of us have too many logins already. That leaves out most of the microblogging sites. And you don't want to have to worry that someone will click on the wrong button and inadvertently share the entire photo collection with the universe, including the press, competitors and so on.

Facebook, Instagram, Google+ and many other social-networking sites aren't very good at setting up discrete group-privacy controls, so they are out of the running for our purposes. And while there are dozens of file-sharing sites such as Box.net and Evernote, the idea is to find something that is designed around uploading and sharing images.

None of these services is perfect, but they fall into two broad categories: those that have better privacy controls and those that are easier to use.

Let's look at our requirements in more detail:

First, we want a service that can create a private space that doesn't appear on search engines and can't be discovered by unauthorized users. Photobucket and Shutterfly both do this, by setting up a special URL (Photobucket.com/groupname or Groupname.shutterfly.com) for your group. In Photobucket, for example, you have three choices for each album's privacy controls: everyone can see them, no one else can see them, or you can password protect them by invitation only. The latter is perfect for this application, and you can set up an album password so that only those folks who know the password can see and download the photos. Shutterfly has similar options with its Share Sites feature.

The problem with both Photobucket and Shutterfly is that you need to become a member to upload photos: That is fine if you have just a few shutterbugs in your group, but if everyone wants to be able to contribute images, it can become cumbersome.

Flickr offers URLs for groups, such as http://www.flickr.com/groups/groupname. But Yahoo really wants you to sign up to its service, and you will need to do so if you want to post any photos. Flickr has a guest pass option, but it is designed to work with individual photos. And Flickr users have to be careful with its autoposting and notification features to keep their photos from showing up in their Facebook Timeline or other places.

Zangzing (which we have written about previously) is easier to use, but that comes at a privacy cost. You can set up individual albums that have their own URLs, such as http://www.zangzing.com/username/albumname. But because there is no password required, anyone who knows the URL can access the entire album. And you must join the service in order to upload pictures. On the plus side, you can also email pictures to albumname@zangzing.com, and they will be automatically posted to the album.

Finally, Posterous is more of a blogging site than a photo collection, but it can be used for sharing photos as well. Indeed, if you want to mix your photos with other business content, Posterous could be a good choice and could serve as the base for a simple low-end Web presence. Groups of photos can have their own URLs, but you do need to become a member to post content. You can also email your photos and have them posted to your site, much as Zangzing does.

Recommendations: Start with Zangzing

We recommend you start with Zangzing, especially if you require the simplicity of a shareable URL and don't want to mess with having each person sign up for the service. If you need the additional security that a membership site offers, then look at Photobucket. It has more granularity for the security options than Shutterfly. Steer clear of Flickr: Its interface is somewhat long in the tooth, and it is too easy to click on the wrong button and end up sharing your entire photo collection to Facebook or Twitter. If you have more confidence in your users' abilities, you can set up private groups in Facebook or Google+.

The cloud database market continues to solidify as Google puts a price tag on its Cloud SQL offering. With actual charges to begin on June 12th, the move finally gives developers a way to see what they'll be spending on Cloud SQL, but comparing Google's offering to Amazon, Microsoft and others might still be a bit tricky.

Google's Cloud SQL is MySQL-based and is intended to be used with Google App Engine (GAE). Google's pricing structure is very simple, though not as comprehensive or as expandable as Amazon or others.

Google has two billing plans: a package plan and a per-use plan. The package plan has four tiers, each of which includes a set amount of RAM, storage and I/O per day. For instance, Google charges $1.46 per day for the D1 tier, which has 0.5GB of RAM, 1GB of storage and 850,000 I/O requests. The top package (D8) includes 4GB of RAM, 10GB of storage and 8 million I/O requests for $11.71 per day.

The same instances are available on an on-demand basis, starting at $0.10 per hour, with storage and I/O extra.

The cheapest package from Google, then, runs about $45 a month and the most expensive runs about $357. That doesn't count any overages for I/O or storage.
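
For the curious, those monthly figures fall straight out of the per-day rates above. The quick math, assuming a roughly 30.5-day average month, looks like this:

    # Back-of-the-envelope monthly costs from Google's per-day package rates.
    # The per-day prices come from Google's published tiers; the 30.5-day
    # month is just an averaging assumption.
    D1_PER_DAY = 1.46    # 0.5GB RAM, 1GB storage, 850,000 I/O requests
    D8_PER_DAY = 11.71   # 4GB RAM, 10GB storage, 8 million I/O requests
    DAYS_PER_MONTH = 30.5

    print("D1: ~$%.0f/month" % (D1_PER_DAY * DAYS_PER_MONTH))   # ~$45
    print("D8: ~$%.0f/month" % (D8_PER_DAY * DAYS_PER_MONTH))   # ~$357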

Amazon's DB instances seem to be a bit more powerful than Google Cloud SQL instances, and Amazon has features that Google Cloud SQL doesn't. For instance, Amazon's Small DB instance has 1.7GB of RAM and the equivalent of a single CPU. With Cloud SQL, you're also limited to the languages Google App Engine supports: Python and Java.

With Amazon, developers can choose between 5GB and 1TB of storage (the max for Google is 10GB). The Small DB instance runs about $77 a month if it's on-demand, but choosing a one-year reserved instance brings that down to about $45 a month. The pricing, then, seems to line up for the "small" instances of Amazon RDS and Google Cloud SQL, but Google has fewer features and what looks to be less compute power.

But if you're using GAE, then Cloud SQL is the natural choice - so it's nice to see Google finally getting this into developers' hands. If you're using GAE and Cloud SQL, we'd love to hear what you think.

As Windows 8 approaches, Mozilla developers have been working hard on a Metro version. If you're using Windows 8 on the desktop, no problem. Tablet users, however, are going to be denied a fully functional Firefox - and will face restrictions on many other third-party applications. In the name of security, Microsoft is forcing them into a "sandbox" on ARM devices. The lockdown reneges on the company's prior promises, and it's going to have some far-reaching effects on many applications.

Mozilla's Asa Dotzler touched on this issue yesterday, saying that Microsoft "is trying to lock out competing browsers when it comes to Windows running on ARM chips." But it actually goes farther than that.

Microsoft is restricting access to some APIs on ARM-architecture devices that are, as Dotzler says, "absolutely necessary for building a modern browser that it won't give to other browsers so there's no way another browser can possibly compete with IE in terms of features or performance."

Dotzler is focused on the implications of Microsoft's win32 API restrictions on ARM because they affect Firefox. This makes sense because Dotzler works for Mozilla and focuses on Firefox in general, not to mention Microsoft's long history of anticompetitive behavior towards third-party browsers. Make no mistake, though: Limiting access to the win32 APIs is likely to impact many other applications as well. How can LibreOffice or Apache OpenOffice compete with Microsoft Office if they're shut out of the win32 APIs?

In the Name of Malware

Microsoft is getting cut a lot of slack for its anticompetitive stance because it casts these anti-features as "protecting users from malware." The argument goes that it's OK if Microsoft cuts competing applications off at the knees, because it's trying to prevent malware.

Leaving aside Microsoft's intentions - perhaps it truly is motivated only by the best interests of users - this argument fails on a number of levels. First, it assumes that Microsoft's own applications won't be exploitable. Given Microsoft's history with security, this isn't likely. Why does Microsoft get the assumption of secure applications, while third parties do not?

And let's not forget who got us to this juncture in the first place. Microsoft users have been worn down by more than a decade of security issues that trace back to Microsoft itself. Microsoft is essentially using its own failings to excuse its blocking of third-party apps that may well have better security than its own applications.

Sandboxing third-party apps into limited parts of the machine does nothing to ensure that Microsoft's own browser won't be ownable by malware. Since Internet Explorer code isn't open source, security researchers can't audit the code directly. Firefox, which can be independently audited, won't be available on the new ARM tablets.

Why Not Complain About Apple?

Some folks have tried to dismiss complaints about Microsoft's ARM policies by pointing at Apple. Since Apple also discriminates against developers on iOS, why shouldn't Microsoft?

Yes, Apple's iOS developer policies suck, but they've sucked since the operating system's inception. What's more, there's little chance that Apple is going to change its policies unless users start abandoning iOS, or there's some sort of legal interference. Given that it'd be hard to make a case that Apple has a monopoly, legal interference seems unlikely.

That doesn't mean that third parties should just shrug their shoulders and accept the same treatment from Microsoft. If Microsoft is successful in the tablet market, ceding the Windows 8 ARM tablets is going to be a big loss for third parties. Loss of one platform is difficult, but being shut out of two tablet platforms in a three-horse race is going to spell major problems for Mozilla.

Dotzler distinguishes between a tablet OS and a general-purpose OS, though. Right now, at least, iOS is just for phones and tablets. Firefox can still compete with Safari on Mac OS X. Whether the distinction really makes sense, I'm not sure, given iOS' dominance on tablets so far.

But Windows 8 is not tablet-only. As Dotzler points out, tablets may be a "tiny sliver" of the PC universe now, but if you're looking ahead, that's not going to be the case in a few years. "ARM will be migrating to laptop PCs and all-in-one PCs very quickly," he says. "If you read Microsoft's blog posts about Windows on ARM, you'll see that they expect ARM PCs to cover the whole spectrum. ARM chips are already being used in servers. This is not a tablet-only concern."

Giving Microsoft (or Apple) so much control over what applications run on their platforms is not good for developers or users. It should be assumed that users have control over their computing devices, and that means having the option to choose their own applications for Web browsing and everything else.

It's not at all puzzling that Mozilla is complaining about being shut out of Windows 8 tablets. What's puzzling is how many developers and industry pundits are willing to give Microsoft a pass.

Having data available electronically is not the same thing as the data being useful. Campaign finance disclosures provided electronically by the Federal Election Commission (FEC) are a good example of that. The New York Times' Fech (pronounced "fetch") is a Ruby gem - a packaged library - designed to help journalists and public interest organizations access and make sense of FEC filings.

Here's the NY Times' description of Fech from its first release last year:

Journalists who work with these filings need to extract their data from complex text files that can reach hundreds of megabytes. Turning a new set into usable data involves using the F.E.C.'s data dictionaries to match all the fields to their positions in the data. But the available fields have changed over time, and subsequent versions don't always match up. For example, finding a committee's total operating expenses in version 7 means knowing to look in column 52 of the “F3P” line. It used to be found at column 50 in version 6, and at column 44 in version 5. To make this process faster, my co-intern Evan Carmi and I created a library to do that matching automatically.

Fech (think “F.E.C.h,” say “fetch”), is a Ruby gem that abstracts away any need to map data points to their meanings by hand. When you give Fech a filing, it checks to see which version of the F.E.C.'s software generated it. Then, when you ask for a field like “total operating expenses,” Fech knows how to retrieve the proper value, no matter where in the filing that particular software version stores it.
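
To see why that matters, here's a toy sketch of the kind of version-aware lookup Fech automates. This is not Fech's actual API - Fech is a Ruby gem, and this is a hypothetical Python illustration; the only real numbers here are the column positions quoted above for "total operating expenses" on the F3P line.

    # Hypothetical illustration of version-aware field mapping.
    # The column positions come from the description above; everything
    # else is invented for the sketch.
    FIELD_MAP = {
        "total_operating_expenses": {"5": 44, "6": 50, "7": 52},
    }

    def field_value(row, field, fec_version):
        # row is a list of column values already split from an F3P line;
        # columns are assumed to be 1-indexed, as in the FEC data dictionaries.
        column = FIELD_MAP[field][fec_version]
        return row[column - 1]

    # A caller asks for a field by name, and the right column is picked
    # for the filing's software version:
    #   field_value(row, "total_operating_expenses", "7")  # reads column 52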

Why Fech Matters

Fech is already being used by the NYT for its reporting and interactive visualizations of campaign spending. But that's just one editorial team. Putting this tool in the hands of any developer or reporter who wants to work with the data opens up a lot more possibilities.

For example, there's ProPublica, which is using Fech and the NY Times' APIs for its reporting and interactive graphics. ProPublica is able to show not just what campaigns are spending, but how much and with whom. (So far the biggest winner is Mentzer Media Services, an ad agency that specializes in GOP campaigns - including the Swift Boaters.) Fech doesn't automatically point that out, of course, but it helps journalists uncover it.

Data without context is useless. By helping developers and journalists work with the filings in a more structured way, Fech helps newsrooms (or any other group) put the data in context to find the story behind the data. It's a long way from being simple to use, but it represents a significant improvement over the raw data. It's Apache-licensed, so it might find its way into all kinds of data analysis tools over time.

With Fech maturing well before the elections this fall, it could help all kinds of organizations follow the money trails much more efficiently. Here's hoping that happens.

The central issue in Oracle's Java copyright/patent case against Google, which has gotten lost amid a million-and-one interpretations of the case over the last two years, remains this: If Company #1 implicitly grants Company #2 the right to use technology that #1 created and owns, to the extent that it's perfectly fine with #2 copying portions of that technology for its own purposes without seeking explicit permission first, does that implied consent transfer to Company #3 when it acquires Company #1? The interim answer, issued by a jury in U.S. District Court in San Francisco today, appears to have been, "It depends."

In today's partial verdict, it appears that Oracle failed to capture the jury's heart with a story that Google somehow conspired against it. It's difficult to believe that Google's creation of Dalvik, a lighter-weight interpreter of Java code built for deployment in Android devices, was anything less than a genuine effort by Google to extend the reach of Java. At the time the Dalvik project began, Java was under the stewardship of Sun Microsystems. Sun often portrayed Java as though it were public property nurtured by its own good graces. When Sun engineer John Rose first learned of Dalvik's existence at a Google I/O conference in May 2008, his initial response was limited to curiosity coupled with the need for Sun to step up and provide guidance.

"The Dalvik bytecode design executes Java code in less power (fewer CPU and memory cycles) and with more compact linkage data structures (their constant pool replacement... reminds me of some recent experiments with adapting the JVM to load Pack archives directly)," Rose wrote at the time. "The VM uses 'dex' files like Java cards use their own internal instruction sets. The tool chain does use class files, but there is a sizable... tool called 'dx' that cooks JARs into DEX assemblies. The dex format is loaded into the phone, which then verifies and quickens the bytecodes and performs additional local optimizations."

It's "dx" which serves to distinguish Dalvik as a machine in its own right. But the issue here was not "can Google copy Sun's concepts to build a separate class of device?" (The answer there would probably have been: Yes.) The issue was to the extent that Google had to make Dalvik compatible with Java code by implementing bits and pieces of Sun's original code (which it appears Google's engineers did), how small do those pieces have to be before the difference between copying the idea and copying the execution becomes trivial and insignificant?

The issue of granularity is where today's jury's verdict shows signs of specificity. Without yet deciding the issue of liability (that comes later), the jury found that Google infringed upon Oracle's property as a whole by having copied certain small elements of it in their entirety. The jury answered "yes" to Question 1A (whether Google infringed). But by answering "Yes" only to Question 3A, and not Questions 3B or 3C, the jury indicated that copying methods - not copying source files or English-language comments, but the way functions are executed and the way work is done - constitutes infringement.

What's more, it infringes upon Oracle property, says the jury, even though Sun's attitude toward that same property at the time was one of permissiveness. The implication here is, had Sun truly been interested in protecting the openness of Java, it should have made more explicit grants to Google up front, with clauses mandating that Sun's permissions proceed to its successors.

Put another way, openness must be licensed.

The possibility remains that Google may not be penalized for this infringement - the jury may yet find that Oracle has not been damaged. (There's a viable argument that Android may have actually expanded the market for Java, thus benefiting Oracle.) But this may be the last time in the computing industry (or at least for several weeks) where an "open-door policy" regarding sharing methods is treated as a waiver from having to use that door at all.

In a world where the caretakers of open source fight with just as sharp swords as anyone else, you'd best be careful whose door you choose to use.

Data gravity is a term coined in a blog post by Dave McCrory. Basically, McCrory says to consider data as if it were an object:

As Data accumulates (builds mass) there is a greater likelihood that additional Services and Applications will be attracted to this data. This is the same effect Gravity has on objects around a planet. As the mass or density increases, so does the strength of gravitational pull. As things get closer to the mass, they accelerate toward the mass at an increasingly faster velocity.

...Services and applications can have their own gravity, but data is the most massive and dense, therefore it has the most gravity. Data if large enough can be virtually impossible to move.

Later, McCrory's post went on to talk about artificial influences on data gravity, such as costs, data throttling, legislation and more. Basically, factors that influence the movement of data in ways that wouldn't happen "naturally." For instance, Amazon allows free inbound data transfer, but charges for outbound data transfer. Another "artificial" influence is legislation, telling companies where they may or may not store data, or dictating terms of its storage.

Data Gravity in Action

You don't have to look very far to see data gravity in action. Consider Dropbox, Amazon S3, iTunes or just about any CMS migration ever.

Lots of companies want to emulate Dropbox, but few have managed to attract the same kind of user base, and none are as ubiquitous. That presence is paying off for Dropbox, which has now attracted quite a few third-party apps to its orbit, like Wappwolf and Ifttt. Perhaps that's why Apple is trying to disrupt Dropbox's gravitational pull by rejecting some iOS apps that use Dropbox.

You'll note that Amazon S3 and other Amazon AWS services make it very easy to get data in, but getting data out gets spendy. No shocker here - Amazon wants to encourage as many developers and companies as possible to toss data into AWS, and then tie them to the service.

Apple's iTunes is all about keeping data in Apple's services. Aside from Apple's now-defunct DRM on music, there's no using iTunes to transfer music or movies to other devices. It's Apple devices or nothing. Getting the entire library out of iTunes is non-trivial for many users, so in many cases it's like a digital roach motel: data checks in, but it doesn't check out.

If you've ever worked with content management systems, you already know all about the concept of data gravity - even if you've never heard the term. Getting all the data out of one CMS and into another is, well, painful at best. Often impossible. This is one reason why companies often stick with aging CMSes rather than go through the pain of migration.

Consider Gravity Before Deploying

Whether it's a single-user application like iTunes or a company-wide project, you need to consider the implications of data gravity: Once your data is in, how hard will it be to break the gravitational field?

The stronger the data gravity involved, the more cautious you should be when you choose your data storage solution. It's likely that once you have a sufficient amount of data wrapped up in a solution, it's going to be very difficult (if not impossible) to justify the costs of moving it away.

Back when we first started using PCs, we all wished that they would become as easy to use as a telephone. Well, we got our wish, not because computers got easier to use but because phones are now so darn complicated. If we examine the process by which phones became so complex, we can uncover a variety of lessons that startups can learn - and mistakes they can hopefully avoid.

The Phone's Transformation

First, the landline is becoming extinct. As we moved to cell phones, it became easier and more convenient for everyone to have their own number. This is true for both home and office lines: I gave up both a long time ago.

Minutes became ultra cheap, thanks to voice over IP telephony. Remember when you had to think about calling someone "long distance?" Now toll calls are pretty much a thing of the past. And the whole notion of area codes also got more complex. Forget traveling; cell phone users often keep their original area codes when they move to another city, so you can't tell where you are calling anymore. For example, I still have a Los Angeles "310" area code even though I have lived in the Midwest for several years now. In some countries, cell phones have their own area code, so all you can tell is that you are calling a mobile phone.

Then cell phones became more than just phones: About half of us now use them for surfing the Web or running apps, and navigating the typical cell plan now requires a degree in accounting. When we get a new plan, we have to figure out prices for our data and texting plans, and how many actual voice minutes we'll need – that is, if we still use phones to make actual phone calls.

And then, of course, there's the task of finding the right phone to purchase. Computers now seem a lot easier by comparison.

Lessons for Startups

So what, you say? Modern life is complex; deal with it.

But startups can take away some important lessons from this thought experiment:

Don't assume that technology is understandable by everyone. Consider the context in which an item is going to be used, and its intended audience. This is Marketing 101, but still. Just because brilliant engineers from Stanford or MIT design your product doesn't mean that everyone who will actually use it has that kind of training.

Simplify your pricing and eliminate degrees of freedom. I once had a client in the network storage business. Its pricing sheet comprised not one but a series of Excel spreadsheets. Since pricing had six different metrics, it could take the better part of an hour to come up with a final price for customers. It shouldn't be that hard. Take Thoreau's maxim ("Simplify, simplify, simplify!") to heart, and make your pricing easier to understand.

Align your product with your domain name. How often do companies start with one name and end up having to change it because their major brand got more popular than the name of their company?

Don’t penalize your best customers. When you run over your cellular airtime minutes allotment, you get hit with overage charges. It shouldn't take an act of Congress to convince companies of the folly of this tactic. Stop trying to extract more money from your best customers, and instead, make it easier to do business with your company.

There is nothing wrong with having subscription-based pricing, but make it clear how a customer can end a contract without paying a hefty penalty.

Don't make your product instantly obsolete. This issue is huge for cell phone makers right now, but it isn't unique to them: every time you buy a laptop, the manufacturer seems to instantly introduce something lighter with a better screen.

As you can see, there is a lot that startups can learn from the saga of cellular phones' growing complexity. It does make you long for those days when we could just pick up that black model and ask the operator to dial a number for us. As Lily Tomlin's "Ernestine" would say, "I work for the phone company. It isn't my job to think."

The Murky Basics

CISPA starts off strong, with a goal "to provide for the sharing of certain cyber threat intelligence and cyber threat information between the intelligence community and cybersecurity entities." Unfortunately, the sentence doesn't stop there, finishing with "and for other purposes." The last four words are the beginning of the confusion, and it just gets worse. The bill leaves a lot to interpretation on some very important topics, such as defining exactly who constitutes a threat. According to the bill, a cybersecurity threat is someone guilty of "misappropriation of private or government information, intellectual property, or personally identifiable information." That gives the government wide latitude, and it terrifies civil-rights activists.

A Slippery Slope

Rebecca Jeschke, media relations director for the Electronic Frontier Foundation (EFF), thinks the bill’s ambiguity could have catastrophic results: "CISPA gives companies a free pass to bypass all existing privacy law, with vaguely worded provisions and no oversight. It's a situation ripe for abuse." How far down the rathole could that abuse go? "If this legislation is passed, Americans will always have the spectre of government surveillance over their online activities - no matter who they are or how private their activities," Jeschke says.

While that might seem harsh, the EFF isn’t alone. The American Civil Liberties Union claims "this broad legislation would give the government, including military spy agencies, unprecedented powers to snoop through people's personal information - medical records, private emails, financial information - all without a warrant, proper oversight or limits."

If CISPA passes, though, we probably wouldn't notice a thing, at least initially. Unlike SOPA, which outlined more specific, direct (and ultimately, useless) consequences of being labeled a bad guy, CISPA merely removes legal and procedural barriers and adds a veil of anonymity for companies that choose to share customer data. But CISPA is a two-way street, allowing the government to share information about cybersecurity threats with businesses - and who wouldn't want access to that? "Voluntary" might not stay voluntary when the government is dangling your company's security like a carrot on a stick.

Civil-rights organizations aren't the only ones worried about government leverage. Microsoft, an initial supporter of the bill, recently withdrew its backing, citing concerns about violating existing privacy agreements with its users. Since information sharing remains optional under CISPA, many see Microsoft's waffling as a tacit acknowledgement that government strong-arming is inevitable. President Obama has cited similar concerns and threatened to veto the bill if it comes across his desk. CISPA will not turn the country into a police state overnight, but with the president and some of the industry's biggest players backing the EFF's claims, there's little doubt that over time, the bill would erode some amount of personal freedom and privacy in the name of security.

Our Only Hope?

Lost liberty has always been the cost of security, and many believe society will give up its freedoms if the reward is great enough. Dutch Ruppersberger (D-MD), one of CISPA's two sponsors, isn't shy about what he feels is on the line: "We weren't ready for 9/11. But we have an opportunity to be ready for [a cyberattack]."

Comparing a hack to the greatest tragedy in American history may be extreme, but Ruppersberger has a point. Foreign hackers have already disrupted satellite operations, and they steal as much as $400 billion in trade secrets each year. An organized attack on a traffic grid or power plant could absolutely lead to real-world deaths. Clearly, we're underprepared, and we need to do something. If CISPA doesn't pass, are we screwed?

According to Paul Sweeting, principal at Concurrent Media Strategies, not really. To Sweeting, there's not a lot of upside to the bill. His evidence? The people most familiar with CISPA don't seem to believe in it. "I think it’s fair to assume, in light of President Obama’s threatened veto of the bill, that the White House, at least, does not believe the bill as written would be particularly effective," Sweeting says. "This administration has not exactly been shy about putting its paws on the Internet in the interests of 'national security,' or about aggressive measures to protect the intellectual property of U.S. businesses. So if the White House is willing to torpedo CISPA, I think we can assume that its impact on cybersecurity would be limited, even if it passes."

And what about the coalition of business backers, including Facebook, AT&T, Symantec and other tech heavyweights? Sweeting thinks they're just in it for a free pass. He claims they’re "mostly interested in the liability exemption and don’t really believe it would have much effect on security. That’s why I think you see some of them going wobbly on their support now (e.g., Microsoft), as the opponents of the bill have gained some traction in the committee for tightening the exemption." It's worth noting that nearly all of the CISPA supporters were against SOPA, which would have forced tech companies to police their own content.

If that's the case, a more specific bill that everyone can support might be worth the wait. After all, as the EFF points out on its website, CISPA does nothing to reduce the number of exploitable vulnerabilities that facilitate the vast majority of attacks, so with or without CISPA, the bad guys aren't going away any time soon.

That Apple remains in first place in the tablet market comes as no surprise. IDC's latest research shows that in the first quarter of 2012, Amazon's once-hot Kindle Fire is struggling. According to IDC, Amazon's share dropped from nearly 17% of the tablet market to 4%, with fewer than 700,000 units sold compared to Apple's 11.8 million.

The inexpensive Kindle Fire took off when it was introduced in late 2011, giving Amazon 16.8% of the tablet market with 4.8 million shipments. Amazon's 7-inch tablet was the right product at the right price at the right time, that time being the all-important holiday season. The Fire offered many of the features that people want for less than half the price of an iPad. But it didn't knock people's socks off, and many of the reviews were lukewarm at best.

Amazon Still Trounces B&N

Apparently, the bloom is off the rose. Amazon's Q1 sales put it behind Samsung's Android tablet sales, but still comfortably ahead of Barnes & Noble's Nook tablets. Lenovo took the fourth slot, while B&N grabbed fifth place.

IDC predicts that Amazon will try to win back market share with the introduction of a "new larger-screened device... at a typically aggressive price point." Tom Mainelli, research director for IDC's Mobile Connected Devices group, also predicts Google will debut a tablet co-branded with ASUS.

Lessons Learned: Price Matters, to a Point

The lesson that Apple's tablet competitors should take from IDC's research is that price does drive sales - up to a point. The drop-off from the last quarter of 2011 to the first quarter of 2012 is far steeper than is easily explained by the end of holiday shopping. IDC had predicted overall tablet sales to be 1.2 million units higher than they were this quarter, with the shortfall mostly attributed to Amazon's slip.

Even though Apple introduced a new iPad this year, it's continuing to sell iPad 2s at a reduced price, fending off the cheap Android tablets and defending its market share while maintaining high margins on the rest of the iPad line. Apple owned 54.7% of the market in Q4 2011, and has bumped that figure back up to 68% in Q1 2012.
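
A quick sanity check shows how those unit counts and share figures hang together. The implied market total here is our own arithmetic from the numbers above, not an IDC figure:

    # Rough consistency check on the Q1 2012 figures quoted above.
    apple_units_q1 = 11.8e6    # iPads shipped, per IDC
    apple_share_q1 = 0.68      # Apple's Q1 share, per IDC
    amazon_units_q1 = 0.7e6    # "fewer than 700,000" Kindle Fires

    total_q1 = apple_units_q1 / apple_share_q1      # implies ~17.4 million tablets
    amazon_share_q1 = amazon_units_q1 / total_q1    # ~0.04, i.e. about 4%

    print("Implied Q1 market: %.1fM units" % (total_q1 / 1e6))
    print("Amazon's implied share: %.0f%%" % (amazon_share_q1 * 100))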

Tablet sales have grown 120% from last year, but were still lower than IDC's predictions. Whether tablet sales continue to slow in Q4 will be interesting to see.

Android Still Lags

Android vendor sales have not been able to catch up to iOS in the tablet market in the same way that they've been able to catch up with iOS phones. Despite two years and a slew of vendors chasing Apple's tail, Android has been able to glom on to only 32% of tablet sales in the last quarter. Android's fragmentation, poor customer reviews and pricing missteps have held it back.

It had looked like Amazon's Kindle Fire would be the breakout device for Android. And it was - briefly. But even Amazon's muscle and cut-rate pricing hasn't been enough to overtake the iPad.

We've already established that members of Congress are pretty bad at informing the public via their websites. The good news is that you can find a number of excellent sites for keeping an eye on the U.S. government. Not surprisingly, most of these are provided by third parties, rather than the government itself. To help ReadWriteWeb readers as the election season approaches, we've pulled together a list of the best sites for seeing just how the sausage is made. Just remember: What's been seen can't be unseen.

POPVOX: Bridging the Public and Congress

Tracking bills through Congress can be complicated, to say the least. Giving elected officials feedback, and making sure it's heard, is even more so. POPVOX was founded in 2010 as an attempt to help voters and Congress by making it easy to find bills, voice support or opposition to legislation, and share opinions. But don't look to POPVOX for its opinions - one of the site's goals is to be free of editorializing.

POPVOX tracks all of the bills in Congress, and how members vote. If you sign up and give POPVOX your information, it will help you track how your representative and senators vote on bills before Congress. You also get to see whether other POPVOX users support or oppose the bills, with handy little pie charts that show support and opposition, as well as how many users have spoken.

The bill summary pages also list organizations that endorse and oppose the bill, as well as the administration's stance on a bill. Naturally, the site also includes the text of the bill and its status before Congress.

POPVOX is supposed to provide a more effective way to read public sentiment on bills and get feedback on them. If POPVOX takes off, maybe it can counter the influence of paid lobbyists in favor of the public.

OpenCongress

While OpenCongress does not avoid editorializing, it's still a fantastic tool for paying attention to Congress. The site is a joint project of the Sunlight Foundation and the Participatory Politics Foundation.

One tool that you'll find on OpenCongress that's not available via POPVOX is a way to track your representative and senators specifically. OpenCongress shows how often they vote with their party, their votes and their money trail. OpenCongress even lets you pit legislators against one another by comparing their voting records.

OpenCongress also lets you follow the money trails by industry sector, so you can track things like pro-gun and gun-control spending, how the entertainment industry spends money, and so on. If you want to track specific issues, there's an index of broader issues as well. This shows "hot bills" by the issue area, key votes, the latest bills and enacted bills.

The committee view has potential, although it looks like this is an underloved section of the site. Several committees have no membership data, though the site promises that "it's coming soon in August 2009." Keeping up with Congress isn't easy, though, and on the whole, the site provides a fantastic resource.

Poligraft

The concept behind Poligraft is simple, but extremely complicated to pull off. Give it the text of an article, press release or blog post, and it will give you an "enhanced view" of the people, organizations and their relationships.

Give it the URL of a political story, and it will filter the story for points of influence, campaign donations and individuals in the story. It tries to show where money goes and where it comes from, in relation to any given story. For example, run a Politico story through it and Poligraft points out when donors and recipients are mentioned together - like Goldman Sachs and James Walsh. It helps give context, for instance, when politicians are talking about organizations that may be giving them or their opponents money.

Follow the Money

Money talks in Congress - loudly. Finding out who's spending what, and how, can be pretty difficult, especially with the explosion of super PACs. The Sunlight Foundation's Reporting Group provides a handy site called Follow the Unlimited Money, which tracks all super PACs that have raised at least $10,000 since the beginning of 2011.

OpenSecrets.org

The OpenSecrets.org site is a treasure trove of information for tracking the influence of money on U.S. politics. The use of big data in tracking government is trendy now, but OpenSecrets.org was well ahead of the curve. The Center for Responsive Politics has been publishing since 1983, and the Web site has been up since 1996.

OpenSecrets.org goes a bit beyond reading the tea leaves of big data. It also does good old-fashioned reporting and finds a lot of information you might otherwise miss. One of my favorite projects on the site is the revolving door, which tracks former members of Congress and staffers, so you can see where they go when they leave Congress. OpenSecrets.org also lets you check the top lobbying firms and see who hires folks who used to work on the Hill. See also the Sunlight Foundation's lobbying tracker if you're into paying attention to lobbyists.

Federal Register

Want to see what executive orders are coming from the White House, or rules being proposed by federal agencies? Then you'll want to take a look at the Federal Register. The U.S. government posts notices, proposed rules, rules taking effect and "significant documents" for public inspection.

There's a lot of information on the site, but it's not as easy to use or as friendly as some of the sites provided by organizations like the Sunlight Foundation. You really have to know what you're looking for here to be able to find out what's going on. However, the Federal Register does have an API, and the code for the site is on GitHub, so third parties have everything they need to examine and tame the data.
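
As a taste of what's possible, here's a minimal sketch of hitting that API from Python. The endpoint path and parameter names here are our assumptions about the documented interface - check the Federal Register's API documentation before building on this.

    # Minimal sketch of querying the Federal Register API.
    # The documents.json endpoint and the conditions[term] parameter are
    # assumptions based on the public API docs; verify before relying on them.
    import json
    import urllib.parse
    import urllib.request

    BASE = "https://www.federalregister.gov/api/v1/documents.json"

    def recent_documents(term, per_page=5):
        params = urllib.parse.urlencode({
            "conditions[term]": term,
            "per_page": per_page,
            "order": "newest",
        })
        with urllib.request.urlopen(BASE + "?" + params) as resp:
            return json.load(resp)

    # Example (hypothetical search term):
    # for doc in recent_documents("cybersecurity")["results"]:
    #     print(doc["publication_date"], doc["title"])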

MuckRock

Ever thought about filing a Freedom of Information Act (FOIA) request? The folks over at MuckRock have. In fact, they've filed more than 1,000 requests and received more than 30,000 pages of government documents. Out of all of those requests, only 273 have been "successfully completed" and 85 have been denied - meaning there's still some wiggle room for the government to just ignore requests or delay them significantly. Take, for example, this request for FOIA filings in Boston. It's gone unanswered for nearly two years.

Still, the MuckRock folks are turning up interesting information and showing others how it's done.

If you know where to look, you can find out much more about what's going on in government these days - thanks to the series of tubes we call the Internet. But there's always room for more information and better efforts to put that information in context. Have a favorite open government site? Let us know in the comments. And yes, we know this is U.S.-centric. Think we should try to pull together a list of international open government resources? Let us know.

There's a reason that IDC, Forrester, and Gartner are so big. They offer scale and coverage that small firms can't match, and they attract industry heavyweights who can make or break emerging technologies. But there's a downside to scale. Unless you're a corporate whale, it's easy to get lost in the shuffle, and getting that superstar on the phone in a pinch might take more time than you have.

I'm certainly not suggesting that you throw away your existing subscriptions, particularly if you're a vendor or solution provider. Put some effort into those relationships, and they'll pay for themselves several times over. But there's something to be said for the little guy, and there are hundreds of smaller analysis firms that can provide you with the kind of service and support you need to make informed decisions on a daily basis.

There's no way to provide a comprehensive list of analysts or coverage areas in small firms, but I've chosen five analysts who exemplify the kind of breadth in business model, coverage areas and perspective you can find when you look beyond the Big Three. Full disclosure: I've worked with some of these people before, but don't hold that against them.

The gaming industry is a tough nut to crack. It's an art, a business and a unique exercise in supply-chain economics. Plenty of analysts cover financials ("300,000 units shipped!") and tech ("11 million polygons!"), but most leave the games themselves to the press.

M2's Billy Pidgeon understands all three worlds. While he's spent the last dozen years at various research houses, Pidgeon will always be a gamer at heart. He's produced more than 20 games, including major releases such as 1997's Turok: Dinosaur Hunter. This street cred gives him access to insights and talent that more buttoned-up analysts might miss. If you're looking for one-on-one practical advice about the gaming market from someone who's been there but also gets the big picture, check him out.

If you're a Firefly fan, think of RedMonk as the BrownCoats of the analyst world. If you're not, their motto should tell you what you need to know. "Analysis by the people, for the people" says it all. I would have chosen just one of their four analysts, but that would have violated their whole "community" vibe.

RedMonk tips its hat to the open-source world it covers by giving away its research, believing that an open discussion provides the greatest benefit to everyone, including their paying customers. They make their money from consulting services that start at a flat $5,000 per year, increasing with the size of your company or your consulting demands. For your money, you get access to very astute technical minds focused on helping vendors produce tools that developers will actually want to use. As the business model might suggest, it's a very populist approach in which the end user, IT manager, or systems analyst is a lot more important than the CIO, which is dramatically different from the coverage aims of most larger firms. If you're a software developer, $5,000 a year is a very small price to pay for a contrarian perspective.

Sustainability is no longer just hip; it's an essential (and sometimes mandated) part of doing business, sitting on a growing pile of hard science. It's a big industry, so hundreds of consultancies have bolted on an "eco-" to get your business. It's tough to weed out the pretenders.

David Schatsky has a background in technology, policy and finance. He also spent nearly 10 years at JupiterResearch as a Research Director and President (yet more disclosure: He was also my boss there for a while), so he understands the analyst gig. But what sets him apart from the rest of the eco-kids is his understanding that he shouldn't do it alone. When he founded Green Research, Schatsky brought in David Meyers, an environmental heavyweight, to build out the company's real-world expertise and complement his research experience, and they've further rounded out their expertise with associated content providers. The result is a small, personalized shop that should be able to address most of your environmental concerns directly, but has the connections to pull in other experts where needed.

Real Story Group doesn't work with the vendors it covers. At all. No consultations, white papers, or appearances at vendor events - nothing that could possibly influence its coverage. This independence irritates the industry and helps its clients (anyone working with content or knowledge management) trust what they read. While RSG has a number of top-notch analysts (Theresa Regli deserves a shout-out, particularly regarding international content management issues), the man behind the business model is Tony Byrne, the company's founder.

RSG's Evaluation Reports are their most popular deliverable, largely because of their Consumer Reports-style comparison charts. They aren't cheap (running around $2,500 per report), but they can save you tens or hundreds of thousands during your evaluation process and give you the answers you need to ask the right questions of your vendors. Byrne is convinced that RSG's objectivity and laser focus will convince most one-off purchasers to stick around as clients for further research, as well as advisory services to help manage the tools and content with the software you've bought. So far, so good.

Seniors are our fastest-growing demographic segment, and the technology required to help them age is of tremendous social and financial importance. So it's strange that until fairly recently, most major research firms treated the category like an afterthought. Laurie Orlov is one of the few experts in that space, and the foremost authority in the study of using technology to remain in the home as you age. In fact, she kind of created it.

Jeff Makowka, AARP's Senior Strategic Advisor, Thought Leadership, explains her impact: "She's a real visionary. She took her past life (as a Forrester analyst) and overlapped it with a caregiving experience and basically thought up the category. Solutions already existed, but she defined and legitimized Aging in Place Technology."

Like every boutique analyst's, Orlov's journey is unique, and probably impossible at one of the largest firms. Small firms will never give you the coverage of the Big Three, and they can't amplify your voice as loudly to the world, but they do a great job of filling the gaps if you're willing to do some searching.

Have you had experiences with small research firms? Let us know who you've used and how it worked.

Last month, Netcraft recorded nearly 677 million websites in its April Web Server Survey. May is a different story, though. This time, Netcraft found a drop of 14 million hostnames, the first decline in nearly two years. Despite the decline, things are still looking very good for the Nginx web server and its continued foothold in the Web's most-used sites.

The hostname decline, according to Netcraft, is due to more than 28 million .info hostnames that were controlled by SoftLayer going into oblivion. The drop was enough to offset new growth, in a month in which Apache lost more than 17 million domains.

Netcraft looks at more than just the total domains, of course. It also measures the million busiest sites and the active sites - which helps to get a view into the Web servers that are actually being used for live sites, as opposed to the parked domains that make up the bulk of the Internet.

SPDY and IIS 8.0

Netcraft's survey has also picked up on some cutting-edge tech out there, in very small numbers. Netcraft spotted 654 hostnames being powered by Microsoft IIS 8.0, which is the Web server in Microsoft's Windows Server 2012. It'd be interesting to know how that compares with servers running Apache 2.4.x, which was released in February but is still in the early stages of adoption.

Even fewer servers are running SPDY. Netcraft spotted a whopping 339 servers running SPDY, which is mostly Google and a handful of other sites. SPDY usage is likely to increase if and when Apache and Nginx have bundled support for it. You can get a module to use SPDY with Apache now, but it's not distributed with the official project. Nginx isn't expected to have SPDY support until later this month.

Nginx Still on the Rise

Once again, Nginx increased its share of the million busiest sites, but only by a hair. In April, the up-and-coming Web server had 100,394 domains responding to the Netcraft survey. In May, it nudged up to 100,417, maintaining its 10.09% share of the market.

Nginx's share of the active sites actually dropped a bit. In April, it had about 24.3 million. In May, Nginx only had about 23.9 million, a 0.27-point drop in share. Apache increased here, from about 107.7 million to about 109.3 million, a 0.36-point boost that brought it to 57.02% of the active servers.
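
For reference, the share percentages and the raw counts tie together with some simple arithmetic. The implied total of active sites below is our own inference from the numbers above, not a Netcraft figure:

    # Active-site totals implied by Netcraft's May numbers.
    apache_active_may = 109.3e6    # Apache's active sites, per the survey
    apache_share_may = 0.5702      # 57.02% of active sites
    nginx_active_may = 23.9e6

    total_active_may = apache_active_may / apache_share_may   # ~192 million
    nginx_share_may = nginx_active_may / total_active_may     # ~12.5%

    print("Implied active sites in May: %.0fM" % (total_active_may / 1e6))
    print("Nginx's share of active sites: %.1f%%" % (nginx_share_may * 100))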

Microsoft also lost servers in May. Microsoft IIS now has 11.9% of the active servers counted by Netcraft, and 14.76% of the top million domains. It might not be long before Microsoft IIS slips to number three behind Nginx and Apache. But it doesn't look like Microsoft is losing a lot of sites because customers are switching; rather, it appears that IIS is falling behind because it's not being deployed on new servers.

What remains to be seen is whether Nginx can put a serious dent in Apache, or if it's always going to be a distant second. Apache still powers the majority of Web servers, and it has managed to beat back IIS pretty handily.

We are surrounded by failure in the tech world, and some of those failures are big enough to sit in our memories for years. After the latest news from Google, we were reminded of many other shameful moments in tech. We put together our own RWW Hall of Shame to see if we could learn any lessons from these sordid tales of woe.

Google's Street View brought the concept of "payload data" to the forefront: While those nifty cars with cameras were cruising our communities, Google was purposely collecting data transmitted over open Wi-Fi networks to which it could connect. First Google said it wasn't intentionally doing this, then the word leaked out that many project teams within Google had access to this information. Google should have come clean on its intentions, and the executive who authorized the project should come out of the Googleplex and take responsibility for being "evil."

But Google is hardly alone in acting shamefully. Consider:

Amazon should be chastised for patenting one-click shopping. Leave it to Amazon to have gotten one of the most annoying software patents of all time: the ability to purchase something with a single click online. Lately, it has been very unresponsive to its customers and has suffered lengthy outages with its Web services. The company needs to swallow a huge antihubris pill and come out with better support mechanisms if it wants to keep its customers.

Last year, Netflix tried to split itself into two companies, one for streaming and one focused on its legacy DVD rental business. The split didn't go well, and as a result, Netflix lost at least 10 percent of its customers, with many of them going to competitors. Certainly, trying to charge more for the same service it had previously offered was bad news, no matter how small the increment.

Lexmark was one of the first laser-printer companies that forced customers to use its toner cartridges. It did this by adding special ID chips to its toner cartridges and then having its printers check for the ID, so you needed to buy Lexmark cartridges as replacements. Apart from starting an entire cottage industry focused on defeating this procedure, requiring your customers to act a specific way is generally a really bad idea.

Sony deserves mention for installing malware on its music CDs in the name of copy protection. Back in 2005, Sony made news with its special rights management software from a company called First 4 Internet. The software came with the Van Zant music CD Get Right with the Man (ironic title completely unintentional). The software was used to play the music files from the CD and monitor how the PC used the music, ostensibly to prevent digital copying and ripping. Sadly, the software did more than that, including burrowing deep into your Windows OS and purposely disguising itself by hiding its executable files from plain sight. Worse yet, the software stole performance from your computer in doing its bidding. That got a lot of attention, and Sony was forced to offer a removal tool. Now the issue is moot, as how many of us really buy CDs anymore?

Sears provided its own malware on MySHCcommunity.com. Sony wasn't the only big company that installed spyware on your PC. You would think that others had learned from its mistakes, but in 2008, Sears decided to try it on its own with a special website that pretended to be a portal for its resellers. Oops! There is such a thing as being too close a partner. Again, denials were followed by fixes.

Dell shipped a laptop that brought fresh meaning to the term "explosive new release." Back in 1993, Dell made news with its SL320i laptop, which had an exploding battery. Initially the company denied it, then worked hard to offer replacements. Since then, Dell has gotten its customer-service listening act together and today is an exemplary social-media operation.

Miniscribe developed the concept of "brick drives." In late 1989, the well-known Longmont, Colorado, disk-drive maker found its short-term financial situation in bad shape and thought it had the solution: Ship bricks instead of disk drives that customers had ordered, use the payments to stabilize its situation, then chalk it all up to a packaging error and send out the real drives. We'll never know if it would have worked, because the company laid off a number of employees who had been complicit in the shipments - and who then turned around and outed the whole scheme. (As if the customers wouldn't have noticed that their drive installs were more difficult than usual.) At least Miniscribe paid for its sins with a very quick bankruptcy.

Some lessons learned from these events: Denial is not a river in Egypt. Come clean with the facts and offer a fix ASAP. Also, make good on any customer slight. In an era when customers can tweet and post on social media, you want to work toward keeping the customer happy, and the cost will be small. Finally, steer clear of putting any software on someone's PC without his or her knowledge. Anything else is just spyware.

Feel free to suggest some of your own egregious and shameful tech acts from the past.

As a foreign correspondent in London 10 years ago, I was tasked with unearthing innovative new startups for my business magazine's readers. I traveled across the Continent, from Helsinki to Milan, meeting entrepreneurs, venture capitalists and big-company researchers to write about the next big thing.

In the summer of 2002, I attended a launch party for a startup demonstrating its nascent service at a swanky Haymarket bar. Upon walking in, I found printed instructions to visit one of the tables playing music and then navigate through a maze of confusing WAP mobile-phone menus. The result: my phone magically told me the name of the song playing in the room. The event was Shazam's coming-out party. It took almost 10 years for the music-recognition app to truly gain widespread recognition, but for me, it was the first time I saw firsthand something that was possible only with a mobile phone.

Ten years later, publishers are still plotting the best ways to engage readers on mobile devices.

The stakes are high. As technology continuously improves, the share of content consumed on mobile devices keeps growing; on average, 20% of a site's content is now being consumed in mobile browsers. But evolving technology platforms and consumption patterns make it far more difficult to succeed on mobile than on the desktop.

And the challenge of building a great mobile experience isn't solved simply by ensuring the content displays correctly in each environment. The bigger challenge is figuring out how best to match a publisher's content and mission with the distinct properties of varied operating systems, devices, browsers and app environments.

Different technology translates into different consumption patterns. For example, users consume content in very different ways in apps than they do on the mobile Web. Gaming and social apps account for 80% of all app activity; by comparison, those activities account for just 40% of time spent on the desktop. Mobile Web consumption more closely mirrors what people do at a desktop, with news, utilities, entertainment and topic-specific content accounting for the bulk of activity. Most publishers are responding to this rapidly evolving technology landscape with a wait-and-see approach.

A brave few are experimenting early, and with promising results.

Food52 has tailored its approach to the screen size. Its iPhone app is focused on its Hotline, a forum for user questions and answers. To take advantage of the bigger screen and encourage users to take their iPads into the kitchen, Food52's Holiday app included a variety of entertaining tips, such as step-by-step instructional videos on how to prepare a dry-brined turkey or Tuscan onion confit.

The logical first step for publishers into mobile publishing is to create a mobile-optimized site. SAY makes that easier with technology, used by Remodelista, that automatically resizes the page based on the size of the screen it's being viewed on.

Still others are pushing the envelope even further. Kinfolk Magazine's luminous iPad app complements its quarterly books about small gatherings by encouraging readers to experience the content in a way unique to a tablet device. Whether swiping down for a peek at an intimate dinner by a freezing lake or rearranging the layout and size of photos of a salty dinner of buttered clams and beer in Maritime Canada, readers have never been able to personalize content like this before.

Like it or not, another U.S. election season is upon us. Among other things, that means that people will be spending more time visiting the websites of U.S. legislators to study up on their views and voting records. What they're going to find is not pretty. Congress seems to have found at least one issue that crosses party lines: truly horrible websites. If you're looking for real information, the official websites for members of the House and Senate, regardless of party affiliation, are uniformly useless.

One key finding from the Congressional Management Foundation (CMF) certainly rings true: "A significant number of Member websites lack basic educational and transparency features and content valuable to their constituents." In visiting dozens of sites for members of the House and Senate, I found little that could be described as useful or valuable.

Interstitials, Really?

Let's start with an issue that was literally in my face most of the time I was researching this. Who loves the interstitial ads and notices that pop up when you visit a website? That's right, no one. Yet the sites for too many representatives and senators throw up an interstitial ad to sign up for their newsletter or take some poll the minute you click on their Web page.

In some cases, these interstitials pop up every time you visit the member's home page. Take Senator Roy Blunt (R-Mo.). His site spits up an interstitial every time you reload the home page, so if you access any of the site's other pages and return, you're smacked in the face with it once again.

It's annoying when it's advertising. It's even more annoying when it comes from elected officials.

Beyond the interstitials, almost every page features lousy navigation and overuse of JavaScript - and many depend on plugins that effectively require the use of a mouse. Virtually all of the sites I checked used video of some kind, but I found none that offered a transcript of the video's audio.

Some of the sites appear to have controls to resize text, though they don't always do what's expected. Take, for instance, the site for Senator Claire McCaskill (D-Mo.). When viewing the site in Google Chrome, the text controls resize only some text. On the front page, the controls resize only the text in the box next to the slideshow on the right-hand side. Constituents who would like to enlarge, say, the menus, will be disappointed.

Voting Records

One of the most obvious things that constituents care about is the voting record of their representatives and senators. It is, after all, one of the key duties of members of Congress. Despite that, you'll be hard-pressed to find a congressional site that displays voting information in a usable form.

Sure, you'll see news about specific bills if the legislator is really interested in publicizing their position on that bill. But comprehensive voting records? You'd hope to find them on each and every congressperson's page, but not so much.

One of the sites that does offer a voting record belongs to Senator Lisa Murkowski (R-Alaska). Murkowski's site sports a chronological list of bills and amendments, a short description and Murkowski's vote on each. Savor that page; you'll not find many like it. Erik Paulsen, from Minnesota's 3rd district, has a page that promises a voting record, but it's blank. John W. Olver, from Massachusetts' 1st district, lists his voting record - but without any context except the bill number, and there's no search at all.

Incumbency Through Obscurity

One might think that a congressperson's website would exist to inform their constituents about what's going on in Congress, and to help make sense out of the process. Judging by the actual content of the sites, however, that doesn't seem to be the case.

Each site (or at least all that I visited) sports dual "About" pages: one for the congressperson, one for the state. So if you're a fifth-grader taking a crash course on your home state, and lack access to Wikipedia, then you're golden. Well, not really, because the state summaries are more a random collection of facts than a comprehensive overview of the state. The "About" pages for the congressperson, of course, always read as a glowing tribute.

The overall site design for most of the sites is mediocre, overly busy, and more like a travel brochure site than anything designed to inform voters. You'll find plenty of pictures of the congressperson and the home states, press releases, videos and links to social media. You'll also find "news" with the typical partisan spin you'd expect from campaign commercials.

What you won't find is much of the information you might actually want, such as the aforementioned voting records. Also uniformly absent: the committees the congressperson serves on, explanations of how bills actually become law, the lobbyists they've met with, campaign donors - anything that poses a danger of arming citizens with real information that might lead to more intelligent voting. It's as if our elected officials don't want us to know what they're doing in office.

But if you would like to purchase a flag that's been preflown over the Capitol, you'll find a link for that pretty easily. If preflown flags are as in-demand as the number of links on congressional websites suggests, we may have a solution for whittling down the debt.

Compared to a problem like the federal debt, the state of congressional websites may not seem the most pressing issue. But what should be vehicles for informing the public turn out to be little more than campaign ads, with a few concessions to public service and communication thrown in.

So much for Barnes & Noble's standing up to Microsoft's "anticompetitive scheme" against Android. One year after their nasty patent spat flared up, Microsoft and B&N have buried the hatchet with a "strategic partnership" that has Microsoft dumping $300 million into a new subsidiary company. It's a smart investment for Microsoft, since allowing ambiguity to fester around Android's patent status earns it far more than the $300 million it's putting into B&N.

The terms of the deal have Microsoft settling its suit with B&N, giving the company a royalty-bearing patent license for the Nook line. Microsoft is putting a $300 million investment into "newco," an as-yet-unnamed subsidiary, in exchange for a nearly 18% equity stake in the company. B&N will own the rest of the company, which will have an "ongoing relationship" with B&N's retail stores. The company will also include the B&N college business. There will be a Nook app for Windows 8 as part of the deal, as well.

Microsoft Saves Face, B&N Gets a Boost

So, despite the fact that the deal took many industry watchers by surprise, it shouldn't have been a total shock. Microsoft wasn't about to let a losing case go to trial that might jeopardize its Android cash cow.

By settling with B&N, Microsoft avoids an ugly court battle that might not have been decided in its favor. Like most companies that wield patents as weapons, Microsoft aims to prevent competition and maximize royalties. And since Microsoft has no dog in the e-reader fight, the partnership with B&N makes sense for it anyway.

And because Microsoft's patent license for B&N is royalty-bearing, Microsoft may well make back its investment and wind up with a portion of Newco to boot.

Winners and Losers

The big winner here is Microsoft, make no mistake. While the ITC decision was not a lock for B&N to win the case, it should have carried quite a bit of weight.

Barnes & Noble avoids a protracted legal battle with a company that has much more legal firepower. The best case for B&N was to fend off Microsoft's suit, which still would have meant spending a lot of time and money while it's busy trying to compete with Amazon and Microsoft. Having the Microsoft suit cleared off the deck, with some cash to boot, is a win for B&N in the short term. Investors certainly seem to like the deal: Barnes & Noble's stock price is up by more than 60% since the news hit the wires this morning.

This is a loss for Android, though. Once again, patent FUD remains strong in the absence of actual legal decisions. Microsoft can point to the deal and claim that yet another company has found its patent claims compelling. It also doesn't have to deal with trial testimony airing its back-room patent negotiating tactics, which B&N seemed very willing to disclose.

Is it a win for users? It's hard to see how. As usual in patent cases, nothing in the announcement points to any innovation taking place, just two large corporations trying to decide how to carve out market share and avoid real competition.

Amazon is helping to bring the mythical "paperless office" a bit closer, if only by a tiny fraction. The new Send to Kindle for Mac app lets Mac fans join PC users to bypass the printer altogether and "print" documents directly to their Kindle. The question is, what's taking the other e-book providers so long to deliver similar functions? Can we get a little more movement here, please?

The Mac app follows the Send to Kindle for PC app released in January. Both apps integrate with the OS to add a Send to Kindle option to the printer dialog and file manager (Finder on Mac, Explorer on Windows). The Mac version also gives users the option of dragging a supported file to the Send to Kindle Dock icon to format and send a file to the Kindle.

Paperless Office

The myth of the paperless office has been with us for decades. Computers were supposed to reduce wasted paper by doing away with all those forms, memos and other unfortunate side effects of corporate life. We'd still have all the paperwork, just without the paper. Or so the story went.

The unfortunate truth, though, is that most of the advances that should or could have resulted in a paperless office have generated more printing. That company memo? Better print it out. The slides you need to review? Printed. That great article on ReadWriteWeb about the paperless office? Let's see if there's a "print view" or send to Instapaper so I can... you get the idea.

Email? You've no doubt seen the email signatures that beg people not to print out their email. Yet millions of users regularly print out their messages for later reading.

Many thought the PDF could help achieve the paperless office, but even Steve Partridge, Adobe's Acrobat evangelist, had to concede that the PDF didn't solve all the problems it needed to. Why? In part because people want a better way to read the documents they're printing.

E-Books and Tablets to the Rescue, Kinda

E-books and tablet computers solve the portability issue, but they don't really address the "getting my stuff from point A to point B" problem.

Sure, you can kludge together something if you try. You can use the Instapaper iPad app, for instance, to read Web pages on your tablet or use the Instapaper site to send articles to your Kindle queue. You can print to PDF and read it on the Kindle (with varying degrees of success and speed) or use apps on the iPad or other tablets to read PDFs.

But workflow is often an issue. Sending items to the e-book or tablet, before now, was less than optimal. At least for Kindle owners, it's gotten a lot easier. Well, as long as the Kindle owner is also a Mac or Windows user. No love for Linux here.

Tip of the Iceberg

Amazon's Send to Kindle apps are just the very tiny tip of the iceberg. In pursuit of the paperless office, there's a slew of opportunities just waiting to be tapped.

Right now, Amazon's tools are really tailored to end users and depend on each user deciding "yes, I'd like this on my Kindle or tablet." But there should be a market for tools that let any organization push documents to tablets or e-books. Amazon recently (and a bit tardily) made its AWS documentation available on the Kindle. Wouldn't it be nice if every company could make its employee documentation and communications printable straight to the Kindle?
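For what it's worth, there's already one documented route for pushing documents to a Kindle without touching each user's desktop: Amazon's Send-to-Kindle email gateway, which accepts attachments mailed to a device's @kindle.com address from an approved sender. Below is a minimal Python sketch of that route; the addresses, SMTP host and credentials are placeholders, not anything Amazon ships.

```python
# A minimal sketch of pushing a document to a Kindle via Amazon's documented
# Send-to-Kindle email gateway. The SMTP server, credentials and @kindle.com
# address below are placeholders; the sender must be on the Kindle account's
# approved list for delivery to work.
import smtplib
from email.message import EmailMessage
from pathlib import Path

def send_to_kindle(path, kindle_address, sender, smtp_host, smtp_user, smtp_password):
    doc = Path(path)
    msg = EmailMessage()
    msg["From"] = sender
    msg["To"] = kindle_address          # e.g. yourname@kindle.com (hypothetical)
    msg["Subject"] = doc.name           # the gateway ignores the subject
    msg.set_content("Sent to Kindle")   # and the body
    msg.add_attachment(doc.read_bytes(),
                       maintype="application",
                       subtype="pdf",
                       filename=doc.name)
    with smtplib.SMTP_SSL(smtp_host) as smtp:
        smtp.login(smtp_user, smtp_password)
        smtp.send_message(msg)

if __name__ == "__main__":
    send_to_kindle("employee-handbook.pdf", "yourname@kindle.com",
                   "docs@example.com", "smtp.example.com",
                   "docs@example.com", "app-password")
```

An internal tool like this wouldn't replace a proper push mechanism, but it's the kind of workflow a company could automate today.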

Hello, Apple? Barnes & Noble? Anybody?

So far, Amazon is the only e-book provider that offers a way to print directly to its e-readers. Barnes & Noble doesn't provide anything of the kind for the Nook, nor does Apple for iBooks or any other iPad app. Amazon is at a slight advantage here, since it's unlikely Apple is going to provide a "print to iBooks/iPad" option for PCs anyway.

This could be a selling point for third-party e-reader makers, especially if publishers reject DRM over the long haul. Third-party devices could compete on features, and one of those features could be tools to create and send content straight to the e-reader.

Paperless at Last?

For an individual user, namely me, the Send to Kindle app is fantastic. If I think back to my corporate days with Novell, this would have let me send all kinds of materials to my Kindle instead of printing them to read later on the plane. (Except the lack of a Linux version, which is a problem.)

Would widespread adoption of sending materials to e-books and tablets eradicate printing altogether? Of course not. But it could make printing much less necessary and desirable.

Are you using the Send to Kindle app? What's missing, and what would you like to see to make your office paperless?

The term, "vendor lock-in" strikes terror throughout the IT community. And yet in reality, many companies are pursuing strategies destined to increase their dependence on a limited number of vendors mostly driven by the ineffectiveness of IT to provide simple connectivity capabilities between various corporate applications.

By shrinking the number of vendors, IT is actually creating vendor lock-in. Instead, IT should be aligned with its business users as they seek to increase information access, both internally and externally, by promoting increased vendor participation - more expansion, inclusion and diversity of information sources. Restricting information flow is a flawed control tactic doomed to fail.

Guest author John Yapaola is CEO of Kapow Software. He has a successful track record of managing and growing high-tech startups. With Kapow Software, he has created the industry's leading provider of cloud, mobile, social and Big Data application integration solutions that drive enterprise innovation and transformation for companies like Audi, NetApp, Intel and Commerzbank, and dozens of federal agencies.

At a recent IT forum, attending CIOs were asked, "How many cloud offerings do you currently have in your organization?" The responses varied, but many already had more than a dozen, with more on the way. One of the CIO panelists noted, "We are now beginning to restrict the expansion of cloud offerings in the company. We slap their hands if they add any more."

That was a disheartening response. Policing their customers (the business users) is not a viable solution. How about being less CIOfficer and more CIOptimizer?

The Crowd-Sourced Browser Standard

Standards and controls are two words not often associated with innovation (unless you're channeling Steve Jobs). IT has traditionally looked to organized standards committees such as ISO, IEEE, SESC, ITU, ASME, SOA and ASTM to establish the rules of engagement. Of course, there is value in establishing guidelines and policies, but too often blind adherence and company-wide implementations of restrictive procedures trigger a sense of incompetence directed at IT organizations.

Consider crowdsourcing, which leverages the decentralization of tasks and the delegation of these tasks to a community of users. One of the most important and universally accepted user-interface standards ever established is the Web browser. Although the W3C and others are now involved, this powerful and transformational user experience was largely established via crowdsourcing, not some previously established standards body.

IT has a history littered with rigid adherence to standards bodies and their overly complex and convoluted dogma. Enterprise business users, on the other hand, act like a "crowd," seeking solutions via disruptive and transformational sources in real-time.

This is playing out due to the confluence of three noteworthy developments disrupting companies and IT organizations:

The cloud is a key agent of change for lines of business and IT. Enough said.

Applications (including mobile) are the next "land grab" by business users. Coupled with cloud access, users can now order an application directly from a vendor - creating a new data source - rather than ask IT for permission. Given modern application-development environments like GWT (Google Web Toolkit), new applications can be designed, developed and deployed in a fraction of the time and cost previously required. Combined with an impatient business-user community, an eruption in the number of enterprise applications and transactions is just on the horizon.

Consumer Application Integration

Business users want to automate access to information and data they do not control. Addressing that next business-user impasse is already underway: business users will drive application interactions via a self-service model.

The early stages of basic consumer integration have arrived, with companies like IFTTT.com (If This Then That) simplifying the process by helping users stitch together external websites (via their APIs) and perform routine application integrations. Is this "industrial strength" for enterprise consumption? Probably not just yet, but it's coming.
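To make the pattern concrete, here's a toy Python sketch of the "if this then that" idea: poll one web service and, when something new shows up, fire a webhook at another. Both URLs are hypothetical stand-ins; a real service like IFTTT also handles authentication, retries and error handling for you.

```python
# A toy sketch of the "if this then that" pattern: watch one web API and,
# when a trigger condition is met, post to another. Both URLs are
# hypothetical placeholders.
import json
import time
import urllib.request

TRIGGER_URL = "https://api.example.com/inbox/latest"   # hypothetical "this"
ACTION_URL = "https://hooks.example.com/notify"        # hypothetical "that"

def fetch_json(url):
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

def post_json(url, payload):
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)

def run(poll_seconds=60):
    last_seen = None
    while True:
        item = fetch_json(TRIGGER_URL)
        # Trigger: a new item has appeared since the last poll.
        if item.get("id") != last_seen:
            last_seen = item.get("id")
            # Action: notify another service via a simple webhook.
            post_json(ACTION_URL, {"text": f"New item: {item.get('title', '')}"})
        time.sleep(poll_seconds)
```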

IT can choose to ignore or restrict these trends, but enterprise workers are already beginning to use these sources to automate their work day. Increasing numbers of interactions, integrations and extractions will swell the number of transactions. You can expect that API restrictions will eventually be lifted and that application interactions will become routine for business users.

The blurring of traditional company firewalls is underway. Corporate data and information access will reside both inside and outside the corporation. Cloud infrastructures will serve the needs of the masses, but integrated applications are the keys to the kingdom. The choice for IT is whether to facilitate these changes or fight its business customers for control.

Amazon's primary claim to fame is online retail - beginning with books, remember? - and the company has been very effective at growing that side of its business. So effective, in fact, that it's easy to mistake the Amazon Web Services (AWS) cloud-based computing platform for little more than a way for Amazon to make use of excess computing capacity. That underestimates the resources Amazon is putting into AWS, and the effect the company is having on the future of computing.

If that assumption were ever true, it's now a long-obsolete concept. Amazon burned through its excess capacity very quickly. According to Andy Jassy, the guy who wrote the original business plan for AWS, "We've far exceeded the excess capacity of our internal system. That ship sailed 18 months ago." That interview ran in Wired in April 2008, which means that Amazon had churned through its excess capacity by the end of 2006.

How Excess Capacity Led to AWS

What its excess capacity really taught Amazon, says Bezos, wasn't (just) that the company needed to put that capacity to use. Instead, it taught the company that other companies have the same problem - which meant there was yet another market Amazon could tap into, one it was already working to solve for itself.

At Amazon we had developed unique software and services based on more than a decade of infrastructure work for the evolution of the Amazon E-Commerce Platform. This was dedicated software and operation procedures that drove excellent performance, reliability, operational quality and security all at very large scale. At the same time we had seen that offering programmatic access to the Amazon Catalog and other ecommerce services was driving tremendous unexpected innovation by a very large developer ecosystem. The thinking then developed that offering Amazon's expertise in ultra-scalable system software as primitive infrastructure building blocks delivered through a services interface could trigger whole new world of innovation as developers no longer needed to focus on buying, building and maintaining infrastructure.
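To see what "primitive infrastructure building blocks delivered through a services interface" looks like in practice, here's a minimal sketch using the boto3 Python SDK (a later, official AWS SDK - an assumption here, not something named in the article) to store and retrieve an object in S3. The bucket name is a placeholder, and credentials are assumed to come from the environment.

```python
# A minimal sketch of consuming one of those "primitive infrastructure building
# blocks" through a services interface: storing and retrieving an object in S3.
# Assumes the boto3 SDK and AWS credentials in the environment; the bucket name
# is a hypothetical placeholder.
import boto3

s3 = boto3.client("s3")
BUCKET = "example-report-archive"   # hypothetical bucket name

# Write a document without provisioning any storage hardware up front...
s3.put_object(Bucket=BUCKET, Key="reports/q1.txt",
              Body=b"Quarterly report goes here.")

# ...and read it back from anywhere through the same API surface.
obj = s3.get_object(Bucket=BUCKET, Key="reports/q1.txt")
print(obj["Body"].read().decode("utf-8"))
```

The point isn't the specific calls; it's that storage, compute and queuing become API calls rather than procurement projects.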

All Evidence to the Contrary

Even if Amazon had kicked off AWS just to handle excess capacity, it's clear that the service has far exceeded that role to become a big and important part of Amazon's business.

Consider just some of the developments and expansions for AWS in the past few months:

None of these really address excess capacity, but all require an investment of developer time on Amazon's part. If the company were looking only to let people tap into its excess capacity, it would not be spending quite so much time on developing new features.

Nor would the company need to be scaling out AWS quite so quickly. How quickly? According to a presentation from last year (PDF), Amazon adds "enough new capacity to support all of Amazon.com's global infrastructure through the company's first 5 years, when it was a $2.76B annual revenue enterprise" per day!

If anything, Amazon's getting to the point of running its retail operation in the excess capacity of AWS, not the other way around.

One publisher does not a trend make, but Macmillan imprint and science-fiction house Tor/Forge's decision to abandon DRM this July may be a sign of things to come. Tor/Forge is dropping DRM because its customers, and its authors, have been asking for DRM-free titles. The game isn't won yet, but it's a safe bet that Tor/Forge won't be the last to abandon Digital Rights Management for e-books and other publications.

The Department of Justice's suit against Apple, et al, over e-books has restarted the discussion about the usefulness of DRM versus its unintended consequences. Specifically, by embracing DRM, e-book publishers have unwittingly helped provide Amazon with far more power over the nascent e-book market than is healthy for anyone (except Amazon). Since Amazon's DRM works only with Kindle readers, all of the DRM-encumbered e-books purchased through Amazon effectively lock readers into the Kindle platform.

Tor/Forge is primarily a publisher of science fiction and fantasy titles. Because it serves a slightly geekier and DRM-averse audience than many other publishers, it makes sense for Tor/Forge to be one of the first anti-DRM publishers out of the gate. As the company noted in its blog, "[DRM] prevents [readers] from using legitimately-purchased ebooks in perfectly legal ways, like moving them from one kind of e-reader to another.”

But Tor/Forge is one of the first, if not the first, to have embraced DRM and thought better of it later. And it's part of one of the Big Six book publishers.

The decision comes from the very top of the publishing chain. Tor/Forge is under the Macmillan umbrella, and as Charlie Stross writes in his thoughts on the move, "the final decision to drop DRM on ebooks from Tor/Forge was taken by John Sargent, CEO of Macmillan, who ultimately has to account for his actions to the shareholders."

What About Infringement?

Going DRM-free can help solve the problem of lock-in to a single provider, but what about infringement? DRM's effect on piracy may be a red herring. As publishers already know, DRM isn't really that effective at stopping e-books from showing up on torrent networks, etc. The DRM on e-books can be cracked, easily. It's a pain in the posterior for consumers, but less than a speed bump for someone intent on distributing e-books.

One Tor author summed up the case against DRM this way: "As an author, I haven't seen any particular advantage to DRM-laden eBooks; DRM hasn't stopped my books from being out there on the dark side of the Internet. Meanwhile, the people who do spend money to support me and my writing have been penalized for playing by the rules. The books of mine they have bought have been chained to a single eReader, which means if that eReader becomes obsolete or the retailer goes under (or otherwise arbitrarily changes their user agreement), my readers risk losing the works of mine they've bought. I don't like that. So the idea that my readers will, after July, 'buy once, keep anywhere,' makes me happy. I had been planning to ask Tor whether or not it would be feasible to offer my e-books without DRM; now I won't have to have that conversation."

Salvation for Independent Bookstores?

E-books have also been less than a blessing for independent bookstores. Here in St. Louis, the indie stores have formed an alliance to try to bolster sales in the face of the e-book trend and competition from Amazon. But the writing is on the wall - customers want e-books more, and real books less. That's a problem for indies right now.

If publishers abandon DRM, though, e-books might actually benefit independent bookstores. Stross writes, "Right now, there is a window of opportunity for smaller resellers: Amazon's inclusion of masses of self-published material in the Kindle store has made it impossible for heavy consumers to browse it effectively. Smaller bookstores may be able to gain a strategic edge by curating their content, providing quality control on reviews, and other tactics we can't predict at this time."

That's not a sure thing - as Stross admits - but it gives indies a better shot at competing with the Big Three (Apple, Amazon and Barnes & Noble) than they have now.

The Big Question

The big question is whether Apple and Barnes & Noble are going to embrace DRM-free e-books. Amazon already allows publishers to go DRM-free if they wish.

Offering e-books in DRM-free formats may be a selling point, just as dropping DRM from digital audio was a few years ago. It took Apple quite a while to get on board, but it did eventually.

It's early, but the tea leaves seem to indicate that more and more e-book publishers are souring on DRM. It may take time for DRM to disappear, but it's got very little to recommend it. Let me know if you think e-book DRM has a future.

You can't work in the tech industry without suffering buzzwords and marketing speak. But for anyone with an interest in big data, the term "datasexual" goes well past buzzword territory to wandering in the weeds of silliness.

The attempt to coin the term comes from Dominic Basulto, who wrote a piece on personal data called "Meet the Urban Datasexual." The title has two glaring problems. First, Basulto doesn't distinguish between urban, suburban or rural. There's no reason someone with an interest in the quantified self - the concept of self-knowledge through self-tracking - couldn't be a suburbanite or a farmer.

Not an Honest Description

That's a minor quibble, though. The big gripe is the attempt to coin the term "datasexual." A portmanteau can be useful, but not when it's dishonest. "Quantified self" might not roll off the tongue, but it's plenty descriptive of people who are interested in tracking their lives. You could call it "egodata" to make it a bit snappier, as that's more descriptive and a play on "geo data." But quantified self, or QS, does the job nicely.

There's nothing necessarily sexual about QS. Yes, Basulto is attempting to piggyback on the term "metrosexual," but the quantified self is almost entirely unrelated.

Unfortunate terms aside, Basulto does have a point buried in the post, which is this: "Just as elements of the metrosexual movement eventually found their way into the fashion mainstream, the whole datasexual craze is starting to tip into the mainstream. All of us - not just the datasexuals of today - will soon be equipped with a breathtaking array of digital devices and sensors from 'cool' companies like Apple and Nike."

Personal Data Is Going Mainstream

The quantified-self movement is likely to continue its move into the mainstream. But Basulto is wrong that there's an "obsession" with recording "everything about their personal lives." The Placeme app he refers to is a bit more extreme than most folks are likely to want, but apps like Runkeeper are already finding adoption beyond hardcore data nerds and quantified-self enthusiasts. Our phones are already equipped with a "breathtaking array" of sensors, and if that's not enough, there's plenty more to be had - the Fitbit, for instance, comes to mind.

It's also worth noting that many of the folks adopting apps like Runkeeper probably aren't thinking in terms of data (and certainly not in terms of anything "sexual"). Data geeks are helping to build the apps that push data consumption into the mainstream, but the mainstream isn't getting into data per se - any more than mainstream acceptance of the Internet meant immersing oneself in HTTP.

But use and consumption of personal data is going mainstream, and it's a major opportunity for companies that know how to collect, analyze, interpret and present useful data in a meaningful way. It's also going to be a challenge for mainstream users who, so far, haven't spent much time thinking about data and privacy. But that's a different topic for a different day.