It’s hard to imagine anything more perfect than Slate’s decision to lay off its respected media critic Jack Shafer. Not perfect in a good way — I count myself amongst Shafer’s legions of fans — but perfect in the way that Alanis Morissette not understanding the meaning of ‘Ironic’ is perfect, or the way that a safety inspector falling out of a tenth-story window would be perfect.

“I tolllldddd yyoooouuu sooooooo…”

I mean, what better illustration could there be of online media’s woes than an ezine laying off its media critic because the economics of web content don’t support a writer of his stature and specialism? At least Shafer can take some satisfaction in the fact that his departure is in and of itself an absolutely perfect piece of media criticism: Jack Shafer as both medium and message.

Slate’s admission that, even with a minuscule staff of 60 and the financial “might” of the Washington Post Company, it can’t make money from online content is also perfect. The perfect opportunity, that is, to acknowledge once and for all that the grand experiment in free online content has failed.

As Forbes’ Jeff Bercovici argues:

“General interest is a pretty good concept for a physical product that gets delivered to your doorstep, where getting all those disparate sections bundled together makes sense. It's not such a great concept on the web. The web hates artificial bundles. If you're going to do a general-interest news product online, you have to be prepared to do it on the cheap, as Matt Drudge and Arianna Huffington do (or at least used to do, in the latter case). Conversely, if you want to put out an expensively produced, professionally-edited product, it's better to stick to a niche, preferably one with a demographic that advertisers want to reach, like technology or business.”

…and he’s right. Up to a point. In fact, many niche publications are feeling the pinch too. It wasn’t long ago that Bercovici’s own employer, Forbes, abandoned its status as a professionally written and edited financial publication and decided to style itself as a kind of HuffPost for finance, embracing cheap guest-posters regardless of what conflicts they might churn up. Meanwhile, it would be petty of me to name those of our rivals in the technology blogosphere who have embraced bullshit slideshows and top ten lists over their more costly cousins: actual fucking reporting. (So far TechCrunch’s acquisition by AOL hasn’t led to our editorial arm having to choose between God and Mammon, but a cynic might say it’s only a matter of time until we too are tested.)

The blunt truth is, online advertising is a numbers game. And, even on niche sites, the number of salable page impressions required just to break even is huge. There are just too many pages of content being produced for advertising to remain a viable long-term business model. The New York Times can’t make money online, the Guardian can’t, Slate can’t, and Salon barely can. As Bercovici points out, even Slate’s attempts to launch verticals aimed at business readers, and women, were relative failures.

There are maybe two general-interest publications which can reasonably claim to have cracked the free content code: The Daily Mail and the Huffington Post. But in truth the only way those publications can afford to pay their growing armies of real, grown-up editors is by selling millions of pages of animal stories and celebrity fluff, churned out by underpaid hacks. (One day I want to produce a HuffPost slideshow of the best Daily Mail celebrity slideshows — it’ll clean up.)

AND YET. It’s easy to wail and moan about how the Internet is killing journalism, but that dystopian future only exists if we assume that the Internet is the only place that editorial content can possibly live. In fact, over the next five years or so what we’re likely to see is a bifurcation in digital content.

On one side, those content producers who choose to stay on the free-and-open web will be forced into making more and more ethically dubious decisions to stay profitable. Out will go professional writers and church-and-state separation of content and commerce; in will come more Groupon-style “reader offers”, affiliate links behind every keyword and an Idiocracy of dumber and dumber linkbait. Ten ways to make extra income with Lady Gaga Sony Porn — Kittens!

But on the other side? The fact that the Economist’s North American circulation has just reached its highest ever level tells us that the audience for quality content isn’t going away. It also suggests that those of us who prefer our content unsullied by payola, and who appreciate the beauty of a well-crafted headline, are turning our backs on the web. Increasingly the best writing and reporting is to be found in books and Kindle Singles, where readers are happy to pay directly for high-quality information and entertainment. As web content continues to get dumber, and more ethically compromised, the market for high-quality content away from the web will continue to grow.

Of course, it would be idiotic to suggest that publishers should rush back to print, in the hope of emulating the Economist. But nor should they be wasting money publishing their content on the web. As any wildly profitable app developer will tell you, the web is a great marketing tool, but it’s on dedicated portable devices that the real money, and attention spans, are found. A smart publisher looking to launch a new magazine today — focusing on business, technology, or just about anything else — would be wise to develop it specifically for e-readers rather than wasting more money chasing the dumb eyeballs of the web. Oh, and they should hire Jack Shafer. He’s brilliant.

As far as Verizon devices are concerned, the Droid Bionic probably holds the crown for “Most Anticipated Handset,” but newly leaked shots of the HTC Vigor may steal a bit of that spotlight.

Rumored to be the newest addition to Verizon’s LTE lineup, the Vigor sports a name that’s downright ancient in comparison: it was first spotted in a trademark application from 2009.

The Vigor made waves earlier this month when an ersatz version was spotted on a Dutch retailer’s website, but the real deal sports a less angular body that matches up nicely with HTC and Verizon’s design language. Specifically, with its funky textured back plate and red camera trim, the Vigor could easily pass for another entry in Verizon’s Incredible series.

The four capacitive buttons on the Vigor’s face mean it won’t be one of the first handsets to run Ice Cream Sandwich, but its rumored specs will dazzle many a phone geek regardless. Under the hood, the Vigor reportedly has a 1.5 GHz dual-core processor, 1 GB of RAM, 16 GB of internal flash storage and Beats by Dre audio. A 4.3-inch HD display graces the front, and if it were on, HTC’s Sense UI would be running the show.

Note that the Vigor is largely free of branding at this point, leaving the claims of its LTE compatibility and Beats audio processing in question. The device is likely in the test phases now, which could explain the overall lack of flourish, but here’s hoping the rumors hold true. Without the Galaxy S II on board, this may end up being one of Verizon’s heavy-hitters come the holidays.

It was widely reported that Best Buy was sitting on over 200,000 TouchPads before HP enacted their drastic price cut, but the fire sale has come and gone, and that would normally be that. Instead, a notice in Best Buy’s Employee Toolkit system shows that their contentious relationship with the TouchPad may not be over just yet.

The image, sent to Droid Matters by a Best Buy insider, indicates that Best Buy stores will once again begin to receive TouchPad shipments. Due to the swarms of bargain-hunters last time round, employees are being instructed to stick to a ticket system and take down the information of the interested parties that come their way.

While it’s possible the notice has been pushed out just in time to make a big splash on the front cover of the Sunday circular, you shouldn’t hold your breath. Different areas tend to have different shipping schedules, but if this holds true, it’s more likely that the units will begin trickling back into stores during the middle of the week. At this point, it’s still unknown whether the notice only applies to some stores or the whole lot of them, but thanks to a bit of corporate foresight, your nearest store may soon have a new recording in their phone system that could clear up the specifics.

It’s a bit of a surprise, to be sure: 16GB TouchPads are selling for nearly double the going rate on eBay, a testament to the fact that people have all but given up on more traditional sales outlets.

HP’s own site admits that they are only “temporarily” out of inventory, and that coupled with news of a major retailer suddenly receiving stock gives me pause: how many of these things does HP have left? And more importantly for some, how many are shipping with Android inexplicably preloaded? The answers, it would seem, may come later this week.

About a month ago, Tom Anderson (or Myspace Tom, if you prefer) wrote a post on his new favorite social network, Google+, offering a few bits of advice for Twitter. While many of us enjoy a good Twittering now and again, Anderson pointed out that there are a few simple features Twitter might consider if it wants to boost the overall quality of its user experience. The main thrust was that the social experience of Twitter might be improved were the company to add a “discussion” or chat function that would, in Tom’s conception, give the viewer an input box by which to leave a comment and easily discuss tweets without flooding followers’ streams with one part of an on-going conversation.

Well, Tom might just be interested in a new startup launching today, called Joint. OK, it’s not an exact replica of the Myspace founder’s idea, but it’s attacking the same pain point long discussed by Twitter users: the platform is badly in need of a better way to facilitate realtime, private, and longer-form conversations. Of course, there’s some disagreement among users over whether Twitter should be the one offering this feature, or whether it should stay simple, just as it is.

Joint Founder Ethan Gahng says (and I tend to agree) that Twitter will be best served by staying simple in terms of its UI, and instead allowing third party startups and developers to be the ones to add further social and chat features from the outside. (And Twitter’s actions over the last few years seem to largely be in line with this philosophy.)

To work towards this goal, Joint essentially turns any Twitter hashtag into an IRC (Internet Relay Chat)-like chat room, which is integrated with a realtime hashtag stream from Twitter. Check it out below. This combo allows users to participate in a number of different social interactions, including a front-and-center realtime group chat feature, which populates with a live hashtag feed in the right sidebar.

Users can then pull the hashtags directly into the group chat, or invite the people who wrote the tweets into the group chat, right from the chat room, or simply hang out and enjoy synchronous chat, watching as the tweet stream populates. Compared to Hootsuite, Tweetdeck and other third-party apps that let you track hashtags, being able to watch someone tweet from “outside” and immediately bring them in to chat is a subtle distinction, but one that makes a big difference.

If you’re trying to engage in a conversation with someone on Twitter that goes beyond a few “@ replies”, you’re either forced to DM or take the conversation elsewhere. Joint allows users to easily join a group chat, as well as discuss notable or popular hashtags. For instance, of late “#irene” has become a much-used hashtag, as Hurricane Irene is poised to hit the East Coast. Joint could become a very useful resource for people looking to easily congregate and discuss ongoing situations like hurricanes, protests, or events, live, from any location.

Another cool aspect of Joint’s platform is that it’s meant to function as an off-the-record conversation medium for Twitter users, meaning that if I’m having a conversation with someone and a third person joins the chat room, they won’t be able to see the ongoing conversation. This, Gahng says, is intended to make Joint group chat more reflective of interaction in the real world.
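Joint hasn’t published how any of this is built, but the two behaviors described above (a chat room keyed to a hashtag, fed by the live tweet stream, where a third person joining mid-conversation can’t see what came before) can be sketched in miniature. Everything below, from the class name to the logical clock, is a hypothetical model of mine rather than Joint’s actual code:

```python
import itertools

class HashtagRoom:
    """Toy model of a Joint-style chat room keyed to a single hashtag."""

    _clock = itertools.count()  # logical clock, orders joins and messages

    def __init__(self, hashtag):
        self.hashtag = hashtag
        self.messages = []   # list of (tick, author, text)
        self.joined_at = {}  # member -> tick at which they joined

    def join(self, member):
        # Remember when each member arrived; this powers the
        # off-the-record behavior below.
        self.joined_at[member] = next(self._clock)

    def post(self, author, text):
        self.messages.append((next(self._clock), author, text))

    def ingest_tweet(self, author, text):
        # Only tweets carrying this room's hashtag reach the sidebar feed.
        if self.hashtag in text:
            self.post(author, text)

    def visible_to(self, member):
        # A member sees only messages posted after they joined, so a
        # third person entering mid-conversation can't scroll back.
        since = self.joined_at.get(member)
        if since is None:
            return []
        return [(author, text) for tick, author, text in self.messages
                if tick > since]
```

Under this sketch, if one user is already chatting in a #irene room when a newcomer joins, the newcomer starts with an empty view and only sees messages posted from that point on, which is the “more reflective of interaction in the real world” behavior Gahng describes.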

As for Joint’s intended use cases, Gahng says that it’s easy to connect to other people on Twitter, but hard to actually get to know them. Using Joint, you can meet someone on Twitter that you want to play Starcraft with, without forcing the conversation on followers who may not want to join in on the fun. Grouping conversations like that is why open-standards warrior Chris Messina proposed the hashtag in the first place; of course, not many people regularly follow hashtags in their day-to-day Twitter usage. Joint looks to change this by making it easy to search for different hashtags, discuss them, and follow them synchronously in realtime. For an example, check out the Starcraft channel here.

There’s also the fact that tweets with hashtags get archived and live forever on search engines, which makes many people uncomfortable about having public conversations (about more private issues, especially) on Twitter. We’ve all had to delete a tweet or two, and often too late. By giving Twitter users that added bit of social flexibility, Joint hopes to give itself a leg up on other third-party Twitter apps.

Lastly, beyond simply being able to follow a hashtag group, Joint also informs a user when there’s a new user in their chat room, offers search descriptions, and gives users the ability to browse the main directory, or even start their own hashtag channel.

Joint solves a major pain point experienced often by Twitter users, and from my experience in the chat rooms and poking around on the site, the UI is straightforward, and chat is fast and easy to use. The three-person Joint team has been working on this since January, and the startup is bootstrapped at this point, but if the platform can scale and continue to function in realtime without glitches, this seems like something that can definitely have legs.

Joint and its team aren’t affiliated with Twitter in any way, but I wouldn’t be surprised if the social network comes knocking at their door at some point down the road.

In a blog post this evening, Facebook — which is by far the biggest photo site on the web — announced that it’s launching a new photo viewer that presents images that are 960 pixels wide, as opposed to the 720 pixels they’ve been since March 2010 (they were 620 pixels before that). The viewer itself is also getting an update that replaces the current black lightbox with an opaque white one, which it says puts more of the focus on the photo itself. Facebook also says that photos now load twice as fast, though it doesn’t get into how it’s serving the content so much faster.

Facebook’s last major Photos update came out in September 2010, when it introduced the black lightbox-based photo viewer and added support for photos as large as 2048 pixels in width (it doesn’t actually display these in the viewer, but you can download them at this size). That update finally made Facebook a viable way to share high-quality photos (before then, you could only download the low-res versions).

This has been a big week for Facebook. On Tuesday it announced a slew of tweaks largely related to its privacy controls and photo tagging — you’ll soon be able to approve photos before they show up on your profile, something users have been requesting for years now. It also drastically changed the way Facebook Places works, placing less emphasis on check-ins. And earlier today it confirmed that it’s killing off its Groupon-like daily deals just four months after launching them, though location-based deals are still around.

The recording industry doesn’t have the most respectable history when it comes to lawsuits. Between asking for millions for trivial acts of piracy, and asking potentially for trillions in more serious cases, they’ve shown that they’re not only completely disconnected from reality, but totally unheeding of the actual effects of their litigation. So it’s not surprising to see them tilting at yet another windmill.

Today’s target is TubeFire, a site that should be familiar to you, at least in principle. It allows you to download and convert YouTube videos to a format more easily watched offline (FLV files can be tricky). You give it the URL, it churns for a bit, and then you can download the video in MP4 or another format. Clearly this re-containering of free content is a grave threat to the recording industry, and must be stopped at all costs. So 25 of the world’s largest labels have gotten together and sued them.

TubeFire’s services are temporarily suspended pending examination of the complaint (in its place is a note apologizing and briefly describing the situation). And to be honest, the complaint is probably valid: technically, TubeFire was modifying and redistributing copyrighted material, at least so it appears on the face of it. The site is owned by Japanese media company MusicGate, and the suit was filed in Tokyo District Court. How international content protection laws will play out is beyond the purview of this article, but an international consortium of content providers is likely to make its effect felt regardless of jurisdiction.

The funny thing is that, as it so often is with these clowns, they’re not only barking up the wrong tree for a number of reasons, but they don’t seem to understand that they’re in a whole forest of wrong trees.

TubeFire is an ace away from being a perfectly legal service. To begin with, it’s plainly providing a useful service that’s only potentially a danger to copyright. Re-encoding videos to enhance portability isn’t criminal. YouTube is a fundamentally online service, and this is a natural extension of it, the way image hosts and URL shorteners have served Twitter for so long. Users want to watch these videos, which for all intents and purposes are being given away for free, in places other than YouTube, for a number of perfectly legitimate reasons: bandwidth caps, coverage issues, traveling, and yes, sharing.

Next, local copies of the videos in question may already be present on the user’s computer. By simply viewing the video, it’s possible they have duplicated the whole thing in RAM or a temp folder. This writing, rewriting, renaming, and so on must count as modifying copyrighted data, mustn’t it? If not, then TubeFire isn’t much different. The video is already being encoded multiple times, transmitted as packet data, decoded and translated to display data. One more encode in there doesn’t materially affect the product.

Furthermore, is it really TubeFire doing this? Just as it is not BitTorrent, Inc. that pirates movies and music, TubeFire should not be held accountable for the actions of its users. Terrorists used Google Maps to plan their strikes. Stalkers use Facebook to find victims. TubeFire is a simple in-out operation that corrects a minor problem with videos that users already have access to.

And let us not forget that TubeFire is one of perhaps hundreds of tools used for this purpose. They must not have looked very hard for them. Let me help, guys. I have one myself, built into my browser! I’ve buried it in the menu so it doesn’t clutter the screen, but look at how easy it is for me to grab one of many copies of a video:

Update: Mike reminds me that we in fact had our own tool for several years. YouTube sent us a cease and desist letter and eventually disabled the tool, but no one shook us down for millions in damages. It was a TOS thing, not a copyright thing.

Many of these are easily accessible just by changing the URL slightly or other simple methods. Some sites and tools strip the audio out, another extremely easy process — and one replicable, of course, by loading the YouTube video and closing your eyes.

These companies want to have their cake and allow no one to eat it at all. They don’t seem to understand that putting content on a service like YouTube comes at a price. They are making the content publicly available, free to all. They are literally giving away the content — and then they get mad when someone takes it!

The labels are seeking what appears to be statutory lost-income damages of $300 per video for an estimated 10,000 videos. That adds up to $3 million — the amount the labels would have earned if TubeFire had licensed each video. Now there are two objections here. Why is it a license and not royalties? If anything, TubeFire “rebroadcasted” the content, more like a relay station than anything, and a standard royalty fee of however many pennies or yen seems like it might be more applicable. I’m dubious on that point, however. It’s also unclear whether TubeFire knew they should have been licensing. The service does not require that information; it takes an identifier code, downloads the associated FLV file, and repackages it. Were the labels paying their artists when a file was watched, downloaded, or only when purchased? And how do they define “download”?
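For scale, it’s worth running the two damage theories side by side. The $300-per-video figure and the 10,000-video estimate are from the suit as reported; the per-view royalty rate and view count below are purely illustrative assumptions of mine, not numbers from the filing:

```python
# The labels' actual ask: a flat per-video "license" figure.
videos = 10_000
license_per_video = 300                      # dollars, as reported
license_total = videos * license_per_video   # the $3 million headline number

# The alternative floated above: a relay-station-style royalty per view.
# The 2-cent rate and 1,000 views per video are illustrative guesses.
royalty_cents_per_view = 2
views_per_video = 1_000
royalty_total = videos * views_per_video * royalty_cents_per_view // 100

print(license_total)  # 3000000
print(royalty_total)  # 200000
```

Even with a generous view count, the royalty theory under these assumptions lands an order of magnitude below the license theory, which may be exactly why the labels didn’t frame it that way.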

If the labels are in fact successful at bankrupting and shutting down TubeFire, I must warn them that the effect will be utterly nil. Any user who wants to download a video from YouTube will do so. There will be no reduction in this practice. The site is easily cloned, as the great number of similar sites shows (I can’t even remember which one is the original, if there is one). And like most of their legal actions, this one will bring down a rain of bad PR; if anything, piracy will increase. Here’s where I would put the hydra metaphor if this article weren’t already over a thousand words long.

Why, I wonder, did they not think harder about this and try something more effective and interesting? Maybe for music videos, the YouTube version is only half the song, and then there’s a link to the artist’s site, where there’s a more secure player and various buy and share links. Or release 10-second snippets on YouTube leading up to the actual release elsewhere. Or just accept that when you let the cat out of the bag, you’re unlikely to get it back in. Piracy is when people steal things. Piracy is not people taking the content you gave them and watching it somewhere else. And TubeFire is nothing but a simple shortcut for actions users would be able to carry out anyway. Unfortunately, this distinction requires a judge capable of comprehending tech issues like this, and those judges are in short supply these days.

I know the music and movie associations are famously impervious to reason, but this is beyond stupid.

There’s a simple fundamental reason why Grand Theft Auto exploded into a phenomenon. Everyone has criminal tendencies sometimes. And virtually indulging them is a hell of a lot better than actually indulging them and dealing with the moral consequences — or the physical consequences. Like prison.

But what if you could make the Grand Theft Auto concept even more immersive by tying it to the real world? That’s what Life Is Crime is all about.

The new mobile game by Red Robot Labs — a startup founded by Mike Ouye, Pete Hawley, and John Harris, former executives at Playdom, EA and SCEE — allows you to put a life of crime onto your phone. It’s a location-based game launching today for Android devices that’s likely to be highly addictive.

Think of it as Foursquare meets Grand Theft Auto meets Spymaster (remember Spymaster?) meets Gowalla — well, the old Gowalla, before it recently announced it was killing off the virtual goods element. The point is to go around your city and battle others to control properties. The point isn’t to “check-in”, it’s to attack other players with everything you’ve got in order to take over a city.

“The social utility guys have taught people how to check-in, but it’s not a real deep gaming experience,” Ouye says. “We’re going after location gaming. It’s about discovery of new places while playing a game,” he continues.

Life Is Crime uses real maps that are custom-tailored by the Red Robot Labs team to include virtual representations of key landmarks in a city. Right now, Seattle (where Red Robot Labs is unveiling the game at PAX today) is built out. Soon, San Francisco and other cities across the U.S. will be too. These maps incentivize people to fight over the Golden Gate Bridge, for example.

But any location is fair game. The team added the TechCrunch office, for example.

The fighting nature of the game is pretty straightforward. You find someone you want to fight and it becomes a battle backed by your weapons and stats. If you have a higher reputation score than your opponent, you’re likely to take them down in a fight. But maybe they have a better weapon than you to even that out a bit.

At first, the game will mainly be a single-player experience. But down the line, the Red Robot guys hope people form virtual gangs to battle other gangs for location supremacy. One idea the team has is to have Android vs. iPhone teams when the iPhone version launches later this fall. Maybe Jason and I will play it on OMG/JK.

At one point, the Red Robot team got about 200 Googlers playing it at the Googleplex, we’re told.

Eventually, as gangs form within the game, there will be different levels individual users can rise to within the gang.

Another element of the game is to pick up and drop off virtual goods with other users — both sides are rewarded in the game for this action. There are around 200 items within the game right now, and a lot of customizations for users.

More broadly, Life Is Crime is just step one of the location-based gaming platform that Red Robot Labs hopes to build. Their intention is to have three games on the platform this year — two built by them, and one by a third-party.

“Location games are wide open right now,” Ouye says. “And we’re going after it, because they’re really sticky,” he continues.

“We’re competing for the 30 seconds or 1 minute when you’re in line waiting. Do you want to commit a virtual crime in that span, or do you want to check in?”

After quietly announcing they were killing off their nascent Deals product this afternoon, Facebook caused some confusion. You see, with the decision to kill off Facebook Places earlier in the week, everyone wondered what it meant for the location-based deals they launched alongside it. Those would remain alive, Facebook said at the time. But does today’s execution change anything?

No, says Facebook. Daily Deals are separate from Check-in Deals. The Check-in Deals will work a bit differently with the end of Places, but the company will continue to support and enhance that product. Daily Deals are dead — and my email account thanks them for that.

Facebook’s statement on the matter:

After testing Deals for four months, we've decided to end our Deals product in the coming weeks. We think there is a lot of power in a social approach to driving people into local businesses. We remain committed to building products to help local businesses connect with people, like Ads, Pages, Sponsored Stories, and Check-in Deals. We've learned a lot from our test and we'll continue to evaluate how to best serve local businesses.

In more violent terms that may be easier to understand: they’re killing off their Groupon-killer, but keeping half of their Foursquare-killer while killing off the other half of their Foursquare-killer.

Below, a reminder of what the still-alive Check-in Deals will look like on the Facebook iPhone app:

Much has been made of the supposed decline in the number of cable TV subscribers. But not everyone agrees that mass cord-cutting is reshaping the industry. Indeed, according to Kyle Dixon, Time Warner's VP of Public Policy, we are seeing the "opposite" of cord-cutting, with Time Warner seeing no "significant decrease" for its paid content.

I interviewed Dixon earlier this week at the Technology Policy Institute's Aspen Forum, where he spoke on a panel about the economic implications of online video. What Dixon stressed to me is that, for all the online videos of what he described as "kittens flushing toilets", consumers still really want high-quality news and entertainment content from networks like HBO. And thus, while the Internet is obviously changing our viewing habits, it has yet to revolutionize the television industry. So is Dixon right – is cord-cutting an illusion?

You might remember Rob Spence, known online as the Eyeborg for his project to create a working bionic eye. We wrote about him before, and interviewed him a while back, but the project has advanced to the point where even a seasoned tech blogger is left speechless with amazement.

Spence has worked with a team of engineers to adapt an endoscope into a working in-socket video camera. It’s turned on by waving a magnet near it, at which point it will begin transmitting a wireless video signal to a handheld LCD viewer. Absolutely incredible.

Watch the video from Sky News below, but be warned that it is slightly graphic. If you can’t handle someone installing and removing an artificial eye, consider this your warning.

Just astonishing that this is even possible. But really, this is more of a general achievement in miniaturization, not bionics. Endoscopic cameras with wireless transmitters are now commonplace; the enclosure and ergonomics of the device would be the hard part of this build. What’s yet to be accomplished with an artificial eye is hooking it up effectively to the visual cortex, and that is still years away from being practical — at least, for producing any kind of detail. Existing cortical microelectrode arrays just don’t have the density required, and as a result produce something only loosely definable as an image.

The timing of this new info is part of a media push for the new Deus Ex game (of GameStop infamy), in which cybernetics and prosthetics figure prominently — which doesn’t diminish the wonder of the thing, in my opinion. They also produced a short documentary about prosthetics and research in that field that’s worth a watch as well. It’s a very exciting field and the best bit is that they’re creating things that truly improve people’s lives. A prosthetic eye is a long way off, but it’s people with passion and dedication, like Spence and his team, who drive innovation, regardless of how far off the “final” product might be.

Bad news for anyone who was looking to rent the latest episode of Top Gear from iTunes, as Apple has quickly and quietly removed their 99¢ television rental option today.

The functionality has disappeared from both the Apple TV’s interface and the iTunes store proper, signalling a drastic shift in Apple’s pricing policy. Individual episodes of a series can still be bought as usual, and movie rentals still cost the same going rates, so not every iTunes customer will be weeping over the loss.

AppleInsider has also found that support documents pertaining to iTunes episode rentals were similarly pulled, although cached versions can still be found for those who don’t mind a little digging.

In a perfect world, this would be a not-so-subtle signal that Apple’s looking at different ways to handle television rentals. Apple’s big push into cross-device music and app sharing with iCloud could carry over, and rentals could reappear in a new form (and maybe at a new price point), but with the ability to be pulled onto any iDevice for the same 48-hour period.

Alas, it’s also completely within reason that factors like mounting studio strife forced Apple to axe the rental service. Major studios have scoffed at the low price tag for iTunes episode rentals, with players like NBC Universal’s Jeff Zucker stating that renting episodes for 99¢ “devalues” their content. Rentals, to be fair, weren’t terribly well-priced for people who have cable television, but here’s hoping they live on in a new shape.

Ben Elowitz (@elowitz) is co-founder and CEO of media company Wetpaint, and author of the Digital Quarters blog about the future of digital media. Prior to Wetpaint, Elowitz co-founded Blue Nile (NILE). He is an angel investor in media and e-commerce companies.

Next year, search advertising will be a $15 billion market in the U.S. alone, growing by 14 percent, according to eMarketer. And, if Facebook can capture half the share of that market that Google has today, it could easily add an extra $25 billion or even far more to its value.
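For what it's worth, the arithmetic behind that $25 billion figure is easy to reproduce. The sketch below uses the $15 billion market size quoted above; Google's rough market share and the revenue multiple are my own illustrative assumptions, not eMarketer numbers:

```python
# Back-of-the-envelope valuation sketch. The $15B market size comes
# from the eMarketer figure above; Google's ~65% share and the ~5x
# revenue multiple are illustrative assumptions, not sourced figures.
us_search_market = 15e9                    # projected U.S. search ad spend
assumed_google_share = 0.65                # assumption: Google's rough share
facebook_share = assumed_google_share / 2  # "half the share Google has"

facebook_revenue = us_search_market * facebook_share
revenue_multiple = 5                       # assumption: value placed on revenue
added_value = facebook_revenue * revenue_multiple

print(f"Hypothetical revenue: ${facebook_revenue / 1e9:.1f}B")
print(f"Hypothetical added value: ${added_value / 1e9:.1f}B")
```

Tweak the assumed share or multiple and the headline number moves accordingly; the point is the order of magnitude, not the precision.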

For most any CEO who could have even a modest chance of succeeding at it, that payoff would be reason enough to take a serious look at entering the search category. And yet, while I'm sure he wouldn't scoff at the extra revenues, profits, or valuation, I suspect that Mark Zuckerberg finds something else far more motivating than just increasing the financial value of his company.

And that's what will propel him next year to make a completely disruptive entry into the search category.

So if it's not for the financial value, then why am I so certain that Mark Z. will make a play for Google's home turf?

It's because it's so irresistibly good for his users. And that's the most important principle that seems to guide his product development.

The Five Reasons For Facebook To Enter Search

With that in mind, here are the five specific reasons why Facebook should enter search next year:

1) To make Facebook the ultimate home page. Consumers make Facebook their home base. Half log in every day; and users come to Facebook 70 percent more times per day than even to Google. They stay twice as long as even users of Yahoo's vast network of email, content, and more. Facebook has become the Connected Web's de facto operating system. But right now, its "start button" is limited to what other people put in your newsfeed. Part of being home base is being a launching pad to go anywhere you want. So Zuckerberg will need to give users a great connection to the rest of the Web – whatever their intent.

2) To fix a broken feature. Facebook has a search feature today (powered by Bing); and a few people already use it for Web-wide search, even though it isn't very good. It needs significant upgrading, and Zuckerberg knows it. Having a feature this important be this incomplete creates an unacceptable user experience. It must be fixed.

3) To improve people's life online. Facebook has an enormous data set that it can use to deliver better search results than anyone on the planet. Facebook can see everything that Google can see in terms of pages and links, but with a whole extra dimension of human connection that is impenetrable to Google. Facebook knows what your friends like, and what people like you like. And it knows the difference between real interest and spam. Translating that knowledge into great results will improve online life for his users.

4) To fully connect the world. More than anything, Zuckerberg and his company's DNA are all about providing services to connect users to each other and, increasingly, to the world at large. Serendipity and sharing aren't enough: sometimes people know what they want to find. Facebook must have a search feature to fully enable connection.

5) To add to his immense data set. Search will not only help users; its users will help Facebook. Specifically, it will provide Facebook with even more data about what people want so Facebook can further personalize itself for everyone. Go ahead and cue the creepy privacy music, but remember that so far most users have been happy to make a privacy tradeoff to get valuable personalized service.

With Facebook Connect, Open Graph, and Like buttons, Facebook has already shown its vision to fully connect to the rest of the Web. The next step is to help people better access it.

Facebook: The Social Operating System For Connected Lives

Facebook began as a social application, but it's now in the process of becoming a Social Operating System for the Web at large. Offering world-class search is the next step in its evolution as that "Social OS." The Web is now organized around connected people, not documents – and Facebook is the OS that links those people together.

Once fully connected, can you imagine how Zuckerberg must think about a Web all wired-in through Facebook's central hub? He'd know the time spent on every page; the usefulness of every link; the patterns of every user. He'd have a real-time system that provides feedback on every recommendation. You know what's cooler than a billion connections on the Web? How about a quadrillion!

The value of that data will be immense in making recommendations to users, serving advertisers, refining search itself, and enabling next-generation social applications. It will give Facebook a competitive advantage over every other Internet company in building a map of where the gold is buried – in the form of the content each individual user wants – among the trillions of pages on the Web. But more importantly, it will allow Zuckerberg to serve his users.

"Social Search" Is More Than Just Links From Your Friends

The idea of a socially-powered search is not brand new for Facebook. Bing and Blekko have both incorporated features that bring your friends' Facebook content into the search results. And while that is one modest way to improve the search, its impact pales in comparison to the full potential of what Facebook can do to help you by fully exploiting its social data set: It can individualize search results just for you, by using not only data about you and your friends, but by using the full dataset of people you haven't even met yet.

Let's look at it competitively. Google and Bing have, with limited exceptions, held themselves to the standard that the results should be the same for everyone because they work in an anonymous environment. A friend from Microsoft tells me that Bing has a rule that, with the exception of bucket tests, the top ranked result must be the same for everyone. This rule, he says, was copied from Google – where it fits well with Google's increasing positioning of itself as the great defender of identity control, compared to Facebook's ethos where everything is public. But that differentiation hands Facebook an incredible opportunity: in the Facebook environment, it's not only accepted but expected that everything you do is customized for you alone.

Can you imagine the power of combining Amazon-like personalization with Facebook's deep dataset to offer better results?

Facebook Can Redefine Search in a Social World

That's why beyond just improving a search algorithm, Facebook's greater opportunity is in redefining the category. The last decade of Web use has been defined by Google's clean white splash page with a single query field, and the 10 blue links which follow. But just as that approach from Google displaced the prior generation's directory pages, it's time for a breakthrough experience. And Facebook is the natural player to provide it.

I'm sure the engineers at Facebook are already visualizing what search could be in a fully connected world. Searches could be proactive, prompted by items shared by friends, rather than awaiting a text field completion. Searches can favor brands and publications that you like, or your friends like. But most importantly, searches can be predicted based on people like you, people who are located where you are, or people with similar interests, profiles, and behaviors, without you ever even knowing them. All of these are ways that Facebook can fundamentally redefine search, thanks to its knowledge of each user's identity, interests, and behaviors.
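To make the friend-weighted idea concrete, here is a toy re-ranking sketch. Nothing here is Facebook's actual algorithm or API; the scoring formula, the weights, and the site names are all hypothetical:

```python
# Toy sketch of friend-weighted search re-ranking. The formula, weights,
# and data are hypothetical illustrations, not anything Facebook ships.
def rerank(results, friend_likes, similar_user_likes,
           w_friend=2.0, w_similar=0.5):
    """Boost each result's base relevance score with social signals."""
    def score(r):
        url, base = r["url"], r["base_score"]
        return (base
                + w_friend * friend_likes.get(url, 0)
                + w_similar * similar_user_likes.get(url, 0))
    return sorted(results, key=score, reverse=True)

results = [
    {"url": "a.example", "base_score": 1.0},
    {"url": "b.example", "base_score": 0.9},
]
# b.example has three friend likes, so it jumps ahead of a.example.
ranked = rerank(results, friend_likes={"b.example": 3},
                similar_user_likes={})
print([r["url"] for r in ranked])  # ['b.example', 'a.example']
```

The "people like you" signal would just be another term in the same sum, weighted lower than direct friends.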

How It Could Happen in 2012

But building a search engine that takes (a difference-making) advantage of the social graph takes lots of time and money, as does building a new operational infrastructure, Web crawler, and advertising engine to support it. And, even more significantly, this is one where Zuckerberg will need to get the privacy implications right from the start. Facebook is currently building its rep with major advertisers on its social network – and that's a great start, because that will provide a captive customer base to transition into its search engine right at launch.

A competitive search engine is one of the most ambitious projects you can imagine – the degree of difficulty is mind-boggling, and the cost is hundreds of millions or more. For Facebook to best Google, it would need to catch up in substantial ways before it could shoot ahead of the leader, even with its valuable dataset. But that's only an impossible challenge if it has to do it all alone.

And Facebook doesn't have to.

It already has an alliance with the #2 player in search, Microsoft. And – in the way of "the enemy of my enemy is my friend" – it has a common interest in outperforming Google. And Facebook and Microsoft have enough separation between their businesses that they could complement each other rather than compete. Indeed, Facebook's increasing strength in its advertising engine could be a huge lift to Bing's struggling monetization – offering hope of raising Bing's monetization toward Google's levels. The two truly are more valuable together, and it's no surprise that smart people have begun to speculate on a Bing-Facebook combination, a step beyond a partnership. Working with Bing for its search entry could save Facebook billions of dollars of initial R&D and speed its entry into the category by years – and by many dozens of engineers. And any agreement they'd sign would likely still give Facebook the option to create its own search engine down the road.

The Chilling Threat To Facebook's Enemy

Regardless of how Facebook structures its efforts – and with whatever degree of help it gets from Microsoft – it will be able to create a search capability that will be significantly different from anything we've ever seen. And it will shake the tectonic plates underneath Google's Mountain View headquarters, even as it vies to earn users' adoption with better, more personalized results.

Google will not perish in the digital earthquake without a fight, though. Its recent Google+ launch, for example, shows just how boldly Google intends to enter Facebook's home territory. That, of course, makes it even more imperative for Facebook to counter-invade by pushing into search.

Looking forward, it's clear that search and social won't always occupy separate spaces. Indeed, for consumers, over time, they will converge; and the blended (or, just as likely, reimagined) product that emerges will serve as a home base and a jumping-off point to everything that's important and relevant on the 21st century Web.

It's fascinating, and it's all about to unfold. In the meantime, while Zuckerberg quietly forges ahead, and readies Facebook's game-changing search entry, Eric Schmidt, Google's former CEO, is publicly lamenting lost opportunities to catch Facebook. The diverging fortunes of these two digitally defining companies could not be more apparent right now.

We’ve heard quite a bit about the Galaxy S II, which isn’t all that surprising seeing that it sold 3 million units in its first 55 days on the market. As people from other parts of the globe got to experience the wonder that is the GSII, we here in the States played the waiting game. But it’s so close I can almost taste the Gingerbread.

On August 29, Samsung will finally unveil the GSII’s U.S. iterations in the Big Apple for T-Mobile, Sprint, and AT&T. If you haven’t already heard, Verizon is holding off on the GSII. In the lead up to the event, an image showing all three little beasts posing for the camera was leaked to PocketNow.

They’re all a bit different in design, most notably T-Mobile’s Hercules. If what we’ve previously heard about the Hercules is true, T-Mobile’s Galaxy S II will sport a larger 4.5-inch Super AMOLED Plus display, as opposed to the original GSII’s 4.3-inch screen.

Of course, T-Mobile’s variant may not be called the Hercules. We actually don’t know what any of the carrier names will be, although we sure have heard quite a few: the Attain (AT&T), the Within (Sprint), the Function (Verizon), and even the Samsung Galaxy S II Epic 4G Touch (also Sprint?). What a nasty mouthful, right?

Either way, it doesn’t really matter what the phone’s called because it’ll be a hit no matter what. Just take a look at the specs: a dual-core 1.2GHz processor, Android 2.3.4 Gingerbread, TouchWiz 4.0, 8-megapixel rear camera (1080p video capture), 2-megapixel front-facing shooter, and a 4.3-inch 480×800 Super AMOLED Plus display.

Of course, things like screen size may be different from one carrier to the next (read: Hercules), but all in all those should be the specs we’re looking at. There’s also one minor change in the U.S. variants compared to the international version, which would be the loss of that snazzy little home button. Instead, the phones will sport the same four buttons we’ve grown used to on Android.

What better way to cleanse the palate than a quick tromp into a conceptual rabbit hole? 3D animation shop AatmaStudio has released a concept video showing what they imagine as the iPhone of the future, and… well… I’m ready to pre-order.

Now, just how much of this is actually feasible with current tech? None of it, really — but a good chunk of it is within the realm of plausibility if we consider said tech’s foreseeable evolution.

The Design: That design looks far thinner than the 8mm barrier that no one has really managed to crack yet (unless we’re counting those which tuck the thick bits into one lumped region, taper the rest, and then base measurements on the thinnest part of the profile — which is kind of cheating.) With that said, the thickest bits of most modern smartphones tend to be the radios and the camera sensors, and these are getting slimmer and slimmer every few months. Just two weeks ago, for example, OmniVision announced an 8-megapixel camera module that comes in at a build height of just 4.4mm.

The Keyboard: Projection keyboards have been done before (IBM patented them in 1992!), but never quite like this. Though they never really seemed to take off, the few projection keyboards that do exist are generally dedicated Bluetooth/USB accessories, as opposed to being integrated into the handset itself. Even as rather clunky, separate components, the projection was monochrome, red-laser stuff — nothing like the high-resolution, beautifully scaling board you see here. But these days, we’ve got itty-bitty pico projectors, and folks like Microsoft/PrimeSense dumping millions into IR-based motion tracking. Let those technologies continue to evolve, and we’re probably but a few years from something like the concept keyboard shown here.

The Holographic Projector: As for projecting video into thin air, without any sort of screen to reflect the light… that’s something that’ll probably be stuck in concept videos and the Star Wars Universe until further notice. Damn you, physics! It’s probably for the better, really: while interacting with a floating screen seems futuristic and fun, the absence of any sort of tactility would be a rather miserable user experience.

Steve Jobs and I went to the movies this weekend. It was the new Woody Allen one, the one where he remade Manhattan only in Paris. I think Steve likes these get togethers because he knows how right I always turn out to be. Me and the other 250 million people just like me.

Steve Jobs is like everybody’s big brother. He isn’t trying to do what’s right for us, he’s just doing the right thing. Sometimes it can come off arrogant, but so does my big brother when he says something in that particular way. The only thing he’s got extra is that couple of years of experience, the next two or three turns around the park that separate the men from the boys.

I don’t think it’s a coincidence that his period of greatest achievement came as he grappled with mortality. Before then, he was tilting at his own windmills, the missing years when he was exiled from his laboratory, banished to boarding school as his parents took over and ran the ship aground. But when he came back he had the gravitas necessary to not make it about himself but rather what he wanted to get done.

Before then, it was about fashioning the future out of the rough clay of an emerging creative culture. Drafting on a miraculous culture that spawned Dylan and the Beatles, Jobs and Wozniak were astronauts testing out unlimited possibilities. Like some combination of Lennon and McCartney and George Martin, Brian Wilson and Hendrix, Dylan and The Band, these guys made a brown album, a white one, failed, went electric, back from the ussr, rose from the dead, sang with the Dead, disbanded, blew their minds out in a car, God Only Knows.

And then, just when we thought it was over, just one more thing became the mantra. A different time, a different more self-controlled era, one of possibility but careful ascension of a logical series of steps. A calculation of checks and balances, building one thing to fund another, learning how to pivot as the entropy of the deliberate provided an opening for elegance. A seeming indifference to the enterprise while all the while producing a generation of consumers who backed into power.

Jobs is not a child of the ’60s, but he has inherited the family business. Having used the legacy to build an adult toy based on the music of his youth, he harvested the audience and connected them via the phone, broke the carrier’s hammerlock, and changed the firmware from CD to DVD to iPad and WiFi. Just as Dylan broke the song barrier, Jobs created the new record, razor and blades, a wirelessly streaming living album that wraps, informs, emits, and shares our lives.

Sure we’re afraid, afraid to grow up, afraid not to. We watch in awe of what can be done so quickly and so immensely satisfying in the reaction to it, the joy of getting our hands on the next in a series of impossible objects — Pet Sounds, a Day in the Life, even the president of the United States must stand naked. Kind of Blue, Back to Black, impossibly beautiful with a glint of something hard to pin down, dark and deep.

We see our own possibilities in Steve Jobs. We are not afraid of losing him but of having to make things happen ourselves. And like the big brother he is, he loves us and we him for not laying the sins of the father on us but giving us the high sign to get on with it. It may be a slow fade ahead but he’s doing even that with a fierce grace. As we enjoy his work, he is rewarded in the best way he could possibly imagine.

IBM Research has just set a world record in data storage by building a drive array capable of holding 120 petabytes. It was done at the request of an unnamed research group that needs this unprecedented amount of space for running simulations of some sort. These simulations have been expanding in size as the datasets grow, but also as more backups, snapshots, and redundancies are added.

How did they do it? Well, the easy part was plugging in the 200,000 individual hard drives that make up the array. The racks are extra-dense with units, and need water cooling, but beyond that the hardware is fairly straightforward.

The problems come when you start having to actually index this space. Some filesystems have trouble with single files above 4 GB or so, and some can’t handle single drives larger than around 3 TB. This is because they just weren’t designed to be able to track so many files over so large a space. Imagine if your job was to name everyone in the world a different name — it’s easy at first, but after a billion or so you start running out of permutations. It’s the same way with file systems, though modern ones are much more forward-looking in their design, and I doubt you’ll ever run into that problem — unless you’re IBM Research.
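The naming analogy maps directly onto fixed-width fields: a filesystem that records file sizes or block addresses in a 32-bit integer simply cannot describe anything beyond 2^32 units, which is where the familiar limits come from. A quick sanity check (the 512-byte sector size is the classic assumption behind the ~3 TB drive wall):

```python
# Why old filesystems hit walls: fixed-width address fields.
# A 32-bit file-size field tops out at 2^32 bytes, i.e. 4 GiB.
max_file_size_32bit = 2**32
print(max_file_size_32bit / 2**30)   # in GiB

# 32-bit logical block addressing over classic 512-byte sectors
# caps a single drive at 2 TiB, roughly the ~3 TB wall noted above
# (the marketing "3 TB" counts decimal terabytes).
max_drive_32bit_lba = 2**32 * 512
print(max_drive_32bit_lba / 2**40)   # in TiB
```

Modern filesystems widen those fields to 48 or 64 bits, which pushes the ceiling out by many orders of magnitude.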

120 petabytes of storage is an insane amount, eight times larger than the 15 PB arrays already out there, and they already had to deal with address space issues. In IBM’s huge array, tracking the location and metadata of its files takes up fully 2 PB of space on its own. You’d need a next-generation file index just to index the index!

Their homegrown file system is called General Parallel File System, or GPFS. It’s designed with huge volumes and massive parallelism in mind: think RAID for thousands of drives. Files are striped across as many drives as they need to be, reducing or eliminating read and write capacity as a bottleneck for performance. And boy does it perform: IBM recently set another record, indexing 10 billion files in 43 minutes. The previous record? 1 billion files — in three hours. So yeah, it scales pretty well.
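GPFS itself is far more sophisticated, but the core striping idea, cutting a file into fixed-size blocks and dealing them round-robin across many drives so they can be read and written in parallel, can be sketched in a few lines. This is a toy model under my own assumptions, not GPFS code:

```python
# Toy round-robin striping, the basic idea behind GPFS/RAID-0 style
# parallelism; real GPFS adds replication, locking, and metadata servers.
def stripe(data: bytes, num_drives: int, block_size: int):
    """Split data into blocks and assign them round-robin to drives."""
    drives = [[] for _ in range(num_drives)]
    for i in range(0, len(data), block_size):
        drives[(i // block_size) % num_drives].append(data[i:i + block_size])
    return drives

def unstripe(drives, total_blocks):
    """Reassemble the original byte stream from the striped drives."""
    out = bytearray()
    for b in range(total_blocks):
        out += drives[b % len(drives)][b // len(drives)]
    return bytes(out)

data = bytes(range(256)) * 4  # 1 KiB of sample data
drives = stripe(data, num_drives=4, block_size=64)
assert unstripe(drives, total_blocks=len(data) // 64) == data
```

Because each drive holds only every Nth block, a large read can pull from all N drives at once, which is why striping wide across 200,000 drives makes the array fast as well as big.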

The array, built by IBM’s Storage Systems team at Almaden, will be used by the nameless client as part of a simulation of “real-world phenomena.” That implies the natural sciences, but it could be anything from subatomic particles to planetary simulations. These projects are generally taken on as much to advance the field as to provide a service, though. And of course now IBM gets to boast that it built this thing, at least until an even bigger one comes along.

FEMA’s had a mobile version of their website available for a while now, but all that information does you no good if you can’t get an internet connection. Given the fragility of mobile networks during disasters, going without web access is a very real possibility.

Enter FEMA’s new, self-titled Android app, which puts a wealth of emergency preparedness information right in the palm of your hand just in case.

The app contains information and advice on what to do for disasters ranging from earthquakes to wildfires, and everything in-between. Also present is the emergency kit checklist, which outlines all the items and provisions one may need to get through some trying times. Very useful, especially because some things they recommend (like a “whistle to signal for help”) aren’t exactly the first things to come up when brainstorming the contents of a survival kit. For those worst case scenarios when the best bet is to head for a nearby shelter, the app lists locations where it should be safe to hunker down.

It also provides a quick way for disaster survivors to apply for federal assistance, although it’s my sincerest hope that none of you readers will ever have to. While it’s essentially a pocketable version of the FEMA site, it’s a valuable resource in its own right, especially with Hurricane Irene poised to barrel up the East Coast in coming days.

Creating an app is a smart move for FEMA, especially considering the state of most mobile networks during an emergency situation. Cellular networks are quickly jammed up by handsets trying to make calls, as some of you may have noticed during this past week’s earthquake. FEMA recommends sticking to text messages and emails when trying to contact others, and the fact that the app works fine sans data connection only helps. One less thing for the network to cope with will hopefully mean everyone can get in touch with everyone else without too much headache.

The first on our list is Parental Controls, which is kind of a necessity on just about any gaming platform. Where there are games, there is gruesome, violent, and profane content. Definitely not suitable for small kids. Parents can now set certain restrictions on the account to block M-rated games, chatting with strangers, Brag Clips, or spectating.

OnLive has also introduced a beta version of Group Voice Chat. Before today, OnLive already had a strong social element with its Game chat (for multiplayer sessions) and Spectator chat (for people watching others play). Group voice chat will let you chat with friends whether they're playing, watching or picking their nose. Even in beta, the Group chat feature should work fine on all of OnLive's supported devices.

And rounding out our new feature line-up: yet another opportunity for you to brag on Facebook. OnLive has today integrated Facebook achievement sharing, which automatically posts any game achievement directly to Facebook. OnLive has already been doing this with Brag Clips, which are little videos of certain stellar moves or big-time wins you've done within a game. If it just so happens that your game has Brag Clip Achievements configured, the achievement will get blasted out to Facebook alongside a Brag Clip, so your friends have proof of your ability to dominate.

OnLive says these newest features are the product of customer requests, and that they're far from the last. So if you're an OnLive junkie and have a great idea to make the platform better, ask and ye might receive. No harm in trying, right?

Well, what do we have here? Looks like T-Mobile has just snagged a new tablet accessory, but we’re not quite sure which tablet the accessory is supposed to go with. This 10-inch leather sleeve isn’t going to fit very well on either of T-Mobile’s current slate offerings: the 7-inch Dell Streak 7 or the 8.9-inch T-Mobile G-Slate.

Unfortunately, the sleeve itself isn’t tied to any specific tablet. It’s a universal leather sleeve “for most 10-inch tablets,” according to the image leaked by TmoNews. This could mean one of two things.

The first is that T-Mobile decided to sell a 10-inch tablet sleeve for the fun of it. People often get their tablet/phone accessories at their local carrier outlet, whether they bought said device there or not. And there are plenty of 10-inch tablets on the market that are in need of a snug little sleeve. This option is totally plausible, but not what we’re hoping for.

The second option is that T-Mobile is working on getting itself a 10-inch tablet. Which 10-inch tablet? Your guess is as good as mine. A few rumors suggest that the Galaxy Tab 10.1 may be headed in pink’s direction, which would be pretty huge since it’s widely regarded as one of the stronger tablet offerings on the market.

Right now, T-Mobile’s tablet selection is a bit limited compared to other big carriers like Verizon and AT&T, so we’ll definitely be keeping our fingers crossed that the latter is in fact the reason for this slightly random 10-inch tablet sleeve.