After a longer-than-expected wait, some shipping glitches, and a good deal of anticipation, my open-source, crowd-funded, cloud-gaming, Android-powered Ouya game console arrived in Friday's mail. I unpacked the box, plugged it in, and fired it up. After 24 hours, I've come to some conclusions about the device – though I can't say they're all positive.

Ouya: Out Of The Box

The Console: The first thing I noticed about the console itself was its size. The thing is small – about the size of a Rubik's Cube. With no optical drive or expansion slots, there's no reason for the device to be any bigger, but it was still a little jarring. It's also pretty idiot-proof. Plug in the included power adapter and HDMI cable, press the only button on the device, and you're ready to get started.

The Controller: The controller was reputed to be the system's crown jewel, and overall, it's a success. The pop-off panels for accessing the dual battery compartments seem a little insecure at first, and I would have preferred a more traditional hinged compartment on the back, but the Ouya design seems rigid enough once everything is snapped together, and it's probably cheaper to fix, down the line.

Other than that, the pad, sticks and buttons worked as planned, the controller fit my average-sized hand nicely, and I was able to forget about controls and focus on the games immediately. And that's really the point. I found it worlds more comfortable than any Sony controller, and somewhat more natural than the Xbox 360's. If this controller shipped with a next-gen system, I wouldn't be upset.

Ouya Setup

The hardware was great, and pairing the controllers was straightforward. When I logged into my account, though, the Ouya's Kickstarter roots started to show. Setup went smoothly enough, but even a little documentation might have been nice. The box included only an FCC-mandated warning: no manual or diagrams. The log-in process was simple, but to retrieve the username I'd registered months ago, I had to swap to my laptop and Google "Ouya username retrieval." An inline "Retrieve Username" link next to the "Lost Password" link on the setup screen wouldn't have been terribly hard to add.

With any luck, retail units will ship with more documentation and a smoothed-out interface. As an early backer, a reviewer and someone who'd like to see this type of project succeed, I didn't really care, but the Best Buy set is accustomed to a higher level of hand-holding.

The Ouya UI

Once you're logged in, the Ouya interface is pretty clean, but there aren't too many more positives worth noting. It's tough to make four menu items a jumble, but Ouya somehow succeeded. The designers may have been trying a bit too hard to make things cool.

The menu items:

PLAY: Play the games you’ve downloaded. Simple enough.

DISCOVER: This is the Ouya app store. DISCOVER is a horribly awkward list of downloadable games, with confusingly named sub-menus (What’s the difference between CHECK IT, STAFFPICKS, and FAVS, anyway?). The GENRES section is more useful, but it reveals an unfortunate lack of content designed for the device. As of the weekend, there were only six games in the DUAL STICK category and only three applications in APPS.

MAKE: Information for software developers that really doesn’t belong in a main menu.

MANAGE: System configuration.

I get what Ouya was going for, but everything about the interface screams BETA, and it wouldn't have been that hard to do it right. Drop me straight into PLAY, provide a prominent link to the store, and link to games that are related to the one I'm currently playing. Hide the rest somewhere boring. Done.

Some of the gaps should get filled when more titles become available, but that list is likely to see a lot of static. The bar is pretty low for Android games, so not every entry will be up to console standards.

That's where some content curation could help. Branded channels (e.g., something by IndieCade or one of the gaming mags) could really help users find games worth playing. So could a healthy peer rating system and some filtering based on past ratings. The good news is that all of this can be fixed in software. The bad news is that the retail release date is coming up fast.

never made it past the loading screen and forced a hard reset), but there's certainly no "must-have" franchise Ouya title yet.

Final Fantasy III: What about Final Fantasy III? If you've played the Android version on other devices, you know what you're getting. If you played the original version 20 years ago, it's a refreshing trip down memory lane. FFIII offers Game Boy mechanics with 3D graphics: think Pokemon Stadium on the N64 compared to Pokemon Yellow and Red. Younger gamers without an appreciation of history will probably get bored very fast. It's great to see a major studio throw some weight behind the Ouya, but this game is not a kingmaker.

The Ouya Verdict

I think the gaming industry needs a kick in the pants, and I'm glad to have helped support the Ouya's attempt to provide it. I have hopes that in time, the Ouya can provide exposure to indie game developers, add playability to Android games that could really use a solid controller and function as a valid over-the-top box for Netflix and other TV apps.

As a geek and freedom fighter, I think my money was well-spent. If I were a parent on a shopping mission or a hardcore gamer looking for a fix, though, the Ouya just doesn't deliver. If you're looking for anything resembling a AAA-title gaming experience, your $99 would be better spent on a used Xbox 360 or a new video card for your gaming computer.

I think Ouya has the potential to fix the bugs and round out its stable of apps and games to make a really viable complement to traditional consoles, but the company needs to move fast, before gamers decide to move on.

On Wednesday, Bloomberg released a new website for its Bloomberg Billionaires Index, complete with some snappy data visualization tools. While we'd like to see a few more features added to the mix (a year-by-year progression of adds and drops would be great), it's a fun tool, and it makes certain trends easy to spot. For those of us who watch the tech community, the list provides a quick gut-check about where the tech sector fits into the larger world's priorities.

Technology Isn't Everything

Technology is important, but don't forget that people need to build things, buy things and pay for them, and those industries generate revenue, too. Only 12 of the world's 100 richest hail from the tech industry, so it's far from a dominating presence. The tech billionaires are well distributed throughout the list, with only Bill Gates and Larry Ellison in the top 10, and half the techies landing in the bottom 50. Tech has a significant showing on the list, but it's not as strong as retail (17 total, with 9 in the top 20). Overall, technology seems to be about as lucrative as mining or finance. It's still a much better bet than newspapers, though. Only three of the top 100 hail from the media world.

Almost All-American - For Now

All but two of the tech 12 are Americans. The others – Wipro CEO Azim Premji (India, #49) and Samsung Chairman Lee Kun Hee (South Korea, #91) – are likely the first of many overseas members of the club. The U.S. list is still weighted pretty heavily toward the heavy hitters of the 1990s. SAS, Microsoft, Oracle, and Dell are solid companies, but most people wouldn't consider them the future of tech. Yet these "legacy" companies account for 6 of the 10 Americans on the list. It's a pretty sure bet that 5 to 10 years from now, we'll see a lot more members from South Korea, China, India and Russia.

$10 Billion Takes Time

Becoming an overnight millionaire in the tech industry is no big deal, but there's no shortcut to the top 100 billionaires. Well, there might be just one: Mark Zuckerberg. Zuck is the only person under 30 in any industry to make the list. Even stepping up to the sub-40 bracket, there are only three more additions: Larry Page, Sergey Brin, and Colombia's Alejandro Santo Domingo, who manages a collection of media companies and SABMiller. Technology can certainly make you rich in a hurry, but to join the ranks of the mega-mega-rich, even geeks have to work at it for a while.

Family Planning

One bonus for the children of tech billionaires is inheritance. While many of the more traditional industries seem to favor having lots of children, the tech industry tends toward a more reasonable family size. The sweet spot seems to be 2-3 children - which leaves lots more cash for each offspring. Zuckerberg and Paul Allen have no children (yet), while Jeff Bezos and Michael Dell each have 4. On the rest of the list, 21 have 5 or more kids, and Malaysia's Robert Kuok has 8! Of course, since many of the techies on the list have already signed Bill Gates' Giving Pledge to donate the bulk of their fortunes to charity, those kids might have to settle for measly single-digit billions, anyway.

Japan didn't just give us the quartz wristwatch, the DSLR, and the Playstation. It also gave us the Subway Sleeper, the Hay Fever Hat and the Kaba Kick Russian Roulette Toy for Kids! The country has long been a hub for wacky inventions you never knew you wanted, so in tribute to the keepers of the bizarre, here are some of our favorite devices to make you say それはクールだ ("That's cool!").

#1. Taily - "The Tail That Wags When You Get Excited"

We're all fans of Necomimi, the "Brainwave Cat Ears," right? Right. I mean, without cat ears to wiggle, how will we ever know if you liked your donut? Well, Shota Ishiwatari, creator of the Necomimi prototype, isn't resting on his fuzzy laurels. Ladies and gentlemen, we give you Taily:

You wouldn't wear a jacket without pants. Now there's no need to wear those adorable kitty ears without a matching tail. You'll be the talk of the furry convention circuit, and your friends will never have to ask how you're feeling again!

#2. The H-Boya USB Creepy Kid

The letter "H" is bad. Very bad. So bad, in fact, that you need a bobbleheaded child to blink at you in creepy terror every time you type it. Yup. That's all the H-Boya does. It blinks when you type "H," because, as the story goes, "H is for Hentai," and that's bad. But terrifying robochildren who watch your every move? That's totally right.

Source: audiocubes.com

#3. The Marriage-Hunting Bra

When a suitor is feeling amorous, the Marriage-Hunting Bra helps him check his intentions. The bra sports a digital countdown timer headed for the precise moment when its wearer wants to tie the knot. It also has a ring-carrying compartment and a pen holder, so you'll never be at a loss if you decide to jet off to Vegas for a quickie ceremony.

Lest you think we're just beating up on Japan, we give you two humble entries from other countries with too much time to burn. First up, China's entry:

#4. The Home Core Toilet-Sink

The iPotty was child's play. The champion of the pro toilet circuit is industrial designer Dang Jingwei. His second toilet design (following the ingeniously low-tech Portable Paper Toilet) is the Home Core Integrated Toilet, which combines a sink, a vanity, and, of course, a toilet. It's a bit crowded for our tastes. Gray water recycling is a fantastic idea, and we're all for saving the environment, but a shower might be an easier place to start.

Source: yankodesign.com

And in case you think innovation is dead in the USA, we bring you the awesomeness that is:

#5. The TV Hat

Yes, that's right. There's no need to buy a battery-busting Galaxy Note II to watch movies when you can slap a magnifying glass over your iPhone. Plus, you get a super-cool hat to hide your identity from thieves who'll want to break into your house and steal the hat! Or from the horrifying possibility that someone you know might see you wearing this thing.

There's been a lot of talk about why the proposed Dell buyback doesn't add up. Some of it dates back more than two years, and the arguments all center on one thing – money.

The problem, as critics see it, is that going private robs Dell of critical cash at the time it most needs to spend that cash to acquire companies that diversify its lineup. It's a valid point, but that doesn't mean it's the only way to look at the situation. Even with a cash crunch, pulling Dell off the public market might be exactly what the company needs to avoid prying eyes that could toast its chances for future success.

Source: Dell.com

The IBM Ideal

When IBM sold its PC business to Lenovo in 2004, everybody won. IBM pulled in some needed cash, exited a low-margin business and focused on the enterprise. Lenovo got instant credibility and brought efficiencies to a market that still had years of oomph.

Nearly everyone predicts that Dell wants to do the same with its buyback, but today's market is fundamentally different. PC sales are dropping, and Dell's share of that market is falling even faster. Plenty of PC manufacturers would be willing to fold Dell's brand into their lineup, but not at the premium investors would ask. Time isn't on Dell's side. The longer it waits to offload its PC business, the worse the deal will get. Going private would at least shield the company from having to make those details public.

The HP Boondoggle

Dell is smaller and more dependent on PCs than is HP, but the two companies line up well enough to illustrate how a reinvention of Dell might work.

HP is an absolute wreck. Investors are shaky, key executives are fleeing, and even Meg Whitman's rosiest turnaround scenario offers years of bleeding to come. HP has product problems, legal problems and PR problems, and it's headed for a fire sale. Seven out of the ten first-page results on a Google News search for "HP" were negative. HP is floundering in full view, and all the negativity is making it difficult for the company to maneuver.

So why didn't HP go private?

According to Gartner Senior Research Analyst Chris Gaun, "HP has a larger market capitalization, and going private might not have been an option." Raising $15 billion for a Dell buyout is pushing the envelope. $34 billion for HP would be in a completely different zip code. Gaun also points out other complications, such as the Autonomy investigation, which would add substantial risk and complexity to any buyout. HP is too big and too messy for a buyback.

According to Reticle Research Principal Analyst Ross Rubin, a buyback could benefit Dell, regardless of its goal. "Going private would insulate Dell from investor scrutiny and the expenses of running a public company. It would have more flexibility to continue the low-margin PC business if, like HP, it continued to see it as part of a solution - or spin it out and take the revenue hit, as HP was considering."

Source: Shutterstock

Protecting The Brand

Even the high-margin products and services Dell wants to protect are being pushed toward commodity status. In the end, Dell will be trading on its name. Insulating that name from controversy that might cheapen it could be worth some belt-tightening.

In a publicity bid this week, German website Populeaks.org announced The NRA Bet, challenging the National Rifle Association - in fractured English - to use its guns to "keep all non-public data a secret till April 30, 2013." If the NRA manages to protect its data, the site promises to provide two unrequested staffers to perform unwanted services – in this case, helping to polish 500 guns at the NRA's next annual meeting.

I'm sure Wayne LaPierre is thrilled. "The NRA Bet" may never amount to much. It's not as if Populeaks, which was created only three months ago, can really call down the fury of a mighty hacker horde.

Source: http://www.g4tv.com/hurl

Still, by framing its call to action as a humorous contest, Populeaks staggers across an interesting point. What if instead of threats, hacktivism groups tried offering their targets something they wanted in return for complying with their demands (er, requests)? And what if instead of anger and outrage, the conversation included a little humor and satire? Would that kind of lighter approach be more likely to achieve the desired results? Or at least make the hacktivists more likable?

PopuWHO?

First things first: You might be asking "What the hell is Populeaks?" That's an excellent question. According to its About Us page:

POPULEAKS confronts governments, corporations and non-governmental organizations with the assertions made by our whistleblowers – and demands substantiated replies or information within a time window of ten days.

In order to increase people’s readiness to respond, POPULEAKS informs up to 6,000 journalists and bloggers from its custom media mailing list - at the same time it receives an inquiry - and publishes the full text of the inquiry, word-for-word, on www.populeaks.org.

In other words, the site encourages users to spam it with gossip, attempts to force the subjects of the gossip to reply, then turns around and spams "6,000 media contacts" (including us), hoping to get coverage. It's an annoying business model, and it's highly unlikely you'll see the group mentioned here again unless it actually breaks some news. But the tactics employed in this particular case (which actually has nothing to do with PopuLeaks' stated whistleblower mission) are worth a look.

Taming The Shame Game

Public shaming by the hacktivist community is hardly new. Anonymous has made a fetish of it, following grand pronouncements about unchecked corporate greed and disregard of the common man with threats to tear down Facebook, Egypt and Iran, among others.

Demand => Threat => Resolution. It's a time-tested formula that's been around since the Greek siege of Troy (probably since Gok threatened Gom with a rock if he didn't quit hogging the mastodon leg). The problem is that after a few years of constantly threatening people, you kind of seem like a jerk, particularly if you have to follow through on your threats. Facebook may or may not be evil, but if you're the one keeping the masses from sharing their kitten videos, you're the bad guy.

Groups like Anonymous tread a fine line between Robin Hood and (in the words of BreitBart.com) "terrorist" in the court of public opinion. Mixing it up could help the hacktivists' image. Populeaks is an unknown site, and the NRA Bet isn't even particularly witty or creative - or even coherent. Overall, it's pretty low-rent, and offers nothing of value to the NRA.

But if someone with a bit more street cred offered to do something useful, relevant and funny if their target complied with their requests, we might actually see some progress.

Friendly rivalries and side bets with conservative pundits like Bill O'Reilly and Mike Huckabee have earned Jon Stewart and the Daily Show a legitimate place in the political discussion, increased everyone's likability and actually put substantive political discussions back on the air. Maybe injecting some of that into the cyber-rights battlefield wouldn't be such a bad idea. I bet the EFF's leaders would gladly pump gas for a week at a Chevron station if the company would drop its lawsuits and admit it was wrong about Ecuador.

There will always be intractable situations that call for severe responses, but dangling a few carrots couldn't hurt.

We all know PR reps work charm and tchotchkes to build "relationships" with journalists and analysts, but they're way less shady than, say, Black Hat scammers in the SEO biz, right?

Apparently not. A few weeks ago, I saw the following job post on Elance. The description was a remarkably up-front pay-for-play bid:

We need people who can publish news articles at news sites PR2+. You choose the topic and weave in the subject we assign. After a review, you publish to a specified news site. Several journalists needed.

In 2013? Seriously? Wasn't payola over in the '60s (or at least after the J-Lo scandal of 2005)? I had to see if this had legs, so I submitted a bidless proposal.

Into The Heart of Darkness

The next day, I got a response, asking me to promote a small virtualization vendor:

Their price for selling my soul? 25 bucks. That hurt.

I contacted the vendor's PR department, ready to tear them a new one, but instead of excuses, I got shock and horror. They claimed to be completely in the dark – and I believed them. I emailed the poster, asking if the company was a knowing client. They said yes. I replied, telling them I'd contacted the company and heard just the opposite.

Crickets.

It's been four days, and all communications have ceased.

The vendor was pissed. In fact, the Senior Product Manager with whom I spoke was angry enough to threaten legal action against the job poster, and I believe that's already begun. That action is the reason I'm not naming names, though you can probably find the job by digging on your own, if you're curious. UPDATE: After the publication of the piece, I was contacted by the vendor's actual PR firm, a boutique outfit that's worked with some very large technology brands. We've worked with them in the past. At this point, I became even more convinced that the scammers were flying solo.

I reported the job to Elance, describing its varying levels of sketchiness, but it's still up, and will probably stay there. As far as I can tell (and please correct me if I'm wrong, folks), there's no law against paying for references in a blog. Still, when I mentioned "paid placement" to the poster, they shot back a clarification right away. They were "pitching an idea." They just happened to pay money if they were allowed to review the post before publication and that "idea" made its way to a site.

Pay-For-Play Alive And Well

Semantic juggling fixes everything.

The point here is that payola is apparently alive and well in the blogosphere. If the going rate is truly only $25 for a mention in a respectable publication, it's easy to see the appeal – a dozen mentions would cost less than the airfare to send one rep on a briefing. But why would one of these shady agencies spend money to promote a non-client?

The biggest advantage is protection. When you're feeling out an ethically-icky situation, you don't want to dangle any top-shelf clients that can expose you. Once the journalist is on the take, both parties have a vested interest in keeping things quiet, so the agency can relax a bit. The other benefit is promotion. The by-product of all this fishing will be a catalog of company references in relevant publications, which could be an excellent door-opener for sales. In any case, freelancing sites provide a fantastic layer of anonymity for shady promoters. It's something Elance, Guru.com and the rest of the freelance marketplaces will have to address in the future.

Maybe all my instincts are wrong, and the vendor was actually involved. It's always possible, but it doesn't change the conclusion. There's apparently still money in buying off journalists, and from the looks of it, they aren't very expensive.

For all our brand loyalty, consumer electronics are commodities. A very small number of suppliers produce the guts of most electronic devices, and competing brands are often assembled in the same factories (we're looking at you, Foxconn). Assuming the same components, the only major differences among many products are fit-and-finish standards and customer support.

What Would You Pay For A Giant Monitor?

Sometimes support is reason enough to pay more. When my Macbook Pro's hard drive died 10 months after purchase, I had a replacement hard drive installed within two hours. That beats boxing the computer and waiting weeks for a replacement. When it comes to laptops, a few dollars more can be a worthwhile investment. But what about components that don't usually break? Like monitors, for example?

For the past several years, budget-savvy buyers have saved cash by buying gray-market Asian (usually Korean) merchandise - including large-screen monitors - directly from importers. The sellers typically work through eBay, Amazon, or an auction site, and the products the buyer receives are pretty bare-bones. Seller warranties usually cover products that are Dead On Arrival and (in the case of monitors) a negotiable number of dead pixels, but that's it. The manufacturer warranties are typically written in Korean, and it's up in the air whether they even apply in the States. It's a lot like the gray-market trade on which many camera vendors have built a business, but in this case the manufacturers themselves are relative nobodies, too. When you buy a Yamasaki Catleap or a Crossover 27Q monitor, you're pretty much on your own.

Source: Shutterstock

The flip side, of course, is that you get a whole lot of 27-inch monitor for your money. Less than $400 to your door (add an extra $10 to $100 for a "pixel-perfect" guarantee) buys components found in domestic monitors at more than twice the price. Inputs are limited, controls are basic, and case design can be a bit wonky, but you'll get the same LG IPS panel Apple uses in its Cinema Display, which is a truly beautiful thing to behold.

What About The Warranty?

At this year's Consumer Electronics Show (CES), Monoprice (the ultra low-cost retailer that's been the king of cables and accessories for some time) was showing off its entry to the sub-$400 27-inch monitor market: the CrystalPro WQHD. Like the other Korean imports, the CrystalPro sports a high-resolution, 2560 x 1440, LG IPS panel, a VESA wall mount, and dual-link DVI inputs. The difference is the warranty. Monoprice offers a 30-day money-back guarantee, a full one-year warranty on the monitor, and a lifetime warranty on cables and accessories. Plus, it's located in Rancho Cucamonga, California, with live chat support seven days a week.

There's no denying that Apple's Cinema Display is a better, more polished product, but when properly calibrated, the Korean imports can hold their own at a fraction of the cost. For system builders, those contemplating a multiple-monitor setup, or anyone looking to step up from a smaller screen, the $400 deal is tempting. With the addition of a real warranty from an American importer, we may have reached a tipping point.

My old 24-inch monitor is suddenly looking kind of small and tired. For $390, I'm willing to give an off-brand alternative a shot. How about you?

When Microsoft gave its first public preview of Windows 8 in 2011, the now-President of Windows Julie Larson-Green sent shockwaves through the Windows development world with just four words: "our new development platform." The reason? That platform was based on HTML5 and Javascript.

Microsoft has backpedaled in a number of forums since then, assuring developers that while HTML5 is the new standard for cross-platform apps, other tools will continue to work for Windows-only development. But the writing is on the wall. HTML5 is the future, so if you develop enterprise Windows applications, should you bite the bullet and make the move?

Will HTML5 Save Enterprises Money?

The cost argument will rage for some time. One camp holds that HTML / Javascript developers are cheap and plentiful, so HTML5 is necessarily cheaper. The other side points to the instability of the HTML5 spec (only recently finalized and not scheduled for Recommendation status until 2014): with the more mature development environments available for "traditional" Windows development, developers can build complex applications faster, without worrying about tweaking things down the road.

The CTO of one small software vendor saw value in both views: "For our simpler apps, I can hire kids with good Javascript skills and let them learn the Windows specifics on the job. For really complex applications with tens of thousands of lines of code or more, it would be dumb to break what already works." He added that his more experienced Windows developers are mentoring the generally younger HTML developers to cross-pollinate knowledge. "Ultimately, each tool will have a use, for at least the next several years, and I want all of my devs to be able to pick the one that makes sense."

"Serious Coders" vs. "Script Kiddies"

His biggest problem so far is a reluctance to embrace change. "I have a couple 28-year-olds who act like grumpy old men, afraid that the 'script kiddies' without any real computer science knowledge are moving in on their turf. To them, HTML5 cheapens the application, dumbs down their resumes, and opens the door to a whole lot of bad coding from people who know how to make Web pages, but don't have any formal experience with structured coding."

The last point is probably the most valid. Knowing HTML and some Javascript isn't a particularly high bar, so enterprises need to be diligent about hiring and mentoring. If you pull developers off of Craigslist for $15 an hour, you're not going to get quality enterprise work. Even well-established Web developers coming from a LAMP background may not have the right experience. A mentoring program – using pair programming or another Agile practice – can be a great way to ease Web developers into a more formal programming environment.

What Do Developers Want?

One long-time C++ and (more recently) C# developer wasn't excited about the rise of HTML5: "Eh. I get what they're doing. It's all about the portability of UI. They've been on that path for a long time, but whatever. The thing is, developers don't want to learn a new markup when Microsoft has already forced them to learn one recently. WPF / Silverlight is crap, but so was Winforms. If they'd skipped WPF, they'd probably have more success trying to get people to shift to HTML5... I'll go where the money is, though."

That last point is telling. Developers will follow the work; they really don't have a choice. And it won't be long before everyone is doing at least some work in HTML5. Smart enterprises will begin mixing in some of that work now, but there's not yet a good reason for a complete shift.

It's official. Long after the Xbox 360 is relegated to scrap heaps and Gamestop bargain bins, the Microsoft Kinect – the Xbox peripheral that lets you control the action with body movements alone – will be going strong.

A Mediocre Game Controller

To tell the truth, the Kinect is a pretty ho-hum video game controller. It works with a fairly weak selection of games, given how long it's been on the market, largely because blockbuster games generally require the kind of pinpoint control you can get only from a joystick or control pad. Microsoft may be working on games that take better advantage of the Kinect hardware, but that's not the point.

The point is that the Kinect is a cheap, open, powerful piece of hardware with a life beyond video games. It's been hacked in a number of ways since its inception, and with October's launch of Kinect for Windows, Microsoft is fulfilling the promise of its SDK and throwing the company's weight behind the effort in a big way.

Microsoft Moving Beyond The Xbox

Microsoft's emphasis on the Kinect makes sense. The Xbox has been wildly successful within the high-end game market, but that covers only a fraction of total households. To earn the company's SmartGlass system a spot in non-gamer living rooms, Microsoft needs a central piece of hardware, and an open Kinect gives it an in that Apple and Google can't currently match. On the back-end, Microsoft is positioning the Kinect as a boon to revenue-hungry content providers, but on the demand side, it's hoping the market will take care of things on its own.

So far, the market has responded. Projects like OpenKinect have spawned dozens of interesting uses of the original Kinect sensor, including virtual touchscreens and three-dimensional image tracking that works in the dark. It probably won't be long before the Ouya has its own Kinect hack. With the addition of official support and upgraded hardware, Kinect for Windows should encourage those developers to productize their work, and attract a lot more interest from commercial developers. The $200 device provides a standard platform with a high-quality camera, skeletal tracking, face and voice recognition, and a wealth of development tools and support. Its camera alone is probably worth the cost.

Kinect-ing With Physical Therapy

Late in 2012, the Department of Defense expressed some interest in using the Kinect for therapy. The DoD found the Kinect particularly promising for the ongoing treatment of remote patients, or those who wanted to maintain anonymity while undergoing care. The economics of the system make sense (the costs of just a few patient transports could easily pay for a Kinect and PC), and Microsoft is pursuing the deal aggressively.

Medicine is a big market for the Kinect. Tokyo Women's Medical University is currently using Kinects as part of its Opect project (see video here), which lets surgeons access information in a hands-free, Minority Report style without risking contamination.

While medical uses make better PR than an automatic Nerf gun turret, they still don't get the Kinect into the average living room.

For that, we'll need an entirely new killer app. If Microsoft gets really lucky, that app might come from crowdsourcing. But the more likely source is a certain television manufacturer with a dislike for Apple and Sony.

On December 23, the Journal News published an interactive map showing the names and addresses of all pistol-permit holders in two New York counties. Some 43,000 comments later, the battle over the paper's move rages on. Incensed gun owners claimed the article made their homes targets for thieves and drew unwarranted attention to them "like it was some sort of sex offender registry." More than 20,000 people responded by circulating the author's address and phone number on social media in a "How do YOU like it?" strategy.

It didn't end there. On January 3, Putnam County officials refused the paper's request for its pistol permit records, citing the risk of "endangering citizens."

Is it Legal?

The fight will probably go to the courts, and the county will probably lose, because the newspaper is perfectly within its legal rights to publish the information. The information was obtained legally, and everything published was available, for free, to any resident who asked.

According to Mark Rumold, a staff attorney with the Electronic Frontier Foundation, the issue is cut-and-dried: "I can say, in no uncertain terms, that publishing the information was legal and squarely protected by the First Amendment. Whether or not publishing the information was the right thing to do, or smart, or in the public interest, is probably a question of journalism ethics that I'm not qualified to answer."

Another criticism – leveled at both the newspaper that published the data and the gun owners who later published the author's address – is that it's a question of "intent." According to that argument, if the intent of the publication was to shame the named parties, the First Amendment doesn't protect that.

Again, Rumold dismisses the argument: "The First Amendment, if it protects anything, certainly protects the publication of truthful, lawfully obtained information about a topic of significant public interest. That protection includes shielding a newspaper from civil liability – for example, for violations of privacy." He adds that the line gets blurry in some "edge cases," such as publicizing a rape victim's name, but, in his opinion, "this case doesn't even approximate that level of privacy intrusion." So until someone comes out and says "Let's all meet at 5pm to steal their guns," Uncle Sam is fine with it.

Clearly, the paper published the list to attract readers, and that worked in spades. It's less obvious that it considered the additional ramifications of its actions. Still, all of the permit holders referenced in the article are over 21 and (one would hope) aware of the fact that their permits were open to the public. If they were not made aware of that fact, the fault lies with the permitting system – not the newspaper. Rumold agrees: "In my opinion, for those upset about the publication of the information, I think their grievance is with New York's legislature for making the information a public record."

What Happens Next?

Governing bodies clearly have failed to anticipate the kind of proactive publication modern technology allows. While publishing a database of public information may be perfectly legal, it could very well cause unintended headaches. Over the next few years, we'll probably see a lot more protections against massive data aggregation pop up in the form of data throttling or outright bans on publication, followed by court challenges to all those moves.

We'll see how that all plays out, but for now, it looks like the press has the advantage.

For all the lip service paid to social media marketing and all the "New Media Marketing" hires on LinkedIn, it seems that a lot of businesses are coming around to the idea that social media actually might not be very important. Generally, a recent survey found that smaller businesses (those with fewer than 10,000 employees) and business-to-business (B2B) companies were less likely to engage with their customers via social media. In fact, 43% of B2B companies admitted that their CEO ignores online reputations altogether.

On its face, the survey makes a certain amount of sense. If you sell to other businesses, you probably have fewer customers and a direct sales force, so you can just pick up the phone to reach them. Likewise, smaller companies probably have fewer resources to dedicate to a social campaign - and less ability to make a splash.

But the one-to-many push of social marketing is comparatively cheap, and there's no reason to make your sales force perform damage control when they should be selling. Sure, Facebook fatigue seems to be catching these days, but it seems like many businesses are missing an opportunity.

After the recent tragedy in Newtown, CT, some commentators and - notably - the National Rifle Association (NRA) remarked that video games played a role in a "culture of violence" and detachment that can ease the path to violent behavior. This, in turn, has given new life to the debate about the effect of media violence – particularly violent video games – on real-world aggression. It's a serious topic, so ReadWrite thought it was important to recap the latest on the discussion and see where scholarly studies and popular opinion fall.

Understanding The Numbers

We all know the guy who plays Call of Duty eight hours a day, then goes home to a world of puppies and rainbows. We've also heard of the kid who plays a game for an hour or two, then goes on a shooting spree. There are exceptions to any rule, and if we're going to find real answers, we need to look at trends and averages, not statistical outliers.

It's also important to remember that even if there is a link between violent games and aggressive behavior, that does not imply causality. Violent criminals may well choose violent games, but tens of millions of gamers play those games every week, and the vast majority are law-abiding, normal citizens.

At the same time, it might be shortsighted to ignore such links. According to a recent publication by Iowa State University professor Dr. Craig Anderson, "Correlational studies are routinely used in modern science to test theories that are inherently causal. Whole scientific fields are based on correlational data (e.g., astronomy). Well conducted correlational studies provide opportunities for theory falsification. They allow examination of serious acts of aggression that would be unethical to study in experimental contexts. They allow for statistical controls of plausible alternative explanations." In other words, short of placing a subject in a dangerous situation, correlation is often the best evidence available, and it can be useful debunking other theories.

The State Of Research

At the moment, studies are all over the map, largely because just about every study of video game violence uses different definitions of the terms. The Legend of Zelda, Grand Theft Auto and Missile Command are all violent games in their own ways, but they're not at all similar. Likewise, throwing a fake roundhouse kick at your buddy, checking a box describing "elevated feelings of aggression," and setting fire to a building are all extremely different violent expressions. Unfortunately, current studies span both spectrums, so anyone with a vested interest can easily find a study to support their position. Worse, this makes meaningful meta-analysis across multiple studies effectively impossible. 80% of studies agreeing with a certain position doesn't mean much if half of those studies were poorly structured and the other half were measuring something completely different.

5 Emerging Truths

With that said, there seem to be five theories gaining traction. Each has its naysayers, of course, but each has real data to back it up:

1. At-Risk Populations Are Vulnerable To Violent Stimuli

One popular theory holds that some people are more vulnerable to the effects of gaming violence than others. This resonates with our gut instincts, and provides a happy, reasonable-sounding middle ground for both sides. In the Review of General Psychology, Drs. Patrick and Charlotte Markey outline the three most predictive traits for vulnerability:

high neuroticism

low agreeableness

low conscientiousness

This doesn't mean that games cause violent behavior. It suggests that violent games are among the many influences that can be linked to violent behaviors. We've seen copycat murders modeled after television newscasts, Mark David Chapman's obsession with The Catcher in the Rye, and thousands of years of killings based on stories from holy works. Violence and rebellion in media have always been lightning rods for the mentally ill, and video games are a popular medium for the young male demographic most likely to commit violent acts.

The upshot? Young people who are emotionally upset, detached or combative, and impulsive should probably not be exposed to violent games. Unfortunately, that describes a fair portion of teenagers, so use discretion applying the rule to your own kids.

2. Video Game Violence Is Not A Significant Danger To The General Population

Even the most damning studies don't claim that video games will create violent monsters of your children. They can't. If that were true, we'd have blood running in the streets. For the majority of "normal" gamers, the worst claims seem to be short-term aggression without substantial consequence, and a general lessening of communication and empathy skills – but again, without specific consequences attached.

The majority of research on the subject seems to indicate a fairly tenuous link between in-game and real-world violence. For example, two studies conducted by Texas A&M and the University of Wisconsin - Whitewater, respectively, found no conclusive evidence. "Structural equation modeling suggested that family violence and innate aggression as predictors of violent crime were a better fit to the data than was exposure to video game violence."

In other words, a predisposition to violence or a violent homelife is very likely a predictor of future violent behavior, while video games are not.

3. Fantasy Violence Is Less Dangerous

Killing Falatacot Raiders won't make you murder humans, though we're not sure about Hitman. Some people have pointed to studies showing that even E-rated games can lead to imitation (e.g., children punching or kicking) for a period following play, but it appears that transference of aggression from aliens, orcs, or Pokemon to humans is minimal, at worst.

4. Violent Games Do Increase Stimulation

Just like watching action movies or sprinting down a street, violent video games (and other competitive or action games) increase stimulation and adrenaline production, which can produce short-term disruptions and enhanced moods. Some studies claim short-term effects can last long enough to disrupt sleep when games are played before bedtime, while others saw certain effects lingering up to 24 hours. At the very least, the "amp up" factor is real – it's kind of the point. For parents of children who may be particularly affected by such things (e.g., those with Attention Deficit Hyperactivity Disorder, or ADHD), this can be a concern.

5. Content Ratings Matter

People on both sides of the issue agree that content ratings are important. Even absent a long-term impact on violent behavior, graphic scenes of violence, nudity and other adult situations can impact developing minds. Video game access should be restricted like access to any other type of media.

The Easy Answer

Anyone who wants the government to step in and make the call on what to do about video game violence will be sorely disappointed. There simply isn't enough evidence linking video games and violence to even start that discussion, particularly when films and images of far more graphic violence are readily accessible.

The answer to the problem seems to be the same as the answer to concerns about TV rotting your kid's brain in the 1960s: personal responsibility. If you're a parent, pay attention to the ratings, research the content of games online before you buy them, and above all, know your child's sensitivities and limitations. If you're in doubt about the effect of a game or other piece of media, say no.

That won't end the debate, of course. Truly troubled teens often don't have the parental supervision they need to limit their gaming or other media consumption. But it's unclear exactly what the right strategy would be to deal with that issue.

The ReadWrite DeathWatch is known for serving up plenty of doom, gloom and grumpiness. But for the Holiday Season, we're taking a slightly different tack - highlighting companies, technologies and perspectives that have managed to cheat death.

This week, we're taking a look at MySpace, the social network that showed Friendster how it was done, then got shown the door by Facebook. When its users bailed and the tumbleweeds started rolling, MySpace could have packed it in, but instead, it regrouped for one last shot – behind a leader with some really sweet dance moves.

Where Myspace Came From

Remember Tom? Sure you do. If you're under 45, he was probably your #1 friend at one point. In 2003, Tom Anderson and Chris DeWolfe founded the MySpace social network, and it was an instant hit.

In 2005, traditional media took note, and Rupert Murdoch's NewsCorp dropped $580 million for MySpace and its parent company. By fiscal 2008, Fox Interactive (MySpace's new parent) turned a $10 million profit on revenue of $500 million. Murdoch was ecstatic on an earnings call. Things changed pretty quickly.

In 2011, with users fleeing, NewsCorp sold the site to ad firm Specific Media at a fraction of its purchase price. Planning to refocus on MySpace's biggest strength – music – Specific Media brought in Justin Timberlake for some star power and industry cred. Management went into planning mode, and apart from a more graphical home page, the site just kind of sat there. On the plus side, the bleeding stopped by December, and Myspace (now with a lowercase "s") actually started adding users again.

Where Myspace Is Now

In September, 2012, the team began teasing a new, more visually attractive site, focused on a single mission: providing a single social space for consumers and labels to discover musical artists. In the words of an employee, "There's going to be a huge emphasis on surfacing unknown and up-and-coming artists of all kinds and content around them that is different from what you get on other sites."

The site looks a lot like Twitter. And Pinterest. And every other social media site that's hot right now. There's a layer of media sharing and consumption tools, plus reporting tools that let artists see who's listening to and sharing their music and help them strike up relationships with promoters, influencers and (if they're lucky) music labels.

Myspace began taking requests for invitations and pushed its beta to the industry, with consumer invites to follow. The beta has drawn mixed critical reviews of the business model. ReadWrite's Jon Mitchell can't understand why anyone would invest so heavily in the Web when mobile is where the action is, and he may be right. Still, everyone seems to agree about three things: The beta looks fantastic. It's hyper-focused on sharing and discovery. Justin Timberlake is more than just a name on the marquee – he has a very real stake in the product's success.

So why would Timberlake risk his reputation on reviving Myspace? For one thing, it still has assets. With 28 million unique monthly visitors, Myspace is no Facebook (152 million in just the US), but it's bigger than Pandora (21 million), and more than twice the size of Spotify (12 million). It also has global rights to its catalog, while Pandora is limited to U.S. distribution. The catalog itself is much larger, too, owing to Myspace's direct relationships with unsigned artists, who bring 27 million of the service's 42 million tracks. And as Myspace's parent company pointed out in a leaked slide deck in November, those unsigned artists' songs are free, dramatically lowering Myspace's costs per listening hour.

A Myspace spokesperson was honest about the company dropping the ball. "I think, internally, we all felt like we’d let that community down; like we owed them something and had to make it right by delivering on the promise of Myspace." But she also understood that no one had picked up the slack. "In its heyday, Myspace was a great platform for artists. When we stopped serving that community, no one else stepped in to give artists a place to put their music, connect with audiences, see how their art’s resonating with people, promote themselves, collaborate with other creators… The need for a Myspace is there. We plan on delivering on that need."

Where Myspace Is Headed

Myspace has a product to sell. It has the economics to make that product profitable. It just needs to make the product desirable.

This is where star power comes in. Timberlake has managed to endear himself to teenage girls who think he's cute, grown-up women who find him sexy, and grandparents who want to pinch his cheeks – all without alienating guys, who want to drink a beer with him and meet his wife. Timberlake is there to bring labels to the unsigned bands, marquee names to the catalog, and users to the site. If he delivers, it's up to the team to keep them interacting.

And what about that team? As one Myspace worker puts it: "Before Tim & Chris & Justin took over Myspace, I was looking for jobs elsewhere. I wasn't even confident that new ownership would make a difference. But when they bought the company and we met them, heard what they had to say, there was an immediately noticeable shift. These guys were really ready to turn this thing around, and they had a plan. And the sense I get from them is that they don't fail. They work hard and are focused on specific goals that can succeed, which was really refreshing. The staff, who were tired of being on the losing team, started to perk up and get energized again because we started to have a focus and a goal and some real hope."

Employee excitement doesn't necessarily lead to a turnaround, but it's a necessary component, and something Myspace has lacked for a while.

Can Myspace Make It?

There are a lot of "ifs" in this scenario, but once you posit that there's a market for a music discovery and sharing service, Myspace actually seems to be in the lead.

Or it will be, once the beta launches. The date on that is still uncertain. According to the company, "We’re literally pushing fresh code every day. We’re actively getting feedback from our community and making tweaks and adjustments based on what they tell us."

The site is due to relaunch at some point in 2013, and when it does, we'll see whether Myspace belongs on a real DeathWatch list.

As of the end of 2012, though, Myspace is still here – and still potentially relevant. So we tip our caps to them, and look forward to seeing whether the company can make it all the way back.

THQ's fall is the result of years of poor choices. The overpriced, underperforming uDraw is the poster child for the company's woes (in fact, it's called out specifically in the filing), but THQ has been doing things wrong for years.

While accidental hits from the Saints Row or UFC franchises have propped up the business from time to time, its Electronic Arts-style buy-and-bleed policies (snapping up game development studios but not investing in them) have robbed credible franchises of life and produced financial train wrecks.

THQ acquired studios Juice Games, Paradigm Entertainment and Kaos Studios, fumbled their titles, and ended up closing all three shops. Big Huge Games actually thought its chances would be better with the now-collapsed 38 Studios (think about that)! Add some underperforming releases like Darksiders II and a total bungling of its once-core children's line-up, and there was only one way for this to end.

Can Bankruptcy Be A Good Thing?

So what does Chapter 11 mean for the company? According to THQ president Jason Rubin, nothing but sunshine. On his corporate blog, he posted the following: "The most important thing to understand is that Chapter 11 does not mean the end of the THQ story or the end of the titles you love. Quite the opposite is true, actually."

Rubin goes on to explain that Clearlake Capital Group has agreed to purchase the company's assets, and work will continue uninterrupted on current projects. With the bankruptcy wiping liabilities off the table and new funding in place, "the teams will be unburdened by the past and able to focus on what they should be focusing on - making great games."

He has a point. If Clearlake is willing to invest in the company at a substantial premium, it must have some faith in upcoming products, so a hands-off approach makes sense.

Bankruptcies Are Never Clean

But despite the promises, bankruptcies are never clean, for good reason. Something got the company into this spot, so something has to change. It's unlikely that THQ will have the financial resources to go on a buying spree any time soon, but that too could be fortunate. The bankruptcy may indeed force what's left of the company to focus on the little picture of game design and playability - a concentration that's been absent for too many years.

Rubin did note that anything can happen during a Chapter 11 filing, so the deal isn't locked in just yet. But with no other buyers circling, Clearlake looks like a good bet to "win" control of THQ. And it might just be able to force THQ to turn things around.

2013 promises to be more interesting. It will still be a transitional year for the industry, but those transitions will come faster and be a lot more obvious. Here's what to look for:

1. Sony Will Continue To Flounder

Nintendo's 3DS outsold the PlayStation Vita 46-to-1 from November 5 to November 11. One more time: 46-to-1. There are plenty of reasons for that (the game catalog being the biggest), but the upshot is that Nintendo is crushing Sony in an already-brutal handheld market. Sony's PS3 lineup isn't doing much better, failing to generate any mojo in its later years.

The PS4 is still far, far away, and despite a predicted return to profitability, Sony as a whole is still in pretty rough shape. Beyond the usual franchise previews at E3 and some teasers about the PS4, don't expect to hear a lot from Sony next year. 2013 is about regrouping for one last big shot.

2. Free-to-Play (F2P) - And Some High-Profile Failures

The trend of Free-to-Play games is inevitable, but it's going to force developers to think – and build – differently. The frontloaded revenue of unit sales allows developers to go nuts in ways that Free-to-Play doesn't. That's fine for an indie shop building a small game and trying to make money on in-game sales, but the AAA publishers used to gorgeous cutscenes, pristine graphics and endless hours of linear content available at launch could find themselves in a bind. Creating Call of Duty isn't cheap.

It's guaranteed that someone will fail to learn from the Star Wars: The Old Republic flop (and that was with retail sales) and will overbuild an F2P ghost town. The upside is that, by the end of the year, we'll see a renewed focus on storytelling, with nonlinear narratives that encourage multiplayer cooperation and replay – the kinds of behavior that forgive corner-cutting elsewhere and ultimately lead to the in-game purchases the F2P model requires.

3. The Ouya Changes Everything, Even If It Fizzles

The Ouya, Kickstarter's darling, will launch on schedule, with a host of titles, a great price point and a ton of fan support.

4. The Resale Market Lives Another Year

Digital downloads are already having an effect on the market for used games, but with no new Xbox or PlayStation on the horizon, there will still be plenty of low-cost play on existing systems. That means a lot of swapping through back catalogs and playing the games you missed the first time around. That's good news for a banged-up GameStop, in the short term at least.

5. Augmented Reality Takes Off

Back in 2009, ReadWrite readers already knew Augmented Reality would be a big deal. Sony's Wonderbook was an interesting (if limited) foray into AR gaming in 2012, but the real push will be on the mobile side. 2012's AR Defender 2 (trailer video below) was a cool new take on tower defense, but 2013 will be focused on getting us outdoors.

Plan to see a slew of location-based mobile games like Ingress that also add full AR via smartphone cameras. Primitive-but-cool combat and cooperative mechanics will show up, as well. Just don't step into traffic while you're playing, and look out for the cops.

This month, the state of California sued Delta Air Lines in a very big way for failure to comply with the California Online Privacy Protection Act (CalOPPA). The suit alleges that the Fly Delta mobile app lacked a conspicuous, accurate privacy policy, and seeks up to $2,500 for each download. Delta quickly threw up a policy (though researchers have already found flaws in it), but the suit stands, and the potential damages are very real.

The really dumb thing is that this lawsuit never should have happened. Delta was given 30 days' notice by the state of California, and it still couldn't make the deadline. There's no excuse for that. It's a privacy policy, made of words, not code. Delta - and any company in that position - should have had a policy up within a week.

So consider this your company's official notice. If you don't have a privacy policy for your mobile apps, write one today. Here are some tips to get started:

Step 1: Review Your App

Get your app developers and your spec together and perform a 6-step review:

1. Document any collection of personally identifiable information (PII). PII can include but is not limited to:

Name

Mailing or Email Address

Phone Number

IP Address

Current Location

2. Note whether any of the PII your apps collect (for example, a social security number) is more sensitive than the rest, and document any special steps you take when collecting it.

3. Take special note of your target age range. If your apps knowingly collect information from users under 13, consult your attorney before continuing.

4. List all the parties (such as ad networks and technology partners) who have access to PII and how it will be used.

5. List all user profile control options: can users request, view, edit or delete their information?

6. Outline data retention and disposal policies for all user data, paying particular attention to canceled accounts.
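If your team keeps this review in a spec document, it can also live alongside the code as a simple, machine-checkable checklist. Here's a minimal Python sketch of that idea – the field names and example entries are hypothetical illustrations, not a compliance tool, so adapt them to what your app actually collects:

```python
# Hypothetical PII audit record covering the review steps above.
PII_AUDIT = [
    {
        "field": "email_address",
        "sensitive": False,              # flag extra-sensitive items
        "shared_with": ["ad_network"],   # third parties with access
        "user_can_delete": True,         # profile control options
        "retention_days": 90,            # disposal policy
    },
    {
        "field": "current_location",
        "sensitive": True,
        "shared_with": [],
        "user_can_delete": True,
        "retention_days": 30,
    },
]

MIN_USER_AGE = 13  # under-13 collection triggers a COPPA review


def audit_gaps(records, min_age):
    """Return human-readable warnings for entries that need attention."""
    warnings = []
    if min_age < 13:
        warnings.append("Targets users under 13: consult an attorney (COPPA).")
    for rec in records:
        if rec["sensitive"] and rec["shared_with"]:
            warnings.append(f"{rec['field']}: sensitive data shared with third parties.")
        if not rec["user_can_delete"]:
            warnings.append(f"{rec['field']}: users cannot delete this data.")
        if rec["retention_days"] is None:
            warnings.append(f"{rec['field']}: no retention policy documented.")
    return warnings


print(audit_gaps(PII_AUDIT, MIN_USER_AGE))  # prints: []
```

An empty result just means the checklist is internally consistent – it's a starting point for the policy you write next, not a legal sign-off.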

Step 2: Write Your Policy

With that in hand, it's time to write your policy. If you have an attorney on staff with the requisite experience, start there. If not, there are lots of free templates and tools like the Privacy Choice policy maker to get you started. Customize as you see fit. (There are also plenty of paid services that specialize in privacy policies.)

If you have a privacy policy for your website, you've already done most of the work. Your job now consists of identifying the ways in which your app is different from your website, then displaying your policy in a succinct manner that mobile customers will actually read. The Center For Democracy and Technology (CDT) has an excellent, free resource called Best Practices for Mobile Application Developers that will help smooth out the edges.

Step 3: Review Your Policy

In all the prettying up, you may have misinterpreted some facts. Run the finished policy past your developers. Then compare your policy to those mandated by any of the app stores that will be distributing your app. The CDT document has some good summaries, but you'll want to check the most recent terms from the stores themselves.

Step 4: Get Certified (Optional)

If you really want peace of mind, take the next step and get your app certified by TRUSTe. It's not strictly necessary (Google doesn't even require a privacy policy – but California does, so write one!), but it provides users with an additional layer of confidence, and verifies that you've done your job right.

Having a mobile app privacy policy doesn't guarantee you won't get into trouble. But not having one is just asking for litigation.

The ReadWrite DeathWatch is known for serving up plenty of doom, gloom and grumpiness. But for the Holiday Season, we're taking a slightly different tack - highlighting companies, technologies and perspectives that have managed to cheat death, surviving even after many observers wrote them off.

Ed Robben's LinkedIn Profile page lists his title as "SVP & CIO, Fossil." The irony of that title isn't lost on the latest wave of doomsayers predicting the end of the CIO.

In January, retailer J. C. Penney named Kristen Blum its new CTO. At the same time, it let Robben – its CIO – go. The newly rebranded JCP was too small for two sheriffs, and the CTO was running the show.

Did that send a bigger message to the technology world? A message that the CIO - as we understood the role - was a dying breed?

Plenty of people seem to think so.

At a September event in Sydney, a Forrester Research analyst described the CIO position as "potentially under threat". A different analyst at the same event called out the emerging Chief Mobility Officer (not to be confused with a Chief Marketing Officer) as one of the primary usurpers. The logic behind that assertion, it seems, is that CIOs are traditionally PC-focused, while CMOs are gaining power with the rising tide of mobile devices in the enterprise. A report by Getronics claimed the real threat comes from increasingly tech-savvy CFOs.

Alphabet Soup

There's no shortage of theories about who's gunning for the CIO, but they all miss the point. Yes, the CIO's role is changing, but so is everyone else's. Marketing is buying software. CFOs are shifting from purchases to subscriptions. It's a wacky world out there, to be sure, but the CIO still has plenty to do - and a lot of value to bring to the party.

The biggest danger to the CIO is tying the position's definition to the devices, rather than the responsibilities. Once upon a time, the CIO kept the mainframes running and – in a very literal sense – "kept the lights on" in the datacenter. These days, many of those services are being pushed to the cloud. If you view the CIO as the person in charge of making sure someone reboots the servers on schedule, she's obsolete. If the CIO is the person who keeps mission-critical technology running smoothly at the best possible cost – while grabbing opportunities for innovation and competitive advantage – she's still an invaluable asset.

Businesses play fast and loose with C-level titles all the time, particularly when technology is involved. Before taking the CTO position at JCP, Kristen Blum held CIO titles at PepsiCo and Abercrombie & Fitch. And in her spare time? According to J.C. Penney, "She is a member of the National Retail Federation's CIO Council and a Governing Body Member of the CIO Summit, among other leadership roles in CIO-focused organizations."

CIO vs. CTO

So what's the difference, anyway? It depends on the organization. Traditionally, most businesses view the CIO as a cost center, charged with maintaining system reliability, while the CTO is seen as a potential revenue center, building new technologies.

In some companies (for example, software outfits), the delineation between the two jobs and the skill-sets required of them is fairly clear: IT versus product development. In other companies, much of the development may be internally focused, and internal and external systems may be intimately connected, blurring the lines.

In those cases, businesses may find that a single chief executive managing technology can be more efficient. That's likely what happened at J.C. Penney. We'll probably see a lot more consolidation in 2013, but businesses could just as easily call their surviving executives CIOs and have the same outcome. After all, Kristen Blum is Kristen Blum, regardless of her title.

Chief Accountability Officer?

Whatever the letters, one thing that won't change is the CIO's most basic function. One man who's been both a CIO and CTO described it this way: "I don't care what you call me. I'm still the guy with his ass on the line when something breaks. Someone needs to take responsibility for whatever systems you're using, and it's sure as hell not going to be the Marketing guy."

That's exactly right. The CMO may become the new technology customer, but someone else needs to be responsible for oversight, integration, performance monitoring, compliance auditing and other technological evaluations – all functions that are squarely in the wheelhouse of the CIO.

The CIO function will continue to evolve, and we'll probably see a reduction in line-of-business CIOs as platform consolidation occurs, but the job isn't going anywhere.

GPS-based Augmented Reality is great for a mobile game like Ingress, but it won't help you fix your car.

New software from PAR Works has a visual take on Augmented Reality, bringing the benefits of the concept to an entirely new class of applications. It might even work its way into gaming, too.

Current implementations of AR usually tie into your phone or tablet's location features (GPS, compass, etc.) to determine exactly what to show you. In some cases, like an interactive subway map, this is exactly what you need, but there are times when you want something more precise – or times when location doesn't matter at all.

Companies like Aurasma have brought some really interesting AR applications to 2D picture recognition, and they have the potential to transform marketing.

But we live in a 3D world. Imagine being able to tag 3D objects and have your overlays viewable from any angle. That's what PAR Works claims its Mobile Augmented Reality Solution (MARS) can do.

What MARS Does

The video above simulates an application an automaker might ship with its cars. Open the hood, start your camera and tap anywhere for support information. Turn the phone upside-down, lean in from the side, or take a shot from beneath the car, and it still works.

There are plenty of applications for this sort of technology, usually centered around large, fixed objects that could be viewed from multiple angles. Imagine a virtual tour for an installation like the Space Shuttle Endeavor that worked from any angle and any distance. Or a construction site that allowed workers to pull specific schematics with a click, regardless of whether they were two blocks away or inside the building.

How MARS Works

MARS' bread-and-butter is the way it translates 2D images to 3D point cloud models (the type you normally get when you put something through a 3D scanner).

Creating a MARS overlay goes like this:

Upload 20-30 2D images of a building, object or location, taken from different angles.

In about 2-3 minutes, MARS renders that into a 3D model. This happens on the back end, and you never see the model.

Choose one or more of your 2D images and tag as many hot spots as you want with URLs or other data.

MARS applies those zones to its model, and the user can now view AR content from any angle.
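PAR Works hasn't published its API, but the four-step workflow above can be sketched as a self-contained simulation. Every name here (`MarsProject`, `add_image`, `render_model`, `tag`) is invented for illustration, not taken from the actual MARS SDK:

```python
from dataclasses import dataclass, field

@dataclass
class Hotspot:
    """A tagged zone on a source image, carrying a URL or other payload."""
    x: float
    y: float
    url: str

@dataclass
class MarsProject:
    """Toy stand-in for the MARS pipeline: collect 2D images,
    'render' a back-end model, then tag hotspots."""
    images: list = field(default_factory=list)
    hotspots: list = field(default_factory=list)
    model_ready: bool = False

    def add_image(self, name: str):
        self.images.append(name)

    def render_model(self):
        # MARS wants roughly 20-30 images before it can build a point cloud
        if len(self.images) < 20:
            raise ValueError("need at least 20 images")
        self.model_ready = True  # back-end step; the model itself is never exposed

    def tag(self, x: float, y: float, url: str):
        if not self.model_ready:
            raise RuntimeError("render the model before tagging")
        self.hotspots.append(Hotspot(x, y, url))

# Usage: 20 photos in, one render, one tagged hot spot
project = MarsProject()
for i in range(20):
    project.add_image(f"engine_{i}.jpg")
project.render_model()
project.tag(0.4, 0.7, "https://example.com/oil-filter")
print(len(project.hotspots))  # 1
```

The ordering constraint (no tagging before rendering) mirrors the workflow described above, where hot spots are applied to the finished model.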

MARS' Limitations - And Its Potential

For industrial and commercial uses, it's a great idea. For most consumer apps, including gaming, it's pretty limited on its own. In an online scavenger hunt, for example, users could fake out the system by snapping a picture of a photograph. And if you wanted to create a MARS-enabled version of something like Ingress, good luck uploading 20 or more images for each of your thousands of portals.

Of course, the MARS system isn't designed to stand alone. Developers still have access to all of the phone's other functionality, so they could choose to combine location-based services with PAR Works' visual model. A scavenger hunt or a "sniper" game might require visual confirmation plus physical proximity within a viewing distance, allowing users to find creative ways of targeting a tough-to-find objective.

A tourist app might query your general location, then pull down all visual maps matching "Times Square" so you can search for restaurants by storefront. And for that Ingress-like game, you could always distribute the load, letting users upload photos and create their own hot zones.

MARS' Future?

MARS certainly won't be a cure-all for all developers' AR ills, but it looks like it might be a powerful tool. Until January 31, 2013, PAR Works is running a $25,000 developer contest, and it has some 250 coders in the program. It will be interesting to see where developers take the MARS platform.

Holding on to old data slows applications, increases storage costs and backup times, and dramatically increases the danger of attacks. A good data disposal policy can reclaim some of your budget and help you sleep better at night.

For the sake of argument, let's assume your company already has a data retention policy. If it doesn't, stop reading right now and make one. No one wants to be left in the lurch when auditors come calling or a client claims you didn't pay that invoice back in 2011.

But what about the other side? Is there such a thing as too much data?

Absolutely.

Why You Need To Do It

According to the Compliance, Governance and Oversight Council, nearly three quarters of all data stored in an organization has no current business use. If that seems like a lot, consider the forms that data might take. The biggest and scariest culprit is email, which often contains sensitive personal and client information, as well as multiple versions of files forwarded as attachments. Email is a horrible storage and versioning system, but it's one of the most popular.

Then there's the problem of department-specific data silos, which often hold redundant records that can be orphaned. Imagine your HR, Marketing and Legal departments each keep separate copies of employee records. For compliance's sake (or, more likely, because you never got around to integrating your systems), those records are all stored in separate systems. If HR terminates an employee but the information doesn't sync, you've just created orphans in the other systems that may last forever.

On the other hand, maybe you've done it right. Your records share a common repository and each department has properly permissioned views.

You still might be in trouble.

HR might need to retain certain data after a termination, but retaining other sensitive information might actually be illegal. If you're in a highly regulated industry, you're probably aware of these restrictions. If you're not, you may not know about them until there's a lawsuit after a breach.

Don't forget about the storage issue. Slashing your storage by 50% to 75% would save a lot of cash. The CGOC estimates a savings of up to $50 million in some enterprises. In some highly virtualized enterprises, storage costs can account for as much as 40% of the total IT budget. Plus, everything – from record searches to backups – will run faster.
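To make that arithmetic concrete, here's a back-of-the-envelope calculation. The 40% storage share and the 50%-75% reduction range are the figures cited above; the $10 million IT budget is invented, and the assumption that savings scale linearly with data stored is a simplification:

```python
it_budget = 10_000_000   # hypothetical annual IT budget
storage_share = 0.40     # storage as a fraction of IT spend (high end, per the CGOC)
storage_cost = it_budget * storage_share  # $4,000,000

# The 50%-75% data-reduction range cited above, assuming storage
# cost scales roughly linearly with data stored
for reduction in (0.50, 0.75):
    savings = storage_cost * reduction
    print(f"{reduction:.0%} less data -> ${savings:,.0f} saved per year")
```

Even at the conservative end, that's $2 million a year back in the budget for this hypothetical shop.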

You're on board. Less data means less risk, faster systems, and more money.

How Do You Get Started?

Create A Policy

This might sound obvious, but the first step toward disposing of your data is to create a data disposal policy. It should mirror and integrate with your data retention policy, as well as any other physical destruction (e.g., shredding) policies you follow. You don't want anything falling through the cracks.

Don't try to make decisions on your own. Each department should have input, and the final policy should pass through legal and compliance reviews before landing on the CEO's desk. Everyone needs to be on board.

Assume The Worst

Try to minimize the amount of effort required by employees. For example, auto-archiving emails past an age threshold will point out inappropriate use pretty quickly. One CTO of a mid-sized firm remarked that when his company moved from POP to IMAP and began archiving older emails, his sales department panicked. "They'd been storing customer data in emails and spreadsheets instead of using our CRM system. We were storing sensitive data without gaining any value, and our sales reps weren't doing their jobs." There will always be room for human error, but prevention will ease the cleanup burden after the fact.
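An age-threshold sweep like the one described is simple to script. A minimal sketch, with an invented 90-day cutoff and a toy mailbox (a real deployment would hook into your mail server's retention features rather than hand-rolled code):

```python
from datetime import datetime, timedelta

ARCHIVE_AFTER = timedelta(days=90)  # hypothetical policy threshold

def partition_messages(messages, now):
    """Split (received_date, subject) pairs into (keep, archive) by age."""
    keep, archive = [], []
    for received, subject in messages:
        target = archive if now - received > ARCHIVE_AFTER else keep
        target.append(subject)
    return keep, archive

now = datetime(2013, 1, 1)
inbox = [
    (datetime(2012, 12, 20), "Q4 invoice"),       # 12 days old: stays
    (datetime(2012, 6, 1), "customer_list.xls"),  # the kind of data that shouldn't live in email
]
keep, archive = partition_messages(inbox, now)
print(keep)     # ['Q4 invoice']
print(archive)  # ['customer_list.xls']
```

The point of the sweep isn't just reclaiming space: whatever lands in the archive bucket is exactly the material that surfaces misuse, like the CRM-dodging spreadsheets in the anecdote above.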

Consider The Hardware

Different types of data require different disposal methods. Medical records or confidential design documents may require physical destruction of a disk or a magnetic degaussing. Old tweets and press releases probably need only a simple overwrite. If you're still storing a mix of data on the same physical disks, this might be a good time to change that.

The disposal methods you choose will be based on your industry, so your Legal department is the ultimate authority, but you can start your research with the NIST's Guidelines for Media Sanitization.
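One way to keep those decisions auditable is to encode them as a classification-to-method table. The tiers below loosely follow NIST SP 800-88's clear/purge/destroy categories; the specific mappings are illustrative only, since the right answers depend on your industry and your Legal department:

```python
# Illustrative mapping of data classes to sanitization methods,
# loosely following NIST SP 800-88's clear/purge/destroy tiers.
DISPOSAL_POLICY = {
    "public":       "clear",    # e.g., old tweets, press releases: simple overwrite
    "internal":     "clear",
    "confidential": "purge",    # e.g., design documents: degauss or similar
    "regulated":    "destroy",  # e.g., medical records: physical destruction
}

def disposal_method(classification):
    try:
        return DISPOSAL_POLICY[classification]
    except KeyError:
        # Unknown data should fail loudly, not fall through the cracks
        raise ValueError(f"unclassified data: {classification!r}")

print(disposal_method("regulated"))  # destroy
```

Failing loudly on unclassified data is deliberate: anything that hasn't been classified yet shouldn't be silently disposed of, or silently kept.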

Get Service Guarantees

This is a problem even the largest enterprises sometimes face. Much of your data is in the hands of third parties, and more will be shifting that way soon. It may be their cloud, but it's your data.

Send your disposal plan to your service providers and get a guarantee that they'll abide by it. This may add costs to your contract, but failing to do so makes the policy pointless. If your provider already specializes in government or industry compliance, this should be an easy talk to have. If it's not, consider shopping around for new services.

Remember: It's A Process

You won't be able to do everything at once. Some parts of the policy may require more review than others. Some systems may require redesign. Get the low-hanging fruit first.

If you're starting from scratch, even the first steps are steps in the right direction.

OptioLabs has just released OptioCore, a secure version of Android, to handset makers. It's pretty cool, but does it mean Android is ready for the enterprise?

From a security standpoint, Android has always been a case of untapped potential.

The Two Sides Of Android Security

On one hand, it's an open and popular operating system, which means it's a prime target for hackers. According to researchers from Georgia Tech, 2013 will be the year mobile malware gets serious, and Android is vulnerable. Google's App Verification Service, which is supposed to identify harmful applications upon installation, is kind of a flop, and the majority of users don't install any third-party antivirus software.

On the other hand, Android's dominance and openness also create a market for third parties to try to fix these problems, and that's just what OptioLabs, created by Allied Minds, claims to have done. The mobile device management and security firm has recently released a hardened version of Android that includes a bunch of baked-in security features – and not just malware detection.

The OptioCore OS and administrative tools (Optio MDM) will be distributed through a series of hardware partners and software integrators. But the company was unwilling to share specifics: "We are in collaboration with numerous established, multi-national OEMs, systems integrators and software companies on various strategic initiatives and commercial activities." We'll know soon enough, as devices using the new OS should be available in late 2013, and the PR push should begin even sooner.

Lots Of Security Features

So what does OptioCore do? Pretty much everything.

First, there's malware protection. The company claims to protect against "all known Android malware variants including Rage against the Cage and other root exploits."

Second, there's auditing down to the application level, which is good news for regulated businesses.

Third, based on policies that can be stored locally or in the cloud, admins can remotely administer or wipe phones, view devices that are out of compliance, and perform all of the other features that are common to Mobile Device Management (MDM) applications. It even allows users to store different profiles on a phone, so a work wipe won't affect personal files.

That's all great to have, but it can already be done with existing software. What really makes Optio's Android different is the system's ability to tie into location-based services.

Location, Location, Location

Admins can lock down phone behaviors through PhantomLink, a service that uses Bluetooth "beacons" to determine physical proximity. If you want to disable a phone's cameras or turn off texting in a product development meeting, you can. You can also require physical presence in a location to access documents or applications, ensuring that data can't slip out the door of your office, even if the devices accessing that data go home with your workers every night.
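OptioLabs hasn't published the PhantomLink interface, but the policy logic described – features gated on beacon proximity – can be sketched in a few lines. Every name and beacon ID here is invented for illustration:

```python
# Hypothetical sketch of a beacon-gated device policy, in the spirit of
# PhantomLink. All identifiers below are invented for illustration.
TRUSTED_BEACONS = {"hq-conference-room", "hq-lab"}  # beacons marking company premises
CAMERA_RESTRICTED = {"hq-lab"}                       # rooms where cameras must stay off

def camera_allowed(visible_beacons):
    """Disable the camera whenever a restricted-room beacon is in range."""
    return not (visible_beacons & CAMERA_RESTRICTED)

def documents_allowed(visible_beacons):
    """Sensitive documents require physical presence near a trusted beacon."""
    return bool(visible_beacons & TRUSTED_BEACONS)

print(camera_allowed({"hq-lab"}))                 # False: in the lab, camera off
print(documents_allowed(set()))                   # False: off-site, no access
print(documents_allowed({"hq-conference-room"}))  # True: on-premises
```

Note the asymmetry: cameras are denied by presence (a blacklist of rooms), while documents are granted by presence (a whitelist), which matches the two use cases described above.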

If you already have an MDM solution you like, OptioLabs isn't against using it, but the vendor will have to write its own hooks into OptioCore via an application programming interface (API). That means early adopters will probably be playing around with the bundled tools for at least a few months.

The OS is also open to further customization, particularly for vertical markets with specific needs that can't be met through the MDM console. According to Brian Dougherty, OptioLabs' Director of Engineering, "OptioCore can be augmented with additional procedures and controls to create custom, domain-specific flavors of OptioCore." Security reviews for these products would happen through a third party.

OptioCore isn't perfect – someone with physical access to the hardware could still root the phone – but being able to tie into physical spaces via PhantomLink should dramatically limit the risk of intentional or accidental data leakage. If it all works, it's a massive step toward making BYOD manageable, and since it's still Android, there's a good chance it will run on phones employees actually want to bring to work.

As Silicon Valley girds for a $1 trillion wealth transfer from the enterprise software incumbents to nimble upstarts, it makes sense to look at how the entrenched players are responding to the challenge.

While some are buying up the competition while they still can or building their own startup-like operations (praying they won't cannibalize their main businesses), others are hiding their heads in the sand. For them, at least, the results aren't going to be pretty.

The Old Guard

The enterprise software market is big and messy for a reason: The "enterprise" isn't a thing, precisely. Enterprises are just big businesses - and businesses do everything. Clorox, Facebook and the Department of Defense might all be considered "enterprises," but they have very different organizational structures, revenue models and supply chains. If you're selling software to support those differences, you'd better be ready to customize.

Enter SAP, Oracle, IBM, EMC and other enterprise "solution providers." Typically, these vendors sell customers a product for $500K, charge another $1 million to integrate it and make it work, then take even more ongoing fees to keep it running. Enterprise software products scale - which is why companies are willing to invest so much in them - but they're expensive, slow-moving and complicated.

For small-to-medium businesses (SMBs), speed and cost trump scalability, so they tend to focus on separate "point" solutions. An SMB might stitch together QuickBooks, Microsoft Project, Joomla, payroll services from ADP, and a ton of spreadsheets and email to fill in the gaps. These solutions are cheap and quick, but integration between them is often glitchy, critical data can get lost in the shuffle, and they have an annoying tendency to fall apart as companies grow.

The New Guys

Over the last 10 years, Software as a Service (SaaS) delivered from the cloud has bridged the gap between the two worlds, offering scalable enterprise-class services that can be up and running in a matter of weeks, rather than months. SaaS applications are generally less customizable than their on-site competitors, but they're a lot simpler to understand, often provide better performance, and their pricing is much more straightforward. Some of the biggest enterprises in the world are moving chunks of their infrastructure to these SaaS upstarts, and many newer companies are building their entire platforms in the cloud.

Salesforce.com is the most successful example of a SaaS vendor, racing from nowhere to a multi-billion dollar valuation in just a few years. Along the way, Salesforce has proved that the cloud can, in fact, support large enterprises. The company currently manages a 25,000-seat contract with Merrill Lynch, for example, and it closed a $140 million deal with State Farm Insurance earlier this year.

Deals like that get headlines, and you'd think that traditional enterprise software vendors would be worried enough to do whatever it takes to respond to the challenge.

You'd think that, but it's not always true. In many cases, the big dogs don't seem to be paying attention, and it could end up costing them.

Quick Response

Sometimes they get it right. In HR, for example, the SaaS threat is well understood, and the response is already in motion. Workday, created by PeopleSoft founder Dave Duffield and other PeopleSoft refugees (and powered by an underpublicized but smoking IPO), provides a full suite of hosted Human Capital Management (HCM) and financial management applications that compete directly with SAP and Oracle (the company that bought PeopleSoft). SAP responded by acquiring the cloud HCM provider SuccessFactors for $3.4 billion – a 52% premium. For its part, Oracle spent nearly $2 billion on Taleo.

Game on.

The Laggards

But that's not the whole story. There's still plenty of head-in-the-sand thinking going on. Take Web Content Management (WCM), for example, the software companies use to store, edit, manage and publish their assets online. It's an absolutely essential piece of the enterprise puzzle.

In a podcast titled "The Big Shift," Gartner Group's Mick MacComascaigh declared a sense of urgency "driving attention to SaaS-based Web Content Management (WCM)." His partner on the podcast? CrownPeak CEO Jim Howard, who's been promoting SaaS as the "new" face of WCM for the past 10 years.

SaaS has plenty of benefits for WCM. It's much more marketer-friendly, for one. Service-based solutions lack some of the infinitely tweakable options you get by running on your own iron, but they make it far easier for Chief Marketing Officers (CMOs) and their minions to put together something quickly, without IT help.

According to Howard, enterprise sales at CrownPeak are heating up. "CrownPeak will experience over 60% growth this year, with about half of the growth coming from expansion in Fortune 1000 accounts like MetLife, Microsoft/Skype and Intercontinental Hotels," he claims. Howard also notes that the majority of CrownPeak's clients are companies with more than $1 billion in revenue, and that many of them plan to migrate away from in-house systems.

Nothing To Fear?

So why isn't everyone launching a SaaS WCM system? According to Tony Byrne, founder of Real Story Group and one of ReadWrite's Five Analysts to Watch, market demand hasn't met expectations. "Web CMS customers seem to want more platform-oriented systems, rather than highly productized, SaaS solutions."

That kind of conservatism makes a lot of sense. After all, your content – articles, video, contracts, code – is what you do and who you are. Outsourcing that to a service in the sky is a major leap of faith. It's also the kind of thinking that keeps established software developers entrenched. No matter how clunky a system is now, a "rip-and-replace" will always bring more near-term pain.

As a result, Byrne argues, entrenched Content Management vendors like EMC and OpenText are undergoing less of a paradigm shift and more of a hybrid evolution, trying to address demands for easier management without disrupting the current ecosystem. "We're beginning to see more traditionally on-premise CMS tools begin to become more 'cloudified,' with managed hosting offerings, including some cloud-based alternatives. To be sure, this is not the same thing as SaaS, but it does offer a kind of compromise where you can customize and extend the platform in bespoke ways, but can outsource most of the systems administration."

CrownPeak's Howard, understandably, thinks "cloudification" is nearsighted and misses the point. "What the old guard calls Cloud (or Hosted or SaaS) has the same IT bottleneck that their premise solution has. The only difference is that the 'IT guy' works for the vendor, and not the company. To be true SaaS, a company needs to design the application from the ground up to support parallel development of multiple projects, invest millions in scalable and secure infrastructure, and have services that go beyond fixing what's broken. When you install a traditional application in the cloud, you still have all of the big, expensive headaches and poor outcomes." In this view, "cloudified" solutions are just in-house software plus a hosting plan.

But is that really what's going on? The established vendors don't seem to want to clear up the confusion. Many are still selling cloud solutions like they're traditional software. OpenText Cloud's product page, for example, does a horrible job of summarizing what the service actually does and how a knowledge worker might actually use it.

The gist seems to be a murky "We do good stuff. Call us and we'll talk about how we can do good stuff for you." That might work for existing customers looking for options, but it probably won't play well with CMOs – often the new technology customer.

Winning The Battle, But...

The content management market hasn't yet fully embraced the SaaS model, but established vendors still can't afford to take a break. Their job is to stay ahead of customer demand – not just meet it.

CrownPeak and plenty of others are making beachheads in the enterprise, one department at a time. According to Howard, "The SaaS option doesn't have to be an either/or. In many large and very large organizations, SaaS can initially fit a niche need while the existing solution stays in place." That approach introduces the enterprise Marketing Department to a new, responsive company that gets the job done. Since Marketing will be signing the checks, those small sales could lead to much bigger ones down the line - threatening the long-term prospects of the traditional vendors.

The ReadWrite DeathWatch is known for serving up plenty of doom, gloom and grumpiness. But for the Holiday Season, we're taking a slightly different tack - highlighting companies and technologies that Cheated Death. Companies that might have died, but didn't.

At the plate this week is ARM Holdings, a company that was never going to go out of business, but very well might have settled for a comfortable position in a single market. Instead, it built on the low-power processing that gave it dominion over all things mobile, and now it's poised to attack Intel on the chip giant's own turf.

Where ARM Was

From its founding in 1990, Advanced RISC Machines (later renamed ARM Holdings) was a different kind of processor company. Unlike fellow chip designers IBM and Intel, ARM didn't actually manufacture or sell the chips it created. Instead, like (pre-Nexus) Google and (pre-Surface, pre-Xbox) Microsoft, ARM licensed its designs and its relationships with foundries to semiconductor companies.

Where ARM Is Now

ARM technology powers more than 90% of cell phones and 80% of digital cameras. It has a less-dominant but still substantial position in embedded devices, such as toasters, TVs, pacemakers and everything else in the Internet of Things.

And then there are the tablets. The iPad uses an ARM chip. So do the Samsung Galaxy Tab, the Kindle Fire and the Google Nexus. Even Microsoft hedged its bets with the Surface RT, the lower-cost, lower-power sibling to the Intel-based Surface Pro. There's a war going on, and ARM is selling everyone guns. If a device doesn't have a keyboard, there's probably an ARM design inside.

New Platforms

It's good to be king, but where do you go once you've cornered a market? You find another market. Instead of resting on its laurels and waiting for its lead to erode, ARM has spent the last year recruiting allies that bring the fight to Intel's doorstep.

While the Surface RT got less-than-glowing reviews, Microsoft's tentative support could eventually lead to more head-to-head competition for Windows devices.

There's also been talk of a shift toward ARM-based Macs, though you shouldn't hold your breath. Consumer Macs and Windows PCs are both on the long-term horizon, particularly in the ultraportable market, but power-gulping Intel chips still outperform ARM by a wide margin, and performance is still important for many computing applications. Surprisingly, then, the far more likely near-term expansion for ARM is in the datacenter.

Can ARM Stay On Top?

Intel sees the opportunity in mobile and embedded devices, and it hasn't conceded anything. It continues to push its low-voltage Atom processors toward those markets, and its 14nm Airmont chip (scheduled for a 2014 launch) could be very competitive. Intel also claims to be focused on the microserver market, though that may be causing some internal conflict.

One way or another, ARM will likely lose at least some of its mobile and tablet market share to Intel. The question is where. An Apple move on the iPad or iPhone would be surprising, as would a Samsung defection on anything running Android. Intel's immediate fortunes in the space are probably tied to Microsoft, as always.

Meanwhile, any losses ARM suffers to Intel in its core markets should be more than offset by the overall rising tide and ARM's potential to attack Intel's core strengths.

According to a post on Reddit (I know, I know – but stay with me on this), an Ingress player in Ohio was detained by police for his in-game actions. Specifically, he was "hacking a portal" near a police station. His phone had technical difficulties, which led him to linger by the portal/police station for a bit, catching the eye of local law enforcement and leading to the detention.

After the original post, other Ingress players responded with similar stories. One aroused suspicions by wandering around an empty parking lot at night. Another, trying to hack a portal next to an air traffic control station, had to run from the local sheriff. A third was called in for questioning after hacking a portal outside of a "high-traffic drug area."

It's In The Game

As Dan Rowinski mentioned in his earlier post, there's plenty of "creep" factor built into the game. In fact, much like geocaching (Ingress' non-digital ancestor), lurking in strange and hard-to-get-to places at odd hours is kind of the point.

Getting detained (as many Redditors pointed out, the poster wasn't technically arrested) probably adds to the intrigue, and certainly gives a player a certain amount of street cred. It could also call into question the boundary between the First Amendment and public safety.

Legal, But Risky

All of Ingress' portals are on public land. There's no law against walking past a police station, post office or airport. There are, however, very legitimate safety concerns held by the people charged with protecting those facilities and keeping an eye out for potential risks.

As one law enforcement professional joked, "I hope they don't put one of those in front of the White House." In fact, there are apparently a bunch of portals in front of the White House, embassies and other sites that could be high-interest targets for vandalism or worse.

At least Ingress doesn't require players to dig up or bury physical objects, a phenomenon that has caused some high-profile problems in the geocaching community. Still, as similar games take off (and they will), we're going to see more friction between gamers and law enforcement, particularly in full AR environments that use cameras. In addition to trespassing and loitering violations, there's greatly increased potential for distraction, perhaps leading gamers to injure themselves or others. It's all the danger of texting - plus headphones - with the added possibility of being labeled a terrorist by overzealous cops.

The Future

By all accounts, Niantic Labs has been responsible about these issues. The game doesn't encourage trespassing or dangerous behavior, like using your phone in a car. Other developers may not feel the same sense of duty, or their goals may encourage "creative" players to take unnecessary risks.

If enough negligence, trespassing, and public nuisance suits (and maybe some claims of police harassment) hit the courts, we'll eventually wind up with legislation governing the balance between gameplay and public safety. We might see an increase in no-device buffer zones around sensitive areas, or games limiting accounts to users old enough to accept legal responsibility for their actions. There could even be outright bans on AR games in certain areas.

Until then, it's up to game developers to police themselves and players to stay smart. One dumb move could lead to a ton of regulation that could really spoil everyone's fun.

The latest numbers on the video game industry aren't encouraging. But don't worry, you can probably ignore them.

In a preview of its upcoming report, research firm NPD announced its November gaming numbers. The results? Despite strong sales for the top five titles (led by Call of Duty), the industry continues to slide. While November had "the smallest year-over-year decrease we have seen for dollar and unit sales so far this year," it was still a drop, and the last year has seen an overall 11% ebb from the previous 12 months.

According to some in the blogosphere, the sky is falling and gaming is in trouble.

They're wrong. You can now return to fighting redcoats in Assassin's Creed. Here's why.

Aging Consoles

If you have any interest in playing an Xbox 360, a PS3 or a Wii, you probably already own one. The Wii U is new, but it was released halfway through November, so we won't really know what sales are like until the December numbers come through. With new hardware on the horizon, sales are supposed to slump. It happened with the iPhone 4, and it's happening here. Aside from must-have releases like Madden NFL 13 and Halo 4, gamers who are waiting for the next-gen systems slow their software purchases, too.

NPD realizes this, stating that it's "important to compare this month's results to November 2005, which was the last time the industry began to transition between console generations with the launch of a new platform." When you compare those years, the industry is up 97%. NPD also acknowledged that there's momentum building into the holidays. So in meaningful numbers, business doesn't look that bad.

What's Not Being Counted

The NPD numbers don't include digital downloads, subscriptions, or any in-game purchases. So online marketplaces like Steam, F2P games like League of Legends, and subscription-based products like World of Warcraft - also an aging platform with slowing sales, but still a strong generator of ongoing revenue - are off the table. And then there's the growing number of ad-supported casual games.

According to Wanda Meloni, Founder and Senior Analyst at M2 Research, "Retail sales alone do not provide an accurate picture of the overall market. Steam's revenues are estimated to be well over $2 billion alone. There are also publisher-specific sites such as Origin from Electronic Arts that provide consumers access to all of EA's product line, and all the mobile, social and online games that don't get counted in a pure retail play cumulatively make up well over $5-$10 billion annually."

It's pretty clear that the market has outgrown the traditional "units shipped" metric. A more accurate estimate of industry health might be gaming company revenue, though privately held companies, indie developers, and a blurring definition of the term "game" make that difficult too.

Maybe, as Ben Kuchera suggests, we should just stop trying to quantify it at all. As long as the games keep coming, someone's making money.

Under the terms of the agreement, Netflix will become the online distribution platform for Disney's straight-to-video releases in 2013. In 2016, it will carry pay-per-view versions of Disney's new, theatrically released films. Effective immediately, Netflix will also have access to a back catalog of classic Disney films for its current subscriber base.

What It Means For Disney

By cutting a deal, Disney gains a pay-per-view foothold (and likely some perks to be named later) in the biggest online video distributor without giving up anything but Dumbo and Pocahontas. Its classic freebies will serve as a powerful lead-in for up-sells, and it will retain the power to charge a fee it considers fair for premium content. The deal also gives Disney considerable leverage over cable operators that may have been less willing to negotiate a favorable revenue split.

What It Means For Netflix

The Disney deal is a major lifeline for Netflix. First, it brings reliable, popular content into the system right now, repairing some of the actual and perceived damage caused when Disney/Starz pulled out. It also shows Wall Street and other content providers that Netflix will be around for the long haul. If the mother of all content providers is willing to do a deal, other suppliers are more likely to want in as well. It remains to be seen how thoroughly Disney has locked out competitors, but Netflix will draw the new interest it badly needs.

According to Ross Rubin, Principal Analyst at Reticle Research, the deal is a very good thing. "This is, as Reed Hastings has observed regarding Amazon's investments, a gold rush, with many online video providers such as Google, Hulu, Amazon and Netflix looking for original and exclusive content, and Disney has an unparalleled brand in home video. Kids' movies are a great fit for Netflix as some of its heaviest users are parents who use it as broadband babysitting."

The agreement also formalizes what everyone knew was coming: Netflix is evolving beyond the buffet model. Premium content will remove the pressure from the baseline offering and allow all sorts of new opportunities that provide legitimate value.

For example, millions of Netflix users catch up on back seasons of still-running TV shows, only to find themselves stuck in the limbo between the Netflix catalog and the current season. That's a well-qualified sales opportunity sitting on the table. Now Netflix and content publishers can monetize that opportunity while consumers willing to spend a bit extra on a premium subscription or an a la carte purchase can stay up to date on their favorite shows.

This deal puts pressure on other video distributors to follow suit. Hulu, with its close ties to NBC, Fox and yes, Disney, will probably launch a counterattack soon.

Let's be clear. This is a win for Netflix, but Disney is in charge. Netflix's content model was getting pinched, and it needed an out. Content is still king, but the deal helps Netflix last long enough to maybe tip the scales a bit more toward distributors.

The ReadWrite DeathWatch is known for serving up plenty of doom, gloom and grumpiness. But for the Holiday Season, we're going to take a slightly different tack, and highlight companies and technologies that Cheated Death - that might have died, but didn't.

First up for the Cheating DeathWatch is Microsoft, which somehow managed to stay relevant even as the market and the media have pivoted away from the desktop computer arena that made the company rich and famous.

Where Microsoft Was

After buying DOS to enter the operating system market in 1981, Microsoft knocked out IBM and held off Apple to become the undisputed champion of the computer operating system. Along the way, Microsoft imitated and intimidated as much as it innovated, gaining a reputation as a bully - bulldozing its way to success with marketing, money, and industry leverage.

Whatever its tactics, Microsoft was successful. Windows killed IBM's OS/2, Office killed WordPerfect and Lotus 1-2-3, and Internet Explorer demolished Netscape. Microsoft was king of the tech world, closing out the millennium at its highest valuation ever.

And then everything fell apart.

As first RIM, then Apple and Google brought computer functionality to mobile devices (from smartphones to tablets), they shifted the center of the tech world away from Microsoft's desktop stronghold - and Microsoft couldn't answer. In the lucrative search market, Microsoft's bid to buy Yahoo failed, and its Bing search engine shows no signs of dethroning Google. In the enterprise, Linux was becoming a viable alternative to Microsoft's products. Only Microsoft's Xbox was able to become a leader in a new market. Pundits were increasingly ready to write off Bill Gates' creation as a dinosaur, unable to keep up with swifter, smarter competitors. The DeathWatch was on.

Where Microsoft Is Now

But Microsoft didn't give up. Love it or hate it, Windows 8 is everywhere, from PCs to phones to tablets, and people are actually taking it seriously. It's early, and Microsoft has a lot of catching up to do, but even tech insiders can now bring home a Windows-powered Surface tablet or Lumia 920 phone and hold their heads high.

Even the gaming market is looking up. Halo 4 sales hit $220 million on opening day, with killer reviews. Microsoft's E3 preview of SmartGlass displayed a well-planned move toward cross-platform gaming, and the rumored Xbox Surface could back up its mobile gaming ambitions.

Microsoft is refreshing its whole lineup. Office is finally moving to the cloud, IE 10 is completely rebuilt, and Bing has a new consumer focus, thanks to deeper relationships with Facebook, Amazon and Yahoo. There's even the first new logo since 1987.

Can Microsoft pull off this reinvention without a hitch? Not a chance. There's evidence that the momentum may already be slowing. Still, Microsoft has gone from an aging granddad to a revitalized contender in just about a year. Depending on who you ask, it might even be kind of cool.

How Microsoft Got Here

From one angle, Microsoft came back by doing what it always does - rolling in late and large. It finally understood that mobile, touchscreens and data portability were big, the same way it realized the Web would be huge back in 1997 - years after everyone else. It's the media blitz for Windows 95 or Internet Explorer 4 all over again - only with at least slightly better ads.

But Microsoft actually did something kind of bold. In its own words, it "shunned the incremental" for once. Windows 8 is a major departure from other operating systems - including Windows 7. The Surface may not be quite sure whether it's a tablet or a laptop, but it's a novel flagship for a hybrid OS that looks great in ads and challenges third-party manufacturers to do better.

Microsoft has also finally learned to adapt. It has dropped Windows prices to compete with Apple, addressed the Google Docs threat by moving Office to the cloud, and given non-PC devices a nod by expanding Windows to ARM processors.

Steven Sinofsky's replacement, Julie Larson-Green, has been with the company since 1993, and arguably had as much influence on Office and Windows 7 and 8 as Sinofsky did. She's smart, knows the lay of the land, and has a lot of friends. The question, really, is whether she can use that political capital to push the company forward. Sinofsky could be brutish, but he got things done.

Larson-Green's first task is to pull more support and innovation from Microsoft's hardware partners. The company has always prospered by getting partners to do much of the heavy lifting, but many of its biggest and most important relationships have well-earned spots on the ReadWrite DeathWatch. We're looking at you, HP and Nokia.

Those challenges are very real, and Microsoft is not out of the woods yet. But make no mistake, Microsoft is still a player.

When the Great Place to Work Institute released its 2012 World's Best Multinational Workplaces list this month, ranking the world's 25 best employers, tech companies ruled: they grabbed 9 of the 25 slots, including 4 of the top 5.

It's a nice feather in the caps of Google, SAS, NetApp, Microsoft and the other winners, but beyond bragging rights, is there a point to this or any similar lists? Don't these awards always go to rich companies that can afford to pay and coddle their workers? Isn't that why fast-growing tech companies always seem to dominate them?

To find out, I asked a Director of Human Resources for a federal agency - who asked that ReadWrite not publish his name. He told me I was looking at the lists all wrong.

"I think they're great!" he said. Just not for employees. I was looking at the wrong consumers.

Impractical For Job Seekers

"They aren't very practical job-hunting tools," he explained, "unless you're young and mobile, and willing to go where the work takes you. But for employers – particularly 'boring' employers like us – they can put numbers on qualities we usually can't quantify, and that helps us compete."

His reasoning makes sense. Hip, well-funded companies offer catering, gyms, cocktail hours and a ton of perks that look great on a website. More traditional companies, particularly those (like the government) with budget constraints, have to sell less-sexy intangibles. "We have teams that have been working together for 30 years, and we really do operate as a family. We have solid benefits, a great retirement plan, and if something goes wrong and we have to lay off employees, we do everything in our power to find them work without disrupting their lives or incomes. But that's a horrible pitch, because it's fuzzy and it's tough to prove, without talking to our employees."

The Feds Do It Right!

It turns out the best, most complete report is the Federal Employee Viewpoint Survey, which provides unparalleled transparency and detail. The survey was a great start, he says, because it provided actionable data he and other managers could use to make things better, and it gave his agency "a bit of an edge with recruiting transfers" from other agencies.

But the Great Places surveys' focus on employee trust was most exciting to him. "It's a measure of employee contentment, and I honestly think we can go toe-to-toe with the private sector on this. We may not be able to match salaries dollar-for-dollar, but if our employees are happier, applicants might take a closer look at why. And then we have a discussion."

Why Do Tech Companies Dominate?

So why do tech companies dominate the list? Is it simply because they have the money to spend, or because their brand recognition makes them desirable?

He didn't think so. "Part of it might be their lack of legacy, sure. But mostly, I think it's because they're used to competing for the best, and they know what that takes. The good ones, like Google, know it takes an actual culture of support, and they've built that, as much as that makes my life hard when I have to recruit tech people in California. They're just ahead of the curve, and most of us have to catch up."

So there you have it: the tech companies that top these lists really are likely to be good places to work - no matter what your salary.

Dos And Don'ts

Finally, if you really must use the Best Places To Work lists as job hunting tools, be sure to follow these Dos and Don'ts:

Don't use Best Places To Work lists as a primary job-hunting tool.

Don't worry if your future employer isn't on a list - it might still be the perfect spot for you.

Do use these lists to find new leads.

Do consider high-ranked employers that you might have otherwise dismissed.

Intel's success and a shrinking PC market have seriously hurt the No. 2 chipmaker. Does it have enough fight left to reinvent itself?

The Basics

In 1982, Advanced Micro Devices cut a deal with Intel to provide secondary manufacturing services for IBM. Under the terms of the agreement, AMD could manufacture Intel's 8088 chip, which IBM used for the now-legendary IBM PC. The agreement created a rivalry that would last through more than a decade of court challenges, as AMD continued to manufacture low-priced chips based on Intel's designs.

In the mid-1990s, AMD began to find its own identity with the K5 processor, a Windows-compatible chip based on in-house designs. The K5's low price gained it some acceptance from PC manufacturers and hobbyists building budget systems. In 1997, AMD released the much more powerful and successful K6 chip, which offered a substantial performance edge for the money over Intel's Pentium II. The K6 created major demand in both the budget and gaming sectors, and its price/performance advantage made AMD a solid second option for PC owners. AMD strengthened that reputation at the high end when it merged with graphics processing specialist ATI Technologies in 2006, adding Nvidia to its list of competitors. For much of the early-to-mid 2000s, AMD was the go-to for high-end gaming systems, both custom-built and vendor-sourced.

When it acquired ATI, AMD was on top of the world, with a market cap topping $18 billion. But the realities of the integration softened the joy, the recession of 2008 took a toll, and 2012 has been brutal. Last fall, AMD laid off 12% of its workforce after lower-than-expected results. This year, it's cutting another 15% after losing $137 million in the third quarter. As of November 20, AMD's stock has tumbled more than 75% in just six months. A Reuters article recently revealed that AMD has enlisted the services of JPMorgan Chase & Co to "explore options" that could include a sale of all or part of the company. Officers have downplayed the significance of the bank's work, stating that AMD is "not actively pursuing a sale of the company or significant assets at this time." But they haven't pulled it off the table, either.

The Problem

AMD has pointed to a number of temporary and very correctable issues that have contributed to its current troubles. For example, a glut of Llano chips on the market is dampening demand for upcoming products. While that certainly doesn't help, AMD's real problems are simpler and more intractable:

AMD is a PC company. Intel is winning the PC war. And the PC pie is getting smaller.

In the PC CPU space (still the company's bread and butter), AMD has dropped the ball. Bulldozer was a massive disappointment, Intel has strung together a pair of clear winners in Sandy Bridge and Ivy Bridge, and the performance crown won't be leaving Santa Clara any time soon.

Could AMD fight its way back to desktop dominance? Maybe, if Intel stumbles and AMD knocks it out of the park - but at what cost? These days, hard drives, graphics processors or other components are more likely to be the performance bottleneck - so the CPU matters less than it used to.

This is neither a secret nor a surprise. AMD management has seen it coming for some time. It just hasn't reacted fast enough. In AMD's latest earnings report, CEO Rory Read acknowledged that “It is clear that the trends we knew would re-shape the industry are happening at a much faster pace than we anticipated. As a result, we must accelerate our strategic initiatives to position AMD to take advantage of these shifts and put in place a lower-cost business model. Our restructuring efforts are designed to simplify our product development cycles, reduce our breakeven [sic] point and enable us to fund differentiated product roadmaps and strategic breakaway opportunities.”

In other words, AMD needs to do something new, and fast.

The Prognosis

AMD can't – and shouldn't – abandon its CPU business unless it finds a serious buyer, but it will almost certainly focus elsewhere, looking to its embedded products for growth. It looks like the company will try to crack the mobile handset market, as well, but based on the beating TI took before recently calling off the mobile charge, that could be an expensive, low-probability gamble.

As others have pointed out, selling AMD outright would open a very large, well-funded can of worms and could fan the flames of a legal war. Still, discrete pieces of the company's intellectual property could generate lots of cash.

Can This Company Be Saved?

There's only one way AMD gets out of this more-or-less intact. If a large enough company with substantial legal resources and a presence in the PC, tablet and smartphone markets wanted to diversify its hardware options, AMD would be a relatively cheap and easy way to do it. Unfortunately, there are only two potential buyers who meet those criteria, so there aren't a lot of options.

After 100 years of innovation, Sharp is the worst-performing stock in the world. In the end, its lack of consumer focus sealed its fate.

The Basics

A hundred years ago, a metalworker and inventor named Tokuji Hayakawa opened his first shop in Tokyo. On the backs of his "Tokubijo" snap belt buckle and "Hayakawa" mechanical pencil (also called the "Ever Ready Sharp"), he built an industrial empire that produced Japan's first crystal radio, the world's first mass-produced microwave oven, the first LCD calculator and some of the earliest solar panels. The company's stock price peaked in 2000-2001, when it released the first camera-equipped mobile phone and the groundbreaking AQUOS LCD televisions.

Like most consumer electronics companies, Sharp took a hit in late 2001, but it didn't recover as well as its competitors did. As market power shifted away from its strengths, Sharp began to suffer, despite maintaining technical leadership in LCD display technology. Sharp is currently the worst-performing stock in the world, its credit rating has been cut to "junk" status, and its leadership has cast "material doubt" on its ability to survive. All options are on the table, and more than 60,000 jobs are at risk.

The Problem

How did things get so bad? A sluggish economy certainly played a role, and the emergence of Samsung didn't help, but in the end, it was Sharp's own lack of vision that drove the company into irrelevance.

To be fair, the Japanese economy is struggling, and electronics companies are bearing the worst of it. Panasonic, the revenue leader in Japanese electronics, has lost $16.79 billion in the past 12 months, and on November 11, Moody's cut Sony's credit rating to one step above "junk" status. A reasonably strong yen hurts exports, the television market is glutted, and Japan's domestic economy began shrinking for the first time in a year.

A weak global recovery, troubles at home, and an emergent Samsung have hurt the entire Japanese electronics sector, but Sharp is bleeding nearly three times as fast as Sony (also on the ReadWrite DeathWatch list), as a percentage of total revenue, and its prospects are far more dire. The reason is more than just poor execution. It's a lack of corporate identity.

Sharp has all but given up on its consumer brand and accepted a role as a manufacturer of commodity parts. By all accounts, Sharp makes fantastic screens – that's why Apple sources iPhone displays from the company. But in an industry as fickle as consumer electronics, being a supplier means losing control of your own destiny: as you move down the supply chain, margins get slimmer and your leverage shrinks. As Samsung has shown, the only path to long-term success as a widget maker is to sell those widgets in your own products.

On paper, Sharp does just that. In practice, however, it's pretty obvious that the company is barely showing up to the fight for consumers. On its U.S. smartphone website, Sharp gives a completely unenthusiastic pitch for two of the world's least exciting phones. Highlights (pulled directly from the site) include:

Wi-Fi

A QWERTY keyboard

Touch-screen navigation

"A huge selection of downloadable Android applications."

In other words, features shared by every Android phone made by any manufacturer in the last three years. Compare that to Samsung's Galaxy S III blitz or Nokia's PureView ad campaign.

Sharp doesn't even play to its strengths very well. For all the buzz it generates with announcements like the world's biggest LED TV, Sharp has been incredibly unsuccessful at branding AQUOS televisions as a viable alternative to Samsung and Sony, and its oddly Sim City-esque LCD monitor microsite shows a complete lack of understanding of how to sell products - even without the typos (copyedit on "exsamples," please).

The Prognosis

Sharp's own leaders have acknowledged that it can't survive on its own. The most probable course of action is acquisition of its manufacturing assets, either through investment (like a proposed deepening of ties with Foxconn, the industry's favorite sweatshop) or a post-bankruptcy sell-off. In either case, expect to see a reduction in consumer brands produced by the company.

Can This Company Be Saved?

Given its own ties to Apple and its previous investments in Sharp, Foxconn will probably play a role in Sharp's future. The coupling could work. Foxconn's financial backing and massive labor machine, combined with Sharp's precision display expertise, could create high-quality, moderately priced televisions perfect for a high-end consumer brand looking to make a big splash. But again, that puts Sharp's fate in someone else's hands.

The Qualcomm Tricorder X Prize promises to turn everybody into a Doctor McCoy by 2016. It could change everything about the way we practice medicine. But are we ready for it?

If you're a redshirt thinking you might have a case of Rigelian Fever, where do you go for advice? Whether you're planet-side or in the sick bay, odds are you're going to start with a tricorder. In the Star Trek universe, the tricorder was a non-invasive, handheld device that scanned geological, meteorological, and biological data. When used by medical personnel, the tricorder could diagnose all but the rarest diseases.

The tricorder inspired X Prize Foundation Chairman Peter Diamandis, who wondered whether – with enough incentive – engineers could build a medical diagnostic tool that could monitor health and identify illness on the spot, without a doctor's assistance. Add in $10 million in total prize money from Qualcomm and you have the Qualcomm Tricorder X Prize, launched in January 2012.

In an interview earlier this year, Diamandis described the ideal healthcare tricorder:

"A device that is easy and friendly to use that a consumer–whether that's a mom at home at 2:00 in the morning or someone on the road–can use to diagnose themselves without having to go to a doctor or a hospital. It's really about reinventing the future of healthcare."

Any winning device will have to make such diagnoses routinely and accurately.

Sounds great, right? So is there a catch?

Maybe.

Some people in the medical industry are a bit concerned. The tricorder's goal is the "deskilling" of routine medical checks. Ultimately, it aims to remove doctors from the treatment equation for simpler ailments altogether. Diamandis gets major points for going big in an industry as conservative as healthcare, but are we legally and physically prepared for the consequences?

There are plenty of things that could go wrong. What if a device misses a melanoma a visual inspection might have caught? Who's legally responsible for malfunctions? Will a false sense of security cause users to skip routine physical checkups?

The Thermometer Test

To get some perspective, I ran the tricorder idea past a friend of mine who's an epidemiologist. He shared my enthusiasm, praising "any technology that gets people more involved managing their own health," but immediately applied some cautionary brakes. "It's a fantastic idea, and a big step when everyone is thinking incrementally. But it also kind of worries me."

The reason? It might not pass "the thermometer test."

To a doctor, the home thermometer is the single best piece of home medical equipment ever created. "It's a perfect triage device," my friend explained. "It provides accurate, objective information that medical personnel can use to make judgments. It's a pretty good barometer for judging the severity of many common ailments, but it doesn't try to diagnose anything."

In other words, a thermometer tells you if you're running a fever, but it doesn't try to tell you why. It provides critical data to healthcare personnel, but leaves the decision-making in their hands.

That can make a world of difference, particularly in situations requiring counseling or judgment of the patient's mental state. "I'd hate to see physicians removed from the discussion. It's not a matter of job security. It's a matter of full-circle patient care." He went on to suggest that, absent a doctor at the point of diagnosis, users might be more likely to pursue treatment options online, or that a diagnosis might unhinge a mentally fragile patient who could harm himself or others.

Concerns notwithstanding, self-diagnostic technology is coming, in condition-specific devices like a bra that detects breast cancer and general-purpose machines like the tricorder.

The first-gen Scanadu Scout, which falls nicely within the thermometer test zone, should be hitting the market in late 2013.

Like all data, medical information is useful only if it's used properly. Here's hoping that patients accept some responsibility along with their flashy new devices.

Assume you're a front-pager, a specialist in need of a certification, or someone else who really needs a degree to make career progress. Have we reached the point where online universities like the University of Phoenix or Kaplan University are worth your investment and time?

If you answered "yes," there's a lot of data to back you up. Online universities like the University of Phoenix, Kaplan University, AIU, and Ashford are fully accredited, and thanks to heavy marketing pushes, they're becoming household names. And to shore up their offerings, most of the higher-ranked online schools offer hybrid classroom/online coursework. Just as important to gaining legitimacy, the online model is increasingly embraced - albeit in limited form - for classes at more prestigious traditional schools, from the California State University system to Massachusetts Institute of Technology (MIT).

Getting Beyond The Sketchy History Of Online U

But there are still big problems with the digital classroom, and graduates of online universities garner little respect.

The industry's sketchy past is one factor. Once upon a time, distance learning was the domain of Sally Struthers and the ICS correspondence school, where you could "learn gun repair by mail!"

That's changed, of course. Products like Saba LMS, Moodle, and even iTunes U have brought e-learning into the mainstream, and most major universities now allow at least some portion of coursework to be completed online. In a world where Skype conference calls are the new business normal, is there any logical reason why the best of the new online universities can't rival their traditional counterparts?

Yes, but not for the reasons you might expect.

The most common, most quantifiable criticisms lobbed at online universities concern lackluster graduation rates, test scores and post-graduation employment statistics. But many employers are willing to hire from traditional schools whose stats are no better than the online outfits, so what's the problem with the Internet schools?

Why Online Grads Still Don't Get Respect

To get some real-world perspective, I spoke with a San Francisco-based recruiter for a large government agency and an executive recruiter for Washington, DC-area nonprofits - both of whom asked that their names not be used. Their opinions weren't particularly promising for online schools and students.

According to the government recruiter, "For a decent job with upward mobility here, a University of Phoenix degree wouldn't get someone through the door unless they had something else really good on their resume. A recent USC grad with no experience could get an interview, but I'd be shocked to see a recent online grad get the same."

The nonprofit recruiter was a bit more forgiving, but she agreed. "In the not-for-profit world, employees' most important assets are their relationships, so I wouldn't discard a good candidate based on an online degree. Still," she admitted, "it's not ideal. It doesn't set a baseline expectation, for me or for the people he or she will meet in the field. If I just need to check off a 'degree' box on a requirements form, online will do, but if two candidates are similar, I'm going with the one from Stanford or UVA."

Brand And Social Interaction

When pressed for the reasons behind their opinions, both recruiters felt the differences between online and traditional schools boiled down to two things: brand and social interaction. Most traditional schools – from the local community college to the University of Chicago – have clearly understood reputations, strengths and weaknesses. An engineering degree from Carnegie Mellon or a veterinary science degree from UC Davis carries the weight of an established program with a history of results. Without that track record, the online schools' GPAs, class standing and other performance metrics seem like arbitrary numbers.

Ultimately, education is a promise, rather than a product.

Academia is not like the business world, in which an online startup can trounce an established business by building in the cloud and delivering commodity goods with less overhead. Reputation and consistency matter when building trust in hard-to-quantify results. Ironically, innovation, lower costs, inclusion and reduced barriers to entry can actually hurt the prestige of online schools. One of the key functions of a selective college is to do some pre-sorting of applicants: "if you got into Yale you must be smart." Giant online schools that accept pretty much everyone may be democratizing education, but they're not helping employers or anyone else separate out the best and the brightest.

Accreditation is a step, and e-learning is a tool, but they aren't sufficient on their own. Over time, the best online schools will have to build portfolios of successful graduates and amass enough alumni performance data to distinguish themselves from diploma mills.

In short, online colleges have to build their reputations just like offline schools do. It's taken centuries for the top schools to cement their positions; it'll take decades at least for online schools to do the same. Until that happens, recruiters would rather play it safe and go with the well-known brands.

Are The Social Issues Solvable?

Recruiters also worry that online schools can't reproduce the critical social environment of traditional colleges. According to the government recruiter, hiring decisions are about more than weighted scores, and college provides a lot of soft-skills training that is just as important as test-driven learning.

"There's more to getting an education than completing a class. Social interactions, extracurricular activities, just being able to get yourself out of bed and into class every day – these are all learning experiences with a direct effect on someone's ability to become a productive employee and work on a team." Online education doesn't really address any of these factors - at least not right now.

At one point, the Multiple Listing Service was innovative technology designed to help buyers find homes. Today, though, it's only getting in the way. With more users finding homes on national search sites, arcane rules and local focus have made MLSes as relevant as the binders they replaced.

The Basics

For more than 100 years, real estate brokers have worked together to help sell listings. In the pre-digital days, brokers kept "listing books" full of their local board's available properties, which allowed each broker to promote a wider range of homes than he or she represented individually. The result was faster-moving inventory for the entire board, and everyone won. The service was (and still is) opt-in: brokers choose which listings to cross-promote. But results don't lie, and the vast majority of properties wound up in the book.

Eventually, the listing book gave way to a brokers-only digital database, which improved performance and accuracy. When the Web came along, real estate boards acknowledged the necessity of a consumer interface, providing plug-in solutions that let users search their inventory, or licensing third-party software vendors to provide similar services. Brokers who want to develop something fancier can request direct data feeds (often at substantial cost), subject to the board's restrictions.

Technology disruption in real estate is a very volatile topic, so let's be clear about what this article is not saying. We're not saying the Internet will eliminate the need for agents. It's already changed the way they do their jobs, but houses aren't cars, so an expert voice will always be important.

The Problem

For a very long time, even after the MLS went digital, brokers were gatekeepers to an area's listing data. Seeing available inventory meant taking a trip to the local real estate office, which guided the buyer through every phase of the search and purchase. The Internet has changed that. According to the National Association of Realtors (NAR), 88% of home buyers (and 94% of buyers age 25 to 44) used the Internet during their home search, typically long before contacting an agent.

That same NAR survey revealed that more than a third of Realtors received no business whatsoever from their personal websites. The reason for the discrepancy? Homebuyers are bypassing MLS-sanctioned broker sites for online marketplaces with more inventory and a broader customer experience.

1. Searches Are Bigger Than Boards

The real estate industry is local. Real estate searches, however, are not. MLS coverage areas often overlap. It's not unusual for a broker to participate in three or more boards, each with its own MLS, its own custom data fields, its own naming conventions and its own restrictions on how data can be searched and displayed. In many cases, the boards may even restrict "commingling" of their listings with listings from other boards, creating a logistical nightmare for brokers trying to list as much inventory as possible on a website.
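To see why board-specific feeds are such a headache, consider what a broker's website has to do just to show listings from two boards side by side. The sketch below is purely illustrative – the board names, field names and mappings are invented for this example, not drawn from any real MLS feed – but it shows the kind of field-by-field translation layer a multi-board broker site ends up maintaining:

```python
# Hypothetical sketch: merging listings from two boards whose feeds
# use different field names into one common schema. All names invented.

BOARD_FIELD_MAPS = {
    "board_a": {"ListPrice": "price", "Beds": "bedrooms", "SqFt": "square_feet"},
    "board_b": {"AskingPrice": "price", "BR": "bedrooms", "LivingArea": "square_feet"},
}

def normalize(listing: dict, board: str) -> dict:
    """Translate one board's raw listing into the common schema."""
    field_map = BOARD_FIELD_MAPS[board]
    return {common: listing[raw] for raw, common in field_map.items() if raw in listing}

def merge_boards(feeds: dict) -> list:
    """Flatten feeds from several boards into one searchable list."""
    merged = []
    for board, listings in feeds.items():
        for listing in listings:
            record = normalize(listing, board)
            record["source_board"] = board  # keep provenance for each listing
            merged.append(record)
    return merged
```

Every additional board means another hand-maintained mapping – and if a board's rules forbid commingling its listings with others, even this much integration is off-limits.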

A broker whose territory bridges the Pennsylvania/New Jersey border explained his frustration with board-specific data restrictions. "I have customers who come to me and say it's impossible to search through all of the area properties on my website, and I can't blame them. They don't care about which [real estate] boards are where. They just want to see everything within 20 miles of work, so they bounce from my site, do a search on Trulia, and if I'm lucky, they come back to me with questions. If I'm not, I lose the lead."

2. The Industry Has Moved On

National and regional search portals like Trulia, Zillow, and even Craigslist offer a wider range of inventory than brokers' MLS-specific search widgets. Typically, they can get away with this because their listings aren't part of the MLS system. Trulia, for example, actively recruits direct listings from agents, building its own national database of searchable properties that cuts through local boards' red tape. Syndication services like ListHub and Point2 allow brokers to push their listings to dozens of portals with a single click. More progressive MLSes are partnering with these and other, similar services to help brokers promote their listings and maintain data integrity. Others have blocked syndication deals, but determined brokers are pushing listings on their own.

Does listing aggregation introduce the potential for errors and fraud? Absolutely.

The Prognosis

Third-party aggregators aren't going anywhere, and the trend toward listing decentralization will continue. In January, four national franchisors entered a partnership with ListHub to try to deliver a national real estate search to end users in the face of a NAR ban on direct data feeds from MLSes. Expect to see more of this sort of maneuvering as everyone tries to create national portals.

At the same time, we'll likely see more consolidation into mega-MLSes like California's CARETS, which encompasses more than 50 boards. By migrating boards to a common technology platform and (hopefully) encouraging them to adopt similar rules, larger MLSes should make independent sites more useful, help secure deals with aggregators that actually protect data integrity, and make homes easier to buy and sell.

Can This Technology Be Saved?

The local real estate board will no doubt continue to exist, but the MLS as we've known it is already on its way out, just like the three-ring binders that came before it. The best shot MLSes have at remaining relevant is acknowledging that change is inevitable, and doing what they can to keep that change safe for their clients. If they embrace the world of syndication, consolidate where it makes sense, and position themselves as guardians of data accuracy in a distributed world, they can remain an important part of the industry and even improve it. If not, well, it's not clear anyone will miss them.

Previous ReadWrite Technology Deathwatches

Video Game Consoles: The utility of bundled apps like Netflix and Vudu seems to be slipping. An NPD study showed that one in five consumers who view streaming video on their TVs do so without a peripheral device.

Blu-ray: The same NPD study reveals that "online video is maturing" as users migrate to watching streaming media on their TVs.

If it's not a smartphone, it's dumb. Despite current global dominance, basic "feature phones" will give up the ghost in just a few years.

The Basics

After the Blackberry and then the iPhone created the "smartphone" category, we needed to call the rest of our cellphones something, and "dumbphones" sounded, well, dumb. Thus was born the "feature phone."

While some initially viewed feature phones as an in-between category – something more than a basic mobile device but less than a full-powered smartphone – the term has generally come to mean everything south of the Apple/Android/Windows/Blackberry portable-computer-as-cellphone devices that get all the media attention. There's still some disagreement, but for the sake of this post, we're talking about anything "non-smart."

In the U.S., we most frequently associate feature phones with cheap prepaid plans and stubborn parents who refuse to upgrade, but in the majority of the world, low-tech (and low-priced) remains king. According to Gartner, more than 63% of mobile devices sold in the second quarter of 2012 were feature phones.

Cost is the main reason, followed by durability. In countries without massive carrier subsidies of handsets, both advantages are magnified. And feature phones are also generally better at being, well, phones. If you spend a lot of time actually talking, a clamshell phone feels a lot more natural than squishing a Galaxy Note II up to your cheek.

They're cheap, they're durable, and they work. So why are they on the Deathwatch?

For starters, they've peaked. Feature phones are quickly losing ground to their cleverer cousins. According to that Gartner report, while overall mobile phone device sales were actually down in Q2, smartphone sales jumped 42.7%. While smartphones currently account for only 21% of handsets in the Middle East and Africa, they're on pace to break 50% in just two years. In Southeast Asia, smartphones are currently outsold 3-to-1, but sales are growing at 78% per year.

The Problem

Durable, cheap phones that don't use expensive data don't add up to a very good business. The last thing anyone on the sell side wants is users who talk on their cellphones – and do little else.

Feature phones leave little room beyond ringtones and text messaging for up-sells. Without apps and zippy interfaces for accessing them, there's little differentiation, no long-term platform lock-in and almost no carrier value. That's why Motorola recently joined Sony in winding down feature phone production. (In the short term, that helps Nokia, which is seeing modest growth in feature phone sales, but even it knows the money's in Windows.)

It's not all a supply issue. There's also legitimate demand for smartphones. In the developing world, mobile networks are often the most reliable form of Internet access, and having a phone that can take advantage of those networks is often critical. In the U.S. and Europe, the spread of social networks is helping drive smartphone adoption. And with the cheapest 4G prepaid phones dropping below $100, there's little reason to settle for an old-fashioned burner.

The Prognosis

Mass adoption of smartphones continues to drive down component costs, making feature phones even less attractive. By 2020 – even sooner in richer areas – you'll be hard-pressed to find them on the street. Within a few years, the TracFone racks at Wal-Mart will be full of low-end and mid-range Android smartphones.

Can This Technology Be Saved?

There will always be a small market for stripped-down phones, particularly in the industrial sector, where rugged design and reliable voice calls trump gesture-aware touchscreens and consumer-friendly glitter. Think Nextel. Still, the clamshells of the future will probably come packed with high-end features, and the average consumer will walk straight past them to the fun stuff.

One Laptop Per Child puts computers in the hands of the world's most vulnerable children to help educate them out of poverty. It's a noble cause championed by our brightest minds - but it doesn't seem to work.

The Basics

In the mid-2000s, faculty members from the MIT Media Research Lab set out to "to design, manufacture, and distribute laptops that are sufficiently inexpensive to provide every child in the world access to knowledge and modern forms of education." By 2006, the nonprofit One Laptop Per Child (OLPC) had created the XO, a rugged, low-power laptop with a number of innovative features, including ad hoc, peer-to-peer wireless networking, water-resistant keyboards and a solid-state hard drive. By running a Linux variant (highly customized for education) and using a unique, low-cost screen, OLPC was able to reduce the price of the XO to $200 – just within the reach of cash-strapped governments in developing nations.

OLPC's mission was simple: "To empower the world's poorest children through education." To that end, it has worked with education ministries around the world and distributed more than 2 million XOs in 42 countries. While Uruguay was the first participating country, the largest deployment by far has been in Peru, involving more than 8,300 schools and 980,000 laptops.

The Problem

The XOs have been in the field now for several years, and the numbers are starting to come in. Unfortunately, they don't seem to be working – at least not well enough to justify the expense.

The Economist called the project "a disappointing return from an investment," noting that after Peru put $225 million of XO laptops in the field, an Inter-American Development Bank study found no measurable improvement in math, reading, motivation or time spent on homework. More broadly, the study noted that "although many countries are aggressively implementing the One Laptop per Child (OLPC) program, there is a lack of empirical evidence on its effects."

OLPC has never leaned heavily on empirical evidence. According to its website, "the best preparation for children is to develop the passion for learning and the ability to learn how to learn." And the IDB study admits that "some positive effects are found, however, in general cognitive skills."

But as the Economist pointed out, any improvements just weren't worth the cost. ROI might seem like a cold measure for an educational program, but every dollar spent on XOs is a dollar not spent training teachers, building schools or subsidizing transportation, meals and other programs that encourage children to attend class. In the world's poorest regions (OLPC's target market), where average spending per student is just $48 per year and the cost of an XO could feed a family for months, ROI is essential.

At its heart, the problem comes from the top. In the video above, OLPC Chairman Nicholas Negroponte lays out a radical educational vision for disadvantaged regions that might not require teachers at all:

"What is transformation? It's not making the classroom better. It's not trying to do traditional educational technology. It's actually using the kids – and I really mean the word using the kids – as the agents of change."

According to Jeff Patzer, a former OLPC intern, that's precisely what they did in Peru. Hardware degraded faster than expected, and OLPC allowed Peru to build its own branch of the system software that was incompatible with patches. Interns were not prepared to educate teachers, and teachers were not prepared to use the XO to teach students.

"The only thing that happens is the laptops get opened, turned on, kids and teachers get frustrated by hardware and software bugs, don’t understand what to do, and promptly box them up to put back in the corner." Patzer explained.

In an interview with the Associated Press, a Ministry of Education official admitted that, "In essence, what we did was deliver the computers without preparing the teachers…The Ministry is not going to do another macro project of this type. It is not going to make multimillion-dollar purchases and distribute (computers) like candy."

OLPC may be a noble organization with a valid cause, but its methods just don't seem to be moving the needle. Like many people, I truly wanted OLPC to work - wanted to believe that it made sense. But there's no evidence that this kind of investment makes sense for poverty-stricken countries. It's time to try something new.

The Prognosis

The next few years will be rough. Internet access will continue to lag in the world's poorest areas, greatly diminishing the XO's utility, and Peru's difficulties may cause other countries to rethink the true cost of building and maintaining an ecosystem to support the devices.

At the same time, more powerful (if less rugged) hardware using standard software has come down in price and will challenge the XO in wealthier markets. Perhaps more significant, as low-cost smartphones flood the developing world, the XO will have to justify itself as more than a media consumption device. It's highly unlikely that we'll see many more large-scale installations.

Can OLPC Be Saved?

To survive, OLPC needs to take a step back to consider the "why." Its mission was based on a fuzzy notion that giving every child a laptop would magically make things better. But if the organization can accept a more involved role as an educational consultant (or find partners to do so), it could conceivably still play a part in global educational reform.

With the impending announcement of the iPad Mini, all eyes seem to be focused on making tablets smaller. But the truly unexplored territory lies in the other direction - in tablets that go big. Really, really big.

Samsung, Amazon and others have proven that smaller tablets do have value, and the impending release of the iPad Mini seems to indicate that Apple now agrees. But there's only so small a tablet can go before it turns into a smartphone.

The Limits Of Small

The area between 7-inch tablets and "phablets" like the 5.5-inch-screen Samsung Galaxy Note 2 is a no-man's-land for tablet makers. We're simply running out of room to make tablets smaller.

Ironically, the field is wide open in the other direction. The full-size iPad's 9.7-inch display is hardly the upper limit for tablets; it's just the jumping-off point. Several Android tablets sport 10.1-inch screens, and the upcoming Microsoft Surface tablets will have 10.6-inch screens.

But the search for more screen real estate hardly stops there. Toshiba is about to test the waters with the Excite 13, which breaks the 10.6-inch screen barrier without breaking a sweat. Its 13-inch screen is frankly ginormous, easily surpassing the screen size of ultrabook laptops and landing squarely in the realm of portable business computers – though still without a real keyboard and other laptop niceties.

Is Bigger Better?

Sure, movies would look awesome on such a big screen, but what's the real value of giant tablets? After all, they'd be almost impossible to lug around without a dorky GoPad, and they'd likely cost more than smaller devices.

To find out, I asked a professional interaction designer at a San Francisco interactive agency. The designer, who asked not to be named because his opinions were not necessarily those of his employer, was initially positive: "As long as you can still hold it with one hand and poke at it with the other, it should work."

As he thought through the reality of working on a giant touch screen, however, he started seeing problems.

"You can accelerate a mouse pointer to cover more ground with a smaller motion, but the main interface for a tablet is your hand, which is moving through physical space. Increased size would start to make navigating more tiresome. On an iPhone you can drag something across the screen by just moving your thumb. On an iPad, it's a wrist motion. On a bigger screen, you'd have to move your entire arm. Without doing any studies, I think 13 inches isn't too big of a deal, but if screens keep getting bigger, it may start to become less efficient to use your hands."

And then there are issues with gestures. "Increasing the size of screen objects and the distance between them could render some gestures – like pinching – impossible," he said, "or at least make them awkward, which defeats the purpose of an intuitive touchscreen interface."

The UI designer was quick to add that "these problems aren't insurmountable," and "designers are getting used to building for different form factors," but he feels that at a certain point - and that might be 13 inches - "you're looking at a different class of device requiring a different set of controls."

Already Thinking Big

This isn't virgin territory. There are plenty of giant touchscreens already in use. The 17-inch dashboard touchscreen on the new Tesla electric sedan is gorgeous, but designed for intermittent use – anything more might cause a crash. Larger touchscreens have been successful in ATMs, point-of-sale terminals, information kiosks and other purpose-built devices where visibility and simplicity trump control and interactivity.

There's no doubt a market for larger tablets, but we don't yet know how large will be comfortable to use or carry around. And it's likely that really large at-least-semi-portable touchscreens will find homes in different kinds of devices – like the Lenovo Yoga – not just what we now think of as tablets.

When the Web was still text links and tables, Adobe Flash brought us rollovers, interactive games and kitten videos. But a hard stand by Apple was the beginning of the end for the groundbreaking technology, and guess what? We'll be OK without it.

The Backstory

The early years of the Web were pretty barren, multimedia-wise. Browser inconsistencies, bandwidth disparities, perpetually evolving standards and the cowboy coding needed to hack everything together made interactivity beyond text forms a mess.

Quality online multimedia experiences were a joke. To fill the holes, ambitious developers released a slew of plug-in applications users could install to augment their experience. Some of these were specific enhancements, like allowing a browser to display a new image format, while others were entirely new environments that ran inside a browser. Over time, the best plug-ins tended to work their way into the browsers or updated HTML specifications, while lesser ones died on the vine as they became irrelevant.

The biggest exception to this rule was Macromedia Flash, a graphics and animation client plugin with its own design environment. Flash, which began as a Mac and Windows application called FutureSplash Animator, made it simple for designers to bring shrinkwrap-quality, graphically rich interactive media to Web users for the first time.

Over the next decade, Flash's powerful, simple authoring environment attracted legions of developers and designers and its user base exploded. Ad agencies and ambitious businesses jumped on the additional interactivity it added to vanilla HTML, and by 2000, Flash was unavoidable, showing up in interactive ads, pop-up menus and online video players. In some cases, it even replaced entire websites. Adobe's 2005 purchase of Macromedia further consolidated the design tool industry and gave Flash even more support.

While pop-ups and online games were the most noticeable examples of the platform's dominance, Flash started creeping into traditional business applications as well. The broad developer base and cross-platform appeal gave rise to Rich Internet Applications (RIAs) like Balsamiq Mockups, a prototyping tool of which I'm both a fan and a paid user. RIAs require installation of a client framework (in Adobe's case, the Adobe Integrated Runtime environment), but developers can push out a single application in a very short time that runs on any compatible client, which is also a big plus for mobile workers.

The Problem

In a word: Apple.

Flash's problems run deeper than any one competitor, but Apple brought down the house. When Apple released the iPhone and iPad without support for Flash, it ended a long history of cooperation between the two companies (Apple actually owned a fifth of Adobe early on) and called into question the validity of Flash's cross-platform claims. Sure, Android supported Flash, as did Windows, Linux and Apple's own Mac OS, but iOS was a glaring hole.

There were a host of other problems with Flash, from serious security flaws to performance problems (many of which Steve Jobs called out in his now-famous 2010 post), but in the end, the lack of an iOS client spelled the doom of mobile Flash.

With iOS off the table, Adobe ceded the Android market, as well. That leaves mobile developers with the task of developing redundant native apps or – as Apple and others have long recommended – apps built in HTML 5.

And there's the issue. By giving up the mobile Web, Adobe has effectively abandoned the rest of the Web, too. Why bother writing a desktop-based browser app in Flash when you can just reuse (or at least tweak and repurpose) the code you've written for mobile platforms? It took 10 years longer than usual, but Apple's refusal to support Flash exposed a truth. Technology has caught up, and we no longer need Adobe's plugin – or at least we're close. Microsoft announced a limited role for Flash in Windows 8's Metro browser. It's an acknowledgement that we're not quite Flash-free yet, but the writing is on the wall.

The Prognosis

With tablets and smartphones outselling PCs, the mobile Web is the Web, so Flash isn't an option. Developers can bridge UI differences between devices (e.g., designing for both mouse-driven and touchscreen interfaces) within HTML 5, so Flash in the browser will all but disappear.

Can This Technology Be Saved?

Flash will never return to the prominence it once had, but it will linger on the desktop for as long as there are skilled developers willing to do the work. Adobe offers solid tools that appeal to a lot of non-traditional developers, and the development environment could continue to serve those users as they build apps for other platforms. However, compared to the juggernaut of an ecosystem Flash used to be, that's a niche market, so Adobe could easily decide to bow out or sell off the product.

After backing out of plans to compete with Netflix, Blockbuster is all but done. That's not great news for the streaming-video space, and Netflix is in a rough spot. But Blockbuster's latest stumble toward oblivion isn't necessarily the final nail in Netflix' coffin.

On October 4, Dish Network scrapped its plans to revamp the Blockbuster brand and launch a subscription-based streaming-only product to compete directly with Netflix. Dish ran the numbers, evaluated its options, and (correctly) assumed it didn't have the assets to make a Netflix competitor work.

That leaves Blockbuster on the ropes again, with just 900 of its former 3,300 retail stores and no clear digital strategy. But don't assume that the math will work the same way for Netflix.

The Bad News For Netflix

Dish's decision confirms what I've been saying for some time: the flat-rate streaming market isn't a very profitable place. As I noted in Netflix Deathwatch over the summer: expensive bandwidth, second-rate content and strained relationships with content providers are par for the course for the entire industry.

Paul Sweeting, Principal at Concurrent Media Strategies, told E-Commerce Times that "…studios have long been leery of subscription-based streaming of movies because it produces the lowest per-view/per-capita return for the rights holder of any business model, and it cannibalizes higher margin businesses like pay-per-view rentals and even purchases." In the same article, another analyst predicted that flat-rate streaming may have only another five or six years of life.

The market is obviously sick, and it needs to change.

The Good News For Netflix

Troubled or not, Netflix still owns the streaming video market, and that brings advantages Dish and Blockbuster couldn't match. Most importantly, Netflix has existing content relationships that, while strained, put it in a better position than a startup.

In an October 8 analyst note, Morgan Stanley's Scott Devitt estimated that Amazon, which already has relationships with most studios, would need to spend an additional $1 billion to $1.2 billion in licensing rights to launch a similar service.

If that price is too steep for Amazon, it's probably beyond most competitors. Barriers to entry don't validate the streaming-video business model, but they do buy time for Netflix to try to sort out its problems.

Netflix is also developing original content with headliners like Kevin Spacey to hedge against expiring contracts and differentiate itself from competitors. Its margins per customer may not be fantastic, but with all those users, it has cash to invest in programming.

Eventually, though, Netflix needs to balance cheap back-catalog offerings with enough premium and custom content to create a profitable offering "good enough" to justify its prices. It also needs to keep an eye on Hulu, HBO and other content providers looking to ramp up their streaming businesses.

Put it all together, and I wouldn't want to be in Netflix' shoes. Blockbuster's implosion is a reminder of how tough things have gotten in Netflix' core business, but at least Netflix still controls its own destiny.

Even if your resume is perfect, your references are in tip-top shape and your job skills are up to date, new research suggests employers and others could still find reasons to dislike you.

Last month, Seoyeon Hong, a doctoral student at the University of Missouri School of Journalism, released a study about user perception of Facebook profile pictures. The study, originally published in Cyberpsychology, Behavior, and Social Networking, evaluated the influence of user comments and internal "social cues" and found they had a significant impact on perceptions of physical, social and professional attractiveness.

Active Pictures Are Better Liked

According to the author, “People tend to rely more on other-generated information than self-generated information when forming impressions." In short, most people want to be told what to think. You can start the ball rolling with "social cues" that give a peek into the outrageously interesting person you are – maybe a picture of yourself with a guitar or a surfboard.

Bait the hook, and users will bite. According to the report, the author "found that people with Facebook profile photos that include social cues were perceived as more physically and socially attractive than people with profile photos that were plain headshots." I do things, therefore you will like me. Bam.

Positive Comments Will Make Strangers Like You

But don't stop there. There's safety in numbers, so to get the herd mentality working in your favor, you'll want to conscript legions of friends into your positive-comment gang. "No matter what the profile owner does to tailor their Facebook page, comments left on their page from other users should be monitored as well. Positive comments are very helpful, but negative remarks can be very damaging, even if they are silly or sarcastic." So yes on the "OMG Luv the Hair!" but nix the snarky comments from your frat brothers.

Beyond pruning the obvious offenders (like deleting inflammatory posts from crazy exes), should any of this matter to you? Do you really need to cultivate a comment network and turn every Facebook photo into your own private Yelp?

Will It Keep You From Getting The Job?

To find out, I showed the report to a human resources director with more than 20 years of recruiting experience, who asked not to be named. Her reaction was fairly dismissive:

I think that the recruiter/hiring manager could certainly not help but get an impression from the profile photo (or other photos of the candidate), but I would not expect 'comments on profile photos' to be any strong special determining factor. It just seems really hard to generalize, and I don't doubt those Missouri people were able to get the results they got in a test lab setting, but I wonder about how significant their findings are for the real world.

It seems the comments angle might be overblown, and recruiters, at least, ought to be able to resist peer pressure.

Be True To Yourself

But what about the internal social cues? Could semi-subliminal messaging really work? Let's say you do decide to roll the dice and tweak your profile pics, just in case. Here's a tip from a PR pro: don't try too hard or you'll let people down.

According to Diane Schreiber, Senior Managing Director at Sparkpr, "In today's social world, it is best to provide a true perception of you. If you try to PR your profile photo and it doesn't reflect who you are, it's just going to be a disappointment in the end."

So, assuming you actually like rock climbing, a shot of you at the climbing gym might make you seem athletic. But that picture of you BASE jumping with a beer in your hand? That just makes you look like a tool.

A business' data is still its most important asset, but that doesn't mean every company needs to spend millions of dollars building and maintaining its own datacenters. The cloud has grown up, and the days of the company-owned-and-operated datacenter are numbered.

The Basics

In the pre-PC days, computers were sprawling and temperamental beasts requiring constant care and feeding. Mainframes lived in climate-controlled, secure facilities with round-the-clock surveillance and redundant power and bandwidth connections. Over time, most mainframes were replaced by more modular, inexpensive servers, but the operational requirements remained: a cold, secure room, redundancy and 24/7 management.

During the prosperous '90s, IT managers with datacenter blueprints and wads of cash set out to build their own data fortresses. Data was a critical asset that could not be left to chance, so businesses wanted to keep it safe and close. The number of in-house datacenters exploded, and to IT managers of the 1990s or early 2000s, datacenter vendors like Avocent, Veracode, and EMC were every bit as familiar as Microsoft or Oracle.

Until the past five years or so, outsourced data management was generally thought of as a "little guy" option, conjuring all-too-accurate visions of system outages, sloppy security and spiky performance. But datacenter complexity has risen and the cloud has grown up, so more and more businesses - even really, really big ones - are getting out of the in-house datacenter game and leaving it to the pros.

The Problem

Datacenter management has always been tricky, for two related reasons: servers are hot, and all technology eventually fails. All other datacenter issues (cooling, power consumption, physical space management, redundancy, and monitoring) stem from these. If anything, progress has made things worse. Every year, server manufacturers pack more processing into smaller footprints, and datacenters pack more servers into each square foot. Higher processing density has tripled heat levels in recent years.

Headaches

Balancing heat density and floor temperatures is not a game for the faint of heart. 60% of datacenter costs are electrical and mechanical, and that adds up to real money. Upping your floor temperature just one degree can save 4% in power costs, and companies have reclaimed millions of dollars per year simply by tweaking cooling. But those gains aren't free. Raising temperatures just a few degrees can cut the time you have to respond to a cooling outage and increase the risk of catastrophic failure. That's a lot of responsibility to place in the hands of an IT department that's probably better at (and more interested in) software and networks than HVAC.
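As a rough illustration of the stakes, here is a back-of-envelope sketch. Only the ~4%-per-degree figure comes from the text above; the $10 million annual power bill and the compounding model are illustrative assumptions, not reported numbers.

```python
# Back-of-envelope model of cooling savings. The ~4%-per-degree figure is
# from the article; the annual power bill and the compounding model are
# illustrative assumptions.
ANNUAL_POWER_COST = 10_000_000   # dollars per year (hypothetical facility)
SAVINGS_PER_DEGREE = 0.04        # ~4% of power costs saved per degree raised

def cooling_savings(degrees_raised: int) -> float:
    """Estimated annual dollars saved by raising the floor temperature."""
    remaining_fraction = (1 - SAVINGS_PER_DEGREE) ** degrees_raised
    return ANNUAL_POWER_COST * (1 - remaining_fraction)
```

On these numbers, a single degree is worth $400,000 a year, and a three-degree bump saves roughly $1.15 million – real money, but earned by shrinking your thermal safety margin.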

Even a datacenter operating at peak efficiency might cost more than it's worth. Few businesses have the kind of predictable, linear growth best suited to a datacenter construction plan. Mergers, opportunities, new regulations or unexpected setbacks can all influence the scope of data under management, and that can change on a dime. Not enough capacity is a logjam. Too much capacity is wasted money. The standard logic regarding outsourcing goes like this: "If it's core to your business, keep it in-house." But if you're an information company, are refrigeration and construction really core to your business?

Quality

Once upon a time, managing your own datacenter was the only way to ensure adequate Quality of Service and data security. Businesses assumed the painful realities of in-house management to keep their customers happy and their data safe. These days, that claim is pretty tough for most businesses to make. Netflix, which already runs its consumer business in Amazon's cloud, plans to move more than 95% of its internal servers to the cloud, dropping from 2,500 in-house virtual servers to about 50. The quality is there, and unless you're NORAD, so is the security. In fact, the US federal government will close more than 1,000 datacenters by 2015 through consolidation and migration to the cloud.

Larry Tabb, founder of the research firm TABB group, summed it up nicely in an interview with Advanced Trading: "Just because it's a shared datacenter, that doesn't mean it's any less safe than your own datacenter. If Citi can get hacked, and the big banks can get hacked, and Sony can get hacked, it can happen to anybody."

The Prognosis

Cloud vendors have advantages due to economies of scale and domain expertise, and they've reached parity or better on security and QoS. In the long run, in-house datacenters in all but the rarest circumstances are done. As Axcient's CEO Justin Moore pointed out, though, migration to the cloud is a gradual process. Entrenched companies with massive investments in datacenters won't move overnight, but newer businesses that have not made major investments will be hard-pressed to justify building any infrastructure of their own. Over five to ten years, as aging hardware and support systems in established datacenters come due for replacement, most larger businesses will begin to consolidate in the same manner as the federal government – pulling key data close, migrating non-critical data to the cloud, and eliminating excess capacity.

Can This Technology Be Saved?

The datacenter itself isn't going away – just the paradigm of running it internally. As storage technology continues to improve, datacenter management will become increasingly cost-prohibitive, and frankly, fewer and fewer CIOs will want the headache. It's possible that a few high-profile hacks of cloud service providers might slow the tide for a while, but ultimately, outsourcing management just makes sense.

Previous Technology Deathwatches

Video Game Consoles: The utility of bundled apps like Netflix and Vudu seems to be slipping. An NPD study showed that one in five consumers who view streaming video on their TVs do so without a peripheral device.

Blu-Ray: The same NPD study reveals that "online video is maturing" as users migrate to watching streaming media on their TVs.

Google's Project Glass has made wearable computing hip. Now a Canadian Kickstarter project wants to make it dorky. And the GoPad folks might actually be on to something.

The GoPad isn't a pad at all. It's a tablet stand plus a carrying strap plus some other hardware to encourage tablet owners to wear their hardware while they walk around. In the above video, taken from the GoPad's Kickstarter page, you can watch inventor Peter Kielland demo the unit.

GoPad Is Not A Gag. No, Really!

His George Takei-esque delivery and $1-level "Dance of Joy" Kickstarter reward might make you wonder if the GoPad is a gag, but Kielland is the real deal. He's also created the Scruzol, a simple-but-handy screwdriver/drill-bit hybrid sold on the Home Shopping Network that's actually gotten really positive reviews. He's a little odd, but Kielland is legit.

After watching the Kickstarter video, I immediately wondered who'd actually use this thing. I could think of only two potential customers: keytar players and this guy.

Clearly, Gordon Lightfoot Junior has no problem putting it out there, and a wearable tablet would free his hands for ironic beard grooming. But once the laughter faded and I dug in, I started to believe Kielland might be right. For the average street-crossing civilian, wearing a computer around your neck is pretty dumb, but there are plenty of people who use tablets at work, and something like the GoPad might actually do the trick. For doctors, mechanics, and anyone else who needs hands-free reference images (and please leave a comment if I'm wrong here), there don't seem to be a whole lot of options.

The GoPad holds the tablet in a serviceable position, and once you unhook the strap, the three-way stand is actually quite useful on its own.

GoPad Changes The World!

When I add it all up, the GoPad's biggest problem is really the marketing. Canadians tend to be a modest bunch, but Kielland isn't scared of a big statement. On the company website, he declares the GoPad the "Future of Mobile Computing." The Kindle Fire? The iPhone 5? NFC? Mere toys! The reversible/wearable/adjustable tablet strap – that's what's going to change the world! Shades of Matthew Lesko?

If Kielland had focused on the stand, rather than the wearable aspect (and released it as "The OmniStand, with optional strap," or something equally Belkin-y), it might have seemed a little less silly.

But he didn't. No matter how you dress it up, wearing a tablet around your neck is just plain goofy (and this from someone who owns a pair of Vibram Five Fingers), but despite some remarkably hokey marketing, GoPad might actually have a niche. Just hope you don't meet your future spouse while wearing it.

Point-and-shoot cameras have been a hit with consumers for more than a century, and they keep getting better. But it doesn't matter; they're doomed anyway. Even awesome image quality and all the features in the world can't save them from the mighty smartphone.

The Basics

"You press the button, we do the rest." In 1888, with that tagline, George Eastman released the portable Kodak camera, and consumers have been snapping pictures with point-and-shoot devices ever since. In 1900, he released the Brownie, a cheaper, even simpler camera that would define point-and-shoot photography for almost 60 years. Over time, consumer cameras evolved, moving to cartridge-based film, then to digital. They added zoom lenses, digital viewfinders, video recording and Wi-Fi. Still, they never lost their focus on simplicity and functionality. When you get down to it, a Canon PowerShot isn't all that different from a turn-of-the-century cardboard Brownie. Simple plus portable equals mass adoption, and that equation worked for more than 100 years.

The Problem

First, let's be clear about definitions. When we say "point-and-shoot" cameras, we're not talking about digital single lens reflex devices (DSLRs), mirrorless cameras, or any other device with interchangeable lenses. We're talking about dedicated, autofocus, fixed-lens cameras that generally fit in your pocket, like the Canon PowerShot, Sony CyberShot, or Nikon Coolpix – also called "Compact Digital" cameras.

The issue is that point-and-shoots are caught between a rock and a hard place. They can't match the quality and flexibility of DSLRs, and they can't match the cost and convenience of the cameras included in every smartphone.

Point-and-shoot quality keeps getting better, but it will never match the quality of an equally modern DSLR or mirrorless camera. There are several reasons, but the most obvious is the lens. A large, fast and appropriate lens gives a sensor more to work with, and that will never change.

But with point-and-shoots, beyond a certain threshold, quality was never the point. These days, few photos actually make it to print. Most pictures wind up on Facebook or a desktop background, and your average point-and-shoot camera packs enough pixels to print 5x7s or 8x10s of your best shots. And for most people, "good enough" is all that matters.

The real issue, though, is that smartphone cameras are also increasingly "good enough." In fact, the iPhone 4 is apparently good enough to be more popular than any point-and-shoot camera on Flickr, and the iPhone 4s is more popular than any other camera, including DSLRs. Sure, this is because of a massive distribution advantage over dedicated point-and-shoots, but that's the point. They're good enough, and everyone already has one, so why buy (and carry) a second device that's only marginally better, and far short of a pro-style DSLR model?

To some, the answer is "optical zoom," the dedicated camera's biggest advantage. Smartphone cameras, by contrast, have to get by with so-called "digital zoom," which basically means "cropping." Others require better performance in low light, or other special features.

But their numbers are limited, and high-end smartphones keep closing the gap.

The Nokia 808 can now produce 41 megapixels, far more than most consumers need, leaving a lot of room for digital zoom while maintaining acceptable quality.
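Because digital zoom is just cropping, the headroom a 41-megapixel sensor buys is easy to estimate. A minimal sketch, where the 8-megapixel "acceptable" output threshold is my assumption, not Nokia's:

```python
import math

SENSOR_MP = 41   # Nokia 808 sensor resolution, per the article
TARGET_MP = 8    # assumed "acceptable" output resolution (illustrative)

def max_digital_zoom(sensor_mp: float, target_mp: float) -> float:
    """A z-times digital zoom keeps only 1/z**2 of the pixels, so the
    deepest crop that still hits the target is sqrt(sensor / target)."""
    return math.sqrt(sensor_mp / target_mp)

headroom = max_digital_zoom(SENSOR_MP, TARGET_MP)  # about 2.26x
```

In other words, on these assumptions the 808 could crop to better than a 2x "zoom" and still deliver an 8-megapixel image.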

And smartphone makers are all working to improve camera performance - especially in low-light conditions where their tiny lenses suffer the most. Electronic trickery like Nokia's PureView oversampling, for example, helps compensate for poor image quality. Take a look at the commercial above, shot on the Nokia 808 it's advertising. That's definitely "good enough" for the vast majority of users.

The Prognosis

Smartphone camera sensors and software that have already reached an acceptable quality threshold will continue to get better. And low-profile, lower-power smartphone optical zooms are in the works. So for the mainstream audience, dedicated point-and-shoot cameras will go the way of PDAs.

Some households will still want a better camera for portraits, vacations and other must-capture events, but with prices for prosumer-level DSLRs dropping, there won't be much reason to settle for a point-and-shoot when quality is at stake.

Can This Technology Be Saved?

Sure, there will always be some people who want a pocket-sized camera with a fixed optical lens and a big battery. And camera companies will continue to pump out new and better examples. Some will even run Android, like the Coolpix S800c, or even have built-in phone functionality, like the Polaroid SC1630. But point-and-shoot is already becoming a niche, not the mass market, and it will take an unforeseen, revolutionary breakthrough to change that.

Previous Technology Deathwatches

Video Game Consoles: The utility of bundled apps like Netflix and Vudu seems to be slipping. An NPD study showed that one in five consumers who view streaming video on their TVs do so without a peripheral device.

Modern game consoles are much more powerful than yesterday’s Pong-pads, but their setup and business model remain virtually identical to the Ataris and Colecovisions of yore. But all of that’s about to change - this coming generation of game consoles will be the last to resemble anything you remember.

The Basics

In 1972, television maker Magnavox introduced the Odyssey gaming system for just under $100. It lacked sound and color, and relied on primitive plastic screen overlays to display backgrounds, but it ushered in a Golden Age. Five years later, Atari released the 2600, which IGN called “the console that our entire industry is built upon.” The Atari 2600 popularized game consoles, making home gaming a multibillion-dollar industry almost overnight. By 1982, the Atari 2600 was generating $2 billion in yearly revenue, with the competing Colecovision and Intellivision systems also performing well.

Perhaps inevitably, the boom made publishers sloppy, and game quality started to slip. Lousy games, overproduction, a tight economy and heavy competition from PC games combined to derail the console industry in the infamous Video Game Crash of 1983. Two years later, Japan revitalized the industry with the release of the Nintendo Entertainment System (NES). The NES (IGN’s #1 console of all time) offered more polished graphics and sound, more versatile controllers and much more quality control than previous systems. The public ate it up, and the gaming industry has bought tens of millions of consoles and hundreds of millions of games ever since.

The console business model hasn’t changed since. Every 5-7 years, hardware vendors release new, dramatically more powerful dedicated gaming systems. The vendors subsidize these consoles, breaking even or losing money on each sale, in order to gain wider distribution than competitors. Over time, they turn a profit through a handful of “must-have” in-house titles and licensing fees from third-party publishers. The market supports two to three major players at any one time, and Sony, Microsoft and Nintendo have ruled the roost since 2001.

The Problem

Nobody’s really happy with the current console scene. Consumers worry about backwards compatibility and format wars, and sometimes wait years to buy new systems. Microsoft and Sony worry about selling enough titles to justify their hardware costs. Publishers spend way too much money on the tools and rights to produce games for multiple platforms. Samsung and Apple want in on the action. Something has to change.

Let’s start with Yerli’s point about casual gaming. A 2011 ESA study revealed some interesting statistics:

55% of gamers play games on their phones or handheld devices

Puzzle, board games, game shows, trivia and card games represent 47% of the total market, while action, sports, strategy, and role-playing games (the bread-and-butter of console gaming) account for only 21%

But let’s not get ahead of ourselves. Console hit Call of Duty still generated several times the revenue of Angry Birds in 2011. But Angry Birds cost much less to make and distribute. That’s one reason game publishers aim to widen their scope beyond hardcore gamers.

Cloud gaming, meanwhile, is already a big enough threat to convince Sony to buy in for $380 million. Latency on all but the fastest connections makes cloud gaming a non-starter for hardcore games right now, but by the time we’re ready for a Playstation 5 in 2020, that barrier will likely be history.

The Prognosis

By the time the Xbox 720 and its ilk have worn out their welcome, we’ll have seen a complete shift in the way we buy our games, and in how we think about the hardware that plays them.

1. Death Of The Discs. The company that invented Blu-ray considered dumping it from the Playstation 4. That’s a pretty good sign that version 5 won’t have an optical drive. Physical media will go away, for good reasons. Discs are easy to lose, difficult for publishers to regulate, and – most important – easy to resell. Resold software generates nothing for publishers, who would like nothing better than to make every player a paying customer.

2. Free-to-Play Goes Hardcore. If discless consoles shut out rentals and resales, something needs to fill the void for the try-before-you-buy crowd. Enter Free-to-Play (F2P), the payment model that already dominates social gaming and Massively Multiplayer Online games (MMOs). A model that provides a baseline tier of free services (or in Ouya’s case, a set of several free levels) could expose new games to millions of new customers and dramatically reduce buyer’s remorse. We’re already seeing in-game purchases generate serious money in hardcore games, and that trend will only grow.

But Free-to-Play has a major downside. It destroys the current model of hardware subsidies paid back by large, up-front game costs. Without a guaranteed subsidy, every system sold will have to pay for itself. That means either dramatically higher up-front costs or dramatically less complex hardware.

Raising prices too high would lock out families and teens, and invite competition with more versatile and upgradable PCs. Dumbing down the box would work fine for cloud gaming, but high-end local processing would suffer. Expect to see a fragmentation in hardware offerings to meet these needs, and don’t expect the Big Three to keep making all the hardware themselves.

3. “Playstation” Becomes A Spec. Sony, Nintendo and Microsoft will continue to cut deals with cable companies, set-top-box providers (including Apple), and television manufacturers to embed special gaming processors in their devices. We might even see hardware tiers. For example, an Apple TV might be “Wii Streaming Certified,” but the latest Nintendo fighting game might require a compliant PC, or a box with more oomph. This would leave plenty of room for everyone in the ecosystem to sell different combinations of hardware and bandwidth, and where there’s potential profit, there’s industry excitement.

Look for Sony to try to beef up its TVs with exclusive Playstation content, while Microsoft inks deals with everyone else. In the words of Twisted Metal creator David Jaffe, the Big Three become “more like movie studios for video games.”

4. Tablets Are The New Consoles. If you’re going to build a seamless, cross-platform entertainment system, you might as well go all the way. Android will be a major gaming platform, with or without the Big Three, even if Ouya fails. Microsoft’s SmartGlass is already hinting at where this trend is going. By 2020, it wouldn’t be surprising if the remaining video game titans all had proprietary virtual machines running on Android that would stream each company’s cloud gaming service and allow at least some level of local execution for downloadable games.

Can This Technology Be Saved?

The Crash of 1983 wiped console companies off the map, but that was an extinction event. This is an evolution, and all the players see it coming (Microsoft more than the others). Today’s gaming giants are resilient, and they’ll adapt. The Big Three or their minions may continue to offer a higher-cost hardware bundle - much like Microsoft plans to sell the Surface as a flagship product - but given the economic direction of the industry, innovation is more likely to manifest in a novel controller or a software layer that could be used by multiple hardware and OS configurations. In 2020, your TV may very well be your console.

Everyone knows optical storage discs are on their way out in the long run, but ironically, the biggest, newest format of them all could become extinct before the rest. Here’s why Blu-ray might join VHS in the dustbin long before DVDs or CDs give up the ghost.

The Basics

In 2006, a consortium of media companies spearheaded by Sony launched Blu-ray Disc, a high-capacity (25GB) optical disc with the same dimensions as DVDs and CDs. Blu-ray’s storage capacity and entertainment ties made it the leading contender to replace the aging DVD video format, and the results were impressive. Early Blu-ray players delivered HD video and crystal-clear sound, and successive versions added extras like downloadable content.

Later that year, Sony made a major push, shipping Blu-ray drives in its Playstation 3 consoles and a number of high-end PCs. After a two-year standards war with Toshiba’s HD-DVD format, Blu-ray won the day in early 2008. Warner Brothers, Netflix, Wal-Mart and Best Buy dropped HD-DVD, and Toshiba abandoned the format, eventually releasing its own Blu-ray player. Since then, Blu-ray capacity has increased (to 128 GB for the newest quad-layer discs for BDXL drives), and it looks like even the Xbox 720 will support the format.

The Problem

Two of three console vendors will ship Blu-ray players into millions of homes, and Blu-ray disc sales as a percentage of total physical entertainment media are still climbing. So what’s the problem? There are actually three:

1. Timing. 2008 was a bad year to end a format war. With the Great Recession in full swing, many families were unwilling to spend to upgrade to Blu-ray, especially if doing it right required a new television and new content. A 2011 NPD study showed that 5 years after Blu-ray’s introduction, a full 57% of households were still using standard DVD players. According to recent Nielsen research published by Home Media Magazine, Blu-ray sales for popular films can account for as much as 75% of a title’s sales - or as little as 11%. As DVD players wear out and studios release more “must-have” HD titles like Avatar, Blu-ray’s share will likely increase. But that may be too little too late.

2. Netflix. Led by Netflix, Hulu and iTunes – with Amazon and a swarm of others in the wings – digital video is real, and it’s become a contender far faster than most people anticipated. As early as 2010, streaming surpassed optical disc rentals on Netflix. These days, every game console and most televisions bundle multiple streaming video services, every cable provider offers its own suite of pay-per-view titles, and iTunes offers thousands of films and TV episodes for purchase or rental. And those are just the legal sources. Service-based streaming models (ideally with some form of local caching for viewing off-network) are definitely where we’re headed.

3. Apple. Blu-ray is not just an entertainment delivery system. It’s also an efficient data distribution format - or it would be, if anyone but Sony had adopted it. Unfortunately, Apple and most PC makers have opted to pass on Blu-ray drives, so software publishers have followed suit. If it doesn’t fit on a couple of DVDs, you’re getting it online. Apple actually shipped an entire operating system online, and no one seemed to mind. As a consumer alternative, USB flash drives are portable, reusable, and cheap ($40 gets you 64 GB), and they work with a much wider range of devices. Blu-ray may still be the most powerful player in the optical disc storage class, but that class has graduated.

The Prognosis

As video consumption moves toward alternative devices, Blu-ray’s significance will wane. DVD/Blu-ray/digital download packs (which are pretty cheap) will help bridge the gap for a while, but with dependable HD downloads and streaming, why would anyone bother with a physical disc? Eventually, Blu-rays will go the way of audio CDs, selling for a buck apiece at yard sales after their original owners have safely ripped them (possibly after using a VUDU-like conversion service).

Can This Technology Be Saved?

Americans still like to own things, and right now, Blu-ray is the most archivable, durable format for HD video storage. So until a cloud-based service emerges as a clear winner, there will be a case to keep that stack of discs by the TV. But all data storage formats run their course, and no amount of data-density improvements can stop the natural progression to streaming media.

As streaming and download services learn to take advantage of ubiquitous broadband Net access, Blu-ray will be dead. It will happen faster than you think - and few folks will mourn its passing.

With TechCrunch Disrupt SF 2012 in full swing this week, it’s only a matter of days before a new winner is crowned. We decided to check in with previous Disrupt winners to see how they’ve fared since their victories - and try to determine what it means to ace a high-profile startup contest.

Startup contests like TechCrunch Disrupt can generate a lot of hype and interest, but does that translate into any lasting benefit for the startups involved? Sure, winning brings monetary benefits - TechCrunch Disrupt offers $50,000 - which never hurts, but does participating really help a startup succeed? And does winning a key contest really predict eventual success? To find out, we caught up with the winners of the four most recent Disrupt events:

What It Is: Audio conference calls aren’t going away, so Firespotter Labs' ÜberConference made them less sucky. By removing login annoyances (“Can you text me that code? I’m on my cell!”), adding visual controls and giving you something productive to do with your mouse and keyboard while sitting on hours of endless calls, ÜberConference actually makes audio calls cool again.

How It’s Doing: ÜberConference is killing it. Just a month after taking home first prize at Disrupt, Firespotter raised $15 million from Andreessen Horowitz and Google Ventures. The company is hiring for several open positions, most of them in engineering. iOS, Android and paid versions of the service are due soon, and once ÜberConference can monetize its already-solid feature set, prospects look good.

What It Is: Shaker’s founders are betting that users will want to hang out in virtual spaces (starting with a bar called Club 53), represented by avatars. If this strikes you as a little like Second Life, you’re not alone, but the folks at TechCrunch - and $18 million in venture funding - suggest Shaker offers something more than just another chat room.

How It’s Doing: Most avatar-based chat environments have been notoriously anonymous, quickly degrading from social discovery to Leisure Suit Larry. Shaker removes the anonymity by tying profiles to your Facebook account, helping users meet others with common interests and form real relationships. Promoters (at this point, mostly bands) sponsor rooms, and Shaker works its magic on the back end to segment the rooms so you’ll actually bump into people with shared interests beyond the band.

For now Shaker remains a bit of a ghost town. As of September 10, there were only two events on the calendar: a Foo Fighters Tribute party with 226 RSVPs, and a “Hangout” the following Sunday with 37. Still, Live Nation has signed on as a promotional partner, which could help Shaker attract the users it needs to really take off.

What It Is: Getaround is AirBnB for car rentals. Owners can list their cars during downtime, allowing drivers to rent them by the hour. Drivers can place up to five requests, and the first response wins the business. One price covers rental cost, background checks and insurance. Owners get a monthly payment for their car, drivers get to drive whatever they want while saving money, and Getaround takes a cut from the middle without having to maintain its own inventory. The concept is a win-win-win.

How It’s Doing: So far, it seems to still be growing. Getaround has signed up thousands of cars, and it’s hiring for aggressive expansions into new markets. The startup’s most recent venture is Getaway, a long-term rental service for drivers who need a car for a week or longer. But overall, the car-sharing market has a long way to go before becoming mainstream.

What It Is: Qwiki is a slick, simple online application that lets users create video presentations in their browser. The interface is extremely straightforward, and even a total noob can throw together a slick-looking video in less than five minutes.

Even the hottest startup needs a little time to change the world - or make a billion dollars - so it’s a little early to pass final judgment. But so far, at least, the four most recent Disrupt winners are all still in business - and not every startup can say the same.

At the same time, though, while some have gotten funding, none has come close to changing the world or delivering a transformative financial event for founders and investors. And plenty of other startups - ones that didn’t win startup contests - are doing even better.

The ReadWriteWeb DeathWatch has tagged 13 companies as being on the ropes. But this week we’re trying something new, taking a close look at technologies on their way out. First up, the QR Code, a concept that was always more flash than bang.

The Basics

In 1994, Denso Corporation created the Quick Response (QR) Code, a 2-dimensional square barcode. It was more easily readable than traditional Universal Product Code (UPC) barcodes, could store a great deal more information, and was durable enough to be read even when a tag was severely damaged.

For 15 years, the QR code lived a quiet life in factories and warehouses, but when camera-loaded, app-enabled smartphones burst on the scene, advertisers saw an opportunity. Businesses began embedding URLs in QR codes (and other 2-dimensional tags) so users could simply snap a picture of a tag and visit a website without having to type in the address. Clever marketers exploited the QR Code’s extended data storage, filling extra space with custom colored images and text. Microsoft even launched a competing product, which is usually a sign that a technology has arrived.

The Problems

Missing Data: Used properly, QR codes make it very easy to segment customers and campaigns. For example, a real estate agent might use different QR Codes in her print campaigns and yard signs, so clicks on different tags would show her which medium is driving interest. Unfortunately, since QR Codes usually launch a Web browser, the agent won’t get access to the most critical piece of information – the prospect’s phone number.
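The segmentation idea above is simple to sketch in code: give each medium its own QR code, each encoding the same landing page with a different source parameter, then tally hits by that parameter. The URLs and the `src` parameter name here are hypothetical, purely for illustration.

```python
# Tally landing-page hits per campaign source, assuming each medium's QR
# code encodes the same URL with a distinct "src" query parameter.
from collections import Counter
from urllib.parse import urlparse, parse_qs

def tally_by_medium(hit_urls):
    """Count hits per campaign source ('src' parameter; 'unknown' if absent)."""
    counts = Counter()
    for url in hit_urls:
        params = parse_qs(urlparse(url).query)
        counts[params.get("src", ["unknown"])[0]] += 1
    return counts

# Hypothetical server log of scanned-tag visits:
hits = [
    "https://example.com/listing/42?src=yard-sign",
    "https://example.com/listing/42?src=print-ad",
    "https://example.com/listing/42?src=yard-sign",
]
print(tally_by_medium(hits))  # Counter({'yard-sign': 2, 'print-ad': 1})
```

This answers "which medium drives interest" but, as noted above, still tells the agent nothing about who scanned the tag.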

This is a shame, since by definition, everyone who snaps a QR Code is holding a live, connected cell phone and interested enough to engage. A simple, low-tech “Text this code to this number for more information” message would be accessible to a far greater number of prospects and create a workable lead with valid contact info.

To be fair, a QR code can also be configured to send a text message on scan, but manual text messaging has several advantages. Many users find auto-launch texting jarring and intrusive, and only a subset of phone users have compatible handsets.

User Error: For most users, scanning QR codes isn’t all that easy. Many smartphones still require users to download a custom code-reading app. The odds that the average user will do so when presented with a code to scan are pretty minimal. The chances the same user will go back and download the app later? Slimmer still. Even for users with an integrated QR code reader, the process still isn’t sufficiently automated. On my Android phone, the process involves five steps, the fourth of which is completely unintuitive.

1. Open the Camera app.

2. Take a picture of the code.

3. Open the picture.

4. Click “Share.”

5. Click “Decode QR Code.”

Code processing will be easy enough for mass consumption the day every phone’s camera auto-senses a code and prompts the user to autoload a link, and not a moment before.

Programming Error: Like most technologies looking for a reason to exist, QR codes are completely misunderstood by marketers trying to shoehorn the “next big thing” into places it shouldn’t go. I’ve seen QR codes on roadside billboards (dangerous), athletes' butts (tricky and awkward), and – my favorite – in email signatures or on Web pages in which the user could just click on a text link. For every sensible use of a QR code (for example, subway advertisements or bus shelters), there’s a really dumb one that just makes marketers look silly.

Better Technologies: Perhaps the flashiest replacement for QR Codes is Augmented Reality (AR), which overlays artificial reality on a backdrop of the real world. As I have pointed out, AR holds more promise than QR codes because it can provide immersive environments in a current context, takes up no space in print media and can be applied retroactively to existing assets.

The Retro Voice Option: Led by the iPhone’s Siri, faster processors are putting a new twist on the old pastime of talking into your phone. Speaking the name of a company or person into your phone is a lot easier than typing in a URL. Plus, a voice-activated search can provide lots more information than a directed click of a link or tag.

The Prognosis

So, if not all users have smartphones, not all smartphones can process QR codes and not all users of capable smartphones bother to scan the codes, what do QR codes really bring to the party?

Over the next few years, marketers will begin to target QR codes more effectively, but without simpler client tools and much better awareness, it’s likely that texting, speech-based searches and alternative scanning technologies will win out. It won’t be long until QR Codes return to their industrial roots, where their comparatively low cost makes them more appealing than RFID chips.

Can This Technology Be Saved?

Probably not. It isn’t worth the effort. Without a major corporate backer, who has a real stake in the survival of the standard in commercial advertising?

In a Santa Monica Airport hangar on Thursday, Amazon announced its latest round of Kindles and related services. Overall, they look pretty impressive.

After a big corporate self-hug as CEO Jeff Bezos took the stage, Amazon focused on three key items: new Kindle Readers, new Kindle Fires, and enhancements to the Amazon ecosystem tying them all together. (For more analysis on the new products and services, see What The New Kindle Means To Amazon.)

The New Kindle Fires

Amazon’s Kindle Fire line is getting a much-needed hardware boost designed to help it compete with Google’s Nexus 7 tablet and, to some degree, the iPad (more on that later). The entry-level, 7-inch Fire tablet is now priced at $159 (down from $199), with double the RAM and a claimed 40% speed boost.

But the real story is the Kindle Fire HD units.

The 7" Kindle Fire HD is a massive upgrade for $40, and is aimed squarely at the Nexus 7. It comes with 16GB of storage (with an optional upgrade to 32GB), a 1280x800 HD screen, dual-antenna dual-band Wi-Fi, Dolby Digital Plus audio, front-facing camera, stereo speakers and a zippy TI OMAP4 4470 Processor. Amazon estimates the 7" unit’s battery life at 11 hours. Pre-orders are available now, and the Kindle Fire HD 7 is due to ship on September 14.

But wait, there’s more. Amazon also debuted a larger 8.9" version aimed squarely at the Apple iPad, with prices starting at just $299. This 8.8mm-thick, 20 oz. unit sports a 254 pixels-per-inch display with 1920x1200 resolution. Its battery life should be somewhere below that of the 7-inch model when it ships November 20.

Then there’s the Kindle Fire 8.9" HD 4G, starting at $499. This model boosts the default storage to 32GB and adds a custom 4G LTE modem. Even more interesting, though, Amazon’s base 4G data plan offers 250MB/month for just $50 per year - plus a $10 app credit. Power users will want to upgrade to more expensive 3GB/month or 5GB/month plans from AT&T, but the base package might be enough to let you download books and surf the Web when you’re not near a Wi-Fi connection.
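Some back-of-the-envelope math puts that base plan in perspective (this is our own arithmetic on the announced numbers, ignoring the $10 app credit):

```python
# Amazon's announced base 4G plan: 250 MB/month for $50/year.
annual_price = 50.00      # dollars per year
monthly_data_mb = 250     # megabytes per month

monthly_price = annual_price / 12            # effective monthly cost
cost_per_mb = monthly_price / monthly_data_mb

# Using 1000 MB = 1 GB for a rough per-gigabyte figure:
print(f"~${monthly_price:.2f}/month, ~${cost_per_mb * 1000:.2f} per GB")
```

Roughly $4.17 a month works out to well under typical carrier per-gigabyte rates, which is why the plan looks like a light-use bargain despite the small cap.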

The Services

Bezos opened the event by saying that “people don’t want gadgets anymore. They want services.” And he explicitly described the Kindle itself as a service.

There were plenty of actual services announced, too, as Bezos tried to showcase an integrated Amazon ecosystem:

He showed Whispersync for Games, which saves game progress in the cloud so you can pick up your game on another device.

He demoed Whispersync for Voice, which lets you listen to an audiobook and then pick up where you left off in a regular e-book.

He showed X-Ray for video, which pulls content from IMDB and suggestions from the Amazon store when you pause a video on a Kindle Fire HD.

He got a round of applause from parents for Kindle FreeTime, which allows parents to set user-based time limits on different types of media (for example, unlimited reading, but only x hours of gaming).

No single service will be reason enough for most users to buy a Kindle, of course, so Bezos also pressed what he sees as Amazon’s most potent weapon in the tablet wars: content.

No vendor other than Apple can offer a closed-loop ecosystem with as much content as Amazon has. If Bezos can bring that content to bear in a complete, high-quality system at an attractive price point, Google will have a lot to worry about… and Barnes & Noble should be very, very scared.

The Kindle Reader

The big news for e-readers was the Kindle Paperwhite, a new device with a higher-resolution, front-lit screen. Amazon has upped the pixel density to 212 pixels per inch, and text really does look crisper than it does on previous-generation readers. As a result, the Paperwhite offers greater font flexibility, and in a moderately well-lit room, even the smallest font size in a number of different fonts was perfectly readable at arm’s length.

Unlike backlit screens, which project light toward the user, the Paperwhite’s front-lit screen mimics ambient lighting and creates 25% greater contrast than previous Kindle screens while using less battery power, the company said. Amazon claims eight weeks of usage with the light on between charges.

In theory, this should reduce eyestrain. Without time to test the unit under a variety of conditions, it’s impossible to confirm, but ad hoc tests of the demo system looked promising. There’s also a new feature called Time To Read, which estimates a running total of the time left in a chapter or book based on your reading habits, and X-Ray, a hyperlinked meta-glossary that allows readers to track characters, concepts and other information. For example, a user could jump from a passage to a character’s biography, then skip to each of her appearances in the book. This feature could be especially useful for textbooks.
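A feature like Time To Read could be as simple as dividing the words remaining by the reader’s observed pace. The sketch below is our own illustration of the idea, not Amazon’s actual algorithm:

```python
# Estimate remaining reading time from observed reading speed.
# This is a hypothetical illustration, not Amazon's implementation.
def time_to_read(words_remaining, words_read, minutes_elapsed):
    """Estimate minutes left in a chapter or book."""
    wpm = words_read / minutes_elapsed   # observed words per minute
    return words_remaining / wpm

# e.g. 9,000 words left after reading 2,500 words in 10 minutes (250 wpm):
print(round(time_to_read(9000, 2500, 10)))  # 36
```

A production version would presumably smooth the rate over many sessions rather than trusting the last few minutes, but the core arithmetic is this simple.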

The Wi-Fi Paperwhite starts at $119, and the 3G version will retail for $179. Amazon also announced a refresh to the previous-generation, $79 Reader, with upgraded fonts, faster page turns, and a $10 drop in price to $69.

If there’s one thing the DeathWatch knows, it’s that all things must come to an end. So we’re pausing to review the fortunes of our first 13 unlucky inductees. The fates of some of them may surprise you.

In reverse chronological order, here’s a look at the initial baker’s dozen and what they’ve been up to since joining the DeathWatch over the last three months (updated October 6, 2012).

Zynga

It’s only been a week since the casual gaming company hit the DeathWatch on August 27th, but Zynga shares have dropped again on news that Chief Creative Officer Mike Verdu was leaving his post, along with other high-profile execs. This kind of churn is probably inevitable among staffers looking for a quick upside, since most Zynga stock options will be underwater for some time, but it should eventually level off.

On the upside, Zynga’s first Partners for Mobile game just shipped, and it’s completely different from any other Zynga title. Mobile gaming control options still kind of suck for first-person games, so the gameplay suffers, but that should get better over time. If Zynga can become the go-to software development platform for mobile gaming, it has a shot at reinventing itself and reversing its fortunes.

It’s official. Google is selling off Motorola’s Home Division. That’s good news for Google in the short term, but it could really hamstring plans for expansion into the market for TV set-top boxes.

Despite the loss of potential toys, Motorola keeps on working with what’s available. New since August 20th, it looks like Motorola will be making a push with a new device line in September. The rumor mill seems pretty confident that the devices will include a Medfield-powered unit with rip-roaring specs, and marketing copy about “taking it to the edge” implies an edge-to-edge display. With a decent form factor and battery, the phone could put Moto back in contention for #4 in the handset market. But is #4 really good enough for the long term?

After Best Buy hired CEO Hubert Joly to reject the Schulze buyout and swing the axe of austerity, investor confidence plummeted and the dreaded stock downgrade arrived. Investor hopes (and the stock price) got a bit of a bounce as the Schulze buyout got another chance, but experts think it’s unlikely to go through. As the Wall Street Journal asked last week, the bigger question is Can Electronics Stores Survive?

After laying off staff in its newly acquired PopCap unit, EA has readjusted its Free-to-Play focus, venturing away from Facebook and attempting to cast a broader, multi-device net. It’s a good and necessary goal, but we’ll have to see how well EA can execute. Meanwhile, Madden 13 has the football franchise back on the map with excellent reviews and record sales. It seems EA has bought some more time to figure out its social strategy. It will need it, as the company’s overall challenges haven’t softened since August 6th.

Netflix hasn’t made any major blunders or advances since joining the DeathWatch on July 30th, but the rest of the industry hasn’t stood still. HBO fired a shot across the bow with HBO Nordic, a streaming movie service available only in Scandinavia. Despite the abrupt exit of Blockbuster from the space, more direct competition is inevitable, so Netflix may have to do something bold. Perhaps acquiring what’s left of OnLive to leapfrog GameFly?

Since being inducted into the DeathWatch on July 23rd, T-Mobile has done nothing to stop the bleeding. Earlier this month, it lost more than a half million contract-based subscribers. If Deutsche Telekom’s infrastructure investments really happen, the company could be a technical competitor, but without subscribers, all that capacity could prove a liability. Here’s hoping that the all-hands announcement scheduled for the day everyone else gets the iPhone 5 is a game changer.

Changing the course of a behemoth as large as Sony takes more than the couple of months that have passed since the company was inducted into the DeathWatch on July 6th. So far, the best thing to happen to Sony has been the OnLive debacle, which makes Sony’s unrelated decision to jettison the service in favor of its own on-demand game competitor look downright brilliant. Product releases have been a mixed bag, including ho-hum smartphones, a respectable consumer camera, an affordable streaming music service, and a gutsy new lap-pad that shows Sony might be willing to take some risks. What’s missing? A convincing living-room attack plan. The PS4 needs to be the crux of any recovery strategy, so DeathWatch is withholding judgment on any turnaround until we see a demo.

Barnes & Noble

Barnes & Noble - inducted June 29th - is making a necessary and aggressive push for the Nook overseas. The company is starting with the UK, but it’s not alone. Building sales channels is half the battle. The rest involves filling that channel with the best possible hardware and content. To that end, the DeathWatch is waiting for something big to emerge from Barnes & Noble’s Microsoft deal before predicting a reversal of fortune. In the meantime, profits remain out of reach.

38 Studios

In early August - just over a month after 38 Studios joined the DeathWatch on June 22 - the Rhode Island Economic Development Corporation officially took possession of 38 Studios' assets, including the game Kingdoms of Amalur and the remains of Project Copernicus. All that’s left now is to see whether some of that amazing intellectual property winds up in the hands of another publisher (DeathWatch is betting on EA). Until then, while we mourn the loss of a lot of good work, we’re grabbing some popcorn and waiting for the latest round of It’s Not My Fault.

Samsung has just beat Nokia to the punch with a Windows 8 smartphone. On the surface, it’s not a huge deal, but it showcases Nokia’s weakness. Windows is Nokia’s only gig going forward, but Microsoft isn’t throwing the Finnish phone-maker any bones. Nokia’s stock bump from the Samsung / Apple verdict was short-lived. If Nokia hopes to lose its junk status, it will have to crawl out of that hole on its own – one smartphone at a time. That’s going to take a lot more than price cuts. Things are arguably worse now for Nokia than they were on June 15th when it became a DeathWatch victim.

HP remains committed to the PC and server markets, even as those businesses wither on the vine. Still, while there’s cash on hand and customers who answer the phone, there’s hope. A new tablet division looks like no more than a shot in the dark, but at least it displays a willingness to push the envelope a little. The new Envy X2 hybrid device shows some interest in redefining “PC,” as well. But as promising as these developments seem, baby steps aren’t going to turn around decades-old thinking at a company the size of HP, which recently suffered a disastrous earnings report that included an $8 billion writedown of its Enterprise Services business and the biggest quarterly loss in the company’s history. And regardless of new products, Whitman will have to win back investors' hearts and minds after they met her last announcement of weak earnings with a 13% drop in value.

The Trans-Pacific Partnership (TPP) is the most influential piece of recent legal work you’ve probably never heard of. Can a Free Trade Agreement really threaten Internet freedom, redefine copyright and alter the course of global healthcare? You bet.

What The TPP Is

According to the Office of the United States Trade Representative (USTR) and other participating or negotiating members, the TPP is just another Free Trade Agreement (FTA). Like most FTAs, the TPP regulates tariffs and duties and sets standards for trade among member countries. Unlike most FTAs, though, the TPP imposes additional standards on Intellectual Property (IP) law, including some that are more extensive and severe than any currently on its member countries' books.

In its TPP FAQ, the feds justify the deviations from the FTA norm by claiming the modern world has changed the game:

The Administration recognizes that the concerns that workers, businesses, farmers, and ranchers have today are different than those they had a generation ago. We intend to negotiate a high standard, regional agreement that addresses new and emerging issues, incorporates new elements reflecting our values and priorities, and responds to the 21st century challenges our citizens face.

What Might Work

If successful, the TPP could help the United States and its neighbors compete more effectively in ever-important Asian markets. In addition to securing a stronghold against economic bullying from China, the U.S. could use the TPP to gain access to lucrative agriculture markets in Japan and New Zealand, and reduce or eliminate oppressive import duties that have depressed exports. Other participants have similar goals. New Zealand, for example, is eyeing U.S. dairy markets.

Concerns about excessive Intellectual Property protections extend beyond the digital world. Kensaku Fukui, a professor at Nihon University in Japan, is concerned that the TPP would give copyright holders complete and arbitrary control over “parallel goods” – licensed merchandise from multiple sources – which could disrupt established import/export markets. Want that rare import album? You might be out of luck.

Other impacts could be life-threatening. Twelve members of Congress sent a letter to the USTR expressing concern that “long-term goals of public health and other programs in TPP countries would be challenged” by higher medication costs caused by an extended monopoly period in developing countries.

For more TPP criticisms, see the Electronic Frontier Foundation’s topic page, or just check out the infographic:

Zynga’s Mafia Wars, Farmville and Bubble Safari are enormously popular pastimes that helped define social/casual gaming. But faced with a changing market and an unpopular leader, can Zynga innovate its way out of the hole it keeps digging for itself?

The Basics

Zynga’s climb to the top of social gaming didn’t take long. In 2007, Mark Pincus launched the Texas Hold'Em Poker app (now Zynga Poker) on Facebook. Within a year, he had raised nearly $40 million in venture capital. A year later, Zynga reached 40 million active users on the back of Farmville, and an empire was born. Zynga went public in late 2011, and its stock took off, bolstered by strong performances from games like CityVille, Bubble Safari and Words With Friends. Since the stock peaked in March 2012 at nearly $15 per share, scandals and missed numbers have driven it down to just over $3 per share. The company is now fighting to win back the valuation, reputation and dominance it enjoyed just a few months ago.

The Problems

Zynga’s biggest problems break down into four buckets. Some of them are Zynga’s fault, and others aren’t:

1. The Brand Problem Mafia Wars has a loyal following. So do Words With Friends, Hidden Chronicles and Pioneer Trail. They’re all Zynga games, but Zynga itself does not command any loyalty. Social gamers are interested in individual games, not the companies producing them. That’s how something like OMGPOP’s Draw Something could come out of nowhere in a matter of days. Draw Something scared Zynga enough to prompt it to buy the company for $180 million. You can only do that so many times before the well runs dry, and hot apps can die as fast as they grow. The only way to stay on top is to keep churning out new hit games. That’s a tough business to maintain and scale.

2. The Facebook Problem On a recent earnings call, Mark Pincus acknowledged the company’s dependence on Facebook, noting that “getting beyond the Facebook Web footprint through mobile is going to give us more growth opportunities.” In the long run, that may be the case, though monetizing mobile traffic has been notoriously difficult for everyone. In the short term, Zynga has to hang on to as big a piece of the pie as Facebook will let it eat.

3. The Bubble Problem Zynga isn’t the only social gaming company that’s disappointed. Electronic Arts' PopCap acquisition is also starting to look like a bust. Social gaming is here to stay, but it seems tremendously overvalued. Zynga was funded in a bubble, built its expectations in a bubble and now has to meet bubble-sized expectations in a world that’s made a market correction.

4. The Boss Problem The former wunderkind at the top of the org chart hasn’t made a lot of friends. Despite all his talk about everyone being a CEO, Mark Pincus has always been known as a bit of a control freak. When he was on top of his game, everyone let it slide. Then things got ugly, and so did public opinion.

In late 2010, we heard rumors of stock option clawbacks where Pincus allegedly demanded that certain employees return their options or be fired. Then, when Zynga executives cashed out before the stock tanked (and while everyday employees remained locked up), Pincus became the target of a class-action lawsuit alleging insider trading.

Pincus doesn’t seem to care about the common employee, and when you make video games for a living, your employees are your only real asset. That attitude helps competitors swipe your best talent and makes it really easy to mock you in videos like this one from Kixeye (language not suitable for work):

The Players

Mark Pincus, CEO: Mark Pincus is a smart guy, and he gets social media, having founded the push news service Freeloader and the social network Tribe Networks. He’s also not afraid of bending the rules to make a buck. In addition to the stock clawback, Pincus admits to dumping spyware on his users' computers to turn a profit:

I knew that I wanted to control my destiny, so I knew I needed revenues, right, f*cking, now. Like, I needed revenues now. So I funded the company myself but I did every horrible thing in the book to, just to get revenues right away. I mean we gave our users poker chips if they downloaded this Zwinky toolbar which was like, I don't know, I downloaded it once and couldn't get rid of it. *laughs* We did anything possible just to just get revenues so that we could grow and be a real business…So control your destiny. So that was a big lesson, controlling your business. So by the time we raised money we were profitable.

Pincus got the company this far by being ahead of the curve. His challenge now is coming up with a way to stay there as the industry becomes more commoditized.

The Prognosis

Zynga’s stock will not return to its peak for years, if ever. Its current franchises will likely hold onto a good deal of their market share, and the titles in the pipeline should perform well enough, but competition will eat into the company’s dominance. Within a few years, Zynga may still be the biggest social game publisher, but own a far smaller portion of a market valued far more conservatively than it is today. Unless it hits a major home run with one of its new initiatives (see below), there’s really no reason for anyone to acquire Zynga, so the company’s value will continue to float down to a point justified by its actual profit.

Can This Company Be Saved?

Zynga should continue to produce relevant, popular games, but to remain a power player in the social world, Pincus needs to win big with two moves.

Best Buy: After rejecting Richard Schulze’s takeover bid over the weekend, Best Buy hired Hubert Joly as its CEO. Joly is a turnaround mercenary who’s done good work in the past with companies like EDS, but whose primary experience is in the hospitality industry. Best Buy’s stock dropped 7% in an initial response.

Sooner or later, the Julian Assange/WikiLeaks story will be a major motion picture. It has all of the elements of an edge-of-your-seat thriller: a charismatic leader (with great hair) going up against government corruption, a precipitous fall from grace, a seedy sex angle, and a dramatic last stand. We've assembled a rough cut out of scenes from cinematic classics. To all you aspiring filmmakers who dream of pitching a treatment, take a look at what The Julian Assange Story could be. And thanks in advance for remembering us in your Oscar acceptance speech!

Scene 1: The Birth of a Legend. The story starts with a glimpse of our hero in his heyday, cutting a dodgy, back-alley deal for illicit information onto which he can shine the light of day. Maybe start with Project B, the classified 2007 video showing an Apache gunship crew killing innocent civilians and two journalists in Iraq. Assange releases the clip, igniting an international debate about the role of the media and the autonomy of military operations. Plenty of people think Assange has gone too far, but just as many believe he has shed light on a subject that needed to be discussed. Need a place to start? Try Robert Redford and Dustin Hoffman in All the President's Men.

Scene 2: Introducing the Antagonist. You'll need a villain, of course, someone who represents official opposition to Assange's efforts. A U.S. military officer would be perfect. The Army doesn't exactly support the killing of civilians, but Wikileaks' release of the Apache video looks like the top of a slippery slope, and the government doesn't want that kind of scrutiny. Clearly, this man must be stopped. So they send someone to take him down. Someone who's dangerous, tenacious, and resilient - a good soldier who has the guts to terminate Assange "with extreme prejudice."

Scene 3: The Doomed Romance. Assange and his wife separated in 1999, which leaves plenty of room for a love interest. It's a tragic affair, of course, since our hero is a man of mystery. He's a loner. A . . . rebel? Yeah. A rebel. With awesome hair. The scene would go something like this:

Scene 4: The Set-Up. Assange hits the big time in February 2010, when Wikileaks begins releasing classified communications among the worldwide offices of the U.S. State Dept. You'll need a steely performer to play Hillary Clinton - say, John Travolta in a reprise of his role in Hairspray - who asserts that the leak "puts people's lives in danger, threatens national security and undermines our efforts to work with other countries to solve shared problems." Meanwhile, sexual assault allegations in Sweden further complicate matters. Stand and fight or turn and run? Assange is wanted for a crime he didn't commit, so he evades his would-be captors, hounds at his heels. That scene might go something like this:

Scene 5: Backed Into a Corner. Forced to seek refuge in the Ecuadorian embassy in London, Assange is out of options. The Americans and British, thinking Assange is out of control, are closing their nets and locking him down. The fun times are over. For inspiration, filmmakers might want to look to the time Riff Raff and Magenta busted up Frank-N-Furter's pool party: "Frank-N-Furter, it's all over. Your mission is a failure. Your lifestyle's too extreme."

Scene 6: The Nightmare. OK, so Sweden isn't exactly Transylvania. But extradition is extradition. Assange steps to the embassy window to taunt the British, where he spies troops mobilizing to storm the embassy. Bloodshed ensues as the British troops take on Ecuador and, ultimately, Assange and his army of information-wants-to-be-free warriors. It's a good thing, then, that we've already lined up a film about an anti-hero who mocks law enforcement from the safety of a secure tower that has never been breached. It's got a killer soundtrack and some amazing martial arts sequences. (Try to sign up some of these guys - they really know how to do a roundhouse kick.)

Scene 7: The Escape. Is this it for Assange? Of course not. At the last minute, with the troops closing in, he wakes to the sound of rescuers attempting to smuggle him out – perhaps in a shipping container or the trunk of a car. If you think that sounds too crazy to work, check out the trailer for Argo, in which an angry mob chases innocent workers into a friendly diplomatic safe zone and the U.S. government hatches a wacky scheme to bring them home.