Dice Insights » C.S. Magor
Insights & Advice for Tech

Why You're Going to Love Your Wi-Fi Power Strip
Fri, 31 Jan 2014

The promise of the Internet of Things has been around for the better part of a decade. For the most part, however, the things have been few and far between – but that's quickly changing. Now, a combination of technologies has begun to make things cheaper, less complicated and much easier to implement.

There was a time, not all that long ago, when Wi-Fi networks were more novelty than necessity. They were expensive, unreliable, difficult to configure and slow. Moreover, they tended to only support a small number of connections.

When I set up my first Wi-Fi network, I carefully did the math and came to the conclusion that under no circumstances would I need more than 10 connections. To be fair to that particular Apple Airport Express, it performed its job heroically – right up until it overheated under the strain of constant use. These days it’s not uncommon for its replacement to handle the routing duties of two computers, a printer, two iPhones, an iPod Touch and an Xbox 360 at any one time. When guests are thrown into the picture, then my Airport Extreme might provide as many as 20 simultaneous connections, twice as many as I thought I would ever need.

These days, most people still tend to think conventionally about the types of devices they network: desktops, laptops, phones, tablets, gaming consoles, network printers, storage devices and so on. The number of such devices in the average home has risen over the past five or six years, yet most families don't come close to the maximum number of connections that an average consumer-grade Wi-Fi router can support. It seems fair to assume that a substantially greater number of people now see the utility of having a robust Wi-Fi network in their homes.

While Wi-Fi may not yet be ubiquitous, it has reached most middle-class homes – exactly where it needs to be for the Internet of Things. It also seems reasonable to argue that people would be less inclined to purchase a Wi-Fi-connected power strip if it meant upgrading their network first.

Fewer Obstacles

Until recently, the Internet of things has had a number of obstacles in its path. The cost of making a non-traditional device connectable and the difficulties involved in setup immediately spring to mind. While prices are still substantial, they have fallen to the point where for many, the benefits outweigh the cost. So, we are starting to see a raft of non-traditional connected devices — thermostats, smoke detectors, power strips — making their way onto the market. They all cost substantially more than their non-connectable counterparts, but their utility is such that more people will consider them. Have you ever passed the point of no return on a trip and started to question whether you left the coffee maker on? That feeling of dread that you get in such situations is an extremely powerful motivator – and manufacturers know that.

The other issue with connected devices has always been ease of setup. I consider myself to be reasonably, if not highly, competent when it comes to setting up computers and electronics. That being said, connecting my printer to my Wi-Fi network felt like going to the dentist. The interface was slow, clunky and difficult to navigate. Worse yet, logging into the network involved entering passwords, using arrows and an Enter button. It was an excruciating task that took the better part of 30 minutes. It was a chore that I never want to repeat. These days most connectable devices can be set up directly via USB, micro-USB or Ethernet, but that usually involves moving the device to a computer or moving the computer to a device. It’s hardly a solution for a post-PC world.

Wink technology allows for light-based programming of connected devices. Using a free smartphone app, the device in question can be programmed via an onboard light sensor, which detects screen flashes. Take your phone, hold it in front of the sensor and let the magic happen. The app is intuitive, the technology can be applied to just about anything and the app can always be updated to allow it to program new devices. It also offers the advantage of uniformity – one device programming solution to learn and no more ugly micro-USB ports to keep covered when not in use.
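Wink's actual encoding scheme isn't described here, but the general idea – turning data into a sequence of screen flashes that an onboard light sensor can read – can be sketched as follows. The bit timings and framing below are invented for illustration:

```python
# Hypothetical sketch of light-based device programming: encode a
# Wi-Fi passphrase as a series of screen flashes, where a long flash
# represents a 1 bit and a short flash a 0 bit. The 50ms/150ms
# durations and the framing are invented, not Wink's real protocol.

def to_flash_pattern(data: str, short_ms: int = 50, long_ms: int = 150) -> list:
    """Translate each byte into eight flash durations (MSB first)."""
    pattern = []
    for byte in data.encode("utf-8"):
        for bit in range(7, -1, -1):
            pattern.append(long_ms if (byte >> bit) & 1 else short_ms)
    return pattern

pattern = to_flash_pattern("hunter2")
print(len(pattern))  # 8 flashes per byte -> 56 flash durations
```

A smartphone app would play this pattern back as full-screen black/white frames while the user holds the phone in front of the sensor, which is why no ports, keypads or on-device menus are needed.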

If This, Then That (IFTTT)

Wink technology might be sufficient to take care of the simple stuff or even to get more complicated things online, but higher levels of control are needed for more sophisticated devices. I’d argue that IFTTT is potentially the most significant advance in device programming ever. For those who’ve not used IFTTT, it’s a simple application that allows the user to set up a range of IF-THEN scenarios to automate a range of processes.

Initially, IFTTT was used for things like email, Twitter and Facebook. It offers a simple means of creating highly precise notifications or automated responses. These days it can do a lot more, and it’s being applied to device programming. Wink gives you the access, while IFTTT gives you the control.

Take a connectable power strip, for example. Wink would give the user the means of accessing it via the Internet. IFTTT would give the user the ability to automate a range of actions. So, IF the power goes off, THEN send an email/Tweet. Or IF the device is on for more than two hours, THEN switch it off. A remarkable level of control is available from a simple set of programming instructions that can be entered via a highly intuitive interface.
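The two power-strip scenarios above boil down to a list of (condition, action) pairs evaluated against device state. Here is a minimal sketch of that IF-THEN structure; the device model, triggers and actions are all invented for illustration and are not IFTTT's real API:

```python
# A minimal sketch of IFTTT-style IF-THEN rules for a connected power
# strip. The PowerStrip class, triggers and actions are invented; a
# real service would react to events pushed from the device.

import time

class PowerStrip:
    def __init__(self):
        self.powered = True
        self.on_since = time.time()

def power_lost(strip):
    # IF the power goes off...
    return not strip.powered

def on_too_long(strip, limit_hours=2):
    # IF the device has been on for more than two hours...
    return strip.powered and (time.time() - strip.on_since) > limit_hours * 3600

def run_rules(strip, rules):
    """Evaluate each (condition, action) pair -- the IF and the THEN."""
    for condition, action in rules:
        if condition(strip):
            action(strip)

strip = PowerStrip()
strip.on_since -= 3 * 3600          # pretend it has been on for 3 hours
log = []
rules = [
    (power_lost, lambda s: log.append("send email")),
    (on_too_long, lambda s: (log.append("switch off"), setattr(s, "powered", False))),
]
run_rules(strip, rules)
print(log)  # ['switch off']
```

The point is how little logic is involved: each rule is a single predicate and a single action, which is exactly why such rules can be entered through a highly intuitive interface.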

The potential applications for IFTTT are endless, but in the world of connected things it comes down to one simple principle: keeping things simple – one tried-and-true interface that can be used for absolutely everything, and that you can operate from your phone without ever getting up.

Not If, When

The resounding success of the connectable thermostat Nest demonstrated that the time is ripe for the Internet of Things — even before Google bought the company. Since then, the trickle of devices has turned into a steady stream. Today, connectability is a wave on big-ticket items. When costs fall further, it will become a wave on cheaper products, and we'll all be inundated with connected stuff. Until then, I'm going to talk myself into buying that overpriced connectable power strip – which will no doubt cost a tenth of its current price by the time the wave arrives.

Silk Road: A Lesson in Information Security
Tue, 19 Nov 2013

By now you know how the Silk Road, an online marketplace for all things illegal and semi-legal, has been shuttered by the FBI. Ross William Ulbricht, the alleged owner of the anonymously hosted website, is in a lot of trouble.

Ulbricht was caught for a number of reasons, but what first brought him to the attention of the authorities was likely a simple Internet search. After that, the authorities were easily able to connect the dots between Ulbricht's allegedly different personae – and they didn't even need any special technology to do it. Let's take a look at how investigators were able to connect his real-world identity with the anonymously hosted website. Then we'll discuss some simple practices that can help keep personal information safe.

Background

Authorities allege that the Silk Road acted as the middleman in more than $1.2 billion worth of drug deals, collecting around $80 million in fees. What they neglect to mention is that the sales and fees were made in Bitcoin, with prices pegged to dollar values. At the time, Bitcoin traded at a fifth to a sixth of its current value. How much the website actually made is hard to say, but it's fair to assume it made a lot of money. The notoriety of the site, the number of sellers who used it and the grandiose statements of its operator — who went by the name of the Dread Pirate Roberts — all likely helped to put the Silk Road in the crosshairs of the FBI.

The scope of the alleged operation makes for good breakfast reading, but what I find most interesting is how the investigators managed to reach through the layers of anonymity afforded by Tor (The Onion Router) to identify Ulbricht. By all accounts, the NSA has not been able to crack Tor – so presumably the FBI managed all of this the old-fashioned way.

The Trail of Breadcrumbs

Think about legitimate online sales. Why have there been so few successful auction sites? Why are there not more sites trying to replicate eBay’s success? The reason is simple: volume. You need enough sellers to keep the buyers interested, and you need enough buyers to make it profitable for the sellers. You can’t have one without the other. For Silk Road to work, people would need to know about it – and at a time when TOR wasn’t well known, that would mean reaching out to the World Wide Web.

The FBI’s first clues to Ulbricht’s identity came from drug forums. Searching forums for such information may seem like looking for a needle in a haystack, but it’s not: if you know roughly when a site launched, all you have to do is search for posts made around that time. The closer you get to the exact day, the greater the likelihood that the people mentioning it are financially involved. You can’t just stumble around and find things on Tor; you need to know where they are first.

To suggest that finding the first posts was as simple as a search engine query is something of an exaggeration — querying Google for posts made on and around the February 2011 launch date only yields historical information about the Silk Road trade route and businesses that incorporate Silk Road into their names. I did create two successful queries in the space of about a minute, but I did so with the benefit of knowing that the first post appeared on a magic mushroom forum. Given that the FBI is far more familiar with these sites than I am, my hindsight wasn't that much of an advantage. They would have had a bigger list of sites to hit, but they would have been searching for more or less the same keywords.

I used Google to search for “magic mushroom forums” and one result after another was dated between Jan. 1, 2011 and March 1, 2011.

When I added the URL of the fifth forum on the list in front of the keywords Silk Road, I hit paydirt. Ditto, when I searched for “magic mushrooms Silk Road” within the same date range.
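The kind of date-restricted query described above can be sketched as a URL. Google exposes a custom-date-range filter through its `tbs=cdr` URL parameters (at least at the time of writing); the keywords and dates below mirror those used in the searches described:

```python
# Sketch of a date-restricted search URL. Google's tbs=cdr parameters
# (cd_min / cd_max) restrict results to a custom date range; the exact
# parameter names may change, so treat this as illustrative.

from urllib.parse import urlencode

def date_range_search_url(keywords, start, end):
    params = {
        "q": keywords,
        "tbs": f"cdr:1,cd_min:{start},cd_max:{end}",  # custom date range
    }
    return "https://www.google.com/search?" + urlencode(params)

url = date_range_search_url("magic mushrooms Silk Road", "1/1/2011", "3/1/2011")
print(url)
```

Prepending a forum's URL to the keywords (or using a `site:` operator) narrows the same query to a single forum, which is exactly the pivot described above.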

The top result for either search will take you to the first post of a user named “Altoid,” dated Jan. 27, 2011, who the FBI alleges is none other than Ulbricht himself:

“I came across this website called Silk Road,” wrote Altoid, in a post that linked to the site. “I’m thinking of buying off it… Let me know what you think.”

The next piece of the puzzle came when the same user name appeared on the Bitcoin Talk forum. You can get to that simply by searching for the terms “altoid bitcoin.” Again, we have the benefit of hindsight, but given that Bitcoin was the Silk Road currency – it would have been a matter of course. The search reveals a few posts by someone with the same user name. Searching the site internally reveals more. Eventually you get to the post that lists Ulbricht’s personal Gmail account.

If Ulbricht did what the FBI alleges he did, then this was a novice mistake, but it’s the sort of mistake that pretty much everyone makes. If he did in fact create the Silk Road, he could hardly have imagined at the time what it would become. Had Ulbricht chosen a different handle for his Bitcoin Talk account, the connection to that zero-day post might never have been made. It might seem like a stroke of luck or a clever bit of investigation, but it appears to be the normal route that agents take when they hunt online for child predators.

The tacit connection between Ulbricht and that early post is not enough to go to court – it’s not even enough to obtain a search warrant. It is, however, enough to make him a person of interest to the FBI. From there, agents started digging through his social media accounts, checking his email and looking for any further clues. As yet, we have no way of knowing what these early efforts yielded, but at the very least they would have provided investigators with a clearer picture of their suspect.

That would have given the agents a good idea of whom their surveillance should target. Ultimately, Ulbricht went on to make several more mistakes: a package containing nine fake identification documents was intercepted, and investigators compromised a server whose IP address appeared in the Silk Road source code, revealing that he connected to it through a VPN (the server was used as a security measure, to keep other people from logging in). By the time Ulbricht was arrested, the agents likely felt confident that they had a watertight case.

Implications for Law-Abiding Citizens

As a gainfully employed, law-abiding citizen, I don’t have to worry about having the FBI batter down my door – but the investigation that led to Ulbricht’s arrest ought to give pause to anyone who takes pains to separate their private and professional personae. What we put on the Internet will be there long after we’re gone – Web archival services have seen to that.

So while I don’t have to worry about law enforcement per se, I do have to contend with the possibility that somewhere down the line I might have issues with a stalker, vindictive co-worker, overzealous employer or disgruntled employee. I also have to accept that everything I have written could one day be read by my children. I grew up in a country where foreign citizens were routinely surveilled and had it drummed into me as a child, and later as a teenager, that I should always be extremely careful about anything I put into writing. It’s not so much a matter of what you write being used against you in a court of law; it’s about it being used against you generally.

Keeping Your Public Information Private

If you are doing something risqué, embarrassing or have the need to vent your feelings in a public forum, you’d do well to think very carefully about the personal information that you allow to be attached to the account. If you want to maintain a little privacy, then you need to add some layers of separation between you and your anime collector’s forum account.

A few things that you might want to consider:

- Never use your Facebook account to access anything other than Facebook.
- Be careful who your friends are.
- Don’t be friends with co-workers.
- Use a secondary (or tertiary) email account when you register for forums.
- Seriously consider using a different user name for each forum account.
- Drum the same values into your children.

The Facebook thing is a no-brainer. For one thing, it links you to your account. It also links you to all of your friends and associates, so it pays to know who your friends are. I used to accept friend requests from just about anyone, but not anymore. My wake-up call came in the form of a former classmate: We used to play rugby together at lunch, where he used his 6’7” frame to batter our opposition into submission. I accepted his request, we reminisced about old times, and then a few weeks later he changed his profile image to a picture of Hitler – and I hit unfriend. From there, I became ruthless. It started with people I didn’t know and ended with people I did know, but who had expressed views that I found abhorrent. The result: I actually read my news feed, and nobody puts profanity-laced posts on my wall.

I have never been and will never be friends with a current co-worker on any social network, irrespective of my relationship with them in real life. Any worker-worker relationship involves two relationships, one with the workplace and one with the individual. If your relationship with one breaks down, it can affect the other. This applies just as much, if not more, to LinkedIn. Do you really want people at work to know when you start looking for a job?

When it comes to interest-oriented forums, I create a buffer between myself and the forum by using a secondary email account. Obviously, the need for this varies according to interest – a game forum is much more likely to degenerate into a flame war than a cacti and succulents forum (though people can be very passionate about their Astrophytum). To simplify password recovery, I use the same secondary email address for all accounts and forward mail from the secondary address to my primary account. Finally, with regard to the use of different user names: tempers can flare over the strangest things, and a different username can help prevent spillover from one site to another.

Last but not least, teach your children and teach them young. The next generation of Internet users is growing up fast and will be getting into trouble soon. Most parents tend to take a reactive approach to these things, but by the time their child posts something stupid or objectionable on Facebook, it’s too late. Teenagers do all kinds of stupid things and that’s not going to change – the best we can hope for is that the stupid things they do when they don’t know better are less likely to come back to haunt them when they do.

Conclusions

The investigation into the Silk Road, the Dread Pirate Roberts persona and Ulbricht, looks to be a well-executed piece of detective work. What makes it unsettling is how easily it all happened. The same techniques that investigators used could just as easily be applied by people with nefarious intentions. As such, let this be a reminder to exercise a little more caution when it comes to securing your information. Finally, if you have children, do them a favor and teach them about this sort of thing before they are old enough for it to become a problem.

SteamOS Could Be Great for Linux, Costly for Microsoft
Fri, 15 Nov 2013

With the wraps off SteamOS, the real reason Valve has not so quietly been promoting Linux game development may finally be out in the open. The new operating system, which is Linux-based, is primarily geared toward entertainment. Not just games, mind you – it’s for television, movies and music as well.

We’ve known for some time that Valve’s been working on a console of sorts, but the money was on it being a small form-factor PC, so the announcement came as a surprise. Perhaps we should have suspected that they would design things from the ground up. For the longest time, Steam has been little more than a background process: always-on, always sucking up memory but only interacted with as a last resort. Social networking features were quietly added, but for most users Steam has been a means to an end: It has a respectable library of content and the prices are pretty good. It beats the horrible prospect of waiting a day or two for a game to come from Amazon or having to actually walk into a bricks-and-mortar store. But with an OS of its own, things look set to change.

The Facts

In December 2012, Steam had approximately 54 million active users. While that’s not quite in the same league as PlayStation Network (reportedly ~90 million), it is higher than Xbox Live (~46 million). It’s worth noting that the PSN user count includes those registered with portable systems like the PSP and PS Vita. If broken down to the “big” game experience, things might look a little different. As a side note, the 90 million user count for PSN has only ever been noted in one source — Sony has long kept mum on the size of its gaming network. It’s also worth noting that the 54 million figure for Steam is for active users – and its users are very active. During the slower hours of the day, there can be as few as 3.64 million concurrent users, while at peak times the number reaches as high as 5.76 million. Steam is a big gaming network — maybe not the biggest, but certainly the second biggest. More importantly, its user base is comprised of PC users, who more often than not invest a greater amount of money in their gaming hardware than the average console user.

Windows users account for the majority of the Steam user base. A quick perusal of the main page reveals something of a disparity when it comes to the amount of content available for non-Windows machines. The Mac collection is reasonable and the lag in delivery time seems to be improving, but Windows has clearly been the top priority. In fact, Mac users (of all versions of OS X combined) only account for 1.66 percent of the total population. Linux support has only recently been introduced, so it’s not surprising that Linux users only amount to 0.94 percent of the total. The lion’s share is in Windows, with Windows 7 accounting for more than half of all users (51.95 percent) and Windows 8 off to a slow start (at 14.01 percent), but slowly gaining ground. Dedicated hardware could change those figures drastically.

Hitting Microsoft Where It Hurts

With any operating system there’s always a group of loyal users, who are often too invested in software to even consider switching teams. For me, as an editor, the main requirement for an operating system is Microsoft Office. I prefer the Windows version, but I have no issue with using the Mac variant so my loyalties change with the wind. As for gaming, there is, at present, no substitute for a Windows desktop. That and the reduced cost of the hardware have chained me to Windows for desktop computers. I mention all this because, in many ways, I don’t think I’m too different than the average user in the 25 to 35-year-old age bracket. I definitely spend more on computers than most – but I use them for running the same types of programs.

If you take my preferences to be representative of the average adult gamer, you can get some pretty good insight into the impact SteamOS could have. If I were to take gaming out of the equation, I would have much less reason to stick with Windows. Aside from gaming, very little of what I do pushes my system to its limits – and while the cost of Apple hardware has always been only slightly south of exorbitant, the price of its software, particularly OS X, is actually pretty good. Interestingly, Microsoft Office was, for a long time, a lot cheaper to purchase on a Mac.

My hardware purchasing decisions always come down to a quality-cost-convenience relationship. I want my laptops to be fast and have enviable battery life. The MacBook Pro does that the best. And while the cost of the hardware has always made me think twice, the cost of the OS and its fully-integrated multilingual support make it the best laptop for my money. The only reason that I stick with Windows for my desktop is because it has always gotten better games. If the games were there for Linux, I would switch over in a heartbeat.

Now let’s consider another type of user: someone who doesn’t need a particular office suite for their job. With a decent Linux-based game platform, would there be any compelling reason for them to stick with Windows? I would argue no. OpenOffice and LibreOffice are more than up to everyday document- and spreadsheet-related tasks. LaTeX is a fantastic platform for professional manuscript preparation, and GIMP is good enough to give Adobe cause for serious worry. Add a browser and an email client to the mix, and you have everything most users will ever need. And all of these free programs run well on any of the free distributions of Linux.

SteamOS is not just a curveball. It has the potential to be a perfectly-delivered Daisuke Matsuzaka gyroball. Valve has the users and the delivery platform to make things happen. It already has some 500,000 or so Linux users. But the real opportunity lies in its Windows figures. More than half of its users are working with Windows 7 (64 bit), but that number is gradually declining as Windows 8 (64 bit) use rises. People are either slowly upgrading their operating systems or they’re getting new hardware that comes with Windows 8. Many more Windows users will likely find themselves in need of an upgrade in the not too distant future — particularly those using Windows 7 (32 bit), which accounts for 12.7 percent of all users, and those using versions of Vista or XP, which combine to approximately 15 percent. Together, that’s 27.7 percent, or nearly 15 million PC gamers. If Valve is able to entice developers, the potential market is huge – and the potential cost to Microsoft is enormous, both in terms of lost Windows sales and in reduced profit potential for the Xbox One.
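The back-of-the-envelope arithmetic behind that "nearly 15 million" figure can be checked directly from the shares quoted above:

```python
# Sanity check of the figures above: Steam users on Windows 7 (32 bit)
# plus Vista/XP, as a share of the ~54 million active users cited
# earlier in the article.

active_users = 54_000_000
win7_32 = 0.127          # share on Windows 7 (32 bit)
vista_xp = 0.15          # combined share on Vista and XP
upgrade_pool = win7_32 + vista_xp

print(round(upgrade_pool * 100, 1))                  # 27.7 (percent)
print(round(upgrade_pool * active_users / 1e6, 1))   # 15.0 (million gamers)
```

Every one of those users faces an OS decision in the near future, which is the window Valve would be aiming at.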

Benefits to Linux

That SteamOS runs on a Linux backbone undoubtedly benefits Linux. First and foremost, it sweetens the pot for developers. A greater number of users equates to greater potential sales. More importantly, Steam users aren’t business users, they’re the target demographic: gamers. Also, it brings in hardware companies that have been on the fence or less than accommodating to Linux. For example, 52.38 percent of Steam’s users use Nvidia cards. If a lot of Windows users switch to Linux, hardware manufacturers (especially video card manufacturers) will likely become motivated to provide better support.

When Will It Happen?

For SteamOS and any upcoming Steam console to pose any threat to Microsoft, one very important thing needs to happen: Valve needs to increase its Linux game library by an order of magnitude. The content navigation side of SteamOS and the movies, television shows and music that it brings to the table are all very nice, but nobody’s going to part with good money unless it has a good lineup of big-name titles. Indie stuff is fun, but it’s not enough: There are gamers who will buy a new computer just to be able to crank up the settings on the latest version of Battlefield or Call of Duty. So far, big-name developers haven’t been bothering with Linux or OS X because the money hasn’t been there, but a potential hardware line and dedicated OS will draw users. And, as the number of users increases, the bigger name developers will start to take things more seriously. For Microsoft, this will not be a shock-and-awe type scenario – rather, it will be the start of a slow bleed.

How It Will Happen

It would be wrong to think of SteamOS as an all-or-nothing proposition. Just because the OS is out there doesn’t mean people will switch to it. However, people will experiment. As long as Valve makes it accessible, you’ll start to see it find its way into dual-boot systems and HTPCs. As more titles become available, it becomes a much more attractive proposition. Steam is, first and foremost, a digital content delivery system. In a standalone usage scenario, an operating system dedicated to delivering digital content makes a lot of sense.

It may take years for Valve to set things up properly. It may even take years for the first Steam consoles to hit the market. But hard disks are cheap and dual-boot is easy enough to implement, so SteamOS has all the time in the world. In the meantime, Linux users can reap the benefits of all the extra attention.

Yes, The iPhone Fingerprint Scanner Jeopardizes Privacy. So What?
Wed, 13 Nov 2013

Apple’s decision to include a fingerprint scanner on the iPhone 5S, and presumably the next-generation iPad, has been met with a certain level of righteous indignation from privacy advocates. If the phone stores your fingerprint, it doesn’t take much of a leap to figure out that it could send your fingerprint to any law enforcement or intelligence agency armed with a suitably loosely worded subpoena. The argument runs: fingerprinting is for criminals – I am not one, ergo the government has no right to a record of my fingerprints.

Living in Japan, I have been fingerprinted a few times. I’m fingerprinted by immigration every time I enter the country and I was fingerprinted by police when I went to reclaim a wallet that I had dropped. At the airport, it was about them making sure that I was who I said I was, and at the police station, it was about them having recourse in the event that they had given the wallet to the wrong person.

I’ve heard of foreign-born residents of Japan who decline to be fingerprinted. But really, what’s the point? The last thing that I want to do after a 10-hour flight is get into a civil rights debate. The fact is, Immigration having my fingerprints probably makes me safer. It makes things tougher for someone to illegally enter the country while pretending to be me. So I suck it up, stick my fingers on the sensors and smile. I’ve been here long enough to know when to pick my battles.

Is it possible that the National Security Agency could devise some clever way to download fingerprints from every iPhone 5S on the planet? Of course it is. But what does that really give anyone? The paranoid among us might be concerned about the Them with a capital T having Our fingerprints. It may seem a hop, skip and jump away from being framed for a crime and spending the rest of your life in jail because you answered a question evasively. But that’s fantasy, pure and simple. If you’re going to worry about that, then you ought to be concerned every time you get your hair cut, cut your nails or have a blood test.

It seems the distinction between privacy and secrecy has become a little blurred. In part, perhaps, because of the actions of certain intelligence agencies — the metadata harvesting revelations harmed the public’s trust. We have no trouble with such information being gathered on an individual-by-individual basis, but object to collective surveillance. More interesting is that many people will gladly accept tracking cookies from companies in return for access to free Web services, but balk at the thought of the same information being collected by government agencies.

At this point, most people have willingly given up most of their online privacy. Most people I know use some form of free webmail service, even though they know that the contents of their inbox will be crawled to serve them better-targeted ads. They do it because it’s convenient and because they like not having to pay for it.

Additionally, pretty much every cell phone on the market has an onboard GPS chip – people buy them in spite of the knowledge that it can easily be used to track them. Again, we’re accepting the risk that our privacy will be invaded for the convenience factor. I spent years navigating the old-fashioned way and you know what? I’ll never go back. And if you’re not already convinced of your phone’s ability to be tracked, just think of all those happy stories on the Internet of people who retrieved their lost or stolen phone with the “Find my iPhone” app. In that way, its ability to be tracked actually adds an element of security.

At this stage, we don’t know how much information Apple has willingly given up to intelligence agencies, but let’s face it – when it comes down to it, most of us sold our privacy a long time ago. If you didn’t worry about it then, you don’t need to worry about using a fingerprint to access your phone. And if you’re still concerned, I may have some good news: the iPhone 5S also accepts nipple prints. So there’s always that choice.

Not long ago, Google filed a patent for pay-per-gaze technology. The technology, which allows Google to assign advertising revenue to content providers based on a reader’s glances, is big news for content providers who are tired of giving up free space to advertisers. But it may be a lot more impactful than that.

The obvious weakness of the pay-per-gaze framework, other than privacy, is that it requires an always-on camera — if you think real-time monitoring of a user’s eye movements while they browse the Web sounds creepy, that’s because it is. An alternative to having a camera perpetually pointed at the user is to have a camera that shoots from a person’s point of view – preferably from their eye level. Not only would it moderately improve the privacy situation but it would require much cheaper components and would be a good deal less work to program. Instead of incorporating a camera accurate enough to capture subtle eye movements in lighting conditions that would, as often as not, be less than ideal, the gaze could be calculated based purely on the position of the user’s head and a sweet spot in the camera field.
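To make the head-position idea concrete, here’s a rough sketch of how a “gaze” might be counted from an ad’s angular offset relative to the camera’s optical axis. All of the names, the 10-degree sweet spot and the one-second dwell requirement are hypothetical – the patent doesn’t specify any of these numbers:

```python
import math

def in_sweet_spot(ad_az_deg, ad_el_deg, half_angle_deg=10.0):
    """True if an ad's direction falls inside the central 'sweet spot'
    cone of the head-mounted camera's field of view. ad_az_deg and
    ad_el_deg are the ad's offsets from the optical axis in degrees;
    half_angle_deg is the assumed angular radius of the sweet spot."""
    offset = math.sqrt(ad_az_deg ** 2 + ad_el_deg ** 2)
    return offset <= half_angle_deg

def count_gaze(samples, min_consecutive=30):
    """Count a gaze only when the ad stays in the sweet spot for a run
    of consecutive frames (30 frames is roughly a second at 30 fps),
    so a passing glance doesn't bill the advertiser."""
    run = best = 0
    for az, el in samples:
        run = run + 1 if in_sweet_spot(az, el) else 0
        best = max(best, run)
    return best >= min_consecutive
```

The dwell-time requirement is the interesting design choice: head pose alone is noisy, so requiring a sustained run of in-spot frames is one cheap way to separate deliberate attention from someone simply walking past a billboard.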

Adding Glass to the Equation

While the patent doesn’t specifically mention Google Glass, the wording of one passage strongly suggests it’s part of the equation:

Eyeglasses including side-arms that engage ears of the user, a nose bridge that engages a nose of the user, and lenses through which the user views the external scenes, wherein the scene images are captured in real-time.

Thus far, Google Glass has been demonstrated predominantly as an action cam with a few fancy extra features. It has the potential to be a whole lot more, but whether it reaches that potential will largely depend on the public’s acceptance of people walking around with cameras on their faces. And there’s already been backlash: Several watering holes have banned Google Glass, and Google itself has put the kibosh on the development of apps with facial-recognition properties out of privacy concerns.

Google Glass is more of a head-up display than an augmented reality solution. It provides a digital overlay for the physical world – which makes for some very interesting location-related possibilities. It’s already location-aware, so why not make advertising location-specific? We already see geo-targeted ads, but this could be much more tightly focused. Imagine yourself downtown in a big city at 5 p.m. Your search history indicates that you might be looking for a new laptop. You wander the streets and an ad pops up that shows that the model you had your eye on is on sale at an electronics store around the corner. Would you walk in? I probably would. So you go in, do some window-shopping. Time passes and it is now 6:30 – a time when most people are thinking about getting something to eat. You exit the store, look to your left, and several ads pop up for restaurants in your vicinity. You eat, leave the restaurant at 8:00, this time you see ads for bars. When you stumble out of the bar in the early hours of the morning, it points you toward a taxi stand.

The Trouble with Google Glass

While the patent may suggest integration with Google Glass, don’t expect to start seeing the new feature anytime soon. Google Glass has a couple of fairly significant problems that need to be addressed before it can happen. First and foremost is the price: The Explorer unit comes in at a wallet-breaking $1,500 – far too expensive for most people. Secondly, people aren’t used to seeing Google Glass, so it looks kind of silly. A lot of people would be embarrassed to wear it. These problems aren’t unrelated. If you get Google Glass onto more faces, people will get used to it and it’ll seem less silly. To do that, Google needs to sell more. A lot more.

The Unimaginative But Likely Solution

How can Google get Glass onto more heads? As discussed, it needs to be cheaper. So Google can sacrifice on components, improve production output, or subsidize and take a hit on each unit sold. An advertising-supported pricing model would offer a partial solution – but realistically, how much revenue can one person generate in a year? With the present model of Internet advertising, the answer is “not a lot.” But the Glass environment allows all kinds of parameters around the advertisement to be tweaked to make the ads in question much more effective. We already have location-based advertising, but there is a huge difference between a potential customer sitting behind his or her desk and a potential customer who is on the move. As such, pay-per-gaze Glass advertisements would hold much greater value than typical AdSense offerings.

If ad prices were increased and an individual could generate more ad revenue, it does seem feasible that advertising could subsidize the cost of Google Glass. If you couple that with savings made through volume production and throw a carrier subsidy on top, Google Glass would almost be affordable. Offer a subscription service like cell phones, and people would forget how much they’re actually paying.
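The arithmetic behind that claim is easy to sketch. Every number below is hypothetical – neither Google nor any carrier has published figures – but it shows how a carrier subsidy, volume savings and monthly per-user ad revenue might combine to pay off a headset:

```python
def breakeven_months(device_cost, carrier_subsidy, volume_savings,
                     monthly_ad_revenue):
    """Months of per-user ad revenue needed to recoup whatever subsidy
    remains after carrier and volume-production savings are applied.
    All inputs are illustrative dollar amounts."""
    subsidized_cost = device_cost - carrier_subsidy - volume_savings
    if subsidized_cost <= 0:
        return 0
    # Round up: a partial month of revenue still has to be earned.
    return -(-subsidized_cost // monthly_ad_revenue)

# e.g. a $1,000 headset with a $200 carrier subsidy, $300 in volume
# savings and $25/month in pay-per-gaze revenue -> 20 months.
```

At typical AdSense-style revenue levels the break-even horizon stretches to many years, which is exactly why higher-value gaze-verified ads matter to the model.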

The Road Ahead

Google has something of a chicken/egg conundrum on its hands. It needs customers to support its ad model so that it can drive revenue, but to get those customers, Glass needs to be more affordable. Google’s cash reserves may seem like an appealing solution, but jump-starting the market before people are ready would just be silly. Its money would be better spent on marketing and promotion than subsidies. That and a moderate price cut could help Glass gain a little traction – enough that pay-per-gaze would start to become a viable option. Once that happens, everything has the potential to hit critical mass.

All things considered, if I had shelled out the $1,500 that Glass costs now, I would be pretty disappointed if there was no way to enjoy an ad-free experience. With that in mind, Google needs to think very carefully about the sort of experience it will offer. The right kind of ads can be useful, but there’s a time and a place for everything. If Google’s successful in finding the very fine balance between utility and annoyance, pay-per-gaze could be a huge success.

http://insights.dice.com/2013/11/04/googles-pay-per-gaze-patent-intriguing-but-needs-groundwork-151/

Next Gen. Console Game Lineups Hint at Future Directions
http://insights.dice.com/2013/10/21/next-generation-console-game-lineups-hint-at-future-directions-133/
Mon, 21 Oct 2013 14:03:16 +0000

Perhaps the most surprising thing about Microsoft and Sony’s next generation console game lists is that there were so few surprises.

By December 31, Microsoft and Sony plan to release at least 23 and 33 titles, respectively. Based on sheer numbers, it would seem that Sony has the upper hand, but a closer look reveals that the company might have been trying to boost its numbers: More than half of the titles are digital downloads, of which it appears that only four are exclusive to the PS4 (others may appear on the PS3 and many have already appeared on PC). That is not to dismiss the indie lineup. There are some great games in there – it’s just that the majority of the gaming public doesn’t need to purchase a PS4 to get them.

Microsoft’s launch list might seem a little sparse, but the company definitely seems to have made an effort: While Forza Motorsport 5 is not quite Gran Turismo, it still has a good number of loyal adherents. And besides, we’ll see GT6 on the PS3 well before it hits the PS4. The fighting game lineup for the Xbox One looks fairly ho-hum, with one notable exception: Ryse: Son of Rome. Until now, Crytek had been associated with first-person shooters – most notably Crysis and Far Cry – so it will be interesting to see what it can do with a big-budget gladiatorial combat game. At any rate, there are a couple of big titles in the mix, and some of the others might surprise.

Sony’s exclusive content seems limited to maybes. Driveclub is pegged as a Grid successor, and Killzone: Shadow Fall is coming out against Battlefield 4 and Call of Duty: Ghosts, both massive cross-platform releases. While Shadow Fall looks like it might make an interesting first-person shooter, with that level of competition it will be hard for it to be anything more than an also-ran. And there’s the rub: How many first-person shooters can one person play?

The similarities in the architecture of the Xbox One and PS4 mean that most big-name titles won’t be exclusives. The Xbox One might get a sequel to Halo and the PS4 will get a successor to Yakuza, but we probably won’t see many exclusives beyond the handful of games that Microsoft and Sony have managed to lock down. Don’t expect the latest round of console wars to be decided by exclusive titles – they have their hardcore fans, but in the grand scheme of things, they’re not that big a factor.

That being said, Microsoft does have a pretty big ace up its sleeve with the Kinect 2. While games like Zumba Fitness: World Party (also available on Wii U) and Powerstar Golf might not immediately appeal to the traditional Mountain Dew-sipping, Cheetos-munching hardcore gaming crowd, they are precisely the direction in which Microsoft should be throwing its money. “Why?” you ask. Because fitness games and sports simulators can broaden the Xbox One’s appeal well beyond its traditional market. Nintendo’s first Wii hinted at what was possible, but imagine where things could go.

Don’t be surprised if we see a highly realistic fitness trainer or sport simulator hitting big in the not too distant future. Imagine the potential impact of a game that actually taught a golfer to improve his or her swing. Now apply that across the board to other sports and activities and you get an idea for the potential of the Kinect 2. Microsoft’s decision to raise the price of the Xbox One by $100 to package the Kinect 2 with every console indicates that the company will be devoting a lot more time and energy toward producing Kinect-capable titles.

Get Ready for a Virtual Future
http://insights.dice.com/2013/08/23/get-ready-for-a-virtual-future/
Fri, 23 Aug 2013 14:00:47 +0000

When virtual reality (VR) first became a buzzword in the early ’90s, we seemed destined for a world full of headsets and totally immersive content – then that future evaporated and we were left with naught but bitter disappointment. What happened?

Early VR systems suffered from three fatal flaws: They were really heavy, really expensive, and there was a serious lack of applicable content. The impact was never in question; the reactions of attendees at “fine art” installations by early pioneers like Char Davies demonstrated that. But equipment costs limited the number of people who could view their work. The technology was not ready for prime time and manufacturers took a step back. Now, we’re seeing a resurgence, which is noteworthy because it has not, for the most part, been driven by conventional hardware sources.

While VR conjures up a range of connotations, I’d argue that it takes more than a headset to create a complete virtual experience. Granted, sight and sound are perhaps our most important senses when it comes to taking in simulated environments, but our means of interaction with the virtual are crucial to the overall experience.

At the very least, that means we need a headset and some sort of motion-sensing technology. The latter gets a little tricky, because moving about with a headset strapped in front of your face could easily make for an expensive accident – thus the user’s movement through physical space needs to be limited without restricting their range of motion.

The success of the Oculus Rift on Kickstarter is proof enough that the gaming masses have been left wanting in the VR department. While a consumer version of the headset has yet to hit the market, there are enough developer models around to gauge the general response – “overwhelming” would be something of an exaggeration, but developers do seem, for the most part, to be very satisfied. At 379 grams (not counting headphones), the headset is light enough to wear for extended periods of time. With the second version of the headset, which is aimed at consumers, developers are looking to improve head and weapon tracking and the screen resolution.

The most significant complaint, that the so-called “screen door effect” was very noticeable on the first version of the Rift, was for the most part ironed out in the developer model. Ramping up to a higher-definition display with better pixel fill, as is expected with version 2.0, would go a long way toward mitigating this and achieving a level of visual clarity that would significantly broaden its appeal.

Headset technology has always been the biggest obstacle to VR adoption. The weight and cost issues that made early models either unimpressive or prohibitively expensive have largely been overcome. The magic comes from a Hillcrest 3DoF head tracker with custom firmware that allows it to run at 250Hz. Coupled with 3-axis gyros, accelerometers and magnetometers, it’s able to display a fairly detailed artificial environment that a user can look around fairly quickly without experiencing motion blur. It’s not perfect, but it’s almost there – and more importantly, it’s almost there at a price that most people will be able to afford. While there have been no official announcements with regard to the retail price, it’s widely rumored to be close to $300.
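Oculus hasn’t published its fusion algorithm, but the general idea behind combining a fast-but-drifting gyro with a slow-but-stable reference (the accelerometer’s gravity vector) can be illustrated with a minimal complementary filter. This is a single-axis teaching sketch, not the Rift’s actual tracker, and the drift rate and blend factor are invented:

```python
def complementary_pitch(pitch_deg, gyro_rate_dps, accel_pitch_deg,
                        dt=1 / 250.0, alpha=0.98):
    """One 250 Hz update of a complementary filter: integrate the gyro
    for fast, low-latency response, then nudge the estimate toward the
    accelerometer-derived pitch to cancel long-term gyro drift."""
    gyro_estimate = pitch_deg + gyro_rate_dps * dt
    return alpha * gyro_estimate + (1 - alpha) * accel_pitch_deg

# Simulate holding the head still at 10 degrees while the gyro reports
# a constant 0.5 deg/s of drift:
pitch = 0.0
for _ in range(2500):              # 10 seconds at 250 Hz
    pitch = complementary_pitch(pitch, gyro_rate_dps=0.5,
                                accel_pitch_deg=10.0)
# pitch settles near 10 degrees despite the never-corrected gyro bias
```

The gyro term gives the low-latency response that keeps fast head turns from blurring; the accelerometer term is what stops the virtual horizon from slowly tilting over a long play session.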

Now that the Oculus Rift is nearing launch, we’re starting to see the emergence of crowdfunded third-party accessories that aim to boost the Rift experience. At this stage, I’d opine that it’s a little too early to start introducing peripherals that fall outside of the core virtual experience: sight, sound and movement. A force feedback vest popped up recently, but doesn’t look like it will be successfully funded.

So what of the movement side of the equation? That actually comes to us from two sources – one of which is, not surprisingly, Microsoft’s Kinect. The other is the Virtuix Omni, a VR treadmill that translates steps into virtual-world movements. The technology itself is relatively simple: a multi-directional treadmill that uses accelerometers and magnetometers to translate a user’s walking or running into realistic in-game movement. Other design features largely involve solving the problem of keeping the user upright.
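As a rough illustration only (Virtuix hasn’t published its algorithms), translating step cadence and magnetometer heading into an in-game velocity could look something like this; the stride length and the game-world axis convention are assumptions:

```python
import math

def omni_velocity(steps_per_second, heading_deg, stride_m=0.75):
    """Translate treadmill cadence and compass heading into an in-game
    velocity vector in metres per second. stride_m is an assumed
    average stride length; a real product would calibrate per user."""
    speed = steps_per_second * stride_m
    heading = math.radians(heading_deg)
    # Assumed world convention: 0 degrees = straight ahead (+y),
    # 90 degrees = strafe right (+x).
    return (speed * math.sin(heading), speed * math.cos(heading))
```

The hard part in practice isn’t this mapping but cleanly detecting steps from noisy accelerometer data – which is also where most of the perceived realism comes from.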

At present, the Omni is still in the funding stage, but it has attracted a good deal of attention, enough so that it blitzed past its $150,000 funding goal on its first day on Kickstarter. There is, as yet, no word on the retail price – Virtuix has stated that its goal is to get the Omni into the hands of as many developers as possible, and that the price of the finished product is likely to be significantly higher than the Kickstarter price of $400.

The Omni is not, however, a complete solution to in-game movement. It will let you walk and run without falling over, but it doesn’t track other body movements – to throw that level of sensitivity into the ring you need the Kinect.

The Kinect effectively solves the last piece of the movement puzzle and frees the user from the tyranny of the keyboard or the need for a handheld controller. With the Kinect you can translate any number of real-world actions into virtual-world movements and enter your VR environment empty-handed. The second-generation Kinect will be Windows-compatible, which should mean that we start to see movement tracking technology make its way into a wider range of games – and into first-person-shooter territory, which probably stands to gain the most from the Oculus Rift. At present there is no official price for the Kinect 2. When the first version hit the market it was priced at $150, since reduced to $110, and that would seem to be a reasonable price point at which to aim.
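To give a flavor of what skeletal tracking makes possible, here’s a toy gesture detector. The data format (wrist depth in metres from the sensor, one reading per frame) and the thresholds are invented for illustration; a real Kinect pipeline tracks a full skeleton and is far more sophisticated:

```python
def detect_punch(wrist_z_history, threshold_m=0.4, window=5):
    """Flag a 'punch' when the wrist moves toward the sensor
    (decreasing depth) by more than threshold_m over the last
    `window` frames of tracked joint data."""
    if len(wrist_z_history) < window:
        return False
    recent = wrist_z_history[-window:]
    # Depth shrinking means the hand is thrusting at the sensor.
    return recent[0] - recent[-1] > threshold_m
```

Mapping recognized gestures like this onto game commands is how empty-handed control works: the controller’s buttons are replaced by a vocabulary of recognized body movements.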

So where does this leave us? In the early days of VR, a complete system would have cost tens of thousands of dollars. Even as recently as 2006, according to a report from TechCast, one would have been looking at spending no less than $5,000. Interestingly, the author of that report suggested that mainstream adoption of VR would not happen until 2020. The report paints a clear picture of where the industry was seven years ago. Nobody was predicting the proliferation of mainstream VR in less than a decade – and nobody had heard of crowdfunding. At present, it would cost less than $1,000 to acquire all of the peripherals needed to create a functional VR unit, assuming that the user already has a PC.

With the relative affordability of the soon-to-be-released VR technology, we can expect it to make a splash in the gaming industry in the not too distant future. Given that all of the core components are still in production, we’re unlikely to see anything too spectacular before the end of the year. But from 2014 on, VR is almost certain to gain significant traction.

Can the Oculus Rift Revive Flagging PC Sales?
http://insights.dice.com/2013/08/02/can-the-oculus-rift-help-revive-flagging-pc-sales/
Fri, 02 Aug 2013 13:50:35 +0000

The term “game changer” gets thrown about a lot these days, but if there is one device to which it can be correctly applied, it’s the Oculus Rift. The Kickstarter-funded virtual reality headset single-handedly revived interest in immersive gaming experiences. Could it inject some much-needed life into flagging PC sales, too?

For PC manufacturers, these are interesting times. Consoles and portable devices have been eating into sales. While PCs have always enjoyed a performance edge, consoles have been getting better and better. The next generation consoles may not pack a comparable processing punch, but they should provide a comparable gaming experience. That, and the fact that console manufacturers can recoup hardware losses through software sales, makes today’s PC industry a very tough business.

Recent comments and developments would suggest that the Rift is going to be a PC-oriented device. Oculus VR is gearing its headset for the PC, rather than the Xbox One or PS4. While aligning the Rift to the PC community might seem counter-intuitive, it actually makes a lot of sense. It ties the Rift to a much larger development community and targets users who should be more inclined to part with $300 for a new peripheral. The purchase is a little easier to justify for someone who has already shelled out $1,500 on a mid-range gaming rig than for someone who spent $500 on a console. There may be more gamers than ever, but everyone places differing levels of importance on gaming as a pastime. Logically, the first people to target are the connoisseurs.

The Relationship Will Be Complicated

While PC sales have been hurting for some time, there are still a lot of them around. For most people, CPUs are as powerful as they need to be – the biggest payoffs can be found by upgrading the GPU. For a long time CPU demands were the engine behind PC sales, but our requirements aren’t growing the way that they once were. This needs to be taken into consideration with regard to the Rift, as most potential buyers will already have a machine that is up-to-spec to handle its requirements.

That being said, if the stars align and people start spending more time immersed in the VR experience, there’s a good chance that they’ll be more inclined to upgrade their equipment. When playing a game, I love nothing more than being able to max out all or most of the settings. The Oculus Rift could help to steer the PC industry in the right direction, but it needs to find its own market first.

Independent Development is Everything

The Rift is a truly special device for a number of reasons: First, it was crowd-funded. Second, it was funded at a time when VR wasn’t a mainstream technology. Third, the company seemed to come out of nowhere. Big technology was asleep at the wheel, but it won’t be for long. The Rift could well be to VR what the Hero was to action cams. But first it needs to get into the hands of as many developers, and onto the heads of as many consumers, as possible.

As with any gaming/computer-related technology, the key to uptake will be content – more specifically, it will be the free content that really makes the Rift. Big title games are important, but updating game libraries to include Rift-compatible titles costs money. Add that to the price of the headset itself and things start to get expensive. Granted, people can acquire such titles slowly, but then the Rift becomes something that is used occasionally – a curiosity or novelty that spends most of its time in a drawer. To make it a must-have device, Oculus VR needs to turbocharge content production by encouraging as much amateur development as possible. The company’s launch of the VR Jam, which pumps $50,000 into development of Rift-compatible programs, is a step in the right direction.

With a projected price of around $300, the headset isn’t exactly expensive, but it costs enough to make some potential users think twice. People will be much more inclined to part with their hard-earned money if they have access to a decent volume of free content.

Make no mistake, major commercial titles will have just as big a part to play, but what will get the Rift into the homes of early adopters will be amateur productions: tours of Tuscany, guillotine simulators, etc. It’s the experimenters who will drive the technology into new directions and, dare I say it, lay the groundwork for the more polished commercial projects that follow. A solid library of free content will give people opportunities to explore, and that makes for a greater level of perceived satisfaction. In turn, that will greatly benefit word-of-mouth.

Encouraging the indie community will help bring in the early adopters, the first piece of the puzzle, if you will. However, making the Rift mainstream will take a lot more work. If the Rift is to offer any sort of lifeline to PC gaming, it will have to become more than mainstream — it will have to become ordinary. Early adopters are influencers and will help drive sales, but again that will only go so far. To extend beyond a relative handful of technology enthusiasts, the solution will have to be cheap.

It Has to Be Cheap

One of the ideas that Oculus VR is exploring, according to CEO Brendan Iribe, is subsidizing the device, or even giving it away. While that sounds farfetched, there are a number of ways a subsidized model could work – even a model that was subsidized to the point of being zero-cost.

The loss-leader concept is nothing new. It has worked well for the likes of Microsoft and Sony with their current-generation consoles, and the same approach seems to have been taken with the pricing structure of their next-generation counterparts. Kindles, too, are priced well below the cost of their components. However, selling a product cheaply and giving it away are two different things altogether.

Oculus Rift could be sold for less than it costs to make and losses could be recouped from software sales. The trouble with that model is that Oculus VR is not in the content production business. This could be overcome with a royalty or licensing fee for commercial material – but that would become unworkable as competing products start hitting the market.

While not free, a subscription-based model would allow people to get the Rift home for little to no money upfront, which would be more attractive to some users. Losses due to delinquent payments could be recouped through profits from long-term subscribers. As a consumer, I would prefer to pay upfront, but there are plenty of people who think differently.

Another alternative would be to partner with PC or console manufacturers and include the Rift as part of a package. A free or cheap Rift could help to boost sales. In the case of console manufacturers, that would have a flow-on effect when it comes to game sales – a roundabout way to possibly sell more. It almost seems workable, but there seem to be a few too many “ifs”.

Where it makes a lot of sense is with distribution platforms – more specifically, Steam. The company has an enormous user base, a great content delivery infrastructure and a management ideology that seems to be more or less in line with that of Oculus VR. More importantly, the always-on Steam platform would give them the cold, hard metrics that would let them figure out exactly what would work. They would not just know who buys what and for how much; they’d know who is playing what, and how long and how often they are playing – valuable data when it comes to figuring out how much money to lose on the device and how much to pull back in from the games.
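A toy version of that kind of metric: given per-user purchase records and an assumed royalty rate (both entirely hypothetical – neither company has published such figures), estimate how much of a hardware subsidy each user’s software spend could repay:

```python
from collections import defaultdict

def revenue_per_user(purchases, royalty_rate=0.3):
    """Aggregate per-user software spend and apply an assumed royalty
    rate, estimating how much hardware subsidy each user's purchases
    could repay. purchases: iterable of (user, price) tuples."""
    spend = defaultdict(float)
    for user, price in purchases:
        spend[user] += price
    return {user: total * royalty_rate
            for user, total in spend.items()}
```

Run across a whole user base, numbers like these are what would let a platform decide how deep a per-unit loss on the headset it could safely absorb.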

It would seem to be the perfect match for the oft-discussed but never-seen Steam console, which will almost certainly be a small-form-factor PC. Steam would give the Rift a fantastic platform, and an affordable Steam console packaged with a Rift could put more people in front of PCs – but would that really help the industry as a whole? In the case of the Steam user base, it is very much a case of preaching to the converted. Any impact on PC sales would have to flow on from there and would be hard to judge.

Given the recent comments from Oculus VR regarding the PC orientation, I wouldn’t be surprised if Steam enters the picture sometime in the not too distant future.

It Has to Inspire

Perhaps the most important thing that the Oculus Rift can do for the PC industry right now is to drive innovation and imitation. The consumer VR market is potentially huge but, at present, totally untapped. It could be hard for a young company to meet the potential demand, and nothing kills interest like a long waiting list. A few competing products on the market could help ease some of the demand and spur development of next-generation headset technology.

At the end of the day, it’s unrealistic to expect a single device to reinvigorate an industry that some consider moribund. The power of mobile devices is increasing and our demands are changing. The PC, as we knew it, has become much less important. It still has a place, but that place is morphing.

Virtual reality technology has the potential to make PCs a much more attractive option. They lack the constraints of consoles in terms of interoperability. They are upgradeable and expandable, and thus arguably better-suited to bringing VR into the home. As the technology extends beyond the headset to other areas of the virtual experience, it becomes something that is a little more makeshift, a little more do-it-yourself and a little more closely aligned to the traditional PC gaming culture. We will not see anywhere near the same level of independent development with closed console environments. The creative explosion — and there will be one — will be PC-based.

While it remains to be seen whether that creative explosion translates into the sort of sales required to lift the PC industry out of the doldrums, there are interesting times ahead.

Upcoming smartphone releases from Sharp and Kyocera for Japan’s AU KDDI network may hint at the future direction of other manufacturers. Samsung’s Galaxy S4 videos trumpeted that it is the first touchscreen smartphone you don’t have to touch to operate. Onboard technology detects your finger swipes even when they’re above the screen – which is apparently great news for spare rib aficionados. At 4.99 inches, the screen is 0.01 inches from the arbitrary definition of a phablet. It’s a great piece of equipment, but it looks set to be plagued by the same issue that iPhone and Galaxy owners have faced from day one: The battery is too small.

As power management systems have improved, manufacturers have milked extra performance out of smartphone batteries. But with bigger screens and faster processors, it’s often a case of two steps forward, two steps back. We end up in the same place, with a phone that will get us through a day of regular use if we limit what we do. That’s no way to live. It was for this reason that I found myself very interested in two models set for release in Japan on the AU KDDI network.

Next-generation models from Sharp and Kyocera boast something that only Motorola’s Razr Maxx has managed to pull off – batteries big enough to last longer than a day. If the manufacturers’ claims are to be believed, they may last several days. However, this is the real world, and when the laws of physics and regular usage scenarios apply, most people will probably be able to manage a day and a half – more if they ration.

There are arguments against bigger batteries: They cost more, weigh more and make phones thicker. But if external battery sales are anything to go by, then it seems that the public wants them. I was a little surprised that the Motorola Razr Maxx failed to make much of a splash, but then consumers were still head-over-heels in love with the iPhone when it launched. People are fickle. The market has become a lot less predictable, and Apple has long since lost the dominance that seemed at one time to be guaranteed. People will move on if they like something better or when something else becomes a little too passé. Enter Samsung.

Samsung’s Approach

While some would argue that Samsung’s earlier Galaxy models borrowed a little too heavily from the iPhone, the South Korean manufacturer has since taken a direction that is entirely its own. While Apple opted for smaller screens that are easier to handle, Samsung focused on user experience. The 5-inch screens that we see on smaller phablet devices from Samsung, HTC, Lenovo and LG are testament to its vision. Although Retina displays look fantastic, you can properly navigate a website on a Galaxy S III, and the S4 delivers even more screen real estate. An iPhone is fine for minor browsing, but doesn’t hold a candle to the S III when it comes to entertaining yourself on a long train ride.

A quick perusal of upcoming models from Korean and Japanese manufacturers reveals that they’re all thinking the same thing when it comes to the touchscreen: Big screens sell (within reason). People don’t need huge hands to comfortably use a 5-inch screen. In my wanderings around Japanese high school classrooms, I’ve noticed that the S III and other models sporting plus-sized screens are now replacing the once dominant iPhone. Five-inch screens, or screens that are very close to that, are here to stay, with cut-priced mini versions for those who don’t want them. That the bigger phones have proved to be a hit in Japan shouldn’t come as a surprise since mobile devices are the point of Internet access for many young people there.

It’s easy to read too much into Japanese technology trends. The country has always done its own thing when it comes to mobile phones, and it can claim the ignominious distinction of having held out against the smartphone longer than any other developed country, to my knowledge. Eventually, the iPhone found its way there and the public was suitably wooed, but, with the exception of Sony, local manufacturers were slow to get onboard with Apple’s approach. Japan has tended to be a follower when it comes to smartphone technology, and its international success has been limited at best. That could change after SoftBank’s acquisition of Sprint, but it certainly won’t in the short term.

Japanese manufacturers face two hurdles: first, strongly top-down corporate cultures that some would say make it difficult for designers to make bold decisions and, second, an extremely limited range of third-party accessory manufacturers. Can you find an OtterBox for a Sharp? While I would love a battery that could last two or three days between charges, I like my rugged case and bullet-resistant screen protector more.

It could be argued that things didn’t start to change until 2012, when Hitachi, Panasonic, Kyocera and Sharp delivered a number of quality models and Sony started to fall by the wayside. Kyocera was notable for its bone-conduction speaker system. Sharp’s Aquos Serie was an attractive phone with solid performance, hampered by underwhelming battery life.

Power Play

The next-generation phones from both companies will offer only the usual gains in processing performance and functionality. However, battery capacity for both models has been given a substantial bump, to the point where the average commuter or student should never have to worry about whether they packed their charger or external battery. The 2013 Sharp Aquos Serie boasts a 3,080mAh battery and adds 802.11ac into the mix. Kyocera’s 2013 Urbano sports a 2,700mAh battery, which the manufacturer says is good for 570 minutes of talk time. Incidentally, that’s only 100mAh bigger than the battery in Samsung’s S4, but it’s driving a 4.7-inch screen as opposed to a 4.99-inch one. In contrast, the iPhone 5’s 1,440mAh is laughable.
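
To put those capacities in perspective, here’s a rough back-of-the-envelope sketch. The arithmetic is my own, not from any spec sheet: it takes the Urbano’s claimed 570 minutes of talk time, derives the implied average current draw, and naively projects talk time for the other batteries mentioned above at that same draw (ignoring differences in screen, radio and chipset).

```python
# Back-of-the-envelope estimate based on the figures quoted in the article.
# Assumption: all phones draw roughly the same average current during a call,
# which is a simplification -- screen size and silicon differ between models.

URBANO_MAH = 2700        # Kyocera Urbano (2013) battery capacity
URBANO_TALK_MIN = 570    # manufacturer's claimed talk time, in minutes

# Implied average current draw during a call, in milliamps
avg_draw_ma = URBANO_MAH / (URBANO_TALK_MIN / 60)   # roughly 284 mA

def projected_talk_minutes(capacity_mah, draw_ma=avg_draw_ma):
    """Naive projection of talk time at the Urbano's implied average draw."""
    return capacity_mah / draw_ma * 60

for name, mah in [("Sharp Aquos Serie", 3080),
                  ("Samsung Galaxy S4", 2600),
                  ("Apple iPhone 5", 1440)]:
    print(f"{name}: ~{projected_talk_minutes(mah):.0f} min")
```

On these (admittedly crude) numbers, the iPhone 5 would manage a bit over five hours of talk time to the Aquos Serie’s ten-plus, which is the gap the rest of this section is getting at.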

Battery life is important everywhere, but it’s crucial where people travel to and from work or school by public transportation. Cases in point: Tokyo, New York, London and just about every major city with a well-established transit network. The difference in Japan is that the network extends throughout most of the country. Tokyo’s traffic is testament to the fact that plenty of people still drive, but Shinjuku Station at rush hour gives an idea of the number of people who have to get to and from work without access to a car’s cigarette lighter. As such, the ~3,000mAh range seems right on the money.

Japanese smartphone manufacturers don’t really need to hit big internationally to have a strong impact on the industry. The Japanese market is big enough that they can do it from home. If Kyocera and Sharp eat up enough of Apple’s, Samsung’s and, to a lesser extent, Sony’s market share, then we’ll start to see battery life being taken a lot more seriously. If that happens, we might just see the major manufacturers releasing Japan-only models with bigger batteries – or see changes implemented across the board. Let’s hope that it’s the latter, because I for one long for the day when I don’t find myself in a state of panic every time I forget to pack my charger.

The Cylindrical Mac Pro Will Only Be a Footnote
June 23, 2013

Apple has lifted the lid on its completely redesigned Mac Pro. In spite of the chorus of “oohs” and “aahs” that greeted the unveiling of the high-powered cylindrical computer at this year’s WWDC, it will almost certainly be Apple’s least significant product of 2013.

The trouble with Mac Pros — and not just this version — has always been their price. Their appeal lies with a limited number of designers and video editors who can and will pay the Apple premium. If the baseline price jumps with the new Pro, as many suspect it will, even the entry-level model will demand a steep premium. But there are other reasons to expect a tepid consumer response.

Complex Guts

The 2013 version’s cost will have little to do with the fancy case and everything to do with what’s going on under the hood. As of now, the confirmed specifications include:

Dual Gigabit Ethernet

HDMI 1.4

Thunderbolt 2.0 (six ports) with DisplayPort 1.2 support for up to three 4K displays

802.11ac wireless

Bluetooth 4.0

1866MHz ECC RAM

PCIe-based flash storage

Dual AMD FirePro GPUs with up to 6GB of VRAM

This last item is a big part of the problem. While there are a few advantages to the professional-grade video cards offered by AMD and Nvidia, for the vast majority of users, even many professionals, higher-end gaming-grade cards do just fine. And in this case, you’re buying two boards whether you need them or not, which will almost certainly raise the cost of the entry-level model.

The cylindrical case is an issue in and of itself. It looks fantastic, but there’s a reason why computer cases are rectangular boxes: They’re used to house rectangular components. Apple has performed a few interesting tricks to get the motherboard and cards into the case, but not without consequences.

The biggest of these consequences is reduced upgradability. The easiest way to squeeze more performance out of an aging system is to upgrade its video cards, and the Mac Pro’s cylindrical case will prove an obstacle for even the most technically competent users. Where are we going to get compatible cards, and how much extra will we pay for them? The case measures just 6.6 x 9.9 inches, so a significant amount of customization is clearly involved in getting the cards and motherboard to fit into their tiny enclosure.

Growing Pains

Lack of upgradability is one issue; lack of expandability is another. To be fair, the Thunderbolt 2.0 ports do give the new Mac Pro plenty of potential, albeit costly, options. Thunderbolt 2.0 RAID devices aren’t cheap, but they’re a great way to go if you don’t have enough internal storage. And you won’t, since the Mac Pro only offers flash storage. One of the big pluses of working with a full-sized desktop is having the best of both worlds: you run the OS and core programs off a solid-state drive and use one or more hard disks for everything else. With the new Mac Pro, you can only have that luxury if you plug in an external drive.

Another feature conspicuous by its absence is the optical drive. While most users can get away without one, it’s pretty much essential for a good percentage of the Mac Pro’s market base. Though the first thing people think of when they hear “video editing” is someone stitching together a feature-length blockbuster, the vast majority of video editors deal with much shorter fare: weddings, events, school plays and the like. Do you hand over that sort of data on a portable hard drive? Usually not.

The Verdict

The 2013 Mac Pro is a highly capable machine, and Apple certainly didn’t skimp on the hardware. But it looks like a massive case of form over function: Apple has built a stunning device but has sacrificed a lot of convenience in the process. On its own, the machine will be expensive; of that much we can be sure, given that even the entry-level configuration includes two FirePro boards. And the expenses don’t stop there: the Thunderbolt accessories needed to accomplish relatively common tasks will add a substantial burden of their own.

The trouble for current Mac Pro users is that many of them are heavily invested in software. This severely limits their options and leaves them with the unattractive prospect of purchasing new versions and dealing with a new OS. (This is perhaps the only silver lining to Adobe’s decision to make Creative Suite subscription-only: it makes jumping ship less expensive.)

The 2013 Mac Pro looks fantastic, but the price of both the machine and the Thunderbolt options for expanding its capabilities will make it a losing proposition for most people. As far as I can see, the only reason to buy one is if you already own a lot of Thunderbolt peripherals or are heavily invested in really expensive software. Don’t drink the Kool-Aid.