Premium Research

You can't buy a hybrid cloud as a product or as a service, and even if you could, you would need to customise it for your unique requirements and constraints. The reality today is that you need to buy the ingredients from a supplier and then roll your own hybrid cloud, and to manage this you need to put in place a Hybrid Cloud Manifesto.

The SPC-2 benchmark is a useful measure of bandwidth-intensive sequential workloads such as backup, ETL (extract, transform, load), and large-scale analytics. Wikibon performs a deep comparative analysis of the SPC-2 results, time-adjusting the pricing information to correct for different publication dates. Wikibon then analyses performance and price-performance together and develops a guide to help practitioners understand the business options and best strategic fit. Wikibon concludes that the Oracle ZS4-4 storage appliance dominates high-bandwidth processing, offering the best combination of good performance and great price-performance at the high end and mid-range of this market.

The thesis of the overall Wikibon research in this area is that within two years, the majority of IT installations will be moving to combine workloads and share data using NAND flash as the only active storage medium. This will save on IT budget and improve IT productivity, especially in the IT development function. Our research shows these changes have the potential to reduce the typical IT budget by 34% over a five-year period while delivering the same functionality to the business. The projected savings of moving to a shared-data all-flash datacenter for an organization with a $40M IT budget are $38M over five years, with an IRR of 246%, an annual ROI of 542%, and breakeven at 13 months. Future research will look at the potential to maximize the contribution of IT to the business, and will conclude that IT budgets should increase to deliver historic improvements in internal productivity and increased business potential.
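For readers unfamiliar with the metrics above, IRR is the discount rate at which the net present value (NPV) of a project's cash flows is zero. The sketch below shows the mechanics of that calculation; the cash flows are invented for illustration and are not Wikibon's actual model.

```python
def npv(rate: float, cash_flows: list[float]) -> float:
    """Net present value: each year's cash flow discounted back to year 0."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def irr(cash_flows: list[float], lo: float = 0.0, hi: float = 10.0,
        tol: float = 1e-6) -> float:
    """Find the rate where NPV crosses zero, by bisection.
    Assumes NPV is positive at `lo` and negative at `hi`."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, cash_flows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Hypothetical cash flows in $M: an up-front migration cost in year 0,
# followed by annual savings. These numbers are made up for the example.
flows = [-4, 8, 10, 10, 10, 10]
rate = irr(flows)  # a rate > 1.0 means an IRR above 100% per year
```

A breakeven figure like "13 months" simply marks the point where cumulative savings first exceed the cumulative investment.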

The Public Cloud market is still forming, but it seems poised to enter the Early Majority stage of its development, where user behavior, preferences, and strategies become more stable. Large enterprises are more discerning about Public Cloud IaaS offerings: test and development appears to be a key entry point for them, since scale, operational complexity, and security/compliance/regulatory demands require a more nuanced approach to Public Cloud IaaS. Small and medium enterprises have the greatest need for Public Cloud and should consider well-established, lower-risk entry points such as SaaS, email, and web applications before venturing into mission-critical and IaaS workloads, to help them navigate an increasingly complex and costly IT infrastructure environment.

The Modern Prometheus Returns: Mega Gives the Finger to Content Cartels and the US Government

If you read the news about him, Kim Dotcom sounds larger than life. He has a reputation as a sleazebag with too much money to his name who became the center of a year-long firestorm involving his website Megaupload. It has now been over a year since the US violently dismantled his cloud cyberlocker file-sharing business in an action that also led to his arrest by heavily armed authorities.

The US government made Dotcom into something of a folk hero with its overbearing, catastrophic overreaction to his business model. As the trial winds on, all eyes are on Dotcom and the Megaupload debacle, wondering what the outcome will mean for cloud-based services that sell themselves to consumers, since all of them could be used for piracy in the same way Megaupload allegedly was. In fact, during the first months after the takedown of Megaupload, a cold wind blew through the cloud locker ecosystem, withering many projects right off the vine.

A focus on strong encryption, and it certainly doesn't look like vaporware

As I said above, Mega is being sold to potential customers as "The Privacy Company," and to back that up the company's technical docs appear to describe monster encryption. Files uploaded by users are protected with a giant 2048-bit RSA key pair, an AES-128 key is derived from each user's password, and everything is stored using a block-encryption system.
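The general shape of a design like the one Mega's docs describe is a key hierarchy: the password yields a master key, each file gets its own random key, and the file key is stored "wrapped" (encrypted) under the master key. Here is a minimal, runnable sketch of that pattern using only Python's standard library; the toy XOR cipher stands in for AES-128 and is not secure for real use, and none of the names here come from Mega's actual code.

```python
import hashlib
import hmac
import os

def stream_xor(key: bytes, data: bytes) -> bytes:
    """Toy stream cipher: XOR data with an HMAC-SHA256 keystream.
    A stand-in for AES; illustrative only, never use in production."""
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        block = hmac.new(key, counter.to_bytes(8, "big"), hashlib.sha256).digest()
        stream.extend(block)
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

# 1. Derive a master key from the user's password (PBKDF2 here as a
#    common stand-in for whatever derivation Mega actually uses).
password = b"correct horse battery staple"
master_key = hashlib.pbkdf2_hmac("sha256", password, b"per-user-salt",
                                 100_000, dklen=16)

# 2. Each file gets its own fresh random key.
file_key = os.urandom(16)
ciphertext = stream_xor(file_key, b"the file contents")

# 3. The file key is stored wrapped under the master key, so the server
#    only ever holds wrapped keys and ciphertext, never plaintext.
wrapped_key = stream_xor(master_key, file_key)

# The client can round-trip: unwrap the file key, then decrypt the file.
assert stream_xor(master_key, wrapped_key) == file_key
assert stream_xor(file_key, ciphertext) == b"the file contents"
```

The point of the hierarchy is that changing the password only requires re-wrapping keys, not re-encrypting every file.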

Basically: Mega wants to look like the Fort Knox of encryption. They’re not doing too bad of a job either.

Many of us in the cryptography world expected something similar to arrive from Dotcom's empire, but most of us thought Mega itself would be vaporware (or at least years off). It's unknown how much of the actual implementation fits what the Mega documents say; the encryption described may be quite strong, but it doesn't come without some potential pitfalls and might have benefited from a few more months of planning.

How strong the encryption can really be (even with a hulking 2048-bit key), given the way Mega works, is being hotly debated right now on Hacker News and in an Ars Technica article.

First, when the monster key is generated at the very beginning of the user session, the system seems to suggest that it is collecting entropy for the keys. This means random information is being pulled in from somewhere to make the encryption key harder to predict. Generating randomness for cryptographic keys is a tricky business, because computers don't actually do random: they need to look out into the world for it, usually by generating values from the user's mouse movements or keystrokes. However, it's hard to tell how Mega is pulling in this extra randomness.
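The mouse-and-keystroke approach described above usually works by stirring event data into a hash-based pool and drawing key material from the pool's state. The sketch below illustrates that general pattern; it is not Mega's code, and a real client would normally just lean on the OS CSPRNG (e.g. `os.urandom`) rather than roll its own.

```python
import hashlib

class EntropyPool:
    """Minimal entropy pool: unpredictable user-event data (mouse
    coordinates, keystroke timings) is hashed into a running state,
    then key material is derived from that state."""

    def __init__(self) -> None:
        self._state = hashlib.sha256()

    def mix(self, event_data: bytes) -> None:
        # Each observed event stirs more unpredictability into the pool.
        self._state.update(event_data)

    def draw(self, n: int) -> bytes:
        # Stretch the pool's digest into n bytes of key material.
        material = b""
        digest = self._state.digest()
        while len(material) < n:
            digest = hashlib.sha256(digest).digest()
            material += digest
        return material[:n]

pool = EntropyPool()
# Simulated (x, y, timestamp) mouse samples; a real client would capture
# these from actual input events, where the timings are hard to predict.
for x, y, t in [(103, 42, 1001), (110, 45, 1013), (96, 51, 1029)]:
    pool.mix(f"{x},{y},{t}".encode())

key = pool.draw(16)  # 128 bits of derived key material
```

The quality of the output is only as good as the unpredictability of what gets mixed in, which is exactly why it matters how (and whether) Mega gathers this extra randomness.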

Second, like Megaupload, it seems Mega will attempt to save space by consolidating identical pieces of files (deduplication), which should be next-to-impossible if every file uploaded by a user is encrypted from the jump. In order to deduplicate, Mega must know something about the internal nature of the files even after encryption, so that duplicates can be linked to save space when a new file is added or at a later date.
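The tension is easy to demonstrate: if every upload uses a fresh random key, identical files produce different ciphertexts, so the server can't spot duplicates by comparing hashes. The one well-known workaround is convergent encryption, where the key is derived from the file's own contents. The sketch below shows both cases with a toy XOR cipher standing in for real encryption; it is an illustration of the general technique, not a claim about how Mega is implemented.

```python
import hashlib
import os

def toy_encrypt(key: bytes, data: bytes) -> bytes:
    """Toy XOR 'cipher' standing in for real encryption; illustrative only."""
    stream = b""
    digest = key
    while len(stream) < len(data):
        digest = hashlib.sha256(digest).digest()
        stream += digest
    return bytes(a ^ b for a, b in zip(data, stream))

plaintext = b"the same cat video uploaded by two different users"

# Case 1: a fresh random key per upload. Identical plaintexts yield
# different ciphertexts, so hash-based dedup on the server fails.
c1 = toy_encrypt(os.urandom(16), plaintext)
c2 = toy_encrypt(os.urandom(16), plaintext)
assert hashlib.sha256(c1).digest() != hashlib.sha256(c2).digest()

# Case 2: convergent encryption derives the key from the plaintext
# itself, so identical files encrypt identically and dedup works again.
# The cost: the server can now confirm whether a user holds a known file.
k = hashlib.sha256(plaintext).digest()[:16]
c3 = toy_encrypt(k, plaintext)
c4 = toy_encrypt(k, plaintext)
assert hashlib.sha256(c3).digest() == hashlib.sha256(c4).digest()
```

If Mega does deduplicate encrypted uploads, something along these lines (or dedup on pre-encryption hashes supplied by the client) would have to be in play, and either choice leaks some information about file contents.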

As the launch wears on, more issues will probably keep cropping up, but the language of the technical docs is trying very hard to show that the system itself is highly secure and, for the most part, unaware of the content uploaded.

This "content agnostic" approach also gives Mega an out if it faces future scrutiny from the US government and the copyright cartel, who are certainly champing at the bit to take another run at Mega, although the US might be smarting after the severe drubbing it has taken in the global legal arena over the overzealous takedown of Megaupload. If Mega is largely unaware of what's uploaded to its system because of the strong encryption, then authorities cannot claim that Mega is liable for what was uploaded.

It's obvious that Mega was developed with the intent to give the proverbial middle finger to Dotcom's legal detractors, place the power to control privacy in users' own hands, and recreate most of what was lost with Megaupload. Add to this the fact that part of Dotcom's "release instructions" was that he wouldn't launch another service similar to Megaupload: although Mega does something quite like the now-defunct business, it is certainly a different beast, updated for a more discerning, privacy-minded user.

It’ll have to work first before anyone gets to really steam Mega’s encrypted noodles

Mega's obvious work to protect the privacy of its customers, while giving a big middle finger to the US government and the copyright interests that might want to take the project down yet again, may be good and interesting, but right now the system doesn't work for everyone. The giant influx of over a million users has led to a great deal of instability.

This morning, I finally uploaded a file successfully, but I still cannot download it even after an hour of waiting for it to finish being added to the system. (It's the first draft of this article, in fact.)

After that, it will be a whole new ballgame as we all watch and wait for the copyright industry to regroup and potentially go after Dotcom again. The dramatic downfall of Megaupload has fueled the amazing interest in Mega, and as this fire burns bright, hopefully it will also thaw the hoarfrost that has been growing on cloud cyberlocker services for over a year. With luck, the services that do return will make encryption a focus as well.

Even services such as Dropbox have seen increased speculation and scrutiny over their encryption and privacy practices, with potential probes in 2011 and updated terms of service. Now, more than ever, people who use the Internet are beginning to realize that what they put online can become meat for boogeymen like Anonymous, unscrupulous governments, or simply malicious rivals, and they want to be certain that the consumer-level cyberlocker services they use are up to the task of protecting their files.

Good luck, Mega, the cryptography and cybersecurity communities are watching—not to mention much of the cloud-based cyberlocker market.

About Kyt Dotson

Kyt Dotson is a Senior Editor at SiliconAngle, covering beats that include DevOps, security, gaming, and cutting-edge technology. Before joining SiliconAngle, Kyt worked as a software engineer, starting at Motorola in QA and eventually settling at Pets911.com, where he helped build a vast database for pet adoption and a lost-and-found system. Kyt is a published author who writes science fiction and fantasy works that incorporate ideas from modern-day technological innovation and explore the outcome of living with those technologies.