A US cloud storage provider is being sued because it did not provide a recoverable backup of TV show files deleted by an aggrieved ex-employee.
zodiacisland
F-A-I-L spells... fail!
CyberLynk, headquartered in Wisconsin, was used by a Hawaiian TV show production and distribution company, WeR1 World Network, to store …

COMMENTS

Goes without saying but....

These are exactly the sort of people Douglas Adams parodied with his SEP field.

How can anyone believe that putting your only copy of critical data on someone else's systems, without running unannounced tests, is a good thing? Not to say the cloud is bad per se, but to rely on a sales pitch for your business strategy???

I just can't be bothered to get into the 'hold a single copy' aspects...

how many

Sarcasm

How many are transport streams is probably irrelevant.

That they exist at all means pirates downloaded them. If the company want their shows back, they can offer immunity against prosecution* to the first person to provide them with broadcast quality copies of their shows.

We need a Nelson icon!

The basic fail of cloud

It *works* because it allows businesses to reduce costs and move all their IT storage and processing to someone else, who can, in turn, leverage economies of scale to reduce costs.

All well and good.

However thanks to a Dilbert style conflation of sleazy salesmen and gullible executives the reality is far from rosy.

Once a company outsources all its IT to the cloud, it makes a great saving, but there is nowhere left to back up, so they are truly at the mercy of the cloud provider. Every one of their business security and continuity risks remains (and some are magnified), but they are no longer in a position to control any of them - instead they have to hope the cloud provider has done it.

The alternative is to use the cloud, while keeping a complete copy of everything on your own systems. Rarely is this going to be cost effective...


"The alternative is to use the cloud, while keeping a complete copy of everything on your own systems. Rarely is this going to be cost effective"

Well, it is if you don't mind speed issues. It also depends on your net connection and the size of your data. The same can be done with a small SATA box loaded with off-the-shelf SATA drives in a RAID 6 array (good old Openfiler, how I love you). An 8TB box can be had on a domestic no-name six-port motherboard for very little money. Couple that with a headless Linux PC that can run cron jobs and you have an on-site backup box.
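A minimal sketch of what that nightly cron job would do, written in Python rather than calling `rsync` (the directory names are hypothetical; a real setup would more likely just run `rsync --archive --delete` against the RAID box):

```python
import os
import shutil

def mirror(src: str, dest: str) -> None:
    """One-way mirror of src into dest: copy new or newer files and
    delete anything dest has that src no longer does (roughly what
    `rsync --archive --delete` does). No versioning, so pair it with
    rotated snapshots if you need protection against deletions."""
    os.makedirs(dest, exist_ok=True)
    src_names = set(os.listdir(src))
    for name in os.listdir(dest):          # prune entries removed from src
        if name not in src_names:
            path = os.path.join(dest, name)
            (shutil.rmtree if os.path.isdir(path) else os.remove)(path)
    for name in src_names:                 # copy new or updated entries
        s, d = os.path.join(src, name), os.path.join(dest, name)
        if os.path.isdir(s):
            mirror(s, d)
        elif not os.path.exists(d) or os.path.getmtime(s) > os.path.getmtime(d):
            shutil.copy2(s, d)
```

Scheduled from cron (e.g. `30 2 * * * /usr/local/bin/nightly-mirror`, a made-up path), this gives you the on-site copy described above. Note the mirror faithfully propagates deletions, which is exactly the hazard discussed elsewhere in this thread.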

Well not all risks remain

If your offices burned down, at least you'd have your off-site backup. I suppose, however, that if you can be bothered to back up data off-site, it really shouldn't be a huge extra effort or expense to maintain a local backup, even if that's just a big RAID array.

In the real world yes. In the world of contracts

you have transferred total responsibility to the cloud company (assuming you've written the contract correctly of course), and can sue them into the modern equivalent of forced servitude, plus make the lawyers rich in the process!

Cloud-ish

Some reasonable suggestions - however.

1 - Yes, if you don't mind the speed issues you can keep local copies yourself. However, this involves maintaining your own IT infrastructure (however small), which starts to eat into the cost savings of using the cloud. This is especially ironic, as the concept is sold to the business as freeing them from the costs of local IT...

2 - Yes, if your office burns down you have an off-site backup, but that depends on the cloud implementation you have gone for. If you are using the cloud as a managed IT service (as seems to be the case here), then you would like to think that the cloud provider has the backups. As this case shows, that isn't always true. The risk remains - and in some instances the business has removed its own ability to control the risk.

3 - And contracts are vital. However, there are things a contract can't protect (reputational damage, for example), and some items are so irreplaceable that no amount of contractual fine will help. All this assumes the cloud company has the resources to pay the claim you make, but doesn't have the legal clout to water it down. Try suing Microsoft, Google etc. into forced servitude. Even if you have that mythical beast of a watertight contract, it will be a hard battle.

It may not have been that...

...there were no backups, but rather that said backups were sabotaged in addition to the originals, thus preventing complete reconstruction. Backups are prudent policy, yes, but they're still not bulletproof (nor proof against sabotage).

Coincidentally I just got this :-

"<Big Name ISP> is hosting complimentary seminars for CIOs, Finance Directors and business leaders looking to enable business strategy and drive business transformation in new and cost-effective ways."

I think I'd prefer the "no backups" theory

since the backups should theoretically have had at least one set off-site and disconnected from the cloud. If the ex-employee also managed to get to the off-site backups, that's a greater degree of incompetence and even more difficult to fix.

The Cloud Didn't Fail

There was no failure of the Cloud in this case.

An employee maliciously deleted files. That is not system failure. After the employee was fired, his access was not removed. That is failure number one. Failure number two is that the files were only stored in one location. The same problem could have occurred in an Enterprise storage environment. Who is at fault depends on the reason for the two failures, detail that we don't have.

Not failed ?

So the cloud was a service for the storage and processing of files. The buyer of said service put their files in the cloud. Then the files were not there. The buyer of said cloud service did not delete the files, nor did they forfeit them by not paying.

In what way didn't the cloud fail?

You can argue semantics about WHY the cloud failed (e.g. insufficient security, insufficient backups, etc.), but the fact is that the cloud failed. You can argue that the disks in the cloud didn't fail, you can argue that the servers didn't fail. But the total service, "the cloud" failed.

Failed Psychically

"The Cloud" failed to anticipate that a command it had every reason to treat as valid, from a user it had no reason to believe (pardon the anthropomorphizing) was not valid, was not what the other valid users wanted it to do. Reminiscent of those stupid cars that let drivers run into things.

wow

The blame lies mostly with WeR1. Not to have a local backup of company-critical data is inexcusable. Cloud storage should be a last resort for critical data, not the ONLY storage mechanism.

Also, the fact that the backups were not ISOLATED is another problem. A virus could have done the same thing. Remote access should not be able to compromise whole systems.

"The same problem could have occurred in an Enterprise storage environment"

Not on my watch. Removable storage exists for a reason - a fire could have caused the same problem. This cloud company is saying they have no backups? Wut? I have two months of rolling backups for our small company's 1TB of data, using removable hard drives. I find it hard to believe that a cloud company doesn't do the same.
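A two-month rolling scheme like that boils down to a pruning step over dated snapshots. A minimal sketch, assuming snapshots live in directories named sortably by date (the names and the keep count are illustrative, not anything from the article):

```python
import os
import shutil

def rotate_backups(backup_root: str, keep: int = 8) -> list:
    """Keep only the `keep` most recent snapshot directories under
    backup_root. Assumes snapshots are named sortably by date
    (YYYY-MM-DD), so eight weekly snapshots is roughly two months."""
    snaps = sorted(name for name in os.listdir(backup_root)
                   if os.path.isdir(os.path.join(backup_root, name)))
    for old in snaps[:-keep]:   # everything but the newest `keep`
        shutil.rmtree(os.path.join(backup_root, old))
    return snaps[-keep:]
```

With removable drives the same idea applies, with drives swapped in rotation taking the place of directories - and, crucially, the drive on the shelf is the isolated copy the comment above is asking for.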

Penny Wise,

I've worked in an environment where cloud was the only

strategy, and that was before the cloud became The Cloud(TM). In that environment, it is not unreasonable to ensure that the primary storage service provider is producing a backup regime which exceeds yours. That does mean checking some details in the contract, but assuming those were in place, WeR1 is not to blame.

local copies

Storing data in the cloud is great, you can access it from all over the world and if your house burns down or you lose your disks then you can get everything back, but I would never trust that as my only copy of something so important.

I have my emails stored only in the cloud because that's convenient; it would be annoying if I lost them, but not the end of the world. My photos are in the cloud on two providers and on my local drive, because they're important.

provisioning

Depending on the provisioning of the drives, the data may have been striped across multiple drives that could be difficult to remove for recovery purposes, and/or the drives may be so heavily used that the required data are effectively irretrievable, having been overwritten so many times.

no local copy of the data?

According to this article it was only 304GB of data, and it appears that the ONLY place it was stored was on this cloud storage. Surely they should have had a local copy of this data as well; heck, a 500GB SATA drive costs less than £50 and would have been good enough to keep it on.

If the show has been aired on several network TV stations, then it's time to start asking some of those stations for copies to get their data back.

Did they blame

Outsourcing Cannot Remove Risks

Ironic that this item appeared yesterday. This past Friday, I presented "Agility, the Cloud, and Accountability: What you can't know can kill you" as part of the Trenton Computer Festival Professional Conference (presentation at http://www.rlgsc.com/trentoncomputerfestival/2011/agility-the-cloud-accountability.html).

The basis of that presentation was that moving anything (e.g., tasks, processing, storage) to "the Cloud" cannot remove risk, it can only redistribute it. It is also very clear that a "redistribution" can seem to make risk disappear by obscuring it from view. However, in a manner reminiscent of multiple financial crises, it merely moves risk "off balance sheet". It does not destroy the risk. Moving to professional management should reduce the risk, but it is never eliminated.

In "Why Settle on a Hosting Provider? Bandwidth liquidity and other issues", the May 12, 2010 posting to my blog, Ruminations, I noted that providers are vulnerable to resource liquidity crises. Hosting providers who offer "unlimited" usage plans are clearly vulnerable to liquidity crises - runs on resources similar to bank runs - when more than the expected demand occurs. This is nothing new. Bank runs are legendary, as are congestion crises on utility networks during surge periods (e.g., the telephone network on Mother's Day, or US water systems on Super Bowl Sunday).

Employee malfeasance at a provider has similar risks. Automating processes so that a single individual can run massive infrastructure also increases the risk that a mis-operation (deliberate or accidental) will have system-wide implications.

RAID presents a similar hazard. RAID is a solution to drive failures, not a solution to software errors. A RAID array will dutifully copy incorrect data to all copies.

Risks can only be ameliorated by carefully implementing overlapping protections. There are no "magic bullets".

Loss of Data

This seems similar to the e-mail my wife got telling her that Chase security was partially breached...but anyone should only have her e-mail address. I am a Chase customer; no such notification. Containment is the name of the day, aye?