StorageBuzz – http://www.computerweekly.com/blog/StorageBuzz

Flash is costly and will never achieve price parity with HDD: Infinidat CEO
Wed, 16 Nov 2016

Flash storage is over-rated. Well, if not over-rated, then certainly more costly than many would have us believe, and price parity with spinning disk is a very, very long way off.

Those are the views of storage industry veteran (billed as Mr Symmetrix in the PR puff) and Infinidat CEO Moshe Yanai.

“The main problem with flash is the cost,” said Yanai this week. “Even now, Seagate has introduced a 16TB flash drive, but it costs $8,000 to $9,000. You can get the same capacity spinning disk drives for 1/50th of the price.”

He added: “Flash is coming down in price but so are HDDs. If people need to keep more data, to do it with flash would make them bankrupt.”

He said: “People who sell flash say it’s cheaper but they compare it to the cost of 15,000rpm spinning disk when a system based on 7,200rpm HDDs costs 10 times less than 15k.”

“There is no price crossing point coming, not in the next 10 or 20 years; ask people who produce disk and flash!”
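As a back-of-envelope illustration of the cost argument above, the sketch below compares cost per TB across drive types. Only the 16TB flash drive price comes from the article; the HDD capacities and prices are my assumed examples, not quoted figures.

```python
# Back-of-envelope $/TB comparison. Only the flash price is from the
# article; HDD capacities and prices are illustrative assumptions.

DRIVES = {
    # name: (capacity_tb, unit_price_usd)
    "16TB flash (SSD)":      (16, 8500),   # mid-point of the quoted $8,000-$9,000
    "15,000rpm HDD (1.8TB)": (1.8, 400),   # assumed price
    "7,200rpm NL-SAS (8TB)": (8, 250),     # assumed price
}

def cost_per_tb(capacity_tb, price_usd):
    """Simple unit-cost metric: dollars per terabyte of raw capacity."""
    return price_usd / capacity_tb

for name, (cap, price) in DRIVES.items():
    print(f"{name}: ${cost_per_tb(cap, price):.2f}/TB")
```

Even with these made-up HDD prices, the point Yanai makes is visible: comparing flash against 15k disk flatters flash, because 7,200rpm nearline disk is far cheaper per TB than either.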

Just so we have the full disclosure bit, though, Yanai’s Infinidat F-series storage arrays scale to petabytes and are based on very large numbers of nearline-SAS spinning disk with triple controllers, relatively big memory modules and flash for working datasets.

The secret sauce in the Infinidat software allows rapid access to data via striping of small data blocks (64KB) across all drives in an array’s enclosures.

That means, said Yanai, that if the array needed to go to spinning disk for data you could have all 480 HDD actuators working at once to seek and read data (assuming that’s how many drives you had fitted). Meanwhile, data writes are held in memory and written sequentially so as not to get in the way of reads.
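The wide-striping idea described above can be sketched in a few lines: chop data into small fixed-size blocks and place them round-robin across every drive, so a large read can engage many actuators in parallel. This is a toy illustration of the principle, not Infinidat's actual algorithm.

```python
# Toy sketch of wide striping: split data into 64KB blocks and spread
# them round-robin across all drives in the array.
# (Illustrative only -- not Infinidat's real placement logic.)

BLOCK_SIZE = 64 * 1024  # 64KB blocks, as in the article
NUM_DRIVES = 480        # drive count cited in the article

def stripe(data: bytes, num_drives: int = NUM_DRIVES):
    """Return a mapping of drive index -> list of 64KB blocks placed on it."""
    placement = {d: [] for d in range(num_drives)}
    blocks = [data[i:i + BLOCK_SIZE] for i in range(0, len(data), BLOCK_SIZE)]
    for n, block in enumerate(blocks):
        placement[n % num_drives].append(block)
    return placement

# A 10MB write becomes 160 blocks, landing on 160 different drives,
# so a subsequent read of that data can use 160 actuators at once:
layout = stripe(b"\x00" * (10 * 1024 * 1024))
busy_drives = sum(1 for blocks in layout.values() if blocks)
print(busy_drives)  # 160
```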

Brisk growth for storage in China; and a who’s who of Chinese data storage companies
Wed, 26 Oct 2016

It’s interesting to take a look at IDC’s recently-released market tracker figures for external disk storage systems for China.

Interesting in two senses: to see who the key players are (largely a different set of storage vendors from those we’re used to), and to see how the market there is progressing in terms of growth.

The headline facts are that in Q2 of 2016 the China external disk storage systems market was worth $527 million and that was up 7.2% year-on-year (although most growth was in the mid-market, with low- and high-end segments seeing declines).

The top five storage vendors were Huawei (21%), followed by EMC (10%), then Hikvision (8%), Inspur (6%) and IBM (6%), while “others” accounted for 49% of the market.

Inspur was a new entrant, according to the report, thanks to its work with government. Other notable growing storage companies mentioned included Sugon, Macrosan and Tongyou, where government and education were cited as drivers.

The top five vendors worldwide were EMC (28%), HPE and NetApp tied at second (10.5% each), then IBM (9.5%) and Dell and Hitachi tied at fifth (7.5% each), with “others” accounting for 27%.

China’s storage growth is largely in line with GDP growth of around 6% to 7%, which is seen as possibly unsustainably high. IDC’s identification of government and education as drivers falls in line with Chinese government policy of spending and lending.

A stark fact also is that most of the China storage market appears to be wrapped up by local vendors, with the US-based big six mostly struggling to register market share.

So, who are the key Chinese storage vendors?

Huawei – Probably the best-known of the Chinese hardware vendors, it’s a giant company with turnover in the several billions of dollars range and a presence globally. It made its name in telecoms equipment and networking, but its Oceanstor storage products include storage switches and storage arrays that go from entry-level/mid-range unified (iSCSI and Fibre Channel) to enterprise storage in the tens of petabytes range. Huawei and software-defined storage maker Datacore inked a deal last year to bring hyper-converged appliances to market.

Hikvision – Mostly a vendor centred on video camera (CCTV, surveillance etc) technology but with a couple of unified storage (iSCSI and NAS) boxes in its product range. It’s quite remarkable that it gets into a storage top five on such a limited storage product family.

Inspur – This company has majored on servers and has seen Microsoft investment and agreements with VMware. On the storage front it appears to focus on Fibre Channel enterprise-class hardware with a three-model range with features including synchronous replication and with capacities that scale to petabytes.

Sugon – Another company with VMware connections (some investment flowing to the Chinese company last year) and a background in HPC/super-computers, it also majors in servers but has a storage product range that stretches to (under the Dawning brand) a single storage server (so probably NAS) that scales to petabytes.

Macrosan – The only storage specialist among this crop. Its products range from entry-level NAS and surveillance-oriented boxes to high-end, fully-featured arrays that can be equipped with flash. Although called Macrosan, none of the product specs on the website mention block storage protocols such as iSCSI or Fibre Channel as supported, though I guess this may be an oversight given the scale the products seem to go to.

Tongyou – Appears to be a brand name of Toyou Feiji. There appears to be no company website to showcase products, but Bloomberg says it, “researches, develops and applies data storage and protection, disaster recovery and related technologies. The Company’s main products include disk storage systems, storage management softwares, data storage solutions, and technical services.”

VMware’s Cross Cloud moves aim to replicate datacentre dominance in the cloud
Thu, 20 Oct 2016

As an attendee at VMworld events for about half a decade, I found this one provided the clearest message I’ve yet heard from VMware.

That’s because, in its explanation of its Cross Cloud strategy and announcement of a product set – Cross Cloud Services – lie its intentions to try to virtualise the hybrid cloud.

This will see the private datacentre and public cloud connected by a layer of abstraction that can allow on-premises VMware operations and those in the public clouds of cloud providers (with four on board for the mid-2017 launch) to be managed as one.

The reason that probably comes across as clearer and bolder than anything VMware has announced for a long time at the level of grand strategy is that it amounts to a restatement of its original raison d’etre, but at a more extensive level.

Back in the mists of time around the turn of the millennium VMware’s basic proposition was a new and revolutionary one. It offered server virtualisation, ie a layer of abstraction for hitherto siloed servers that could end poor rates of utilisation and difficult management in one fell swoop.

Now the company plans to help customers pour a layer of abstraction over the largely discrete environments of private datacentre and the public cloud.

What are we to make of it in broad terms?

Firstly, this is an astute response to the rise of the cloud that addresses a customer need. Managing on- and off-premises operations as if they are part of a single whole is clearly desirable.

Secondly, it is a recognition that while the cloud is an inexorably rising feature in our IT landscape it also seems quite clear that the limitations of the public cloud as they currently exist – the extent of customer trust and questions of availability/latency – mean the future is hybrid for a couple of decades at least (see VMware CEO Pat Gelsinger’s predictions at VMworld).

Thirdly, it is, leaving aside any question of intent, an extension of VMware’s tentacles into the cloud. Having made itself almost indispensable – other virtualisation hypervisors are available*, the rapidly-spoken reminder at the end of the advert might say – in the datacentre, the company is now set on making itself so in a world in which the IT department must look outwards towards the cloud.

We will have to wait and see if VMware can play the same role in hybrid cloud as it has done in the datacentre.

* See this Jan 2016 survey by Atlantis for a breakdown of hypervisor share.

VMware: Decades of hybrid cloud ahead
Thu, 20 Oct 2016

The tipping point at which public cloud operations attain a 50% share of IT workloads will come in 2030. Until then, and beyond, we face “decades of a hybrid [cloud] world”.

That’s the view expounded by VMware CEO Pat Gelsinger at VMworld this week in Barcelona. It’s a view that can be read as a pessimistic one. I’ll come back to why, but first here’s what he said.

Using VMware-derived figures, the VMware CEO said that in 2006 – the year Amazon Web Services launched – the proportion of traditional IT to cloud had been 98% to 2%. By 2011 the split between traditional IT, public cloud and private cloud had reached 87%/7%/16%. In 2016 that figure is 73%/15%/12%.

VMware estimates 50% of operations will be cloud-based – public and private – by 2021.

By 2030 the public cloud will achieve tipping point. Gelsinger said VMware calculations indicate a 19%/52%/29% split between traditional, public and private cloud in that year.

Gelsinger said we face, “the biggest transformation in the history of IT” as businesses embark on digital transformation.

That may be the case, but by those calculations it’s going to take almost a decade and a half to just about reach halfway.

So, another way of reading VMware’s vision as explained at VMworld is that the transformation to the cloud will actually be very long and drawn-out and that for the foreseeable future – probably two decades, even by VMware’s projections – the most mission-critical workloads, such as financial transactions, will not be ready for the off-prem cloud.

That is because customers just don’t trust the public cloud for their most vital assets – their data – and availability and/or latency cannot be guaranteed, especially over the last mile where matters might not be in the hands of an IT professional but those of force majeure, or even a JCB operator.

Nutanix’s IPO launches hyper-converged but can it escape the rules of the storage universe?
Thu, 06 Oct 2016

We’ve been tracking the rise of hyper-converged infrastructure recently, and one key measure that seems to confirm this as a rising star of the tech world is Nutanix’s IPO last week. Its shares were initially offered at $16 but these rocketed as soon as they hit the market and went as high as $44.46.

Well, if you think about it, it’s a relatively simple idea. You take server hardware and put it in the same box as some storage.

So far, so . . . well, like a “traditional” server really.

But there’s more, and here’s the key(s). #1 The server capability is virtualised, out of the box, so customers can get straight on to delivering apps via virtual machines. #2 The whole package comes in nodes that can be linked in scale-out fashion, so as capacity and performance needs increase you can add more.

Add all that together and you get a very customer-friendly package that’s easy to deploy and is pre-configured so its different elements work together without any inter-vendor head-knocking to keep IT chiefs awake at night.

Limitations? Well, currently hyper-converged seems to be pretty much an SME or branch/remote office play. Although clearly capable of supporting fairly demanding virtualised environments, it isn’t generally used as high end transactional storage.

Still, we should expect hyper-converged to make mincemeat of entry-level and some midrange storage going forward.

But where next for hyper-converged? More flash, more software-only options, greater ability for regular organisations (ie, not webscale giants) to build something more like hyper-scale environments, and a nudge towards high end transactional environments.

Therein, however, possibly lie the seeds of the current vendor-led variants’ challenges.

Hyperscale, which is arguably the inspiration for hyper-converged, was/is effectively a DIY phenomenon. At root all storage/compute has commodity kit at its heart and the tension is always there between the commodity vehicle and the software component that defines the product.

In that, hyper-converged cannot escape the laws of the storage universe that are slowly eroding the rule of the incumbent giants of the market.

Hyper-converged and all-flash shine against backdrop of declining storage revenues
Wed, 07 Sep 2016

Writing about the rise of hyper-converged infrastructure yesterday, and its likely impact on entry-level storage arrays, I based my blog-ument entirely on anecdotal evidence. That is, simply what I see coming across my desk as a storage editor in terms of product news and customer wins.

So, I decided to look back through this year’s IDC storage figures, and while I couldn’t find numbers to precisely back up my case, I did find strong evidence that hyper-converged is riding a real wave right now, among other things.

Highlights of IDC’s Q1 year-on-year figures (admittedly Q1 has its own peculiarities, but at least like is being compared with like) included:

* Sales of hyper-converged storage increased 148% year-on-year.
* Enterprise storage as a whole and storage array revenues are on the decline.
* That decline was exacerbated in the period reviewed because of the volatility of hardware sales into web hyperscale datacentres, according to IDC.
* Converged systems, including the likes of vBlocks and FlexPods, as well as hyper-converged are on the rise, with revenues up 11% over the year.
* All-flash is undergoing a meteoric rise, with market share up 87% on the year before.
* But hybrid flash – with 40% market share – dwarfs all-flash’s 14%.
* And still, flash storage revenues account for less than 50% of total storage, so there’s still a lot of disk getting shifted.

Here are the longer versions of that.

The enterprise storage market – including external storage systems and server-based (ie, hyperscale) – declined in terms of revenue and capacity year-on-year in the first quarter of 2016. Total revenues fell 7% from $8.828 billion to $8.211 billion. Total capacity shipped dropped by 4%.

When it comes to “traditional” arrays – ie, the external storage market – revenues fell from $5.631 billion to $5.423 billion (a 3.7% drop) between 1Q 2015 and the same period this year.
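The declines quoted above are straightforward year-on-year percentage changes, which can be sanity-checked from the revenue figures cited (in $bn):

```python
# Year-on-year percentage decline from the IDC revenue figures quoted above.

def yoy_decline(previous, current):
    """Percentage fall from the previous period to the current one."""
    return (previous - current) / previous * 100

total_enterprise = yoy_decline(8.828, 8.211)   # all enterprise storage
external_arrays  = yoy_decline(5.631, 5.423)   # external storage only

print(f"enterprise storage: -{total_enterprise:.1f}%")  # matches the ~7% quoted
print(f"external arrays:    -{external_arrays:.1f}%")   # matches the 3.7% quoted
```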

In terms of market share, the ranking runs EMC (25%), NetApp (12%), HPE and Hitachi Data Systems (9%–10%) tied in third, with Dell and IBM (7%–8%) tied after that.

That’s all going to change now, of course, with Dell-EMC in a seemingly unassailable position for now upon the close of that acquisition.

Why did overall enterprise storage systems revenue drop more than that of external arrays? That’s down to a big decline in sales by ODMs (original design manufacturers) that supply hardware to hyperscale datacentres, says IDC. That segment registered a 40% fall in revenue over the period.

IDC characterises the hyperscale market as “fluctuat[ing] heavily”. Presumably that’s a feature of the elastic, cloud-y nature of webscale operations, and a good thing for those customers, who drop hardware in and out as demand changes.

But what of all-flash and hybrid flash?

All-flash saw a meteoric increase in market share, up 87% on the previous year’s Q1, but makes up only 14% ($795 million) of the external storage market. Meanwhile hybrid flash revenues are 40% of the total.

The top five vendors in all-flash are different to those of external storage systems overall. EMC is top (31%) with NetApp second (23%). Third is Pure Storage (19%), with HPE (12.5%) and IBM (8.5%) fourth and fifth.

While overall storage systems and external arrays have seen declines, converged systems increased revenues by 11% to $2.5 billion year-on-year to the end of Q1. These include integrated systems from the same vendor and certified systems from different vendors, such as the VCE products (EMC, Cisco) and FlexPod (NetApp, Cisco).

Also included are hyper-converged infrastructure products, and these saw incredible sales growth over the year, up 148% year-on-year to $372 million. This was just under 15% of total converged products market value.

SME storage: Will hyper-converged eclipse the entry-level array?
Tue, 06 Sep 2016

Has the entry-level storage array had its day?

It seems like hyper-converged infrastructure is one of the strongest trends in storage right now, based on the admittedly unscientific basis of what comes across my desk. But there are lots of reasons to think that discrete servers and storage are simply too much of a faff compared to buying it all in one box.

That’s what hyper-converged offers; server and storage in one box, usually with the hypervisor built in and often in a scale-out architecture that can add nodes to create sizeable clusters with a single pool of storage.

For the SME that’s often an ideal proposition. It means that – without a dedicated IT team in-house – they can deploy servers and storage that are pretty much guaranteed to work together right out of the box, often with easy-to-use wizard setup interfaces. It also means there is only one throat to choke, with no possibility of server, storage and virtualisation vendors deflecting blame onto each other.

Now too, hyper-converged comes with flash options, so there is the possibility of performance-hungry workload support, at least to some extent.

All in all, it has to be said that any SME looking to refresh storage (and servers) right now needs to look at hyper-converged as an option.

That probably comes at a time when many small organisations have recently got onto shared storage and away from servers with storage in one box.

As the wave of virtualisation rolled across the world of IT, servers with direct-attached storage became inadequate to the task of running many virtual machines, with DAS an I/O bottleneck.

Now hyper-converged is something of a return to that, but with pooling of storage resources across many nodes that bottleneck should be bypassed.

But are there cases where a discrete shared storage array may still fit the bill, for example if high performance is an over-riding need?

All-flash is really best handled by a dedicated array, or is at least more available as such, although hyper-converged is increasingly available with solid state storage.

Also, organisations that envisage scaling up fairly extensively might favour a shared storage array where that array is part of a product family that can offer bigger, faster options in the same operating environment.

For some also, it might be simply too soon to jump to a newcomer technology.

So, clearly, the question now for SMEs is hyper-converged vs “traditional” servers plus array architecture. The drill-down question within that is what use cases dictate one or the other?

Primary Data and the latest incarnation of the “storage hypervisor”
Thu, 01 Sep 2016

At VMworld in Las Vegas this week Primary Data showcased its re-worked version of storage virtualisation. That is, in its case, a way of aggregating disparate storage resources from multiple protocols and making them available to applications as block, file or object storage and via VMware’s virtual volumes (VVOLs) performance policy tool.

In conversation, Primary Data’s founders pushed the Datasphere product as a “storage hypervisor” (not the first time this has been done) and one that can help IT departments save money by utilising idle disk space, by tiering between different classes of disk, and by reducing over-provisioning.

CEO Lance Smith’s first line of attack was against all-flash and its tendency to be used as a point solution for specific high-performance apps.

“The idea is that flash is super-fast,” he said. “For example, 10x to 40x faster than spinning disk. But often it is used in isolation to other storage or compute resources. So, users are forced to decide which files could benefit from flash such as index or log files. But, why not take all storage sets and make them available to applications?”

Also, Smith pushes Datasphere as a solution to the constant overspends on storage that IT departments engage in.

“IT habitually overspends,” said Smith, “because it over-provisions storage for applications. So, there’s a misalignment between compute and storage with dormant VMs and a cost for data that’s sat in silos. Organisations routinely buy 3x to 5x more storage than they need.”

He added: “The root cause is that IT lacks time to procure properly, with no more than three to six months for procurement and deployment. So, they over-provision and deal with the waste because there’s no risk that way.”

In its launch at VMworld 2016 Primary Data’s stated aim is to bring the effect of virtualisation to storage with automated archiving across all storage and a reduction in the cost of over-provisioning by 50%.

Datasphere has a metadata controller that provides a logical view of data separated from its physical location and allows customers to match storage type to application needs, with the cloud as a possible tier, and a policy engine to set provisioning to SLOs (storage level objectives) via VVOLs.

It sits out of band, so customers’ apps deal directly with storage resources, but only after the controller has defined volumes and policies and opened and closed transactions.

It can aggregate storage from any protocol (SAN, NAS etc) and provides access via block (SAN), file (NAS) and object (S3). It comes as a virtual appliance on a VM or as software installed on an x86 server.
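The out-of-band pattern described above can be sketched as follows: a metadata controller maps logical names to physical locations and picks a tier by policy, while the data path then goes straight to the chosen backend. The class, tier names and policy rule here are my illustration, not Primary Data's actual API.

```python
# Minimal sketch of an out-of-band metadata controller (illustrative only).
# The controller sits on the control path; once a volume is placed,
# application I/O bypasses it and talks to the backend directly.

TIERS = {"flash": [], "nearline": [], "cloud": []}  # stand-ins for real backends

class MetadataController:
    def __init__(self):
        self.catalog = {}  # logical name -> (tier, physical id)

    def place(self, name: str, latency_sensitive: bool) -> str:
        """Choose a tier from a (toy) service-level objective and record it."""
        tier = "flash" if latency_sensitive else "nearline"
        physical_id = f"{tier}-{len(TIERS[tier])}"
        self.catalog[name] = (tier, physical_id)
        TIERS[tier].append(name)
        return physical_id

    def locate(self, name: str):
        """Control-path lookup only; reads and writes then go direct."""
        return self.catalog[name]

mdc = MetadataController()
mdc.place("sql-log", latency_sensitive=True)       # hot data -> flash
mdc.place("archive-2015", latency_sensitive=False) # cold data -> nearline
print(mdc.locate("sql-log"))  # ('flash', 'flash-0')
```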

So, is Datasphere just another storage virtualisation product? Why not get IBM’s SAN Volume Controller hardware or Datacore’s software storage virtualisation product?

“SVC? We’re not hardware and SVC only provides SAN storage,” said Smith. “Datacore also only allows block access. Also, Datacore touches the data. We’re out of band.”

It’s time for all-flash says IBM, but IT chiefs won’t necessarily agree
Wed, 24 Aug 2016

“We’re going all-flash!” That was the message this week from IBM product marketing VP Eric Herzog. Though that’s not quite the truth, Herzog’s proclamations were pretty clear.

He said: “Regardless of workload for primary storage, all-flash is the answer. That even includes big data workloads.”

These bold statements came as we spoke about some Big Blue storage array product refreshes, namely the addition of specifically all-flash SKUs to the V7000 and V5000 Storwize boxes.

Previously, you could build all-flash iterations of these products but that involved, according to Herzog, buying many separate part numbers. Now IBM offers the V7000F and V5030F, with upgraded Broadwell processors.

Claiming there is now “price parity” between flash and (presumably high-end) spinning disk, and citing IDC figures that put the all-flash market at $6 billion annually, Herzog said he would “recommend not using anything but flash for any primary data”, and that it is now time to retire spinning disk arrays to lower tier operations such as backup and archive.

With this in mind, I decided to test the water with the next IT decision-maker I spoke to. This was John Thorpe, IT director at web hosts Millennia. I briefly outlined Herzog’s argument to him and asked when he would go all-flash.

He said: “When I need it. There are a lot of people saying you need it in your life. But, do you?”

“As long as your working set size is within your available SSD [in a hybrid flash setup] then everything happening on, for example, SQL Server will be fine. The only time you need all-flash is if you have a large number of SQL datasets that you need access to; in effect requiring random access.”

He added: “Far too many people see it as a panacea but it’s a pointless way of storing lots of data.”
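Thorpe's working-set point can be illustrated with a toy cache simulation: when the blocks an application actually touches fit inside the SSD tier of a hybrid array, nearly every access is a flash hit; when the working set overflows the SSD, the hit rate collapses. The sizes and uniform-random access pattern here are invented for the example.

```python
# Toy LRU simulation of a hybrid array's SSD tier (assumptions mine).
from collections import OrderedDict
import random

def hit_rate(working_set_blocks, cache_blocks, accesses=10_000, seed=1):
    """Fraction of accesses served from an LRU 'SSD cache' of given size."""
    random.seed(seed)
    cache = OrderedDict()  # block id -> present; ordered oldest-first
    hits = 0
    for _ in range(accesses):
        block = random.randrange(working_set_blocks)
        if block in cache:
            hits += 1
            cache.move_to_end(block)        # refresh recency
        else:
            cache[block] = True             # fetch from spinning disk
            if len(cache) > cache_blocks:
                cache.popitem(last=False)   # evict least-recently-used
    return hits / accesses

# Working set fits in SSD: only cold misses, so a high hit rate.
print(f"fits in SSD:   {hit_rate(1_000, 2_000):.0%}")
# Working set 5x the SSD: most accesses fall through to spinning disk.
print(f"overflows SSD: {hit_rate(10_000, 2_000):.0%}")
```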

You could argue of both of them that “they would say that, wouldn’t they?” One has an interest in selling storage arrays with all-flash on board, while the other has recently re-vamped his datacentres with a hybrid flash solution.

Nevertheless it’s an interesting snapshot of where vendors try to lead and what IT decision-makers might think of it.

Book review: SpectraLogic’s Society’s Genome
Wed, 10 Aug 2016

Written by SpectraLogic CEO Nathan Thompson with product specialist Bob Cone and software developer John Kranz, Society’s Genome – Genetic diversity in digital preservation makes a case for “genetic diversity” in data protection.

The premise of Society’s Genome is a weighty one. It is that the very survival of society as we know it is dependent on the preservation of its data; that data, and the information and knowledge we gain from it, is “society’s genome” and the “recipe” by which modernity is propagated.

For the author, data must be preserved against “far-reaching and complex” threats, and “[e]nsuring that this data can be retrieved by future generations is one of the greatest priorities of our time.”

To emphasise this, the author reminds us of the loss for several centuries of Roman scholar Ptolemy’s Geography and of the ancient Sanskrit Bakhshali Manuscript, a mathematical encyclopedia.

With these hints at former “dark ages” Thompson asks, “How much more dangerous would it be then to lose substantive geographic data in the age of GPS?”

The fundamentals of a strategy to avoid such a new dark age lie in ensuring a “genetic diversity” of data preservation, looking to the natural world for insights that can be applied to counter contemporary threats to data.

Citing examples that include the British royal family’s history of cousin marriage and resulting haemophilia as well as the genetic narrowness of the staple lumper potato that contributed to the Irish famine of the 1840s, the author goes on to argue that the evolution of business IT is also heading for a potential lack of “genetic” diversity.

Namely, that while data volumes grow exponentially, the locations and media that store data are becoming concentrated in fewer datacentres and on fewer media types.

“In a way,” says Thompson, “consolidation and standardisation have made enterprise disk drives a modern day lumper potato.”

The answer, in short, says SpectraLogic’s CEO, is to implement a diversity of data storage by using a variety of media; a combination of disk, tape and optical media.

Citing IDC figures that say 47% of data worldwide that requires security has none, the book goes on to assert, “The question is not if – but when – a data catastrophe will occur,” and that services vital to society are at stake, such as communications, energy generation, transport, defence, commerce and healthcare.

So, to sum up, the argument is: data and information are vital to human society, and a catastrophic loss of that data would therefore threaten our existence as we know it. Furthermore, nature shows that the methods it uses to ensure survival – genetic diversity – can be applied to data protection. Therefore, what is needed is a multiplicity of media types to ensure data protection.

The bulk of the book goes on to provide a very readable account of the increasing volume and importance of data via the internet of things, social media and mobile, the importance of data to the corporate world, and the increasingly ingenious ways data is being interrogated, as well as the threats faced, including natural disasters, hacking and cyber war.

When it comes to practicalities, Society’s Genome starts with some principles, namely diversity of protocol, of media, of volatility and of geography.

Diversity of protocol dictates that data copies should be held in different formats so that a threat to one is not necessarily a threat to other copies.

When talking of diversity of media Thompson says, “A complete copy of disk data stored on a secondary disk (preferably using a different storage protocol), tape, or optical media offers the best chance of data survival against malicious threats or human error.”

The book goes on to cite Google’s Raymond Blum explaining the web giant’s use of tape: “Tape is great because it is not disk,” and also favourably contrasts tape’s lifecycle (eight years) with that of disk (three).

Diversity of volatility appears to refer to the susceptibility of a medium to catastrophic power surges, such as in an electromagnetic pulse attack, and here again tape comes out well, as does optical media, owing to the in-built “air gap” between the media and any network that may carry a threat.

Finally, Thompson deals with geographic separation and concludes that while so-called tectonic separation may be necessary for some organisations, for others, perhaps smaller ones, the likelihood of not being able to carry on at all after major events rules out the need for world-scale redundancy of data stores.

So, how does it all stack up?

Well, it’s a well-written and readable book. It is well-researched and contains many chapters of interesting examples, not just from storage and IT but from a wide sweep of human history.

But at the same time one can’t help feeling that the whole endeavour is an exercise in “top of the funnel” marketing, and I doubt SpectraLogic execs would deny that.

So, in some senses one reads certain sections, especially those that trumpet the advantages of tape – albeit alongside other media – as a case of, “Well, they would say that, wouldn’t they?” given SpectraLogic is heavily invested in the tape storage market.

Having said that, there is frequent mention of the suitability of optical media as a viable long-term storage medium and as far as I know SpectraLogic has no direct investment in this space.

An obvious omission, however, from a book with ambitions to cover a wide sweep of history and act as a guide for the future is any discussion of next-generation storage technologies. Here I am thinking of holographic, DNA and quantum storage: potential game-changers that still await crucial breakthroughs in various fields.

The practical conclusions and technological focus of Society’s Genome, therefore, don’t look forward in this sense, and are firmly rooted in contemporary – or, one might argue, last-generation – technologies.