Data Center Advisors
http://blogs.hds.com/hdsblog


Modernize IT or Fail
Ravi Chalaka | Tue, 13 May 2014 | http://blogs.hds.com/hdsblog/2014/05/modernize-it-or-fail.html

IT can be a weakness or a strength for the business it serves. Almost every industry is witnessing dramatic changes, and IT has a critical role to play in the business’ growth and, in some cases, even its survival. A European retailer, after 40 years in business, modernized its IT to enable real-time decisions and fast execution, increasing profits and eliminating waste. Leading telecommunication companies are finding new revenue streams by expanding IT-as-a-service offerings to their customers. New IT models with cloud and converged systems are challenging the status quo and rewarding astute CIOs, while leaving behind the ones that are unprepared.

So how do CIOs separate themselves as winners, bringing solutions that meet their company goals in rapidly changing environments? In the past, lowering cost simply meant using commodity hardware, and agility meant deploying new software capabilities – some with high licensing costs – while still depending on expensive solutions to maintain high availability, performance and scale. Integration and optimization were either left to IT professionals or to professional services that many cannot afford.

Hitachi took a different strategy for its converged solutions: it integrates enterprise-class systems and software, optimized for specific or general-purpose workloads, with end-to-end infrastructure orchestration and automation software. These Intel Xeon x86-based open systems not only deliver agility, automation and availability previously comparable only to mainframes or high-end UNIX systems, they also eliminate complexity and cut cost, enabling businesses to compete better and grow faster. Gartner’s recent Critical Capabilities report for High End Storage Arrays rated Hitachi Virtual Storage Platform (VSP) as number one in enterprise-class storage across all five use cases – consolidation, virtualization, cloud, OLTP and analytics environments. Hitachi recently announced a new enterprise platform, Hitachi VSP G1000, which greatly extends this market lead with next-generation capabilities based on Storage Virtualization Operating System. Optimizing the best of server, networking and storage is the real secret to running mission-critical workloads while lowering capex and opex. For instance, a highly optimized converged solution can deliver IOPS and latency comparable to the highest-performance competition for mission-critical database environments, using about half the number of CPU cores, less space, power, cooling and software licenses, and lowering cost by 30% to 50% compared with other high-performance solutions.

This new generation enables fast deployment of the always-on cloud infrastructure HDS just announced, supporting its customers for the next decade or more. The refreshed line of converged systems, called Hitachi Unified Compute Platform (UCP), provides state-of-the-art automation software for servers, network and storage all in one, with servers using Intel’s latest Ivy Bridge Xeon x86 processors and storage systems running Hitachi’s new Storage Virtualization Operating System (SVOS) software.

Customers are finding, to their amazement, that enterprise-class converged systems can cost less than commodity products, thanks to advanced capabilities that drive down system and management cost by a large factor. HDS is essentially rewriting the rules of the game by offering high-end features that drive down cost without sacrificing always-on, automated and agile solutions.

A New Jersey-based service provider offering mission-critical SAP application services to its business clients is rapidly growing by providing competitive cloud-based workloads. Hitachi UCP enables this provider to focus on growing its business and its expertise in SAP apps rather than being slowed down by legacy technologies that are complex to scale and manage, not to mention expensive to acquire and maintain. Businesses are making important decisions to modernize their IT with the right solutions and grow faster in a business-defined IT era.

World’s Most Ethical Company
http://blogs.hds.com/hdsblog/2014/03/worlds-most-ethical-company.html

“We are what we repeatedly do. Excellence, therefore, is not an act, but a habit.” ― Aristotle

Last night, in New York City, several of the world’s top companies, including Microsoft, Intel, Google, Adobe, Ford Motor Company and Cisco, were recognized as the World’s Most Ethical Companies. Receiving the recognition right along with them was Hitachi Data Systems! If you have a good memory, you’ll remember that I announced the same result for HDS last year. And if your memory is really good (and you’re keeping score), this is the 4th year in a row HDS has received this award.

On any given day, take out a pair of scissors and a copy of your favorite newspaper. I realize you probably get your news online, but play along. Start by cutting out the articles about companies embroiled in investigations. Snip out the blurbs about questionable business dealings, graft and corruption. Remove the articles about companies being fined and by-lines about executives being indicted for illegal activities. Now close the newspaper. Chances are you’ll be able to read straight through to the sports section without leaving the front page (or the stock prices if you are reading the Wall Street Journal or Financial Times)!

The point is, in the current legal and regulatory environment it seems far too easy for multi-national companies to become the subject of negative publicity. It seems far more difficult for companies to act consistently and in a manner that displays adherence to ethics and values. To maintain a program and act in a manner that garners accolades for ethical behavior establishes HDS as a trusted partner and distinguishes us from our competition in a meaningful way.

Being awarded World’s Most Ethical Company for the fourth straight year displays excellence and is an award that is shared among all HDS colleagues. It’s becoming a habit of which you should be proud!

HIMSS 2014 – We’ll Be There!
Dave Wilson | Thu, 20 Feb 2014 | http://blogs.hds.com/hdsblog/2014/02/himss-2014-well-be-there.html

It’s that time of year again when we gather up our demo systems and head for somewhere warm and sunny to escape the brutal cold of winter. This year we head to Orlando, FL for some excitement with Mickey and, of course, the Health Information and Management Systems Society (HIMSS) annual meeting. A common theme we have seen over this past year is that healthcare is moving in a direction that demands better patient care. Did you know that since the last conference over 1 exabyte of patient-related data has been generated? But what have we done with it? Or the better question is…what CAN we do with it? This of course leads to what we can expect to see at HIMSS.

BIG DATA. It is certain to be featured in everyone’s booths, in everyone’s lectures and on everyone’s minds. This year, big data has gotten bigger! Healthcare generates a wide variety of data types, and with new devices, technologies and mission-critical systems all producing digital data, the velocity of that data is increasing. Add to that social media, wearable devices and the Internet in general, and the amount of data that can be utilized to make evidence-based decisions increases dramatically. In the HDS booth, we will be demonstrating the Hitachi Clinical Repository, which starts at the root of your big data problem by overcoming interoperability issues, normalizing the data and enabling data aggregation.

Key to big data is timely access to the data, and this starts at the infrastructure layer. Hitachi Clinical Repository is a Vendor Neutral Archive that consolidates medical images and clinical content so that they can be shared across departments, facilities and regions. A completely integrated hardware, software and workflow solution, HCR ingests the medical content, indexes the associated metadata and then gives viewers, portals, analytics engines and business intelligence applications access to standardized data. Add the enterprise viewer from our partner Calgary Scientific, and you have a fully functional VNA. HIMSS will be rife with VNA offerings, and HDS and HMSA will be demonstrating how to get the most out of your VNA – whether as an add-on to Hitachi Medical’s modality business or as a PACS archive replacement – beyond just medical imaging.

For the second year in a row, HDS is excited to have our sister company Hitachi Consulting join us in our booth. HCC will be demonstrating how they can help deliver on the promises of big data with a clinical focus. Chronic disease is a huge cost to the healthcare system and being able to manage patients more effectively throughout their life is a big part of how we can reduce those costs. HCC will show you how to identify those patients at risk and how to take preventative measures before a particular disease gets out of hand.

Building on the big data theme, Hitachi will be hosting a series of speaking sessions, including Elmar Flamme, CIO at Klinikum Wels, Austria. Mr. Flamme will be discussing the work HDS has done with Simularity around risk management and the ability to identify patients at risk. We have been working on a proof of concept and will be showing it at HIMSS, so everyone can see how getting access to the right data and applying a predictive analytics suite like Simularity can help lower patient risk and improve potential patient outcomes.

Also included in the speaking sessions, HDS has invited a number of our partners to share their experiences and best practices that every provider should be aware of when it comes to healthcare IT. Stop by the booth and see Calgary Scientific, Karos Health, Informatica and Visbion discuss important topics ranging from enterprise viewing requirements to application retirement. In addition, HDS has launched the Hitachi HLS Fitbit project. Join our Fitbit Group and track your sleep, your steps per day and get healthy with us.

And finally, what would HIMSS be without some fun? Hitachi will be hosting a cell phone charging bar to recharge your phone and a coffee bar for you to get your caffeine fix and stay awake during that exciting ICD-10 transition lecture. Don’t forget to stop by as the day winds down for the Brocade Happy Hour on Monday and Tuesday, where you can taste some exotic beverages and learn how our partner Brocade works with HDS to deliver key infrastructure benefits that improve the performance and reliability of your mission-critical systems.

I hope everyone enjoys #HIMSS14 and I look forward to seeing you in Florida! If you can’t make it to the show, follow the action on Twitter at @hds_healthcare, #HIMSS14HDS.

Thanks,

David

Hitachi Content Platform Object Store Earns Top Marks from Gartner
Jeff Lundberg | Mon, 17 Feb 2014 | http://blogs.hds.com/hdsblog/2014/02/hitachi-content-platform-object-store-earns-top-marks-from-gartner.html

As you may have read in my colleague Steve Garone’s blog post in our Hitachi Data Systems Community site, the industry is abuzz with object storage reports, articles and events such as the upcoming Next Gen Object Storage Summit, for which HDS is a sponsor. This week, yet another analyst has weighed in on this topic, and according to Gartner’s new Critical Capabilities for Object Storage report, by Arun Chandrasekaran and Alan Dayley:

I couldn’t agree more. Nor would I disagree with Hitachi Content Platform (HCP) being top rated in overall score, which was assessed across archiving, cloud storage and content distribution, or their recognition that HCP offers the most robust security of any object store they reviewed. (No surprise, I’m sure…)

We are thrilled that Gartner sees the value of the HCP object storage platform, and that they recognize there is more to the story than the object store itself. While the platform excels on its own merits, the real beauty of the Hitachi Data Systems solution is the integrated portfolio of HCP, Hitachi Content Platform Anywhere and Hitachi Data Ingestor. This trio of integrated offerings delivers simplified file sharing for remote and branch office employees, and instant access and easy collaboration for mobile and corporate users, all while supporting a growing number of cloud and traditional applications. It is this integration that makes Hitachi Data Systems’ object storage solutions so compelling for our 800+ customers around the world and inspires the deployment of roughly 150PB of storage behind their HCPs. The security, simplicity and smarts of the HCP portfolio mean they did not have to rewrite existing applications, spend hundreds of hours and piles of money to make it work, or force users to learn new tools and processes, resulting in reduced operational costs and improved productivity.

And Gartner is not alone in recognizing this. Enrico Signoretti at Juku writes: “At the end of the day [the Hitachi Content Platform] is the only product on the market that can offer a complete set of end-to-end solutions from the object store to the gateways, all directly supported by a primary storage vendor.” Also, a recent 451 Research piece noted customers are “Looking at object-based storage in remote offices – the idea you can have a caching mechanism without a huge footprint is very appealing” and that “HDS’ object-based storage is something we’re also interested in.” Even Verizon is convinced that the HCP Portfolio is the right solution for their Enterprise Cloud offerings.

So, if you are thinking about object storage like just about everyone else these days, be sure to give HCP strong consideration. We believe you’ll see what those 800+ customers know to be true.

Technical Memes in 2014
Michael Hay | Fri, 15 Nov 2013 | http://blogs.hds.com/hdsblog/2013/11/technical-memes-in-2014.html

It is getting to be that time of the year again: time to make predictions, at various risk levels, about what is going to happen next year. For 2014, my colleague Hu and I are going to split up the task of creating a list of technical trends and themes. Our message will span a couple of cross-linked blog posts, with Hu aiming at things we should see come to pass soon while I focus on memes that are likely to take hold.

With that said, let’s dig in!

1. EMERGING EXA-SCALE ERA

I’ve talked about this before in my Financial Forum series. In that post I used the Square Kilometer Array to motivate what kinds of sweeping changes are going to be required to achieve an exa-scale architecture. Since then, there have been bets against such an architecture emerging by 2020, as well as several active groups and organizations deliberately planning for such a platform. I’m pretty sure that in 2014 we will see heightened debates on the possibility of such a platform arising on, before, or after 2020. So my tangible prediction is that the keyword “exa-scale” will become hotly debated in 2014.

2. THE BI-MODAL ARCHITECTURE

Let’s face it: as fast as our LANs, MANs and WANs are, they are still roughly an order of magnitude slower than the internal fabrics and busses of the storage and compute platforms; compute and storage fabrics/busses are measured in gigabytes per second, while networks are measured in gigabits per second. What to do? What we’re starting to see emerge is richer storage control and low-latency access within the server. Today this is acting as a cache, but tomorrow who knows… I referenced this in the Bi-modal IT Architectures discussion on the Global Account and Global Systems Integrator Customer Community. For completeness, I’ve pulled in the diagram from that discussion to illustrate that a key driving force in the change is the shift in software architectures. The diagram suggests a kind of symbiotic relationship between an evolving software stack and the hardware stack. My expectation for 2014 is that we will see one or more systems that implement this kind of architecture, though the name may be different (I endeavor to cite references to them throughout the year).
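To make the caching role concrete, here is a minimal, purely illustrative sketch – not any Hitachi implementation, and with invented class and function names – of a small, fast local tier fronting a slower store that sits on the other side of the network:

from collections import OrderedDict

class LocalReadCache:
    """Toy read-through cache: a small, fast local tier (think server-side flash)
    fronting a slower store reached over the network."""

    def __init__(self, backend_read, capacity=1024):
        self._backend_read = backend_read   # callable: block_id -> bytes (the slow path)
        self._capacity = capacity           # number of blocks kept in the local tier
        self._blocks = OrderedDict()        # block_id -> bytes, kept in LRU order

    def read(self, block_id):
        if block_id in self._blocks:                 # hit: served at local-bus speed
            self._blocks.move_to_end(block_id)
            return self._blocks[block_id]
        data = self._backend_read(block_id)          # miss: cross the (slower) network
        self._blocks[block_id] = data
        if len(self._blocks) > self._capacity:
            self._blocks.popitem(last=False)         # evict the least recently used block
        return data

if __name__ == "__main__":
    cache = LocalReadCache(backend_read=lambda b: b"payload-%d" % b, capacity=2)
    print(cache.read(1))   # first read goes to the backend
    print(cache.read(1))   # second read is served from the local tier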

3. PROCESS, ANALYZE AND DECIDE AT THE SOURCE

The Hadoop crowd has half the story right. The other half is that to support the Internet of Things, where “the data center is everywhere” (thank you, Sara Gardner, for this quote) and low-bandwidth, unreliable WAN pipes are the norm, moving the data from edge or device to core isn’t really feasible. Further, many EDW and Hadoop platforms in practice move data significantly over the network today. For example, I’ve talked to several customers who pull data out of an RDBMS, process it on Hadoop, and push it back to another RDBMS to connect to BI tools. This seems to violate one of the basic tenets of the Hadoop movement: bring the application to the data. Therefore it is necessary to augment data sources with intelligent software platforms that are capable of making decisions in real time, analyzing with low latencies, and winnowing and cleansing the data streams that are potentially moved back to the core. Note that in some cases the movement back to the core is done by acquiring a “black box” and literally shipping data on removable media to a central depot for uploading. This suggests that perhaps a sparse, curated information model may be more relevant for general analysis and processing than raw data. I digress. For 2014, I predict we will begin to see platforms emerge that start to solve this problem, along with an increased level of discussion in the tech markets. We have been calling this “smart ingestion” because it assumes that instead of dumb raw data there is some massaging of the data, where the user gains benefit from both the “massage” and the outcome.
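As a purely illustrative sketch of the “smart ingestion” idea – not a description of any Hitachi product – the snippet below winnows a batch of raw sensor readings at the edge and forwards only a small rollup plus the unusual readings to the core; all names and thresholds are invented for the example:

import json
import statistics

def smart_ingest(readings, threshold=1.5):
    """Winnow a batch of raw readings at the edge: keep a small statistical rollup
    plus only the readings that deviate strongly from the local mean."""
    values = [r["value"] for r in readings]
    mean = statistics.mean(values)
    stdev = statistics.pstdev(values) or 1.0
    summary = {"count": len(values), "mean": mean, "stdev": stdev}   # tiny local rollup
    anomalies = [r for r in readings if abs(r["value"] - mean) > threshold * stdev]
    return summary, anomalies     # ship these to the core instead of the raw stream

if __name__ == "__main__":
    raw = [{"sensor": "s1", "value": v} for v in (20.1, 20.3, 19.8, 55.0, 20.2)]
    summary, anomalies = smart_ingest(raw)
    print(json.dumps({"summary": summary, "forwarded": anomalies}, indent=2))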

4. THE PROGRAMMABLE INFRASTRUCTURE

Wait. Did we just cross some Star trek like time barrier and go back to the era of the systems programmer? Is the DevOps practitioner really a re-visitation of a past era where the mainframe ruled the world? Likely not, but in the spirit of everything old being new again perhaps there is a sliver of truth here. To me, a key center point for the Software Defined Data Center (SDDC) is programmatic control of at least compute, network and storage. In effect, what application developers are really asking for is the ability to allocate, directly from their applications, these elements and more to meet their upstream customer requirements. Today the leading movement in the area of the Software Defined Data Center is the OpenStack initiative and community that surrounds it. We’re definitely far from complete control of the IT infrastructure from application developers, but I think that we are surely on that trajectory. A key aspect behind programmatic control is a reduction of the complexities and choices that application developers can select from and a fundamental reality that almost everything will be containerized in virtual infrastructure of some kind. By giving these things up, DevOps proficient developers will be able to quickly commission, decommission and control necessary ICT elements. In fact, I know of at least one customer that has had an application development team realize exactly this fact. What has happened is that the application team was being very prescriptive to the IT organization while at the same time authoring much of their next generation application stack on a public cloud. At some point, two things occurred: the cloud service could not meet their requirements and engineering realized they traded complete flexibility for speed to market and they liked it. The result was that the IT organization used OpenStack to build a private cloud so they could host engineering’s new application. This is a great “happily ever after” moment and I think hints at things to come. My prediction here is that we will begin to see OpenStack-friendly private cloud infrastructures for sale within the coming year. Since this is the most direct prediction I’m keeping my fingers crossed.
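For a flavor of what “programmatic control” looks like to a developer, here is a minimal sketch using today’s openstacksdk Python client; the cloud, image, flavor and network names are placeholders for whatever the private cloud actually exposes, not references to any specific environment:

# Assumes a clouds.yaml entry exists for the private cloud being targeted.
import openstack

conn = openstack.connect(cloud="engineering-private-cloud")

image = conn.compute.find_image("ubuntu-22.04")
flavor = conn.compute.find_flavor("m1.medium")
network = conn.network.find_network("app-tier-net")

# Allocate a compute instance directly from application/deployment code.
server = conn.compute.create_server(
    name="analytics-worker-01",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)
server = conn.compute.wait_for_server(server)   # block until the instance is ACTIVE
print(server.name, server.status)

# Decommissioning is just as programmatic:
# conn.compute.delete_server(server, ignore_missing=True)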

5. MEDIA WITH GUARANTEED RELIABILITY

As we’ve talked to customers contemplating exa-scale systems, we’ve found they are reconsidering everything in the stack, including the media. For a subset of these users tape and disk won’t cut it, and they are in fact looking towards optical media, of all things. Their constraints around power consumption, floor space and extreme media durability, coupled with specific requirements to guarantee data preservation – in some cases for 50 years or more without loss – mean that existing approaches won’t do. As it turns out, there could be a perfect storm for optical, notably with the maturation of standards, media density roadmaps, customer need, and emerging capacity in the supply chain; I argue that for specific markets optical is poised to make a comeback. Therefore both HDS and Hitachi are opening the dialogue through activities like the 2013 IEEE Mass Storage Conference or Ken Wood‘s post on US Library of Congress Update: Designing Storage Architectures for Digital Collections. We aren’t the only ones paying attention. Companies like M-Disc, for example, are pushing forward a thought pattern of really long-term media. They articulate this argument well on their website:

“M-Discs utilize a proprietary rock-like inorganic material that engraves data instead of using reflectivity on organic materials that will eventually break apart and decompose with time. Furthermore, did you know that M-Disc technology is already being adopted worldwide by all major drive manufacturers, and that the M-Disc Blu-ray is read/write compatible with all current Blu-ray RW drives? While it is important to note that gold has a lower reflectivity than silver, even silver discs are still made of organic materials that may begin to lose data after only 2 years. See: Archival DVD Comparison Summary.”

As to a prediction… I think in this case we’ll begin to see the re-invention of the optical industry starting in 2014 with a focus moving from the consumer towards the enterprise. It wouldn’t surprise me if you even see the careful introduction of an offering or two.

Hitachi Proud To Be Elected a Gold Member of the OpenStack Community
Greg Knieriemen | Mon, 04 Nov 2013 | http://blogs.hds.com/hdsblog/2013/11/hitachi-proud-to-be-elected-a-gold-member-of-the-openstack-community.html

We are pleased to share the news today that Hitachi has been elected to Gold Membership of the OpenStack Foundation at the Hong Kong summit. By becoming a gold member, Hitachi joins other industry vendors to help “achieve the OpenStack Foundation Mission by Protecting, Empowering and Promoting OpenStack software and the community around it, including users, developers, and the entire ecosystem.” Our election to the OpenStack community as a gold member is an extension of our philosophy of helping IT develop a truly Software Defined Data Center that is open and flexible.

Our Gold membership is a part of our accelerated participation in the OpenStack community. In June, Hitachi joined the Red Hat OpenStack Cloud Infrastructure Partner Network, and in August we announced our formal participation as a corporate sponsor. Our subsequent code contribution is now included in the latest Havana release.

Hitachi has a long and productive history of supporting the open source community. Hitachi created an industry-leading operating environment for Linux applications, and as a member of the Linux Foundation (formerly the Open Source Development Labs), we provide the development environment for open source projects focused on making Linux enterprise-ready, as well as participating in performance and reliability assessments and tool development for Linux. Our contributions to open source software include Linux kernel tracing infrastructure (SystemTap), Disk Allocation Viewer for Linux (DAVL), Branch Tracer for Linux (btrax), and others. In addition, Hitachi has done extensive work with Red Hat to provide hardware-level virtualization technology on LPARs for KVM stacks.

Hitachi is now building on its commitment to open source through its involvement with OpenStack. Hitachi provides open cloud architectures that give IT more control over security and governance, the ability to leverage existing IT assets, and interoperability with a variety of infrastructure components, applications and devices to create “best of breed” clouds while avoiding vendor lock-in. In addition to OpenStack, Hitachi also provides private cloud delivery solutions based on VMware and Microsoft.

Hitachi private clouds are built on a secure, fully consolidated and virtualized foundation – a single platform for all workloads, data and content. IT departments choose private cloud infrastructures to control and protect their data assets, but in many cases they have an eye toward hybrid clouds so they can take advantage of the flexibility and elasticity of public clouds, depending on the workloads and application requirements. For IT, openness is in part represented by their ability to leverage a variety of cloud delivery models.

This openness is also reflected in the degree to which customers can reach out beyond their data centers to leverage portals, management and open source frameworks, infrastructure components (provided by Hitachi as well as third parties), and other cloud environments to safely and securely create best-of-breed functionality for their stakeholders. We accomplish this by providing APIs, widely used and standards-based interfaces (such as Amazon S3 and REST), as well as other access methods and protocols.
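As a small, hedged illustration of what “standards-based interfaces such as Amazon S3” means in practice, an application can talk to an S3-compatible object store with an off-the-shelf client simply by pointing it at a different endpoint; the endpoint URL, credentials and bucket name below are placeholders invented for the example, not real HDS values:

import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://tenant1.hcp.example.com",   # hypothetical S3-compatible endpoint
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

# Write an object through the same API the application would use against Amazon S3.
s3.put_object(Bucket="app-archive", Key="reports/2013-11-04.json",
              Body=b'{"status": "ok"}')

# Read it back; switching object stores is a matter of changing the endpoint,
# not rewriting the application.
obj = s3.get_object(Bucket="app-archive", Key="reports/2013-11-04.json")
print(obj["Body"].read())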

Our support and involvement with OpenStack is another example of the company’s goal to provide IT with private cloud solutions that are more virtualized, more secure and more open. Our reputation as a vendor of robust cloud-enabled platforms and our large global install base will help further establish OpenStack as a viable private cloud framework.

My Goodness!! Won’t that Mainframe Thing Just Go Away!!
Claus Mikkelsen | Thu, 31 Oct 2013 | http://blogs.hds.com/hdsblog/2013/10/my-goodness-wont-that-mainframe-thing-just-go-away.html

It’s been almost 30 years since its demise was predicted and the words “the mainframe is dead” echoed throughout datacenters around the world. About the same time people were scrambling to remove words like “MVS” and “mainframe” from their resumes. Outside of their datacenters and the semi-annual SHARE conference, it was seldom discussed. The “gray-hairs” ultimately retired and the skill set declined, so it would have to die. Right? Well, not quite. This is not a beast to be starved.

Like Arnold Schwarzenegger in “The Terminator”, the mainframe is not about to die. The gray-haired crowd (or no-haired crowd) has been replaced by tattoos, dreadlocks, and body piercings. Attend a SHARE conference and you’ll see what I mean. But aside from the visual change, there is certainly a mindset saying “the mainframe ain’t going anywhere anytime soon”.

What prompted this blog was an annual survey by BMC Software showing that the mainframe is, in fact, back – assuming it actually went somewhere. Or as Schwarzenegger (again) put it: “I’ll be back”.

This survey has received a fair amount of coverage. Rob Enderle of IT Business Edge gave a great review of the survey (in a level of detail I won’t repeat here, thank you, Rob).

So where does that leave us? The mainframe never did die. It merely receded into the murky background while we all jumped into the more challenging topics relating to Open Systems. And we’re still jumping into these challenges today. And we’ll still be jumping next year. In the meantime, HDS and Hitachi have never forgotten this platform with its blazing performance, top notch security, scalability, ultra reliability, and what many are now concluding is a price competitive OS.

And what exactly has HDS done? For starters, we remain fully IBM compatible. We’ve taken it a step further by adding HDP (Dynamic Provisioning) for the performance boost it gives to critical applications such as DB2, IMS, and CICS VSAM. Since IBM is pushing big data analytics on the mainframe, this is key. You need the IOPS that only HDP can provide. The HDP architecture is unique to HDS.

Additionally, we now have HDT (Dynamic Tiering) on the mainframe. On Open Systems we’ve calculated that over 80% of the data we find on Tier 1 disk doesn’t need to be there, and I believe that statistic applies to the mainframe as well. Less expensive media, much better environmentals, and less floor space. Let the storage system move 80% of your data to tier 2 disk. And to make matters even better, we’ve integrated HDT with SMS at the Storage Group level.

So although the mainframe has been hidden from public view for 3 decades, rest assured that we have never given up on it.

I just thought I’d bring that up.

How to reduce backup storage requirements by more than 90% without data deduplication
Rich Vining | Mon, 14 Oct 2013 | http://blogs.hds.com/hdsblog/2013/10/how-to-reduce-backup-storage-requirements-by-more-than-90-without-data-deduplication.html

One of the most egregious causes of the copy data problem for many organizations is the common practice of performing full backups every weekend. The architecture of the backup application forces this practice, as it requires a recent full backup data set as the basis for efficient recovery. But each full backup set contains mostly the same files that were backed up in the previous full backup, and the one before that.

Below is a simple example to illustrate this, showing the differences between the common “full + differential”, “full + incremental” and “incremental-forever” backup models. First, the basic definitions of these models.

Full + differential: copies any new or changed data since the last full backup; a periodic full backup helps to keep the size of the differential set from growing out of control.

Full + incremental: copies the new or changed data since the last full or incremental backup; a periodic full backup helps to keep the number of incremental sets from growing out of control.

Incremental-forever: starts with a full backup, then copies only the new and changed data during each backup; it never performs another full backup.

The differential backups require more storage and will copy the same files multiple times during the week, but they offer the benefit of faster, more reliable recoveries, since you need to restore only the last full backup set and then the last differential set (a 2-step recovery process). However, the size of the differential backup will increase each day until a new full backup is completed. Doing differentials forever would eventually be the same as doing full backups every day.

In comparison, the full + incremental method uses a little less storage, and the daily backups will transfer less data, but recovery can be complicated by needing to restore multiple incremental data sets, in the correct order.

The incremental-forever backup solutions on the market are able to track each new file within its repository and present a full data view from any point-in-time within the retention period. This enables a one-step recovery process, which is faster and less error prone than the other models. And of course, this method eliminates the periodic full backups.
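To make the recovery differences concrete, here is a small illustrative sketch (my own, not from any product) that lists which backup sets a restore would need under each model; the day count is arbitrary:

def restore_chain(model, days_since_full):
    """Return the ordered list of backup sets needed to recover the most recent state."""
    if model == "full + differential":
        # Two-step recovery: the last full, then only the most recent differential.
        return ["full", "differential (day %d)" % days_since_full]
    if model == "full + incremental":
        # The last full plus every incremental since, applied in the correct order.
        return ["full"] + ["incremental (day %d)" % d for d in range(1, days_since_full + 1)]
    if model == "incremental-forever":
        # The repository presents a point-in-time view, so recovery is one step.
        return ["synthesized full view (day %d)" % days_since_full]
    raise ValueError("unknown model: %s" % model)

for m in ("full + differential", "full + incremental", "incremental-forever"):
    print(m, "->", restore_chain(m, days_since_full=4))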

Better backup, better recovery

For this example, let’s assume we have a normal business, school or agency that operates 5 days per week, 50 weeks per year. They have 100TB of data, and a total data change rate of 50% per year (50TB). This equates to 1% per week (1TB), and 0.2% per day (200GB). They retain their backups for 12 weeks for operational recovery, assuming that data that needs to be retained longer is archived.

The full + differential model copies 200GB on Monday, 400GB on Tuesday, through to 1TB on Friday, and then copies the full 100TB during the weekend. The full + incremental and the incremental-forever models each copy 200GB per weekday, but the full + incremental copies the full 100TB on the weekend while the incremental-forever system takes the weekend off.

Including the initial full backup (100TB), the total backup storage capacity needed for 12 weeks for each model is:

Full + differential: 1,336 TB (1.3 PB)

Full + incremental: 1,312 TB (1.3PB)

Incremental-forever: 112TB (0.1PB)

That’s a 91% reduction in capacity requirements without spending any money or compromising system performance on data deduplication. How much does 1.2PB of backup storage cost to acquire, manage and maintain? Actually, it’s 2.4PB of extra storage, since we’ll want to replicate the backup repository for off-site disaster recovery. If the backup data is retained for longer than 3 months, then these savings will be increased even more.
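The capacity figures above follow directly from the stated assumptions (100TB of data, 200GB of change per weekday, 12 weeks retained); a few lines of Python reproduce the arithmetic:

WEEKS = 12          # retention period
DAILY_TB = 0.2      # 200GB of changed data per weekday
FULL_TB = 100       # size of one full backup

# Full + differential: Mon-Fri differentials grow 0.2, 0.4 ... 1.0 TB, plus a weekend full.
full_diff = FULL_TB + WEEKS * (sum(DAILY_TB * day for day in range(1, 6)) + FULL_TB)

# Full + incremental: five 0.2TB incrementals per week, plus a weekend full.
full_incr = FULL_TB + WEEKS * (5 * DAILY_TB + FULL_TB)

# Incremental-forever: one initial full, then only the 1TB of weekly change.
incr_forever = FULL_TB + WEEKS * 5 * DAILY_TB

print("full + differential : %6.0f TB" % full_diff)      # 1,336 TB
print("full + incremental  : %6.0f TB" % full_incr)       # 1,312 TB
print("incremental-forever : %6.0f TB" % incr_forever)    #   112 TB
print("reduction vs. full + incremental: %.0f%%" % (100 * (1 - incr_forever / full_incr)))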

Continuous vs. scheduled incremental-forever

As with all choices in technologies, there are some trade-offs involved when selecting an incremental-forever backup model. The classic, scheduled approach to incremental backup used by most data protection applications requires the scanning of the file system being protected to determine which files have changed since the last backup. If the system contains millions or even billions of files, that scan can take many hours, consuming most of the available backup window. Copying the actual data to be backed up takes relatively little time.

This scanning time can be completely eliminated by using a continuous data protection (CDP) approach, which captures each new file, or block of data, as it is written to the disk. There are only a few solutions on the market, including Hitachi Data Instance Manager, that combine the benefits of incremental-forever and continuous data protection.

The CDP model will require a little more storage than the scheduled incremental model, since it will be capturing multiple versions of some files during the day as they are edited, but that’s a good thing. And the storage required will still be far less than the solutions that require full backups.
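As a conceptual sketch only – this is not how Hitachi Data Instance Manager is implemented – the idea of capturing changes as they are written, rather than scanning for them, can be illustrated with a file-system event handler; it assumes the third-party Python watchdog package and invented paths:

import shutil
import time
from pathlib import Path
from watchdog.observers import Observer
from watchdog.events import FileSystemEventHandler

REPO = Path("/backup/repository")        # hypothetical repository for captured versions

class ContinuousCapture(FileSystemEventHandler):
    def on_modified(self, event):
        if event.is_directory:
            return
        src = Path(event.src_path)
        # Keep every version by copying the changed file under a timestamped name.
        shutil.copy2(src, REPO / ("%s.%d" % (src.name, int(time.time()))))

if __name__ == "__main__":
    REPO.mkdir(parents=True, exist_ok=True)
    observer = Observer()
    observer.schedule(ContinuousCapture(), path="/data/protected", recursive=True)
    observer.start()                     # change events now arrive as writes occur
    try:
        while True:
            time.sleep(1)
    except KeyboardInterrupt:
        observer.stop()
    observer.join()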

To learn more about how HDIM can reduce the costs and complexity of protecting your data, watch this video [link TBD], or to request a demo send a note to hdim-sales@hds.com.

Hitachi Performance Leadership, The Teddy Roosevelt Way
Bob Madaio | Thu, 10 Oct 2013 | http://blogs.hds.com/hdsblog/2013/10/hitachi-performance-leadership-the-teddy-roosevelt-way.html

Of the many things US President Theodore Roosevelt is known for, one certainly is the quote “Speak softly and carry a big stick; you will go far” – and it made me think that Teddy would have made a great HDSer.

You see, while other vendors are loudly proclaiming performance leadership after upgrading systems that were sorely in need of a lift (yes, I’m looking at you, EMC), Hitachi continues down our path of consistently providing our customers the most innovative hardware architectures around.

The benchmarked configurations are nothing too extravagant from a hardware perspective: 2- and 4-node HNAS clusters with 32 or 64 flash modules. That simplicity is exactly why the results are all that much more exciting.

Our HUS VM 2-node HNAS 4100 system delivered 298,648 operations/second with an overall response time of 0.59 milliseconds. While both numbers are astounding for the amount of hardware deployed, note that the 0.59 milliseconds overall response time was the lowest reported on this benchmark. Ever.

Our HUS VM 4-node HNAS system delivered a whopping 607,647 operations/second with an overall response time of 0.89 milliseconds. Again, amazing results. But the throughput numbers start to get so large that it might help to understand them by looking for a relevant comparison.
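Before turning to comparisons, a quick bit of arithmetic on the two published results quoted above (using only the numbers in this post) shows how throughput scales with cluster size:

results = {
    "HUS VM + 2-node HNAS 4100": (298648, 2),   # (operations/second, file-serving nodes)
    "HUS VM + 4-node HNAS 4100": (607647, 4),
}
for name, (ops, nodes) in results.items():
    print("%s: %d ops/sec total, ~%d ops/sec per node" % (name, ops, ops // nodes))
# The 4-node result is slightly more than double the 2-node result, so throughput
# scales essentially linearly as HNAS nodes are added.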

The most timely, and arguably most relevant, comparison might be the recently and LOUDLY announced VNX8000 (result here)… king of the recent “VNX2” launch. It certainly showed up to this benchmarking match ready to rumble, as it had twice as many “X-blades” installed (eight) to drive NAS traffic as did our 4-Node HNAS configuration and five hundred and forty four (yes, 544) SSD drives compared to our seemingly impossibly outgunned 64 Hitachi Accelerated Flash Modules.

Despite the should-be insurmountable hardware advantage for the brand-spanking-new EMC system, it actually drove 5% FEWER NFS operations per second than our significantly more efficient HUS VM configuration, with both systems providing sub-millisecond overall response times.

Granted, in the VNX architecture one X-blade needs to sit idle, waiting for an issue to arise before providing value, and EMC does not have advanced enterprise flash capacity like our Accelerated Flash. But you’d expect it to be able to beat out a system with half the installed file nodes and less than 1/8 the number of flash devices, wouldn’t you?

With that comparison helping set context, the next logical one might be to NAS market “leader” NetApp. The most relevant system that NetApp has published results for is the FAS6240, in a wide variety of cluster sizes. In all fairness, it becomes hard to make a logical comparison between the systems, because HNAS per-node performance is more than 2x that of a FAS6240 node, and NetApp’s flash strategy seems to lag its benchmarking, so only eight FlashCache cards are leveraged in NetApp’s benchmark.

Thus, the closest comparison is probably an 8-node FAS6240 cluster (results here), but despite having twice as many file-serving nodes and 576 power-hungry disk drives, it still provides 18% fewer operations per second and is unable to provide a sub-millisecond overall response time.

Of course, benchmarks are not the real world, though industry-trusted ones like those at SPEC.org do their best to maintain a useful level of openness and vendor comparison for end users. That ability to compare is important, because NAS solutions for such things as large-scale VMware deployments and Oracle databases (among other use cases) continue to gain significant traction and demand extreme performance. Customers, however, do not want to simply throw massive amounts of filers, disk drives and SSDs at every problem, and they are realizing that the system architecture does, in fact, matter.

So we are rightly proud of our architectural advantages that allow customers to deploy more efficient solutions and get more predictable performance. Yes, the market is awash with hardware providers whose design point is much more about developing to the lowest possible cost, while putting the onus on customers to deploy more hardware to make up for architectural and design limitations.

We choose another path. We choose to develop better hardware to provide our customers with highly functional systems that provide predictable (and yes, best-in-class) performance in the most efficient way possible. Some might say that’s a harder path, and maybe they are right. But results like these, and more importantly the continued success of our customers, have us convinced it’s a better path.

To learn more about our architectural differences that enabled this success, here are some links for the technically inclined among you:

So, while other vendors speak loudly, launch loudly and deploy over-sized solutions, we’ll walk softly and provide the best technology we can. I, for one, think Teddy would be proud.

Software Defined Software
Claus Mikkelsen | Tue, 01 Oct 2013 | http://blogs.hds.com/hdsblog/2013/10/software-defined-software.html

And NO, I’m not trying to be sarcastic; this is a seriously titled blog. But I must explain. Hang in there for a paragraph or two. It gets better. It is “Software Defined Software”. Trust me.

Firstly, yes (for those of you who know me), I am a serious cynic. It comes from (many too many) decades of observing hype, bubbles, buzzwords, trends, cure-it-all startups, hype masters, marketing wizards, and promises and predictions that the storage industry, as we knew it, would be transformed forever. Tomorrow this widget will be 100 times faster, be 1/100th the cost, and be 100 times more reliable than what you have today. Did I mention that it also cures world hunger?

So here’s the latest: Software Defined Storage. What is Software Defined Storage? We have all read about it. EMC has announced it. VMware has announced it. And even HDS has concluded that it has Software Defined Storage. But in our case, we really do, and this gets to the heart of this explanation. So here’s my diatribe on why Software Defined Storage is really Software Defined Software.

Not that many years ago, I was the guy (or one of the guys) designing and architecting the storage subsystems of the day. We had rather simple terminology for things. Simply put, “software” was an instruction set and some programs that ran on the server. And what “drove” the hardware (in this case storage) was called microcode (or ucode for you engineering types). I think ucode might be called “firmware” these days. My iPhone has “firmware” to control its hardware, and software to run the 327 silly apps that I need.

But then, sometime in the mid-1990s, a company called EMC decided to start calling their microcode “software”. Why? Because it looked better on the books for the Wall Street guys. Yes, it was that simple. Software suddenly became a whole lot sexier than microcode. I hated the sudden change in terminology, but at the time EMC was the 900-pound gorilla, and who was I to argue. Although, I did!

Fast forward (I think that’s a VCR term and therefore inappropriate, but visually effective) to today. Software Defined Storage is all the rage. Every vendor is on the bandwagon, including HDS. But there is one very significant difference, and it cannot be emphasized enough. Software is microcode. Microcode is software. HDS has invested two decades in this “software” to implement copy-on-write (COW), synchronous replication, asynchronous replication, copy-after-write (Thin Image), cloning, 3-data-center replication, 4-data-center replication, archiving, and more functionality than you could ever imagine.

So, can I at least go back to being the cynic that I am and interpret “Software Defined Storage” as what it really is, which is “Software Defined Software”? We must now conclude that software is microcode and microcode is software (thank you, EMC, and your “Wall Street handlers” at the time). And since Hitachi/HDS has the best hardware functionality – that is, in fact, the best microcode in the business – then (trash the hype) why are we not the best at “Software Defined Software”?

I not only think we are, I know we are. Define software the way you want, but as this ex-architect guy, I’m convinced I’m in the right place.