Category Archives: Storage Tiering

There was practically a firestorm when EMC announced ViPR, its own version of “software-defined storage”, at EMC World last week. Whether you want to call it Virtualization Platform Re-defined or Re-imagined, competitors such as NetApp, HDS and Nexenta have taken pot-shots at EMC while touting their own versions of software-defined storage.

The separation of ViPR’s Control Plane and Data Plane allows the abstraction of 2 main layers.

Layer 1 is the abstraction of the underlying storage hardware infrastructure. Although I don’t have the full details (EMC guys, please enlighten me!), I believe storage administrators no longer need to carve out LUNs from RAID groups or Storage Pools, stripe and slice them, and further provision them into meta file systems before they are exported or shared through NAS protocols. I am, of course, referring to the underlying provisioning architecture of Celerra, which can be quite complex. Anyone who has done manual provisioning with Celerra Manager should know what I mean.
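To make the idea concrete, here is a minimal sketch (in Python, with entirely hypothetical class and method names, not the actual ViPR API) of how a software-defined control plane could hide the LUN-carving and file system plumbing behind a single provisioning call:

```python
# Hypothetical sketch of a software-defined storage control plane (not ViPR code).
# The administrator expresses intent ("give me 500GB over NFS"); the control
# plane hides the LUN carving, striping and file system plumbing underneath.

class ArrayBackend:
    """Data plane device: an array that knows how to do its own plumbing."""
    def __init__(self, name, free_gb):
        self.name = name
        self.free_gb = free_gb

    def create_export(self, size_gb, protocol):
        # In a real array driver this would carve LUNs from RAID groups or
        # storage pools and build the file system before exporting it.
        self.free_gb -= size_gb
        return f"{protocol}://{self.name}/export_{size_gb}gb"


class VirtualStoragePool:
    """Control plane view: one abstract pool spanning heterogeneous arrays."""
    def __init__(self, backends):
        self.backends = backends

    def provision(self, size_gb, protocol="nfs"):
        for backend in self.backends:
            if backend.free_gb >= size_gb:
                return backend.create_export(size_gb, protocol)
        raise RuntimeError("No backend with enough free capacity")


pool = VirtualStoragePool([ArrayBackend("array-a", 2000), ArrayBackend("array-b", 800)])
print(pool.provision(500))   # nfs://array-a/export_500gb
```

The point is simply that the administrator states intent, while the array-specific steps move behind the abstraction.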


An interesting article came up in the news this week. The article, from the ever popular The Register, mentioned 3 up-and-coming storage stars, Nimble Storage, Tintri and Tegile, and their assault on a flash strategy “blind spot” of the big boys, notably EMC and NetApp.

I have known about Nimble Storage and Tintri for a couple of years now, and I did take some time to read up on their storage technology offerings. Tegile was new to me; it appeared on my radar after SearchStorage.com announced it as the Gold Winner of the enterprise storage category for 2012.

The Register article intrigued me because it implied that traditional storage vendors such as EMC and NetApp are probably doing a “band-aid” job when putting together their flash storage strategies. And typically, I see these strategic concepts introduced by these 2 vendors:

1. Have a server-side cache strategy by putting a PCIe card on the hosting server

2. Have a network-based all-flash caching area

3. Have a PCIe-based flash card on the storage system

4. Have solid state drives (SSDs) in its disk shelf enclosures

In (1), EMC has VFCache (the server-side caching software has been renamed XtremSW Cache and is being repackaged under the Xtrem brand name) and NetApp has its FlashAccel solution. Previously, as I was informed, FlashAccel was using the FusionIO ioTurbine solution, but just days ago NetApp expanded FlashAccel to cover the LSI Nytro WarpDrive as well. The main objective of a server-side caching strategy using flash is to accelerate mostly read-based I/O operations for specific application workloads at the server side.
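As a rough illustration of that read-acceleration objective, here is a hedged sketch of the read-through pattern a server-side flash cache generally follows; the names and the crude eviction logic are mine, not VFCache’s or FlashAccel’s:

```python
# Rough sketch of server-side read caching on a PCIe flash device.
# Purely illustrative -- not the design of VFCache, FlashAccel or any product.

class FlashReadCache:
    def __init__(self, backend_read, capacity_blocks):
        self.backend_read = backend_read      # function that reads a block from the array
        self.capacity = capacity_blocks
        self.cache = {}                       # block address -> data (stand-in for flash)

    def read(self, block_addr):
        if block_addr in self.cache:          # cache hit: served from local flash
            return self.cache[block_addr]
        data = self.backend_read(block_addr)  # cache miss: go across the SAN/NAS
        if len(self.cache) >= self.capacity:  # crude eviction; real products use LRU and more
            self.cache.pop(next(iter(self.cache)))
        self.cache[block_addr] = data
        return data

    def write(self, block_addr, data, backend_write):
        backend_write(block_addr, data)       # writes still go to the array (write-through)
        self.cache[block_addr] = data         # keep the cache warm for subsequent reads
```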


The worldwide storage market is going through unprecedented change as it takes baby steps out of one of the longest recessions in history. We are not exactly out of the woods yet, given the Eurozone crisis, slowing growth in China and the little sputters in the US economy.

Back in early 2012, Fujitsu showed good signs of taking market share in enterprise storage, but what happened to that? In the last 2 quarters, the storage market shares of the server boys, HP, IBM and Dell, have either shrunk (in the case of HP and Dell) or tanked (as in IBM). I would have expected Fujitsu to continue its impressive run and capture more of the enterprise market, and yet it didn’t. Why?

I was given an Eternus storage technology update by the Fujitsu Malaysia pre-sales team more than a year ago. Fujitsu has made some significant gains in technology such as Advanced Copy, Remote Copy, Thin Provisioning and Eco-Mode, but I was unimpressed. The technology features felt like those of a follower, since every other storage vendor in town already has them.


Happy Lunar New Year! This is the Year of the Water Snake, which just commenced 3 days ago.

I have always maintained that VMware has the power to become a storage killer. I mentioned that it was a silent storage killer in my blog post many moons ago.

And this week, VMware is not so silent anymore. Earlier this week, VMware acquired Virsto, a storage hypervisor technology company. News of the acquisition is plentiful on the web and can be found here and here. VMware is seriously pursuing its “Software-Defined Data Center (SDDC)” agenda, and having completed its software-defined networking component with the acquisition of Nicira back in July 2012, the acquisition of Virsto represents another bedrock component of SDDC: software-defined storage.

Who is Virsto and what do they do? Well, in a nutshell, they abstract the underlying storage architecture and present a single, global namespace for storage, a big storage pool for VM datastores. I got to know about their presence last year, when I was researching the topic of storage virtualization.

I looked at Datacore first, because I was familiar with Datacore. I got to know Roni Putra, Datacore’s CTO, through a mutual friend, when he was back in Malaysia. There was a sense of pride knowing that Roni is a Malaysian. That was back in 2004. But Datacore isn’t the only player in the game, because the market is teeming with folks like Tintri, Nutanix, IBM, HDS and many more. It just so happens that Virsto caught the eye of VMware as it embarks on its first high-profile step into the storage game (one where VMware literally steps on the toes of the Storage Big 6). The Big 6 are EMC, NetApp, IBM, HP, HDS and Dell (maybe I should include Fujitsu as well, since it has been taking market share of late).

Virsto installs as a VSA (virtual storage appliance) into ESXi, and in version 2.0, it plugs right in as an almost-native feature of ESXi, not a vCenter tab like most other storage plug-ins. It looks and feels very much like vSphere functionality, and this blurs the lines between storage and VM management. The vSphere administrator only needs to be involved in storage administration when he/she is provisioning storage or expanding it. Those are the only 2 common “touch-points” a vSphere administrator has with storage. This, therefore, simplifies the administration and management job.

Here’s a look at the Virsto Storage Hypervisor architecture (credits to Google Images):

What Virsto does, as I understand it at a high level, is take any commodity storage, provide a virtual storage layer on top, and consolidate it into a very large storage pool. The storage pool is called vSpace (previously known as LiveSpace?) and “allocates” Virsto vDisks to each VM. Each Virsto vDisk looks like a native zeroed thick VMDK, with the space efficiency of Linked Clones, but without the performance penalty of provisioning them. The Virsto vDisks are presented as NFS exports to each VM.

Another important component is the asynchronous write to Virsto vLogs. This is configured at the deployment stage, and it is basically a software-based write cache, quickly acknowledging all writes for write optimization and, in the background, asynchronously de-staging them to the vSpace. Obviously it has its own “secret sauce” to optimize the writes.
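My high-level understanding of that write path can be sketched roughly like this; note that this is only a toy log-and-destage pattern under my own assumptions, not Virsto’s actual implementation:

```python
# Hypothetical sketch of an asynchronous write log, loosely in the spirit of
# what I understand Virsto's vLog to do; this is NOT Virsto's implementation.

import queue
import threading

class WriteLog:
    def __init__(self, destage_fn):
        self.log = queue.Queue()              # stand-in for the fast log device
        self.destage_fn = destage_fn          # pushes data into the big capacity pool (vSpace)
        threading.Thread(target=self._destager, daemon=True).start()

    def write(self, vdisk_id, offset, data):
        self.log.put((vdisk_id, offset, data))  # append to the log ...
        return "ack"                            # ... and acknowledge the write immediately

    def _destager(self):
        while True:
            vdisk_id, offset, data = self.log.get()
            # Background thread lazily moves the data to the capacity pool.
            # The "secret sauce" (coalescing, ordering, sequential layout)
            # would live here in a real product.
            self.destage_fn(vdisk_id, offset, data)
            self.log.task_done()
```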

Within the vSpace, organized as disk clone groups internal to Virsto, storage-related features such as tiering, thin provisioning, cloning and snapshots are part and parcel of the offering. Other strong features of Virsto are its workflow wizard for storage provisioning, and its intuitive built-in performance and management console.

As with most technology acquisitions, the company will eventually come to a fork where they have to decide which way to go. VMware has experienced it before with its Nicira acquisition. It had to decide between VXLAN (an IETF draft popularized by Cisco) and Nicira’s own STT (Stateless Transport Tunneling). There is no clear winner, because choosing one over the other will have its rewards and losses.

Likewise, the Virsto acquisition will have to be packaged in a friendly manner by VMware. It does not want to step on all the toes of its storage Big 6 partners (yet). It still has to abide by some industry “co-opetition” game rules, but it has started the ball rolling.

And I see 2 critical disruptive points in this acquisition:

It has endorsed the software-defined storage/storage hypervisor/storage virtualization technology and started the commodity storage hardware wave. This could be the beginning of the end of proprietary storage hardware. This is also helped along by other factors such as the Open Compute Project from Facebook. Read my blog post here.

It is pushing VMware towards a monopoly ala Microsoft of yesteryear. But this time around, Microsoft Hyper-V could be the beneficiary of the VMware agenda. No wonder VMware needs to restructure and streamline its business. News of VMware laying off about 900 staff can be read here. The unfavourable news of its shares going down can be read here.

I am sure the Storage Big 6 are on the alert and are probably already building other technology and partnerships beyond VMware. It’s the natural thing to do, but there is no stopping VMware if it wants to step on the Big 6’s toes now!


There is a mini revolution going on, and Facebook is the main force driving it.

It is the Open Compute Project (OCP), and its mission is to redesign the modern-day data center and drive open hardware and architectural designs and specifications, including storage. The overall goals are to drive greater data center efficiency, flexibility, energy savings and cost effectiveness in a new class of “hyperscale” datacenters. Facebook, Google and Amazon are some examples of hyperscale datacenter operators, whose businesses rely on massive computing power, exponential storage performance and racks and racks of computing infrastructure to drive their web-computing or cloud-computing services.

Some of the cool technology innovations in mind include having systems that support any CPUs from any vendors, including Intel and AMD. We may even see both processor brands running on the same motherboard. The Open Common Slots component for processors is based on PCIe. Intel has pledged its Decathlete motherboard specification to OCP, and likewise AMD has contributed its Roadrunner motherboard series specification to the project as well. The ARM processor could also be supported in the near future under this “mix-and-match” OCP ideal.

Other proposed changes include OpenRack specifications, “sleds”, and of course, the Open Vault project for storage (aka “Knox”).


Back in 2000, before I joined NetApp, I bought one of my first storage technology books. It was “The Holy Grail of Data Storage Management” by Jon William Toigo. The book served me very well, because it opened my eyes to the storage networking and data management world.

I mean, I had been doing storage for 7 years before the year 2000, but as an implementation and support engineer. I installed my first storage arrays in 1993, the trusty but sometimes odd SPARCstorage Array 1000. These “antiques” were running 0.25Gbps Fibre Channel, and that nationwide bank project gave me my first taste and insights of SAN. Point-to-point, but nonetheless SAN.

Then at Sun from 1997-2000, I was implementing the old Storage Disk Packs with FastWide SCSI, moving on to the A5000 Photons (remember these guys?) and was trained on the A7000, Sun’s acquisition of Encore way back in the late nineties. Then there was “Purple”, the T300s which I believe came from the acquisition of MaxStrat.

The implementation and support experience was good, but my world opened up when I joined NetApp in mid-2000. And from Jon Toigo’s book, I learned one of the most important lessons that I have carried with me till this day: “Data storage management is 3x more expensive than the data storage equipment itself”. Given the complexity of the data today compared to the early 2000s, I would say that it is likely to be 4-5x more expensive.

And yet, I am still perplexed that many customers and prospects still cannot see the importance and the gravity of data storage management, and more precisely, data management itself.

A couple of months ago, I had the opportunity to work on an RFP for a project in Singapore. The customer had thousands of tapes storing digital media files, in addition to tens of TBs running on IBM N-series storage (translation: a NetApp FAS3xxx). They wanted to revamp their architecture, and invited several vendors in Singapore to propose. I was working for a friend, who is an EMC reseller. But when I saw that tapes figured heavily in their environment, and the other resellers were proposing EMC Isilon and NetApp C-Mode, I thought that these resellers were just trying to stuff a square peg into a round hole. They had not addressed the customer’s issues and problems at all, and were merely proposing storage for the sake of storage capacity. Sure, EMC Isilon is great for the media and entertainment business, but EMC Isilon is not the data management solution for this customer’s situation. Neither was NetApp with the C-Mode solution.

What the customer needed to solve was a data management solution, one that involved

Single namespace for video editors and programmers, regardless of online disk storage or archived tape storage

Transparent and automated storage tiering and addressing the value of the data to the storage media

A backup tier which kept a minimum 2 recent copies for file restoration in case of disasters

An archived tier which they could share with their counterparts in other regions

A transparent replication tier which would allow them to implement a simplified disaster recovery mechanism with their counterparts in Japan and China

And these were the key issues that needed to be addressed, not the usual scale-out and snapshot mechanisms. Those features are good for a primary, production storage infrastructure, but about 70-80% of this customer’s data and files were offline on tapes. I took the liberty of advising my friend to look into Quantum StorNext, because that solution could solve the business problem, NOT just solve it from an IT point of view.
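To illustrate the first requirement, the single namespace across disk and tape, here is a toy sketch of the idea; the catalog, paths and recall function are all hypothetical and are not StorNext code:

```python
# Toy sketch of the "single namespace" requirement: editors see one path tree,
# and the data manager resolves whether the file is on disk or on tape.
# Hypothetical names only -- this is not how StorNext is implemented.

CATALOG = {
    "/projects/show01/ep01.mxf": {"tier": "online",  "location": "nas01:/vol1/ep01.mxf"},
    "/projects/show01/ep02.mxf": {"tier": "archive", "location": "tape:LTO5:A00123"},
}

def recall_from_tape(tape_location):
    # Stage the file from tape back to a disk cache before serving it.
    print(f"recalling {tape_location} to disk cache ...")
    return "nas01:/recall_cache/" + tape_location.split(":")[-1]

def open_file(path):
    entry = CATALOG[path]
    if entry["tier"] == "archive":
        # Transparent recall: the editor never needs to know it was on tape.
        entry["location"] = recall_from_tape(entry["location"])
        entry["tier"] = "online"
    return entry["location"]

print(open_file("/projects/show01/ep02.mxf"))
```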


It is kind of interesting: every vendor out there claims to be as open as they can be, but the reality is that the competitive nature of the game is forcing storage vendors to talk open while their actions are certainly not.

Confused? I am beginning to see a trend … a trend that is forcing customers to be locked in with a certain storage vendor. I am beginning to feel that customers are given fewer choices, especially when the brand of server they select for their applications has implications on the brand of storage they will be locked into.

In the announcement, it was reported that HP is coming out with its 8th generation ProLiant servers. As quoted:

“The eighth generation ProLiant is turbo-charging its storage with a Smart Array containing solid state drives and Smart Caching.”

“It also includes two Smart storage items: the Smart Array controllers and Smart Caching, which both feature solid state storage to solve the disk I/O bottleneck problem, as well as Smart Data Services software to use this hardware”

From the outside, analysts are claiming this is a reaction to the recent EMC VFCache product (I blogged about it here), and HP was quick to paint the EMC VFCache solution as a first-generation product, lacking the smarts (pun intended) of what the HP products have to offer. You can read about its performance prowess in the HP Connect blog.

The idea is very obvious. Put a PCIe-based flash caching card in the server, and use a proprietary caching/tiering technology that ties the server to a certain brand of storage. Only with this card, which (incidentally) works only with this brand of servers, will you, Mr. Customer, be able to take advantage of the performance power of this brand of storage. Does that sound open to you?

HP is doing it with its ProLiant servers; Dell is doing it with its ExpressFlash; EMC’s VFCache, while not advocating any brand of servers, is doing it because VFCache works only with EMC storage. We have seen Oracle doing it with Oracle Exadata: the Oracle Enterprise database works best with Oracle’s own storage, and the intelligence is in its SmartScan layer, a proprietary technology that works exclusively with the storage layer in the Exadata. Hitachi Japan, with its Hitachi servers (yes, Hitachi servers that we rarely see in Malaysia), has had such a technology for the last 2 years. I wouldn’t be surprised if IBM and Fujitsu already have something in store (or probably I missed the announcement).

NetApp has been slow in the game, but we hope to see them coming out with their own server-based caching products soon. More pure-play storage vendors are already singing the tune of SSDs (though not necessarily server-based).

The trend is obvious too, because the messaging is almost always about storage performance.

Yes, I totally agree that storage (any storage) has a performance bottleneck, especially when it comes to IOPS, response time and throughput. And every storage vendor is claiming that SSDs, in one form or another, are the knight in shining armour, ready to rid the world of lousy storage performance. Well, SSDs are not the panacea for storage performance headaches, because while they solve some performance issues, they introduce new ones somewhere else.

But it is becoming an excuse to introduce storage vendor lock-in, and how have customers responded to this new “concept”? Things are fairly new right now, but I would always advise customers to find out and ask questions.

Cloud storage for no vendor lock-in? Going to the cloud also has cloud service provider lock-in as well, but that’s another story.


Dell made a huge splash 2 weeks ago in London at their inaugural Dell Storage Forum. They dubbed their storage and management lineup “Fluid Data Architecture”, offering customers the ability to quickly adapt and automate their business when it comes to storage networking and, more importantly, data management.

In the London show, they showcased several key innovations and product development. Here’s a list of their jewels:

Optimized object storage for Microsoft SharePoint with the DX6000 Object Storage Platform – the DX6000 is an OEM product from Caringo

Broader support for Dell Force10, PowerConnect and their partner Brocade

The technology from Ocarina Networks is fantastic, and I have always admired Ocarina. I have written about Ocarina in the past on my previous blog. But I was a bit perplexed why Dell chose to enter the secondary dedupe market with a backup dedupe appliance in the DR4000. They are already a latecomer to the secondary deduplication game, and I thought HP was already late with their StoreOnce.

They could have used Ocarina’s technology to trailblaze the primary deduplication market. In my previous blog, I mentioned that primary deduplication hasn’t really taken off in a big way, and Dell with the technology from Ocarina could set the standard and establish themselves as the leader of the primary deduplication market space. I was disappointed that they didn’t, not just yet.

The Compellent Storage Center 6.0 release was a major release and it, for better or for worse, coincided with the departure of Phil Soran, the founder and CEO of Compellent. Phil felt that he could let his baby go, and Dell is certainly making the best of what they can do with Compellent as their flagship data storage product.

The major release included 64-bit support for greater performance and scalability, and also brought in several key VMware integration technologies that other vendors already have.

Other storage-related releases (I am not going to talk about Force10 or their PowerConnect solutions here) included Dell offering 16Gbps Fibre Channel switches from Brocade and also their DX6000 Object Storage Platform optimized for Microsoft SharePoint.

I think it is fantastic that Dell is adapting and evolving into a business-oriented, enterprise solution provider, and their acquisitions in the past 3 years – EqualLogic, Exanet, Ocarina Networks, Force10 and Compellent – prove that Dell aims to take market share in the storage networking and data management market. They have key initiatives with CommVault, Symantec, VMware and Microsoft as well. And Michael Dell is becoming quite a celebrity lately, giving Dell the boost it needs to battle in this market.

But the question is, “Is their Fluid Data Architecture fluid enough?” If I were a customer, would I bite?

As a customer, I look for completeness in the total solution, and I cannot fault Dell for having most of the pieces in the solution stack. They have networking in their PowerConnect, Force10 and Brocade. They have SAN in both Compellent and EqualLogic but their unified storage story is still a bit lacking. That’s because we have not seen Dell’s NAS storage yet. Exanet was a scale-out NAS and we have seen little rah-rah about this product.

From a data management perspective, their data protection story gels well with the CommVault and Symantec partnerships, but I feel that Dell sales and SEs (at least in Malaysia) spend too much time touting the Compellent Automated Storage Tiering. I have spoken to folks who have listened to Dell guys’ pitches and it’s too one-dimensional. It’s always about storage tiering and little else about other Compellent technology.

At this point of time, the story that Dell sells here in Malaysia is still disjointed, but they are getting better. And eventually, the fluidity (pun intended ;-)) of their Fluid Data Architecture will soon improve.

How will Dell fare in 2012? They have taken a beating in the past 2 quarters of IDC’s storage market tracker, losing some percentage points in market share, but I think Dell will continue to tinker to get it right.


Nowadays, the capacity of hard disk drives (HDDs) is really big. 3TB is out and 4TB is on the horizon. What’s next?

For small-medium businesses in Malaysia, depending on their data requirements and applications, 3-10TB is pretty sufficient and with room to grow as well. Therefore, a 6TB requirement can be easily satisfied with 2 x 3TB HDDs.

If I were the customer, why would I buy a storage array, with the software licenses and other stuff that will not only increase my cost of equipment acquisition and data management, it will also increase the complexity of my IT infrastructure? I could just slot HDDs into my existing server, RAID it with RAID-0 (not a good idea but to save costs, most customers would do that) and I have a 6TB volume! It’s cheaper, easier to manage with Windows or Linux, and my system administrator doesn’t have to fuss about lack of storage experience.

And RAID isn’t really keeping up with the tremendous growth of HDD capacity either. In fact, RAID is at risk. RAID (especially RAID 5/6) just cannot continue to provide the LUN or volume reliability and data availability, because it takes too damn long to rebuild the volume after the failure of a disk.

Back in the days when HDDs were less than 500GB, RAID-5 would still hold up, but after passing the 1TB mark, RAID-6 became more prevalent. But now, that 1TB has ballooned to 3TB and RAID-6 is on shaky ground. What’s next? RAID-7? ZFS has RAID-Z3, triple parity, but come on, how many vendors have that? With triple parity or stronger RAID (is there one?), the price of the storage array is going to get too costly.
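Some back-of-the-envelope arithmetic shows why the rebuild problem gets worse with capacity. Assuming a sustained rebuild rate of roughly 50MB/s (an assumption on my part, and an optimistic one for a busy array), the numbers look like this:

```python
# Back-of-the-envelope RAID rebuild time: capacity divided by sustained rebuild
# rate. 50 MB/s is an assumed figure; a busy array often rebuilds far slower.

def rebuild_hours(capacity_tb, rebuild_mb_per_s=50):
    capacity_mb = capacity_tb * 1000 * 1000   # decimal TB -> MB
    return capacity_mb / rebuild_mb_per_s / 3600

for tb in (0.5, 1, 3, 4):
    print(f"{tb} TB drive: ~{rebuild_hours(tb):.1f} hours to rebuild")
# 0.5 TB ~ 2.8h, 1 TB ~ 5.6h, 3 TB ~ 16.7h, 4 TB ~ 22.2h
```

And all that time, the RAID group is running degraded and exposed to a second (or third) disk failure.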

Experts have been speaking about parity declustering, but that’s something only a few vendors have right now. Panasas, founded by one of the forefathers of RAID, Garth Gibson, comes to mind. In fact, Garth Gibson and Mark Holland of Carnegie Mellon University’s Parallel Data Lab (PDL) presented a paper about parity declustering more than 10 years ago.

Let’s get back to our storage fatty. Yes, our storage is getting fat, obese, rotund or whatever you want to call it. And storage vendors have been pushing a concept in hope that storage administrators and customers can take advantage of it. It is called Storage Optimization or Storage Efficiency.

Here are a few ways you can consider to put your storage on a diet.

Compression

Thin Provisioning

Deduplication

Storage Tiering

Tapes and SSDs

To me, compression has not taken the storage world by storm. But then again, there aren’t many vendors that tout compression as a feature for storage optimization. Most of them would rather push the darling of data reduction, data deduplication, as the main feature for saving more space. Theoretically, data deduplication makes more sense when the data is inactive and has a high occurrence of duplicated data. That is why secondary storage such as backup deduplication targets like Data Domain, HP StoreOnce and Quantum DXi can publish 20:1 ratios, and over time, that ratio can get even higher.
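For readers new to deduplication, here is a conceptual sketch of the fixed-block flavour of the idea; real products differ widely (variable-length chunking, inline versus post-process, stronger fingerprint handling), so treat this as illustration only:

```python
# Conceptual sketch of block-level deduplication: hash each chunk and store
# only the unique ones. The chunk size and collision handling are simplified.

import hashlib

def dedupe(data, chunk_size=4096):
    store = {}                      # fingerprint -> chunk (the unique blocks kept on disk)
    index = []                      # ordered fingerprints to reconstruct the stream
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        fp = hashlib.sha1(chunk).hexdigest()
        store.setdefault(fp, chunk)
        index.append(fp)
    return store, index

data = b"A" * 40960 + b"B" * 4096          # highly repetitive, like backup data
store, index = dedupe(data)
ratio = len(data) / sum(len(c) for c in store.values())
print(f"dedupe ratio ~ {ratio:.1f}:1")     # ~5.5:1 for this toy input
```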

NetApp has also been pushing its A-SIS data deduplication on primary storage. Yes, it helps with storage savings on primary, but when the need arises for higher data transfer rates and faster access to “manipulated” data (deduped or compressed), it is likely that compression is a better choice for primary, active data.

So who has compression? NetApp ONTAP 8.0.1 has compression now, and IBM has it in its Storwize V7000 (Storwize itself started out as a compression company). Read about IBM Storwize in my blog here. Dell has Ocarina Networks, which was recently unleashed. I am a big fan of Ocarina Networks and I wrote about the technology on my previous blog. EMC, during the Celerra days of DART, had compression, but I don’t hear much about it in their VNX. Compression is there, believe me, buried under loads of EMC marketing.

Thin Provisioning is now a must-have and standard feature of all storage vendors. What is Thin Provisioning? The diagram below shows you:

In the past, storage systems weren’t so intelligent. You asked for 10TB, you were given 10TB, and that 10TB was “deducted” from the storage capacity. That leads to wastage and storage inefficiencies. Today, Thin Provisioning will give you 10TB, but storage capacity is consumed only as it is being used. The capacity is not pre-allocated as in the past. Thin provisioning is a great diet pill for bloated storage projects.
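A quick sketch of the difference, with made-up numbers, looks like this:

```python
# Sketch of thin provisioning: a thick volume deducts its full size from the
# pool up front; a thin volume only consumes pool capacity as blocks are
# actually written. Numbers and names are invented for illustration.

class Pool:
    def __init__(self, capacity_gb):
        self.capacity_gb = capacity_gb
        self.used_gb = 0

class ThinVolume:
    def __init__(self, pool, advertised_gb):
        self.pool = pool
        self.advertised_gb = advertised_gb   # what the host sees (e.g. 10 TB)
        self.written_gb = 0

    def write(self, gb):
        if self.pool.used_gb + gb > self.pool.capacity_gb:
            raise RuntimeError("Pool exhausted -- thin provisioning needs monitoring!")
        self.pool.used_gb += gb              # capacity consumed only when written
        self.written_gb += gb

pool = Pool(capacity_gb=20000)
vol = ThinVolume(pool, advertised_gb=10000)  # host sees 10 TB
vol.write(500)                               # but only 500 GB is taken from the pool
print(pool.used_gb, "GB consumed of", pool.capacity_gb)
```

The flip side, of course, is that over-committed pools must be watched, or that RuntimeError becomes a very real outage.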

Another up-and-coming feature is storage tiering. Storage tiering, when associated with storage optimization, should include hierarchical storage management (HSM) and tape-out as well. Storage optimization solutions should not be offered only within the storage array itself. Storage tiering within the storage array is available from most vendors – IBM EasyTier, EMC FAST2, Dell Fluid Data Management and many others. But what about data being moved out of the storage array? What about reducing the capacity of the data online or near-line? Why not put it offline if there isn’t a need for it?
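A hypothetical policy that looks beyond the array could be as simple as demoting data by the age of its last access; the thresholds below are invented for illustration, and real HSM policies are far richer:

```python
# Hypothetical tiering/HSM policy: data that has not been touched for a while
# is demoted to near-line disk, and eventually out to tape. Thresholds are
# made up; no vendor's actual policy engine is represented here.

from datetime import datetime, timedelta

def pick_tier(last_accessed, now=None):
    now = now or datetime.now()
    age = now - last_accessed
    if age < timedelta(days=30):
        return "online (SSD / fast disk)"
    if age < timedelta(days=180):
        return "near-line (capacity disk)"
    return "offline (tape / archive)"

print(pick_tier(datetime.now() - timedelta(days=7)))     # online
print(pick_tier(datetime.now() - timedelta(days=90)))    # near-line
print(pick_tier(datetime.now() - timedelta(days=400)))   # offline
```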

I term this as Active Archiving, something I learned while I was at EMC. Here’s a look at EMC’s style of Active Archiving:

Active Archiving promotes the concept of data archiving and is not unique to EMC. Almost all storage vendors, either natively or with 3rd-party vendors, can perform fairly efficient data archiving in one way or another. One piece of software that I liked (and it is not unique!) is Quantum StorNext. Here’s a video of how Quantum StorNext helps reduce the fat of the storage.

With the single-copy sharing feature of Quantum StorNext across multiple disparate OSes, there are fewer duplicate files in storage as well.

Tapes have been getting a bad name in the past few years. They have been repositioned and repurposed as an archive medium rather than a backup medium. But tape is the greenest and most powerful storage diet pill around. And we should not discount tapes, because tapes are fighting back. Pretty soon you will be hearing about the Linear Tape File System (LTFS). In a nutshell, LTFS allows you to use tape almost as if it were a hard disk. You can drag and drop files from your server to the tape, see the list of saved files using a standard operating system directory (no backup software catalog needed), and use point-and-click to restore. How cool is that!
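Assuming an LTFS-formatted tape mounted at a hypothetical mount point, ordinary file operations would look something like this:

```python
# If an LTFS-formatted tape is mounted (the mount point and file name below are
# hypothetical), ordinary file operations work on it much like a very slow disk.

import os
import shutil

TAPE_MOUNT = "/mnt/ltfs_tape"                    # assumed LTFS mount point

shutil.copy("q3_media_report.pdf", TAPE_MOUNT)   # "drag and drop" a file onto tape
print(os.listdir(TAPE_MOUNT))                    # standard directory listing, no backup catalog
shutil.copy(os.path.join(TAPE_MOUNT, "q3_media_report.pdf"), "/tmp/")  # simple restore
```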

And Solid State Drives (SSDs) make sense as well.

There are times that we need IOPS, and using spinning drives, we have to set up many disk spindles to achieve the IOPS that we want. For example, take the diagram below from the godfather of storage, Greg Schulz:

The set of 16 spinning HDDs on the left can only deliver 3,520 IOPS. The problem is, we have wasted a lot of disk space, as seen in the diagram below. This design, which most customers would be accustomed to, may look cheaper, but in actual fact it is NOT.

If the price of a Fibre Channel HDD is RM2,000, a total of 16 would make up RM32,000. That is not inclusive of additional power, cooling and rack space, and also the data management costs. Assume the SSDs cost 5 times more than the Fibre Channel HDDs. SSDs are capable of delivering very high IOPS; here I am putting a modest 5,000 IOPS per SSD. With just 2 SSDs (as the design on the right suggests), the total cost is only RM20,000. It has greater performance headroom to grow, and also savings in data management, power and cooling.
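Spelling out the same sums (the IOPS per drive and the prices are the assumed figures from this example, not quoted list prices):

```python
# Back-of-the-envelope comparison of the two designs above. All figures are the
# assumptions used in the example: ~220 IOPS per FC HDD, 5,000 IOPS per SSD,
# and an SSD priced at 5x the HDD.

hdd_iops, hdd_price_rm = 220, 2000
ssd_iops, ssd_price_rm = 5000, 5 * hdd_price_rm

hdd_count, ssd_count = 16, 2
print("HDD design:", hdd_count * hdd_iops, "IOPS for RM", hdd_count * hdd_price_rm)
print("SSD design:", ssd_count * ssd_iops, "IOPS for RM", ssd_count * ssd_price_rm)
# HDD design: 3520 IOPS for RM 32000
# SSD design: 10000 IOPS for RM 20000
```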

Folks, consider SSDs as part of your storage diet plan.

All these features are available, in whole or in part, and they are part of the storage technology offerings that are out there. With all that being said, are you doing something about it? Get off your lazy bum and start managing your storage, and put your storage on a diet!!!


I wanted to sign off early tonight, but an article in ComputerWorld caught my tired eyes. It was titled “EMC to put hardware into servers, VMs into storage” and after I read it, I couldn’t help but juxtapose the article with what I said earlier in my blogs, here and here.

It is very interesting to note that “EMC runs vSphere directly on the storage controllers and then uses vMotion to migrate VMs from application servers onto the storage array, ..” since the storage boxes have enough compute power to run Virtual Machines on the storage. Traditionally, and as widely accepted, VMs should be running on servers. Contrary to that belief, EMC has already demonstrated this capability of running VMs on their VNX, Isilon and Symmetrix.

And soon, with EMC’s Project Lightning (announced at EMC World in May 2011), they will be introducing server-side PCIe-based SSDs, ala Fusion-io. This is different from the NetApp PAM/FlashCache PCIe-based card, which sits in their arrays, not in hosts or servers. And it is also very interesting to note that this EMC server-side PCIe flash SSD card will become a bridge to EMC’s FAST (Fully Automated Storage Tiering) architecture, enabling it to place hot, warm and cold data strategically on different storage tiers for the applications on VMware’s VMs (now on either the server or the storage), perhaps using vMotion as a data mover on top of the “specialized” link created by the server-side EMC PCIe card.

This also blurs the line between servers and storage and creates a virtual architecture between them, because what used to be the distinct data boundary of the servers is now being melded into the EMC storage array, virtually.

2 red alerts are flagging in my brain right now.

The “bridge” has just linked the server back to the storage, after years of talking about networked storage. The server is ONE again with the storage. Doesn’t that look to you like a server with plenty of storage? It has come full circle. But more interesting, and what I am eager to see, is what more this “bridge” is capable of when it comes to data management. vMotion might be the first of many new “protocol” breeds to enhance data management and mobility across this “bridge”. I am salivating right now at this massive potential.

What else can EMC do with the VMware API? The capability I am writing about right now is made possible by EMC tweaking VMware’s API to do much, much more. As the VMware vStorage API is continually being enhanced, the potential is, again, very massive and could change the entire landscape of cloud computing and subsequently the entire IT landscape. This is another Pavlov’s dog moment (see figures below as part of my satirical joke on myself).

Sorry, the diagram below is not related to what my blog entry is about. Just my way of describing myself right now. 😉

I am extremely impressed with what EMC is doing. A lot of smarts and thinking go into this and this is definitely signs of things to come. The server and the storage are “merging again”. Think of it as Borg assimilation in Star Trek.