Tag Archives: Cloud


(Excerpt from original post on the Taneja Group News Blog)

What’s a Cloud Converged system? It’s really what we naive people thought hybrid storage was all about all along, yet until now no high-performance, enterprise-class storage ever actually delivered it. But now Oracle’s latest ZFS Storage Appliance, the ZS5, comes natively integrated with Oracle Cloud storage. What does that mean? On-premises ZS5 storage object pools now extend organically into Oracle Cloud storage (which is itself built on ZS storage) – no gateway or third-party software required.

Oracle has essentially brought enterprise hybrid cloud storage to market, no integration required. I’m not really surprised that Oracle has been able to roll this out, but I am a little surprised that they are leading the market in this area.

Why hasn’t Dell EMC come up with a straightforward hybrid cloud leveraging their enterprise storage and cloud solutions? Despite having all the parts, they have failed to actually produce the long-desired converged solution – maybe due to internal competition between infrastructure and cloud divisions? Well, guess what: customers want to buy hybrid storage, not bundles of parts and disparate services that could be integrated (not to mention wondering who supports the resulting stack of stuff).

Some companies are so married to their legacy solutions that, like NetApp for example, they don’t even offer their own cloud services – maybe they were hoping this cloud thing would just blow over? Maybe all those public cloud providers would stick with Web 2.0 apps and wouldn’t compete for enterprise GB dollars?

(Microsoft does have StorSimple, which arguably pioneered on-premises storage integrated with cloud tiering to Azure. However, StorSimple is not a high-performance, enterprise-class solution capable of handling petabytes with massive memory-accelerated performance. And it appears that Microsoft is no longer driving direct sales of StorSimple, positioning it now as just one of many on-ramps to herd SMEs fully into Azure.)

We’ve reported on the Oracle ZFS Storage Appliance itself before. It has been highly augmented over the years. The Oracle ZFS Storage Appliance is a great filer on its own, competing favorably on price and performance with all the major NAS vendors. And it provides extra value with all the Oracle Database co-engineering poured into it. And now that it’s inherently cloud enabled, we think for some folks it’s likely the last NAS they will ever need to invest in (if you want more performance, you will likely move to in-memory solutions, and if you want more capacity – well, that’s what the cloud is for!).

Oracle’s Public Cloud is made up of – actually built out of – Oracle ZFS Storage Appliances. That means the same storage is running on the customer’s premises as in the public cloud they are connected with. Not only does this eliminate a whole raft of potential issues, but solving any problems that might arise is going to be much simpler (and problems are less likely to happen in the first place, given the scale at which Oracle deploys its own hardware).

Compare this to NetApp’s offering to run a virtual image of NetApp storage in a public cloud that only layers up complexity and potential failure points. We don’t see many taking the risk of running or migrating production data into that kind of storage. Their NPS co-located private cloud storage is perhaps a better offering, but the customer still owns and operates all the storage – there is really no public cloud storage benefit like elasticity or utility pricing.

Other public clouds and on-prem storage can certainly be linked with products like Attunity CloudBeam, or additional cloud gateways or replication solutions. But these complications are exactly what Oracle’s new offering does away with.

There is certainly a core vendor alignment of on-premises Oracle storage with an Oracle Cloud subscription, and no room for cross-cloud brokering at this point. But a ZFS Storage Appliance presents no more technical lock-in than any other NAS (other than the claim that it is more performant at less cost, especially for key workloads that run Oracle Database), nor does Oracle Cloud restrict the client to just Oracle on-premises storage.

And if you are buying into the Oracle ZFS family, you will probably find that the co-engineering benefits with Oracle Database (and Oracle Cloud) make the whole set that much more attractive, technically and financially. I haven’t done recent pricing in this area, but I think we’d find that while there may be cheaper cloud storage prices per vanilla GB out there, looking at the full TCO for an enterprise GB, hybrid features and agility could bring Oracle Cloud Converged Storage to the top of the list.

An IT industry analyst article published by SearchCloudComputing.

Cloud computing has evolved quite a bit in the last few years, but it still has far to go. Technologies such as big data, containers and IoT will have a big part to play in the future.

Mike Matchett

Yes, it’s a brand new year and time to make some Next Big Thing predictions for the year to come. This year, our outline of what’s on the immediate horizon is already well known: hybrid cloud adoption, big data applications and containers. Looking a little further out at enterprise IT trends, we might see the first practical persistent storage-class memory begin to disrupt 30 years of traditionally structured data center infrastructure. And expect a hot smoking internet of things mess of requirements to land in the lap of IT folks everywhere.

All of these topics are, of course, highly interrelated. In fact, it wouldn’t surprise me to find that many organizations will have to bite the bullet on all five at the same time to handle a new internet of things (IoT) data processing application. But let’s take a quick look at each:

Cloud adoption. I am as guilty as the next pundit in predicting when cloud adoption will finally be considered a “traditional” deployment model. But this time I really mean it! VMware is demonstrating cross-cloud products. Microsoft is making real hay rolling traditional businesses, large and small, into software as a service, like Office 365, and infrastructure as a service, like Azure. And all our favorite storage vendors are realizing that building in a cloud tier won’t shrink on-premises storage needs, given the growth in data and hybrid technologies that balance and marry the best benefits of both cloud and on-premises processing.

Big data. Hadoop is a decade old now. With newer generation platforms like Apache Spark making it easier to deploy and consume big data interactively for SQL-friendly business analysis, real-time operations, machine learning and even graph-based applications, it’s time for us all to get on board this train. As I’ve said, all data can grow up into big data someday. One of the top enterprise IT trends we’ve noticed is less concern about what big data is and more focus on getting maximum value out of all that data. In fact, I predict that data access — or data paucity — will become a new corporate key performance indicator in the future.

Containers. Having predicted the fast rise of containers last year, I claim some victory here against naysayers. Containers have won even if they aren’t in production everywhere yet. Yes, there are some major issues yet to be resolved for the regular, not quite DevOps, IT organization. Many apps will never transition to containers — just like how we will have mainframe applications and VM-based appliances hanging around for decades — but open the hood of every modern application, appliance, cloud or software-defined infrastructure, and you’ll likely find containers. In fact, most of the newest enterprise IT trends covered above – especially cloud and big data — are internally powered by container-based development and deployment.

(Excerpt from original post on the Taneja Group News Blog)

Is anyone in storage really paying close enough attention to Oracle? I think too many mistakenly dismiss Oracle’s infrastructure solutions as expensive, custom, proprietary, Oracle Database-only hardware. But, surprise, Oracle has been successfully evolving the well-respected ZFS as a solid cloud-scale filer, today releasing the fifth version of the ZFS storage array – the Oracle ZS5. And perhaps most surprising, the ZS series powers Oracle’s own fast-growing cloud storage services (at huge scale – over 600 PB and growing).

An IT industry analyst article published by SearchCloudStorage.

Customers should evaluate the cost and effectiveness of a hybrid cloud storage implementation when selecting a provider to store their valuable nearline data.

Mike Matchett

Security, governance, cost, bandwidth, migration, access control and provider stability once impeded the journey to the cloud, but today many companies’ on-premises storage arrays tier directly to public cloud storage.

Public cloud storage has graduated into an elastic utility that everyone can now use profitably. But there are some differences between the storage services various providers offer, and organizations should shop around before migrating terabytes of corporate data into the ether with a hybrid cloud implementation.

Cloud storage has evolved to the point where companies can use it for business, not just as a remote backup archive. Network latencies still prevent I/O-hungry data center workloads from using the cloud as primary storage, but a hybrid cloud implementation that can also move workloads to the cloud using virtual machines and containers is more the norm than the exception. Still, many data centers today have yet to start using the cloud for purposes beyond cold storage.
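The latency point above can be made concrete with a back-of-envelope calculation: a synchronous workload completes at most one I/O per network round trip per outstanding request, so round-trip time puts a hard ceiling on IOPS. The round-trip figures below are illustrative assumptions, not measurements of any particular provider.

```python
# Back-of-envelope: why WAN latency chokes I/O-hungry primary workloads.
# Round-trip times here are illustrative assumptions, not measurements.

def max_sync_iops(round_trip_ms, outstanding=1):
    """Ceiling on synchronous IOPS given a network round-trip time in ms."""
    return outstanding * 1000 / round_trip_ms

local_ssd = max_sync_iops(0.2)   # ~0.2 ms to a local flash array
cloud_wan = max_sync_iops(30)    # ~30 ms round trip to a cloud region
print(local_ssd, cloud_wan)      # roughly 5000 vs 33 IOPS per queue slot
```

Deeper queues or asynchronous pipelining raise the ceiling, but the gap of two orders of magnitude is why primary transactional storage stays on premises while colder tiers move to the cloud.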

To nail down good use cases, look at the various tiers of storage each cloud service provider offers. Companies should consider each tier as a possible plug-in resource in their architecture. Businesses should ask service providers how simple — and costly — it is to transfer data between cloud storage services directly, as this can make it easier to shift data around should needs change. Be sure to understand the costs involved in both storing data over time and accessing it when necessary. There may be access costs, but they might be within expected budgets. And pay attention to access latencies: Backups that take hours to recall may not provide satisfactory levels of business continuity.
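The cost comparison described above is easy to sketch in a few lines. The rates below are hypothetical placeholders chosen only for illustration; real providers publish their own per-GB storage, egress, and per-request prices, which is exactly what the shopping-around exercise should plug in.

```python
# Rough monthly hybrid cloud storage cost sketch.
# All rates are hypothetical placeholders -- substitute your
# provider's actual published pricing.

def monthly_cost_usd(stored_tb, egress_tb, retrieval_requests,
                     storage_rate_per_gb=0.023,   # $/GB-month (assumed)
                     egress_rate_per_gb=0.09,     # $/GB transferred out (assumed)
                     request_rate_per_1k=0.004):  # $ per 1,000 requests (assumed)
    """Estimate one month's bill: storage + egress + request charges."""
    storage = stored_tb * 1024 * storage_rate_per_gb
    egress = egress_tb * 1024 * egress_rate_per_gb
    requests = (retrieval_requests / 1000) * request_rate_per_1k
    return round(storage + egress + requests, 2)

# Example: 100 TB stored, 5 TB read back out, 2 million retrieval requests
print(monthly_cost_usd(100, 5, 2_000_000))
```

Note how quickly egress and request charges matter for active data: at these placeholder rates, reading back just 5% of the stored capacity adds a noticeable fraction of the storage bill, which is why access patterns belong in any tier-selection decision.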

Service providers have been steadily dropping prices, a beneficial turn of events for consumers. This means organizations considering a hybrid cloud implementation should position themselves to take advantage of the lowest costs available if all other factors are equal. It currently isn’t easy to migrate massive amounts of data from one provider to another, nor is it necessarily cheap. This results in some friction-based lock-in, which is something to be aware of…(read the complete as-published article there)

(Excerpt from original post on the Taneja Group News Blog)

We’ve been writing recently about the hot, potentially inevitable trend toward a dense IT infrastructure in which components like CPU cores and disks are not only commoditized, but deployed in massive stacks or pools (with fast matrixing switches between them). A layered provisioning solution can then dynamically compose any desired “physical” server or cluster out of those components. Conceptually this becomes the foundation for a bare-metal cloud. DriveScale today announces their agile architecture with this approach, aimed first at solving big data multi-cluster operational challenges.

Today at #snyc18, learning about the latest in #serverless. The opportunity is huge (iRobot is 100% serverless and loving it, @ben11kehoe), but it’s not a panacea – there’s still lots of work to do to build full production applications, according to Kelsey Hightower (Google) @kelseyhightower.