Gardner: How are IT architecture and new breeds of service providers coming together? What’s different now from just a few years ago for architecture when we have cloud, multi-cloud, and hybrid cloud services?

Reyenger: Like the technology trends themselves, everything is accelerating. Before, you would have three-year or even five-year plans developed by the business. They were designed to reach certain business outcomes, the technology was designed to support them, and then it was heads-down to build my rocket ship.

It’s changed now to where it’s a 12-month strategy that needs to be modular enough to be reevaluated at the end of those 12 months, and be re-architected -- almost as if it were made of Lego blocks.

Gardner: More moving parts, less time.

Reyenger: Absolutely.

Gardner: How do you accomplish that?

Reyenger: You leverage different cloud service providers, different managed services providers, and traditional value-added resellers, like International Integrated Solutions (IIS), in order to meet those business demands. We see a large push around automation, orchestration and auto-scaling. It’s becoming a way to achieve those business initiatives at that higher speed.

Gardner: There is a cloud continuum. You are choosing which workloads and what data should be on-premises, and what should be in a cloud, or multi-clouds. Trying to do this as a regular IT shop -- buying it, specifying, integrating it -- seems like it demands more than the traditional IT skills. How is the culture of IT adjusting?

Reyenger: Every organization, including ours, has its own business transformation that it has to undergo. We think that we are extremely proactive. I see some companies developing in-house skill sets and trying to add additional departments that are more cloud-aware in order to meet those demands.

On the other side, you have folks that are leveraging partners like IIS, which has acumen within those spaces to supplement their bench, or they are building out a completely separate organization that will hopefully take them to the new frontier.

IIS has spent 26 years building an amazing book of business and strong relationships with a lot of enterprise customers. But as times change, you need to be able to add practices like our cloud practice and our managed services practice. We have taken the knowledge we have around traditional IT services and added in our internal developers and delivery consultants. They are very well-versed in the new architectures. So we can marry the two together and help organizations reach that new end-state.

It's very easy for startups to go 100 percent to the cloud and just run with it. It’s different when you have 2,000 existing applications and you want to move to the future as well. It’s nice to have someone who understands both of those worlds -- and the appropriate way to integrate them.

Gardner: I suppose there is no typical cloud engagement, but what is a common hurdle that organizations are facing as they go from that traditional IT mindset to the more cloud-centric thinking and hybrid deployment models?

The cloud answer

Reyenger: The concept of auto-scaling or bursting has become very, very prevalent. You see that within different lines of business. Ultimately, they are all asking for essentially the same thing -- and the cloud is a pretty good answer.

At the same time, you really need to understand your business and the triggers. You need to be able to put the necessary intelligence together around those capabilities in order to make it really beneficial and align to the ebbs and flows of your business. So that's been one of the very, very common requests across the board.
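To make those scaling triggers concrete, here is a minimal sketch -- an illustration, not anything IIS ships -- of a business-aligned auto-scaling policy set up through AWS Application Auto Scaling via boto3. The cluster, service name, and thresholds are hypothetical assumptions.

```python
# Illustrative sketch only: a target-tracking scaling policy via boto3.
# The ECS cluster/service names and all thresholds are hypothetical.
import boto3

autoscaling = boto3.client("application-autoscaling")

# Register the workload (here, a hypothetical ECS service) as scalable.
autoscaling.register_scalable_target(
    ServiceNamespace="ecs",
    ResourceId="service/prod-cluster/orders-api",   # hypothetical
    ScalableDimension="ecs:service:DesiredCount",
    MinCapacity=2,     # floor for quiet periods
    MaxCapacity=50,    # ceiling for seasonal bursts
)

# Scale out when average CPU crosses 70 percent -- the "trigger"
# should come from knowing the ebbs and flows of the business.
autoscaling.put_scaling_policy(
    PolicyName="orders-api-cpu-target",
    ServiceNamespace="ecs",
    ResourceId="service/prod-cluster/orders-api",
    ScalableDimension="ecs:service:DesiredCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 70.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ECSServiceAverageCPUUtilization"
        },
        "ScaleOutCooldown": 60,    # react quickly to demand
        "ScaleInCooldown": 300,    # release capacity conservatively
    },
)
```

The design point is that the floor, the ceiling, and the target value come from understanding the business, not from defaults.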

We've built out solutions that include intellectual property from IIS and our developers, as well as cloud management tools built around backup to the cloud, to eliminate tape and modernize backup for customers. This builds out a dedicated object store that customers can own and that also tiers to the different public cloud providers out there.

And we’ve done this in a repeatable fashion so that our customers get the cloud consumption look and feel, and we’ve leveraged innovative contractual arrangements to allow customers to consume against the scope of work rather than on lease. We’ve been able to marry that with the different standardized offerings out there to give someone the head start that they need in order to achieve their objectives.
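As a rough illustration of that cloud consumption look and feel on a customer-owned store, the sketch below uses boto3 against a hypothetical S3-compatible private endpoint; the endpoint URL, bucket names, and credentials are placeholders, not details of the actual IIS offering.

```python
# Sketch: the same S3 API, pointed at a privately owned object store.
# The endpoint, credentials, bucket, and key are hypothetical placeholders.
import boto3

private_s3 = boto3.client(
    "s3",
    endpoint_url="https://objects.example.internal",  # hypothetical private endpoint
    aws_access_key_id="LOCAL_KEY",
    aws_secret_access_key="LOCAL_SECRET",
)

# Land a nightly backup on the private object store instead of tape.
with open("backup-2017-06-01.tar.gz", "rb") as backup:
    private_s3.put_object(
        Bucket="backups",
        Key="db/backup-2017-06-01.tar.gz",
        Body=backup,
    )

# Because the API is S3-compatible, tiering a copy out to a public
# cloud is the same call against a different client.
public_s3 = boto3.client("s3")  # default AWS endpoint and credentials
public_s3.upload_file(
    "backup-2017-06-01.tar.gz", "offsite-tier", "db/backup-2017-06-01.tar.gz"
)
```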

Gardner: You brought up the cloud consumption model. Organizations want the benefit of a public cloud environment and user experience for bursting, auto-scaling, and price efficiency. They might want to have workloads on-premises, to use a managed service, or take advantage of public clouds under certain circumstances.

Reyenger: Now it’s becoming a multi-cloud strategy. It’s one thing to be only on-premises and using one cloud. But relying on just one cloud carries risk, and that is a problem.

We try to standardize everything through a single cloud management stack for our customers. We’re agnostic to a whole slew of toolsets around both orchestration and automation. We want to help them achieve that.

Intelligent platform performance

We looked at some of the unique things that HPE has done, specifically around its Synergy platform, to allow for cloud management and cloud automation that deliver true composable infrastructure. That has huge value around energizing a company’s goals, strengthening profitability, boosting productivity, and enhancing innovation. We've been able to extend that into the public cloud. So now we have customers that truly are getting the best of both worlds.


Composable infrastructure is having true infrastructure that you can deploy as code. You’ll hear a lot of folks say that; what it really means is being able to standardize on a single RESTful API set.

That allows your platform to have intelligence when you look at infrastructure as a service (IaaS), and then to deliver things as either platform as a service (PaaS) or software as a service (SaaS) -- whether from a DevOps approach, or from the lines of business directly to consumers. So it’s the ability to bridge those two worlds.

Traditionally, you may have underlying infrastructure that doesn't have the intelligence or doesn't have the visibility into the cloud automation. So I may be scaling, but I can't scale into infinity. I really need an underlying infrastructure to be able to mold and adapt in order to meet those needs.

We’re finally reaching the point where we have that visibility and we have that capability, thanks to software-defined data center (SDDC) and a platform to ultimately be able to execute on.
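To picture what deploying "infrastructure as code" through a single RESTful API set can look like, here is a hedged sketch modeled loosely on HPE OneView-style REST endpoints, which front Synergy. The appliance address, credentials, schema version, and resource URIs are illustrative assumptions to check against the real API documentation.

```python
# Illustrative sketch of composing infrastructure through a REST API,
# modeled loosely on HPE OneView-style endpoints; treat the URLs and
# fields as assumptions to verify against the actual API docs.
import requests

APPLIANCE = "https://composer.example.internal"   # hypothetical appliance address
HEADERS = {"X-API-Version": "300", "Content-Type": "application/json"}

# 1. Authenticate and obtain a session token.
login = requests.post(
    f"{APPLIANCE}/rest/login-sessions",
    json={"userName": "administrator", "password": "secret"},  # placeholder creds
    headers=HEADERS,
    verify=False,  # lab-style: self-signed appliance certificate
)
HEADERS["Auth"] = login.json()["sessionID"]

# 2. "Compose" a server: apply a profile (compute, storage, fabric)
#    to a bare hardware resource -- infrastructure declared as data.
profile = {
    "type": "ServerProfileV6",                      # assumed schema version
    "name": "web-tier-node-01",
    "serverHardwareUri": "/rest/server-hardware/1",  # hypothetical resource
    "serverProfileTemplateUri": "/rest/server-profile-templates/web-tier",
}
resp = requests.post(
    f"{APPLIANCE}/rest/server-profiles",
    json=profile, headers=HEADERS, verify=False,
)
print(resp.status_code)  # OneView-style APIs answer with an async task
```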

Gardner: When I think about composable infrastructure, I often wonder, “Who is the composer?” I know who composes the apps, that’s the developer -- but who composes the infrastructure?

Reyenger: This gets to a lot of the digital transformation that we talked about in seeking different resources, or cultivating your existing resources to gain more of a developer’s view.

But now you have IT operations and DevOps both able to come under a single management console. They are able to communicate effectively and then script on either side in order to compose based on the code requirements. Or they can put guardrails on different segments of their workloads in order to dictate importance or assign guidelines. The developers can ultimately make those requests or modify the environment.

Gardner: When you get to composable infrastructure in a data center or private cloud, that’s fine. But that’s sort of like 2D Chess. When I think about multi-cloud or hybrid cloud -- it’s more like 3D Chess. So how do I compose infrastructure, and who is the composer, when it comes to deciding where to support a workload in a certain way, and at what cost?

Reyenger: We ultimately allow that cloud management stack to be the single pane of glass, the single console. And because it has RESTful API integrations into those public cloud providers, we’re able to provide transparency from that management interface, which mitigates risk and gives you control.

We deploy things like Puppet, Chef, and Ansible within those different virtual private clouds and within those public cloud fabrics. Then, using that cloud management stack, you get uniformity, and you can take that composition and that intelligence and bring it wherever you like -- whether that's based on geography or on a particular cloud service provider preference.
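One way to picture that uniformity is a short sketch that drives the same Ansible playbook from Python via the ansible-runner library, swapping only the per-provider inventory. The playbook name and inventory paths are hypothetical.

```python
# Sketch: one playbook, many clouds, using the ansible-runner library.
# The playbook name and per-provider inventories are hypothetical.
import ansible_runner

# The same configuration ("composition") is applied per provider --
# only the inventory differs, so standardization survives the firewall.
for inventory in ("inventories/aws", "inventories/azure", "inventories/on-prem"):
    result = ansible_runner.run(
        private_data_dir=".",        # directory holding playbooks/inventories
        playbook="baseline.yml",     # hypothetical standard baseline
        inventory=inventory,
    )
    print(inventory, result.status, result.rc)
```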

There are many different ways to ultimately achieve that end-state. We just want to make sure that that standardization, to your point, doesn’t get lost the second you leave that firewall.

Gardner: Projecting into the future, do you see a role for an algorithmic, programmatic approach -- putting in certain variables, certain thresholds, and contextual learning -- to make this composable infrastructure capability part of a machine process?

Reyenger: Companies like HPE -- along with its new acquisition, Nimble -- as well as Red Hat and several others in the industry are leveraging the intelligence they have from all of their different support calls and lifecycle management across applications to provide feedback to the customer.

And in some cases, if you tie that back to an automation engine, it will actually give you the information on how to solve your problem. A lot of the precursors to what you are talking about are already in the works, and everyone is trying to be that data-cloud management company.


It's really too early to pick favorites, but you are going to see more standardization. Rather than 50 different RESTful APIs that everyone is standardizing on and that are constantly changing -- so that I have to provide custom integrations -- what we will see is more of that single pane of glass leveraged across multiple cloud providers, one that uses a lot of the same automation and orchestration toolsets we talked about.

Gardner: Looking at composable infrastructure, auto-scaling, using things like HPE Synergy, if you’re an enterprise and you do this right, how do you take this up to the C-Suite and say, “Aha, we told you so. Now give us more so we can do more”? In other words, how does this improve business outcomes?

Fulfilling the promise

Reyenger: Every organization is different. I’ve spent a good chunk of my career being tactically deployed within very large organizations that are trying to achieve certain goals.

For me, I like to go to a customer’s 10-K SEC filing and look at the promises they’ve made to their investors. We ultimately want to be able to tie this IT investment back to the short-term goals they are being judged against -- both from a key performance indicator (KPI) standpoint and from the standpoint of the health of the company.

It means meeting DevOps challenges and timelines, rolling out new greenfield workloads, and taking data that sits within traditional business intelligence (BI) relational databases and giving different departments access to some of that data, so they can run big data analytics against it in real time.
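As a minimal sketch of that last pattern, the Python below pulls a department's slice of a hypothetical BI warehouse table into pandas for ad hoc analysis; the connection string, table, and column names are invented for illustration.

```python
# Sketch: expose a slice of relational BI data for departmental analytics.
# The connection string, table, and columns are hypothetical.
import pandas as pd
from sqlalchemy import create_engine

engine = create_engine("postgresql://analyst@bi-db.example.internal/warehouse")

# Pull only the slice a given department is entitled to see.
orders = pd.read_sql(
    "SELECT region, order_date, revenue FROM sales.orders "
    "WHERE region = 'northeast'",
    engine,
)

# Ad hoc analysis the department can run in near real time.
daily = orders.groupby("order_date")["revenue"].sum()
print(daily.tail())
```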

These are the types of testing methodologies that we like to set up so that we can help a customer actually rationalize what this means today in terms of dollars and cents and what it could mean in terms of that perceived value.

Gardner: When you do this well, you get agility, and you get to choose your deployment models. It seems to me that a concept of minimum viable cloud, or minimum viable hybrid cloud, is going to arise.

Are we going to see IT costs at an operating level adjusted favorably? Is this something that ultimately will be so optimized -- with higher utilization, leveraging the competitive market for cloud services -- that meaningful decreases will occur in the total operating costs of IT in an organization?

An uphill road to lower IT costs

Reyenger: I definitely think that it’s quite possible. The way that most organizations are set up today, IT operations rolls back into finance. So if you sit underneath the CFO, like most organizations do, and a request gets made by marketing or sales or another line of business -- it has to go up the chain, get translated, and then come back down.

A lot of times it's difficult to push a rock up a hill. You don’t have all the visibility unless you can get back up to finance or back over to that line of business. If you are able to break down those silos, then I believe that your statement is 100 percent true.

But changing all of those internal controls is very difficult for a lot of these organizations, which is why some are deploying net-new teams that will ultimately be the future of their internal IT service provider operations.

Gardner: Arthur, I have been in this business long enough to know that every time we get to the point where we think we are going to meaningfully decrease IT costs, some other new paradigm of IT comes up that requires a whole new round of investment. But it seems to me that this could be different this time -- that we are actually getting to a standardized approach for supporting workloads, and that the traditional economics that affect any procured service will come into effect here, too.

Mining to minimize risk

Reyenger: Absolutely. One of our big pushes has been around object storage. This still allows for traditional file- and block-level support. We are trying to help customers achieve that new economic view -- of which cloud approach ultimately provides them that best price point, but still gives them low risk, visibility, and control over their data.

I will give you an example. There is a very large financial exchange that had a lot of intellectual property (IP) data that they traditionally mined internally and then provided back to different, smaller financial institutions as a service, as financial reports. A few years back, they came to us and said, “I really want to leverage the agility of Amazon Web Services (AWS), in terms of being able to spin up a huge Hadoop farm and mine this data very, very quickly -- and leverage that without having to increase my overall cost. But I don’t feel comfortable putting that data into S3 within AWS, where now they have two extra copies of my data as part of the service level agreement. So what do I do?”

And we ultimately stood up the same object storage service next to AWS, so you wouldn’t have to pay any data egress fees, and you could mine everything right there, leveraging AWS Redshift or Hadoop-as-a-service.

Then, once these artifacts, or reports, were created, they no longer contained the IP. The reports came from the IP, but they are all roll-ups and comparisons, so they are not sensitive to the company. We went ahead and put those into S3 and allowed Amazon to manage all of their customers’ identity and access management for getting access to them -- and that all minimized risk for this exchange. We are able to prevent anyone outside of the organization from getting behind the firewall to get at their data. You don’t have to worry about the SLAs associated with keeping this stuff up and available, and it became a really nice hybrid story.
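That data flow can be pictured in a short, hedged sketch: the sensitive source data stays on the privately owned, S3-compatible store, and only the derived, non-sensitive reports are published to AWS S3 under Amazon's identity and access management. Every endpoint, bucket, and key below is a placeholder.

```python
# Sketch of the hybrid pattern above; every endpoint, bucket, and key
# is a hypothetical placeholder, and credentials come from the environment.
import boto3

# The sensitive IP stays on the customer-owned object store next to AWS.
private_s3 = boto3.client("s3", endpoint_url="https://objects.exchange.internal")
raw = private_s3.get_object(Bucket="market-data", Key="ticks/2017-06-01.parquet")
# ... a Hadoop or Redshift job would consume raw["Body"] and derive
# roll-up reports that no longer contain the sensitive IP ...
report_body = b"aggregated, non-sensitive roll-up report"

# Only the derived report is published to AWS S3, where Amazon IAM
# governs which customers can access it.
aws_s3 = boto3.client("s3")
aws_s3.put_object(
    Bucket="exchange-published-reports",   # hypothetical AWS-side bucket
    Key="reports/2017-06-01-summary.pdf",
    Body=report_body,
)
```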


These are the types of projects that we really like to work on with customers, to be able to help them gain all the benefits associated with cloud – without taking on any of the additional risk, or the negatives, associated with jumping into cloud with both feet.

Gardner: You heard your customers, you saw a niche opportunity for object storage as a service, and you put that together. I assume that you want a composable infrastructure to do that. So is HPE Synergy a future foundation for this?

Reyenger: HPE Synergy doesn’t really have the disk density to get to the public cloud price point, but it does support object storage natively. So it's great from a DevOps standpoint for object storage. We definitely think that as time progresses and HPE continues down the Synergy roadmap, that gap will eventually close.

A lot of the cloud world is centered on hyper-converged infrastructure. And in that model, I don’t see compute and storage growing at the same rates. I see storage growing considerably faster than the need for compute. So this is a way for us to supplement a Synergy deployment, or to help our customers get the true ROI/TCO they are looking for out of hyper-converged.

Gardner: So maybe the question I should ask is what storage providers are you using in order to make this economically viable?

Reyenger: We are absolutely using the HPE Apollo storage line, with the different flavors of solid-state disks (SSD) down to SATA physical drives. And we are leveraging best-in-breed object storage software from Red Hat. We also have an OpenStack flavor.

We leverage automation and orchestration technologies and our ServiceNow capabilities -- all married with our IP -- to give customers the choice of buying this and deploying it themselves, having us layer services on top, or consuming a fully managed service for something that’s on-premises. I have a per-GB price and the same SLAs as those public cloud providers. So all of it’s coming together to allow customers to have the true choice and flexibility that everyone claimed you could have years ago.
