Minnesota and Private Cloud

I had a blog partially written for today when @GeorgeVHulme tweeted this: "WAHOO! Minnesota goes Private Cloud!" That changed my thoughts and direction completely. Here’s the article George linked to: State of Minnesota Signs Historic Cloud Computing Agreement With Microsoft. The fact that it was private cloud, and with Microsoft, got me to read the article. And it’s actually a pretty impressive story, both for the state and for Microsoft.

In essence, this takes “private cloud” to a different place than I would have envisioned. They’re outsourcing. Yes, there’s a line in the sand beyond which the state has complete control, but they have essentially handed Microsoft their infrastructure (the collaboration and email piece of it, anyway) and are holding Microsoft accountable for security and software maintenance. That’s a pretty solid plan, provided the admins at the state can manage the applications as they need/desire. There are gray areas that would need to be covered: which threats count as user/application threats that Microsoft isn’t responsible for, what the escalation path is, and so on. But those are no doubt covered in the contract, which we don’t have access to.

Microsoft is providing dedicated space (notice that the article never says dedicated hardware), and has even committed a datacenter that the cloud will run out of. The price tag must have been pretty high, but Microsoft Exchange administration, IM (à la Microsoft Communicator) administration, and Microsoft SharePoint administration, including the hardware and software maintenance, routing, and upgrades, are expensive too. The state knows what that portion of its budget will cost and can focus on running the apps that the state and its citizens require to get the job done.

I admit to being a bit intrigued, not just by the concept, but by the actual architectural implementation. Assuming access is via some form of SSL VPN, should we then expect that when another portion of the state is signed over to a cloud vendor, another VPN connection will be required? That would seem to be… awkward. But they do reference a dedicated line, so it is possible that there is no “gateway” to the services, though that would seem irresponsible from a security perspective on both parts, so I doubt it. Lock-down by source IP might be possible, but IPs are spoofable, so again, I doubt it.
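To make the spoofing concern concrete, here is a minimal sketch of the kind of layered access check such an arrangement would presumably need. Everything in it is an assumption, not anything from the actual contract: the network range, the function name, and the idea of pairing an IP allow-list with an authenticated tunnel (VPN or mutual TLS) are all illustrative.

```python
# Toy access-control decision: an IP allow-list is weak on its own,
# because source IPs can be spoofed, so it is combined here with a
# requirement that the session arrived over an authenticated tunnel.
from ipaddress import ip_address, ip_network

# Hypothetical range for the state's dedicated line -- not a real value.
STATE_NETWORKS = [ip_network("10.20.0.0/16")]

def allow_request(source_ip: str, tunnel_authenticated: bool) -> bool:
    """Permit access only if the source is on the dedicated line AND
    the session came through an authenticated VPN/mTLS tunnel."""
    on_dedicated_line = any(
        ip_address(source_ip) in net for net in STATE_NETWORKS
    )
    # An IP match alone is not sufficient: without tunnel
    # authentication, a spoofed packet "from" the dedicated range
    # would be accepted.
    return on_dedicated_line and tunnel_authenticated

print(allow_request("10.20.5.7", True))    # on-net and authenticated
print(allow_request("10.20.5.7", False))   # on-net, but no tunnel
print(allow_request("203.0.113.9", True))  # off-net
```

The point of the sketch is simply that the IP check and the tunnel check answer different questions (location vs. identity), which is why IP lock-down by itself doesn’t hold up.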

This arrangement has half of the issues that traditional outsourcing does, but I would argue the worst of them are taken care of. In a traditional outsourcing arrangement, your contract decreases in value as time goes on (assuming your vendor is successful, anyway), which means your staff are slowly watered down by other duties, and by the end of the contract you are likely frustrated. This is compounded by the fact that your IT needs grow over the two-to-five-year period of an outsourcing contract.

But in this case, the labor-intensive part of the agreement resides with state employees. Upgrading hardware and software is labor intensive but “bursty,” to put it in IT terms: you do it, and then it’s done until the next time you need it. On the other hand, maintaining users, modifying software configurations to meet your needs, and managing that software is a constant job that will likely grow over time.

This may be the answer outsourcing has been looking for. To me, having Microsoft employees apply their own security patches sounds like right-sizing. Of course there will be speed bumps, but even that has a pressure-release valve: if a server drops for no apparent reason, state IT staff will point to Microsoft, but be quietly glad that they have someone to point at.

Depending upon the agreed-upon price, states with much larger budget woes than Minnesota should probably be considering such an arrangement. Instead of a hazy partial budget, padded in case Exchange use grows at a faster-than-expected pace, they get a single number that is required to keep the lights on for critical state systems. It cleans up budgeting and allows the state to make critical choices in hard times with less guesswork. Capital expenditures drop, and staffing needs will ostensibly go down too, though that depends heavily upon the number of servers this system replaces and what their server-to-admin ratio is.
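The budgeting point above is simple arithmetic, and a toy sketch makes it visible. The dollar figures and percentages here are entirely invented for illustration, not from Minnesota’s deal: the contrast is between an in-house estimate that must be padded for worst-case growth and a fixed contract price.

```python
# Illustrative only: invented figures contrasting a padded in-house
# budget with a single fixed outsourcing number.
def in_house_budget(base_cost: float, expected_growth: float,
                    padding: float) -> float:
    """In-house estimate: base cost plus expected growth, plus a pad
    in case usage grows faster than planned."""
    return base_cost * (1 + expected_growth + padding)

def outsourced_budget(contract_price: float) -> float:
    """Outsourced: one known number to keep the lights on."""
    return contract_price

# Hypothetical: $5M base, 10% expected growth, 15% safety padding.
print(in_house_budget(5_000_000, 0.10, 0.15))  # 6250000.0
print(outsourced_budget(5_800_000))            # 5800000.0
```

Even when the fixed number is higher than the base cost, it removes the padding and the guesswork, which is the part that matters when a legislature has to commit to a figure.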

And I’m really intrigued by the implication that Microsoft, just by virtue of taking over this function, increases the security of Minnesota’s data. I know that Microsoft has been getting better over the last decade at security, but that is still an intriguing concept to me. Hope my boss (who lives in Minnesota) doesn’t notice it…

By way of disclosure, we are a Microsoft Partner, not that being one had anything to do with this blog, just making sure you know.

Don MacVittie is founder of Ingrained Technology, a technical advocacy and software development consultancy. He has experience in application development, architecture, infrastructure, technical writing, DevOps, and IT management. MacVittie holds a B.S. in Computer Science from Northern Michigan University and an M.S. in Computer Science from Nova Southeastern University.
