If you are still reading, allow me to reward you with some measure of answers:

The first real challenge is that change happens and you will not have the funding to remove the old and replace it with the new.

The second real challenge is that innovation and agility demand change.

The third challenge is that we continue to focus on initial state instead of the life of a service as a constant source of change.

So, the challenge is change.

Perfection is the process of refinement until only desirable elements, qualities or characteristics remain.

I have illustrated that change is both the problem and the solution. How can we resolve these two opposites: I love change, but I hate its effects?

There are two IT approaches to this challenge:

Take on day-two operations (standardize, quantify, change management, etc.)

Move to a micro-service architecture

Many organizations have embraced change management as the way to approach change. Every single change has to be approved by subject matter experts, thus reducing risk. In practice this only serves to slow down innovation by making it filter through a committee. Management of change is the enemy of innovation; it's truly the relish of IT's failure today. Change management rarely stops failures because of the complexity of the systems involved. While I am a huge fan of configuration management as a method of maintaining initial state, it's only a band-aid, not the real solution. It's a reactive approach which rarely takes the master plan into account.

The allure of micro-service architecture is easily understood, but in reality many applications, both COTS and in-house developed, struggle to achieve it. Many customers have a one-sided strategy favoring stateless and micro-service architectures while pretending traditional applications don't exist. A quick survey of your application portfolio might show that 80% of your business revenue is generated by the least stateless architecture you support. It's a rip-and-replace plan that rarely takes current realities into account.

So how can we embrace change and innovation?

I believe this is where we combine the best of both worlds with a clear understanding of reality. We need the replaceability of micro-service architecture with the compatibility of legacy servers. We need the speed and agility of constant change with the stability of configuration management. For me it comes down to application as code. Can you define your application as code? Architects have been defining their applications in Visio for years… would it not be easier to define them as code? This code can then be versioned and updated. Envision everything unique about your application, from network, security, storage and servers to configuration and applications, defined in a code construct (a minimal sketch follows after the list below). You could then check that construct into a code repository. When change is required, the complete environment including the change can be deployed from the code construct. The deployed infrastructure can be tested automatically for required functionality and then used to replace current production. If it fails the functionality tests, it returns to the drawing board. This type of infrastructure as code can be deployed 100 times a day, driving innovation speeds. If failures become an issue, the application and infrastructure are rolled back to the last known good state. I am not suggesting we adopt 100% stateless infrastructure, containers or magic fairy dust… I am suggesting we tighten our belts and do the hard work to truly define applications as code. In order to define things as code we need three things:

Software-based constructs for everything – if your solution requires physical hardware it cannot be automated or replicated without time and cost; no one has hardware on demand for every dynamic situation

Coordination between siloed teams (break down the silos and form one team: no more separate infrastructure, application, network, security and operations teams)

Development skills

Combining all these elements provides the basis for successful application as code. You will have to orchestrate many different methods into a cohesive approach and use iterative software development. To solve the problem, you will have to approach a new project with this team, not try to redesign or replatform an old one. These basic blocks can provide the basis for immutable applications, making infrastructure just plumbing.
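To make "application as code" concrete, here is a minimal sketch in Python. Everything in it is hypothetical: the deploy, test and promote steps are print-statement placeholders for whatever toolchain you actually use. The point is that the whole application is one versioned, testable construct:

```python
from dataclasses import dataclass, field

@dataclass
class ApplicationConstruct:
    """Everything unique about the application, versioned in one place."""
    name: str
    network: dict = field(default_factory=dict)        # segments, firewall rules
    storage: dict = field(default_factory=dict)        # volumes, policies
    servers: list = field(default_factory=list)        # images and sizing
    configuration: dict = field(default_factory=dict)  # app settings

def deploy(app):
    # Placeholder: build the complete parallel environment from the construct.
    print(f"deploying complete environment for {app.name}")
    return f"{app.name}-candidate"

def functional_tests_pass(env):
    # Placeholder: wire in real automated functionality checks here.
    print(f"running functional tests against {env}")
    return True

def release(app):
    """Build a parallel copy, verify it, then promote or roll back."""
    candidate = deploy(app)
    if functional_tests_pass(candidate):
        print(f"promoting {candidate} to production")
    else:
        print(f"destroying {candidate}; back to the drawing board")

release(ApplicationConstruct(
    name="billing",
    network={"segment": "10.0.42.0/24", "ingress": ["443/tcp"]},
    servers=[{"image": "web-base-1.4", "count": 3}],
))
```

Because the construct is plain code, every change is a commit: it can be reviewed, diffed, redeployed a hundred times a day, and rolled back to the last known good state.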

Your IT shop is Ugly! Part 2
This is part 2 of a multi-part article series; see the other articles here:

Innovation and agility are buzzwords I hear a lot in IT. Innovation is more about culture than capabilities. Innovation is inherently a proactive activity: see a problem and choose to solve it in a new way. Agility is the ability to embrace change quickly; it is inherently reactive. I had some exposure to an IT environment where everyone was compensated based upon not being the cause of outages. After an outage, a root cause analysis would be done to determine which group's compensation was negatively affected. As you can imagine, this policy was created to reduce outages. In fact, it had a direct negative effect on mean time to resolution. During an outage everyone was focused on making sure they didn't get any of the blame. Innovation did not exist in this company because it had the potential of creating outages, which were unacceptable. No one would work together on anything; they had a culture of blame instead of innovation. Innovation requires organizations to be willing to endure acceptable downtime. Acceptable downtime was defined by Google as part of its site reliability engineering. It is focused on the idea that we can continue to innovate until we have passed the threshold for acceptable downtime for the month; once the month has passed, innovation can continue. Site reliability engineers spend 50% of their time on operations and 50% automating operations. Using acceptable (or allowed) downtime has turned the traditional SLA model upside down. It allowed Google's IT to innovate at a much faster pace. Increased proactive innovation has a direct effect on reducing the amount of reactive work being done.
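As a rough sketch of the error budget arithmetic behind this idea (the 99.9% SLO and the downtime figures are hypothetical numbers for illustration):

```python
# "Acceptable downtime" as an error budget: innovate until the budget is spent.
MINUTES_PER_MONTH = 30 * 24 * 60  # 43,200 minutes in a 30-day month

def error_budget_minutes(slo):
    """Downtime the business has agreed to tolerate per month."""
    return MINUTES_PER_MONTH * (1 - slo)

def budget_remaining(slo, downtime_so_far):
    """Minutes of risk the team may still spend on change this month."""
    return error_budget_minutes(slo) - downtime_so_far

print(error_budget_minutes(0.999))    # 43.2 minutes of acceptable downtime
print(budget_remaining(0.999, 30.0))  # 13.2 minutes left: keep innovating
print(budget_remaining(0.999, 50.0))  # negative: freeze changes, stabilize
```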

The second real challenge is that innovation and agility demand change.

We are focused on the initial state

Consider manufacturing: it is concerned with the initial state. Auto manufacturing has optimized every portion of the process. They have the supply chain whipped, huge buildings full of robots, and they produce tens of thousands of cars a day. All of these efforts optimize the end deliverable, a car, for the consumer. Once the consumer takes ownership, all of that optimized automation ends. Once you reach 5,000 miles you have to take the car to a shop where a human changes the oil. If something breaks, humans change the parts and immediately start to break all the standardization and quality created by the initial automation. End-to-end creation of a car takes roughly 17 hours. That same car is likely to be in the wild for 87,600 hours (10 years), yet everything is focused on optimizing the 17 hours of initial state. There are a number of parallels between cars and IT. Most IT shops are focused on delivering initial state quickly (day one); a lot less thought is given to day-two operations, which will persist for the next five to ten years. The major difference is the customer's expected outcome. With a car you expect a drivable product with some level of quality. With a server you expect it to operate in its fifth year the same as at initial delivery.

The third challenge is that we continue to focus on initial state instead of the life of a service as a constant source of change.

Your IT shop is Ugly! Part 1
This is part 1 of a multi-part article series; see the other articles here:

Big news everyone! Your IT shop is ugly! Good news: everyone's IT shop is ugly. As an 'older' IT professional (in IT that means you are past 30) I have seen my fair share of IT shops. As a solutions architect for VMware I have seen even more IT practices. The truth is everyone's IT shop is ugly; there are no pretty IT shops. In this article I will explain why it's ugly and provide some prescriptive steps for solving the issues you may face.

Master Planned community

I recently moved to Texas with my job (it's been great, btw). I had to sell my beloved first home and move into another house. This home happened to be in a master planned community. My community has hundreds of homes that have all been built in the last five years. Before a single home was built, a developer got together with an architect and planned out the whole community. Every inch was planned, from the placement of houses down to the location of bushes. It's a beautiful place to live and very orderly. To preserve the experience envisioned by the architect, a 250-page HOA document exists to maintain the master plan. I learned quickly that my home was missing a bush in front of the air conditioner and that I could not leave my garbage cans out overnight. As I drive out of my community, the center island of the road is lined with trees. I noticed the other day that one tree had been replaced due to the death of the previous tree. This has upset the balance of my community: the master plan now has a tree that is ten feet shorter than the rest. Chaos has happened, and the master plan could not account for it exactly.

I don't care about your master planned community, so why bring it up?

Honestly, I don't really care about my master planned community either. It is a great and safe place to live, which were my requirements; I couldn't care less about the tree. But I believe it's a perfect way to explain why your IT shop is ugly. Your IT environment is as old as your company (in most cases), which means it was master planned a while ago. Since the original plan you may have expanded, contracted, taken on new building techniques and changed contractors. Your original architect has retired, moved to a new company, been promoted, or stayed put and not updated their skills. New architects have come and gone, each looking to improve the master design with their own unique knowledge and skill set. Some organizations even have architects for each element who rarely coordinate with each other. Each of these architects inherited the technical debt created and left by previous architects: older architectures, applications and methods, each with their aging deterioration and mounting cost. Some of your architects have suggested solving these technical debt monsters in two potential ways:

Wipe it out and start over (bi-modal)

Update the homes where they stand (Upgrade)

Each of these methods provides the simple benefit of reducing the total cost of ownership of these legacy platforms.

The wipe it out method requires some costly steps:

Build new homes that could be used to turn a profit if they were not part of the re-platform

Move everyone into the new homes

Ensure everyone is happy with their new home (which turns into a line by line comparison – my kitchen used to be on the right not the left…)

Switch the owners into the new homes

Plow down older homes

Build new homes on the land to turn a profit (or get cost savings from the re-platform to improve bottom line)

The update-homes-where-they-stand method seems like a good plan but requires some steps:

Buy new materials to replace sections of the home

Move owners into temporary housing

Update their homes

Move them back

It’s a long process

Both methods are costly, and removing technical debt rarely makes the business's radar as critical to its health, so it gets ignored.

Will FaaS mean the end of servers?
A few years ago there were many articles about how containers would mean the end of servers. From a technical standpoint, Function as a Service (FaaS) and containers both run on servers, so the simple answer is no, it does not mean the end of servers. I have seen a lot of rumbling around FaaS of late. Those who have heard me speak on automation know I am all about functions and modular blocks. We do need to break code down to its simplest terms to encourage innovation and re-use. FaaS has a place in your overall design. Application design continues to pivot away from monolithic design toward more micro-service models. FaaS is part of that pie. When considering any of these strategies, the same overall design challenges exist:

Data persistence

Data gravity

Security

Data persistence:

No matter how stateless your environment, sooner or later data is involved. There are some exceptions, but they are really rare. The internet runs on data. The real value is identifying you as a user and selling that data en masse, not the $0.99 you paid for the app. Applications exist to do something and then keep state… or record your reactions; either way, the data needs to be stored. Pure stateless applications are stateless, and FaaS is stateless, so somewhere in the pie we need state: something to orchestrate the next step and provide the value to the user and the developer. Where you store this data depends on the application; from a simple text file to a share-nothing database, someone is keeping the data. Let's just be honest that 90% of the world still lives on a relational database (Oracle, MS-SQL, MySQL) with a small portion using a share-nothing database (Cassandra, etc.). This persistence layer has all the same concerns as any other non-immutable infrastructure: if you lose all your copies, you lose data. Even with every function of an application as FaaS, you still need a database. The challenge of persistence means you have to live in both worlds, persistent and non-persistent. It's important to consider the manageability of both when you consider implementing new technologies.
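A minimal sketch of that split: a stateless, FaaS-style handler that must push anything worth keeping to an external store before it returns. SQLite stands in here for whatever relational database you actually run; the event shape and table are hypothetical:

```python
import json
import sqlite3  # stand-in for the relational database most shops still run

def handler(event):
    """Each invocation starts cold: no state survives between calls,
    so anything worth keeping must be written out before returning."""
    order = json.loads(event["body"])
    with sqlite3.connect("orders.db") as db:  # the persistence layer
        db.execute(
            "CREATE TABLE IF NOT EXISTS orders (id TEXT PRIMARY KEY, total REAL)"
        )
        db.execute(
            "INSERT OR REPLACE INTO orders VALUES (?, ?)",
            (order["id"], order["total"]),
        )
    return {"status": 200}

print(handler({"body": json.dumps({"id": "A-100", "total": 0.99})}))
```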

Data gravity:

The idea of FaaS or stateless is that I can deploy anywhere… While this is technically true, you want your application and functions to be close to the persistent data to keep performance acceptable. That means you either need to replicate data in real time between everywhere you want to operate, or operate in the same locality as your stateless functions. Share-nothing databases have massive concerns with write amplification: confirming a write across long distances introduces unacceptable latency into every write. Sharding of these databases is touted as the solution, using synchronous writes in the same location for redundancy. Sharding is possible, but it's complex, and you still have latency when the data needed is not local. Now we have created an M.C. Escher puzzle with our application architecture. Gravity of data will continue to drive location more than the features or functionality of a location. It's an instant world, and no one is going to wait for anything anymore.
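A quick illustration of why locality wins: even at the speed of light in fiber, a synchronously confirmed write has a hard latency floor. Distances here are approximate:

```python
SPEED_IN_FIBER_KM_PER_MS = 200  # light in fiber covers roughly 200 km per ms

def min_round_trip_ms(distance_km):
    """Best case only; real networks add routing and queuing on top."""
    return 2 * distance_km / SPEED_IN_FIBER_KM_PER_MS

for route, km in [("same metro", 50),
                  ("US coast to coast", 4000),
                  ("US to Europe", 6000)]:
    print(f"{route}: >= {min_round_trip_ms(km):.1f} ms per confirmed write")
```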

Security

While not as interesting as the bling of FaaS, security is a real concern. Unless you plan on running your FaaS inside your private datacenter, it's a concern. Your functions hold data in memory to do their work, and the function is running on a server. Like all multi-tenant situations, how do we avoid having a bad or untrusted actor access our data in flight? Anyone who has worked for a multi-tenant provider understands this challenge. Cloud providers have long wrapped containers in lightweight isolation layers (instead of shared worker nodes). I personally don't know what measures providers have taken to isolate FaaS offerings, but you do have to consider how you will ensure there is not a hacker running a buffer overflow and reading your FaaS memory.

At the end of the day, what is old is new and what is new is old. FaaS, containers, virtual machines, physical servers, laptops and phones all have the same fundamental application challenges. These all provide options. You may be considering a FaaS strategy for many reasons. My point is: don't ignore good design principles just because it's new technology.

vRealize Orchestrator scaling with 4K displays
I ran into this issue this week. vRealize Orchestrator with Windows 10 on 4K displays makes the UI so small that not even my seven-year-old could read it. For those lucky enough to have 4K displays, it's a real challenge. It's a problem with Java and DPI scaling, not Orchestrator, but it's a magical challenge. As much as I want to return to the days of using magnifying glasses to read computer screens… here is a simple fix.

Download the client.jnlp and run it from an administrative command line with the following command:

javaws -J-Dsun.java2d.dpiaware=false client.jnlp

This should fix your vision issues, and it's cheaper than a new pair of glasses.

vRO Action to return virtual machines with a specific tag
I am a huge fan of tags in vSphere. Metadata is king for modular control and policy-based administration. I wrote an action for a lab I presented at VMUG sessions. It takes the name of a tag as a string and returns the names of all virtual machines carrying that tag as an array of strings. It does require a VAPI endpoint, set up as explained here: (http://www.thevirtualist.org/vsphere-6-automation-vro-7-tags-vapi-part-i/) Here it is:
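The embedded action itself did not survive the feed export. As a stand-in (not the original vRO JavaScript), here is a sketch of the same idea, tag name in, VM names out, using the vSphere Automation SDK for Python; the server, credentials and tag name are placeholders:

```python
import requests
from vmware.vapi.vsphere.client import create_vsphere_client
from com.vmware.vcenter_client import VM

def vms_with_tag(client, tag_name):
    """Return the names of all VMs carrying the given tag."""
    # Find the tag IDs whose display name matches the requested name.
    tag_ids = [t for t in client.tagging.Tag.list()
               if client.tagging.Tag.get(t).name == tag_name]
    names = []
    for tag_id in tag_ids:
        for obj in client.tagging.TagAssociation.list_attached_objects(tag_id):
            if obj.type == "VirtualMachine":
                # Resolve the managed object ID to the VM's display name.
                for summary in client.vcenter.VM.list(VM.FilterSpec(vms={obj.id})):
                    names.append(summary.name)
    return names

if __name__ == "__main__":
    session = requests.session()
    session.verify = False  # lab only; verify certificates in production
    client = create_vsphere_client(server="vcenter.example.com",
                                   username="user", password="pass",
                                   session=session)
    print(vms_with_tag(client, "Production"))
```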

Learning vRealize Orchestrator
Recently I presented a lab on learning vRealize Orchestrator to the Austin VMUG. It had been too long since I attended a VMUG meeting, due to moving to Dallas, and it was great to meet new peers in the Austin area. The purpose of the lab was to give people hands-on experience using Orchestrator with some real-world, helpful examples. I was present to answer questions and help the learning process.

I am working to bring the same lab to Dallas and Houston in the next few months but wanted to share the labs here. It's mostly possible to do the labs in the hands-on lab environment HOL-1821-05-CMP, partially made by my friend Ed Bontempo. You will have to type a lot of the code examples, since HOL does not support cut and paste, or you can do it all in your home lab. Contact me if you want to know about the next session we are presenting to a live VMUG or if you want some instructor help.

Two simple things to improve your life
Warning: this is an end-of-year personal post and has very little to do with technology, so if you are looking for a technology post feel free to skip it. Two things have been rattling around in my head of late, and I wanted to share them as an end-of-year post. These two things have proven to improve my life many times over.

Perception determines reality

I do not wish to diminish the real-life challenges you each face. I have lived long enough to understand that each of us faces monstrous mountains throughout our lives. Some of us face challenges with family, friends, relationships, the actions of others, health and many other things that are real.

When we face these mountains, our outlook truly can change the outcomes. I learned this in a simple way many years ago. I was a young missionary serving a dedicated two years spreading the gospel. I was 22 years old and in Michigan, 2,000 miles away from any family. I had been a missionary for over 14 months, well experienced in the constant negative response we received from our work. I was assigned to train a new missionary, and it was Christmas time. It's a particularly hard time to be away from family and virtually cut off (we were only allowed to communicate via mail once a week). We had a particularly hard area full of rich people (they are generally not receptive to our message). There were many days it poured freezing rain or snow while we traveled by bike. It was cold and dark. A week before Christmas we discussed getting a Christmas tree and determined that our monthly food budget of $115 could not afford a tree. So we pressed on with the long days of work (9 am to 8 pm knocking on doors, six days a week). My new missionary friend always had a great attitude; nothing fazed him. We joked and talked every day while we seemed to be accomplishing nothing. One night I was preparing for bed and came out into the front room to discover my companion coloring boxes green. When I asked him what he was doing, he simply stated he was making a Christmas tree. Later that night he stacked the boxes of various sizes on top of each other to roughly resemble a tree. He then took a red marker and drew balls on the tree. Satisfied with the results, he went to bed. He and I spent three months together knocking on doors for 11 hours a day and didn't get the opportunity to teach a single lesson. By all results we had an epic fail. Looking back 16 years later, I can clearly say I learned one of life's most important lessons: it is not the results that count, it is how you face them. I have had many failures in my life since then, but I have been able to remember the lessons he taught me: make the best of what you have and don't let any external event get you down. You might have to make a Christmas tree out of boxes, or lower your expectations and consider getting out of bed the crowning achievement of the day, but your perception determines your reality, not any external event.

By small and simple things great things are brought to pass

When I was young I was convinced that I needed to find some great event to prove nobility, that it was in a single moment these great things happen. It is simply not true. While people are noble and at times do great things, I suggest this is just an extension of the many small things they have done for many years. A friend once called it healthy life habits practiced regularly. I have learned that I do my best work in small bursts practiced regularly. Simple things like choosing to go to bed on time make me a better father the next day… and when practiced consistently for a lifetime, make a better father for life. Choosing to allocate time to service makes me less selfish for a day; practiced each day, it makes me a less selfish person. Other examples may include: reading a book to improve myself, spending one-on-one time with my children, driving a little slower and letting people merge, choosing to do the dishes, reading religious words, turning off my phone, etc. I am convinced that by doing all these small, simple things consistently I will find I have become a great and noble man at the end of my life.

Hyperconverged infrastructure (HCI) is natively software defined, driving a shift of operations away from the traditional storage management paradigm. Many of my customers have struggled with this paradigm shift when adopting HCI storage. HCI has been very successful in addressing specific use cases. Many of these use cases have been successful because they represent workloads that have not traditionally been managed by storage teams, for example VDI. Adoption of HCI beyond these use cases requires large organizations to implement people and process transformation to be successful. Discussions with customers have shown that fear about the operational transition has created a lack of adoption. The net gain of HCI in the datacenter is a significant reduction in the total cost of ownership for storage.

What is your storage strategy?

When looking at your storage strategy you are likely to see a mixture of solutions to meet your needs. I have found that the following questions help people identify their requirements, which ultimately lead to strategy:

What storage requirements do your applications have and how are they measured?

How is storage involved in your business continuity, disaster recovery, backup and availability strategy?

What data security requirements does your organization have?

What is your storage strategy for the cloud?

Once you identify your storage requirements the strategy can be aligned with functional needs. Functions that may be important to your organization around storage may include:

Capacity

Performance

Redundancy

Data security

Ease of management

Cost

Replication capabilities

Assigning measurements to these functions allows you to identify the correct storage "profiles" to be used in your organization. These profiles can then be aligned with your storage strategy.

Differences in HCI

HCI does present some differences from many traditional storage arrays. Four common elements of difference are capacity management, scalability, policy-based management, and roles.

Capacity Management

Capacity management in most traditional systems requires a measurement based on historical usage metrics. Historical data is taken into account, then a "bet" is made about required capacity for the next X years. The large "bet" on storage array capacity and performance does not allow IT to be agile to business changes. Growth beyond the initial implementation is possible by adding additional storage shelves or buying new arrays. HCI, by contrast, takes a linear model. You can scale up and out incrementally. You add capacity by adding additional drives to your current servers, or add additional servers to increase the available controllers and drive bays. I find that customers who adopt HCI are:

Able to procure storage in incremental blocks instead of via large capital expense "bets"

Able to have a predictable outcome on capacity management

Able to adopt new technology faster

Able to utilize storage resources without depreciation "bets"

Once a storage team becomes aligned with HCI-based capacity management, they find that storage capacity growth is no longer a "flak jacket" exercise. The business can accept that a new project requires some incremental increase in cost instead of requiring a large CapEx spend. The integrated nature of HCI means that compute capacity sizing is integrated in part with storage capacity. This simplified capacity management allows the IT budget to stretch farther. Best practices for HCI include:

Design for scale, but build incrementally

The overall capacity management process is the same as for traditional arrays, but lead times are shorter and purchases potentially more frequent

Choose servers with maximum available drive bays

Traditional storage capacity management requires procurement at roughly 60% usage to allow for growth. In large environments this means that large amounts of capacity will never be used, increasing the total cost per GB of storage. HCI's lower capacity expansion cost should allow large organizations to utilize 80% or more of capacity before buying expansions.
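As a back-of-the-envelope illustration (the dollar and capacity figures below are hypothetical), the expansion threshold changes the effective cost of the capacity you actually consume:

```python
def cost_per_usable_tb(purchase_cost, raw_tb, usable_fraction):
    """Cost divided by the capacity actually consumed before expanding."""
    return purchase_cost / (raw_tb * usable_fraction)

traditional = cost_per_usable_tb(500_000, 1000, 0.60)  # buy again at 60% used
hci = cost_per_usable_tb(500_000, 1000, 0.80)          # expand at 80% used
print(f"traditional (expand at 60%): ${traditional:,.0f} per TB used")
print(f"HCI (expand at 80%):         ${hci:,.0f} per TB used")
# Same spend, but the HCI model consumes a third more of what was bought.
```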

Some capacity metrics that you should monitor include:

Total available space

Used space

Used capacity breakdown (VMs, swap, snapshots, etc.)

Dedupe and compression savings

Scalability

A common concern with HCI is scalability. Independent scalability is touted as one of the primary benefits of traditional three-tier infrastructure: compute, storage, and networking. When considering the scalability of traditional storage systems, the following are considered:

Capacity in TBs

Required IOPs

Throughput of storage systems (link speed)

Throughput of controllers

The adoption of flash drives has changed the scalability pain point; IOPs are no longer a concern for most enterprises. Flash drives have increased the pressure on link speed and controller throughput, forcing architecture changes in traditional arrays. When adopting HCI, controllers and link speed become distributed, removing both bottlenecks and leaving only capacity to be considered (assuming all-flash arrays). HCI addresses capacity scalability in two ways: adding additional drives and increasing the capacity of existing drives. It is considered a best practice when implementing HCI to get servers with as many drive bays as possible. This allows you to increase capacity across the cluster by adding drives. The explosive adoption of HCI and flash has driven manufacturers to provide increasingly larger capacity drives. With VMware VSAN you can replace existing drives with larger drives without interrupting operations; customers can double storage capacity without adding additional compute nodes. HCI scales in a distributed fashion for linear growth. Some best practices to consider around scalability are:

Consider using traditional servers instead of blades to increase the available drive bays

Consider using all flash drives to remove all potential performance concerns

HCI does implement a flash cache which greatly improves performance without having to implement all flash

Policy Based Management

In many traditional arrays, availability and performance are tied to a logical unit number (LUN). These capabilities are set in stone at time of creation; in order to change them, moving the data is required. This type of allocation creates challenges for capacity management and increases the number of day-two operations required to meet business needs. HCI takes a policy-based approach and removes the constraints of LUNs. There is a single datastore provided by HCI, radically simplifying traditional storage management. Policies define availability and performance requirements, and the HCI system enforces the policies. To increase the performance of a specific workload, a new policy is defined and assigned to the workload. The HCI system works to ensure policy compliance without interruption to the workload. Policy-based management provides large operational efficiencies; an IDC study has shown that HCI can lower the OpEx cost of storage by 50% or more. In VSAN there are two key elements in a policy: stripe count and failures to tolerate (FTT). Stripe count denotes the number of drives an object is striped across, improving performance; each object has its data spread across X number of disks on the same compute node. Failures to tolerate denotes the number of compute nodes that can fail before data access is affected. An FTT setting of 1 is essentially a mirror: each object must have one duplicate copy on another node. FTT of 2 keeps three total copies of the data on three different nodes, so that two nodes can fail. FTT has a direct effect on the amount of storage used in the HCI implementation. Policies should be designed to meet the business needs of the application. A few best practices to consider:

Do not use FTT of 0 unless you truly don’t care about loss of the data (stateless services)

Depending on the type of disks backing the HCI solution additional stripes may not provide performance boosts

As general VSAN performance guidance: additional stripes mainly help on hybrid (disk-backed) configurations; on all-flash configurations, added stripes rarely change performance. As general availability guidance: with mirroring, an object has FTT + 1 copies of its data, and the cluster needs at least 2 × FTT + 1 nodes, so FTT of 1 requires three nodes and FTT of 2 requires five.
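Since the mirroring math is simple, a short sketch may help; the 10 TB usable figure is just an example:

```python
def raw_capacity_tb(usable_tb, ftt):
    """Mirroring keeps FTT + 1 copies of every object."""
    return usable_tb * (ftt + 1)

def min_nodes(ftt):
    """Mirrored VSAN needs 2 * FTT + 1 nodes (data copies plus witnesses)."""
    return 2 * ftt + 1

for ftt in (0, 1, 2):
    print(f"FTT={ftt}: {raw_capacity_tb(10, ftt):.0f} TB raw "
          f"for 10 TB usable, minimum {min_nodes(ftt)} nodes")
```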

The policies should align with organizational application requirements. Management by policy provides the greatest flexibility and reduces the management cost.

Roles

Many organizations have struggled to adopt HCI because of the change in skills and process required to be successful. The best-case scenario for HCI bridges the worlds of compute, storage, networking and security together into a single platform. This single platform provides operational synergy and encourages standards. Organizations that have been successful in adopting HCI have learned that it requires a cross-functional skill set. The current reality of siloed teams makes HCI adoption a struggle. Creating cross-functional teams with blended skills allows accelerated adoption of HCI.

Some best practices for successful HCI adoption include:

Cross functional training

Blended teams

Rotating subject matter experts who are expected to own a product but train others

Outcome-oriented teams and compensation instead of activity-oriented

Many of my customers have adopted a plan/build/run methodology. In these organizations it is recommended that teams at each tier be blended; I recommend that members of each silo rotate through plan, build and run to better understand each role.

Benefits of HCI

HCI can provide many benefits required by modern datacenters. I have observed that customers successfully adopting HCI achieve the following outcomes:

Hyper Scalability

Operational agility

Operational efficiency

Simplified operations and support

Improved availability and performance

I truly believe it's time to adopt HCI in your datacenter and realize the operational and cost benefits.