I have updated my blog posts and PSO guide to reflect this change, in case you are using Pure Service Orchestrator with FlashBlade. The original YAML key for the arrays when installing PSO was "NfsEndPoint". At some point it was fixed to expect "NFSEndPoint", matching the proper capitalization of NFS. I never updated my blog and docs until now.
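For reference, a FlashBlade entry in the PSO values.yaml looks roughly like this (the addresses and token below are placeholders, not real values):

```yaml
arrays:
  FlashBlades:
    - MgmtEndPoint: "10.0.0.2"   # placeholder management address
      APIToken: "T-00000000-0000-0000-0000-000000000000"  # placeholder token
      NFSEndPoint: "10.0.0.3"    # data VIP; this key was "NfsEndPoint" in older docs
```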

November 2018 marked the finish of my 5th year at Pure. I really meant to write up a recap, but let's just say November and December were super busy.

Cotton House Hotel in Barcelona

I was in Barcelona for VMworld EMEA at the beginning of November, then came home to visit more customers around the US and tell them about using PSO with Kubernetes and Docker. Then my amazing oldest daughter had a soccer tournament in Orlando, FL. It was a great time with the family and a reminder of why I do what I do.

Post Tournament Team pic. Go AFU U13 Girls.

Disney with the Family

Then back out to AWS re:Invent. This was Pure's first big presence since we launched our suite of cloud data services the week before. It was great to share what we had been working on in the background for the last year. Cloud Block Store, CloudSnap and StorReduce have definitely increased the interest in hybrid cloud; many current and prospective customers are very excited. I came home to take a breather and then headed off to KubeCon Seattle, where our team was overwhelmed with conversations about how Pure can make cloud native apps persistent with easy Storage as a Service via Pure Service Orchestrator. Being able to run the same APIs in the public cloud and on-prem is very appealing to teams rolling out apps in all kinds of use cases. Dev in the cloud and prod on-prem? Yes. Dev on-prem and prod in the cloud? Yes. Dev and prod in the cloud? You guessed it. Yes.

The Pure team at KubeCon Seattle

January was about building out some content for our sales and company kickoff, but also helping customers with their projects on K8s and Docker. That brings me to yet another kickoff: what I call the Orangest Show on Earth. A chance for me to see so many great friends and hear how successful their last year was. It was very satisfying to see sales reps and SEs I worked with throughout the year get recognized for the growth they brought to the company, and very nice to be recognized by my leadership and peers with an award. When you work with such a wide range of regions and teams, it sometimes gets hard to see if you are making a difference, especially when you are remote like I am. At the beginning of 2018, almost no one at Pure knew what I was working on. Slowly but surely the excitement around K8s is growing, so I am looking forward to an even more exciting year here at Pure.

Kingsman jackets for the team. So much orange and such a great team.

Some things I would like to do in 2019

Share more on the blog. The transition from VMware (I still do VMware stuff!) to Kubernetes has provided many learning opportunities for me to share.

Work on Clusters as Cattle with persistent data. Data is important, and the app/cluster can and should be able to move around it. Seamlessly.

Finish some cloud/dev online classes I have started. Finding time with no distractions is key here.

What you will see in this demo is the initial install of Pure Service Orchestrator on an upstream version of Kubernetes. Then, by running the 'helm upgrade' command, I add a FlashArray to scale the environment and take advantage of Smart Provisioning. First we see that the new m50 is not blindly favored over the original m70. The final upgrade adds labels for the failure domain, or availability zone, in Kubernetes. I also add my FlashBlade to enable both block and file if needed for my workload. We use the sample application with node and storage selectors to request that the app use compute and storage in a particular AZ. Kubernetes will only schedule the compute on matching nodes, and PSO will provision storage on matching storage arrays.
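The label step in the demo is driven by the values file. A sketch of what an array entry with a topology label might look like (the address, token and label key here are illustrative, not taken from the demo):

```yaml
arrays:
  FlashArrays:
    - MgmtEndPoint: "10.0.0.10"          # illustrative management address
      APIToken: "placeholder-api-token"  # illustrative token
      Labels:
        az: "zone-1"                     # illustrative topology label
```

After editing the file, a `helm upgrade pso pure/pure-k8s-plugin -f values.yaml` (release and chart names assumed) applies the change without reinstalling, which is what makes the scale-out in the demo so quick.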

I would love to hear what you think of this and any other ways I can show this off to enable cloud native applications. I am always looking for good examples of containerized apps that need persistent storage. Hit me up on the twitters @jon_2vcps or submit a comment below.

I will be at the Pure Storage booth at Kubecon next week December 11-13. Booth G7. Come see us to learn about Pure Service Orchestrator and Cloud Block Store for AWS. Find out How our customers are leveraging K8s to transform their applications and Pure Storage for their persistent storage needs.

It has been a fun nearly two years at Pure working with customers that already love Pure Storage for things like Oracle, SQL and VMware as they move into the world of K8s and containers, and helping customers that never used Pure before move from complicated or underperforming persistent storage solutions to FlashArray or FlashBlade. With Cloud Block Store entering beta, and GA later next year, even more customers will want to see how to automate storage persistence on premises, in the public cloud or in a hybrid model. All of that to say: if you are an architect looking to grow on our team, please find me at KubeCon. I want to meet you and learn why you love cloud, containers, Kubernetes and automating all the things in-between.

Over the last few months I have been compiling information that I have used to help customers when it comes to PSO. Using Helm and PSO is very simple, but with so many different ways to set up K8s right now, it can require a broad knowledge of how plugins work. I will add new samples and workarounds to this GitHub repo as I come across them. For now, enjoy. I have the volume plugin paths for the Kubespray, Kubeadm, OpenShift and Rancher versions of Kubernetes, plus some quota samples and even some PSO FlashArray snapshot and clone examples.
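The reason the paths matter: the FlexVolume driver has to land in the directory the kubelet actually watches, and that directory differs by distribution. A hypothetical values.yaml override looks like this (the path shown is an assumption; verify it against your kubelet's --volume-plugin-dir setting before using it):

```yaml
# Hypothetical override; confirm the directory your distribution's kubelet
# is started with (--volume-plugin-dir) before setting this.
flexPath: /var/lib/kubelet/volume-plugins
```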

This post is a recap of my session at VMworld last week in Las Vegas. First, due to lighting, the demo was not very easily viewable. I am really disappointed this happened. I posted the full demo here on YouTube:

All of the scripts and instructions are available here on my GitHub repo.

One thing I get asked since we released Pure Service Orchestrator is, “How do we control how much a developer/user can deploy?”

I played around with some of the settings from the K8s documentation for quotas and limits. I uploaded these into my gists on GitHub.

git clone git@gist.github.com:d0fba9495975c29896b98531b04badfd.git
#create the namespace as a cluster-admin
kubectl create -f dev-ns.yaml
#create the quota in that namespace
kubectl -n development create -f storage-quota.yaml
#or if you want to create CPU and Memory and other quotas too
kubectl -n development create -f quota.yaml

This allows users in that namespace to be limited to a certain number of Persistent Volume Claims (PVCs) and/or a total amount of requested storage. Both can be useful in scenarios where you don’t want someone to create 10,000 1Gi volumes on an array or one giant 100Ti volume.
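For context, a storage quota like the one described can be expressed with the standard ResourceQuota keys `persistentvolumeclaims` and `requests.storage`; the limits below are just example values, not necessarily what is in the gist:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: storage-quota
spec:
  hard:
    persistentvolumeclaims: "10"   # max number of PVCs in the namespace
    requests.storage: 500Gi        # max total capacity requested across all PVCs
```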

Credit to dilbert.com. When I searched for quotas on the internet this made me laugh. I work with salespeople a lot.

The sessions are filling up so it will be a good idea to register and get there early. I am very excited about talking about Kubernetes on vSphere. It will follow my journey of learning containers and Kubernetes over the last 2 years or so. Hope everyone learns something.

Last year, here I am talking about containers in front of a container. Boom!

Why Pure Service Orchestrator?

At Pure we have been working hard to develop a persistent data layer that meets our customers’ expectations for ease of use and simplicity. The first iteration of this was released as the Docker and Kubernetes plugins.

The plugins provided automated storage provisioning, which solved a portion of the problem. All the while, we were working on the service that resided within those plugins: a service that would allow us to manage many arrays together, both block and file.

The new Pure Service Orchestrator will allow smart provisioning over many arrays. On-demand persistent storage for developers placed on the best array or adhering to your policies based on labels.

The second way, which may fit into your own software deployment strategy, is using Helm. Since Helm provides a very quick and simple way to install, and it may be new to you, the rest of this post covers how to get started with PSO using Helm.
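If you want to follow along, the basic Helm 2-era flow looks like this; the release name and namespace are my choices, and the chart comes from Pure's helm-charts repo:

```shell
# Add Pure's chart repository and refresh the index
helm repo add pure https://purestorage.github.io/helm-charts
helm repo update

# Install PSO with your array details in values.yaml
helm install --name pso --namespace pso pure/pure-k8s-plugin -f values.yaml
```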

If you have another StorageClass set to default and you wish to change the default to Pure, you must first remove the default tag from the other StorageClass and then run the command above. Having two defaults will produce undesired results. To remove the default tag, run this command.
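Assuming the other default class is named something like `standard` (the name is a placeholder; substitute your own), the usual way to clear the default annotation is:

```shell
# Mark the existing default StorageClass as non-default
kubectl patch storageclass standard -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
```

Note that very old clusters used the `storageclass.beta.kubernetes.io/is-default-class` annotation instead, so check which one your StorageClass actually carries with `kubectl get sc -o yaml`.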

Demo

If you are a visual learner, check out these two demos showing the Helm installation in action.

Updating your Array information

If you need to add a new FlashArray or FlashBlade, simply add the information to your YAML file and update via Helm. You may edit the ConfigMap within Kubernetes, and there are good reasons to do it that way, but for simplicity we will stick to using Helm for changes to the array info YAML file. Once your file contains the new array or label, run the following command.
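A sketch of that update, assuming the release was originally installed as `pso` from Pure's chart repo (adjust the release and chart names to match your install):

```shell
# Re-render the chart with the updated array list; PSO picks up
# the new configuration without a reinstall
helm upgrade pso pure/pure-k8s-plugin -f values.yaml
```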