Heart of the Issue

Thursday, November 15, 2018

It has been almost 2 years since I penned Want to be Great? Always Touch the Fantastic Line: a story about how I was inspired by my youngest son and his constant pursuit of excellence. This has been one of my most read blogs, having been read thousands of times.

Since that time, I have traveled extensively — circumnavigating the globe twice — and I have been so moved by people who have come up to me and asked how “Mitty” was doing and if he was still “touching the line”. I have also come across numerous people who have adopted my “TTFL” mantra into their lives. One in particular is a wrestling and football coach who said all of his students know TTFL and it is part of their ethos. All of this is very humbling; thank you all for sharing with me.

If you haven’t read the blog, the gist of TTFL is that true excellence is achieved by those who never cut corners and do the right thing, even when no one is watching and others cut corners with impunity. If you want to read more on my TTFL philosophy, then please read the link above.

Now, what’s happened to Mitty since then?!? Well, Mitty is now 12 years old and has still been pursuing his baseball and basketball passions with the same constant pursuit of excellence. He has also since taken up a love of physical fitness with me and is a regular (and beloved) attendee down at our box, Crossfit Countdown. And finally… he had a growth spurt this summer, and now my height-challenged chunky little man is an average-height young man who has slimmed down accordingly. More importantly, all the hard work he put in before the growth spurt is paying dividends.

He recently tried out for his school team and, not only did Mitty make the starting squad, he was a pacesetter during the infamous speed drills from the original blog post. He led from the front, touching the line every time.

While I am proud of this young man’s athletic accomplishments, he has continued to be an excellent person and has won many awards for his kindness and compassion for others. TTFL is something that touches every aspect of his life.

Will he play professional basketball? Very likely not. Will he make his mark on the world and leave it a better place than he found it? I guarantee it. May we all be able to say the same of ourselves.

Sunday, July 8, 2018

Last year Delphix blogged about how the Dynamic Data Platform can be leveraged with Amazon's RDS (link here). Subsequently, they released a knowledge article outlining how the solution can be accomplished (link here).

I thought I would take the work I have been doing in developing a Terraform plugin and create a set of blueprints that could easily deploy a working example of the scenario. I also took that a step further and created some Docker containers that package up all of the requirements to make this as simple as possible.

This demonstration requires the Delphix Dynamic Data Platform and Oracle 11g. You will need to be licensed to use both.

Next, copy the .example.docker file to .environment.env and edit the values to reflect your environment.
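For example (the variable names in the stand-in file below are illustrative; your .example.docker will have its own keys):

```shell
# Create a stand-in example file so this snippet is self-contained;
# in the real repo, .example.docker ships with the code.
printf 'AWS_ACCESS_KEY_ID=\nAWS_SECRET_ACCESS_KEY=\nAWS_DEFAULT_REGION=\n' > .example.docker

# Copy it to the name the docker commands below expect, then edit in your values.
cp .example.docker .environment.env
```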

Now we run the docker container against the delphix-centos7-rds.json template to create our AMI. Details of the command:

docker run – invoke Docker to run a specified container

--env-file .environment.env – pass in a file that will be instantiated as environment variables inside the container

-v $(pwd):/build – mount the current working directory to /build inside the container

-i – run the container in interactive mode

-t – allocate a pseudo-TTY

cloudsurgeon/packer-ansible:latest – use the latest version of this image
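Pieced together, the full invocation looks like this. (A sketch: the command is built as a bash array and echoed so the assembled string is easy to inspect; execute it with `"${cmd[@]}"` once Docker and your .environment.env file are in place. I am assuming the delphix-centos7-rds.json template reaches the container via the /build mount.)

```shell
# Assemble the docker run command from the flags described above.
cmd=(docker run
  --env-file .environment.env          # environment variables for the container
  -v "$(pwd)":/build                   # mount the current directory at /build
  -i                                   # interactive mode
  -t                                   # allocate a pseudo-TTY
  cloudsurgeon/packer-ansible:latest)  # latest version of the image
echo "${cmd[@]}"                       # print the assembled command
# To actually run it: "${cmd[@]}"
```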

When the container starts, it will download the necessary Ansible roles required to build the image.

After downloading the Ansible roles, the container executes Packer to start provisioning the infrastructure in AWS to prepare and create the machine image. This process can take around 20 minutes to complete.

Build the Demo environment with Terraform

Now that we have a compatible image, we can build the demo environment.

--env-file .environment.env – pass in a file that will be instantiated as environment variables inside the container

-v $(pwd):/app – mount the current working directory to /app inside the container

-w /app – use /app as the working directory inside the container

-i – run the container in interactive mode

-t – allocate a pseudo-TTY

cloudsurgeon/rds_demo:latest – use the latest version of this image

apply --auto-approve – pass the apply flag along to Terraform and automatically approve the changes (avoids typing yes a few times)
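Assembled the same way, the build command for the demo environment looks like this (a sketch with the same caveats as the image-build command; run the array with `"${cmd[@]}"`):

```shell
# Assemble the docker run command for the Terraform build.
cmd=(docker run
  --env-file .environment.env     # environment variables for the container
  -v "$(pwd)":/app                # mount the current directory at /app
  -w /app                         # working directory inside the container
  -i -t                           # interactive, with a pseudo-TTY
  cloudsurgeon/rds_demo:latest    # latest version of the image
  apply --auto-approve)           # arguments passed through to terraform
echo "${cmd[@]}"                  # print the assembled command
```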

This repo is actually a set of three Terraform blueprints that build sequentially on top of each other, due to dependencies.

The sequence of automation is as follows:

Phase 1 – Build the networking, security rules, servers, and RDS instance. This phase will take around 15 minutes to complete, due to the time it takes AWS to create a new RDS instance.

Phase 2 – Configure DMS & Delphix, then start the DMS replication task.

Phase 3 – Create the Virtual Database copy of the RDS data source.

Using the Demo

Once phase_3 is complete, the screen will present two links. One is to the Delphix Dynamic Data Platform; the other is to the application portal you just provisioned.

Click the “Launch RDS Source Instance” button. The RDS Source Instance will open in a new browser tab.

Add someone, like yourself, as a new employee to the application.

Once your new record is added, go back to the application portal and launch the RDS Replica Instance.

You are now viewing a read-only replica of our application data. The replica is a data pod running on the Delphix Dynamic Data Platform. The data is synced automatically from our source instance in RDS via Amazon DMS.

Go back to the application portal and launch the Dev Instance.

The backend for the Dev Instance is also a data pod running on the Delphix Dynamic Data Platform. It is a copy of the RDS replica data pod.

Notice we don’t see our new record. That is because we provisioned this copy before we entered our new data. If we want to bring in the new data, we simply need to refresh our Dev data pod.

While we could easily do that using the Dynamic Data Platform web interface, let’s do it via Terraform instead.

In the terminal, we will run our same docker command again, but with a slight difference at the end.

This time, instead of apply --auto-approve, we will pass phase_3 destroy --auto-approve.

Details of the new parts of the command:

phase_3 – apply these actions only to phase_3

destroy – destroy the assets

--auto-approve – assume ‘yes’

Remember, phase_3 was just the creation of our virtual copy of the replica. By destroying phase_3, Terraform is instructing the DDP to destroy the virtual copy.
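Put together (same container flags as the build; only the trailing arguments differ), the destroy invocation looks like this sketch:

```shell
# Destroy only phase_3: the virtual copy of the replica.
cmd=(docker run --env-file .environment.env -v "$(pwd)":/app -w /app -i -t
  cloudsurgeon/rds_demo:latest phase_3 destroy --auto-approve)
echo "${cmd[@]}"   # print the assembled command; run it with "${cmd[@]}"
```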

If you log in to the DDP (username delphix_admin; the password is in your .environment.env file), you will see the dataset being deleted in the actions pane.

If you close and relaunch the Dev Instance from the application portal again, you will see that the backend database is no longer present.

Now we run our Docker container again with the apply command, and it rebuilds phase_3.

If you close and relaunch the Dev Instance from the application portal again, you will see that the backend database is present again and this time includes the latest data from our environment.

When you are finished playing with your demo, you can destroy all of the assets you created with the following docker command:
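Following the pattern of the earlier commands, a full teardown would presumably be the same container run with destroy and no phase target (this is an assumption on my part; check the repo’s README for the exact invocation):

```shell
# Destroy every phase (hypothetical; the exact arguments may differ).
cmd=(docker run --env-file .environment.env -v "$(pwd)":/app -w /app -i -t
  cloudsurgeon/rds_demo:latest destroy --auto-approve)
echo "${cmd[@]}"   # print the assembled command; run it with "${cmd[@]}"
```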

Thursday, June 21, 2018

Test environment data is all over the place, slowing down your projects, and injecting quality issues. It doesn’t have to be this way.

According to the TDM Strategy survey done by Infosys in 2015, up to 60% of application development and testing time is devoted to data-related tasks. That statistic is consistent with my personal experience with the app dev lifecycle, as well as my experience with the world’s largest financial institutions.

A huge contributor to the testing bottleneck is data friction. Incorporating people, process, and technology into DataOps practices is the only way to reduce data friction across organizations and to enable the rapid, automated, and secure management of data at scale.

For example, by leveraging the Delphix Dynamic Data Platform as a Test Data Catalog, I have seen several of my customers nearly double their test frequency while reducing data-related defects. The Test Data Catalog is a way of leveraging Delphix to transform manual, event-driven testing organizations into automated testing factories, where everyone in testing and dev, including the test data engineers, can leverage self-service to get the data they need and to securely share the data they produce.

Below you will find two videos I recorded to help illustrate and explain this concept. The first is an introduction that goes a little deeper into the problem space. In the second video, I demonstrate how to use Delphix as a Test Data Catalog.

Reach out to me on Twitter or LinkedIn with your questions or if you have suggestions for future videos.

Wednesday, June 20, 2018

Continuous Integration and Continuous Deployment are two popular practices that have yielded huge benefits for many companies across the globe. Yet, it’s all a lie.

Although the benefits are real, the idea behind CI&CD is largely aspirational for most companies, and would more properly be titled, “The Quest for CI/CD: A Not-So-Merry Tale.”

Because, let’s face it, there is still a lot of waiting in most CI/CD. To avoid false advertising claims, perhaps we should just start adding quiet disclaimers with asterisks, like so: CI/CD**.

The waiting still comes from multiple parts of the process, but most frequently, teams are still waiting on data. Waiting for data provisioning. Waiting for data obfuscation. Waiting for access requests. Waiting for data backup. Waiting for data restore. Waiting for new data. Waiting for data subsets. Waiting for data availability windows. Waiting for Bob to get back from lunch. And even when devs just generate their own data on the fly, QA and Testing get stuck with the bill. (I am talking to three F100 companies right now where this last issue is the source of some extreme pain.)

I wish I could say that any one technology could solve all data issues (I have seven kids and that fact alone would pay for their entire college fund). But, I can say that Delphix solves some very real and very big data issues for some of the world’s biggest and best known brands, through the power of DataOps. It allows organizations to leverage the best of people, process, and technology to eliminate data friction across all spectrums.

Here I share a video of how I tie Jenkins together with Delphix to provision, back up, restore, and share data in an automated, fast, and secure manner. This video explains how I demonstrated some of the functionality in my Delphix SDLC Toolchain demo.

Monday, November 13, 2017

I know I have been talking about this for a while, but with the DevOps Enterprise Summit kicking off, I thought it was time to finally do it! Below you will find a video of Delphix integrated into a typical toolchain consisting of tools like Datical, Maven, git, Jenkins, and Selenium.

In this video, I walk through a form of "A Day in the Life" of the SDLC, where we want to introduce a new feature to our employee application: we want to record employees' Twitter handles. To make this simple change, we will need to introduce database object changes (a column to store the handle) and application-level changes to display and record it. This is a simple application with a Java + Apache front end and an Oracle 12c backend.

Below is a general swim diagram of the flow and the video, as well. More details on the "how" of the components next week! (I will replace this video with a better quality video, but my computer crashed last night with all the changes, and I had to reproduce on a loaner system. Crazy story)

Thursday, September 7, 2017

Hey everyone! I’m back in the “demonstration saddle” again to showcase how easy it is to replicate data from one cloud to another. Data friction abounds, and there are few places that feel it as acutely as cloud migration projects. Getting data into the cloud can be a challenge, and adding security concerns can make it seem almost impossible. DataOps practices can ensure that data friction is not the constraint keeping you from leveraging the cloud. I recorded this video to demonstrate how the Delphix Dynamic Data Platform (DDP) works across the five facets of DataOps (governance, operations, delivery, transformation, and version control) to make migrations "friction free."

In this video, you will see me replicate data from Amazon Web Services (AWS) into Microsoft Azure, and also from Azure to AWS. Since the actual steps to replicate are very few and only take a matter of seconds, I spend time in the video explaining some of the different aspects of the DDP. I also highlight the DDP’s Selective Data Distribution, which only replicates data that has been sanitized as part of an approved and automated masking process. At the conclusion of the video, I demonstrate creating a copy of the masked virtual database (VDB) and show how quickly you can do a destructive test and recover.

Here is a high-level diagram to understand the layout of what I am working with:

Sunday, May 7, 2017

The explosion of data in recent years has had some knock-on effects. For example:

Data theft is far more prevalent and profitable now than ever before. Ever heard of Crime-as-a-Service?

There is now more pressure than ever before to modernize our applications to take advantage of the latest advances in DevOps and Cloud capabilities.

But the problem is that data growth is actually encumbering most companies' ability to modernize applications and protect customer information. The effect is exacerbated in environments leveraging containerization where application stacks are spun up in seconds and discarded in minutes. Through no fault of their own, the DBA/Storage Admin can't even initiate a data restore that quickly. This has painted data as the bad guy.


The consequence of this is that Dev/Test shops have moved towards eliminating the 'bad guy' by using subsetted or purely synthetic data throughout their SDLC. After all, it kills two birds with one stone: the data is small and easy to get when they need it, and nothing of value exists to be stolen.

But the implication of this well-meaning act is that application quality decreases and their application projects are just as slow, if not slower, than before. Their use of non-realistic datasets results in an increase in data-related defects. Then they try to combat the self-inflicted quality issues by creating a whole new data program lifecycle around coverage mapping, corner cases, data quality, data release, etc. The net result is that they spend at least as much human and calendar time on data as they did before...yet they still have self-inflicted data-related quality issues.

We need to stop the madness. Data is not the enemy, rather it is the lifeblood of our companies. The true enemy is the same enemy we have been tackling with DevOps: Tradition. The traditional way that we have been dealing with the culture, process, and technology around data is the enemy. At Delphix we help our customers quickly flip this on its head and eliminate the true enemy of their business. By enabling our customers to provision full masked copies of data in minutes via self-service automation, they now have data that moves at the speed of business. Their applications release over 10X faster, their data-related defects plummet, and their surface area of data-risk decreases by 80%. And one of the beautiful things is that, in most cases, Delphix is delivering value back to the business inside of two weeks.


When you only address the symptoms of a problem, the problem remains. Data is not your enemy; serving data like you did for the last two decades is the enemy. Your data is more than ready to be your business-enabling partner; you just need to unshackle it with Delphix.