DevOps Stack Exchange is a question and answer site for software engineers working on automated testing, continuous delivery, service integration and monitoring, and building SDLC infrastructure.

Recently my team started adopting a Git workflow for agile-based delivery. We've been using JIRA, TeamCity and various other CI tools to complete the pipeline. The workflow works great for monolithic or SOA applications in Java.

The problem comes with a microservices delivery pipeline targeted at enterprise environments such as OpenShift.

With the Git workflow, the expectation is that feature branches, once developed and tested, will be merged into the develop branch. However, when numerous features are being developed in parallel, a developer needs a platform to test the service feature they just built before submitting a pull request.

With OpenShift/K8s we expose a service; let's assume each service built from a feature branch looks like the example below.

Internally, OpenShift should be able to allocate dedicated pods to test the services, and they should be disposed of once merged into the develop branch. Another problem with this approach is running integration tests, or when services have dependencies on consumer APIs. Modifying the service name based on the feature branch is not a good idea.

Does anyone know a better way of handling this delivery model on a microservices-based platform? Thanks.

2 Answers

I'm not altogether sure if there actually is a problem you're trying to solve, or whether you are looking for confirmation that you are on the right track. Some thoughts:

Internally openshift should be able to allocate dedicated pods

Sure. Add a post-build step in your CI pipeline which does the following:

Fashion a .yaml or .json description of that pod (or rather, DeploymentConfig). How/where you do that depends on your architecture - if each of your microservices exposes exactly the same port (e.g., all listening on 80 or 443), then you only need one central template, into which you basically only inject the name (containing the microservice name and the feature branch name) and the git source URL. The file can of course contain multiple OpenShift objects (i.e., the DeploymentConfig and maybe a Route).

oc apply it, and OpenShift will build an image and start a pod for it.
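The two steps above might look roughly like the following OpenShift Template. Everything here is illustrative: the parameter names, the Docker build strategy, and the assumption that every service listens on 8080.

```yaml
# Hypothetical central template: one BuildConfig + DeploymentConfig + Service
# per (microservice, feature branch) pair. NAME would typically be
# "<service>-<branch>", e.g. "orders-feature-login".
apiVersion: template.openshift.io/v1
kind: Template
metadata:
  name: feature-branch-service
parameters:
  - name: NAME      # e.g. "orders-feature-login" (service + branch)
    required: true
  - name: GIT_URL   # git source URL of the microservice
    required: true
  - name: GIT_REF   # the feature branch to build
    required: true
objects:
  - apiVersion: build.openshift.io/v1
    kind: BuildConfig
    metadata:
      name: ${NAME}
      labels:
        branch: "${GIT_REF}"
    spec:
      source:
        git:
          uri: "${GIT_URL}"
          ref: "${GIT_REF}"
      strategy:
        type: Docker
      output:
        to:
          kind: ImageStreamTag
          name: "${NAME}:latest"
  - apiVersion: apps.openshift.io/v1
    kind: DeploymentConfig
    metadata:
      name: ${NAME}
      labels:
        branch: "${GIT_REF}"
    spec:
      replicas: 1
      selector:
        app: ${NAME}
      template:
        metadata:
          labels:
            app: ${NAME}
            branch: "${GIT_REF}"
        spec:
          containers:
            - name: ${NAME}
              image: "${NAME}:latest"
              ports:
                - containerPort: 8080   # assumes all services listen on 8080
  - apiVersion: v1
    kind: Service
    metadata:
      name: ${NAME}
      labels:
        branch: "${GIT_REF}"
    spec:
      selector:
        app: ${NAME}
      ports:
        - port: 80
          targetPort: 8080
```

A CI post-build step could then run something like `oc process -f template.yaml -p NAME=orders-feature-login -p GIT_URL=https://example.com/orders.git -p GIT_REF=feature-login | oc apply -f -`. Note that label values may not contain `/`, so a branch name like `feature/login` needs sanitizing before it is used as the `branch` label.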

and they should be disposed once merged with develop branch

Add a post-merge step which does the same in reverse, i.e., oc delete ... for every component of the original .yaml. This is probably easiest if you give a branch-based label to each of the components, so you don't need to keep track of what you applied in the first place.
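Sketched as a post-merge CI step, this could look like the snippet below. The `branch` label key and the branch name are assumptions carried over from however the objects were labeled at creation time; the final command is only printed here, dry-run style.

```shell
#!/bin/sh
# Hypothetical post-merge cleanup: delete everything created for the
# feature branch, relying on a "branch=<name>" label having been
# attached to every object when it was applied.
BRANCH="feature/login"                        # illustrative branch name
SAFE_BRANCH=$(echo "$BRANCH" | tr '/' '-')    # "/" is not valid in label values

# A real pipeline would execute this instead of echoing it.
echo "oc delete all,configmap,secret -l branch=${SAFE_BRANCH}"
```

Label-based deletion is what makes this robust: the pipeline never has to remember which individual objects a given branch created.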

another problem with this approach is running integrated test or if services have dependency with consumer apis.

If a service involved in an integration test has state, then you indeed need to fire up a new pod/volume for each test run. If your integration test depends on external state, then you also need to manage that somehow. Both of these aspects hold anyway, no matter how you architect your solution.

In these cases, you would probably do well to not start deployments right out of your individual CI pipelines, but do a "big" deployment when you start your integration test (i.e., deploy all individual microservices as described above, but also inject an ID for your current integration-test run into all their names, so that everything stays unique).
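As a sketch of that naming scheme - assuming the run ID comes from the CI system's build number, and with an illustrative service list:

```shell
#!/bin/sh
# Hypothetical "big" deployment for one integration-test run: every
# microservice name carries the run ID, so parallel runs never collide.
RUN_ID="it-1042"                          # e.g. the CI build number
for svc in orders payments inventory; do  # illustrative service names
  name="${svc}-${RUN_ID}"
  # A real pipeline would "oc process ... -p NAME=$name | oc apply -f -" here.
  echo "deploying $name"
done
```

Tearing the whole run down afterwards works the same way as the branch case: give every object a `run=it-1042`-style label and delete by label selector.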

Modifying the service name based on feature branch is not a good idea.

Why not? If you wish to have a pod running for every feature branch, then you need to do it that way, period.

It would certainly force you to do some kind of dependency injection (i.e., if you are testing one service A which needs another service B, then you have to tell A which B feature it should use). But you need to do that anyway, in some form or fashion. As far as I can understand, you are trying to make the services (their branches) independent from each other, and I also assume from your question that you have one service per git repository (i.e., one service per branch), so you need to have them running per-branch.
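That injection can be as simple as an environment variable in service A's pod spec pointing at the branch-specific Service name of B. All names here are purely illustrative:

```yaml
# Fragment of service A's DeploymentConfig: tell A which deployment of B
# to talk to by injecting B's branch-specific Service name.
containers:
  - name: service-a-feature-login
    image: "service-a-feature-login:latest"
    env:
      - name: SERVICE_B_URL                        # hypothetical variable read by A
        value: "http://service-b-feature-login:80" # B built from the same feature;
                                                   # point at service-b-develop
                                                   # to test against stable B
```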

That's correct, I have one service per git repository, and about 300-750 microservices per application. There are about 9000 apps, so I'm looking for a unified model.
– Sudheej, Dec 27 '18 at 20:23

Wow, @Sudheej, that is quite an amount of microservices and apps. I'm afraid I have no experience with that kind of scale (you might want to add those numbers to your original question). Did you look into istio.io? They do "service meshes" and seem to be the next big thing as far as I can tell (just hearsay from colleagues...), but I'm not altogether sure whether they support your exact case.
– AnoE, Dec 28 '18 at 0:52

We have been talking with AWS and MS. The company I work for is one of the top banks in the US and has a lot of PI data, so they are looking for an on-premise setup with OSE. Microsoft has been pitching Service Fabric, and they have an integrated Azure DevOps.
– Sudheej, Dec 28 '18 at 17:20

Nowhere near that scale, but personally, on GitLab, my pipeline runs for every branch. On every push it builds, tests and deploys. Each branch gets its own namespace; when that branch is merged, automation deletes the entire namespace.
– Levi, Jan 21 at 15:24
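A setup like Levi's can be sketched in a `.gitlab-ci.yml` along these lines. The job names, the manifest directory and the namespace scheme are assumptions; `CI_COMMIT_REF_SLUG` is GitLab's sanitized branch name, and `on_stop` ties the teardown job to the branch's environment:

```yaml
# Hypothetical per-branch pipeline: each branch deploys into its own
# namespace; the stop job removes the whole namespace when the
# environment is stopped (e.g. after the branch is merged and deleted).
deploy:
  stage: deploy
  script:
    - kubectl create namespace "review-$CI_COMMIT_REF_SLUG" --dry-run=client -o yaml | kubectl apply -f -
    - kubectl apply -n "review-$CI_COMMIT_REF_SLUG" -f k8s/   # manifest dir is illustrative
  environment:
    name: review/$CI_COMMIT_REF_SLUG
    on_stop: teardown

teardown:
  stage: deploy
  script:
    - kubectl delete namespace "review-$CI_COMMIT_REF_SLUG"
  environment:
    name: review/$CI_COMMIT_REF_SLUG
    action: stop
  when: manual
```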

We are in the process of open-sourcing our OpenShift continuous deployment setup on GitHub, which we call OCD. It is driven by git tags and Helmfile YAML within git. As it's driven by git tags, it is agnostic to which branches are being used. It is also agnostic to which CI/CD tools you are using.

The idea is that developers can "self-service" spinning up new apps on OpenShift by changing YAML files in git. So a developer setting up a feature branch can "self-service" spinning up that code by simply tagging it and changing some YAML to have OpenShift run it.

It could be used to meet your use case as follows:

OCD builds from git tag (or git release) events, makes a container image of the source code at the git tag, and tags the container image with the same tag. It doesn't need to know which branches those tags belong to.

A team that needs a new feature branch deployment would add a new folder in the git repo that holds all the configuration to build and run their tagged code.

OCD is implemented as generic Helm charts. One sets up BuildConfigs that build from git tags and tag the image with the same tag. Another sets up DeploymentConfigs to run tagged images. Another manages Secrets encrypted in git. Another manages ConfigMaps. These are installed using Helmfile, which lets you declare a set of Helm releases in a YAML file. That in turn is run by adnanh/webhook, a small Go app that can catch git webhook events and run shell scripts.
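For readers unfamiliar with Helmfile, the declaration it describes looks roughly like this. The chart paths, release names and values are placeholders, not OCD's actual chart names:

```yaml
# helmfile.yaml: declare one Helm release per concern; all of them are
# applied together with "helmfile apply" (e.g. from a git webhook script).
releases:
  - name: orders-build            # BuildConfigs building from a git tag
    chart: ./charts/buildconfig   # placeholder chart path
    values:
      - gitUrl: https://example.com/org/orders.git
        tag: v1.2.3
  - name: orders-run              # DeploymentConfigs running the tagged image
    chart: ./charts/deploymentconfig
    values:
      - image: "orders:v1.2.3"
```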

OCD was created with microservices in mind, the idea being that teams can spin up new services very easily. Using it to spin up temporary apps for testing is easy. It doesn't currently delete apps when they are removed from git; that wouldn't be hard to implement, though, and when testing I delete apps with a single command line.

Disclaimer: it's currently pre-release but feature-complete, and we are in the process of migrating our own apps onto it.