New pypyr Release – Version 2.2.0 shipped today
Fri, 18 Jan 2019

We’re excited to announce the latest release of our devops pipeline runner, pypyr! You can check out the latest release notes here: https://github.com/pypyr/pypyr-cli/releases/tag/v2.2.0, and find it on PyPI via this link: https://pypi.org/project/pypyr/.

What is pypyr?

Warning: this is one for the techies! pypyr (pronounced piper as in the Pied Piper) is a free open-source YAML-based pipeline runner. OK that’s great…what does this actually mean for you?

Free: Yep. It’s free. Go use it. For nothing. The payback we get is that the more people use the tool the more input we get into the project and it’s a win all round.

Open source: The code is on GitHub and the packages are on PyPI. Check out the links. It’s all there. Go use it. Clone it. Fork it.

YAML-based: this means that you can define your tasks in human-readable form as a text file and pypyr will read this. pypyr has built-in tasks, so it’s less finicky to use than other ways of scripting.

Pipeline runner: This simply means that you can run a sequence of steps or tasks one after another. The output of one step can feed into the input of the next step. It’s like a to-do list for a computer.
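To make that concrete, here is a minimal sketch in plain Python (this is an illustration of the concept, not pypyr itself; the step names are invented) of what a pipeline runner does: steps share a context, so the output of one step is available as input to the next.

```python
# Toy pipeline runner: each step reads from and writes to a shared context,
# so the output of one step feeds the input of the next.

def fetch_version(context):
    context["version"] = "2.2.0"          # e.g. read from a file or an API

def build_tag(context):
    context["tag"] = f"release-{context['version']}"  # uses the previous step's output

def run_pipeline(steps, context):
    for step in steps:                    # steps run strictly in order
        step(context)
    return context

result = run_pipeline([fetch_version, build_tag], {})
print(result["tag"])  # release-2.2.0
```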

Why Python? Python is fast becoming the default language of DevOps. Using Python instead of shell scripts means that you get consistent behaviour and results cross-platform, without worrying about which flavour of bash you have.

To put it another way, this is a tool that’s really easy for your devops team to use. It saves a lot of time and effort, and at 345 we use it as the cornerstone of our devops automation.

What’s New in 2.2.0

We’ve added dynamic pipeline loading to allow you to extend the core to load pipelines from anywhere (git, s3, consul, sky’s the limit), and improved the error handling in this release so you have a better time understanding what’s happened if something’s gone wrong. This is a great time-saver when you are creating and testing new scripts.
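As a generic illustration of the idea (this is not pypyr’s actual loader API — see the pypyr docs for that), a pluggable loader is just a function the runner calls to fetch pipeline text by name, so implementations can fetch from a file, git, s3, consul, or anywhere else; wrapping the failure also gives the clearer error reporting described above.

```python
# Pluggable pipeline loading, sketched generically: the runner looks up a
# loader by name and asks it for the pipeline text. Registering extra
# loaders (git, s3, consul...) extends where pipelines can come from.

def file_loader(name):
    with open(f"{name}.yaml") as f:
        return f.read()

LOADERS = {"file": file_loader}  # new loaders register here

def load_pipeline(name, loader="file"):
    try:
        return LOADERS[loader](name)
    except FileNotFoundError as err:
        # clearer error reporting: say which pipeline and loader failed
        raise RuntimeError(f"pipeline {name!r} not found via {loader!r} loader") from err
```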

pypyr on GitHub

From the very start we have run the pypyr project as an opensource initiative on GitHub. We would love you to follow, star, contribute and get involved.

345 Release DEVOPS:101 White Paper
Tue, 25 Apr 2017

Followers of the 345 blog will know that we’ve been very active in the DevOps area: DevOps is a hot topic at the moment, and as such there’s a ton of information out there that you need to sift through in order to get a meaningful view of the subject. Given 345’s expertise and thought leadership in this area we decided that we would make a contribution that makes it easier for people to understand the subject.

We have put together a DEVOPS:101 white paper that takes you on a high-level tour of the subject without delving too deeply into any specific topic. The paper is technology-agnostic too, so no matter which stack you’re working on you should find the information relevant.

The paper covers the following topics:

Why DevOps

What is DevOps

Developing your DevOps Vision

Continuous Integration

Continuous Delivery

DevOps for Operations

DevOps Security

This paper is provided free of charge; all you have to do is enter your email to be added to our DevOps mailing list and we’ll send you a link so you can download the white paper.

DevOps: The Concurrent Engineering of Software
Tue, 25 Apr 2017

I used to be an Engineer. Yes, really. A proper one. I used to design physical things rather than what I do now – specify how ones and zeroes should be arranged. I used to help design equipment for aeroplanes. In fact, it was automating repetitive calculations through scripting and via programming languages that eventually made me decide that software was more fun. But that’s another story.

I still learned a lot from my time as an Engineer, and a real formative experience was one where I was seconded onto a manufacturing project. Concurrent Engineering was really taking off in aerospace at the time and the company I worked for was trying to get on board. They were also getting on board with Lean Manufacturing – yet another story I’ll be picking up soon.

Concurrent Engineering refers to the development of manufacturing systems in parallel with the development of a product. Whether this is an aircraft, an automobile or an iPhone the idea is still the same: you need to pay attention to the manufacturing process right from the start if your product is going to be (a) manufactured cheaply enough and (b) manufactured to the correct level of quality. You also need to develop your supply chain at the same time, and you need to develop it in such a way that your manufacturing volumes can flex with demand.

To me, the relationship between DevOps and Software Engineering has significant parallels to the relationship between Concurrent Engineering and Product Engineering.

A product is no use if you can’t deliver it when needed, or if its quality is so poor that it’s unusable.

DevOps is like building the shop floor in your factory. Code is the raw material that makes its way through Goods In. Compiling, unit testing, deploying, load testing, functional testing – these are all the stages of the manufacturing process. In software the stages of the process are heavily biased towards QA, as the “build” tends to be rather straightforward, but the principle still stands.

How efficient and automated do you want your manufacturing process to be? In some ways that depends on how many times your product will roll along the production line. Back in our naïve days, we used to think that software rolled off the production line once per release. This made software development something of a cottage industry. You had as much automation as old ladies knitting Aran sweaters.

Now we are striving for Continuous Delivery (CD) we think of software going through the factory on every commit. On a sizeable development team this can easily be tens of times per day if not more. And we’re talking about a complex product that needs hundreds or thousands of quality checks before it’s completed. If you were building a hundred cars a day you would have a production line that dictated your process, ensuring that only quality cars were made. With DevOps we are in the same place: each commit results in high quality shippable product.

If you look at it like this, it doesn’t matter whether you’re shipping packaged software or hosted software that your users consume as a service. That’s just the detail of what “rolling off the assembly line” actually means. The only difference is whether you publish an installable update or deploy an update directly; in either case you have already performed all the relevant deployment and testing cycles to get to the point where you know you can ship.

When’s the right time to start designing the shop floor for your factory? You start at the same moment you start designing the software. How else will you deliver product if you have no production line?

For the rapid delivery of quality product you need a production line. DevOps is about building that production line. Developing is running the production line. Architecting is designing the product with manufacture in mind. Get these in harmony and you’ll achieve amazing results.

The anatomy of a release pipeline
Mon, 30 Jan 2017

Following on from my previous articles on DevOps I’ve decided to write in more detail about the release pipeline. DevOps is such a buzzword at the moment, but under the bonnet what is actually involved?

In this article I’ll be dissecting release pipelines, which transform Continuous Deployment from an aspiration to a reality. I’ll talk about the underlying building blocks that any good release pipeline will be comprised of and how they fit together.

CI versus CD

I think of Continuous Integration (CI) as being distinct from, and complementary to, Continuous Deployment (CD). I think of CI as a check on code quality, not system quality. CI includes the following operations:

Static code analysis

Semantic code analysis

Compilation

Code packaging

Unit testing

CI should run on every branch you push, to measure and ensure quality throughout your entire codebase. CI should test every pull request before you merge into a protected branch. Successfully passing CI is a prerequisite for your code ever seeing the light of day.

Continuous Deployment is different. CD is the automated (or semi-automated) process by which committed code is released to production. CD is usually triggered by commits to a single branch (e.g. master) or a small subset of branches. You can safely commit new features into any old branch you like, safe in the knowledge that you’re not going to put it live unless and until you then merge those commits into your CD branches. After that, you assume it’s going live.
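The branch rule described above can be sketched as a pair of simple predicates (the branch names are assumptions; your scheme may differ):

```python
# CI runs for every pushed branch; CD fires only for designated branches.
CD_BRANCHES = {"master", "release"}   # assumed names; adjust to your scheme

def should_run_ci(branch):
    return True                       # every push gets quality-checked

def should_run_cd(branch):
    return branch in CD_BRANCHES      # merging here means "this is going live"
```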

Release Pipeline

The way we achieve this is to put into place a release pipeline. A release pipeline is the conceptual process by which we take committed code into production. As such, a release pipeline can be as ephemeral or as real as we want to make it.

The fundamental release pipeline, code change to production software.

There are some underlying capabilities we need in order to make sure we actually do have a release pipeline though:

A means of triggering the pipeline to run.

A means of executing tasks such as environment provisioning, application deployment, testing, collation of test results.

A means of controlling the flow of execution, especially to stop further processing in the event of failure.

An artefact store to maintain state throughout the process.

A metadata store, allowing consistent metadata (such as build number) to be passed to each stage of the pipeline.

A configuration store, allowing environment-specific values to be retrieved for use in the pipeline.

Logging of the work performed and of any errors.

Notifications of success and failure.

A release pipeline can also be thought of as a workflow. A workflow whose purpose is releasing software. As such, modelling your release pipeline should simply be a case of modelling how you want to release your software.

Triggers

Releases happen in response to one or more events. Often the triggering event is a code commit, but sometimes the release can be triggered manually or on a schedule.

You may also want your pipeline to run automatically up to a certain point (e.g. to the completion of pre-production testing), and then require manual approval to actually release into production. You may therefore want manual triggers to act as prerequisites for completion even though the start of the pipeline process is run automatically.

Pipeline Stages

Pipeline stages are the control points in your release pipeline, and within a stage you have tasks that are triggered. The key things to note about pipeline stages are:

A stage cannot start unless all of its prerequisites are fulfilled.

A stage cannot complete unless all of the tasks within it are complete.

Failures of any task (usually) result in the whole stage failing, and in turn this usually fails the whole release.
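These three rules are easy to express directly. The sketch below is illustrative, not any particular tool’s model:

```python
# A stage runs only when its prerequisites have passed, completes only when
# every task in it succeeds, and any task failure fails the whole stage.

class Stage:
    def __init__(self, name, tasks, prerequisites=()):
        self.name = name
        self.tasks = tasks                    # callables returning True/False
        self.prerequisites = prerequisites
        self.passed = False

    def run(self):
        if not all(p.passed for p in self.prerequisites):
            raise RuntimeError(f"{self.name}: prerequisites not fulfilled")
        if all(task() for task in self.tasks):
            self.passed = True                # stage complete
        return self.passed                    # one failed task fails the stage

build = Stage("build", [lambda: True])
deploy = Stage("deploy", [lambda: True], prerequisites=(build,))
build.run()
print(deploy.run())  # True, because the build stage passed first
```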

Basic release pipeline stages, a sequential series of steps.

A simple pipeline like the one pictured above is a sequential series of stages. These types of release pipeline have usually descended from CI systems, where a build process is executed linearly from start to finish.

A more complex pipeline may use fan-out and fan-in, as a means of running stages in parallel and then collating the output from each. This can be an advantage in highly complex deployments where there are a series of services that need to be deployed and tested as part of the overall process.

A pipeline supporting parallel processing via fan-out and fan-in.

The control of flow in your pipeline will vary depending on which model you use. In a simple pipeline you can trigger the start of the next stage from the completion of the previous stage. In a fan-out fan-in model you need to find a way of allowing stages to subscribe to their prerequisites. I’m not talking about underlying tech in this article, just what needs to happen.
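A fan-out / fan-in step can be sketched with a thread pool: the parallel deployments fan out, and the collating step proceeds only once every one of its prerequisites has succeeded (the service names are illustrative):

```python
# Fan-out: deploy several services in parallel. Fan-in: a later stage
# subscribes to all of them and starts only when every one has succeeded.
from concurrent.futures import ThreadPoolExecutor

def deploy_service(name):
    return (name, True)  # stand-in for a real deployment; True means success

services = ["api", "web", "worker"]
with ThreadPoolExecutor() as pool:
    results = dict(pool.map(deploy_service, services))   # fan-out

if all(results.values()):                                # fan-in gate
    print("all services deployed; integration testing can start")
```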

Repair and Restart

An optional feature in release pipelines is the ability to repair and restart. This can involve correcting environmental issues and then allowing failed tasks to start again and then hopefully resume the process.

Repair can work both ways though. If you find that you’re fixing machine settings then you really ought to be working on fixes to your infrastructure provisioning scripts instead of masking your underlying issues. If you’ve had a network blip then a restart may be fair enough.

Not all release pipelines support repair and restart, and if yours does then use it wisely.

Tasks

Tasks are the things that actually get done, at a granular level. Importantly, within a stage it should not matter which order the tasks complete in. Use stages as the gates in the control flow and tasks as the things that actually get done.

The tasks you will need include:

Infrastructure provisioning. This can include spinning up new virtual environments for test, or it can involve ensuring that a test environment is configured correctly and that the required services (e.g. web server) are installed and running.

Application deployment. This includes taking the packaged software and deploying onto the infrastructure instances, and making any environment-specific configuration changes as required.

Testing. Executing tests and publishing test results. You also need to be able to mark a stage as failed if the test run is not successful.

Infrastructure shutdown. After running a test phase any virtual infrastructure can be shut down or even decommissioned entirely to save costs.

Some tasks will be asynchronous, and your pipeline may need to be able to handle this. For example, an application that spins up AWS EC2 instances may have to wait a minute between the start of infrastructure provisioning and application deployment or testing while the environment is prepared.
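Handling an asynchronous task usually comes down to polling with a timeout, roughly like this (the timing values are arbitrary examples):

```python
# Poll until the environment reports ready, or give up after a timeout,
# before moving on to deployment or testing.
import time

def wait_until_ready(is_ready, timeout=300.0, interval=10.0):
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if is_ready():
            return True      # environment is up; the next task can start
        time.sleep(interval)
    return False             # mark the stage failed rather than hang forever

# e.g. wait_until_ready(lambda: instance_state() == "running")
# where instance_state is your own (hypothetical) status check
```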

Artefact Store

In general, a release process starts with a code change and ends with provisioned infrastructure and deployed software. Along the way the packaged software needs to be available for deployment to each environment, and may need to be modified and / or configured for each environment. An artefact store therefore underpins the process.

A release pipeline with a supporting artefact store.

The artefact store also needs to support distinct artefact versions. The set of artefacts for a single build and release should be atomic and distinct from the artefacts from any other release with no cross-contamination.

Going back a few years we used to achieve this by having a network share for the build and release process, and each build having its own folder within that share. Whether you take this approach, whether you use a database, or even put everything into an S3 bucket, persistence of artefacts is essential.
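The per-build folder approach can be sketched in a few lines; the layout and names here are illustrative:

```python
# One directory per build number keeps each release's artefacts atomic and
# distinct, with no cross-contamination between builds.
from pathlib import Path
import tempfile

store = Path(tempfile.mkdtemp())        # stand-in for a share, database or S3

def artefact_path(build_number, name):
    folder = store / f"build-{build_number}"
    folder.mkdir(exist_ok=True)
    return folder / name

artefact_path(101, "app.zip").write_text("package for build 101")
artefact_path(102, "app.zip").write_text("package for build 102")
print(artefact_path(101, "app.zip").read_text())  # build 101 is untouched
```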

Configuration Store

The artefact store contains different data (software) for each build and release. The configuration store contains values that are consistent between builds such as connection strings, API URLs, environment-specific users and permissions.

A release pipeline with a configuration store.

The configuration store will ultimately contain some of your production configuration, even if it’s just machine names, and so it is essential that your configuration store is secure and encrypted. Your release pipeline should be able to pull out the required configuration for any environment at the relevant stage of the pipeline and use it to allow you to provision and deploy.
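Conceptually the configuration store is a keyed lookup of environment-specific values. A real store must be secured and encrypted; the values below are invented for illustration:

```python
# Consistent keys, one set of values per environment. A production store
# would be encrypted and access-controlled, not a plain dict.
CONFIG = {
    "test":       {"api_url": "https://api.test.example.com", "db_server": "sql-test-01"},
    "production": {"api_url": "https://api.example.com",      "db_server": "sql-prod-01"},
}

def get_config(environment, key):
    return CONFIG[environment][key]

print(get_config("test", "api_url"))   # the pipeline pulls this per stage
```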

Logging

It goes without saying that if there are problems with your pipeline execution you need to be able to examine your logs to see where any problems occurred and what went wrong. In a system that has a lot of moving parts you need to ensure that the logs are collated in such a way that you can make sense of them.

A release pipeline supported by a log store.

Everything you log will therefore need to be stamped with at least the following information:

Release pipeline / application

Pipeline stage

Build number

Timestamp
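Stamping each entry with those four fields might look like this (the field names are an assumption; use whatever your log tooling expects):

```python
# Every log entry carries pipeline, stage, build number and timestamp, so
# entries from many moving parts can be collated and searched per release.
import json
from datetime import datetime, timezone

def log_entry(pipeline, stage, build_number, message):
    return json.dumps({
        "pipeline": pipeline,
        "stage": stage,
        "build": build_number,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "message": message,
    })

print(log_entry("release", "deploy-test", 101, "deployment started"))
```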

Beyond this, it’s up to you how you visualise your logs. A very basic system might just collate them together and make them searchable, but sophisticated build pipeline software will give you a graphical view of the execution of your pipeline with drill-down into the logs.

Metadata Store

The metadata store is one of the simplest features of the process. It is usually a collection of name-value pairs that contain build-specific information. Often these are exposed as environment variables, but depending on how you conceptualise your build data this could also be information on completion of stages and tasks.
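Exposed as environment variables, the metadata store is as simple as this (the variable names are illustrative):

```python
# Name-value pairs of build-specific metadata, exposed to every stage as
# environment variables so each task sees consistent values.
import os

os.environ["BUILD_NUMBER"] = "101"      # set once when the pipeline starts
os.environ["GIT_COMMIT"] = "a1b2c3d"

def tag_for_build():
    # any task in any stage reads the same metadata
    return f"build-{os.environ['BUILD_NUMBER']}-{os.environ['GIT_COMMIT']}"

print(tag_for_build())  # build-101-a1b2c3d
```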

A release pipeline supported by a metadata store.

Execution Engine

The release pipeline involves executing a workflow. How this actually works under the hood depends on implementation, but you need processes to execute. Whether this is a glorified bash script or a hosted workflow engine, somewhere the work actually needs to get done.

A release pipeline powered by an execution engine.

Notification Service

Notifications are optional, but useful and almost universal. When your process succeeds or fails you generally want to tell someone about it, and usually that’s via email. If you’re in the modern SaaS webhooky world (like 345) you might prefer to use a tool like Slack for notifications. Whatever your preference, someone needs to know.

A release pipeline with a notification service.

Process Viewer

Optional, but useful, is a graphical view of your pipeline. It’s no coincidence that CD tools include some sort of graphical view, because it’s a truism that the UI is the only thing people see of their software. The UI is the tip of the iceberg, often the smallest part in terms of code and functionality but the only bit you can see.

A release pipeline with a graphical process view.

Summary

So that’s the end of my tour of a release pipeline. I haven’t talked about specific tools and technologies because I wanted to keep it at a conceptual level. I’ve highlighted the main features that allow the execution of your release process. Whether you’re rolling your own release pipeline or using a package, you should be able to spot the same features coming up again and again. They may go by different names depending on how the developers’ vocabulary evolved and the paradigms they were using, but the essence will all be there.

Building DevOps on a solid foundation
Mon, 09 Jan 2017

Just about everyone has heard of DevOps by now, right? We have clients that talk to us because they want to “do DevOps”, but if you’re in this situation, how do you even begin to plot the correct journey for them? In this article I’ll go through some of the thought processes I use to...

Buying into the need for DevOps

I start by saying that DevOps isn’t something that you do once, tick a box, and then move onto something else. It’s a way of working. It’s a mindset and an approach.

I’m not a gardener, but I liken it to having a garden. In the early days you design your garden, put your turf down and your plants in. Then you tend to it regularly. You prune and weed, add a little, remove a little. Optimise. Nurture.

I then get clients to understand that manual processes undermine their ability to deliver. This is a biggy. If you do anything manually, no matter how well you document the steps, you always get different results. Eventually, always. Software quality depends on repeatability. Manual processes are not guaranteed repeatable. Manual processes are therefore the enemy of quality.

More than this. Manual processes don’t scale. If it takes me 20 hours this week to deploy something, it will take me 20 hours next week. If I need 10 deployments next week it will take 200 hours (or even more as everyone dies of boredom and demotivation). If I spend 40 hours this week scripting a deployment it may take me 10 minutes next week to kick off 10 deployments. Code scales, people do not.
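The arithmetic in the paragraph above, worked through:

```python
# 20 hours per manual deployment versus a one-off 40-hour scripting effort
# plus roughly 10 minutes to kick off each automated deployment.
deployments = 10
manual_hours = 20 * deployments                 # 200 hours next week
automated_hours = 40 + deployments * (10 / 60)  # ~41.7 hours, mostly one-off
print(manual_hours, round(automated_hours, 1))  # 200 41.7
```

After the first week the scripting cost is already paid, so the gap widens with every further deployment.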

Chart illustrating weekly effort of manual vs automated deployment

Chart of cumulative effort for manual vs automated deployments

Examine the underlying practices

DevOps doesn’t appear in a bubble. It should be a wrapper that encompasses other practices. If you’re doing the other practices well you should be able to build on top of them. If your dev practices are weak you need to address them first.

Look at your source control processes, your branching strategy and how you come to release code. Is this rock solid? You can’t build quality software until you get your source management right. Fix it. Get the right tools and learn how to use them.

Look at your build processes. Make sure you’re automating your builds with every new commit. Are you ensuring that bad builds are rejected? Does your build process test quality? How? Does it run unit tests? Does it set code coverage thresholds? Do you block merges to your master branch if the quality measures aren’t met?

Look at your testing strategy. What’s the reliance on unit testing versus integration testing? (This is a big subject, with no single answer). If you make a change, how confident are you that you haven’t broken anything? Are you able to automate a test run and get a report on your quality? How do you manage integration with systems developed by other teams or vendors?

How do you provision infrastructure? Do you procure physical tin? Are you virtualized? Are you able to provision a new environment, or scale your production environment, by running scripts?

Examine and understand all of the practices that underpin your delivery. Methodology is for selling books and consultants. Practices are the key to good development. Practices and a commitment to excellence.

Examine your processes

The best processes are the simplest. The fewest branches. The smallest number of active deployments. The shortest time from commit to release.

Look at what you’re doing, and examine it critically to see if it’s adding value and contributing to quality.

Look at each process and understand what happens at each stage. In detail.

Plan for end to end

Once I’ve been over the practices and processes I then start working with clients on a plan to get the end to end DevOps in place.

You need to look at where your pain points are and then plan to eliminate them one by one.

Where are your pain points? Look where you’re burning resource that isn’t adding value. The only true measure of value is working software (Agile Manifesto). Any effort that isn’t contributing to the building of working software is a symptom of waste and bad quality. Script waste out of your project. Build solidly scripted sub-processes. Tend to them. Improve and optimise them often.

Building your DevOps is like building links in a chain. Model your process, script each step. Encourage continuous improvement via evolution; dissuade stagnation. Aim to go from a commit, through testing, to deployment solely by running scripts. Once you have achieved this, think about how you can join the links to create your release pipeline.

In summary

DevOps is like many other things in software. There were people who were doing DevOps before it was even a thing. That’s because of their commitment to excellence and they devised good practices to support what they were doing.

My advice would be to focus on building the foundations right. Before you know it you’ll be “doing DevOps” because the components will all be there.

Day 1 DevOps: A Manifesto
Mon, 19 Dec 2016

I believe everyone starting a software project should start their DevOps on the first day [of the build cycle] of their project.

I believe that failure to do this leads to bad places almost every time, and the more complex the solution the worse the mess you can get into.

This is my manifesto for getting your DevOps lined up from the start of a project.

What is DevOps?

DevOps is the term used to describe a set of practices used to automate the delivery of software and infrastructure. Most software delivery best practices have incorporated automated build / Continuous Integration (CI) for a long time, but as automation extends from the developer’s code commits up to the point of deployment the range of practices involved has expanded to include scripted provisioning of infrastructure, automated deployment and automated testing.

There is no strict definition of DevOps, but I’d put a stake in the ground to say that if you are manually changing settings on any server in your test or production environments then you need to improve your DevOps.

Manual changes are not repeatable. Without repeatability you cannot achieve consistent quality.

Manual processes are not scalable. Without automation you cannot improve productivity.

No excuses

I’ve been on too many projects where deployment is left too late. I’ve heard a lot of excuses. I’ve yet to hear a compelling one. I just hear that some people aren’t interested in quality.

Excuse: We don’t want to spend the money on hardware yet, so we don’t need DevOps.
Retort: What? You’re happy spending money building something, and not knowing if it works, but you can’t even stand up a few VMs?

Excuse: We haven’t designed the infrastructure yet.
Retort: What? You don’t even know how you’re hosting your solution yet you’re willing to take the risk building it?

Excuse: We don’t have the expertise to build the infrastructure [or deploy the solution] yet.
Retort: Concentrate on building your DevOps expertise before you start building software.

Excuse: We outsource that to someone else.
Retort: Exactly how will you be in a better place by getting them to do this later?

Excuse: It will take too long.
Retort: Exactly how will you be better off if you burn that time – and more – later on, when your project is at a more critical stage?

When to start

We all start building a new application with something resembling a “hello world” app. Even if we’ve just initialized a new repo on Git, we can create:

An index.html with static text for a website.

An API route that GETs “/status” and returns a 200.

A background service that writes out to a log.
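For example, the “/status” route from the list above, sketched with Python’s standard library only so it stays dependency-free (a real project would likely use its web framework of choice):

```python
# A do-nothing-but-prove-it-runs API: GET /status returns 200, anything
# else returns 404. Enough code to hang a whole deployment pipeline on.
from http.server import BaseHTTPRequestHandler, HTTPServer

class StatusHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/status":
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"ok")
        else:
            self.send_response(404)
            self.end_headers()

# To serve it: HTTPServer(("", 8000), StatusHandler).serve_forever()
```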

Literally, within minutes of starting a new software project, you can have a few lines of code that do something trivial that demonstrate running code. It is at this point that you should deploy your code.

Don’t leave it till later. That is a path to bad things. Deploy now. You now have enough code to:

Set up your source control repo, branching structure, permissions.

Set up your CI and automated tests.

Set up your target infrastructure and DevOps scripts.

Set up your deployment scripts to automatically deploy.

If you wait to add devs onto your project until your quality processes are in place you won’t regret it. Not for a moment. You might also uncover [early!] which developers are used to achieving high quality and which aren’t. And you might teach the ones that aren’t a lesson that will benefit them for the rest of their career.

Impact of cloud

At some stage we’ve all been forced to manually deploy some code onto a server at the last minute because something went wrong. Usually it’s not a pleasant experience.

The world of cloud computing makes the bad habit of manual deployment unsustainable. On the other hand, it makes the exercise of scripted deployment easier as there is no physical infrastructure to provision. Provisioning of infrastructure and deployment of code are all just lines of script. And usually fairly brief.

Cloud infrastructure richly rewards those with good practices and scripted, repeatable processes. You can spin up a test environment in seconds, run a suite of tests, and then tear the environment down when it has only cost you pennies in CPU time. Usually this can be achieved with a handful of lines of script.

Cloud is your friend. Embrace it.

Penalty of leaving it late

As your codebase gets bigger, with more contributors, it gets harder and harder to get your deployments working. All the time you’re trying to deploy, you have colleagues that may break your deployments.

If you can get your deployments and test runs green after your first day of code, make it your developers’ responsibility to keep them green. Human nature being what it is, if you’re trying to deploy to new environments it will be your problem until you get it fixed. If you have a working deployment the responsibility should fall on whoever breaks it. This works in your favour, so get it right on day 1.

Bigger picture

It’s easy to get devs to start cutting some code, that’s what they all love to do. A Solution Architect can identify applications and services, and you can get working on features and everyone’s happy.

It’s a different mindset to consider deployment from day 1. It is often the case that the design a Solution Architect may recommend is driven by features and functional requirements, but it is the physical / virtual infrastructure that will dictate how non-functional requirements are supported.

If you need to put in place realistic infrastructure on day 1 you need to have a grip on how to meet performance, scalability, reliability, security, upgradeability and a whole host of other non-functional aspects of your solution. It helps immensely to get these baked in early so that you are building on solid foundations.

Done means done…when?

One of the main problems with project management on software projects is when people estimate for features based on how long they take to code. If you allow a developer to code a feature as “works on my machine” and declare the task done then you’ve lost.

You can’t even rely on unit tests passing; you have to look at test coverage as well. Unit test coverage should be as close to 100% as possible. I can’t put a number on where you should draw the line, but it needs to be high.

You should also be testing features for completeness as well, so functional tests encompassing user stories or business scenarios will also need to have full coverage. This should absolutely be at 100% for a feature to be considered done.

The more features you have, the more you will be in a position to run load tests on your system and perform security penetration tests. You will be “upgrading” continuously, as your app will already be deployed.

Done really means done when the feature works in production and passes all tests (with full test coverage). Day 1 DevOps gives you the surest path to this.