
Node.js is a phenomenal runtime and programming model that is transforming the way modern systems are built. It also requires a different approach to operations, one that architects, operators and developers must address. Today there are ways to implement Node.js at a mature, enterprise grade for Production systems, and the Node.js Collaborators are working hard to make Node.js even better. Node.js has taken hold of the Enterprise market through developer adoption. The challenge now, for both the Node.js Community and Enterprise consumers, is addressing deeper observability, security and compliance concerns. On the community side, Node.js Core contributors are currently working on diagnostics and debugging improvements.

In this post I’ll outline how Enterprises can operate Node.js maturely in Production, starting with the first major building blocks needed for a compliant open source supply chain, and how nearForm’s Certified Node.js Image Pipeline can play an important role.

Node.js is not a singular thing, but rather an ecosystem: a JavaScript runtime, C/C++ extension models, a build system, developer tooling, one of the largest open source communities and a dependency model built to address the way modern application supply chains should be architected. However, most development teams struggle to implement an enterprise-grade Node.js system. The C-Suite cannot sleep comfortably at night with nightmares of data breaches caused by shipping unknown code through open source package managers like npm, or a lack of guarantees that their code and configuration haven’t been altered on their way to Production.

Unfortunately, the rapid pace of innovation in Node.js makes the Production challenge even more complex. In developer-led adoptions, security, compliance and governance often come in the later stages of the software development lifecycle. A certified, coherent approach to adopting Node.js safely has not existed to date. Fortunately, sophisticated open source providers are focusing on these gaps and delivering solutions within the existing continuous integration and cloud-native space.

Since 2015 the Node.js Foundation has followed a release cadence that drives a predictable adoption rate for new versions, a better understanding of added or deprecated features and, importantly, bug fixes and security patching conducted in partnership with the security community during a known Long Term Support (LTS) window. Staying current with the latest LTS version of software in Production is a requirement for any type of enterprise support agreement or certification. Node.js carries that same implicit Enterprise contract - LTS version adoption, verifiably immutable artifacts in your supply chain, the ability to publish system events and a forward-leaning approach with business partners.

Verifiably Immutable Artifacts

What is a verifiably immutable artifact? It’s any component of your Production system, from your infrastructure to your UI, which you can verify has not been altered in any way since you built that version of the artifact, tested it and passed any additional quality gates or sign-offs built into your CI pipeline. Ideally you can verify that these artifacts have not changed - mutated - through some type of cryptographic scheme, such as hashing or another differencing technique.
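One straightforward way to put this into practice is to record a cryptographic digest of each artifact at build time and re-verify it at every later stage of the pipeline. Below is a minimal sketch in Python; the artifact name and where you store the recorded digest are up to you:

```python
import hashlib

def digest(path: str) -> str:
    """Return the SHA-256 hex digest of an artifact on disk."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so large artifacts don't need to fit in memory.
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(path: str, recorded: str) -> bool:
    """True if the artifact is byte-for-byte identical to the build-time version."""
    return digest(path) == recorded
```

At build time you call `digest()` once and persist the result alongside the release metadata; every promotion step (QA, staging, Production) calls `verify()` and refuses to proceed on a mismatch.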

Why are containers a great packaging and virtual machine choice? I think the answer comes down to having immutable, 100% deterministic artifacts in your supply chain, from the developer’s machine all the way through QA, staging and Production. If you were manufacturing cars you wouldn’t change the wheels after attaching them - you’d need to retest every time. The same problem exists in software supply chains!

Whether you use Docker, containerd, rkt or something else, you have an immutable unit of deployment. It could be argued that a VM can represent this same unit, however the size, the speed from deploy to run and the developer experience make containers a clear winner, and that is why it’s the only supported artifact in nearForm’s Certified Node.js Image Pipeline.
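In practice, immutability extends to how you reference images: pinning a base image by its content digest, rather than a mutable tag, guarantees every rebuild starts from exactly the same bytes. The digest below is a placeholder to be replaced with the real value for your certified image:

```dockerfile
# Pin the base image by content digest, not a mutable tag like "node:18".
# The digest here is a placeholder; you can look up the real one with:
#   docker inspect --format '{{index .RepoDigests 0}}' node:18
FROM node@sha256:<digest-of-the-certified-image>

WORKDIR /app
COPY . .
CMD ["node", "server.js"]
```

A tag like `node:18` can point at different images over time; a digest cannot, which is what makes the artifact verifiably immutable across your pipeline.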

Addressing the challenges of an open source runtime requires a holistic approach across your enterprise’s entire supply chain. We cannot solve this fully within the Node.js ecosystem; it requires wrapping our arms around the entire DevSecOps lifecycle, and the effort should ideally be weighted as early (left) in that lifecycle as possible. It starts with vetted images, ideally in partnership with industry experts. These images need to be integrated into the early stages of the supply chain - the days of patching in Production are over (at least they should be!). If you can adequately assess, log and trace the provenance of your base distribution images, then you have the means to statically correlate feature changes, bug fixes and even the specific lines of code that changed, whether investigating a regression or just curious about how an open source project you critically depend on is evolving.

The flow diagram below illustrates how nearForm’s Certified Node.js Image Pipeline addresses this critical stage. nearForm offers a few different options for incorporating this process into your supply chain, whether you have a more home-grown setup, leverage an enterprise distribution mechanism such as Red Hat’s OpenShift, or have adopted a public cloud managed service like ECS, EKS or AKS.

Understanding the source of your distributions is critically important; however, this alone can come directly from the distributions made available by the project maintainers themselves. Where the enterprise-grade value really comes in is the additional testing and vetting steps taken by your support partner. In this case nearForm’s approach covers the following critical steps:

Run docker-squash - produces the smallest possible container image

Run a battery of unit and smoke tests on the new build - ideally the definition of these tests is informed by both the vendor and the client

Trigger build approval process - manual or automated

Publish Release!
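The four steps above can be sketched as a small pipeline driver. Everything here is illustrative - the image tags, the smoke test and the approval script are hypothetical, and docker-squash is assumed to be installed on the PATH:

```python
import subprocess

# Ordered pipeline steps; "{tag}" is substituted with the candidate image tag.
STEPS = [
    # 1. Squash layers to minimise the final image size.
    ("squash", ["docker-squash", "-t", "{tag}-certified", "{tag}"]),
    # 2. Smoke test: the runtime must at least start and exit cleanly.
    ("smoke-test", ["docker", "run", "--rm", "{tag}-certified",
                    "node", "-e", "process.exit(0)"]),
    # 3. Approval gate - stands in for a manual or automated sign-off.
    ("approve", ["./approve-release.sh", "{tag}-certified"]),
    # 4. Publish the certified image to the registry.
    ("publish", ["docker", "push", "{tag}-certified"]),
]

def run_pipeline(tag, execute=subprocess.check_call):
    """Run each step in order; any non-zero exit aborts the release."""
    for name, template in STEPS:
        cmd = [part.format(tag=tag) for part in template]
        execute(cmd)
```

Injecting `execute` keeps the driver testable without a Docker daemon; in CI it defaults to actually running each command and failing fast on the first error.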

Given the variety of environments we encounter across our partners and clients, it’s necessary to maintain a certain level of flexibility in how we publish our releases. For most organisations, publishing these images to Docker Store (Hub) is acceptable. However, some environments do not have access to public artifact repositories; this is certainly the case in regulated industries such as Financial Services or Healthcare. In these situations we work with our clients, either on-prem or in the public cloud, to deliver images in a way that is compliant with their security standards - one size does not fit all.

Publishing System Events

Adopting certified Node.js images is a critical step in moving to a more mature Production system. However, simply rebuilding your containers on top of the certified images is only part of promoting discoverability and establishing Production provenance. Every build step in your supply chain, whether internal or external, should emit metadata, change outputs, repo history and other data, collected and associated with versioned releases across distros, dependencies, your own source code, infrastructure and configuration files. All of this data can then be published and made available to downstream systems and consumers over a message stream built on Kafka, AWS Kinesis, Azure Event Hubs or an Enterprise solution like MQ or Tibco.
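As a concrete illustration, such a build event might be a small JSON document published to whichever stream your organisation runs. The event type and field names below are hypothetical; only the payload construction is shown, since the producer call depends on your chosen broker:

```python
import json
from datetime import datetime, timezone

def build_event(image, digest, git_commit, test_report_url):
    """Assemble a machine-readable record of one build step."""
    return json.dumps({
        "type": "image.certified",      # hypothetical event type
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "image": image,                 # e.g. "myorg/node:18-certified"
        "digest": digest,               # content digest of the published artifact
        "git_commit": git_commit,       # source provenance
        "test_report": test_report_url, # link to quality-gate evidence
    })
```

Downstream consumers - deployment tooling, audit systems, dashboards - can then subscribe to the stream and correlate every running workload back to its exact build.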

Forward Leaning Approach with Audit & Legal Teams

An event-driven approach allows you to communicate early and often, through automation, with full forensic detail, and then immediately involve stakeholders where needed. It will be important to teach your auditors how to investigate in the cloud, as DevOps moves us away from traditional resource catalogs and ITIL-style change management techniques. In the cloud, resources are ephemeral, self-composing and auto-discovering - your audit logs and information sharing need to match this reality. By adopting immutable Node.js releases in an event-driven supply chain you can statically determine exactly what was running and when, the test results, who created and published any code, and the traceability of every dependency, with a means to roll back or patch forward using the same process - your system becomes comprehensively observable.

Embracing the process we’re describing demands that the providers you partner with understand their obligations: thorough testing, no breaking changes in minor releases, additive-only changes within a major version, and upfront, early-warning communication when navigating the deprecation and feature-removal process. However, as in all software development, breakages will happen; your organisation will need governance and quality gates in place between your vendors and your customers’ production systems.

Next Steps

If you are attending NodeDay in Toronto in May, we’ll be there to discuss Node.js in the Enterprise as well as expanding on this subject in a series of talks and blog posts. Hope to see you there.

Also, we’ll be at Red Hat Summit in San Francisco in May with lots to say on the subjects of Node.js in the Enterprise as well as Node.js on OpenShift. Keep an eye out for our booth and make sure to attend our talk.

nearForm represents the best people, process and tools to deliver software on-time and on-budget. Our Solutions team and Node.js Core contributors are experts in de-risking digital delivery. We’d love to discuss your needs and how nearForm can help you mature your Node.js and digital supply chain capabilities.