Shifting to a microservice architecture disrupts the DevOps pipeline as we know it today. Microservices are small, independently deployed objects, and their independent path through the pipeline requires you to re-imagine that pipeline to support hundreds of moving parts.
Traditional CD workflows were defined at the application layer and seldom addressed infrastructure or database objects. Microservices, by contrast, will require full-stack management for small incremental releases.
Microservices – The Last Mile of Agile
Microservices represent the last mile of agile. In agile, the goal was to push small incremental updates across the pipeline, but the pipeline itself was not built for incremental processing: both builds and deployments were monolithic. Microservices are incremental by nature, and they will force us to re-think the pipeline as we finally address that last mile of agile.
Adapting to an incremental development process requires us to revisit how we implement our pipeline tooling. These five significant challenges will help you understand what changes are coming:
- The Number of DevOps Workflows – In traditional software engineering, a small number of CI/CD workflows were needed for each application, often based on the release version. Because microservices are independently deployed, each will have its own pipeline, resulting in hundreds of workflows instead of dozens.
- Version Control – Version control will still store our source code, but branching and merging will become less critical. A microservice is a small, bounded-context piece of code, and multiple developers are unlikely to be working on a single microservice at the same time. GitOps will further shift how and why we use version control.
- Builds Are Completely New – A traditional build takes hundreds of pieces of code and compiles and links them into a set of binaries. The new build takes a single service, runs scanning, and creates a container image, which is then registered. Microservice builds are completely different from monolithic builds (a minimal build sketch follows this list).
- Tracking SBOMs – In monolithic development, tracking SBOMs means creating an SBOM for the entire application. Each microservice will have its own SBOM, but the application-level view becomes obscured.
- Full Stack – In a cloud-native architecture, your infrastructure and database should be part of your pipeline. For most CD pipelines, managing the full stack is completely new.
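To make the build change concrete, here is a minimal sketch of a per-microservice build step, assuming Docker is available and an image scanner such as Trivy is installed; the service directory, image name, and registry are placeholders, not a prescribed toolchain.

```python
# A minimal sketch of a per-microservice build step, assuming Docker and a
# scanner such as Trivy are on the PATH. Names and registry are illustrative.
import subprocess

def build_and_register(service_dir: str, image: str) -> None:
    """Build one microservice into a container image, scan it, and register it."""
    # 1. Build a container image from the single service's directory.
    subprocess.run(["docker", "build", "-t", image, service_dir], check=True)

    # 2. Scan the image; a non-zero exit code fails the pipeline.
    subprocess.run(["trivy", "image", "--exit-code", "1", image], check=True)

    # 3. Register the image by pushing it to the registry.
    subprocess.run(["docker", "push", image], check=True)

if __name__ == "__main__":
    # Hypothetical service and registry names.
    build_and_register("./checkout-service",
                       "registry.example.com/checkout-service:1.4.2")
```

Note how little remains of the monolithic build: there is no linking step across hundreds of modules, only one service, one scan, and one registered image.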
New Tools Needed for a Microservice Pipeline
Continuous Delivery, and all of the actions that CD orchestrates, will need to be revisited. Trying to force our existing pipelines to support these small, independently deployed objects would be a mistake. While some tools will remain part of the CD process, we will need to introduce new tools to advance the implementation of microservices.
CDEvents
Event-based CI/CD is a new concept coming out of the Continuous Delivery Foundation. CDEvents are consumed by event listeners and move us away from imperative workflow scripts, which are what make our traditional pipelines a poor fit for microservices. In the new pipeline, workflows will need to be declarative and easily interoperable, and CDEvents address both of these basic shifts in Continuous Delivery.
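To illustrate the shift from one imperative script to declarative, event-driven workflows, here is a minimal dispatcher sketch. The event-type string and handler names are illustrative only and do not reproduce the CDEvents specification or SDK; the point is that independent tools subscribe to the same pipeline event rather than being hard-wired into a single script.

```python
# A minimal sketch of event-driven orchestration: small handlers subscribe to
# pipeline events instead of being called from one imperative script.
from typing import Callable, Dict, List

HANDLERS: Dict[str, List[Callable[[dict], None]]] = {}

def on(event_type: str):
    """Register a handler for a given event type (a declarative subscription)."""
    def register(fn: Callable[[dict], None]):
        HANDLERS.setdefault(event_type, []).append(fn)
        return fn
    return register

def emit(event_type: str, payload: dict) -> None:
    """Deliver an event to every subscribed handler."""
    for handler in HANDLERS.get(event_type, []):
        handler(payload)

@on("dev.cdevents.artifact.packaged")  # illustrative event-type name
def update_catalog(event: dict) -> None:
    print(f"catalog: record new image {event['image']}@{event['digest']}")

@on("dev.cdevents.artifact.packaged")
def trigger_deployment(event: dict) -> None:
    print(f"deploy: roll out {event['image']} to staging")

if __name__ == "__main__":
    # A build emits one event; every interested tool reacts independently.
    emit("dev.cdevents.artifact.packaged",
         {"image": "registry.example.com/checkout-service",
          "digest": "sha256:abc123"})
```

In this model, adding a new pipeline step means subscribing another handler, not editing a central workflow script.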
Version Control and GitOps
Version Control will continue to manage our source, but it will also be used to manage infrastructure and application deployment files. The rise of GitOps will change the role of versioning tools.
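As a rough illustration of that shift, the sketch below updates a deployment manifest that lives in Git and leaves the rollout to a reconciliation agent such as Argo CD or Flux; the repository layout, file path, and image references are hypothetical.

```python
# A minimal GitOps sketch, assuming deployment manifests live in a Git
# repository that a reconciliation agent applies to the cluster.
import pathlib
import subprocess

def update_image(manifest_path: str, old_ref: str, new_ref: str) -> None:
    """Point the checked-in manifest at a new image and commit the change."""
    path = pathlib.Path(manifest_path)
    path.write_text(path.read_text().replace(old_ref, new_ref))

    # The commit, not a direct call against the cluster, triggers the rollout.
    subprocess.run(["git", "add", manifest_path], check=True)
    subprocess.run(["git", "commit", "-m", f"deploy: {new_ref}"], check=True)
    subprocess.run(["git", "push"], check=True)

if __name__ == "__main__":
    update_image("clusters/prod/checkout-service/deployment.yaml",
                 "checkout-service:1.4.1", "checkout-service:1.4.2")
```

Here the versioning tool is no longer just a home for source code; the Git history becomes the record of what was deployed, where, and when.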
Catalogs and DevOps Data
As we move into microservice applications, we are beginning to see the convergence of the service catalog and the CMDB to manage microservices. These new microservice catalogs will be used to manage the data of DevOps. This data will be critical for representing ‘logical’ applications based on a collection of microservices. SBOMs and other configuration data will be stored and versioned via these catalogs, showing application versions over time, what has changed, and the impact a single microservice may have across multiple consuming applications. Support teams will also use a microservice catalog to determine the ownership of a microservice when responding to incidents. Two open-source catalogs are incubating at the Linux Foundation: Ortelius.io and Backstage.io. Learning how these tools manage microservices will be important as you move into this component-driven architecture.
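A hedged sketch of the kind of data such a catalog might hold follows; it is an illustration of the idea, not the actual Ortelius or Backstage data model, and the service, application, and team names are invented.

```python
# A minimal sketch of catalog data: service versions with SBOMs and owners,
# plus 'logical' applications assembled from those services.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class ServiceVersion:
    name: str       # e.g., "checkout-service"
    digest: str     # container image SHA
    sbom_uri: str   # where the service-level SBOM is stored
    owner: str      # team to contact during an incident

@dataclass
class LogicalApplication:
    name: str
    services: List[str] = field(default_factory=list)

def impacted_applications(service: str,
                          apps: Dict[str, LogicalApplication]) -> List[str]:
    """Which 'logical' applications consume this microservice?"""
    return [app.name for app in apps.values() if service in app.services]

if __name__ == "__main__":
    checkout = ServiceVersion("checkout-service", "sha256:bbb222",
                              "s3://sboms/checkout-service.json", "payments-team")
    apps = {
        "web-store": LogicalApplication("web-store",
                                        ["checkout-service", "cart-service"]),
        "mobile-store": LogicalApplication("mobile-store", ["checkout-service"]),
    }
    print(f"{checkout.name} is owned by {checkout.owner}")
    print(impacted_applications("checkout-service", apps))
    # ['web-store', 'mobile-store']
```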
The Catalog and the CD Pipeline
Critical to the implementation of a microservice catalog is its integration into the CD pipeline. Automation is absolutely critical for tracking so many fast-moving parts. The catalog should be triggered at the point where a new container image is registered. It is at this point that the catalog can pull the new version (based on the SHA) into its data, track the impact across consuming applications, and create difference reports and SBOMs.
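One possible shape of that trigger is sketched below: a registry webhook hands the catalog the image name and digest, and the catalog records the new version keyed by its SHA and reports the difference. The payload fields and function names are hypothetical, not a specific registry's webhook format.

```python
# A minimal sketch of wiring the catalog to the image registry.
def on_image_registered(payload: dict, catalog: dict) -> dict:
    """Record the new service version keyed by its SHA and report what changed."""
    service, digest = payload["repository"], payload["digest"]
    previous = catalog.get(service)
    catalog[service] = digest  # the SHA is the version of record

    # A real catalog would also refresh the SBOM, impacted applications,
    # and difference reports at this point.
    return {"service": service, "previous": previous, "current": digest}

if __name__ == "__main__":
    catalog = {"checkout-service": "sha256:aaa111"}
    diff = on_image_registered(
        {"repository": "checkout-service", "digest": "sha256:bbb222"}, catalog)
    print(diff)
    # {'service': 'checkout-service', 'previous': 'sha256:aaa111',
    #  'current': 'sha256:bbb222'}
```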
When the container is ready to be pushed to an endpoint, the catalog can call the component's deployment engine and track where the component has been deployed across all clusters. This can include automating the GitOps process, where the catalog generates the deployment .yaml file. Inventory management of microservices will be just as important as knowing who wrote them and who consumes them.
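As a rough sketch of those two responsibilities, the example below generates a deployment manifest for a GitOps repository and records which clusters a component has reached; the manifest template, service name, and cluster names are illustrative only.

```python
# A minimal sketch of a catalog generating a GitOps manifest and keeping an
# inventory of where a component has been deployed.
from typing import Dict, List

MANIFEST_TEMPLATE = """apiVersion: apps/v1
kind: Deployment
metadata:
  name: {name}
spec:
  replicas: 2
  selector:
    matchLabels:
      app: {name}
  template:
    metadata:
      labels:
        app: {name}
    spec:
      containers:
      - name: {name}
        image: {image}
"""

def generate_manifest(name: str, image: str) -> str:
    """Render the deployment .yaml the GitOps repository will track."""
    return MANIFEST_TEMPLATE.format(name=name, image=image)

def record_deployment(inventory: Dict[str, List[str]],
                      name: str, cluster: str) -> None:
    """Track which clusters a component has been rolled out to."""
    inventory.setdefault(name, []).append(cluster)

if __name__ == "__main__":
    inventory: Dict[str, List[str]] = {}
    manifest = generate_manifest(
        "checkout-service",
        "registry.example.com/checkout-service@sha256:bbb222")
    record_deployment(inventory, "checkout-service", "prod-us-east")
    print(manifest)
    print(inventory)  # {'checkout-service': ['prod-us-east']}
```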
Conclusion
The time has come for us to re-imagine our CD pipeline. Our new pipeline will support the needs of a microservice architecture where small functions are deployed all day long. New tooling will be required to adapt to this cloud-native approach. Two important changes will be event-based CD orchestration to manage hundreds of declarative workflows, and microservice catalogs. Microservice catalogs will become central to managing the data of DevOps. They will provide insight into a microservice's impact across applications, ownership information for incident response, and the potential to automate the activities around GitOps.