It’s time for developers to treat services as first-class citizens in their development workflow, and evaluate not just how clean their code is but whether or not the services work as expected, without breaking any dependencies.
The entire development workflow is focused on catching and fixing problems in the code before they get into production. This is the goal of having team members review each other's code, and it is the focus of most testing programs and continuous integration platforms. As the industry has moved away from Waterfall deployments and tried to shorten the feedback loops for developers, the focus has remained on code.
No one wants to deploy buggy code to production. But it is also a fallacy to assume that perfect code will always lead to the outcome we ultimately want: applications that function as expected and provide a good user experience. As organizations adopt an increasingly microservices-based, cloud native application architecture, the ability of services to work together becomes just as important as code quality.
Just as we wouldn’t want to send untested code into production — or even into the integration environment — we shouldn’t be sending new services into production or upgrading existing services without having evaluated how they impact dependencies.
But in most organizations, there isn't a formal way to evaluate how updates to services will impact upstream and downstream dependencies. The testing process is relatively ad hoc, and debugging often can't be done by the service owner alone — they have to work directly with the owners of the impacted services to see which change is causing the problem and to identify the best fix. This is further complicated by the fact that applications can be made up of microservices written in different languages, and that interdependent services are usually being updated simultaneously.
Feedback loops, which as an industry we've been working to make as short as possible for the developer, stretch out in these scenarios. Developers can't know how the service will interact with upstream and downstream dependencies while still working on their local machine. When a problem is discovered, it's more likely to delay the ultimate release because it's often uncovered far along in the process. Even then, developers often don't feel confident that their updates won't break anything — or that their colleagues' updates won't break their service.
Creating formal service certification procedures
In the interest of reducing release delays, reducing bugs in production and improving developer confidence, it makes sense to have a formal process for testing not just whether the code is good, but whether the services work together as expected to provide the experience the team is trying to create.
Here are some of the components a service certification should have:
Multiple steps. At a minimum, there should be a way to test and certify services once on the local machine and again in the integration environment. There can also be different aspects to service certification — security, compliance, other best practices — that teams may want to test in addition to evaluating whether or not the updates break anything.
Robust, automated version controls. As interdependent services are continually and simultaneously updated, it is critical to know which versions of dependent services were used to certify the service. Version control should be automated, so developers don’t have to keep track manually.
Polyglot. The reality of most modern teams is that not all services are written in the same coding language. Service certification should be at the service level and be able to work with any service, in any language.
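To make the components above concrete, here is a minimal sketch of what a certification record might look like — an artifact produced once per environment that pins the exact dependency versions the service was certified against and tracks which checks passed. The service names, version numbers, and check names are hypothetical illustrations, not part of any specific tool:

```python
import json
from dataclasses import dataclass, field, asdict

@dataclass
class CertificationRecord:
    """One certification run for a service in a given environment.

    Because it describes services at the service level (names, versions,
    check results), the record works the same way regardless of the
    language each service is written in.
    """
    service: str
    version: str
    environment: str            # e.g. "local" or "integration"
    dependency_versions: dict   # exact versions of upstream/downstream services used
    checks: dict = field(default_factory=dict)  # e.g. {"contract": True, "security": True}

    def certified(self) -> bool:
        # A service is certified only if at least one check ran and all passed.
        return bool(self.checks) and all(self.checks.values())

# Hypothetical example: certifying an "orders" service in integration.
record = CertificationRecord(
    service="orders",
    version="2.4.1",
    environment="integration",
    dependency_versions={"inventory": "1.9.0", "payments": "3.2.5"},
    checks={"contract": True, "security": True, "compliance": True},
)

# The record is automated bookkeeping: developers never track versions by hand.
print(json.dumps(asdict(record), indent=2))
print(record.certified())
```

Recording the dependency versions automatically at certification time is what makes the record useful later: when a downstream service breaks, the team can compare the versions it was certified against with what is actually deployed.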
Part of the move towards microservices has to be a way to treat services as first-class citizens in the development workflow and create ways to evaluate how services work at the service level, not at the code level. Formal service certification processes help organizations evaluate how services work systematically, reducing production issues while also speeding up deployment times and preventing delays. At the same time, they give teams a shared way to talk about a service that has been vetted, and greater confidence that if issues exist, they'll be found.