Sometimes we run into teams that are struggling to produce releasable increments at the end of their sprint. There are numerous reasons for this, such as poor-quality tickets, inadequately groomed tickets, a poor-quality legacy codebase, poor code boundaries, inexperienced developers, developer turnover and more.
As much as possible these situations should be addressed individually, but there are also general-purpose process changes that can be made to increase the quality of code. Broadly, I would divide these into technological solutions and human process solutions. What I would like to do here is enumerate some solutions in both of those categories.
Technological Code Quality Solutions
Unit Testing
Much has been written about unit testing and I’m not going to repeat it here. There are millions of tutorials and explainers out there. What I will give you is my personal experience with them. I’m used to working with small teams of very careful developers who have deep knowledge of the codebase because they’ve been working on it for years. By pure chance, that’s been most of my career experience until recently. In that situation unit tests aren’t needed, and because of that I never really saw the need for them.
With a lot of team turnover recently, as well as junior developers being brought in to replace senior developers, our past lack of unit-testing discipline is hurting us now, and I have had their value beaten into my head, pointless bug after pointless bug.
Unit tests preserve some institutional knowledge of the codebase outside of the developers who wrote it so even if your current development team doesn’t need them, your future development team will. Write them anyway and save yourself some headache.
I do subscribe to the idea of requiring 70-80% test coverage with unit tests, but it’s important to note that the quality of the unit tests matters; quality is a priority over quantity. It’s easy to “game the metric” and get coverage without adding value. For that reason you should require that unit tests be part of code reviews, so the team can verify the tests aren’t just ticking boxes.
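To make “quality over quantity” concrete, here’s a minimal sketch of the difference using Jest. The applyDiscount function and the test names are hypothetical stand-ins, not from any real codebase:

```typescript
import { test, expect } from "@jest/globals";

// A hypothetical pricing function under test.
export function applyDiscount(price: number, percent: number): number {
  if (percent < 0 || percent > 100) {
    throw new RangeError("percent must be between 0 and 100");
  }
  return price - (price * percent) / 100;
}

// Box-ticking test: executes the code (coverage goes up) but asserts nothing useful.
test("applyDiscount runs", () => {
  expect(applyDiscount(100, 10)).toBeDefined();
});

// Meaningful tests: pin down actual behavior, including the edge cases
// a reviewer should be looking for.
test("applies a percentage discount", () => {
  expect(applyDiscount(100, 10)).toBe(90);
});

test("rejects out-of-range discounts", () => {
  expect(() => applyDiscount(100, 150)).toThrow(RangeError);
});
```

All three tests raise the coverage number, but only the last two would ever catch a regression. That’s the distinction the code review needs to enforce.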
Automation Testing
These are the UI-to-backend tests that your QA automation engineers write…and yes, you need QA automation engineers. They’ll automatically test the various permutations of the happy paths and verify that they all work.
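For illustration, here’s roughly what one of those happy-path checks might look like as a Playwright test. The URL, selectors, and credentials are hypothetical placeholders; a real suite would parameterize this across the permutations QA identifies:

```typescript
import { test, expect } from "@playwright/test";

// A sketch of one happy-path UI-to-backend check, assuming a hypothetical login flow.
test("user can log in and see the dashboard", async ({ page }) => {
  await page.goto("https://app.example.com/login");
  await page.fill("#username", "qa-user");
  await page.fill("#password", process.env.QA_PASSWORD ?? "");
  await page.click("button[type=submit]");

  // The assertion verifies the backend actually authenticated us,
  // not just that the UI rendered something.
  await expect(page.locator("h1")).toHaveText("Dashboard");
});
```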
Static Code Analysis (SonarQube)
Set up a local installation of SonarQube (it’s free), have your developers install SonarLint linked to your SonarQube server, and then integrate it into your build pipeline. Set up strict quality gates on new code so that the build will fail if a quality gate isn’t passed.
Internally we have a “zero defects” quality gate on new code while we let legacy code continue to exist as-is. This creates a situation where no new issues are added to the codebase, and old issues slowly burn down on their own as developers touch them and they suddenly become “new code” for SonarQube purposes.
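As a sketch of what the pipeline integration can look like, here’s a build step that asks the SonarQube server whether the quality gate passed and breaks the build if it didn’t. The server URL, token variable, and project key are placeholders for your own setup; newer scanner versions can also wait on the gate themselves (the sonar.qualitygate.wait analysis parameter), which accomplishes the same thing with less code.

```typescript
// Minimal pipeline step: query SonarQube's quality gate status and fail the build on ERROR.
const SONAR_URL = process.env.SONAR_URL ?? "http://localhost:9000";
const PROJECT_KEY = "my-project"; // hypothetical project key

async function checkQualityGate(): Promise<void> {
  // SonarQube accepts a user token as the basic-auth username with an empty password.
  const auth = Buffer.from(`${process.env.SONAR_TOKEN ?? ""}:`).toString("base64");
  const res = await fetch(
    `${SONAR_URL}/api/qualitygates/project_status?projectKey=${PROJECT_KEY}`,
    { headers: { Authorization: `Basic ${auth}` } }
  );
  const body = await res.json();

  if (body.projectStatus?.status !== "OK") {
    // Print the failing conditions so the developer can see what tripped the gate.
    console.error("Quality gate failed:", JSON.stringify(body.projectStatus?.conditions));
    process.exit(1); // non-zero exit breaks the build
  }
  console.log("Quality gate passed.");
}

checkQualityGate();
```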
Static Code Analysis (Snyk)
Same concept as SonarQube, but whereas SonarQube is focused on code quality, Snyk is focused on static analysis for security vulnerabilities. Integrate this into your build pipeline and configure it to fail when any new code is committed that introduces new security vulnerabilities. Then slowly clean up the vulnerabilities in legacy code as time and business priorities allow.
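In most pipelines this is just a CLI invocation; the sketch below wraps it in a small build script so the flags are visible. The severity threshold is a choice, not a requirement, and is one way to keep legacy low-severity noise from blocking every build while you burn it down:

```typescript
import { spawnSync } from "node:child_process";

// Sketch of a build step wrapping the Snyk CLI. `snyk test` exits non-zero
// when it finds vulnerabilities, so the wrapper just propagates that failure.
const result = spawnSync("snyk", ["test", "--severity-threshold=high"], {
  stdio: "inherit", // stream Snyk's report into the build log
});
process.exit(result.status ?? 1);
```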
Quality Gates
As a general rule, I like to set up quality gates to gate only new commits and new code. Usually it’s too high a hurdle to clean up all the existing code at once, so the primary goal of quality gates is to stop the bleeding. Then, once the bleeding’s been stopped, you can start prioritizing issues in the legacy codebase for clean-up.
Code Coverage Gates
The same general rule applies as with quality gates. Few teams are going to have the manpower to go back and write tests for all their legacy code. What they can do is create a new standard that’s enforced going forward. The key for both unit-test code coverage and code quality gates is that they must be integrated into the build and break the build when failed.
Hard-earned experience has shown that passive scanning and reports all get ignored despite best intentions if they don’t break the build.
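Here’s a sketch of what “break the build” looks like for coverage, using Jest’s coverageThreshold option. The specific numbers reflect the 70-80% floor from above and are, of course, tunable:

```typescript
// jest.config.ts -- Jest fails the run (and therefore the build) when
// global coverage drops below these thresholds.
import type { Config } from "jest";

const config: Config = {
  collectCoverage: true,
  coverageThreshold: {
    global: {
      branches: 70,
      functions: 75,
      lines: 80,
      statements: 80,
    },
  },
};

export default config;
```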
Pull Request Test Integration
If your source control supports it, setting up hooks to require that all unit tests and all quality gates pass before a PR can be completed is incredibly useful. Breaking the build is great for enforcement, but it still disrupts the other developers. If you can prevent the merge into your main branch in the first place, that gives you an extra layer of protection.
However, this isn’t a replacement for breaking the build. It’s an augmentation. You still want to know if gates are failing, tests are failing, etc. in the main branch…by adding the hooks into pull requests you just ensure the build is broken less often.
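As one example of what that wiring might look like on GitHub, here’s a one-time script that marks the test and gate checks as required on main. The org, repo, and status-check names are hypothetical and depend on how your pipeline reports its statuses:

```typescript
import { Octokit } from "@octokit/rest";

// Sketch: make the gates a hard prerequisite for merging into main.
async function protectMain(): Promise<void> {
  const octokit = new Octokit({ auth: process.env.GITHUB_TOKEN });

  await octokit.rest.repos.updateBranchProtection({
    owner: "my-org", // hypothetical org and repo
    repo: "my-repo",
    branch: "main",
    required_status_checks: {
      strict: true, // branch must be up to date with main before merging
      contexts: ["unit-tests", "sonarqube-quality-gate", "snyk-scan"], // hypothetical check names
    },
    enforce_admins: true,
    required_pull_request_reviews: { required_approving_review_count: 1 },
    restrictions: null,
  });
}

protectMain();
```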
Refactoring the Codebase
Give your developers time to refactor the worst parts of the codebase. You should be able to identify these by looking at past bugs, clustering them and identifying where breaks happen most frequently.
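One rough way to find those candidates is to mine version control for files that keep showing up in bug-fix commits. The sketch below assumes fix commits are identifiable by a “fix” keyword in the message, which is an assumption; links from commits to your ticket system are better if you have them:

```typescript
import { execSync } from "node:child_process";

// Hotspot sketch: count how often each file appears in bug-fix commits.
// --pretty=format: suppresses commit headers so only file paths remain.
const log = execSync(
  "git log --no-merges -i --grep=fix --name-only --pretty=format:",
  { encoding: "utf8" }
);

const counts = new Map<string, number>();
for (const file of log.split("\n").filter(Boolean)) {
  counts.set(file, (counts.get(file) ?? 0) + 1);
}

// Print the ten most frequently "fixed" files: prime refactoring candidates.
[...counts.entries()]
  .sort((a, b) => b[1] - a[1])
  .slice(0, 10)
  .forEach(([file, n]) => console.log(`${n}\t${file}`));
```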
Human Process Solutions
Mandatory Ticket Fields – Testing Strategy
At some level you want to foster a ‘culture of quality’ inside the dev team. That means creating a culture where developers are thinking about quality throughout the development process and not just throwing it over the wall to QA for testing.
One way to foster this quality culture is by bringing quality to the forefront of every step of the development process, including ticket grooming. Create a mandatory field in the ticket system that contains a plan for how a ticket will be tested. Require QA and Dev to generate this plan together during a grooming session. It need not be detailed; QA and Dev just need to be talking. Then require that the field be filled out before the ticket is accepted into a sprint.
The goal of this field is not to define what or how QA will test…not really…the goal is to get developers into the habit of thinking “quality first” as they develop and to give them some basic criteria to test against before they lob their code grenades at QA.
Mandatory Ticket Fields – Acceptance Criteria
It’s my opinion that all tickets should contain, at a minimum, two pieces of information – a user story and acceptance criteria (AC). AC should be a short bullet-pointed list of all expected behavior that will be implemented as part of the ticket. This doesn’t replace a user story but complements it – the user story describes “why” and the AC describes “what.”
I’ve found that this greatly reduces communication problems between product owners and developers. It also helps highlight any internal contradictions or misunderstandings inside the user story in a way that open discussion sometimes misses.
Mandatory Ticket Fields – Technological Impact
This field can be started in a grooming session or during pre-grooming research and then filled out more completely as the developer works on a ticket. The goal of this field is to document the impact of code changes (including refactoring) made by the developer with an eye toward all the workflows that QA will need to test.
If the developer simply added a new feature, this may be a small list. However, if the developer changed a core library in the course of building that feature, it could be a very long list, since it should enumerate all the workflows that use the part of the core library the developer changed.
The idea here is that it requires the developer to pay more attention to the secondary impacts of their code changes and also gives QA a starting point for testing any secondary effects that may not be immediately obvious.
Defects vs. Bugs
Drawing a distinction between defects and bugs can add some value in release management. The concept is that defects are things identified as broken inside of a sprint or before a release, while bugs are things that make it into production.
Defects are broken areas of the code created as a result of new development during a sprint. They can be directly linked to a new feature that has issues or they may be unlinked because they were caused as a result of some unknown change. Their defining feature is that they never made it into production and they don’t need to be tested or confirmed during a release.
Bugs, on the other hand, are defined as broken code that made it into production. These typically require a root cause analysis and coordination with support, and they should be verified fixed during their release into production.
This distinction helps DevOps, QA and support manage tickets more effectively and also allows for better analysis of where a quality process may have an opportunity for improvement.
True Bugs & Root Cause Analysis
Continuing off the last point, any bug that makes it into production should require documentation be generated as to how the bug occurred and how it will be prevented in the future. The goal of the RCA is not simply to fix the bug but to fix the underlying cause of the bug, without blame.
In concrete terms it’s the difference between “Fixed bad data in production” and “Changed the data import job so that the bad data can’t enter the system again.” Perhaps even going so far as to try and identify why the job wasn’t already handling the bad data (although that’s sometimes a bridge too far).
Mandatory/Documenting Automation Test Coverage
During releases it’s incredibly useful to know which user stories are covered by QA automation testing and which aren’t. Adding this information to a ticket can make for a much smoother release or UAT period, because the testers know which stories require manual verification and which they can pay less attention to because automation will verify them.
Two-Layer QA
Going back to our concept of a ‘quality culture,’ there may be value in requiring two layers of quality assurance – one layer from the dev team and one layer from the QA team. In other words, when a developer has finished coding a ticket he passes it off to another developer for testing before it goes to QA for final testing.
This has a couple of effects: first, it puts more eyes onto every ticket looking for defects. That’s always handy. Second, it creates a focus on quality inside the dev team. There’s no more “throwing it over the wall” because your fellow developers have to see it first. Lastly, it creates a more personal sense of responsibility, because you have to stand by your work as a peer is testing it.
The value of this isn’t so much in having another set of eyes, although that’s nice. The value is in creating an intra-team focus on quality.