This eighth post in the Making Software Quality Visible series describes two common bad habits that are detrimental to software quality. It also challenges the common “It’s just test code” excuse for writing poor quality test code. It contains three footnotes from Making Software Quality Visible that are germane to the previous post in the series, Individual Skills Development.
I’ll update the full Making Software Quality Visible presentation as this series progresses. Feel free to send me feedback, thoughts, or questions via email or by posting them on the LinkedIn announcement corresponding to this post.
Remember from earlier that Apple’s
goto fail bug was hidden by six copies of
the same algorithm in the same file. To see how unit testing discipline could’ve
caught or prevented this by discouraging duplication, see my article “Finding
More Than One Worm in the Apple.” This example also illustrates that the
Don’t Repeat Yourself (DRY) principle isn’t a purely academic concern.
There’s a school of thought that suggests duplication is OK before landing on the correct abstraction. I consider this dangerous advice, because it’s so easily misunderstood and used to justify low standards. Programmers are notorious for taking shortcuts in code quality in order to move on to the next new thing to work on. They’re also notorious for using any available rationale to justify this behavior, often disparaging more thoughtful approaches as “religion.” (Not that some can’t get carried away in the opposite direction, but it’s more common to find programmers attacking “religion” than programmers who are certifiable zealots.)
I understand the utility of duplicating bits of code in one’s private workspace while experimenting with a new change. However, I think the fear of the potential costs of premature abstraction is overblown. The far, far greater danger is that of “experimental” duplication getting shipped, leading to hesitation to change shipping code. Instead of the “hasty abstraction” getting baked in, dangerous duplication gets baked in instead.
After all, a premature abstraction should prove straightforward to reverse. Working with it should quickly reveal its shortcomings, suggesting how to refactor it or, if necessary, break it apart in favor of duplicating its code. If it wasn’t premature, then making changes to the only copy is less time-consuming and error-prone than having to update multiple copies.
Replacing duplication with a suitable abstraction after the fact should be easy,
but in the meanwhile the duplication gives cover to potentially unnoticed bugs. Again, goto
fail illustrates how easy it is to miss bugs in duplicate code. Once you’ve
seen the first copy, the rest tend to look the same, even if they’re not. Our
brains are so eager to detect and match patterns that they trick us into
skipping over critical details when we’re not careful. (I believe this is
because we process duplicate code with “System 1” thinking instead of more
expensive “System 2” thinking, per Thinking, Fast and Slow.)
We all know a 50-line code change is generally much faster and easier to review than a 500-line change. (500 lines of new or changed behavior, that is; 500 lines of search-and-replace or deleted code is different.) Encouraging smaller reviews encourages decomposing larger changes into a series of smaller ones that can be independently tested, reviewed, and merged. This enables more thorough reviews, faster and more stable tests, and higher long-term code quality and maintainability.
Even so, some hold onto the dated belief that one should submit entire feature changes at once to avoid “dead code.” The thinking, I suppose, is that one risks introducing unused code if a larger change is introduced one piece at a time. The value judgment seems to be that unused code is a greater risk to quality than, say, poorly tested code.
This, however, increases the risk of checking in “deadly code”: code containing a bug that could harm users in some way. This is because larger changes are generally more difficult to test and review thoroughly. Mandating ill-advised all-at-once changes to compensate for poor design sense, poor communication, poor code quality, and poor process can’t overcome those issues. In fact, it all but guarantees their perpetuation.
“It’s just test code.”
Of course, you’ll hear people make some variation of the excuse “It’s just test code” for writing sloppy tests. However, if the tests are there to ensure the quality and readiness of the production code, then the tests are part of our production toolchain. If a test fails, it should halt production releases until we’ve aligned the reality of the system’s behavior with our expectations (like Toyota’s andon cord). If a failure doesn’t warrant halting a release, the test is a waste of resources (including precious developer attention) and should be removed. As such, our tests deserve as much respect and care as any other part of our value-creating product or infrastructure.