One of my design goals for writing the "goto fail" unit test and the Heartbleed unit test (submitted to OpenSSL as ssl/heartbeat_test.c) was to avoid importing any testing frameworks or tools into the existing build environment. However, having been an ardent user of Google Test (and its companion Google Mock) for years, I found the influence of xUnit shining through in the resulting test code.
What's more, others and I have warned against the dangers of data-driven tests, and I've lamented that my love for the Go programming language is slightly tainted by its advice to resort to table-driven tests. The pattern I discovered in these unit tests struck me, after the fact, as a possible solution to the problem of data-driven/table-driven tests, regardless of whether or not a testing framework is used.
What follows is an illustration of the Pseudo-xUnit pattern based on a mapping from the general xUnit concepts outlined in the Wikipedia entry on xUnit to their implementation in my Heartbleed unit test.
Differences from xUnit
In typical xUnit implementations in Object-Oriented languages, there is some sort of TestFixture base class. The author of a test inherits from this class, overrides its SetUp() and TearDown() methods, and then adds test cases as member functions. In Google Mock for C++, one can also override the constructor or destructor instead of SetUp() and TearDown(), and each new test case is defined as a subclass of the user-defined fixture. However it's implemented, the framework guarantees that SetUp() is called before each test case and TearDown() is called after, that every test case is automatically executed, and that errors are reported in some standard format.
These features make xUnit-based testing frameworks very convenient: the xUnit idiom is familiar to many programmers, a lot of mechanical details are handled automatically, and error reporting is standardized across all tests using the framework. That's why I call the pattern described below "Pseudo-xUnit": it's up to the test author to ensure that SetUp() and TearDown() are called from each test case, to write out all of the assertions and error messages in an execution function, and to remember to add each test case function to the test runner (i.e., main()). Still, the resulting code organization is very similar to xUnit, which makes tests written in this style more readable, especially to people used to xUnit-based frameworks in other languages.
Test Fixture
The "test fixture" is the data structure and set of associated operations that establish the environment for each test case. The structure, as in many data-driven tests, will contain the test inputs and expected outputs (and, in this case, a pointer to the code under test as the process_heartbeat member).
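The code excerpts that follow are condensed sketches of the test rather than verbatim copies; the names and details are close to, but not guaranteed identical to, the committed ssl/heartbeat_test.c. The fixture looks roughly like this:

    /* The sketches in this post together approximate one test file,
     * sharing these headers. */
    #include <openssl/ssl.h>
    #include <openssl/err.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    typedef struct heartbeat_test_fixture {
        SSL_CTX *ctx;                        /* needed to create the SSL object */
        SSL *s;                              /* connection state handed to the code under test */
        const char *test_case_name;          /* labels assertion-failure messages */
        int (*process_heartbeat)(SSL *s);    /* pointer to the code under test */
        unsigned char *payload;              /* heartbeat payload to send */
        int sent_payload_len;                /* length claimed in the heartbeat request */
        int expected_return_value;           /* what process_heartbeat() should return */
        int return_payload_offset;           /* where the echoed payload begins in the response */
        const char *expected_return_payload; /* expected contents of the echoed payload */
        int expected_payload_len;            /* expected length of the echoed payload */
    } HEARTBEAT_TEST_FIXTURE;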
Note that the fixture is copied by value; there's no need to mess with more memory management than is necessary. There's also an associated TearDown() function to ensure proper resource cleanup.
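A condensed sketch of that cleanup function:

    /* Takes the fixture by value, matching the copy-by-value convention
     * described above. */
    static void tear_down(HEARTBEAT_TEST_FIXTURE fixture) {
        ERR_print_errors_fp(stderr); /* flush any accumulated OpenSSL errors */
        SSL_free(fixture.s);
        SSL_CTX_free(fixture.ctx);
    }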
All of the tests in a suite will share a common setup prefix, in the form of a SetUp() function.
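Condensed, the base version looks something like this (the committed set_up() does considerably more error checking and record-buffer allocation):

    /* Shared setup prefix: every suite's SetUp() funnels through this base
     * version. The SSL_METHOD parameter is what varies between suites. */
    static HEARTBEAT_TEST_FIXTURE set_up(const char *const test_case_name,
                                         const SSL_METHOD *meth) {
        HEARTBEAT_TEST_FIXTURE fixture;
        memset(&fixture, 0, sizeof(fixture));
        fixture.test_case_name = test_case_name;
        fixture.ctx = SSL_CTX_new(meth);
        if (fixture.ctx != NULL)
            fixture.s = SSL_new(fixture.ctx);
        if (fixture.s == NULL)
            fprintf(stderr, "Failed to allocate SSL object: %s\n",
                    test_case_name);
        return fixture;
    }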
Multiple test suites can share the same underlying fixture structure and can "inherit" base fixture behavior by defining their own variations of the SetUp() function (and possibly TearDown()) that make a call to the "base" version.
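Sketches of the DTLS and TLS variants; dtls1_process_heartbeat() and tls1_process_heartbeat() are the actual OpenSSL functions under test:

    /* Suite-specific setup "inheriting" from the base set_up(): each
     * variant picks the protocol method and the function under test. */
    static HEARTBEAT_TEST_FIXTURE set_up_dtls(const char *const test_case_name) {
        HEARTBEAT_TEST_FIXTURE fixture =
            set_up(test_case_name, DTLSv1_client_method());
        fixture.process_heartbeat = dtls1_process_heartbeat;
        return fixture;
    }

    static HEARTBEAT_TEST_FIXTURE set_up_tls(const char *const test_case_name) {
        HEARTBEAT_TEST_FIXTURE fixture =
            set_up(test_case_name, TLSv1_client_method());
        fixture.process_heartbeat = tls1_process_heartbeat;
        return fixture;
    }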
The "execution function" prepares helper data based on input values from the test fixture, executes the code under test (fixture.process_heartbeat(s) in this example), and performs all of the assertions and handles their output formatting. It returns zero on success and one on failure, no matter how many assertions may have failed.
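An abridged sketch, with the construction of the heartbeat record elided:

    /* Runs the code under test against the fixture's inputs and checks the
     * results against its expected outputs. */
    static int execute_heartbeat(HEARTBEAT_TEST_FIXTURE fixture) {
        int result = 0;
        SSL *s = fixture.s;
        int return_value;

        /* ... build the heartbeat request record in s from fixture.payload
         * and the claimed fixture.sent_payload_len ... */

        return_value = fixture.process_heartbeat(s);

        if (return_value != fixture.expected_return_value) {
            printf("%s failed: expected return value %d, received %d\n",
                   fixture.test_case_name, fixture.expected_return_value,
                   return_value);
            result = 1;
        }

        /* ... further assertions comparing the response payload written by
         * the code under test against fixture.expected_return_payload and
         * fixture.expected_payload_len, each printing a message and setting
         * result = 1 on mismatch ... */

        tear_down(fixture);
        return result;
    }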
There can be multiple execution functions if need be, and possibly multiple separate assertion functions. Also, while in this version the execution function calls TearDown(fixture), the actual OpenSSL version of the test has the test cases execute TearDown() (via a macro defined in the test file).
Each test case is a function that constructs and sets up a test fixture instance, overrides the fixture structure with inputs and expected outputs for that specific test case, and calls the execution function and TearDown() (either directly or as part of the execution function). It returns the result of the execution function, which will be zero on success and one on failure.
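A sketch of one such test case; the payload text is illustrative, and the committed version also accounts for the record header and padding:

    static int test_dtls1_not_bleeding(void) {
        HEARTBEAT_TEST_FIXTURE fixture = set_up_dtls(__func__);
        static unsigned char payload_buf[] = "Not bleeding";

        /* Honest heartbeat: the claimed length matches the actual payload. */
        fixture.payload = payload_buf;
        fixture.sent_payload_len = sizeof(payload_buf) - 1;
        fixture.expected_return_value = 0;
        fixture.expected_payload_len = sizeof(payload_buf) - 1;
        fixture.expected_return_payload = "Not bleeding";
        return execute_heartbeat(fixture);
    }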
Notice the passing of __func__ to the SetUp() function. This is how the fixture gets initialized with the name of the test function for better assertion-failure formatting. (Note that on Windows, __FUNCTION__ should be used instead.)
The other feature of the test cases is that they look very similar to one another; this is a deliberate violation of the Don't Repeat Yourself principle, because defining test cases is the one place where code duplication makes sense. There will be a lot in common between some test cases, though every test case will have at least one detail that differs from all the others. Having the test cases look so similar not only allows for the rapid generation of test case functions, but also, when test cases are small and self-contained, makes it easier to take stock of the similarities and differences by scanning them. Only declarative statements are duplicated, not any detailed program logic. Assertion macros or functions would be OK to duplicate as well, since their intent is basically declarative.
Also notice that, were this a data-driven test, the test fixture structure used in this function likely would have been declared thus (replacing the SSL members with placeholders, since they can't be filled in until setup runs).
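A sketch of that positional, data-driven initialization of the same fixture used in test_dtls1_not_bleeding() above:

    /* The same fixture, initialized positionally; without the comments, a
     * reader must count fields against the struct declaration to know what
     * each value means. */
    HEARTBEAT_TEST_FIXTURE fixture = {
        NULL, NULL,                /* ctx, s: placeholders until setup runs */
        "test_dtls1_not_bleeding", /* test_case_name */
        dtls1_process_heartbeat,   /* process_heartbeat */
        payload_buf, 12,           /* payload, sent_payload_len */
        0,                         /* expected_return_value */
        0,                         /* return_payload_offset */
        "Not bleeding", 12         /* expected_return_payload, expected_payload_len */
    };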
The member-by-member initialization of the Pseudo-xUnit version is easier to read in general, which makes it easier to work with when a test case fails, or when defining a new test case as a variation on an existing one. The Pseudo-xUnit version is also easier to update when a new member is added, since the common SetUp() function can define a default value for the new member, and only the test cases for which that new member is relevant need to be updated. At the same time, the essence of the data-driven/table-driven structure is preserved, with the execution function containing all of the common execution and assertion logic.
The test runner can be as straightforward as a main() function that adds up all of the return values from each test case and reports the number of test cases that failed.
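A minimal sketch, assuming the remaining test case functions are defined along the same lines as the one shown earlier:

    int main(void) {
        int num_failed;

        SSL_library_init();
        SSL_load_error_strings();

        /* Every new test case function must be added here by hand. */
        num_failed = test_dtls1_not_bleeding() +
                     test_dtls1_heartbleed() +
                     test_tls1_not_bleeding() +
                     test_tls1_heartbleed();

        if (num_failed != 0) {
            printf("%d test%s failed\n", num_failed,
                   num_failed != 1 ? "s" : "");
            return EXIT_FAILURE;
        }
        return EXIT_SUCCESS;
    }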
The test runner can be arbitrarily more complex than this, but something like the above will likely do the trick most of the time. The trickiest part is remembering to add each new test case to main().
Despite the fact that this unit test was written in C, which has no direct support for Object-Oriented Programming, I was able to get nearly all of the benefits of an xUnit framework without any special tools or macro magic, and the code is nearly as clean and well-organized as it would be in any other language. I'm hoping that we will be able to apply this pattern to great effect in the effort to improve OpenSSL's unit/automated test coverage, and that these ideas may be of use to others wishing to introduce unit testing to an existing code base without having to learn any new frameworks or tools.