Mike Bland

Music student, semi-retired programmer, and former Googler

Most recent posts

79 posts total. See Filtering and Navigation for tips on how to find the bits in which you're interested.

I paid a visit to Washington, D.C. this past week, gave a tech talk on unit testing and culture change, and was amazed at the positive energy there.

- Boston
Tags: grouplets, programming
Discuss: "D.C. Trip Report" on Google+

This past Wednesday, July 16, I flew down to Washington, D.C. to visit a few government IT teams and give a talk on unit testing and culture change. I was hosted by Robert Read from the team known as “18F”, so named because it resides in the General Services Administration building at 18th and F streets. 18F, as I currently understand it, acts as a “production floor” that develops software products for government agencies using industry-standard (mainly agile) practices, and has ambitions to perform outreach to other agencies to improve their development processes. The audience for my talk, however, consisted primarily of members of the US Citizenship and Immigration Services development team and US Digital Services.

What was supposed to be a one-hour talk followed by a forty-five minute question and answer session turned into a one-hour, forty-five minute discussion, in which almost all of the two dozen or so attendees remained from beginning to end. That was just what I was hoping would happen—I mean, I was parachuting into this environment completely cold. I’d met no one, I’d seen no code, I’d literally only stepped into the office for the first time and gone straight to the conference room. Who was I to tell these folks what they needed to be doing on their projects? Did I even have anything to say that was either interesting or something they’d not heard already? Would I have enough to say? Would I ramble too much?

As it turned out, my approach of just getting the talk going and letting folks ask questions freely throughout warmed the room up enough that it wasn’t just me pontificating like an all-knowing so-and-so the entire time. There was real back-and-forth. Rob and I had brainstormed the four main sections the week before:

  • Unit Testing Principles, Practices, and Idioms
  • Unit Testing Education and Advocacy
  • Unit Testing APIs and Legacy Systems
  • Rapid Prototyping and Unit Testing Strategy

If I remember correctly (the day was a bit of a blur), we actually decided on the spot to start the talk at the second section, “Unit Testing Education and Advocacy”. From there, the talk naturally wandered throughout the first and third sections as well; we never got around to the fourth, but it’s better to have and not need than to need and not have.

The slides are freely available (yes, I got permission to share them first), but are nothing to be bowled over by. They’re basically just a fancier form of outline that I could use to keep track of the discussion. That said, they did the job more than well enough, and I’m offering them here in case others might be inspired to give similar talks in their own workplaces.

On top of that, I have to say I was stunned at the winds of change that seemed to be blowing through government IT culture. It was a real inspiration to see so many people engaged in serving their country, trying to make a real difference not just in how development is done, but in how the products of their labors can better serve their fellow citizens. It felt positively…grouplet-y.




My original "goto fail" article has been published online and in print by the Communications of the ACM, the flagship journal of the ACM.

- Boston
Tags: Apple, Google, goto fail, grouplets, Heartbleed, Testing Grouplet
Discuss: ""Finding More Than One Worm in the Apple" in Communications of the ACM" on Google+

The flagship journal of the Association for Computing Machinery, Communications of the ACM, has published my article Finding More Than One Worm in the Apple (PDF version, "digital" version) in its July 2014 issue (Vol. 57, No. 7, pp. 58-64). Again, mega-thanks to my reviewers mentioned in the article; to Guido van Rossum for advocating for the article; to George Neville-Neil (aka Kode Vicious), Jim Maurer and Matt Slaybaugh, who published it in ACM Queue and submitted it to CACM as part of the “Practice” section; and to Diane Crawford at CACM for being such a pleasure to work with in editing the final draft.

Let’s hope that between this article, my Goto Fail, Heartbleed, and Unit Testing Culture article for Martin Fowler, and my work on OpenSSL’s unit/automated testing, some good will come of it all.




I'm experimenting with refactoring OpenSSL's existing recursive Make structure into a top-Makefile-with-includes structure.

- Boston
Tags: OpenSSL testing, programming, technical
Discuss: "Makefile Refactoring" on Google+

In the makefiles branch of my OpenSSL fork, I’m experimenting with refactoring the existing recursive make structure into a Makefile-with-includes structure. I believe this is a critical first step towards improving automated test coverage and instilling a testing culture.

  • Motivation
  • Refactoring Steps
  • Side Effects

Motivation

I started down this path when writing a small test in my encode-test branch. It’s not the most important test to write, but it was something to sink my teeth into to get better acquainted with the code and try out more of my testing ideas. When I tried to change crypto/evp/encode.c to make the test fail, and it didn’t, I realized that make test didn’t rebuild the crypto library on which the test depends. The paper Recursive Make Considered Harmful explains why: the top-level Makefile recursively invokes make in the test/ directory, and the make process parsing test/Makefile doesn’t have the full dependency graph, so it doesn’t know that it needs to rebuild libcrypto before rebuilding a test.
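The failure mode can be sketched with a minimal, hypothetical pair of Makefiles (these are illustrative only, not OpenSSL’s actual build files):

```makefile
# Top-level Makefile (hypothetical sketch)
test:
	cd test && $(MAKE) run    # child make never sees libcrypto's build rules

# test/Makefile (hypothetical sketch)
run: encode_test
	./encode_test
encode_test: encode_test.o ../libcrypto.a    # the dependency is declared...
	$(CC) -o $@ encode_test.o ../libcrypto.a
# ...but this make has no rule for rebuilding ../libcrypto.a from the
# sources in crypto/, so an edit to crypto/evp/encode.c never triggers
# a rebuild and relink when you run "make test" alone.
```

The child make sees ../libcrypto.a only as an opaque file, not as a target with its own prerequisites, which is exactly the incomplete-dependency-graph problem the paper describes.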

The first, most obvious workaround is to always invoke make && make test. The second, less-obvious way is to use the top-level GitConfigure and GitMake scripts instead; GitConfigure will compile a single Makefile out of all of the sub-Makefiles with the correct dependency info (mostly).

When trying to encourage testing as a regular habit, any friction in the build process works against that goal. The make && make test solution is both inefficient and error-prone: it lengthens the feedback loop by doing unnecessary work (spawning tons of needless recursive make processes), and it’s easy to forget the first step. The GitConfigure/GitMake solution works, but in both cases, the need to resort to anything other than a straightforward make test violates the principle of least surprise. This is especially problematic right now, as I’ve got a good number of volunteers ready and willing to help improve OpenSSL’s testing, but who are struggling to understand the code and how they can best contribute. I feel like they’re waiting for me to clear a path, and the first obstacle isn’t the code itself, but the recursive make structure.

Refactoring Steps

The end result I’m aiming for is to have a top-level Makefile include all of the other Makefiles, as well as dependency files for each object file generated as a side-effect of compilation, so that rebuilding and executing a test is fast and correct. It will remain compatible with GitConfigure and GitMake for those who prefer to use them.
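The target layout might look something like the following hypothetical sketch (file names and variables are my own illustration, not the final structure):

```makefile
# Hypothetical top-level Makefile: one make process, one dependency graph
include crypto/Makefile.inc
include ssl/Makefile.inc
include test/Makefile.inc

# Per-object dependency files emitted as a side effect of compilation;
# "-include" ignores them on the first build, before they exist.
-include $(OBJS:.o=.d)

%.o: %.c
	$(CC) $(CFLAGS) -MMD -MP -c -o $@ $<    # writes foo.d alongside foo.o
```

With every rule visible to a single make process, rebuilding a test after touching crypto/evp/encode.c would correctly rebuild libcrypto first, and nothing else.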

I’m not going to go into excruciating detail here, but suffice it to say this will happen over a long series of changes, and I’ll constantly rebase them on updates to the master branch. Each change will maintain compatibility with the existing recursive make structure until the final switch is flipped. I’ll take care to support both GNU and BSD make, and ensure that everything works on Mac OS X, Linux, and FreeBSD at the very least. Now that I’m also running Windows 8.1, I’ll be able to check my changes for Windows compatibility, too.

I wrote a Python script to automate the changes now evident in my makefiles branch. I developed it to apply the changes one step at a time, and made sure each step was idempotent: running the script again after the first run left the Makefiles unchanged. As I’m now going back to redo many of those changes in a different order, I’m going to be building the script back up again, this time checking it into my personal Google Code repository as update_makefiles.py. Since this is a one-off script that only I’m using, I don’t imagine it’ll go into OpenSSL proper (plus, I’m breaking tradition by using Python instead of Perl), but this way folks can take a look at how I’m going about the process if they’re curious.
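The idempotence check is the part worth imitating. Here is a toy sketch of the idea, with a made-up rewrite rule standing in for the real transformations in update_makefiles.py: a step that converts a hypothetical recursive-make recipe line into an include directive, written so that a second run finds nothing left to change.

```python
import re

def inline_subdir_rule(makefile_text):
    """Rewrite a hypothetical recursive-make recipe line into an include
    directive. A stand-in for one step of update_makefiles.py, not its
    actual logic: once rewritten, the pattern no longer matches, so the
    transformation is naturally idempotent."""
    return re.sub(
        r'^\tcd (\S+) && \$\(MAKE\)\s*$',   # tab-indented "cd DIR && $(MAKE)"
        r'include \1/Makefile',
        makefile_text,
        flags=re.MULTILINE,
    )

original = "all:\n\tcd crypto && $(MAKE)\n"
once = inline_subdir_rule(original)
twice = inline_subdir_rule(once)
assert "include crypto/Makefile" in once
assert once == twice  # idempotent: a second run changes nothing
```

Checking `once == twice` after every step is a cheap way to catch a rewrite that keeps matching its own output and mangling the files on repeated runs.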

Side Effects

Even if the core team ultimately decides to reject my experiment, it’s already paid off enormously. First of all, now I know many of the problems that could frustrate people helping with the testing effort. If we can’t fix these issues, we can at least make sure they’re clearly documented. Second, as a result of all this digging into the build system, I’ve already issued a pull request to fix several build issues, particularly those that prevented successful OS X builds, since that’s my home platform. These fixes are worth committing independently of all the other experimentation I’ve been doing; they’re small, noncontroversial, and eliminate a great deal of friction in and of themselves.

Third, I’m having way too much fun with this.