Mike Bland

The Snowflake Fallacy and Instigator Theory

We're more alike than different—but it can be easier to convince the rest of the world of that than our own teammates.


Tags: leadership, personal, philosophy

It’s been about a month since I left Apple. I’m still decompressing, and things are good—and I’m playing a lot more guitar, though I have yet to find others to jam with. Along those lines, I’m feeling the need to structure my time and become productive in some way. That’s very much an ongoing project, but I’m excited to see where things lead, one small step at a time.

In New York from December 12-15

Speaking of which, I’ll be visiting New York City next week. If anyone there would like to catch up, please send me an email or LinkedIn message! My Monday evening and all of my Wednesday are free. (Past colleagues David Plass, John Turek, Alex Buccino, and Andrew Trenk have spoken for the rest of my time.)

Two ideas for consideration

Also, I have two ideas I want to get on the record. I’ll have more to say about both in the future, I’m sure, but I’ve been sitting on these for long enough. In my experience, ideas become more valuable when shared early and often, with anyone who’s game to help shape them. (That reminds me, I need to post about the innovation and leadership lessons from The Beatles: Get Back sometime, too…as well as Shoresy.)

  • The Snowflake Fallacy is the claim that a particular project is so different from others that basic coding and smaller testing practices don’t apply. At the molecular level, across languages and idioms and domains, code across different projects is more alike than it is different. Claims to the contrary are usually specious, if not willfully dishonest.
  • Instigator Theory is the idea that sometimes it’s easier to change the rest of the world than it is to change your own team. In other words, it’s about working from the outside in, instead of from the inside out.

The Snowflake Fallacy

Tell me if you’ve heard this one before: “Our project is different. All those practices that work great for other groups—especially unit testing, or smaller tests in general—won’t work for us.”

This amounts to claiming that the project is a “snowflake,” completely distinct from all others in its own unique way. But isn’t the project still using:

  • A particular, widely used programming language or five?
  • A particular, widely used text editor or IDE or five?
  • A particular, widely used source control system? And more often than not nowadays, isn’t that system Git?

What’s more, each of those tools exists to make certain practices easy, and often, to make other practices harder. For example:

  • All programming languages support statements, expressions, sequence, selection (conditional statements), iteration (loops and often recursion), and functions.
  • Nearly all modern programming languages support structured data and object-oriented or object-based programming models.
  • Many modern programming languages borrow features from one another. A popular one is memory management, to avoid memory leaks and memory-related crashes. Another is array bounds checking, to prevent out-of-bounds reads and writes. Lately, features that make asynchronous and concurrent code easier to write and reason about have been popular, as have higher-order functions and closures. So have standard libraries, or de facto standard modules, for common algorithms and services (such as HTTP servers and requests).
  • Text editors and IDEs exist to create and edit code files. Many times, people on the same team will use different editors to work on the same code.
  • Practically every practicing programmer these days groks short-lived branches, rebasing and squashing and cherry-picking commits, tagging, pushing and pulling branches between Git repos, etc. Source control exists to make shared editing of a project’s code easy, while making it relatively difficult to screw things up or lose history.

Of course, certain domains dictate the use of specific languages or tools for one reason or another. But very few projects exist that are truly unique in their choices of languages or tools.

What I’m getting at is that, at the molecular level, most software is more alike than it is different (a short sketch in code follows this list):

  • Once someone starts committing requirements or ideas to code, they start writing statements and expressions to process primitive or structured data.
  • These statements and expressions are organized according to the principles of sequence, selection, and iteration.
  • Sections of code are then abstracted using functions, and maybe further abstracted into classes.
  • Free functions and classes may then be packaged into modules or components.
  • These modules and components are then shared by publishing them via packages or libraries. These expose an application programming interface defining a contract between the shared code and the code that calls it.
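
To make that concrete, here’s a minimal sketch in Python. The module and all of its names are invented for illustration, not taken from any real project; nearly any mainstream language would yield the same shape:

    # orders.py: a hypothetical module illustrating the universal building
    # blocks: statements, expressions, selection, iteration, functions,
    # classes, and a published interface.
    from dataclasses import dataclass

    @dataclass
    class LineItem:  # structured data, abstracted into a class
        name: str
        price_cents: int
        quantity: int

    def order_total(items: list[LineItem], discount_pct: int = 0) -> int:
        """Part of the module's API: the contract with calling code."""
        total = 0                                      # a statement
        for item in items:                             # iteration
            total += item.price_cents * item.quantity  # an expression
        if discount_pct > 0:                           # selection
            total -= total * discount_pct // 100
        return total

Swap in your language and domain of choice, and the raw material stays the same.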

Granted, each code project serves different purposes in different domains. When you get to the published boundary of a project—be it an API or a user interface—you may have specific testing needs that require bespoke processes or tools. Or some lower-level code may perform mathematical calculations whose domain and range may be too large or difficult to validate via conventional testing. Validating concurrency properties can be notoriously tricky.

Even so, the vast, vast, vast majority of business logic and infrastructure code is fairly straightforward, resembling that of many other projects in the industry. Even subtle mathematical logic usually resides behind an infrastructure abstraction. Concurrent operations can be broken down into pieces that can be tested in a single thread. While these difficult properties may not be easily tested, that’s no excuse not to test the important properties of all the other code.
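
As a sketch of that decomposition (with invented names, not any particular project’s code), the pure logic inside a concurrent pipeline can be pulled out and verified without spinning up a single extra thread:

    # A hypothetical pipeline: the pure function carries the interesting
    # logic, so the concurrent wrapper around it stays thin.
    import unittest
    from concurrent.futures import ThreadPoolExecutor

    def normalize(record: dict) -> dict:
        """Pure business logic: no threads, locks, or I/O involved."""
        return {"id": record["id"], "name": record["name"].strip().lower()}

    def normalize_all(records: list[dict]) -> list[dict]:
        """Thin concurrent wrapper, with little logic of its own to get wrong."""
        with ThreadPoolExecutor() as pool:
            return list(pool.map(normalize, records))

    class NormalizeTest(unittest.TestCase):
        def test_strips_and_lowercases_name(self):
            # The property worth testing is exercised in a single thread.
            result = normalize({"id": 1, "name": "  Ada LOVELACE "})
            self.assertEqual(result, {"id": 1, "name": "ada lovelace"})

    if __name__ == "__main__":
        unittest.main()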

Also, the correct testing approach isn’t to throw as many resources as possible at testing the top-level interface. You’ll need to perform some testing at that level, of course, possibly to gain confidence in those trickier properties. However, testing isn’t a one-size-fits-all proposition. A Test Pyramid-based approach enables a wiser use of testing resources, improving stability while tightening feedback loops,1 yielding higher confidence and better quality outcomes. (I’ll have more to say about the Cobra Effect and the inverted Test Pyramid and so forth in the future.)
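
For a rough picture of that balance, here’s an illustrative suite layout; the directory names and proportions are invented, not a prescription:

    # Illustrative test-suite layout under a Test Pyramid approach:
    # tests/
    #   unit/         the broad base: many fast, hermetic tests of functions and classes
    #   integration/  fewer tests of a handful of components working together
    #   e2e/          a thin top: a few full-system tests at the published boundary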

After all, a project’s language and idioms aren’t unique, and the team’s text editors and IDEs aren’t unique, and the source control system isn’t unique. By the same token, basic principles and practices of sound coding and (smaller) testing are more than likely applicable to a given project.

This is why, within Apple, the Quality Culture Initiative had active participants and contributors from Software Engineering, iCloud, Applications, AI/ML, internal IT, and other groups. In that context, claiming that all those other projects are similar to one another, yet one’s own project is unique, is laughable. At a certain level they’re all more alike than different—and one’s own project is no exception.

Implications of the Snowflake Fallacy

The implications of this insight go far beyond software projects—they get quite cosmic, actually. Not long before I left Apple, someone who’d only ever worked there asked what I saw as the big differences between software companies.

My response was that, of course, on the surface, there are many things that set different companies and organizations apart. However, the more experience I get, the more I see the similarities. The differences do exist, and are important—but I think the similarities are even more important.

Any company or organization is made of a bunch of people trying to figure out how to produce or accomplish something together. Certain orgs have certain values they emphasize through policies, incentives, and other cultural levers. But underneath it all, once you’ve grown beyond a handful of people, the full range of human nature is still driving the dynamics.

To lean on my favorite metaphor once again: Whether you’re playing guitar, bass, piano, saxophone, drums, or singing, you’re striving to produce beauty together within the framework of harmony and rhythm. No matter the composition of the group or the genre of the material, from Bach to Mozart to Monk to Metallica, it’s all music.

What’s more, while the differences between organizations and their cultures can be important, and can be worth celebrating, denying the similarities can be outright dangerous. Silicon Valley is littered with the graves of countless companies that thought themselves unique, superior, invincible. And crimes against humanity are committed by those who deny the common humanity that we all share.

The Snowflake Fallacy is pervasive and as old as humanity itself—it’s basically another form of toxic tribalism, another name for a familiar old problem. I’ve presented it here in the context of resistance to smaller testing within software projects, but that’s only one example of its corrosive influence. I don’t think stubborn, misguided software developers are war criminals, but their influence on modern society only continues to grow. We have a professional—which is to say, societal—responsibility to shut down the Snowflake Fallacy as best we can.

Humorous illustration of the Snowflake Fallacy

An illustration of the Snowflake Fallacy in action: Some folks working on low-level operating system components were once skeptical about the efficacy of smaller tests, because their code wasn’t that “high level.” After that group gained some success with smaller tests, I got into a good, civil discussion with a colleague working on microservices. He wondered about the applicability of smaller tests to a particular problem. When I pointed out the operating system group’s experience, his response was: “But they’re so low level!”

Instigator Theory

Overcoming the Snowflake Fallacy is really, really difficult. Software developers in general are nowhere near the rational, data-driven decision makers they make themselves out to be. Buried underneath the technobabble and the facade of logical thinking are the same fears, conscious and unconscious biases, and blind spots many of us have. They’re human, just like the rest of us, which means they’re subject to the same economic, cultural, and emotional forces that often breed irrational conclusions.

For this reason, a team that isn’t open to the idea of writing more smaller tests will find reasons to override an individual who is. Unless that individual is an authority figure with power over the team members’ performance evaluations, that individual’s direct influence is severely limited. That’s no reason not to try to set a good example and advocate for adoption regardless. But just as testing a project solely through the API or UI isn’t the most efficient or effective use of resources, pushing for change solely from inside the team isn’t either. Better options exist.

Specifically, Instigator Theory suggests that connecting with like-minded Instigators on other teams within the organization may be a better approach. It’s based on the insight that sometimes it’s easier to change the rest of the world than it is to change your own team. It’s about working the problem from the outside in, rather than from the inside out.

Instigators is my umbrella term for the Innovators and Early Adopters from Geoffrey Moore’s Crossing the Chasm model. These Instigators assume responsibility for developing and marketing improved principles and practices—what Moore would call the Whole Product—to the Early Majority across the organization. The Early Majority is open to trying new things once there’s a Whole Product they can apply without too much difficulty.

The Late Majority always asks for more data, more case studies, more proof to answer the question, “How do we know it’ll work for us?” Once the Early Majority starts demonstrating widespread success, the Late Majority will get on board. They can’t be accused of an abundance of courage, but they’ll make sure improvements get baked into the culture once they’re a safe bet. At that point, the Laggards—the most vocally opposed, who feel threatened by change—won’t have much influence. Before results become apparent, their words echo around the halls; once results manifest, those words won’t hold as much force as they once did.

My own Rainbow of Death model breaks down the responsibilities the Instigators need to fulfill to cross the chasm and connect with the Early Majority. But the point is: as an Instigator, don’t waste your time on the Late Majority members and Laggards on your own team. Go outside to connect with fellow Instigators if you have to, and together, focus on the Early Majority first. At this stage, you need partners focused on developing the Whole Product in good faith.

Once the Whole Product comes together, it’s time to sell it. Get results from the Early Majority, and use those results to sell it further. Eventually you’ll get the attention of the right people—including some of those authority figures who control performance evaluations. Show how your Whole Product solves a problem for these figures, and share the results to give them a legitimate foundation for recommending it.

Eventually, this Whole Product will work its way back into your team from the outside—and you’ll have built a solid professional network along the way. If your team still doesn’t get it, you’ll have more options for moving on to one that does.

Instigator Theory and the Test Pyramid

People often think Instigator Theory is purely grassroots-focused. It’s not—much like the Test Pyramid, it’s about striking the right balance of effort. Establishing a foundation of grassroots success is necessary, but not sufficient: it’s the foundation for higher-level executive recommendations that will actually succeed, because they rest on solved problems and results that enable everyone to implement them successfully. By contrast, executive mandates issued by fiat, without established support to help people be successful, risk backfiring.

As I related in my post on Test Certified:

The very afternoon after I first lobbied a gathering of Test Engineering folk to adopt Test Certified as a means to have a meaningful dialogue with product teams, I met Patrick Copeland, the Director of Engineering Productivity, for the first time. I lamented that we still needed then-CEO Eric Schmidt and other high-level executives to openly and forcefully endorse Test Certified, and developer testing in general. Patrick then related that he’d come from Microsoft, Eric from Novell and Sun, Alan Eustace from Digital, Bill Coughran from Bell Labs, etc., and at each of those companies, these guys have seen corporate mandates fall flat with engineers and fail time after time. They’re waiting for good, well-implemented, and street-proven ideas to bubble up for them to get behind when the time is right. That totally changed my perspective and thought processes from then on, and I never gave another thought to securing such an endorsement. In the end, while we received a few kind nods from high-up execs on rare occasions, we managed to achieve a sweeping, permanent culture change on our own, without help (or, thankfully, interference or deterrence) from on high.

Conclusion (for now)

These could’ve been two separate posts, at least. However, I found it useful to juxtapose the pair. Writing testable code and good, smaller tests isn’t technically difficult. Overcoming the resistance to adopting them remains a challenge, largely due to the Snowflake Fallacy mindset. Instigator Theory can be an effective tool for overcoming this resistance—not by forcing anything on people, but by helping them understand and embrace the change willingly.

Instigator Theory is all about leadership, which is ultimately about selling, not telling—and nothing sells quite like making the right thing the easy thing!

  1. Shoutout to Simon Stewart for being a vocal advocate of shorter feedback loops. See his Dopamine Driven Development presentation.