In Defense of Defensive Programming

I recently spoke with some developers about a proposed code change where I suggested adding a guard clause to prevent recalculating some state in the case where it didn’t actually need to be recalculated.

Changing this:

void OnObjectUpdate(object updatedObject)
{
  RecalculateFoo(updatedObject); // expensive recalculation and rerendering
  // some other stuff
}

To this:

void OnObjectUpdate(object newlyUpdatedObject, object cachedObject)
{
  if (newlyUpdatedObject == cachedObject)
  {
    // the object hasn't changed, so exit
    return;
  }
  RecalculateFoo(newlyUpdatedObject); // expensive recalculation and rerendering
  // some other stuff
}

This is pseudocode of course, and the original code wasn’t even C#; it was a framework method. The issue was that the framework being used would be able to trigger this event even if the underlying object hadn’t actually changed. Maybe uncommon, but possible.1

The response to this proposed change was: “Isn’t this defensive programming? There should be code upstream to prevent this from ever happening. We don’t need to check it here.”

I know it wasn’t meant as such, but in my head the word “defensive” here almost sounded like a pejorative. Like it gave off a bad smell.

At the time I deferred, admitting that yes, it could be considered defensive. Surely, elsewhere, there’s other code that makes sure that if there are no new values, nothing in the object gets updated. I was running low on sleep and didn’t have the mental wherewithal to articulate why, exactly, it didn’t feel right to skip this check. Shortly after we moved on to more productive conversation.

The question stuck with me though and I couldn’t stop mulling it over. Should the code be that defensive? Do we need to check this here? Or for another example, why write a null check four layers down if you know the code calling some private internal method already checks for null three other times?
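The null-check case can be sketched like this (the names are invented for illustration). The inner function can’t see its callers’ checks, and a future caller may not perform them at all:

```typescript
// Hypothetical layers; names are made up for illustration.
function formatLabel(value: string | null): string {
  // Callers above "already" check for null -- but this function
  // can't see those checks, and a future caller may skip them.
  if (value === null) {
    throw new Error("formatLabel: value must not be null");
  }
  return value.trim().toUpperCase();
}

function renderRow(value: string | null): string {
  if (value === null) return "(empty)"; // the upstream check
  return formatLabel(value);
}
```

The inner check costs one comparison; the crash it prevents costs a debugging session.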

On the other hand, what’s the harm in another check here? We don’t pay by the lines of code. Code bases grow, anyhow, and the upstream component might be removed or rewritten or the developer might forget that there’s something downstream that depends on a certain behavior and a year or two or five from now, a missed null check or a needless recalculation and rerender could be a real concern.

One thing that always pops up in my head when writing code like this is an old coworker: a super intelligent person, one of the best software developers I have ever worked with who understood user needs better than the user did and who wrote rock-solid software. I always picture them staring, bewildered, at a stack trace and muttering, “That shouldn’t be possible.”

But it was. Because multiple people contribute to a code base and no one can hold its entirety in their head all the time, and due to a variety of competing priorities and external factors, bugs slip through. The world changes. What was once impossible becomes possible, and things break.

I could have said all of this at the time, but it’s only a defense of this particular change, not of the approach as a whole. There are reasons I approach software with this mindset (defensive, low-trust, double-checking everything!) but I struggled to articulate them without resorting to pithy phrases like “throw your user down the pit of success.”2

It wasn’t until I was reading the Austral introduction that I came across this passage which put it perfectly (bolded emphasis mine):

If planes were flown like we write code, we’d have daily crashes, of course, but beyond that, the response to every plane crash would be: “only a bad pilot blames their plane! If they’d read subparagraph 71 of section 7.1.5.5 of the C++, er, 737 spec, they’d know that at 13:51 PM on the vernal equinox the wings fall off the plane.”

This doesn’t happen in aviation, because in aviation we have decided, correctly, that human error is an intrinsic and inseparable part of human activity. And so we have built concentric layers of mechanical checks and balances around pilots, to take on part of the load of flying. Because humans are tired, they are burned out, they have limited focus, limited working memory, they are traumatized by writing executable YAML, etc.

Mechanical processes—such as type systems, type checking, formal verification, design by contract, static assertion checking, dynamic assertion checking—are independent of the skill of the programmer. Mechanical processes scale, unlike berating people to simply write fewer bugs.

Honestly, it’s going to be hard not to whip this argument out for a lot of things.

From my limited observation it seems that people who write software tend to think that, since they know how software works (citation needed), they know how the world works3. What’s the point of having dedicated QA when you have unit tests and CI/CD? Who needs project managers? Why add a guard clause if you expect this thing to be taken care of elsewhere?

Maybe I am too careful. We’re usually writing glorified CRUD apps, after all. It’s not the end of the world if a thing gets recalculated every once in a while. Is it?

Little inefficiencies compound. How you do anything is how you do everything, and this is doubly so when it comes to writing code. Small bad habits multiply. What is true today may not be true tomorrow, and it’s important to take care especially when it’s a small effort to ensure that you are prepared for when, not if, this happens. Otherwise the result is multiplicative, not additive, and the little bugs and inefficiencies turn once-pleasant software into something else.

The inverse is equally true. Performant, resilient software is not the result of a few guard clauses, or a few micro-optimizations. Good software is the result of a culture of discipline and care. It comes from an intentional approach toward the little things, over and over, even when today it might not matter.

So take the time. Add the guard clause. Don’t trust the upstream and run through your metaphorical pre-flight checklist. If something shouldn’t be possible, make damn sure it’s impossible.

EDIT (2025-05-22):

For a good counterpoint to the argument I made regarding the code above, check out this post: Push Ifs Up and Fors Down. It makes some good arguments regarding where best to put control flow.

Once a codebase grows past a certain threshold, and accumulates enough distinct authors, fully centralized logic can become a risk, as I argued above. But there are plenty of cases where it does make sense to keep the control flow and logical branches all in one place, especially with smaller files or modules.
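The “push ifs up” idea can be sketched like this (hypothetical names): the branch lives once in the caller, and the callee’s signature makes the invalid state unrepresentable, so the guard clause isn’t needed downstream at all:

```typescript
// "Push ifs up": the caller owns the branch, the callee owns none.
type User = { name: string };

// The callee takes a non-nullable User; the type system makes the
// "missing user" state unrepresentable here, so no guard is needed.
function greet(user: User): string {
  return `Hello, ${user.name}!`;
}

// The if lives once, at the top, instead of in every callee.
function handleRequest(user: User | null): string {
  if (user === null) {
    return "Hello, guest!";
  }
  return greet(user);
}
```

Either way, the invariant is enforced mechanically; the two approaches just disagree about where.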

The thing is, there’s no 100% correct way to do everything, or else we as an industry would have found it by now. A specific approach may work for one team or developer or codebase, but not another. The important thing is to stay vigilant and discerning, and to be thoughtful about how you approach a given problem.

  1. But there is a reason that the framework in question provides two versions of the same method, one of which includes the cached object being watched.
  2. The relevant interpretation here being, only make possible the actions that let you go the right way.
  3. Which is how we got the meme of techbros constantly reinventing buses.