As “best practices” have drifted around, we often see associated shifts in other practices in response. Once upon a time, assertions were an extremely common tool. The infamously sparse C standard library nevertheless comes with assert.h. Entire philosophies of programming, like defensive programming, built up around using them.

As practices have evolved, assertions have started to fall out of favor. (This is not to say they’re not used, just that they’re not used like they once were.) Tests will of course check things, and may use the term “assert” for this, but this isn’t what I mean by an “assertion.” Assertions are things we’d write in the code under test, not in the testing code itself.

So what happened to defensive programming?

The case against assertions

In the last 30 years, assertions got squeezed between multiple different pressures:

  1. The rise of unit testing and test frameworks.
  2. The rise of logging.
  3. The consequent promotion of many assertions into ordinary error-handling mechanisms.

A small class of assertions really fell into the category of things that unit tests could do instead. So for this limited class of assertion, once unit testing became ubiquitous, the obvious thing was done. The assertion was removed from the function itself, and tests for that function were written instead. There’s little reason for a function to be left checking whether it did the right thing, if there’s a test suite to do that instead. And a test suite means the function actually gets tested during development. If a function is buggy, users running into an assertion failure is better than silent and mysterious misbehavior. But better still is catching the misbehavior during development, before the buggy code is released to users at all.
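As a minimal sketch of that shift (the function and test here are invented, not from any particular codebase), the same postcondition can live inside the function as an assertion, or move out into a test that exercises it during development:

```rust
// The traditional style: the function checks its own postcondition.
fn normalize(scores: &mut [f64]) {
    let total: f64 = scores.iter().sum();
    for s in scores.iter_mut() {
        *s /= total;
    }
    // In-function self-check of the postcondition.
    debug_assert!((scores.iter().sum::<f64>() - 1.0).abs() < 1e-9);
}

// The newer style: the same check moves out into the test suite, where it
// actually runs during development.
#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn normalize_sums_to_one() {
        let mut scores = vec![1.0, 2.0, 3.0];
        normalize(&mut scores);
        assert!((scores.iter().sum::<f64>() - 1.0).abs() < 1e-9);
    }
}
```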

With this rise in unit testing, a second effect emerged: if we’re testing the function’s behavior, shouldn’t we be testing how it responds to erroneous inputs, too? If there’s an assertion in the function, isn’t that a branch, and therefore another case of inputs to test? And if an assert is aborting the whole process, doesn’t that make it rather hard to write unit tests exercising that branch? The result of these lines of questioning was the promotion of many traditional assertions to more ordinary error handling mechanisms instead.
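For example, a check that used to abort the process can be rewritten to return an error, which a unit test can then exercise like any other branch. Here’s a sketch in Rust, with invented names:

```rust
// Before: an erroneous input trips an assertion and aborts the process.
fn mean_asserting(samples: &[f64]) -> f64 {
    assert!(!samples.is_empty(), "mean() of an empty slice");
    samples.iter().sum::<f64>() / samples.len() as f64
}

// After: the same check "promoted" to ordinary error handling, which a
// unit test can exercise without killing the test runner.
#[derive(Debug, PartialEq)]
struct EmptyInput;

fn mean(samples: &[f64]) -> Result<f64, EmptyInput> {
    if samples.is_empty() {
        return Err(EmptyInput);
    }
    Ok(samples.iter().sum::<f64>() / samples.len() as f64)
}

#[test]
fn mean_rejects_empty_input() {
    // The "erroneous input" branch is now just another tested case.
    assert_eq!(mean(&[]), Err(EmptyInput));
}
```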

For the most part, this is probably a very good thing. Already, many were advancing arguments against the traditional behavior for assertions, where they were disabled in “release” builds, and only had any effect in “debug” builds. The theory is, if those checks are worth doing at all, why would you not want that debugging aid for production systems? It often is better to crash than to silently start doing the wrong thing, and the performance costs are usually not significant. So going a step further—promoting these checks to ordinary code, not even wrapped in an assertion mechanism that could be disabled—is just a continuation of this trend.
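In Rust terms, the distinction looks something like the sketch below (invented names again): debug_assert! follows the traditional model of being compiled out when debug assertions are disabled, which is the default for release builds, while a plain assert! keeps the check everywhere.

```rust
// Traditional style: the check vanishes from release builds (where debug
// assertions are off), so it costs nothing in production, but it also
// catches nothing there.
fn apply_discount_traditional(price_cents: u64, percent: u64) -> u64 {
    debug_assert!(percent <= 100, "discount over 100%");
    price_cents * (100 - percent) / 100
}

// "Keep the check" style: always on. One comparison is negligible next to
// silently computing a nonsense price.
fn apply_discount(price_cents: u64, percent: u64) -> u64 {
    assert!(percent <= 100, "discount over 100%");
    price_cents * (100 - percent) / 100
}
```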

The last pressure arises from the usually fatal nature of assertion failures. For traditional debugging, this is a good thing: you can be dropped into the debugger exactly in the state where misbehavior was first detected. But as it turns out, perhaps somewhat surprisingly, this is extremely unwanted behavior in almost any context other than running under a debugger. It’d be nice if test suites could run to completion instead of stopping at the first assertion failure, and applications can sometimes manage to limp along to a more graceful result. Servers, for example, may want to turn an assertion failure into a request failure, but keep the whole process running and just continue on with the next request to handle. If the process has become corrupt, some other operational monitoring should be able to catch that. We can at least try, because surprisingly often, it works.
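As a sketch of that server pattern (the handler is hypothetical), Rust’s std::panic::catch_unwind can turn a panicking assertion in one request into a failed response while the process keeps serving:

```rust
use std::panic;

// Hypothetical request handler with an internal invariant check.
fn handle_request(body: &str) -> String {
    assert!(!body.is_empty(), "empty request body");
    format!("echo: {}", body)
}

// Turn an assertion failure (panic) in one request into a failed response,
// while the server process keeps running for the next request.
fn serve_one(body: &str) -> Result<String, String> {
    panic::catch_unwind(|| handle_request(body))
        .map_err(|_| "internal error".to_string())
}
```

The default panic hook still prints the panic message and location to stderr, so the debugging information isn’t simply lost.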

Of course, we’d still like to do those checks, and record the debugging information as best we can, so these assertion failures are generally turned into logged events instead. But once again, this is another reason to do something other than use ordinary, traditional assertions.
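One common shape for this is a “soft assert” that records the violation and carries on. The sketch below assumes the log crate, and the macro name is invented:

```rust
// A "log and carry on" check: record the violation with enough context to
// debug later, but don't kill the process.
macro_rules! soft_assert {
    ($cond:expr, $($msg:tt)*) => {
        if !$cond {
            log::error!($($msg)*);
        }
    };
}

fn apply_update(old_version: u64, new_version: u64) {
    soft_assert!(
        new_version > old_version,
        "version went backwards: {} -> {}",
        old_version,
        new_version
    );
    // ...continue as best we can...
}
```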

For a while now, the Linux kernel’s assertion mechanism, BUG_ON, has been regarded as deprecated. The trouble is that such a mechanism immediately panics the kernel, halting everything on the machine and making recovery impossible. Kernel developers have discussed removing it entirely, except that code that currently uses it doesn’t expect it to return, which is sometimes hard to fix. But it’s a frequent occurrence to see cleanups remove BUG_ON in favor of WARN_ON_ONCE, replacing an assertion failure with “log and go on as best we can” even in the kernel, where you might think that sounds dangerous.

The case for assertions

Previously on this blog, we’ve looked at a major reason assertions are great: they’re extremely synergistic with property testing and fuzzing. With these testing approaches, it’s often good to take things to an extreme: turn on as many forms of dynamic analysis as possible, and write as many assertions as make sense. These approaches can be thought of as giving the code a workout, and the more self-checking that goes on, the better.
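Here’s a small illustration of that synergy, using the proptest crate and an invented round-trip example. The property test generates many inputs, and every generated input also exercises the assertions inside the code under test:

```rust
use proptest::prelude::*;

fn run_length_encode(input: &[u8]) -> Vec<(u8, usize)> {
    let mut out: Vec<(u8, usize)> = Vec::new();
    for &b in input {
        match out.last_mut() {
            Some((prev, count)) if *prev == b => *count += 1,
            _ => out.push((b, 1)),
        }
    }
    // Internal self-checks: no zero-length runs, no adjacent runs sharing a byte.
    debug_assert!(out.iter().all(|&(_, n)| n > 0));
    debug_assert!(out.windows(2).all(|w| w[0].0 != w[1].0));
    out
}

fn run_length_decode(runs: &[(u8, usize)]) -> Vec<u8> {
    runs.iter()
        .flat_map(|&(b, n)| std::iter::repeat(b).take(n))
        .collect()
}

proptest! {
    // Thousands of generated inputs, each of which also runs the
    // assertions inside run_length_encode.
    #[test]
    fn roundtrips(input in proptest::collection::vec(any::<u8>(), 0..256)) {
        prop_assert_eq!(run_length_decode(&run_length_encode(&input)), input);
    }
}
```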

How should we square this observation with the waning of assertions as a result of unit testing?

Well, in part, I think the belief that unit testing obsoleted many forms of assertion is actually mistaken. As sometimes happens, the zeal required to overcome inertia and get developers to adopt a better practice may have hardened into a dogma that treats it as the best practice. Some people start to develop software as if unit testing were the only way to reason about program behavior.

Who is to say that a function’s postcondition is “more properly” checked with a unit test than with an assertion? Writing a unit test means we’re adding the creation and checking of that situation to our automated test suite, so we can be sure the check doesn’t actually fail. Certainly, it’s a significant advantage that we’re routinely testing this code during development. So unit testing is great, in that sense.

But I wonder how often it’s happened that something that could have been asserted gets done with unit tests instead… and then a bug slips through. When the bug is discovered, nobody’s going to notice that an assertion might have caught it earlier. Instead, the blame will likely land on inadequate unit testing. We should be deeply uncomfortable here, as this starts to stink of “unit testing has failed us, and the solution is more unit testing!” Can unit testing fail? Or can unit testing only be failed?

The key advantage of an assertion is that it takes any situation as an opportunity to do some self-checking. If we restrict ourselves to thinking only in terms of unit testing, the utility of an assertion does indeed diminish to nearly nothing. But go anywhere beyond a very narrow notion of unit testing, and assertions become beneficial again. In “integration” testing, property testing, fuzzing, QA, and even in production, assertions start to add significant value. Each of these activities starts to check deeper properties automatically. Preconditions, postconditions, and invariants can be inspected in all the situations the code encounters, not just those few specific situations that are written up as tests against that single unit.

I’m not sure this is a winnable argument, but I put “integration” testing in quotes above because these terms are used inconsistently. Some people take overly-narrow views of what constitutes a “unit,” and they insist that everything else becomes an integration test.

Personally, I think anything that doesn’t require your code to do I/O is a unit test, but I’m not sure what terminology I’ll have to adopt for the book, yet. So, above, “integration” test means “any test that runs code from a module the test wasn’t directly written against.”

How should assertions be used?

To summarize all this, traditional assertions—the kind that abort immediately and are disabled in release builds—are less common today for good reasons:

  • Ordinary error handling mechanisms are often preferable, and today we often write code in languages where (e.g.) exceptions are always an option.

  • Terminating the process is often not the desired outcome. This tends to be a “running under a debugger” oriented behavior, and not suitable for test runners or release.

  • Disabling assertions is sometimes undesirable. Error checks are worth doing, and the performance costs are usually close to zero. (Exceptions exist, of course.)

But assertions (in general, not just traditional) are also less common for at least one bad reason:

  • When thinking in a “unit tests only” frame of mind, assertions appear to offer little value.

One (these days, less typical) reason to use the kind of assertions that abort is when other error handling mechanisms are not available, or come at too high a cost. This mostly means C or C++. In C, there obviously isn’t an exception mechanism at all. When a function can’t be changed to return an error code (because it’s already on a system boundary, or simply because the ergonomic costs of such a change are just too high), aborting becomes the only option.

Similarly, in C++, many coding styles demand -fno-exceptions, unfortunately for good reasons. But even when exceptions are acceptable in general, exception safety is so difficult that an abort can be preferable just to ensure certain functions cannot raise an exception. Indeed, one proposal to improve C++ exception handling (a proposal I quite like, btw) includes the suggestion to stop pretending that allocation failures can raise exceptions. Instead, a failed allocation would just abort by default. The costs of such a change are surprisingly small, and the benefits are surprisingly large.

Even in Rust, it appears that its normal error handling is ergonomic enough that aborting (panicking) isn’t usually preferable.
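A brief sketch of why (invented functions, nothing from the post): with the ? operator, propagating a Result is about as terse as not handling the error at all, so panicking buys little ergonomically.

```rust
use std::fs;
use std::num::ParseIntError;

// Parse errors propagate with a single `?` rather than a panic.
fn parse_port(text: &str) -> Result<u16, ParseIntError> {
    let port = text.trim().parse::<u16>()?;
    Ok(port)
}

fn read_port(path: &str) -> Result<u16, Box<dyn std::error::Error>> {
    let text = fs::read_to_string(path)?; // I/O error propagates
    let port = parse_port(&text)?;        // parse error propagates
    Ok(port)
}
```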

If I’m handing out advice, I’m not sure that “consider using assertions more” is the message I want to convey here. What I’d like to encourage is: learn to property test. In my experience, increased use of property testing will naturally encourage some additional use of assertions, as their value becomes more obvious, or even as their value actually increases as a result of adopting the practice, compared to conventional unit testing.

Having property tests around helps us determine what and how to assert, too. In the past, I’ve sometimes felt uncertain about how best to use (especially traditional) assertions. It’s easy to write an assertion and be left wondering whether it had any value. The purpose of an assertion is to aid in testing the software, and if you’re using assertions in deliberate conjunction with property testing, it becomes clear much more quickly when they’re helpful and when they’re a hindrance.

Without something like property testing in mind, assertions can become more nebulous: attempts to anticipate what situations should not happen in downstream users’ software. It hasn’t been uncommon for a library to add an assertion, decide everything looks good, make a release, and only then discover that the situation is possible and non-erroneous, when someone gets upset that you broke their software. The root trouble here is that without something like property testing, evaluation of an assertion’s effectiveness doesn’t really happen until it’s released… and breaks users.

It’s surprisingly easy to not have a good understanding of how our software actually behaves, and that’s what testing is supposed to help us with.