Last time I finished with a request: whether anyone’s read anything about how “encapsulation of mutable state restores compositional reasoning.” Someone (thx @ericbb) came through with a good link (which also has a good discussion on reddit) that sort of danced around the topic, and it ended up giving me a good think.

The core proposition I’d like to advocate for is that compositional reasoning is gold. It’s not the be-all and end-all of reasoning, but whenever we can get it, we’re better off for it. And whenever we can recover it, after being forced to do without, we can significantly improve our designs.

State is the canonical antagonist to compositional reasoning: you might think you know what state a system is in now, but a moment later all that could be out the window. So… state bad? Obviously, we can’t do without it entirely (well… not practically, anyway), but I’d also push back against this kind of blanket reaction. State shouldn’t be banned; it should be controlled, and the best way to do that is to contain it and recover compositional reasoning.

But doing that is subtle.

What’s compositional reasoning?

Worth taking a moment to consider. Naively, I’d say compositional reasoning allows us to understand a whole in terms of its parts. But I think this is too simplistic, and I’m not ready to attempt a definition that captures all the nuance.

So let me start with this: inductive reasoning is compositional reasoning, but not necessarily vice-versa. When we reason (“prove”) by induction, we’re showing a property is true of a larger object by making use of its truth on its smaller components. But the properties we use about smaller parts don’t have to be arrived at by induction, necessarily.

That’s part of what’s cool about property testing. We aren’t reasoning inductively there—we’re just walling off a black box, and trying to test some parts of its behavior space to see if they work as we expect.
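To make that concrete, here’s a hand-rolled sketch of the idea in Rust (the function names and the tiny input generator are mine; a real project would reach for a property-testing library instead): we treat a function as a black box and probe one property of its behavior space, without reasoning about its internals at all.

```rust
// The black box under test: we only probe its behavior, never peek inside.
fn mystery_reverse(v: &[i32]) -> Vec<i32> {
    let mut out = v.to_vec();
    out.reverse();
    out
}

// A tiny deterministic "generator": derive varied inputs from a seed.
fn generate_input(seed: u64) -> Vec<i32> {
    (0..(seed % 8))
        .map(|i| (seed.wrapping_mul(i + 1) % 100) as i32)
        .collect()
}

// The property: reversing twice should give back the original input.
fn holds_for(seed: u64) -> bool {
    let input = generate_input(seed);
    mystery_reverse(&mystery_reverse(&input)) == input
}

fn main() {
    // Probe a slice of the behavior space, not all of it.
    assert!((0..1000u64).all(holds_for));
    println!("property held on all sampled inputs");
}
```

No induction anywhere: we never relate the behavior on a list to the behavior on its tail, we just sample the box from the outside.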

MIT (controversially, funnily enough) switched its first CS course from Scheme to Python in part because they wanted to emphasize a more experimentally-driven approach to development. In other words, they don’t want to pretend programming happens from “first principles” anymore. We don’t exist in some Platonic ideal. Real software is messy and vague, and often requires experiment to understand. Distributed systems are the canonical example: generally anything that can happen will happen eventually. That’s hard to reason about from scratch, and you’re generally not doing it strictly by induction.

But this doesn’t mean we should give up on compositional reasoning. Rather the opposite: whenever we can recover it, instead of fully confronting the mess of complex systems, that’s a phenomenal design win.

Encapsulating state attempt 1: getters/setters

The word “encapsulation” brings to mind object-orientation, so what of the most naive and obvious approach: getters and setters? Let’s make our mutable state private, and see what happens. Advocates point to a few advantages:

  • Hiding representation allows it to change in the future without breaking compatibility.
  • Restricting mutations to a limited set of behaviors (the setters) allows those behaviors to change. For example, we could add invariant checking to a setter in the future.

But I disagree with these supposed advantages:

  1. Except on system boundaries where we have different priorities, when we change representation, we often prefer it to be a breaking change! That way, we can just fix all the code with the old assumptions. And on system boundaries, changes to representation often leak, leading to a desire for a different type anyway (e.g. handling two different versions of a data structure).

  2. Adding invariant checking to a setter is often a breaking change! It doesn’t really matter if your code still compiles if your code doesn’t still work.

  3. Setters aren’t good descriptions of behavior, and when you start describing behaviors, you often find you want more than one. For example, you don’t gain much from setAmount(int); what you actually want is moveAmount(other) and resetAmount(). But this is a minor nit about “setters” specifically; we could imagine doing this anyway.

  4. We haven’t actually restored compositional reasoning. Just because we made the mutable state into private fields doesn’t mean we’ve accomplished anything—who knows what code could be calling those setters. We’re in exactly the same situation as when these mutable fields were just public.

It’s this last one that’s killer. “Encapsulating state” in the trivial OO sense has actually done nothing at all to help us reason about our code.

Encapsulating state attempt 2: invariant behaviors

Okay, so let’s just do those things from #2 and #3 above. We’ll have actual behaviors (not merely “setters”) and those behaviors preserve important invariants. Does that actually change anything?

Yes! But only a little.

We can regain partial compositional reasoning. If we don’t need to know anything about the state except what the invariants tell us, then we’ve recovered compositional reasoning. The only trouble is… we often need to know more.
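As a sketch of what this partial recovery looks like (the Account type and its behaviors are hypothetical names of mine), consider an invariant like “the balance never goes negative.” The behaviors guarantee it no matter who calls them, but the invariant alone still can’t tell you what the balance actually is:

```rust
// Hypothetical example: the only invariant is "balance is never negative."
struct Account {
    balance: i64, // private: mutated only through the behaviors below
}

impl Account {
    fn new() -> Self {
        Account { balance: 0 }
    }

    fn deposit(&mut self, amount: u64) {
        self.balance += amount as i64;
    }

    // Preserves the invariant by refusing to overdraw.
    fn withdraw(&mut self, amount: u64) -> bool {
        if self.balance >= amount as i64 {
            self.balance -= amount as i64;
            true
        } else {
            false
        }
    }

    fn balance(&self) -> i64 {
        self.balance
    }
}

fn main() {
    let mut acct = Account::new();
    acct.deposit(50);
    let ok = acct.withdraw(80); // refused: it would break the invariant
    assert!(!ok);
    // The invariant holds no matter who has called what...
    assert!(acct.balance() >= 0);
    // ...but the actual balance still depends on every call site
    // that has ever touched this account.
}
```

That’s the partial recovery: we can compose “balance ≥ 0” freely, but any reasoning about the actual value is still global.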

Red-black trees are a good example. We’ve recovered enough information (through invariants) to be able to reason about our algorithm’s performance! But that’s all the RB tree invariant really tells us. We don’t know anything else about the content of the mutable tree, just from the RB tree’s invariants.

Incidentally, this brings to mind another example of non-compositional reasoning: Haskell performance. Laziness throws a wrench in everything. It’s generally difficult to know how long any given function will take, because it requires knowing how much time it will take to evaluate its arguments, which depends on what parts of those arguments have already been evaluated by other parts of the program.

Encapsulating state attempt 3: use data

Let’s try a different strategy. Using data to mutate state is a fantastic approach that gets us even further towards regaining compositional reasoning. The key benefit is that more of our program becomes pure, and pure functions are much easier to reason about. The part of the code that actually directly manipulates state gets confined to the “interpreter” that consumes the data.

The “encapsulation” here is totally different from the kind we see with OO. We’re not talking about hiding representation or anything like that. We’re just reducing the spread of code that does mutation. More and more code that “affects the state” instead gets written as pure functions, while the code that actually does those changes stays in one module only.

This module might need expansion over time to accommodate richer ways of manipulating that state, and the ease of doing this might be related to its status as a system boundary. It’s not a panacea, but it’s still much more confined.
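Here’s a minimal sketch of the shape this takes (all names are mine): a pure planning function produces commands as plain data, and a single interpreter is the only code that touches the state.

```rust
// Commands as plain data: pure code describes mutations without performing them.
#[derive(Debug, PartialEq)]
enum Command {
    Insert(String),
    Remove(String),
}

// Pure function: decides what should happen, touches no state.
fn plan_sync(current: &[String], desired: &[String]) -> Vec<Command> {
    let mut cmds = Vec::new();
    for d in desired {
        if !current.contains(d) {
            cmds.push(Command::Insert(d.clone()));
        }
    }
    for c in current {
        if !desired.contains(c) {
            cmds.push(Command::Remove(c.clone()));
        }
    }
    cmds
}

// The interpreter: the only code in the module that actually mutates state.
fn interpret(state: &mut Vec<String>, cmds: &[Command]) {
    for cmd in cmds {
        match cmd {
            Command::Insert(s) => state.push(s.clone()),
            Command::Remove(s) => state.retain(|x| x != s),
        }
    }
}

fn main() {
    let mut state = vec!["a".to_string(), "b".to_string()];
    let desired = vec!["b".to_string(), "c".to_string()];
    // All the interesting logic lives in the pure part and can be
    // tested without any state at all.
    let cmds = plan_sync(&state, &desired);
    interpret(&mut state, &cmds);
    assert_eq!(state, vec!["b".to_string(), "c".to_string()]);
}
```

Note that plan_sync can be tested exhaustively as a pure function; only the small interpret function needs stateful testing.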

Encapsulating state attempt 4: memoization

Another (limited) approach to trying to wall off state is to make the state not really matter. This can take the form of a cache or memo table. From the outside, an object with internal state of this kind doesn’t need to appear to have any state at all. Except… usually this state exists for performance reasons. And performance matters. So that’ll pierce the veil of abstraction.

“Cache invalidation” is sometimes joked about as one of the hardest problems in computer science. I think it gets that reputation in part because it looks like state that can be totally ignored… until it can’t be, and then you’re left trying to reconstruct everywhere these assumptions got littered about carelessly.

Successfully encapsulating state: lexically scoped state

How do we actually encapsulate state, and actually regain compositional reasoning, fully? The technique that actually works is confining state locally. Confining state to within an object (“encapsulation” as it’s usually meant) doesn’t get us there because the state is owned by the object and thus escapes, but fully-local to a function does successfully isolate that state.

This can take three general forms:

  1. You can just use local state within a function, and not expose it outside. This is a technique (mostly) usable in any language.

  2. You can make use of fancier type system features to help you confine that state. This is what Haskell does with its ST (or State) monads. We have a function that does something stateful internally, and we can use runST to just run that imperative thing and get the return value. This effectively transforms a stateful function into a pure function.

  3. You can make use of fancier static analysis tools to control state visibility. This is what Rust’s borrow checker does: you can mutate something freely if you exclusively own it, but once you start sharing it, it has to stay immutable.
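A tiny illustration of that rule (assuming nothing beyond the standard library): the same vector can be mutated while access is exclusive, but not while shared borrows are alive.

```rust
// Mutate freely while you have exclusive access; once shared, the value
// must stay immutable until the sharing ends.
fn demo() -> usize {
    let mut v = vec![1, 2, 3];

    v.push(4); // exclusive access: mutation allowed

    let shared_a = &v;
    let shared_b = &v; // any number of shared borrows may coexist...
    // ...but while they are alive, mutation is a compile-time error:
    // v.push(5); // error[E0502]: cannot borrow `v` as mutable
    let total = shared_a.len() + shared_b.len();
    assert_eq!(total, 8);

    // The shared borrows end here, so exclusive access (and mutation) returns.
    v.push(5);
    v.len()
}

fn main() {
    assert_eq!(demo(), 5);
}
```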

The canonical example here is sorting. There’s a useful middle ground between a sort that mutates the array it’s given, and a totally purely functional sort. Purely functional through-and-through leads to a lot of added inefficiency. But… we can get something that effectively looks purely functional from the outside, but internally mutates its copy of the array before returning it. Haskell and Rust can both do this, quite easily.

With Rust, we can take a mutable sort function like this one:

fn sort<T: Ord>(v: &mut [T])

And we can wrap it in a way that makes it look purely functional:

fn sorted<T: Clone + Ord>(v: &[T]) -> Vec<T> {
  // Make a mutable copy of the array we were passed a non-mutable reference to.
  let mut new_vec = v.to_vec();
  // Mutate it into sorted order.
  new_vec.sort();
  // Return our copy, which can now be viewed as immutable.
  new_vec
}

Quick aside: this function doesn’t necessarily return an immutable vector. It’s “moving” its return value: once this function returns, its caller now owns the vector that sorted constructed. And so its caller still has the right to consider that vector mutable.

The resulting function makes no apparent or observable use of state, from a caller’s perspective. It’s totally… encapsulated.

Of course, internally we have all the same problems state brings (i.e. when using the first sort and not the second sorted); the difference here is that we’ve actually contained it. Once we fully wrap up all that state internally to a function call, it becomes invisible externally. We have a (hopefully) small space of non-compositional reasoning, but:

  1. It’s contained and closed-world. The code that can affect the state hidden inside sorted can be determined from the implementation of sorted.

  2. We can be sure we got it right by rigorously testing sorted. That is, there’s no funky state this code can get into except states reachable by applying sorted to some argument. Any bug in sorted is observable by calling sorted on an input that triggers the bug, without needing to somehow set up an interesting system state beforehand.

  3. We can use sorted without loss of compositional reasoning in the slightest.
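Point 2 can be demonstrated directly. Here’s a sketch of that kind of rigorous testing (the helper name is mine), checking the two properties that characterize a sort, with no prior system state to set up:

```rust
// The wrapper from above: looks purely functional from the outside.
fn sorted<T: Clone + Ord>(v: &[T]) -> Vec<T> {
    let mut new_vec = v.to_vec();
    new_vec.sort();
    new_vec
}

// A sort is correct iff its output is ordered and is a permutation
// of its input.
fn is_correct_sort(input: &[i32]) -> bool {
    let out = sorted(input);
    let ordered = out.windows(2).all(|w| w[0] <= w[1]);
    let mut a = input.to_vec();
    let mut b = out.clone();
    a.sort();
    b.sort();
    ordered && a == b
}

fn main() {
    // Every behavior of `sorted` is reachable by just calling it
    // on some input: no setup, no hidden system state.
    assert!(is_correct_sort(&[3, 1, 2]));
    assert!(is_correct_sort(&[]));
    assert!(is_correct_sort(&[5, 5, 1, -4]));
}
```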

I think that’s neato.

That’s not the very first thing that came to mind when I used the words “encapsulation of mutable state.” But it probably should have been.