One of my favorite tools for thinking is deliberately arguing the opposite position, to myself. (Sometimes this is a worthless exercise—you need to know when to apply it, otherwise it’s just a lazy form of trolling—which is why I usually do this as internal monologue.) But technical decisions, where there are trade-offs involved, are often excellent places to force yourself to argue both sides; doing so can be enlightening. The simplest form is “pro/con” list making: trying to ensure you’ve seriously considered the potential downsides before going ahead with a decision.

But broader forms of this, applying to more than just a single decision, are also sometimes enlightening. One of my favorite examples of this style is this essay, which makes the following points:

0: Don’t write code
1: Copy-paste code
2: Don’t copy-paste code
3: Write more boilerplate
4: Don’t write boilerplate
5: Write a big lump of code
6: Break your code into pieces
7: Keep writing code

What makes this style of advice so great is that every single point is contradicted, directly, by another point. But the style of the essay—trying to explain when and why each bit of advice applies—is enlightening. Advice about how to do things better is mostly useless unless you know where you’re starting from. A lot of advice isn’t really an end but a direction, and if you head in one direction long enough, good advice will start to point you the opposite way.

Far too often, advice is given by people who assume particular contexts, but do so unconsciously. The advice then comes without any examination of what context it applies well to. Frequently, both its advocates and students then begin to believe that bit of advice is absolute and universal. This is especially true if the context includes broadly adopted cultural practices, so the advice appears to apply everywhere, for a time. It’s easy for a consultant to fool themselves when their absolute advice works 90% of the time. Surely those other 10% were just bad at it for some reason.

So here’s the one bit of advice I feel is truly universal: whenever we’re trying to improve, it is never enough to know what we should do; we need to truly understand why we should do it, and when we should not.

We get a lot of fads

I was at least partly there for the object-oriented programming revolution. I didn’t have much experience at the time, but I did get to see the blast of advocacy. I can’t really capture what I thought of it at the time (and I wasn’t particularly insightful back then anyway), but acknowledging that this comes 100% from hindsight, I think it was obvious that:

  1. OOP contained a number of terrible ideas.
  2. OOP was going to win.

The trouble, of course, is that OOP was not just a set of neutral tools but a whole ideology that came with a lot of advice.

Let’s start with the second point. It’s hard to explain to modern programmers just what old-school C procedural programming was like. So let me pick a tiny example: error handling.

If you’ve used the C standard library, you’ve seen one general form of how it can work. You call a function, it might return -1 or NULL or something like that, and then you can check the global (or thread-local) errno variable and map that to a string explaining what went wrong. But errno was really only supposed to be for the C standard library. Every application generally had its own style of error handling used throughout its code.
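
To make this concrete, here’s a minimal sketch of that errno pattern, using fopen and strerror from the standard library (the file name is just a stand-in):

    #include <errno.h>
    #include <stdio.h>
    #include <string.h>

    int main(void) {
        /* The call signals failure through its return value... */
        FILE *f = fopen("/no/such/file", "r");
        if (f == NULL) {
            /* ...and errno carries the detail, which strerror maps to a message. */
            fprintf(stderr, "fopen failed: %s\n", strerror(errno));
            return 1;
        }
        fclose(f);
        return 0;
    }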

Which meant that just about every bit of code in that application (because what code can’t possibly encounter an error?) ended up application-specific. You’d write some string manipulation routines, and if you ran into a problem in the middle you’d be using MYAPP_HANDLE_ERROR or some such. (Because of course this was handled with state.) Only truly major APIs had their own independent error handling (you can see remnants of this in the modern OpenGL libraries, for example) because it’d get too tedious to have too many error-handling mechanisms floating around everywhere. So most application code was non-reusable; it wasn’t isolated from the rest of the application.
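
Here’s a hypothetical sketch of that application-specific style. MYAPP_HANDLE_ERROR is the made-up name from above, and the error code and global variable are equally invented stand-ins for whatever machinery a given application grew for itself:

    #include <stddef.h>
    #include <string.h>

    /* Hypothetical application-wide error state; every app had its own version. */
    static int g_myapp_last_error = 0;

    #define MYAPP_ERR_TOO_LONG 42
    #define MYAPP_HANDLE_ERROR(code) do { g_myapp_last_error = (code); } while (0)

    /* Even a "generic" string routine ends up welded to the app's error state,
       so it can't be lifted out and reused elsewhere. */
    static int myapp_copy_name(char *dst, size_t dst_len, const char *src) {
        if (strlen(src) >= dst_len) {
            MYAPP_HANDLE_ERROR(MYAPP_ERR_TOO_LONG);
            return -1;
        }
        strcpy(dst, src);
        return 0;
    }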

This kind of thing was everywhere: most code written for an application was dripping with coupling to that specific application. The OOP revolution, its new languages and new practices, came with several good mechanisms for isolating code, which meant re-use suddenly became much more practical. (Of course, this message was confused by OOP advocates at the time praising inheritance as the tool that permitted re-use, which was utterly and mind-bogglingly wrong.)

As you might guess from today’s theme, the problem here was that all of this was great directional advice. We got languages with better features that promoted writing much more modular and isolated (and so re-usable) code, and lots of advice that encouraged people to use them. But people also confused this advice with absolute truth.

And spiraling away with these absolute rules in hand, we got things like JavaBeans. Take an ordinary struct, but oh no, fields are verboten! Quick, make them private, add zero-arg constructors, getters/setters, and serialization. Now we’ve recreated a mutable struct, but with more boilerplate. But at least Serializable means part of this is a re-usable abstraction, with general rather than application-specific code, so we’ve got that going for us. I guess.

Reactions beget counter-reactions

I don’t think I need to go on about that too much, because we’re actually a fair bit past the original OOP revolution’s propaganda. Sort of. I still see some naive car extends vehicle garbage out there. But I think people who get victimized by that stuff are now more likely to be surrounded by people who know better and can help set them straight. (I once helped some engineers who had to become programmers deal with their program design by first insisting they remove the MyAppObject class that every other object in their program inherited from, because that’s what they thought OO design meant. This stuff is still out there, and it’s still leading people astray!)

Today, we also have SOLID, which I consider to be a kind of counter-reaction to the original OOP ideology. It doesn’t reject objects or anything, but it does the same thing that the OOP revolution did back in the day: it gives good, but somewhat too directional, advice. The one thing I think SOLID does really well is make people more skeptical of inheritance. At least as far as I can tell, criticism of inheritance seemed fairly obscure among mainstream programmers until SOLID made it better known.

I should probably write a post deconstructing SOLID. Maybe next week.

But the important part of this, for my point in today’s post, is that it’s the same kind of advice. The way it frequently gets framed forgets all context, and so it starts looking like universal rules. And because it’s a reaction against some commonly used OOP practices, it seemed widely applicable. But it’s not universal. And we’re already seeing a counter-reaction brewing. Over-abstracted code, over-mocked for “unit testing” and cobbled together with dependency injection frameworks, is rapidly becoming the new “what on earth are you architecture astronauts inflicting on us?”

Round and round we go

I don’t know why we’re so faddish. There’s an element of “consultants trying to sell a brand” to it, but I don’t know why people go along with that so much. Many people get really enamored of these things and become huge advocates for them, and that’s really what pushes it; the consultants couldn’t do it on their own. The original OOP revolution at least came along with extremely rapid growth in the number of programmers, which probably contributed.

This might just be how we collectively learn over time, and we’re suffering from it mostly because things sometimes get too big, with too much inertia (somehow?), to turn around before they get painful. We’ve certainly seen faster iterations of this learning cycle.

Pain points with traditional SQL databases led to NoSQL and schemaless approaches. Horrible experiences with those are now leading people back to schema-ful approaches, though with some differences adopted along the way. Postgres does have JSONB now, and I’m not sure it would if it weren’t for the success of things like Mongo. Now it’s common to work out which information should be table columns and which can be embedded in the JSON, striking a balance. Schema migration tooling is improving. Meanwhile, some people are starting to find the occasional place where schemaless is actually a good idea, so we might even get a minor course correction back from “schemaless was totally irrational exuberance.”

One of my favorite articles about mutability was Peter Norvig’s observation about unification algorithms. I’m a big advocate of offering immutability in languages, but it’s always good to be mindful of solid evidence that it’s not a universally applicable cure-all. To summarize that article, unification algorithms are naturally implemented in a stateful way, and going stateless actually caused a lot of bugs. Today, Rust offers an interesting compromise: owned data is mutable, and once data is no longer wholly owned, it’s either immutable or you need to actually deal with the sharing problem. I still need to get some experience with that, but it’s an interesting possibility for the best of both worlds on mutability. Again, it reflects a more nuanced understanding of what’s actually helpful.

And as a final example, we’re in the middle of watching testing practices evolve. The adoption of widespread unit testing was a big revolution, and it actually happened fairly recently. But now we’re starting to experience “over-mocking” and “test damage” to designs, and other symptoms of overzealous attempts to test everything by any means necessary. Just what form the counter-reaction takes should be interesting to see.

Hopefully we’ll learn quickly instead of riding a fad deep into the weeds.