A while back I wrote a post where I mentioned abstractions having an “inside” and an “outside”. Let me just rehash that idea for a second, because I want to think it through a bit more today.
If we write a function (in Java, let’s say) like `void f(int x)`, we do two different things.

Inside the function, we’ve created a guarantee that `x` is an `int`.

Outside of the function, for its users, we’ve created a restriction that you must pass an `int`.
The direction of this push/pull (of “guarantee” versus “restriction”) depends on variance. If it’s covariant, one side gets a guarantee, and the other a restriction. If it’s contravariant, it goes the other way around. If it’s invariant, everybody gets both.
Mutable things are generally invariant, because reading is covariant and writing is contravariant.
So if we have a mutable variable `int x`, then we’re getting a guarantee when we read, and a restriction when we write.
So designs with mutable elements are often invariant.
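Java’s generics make this push/pull concrete. Here’s a minimal sketch (the method names `sum` and `fill` are just illustrative): a covariant position (`? extends`) gives readers a guarantee and restricts writers, while a contravariant position (`? super`) gives writers a guarantee and restricts readers.

```java
import java.util.ArrayList;
import java.util.List;

public class Variance {
    // Covariant position: we may READ any element as a Number (a guarantee
    // for this code), but we may not add elements (a restriction: writes
    // into a `? extends` list don't compile).
    static double sum(List<? extends Number> xs) {
        double total = 0;
        for (Number n : xs) total += n.doubleValue();
        // xs.add(1); // compile error: writing is restricted here
        return total;
    }

    // Contravariant position: we may WRITE Integers (a guarantee that the
    // list accepts them), but reads come back only as Object (a restriction).
    static void fill(List<? super Integer> xs) {
        xs.add(1);
        xs.add(2);
        // Integer i = xs.get(0); // compile error: reading is restricted here
    }

    public static void main(String[] args) {
        List<Integer> ints = new ArrayList<>();
        fill(ints);
        System.out.println(sum(ints)); // prints 3.0
    }
}
```

A plain mutable `List<Integer>` supports both reading and writing, which is exactly why it has to be invariant: each side needs both the guarantee and the restriction.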
Today I want to consider a few other places where this push/pull comes up.
The customer can sign off on it!
Let me pick on a product, maybe slightly unfairly:
Cucumber Pro empowers Product Owners and Business Analysts to harness the power of examples, making everyone part of the conversation.
If you’re a contract software developer, you might like cucumber. The customer signs off on a “totally human-readable” specification, so when they inevitably claim your software doesn’t do the right thing, you can say “no look, here is where you said to do that!” Then it’s their fault, and they have to pay you more to fix it.
Maybe I’m cynical.
But when I look at cucumber, I see two things that make it unattractive to me:
There’s a long tradition of believing that you can bring non-programmers in to do programming if only X weren’t standing in the way, whatever X is. This has never worked. The one place it has ever had the appearance of working is spreadsheets. But spreadsheets really are programming too; we just don’t give them the credit they deserve. Spreadsheets are a friendly way to dip your toe into (yes, real) programming. But a lot of these other tools seem to think syntax is the problem, and it’s not. You have to think abstractly and be able to write, test, and debug code. Programming is programming, regardless of syntax. But this isn’t my main point.
Specifications of this form aren’t as useful as they seem. Sure, we get tests out of it. But there’s a general problem with tests as a form of specification: they don’t go both ways. They’re inside-only. A test suite can tell you if a particular module is working well, or at least a test suite is supposed to be able to do that. But it gives you nothing for other modules to make use of.
Now, I don’t want to knock test suites here, they generally do what they’re supposed to do. But as a specification tool, they only do half the job. You get a restriction on the module under test, but you don’t end up with any corresponding meaningful guarantee on the outside.
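To make the one-sidedness concrete, here’s a tiny sketch (the `parse` function and its test are hypothetical): the assertion restricts the implementation, but hands nothing usable to the module’s callers.

```java
public class OneSidedSpec {
    // Module under test: a hypothetical parse function.
    static int parse(String s) {
        return Integer.parseInt(s.trim());
    }

    public static void main(String[] args) {
        // The test RESTRICTS the implementation: parse must map " 42 " to 42,
        // or the suite fails.
        if (parse(" 42 ") != 42) throw new AssertionError("parse broken");

        // But a caller in another module gets no corresponding GUARANTEE out
        // of this: the test says nothing about inputs it never mentions, and
        // the caller can't program against the suite the way it can program
        // against the signature `int parse(String)`.
        System.out.println("suite passed");
    }
}
```

The type signature flows outward to callers; the test suite only flows inward to the implementor.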
Just imagine trying to use cucumber as a specification. You send one team off with a cucumber spec to implement their module. Then you assign another team to build a product that uses that first module. Good luck to that second team; maybe they’ll find a way to delay until the first team is done. (Ping-pong tournament?)
But that’s the trouble with something like cucumber: human readable or not, the customer doesn’t have any real ability to actually understand whether the spec is what they actually want. So you get cover as a contractor, and… maybe it’s useful in helping communicate changes to an existing product? (I did say I might be criticizing it unfairly…) But as a customer having decided on a cucumber spec, you may not really know what you’re getting.
UML: a post-mortem
The one good thing that could maybe be said about UML is that it didn’t have this problem. In principle, a UML spec would tell the implementor of a class what to implement, and the user of a class what to expect. In theory, maybe developers could go their separate ways and come back with something that worked.
In practice, this was a disastrous failure.
This approach to using UML ignored the most fundamental fact about design: design is always iterative. Nobody ever, ever, comes up with a good design in advance and then just goes and implements it. (The closest we can get to this is truly formal specifications, but that’s just programming again! You iterate on the spec design!)
UML wasn’t detailed enough to learn from and iterate on a design in advance. And so you generally ended up with bad designs that often couldn’t actually work together as described, and then a light garbage fire trying to integrate it all together. The technical debt at that point is often fatal.
But a second failure of UML is that it’s inherently object-oriented. OOP forces certain decisions about variance on you. Many things are going to end up invariant, because (especially traditional) object-oriented design encourages using a more state-oriented design. Many things are going to have co/contravariance chosen by accident, simply because your types were objects or interfaces. So “restrictions” end up going every which way.
A moment’s thought about critical parts of the design, and you might realize “hey, we actually want the direction of guarantee/restriction to go the OTHER way!” But to do so, you’d want to switch from using an object to using a data type to represent some critical type. And that’s not in consideration.
The combination of these two problems isn’t pretty. You start with a design set from the start that’s certainly going to need changes. But you’ve also got a design that works in such a way that the “restrictions” and “guarantees” aren’t sensibly aligned. So those changes end up being more difficult to make than they might otherwise need to be.
This duality with guarantee/restriction shows up in mocks too. We’ve already talked about how tests are a bit one-sided.
But mocks could be two sided! A mock says “this interface behaves this way” to a test, allowing the test to skip actually calling out beyond that interface. But… we could also actually test whether the interface does, in fact, behave that way. The problem is that it’s hard to automatically “extract” a test case from a mock that goes the other way.
Say we want to test some error case involving a database query. We generally take the business logic, mock out the DB call, and instead say “suppose it returns 0 to indicate it couldn’t find the record.” Now we’re able to test that logic without needing a database.
But that mock could also be a dual test. The question is: if the DB query can’t find the record, does it return 0, as our mock does? We could be in the situation where we have thoroughly tested the business logic, think everything works fine, but we discover in production that the DB query returns NULL in that situation, not 0!
So mocks could be a bi-directional contract, but instead they’re this awkward single-directional test instead.
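Here’s a sketch of that database scenario (the names `RecordStore`, `countFor`, and `Billing` are all made up for illustration). The mock encodes the assumption “missing record ⇒ 0”, and the business logic is tested against that assumption, but nothing ever tests the assumption itself against the real store:

```java
interface RecordStore {
    // The business logic ASSUMES: returns the record's count,
    // or 0 when the record is missing.
    Integer countFor(String id);
}

class Billing {
    private final RecordStore store;
    Billing(RecordStore store) { this.store = store; }

    String describe(String id) {
        Integer n = store.countFor(id);
        return n == 0 ? "no record" : n + " items";
    }
}

public class MockDemo {
    public static void main(String[] args) {
        // Forward direction: the mock lets us test Billing in isolation.
        RecordStore mock = id -> 0;
        Billing billing = new Billing(mock);
        System.out.println(billing.describe("missing")); // prints "no record"

        // Missing dual direction: nothing checks that the REAL RecordStore
        // returns 0 for a missing record. If it actually returns null,
        // `n == 0` above throws a NullPointerException in production,
        // and no amount of mocked testing would have caught it.
    }
}
```

The mock is a claim about `RecordStore`’s behavior, but that claim is only ever consumed by the test, never verified against the real implementation.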
Someday I suspect we’ll replace mocks with some kind of “automock” tool. Maybe we could record traces of interactions between components from a full integration test. Then we could replay cached I/O traces when testing changes to one unit, allowing quick testing in isolation.
Biting off chunks
It’s tricky to section off pieces of a system that can actually be built by separate teams when there’s a dependency between them. The best approach we have is to aggressively re-use well-established prior work. Databases and caches are things we shouldn’t be reimplementing. Frameworks give us structure that lets us re-use some more off-the-shelf components.
For a lot of software, I think we’ve mostly resolved this issue with open-source software. The more that already exists and can just be used, the more you can just build on top of it, instead of trying to build things in parallel and then fit them together.
Too often, Conway’s law ends up being true:
“organizations which design systems … are constrained to produce designs which are copies of the communication structures of these organizations.”
Open and free software is an interesting end-run around this problem. You pick software where the organizations sprang up around the software’s design, instead of the other way around.
- I didn’t really intend this when I started off writing this post, but the way tests behave only as a one-sided specification reminds me of my post about how types and tests are different “materials.”
- And now that I think about it, another advantage of property testing is that it gives users more usable guarantees about the module under test.