Unintended consequences

Volume 10, Issue 110; 17 Oct 2007; last modified 08 Oct 2010

Everything is connected. But some things less obviously than others. You're testing for consequences, right?

No, no, no, don't tug on that! You never know what it might be attached to.

Buckaroo Banzai

Imagine that you're building something. A bookcase, a stone wall, a model airplane. Doesn't really matter what. Artifacts of any complexity can usually be viewed as being composed of subassemblies: the routed joints of a bookcase; small stones stacked on top of bigger ones; wings, fuselage, and landing gear.

Now imagine that you spend some time focused on a particular subassembly: the routing, a corner, or the wings. In the physical world, when you turn your attention back to the whole, you don't expect your concentrated effort in one place to have consequences elsewhere. You'd be surprised, for instance, if routing the shelves made the boards too thick, or if working on the corner of the wall made some other part of the wall shrink, or if working on the wings made the wheels stop turning.

That's not to say it can't happen. If you used a smaller bit than you planned, or fixed the corner by scavenging rocks from the wall, or dripped glue on the wheels while you were working on the wings, then you might see those consequences later. But the nature of the physical world is such that you can often see the consequences immediately and avoid them or fix them before they become significant.

Not so in the world of bit pushing. Turn your attention to a particular subassembly, say making all the atomic steps work, and when you turn back to the whole, you can find that all sorts of things have changed. Like, maybe, the port bindings for p:choose and p:try don't work anymore.

We build software from subassemblies too, but the connections aren't directly visible. We work from models that don't naturally manifest themselves as first-class objects we can see. Change an interface over here and things over there, out of sight and out of mind, go wrong.

If you write software, you've had this experience. Certainly I've had it before and I'm likely to have it again. But this time, perhaps for the first time, I really understand what I did wrong. Not what I broke in this particular bit of software (though I have a fairly good idea about that too), but where I failed methodologically.

I didn't have enough tests.

Or, rather, I didn't run the tests often enough. I swore I was going to adopt a test-first methodology for this project, but I got lazy. I didn't really build the test harness until fairly recently so I didn't have an automatic system for regression testing. (I have JUnit tests, of course, for particular methods and classes, but they're awfully tedious to write for testing larger interactions.)
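For what it's worth, the kind of automatic regression layer I'm talking about doesn't have to be elaborate. Here's a minimal sketch in plain Java of a harness that runs a whole suite of end-to-end cases after every change; the class name, the case data, and the trivial system under test are all invented for illustration, not taken from my actual pipeline code:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.UnaryOperator;

public class RegressionHarness {
    // Each case maps an input to its expected output. A real harness
    // would load pipeline documents and compare serialized results.
    private final Map<String, String> expected = new LinkedHashMap<>();

    public void addCase(String input, String expectedOutput) {
        expected.put(input, expectedOutput);
    }

    // Runs every case through the system under test and returns the
    // number of failures, printing a line for each mismatch.
    public int run(UnaryOperator<String> systemUnderTest) {
        int failures = 0;
        for (Map.Entry<String, String> e : expected.entrySet()) {
            String actual = systemUnderTest.apply(e.getKey());
            if (!actual.equals(e.getValue())) {
                System.err.println("FAIL: input=" + e.getKey()
                        + " expected=" + e.getValue()
                        + " actual=" + actual);
                failures++;
            }
        }
        return failures;
    }

    public static void main(String[] args) {
        RegressionHarness h = new RegressionHarness();
        h.addCase("hello", "HELLO");
        h.addCase("world", "WORLD");
        // Run the whole suite after every change, so a broken
        // interface surfaces immediately instead of weeks later.
        int failures = h.run(String::toUpperCase);
        System.out.println(failures == 0
                ? "all passed"
                : failures + " failed");
    }
}
```

The point isn't the twenty-odd lines of code; it's running all of them, automatically, every time, instead of only the pieces you remember to exercise by hand.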

If I had, I'd have noticed right away when I broke the port bindings. I'd know what interfaces I'd been fiddling with and I'd probably know exactly where to look for the problem. Punishment for my sins: I have only an intuitive notion of where the bug is and I'll have to go hunting for it.

Lesson learned, I hope.


Sadly, I think this problem is even more insidious with computers / programming. This is why most software and hardware companies have people whom they pay to do nothing but testing, and pay well. With an application of any complexity at all, the parts interconnect in such ways that it is impossible for one person to grasp and remember them all. Of course, we all do incremental testing, but there are always tests that don't get done, situations that don't get tested. I think it's the nature of the beast, and it's what keeps the whole thing interesting.

—Posted by Misty on 17 Oct 2007 @ 06:03 UTC #