Should I Write This Contract Test?

Question

I'm trying to apply collaboration and contract tests to my existing Python code. As an example, let's say I have a function A which calls function B. In turn B calls function C.

I write a collaboration test for when a specific exception is caught by A. However, this exception is generated by C, and B just lets it bubble up. Do I need to write a contract test for B for that exception?

Discussion

Do you need to? No. Should you? Probably. Would I? I'd consider it and I'd probably try, at least for a few minutes.

First, I'd like to evaluate this as a testing question (how much confidence we can justify in the code), and then as a practice question (whether we get enough from the activity to justify the investment).
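To keep the discussion concrete, here's a minimal sketch under invented names: read_rows stands in for C, load_records for B, fetch_report for A, and StorageError for the exception E. None of these names come from the original question; they only give us something specific to point at.

from unittest.mock import patch


class StorageError(Exception):
    """Stands in for E: raised by C and allowed to bubble up through B."""


def read_rows(path):
    # C: raises StorageError when it can't read the source file
    try:
        with open(path) as source:
            return source.readlines()
    except OSError as error:
        raise StorageError(f"cannot read {path}") from error


def load_records(path):
    # B: calls C and lets StorageError bubble up untouched
    return [row.strip() for row in read_rows(path)]


def fetch_report(path):
    # A: calls B and recovers from StorageError with an empty report
    try:
        return load_records(path)
    except StorageError:
        return []


def test_fetch_report_recovers_when_loading_fails():
    # The collaboration test from the question: stub B to raise E,
    # then check that A recovers
    with patch(f"{__name__}.load_records", side_effect=StorageError("boom")):
        assert fetch_report("ignored.csv") == []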

Testing From The Client's Point Of View

Consider the exception, which I'll call E. B throws E (it doesn't matter that C actually throws it—keep reading if you don't yet feel confident about this claim) to signal either a recoverable or an unrecoverable error.

If E represents a recoverable error, that means that B expects clients to try to recover, and so we need to document what E means so that clients like A know what they need to do. Next, either E describes itself "well enough" that clients can understand how to recover or it doesn't. If E describes itself well enough, then clients need no additional information in order to recover; if it doesn't, then clients need to know something about the conditions under which B throws E in order to recover from it. In that case, I prefer to write a contract test for B that shows the specific conditions, or at least one common example of them, under which B throws E.
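As a sketch of what that contract test might look like, continuing the invented names from the earlier sketch and assuming a contract along the lines of "when the source is unreadable, load_records raises StorageError": the condition appears in terms a client of B can understand, without mentioning how B happens to be implemented.

import pytest

# Continues the earlier sketch: StorageError and load_records defined there.

def test_load_records_raises_storage_error_when_source_is_unreadable(tmp_path):
    # Contract test for B: one common example of the conditions under
    # which B throws E, stated from the client's point of view
    missing_source = tmp_path / "does-not-exist.csv"
    with pytest.raises(StorageError):
        load_records(str(missing_source))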

If E represents an unrecoverable error, then B doesn't expect clients to recover, and the application probably just lets its top-level exception handler catch E and respond with "Oops!" to avoid crashing the entire process. In this case, the code probably doesn't care about the conditions that led B to throw E, except to log that information for a human to read and decide how to react. Neither A nor its clients will look at E and treat it as anything more than a generic exception. Here, it might suffice to document "B sometimes throws unrecoverable exception E", and writing a contract test for this doesn't add any value. I might still choose to write a contract test in order to document an example of when B would throw E, but I wouldn't let that work stop us from shipping a feature. I suppose it depends on the audience of that documentation.
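If documentation alone suffices, that might amount to nothing more than a line in B's docstring. A sketch, still under the invented names:

def load_records(path):
    """Load records from the file at path.

    Raises StorageError (unrecoverable): expect the top-level handler to
    log it and show a generic "Oops!" message instead of crashing.
    """
    return [row.strip() for row in read_rows(path)]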

"Should" B Even Throw E?!

Of course, in all this, I haven't questioned whether B ought to throw E. If E relates only to implementation details inside B, then perhaps clients of B should never see E, but rather some other, more generic exception. Languages like Java force us to confront this question earlier, because they differentiate checked exceptions from unchecked ones: some exceptions end up in the type signature of the function in question, and the code won't compile until we decide what to do about the checked exception. Working in Python means that we have to rely on our discipline to remember to consider this question.

When I write the collaboration test for A that simulates B failing, I will stub B to throw E. In the process, I will consider whether knowing details about E helps clarify the test or feels out of place. Here I rely on the "improve names" part of the Simple Design Dynamo. Either E and A seem related (judging by their names or their levels of abstraction) or they don't. If they seem unrelated, then it probably feels strange for A to know even the class name E. Instead, A ought to treat E as the generic Exception class, so I would at least clearly document that B might throw Exception rather than specifically E. I don't want implementation details inside B to leak out into clients like A. Use the Weakest Type Possible.
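Continuing the sketch: if A shouldn't know about E specifically, then A catches something more generic, and the collaboration test stubs B with a plain Exception rather than with StorageError.

# Continues the earlier sketch: load_records and patch are defined/imported there.

def fetch_report(path):
    # A knows only that B might fail, not which exception class B raises
    try:
        return load_records(path)
    except Exception:
        return []


def test_fetch_report_recovers_from_any_loading_failure():
    # Stub B with the weakest type possible: a plain Exception
    with patch(f"{__name__}.load_records", side_effect=Exception("boom")):
        assert fetch_report("ignored.csv") == []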

In all this so far, I've treated B as an abstraction, meaning just a function signature. I've only cared about the contract of B: it sometimes throws E.

Testing From The Implementation's Point Of View

Now consider the implementation of B, which uses C; that's what makes it possible for B to throw E. We can apply the same reasoning again. In this case, C might throw E and B consciously decides not to handle E. If B simply rethrows E and exceptions like it, then, absent other context, I would vote for not writing that test for B. I would claim that it suffices to document something like "B doesn't attempt to handle exceptions thrown by its collaborators". I might write a comment where the test would normally appear, saying "B propagates all exceptions thrown by its collaborators", although since this statement applies to all functions all the time, I would avoid writing this comment as long as I felt no clear need for it. If, later, we need B to react differently to different exceptions thrown by its collaborators, then we would add the necessary tests to clarify the situation. In rare situations, I would write that test just for the sake of clarity, or because the project relied on the results of some bureaucratic test coverage tool to run the scheduled build.

If I choose to write tests for implementation B expecting exception E and if I can articulate the conditions under which B throws E in terms only of abstractions related to B, then I consider extracting a contract test. It might happen that, in trying to articulate the conditions under which (abstract) B (really C) throws E, I have to refer to implementation details. In that case, the contract test for B would collapse to "simulate 'something inside B throws E', then expect B to throw E". Meh. That sounds too generic to matter. I could easily talk myself out of writing that test.
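For the record, that collapsed test might look like the following sketch (same invented names as before): stub C, then expect B to let E escape. It documents only that B doesn't interfere, which is exactly why I could talk myself out of writing it.

# Continues the earlier sketches: read_rows, load_records, StorageError,
# patch and pytest are defined/imported there.

def test_load_records_propagates_storage_error_from_its_collaborator():
    # "Simulate 'something inside B throws E', then expect B to throw E"
    with patch(f"{__name__}.read_rows", side_effect=StorageError("boom")):
        with pytest.raises(StorageError):
            load_records("ignored.csv")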

Thinking only about the testing question, I would only write those tests that help us feel more confident that the code behaves as expected. Rarely do I find myself wanting to write tests for behavior that requires no code, but I didn't always feel that way. This brings me to thinking about this as a question of practice.

"Do I Need To...?"

Whenever someone asks me "do I need to do X?" I have the impulse to answer "Mu" (unask the question). I try to make this friendly, even though it can come across as rude. I don't like this question in general, because I don't know what you need. Of course, we often say one thing when we mean another, and one probably asks this question as shorthand for something else. I genuinely encourage you to articulate that something else. Let me propose a few possible interpretations.

"Could I get away with just documenting that B might throw exception E?" Probably. It depends what you think clients might do with that exception. This refers back to the earlier arguments about testing and confidence. A contract test would largely help if B throwing E had some significance outside the implementation details inside B.

"Should I try to figure out how to write a contract test for this?" Almost always yes. If you could figure out how to articulate the conditions under which B throws E without describing implementation details about B, then that might provide value. If nothing else, it would help you practise thinking in terms of abstractions instead of implementation details, a skill that most programmers mostly lack most of the time. As your thinking in abstractions improved, your designs would improve, mostly by becoming less rigid and more ready to adapt to new situations. If you don't have much experience thinking this way, then you might need to spend an hour thinking about how to do this before giving up. You only need to think of something more helpful than "B throws E if some component inside the implementation throws E to signal some unforeseen unrecoverable failure". If you could find something better, then that might help you, so you'd probably benefit from looking. It takes experience to quickly judge that nothing better exists and experience comes from "doing it wrong" for a while. You have to try.

"Is it worth the effort to write a contract test for this?" I can't answer that, because I don't know the tradeoffs in your situation. I can only suggest good reasons to try to write contract tests so that you can make a more-informed decision. I've tried to do that in this article.

"Is it ever worth the effort to write a contract test for this?" Yes. When I try to write a contract test for this kind of situation, I notice more easily when implementation details have started leaking into my contract, and I want to stop that. I also get feedback from the contract tests regarding whether interactions have become painfully complicated. Writing tests gives me that feedback in general, and often that feedback provides enough value to justify the effort of writing the contract test.

"What's in it for me if I write this contract test?" Documentation. Learning. Practice. Precision. If you don't care about those things right now, then don't bother. You're not going to hurt my feelings.

I would recommend to most people most of the time that they spend some time trying to write this contract test, until they trust their judgment enough to know when they don't need to write it.

References

J. B. Rainsberger, "Putting An Age-Old Battle To Rest". A description of the Simple Design Dynamo and why it works.

J. B. Rainsberger, "Modularity. Details. Pick One." Concrete types are details. Using the Weakest Type Possible discourages obsession with detail and enables great modularity. Embrace abstraction.
