Pragmatic unit testing

By Vladimir Khorikov

The topic described in this article is part of my Unit Testing Pluralsight course.

This post is about pragmatic unit testing: how to get the most out of your unit test suite.

Pragmatic unit testing: black-box vs white-box

Pragmatic unit testing is about investing only in the tests that yield the biggest return on your effort. In the previous posts, we discussed what traits a valuable test possesses (a high chance of catching a regression, a low chance of producing a false positive, fast feedback) and how various styles of unit testing (functional, state verification, collaboration verification) differ in terms of their value proposition.

To get the most out of your unit tests, you need to treat the system you are testing in as black-box a manner as possible. That helps you avoid coupling tests to the SUT’s implementation details and urges you to find ways to verify the observable state of the system instead. Focusing on the behavior the clients of your code care about generally results in fewer false positives, making your test suite more valuable overall.

In practice, viewing the SUT as a black box means that when it comes to unit testing the domain model, you should forgo verifying collaborations between your domain objects altogether and switch to the first two styles of unit testing instead. (We will talk about unit testing the other parts of your application in the next post.)
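
To make the contrast concrete, here is a minimal sketch of a black-box, state-based test. The Order and Product classes below are hypothetical stand-ins rather than code from this article, and the test uses xUnit. The test verifies only what a client of the order can observe:

using System.Collections.Generic;
using System.Linq;
using Xunit;

public class Product
{
    public string Name { get; }
    public decimal Price { get; }

    public Product(string name, decimal price)
    {
        Name = name;
        Price = price;
    }
}

public class Order
{
    private readonly List<(Product Product, int Quantity)> _lines =
        new List<(Product, int)>();

    public decimal Total => _lines.Sum(line => line.Product.Price * line.Quantity);

    public void AddLine(Product product, int quantity)
    {
        _lines.Add((product, quantity));
    }
}

public class OrderTests
{
    [Fact]
    public void Adding_a_line_increases_the_total()
    {
        var order = new Order();

        order.AddLine(new Product("Shampoo", price: 5m), quantity: 2);

        // Verifies observable state only; the internal _lines collection
        // can be refactored freely without breaking this test.
        Assert.Equal(10m, order.Total);
    }
}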

You might have heard a guideline saying that the black-box approach is good for end-to-end/integration tests whereas the white-box approach is suitable for unit testing. This is generally not the case. Try to avoid white-box testing completely as it encourages coupling unit tests to the SUT’s implementation details. Adhere to the black-box approach as much as possible on each level.

Note that black-box vs white-box is not a binary choice, and you will need to know at least something about the SUT in order to unit test it. However, there’s a big difference between knowing the SUT’s public API and knowing its implementation details. Never unit test against the latter.

Architectural changes: dependencies

The shift from verifying collaborations inside your domain model to the first two (functional and state) styles of unit testing is not as simple as it might seem and often requires architectural changes.

First of all, you need to pay attention to how you work with dependencies in your code.

Dependencies can be divided into two categories: stable and volatile. Volatile dependencies are dependencies that work with the outside world, for example, with the database or an external HTTP service. Stable dependencies, on the other hand, are self-contained and don’t interact with any resources outside the process in which the SUT is executed.

Make sure you separate the code which contains business logic from the code that has volatile dependencies. I wrote about it here but it’s worth repeating: your code should either depend on the outside world, or represent business logic, but never both.
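
Here is a sketch of what this separation can look like (all type names are hypothetical). The domain class is deterministic and self-contained, while the class that talks to the database contains no business rules:

public interface IOrderDatabase
{
    (decimal Total, int PastOrderCount) LoadCustomerStats(int customerId);
}

public class PriceCalculator
{
    // Business logic only: deterministic, no out-of-process calls,
    // testable without any test doubles.
    public decimal CalculateDiscount(decimal total, int pastOrderCount)
    {
        return pastOrderCount > 10 ? total * 0.1m : 0m;
    }
}

public class CheckoutService
{
    private readonly IOrderDatabase _database; // volatile dependency
    private readonly PriceCalculator _calculator = new PriceCalculator();

    public CheckoutService(IOrderDatabase database)
    {
        _database = database;
    }

    public decimal GetDiscount(int customerId)
    {
        // Talks to the outside world, then delegates the decision-making.
        (decimal total, int pastOrders) = _database.LoadCustomerStats(customerId);
        return _calculator.CalculateDiscount(total, pastOrders);
    }
}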

That is why building an isolated and self-contained domain model is important. The onion architecture, with the domain model at its center, is exactly about that:

[Image: Onion architecture]

The inner circle here consists of domain classes which communicate only with each other and don’t refer to the outer layers.

You need to change the way you work with stable dependencies as well. Never substitute them in tests; always use the real objects. Mocking stable dependencies just doesn’t make any sense because both the mock and the original object have predictable behavior that doesn’t change due to external factors.

A common objection to this idea is that such tests wouldn’t really be unit tests, as they touch several classes at once. A unit test, by this logic, is a test that exercises only the SUT and substitutes all of its neighbors with test doubles.

That is not the case either. In the original TDD book, the definition Kent Beck gave to a unit test has nothing to do with the SUT being tested in isolation. Instead, it’s about a test running in isolation from other tests. In other words, a unit test is a test that can be run in parallel with other unit tests because it doesn’t interfere with them through any shared resources such as the database or the file system.

In practice, it means that you shouldn’t substitute your domain classes with test doubles. As long as your domain model is isolated from the outside world, your unit tests can be segregated from each other without any additional help.
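
Here is a sketch of what that looks like in a test (Money and Wallet are hypothetical domain classes, and the test uses xUnit). The real Money class is used rather than a mock, and because the test touches no shared resources, it can run in parallel with any other test:

using Xunit;

public sealed class Money
{
    public decimal Amount { get; }

    public Money(decimal amount)
    {
        Amount = amount;
    }

    public Money Add(Money other) => new Money(Amount + other.Amount);
}

public class Wallet
{
    public Money Balance { get; private set; } = new Money(0m);

    public void Deposit(Money money) => Balance = Balance.Add(money);
}

public class WalletTests
{
    [Fact]
    public void Deposit_increases_the_balance()
    {
        var wallet = new Wallet();

        // Money is a stable dependency with predictable behavior,
        // so substituting it with a test double would buy nothing.
        wallet.Deposit(new Money(50m));

        Assert.Equal(50m, wallet.Balance.Amount);
    }
}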

An important corollary from this guideline is that extracting an interface out of a domain entity in order to “enable unit testing” is a design smell:

public class Order
{
    private readonly List<OrderLine> _lines = new List<OrderLine>();

    public void AddLine(IProduct product, IAddress address)
    {
        _lines.Add(new OrderLine(product, address));
    }
}

Such interfaces have a special name: header interfaces. These are interfaces that fully mimic the domain class they are supposed to “abstract”. Interfaces that have only one implementation (test doubles don’t count) don’t really represent abstractions and generally should be avoided.
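
The fix is to let the entity depend on the concrete domain classes directly, as in the sketch below (Product, Address, and OrderLine stand for concrete domain classes). Since they are stable dependencies, tests can exercise the real objects, and no interface needs to be extracted:

public class Order
{
    private readonly List<OrderLine> _lines = new List<OrderLine>();

    // Concrete domain classes instead of header interfaces: no test
    // double is needed, hence no "abstraction" to extract.
    public void AddLine(Product product, Address address)
    {
        _lines.Add(new OrderLine(product, address));
    }
}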

If you adhere to this guideline, you will inevitably find yourself moving up the functional ladder, making your code resemble a functional architecture more and more. First, you get rid of volatile dependencies in your domain model and move from examining collaborations to verifying state. After that, you refactor some parts of it to remove side effects, thus switching to the functional style of unit testing. Most likely, you will not be able to impose immutability on the whole domain model, but the shift, even a partial one, is still worth it.
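
As a sketch of one step up that ladder (the names here are hypothetical, and the test uses xUnit), a piece of domain logic extracted into a pure function can be verified through its output alone:

using Xunit;

public static class Pricing
{
    // Pure function: the same inputs always produce the same output and
    // there are no side effects, which enables output-based verification.
    public static decimal ApplyDiscount(decimal total, decimal percentage)
        => total - total * percentage / 100m;
}

public class PricingTests
{
    [Fact]
    public void Discount_reduces_the_total_by_the_given_percentage()
    {
        decimal result = Pricing.ApplyDiscount(total: 200m, percentage: 10m);

        Assert.Equal(180m, result); // no setup, no state inspection, no mocks
    }
}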

Architectural changes: layers of indirection

The second architectural change here is reducing the number of layers of indirection. As I mentioned in the previous post, an excessive number of such layers leads to an architecture that is harder to grasp because of the many dependencies introduced without necessity. This situation is also known as test-induced design damage:

[Image: Test-induced design damage]

To avoid it, use as flat a class structure as possible. That allows you to eliminate cyclic dependencies and reduce the complexity of your code base.

When you adhere to this guideline, your typical class diagram starts looking like this:

[Image: Flattened class structure]

The domain classes here are isolated from the outside world and interact with a minimal number of peers, all of which are also domain classes. The overall coordination is handled by application services (aka controllers / view models). This way, you are able to achieve a good separation of concerns: application services know how to communicate with the outside world and how to coordinate the work between domain classes, whereas the domain classes contain the domain knowledge.
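
A sketch of such an application service follows (all names are hypothetical). It communicates with the outside world through a repository and coordinates the domain classes, which hold the actual domain rules:

using System;

public class Account
{
    public decimal Balance { get; private set; }

    public void Withdraw(decimal amount)
    {
        // Domain knowledge stays inside the domain class.
        if (amount > Balance)
            throw new InvalidOperationException("Insufficient funds");

        Balance -= amount;
    }

    public void Deposit(decimal amount) => Balance += amount;
}

public interface IAccountRepository
{
    Account GetById(int id);
    void Save(Account account);
}

public class TransferService // application service
{
    private readonly IAccountRepository _repository; // volatile dependency

    public TransferService(IAccountRepository repository)
    {
        _repository = repository;
    }

    public void Transfer(int fromId, int toId, decimal amount)
    {
        Account from = _repository.GetById(fromId);
        Account to = _repository.GetById(toId);

        from.Withdraw(amount); // coordination, not business logic
        to.Deposit(amount);

        _repository.Save(from);
        _repository.Save(to);
    }
}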

These two architectural changes allow you to unit test the domain model using the first two styles, without falling back on the collaboration verification style. And that, in turn, raises the value of your unit test suite. As a side effect, these guidelines also help you see the bigger picture due to the reduced complexity of the code base.

I’ll show a thorough example of implementing these changes in practice in future articles. In the next post, we’ll talk about unit testing the Application Services layer. I’ll also demonstrate legitimate examples of using mocks.

Summary

Let’s summarize this article with the following:

  • Pragmatic unit testing is about choosing to invest in the most valuable unit tests only. For the domain model, those are tests that focus on output and state verification, not collaboration verification.
  • Adhering to pragmatic unit testing often requires architectural changes.
  • Separate the code that contains domain knowledge from the code that has volatile dependencies.
  • Don’t substitute stable dependencies, and don’t introduce interfaces for domain classes in order to “enable unit testing”.
  • Reduce the number of layers in your architecture. Have a single Application Services layer which talks directly to domain classes.

Comments

  • Guillaume L

    I’m slightly confused by your paragraph about reducing the number of layers of indirection. Do you recommend getting rid of Repositories, as the “flattened class structure” diagram suggests? What do the boxes in the “Test-induced design damage” diagram stand for? When would I know that there’s such damage? When the class count in a particular subsystem reaches 6? If you’re talking about the domain layer, why couldn’t there be 6 different domain concepts and therefore 6 objects interacting together? With Value Objects and Aggregates, you can easily reach that number.

    Or is the problem the number of arrows between the objects? What does it have to do with the use of mocks?

    By the way, DHH’s original post about test-induced damage was criticizing the use of Hexagonal Architecture, Onion Architecture’s twin brother 😉

    • http://enterprisecraftsmanship.com/ Vladimir Khorikov

      By reducing the number of layers of indirection, I mean removing cyclic dependencies and flattening the overall class structure. The boxes and arrows in the diagram are there just to show an example of what an overcomplicated class structure can look like (with many collaborators and collaborations between them), in contrast to its alternative (which is depicted below it). No particular limit on the number of such collaborators is intended.

      When would I know that there’s such damage?

      I wish I had a precise answer 🙂 All I have to offer is an example of what I deem as such and how it can be refactored into something simpler and more robust. The example is coming in the next post. This example is also the reason why I left this description vague: hopefully, it’ll become more or less clear when I introduce the sample.

      Or is the problem the number of arrows between the objects? What does it have to do with the use of mocks?

      The problem is that, focusing on collaboration verification inside the domain model, a developer sees every bit of domain logic as a potential collaborator and tries to represent it as such, whereas it may very well be modelled as a value object or as part of another, already existing class. The use of mocks fosters this line of thinking.

      • Guillaume L

        As much as I try to relate to what you’re saying, I can’t seem to find something in my experience or in third-party code I’ve come across that matches it. I have barely seen one or two attempts at mocking domain entities in my whole career, which were immediately discarded by the programmer’s own realization that this was subpar or by external advice given to them. More importantly, I have never noticed any correlation between using a tests+mocks-driven design and indulging in circular dependencies, useless layers of indirection, or other forms of overengineering. It doesn’t even make sense to me why there should be a relationship.

        The use of mocks fosters this line of thinking

        No, poor design skills cause that line of thinking.

        If you’re bad at coming up with loosely coupled and strongly cohesive classes, you’ll be bad at it no matter the design approach. TDD isn’t a substitute for good design skills. Mocking isn’t a substitute for good design skills. They are orthogonal concerns to design skills. All they do is create a space to think about design, but they are completely agnostic of which sort of design.

        • http://enterprisecraftsmanship.com/ Vladimir Khorikov

          Mocking domain entities usually comes into play when they are not quite isolated from volatile dependencies. In other words, when code that contains domain knowledge also refers to the external world.

          I agree that mocks by themselves don’t necessarily lead to test-induced design damage. They do, however, almost inevitably lead to it when used to substitute classes inside the domain model. I think the reason you don’t see anything similar in your experience is this:

          I have barely seen one or two attempts at mocking domain entities in my whole career

          A great example of what I’ve written about in this post is the sample project from the GOOS book: https://github.com/sf105/goos-code. I’m sure its authors have good design skills. Still, the code base shows all the signs I described in the article: too many layers of indirection, mocked domain entities, circular dependencies.

          • Guillaume L

            In that codebase, most of the mocks (in fact all but one) target the interfaces that are defined in the main AuctionSniper package – but implemented in subpackages that represent infrastructure layers. In other words, these tests cross boundaries.

            From the book:

            There is core domain code (for example, AuctionSniper) which depends on bridging code (for example, SnipersTableModel) that drives or responds to technical code (for example, JTable).

            It is these interactions that the mock-based tests in GOOS primarily intend to tackle. Besides, usage of the word “domain” in that book can be somewhat different from what you get in DDD. For instance, on page 62 they call Swing and the messaging infrastructure “Domains”. Sometimes the AuctionSniper calls into these other “domains” via injected dependencies, which is admittedly unacademic from a DDD perspective, but it is important to understand that this code doesn’t use the DDD tactical patterns.

            This isn’t the same thing as “mocking domain entities” as I was describing it. I was referring to a context where domain objects have no dependencies to the external world, where a result can easily be checked by verifying the state of an entity, where the Aggregate can reasonably be considered a unit and thus mocking makes little sense.

            Regarding the design of the classes as the authors created them, I don’t see the attached diagram as particularly bloated, circular or incomprehensible. The concepts look pretty clear and cohesive to me.

          • http://enterprisecraftsmanship.com/ Vladimir Khorikov

            I understand that the authors can use those terms differently from the DDD book, and I understand that you were referring to a different context when you wrote about mocking domain entities. That’s not the point, however; I’m not intending to catch the GOOS book using the DDD terms “incorrectly”. What I’m going to do is provide an alternative implementation to the solution they write about in the book and show how much cleaner and more concise it becomes after you lay out a proper separation of concerns and get rid of mocks.

            In other words, these tests cross boundaries.

            They shouldn’t. When you make domain logic self-contained, there’s no need for that and thus no need to mock those dependencies out.

            The code base from the book has two major drawbacks: it doesn’t properly isolate the domain model (in the DDD sense of the word) and makes heavy use of header interfaces (interfaces with a single implementation). The latter flows from the former and results in a solution that is more complicated than it should be.

  • Jonathan Dennis

    Hello Vladimir,

    Thanks for this series of posts on unit tests. They are very helpful, especially the diagrams on the styles of unit testing. They make it very clear what is happening.

    I took a bootcamp-style .NET course about 5 years ago, and the instructor was teaching mockist-style TDD. He said things like “Remember, TDD is about design.” I have gone back and forth over the years on whether I agree with him or not. I’ve done a lot of reading on the subject, and at the risk of committing the “synthesize the expert” anti-pattern, here are some things I have found, for whatever they are worth.

    First of all, I agree with Mark Seemann that TDD is not inherently a good design methodology. If you aren’t good at design, then TDD is not going to magically help.

    One of the best videos I have ever seen on testing was a talk by Justin Searls called “Breaking up (with) your test suite.” He talks about different types of tests and knowing what goal you have in mind when you do each type of testing. One of the styles of testing he talks about is what he calls “Discovery testing”; it is basically his version of GOOS mockist-style testing. At about 38:38 into the presentation, he talks about how discovery tests are similar to training wheels:

    Discovery tests are kind of like a training wheel. Once you’ve gotten used to writing tiny things, they [Discovery Tests] aren’t necessarily all that important or all that useful anymore, but they help us to understand really good habits. And those good habits are discovered by test pain. If it’s really hard to write a test double in a particular test, it doesn’t necessarily mean that we really suck at using test doubles; it probably means that our design could stand to be improved somehow. So pain is good, pain is part of the process. And once you have gotten good at this, maybe you can start to shed discovery tests when you no longer feel any interesting pain while you are writing them.

    Training wheels for bikes are for absolute beginners, but if you have a beginner try to do discovery testing, then you will probably end up with test-induced damage. So the analogy may not be perfect. But if you already have some knowledge of good design, then I think these kinds of tests can be useful. Even with discovery testing, Justin still advises maximizing the creation and testing of pure functions.

    • http://enterprisecraftsmanship.com/ Vladimir Khorikov

      Hi Jonathan,

      Thanks for the references. I’ll need to go through the whole talk, it looks like there are a lot of interesting nuggets there.

      I have gone back and forth over the years on whether I agree with him or not.

      So, where do you stand now in terms of TDD as a design methodology?

      • Jonathan Dennis

        That’s a good question. I currently think that, if it is done right, it can help you with your design. I don’t think great design is going to automatically emerge with TDD, but I think it can help. The main reason I take this position is that I have been very impressed with the tutorial on “Discovery Testing” Justin did with the Game of Life. I find this methodology very interesting. Also see here for another, shorter summary of the use of mock objects in discovery tests.

        But I don’t feel like I’ve really gotten good at it yet. I’m still practicing. So once I either give up or really decide that is helping me I will have to follow up with you on this question. I would be curious to hear your thoughts on this type of testing once you have a chance to view all the videos.

        • http://enterprisecraftsmanship.com/ Vladimir Khorikov

          Definitely follow up with me on this question once you decide whether it’s helping you or not; it would be very interesting to hear your story.

          I’ve bookmarked all the videos, I’ll get back when I go through all of them.

        • http://enterprisecraftsmanship.com/ Vladimir Khorikov

          In the “Breaking up (with) your test suite” video, the discovery tests Justin talks about are basically the top-down approach to TDD, where you start designing the system from the top and mock your way down until you have all the required code in place. Mocks are required in this approach because, when starting from the top, you don’t have the functionality to build the top features upon, so you replace it with test doubles. You are right: that is the style the GOOS book describes.

          There’s some tension between the top-down and bottom-up approaches (aka mockist vs classicist). Some folks oppose them to each other; others think that there’s room for both. BTW, it’s interesting that the bottom-up approach is the only approach available for most functional languages (at least those that employ strong typing); they just don’t have facilities that enable the top-down style of TDD.

          I personally find the bottom-up (classicist) approach more convenient. When I work on a greenfield area, I usually start by playing with the domain model: define types and see how they might interact with each other. In other words, I start with the smallest things first and try to discover relationships between them. No tests are needed at this point.

          When I’ve gotten some confidence in how the domain model should look, I switch to the outside-in TDD mode: write an integration/end-to-end test, see it fail, then implement the required functionality in the domain model in the bottom-up fashion. The bottom-up approach helps avoid the use of mocks and build an isolated domain model which is easy to exercise directly.

          Although I personally tend to stick to the bottom-up approach, I think discovery tests can also be useful, but only if you delete them or replace the test doubles in them with real objects afterwards. That way, you’d be able to avoid an excessive number of false positives later on.

          Ok, now that I’ve watched the discovery testing series, I see he covers the differences between the bottom-up and top-down approaches pretty well, which makes most of what I wrote previously in this comment redundant 🙂

          I like the fact that he admits the top-down approach entails coupling unit tests to implementation details. I don’t think I ever saw anyone from the London school of thought admitting it. He also says that it makes it hard to refactor the SUT because of that (the point I put a lot of effort into making in my post series). It’s also interesting to hear the workaround he proposes: removing all the tests and writing them from scratch in the face of a more or less significant refactoring. I personally think that defeats the purpose of having a test suite. The tests stop being a safety net in this case and don’t contribute to the overall confidence. I’m not sure what the point is in keeping such tests after the feature/task is completed.

          Having watched this series, I now see the line of thought of the London school more clearly. It just values different things than the Detroit school does. Tests are viewed not as a safety net but more as a tool that helps you end up with a good design. Although that’s probably just Justin’s opinion; I’m not sure other Londoners would subscribe to it.

          The demo Justin went through was indeed quite interesting. At the same time, I think the implementation was overcomplicated and could be reduced to just a couple of domain classes (values, as the author called them in the video). I see that he puts a great emphasis on the value of the top-down approach: it forces you to think about the high-level architecture first. I think the same can be achieved with outside-in TDD, where you start with an end-to-end test and then implement the domain model to fulfill that test. At the same time, no mocks would be needed in this case.

          I’d like to thank you for posting these links here; it was interesting to go through the line of thought of someone from the London school. Very insightful.

          • Jonathan Dennis

            I’m glad you found the videos insightful. Thanks for watching and responding to them. I think one of the key insights to take from this is what you said about London-style tests not being a safety net. I totally agree with that thought. If you understand that when you write them, then I think that can avoid a lot of frustration. I also agree with you that there isn’t much point in keeping those tests around after the feature is complete. Again, thanks for your thoughts, and I will make it a point to get back with you when I get some more experience with these kinds of tests to let you know what I think of their usefulness.