How to do painless TDD

Last week, we nailed down the root cause of the problem: so-called test-induced damage – damage we have to bring into our design in order to make the code testable. Today, we’ll look at how we can mitigate that damage, or, in other words, do painless TDD.

Painless TDD: to mock or not to mock?

So is it possible to not damage the code while keeping it testable? Sure it is. You just need to get rid of the mocks altogether. I know it sounds like an overstatement, so let me explain myself.

Software development is all about compromises. They are everywhere: CAP theorem, Speed-Cost-Quality triangle, etc. The situation with unit tests is no different. Do you really need to have 100% test coverage? If you develop a typical enterprise application, then, most likely, no. What you do need is a solid test suite that covers all business critical code; you don’t want to waste your time on unit-testing trivial parts of your code base.

Okay, but what if you can’t test critical business logic without introducing mocks? In that case, you need to extract such logic out of the methods with external dependencies. Let’s look at the method from the previous post:

public class CustomerController : ApiController
{
    private readonly ICustomerRepository _repository;
    private readonly IEmailGateway _emailGateway;

    public CustomerController(ICustomerRepository repository,
        IEmailGateway emailGateway)
    {
        _emailGateway = emailGateway;
        _repository = repository;
    }

    public HttpResponseMessage CreateCustomer([FromBody] string name)
    {
        Customer customer = new Customer();
        customer.Name = name;
        customer.State = CustomerState.Pending;

        _repository.Save(customer);
        _emailGateway.SendGreetings(customer);

        return Ok();
    }
}

What logic is really worth testing here? I argue that only the first 3 lines of the CreateCustomer method are. The _repository.Save method is important, but mocks don’t actually help us ensure it works; they just verify that the CreateCustomer method calls it. The same goes for the SendGreetings method.

So, we just need to extract those lines into a constructor (or, if there were more complex logic involved, into a factory):


public HttpResponseMessage CreateCustomer([FromBody] string name)
{
    Customer customer = new Customer(name);

    _repository.Save(customer);
    _emailGateway.SendGreetings(customer);

    return Ok();
}


public class Customer
{
    public string Name { get; set; }
    public CustomerState State { get; set; }

    public Customer(string name)
    {
        Name = name;
        State = CustomerState.Pending;
    }
}

Unit-testing of those lines becomes trivial:


[Fact]
public void New_customer_is_in_pending_state()
{
    var customer = new Customer("John Doe");

    Assert.Equal("John Doe", customer.Name);
    Assert.Equal(CustomerState.Pending, customer.State);
}


Note that there’s no arrange section in this test. That’s a sign of highly maintainable tests: the fewer arrangements you make in a unit test, the more maintainable it becomes. Of course, it’s not always possible to get rid of them completely, but the general rule stands: keep the Arrange section as small as possible. And to do that, you need to stop using mocks in your unit tests.

But what about the controller? Should we just stop unit-testing it? Exactly. The controller no longer contains any essential logic itself; it just coordinates the work of the other actors. Its logic has become trivial, and testing such logic costs more than it brings in, so we are better off not wasting our time on it.

Types of code

That brings us to some interesting conclusions regarding how to make TDD painless. We need to unit test only a specific type of code which:

  • Doesn’t have external dependencies.
  • Expresses your domain.

By external dependencies, I mean objects that depend on the external world’s state. For example, repositories depend upon data in the database, file managers depend upon files in the file system, etc. Mark Seemann, in his book Dependency Injection in .NET, describes such dependencies as volatile. Other types of dependencies (strings, DateTime, or even domain classes) don’t count as such.

Here’s how we can illustrate different types of code:

Types of code to test

Steve Sanderson wrote a brilliant article on that topic, so you might want to check it out for more details.

Generally, the maintainability cost of code that both contains domain logic and has external dependencies (the “Mess” quadrant in the diagram) is too high. That’s what we had in the controller at the beginning: it worked with external dependencies and did some domain-specific work at the same time. Code like that should be split up: the domain logic goes into the domain objects, while the controller keeps coordinating and putting it all together.

Of the remaining 3 types of code in your code base (domain model, trivial code, and controllers), you need to unit test only the domain-related code. That’s where you get the best return on your investment, and that’s the trade-off I advocate you make.

Here are my points on that:

  • Not all code is equally important. Your application contains some business-critical parts (the domain model), on which you should focus most of your efforts.
  • The domain model is self-contained; it doesn’t depend on the external world. That means you don’t need to use any mocks to test it, nor should you. Because of that, unit tests aimed at the domain model are easy to implement and maintain.

Following these practices helps build a solid unit test suite. The goal of unit-testing is not 100% test coverage (although in some cases it is reasonable). The goal is to be confident that changes and new code don’t break existing functionality.

I stopped using mocks a long time ago. Since then, not only has the quality of the code I write not dropped, but I also have a maintainable and reliable unit test suite that helps me evolve the code bases of the projects I work on. TDD has become painless for me.

You might argue that even with these guidelines, you can’t always get rid of the mocks in your tests completely. An example here may be the necessity of keeping customers’ emails unique: we do need to mock the repository in order to test such logic. Don’t worry, I’ll address this case (and some others) in the next article.

If you enjoyed this article, be sure to check out my Pragmatic Unit Testing Pluralsight course too.


Let’s summarize this post with the following:

  • Don’t use mocks
  • Extract the domain model out of methods with external dependencies
  • Unit-test only the domain model

In the next post, we’ll discuss integration testing and how it helps us in the cases where unit tests are helpless.

Other articles in the series


  • Anders Baumann

    Great article and I really like the ‘No mocks’ approach. I will try and use that in the future.

    • Vladimir Khorikov

      Thank you!

  • Pellared

    First of all, very good article! However I am not sure about the statements from the summary “Don’t use mocks” and “Extract the domain model out of methods with external dependencies”. In my experience I would write “Minimize the use of mocks” and “Extract the domain model with only domain-related dependencies”. Why? Because from my observations many domains are related to some other systems – there are some boundaries (I am not a DDD expert so please correct me if I am using bad vocabulary). Many software systems are built to manage some “external” things. Example: consider that you are making industrial software that takes care of controlling and monitoring the state of a power plant – your domain dependencies would be things like Engine (which would have “methods” such as Run(Speed), Stop(), CorrectnessCheck()) and Protective relay (with functionalities like GetMeasrements(), Trip()). Then how would you test the domain without mocks? You could try simulating these components, but they are so complex that it is almost impossible. Moreover, automated testing of such a system is also impossible. What could be better than mocking those domain dependencies?

    • Vladimir Khorikov

      I actually tend to differentiate between external (volatile) dependencies, such as databases or 3rd-party web services, and internal (stable) dependencies, such as in-memory data structures. Mocks help us with the first type of dependencies, but they are not necessary in the latter case. Objects from different bounded contexts fall into the second category: you can just create such objects manually and feed them to the entities working with them.

      If I understood your example correctly, you can, for instance, do something like the following:

      public void Test()
      {
          // Arrange
          Engine engine = CreateEngine(); // Here we create a full-fledged copy of the domain object for test purposes
          ProtectiveRelay relay = new ProtectiveRelay();

          // Act
          relay.GetMeasrements(engine); // Passing it as a dependency

          // Assert
      }

      The key here is to make the domain model self-contained so that it doesn’t refer to any external dependencies. That way, we can use real objects in order to unit test the domain model.

      > “A small reference from (starting: 29:20):”

      Interesting talk, thanks for the reference. Seams are actually a good practice, but they don’t imply the need for mocks; they’re just a reminder that we need to maintain a reasonable separation of concerns among our entities. If the concerns are separated, we can easily test them in isolation, be it through mocks or classic object construction as in the example above.

      Thank you for the comment!

      • Pellared

        Thanks for your reply!

        The problem is that the

        > “CreateEngine(); // Here we create a full fledged copy of the domain object for test purposes”

        is not possible. This “domain object” is far more complex than the IT system itself! We would have to know how the Drives and Engines work… We do not want to simulate the Engine – we want to communicate with it, and we have no idea of its internals – it is a true black box for us.
        Therefore we have an IDrive interface in the Domain, because we cannot create such objects manually, and we use mocks to test it. And really the domain is like: “if the system is in some particular state – send a “stop” message to the Drive”.

        From my experience, the problem in unit testing is often that people make tests too granular. They treat classes as the unit under test, which leads to mocking overuse and “leaking” the implementation into the tests.

        • Vladimir Khorikov

          > “From my experience, the problem in unit testing is often that people make tests too granular.”

          I couldn’t agree more. Just recently had a discussion on that topic. In my opinion, it’s not always a good idea to isolate every class from each other as it loosens the cohesion between them which itself can lead to poor encapsulation.

          Regarding the Engine domain object. In my experience, if we maintain a good separation of concerns and adhere to some kind of onion architecture, then it should be possible to fairly easily create any domain object, either using its constructor or a factory.

          In my experience, the inability to easily create a domain object often signaled problems with separation of concerns. In such cases, it was indeed hard or even impossible to create some of the classes in the domain model. I would look towards extracting all the external dependencies out of the domain objects and making them simple and self-contained. Ideally, classes from your domain model (such as Engine) should interact only with other domain model classes. That would allow you to actually test the domain model in isolation, without mocks.

  • Aaron Poche

    Does not the approach used here then mean you have business logic in entity classes?

    • Vladimir Khorikov

      If by entities you mean Customer and other domain classes, then yes. Most of the business logic resides there.

  • Maxim Balaganskiy

    Most of our business logic lives in service classes separate from the domain models. And 99% of the time it’s LINQ queries. Therefore, my Arrange section consists primarily of data creation commands. We do use mocks for several services like DateTimeProvider or EmailProvider, but they are set up in a fixture, not in the tests themselves, and are reusable, similar to stubs. The thing which bugs me is that data preparation produces lots of duplicate code. I tried reusing it, but this creates test dependencies, so back to square one… Is there a better approach with regards to initial data setup?

    • Vladimir Khorikov

      Code reuse in unit tests is a tough topic. On one hand, unit tests should be independent from each other; on the other, no one wants to have tons of boilerplate code, so we need to keep a balance here.

      I know of 2 methods to mitigate this problem. The first one is to reuse the data preparation methods (those that are used in the Arrange section), but do this only within a single unit test class; don’t reuse them across several test classes. The methods CreateOrganization and CreateUser are made private within the UserTests class, so other test classes don’t have access to them.
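      As an illustration of the first method, such private helpers might look roughly like the following (a sketch: the UserTests, CreateOrganization, and CreateUser names come from the comment itself, while the Organization and User classes and their constructors are assumed for the example):

      public class UserTests
      {
          [Fact]
          public void New_user_belongs_to_its_organization()
          {
              // Arrange: the helpers below are private to this test class only
              Organization organization = CreateOrganization("Acme");
              User user = CreateUser("john@acme.com", organization);

              // Assert
              Assert.Equal(organization, user.Organization);
          }

          // Data preparation methods: reused within UserTests,
          // invisible to other test classes
          private static Organization CreateOrganization(string name)
          {
              return new Organization(name);
          }

          private static User CreateUser(string email, Organization organization)
          {
              return new User(email, organization);
          }
      }

      Keeping the helpers private limits the blast radius: if a constructor changes, only this one test class needs updating.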

      The other method is for more sophisticated scenarios, where you need more fine-grained control over what you create in the Arrange section. The idea is to create a special Builder class for each class you want to construct in your tests and then reuse that builder across all unit tests:
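      Such a builder might look something like this (a sketch of the Test Data Builder pattern; the CustomerBuilder name, the default values, and a Customer constructor accepting both a name and a state are assumptions for illustration):

      public class CustomerBuilder
      {
          // Sensible defaults; individual tests override only what they care about
          private string _name = "John Doe";
          private CustomerState _state = CustomerState.Pending;

          public CustomerBuilder WithName(string name)
          {
              _name = name;
              return this;
          }

          public CustomerBuilder WithState(CustomerState state)
          {
              _state = state;
              return this;
          }

          public Customer Build()
          {
              return new Customer(_name, _state);
          }
      }

      // Usage in a test's Arrange section:
      // Customer customer = new CustomerBuilder().WithName("Alice").Build();

      Because each test states only the attributes it cares about, a change to the Customer constructor touches the builder alone, not every test.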

  • Guillaume L

    I think you’re underestimating the value of mock-based tests on two levels.

    Development process and design value

    One of the things that is not often mentioned about mocks and mock-based tests is that they were largely explored and developed by people who have a very specific approach to implementing systems – namely, outside-in test-driven development. This methodology is explained in the book “Growing Object-Oriented Software, Guided by Tests” by Pryce and Freeman.

    In that approach, they start from the outermost layer of an application and progressively “mock their way to the core”. They gradually implement distinct layers of language in the program. At each layer, a semantically cohesive set of objects have to be defined. But these objects are not isolated, they talk to their peers and sometimes have to reach out to objects in other, still unimplemented layers. So one option is to define interfaces that you know the object you’re currently developing will have to call, but without coding that dependency straight away. It allows you to think about the contract, the shape of that dependency from a consumer’s perspective without caring about implementation details at first.

    Why are fake objects, and more precisely fakes written with the help of a library, important in this scheme? Well, they are an easy and fast way to set up placeholders for these still-unimplemented interfaces in tests. In addition, the mock flavor of fakes can check whether the object currently being tested behaves properly towards its dependencies. The whole rationale is that it’s easier to design your dependencies well when you have unit tests for interactions with these dependencies – a.k.a. interface discovery.

    Correctness value

    By saying “don’t use mocks” without detailing what the tradeoff is, you’re also overlooking multiple correctness aspects that they validate for you directly or indirectly:

    – Does object X call its dependency Y at all? In your example, sending an email is still a very important thing to get right.
    – Does it give it the right parameters?
    – Does it call its dependencies in the right order?
    – Does it do the right thing with a value returned by one of its dependencies?

    You might say that you’re overlooking them on purpose, which is a perfectly valid choice, but then you should explain why each one is not important in your opinion. You touched a bit on that by mentioning Steve Sanderson’s blog post, but even he stays very superficial on the subject, only stating that “it’s expensive to do and yields little practical benefit”. I think you should make these tradeoffs more explicit in your article, and most importantly put them in perspective with your context.

    I admit that the typical use of mocks is subpar in a majority of code bases these days. The tests are riddled with implicit assertions and rendered fragile because mocks are used instead of stubs or dummies in unit tests. Behavioral tests don’t have their own properly named test methods, they are buried deep into other state-based or outcome-based test methods. Some people nest mocks, which is utter nonsense and denotes ignorance of the “Tell, Don’t Ask” best practice.

    But just because an approach is misunderstood and misused by a lot of people doesn’t make it invalid. And we’d also be better off not totally forgetting where the approach originated in and what problems it was designed to solve in the first place.

    • Vladimir Khorikov

      First of all, thank you for such a long and thoughtful comment, really appreciate it.

      Outside-in test-driven development is indeed a very useful approach. I myself apply it broadly in my projects. However, this approach doesn’t mean you should use mocks, it can very well be achieved without them.

      Interaction-based unit testing is a separate big topic. In short, I think such tests bring the least value, as they tend to verify the implementation details of the system, not the operations’ outcomes. A better approach here is to adhere to the ports-and-adapters architecture and test your domain model at its ports, without going deeper into the objects’ interactions.

      > “I think you should make these tradeoffs more explicit in your article”

      Thanks for the feedback. I touch on this topic a bit in the “The most important TDD rule” post (the idea is to verify the end result, not how this result was achieved), but should probably indeed make it more explicit.

      I have a couple of articles planned on this topic, including a detailed review of the GOOS book, which I think represents a canonical way of using mocks for TDD. I think it will address most of the points left unclear by this post series. Hopefully, I will be able to post them soon.

  • Michal Pietrus

    Hello Vladimir,

    I wonder what’s your opinion about mocking and unit testing the CreateCustomer method in a dynamic, duck-typed language, say Ruby or Python? These have no compiler and hence, simple typos or even checking whether a duck quacks like it’s supposed to are (IMO) hard to catch without a test. However, the question remains: shall it be a unit test checking the behavior (i.e. verify that _emailGateway.SendGreetings is getting a customer instance) or an integration test, which utilizes a real database and – for instance – checks whether the customer has been created? Such an integration test obviously duplicates the “New_customer_is_in_pending_state” test, so in general it sounds like a bad idea.

    • Vladimir Khorikov

      Good question. I think tests (even trivial ones) bring more value in such a setting, precisely because the type system doesn’t help us catch those errors.

      However, I don’t think it makes much difference. Mocking in a dynamically typed language is easier (and thus such tests are more maintainable), but tests with mocks still don’t bring much value unless they work solely with external dependencies you don’t control. I described the approach I’d recommend to take in these 3 articles:

      In short: start with an integration/end-to-end test that will touch the database directly and verify the customer is indeed saved. If the domain logic is simple (CRUD), then that’s it. If not, start introducing unit tests that thoroughly test the domain model specifically. Try to isolate that domain model from the external world; that would allow you to not involve mocks when testing it.

      • Michal Pietrus

        Thanks for answering! After a while I’ve realized my question is answered even in the next blog post (“Integration testing or how to sleep well at night”), but nonetheless I was curious about your opinion. Although I’ve never written a single line of code in C#, I really appreciate the knowledge behind your posts, because any language is just a tool, but how to use that (or another) tool wisely is a completely different story. I also value the way you present “what’s worth it and what’s not” (the return on investment) of doing X or Y, because I literally haven’t studied any literature that would even touch on such a topic in that way. Perhaps I haven’t found it yet 🙂 .

        Anyway, what if I’d assume that you’re in a legacy codebase, where it would be difficult to introduce concepts that would let you isolate the domain model easily, and such a codebase hasn’t been written following DDD? Some pieces are unit tested (i.e. with mocked repositories) and repositories are integration tested with a real DB separately. Now, in such a case the behavior that these pieces work correctly with each other should be verified somewhere, and since you have some “leaf” pieces covered – the natural step sounds like writing a test that checks the behavior (with mocks, obviously). Conversely, to verify the code works as a black box and that for a given input it returns some output, a test of state is more convenient (what you’re advocating). However, since the repos have their tests and the “leaf nodes” have their tests, the black box test duplicates the tests you already have (i.e. the repo tests). What am I missing here?

        • Vladimir Khorikov

          Thank you for your kind comment 🙂

          Interesting question. I don’t think I’ve written about testing repositories specifically; that would make for a good blog post. I wouldn’t recommend testing repositories in isolation from the other pieces. The general guideline here is to build tests in a way that allows you to describe the system’s behavior. Repositories don’t reflect any behavior; they are just a façade on top of the data storage (at least they should be), so tests covering them specifically don’t add much in terms of description – they don’t “tell a story”.

          What I would recommend instead is having integration tests that verify your operations end-to-end. The entry point would be a controller, and the verification would be done against a real DB – the tests will check its state. Such tests involve repositories behind the scenes because the controller uses them to store data, but the main goal here is not the repositories, it’s the end-to-end behavior.

          Don’t create too many such tests, though. The approach I usually apply is to have a single test for each controller action that verifies the happy path. If there’s non-trivial conditional logic in the action, I recommend putting it into the domain model and using unit tests to verify it.
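          A happy-path test of that kind might look roughly like this (a sketch only: the CreateRealRepository, CreateRealEmailGateway, and QueryCustomerByName helpers are hypothetical names standing in for whatever test infrastructure the project has; the controller and Customer come from the post above):

          [Fact]
          public void CreateCustomer_saves_customer_to_the_database()
          {
              // Arrange: wire the controller with real out-of-process dependencies,
              // not mocks – a repository over a test database and a test email gateway
              CustomerController controller = new CustomerController(
                  CreateRealRepository(),    // hypothetical helper
                  CreateRealEmailGateway()); // hypothetical helper

              // Act: exercise the happy path through the controller action
              controller.CreateCustomer("John Doe");

              // Assert: verify the resulting state of the database,
              // not the interactions that produced it
              Customer customerFromDb = QueryCustomerByName("John Doe"); // hypothetical helper
              Assert.NotNull(customerFromDb);
              Assert.Equal(CustomerState.Pending, customerFromDb.State);
          }

          Note that the assertions target the database state, so the test survives refactorings of how the controller and repository collaborate internally.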

          • Michal Pietrus

            Again, thanks for answering!

            Don’t create too many such tests, though. The approach I usually apply is to have a single test for each controller action that verifies the happy path. If there’s a non-trivial conditional logic in the action, I recommend putting it to the domain model and using unit tests to verify it.

            Huh, that’s what I’m usually doing, except for the E2E test part. What I usually try to achieve is isolating the boundaries (e.g. repos) in order to simplify things or just speed up the build. Obviously it comes with a cost, which includes mock usage (unit test isolation) and a need to test the behavior (again, with mocks). The result is brittle tests, because any refactoring ends with red tests. In terms of return on investment, this doesn’t do any good.

            On the other side, E2E tests aren’t cheap either (i.e. they’re noticeably slower), but I’ll trust your experience. I’ll try to minimize mocks and introduce more E2E tests to cover happy paths, and we’ll see how it works out in the long run, and whether the ROI is positive 🙂 .

  • Michal Domagala

    Hello Vladimir,

    I read this post and, especially, the example with fixing destructive decoupling.

    I have the impression that the two posts present solutions going in opposite directions:

    – “fixing destructive decoupling” moves related code under single “umbrella” to achieve better cohesion/meaningfulness

    – this example moves related code (class Customer) away from single “umbrella” (class CustomerController) to achieve better testing isolation

    As many years have gone by in the meantime, do you have a reflection on which direction is better?

    • Vladimir Khorikov

      Good question, the dichotomy is spot on. (I assume you meant this post?)

      The line between the two is out-of-process dependencies. Separating the work with out-of-process dependencies from business complexity is one of the few techniques that are of tremendous help. This separation provides better testability but also better simplicity and both are extremely important. More thoughts on this topic here:

      • Michal Domagala

        First, thanks for the answer. I wanted to be confirmed that my “dichotomy” observation is correct. You confirmed it, and I was fine.

        But I also had to read more posts about that. My understanding is:
        1. Destructive decoupling should be fixed. Gathering spread-out code under a cohesive “umbrella” is good.

        2. Once the code is under the domain “umbrella”, extracting a functional core should be considered. However, the functional core should be cohesive. If the functional core is a bag of unrelated classes, the extraction is not profitable.