When to include external systems in testing scope

By Vladimir Khorikov

Should you always mock out your database? Or should you include it in the unit/integration testing scope? What about other external systems? This post is based on my Pluralsight course about Pragmatic Unit Testing.

Two types of external dependencies

When it comes to external dependencies (dependencies outside the process that hosts your application, such as a database, a 3rd-party system, etc.), there’s no single guideline regarding how to work with them in tests. Sometimes, it makes sense to include them in the testing scope and verify your application together with those dependencies directly. In other situations, the best approach is to substitute them with mocks (or other test doubles). How do you determine which way to go?

To do that, you need to look at what exactly the dependency represents.

All external dependencies can be roughly divided into two categories: those you control, and those you don’t. The first category comprises systems that reside “close” to yours. The application database and the file system are good examples here. If your application is the only one working with the database, you have full control over its underlying structure and the data in it. The same is true for the file system.

If, on the other hand, your application shares access to an external dependency, your options for affecting its internal workings are limited. For example, when working with a payment API a bank exposes, you can’t simply change that API. It might be developed by another team, and such a change could affect multiple clients, so backward compatibility must be maintained. You also need to keep in mind that merely interacting with that service may in itself have an effect on other applications integrating with it.

In other words, communications with external dependencies shared across multiple applications are visible to the outside world, whereas communications with dependencies you fully control are not.

This distinction is an important one. Remember that the most important rule of unit testing is verifying the end result of the SUT as it’s seen from the outside world. This is the only way you can avoid getting false positives and thus increase the value of your test suite.

Calls to external systems you don’t have control over comply with that definition. They are the end result the SUT produces because they are visible from the outside of your system. On the other hand, dependencies you do control comprise a single cohesive whole with your application. Calls your application issues to them are not visible from the outside.

Communications with external systems outside of your control comprise part of the bounded context’s contract because they are visible from the outside of that context. On the contrary, communications with external systems you do control are internal implementation details.

Here’s how it can be depicted graphically:

[Figure: Communication inside a bounded context vs communication between multiple bounded contexts]

Your database, if not shared across multiple applications, is part of the same bounded context. When someone calls your bounded context’s API to, say, create a new product:

POST /products { "id": 1, "name": "Pizza", "price": 8 }

they don’t expect to then go directly to the database and work with that product from there. That’s not part of the contract (post-condition) the API method offers. What it does offer is a promise that after the client creates the product, they can then fetch it using another API method:

GET /products/1

and that product will come out as expected, with all the data the client has set up for it.

The underlying storage in this scheme is an implementation detail. It’s not reachable by the clients; they don’t know anything about it. This implementation detail can therefore be changed freely, and your tests shouldn’t bind to the way your application communicates with the storage.
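
Here’s a minimal sketch of a test that verifies this contract through the API alone (the client helper and the DTO are hypothetical; only the endpoints come from the examples above):

[Fact]
public async Task Created_product_can_be_fetched_back()
{
    // CreateApiClient is an assumed helper that spins up the API under test.
    HttpClient client = CreateApiClient();

    // Create the product through the public API...
    await client.PostAsJsonAsync("/products", new { id = 1, name = "Pizza", price = 8 });

    // ...and verify the promise through the public API as well, without
    // binding to the underlying storage in any way (System.Net.Http.Json).
    ProductDto product = await client.GetFromJsonAsync<ProductDto>("/products/1");

    Assert.Equal("Pizza", product.Name);
    Assert.Equal(8, product.Price);
}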

You can’t say the same about dependencies you don’t control. The side effects you induce in them are visible from the outside. A client buying a pizza using this API method:

POST /products/1/purchase

can expect your app to charge some amount of money to their bank account, and they can verify that payment using their bank’s mobile app. Your application is not the only one working with the bank API.

However, as you don’t have control over what the bank does after you ask it to charge the payment, you can’t declare the statement “the client’s credit card balance increases by 8 dollars” as a post-condition of the /purchase API method. The only thing you can guarantee is that you ask the bank to do that; in other words, that you call the appropriate API on it. Whether or not the call will be processed correctly is up to the bank itself; you can’t influence that decision in any shape or form.

So, again, the difference between external dependencies you have control over and dependencies you don’t control determines whether to include them in the testing scope. The best way to work with the former is to exercise them directly: this way, you will be able to validate the contract your application promises to fulfill. As for dependencies you don’t control, the only thing you can guarantee is that your application properly communicates its intention to them, and mocks are the best way to verify that guarantee.
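
As an illustration, here’s a minimal sketch of that kind of verification using a mocking framework such as Moq (IPaymentGateway, PurchaseService and the amount are assumptions made up for the example):

[Fact]
public void Purchase_asks_the_bank_to_charge_the_payment()
{
    var gatewayMock = new Mock<IPaymentGateway>(); // hypothetical wrapper around the bank API
    var sut = new PurchaseService(gatewayMock.Object);

    sut.Purchase(productId: 1);

    // We can't guarantee what the bank does with the request, only that
    // our application properly communicated its intention to the bank.
    gatewayMock.Verify(x => x.Charge(8m), Times.Once());
}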

This article is an important addition to the post I wrote recently regarding when to use mocks. In it, I stated that the only valid place for mocks is inter-system communications: communications between your application and other applications. But that alone is not a sufficient condition. A sufficient condition for the use of mocks would be “inter-system communications with dependencies you don’t have control over”. So, the overall picture looks like this:

[Figure: When to use mocks]

We can boil this guideline down further using DDD terms: use test doubles only to check communications with other bounded contexts. Communications with dependencies that are external but still reside inside your bounded context are not subject to verification in tests. It is better to check the state those dependencies end up in; the communication pattern that brought them to that state doesn’t (and shouldn’t) matter. An application database is a good example here: it is hosted by a different process but is still part of the same bounded context.

Note that all of the above applies to integration tests only: tests that verify how your application works with other applications. As for unit tests, I’m assuming they focus on verifying business logic that is properly isolated from external dependencies of any kind, so there’s no need to use test doubles there anyway.

Application database vs integration database

Along with application databases, there’s also the concept of an integration database: a database that is used by multiple applications and serves as an integration point between them.

An integration database is not part of a single bounded context; it spans several of them. Because of that, it falls into the same category as other external dependencies you don’t have control over. You can’t change the structure of an integration database without introducing breaking changes, and any modification you make to its data is visible to other applications. Therefore, it is preferable to treat it the same way you treat an external bounded context.
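
In practice, that could mean hiding the integration database behind a gateway and verifying the communication with that gateway, instead of hitting the shared database from tests. A sketch (all names are illustrative):

// A thin gateway that wraps all access to the shared integration database.
public interface ICustomerRegistryGateway
{
    void RegisterCustomer(Customer customer);
}

[Fact]
public void SignUp_registers_the_customer_in_the_shared_registry()
{
    var registryMock = new Mock<ICustomerRegistryGateway>();
    var sut = new SignUpService(registryMock.Object); // hypothetical application service
    var customer = new Customer("John Doe");

    sut.SignUp(customer);

    // The integration database is treated like an external bounded context:
    // the test only verifies that the application communicated its intention.
    registryMock.Verify(x => x.RegisterCustomer(customer), Times.Once());
}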

Light-weight databases: do or do not?

There’s a popular practice when it comes to testing how your code works with databases: replacing them with in-memory implementations. For example, SQL Server could be substituted with SQLite, which gives a significant performance boost compared to executing tests against the real database.

While the performance benefit is great, I wouldn’t recommend this practice. The problem is that most in-memory databases don’t quite operate the same way as normal ones. There’s always a chance you’ll hit an edge case: a scenario where the in-memory database works differently.
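
One well-known example of such a divergence: SQLite uses dynamic typing and silently ignores VARCHAR length constraints, so an insert that SQL Server would reject goes through without an error. A sketch using Microsoft.Data.Sqlite (the table and column names are made up):

using Microsoft.Data.Sqlite;

using (var connection = new SqliteConnection("Data Source=:memory:"))
{
    connection.Open();

    var create = connection.CreateCommand();
    create.CommandText = "CREATE TABLE Products (Name VARCHAR(10))";
    create.ExecuteNonQuery();

    var insert = connection.CreateCommand();
    // Far more than 10 characters: SQLite accepts this silently,
    // whereas SQL Server would fail with a truncation error.
    insert.CommandText = "INSERT INTO Products (Name) VALUES ('A name that is way too long')";
    insert.ExecuteNonQuery();
}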

That’s quite annoying, to say the least. Your tests may work in 99% of cases and lie in the other 1%, producing either a false positive or a false negative depending on how the in-memory DB implementation differs from that of the normal one.

And that, to a large degree, defeats the purpose of having integration tests in the first place. Even if all your integration tests pass, you can’t really be sure that the integration between your code and the database works entirely correctly, and you will still need to test this integration manually anyway.

Overall, try to avoid those “in-between” solutions. Either mock your database out completely or test your code against the same type and version of the database as you use in production.

Summary

  • External dependencies fall into two categories: those you control and those you don’t.
  • Communications with dependencies you control are not visible from the outside of your bounded context. Because of that, you shouldn’t couple your tests to them.
  • Communications with dependencies you don’t have control over are visible to the outside world. It’s fine to couple your tests to such communications. Use test doubles to do that.
  • Don’t test against light-weight databases, test against the same type and version as you use in production.

  • Guillaume L

    “Note that all of the above applies to integration tests only: tests that verify how your application works with other applications. As for unit tests, I’m assuming they focus on verifying business logic that is properly isolated from external dependencies of any kind, so there’s no need to use test doubles there anyway.”

    My approach is the exact opposite – by using mocks in your tests, you are able to isolate the SUTs from external dependencies of any kind for the duration of the test, which allows you to turn many tests into unit tests.
    But, yeah, anyway. I guess we definitely disagree 🙂

    By the way, would you still not use a mock when a test runs a method that incidentally calls a database but it is not the focus of the test?

    For instance,

    – Save an object to DB
    – Send a message on a bus (what we want to test)

    or

    – Read from DB
    – Send an email (what we want to test)

    Or do you always verify the 2 things together in the same test (violating the “one assert per test” rule that some have)? If so, what name do you give to the test? Also, how do you simulate a DB failure in that kind of test?

    Other questions come to mind – how do you manage to keep tests isolated from each other if they write/read from the same DB? How and when do you set up test data in the DB? How do you keep a DB-intensive test suite within a reasonable run time?

    • http://enterprisecraftsmanship.com/ Vladimir Khorikov

      My approach is the exact opposite – by using mocks in your tests, you are able to isolate the SUTs from external dependencies of any kind for the duration of the test, which allows you to turn many tests into unit tests…
      But, yeah, anyway. I guess we definitely disagree 🙂

      The approach I try to employ is to isolate decisions made by the domain model from the effects of those decisions (side effects made to the DB, etc.). This naturally allows for not using test doubles because the decisions can be represented using domain classes.
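
      A minimal sketch of what that separation could look like (all names here are hypothetical):

      // The domain model makes the decision and returns it as a plain value.
      public enum OrderDecision { Accepted, Rejected }

      public class Order
      {
          public decimal Total { get; private set; }

          // A pure decision: no I/O, just domain logic returning the outcome.
          public OrderDecision AddItem(Product product)
          {
              if (product.IsDiscontinued)
                  return OrderDecision.Rejected;

              Total += product.Price;
              return OrderDecision.Accepted;
          }
      }

      // The application service is the only place that turns the decision into
      // side effects (DB writes, messages, etc.), so domain tests can assert on
      // the returned decision without any test doubles.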

      It’s interesting that we are still disagreeing on such a seemingly basic topic 🙂 Maybe you have some code base where you employ mocks and which you are willing to share with me? I could go through it and see if I could perform the kind of isolation I talked about in my previous blog posts. Or I could change my mind and admit that the approach I’m proposing is not as good as I thought it was – after I look at your counter-example.

      Or do you always verify the 2 things together in the same test (violating the “one assert per test” rule that some have)?

      I do violate this rule in integration tests. And I don’t think this rule is practical, as one assert doesn’t necessarily correspond to a single business case, especially in integration tests where you verify how your system works with other sub-systems.

      If so, what name do you give to the test?

      Here’s an example of an integration test:

      [Fact]
      public void AddUser_adds_user()
      {
          Organization organization = CreateOrganization("My org");
          var model = new AddUserModel
          {
              Email = "email@email.com",
              FirstName = "John",
              LastName = "Doe",
              OrganizationId = organization.Id
          };

          int userId = Invoke(x => x.AddUser(model)).Result;

          using (var db = new DB())
          {
              Maybe<User> userFromDb = db.GetUser(userId);
              userFromDb.ShouldExist()
                  .WithEmail("email@email.com")
                  .WithFirstName("John")
                  .WithLastName("Doe")
                  .WithStatus(UserStatus.Pending)
                  .WithNoSubscriptions();

              EsbGateway.Instance.EsbProvider
                  .ShouldSendNumberOfMessages(1)
                  .WithUserCreatedMessageInvolving(userFromDb.Value);

              EmailService.Instance.EmailProvider
                  .ShouldNotSendAnyEmails();
          }
      }

      I tend to create 1 integration test for a single business case. In this case, two test doubles (spies) are involved for the 2 external services: a message bus and an email service. And the database is exercised directly.
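
      The spies could be hand-rolled; here’s a sketch of what might back the EsbGateway assertions above (the real implementation isn’t shown in this thread, so the types and shape are assumptions):

      public class EsbProviderSpy : IEsbProvider
      {
          private readonly List<Message> _sentMessages = new List<Message>();

          // Production code calls this; the spy just records the communication.
          public void Send(Message message) => _sentMessages.Add(message);

          // Fluent assertions used by the integration test.
          public EsbProviderSpy ShouldSendNumberOfMessages(int expected)
          {
              Assert.Equal(expected, _sentMessages.Count);
              return this;
          }

          public EsbProviderSpy WithUserCreatedMessageInvolving(User user)
          {
              Assert.Contains(_sentMessages, m => m.Type == "UserCreated" && m.UserId == user.Id);
              return this;
          }
      }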

      Also, how do you simulate a DB failure in that kind of test?

      If a DB failure is an expected outcome, then it surely should be wrapped with a test double. In the test above, it is not.

      Other questions come to mind – how do you manage to keep tests isolated from each other if they write/read from the same DB? How and when do you set up test data in the DB?

      Tests are run sequentially, test data is set up before each test. BTW, the exact same set of questions is addressed in the “Getting the Most out of Your Integration Tests” module of my last course on Pluralsight ( https://app.pluralsight.com/library/courses/pragmatic-unit-testing/table-of-contents )
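
      With xUnit, for instance, the per-test setup can live in the test class constructor, which runs before every test method (a sketch; ClearDatabase is an assumed helper):

      public class UserControllerTests
      {
          public UserControllerTests()
          {
              // xUnit creates a new instance of the test class for each test,
              // so every test starts from a known, clean database state.
              ClearDatabase();
          }

          // ... tests ...
      }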

      How do you keep a DB-intensive test suite within a reasonable run time?

      Mainly by making sure there are not too many of them. I tend to cover only the most important use cases with them, all edge cases are addressed by unit tests.

      • Guillaume L

        The approach I try to employ is to isolate decisions made by the domain model from the effects of those decisions (side effects made to the DB, etc.).

        I tend to think that these domain decisions – most often, state machine transitions – have no direct effect per se. It is the application services that trigger the side effects. So, the effects are kind of intrinsically isolated.

        I do agree that domain objects and their tests are de facto isolated from the rest of the world. It doesn’t result from a test design decision though; it follows from the fact that the domain layer has no dependency on any other layer in an Onion (or Hexagonal) architecture. Thus, domain tests are naturally either unit tests or integration tests with a very low integration-ness footprint (in terms of performance, test case combinatorics, test data setup and cleanup, etc.)

        However, in my experience, the domain object vs non-domain object ratio of any given system is usually 1-to-3 or less. This leaves a lot to be tested besides the domain model. The question is, how to test it?

        In my view, non-domain objects tend to communicate a lot with the outermost layers of a system (adapters in Hexagonal parlance). The real thing behind the adapter might be an external system, but it could also be a database within the same Bounded Context, or a message bus. It is often slow, circumstance-dependent, unstable and complex to set up. That alone, in my opinion, justifies turning as many of these tests as possible into isolated unit tests – with test doubles. We also want to test failure cases, which are more frequent than when working with domain objects in their in-memory bubble. Concretely, it could be a network interruption, a faulty filesystem, a database deadlock exception – to some extent, we don’t mind the details. But we don’t want failures to happen at random when the tests are running; we want to control them. Fakes allow us to do that.

        Maybe you have some code base where you employ mocks and which you are willing to share with me?

        I’m afraid I don’t have such a code base available publicly or in English. I was thinking about gathering some thoughts on mocks in a couple of blog posts, but I’m not really a blog person and this might take time 😉

        I tend to create 1 integration test for a single business case. In this case, two test doubles (spies) are involved for the 2 external services: a message bus and an email service. And the database is exercised directly.

        Fair enough. I still have a number of problems with this approach. First of all, it’s probably slower than the equivalent 3 isolated tests. Then, the test name doesn’t really reflect what is verified. A programmer confronted with a red test will spend more time figuring out which of the 3 tested actions really caused the failure. It often happens that you have a series of failing tests (after a refactoring, for instance), and being able to just glance at their names to see whether the effects are what you expected is a precious thing to have.
        The test suite also becomes poorer as documentation, IMO, because you have to dive deep into the guts of the test to really see what’s at stake.

        Mainly by making sure there are not too many of them. I tend to cover only the most important use cases with them, all edge cases are addressed by unit tests.

        Do you mean, unit tests with mocks/test doubles? What is your ratio of edge cases vs normal cases?

        • http://enterprisecraftsmanship.com/ Vladimir Khorikov

          Okay, I think I see now where the disagreement lies; thanks for the detailed elaboration. In terms of performance, such tests are definitely slower than those that use mocks. They are also less granular, I agree with that too. But, as with almost anything in programming, it’s a trade-off. Here’s a picture I like to refer to in that regard (you can pick any 2):

          http://i.imgur.com/WCA1VUP.png

          Your approach tends to favor a low chance of getting a false positive plus fast feedback; mine favors a low chance of getting a false positive plus a high chance of catching a regression error.

          Do you mean, unit tests with mocks/test doubles? What is your ratio of edge cases vs normal cases?

          I mean, I try to test the domain model (which is isolated from the outside world) as thoroughly as possible and test its integration with other sub-systems less thoroughly, using integration tests like the one I showed above. In terms of the ratio, it depends on how complex the domain model is. In complex domain models, the ratio can be 3:1, 5:1 or even more in favor of unit tests that cover the domain model (and don’t use test doubles). In simpler applications, it might be 1:1. In basic CRUD apps where there’s almost no domain logic, the test suite can consist almost solely of integration tests.

          I have a question for you. How do you test that your app works with the database correctly? For example, you might introduce a subtle issue into your database’s structure that manifests itself only in complex edge cases. Or, even worse, in one of the major use cases. Tests that don’t exercise the database directly won’t be able to reveal it. Do you rely on QA folks to do this kind of work?

          • Guillaume L

            I do have integration or end-to-end tests that I move to a separate test suite as soon as they become too slow for the main test suite. I just have fewer of them than unit tests and I’m less concerned when they fail because they are often unduly fragile — much more so than unit tests with mocks, if you ask me.

            Talking about fragility, there’s something I’d really like to say about low chances of getting a false positive. Everybody’s acting like mocks are the main cause of false positives (or negatives) in tests. I don’t think they are. A number of things can cause false alerts, starting with configuration mistakes, data incoherences and i/o failures, all of which are, precisely, found in integration tests and not unit tests…

            Besides, the definition of a mock-induced false positive (or negative) is a situation where a mock-based test is red, while the production application continues to work perfectly and another type of test would have stayed green.

            I don’t see a lot of cases when this could happen, and as a matter of fact have not experienced it often. When could it actually occur?

            We can set functional changes aside straight away, because they cause non-mock tests to fail just as much as mock-based ones. That leaves refactorings.

            – Rename refactorings I always do with the help of an automated tool that makes the right changes everywhere, including in mock setups. No red tests here.
            – Refactorings that reorganize how a class works internally won’t affect unit tests with mocks, because they only deal with external collaborators. That’s the power of encapsulation.
            – Refactorings that reorganize the structure of the code into more (or fewer) moving parts will usually result in temporary compiler errors, because we can’t be in two places at the same time changing both the code and the code that calls it. Those errors I fix one by one everywhere in the code, including in tests. If X starts calling Z instead of Y, I modify collaboration tests to take that new relationship into account, if necessary. By the time everything is corrected, no failing tests whatsoever have shown up, let alone treacherous green tests that should have been red.

            Refactorings that change the order in which two dependencies are called, the number of times a dependency is called or that add an unexpected call can result in false alerts, I admit that. But I don’t think these are things you should verify in mock-based tests in the first place. My collaboration tests are here to help me in an outside-in exploration of the system I’m building, outlining interfaces by putting mock scaffolds around them as I go. I have absolutely no reason to check the number of times an object talks to another, the order in which exchanges are done, or to verify that they don’t speak together at all.

          • http://enterprisecraftsmanship.com/ Vladimir Khorikov

            I do have integration or end-to-end tests that I move to a separate test suite as soon as they become too slow for the main test suite. I just have fewer of them than unit tests and I’m less concerned when they fail because they are often unduly fragile — much more so than unit tests with mocks, if you ask me.

            In my experience, if you set up a proper DB versioning mechanism and restrict the number of external dependencies that are exercised directly to only those that belong to the same microservice/bounded context, that should be enough to eradicate most of the failures (probably all of them) that are not related to bugs.
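
            As one concrete example of such a DB versioning mechanism (not something discussed in this thread, just an illustration), a migration runner like DbUp can bring the test database to the exact schema version used in production before the suite runs; the connection string source and script location here are assumptions:

            using System;
            using System.Reflection;
            using DbUp;

            // Assumed test configuration holding the connection string of the
            // dedicated test database.
            var connectionString = TestConfig.DatabaseConnectionString;

            // Apply all pending migration scripts embedded in the assembly, so that
            // tests always run against the same schema version as production.
            var upgrader = DeployChanges.To
                .SqlDatabase(connectionString)
                .WithScriptsEmbeddedInAssembly(Assembly.GetExecutingAssembly())
                .LogToConsole()
                .Build();

            var result = upgrader.PerformUpgrade();
            if (!result.Successful)
                throw new InvalidOperationException("Database migration failed", result.Error);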

            In terms of false positives/false alarms. First of all, I want to point out that these issues you describe here:

            data incoherences and i/o failures, all of which are, precisely, found in integration tests and not unit tests

            are similar to false positives but not exactly the same thing. Those issues fall into the “non-determinism in tests” category where the same test can pass or fail depending on some external factors. This is an important issue too, but is not related to false positives which occur only when you actually refactor the SUT.

            In terms of mock-induced false positives: the API refactorings you described (rename, re-order, etc.) usually result in compilation errors and are pretty simple to fix, so I personally wouldn’t consider them false positives at all. True false positives are those that pretend there’s a bug in the behavior itself; they don’t manifest as compilation errors and go straight to turning tests red. They usually occur when you change the implementation of the SUT, not its interface. For example, when you decide that calling some dependency is no longer needed to achieve the task at hand and that the SUT can do the same work by invoking some other dependency. These changes will result in false positives when you use mocks and when you verify which dependencies the SUT should collaborate with.
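
            A sketch of the kind of refactoring-induced false positive described here (names are hypothetical):

            [Fact]
            public void Report_is_saved()
            {
                var fileStorageMock = new Mock<IFileStorage>();
                var sut = new ReportGenerator(fileStorageMock.Object);

                sut.SaveReport("2016 totals");

                // Couples the test to an implementation detail: *which*
                // dependency the SUT collaborates with to get the job done.
                fileStorageMock.Verify(x => x.Write(It.IsAny<string>()), Times.Once());
            }

            // If ReportGenerator is later refactored to persist reports through, say,
            // IBlobStorage instead of IFileStorage (same observable behavior), this
            // test turns red even though nothing is broken: a false positive.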

            In my opinion, the only place where it’s fine to insist on a particular collaboration pattern between objects is inter-system collaboration: when your system as a whole talks to some other system (and only when it invokes some commands on that system). This kind of collaboration should stay in place at all times for backward compatibility reasons and that’s why it’s fine to couple tests to them. This is essentially what I tried to describe here http://enterprisecraftsmanship.com/2016/10/19/when-to-use-mocks/ .

            Refactorings that change the order in which two dependencies are called, the number of times a dependency is called or that add an unexpected call can result in false alerts, I admit that.
            But I don’t think these are things you should verify in mock-based tests in the first place. My collaboration tests are here to help me in an outside-in exploration of the system I’m building, drawing the contours of interfaces by putting mock scaffolds around them as I go. I have absolutely no reason to check the number of times an object talks to another, the order in which exchanges are done, or to verify that they don’t speak together at all.

            Okay, now I’m confused. AFAIK, outside-in TDD expects you to do all those things you are saying you are not doing (check the number of times an object talks to another, the order in which exchanges are done, etc). How do you use mocks then?

          • Guillaume L

            are similar to false positives but not exactly the same thing. Those issues fall into the “non-determinism in tests” category where the same test can pass or fail depending on some external factors. This is an important issue too, but is not related to false positives which occur only when you actually refactor the SUT.

            Sorry but that’s not my definition of a false positive (or negative). To me it’s a test failure that occurs for other reasons than SUT incorrectness. Any other reasons.

            They usually occur when you change the implementation of the SUT, not its interface. For example, when you decide that calling some dependency is no longer needed to achieve the task at hand and that the SUT can do the same work by invoking some other dependency. These changes will result in false positives when you use mocks and when you verify which dependencies the SUT should collaborate with.

            In my experience, this never happens. Why? Because a compiler error in the test always happens in the meantime. It happens because 99% of the time, when I decide not to call a dependency’s method any more, I delete it from the dependency. Why do I delete it? Because this is all part of a refactoring, so if you don’t call A any more, you must call something else instead to remain isofunctional. If the feature that used to be in A is now somewhere else, you’d better delete it from A.

            Okay, now I’m confused. AFAIK, outside-in TDD expects you to do all those things you are saying you are not doing (check the number of times an object talks to another, the order in which exchanges are done, etc).

            I’m pretty positive that you can do outside-in TDD without doing all these things. Pryce & Freeman may do some of them here and there in GOOS (not sure) under their “object communication protocol” concept. But object protocol testing is not what outside-in TDD is about, let alone protocol testing at such a meticulous level.

            How do you use mocks then?

            I just don’t specify how many times I expect something to be called or in which order. AFAIK, this is the default mode of mocking framework assertion methods, by the way.
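
            For instance, with Moq (picking one framework just for the sake of the example; the mock and message types are hypothetical), the plain Verify overload leaves both the call count and the ordering unspecified:

            // Asserts only that the call happened at least once; nothing is said
            // about how many times or in what order (AtLeastOnce is Moq's default).
            messageBusMock.Verify(x => x.Send(It.IsAny<Message>()));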

          • Guillaume L

            Anyway, even if mock-based tests were bad in these instances (which I don’t believe), we’re talking about a micro-subset of all possible uses of mocks and a micro-subset of refactoring situations. I’m not sure it allows you to be as affirmative as saying that all mocks “have the worst value proposition”, that no mocks are suitable to replace dependencies that you have control over, or that all mocks result in fragile tests.

            What’s more, it is well known that a lot of programmers will take such assertions as gospel truth that mocks are just bad, without trying to ponder what the author says or to see the small nuances. I see that reverse cargo cult over and over, in job interviews, in tweets, on Stack Overflow, etc.

            By paying more attention to people who are distorting and misusing mocks than sticking to what we know they were originally intended for and do well, we’re creating a false sense that there’s more evil than good in mocks, and depriving ourselves of useful tools unnecessarily.

          • http://enterprisecraftsmanship.com/ Vladimir Khorikov

            Sorry but that’s not my definition of a false positive (or negative). To me it’s a test failure that occurs for other reasons than SUT incorrectness. Any other reasons.

            I just brought up this distinction to point out that those issues should be treated differently: non-determinism is better cured by stabilizing the test environment; other false positives, by changing the approach to testing.

            In my experience, this never happens. Why? Because a compiler error in the test always happens in the meantime. It happens because 99% of the time, when I decide not to call a dependency’s method any more, I delete it from the dependency. Why do I delete it? Because this is all part of a refactoring, so if you don’t call A any more, you must call something else instead to remain isofunctional. If the feature that used to be in A is now somewhere else, you’d better delete it from A.

            When you delete a dependency, you’re not just deleting it; you also need to re-arrange the assertion part of the tests that depend on it (change the mock expectations), and that means you need to go through the full red-green process again, as you always need to see your tests failing. Also, it’s not always the case that you need to delete a dependency. For example, if you use constructor injection, the dependency can still be used by another of the SUT’s methods, which is tested elsewhere. In this case, the test turns red without any compilation errors. This is the essence of most claims against mocks, in my experience.

            BTW, some outside-in TDD advocates prefer to just drop the tests covering a piece of functionality they are refactoring, in case the refactoring is more or less substantial (not just renaming, but re-arranging dependencies). Sometimes the burden of failing tests is so big that it’s easier to just re-write them from scratch.

            I really think it’d be nice to go through some code sample and compare our approaches in practice.

            I’m not sure it allows you to be as affirmative as saying that all mocks “have the worst value proposition”, that no mocks are suitable to replace dependencies that you have control over.

            Well, it might or it might not; no one knows for sure. I’m trying to generalize my experience into some applicable guidelines, thereby making it useful for others. That’s basically what everyone does; mathematically accurate proofs are impossible in such areas, unfortunately.

            that all mocks result in fragile tests

            Let’s not exaggerate, that’s not what I said.

            What’s more, it is well known that a lot of programmers will take such assertions as gospel truth that mocks are just bad, without trying to ponder what the author says or to see the small nuances. I see that reverse cargo cult over and over, in job interviews, in tweets, on Stack Overflow, etc.

            Some will, some will not. It’s impossible to prohibit people from drawing conclusions authors don’t make, and I’m not sure the authors can do anything about it.

            By paying more attention to people who are distorting and misusing mocks than sticking to what we know they were originally intended for and do well, we’re creating a false sense that there’s more evil than good in mocks, and depriving ourselves of useful tools unnecessarily.

            I’d like to disagree here. If by “originally” you mean the GOOS book, I already showed how the sample project from the book can be implemented with much less code, more robust tests, and without cutting any functionality. I do think there are useful areas of application for mocks, and I tried to depict them in my recent posts. But I also think the original intent the GOOS book put into their use is not balanced.

          • Guillaume L

            It’s impossible to prohibit people from drawing conclusions authors don’t make, and I’m not sure the authors can do anything about it.

            I do have a few ideas (not targeting you specifically, just a general answer):

            – Don’t write provocative blog post titles unless you’re able to prove every bit of them in the post — which would probably disqualify a good half of “X considered harmful” articles.

            – Write out your definitions clearly, or at least point to an earlier definition by someone else. Don’t think that there’s a universal shared understanding of a term in every developer’s mind across the globe that’s the same as yours.

            – Contextualize and relativize. If you’re a tech lead in a major web-based multinational, your advice may not be as well suited for a desktop application programmer in a team of 3.

            – Don’t just tweet mysterious technology-bashing blanket statements with a knowing air. Try to explain them extensively – then maybe you’ll see things are not so binary.

            – Don’t throw the baby out with the bathwater.

            – etc.