# Early exit is a tail call optimization of procedural languages

A couple of days ago, a nice analogy came to mind which I thought would be an interesting one to share: early exit is essentially a tail call optimization of procedural languages. Let’s see how it is so.

## Tail call optimization

Before explaining the metaphor, let me give you a refresher on what a tail call and tail call optimization are.

A tail call is a call that is the last action performed in a method. Let’s look at a simple example:

```csharp
public int Run(int integer)
{
    Log(integer);
    return Calculate(integer); // Tail call
}

private int Calculate(int integer)
{
    return integer + 1;
}
```

The Calculate method invocation here is a tail call because it is executed at the end of the Run method. Now, compare it to the following code:

```csharp
public int Run(int integer)
{
    Log(integer);
    return 2 + Calculate(integer); // NOT a tail call
}

private int Calculate(int integer)
{
    return integer + 1;
}
```

In this example, the Calculate method invocation is not the last one: Run still needs to perform the “plus 2” operation after it gets the result from Calculate. Therefore, it is not a tail call.

The tail call optimization eliminates the need to add a new frame to the call stack when executing the tail call. For the first code sample, such an optimization would have the same effect as inlining the Calculate method (although the compiler doesn’t perform actual inlining; it gives the CLR a special instruction to perform the tail call optimization during JIT compilation):

```csharp
public int Run(int integer)
{
    Log(integer);
    return integer + 1; // Tail call optimization
}
```

Such an optimization is not guaranteed in C#, and that’s mostly fine: there are few cases where a typical C# application would benefit from it anyway.
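To make the risk concrete, here is a hedged sketch (the method names are made up for illustration): a tail-recursive C# method whose tail call the JIT may or may not optimize, next to the explicit loop you would write to stay safe:

```csharp
using System;

// A tail-recursive countdown. The JIT *may* turn the tail call into a jump,
// but C# gives no guarantee, so a large enough depth can overflow the stack.
int CountDown(int n)
{
    if (n <= 0)
        return 0;
    return CountDown(n - 1); // tail call, optimization not guaranteed
}

// The safe, idiomatic C# equivalent: an explicit loop.
int CountDownLoop(int n)
{
    while (n > 0)
        n--;
    return 0;
}

Console.WriteLine(CountDown(1_000));          // fine at modest depths
Console.WriteLine(CountDownLoop(10_000_000)); // fine at any depth
```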

The place where the tail call optimization is essential is functional languages, which highly rely on recursion. Without it, long-running recursive functions would exhaust the call stack and fail due to stack overflow exceptions.

A tail call in a recursive function has a special name: tail recursion. There’s also a special kind of optimization for it in functional languages: they guarantee that a properly tail-recursive implementation will be converted into a loop.

## Recursive call optimization

Let’s look at the following function:

```fsharp
// Version without proper tail recursion
let rec factorial n =
    match n with
    | 0 | 1 -> 1
    | _ -> n * factorial (n - 1)

factorial 3 // returns 6
```

It’s a recursive function that calculates the factorial of a number. Note that the compiler can’t apply the optimization here because this implementation is not properly tail-recursive.

When we try to execute it with 3 as a parameter, it goes like this:

Recursion without tail call optimization

With every recursive call, the program has to memorize its current location in order to eventually get back and multiply all the values with each other. Obviously, such an implementation is limited by the size of the stack, as we put additional pressure on it with every call.

To get the benefits of tail call optimization, we can refactor the function introducing an additional accumulator parameter:

```fsharp
// Version with proper tail recursion
let rec factorial acc n =
    match n with
    | 0 | 1 -> acc
    | _ -> factorial (acc * n) (n - 1)

factorial 1 3 // returns 6
```

This time, the compiler is able to avoid the use of additional stack space while calling the function and effectively transforms it into a loop:

Recursion with tail call optimization

As you can see, this implementation doesn’t use the stack to memorize the previous values, it just passes an intermediate result along with the next value of N.
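To see what that loop transformation amounts to, here is a sketch of the accumulator version rewritten as an explicit loop (rendered in C# for illustration):

```csharp
using System;

// The tail-recursive factorial after the compiler's transformation:
// the recursive call becomes a jump back to the top of the loop,
// with acc and n carried in mutable locals instead of stack frames.
int Factorial(int acc, int n)
{
    while (n > 1)
    {
        acc *= n; // same as the (acc * n) argument of the recursive call
        n -= 1;   // same as the (n - 1) argument
    }
    return acc;   // the | 0 | 1 -> acc base case
}

Console.WriteLine(Factorial(1, 3)); // 6
```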

## Early exit

So, how is this related to early exit? Early exit is a refactoring technique that helps reduce indentation and thus simplify the code. Let’s look at an example:

```csharp
public string Execute(int integer, string str, bool boolean)
{
    string result;

    if (integer > 42)
    {
        if (!string.IsNullOrWhiteSpace(str))
        {
            if (boolean)
            {
                result = "Success";
            }
            else
            {
                result = "Error: Incorrect boolean";
            }
        }
        else
        {
            result = "Error: String is null or empty";
        }
    }
    else
    {
        result = "Error: Integer is too small";
    }

    return result;
}
```

When you read such code, you need to go through its branches and make sure you don’t forget to examine all of them:

Code without Early Exit refactoring

Each time you go one “if” deeper, you have to memorize the “else” branch in order to explore it later. This code exhausts your mental stack. That’s why the early exit refactoring is so effective. It helps you “flatten” the reading pattern and alleviate the burden on your brain.

The code above can be transformed into the following one:

```csharp
public string Execute(int integer, string str, bool boolean)
{
    if (integer <= 42)
        return "Error: Integer is too small";

    if (string.IsNullOrWhiteSpace(str))
        return "Error: String is null or empty";

    if (!boolean)
        return "Error: Incorrect boolean";

    return "Success";
}
```

This version takes much less effort to understand:

Code with Early Exit refactoring

Just as the recursive call optimization removes the limit imposed by the call stack size, the early exit technique takes off the limitation imposed by our mental stack.

## Conclusions

No conclusions here, actually, just hope you found this analogy interesting. It followed me for several days and I felt like I had to post it here 🙂

## Comments

• jheriko

please don’t use unscoped ifs in examples. it’s bad practice – see the ‘goto fail’ incident – where we can blame copy-paste or goto, but another contributing factor is the lack of scope on if statements – which would have made most variations of that copy-paste mistake into a compile-time error instead of something that compiles but does the wrong thing.

this is however an interesting analogy, although i have met programmers who do not believe that early exit is as valuable as single point of return – although they are certainly a minority.

Could you elaborate on what you mean by an unscoped if?

• jheriko

unscoped:

```
if( foo )
    bar;
```

scoped:

```
if( foo )
{
    bar;
}
```

Ah, I see, thank you for the clarification.

Thank you!

• David Raab

Nice explanation of (tail-call) recursion and how to achieve it with an accumulator!

But for me there is too much of a difference to compare early return to tail call. The first difference is that an if/else is a branching structure. In your diagram you write “i > 42 ?” and then you add two branches with “check later if not”. But there is not really a “later”. With an if structure, it either goes deeper or goes directly into else.

A non tail-call function on the other hand puts something on the stack, does something different and later really needs to re-evaluate the thing that was put on the stack once more.

In that sense i feel early return is more comparable to pattern matching. For example your last Execute function could be written like this in F#

```fsharp
let execute int str bool =
    match int, str, bool with
    | x, _, _ when x <= 42 -> "Error: Integer is too small"
    | _, str, _ when String.IsNullOrWhiteSpace str -> "Error: String is null or empty"
    | _, _, false -> "Error: Incorrect boolean"
    | _ -> "Success"
```

But the pattern matching makes one problem more obvious. There are a lot of combinations of your arguments possible that you didn’t check at all.

For this special example of validation there are even more problems, and i wouldn’t do validation like this in F#. But i just take it as an example of your idea, not as an example of how we should do validation.

I’m not comparing the actual execution here; the two stacks are completely different ones: one of them is real (the thread call stack) and the other one is “imaginary” (the mental stack).

> In your diagram you write “i > 42 ?” and then you add two branches with “check later if not”. But there is not really a “later”. With an if structure it either goes deeper or it goes directly into else.

“Later” – in the sense that when you read such code, you usually do it top to bottom and left to right, so when you encounter an “if” statement, you must not forget to check what’s going on in the else branch.

• David Raab

Hmm, it is probably a question how people read code. At least when i see code i immediately look into both branches and try to understand when it goes into the “if” or “else” branch. Or to say, i don’t try to “remember” the case, and try to understand it later.

So the way i read code in that case is not strictly top/down, left/right, but more “jumping”. And i actually do that exactly because it is like you describe: it is hard to remember every case.

That’s also the reason why i don’t like that style and prefer an early return, because, as you show, the code becomes more readable.

• mr_mac28

I like the article. I remember reading it a few months ago and thinking it was a good point at the time. I’m beginning to disagree now, though. I’ve recently had to do a lot of systems work, and while early exit definitely makes the code easier to understand, I think it makes it harder to maintain in many cases. Logically, it’s a lot clearer to have one place in which we do initialization (e.g. getting locks) and one place in which we do clean up (i.e. at the end of the function) rather than having to do cleanup in several places. Also, I would think that code written in this pattern is much less buggy.

Likewise, if code is initially written with early exits and then later we want to add some initialization and cleanup that needs to be done before returning, we would have to add that code in all of the places we return or refactor it to be single point of return, when we could have just written it that way from the beginning.

-Mac

Hi Mac,

Sure, there are limits to where early exit is applicable, and sometimes it makes more sense to keep the indentation. BTW, it sometimes helps to separate the code that does initialization and cleanup from the code that performs the “if-ing”. That allows you to keep the early exit practice in the second method while keeping the first one short and simple.
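That separation could be sketched like this (a hypothetical example, not from the article): the outer function owns initialization and cleanup in exactly one place, while the inner one keeps the flat early-exit structure:

```csharp
using System;
using System.Threading;

var padlock = new object();

// Outer function: initialization and cleanup live in one place,
// so adding more setup/teardown later touches only this method.
string Execute(int integer, string str, bool boolean)
{
    Monitor.Enter(padlock);            // initialization (e.g. taking a lock)
    try
    {
        return ExecuteCore(integer, str, boolean);
    }
    finally
    {
        Monitor.Exit(padlock);         // cleanup runs exactly once, here
    }
}

// Inner function: free to use early exits without duplicating cleanup.
string ExecuteCore(int integer, string str, bool boolean)
{
    if (integer <= 42)
        return "Error: Integer is too small";

    if (string.IsNullOrWhiteSpace(str))
        return "Error: String is null or empty";

    if (!boolean)
        return "Error: Incorrect boolean";

    return "Success";
}

Console.WriteLine(Execute(43, "hello", true)); // Success
```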

• http://sidburn.github.io/ David Raab

Btw, just some interesting thoughts. In F# (or in functional languages in general), something like early exit doesn’t exist at all. You don’t have statements, only expressions, so you always have to write a function with a single point of return. While this sometimes feels like a burden, i think it is a good burden that leads to better solutions with better code.

Well, you can view a pattern matching statement as such. To me, it feels very similar to the early exit practice.

• http://sidburn.github.io/ David Raab

Well, i wrote about it a little bit, but technically it is not at all comparable. Also, pattern matching is just an expression that returns a value. You can just do

```fsharp
let func x =
    let result =
        match x with
        | Foo -> true
        | Bar -> false
    result
```

But because it is just an expression that returns a value, and the last value is automatically returned, you don’t need to bind the result of a pattern match to a value; you can return it immediately:

```fsharp
let func x =
    match x with
    | Foo -> true
    | Bar -> false
```

It feels somewhat the same, but it isn’t. It only feels the same because everything is an expression that returns a value, the last expression is automatically returned, and you also don’t have a “return” statement.

• http://sidburn.github.io/ David Raab

Not that i disagree with what you say. But in your special case of cleanup, it is usually better to implement the IDisposable interface and let the language handle the cleanup of resources.
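As a hedged sketch of the pattern David describes (StringWriter here is just a stand-in for a lock, file, or connection): wrap the resource in a type implementing IDisposable, and `using` guarantees Dispose runs on every exit path, early returns included, so early exit and single-place cleanup stop being at odds:

```csharp
using System;
using System.IO;

string Process(int n)
{
    // 'using' guarantees writer.Dispose() runs on every exit path
    // of this scope, early returns included.
    using var writer = new StringWriter();

    if (n <= 0)
        return "Error: too small"; // writer is disposed here too

    writer.Write("Success");
    return writer.ToString();      // ...and here
}

Console.WriteLine(Process(5));  // Success
Console.WriteLine(Process(-1)); // Error: too small
```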