IASRA is a stupid recursive acronym!

To recurse or not to recurse? This is of course a brilliant idea if you’re in search of a cool and geeky name for your new pet project. But is it also wise to code your-favorite-algorithm™ in a recursive manner?

Let’s start with the basics; Wikipedia has a good definition available:

Recursion, in mathematics and computer science, is a method of defining functions in which the function being defined is applied within its own definition.

Furthermore, in our first Computer Science semester we were taught the logical equivalence of coding an algorithm in an applicative (read: recursive) or imperative way. So what’s the matter then? As usual, problems arise when reality hits (Computer Science) theory hard. Generally speaking, those two methods of encoding an algorithm differ physically. OK, what’s meant by that? Well, depending on the language used (more on that soon), it is wise to pick either the former or the latter way.

Let’s start out with good old procedural languages like C, C++, Java and all the others that actually matter. Procedural languages use functions to encapsulate sub-problems. For example, when a C++ program is transformed into machine-specific assembly, the compiler enforces certain conventions that ensure these functions can be called in a uniform way. Executing a function usually means jumping to a different portion of code, doing something there and jumping back to where you came from to use the computed result. So we have to store that return address. The function concept becomes more powerful if you can pass parameters, so we have to store those too. Functions tend to use local variables for saving intermediate results, and every function call needs a local copy of those variables, so these have to be stored somewhere as well. The convention on how to store all this is usually referred to as a stack frame. As the name implies, each frame is stored on the stack, a memory area that your kind operating system set up for your proggy’s process in advance. Let’s shamelessly steal a picture from Wikipedia to visualize this concept:

[Image: example of a call stack with stack frames, from Wikipedia]
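To make the picture concrete, here is a minimal sketch of my own (not taken from the picture): a trivial C++ function annotated with comments describing what conceptually ends up on the stack at each call. The exact details vary with compiler, calling convention and optimisation level, so treat the comments as a rough mental model rather than gospel.

int square_plus(int x, int y)  // parameters: passed in registers or pushed
                               // onto the stack by the caller
{
    // The call instruction already pushed the return address; a typical
    // prologue additionally saves the caller's frame pointer.
    int tmp = x * x;           // local variable: lives inside this frame
    return tmp + y;            // epilogue tears the frame down, ret jumps back
}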

Note that the actual frame layout differs with the language you use; Pascal-derived languages, for example, use a different layout than the C world. It is easy to see that this unavoidable overhead grows (relatively, not absolutely) the shorter your functions are. Recursive algorithms generally tend to use short functions, as this C++ implementation of the Fibonacci numbers illustrates:


int fib(unsigned int n)
{
    if (n == 0) {
        return 0;
    } else if (n == 1) {
        return 1;
    } else {
        return fib(n - 1) + fib(n - 2);
    }
}
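To get a feeling for how quickly those frames pile up, here is a small instrumented variant (the counter and the function name are mine, purely for illustration) that counts how often the function is entered, i.e. how many stack frames get created in total:

#include <cstdio>

// Hypothetical instrumentation: count every invocation, since each
// invocation sets up one stack frame.
static unsigned long calls = 0;

unsigned long fib_counted(unsigned int n)
{
    ++calls;  // one stack frame per call
    if (n < 2) {
        return n;
    }
    return fib_counted(n - 1) + fib_counted(n - 2);
}

int main()
{
    unsigned long result = fib_counted(30);
    std::printf("fib(30) = %lu after %lu calls\n", result, calls);
    return 0;
}

This prints fib(30) = 832040 after 2692537 calls (the call count follows directly from the recurrence): roughly 2.7 million stack frames get set up and torn down just to add a bunch of numbers.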

This means a big relative overhead. Let’s look at the imperative (and optimised) version of the same algorithm:


#include <vector>

int fib(unsigned int n)
{
    // Memoised results survive across calls thanks to static storage.
    static unsigned int tmp[] = {0, 1};
    static std::vector<unsigned int> fibs(tmp, tmp + sizeof(tmp) / sizeof(unsigned int));

    if (n >= fibs.size()) {
        for (unsigned int i = fibs.size() - 1; i < n; i++) {
            fibs.push_back(fibs[i] + fibs[i - 1]);
        }
    }
    return fibs[n];
}

One function call means one stack frame, easy peasy. The overall runtime and memory cost of those stack frames grows the more excessively you call short functions from within functions (hey, that’s recursion!). The imperative version only ever uses one stack frame, which saves us countless bytes on the stack (read: main memory) and reduces the risk of stack overflows. To be fair, most of the speedup in this particular example comes from memoisation, since the naive recursive version recomputes the same values over and over again, but the frame-overhead argument holds for recursion in general. Staying in one frame furthermore saves the CPU cycles for all those push/pop and store/fetch instructions you don’t have to execute (the inline keyword in C/C++ comes to mind here, see the sketch below).
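Since inline came up, a quick hedged sketch of what the keyword is about (the function names are made up for the example): it invites the compiler to paste the function body straight into the caller, so there is no call, no jump and no stack frame at all. Modern compilers happily do this on their own at -O2, keyword or not.

inline int twice(int x) { return x + x; }  // hint: substitute the body at the call site

int caller(int n)
{
    // After inlining there is no call and no new stack frame left;
    // this compiles down to roughly: return n + n + 1;
    return twice(n) + 1;
}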

All of this leads to:

NEVER EVER USE RECURSION IN A PROCEDURAL LANGUAGE!

However, the situation looks slightly different if you use a functional language (or even handcraft the Fibonacci algorithm in assembly, but that’s another story). Functional languages often use the tail recursion idiom to allow (or force, depending on personal opinion) the user to encode algorithms recursively. What happens when that gets translated into assembly? The idiom is simply mapped back to plain loops, for the exact same reason (remember, the two forms are logically equivalent). This post has already become longer than I expected, so let’s keep it short: functional languages offer convenient recursion idioms that get translated back to imperative idioms so they perform well on real hardware. Maybe someone more experienced in Haskell can provide more detail on how exactly this is done (assembly listings appreciated).
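To stay in C++ rather than Haskell, here is a minimal sketch of the tail-recursive accumulator idiom the functional world relies on (the function name and the accumulator parameters are my own, made up for this example). Because the recursive call is the very last thing the function does, an optimising compiler can reuse the current stack frame instead of pushing a new one, effectively turning the recursion into a loop. GCC and Clang typically perform this tail-call elimination at -O2, although the C++ standard does not guarantee it.

// Tail-recursive Fibonacci: a and b accumulate fib(k) and fib(k + 1).
unsigned long fib_tail(unsigned int n, unsigned long a = 0, unsigned long b = 1)
{
    if (n == 0) {
        return a;
    }
    // Tail position: nothing is left to do after the call, so the
    // current stack frame can be reused instead of pushing a new one.
    return fib_tail(n - 1, b, a + b);
}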

If you want to confirm the above, take the attached little test programs, run them through your favorite profiler and compare jump/call counts and stack changes. You might also want to check how long one step of the algorithm takes (read: one loop iteration versus one recursive function call). That’s it!
