It is said that the biggest and toughest problems can be solved by breaking them down into smaller problems and solving those. Recursion is a technique built on this idea. In simple, theoretical terms, recursion means solving a problem by solving smaller problems of the *same* form and using their results to compute the final result. We will delve into the mathematical and programmatic details, but before that, let's understand the abstract concept of recursion through an example.

Suppose there is a volcano in country A which has been observed to erupt in a fixed pattern: each eruption comes after twice as many years as the previous one. For example, if the last eruption came after 1024 years, the next will come after 2 * 1024 = 2048 years. Now, we want to compute how many years the volcano has been active.

This might seem like a big mathematical problem with many possible approaches, but we shall solve it using recursion, as that is what we are here to learn. We compute the number of years in a span using the number of years in the previous span, and keep adding until the span is down to 1 year. In simple words: at any eruption, if we know the number of years up to the previous eruption, we can use it to compute the number of years up to the current one. This is, as we said, solving the problem by solving a smaller problem of the same form, and hence recursion.

### Mathematical Representation

Mathematically, a recursion or a recursive method can be represented as

```
F(n) = 1,                if n = 1
F(n) = C * F(n-1) + T,   for all n > 1
```

where C, T are constants.

As we can see, the definition has more than one part: one part is a function of the same form, which is the sub-problem, and the other part handles the smallest possible value of 'n'.
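This recurrence translates directly into C. The constants here are chosen arbitrarily for illustration (C = 2, T = 3); they are not from any particular problem:

```c
/* Direct translation of the recurrence F(n) = C * F(n-1) + T,
 * with illustrative constants C = 2 and T = 3. */
int F(int n)
{
    if (n == 1)                /* smallest possible value of n */
        return 1;
    return 2 * F(n - 1) + 3;   /* C * F(n-1) + T */
}
```

For example, F(2) = 2 * 1 + 3 = 5, and F(3) = 2 * 5 + 3 = 13.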

### Programmatic Representation

Recursion is a widely used and well-accepted concept in algorithms and programs. In algorithms, recursive methods are described as methods which call themselves. The definition may sound absurd, since a method calling itself looks like an infinite loop. Therefore, the most important part of a recursive method is the terminating condition, at which the recursive calls stop. The terminating condition is the point at which the sub-problem has become as small as possible and hence needs to be handled differently.

Let's take a simple problem and understand its recursive algorithm: computing the factorial of a number. Let's first write down how we calculate a factorial.

```
6! = 6 * 5 * 4 * 3 * 2 * 1
5! = 5 * 4 * 3 * 2 * 1
4! = 4 * 3 * 2 * 1
3! = 3 * 2 * 1
2! = 2 * 1
1! = 1
0! = 1
```

From the above computations, observe the general nature of the factorial: it multiplies the number with the factorial of the number one less. That is,

```
(ignoring 0!)
1! = 1 * 0!
2! = 2 * 1!
3! = 3 * 2!
4! = 4 * 3!
5! = 5 * 4!
6! = 6 * 5!
```

So, we have our recursion, which mathematically looks like

```
n! = 1,            if n = 0
n! = n * (n-1)!,   if n > 0
```

Here, n = 0 is the smallest possible sub-problem and hence is given a constant definition. For all other values of 'n', the problem divides itself into smaller problems of the same form.

Let's write a C implementation with a method calling itself, with, of course, a terminating condition. Do run through the code mentally for small values of n to understand how the recursion works.

```c
int factorial(int n)
{
    if (n < 0)
        return -1;  /* error */
    if (n == 0)
        return 1;   /* terminating condition */
    return n * factorial(n - 1);
}
```

### Why recursion?

As we have seen, although recursion makes the representation and the code simpler, it can make the program harder to understand. Moreover, it is not difficult to confirm that whatever can be done through recursion can also be achieved through iteration (loops). Then why do we need recursion?

Well, there are certain data structures for which programming becomes much cleaner with recursion. These are generally self-referential data structures, i.e. data structures which contain pointers to their own type. Such data structures can be traversed easily using recursion. An example is a tree.
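As a sketch of this, here is a binary-tree node (a structure containing pointers to its own type) together with a recursive function that counts its nodes. The names are mine, chosen just for illustration:

```c
#include <stddef.h>

/* A structure containing pointers to its own type -- the kind of
 * self-referential data structure that recursion traverses naturally. */
struct node {
    int value;
    struct node *left;
    struct node *right;
};

/* The size of a tree is one (for the root) plus the sizes of its two
 * sub-trees: a smaller problem of the same form. The empty tree is the
 * terminating condition. */
int count_nodes(struct node *root)
{
    if (root == NULL)
        return 0;   /* terminating condition: empty sub-tree */
    return 1 + count_nodes(root->left) + count_nodes(root->right);
}
```

Writing the same traversal with a loop would require manually maintaining a stack of pending sub-trees; the recursion gets that stack for free.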

### Recursive Methods in C

Even though we can learn to implement recursion in C, it will remain hard and perplexing until one understands what goes on underneath. Understanding how C handles recursion helps us picture each call and trace how the algorithm moves forward.

Like any other C function call, every recursive call stores a snapshot on the stack. This snapshot includes the local variables along with the status registers and the return address of the caller. These snapshots are called activation records, or sometimes stack frames; one is pushed on each function call and popped on each function return.

Note that for every activation record on the stack, the local variables are separate and independent. In a recursion, the recursive calls keep calling the method until the terminating condition is reached, after which the calls keep returning and unwinding. Therefore, activation records keep getting pushed onto the stack until the terminating condition; after that point is reached, they are popped one by one.
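A small sketch (the function name and values are mine) makes this independence visible. Each call records its own local variable while the stack unwinds, showing that every frame kept its own, untouched copy:

```c
/* Each call gets its own copies of 'n' and 'local' in its activation
 * record. Values are recorded into 'out' while the stack unwinds. */
void frames(int n, int out[], int *pos)
{
    int local = n * 10;            /* this frame's own local variable */
    if (n > 0)
        frames(n - 1, out, pos);   /* pushes a fresh activation record */
    out[(*pos)++] = local;         /* recorded while unwinding */
}
```

Calling `frames(2, out, &pos)` fills `out` with 0, 10, 20: the deepest frame's value first, then each outer frame's own value as its record is popped.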

Considering the same factorial example explained above, let's take the value of 'n' as 3. The function is called four times in total before the terminating condition is met, so there are four activation records, one for each call. This is illustrated diagrammatically with respect to 'n' and the return values, as follows. To begin with, a call is made to

factorial (3);

This creates the first activation record, with the value of 'n' equal to 3. The stack looks like Figure 1.

Figure 1

It checks the value of n, which is positive and not equal to zero, and hence calls itself with n = 2. That is, the following call is made:

factorial (2);

This creates another activation record which is pushed to the stack. See figure 2.

Figure 2

Now, after going through the checks, it again calls itself with n = 1. Within the third call to factorial(), the stack looks like Figure 3.

Figure 3

From here, the next call is

factorial (0)

which pushes another activation record onto the stack, as shown in Figure 4.

Figure 4

Here, since the value of 'n' equals zero, the function no longer calls itself and returns the constant 1. On its return, activation record 4 is popped and wiped out. Its return value is used by activation record 3 to compute its own return value, and so on, until the first activation record returns the final answer.
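The push/pop sequence can also be observed directly by instrumenting the factorial with a print at the start of each call and another just before each return (the `_traced` name is mine):

```c
#include <stdio.h>

/* The same factorial, instrumented to show activation records being
 * pushed on each call and popped on each return. */
int factorial_traced(int n)
{
    printf("push: factorial(%d)\n", n);
    int result = (n == 0) ? 1 : n * factorial_traced(n - 1);
    printf("pop : factorial(%d) returns %d\n", n, result);
    return result;
}
```

For `factorial_traced(3)`, the four "push" lines appear first (for n = 3, 2, 1, 0), and only then the four "pop" lines in reverse order (returning 1, 1, 2, 6), mirroring Figures 1 to 4 exactly.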

### Tail Recursion

The biggest disadvantage of recursive methods is the maintenance of activation records on the stack until the terminating condition, which is an overhead. Moreover, the records on the stack raise the overall space complexity of the algorithm. However, there is an interesting concept called tail recursion which avoids this overhead.

A tail recursion is a recursion which does nothing after the call to itself; whatever the recursive call returns is the return value. With this, there is no reason to keep the caller's activation record on the stack: it just needs to pass on the return value, and nothing else.

Let's take the same factorial example. Rewriting the code here:

```c
int factorial(int n)
{
    if (n < 0)
        return -1;  /* error */
    if (n == 0)
        return 1;   /* terminating condition */
    return n * factorial(n - 1);
}
```

Note that in this C implementation, when factorial(n-1) returns, the result still has to be multiplied by 'n'. Hence, there is something (a multiplication) to do after the recursive call returns, which requires the activation record to store at least the value of 'n'. Suppose we avoid this multiplication after the return, as follows:

```c
int factorial(int n, int result)
{
    if (n < 0)
        return -1;      /* error */
    if (n == 0)
        return result;  /* terminating condition */
    return factorial(n - 1, n * result);
}
```

Now the return value of the final recursive call is the final result. Hence, there is no reason to maintain the activation records on the stack, and in such cases the compiler can optimize the calls to return the final result straight away, avoiding the overhead of the activation records.
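Since the tail-recursive version changes the function's signature, a common pattern is a one-argument wrapper that supplies the initial accumulator; the names below are mine. Also note that GCC and Clang typically perform this tail-call optimization only with optimization enabled (e.g. `-O2`, or GCC's `-foptimize-sibling-calls`):

```c
/* The two-argument tail-recursive factorial, renamed here to keep it
 * distinct from the public one-argument interface. */
static int factorial_tail(int n, int result)
{
    if (n < 0)
        return -1;                             /* error */
    if (n == 0)
        return result;                         /* terminating condition */
    return factorial_tail(n - 1, n * result);  /* nothing left to do after this */
}

/* Wrapper keeping the familiar interface: the accumulator starts at 1,
 * the identity for multiplication. */
int factorial(int n)
{
    return factorial_tail(n, 1);
}
```

Callers keep writing `factorial(5)` and never see the accumulator at all.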

### Recursion in Linux

There are many Linux commands which need to work recursively, depending on the task the command performs. One of the most common use cases is listing the files in the current directory, including the files in its sub-directories. As one can observe from the nature of the operation, it is recursive: we can perceive the problem as a tree-traversal problem. An example scenario is depicted in the figure below.

Hence, it is a recursive operation. The Linux command to list files in a directory is

```
$ ls
file1  Dir1  Dir2
```

However, to see all the files under a directory, including the ones in its sub-directories, we use

```
$ ls -R
.:
Dir1  Dir2  file1

./Dir1:
file2  file3

./Dir2:
Dir3

./Dir2/Dir3:
file4  file5
```

There are many Linux commands which provide a recursive mode through the '-R' (or '-r') option; cp, chmod, chown, and grep are a few of them.

Note that internally these Linux commands may be implemented using a loop; the objective here is just to understand recursive operations in Linux.
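To see how such a directory walk maps onto recursion, here is a minimal `ls -R`-style sketch using the POSIX `opendir`/`readdir`/`stat` calls. The function name and the idea of returning a count are mine, added so the behavior is easy to check:

```c
#include <stdio.h>
#include <string.h>
#include <dirent.h>
#include <sys/stat.h>

/* List every entry under 'path', recursing into sub-directories.
 * Returns the number of entries seen, or -1 if 'path' cannot be opened. */
int list_recursive(const char *path)
{
    DIR *dir = opendir(path);
    if (dir == NULL)
        return -1;

    int count = 0;
    struct dirent *entry;
    while ((entry = readdir(dir)) != NULL) {
        /* skip "." and ".." or we would recurse forever */
        if (strcmp(entry->d_name, ".") == 0 || strcmp(entry->d_name, "..") == 0)
            continue;

        char full[4096];
        snprintf(full, sizeof(full), "%s/%s", path, entry->d_name);
        printf("%s\n", full);
        count++;

        struct stat st;
        if (stat(full, &st) == 0 && S_ISDIR(st.st_mode)) {
            int sub = list_recursive(full);  /* smaller problem: one sub-directory */
            if (sub > 0)
                count += sub;
        }
    }
    closedir(dir);
    return count;
}
```

Each sub-directory is a smaller tree of the same form, so the function simply calls itself on it, exactly like the tree traversal described above.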

### Caveat and Conclusion

The most important caveat is to take care of the terminating condition first. Missing the terminating condition leads to a method calling itself infinitely, which keeps pushing activation records onto the stack until it overflows, leading to a crash.

To show what that looks like in practice, here is an example program with an infinite recursive function.

```c
#include <stdio.h>

void recursive(int i)
{
    printf("%d\n", i);
    recursive(i - 1);
}

int main()
{
    int ind = 10;
    recursive(ind);
    return 0;
}
```

Going through the above source, the method recursive() does not have any terminating condition, so it will go on calling itself indefinitely. I reiterate: this is NOT recommended; the purpose of this example is just to illustrate what it leads to.

Compiling and running:

```
$ gcc infi.c -Wall -o infi
$ ./infi
```

The executable runs for some time, with activation records being created and the stack growing until it overflows. The last few lines of output look like

```
-261939
-261940
-261941
-261942
-261943
-261944
-261945
-261946
-261947
-261948
-261949
-261950
-261951
-261952
-261953
-261954
-261955
-261956
-261957
Segmentation fault (core dumped)
```

In the end, I would say that recursion is a really useful technique for data structures like trees, and for sorting and searching. For example, the preorder, inorder and postorder traversals of a tree are best done through recursive algorithms.
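As one last sketch, here is the inorder traversal mentioned above; the structure and function names are mine. Preorder and postorder differ only in where the root is visited relative to the two recursive calls:

```c
#include <stddef.h>

struct tnode {
    int value;
    struct tnode *left;
    struct tnode *right;
};

/* Inorder traversal: recursively visit the left sub-tree, then the
 * root, then the right sub-tree. Values are appended to 'out'. */
void inorder(struct tnode *root, int *out, int *idx)
{
    if (root == NULL)
        return;                       /* terminating condition */
    inorder(root->left, out, idx);    /* smaller problem: left sub-tree */
    out[(*idx)++] = root->value;      /* visit the root */
    inorder(root->right, out, idx);   /* smaller problem: right sub-tree */
}
```

For a tree with root 2, left child 1 and right child 3, the traversal emits 1, 2, 3: the sorted order, which is why inorder traversal of a binary search tree yields its elements in order.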
