
Discussion on: Recursion Algorithm Practice: Fibonacci Sequence Generator

edh_developer

They want to see that you understand memoization, and that you have a general idea of how big its impact can be for a problem like computing Fibonacci numbers. Some example code:

import time

known = {1: 1, 2: 1}  # cache of already-computed Fibonacci numbers

def fibonacci_memo(n):
    global total_calls
    total_calls += 1
    if n in known:
        result = known[n]
    else:
        # compute once, then cache so repeated subproblems cost nothing
        result = fibonacci_memo(n - 1) + fibonacci_memo(n - 2)
        known[n] = result
    return result

def fibonacci(n):
    # naive recursion: recomputes the same subproblems over and over
    global total_calls
    total_calls += 1
    if n < 3:
        result = 1
    else:
        result = fibonacci(n - 1) + fibonacci(n - 2)
    return result

def fibtest(f, n):
    # time a single call to f(n) and report the global call counter
    t1 = time.time()
    rc = f(n)
    t2 = time.time()
    print("%s(%d) = %d\n- total function calls = %d\n- elapsed time = %3.6f seconds\n\n" %
        (f.__name__, n, rc, total_calls, (t2 - t1)))


total_calls = 0  # reset the shared call counter before each run
fibtest(fibonacci_memo, 40)

total_calls = 0
fibtest(fibonacci, 40)

The results on a recent MacBook Air:

$ python fib.py
fibonacci_memo(40) = 102334155
- total function calls = 77
- elapsed time = 0.000025 seconds


fibonacci(40) = 102334155
- total function calls = 204668309
- elapsed time = 31.633700 seconds

Calling the recursive function fewer than 100 times versus calling it more than 200 million times. I would focus on making it clear that you understand how and why memoization works the way it does. Then you can go on to explain why the lack of tail call elimination in Python makes either solution problematic for large values of n.
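To make the depth problem concrete, here's a quick sketch (the fib below is just the naive recursion from above, renamed; the ~1000-frame figure is CPython's default limit):

import sys

def fib(n):
    # same naive recursion as above; call depth grows linearly with n
    if n < 3:
        return 1
    return fib(n - 1) + fib(n - 2)

print(sys.getrecursionlimit())  # typically 1000 in CPython

try:
    fib(5000)
except RecursionError:
    print("blew past the interpreter's recursion limit")

You can raise the cap with sys.setrecursionlimit(), but that only trades a clean exception for the risk of overflowing the C stack; for large n the real fix is an iterative rewrite.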

kaelscion

For sure! I ran your suggestions through the same series of tests the others have gone through so far and came up with the following:

If you take a look at my first solution, which uses iteration rather than "true" recursion, the three variables are a stand-in for the known dictionary as a form of memoization. We get the same performance boost, since previously computed values are stored and reassigned on each pass through the loop rather than accumulating in an ever-growing dictionary. I find that approach more to my personal taste because we never allocate more RAM than those three values take up combined on any given pass. But if we needed to not only find the nth fib number but also store all previous fib numbers up to that point, I wouldn't have even considered using a dictionary as a cache, so thanks for pointing out that use case to me. I'm sure interviewers will want to see solutions for "one-off-calculation" use cases as well as persistent, referenceable ones.
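For reference, this is roughly what that three-variable iterative version looks like (a sketch, with names of my own choosing rather than the exact code from my solution):

def fibonacci_iter(n):
    # prev, curr, nxt play the role of the known dictionary:
    # only the last two values are ever needed, so memory use stays constant
    prev, curr = 1, 1
    for _ in range(n - 2):
        nxt = prev + curr
        prev, curr = curr, nxt
    return curr

print(fibonacci_iter(40))  # 102334155 -- same answer, no recursion depth to worry about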

Also, using "true recursion" is the better solution for numbers of passes in the hundreds. But the use-case I'm gunning for (hundreds of thousands of passes), true recursion hits that stupid Python depth limit well before I can get where I want to go. The iterative solution is a nice, safe, and quick-to-implement O(n) algo that I can write quickly, debug with minimal fuss (or really, no fuss) and have the peace of mind that I know what I'm going to get with a fairly predictable, linear increase in execution time with an increase in the dataset's size.