**¡Short post!** (not so short after the edit)

Some time ago we had a great discussion *(see also the comments)* in the post From 100% to 0% CPU with memoization applied to the Fibonacci series.

From that post comes the idea that explaining recursive functions with the Fibonacci example might defeat the purpose of using Python in the first place.

What about solving the Fibonacci problem this way?

**EDIT 20 Feb 2021**

The original `for` loop was `for i in range(2, i - 1):`; I corrected it to the version below. Thanks to @paddy3118 for spotting that repetition of `i`.

```
# First definitions
i = 50             # number of elements in the final Fibonacci series
fib = [0, 1, 1]    # fixed seed
fiba = fib.append  # shortcut to the append method

# Calculate the Fibonacci series
for _ in range(3, i):
    fiba(fib[-2] + fib[-1])
```

@paddy3118 in his/her comment also suggested using numpy to store the values and simply fill in the results at the corresponding indexes. Obviously, that requires `numpy`, which might not be an option; if it is, though, let's investigate how much time that approach costs.

This is how I understood @paddy3118's comment; please, Paddy, correct me if I am wrong.

```
import numpy as np

i = 50
# np.int is deprecated in recent NumPy; int64 holds Fibonacci numbers up to Fib(92)
fib = np.zeros(i, dtype=np.int64)
fib[0:3] = [0, 1, 1]
for k in range(3, i):
    fib[k] = fib[k - 2] + fib[k - 1]
```

On my machine, with Jupyter's `%%timeit`, I get:

```
The slowest run took 5.29 times longer than the fastest. This could mean that an intermediate result is being cached.
10000 loops, best of 5: 23 µs per loop
```

Of that time, `100000 loops, best of 5: 2.42 µs per loop` corresponds to the array creation and `10000 loops, best of 5: 21.5 µs per loop` to the `for` loop.

If instead I `%%timeit` the approach with lists, we get for the **whole process**:

```
100000 loops, best of 5: 4.66 µs per loop
```

It is actually much faster to perform this operation with lists, perhaps contrary to expectations.
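To make the comparison reproducible outside Jupyter, here is a self-contained `timeit` sketch of both approaches; the function names are my own, and absolute numbers will of course vary per machine:

```python
from timeit import timeit

import numpy as np

i = 50  # number of elements in the series


def fib_list(n):
    """List version: grow the series with append."""
    fib = [0, 1, 1]
    fiba = fib.append
    for _ in range(3, n):
        fiba(fib[-2] + fib[-1])
    return fib


def fib_numpy(n):
    """NumPy version: pre-allocate an array and fill it by index."""
    fib = np.zeros(n, dtype=np.int64)
    fib[0:3] = [0, 1, 1]
    for k in range(3, n):
        fib[k] = fib[k - 2] + fib[k - 1]
    return fib


# Both approaches produce the same series.
assert fib_list(i) == list(fib_numpy(i))

print("list :", timeit(lambda: fib_list(i), number=10_000), "s")
print("numpy:", timeit(lambda: fib_numpy(i), number=10_000), "s")
```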

Is that what you were referring to, @paddy3118? Thanks for your comment, I enjoyed it a lot!

**Second EDIT**

In a later discussion with a friend, we saw that if the list grows considerably, the `append` version becomes much slower than the version where the list is allocated beforehand (see the example below). Hence, if the number `i` is known, and that number is large, the version presented next is much faster.

This latter version avoids the list being (internally) copied over to larger allocated memory blocks as new values are `append`ed. So, depending on the size of the lists you expect to produce and on your performance requirements, you can choose one or the other. I believe this last version is more in line with what @paddy3118 was referring to earlier.

```
i = 50_000
fib = [0] * i  # pre-allocate the whole list up front
fib[1:3] = [1, 1]
for k in range(3, i):
    fib[k] = fib[k - 2] + fib[k - 1]
```

Runs in:

```
100 loops, best of 5: 4.1 ms per loop
```

While the first version with `i = 50_000` runs in:

```
10 loops, best of 5: 68.8 ms per loop
```
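The two list variants can also be compared head-to-head in a plain script; the helper names below are my own, and the absolute timings depend on the machine:

```python
from timeit import timeit

i = 50_000  # large enough that list growth matters


def fib_append(n):
    """Grow the list one append at a time."""
    fib = [0, 1, 1]
    fiba = fib.append
    for _ in range(3, n):
        fiba(fib[-2] + fib[-1])
    return fib


def fib_prealloc(n):
    """Allocate the full list up front and fill it by index."""
    fib = [0] * n
    fib[1:3] = [1, 1]
    for k in range(3, n):
        fib[k] = fib[k - 2] + fib[k - 1]
    return fib


# Sanity check: identical results either way.
assert fib_append(i) == fib_prealloc(i)

print("append  :", timeit(lambda: fib_append(i), number=10), "s")
print("prealloc:", timeit(lambda: fib_prealloc(i), number=10), "s")
```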

I started this post to argue that the Fibonacci series might not be a good example for explaining recursion. We ended up finding a very nice situation where it can (maybe) serve as a more adequate example. I look forward to writing a small post on how lists work internally in Python to explain the behavior observed here.
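A quick way to peek at that internal behavior already: CPython over-allocates list storage in chunks, which is what makes `append` cheap on average but occasionally forces a copy to a larger block. `sys.getsizeof` makes the jumps visible (the exact byte counts are CPython-version specific):

```python
import sys

lst = []
last_size = 0
for n in range(20):
    size = sys.getsizeof(lst)
    if size != last_size:
        # A jump here means CPython reallocated the list to a bigger block.
        print(f"len={len(lst):2d}  allocated={size} bytes")
        last_size = size
    lst.append(n)
```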

Cheers,

## Top comments (5)

Hi again, I was researching a similar sequence, the Padovan sequence, which is like the Fibonacci but with `P[n] = P[n-2] + P[n-3]`. In that code I keep only the last few terms by turning the list into a deque; something similar could be done to save memory in the Fibonacci case.
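The deque snippet itself isn't reproduced in the comment, but a sketch of the idea might look like this (my own reconstruction, assuming the common seed P(0) = P(1) = P(2) = 1; three retained terms suffice for this recurrence):

```python
from collections import deque


def padovan(n):
    """Yield the first n Padovan terms, keeping only the last three in memory."""
    last = deque([1, 1, 1], maxlen=3)  # P(0), P(1), P(2)
    for k in range(n):
        if k < 3:
            yield 1
        else:
            nxt = last[-2] + last[-3]  # P[k] = P[k-2] + P[k-3]
            last.append(nxt)           # maxlen=3 silently drops the oldest term
            yield nxt
```

For example, `list(padovan(10))` gives `[1, 1, 1, 2, 2, 3, 4, 5, 7, 9]`.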

Hi,

That is also a very nice implementation if you don't want to keep a list of the whole sequence. Going further in the discussion, we can actually avoid using `deque` and increase the speed almost by a factor of two, without needing to append, which takes time.

Hi, thanks a lot for your comment. I updated the post in response. Let me know your thoughts. Cheers!
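The deque-free variant mentioned in the comments isn't shown either; my guess at the approach is to keep the last three Padovan terms in plain variables and rebind them each step, so no container bookkeeping is involved (again assuming the seed P(0) = P(1) = P(2) = 1):

```python
def padovan_vars(n):
    """Yield the first n Padovan terms using only three rolling variables."""
    a, b, c = 1, 1, 1  # P(k-3), P(k-2), P(k-1)
    for k in range(n):
        if k < 3:
            yield 1
        else:
            a, b, c = b, c, a + b  # P[k] = P[k-2] + P[k-3]
            yield c
```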
