I have not gone immutable with it. Python has a recursion depth limit that, while usually a good safety net, becomes an obstacle as recursion gets deeper. Finding the 200,000th Fibonacci number with true recursion would hit that limit very early, and cProfile only measures down to thousandths of a second, so even if I didn't hit the limit, my iterative solution would show the same measured performance as true recursion. On second look, though, my solution seems to be fast because it leans on RAM: each pass stores the current Fibonacci number, the previous one, and their sum in variables. Granted, the same variables are reassigned on each pass, but the numbers keep growing, so each variable goes from holding a single-digit int to an int with thousands of digits, using more and more RAM per pass.
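For context, a minimal sketch of the iterative approach described above might look like this (the function and variable names are my own, not from the original code):

```python
def fib(n):
    """Return the nth Fibonacci number, counting fib(1) == fib(2) == 1."""
    previous, current = 0, 1
    for _ in range(n - 1):
        total = previous + current  # the "sum" variable mentioned above
        previous = current          # same names reused each pass...
        current = total             # ...but the ints they hold keep growing
    return current

print(fib(10))  # → 55
```

No call stack is involved, so the recursion depth limit never comes into play, even for `fib(200000)`.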
Another commenter suggested a generator function, which would likely lean on the CPU to calculate the current, previous, and next Fibonacci numbers lazily rather than simply recalling them from RAM. That would be more efficient memory-wise but put a heavier load on the CPU. In this particular use case that wouldn't be a huge deal, but in more complex calculations with numbers that big, the CPU could strain hard if it doesn't have a good deal of horsepower. I'd like to learn the generator approach anyway for the sake of understanding both environments. On a cheap DigitalOcean droplet, 1 GB of RAM will go further than one CPU core, so iteration would be best. On an expensive EC2 instance you've got CPU power for days and RAM gets expensive, so a generator would be best. Either way, it couldn't hurt to know both :D
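A minimal sketch of the generator approach that commenter suggested might look like this (names are my own). It yields each Fibonacci number on demand instead of returning one result, so consumers can pull values lazily:

```python
from itertools import islice

def fib_gen():
    """Yield Fibonacci numbers one at a time: 1, 1, 2, 3, 5, ..."""
    previous, current = 0, 1
    while True:
        yield current
        previous, current = current, previous + current

# Pull the first ten values lazily; nothing beyond them is ever computed.
print(list(islice(fib_gen(), 10)))  # → [1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
```

Worth noting: under the hood this keeps the same two running variables as the loop version, so per-value memory use is similar; the practical win is that the sequence is produced on demand rather than all at once.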
That's a great explanation of your choice though!
Hired :P !