In this post I'll explore and benchmark some simple dynamic programming concepts in Python.
The example I'll use is a classic recursive implementation of the Fibonacci series. (For simplicity I'll skip the first element in the series, fib(0) = 0.)
Naive approach
This implementation:
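(A minimal sketch of what such a naive recursion might look like, given the 1-indexed series above:)

```python
def fib(n):
    # With fib(0) = 0 skipped, the first two elements are both 1.
    if n <= 2:
        return 1
    # Each Fibonacci number is the sum of the two preceding ones.
    return fib(n - 1) + fib(n - 2)
```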
...works well (enough) for small numbers:
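(For example, the first ten values come back instantly:)

```python
>>> [fib(n) for n in range(1, 11)]
[1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
```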
...but quickly becomes impossibly slow...
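(One way to see this for yourself is to time single calls with timeit; exact timings will vary by machine, but each increment of n multiplies the runtime by roughly the golden ratio, so things get out of hand fast:)

```python
import timeit

# Time one cold call each; a single run per value is enough to see the trend.
for n in (30, 33, 36):
    print(n, timeit.timeit(lambda: fib(n), number=1))
```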
...since it has a time complexity of O(2^n). (So our seemingly harmless fib(42) could trigger up to roughly 2^42, i.e. more than 4 trillion, calls to fib... (Or about 4 times the number of bacteria on the human body... Or 2 times the number of galaxies in the observable universe... Or more than the estimated population of fish in the ocean... Etc.))
Enter dynamic programming
Since the results of these recursive calls to fib are all deterministic, we can cache them instead of recalculating them.
@functools.cache
First, a test that simply leverages the cache decorator from the functools library...
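(A sketch of that test, decorating the same naive function; functools.cache requires Python 3.9+, with functools.lru_cache(maxsize=None) as the equivalent on older versions:)

```python
import functools

@functools.cache  # memoize: each fib(n) is computed at most once
def fib(n):
    if n <= 2:
        return 1
    return fib(n - 1) + fib(n - 2)
```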
As expected, this speeds up the runtime significantly...
...by a factor of about a million in the case of this 43rd Fibonacci number.
(Of course we can't run timeit with multiple loops, as the first run would populate the cache for all successive runs...)
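(If you do want repeated timed runs, the wrapped function exposes cache_clear(), which can be used to start each run cold, e.g.:)

```python
import timeit

# Clear the memo before each timed call so every run recomputes from scratch.
print(timeit.timeit("fib.cache_clear(); fib(42)", globals=globals(), number=5))
```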
Hand made
Then we can compare this to the classic, hand-made way to implement memoization in dynamic programming, like so:
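(Something along these lines, keeping the memo in a plain dict:)

```python
memo = {}

def fib(n):
    # Return the cached result if we've already computed fib(n).
    if n in memo:
        return memo[n]
    result = 1 if n <= 2 else fib(n - 1) + fib(n - 2)
    memo[n] = result
    return result
```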
This should give more or less the same results as above. And it does.
Interestingly, it actually systematically outperforms @functools.cache slightly on my M1 Air.
On my Intel desktop I get a similar difference in results:
Conclusion
Leveraging @functools.cache is a great way to implement dynamic programming memoization in Python with little code overhead (but you might still be able to tailor-make a faster solution).