In this post I'll explore and benchmark some simple dynamic programming concepts in Python. The example I'll use is a classic recursive implementation of the Fibonacci series. (And for simplicity I'll skip the first element in the series, fib(0) = 0.)

Naive approach

This implementation (sketched below) works well (enough) for small numbers, but quickly becomes impossibly slow... ...since it has a time complexity of O(2^n). (So our seemingly harmless fib(42) would, at that rate, be on the order of 2^42, or more than 4 trillion, calls to fib... (Or about 4 times the number of bacteria on the human body... Or 2 times the number of galaxies in the observable universe... Or more than the estimated population of fish in the ocean... Etc.))

Enter dynamic programming

Since the results of these recursive calls to fib are all deterministic, we can cache the results instead of recalculating them.

@functools.cache

First, a test that simply leverages the cache decorator from the functools library (also sketched below)... This, as expected, speeds up the runtime significantly...
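For reference, here is a minimal sketch of the kind of naive recursive implementation described under "Naive approach" above; the exact original code isn't reproduced here, so treat the details (type hints, base cases) as assumptions, with fib(1) = fib(2) = 1 and fib(0) skipped:

```python
def fib(n: int) -> int:
    # Base cases: fib(1) = fib(2) = 1 (fib(0) = 0 is skipped for simplicity)
    if n <= 2:
        return 1
    # Each call spawns two more recursive calls, hence the exponential blow-up
    return fib(n - 1) + fib(n - 2)


print(fib(10))  # 55 -- fine for small n; something like fib(42) is already painfully slow
```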
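One way to see the slowdown is to time individual calls for increasing n; this is only a sketch using the standard timeit module, not necessarily the benchmark setup used for the numbers discussed above:

```python
import timeit

# Time a single call for a few input sizes; the runtime grows roughly exponentially with n.
for n in (10, 20, 30, 35):
    seconds = timeit.timeit(lambda: fib(n), number=1)
    print(f"fib({n}): {seconds:.4f}s")
```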
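And a sketch of the functools.cache variant (functools.cache is available from Python 3.9; on older versions functools.lru_cache(maxsize=None) is equivalent):

```python
import functools


@functools.cache  # memoizes results by argument, so each fib(n) is computed only once
def fib(n: int) -> int:
    if n <= 2:
        return 1
    return fib(n - 1) + fib(n - 2)


print(fib(42))  # 267914296 -- returns practically instantly now
```

With the cache in place, the call tree collapses from exponentially many calls to a linear number of distinct subproblems, which is exactly the dynamic programming idea (top-down memoization).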