
Posts

Showing posts with the label algorithm

Benchmarking Dynamic Programming with Python

In this post I'll explore and benchmark some simple dynamic programming concepts in Python. The example I'll use is a classic recursive implementation of the Fibonacci series. (And for simplicity I'll skip the first element in the series (fib(0)=0).) Naive approach This implementation: ...works well (enough) for small numbers, but becomes impossibly slow quickly... ...since it has a time complexity of O(2^n). (So our seemingly harmless fib(42) would result in more than 4 trillion calls to fib... (Or about 4 times the number of bacteria on the human body... Or 2 times the number of galaxies in the observable universe... Or more than the estimated population of fish in the ocean... Etc.)) Enter dynamic programming Since the results of these recursive calls to fib are all deterministic, we can cache them instead of recalculating them. @functools.cache First a test by simply leveraging the cache in the functools library... This speeds, as expected, the runtime up si...
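The post's actual code is cut off in this excerpt, but a minimal sketch of the naive recursion and its `functools.cache` counterpart might look like this (function names are my own, following the excerpt's fib(1)=fib(2)=1 convention):

```python
import functools

def fib(n):
    # Classic naive recursion; the post skips fib(0)=0, so fib(1)=fib(2)=1.
    # Each call branches into two more, hence the O(2^n) blow-up.
    if n <= 2:
        return 1
    return fib(n - 1) + fib(n - 2)

@functools.cache
def fib_cached(n):
    # Same recursion, but functools.cache memoizes every result,
    # so each value of n is computed only once: O(n) instead of O(2^n).
    if n <= 2:
        return 1
    return fib_cached(n - 1) + fib_cached(n - 2)

print(fib(20))         # 6765 -- fine for small n
print(fib_cached(200)) # instant, where the naive version would never finish
```

Calling `fib(42)` here would take minutes, while `fib_cached(42)` returns immediately, which is the whole point of the benchmark.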

Metaballs Revisited

The new visual identity of my workplace (and the re-ignition of an old lava lamp) made me think of the good old metaballs from the Amiga demo-scene of yore, and how it has been a while since I have implemented them from scratch. This time I wanted to play around with Python and numpy to see what that could bring. But first, what are metaballs? It's, for example, this: Wikipedia defines them as: In computer graphics, metaballs are organic-looking n-dimensional isosurfaces, characterised by their ability to meld together when in close proximity to create single, contiguous objects. https://en.wikipedia.org/wiki/Metaballs As cool as multidimensional metaballs are, we'll stick to 2D ones in this post. The algorithm The algorithm behind them is quite straightforward -- in its simplest form. Basically, for each pixel in each frame (or buffer), you add up the influence of each metaball in the simulation, then cut off below a certain threshold and normalize what's left. The...
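The per-pixel summation described above can be sketched in numpy. This is not the post's own code (the excerpt is truncated); it assumes a common influence function of radius divided by distance to the ball's centre, and clamps the distance to avoid the singularity at the centre:

```python
import numpy as np

def metaball_frame(width, height, balls, threshold=1.0):
    """Render one frame: sum each ball's influence per pixel, then threshold.

    `balls` is a list of (cx, cy, radius) tuples; influence is assumed to
    be radius / distance-to-centre, one common choice among several.
    """
    ys, xs = np.mgrid[0:height, 0:width]        # per-pixel coordinate grids
    field = np.zeros((height, width))
    for cx, cy, r in balls:
        # Clamp the distance to 1 pixel so the centre doesn't blow up.
        dist = np.maximum(np.hypot(xs - cx, ys - cy), 1.0)
        field += r / dist                       # add this ball's influence
    field[field < threshold] = 0.0              # cut off below the threshold
    if field.max() > 0:
        field /= field.max()                    # normalize what's left
    return field

frame = metaball_frame(80, 60, [(20, 30, 5), (50, 30, 5)])
```

Because the whole frame is computed with array operations rather than a per-pixel Python loop, this is exactly the kind of thing numpy speeds up dramatically.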

Implementing k-means clustering in Ruby

Inspired by the partitioning problem I set about to implement a well-known algorithm, k-means clustering, from memory, for fun! ... and, for science... Interestingly, this is somehow the opposite of the partitioning problem. In the partitioning problem we tried to maximize the variation of categorical variables within groups, whereas here we're trying to find groups of elements that are the most "similar" to each other in an n-dimensional continuous space. The main idea of k-means is the following - if you have a bunch of elements and a given number of clusters: Create the initial clusters randomly within the space spanned by the elements (typically (always?) you would pick randomly from your elements). Lob all elements into the cluster with the nearest center (typically using a Euclidean distance metric). Recenter each cluster on the average of its elements. If necessary, move the elements to their now-nearest clusters. Repeat the "re-centering" and mov...
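The post itself is in Ruby, but the steps listed above translate directly into a short sketch; here is one in Python (my own names and structure, not the post's code), seeding the centres from the elements themselves and iterating until they stop moving:

```python
import random

def dist2(a, b):
    # Squared Euclidean distance -- squared is enough for comparisons.
    return sum((x - y) ** 2 for x, y in zip(a, b))

def mean(cluster):
    # Average of a cluster's points, dimension by dimension.
    n = len(cluster)
    return tuple(sum(xs) / n for xs in zip(*cluster))

def kmeans(points, k, iterations=100):
    # 1. Seed the clusters by picking k random elements as initial centres.
    centres = random.sample(points, k)
    for _ in range(iterations):
        # 2. Lob every point into the cluster with the nearest centre.
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: dist2(p, centres[i]))
            clusters[nearest].append(p)
        # 3. Re-centre each cluster on the mean of its members
        #    (keep the old centre if a cluster ends up empty).
        new_centres = [mean(c) if c else centres[i]
                       for i, c in enumerate(clusters)]
        # 4. Repeat until the centres no longer move.
        if new_centres == centres:
            break
        centres = new_centres
    return centres, clusters

points = [(0.0, 0.0), (0.1, 0.2), (0.2, 0.1),
          (9.0, 9.0), (9.1, 9.2), (8.9, 9.1)]
centres, clusters = kmeans(points, 2)
```

On the two obvious blobs above, the algorithm converges to one cluster per blob within a couple of iterations.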

Partitioning - or, students into n groups - in Ruby

A while back a friend of mine asked if I could automate the creation of groups in a class of students - to maximize the variation within each group, but minimize the difference between the groups. This sounded like an interesting problem, so I set about to solve it in my own naive, experimental, Monte Carlo (inspired) way - without looking into ways this has (surely elegantly) been solved before. The result was this little Ruby script: First I just require some code I have previously written. (I guess I really should make them into gems or something instead of copying code around...) names.rb tries to guess sex from a person's name and/or title, and countries.rb maps countries to continents, regions etc. with some fuzzy matching of names. (I'm looking at you, Democratic Republic of Kongo, Koreas etc.) Easy-peasy. Then I set some standard variables. (Maybe I'll make this dynamic in the future, why not?) The most interesting entry here is the classifiers ...
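The Ruby script itself is not visible in this excerpt, but the Monte Carlo idea it describes - shuffle students into groups many times and keep the best-scoring partition - can be sketched like this (in Python for consistency with the other posts; the student records, classifier names, and scoring function are all my own illustrative assumptions, not the script's):

```python
import random

# Hypothetical student records; the real script derives classifiers such
# as sex and region from names.rb and countries.rb.
STUDENTS = [
    {"sex": "f", "region": "Europe"}, {"sex": "m", "region": "Asia"},
    {"sex": "f", "region": "Africa"}, {"sex": "m", "region": "Europe"},
    {"sex": "f", "region": "Asia"},   {"sex": "m", "region": "Africa"},
    {"sex": "f", "region": "Europe"}, {"sex": "m", "region": "Asia"},
]
CLASSIFIERS = ["sex", "region"]

def variation(group):
    # Within-group variation: how many distinct values each classifier takes.
    return sum(len({s[c] for s in group}) for c in CLASSIFIERS)

def partition(students, n_groups, trials=1000):
    best, best_score = None, float("-inf")
    for _ in range(trials):
        shuffled = random.sample(students, len(students))
        # Deal students round-robin so group sizes stay (nearly) equal.
        groups = [shuffled[i::n_groups] for i in range(n_groups)]
        scores = [variation(g) for g in groups]
        # Maximize total within-group variation, and penalize the spread
        # between groups so they end up similar to one another.
        total = sum(scores) - (max(scores) - min(scores))
        if total > best_score:
            best, best_score = groups, total
    return best

groups = partition(STUDENTS, 2)
```

A brute-force search over all partitions would be exact but explodes combinatorially; a few thousand random trials usually land on (or very near) an optimal grouping for class-sized inputs.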