Plunging into the depths of algorithmic image generation


[03-2020 / English / 525 words]

Creating pictures with randomness is fascinating. One day I stumbled onto a short Python script that randomly applies several mathematical functions (sine, cosine, addition, subtraction, multiplication, division) to a NumPy array. The image comes into existence as a set of random RGB values, and is then distorted, split, squeezed, shredded, and smoothed by these operations. The results are spectacular.
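The core idea can be sketched roughly like this — note this is my own minimal reconstruction, not the original script: coordinate grids are fed through a randomly assembled tree of operations, once per color channel.

```python
import numpy as np

rng = np.random.default_rng(0)  # seeded, so the sketch is reproducible

def random_channel(x, y, depth=4):
    """Recursively combine the x/y coordinate grids with random operations."""
    if depth == 0:
        return x if rng.random() < 0.5 else y
    ops = [
        lambda a, b: np.sin(a),
        lambda a, b: np.cos(a),
        lambda a, b: a + b,
        lambda a, b: a - b,
        lambda a, b: a * b,
        lambda a, b: a / (np.abs(b) + 1.0),  # damped division, avoids blow-ups
    ]
    op = ops[rng.integers(len(ops))]
    return op(random_channel(x, y, depth - 1), random_channel(x, y, depth - 1))

def generate(size=256):
    """Build one random image: three independently generated channels."""
    x, y = np.meshgrid(np.linspace(-1, 1, size), np.linspace(-1, 1, size))
    channels = [random_channel(x, y) for _ in range(3)]
    # Normalize each channel into the 0..255 byte range
    img = np.stack(
        [255 * (c - c.min()) / (c.max() - c.min() + 1e-9) for c in channels],
        axis=-1,
    ).astype(np.uint8)
    return img

img = generate()
```

The recursion depth and the exact set of operations are assumptions; varying either changes the character of the output dramatically.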

On the first day, I just generated image after image to explore the general capabilities of the algorithm. By playing around with the code, I figured out how to set a seed. Every random number generator bases its randomness on a seed, and when the seed is kept identical, the generator's output can be reliably reproduced.
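With NumPy this is a one-liner: two generators constructed from the same seed produce identical streams of numbers, so the whole image pipeline becomes repeatable.

```python
import numpy as np

# Two generators with the same seed yield exactly the same values,
# which makes every downstream image reproducible.
rng_a = np.random.default_rng(seed=42)
rng_b = np.random.default_rng(seed=42)
same = np.array_equal(rng_a.random((4, 4)), rng_b.random((4, 4)))
```
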

Now that I was able to vary the seed, I moved past the ephemeral sifting of transient pictures and started to mine infinity (at least in theory). I set up a script that created images overnight with seeds running from 1 to 1000. The next day I had a folder with a thousand pictures to review. Most of them turned out to be boring single-color rectangles or simple brightness gradients. About 10% of the pictures had much more to them, however. I found compositions so colorful and fine-grained that they appeared like mud. I found elementary areas of color resembling a Rothko painting. I found grids of mesmerizing complexity and depth. And I found carnival-like arcs of distorted 3D patterns.
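A batch driver for such an overnight run might look like the following sketch. `generate_image` here is a stand-in for the actual generator (assumed to return an H×W×3 uint8 array), and the frames are saved in NumPy's own format; the real script would export PNGs with a library such as Pillow.

```python
import tempfile
import numpy as np
from pathlib import Path

def generate_image(seed, size=64):
    """Stand-in generator: one deterministic random image per seed."""
    rng = np.random.default_rng(seed)
    return (rng.random((size, size, 3)) * 255).astype(np.uint8)

def batch(out_dir, seeds=range(1, 1001)):
    """Render one image per seed and write each to its own file."""
    Path(out_dir).mkdir(exist_ok=True)
    for seed in seeds:
        img = generate_image(seed)
        np.save(Path(out_dir) / f"seed_{seed:04d}.npy", img)

# Small demo run (5 seeds) into a temporary directory
out_dir = tempfile.mkdtemp()
batch(out_dir, seeds=range(1, 6))
```

Because each file name encodes its seed, any interesting picture found in the morning can be regenerated exactly, at any resolution.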


I experimented more with the code and found out that each of these pictures is in fact only a pixelated representation of an infinite algorithmic space. The color information gets “pressed” into an image grid when it is exported: the larger the image resolution, the more information is saved. I immediately thought of the Blade Runner enhance scene, in which Harrison Ford retrieves visual information from the depths of an endlessly magnified picture. Inspired by this vision, I modified the code to include a user-configurable “view window” that allows exporting any position of a given algorithmic space at any magnification.
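The view-window idea can be sketched as follows: since the algorithmic space is a continuous function of the coordinates, exporting an image just means sampling that function on a grid. The `center` and `zoom` parameters here are my hypothetical names, and `f` stands in for the seeded function pipeline.

```python
import numpy as np

def render(f, center=(0.0, 0.0), zoom=1.0, size=256):
    """Sample the continuous function f on a square window of the plane."""
    half = 1.0 / zoom  # half-width of the window: higher zoom = smaller window
    xs = np.linspace(center[0] - half, center[0] + half, size)
    ys = np.linspace(center[1] - half, center[1] + half, size)
    x, y = np.meshgrid(xs, ys)
    return f(x, y)  # evaluate the algorithmic space on the grid

# Example: a 10x magnification around the point (0.3, -0.2)
patch = render(lambda x, y: np.sin(5 * x) * np.cos(5 * y),
               center=(0.3, -0.2), zoom=10.0, size=128)
```

Nothing is lost by zooming: the function is simply re-evaluated at finer-spaced coordinates, which is what makes the endless "enhance" possible.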

Based on these changes, I wrote a script that operated on a list of interesting seed values, and captured a sequence of smaller and smaller subsections of each of these images. I converted the resulting frames to mp4 videos, and watched them with a growing sense of vertigo. My screen showed an endless zoom to lower and lower depths of the algorithmic space, revealing structures and sub-structures that made up the conglomerate patterns in the layers above.
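The zoom sequence can be sketched by shrinking the view window geometrically from frame to frame, which yields a smooth exponential dive. `sample_space` below is again a stand-in for the seeded pipeline, and the frame-to-video step (e.g. via ffmpeg or imageio) is omitted.

```python
import numpy as np

def sample_space(x, y):
    """Stand-in for the seeded algorithmic space."""
    return np.sin(3 * x) * np.cos(3 * y)

def zoom_frames(center=(0.3, -0.2), factor=0.9, n_frames=10, size=64):
    """Render successive frames, each looking at a smaller window."""
    frames = []
    half = 1.0
    for _ in range(n_frames):
        xs = np.linspace(center[0] - half, center[0] + half, size)
        ys = np.linspace(center[1] - half, center[1] + half, size)
        x, y = np.meshgrid(xs, ys)
        frames.append(sample_space(x, y))
        half *= factor  # window shrinks each frame -> apparent zoom in
    return frames

frames = zoom_frames()
```

A constant shrink factor matters here: shrinking by a fixed *amount* per frame would make the zoom appear to accelerate, while a fixed *factor* keeps the perceived speed steady.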



After this success, I decided to create a Graphical User Interface to make the navigation of those dazzling algorithmic depths easier and accessible to more people. I first sketched out what a good interface might look like, and then used Tkinter to implement a sleek retro-button version of my Blade Runner inspiration, which I named “Zoom_Enhance”. You can find this program on my GitHub or download the packaged 27 MB .exe directly from my website (tested only on Windows 10).


Thanks for reading! If you have questions, comments, or unrelated thoughts, I’m always happy to talk. You can reach me via fd (at) fabiandietrich (dot) com