Better approximate atavising
As I learned from this article, atavising is the art of “reversing” Game of Life configurations. In the article, “differentiable automata” are used to atavise an image of Conway.
Nice! However, it seemed to me that a much simpler method would also work:
1. Start with some Game of Life configuration G
2. Flip a random pixel in G and call the result G′
3. From G′, calculate the next configuration next(G′)
4. Compute the distance between next(G′) and the target image
5. If the distance is less than the current best, set G = G′
6. Go to step 2
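The loop above can be sketched in C roughly like this. This is a minimal sketch, not the actual program from GitHub: the grid size, the toroidal wrapping, and the plain squared-pixel distance (without the blur and gamma steps discussed next) are all simplifying assumptions.

```c
#include <stdlib.h>
#include <string.h>

#define W 16
#define H 16

/* One Game of Life generation; toroidal wrapping is an assumption,
   the post does not specify boundary handling. */
static void life_step(const unsigned char *g, unsigned char *out)
{
    for (int y = 0; y < H; y++)
        for (int x = 0; x < W; x++) {
            int n = 0;
            for (int dy = -1; dy <= 1; dy++)
                for (int dx = -1; dx <= 1; dx++) {
                    if (dx == 0 && dy == 0) continue;
                    n += g[((y + dy + H) % H) * W + (x + dx + W) % W];
                }
            out[y * W + x] = (n == 3) || (g[y * W + x] && n == 2);
        }
}

/* Squared per-pixel distance between next(g) and the target image
   (blur and gamma omitted in this sketch). */
static long dist(const unsigned char *g, const unsigned char *target)
{
    unsigned char next[W * H];
    life_step(g, next);
    long d = 0;
    for (int i = 0; i < W * H; i++) {
        int e = (int)next[i] - (int)target[i];
        d += e * e;
    }
    return d;
}

/* Hill climbing: flip a random cell, keep the flip only if the
   distance strictly decreases, otherwise undo it. */
static long atavise(unsigned char *g, const unsigned char *target, int iters)
{
    long best = dist(g, target);
    for (int i = 0; i < iters; i++) {
        int p = rand() % (W * H);
        g[p] ^= 1;                    /* flip one pixel          */
        long d = dist(g, target);
        if (d < best) best = d;       /* improvement: keep it    */
        else          g[p] ^= 1;      /* otherwise undo the flip */
    }
    return best;
}
```

Starting from a random configuration and calling `atavise` with enough iterations drives the distance monotonically downward, which is all the algorithm needs.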
The distance function applies a simple blurring kernel and gamma encoding, then takes the sum of the squared differences over all pixels. Because both the blurring kernel and the Game of Life update rule only depend on pixels near the flipped pixel, we can get away with recomputing the distance over a small window around the flip, and the whole process is reasonably efficient even without further optimizations.
I wrote a simple C program that does this and displays the result using SDL. The code can be found on GitHub. If I run the program for this target image:
The result converges to this within 10 seconds:
The result is arguably of higher quality than the atavised image obtained with the differentiable automata:
Using a higher-resolution image is entirely feasible. This is a 707 by 900 picture of Conway:
Even atavising multiple generations is feasible, although the quality degrades quickly with the number of generations. This is the same image, atavised 3 generations:
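Atavising n generations means comparing the n-th successor of the candidate against the target instead of the immediate successor. A sketch follows, reusing the same Life rule as before; whether the actual program optimizes all generations at once or atavises one generation at a time and repeats is not stated here. Note that each extra generation widens the neighbourhood a single flip can influence by one cell in every direction, so the local-window trick grows less effective as n increases.

```c
#include <string.h>

#define W 8
#define H 8

/* One Game of Life generation on a toroidal W x H grid (assumption). */
static void life_step(const unsigned char *g, unsigned char *out)
{
    for (int y = 0; y < H; y++)
        for (int x = 0; x < W; x++) {
            int n = 0;
            for (int dy = -1; dy <= 1; dy++)
                for (int dx = -1; dx <= 1; dx++) {
                    if (dx == 0 && dy == 0) continue;
                    n += g[((y + dy + H) % H) * W + (x + dx + W) % W];
                }
            out[y * W + x] = (n == 3) || (g[y * W + x] && n == 2);
        }
}

/* Advance n generations; an n-generation atavise compares this
   result against the target instead of the single-step successor. */
static void life_step_n(const unsigned char *g, unsigned char *out, int n)
{
    unsigned char a[W * H], b[W * H];
    memcpy(a, g, sizeof a);
    for (int i = 0; i < n; i++) {
        life_step(a, b);
        memcpy(a, b, sizeof a);
    }
    memcpy(out, a, sizeof a);
}
```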