Better approximate atavising

As I learned from this article, atavising is the art of “reversing” Game of Life configurations. In the article, “differentiable automata” are used to atavise an image of Conway.

Nice! However, it seemed to me that a much simpler method would also work:

  1. Start with some Game of Life configuration C
  2. Flip a random pixel in C and call the result C'
  3. From C', calculate the next configuration D'
  4. Compute the distance between D' and the target image T
  5. If the distance |D' - T| is less than |D - T|, where D is the successor of the current C, set C ← C'
  6. Go to step 2

The distance function applies a simple blurring kernel and gamma encoding, then takes the sum of the squared differences over all pixels. Because both the blurring kernel and the computation of the next Game of Life configuration depend only on pixels near the flipped pixel, we can get away with computing the distance over just a small window around the flip, and the whole process is reasonably efficient even without optimizations.

I wrote a simple C program that does this and displays the result using SDL. The code can be found on GitHub. If I run the program for this target image:

Small image of Conway

The result converges to this within 10 seconds:

Small atavised image of Conway

This is arguably of higher quality than the atavised image obtained using the differentiable automata:

Original small atavised image of Conway

Using a higher-resolution image is entirely feasible. This is a 707 by 900 picture of Conway:

Large atavised image of Conway

Even atavising multiple generations is feasible, although the quality degrades quickly with the number of generations. This is the same image, atavised over 3 generations:

Large image of Conway atavised over multiple generations