
I'm working on a cellular automaton simulation. The rules are the following:

  • Each cell interacts with its Moore neighborhood to update its value.
  • The cells live on an infinite grid (of any number of dimensions).
  • Each cell may have a randomized initial value.
  • The rules are stable: after some number of iterations, the grid converges to a uniform state.

The algorithm isn't tied to any particular programming language, so assume only basic data types: bool, int, their n-dimensional arrays, etc.

I can load the initial value of any cell into memory whenever I want. Is there any algorithm to calculate a cell's stabilized value without looping over the whole infinite grid?

To be specific, what I'm working on is the rule B5678/S45678, a 2-dimensional life-like cellular automaton.

  • I don't see how you can compute anything if you want every cell -- that is, an infinite number of cells -- to start with a random initial value. – j_random_hacker Apr 28 '21 at 10:29
  • *"after a certain iteration, they tend to be stable"*: that is quite vague. Please specify what "stable" means, what certainty "tends to be" gives, and which is the "certain iteration". – trincot Apr 28 '21 at 11:16
  • According to Wolfram, elementary cellular automata can be split into classes by their behavior. The rule I use would fall into Class 1 in this case ([Elementary_cellular_automaton](https://en.wikipedia.org/wiki/Elementary_cellular_automaton#Random_initial_state)). In a higher dimension, it will reach the uniform state, a "fixed point" to be exact. –  Apr 28 '21 at 11:25

1 Answer


> Is there any algorithm to calculate [a particular cell's] stabilized value without looping the whole infinite grid?

For this particular CA rule, yes, sort of. In particular, you can almost surely determine the final stable state of any given cell on the lattice by inspecting only a finite number of surrounding cells. However, the number of cells you may need to inspect can be arbitrarily large.


First, let me note that the life-like cellular automaton rule code "B5678/S45678" denotes a "majority vote" rule where the state of each cell on the next time step is the current majority state among the nine cells consisting of itself and its eight neighbors.
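
For concreteness, here's a minimal sketch of that update rule on a finite patch (Python used only as pseudocode; the `step` name and the 0/1 encoding are my own choices):

```python
def step(patch):
    """One synchronous B5678/S45678 update of a finite patch of 0/1 cells.

    Border cells are left unchanged, since their full Moore neighborhood
    is not available inside the patch; only the interior is exact.
    """
    h, w = len(patch), len(patch[0])
    nxt = [row[:] for row in patch]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # Count "on" cells among the cell itself and its 8 neighbors.
            on = sum(patch[y + dy][x + dx]
                     for dy in (-1, 0, 1) for dx in (-1, 0, 1))
            # Majority vote: B5678 and S45678 both reduce to "on next step
            # iff at least 5 of these 9 cells are on now".
            nxt[y][x] = 1 if on >= 5 else 0
    return nxt
```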

This rule happens to satisfy a monotonicity property: flipping the initial state of one or more cells from "off" to "on" cannot cause the future state of any cell to flip from "on" to "off", or vice versa. In other words, the future state of the lattice is a monotone increasing function of the current state.
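
Since the local rule has only 2^9 = 512 possible inputs, this monotonicity claim is easy to check exhaustively. A brute-force sketch, using the majority formulation from above:

```python
from itertools import product

def local_rule(cells):
    """Next state of the centre cell, given all 9 cells of its Moore
    neighborhood (centre included) as a tuple of 0/1 values."""
    return 1 if sum(cells) >= 5 else 0

# Monotonicity: turning any single input cell from "off" to "on" must never
# turn the output from "on" to "off".
for cells in product((0, 1), repeat=9):
    for i in range(9):
        if cells[i] == 0:
            flipped = cells[:i] + (1,) + cells[i + 1:]
            assert local_rule(flipped) >= local_rule(cells)
```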

This monotonicity has some important consequences. In particular, it implies that if you have a cluster of cells in the "on" state that is surrounded on all sides by cells in the "off" state (or vice versa), and if this cluster is currently stable (in the sense that applying the CA update rule once will not lead to any cells in the cluster changing state), then it will in fact be forever stable regardless of what else happens elsewhere on the lattice.

This is because the only way that events elsewhere could possibly affect the cluster is by changing the state of one or more cells surrounding it. And since all those surrounding cells are in the "off" state while the cells in the cluster are in the "on" state, monotonicity ensures that changing the state of any surrounding cells to "on" cannot cause the future state of any cell in the cluster to change to "off". (Of course the same argument also applies mutatis mutandis to clusters of "off" cells surrounded by "on" cells.)

(In fact, you don't really need the cluster of "on" cells to be actually surrounded by "off" cells, or vice versa — all that's required for stability is that the cluster would be stable even if all cells surrounding it were in the opposite state.)
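
In code, that relaxed stability test only needs to look at the cluster itself. For the majority rule it boils down to every cluster cell having at least 5 of its 9-cell neighborhood (itself included) inside the cluster; conveniently, the same count works whether it's an "on" cluster surrounded by hypothetical "off" cells or vice versa. A sketch (the coordinate convention and helper name are mine):

```python
def cluster_is_stable(cluster):
    """True if a finite set of same-state cells keeps its state after one
    update even when every cell outside the set is in the opposite state.

    `cluster` is a set of (x, y) coordinates.  For an "on" cluster, a cell
    with >= 5 of its 9-neighborhood inside the cluster stays on; for an
    "off" cluster the same count means at most 4 of its 9 cells are on, so
    it stays off.  By monotonicity, such a cluster then stays put forever.
    """
    for (x, y) in cluster:
        inside = sum((x + dx, y + dy) in cluster
                     for dy in (-1, 0, 1) for dx in (-1, 0, 1))
        if inside < 5:
            return False
    return True
```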

Thus, in general, to determine the final state of a cell it suffices to simulate the time evolution of its surrounding cells until it becomes part of such a stable cluster.

One way to do this in (almost surely) finite time is to treat the sequence of 2D lattices at successive time steps as forming a 3D lattice of stacked 2D slices, and to calculate successive "pyramid-shaped" sections of this 3D lattice consisting of the states of the central cell up to time step n, its neighbors up to time step n − 1, their neighbors up to time step n − 2, and so on. At regular intervals, examine each layer of this growing pyramid to see if any of them includes a stable cluster (in the sense described above) containing the central cell.
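
Here's a sketch of that search, reusing `step` and `cluster_is_stable` from the snippets above (the `get_initial` callback, the radius cap, and the flood-fill helper are illustrative choices of mine). The key bookkeeping fact is that after t update steps of a patch of radius r, only the cells within radius r − t of the centre are still exactly what they would be on the infinite lattice; that shrinking window is the "pyramid":

```python
def find_cluster(patch, cx, cy, radius):
    """Moore-connected component of cells sharing the state of (cx, cy),
    restricted to the exact window of the given radius around that cell.
    Returns None if the component reaches the edge of the window, since it
    might then extend beyond what we can see."""
    state = patch[cy][cx]
    seen, stack = {(cx, cy)}, [(cx, cy)]
    while stack:
        x, y = stack.pop()
        if max(abs(x - cx), abs(y - cy)) == radius:
            return None
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                nx, ny = x + dx, y + dy
                if (nx, ny) not in seen and patch[ny][nx] == state:
                    seen.add((nx, ny))
                    stack.append((nx, ny))
    return seen

def final_state_of_origin(get_initial, max_radius=64):
    """Try to determine the final stable state of the cell at (0, 0).

    get_initial(x, y) returns the initial 0/1 state of any cell.  The search
    widens until it finds a stable cluster containing the origin or gives up
    at max_radius; this "almost surely" succeeds eventually, but no finite
    max_radius is guaranteed to be enough.
    """
    r = 4
    while r <= max_radius:
        size = 2 * r + 1
        patch = [[get_initial(x - r, y - r) for x in range(size)]
                 for y in range(size)]
        for t in range(r):
            # After t steps, only cells within radius r - t are exact.
            cluster = find_cluster(patch, r, r, r - t)
            if cluster is not None and cluster_is_stable(cluster):
                return patch[r][r]   # this state can never change again
            patch = step(patch)
        r *= 2                       # widen the pyramid and retry
    return None                      # undecided within max_radius
```

A cluster that touches the edge of the current exact window is simply skipped here; doubling the radius lets a later pass see it in full.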


Assuming that the central cell in fact eventually becomes part of such a stable finite cluster (which almost all cells on a randomly initialized lattice eventually do under this rule; proof left as exercise!), this method will eventually find that cluster. However, depending on the initial states of the surrounding cells, such stabilization could take an arbitrarily long time and the final state of the cell might depend on the states of other cells arbitrarily far away.

For example, let's assume that the cell we're interested in happens to be located in a region of the lattice where the initial cell states, just by chance, are arranged like the squares on a checkerboard: the four orthogonal neighbors of each cell are in the opposite state, while the four diagonal neighbors are all in the same state as the central cell. Clearly such a checkerboard arrangement is locally stable, since each cell is (barely!) in the majority among its neighbors, but any deviations in either direction from this precarious balance around the edges of the checkerboard will propagate as a chain reaction throughout it. Thus the final stable state of any particular cell on the checkerboard will depend on the state of cells surrounding the checkerboard region, which could be arbitrarily far away.
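
A tiny numerical illustration of that precarious balance, reusing the `step` sketch from above (the patch size and coordinates are arbitrary):

```python
# Checkerboard patch: each cell agrees with its 4 diagonal neighbors and
# disagrees with its 4 orthogonal ones, so every interior cell wins the
# majority vote 5-4 and the pattern is (barely) stable.
n = 8
board = [[(x + y) % 2 for x in range(n)] for y in range(n)]
assert step(board) == board

# Flip a single cell and the 5-4 balance tips: after one step the flipped
# cell stays flipped and its four diagonal neighbors flip too, and the
# imbalance keeps propagating outward as a chain reaction.
board[4][4] ^= 1
board = step(board)
assert board[4][4] == 1
assert board[3][3] == board[3][5] == board[5][3] == board[5][5] == 1
```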

Ilmari Karonen
  • Thank you, this solved my problem really well. Can we calculate the minimum range needed to almost surely determine its final state by any means? Or should I try to find an optimal value? –  Apr 29 '21 at 04:36
  • @mwit30room8: For "almost surely" in the mathematical sense, there is no finite range that will suffice: what "almost surely" means in this case is that the success probability tends to 100% as the range tends to infinity. If you fix some definite success probability (say, 95% or 99% or 99.9999%) that you consider good enough, you could estimate the range needed to achieve that probability. (This could be calculated exactly, at least for small ranges, but it's probably easier in practice to just use random sampling.) – Ilmari Karonen Apr 29 '21 at 12:18
  • BTW, I amended my answer slightly to note that a cluster of cells in one state doesn't actually have to be fully surrounded by cells in the opposite state in order to count as stable in the relevant sense. All that's really needed is for the cluster to remain stable even if all surrounding cells *would be* in the opposite state, whether they actually are or not. This is relevant because it can allow you to find smaller clusters, and thus use a smaller range. – Ilmari Karonen Apr 29 '21 at 12:24
  • When I think about it: for 0 iterations you need a radius of 1 (the cell itself), for 1 you need 2 (its neighbors), for 2 you need 3, and so on. That's the range the cell can get data from. Now the point is to find *when* it becomes stable. Is that just "almost surely" (from trials), or can we calculate it precisely (with some kind of probable error that we can deal with)? –  Apr 29 '21 at 12:29
  • For anyone who may ask this question in the future: some structures may be self-sustaining but not stable. There are examples in some rules (in this case, the long rectangular path of width 2 is stable except at its ends). So, to calculate a certain generation you need a certain radius, but for the stable state no fixed radius is enough. Calculating the probability that it is stable is maybe an even harder problem. –  May 01 '21 at 11:40