At this point, perhaps we would like to attempt a variation of the game of Life, under the impression that it could be much more challenging and entertaining. Rather than starting out with a specific pattern and watching it evolve as time goes by, the object of the new game is to work backwards in time, starting out with a configuration we already have on our computer screen. It will be like playing a videotape backwards, with one exception: we will assume that the same set of rules that allowed our pattern to come into being remains equally valid when time runs backwards, allowing us to reconstruct prior events step by step. The holy grail of our game will now be to answer the question: if we start out with a fully developed pattern on our computer screen, can we find the original “primeval” pattern which gave rise to the more intricate, developed pattern? Notice that what we are really attempting to do is, given a fully developed pattern, determine the initial conditions that gave rise to that pattern. This turns out to be much more difficult than we may have thought at first, owing to the uneasy complication that two or more differing initial configurations can eventually evolve into the same pattern. Since the pattern on our computer screen had to evolve from just one initial configuration, in order to work backwards we need another rule that will allow us to make a selection among several possible choices. If we are biased by the notion that we expect complex systems to evolve from simpler ones, then we may introduce another rule, a completely arbitrary one, stating that when faced with two or more differing prior configurations we should pick “the simplest one” as the most probable (and deciding which of two differing configurations is the simpler one poses another headache altogether!)
Perhaps now the reader can see more clearly why the inductive mode of reasoning is fraught with land mines. When we work backwards, instead of threading back along the “right path” we may actually end up traveling on a road that leads us far astray, not out of the labyrinth but even deeper inside it. The alternative, however, is simply to give up and abandon our quest. Whether it is better to remain completely ignorant about something than to run the risk of being wrong about it while exploring it is an issue that has always generated intense debate among philosophers and scientists. Historically, when faced with these two choices, either to stand forever at the crossroads doing absolutely nothing or to venture down one of the roads even at the risk of traveling a wrong path, man has usually decided for the latter. Such appears to be his destiny.
Let us assume that we start out with the fifth generation of the blinker we just saw in figure 6.5 and try to work backwards. An alternative, equally valid configuration that could lead to the horizontal pattern of the fifth generation in figure 6.5, out of many possible ones besides the pattern of figure 6.4, would be the following:
And we have already started out wrong!
Another possibility is that by sheer luck we guess correctly the three vertically arranged cells of the fourth generation that preceded the fifth generation. At this point, we might mislead ourselves into thinking that the pattern has been oscillating forever between the configuration of the fifth generation and the configuration of the fourth generation, the way a blinker should do, and that the initial configuration must have been one of the two. But if we take a look at figure 4.1, the real initial configuration with which we first started, we can see that it does not resemble anything close to a blinker.
Thus, even on a cellular automaton as simple as the game of Life, it is next to impossible for us to go backwards in time and figure out, given any specific pattern whatsoever, what the initial configuration of the first generation which led to that final pattern might have looked like. After only a few generations have elapsed, the only one who knows for sure what the first generation looked like is the one who specified its initial conditions.
At first, it seems that this limitation on our possible knowledge of the prior history of cellular automata extends only to situations and phenomena modeled in “discrete” or finite steps. After all, if something such as the evolution of our universe is modeled with an equation or a set of equations based upon variables that vary continuously, then in principle we can apply those equations to move backwards in time as far as we may want to go, limited only by the accuracy with which we carry out our calculations. Unfortunately, as we saw in the previous chapter, without an infinite computer at our disposal and a knowledge of the current conditions of unlimited accuracy, the number of possible scenarios grows without bound as we go backwards; in going from a discrete model to a continuous one we have not gotten rid of that limitation, we have only shifted it somewhere else.
The reader at this point may be getting the wrong impression that any knowledge about the origin of the universe is far beyond our reach, limited as we are to doing math with a finite level of accuracy and to the accuracy with which our models can describe physical phenomena. Nothing could be farther from the truth. There is still a lot we can learn, provided we give up on trying to describe everything down to the minutest detail, and provided we know where to look for answers. Take for example the attractive force between the Earth and the Moon. An exact application of Newton’s laws of motion would require us to calculate the attractive force between every single atom on the Earth and every single atom on the Moon, specifying the exact location of every atom and adding up their combined contributions. However, if we assume for computational purposes a perfectly valid idealization, namely that the entire mass of the Earth can be taken to be concentrated at its center (there is mathematical justification for making this assumption, although we know from the very beginning that it is just an idealization), and apply the same idealization to the Moon, then the only force we need to compute is that between two point objects. With such assumptions we are able to estimate such things as the total mass of the Earth, the total mass of the Moon, and the trajectories of comets, and to derive such things as Kepler’s laws for the orbits of the planets around the Sun. From the very large to the very small, on countless occasions we have been able to extract a great deal of information by indirect and ingenious means with a good degree of accuracy. We have been able to estimate the mass of an electron and its charge even though nobody has ever seen an electron directly.
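As a rough numerical illustration of the point-mass idealization, the sketch below applies Newton’s law of universal gravitation to the Earth and the Moon treated as two point objects. The numerical values used here are commonly tabulated approximations, not figures taken from the text:

```python
# Newton's law of universal gravitation applied to two point masses.
# The constants below are commonly tabulated approximate values (an
# assumption for illustration; the text quotes no figures).
G = 6.674e-11        # gravitational constant, N*m^2/kg^2
M_EARTH = 5.972e24   # mass of the Earth, kg
M_MOON = 7.348e22    # mass of the Moon, kg
R = 3.844e8          # mean Earth-Moon distance, m

def gravitational_force(m1, m2, r):
    """Attractive force between two point masses separated by distance r."""
    return G * m1 * m2 / (r * r)

force = gravitational_force(M_EARTH, M_MOON, R)
print(f"Earth-Moon attraction ~ {force:.3e} N")  # on the order of 2e20 N
```

One line of arithmetic replaces the hopeless atom-by-atom sum, which is precisely the power of the idealization.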
Yes, we know that our results will be approximations that will have to be refined as time goes on and as more data becomes available, and that those results will never be exact. But the point stands: we can still build working models useful for making predictions in some situations, or for inferring past conditions by working back from the present. We will, however, have to restrict ourselves to the big picture, and even then we will always harbor a lingering doubt about whether our model has been accurate enough or has led us unwittingly to the wrong conclusions.
Even though it may be nigh onto impossible to know the exact initial condition that gave rise to a certain complex pattern in a cellular automaton after several generations under a certain set of rules, we can still make an educated guess, first by discarding those alternatives that cannot produce a sophisticated pattern regardless of how many generations go by (we already know that certain initial patterns will end up producing a stagnant scenario from which no further motion can be extracted), and second by imposing additional constraints that were not part of the original game. If we expect a very complex pattern to have emerged from a simpler one, or from a set of simpler patterns that eventually got together and coalesced into the more complex pattern (pretty much the way we expect evolution to have unfolded), then as we attempt to go backwards we can discard those possible intermediate patterns that are more complex than the end result, for we are assuming that throughout the simulation we always pass through increasing levels of complexity and not the other way around (the latter, instead of an evolution, would be a degeneration). Of course, we could be wrong, and in the process of going “upwards” as time goes by there could have been serious setbacks that sent the pattern “downwards” towards simpler states, after which the pattern somehow managed to recover and continue evolving towards higher levels of complexity.
But those intermediate states should be of no concern to us, since the big question we are trying to answer is “Can a certain complex pattern Z be obtained from a simpler pattern Q after several generations have gone by?” If not, then the complex pattern Z must be interpreted as having been the initial condition all along, and this would very much resemble the position of those creationists who, interpreting religious scriptures such as the Bible in the most literal sense of the word, firmly believe that complex structures such as Man did not evolve from prior simpler structures. But if we can indeed show that a certain complex pattern Z was able to evolve from a simpler pattern Q (in a cellular automaton we can always verify this by simply starting out with the simpler pattern Q and, applying the same set of rules, waiting for it to evolve into the more complex pattern Z), then the next big question that comes to mind is “Can the simpler pattern Q be obtained in turn from an even simpler pattern M?” If so, then we have the following possible scenario for the ancestry of our cellular automaton:
M → Q → Z
At this point, we might be wondering: “Is M the simplest possible pattern from which the more complex pattern Z may eventually be obtained, or is there an even simpler pattern?” Again, we may be able to go backwards and obtain an even simpler pattern K from which pattern M can be obtained, and by repeating the same procedure we should arrive sooner or later at the simplest possible pattern A that will eventually evolve into the more complex pattern Z, undergoing gradually increasing levels of complexity just as we would expect from any evolutionary process:
A → C → F → … → K → M → Q → Z
Since any cellular automaton, however big it may be, has a finite number of possible states (and by this we mean all the possible combinations of patterns we are able to draw on the grid), we know we cannot go backwards indefinitely without eventually finding a “primeval” simplest possible pattern A that will eventually evolve into the more complex pattern Z. Of course, it is always possible that through a completely different route there may be an even simpler pattern than pattern A that will also evolve into pattern Z. But whatever route we take while going backwards, we can rest assured that we will eventually arrive at a starting point beyond which we cannot go back any further, at which point we have enough information to reconstruct a possible evolution scenario for pattern Z.
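The backward search described above can actually be carried out by brute force on a very small grid. The sketch below (a minimal Life implementation on a 4-by-4 grid with dead cells assumed beyond the border, a simplification made here to keep the state space small) enumerates every one of the 2^16 possible patterns, steps each one forward a generation, and collects all the predecessors of a chosen target pattern:

```python
from itertools import product

SIZE = 4  # a tiny 4x4 grid; cells beyond the border are assumed dead

def step(grid):
    """Advance a SIZE x SIZE game-of-Life grid by one generation."""
    nxt = []
    for r in range(SIZE):
        row = []
        for c in range(SIZE):
            neighbors = sum(
                grid[rr][cc]
                for rr in range(r - 1, r + 2)
                for cc in range(c - 1, c + 2)
                if (rr, cc) != (r, c) and 0 <= rr < SIZE and 0 <= cc < SIZE
            )
            alive = grid[r][c]
            # Life's rules: birth with 3 neighbors, survival with 2 or 3.
            row.append(1 if neighbors == 3 or (alive and neighbors == 2) else 0)
        nxt.append(row)
    return nxt

def predecessors(target):
    """Every 4x4 pattern that evolves into `target` in one generation."""
    found = []
    for bits in product((0, 1), repeat=SIZE * SIZE):
        grid = [list(bits[r * SIZE:(r + 1) * SIZE]) for r in range(SIZE)]
        if step(grid) == target:
            found.append(grid)
    return found

# Target: the vertical phase of a blinker.
vertical = [[0, 1, 0, 0], [0, 1, 0, 0], [0, 1, 0, 0], [0, 0, 0, 0]]
horizontal = [[0, 0, 0, 0], [1, 1, 1, 0], [0, 0, 0, 0], [0, 0, 0, 0]]
preds = predecessors(vertical)
print(len(preds))            # how many ancestries are consistent with the target
print(horizontal in preds)   # True: the horizontal phase is one of them
```

Already on this toy grid the search only tells us which patterns *could* have come before; picking the one that actually did is exactly the ambiguity the text describes.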
It was previously mentioned that the rules for the game of Life were no accident. Although they look deceptively simple, they were chosen on purpose by John Conway to produce a simulation capable of developing interesting patterns. On a cellular automaton such as the game of Life, where each cell can be “dead” or “alive”, the eight neighboring cells can each also take one of these two possible values. Thus, the total number of possible combinations any given cell and its eight neighbors can take before passing from one generation to the next is 2^9, or 512. For each possible combination there can be one of two outcomes: the cell surrounded by the eight neighboring cells will be “dead” on the next generation, or it will be “alive”. For example, among the many possible rules we could choose from, we could pick one rule from the following two choices:
Choice # 1: If all of the neighboring cells are dead except the one on the top left corner, and the cell is dead, then it will remain dead on the next generation.
Choice # 2: If all of the neighboring cells are dead except the one on the top left corner, and the cell is dead, then the cell will come alive on the next generation.
Among the remaining 511 possible combinations we can start out with, we could also pick another rule from the following two choices:
Choice # 3: If all of the neighboring cells are dead except for the one on top, and the cell is dead, then it will remain dead on the next generation.
Choice # 4: If all of the neighboring cells are dead except for the one on top, and the cell is dead, then the cell will come alive on the next generation.
Thus, each of the 512 combinations can produce one of two possible outcomes, either a dead cell or a live cell, and for each possible outcome we can write down a rule. For the above two possible combinations, we have 2 × 2 = 4 combinations of rules to pick from (e.g. choice # 1 and choice # 3, or choice # 2 and choice # 4, etc.). To get the total number of possible combinations of rules we can write down for this two-dimensional cellular automaton, we have to multiply 2 by itself 512 times, i.e. 2^512. This is an enormous number; written out in full, it is:
13,407,807,929,942,597,099,574,024,998,205,846,127,479,365,820,
592,393,377,723,561,443,721,764,030,073,546,976,801,874,298,166,
903,427,690,031,858,186,486,050,853,753,882,811,946,569,946,433,
649,006,084,096
This is the total number of possible combinations available to us, from which a set of up to 512 different rules for our cellular automaton will emerge. Assuming that we needed only one second to pick each of our 512 rules from the above set of possible combinations entirely at random, with no specific design purpose in mind (except, of course, that we do not end up with any contradicting pair of rules), the time it would take for us to sift through all of the possible combinations would far exceed many, many trillions upon trillions of years. In comparison, the age of the universe has been estimated by some to be around twelve billion years, or
373,248,000,000,000,000 seconds
Clearly, if we had started out right away when the universe was born picking a set of rules for our “simple” two-dimensional cellular automaton at the rate of one rule per second, even with the help of a vast array of billions and billions of the most powerful digital computers working in unison we would still be very far from reaching our goal.
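Python’s arbitrary-precision integers make these figures easy to reproduce. The sketch below evaluates 2^512 exactly and compares it with the number of seconds in roughly twelve billion years, using 360-day years of 31,104,000 seconds each (an assumed convention for the estimate; a 365-day year changes nothing essential):

```python
# Total number of possible rule tables for a two-state, nine-cell
# neighborhood: one of two outcomes for each of the 2**9 = 512 neighborhoods.
total_rule_tables = 2 ** 512

# Seconds in roughly twelve billion years, assuming 360-day years.
age_of_universe_s = 12_000_000_000 * 360 * 24 * 3600

print(total_rule_tables)                   # the 155-digit number printed above
print(age_of_universe_s)                   # 373248000000000000
# Even one random pick per second for the whole age of the universe
# exhausts only a vanishing sliver of the possibilities.
print(total_rule_tables // age_of_universe_s > 10 ** 100)  # True
```

The ratio itself is a number of well over a hundred digits, which is the whole point of the comparison.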
If the selection of rules out of a vast universe of possible combinations is to be guided by a specific design objective, as in the game of Life, then it appears we would need much more than just one second per rule in order to evaluate the impact each choice would have on our growing selection of rules, with the possibility that at some point we could be forced to abandon our evolving design and start from scratch all over again if the design fails to meet our expectations.
How then was a human being named John Conway able to come up with such an interesting two-dimensional cellular automaton as the game of Life within his lifespan? Well, he resorted to some simplifying assumptions. If the position of each neighboring cell on the grid is irrelevant to the application of a set of rules (for example, the “survival” rule in the game of Life only requires two of the neighboring cells to be “alive” for the cell to retain its current state, regardless of whether they are located above, below, beside, or at the corners), then an enormous simplification of the rules ensues. Other simplifying assumptions can be used to discard potential sets of rules that will lead nowhere after a short period of time (such as a set of rules in which the majority of the rules dictate that the cell should die, in which case it is not difficult to see that most or perhaps all initial patterns will die out after the passage of just a few generations). Nevertheless, there are still many possible combinations of rules available, and it should be more than obvious by now that coming up with a set of rules that will result in a two-dimensional cellular automaton with just two states capable of evolving interesting patterns from simpler ones is no sheer accident.
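The magnitude of this position-independence simplification can be made concrete. If a rule may look only at the cell’s own state and the count of its live neighbors (0 through 8), rather than at the exact arrangement of those neighbors, there are just 2 × 9 = 18 distinct cases to decide instead of 512, and hence 2^18 possible rule tables instead of 2^512:

```python
# Position-dependent rules: the outcome may depend on exactly which of the
# eight neighbors are alive, giving 2**9 = 512 distinct neighborhoods.
cases_positional = 2 ** 9
tables_positional = 2 ** cases_positional   # 2**512 possible rule tables

# Position-independent ("totalistic") rules: the outcome depends only on the
# center cell's state (2 choices) and the live-neighbor count (9 choices).
cases_totalistic = 2 * 9
tables_totalistic = 2 ** cases_totalistic   # 262,144 possible rule tables

print(cases_totalistic, tables_totalistic)  # 18 262144
```

A design space of 262,144 candidates can be explored by a patient human; a design space of 2^512 cannot.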
For those who are not yet convinced that the design of a good cellular automaton can be taken as proof that a highly developed intellect had to be behind the selection of the rules, we will now make the challenge of designing a two-dimensional cellular automaton a little more interesting. Let us allow each cell to have more than just two states. We can envision that the number of possible combinations (and the number of possible rules) will grow enormously beyond what has already been shown above. If each cell in question and its eight neighbors can have one of four states (e.g. cyan, magenta, yellow and black), then there can be up to 4^9, or 262,144, possible combinations of cells. Since each one of these combinations can produce on the next generation a cell that will have one of four different states, the final set of rules can be made up from a combination of
4^262,144 possible combinations of rules
This number, obtained by multiplying four by itself a total of 262,144 times, is so large that writing it down explicitly would fill page after page of a book this size with nothing but digits! And yet, only a fraction of those rules can be used for a cellular automaton that will allow the development of intricate patterns from simpler ones.
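We can estimate just how large this number is without ever writing it down. The number of decimal digits of a positive integer N is floor(log10(N)) + 1, so a couple of lines suffice:

```python
import math

# 4**262144 equals 2**524288; count its decimal digits via logarithms
# rather than materializing the number itself.
digits = math.floor(524288 * math.log10(2)) + 1
print(digits)  # 157827 decimal digits
```

Well over a hundred and fifty thousand digits, against the mere 155 digits of 2^512 printed earlier.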
As we just saw, even with some simplifications, such as making the position of each neighboring cell on the grid irrelevant, we are still left with a wide variety of choices. It appears that if our goal is to allow the automaton to start out with some simple initial pattern and obtain from it a much more complex pattern, then, with the final selection of the rules left to us as would-be creators, we would have far too many rules to choose from. However, this first impression might not be entirely correct, and we may not have as many choices at our disposal as we appear to have. We saw in the previous chapter that some initial patterns lead nowhere, producing dull universes where all the cells quickly end up dead or alive. It turns out that the same thing applies to the rules themselves. Stephen Wolfram [yes, the same Stephen Wolfram who created the popular scientific computer program known worldwide as Mathematica], using as his starting point one-dimensional cellular automata, discovered that the possible sets of rules could be grouped into four major classes:
- Rules under which the cellular automaton evolves into some homogeneous state, producing dull universes where all the cells end up dead or alive or where the cellular automaton “freezes” into some permanent configuration.
- Rules under which the cellular automaton evolves into simple separated periodic structures, with the stable repetitive configurations leading nowhere (as in the case of the blinker we saw in the game of Life).
- Rules under which the cellular automaton evolves into totally chaotic patterns where no order seems to emerge and where no order will ever emerge.
- Rules under which the cellular automaton evolves into complex patterns of localized structures.
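Wolfram’s one-dimensional (“elementary”) cellular automata are easy to reproduce. Each cell looks at itself and its two immediate neighbors, so there are 2^3 = 8 possible neighborhoods, and an entire rule set is just an 8-bit number giving the outcome for each. The sketch below steps such an automaton forward one generation; rules 30 and 90 (standard examples, chosen here for illustration) already show different characters when started from a single live cell:

```python
def step(cells, rule):
    """One generation of an elementary CA; cells beyond the ends are dead."""
    padded = [0] + cells + [0]
    out = []
    for i in range(1, len(padded) - 1):
        # Encode the (left, center, right) neighborhood as a number 0..7,
        # then read the corresponding bit of the 8-bit rule number.
        neighborhood = padded[i - 1] * 4 + padded[i] * 2 + padded[i + 1]
        out.append((rule >> neighborhood) & 1)
    return out

row = [0, 0, 0, 1, 0, 0, 0]   # a single live cell
print(step(row, 30))           # [0, 0, 1, 1, 1, 0, 0]
print(step(row, 90))           # [0, 0, 1, 0, 1, 0, 0]
```

Iterating `step` and stacking the rows reproduces the familiar pictures: rule 30 typically develops chaotic, class 3 behavior, while other rule numbers settle into the homogeneous or periodic classes.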
We are mostly interested in rules that belong to the last class. The above classification is purely qualitative, giving us no idea of how one class compares to another. Christopher Langton gave a more precise definition, one that assigns to each set of rules a numerical measure he calls lambda. This measure, which varies continuously through a range of values, allows us to sort Wolfram’s classes according to increasing values of lambda, and when this is done it is found that the classes fall in the following order:
- Class 1
- Class 2
- Class 4
- Class 3
Notice that the fourth class (rules that produce complex patterns of localized structures) occurs only within a narrow range of lambda, as we go from “dull” homogeneous behavior (class 1) to periodic behavior (class 2) to chaotic behavior (class 3). According to Langton, complex patterns appear at a phase transition between periodic behavior and chaotic behavior, and successful cellular automaton simulations of life and intelligence become possible only in this transition zone.
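Langton’s lambda has a simple operational meaning: it is the fraction of entries in the rule table whose outcome is the non-quiescent (“alive”) state. For the game of Life itself it can be computed directly by enumerating all 512 neighborhoods (the value of roughly 0.273 that falls out is consistent with Life sitting in the narrow class 4 band):

```python
from itertools import product

# Langton's lambda: the fraction of rule-table entries that map to the
# non-quiescent state. Enumerate all 2**9 = 512 neighborhoods of the game
# of Life (the center cell plus its eight neighbors).
alive_outcomes = 0
for center, *neighbors in product((0, 1), repeat=9):
    count = sum(neighbors)
    # Life's rules: a live cell survives with 2 or 3 live neighbors;
    # a dead cell is born with exactly 3.
    if count == 3 or (center == 1 and count == 2):
        alive_outcomes += 1

lam = alive_outcomes / 512
print(alive_outcomes, round(lam, 4))  # 140 0.2734
```

Only 140 of the 512 table entries produce a live cell; the other 372 produce a dead one, which already hints at how finely balanced Life’s rules are.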
The important thing to notice here is that, out of the many different possible sets of rules we could try in designing a fine cellular automaton such as the game of Life, not all of the possibilities will lead from simple starting patterns to stable, highly organized ones, so we may not have such a wide selection of rules to choose from after all. Still, separating the wheat from the chaff is something that will certainly require a lot of patience, perhaps some luck, and most certainly some intelligence.
Stephen Wolfram also posed 20 problems that he considers need to be addressed before any major breakthroughs can be accomplished in the field of cellular automata. Among those 20 questions, we can cite the following:
- What is the correspondence between cellular automata and continuous systems?
- How is different behavior distributed in the space of cellular automaton rules?
- How does thermodynamics apply to cellular automata?
- What statistical quantities characterize cellular automaton behavior?
- What is the nature of the infinite size limit for cellular automata?
- How are cellular automata affected by noise and other imperfections?
- What is the analogue of geometry for the configuration space of a cellular automaton?
- How random are the sequences generated by cellular automata?
- What is the correspondence between cellular automata and stochastic systems (systems that involve random variables such as queues of customers at the bank)?
- How common are computationally intractable problems about cellular automata?
To date, very few inroads have been made into the above questions. Regarding the last question, for example, it is suspected that computationally intractable problems are common in many class 3 and class 4 systems, but nobody has yet given a proof of it.
In trying to reconstruct from a given pattern the past history of a cellular automaton, we can make our endeavor much more challenging and interesting by assuming that the number of possible states and the rules under which that certain pattern developed are both unknown to us. Can we still get anything accomplished? Take, for example, the following 10 by 10 grid:
Just by looking at the above pattern, frozen in time, we can tell that the given cellular automaton allows for at least four different possible states (red, green, blue and yellow). Other states besides these may exist, but in arriving at the above pattern they could have been wiped out of the grid from a previous pattern leaving no trace. The only hope we would have of uncovering the existence of other states would be to let the above cellular automaton evolve for several more generations with the expectation that those “hidden” states might reappear, and if they don’t reappear we might consider those states to be redundant.
Once we have established the existence of four different states just by looking at the current pattern, we notice that the pattern exhibits a good deal of symmetry and complexity. By this we mean that if we were to place entirely at random sixteen red marbles, twelve green marbles, six yellow marbles and six blue marbles on the cells of this 10x10 grid (ten horizontal rows by ten vertical columns), the odds of getting this particular pattern would be quite small (it can be shown mathematically that the odds are astronomically small). Thus, unless this pattern represents the very first generation, waiting to be acted upon by the mechanical application of whatever rules this cellular automaton may obey, it may be safe to assume that the pattern grew out of an evolutionary process from a simpler pattern. This educated guess allows many of the possible sets of rules to be discarded, narrowing our search by striking from the list all those rules that give rise to dull universes, those that give rise to chaotic patterns, and perhaps those that give rise to stable repetitive configurations. From here on, the reconstruction of prior history becomes just a matter of trying out the remaining possibilities, daunting though the task may be.
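The claim that the odds are astronomically small can be checked with a short multinomial count. Placing sixteen red, twelve green, six yellow and six blue marbles on the 100 cells of the grid (with the remaining 60 cells left empty), the number of distinguishable arrangements is 100! / (16! 12! 6! 6! 60!), which we can evaluate by choosing the cells for each color in turn:

```python
from math import comb

# Number of distinct ways to place 16 red, 12 green, 6 yellow and 6 blue
# marbles on a 10x10 grid, leaving the other 60 cells empty: a multinomial
# coefficient, computed as a product of binomial coefficients.
arrangements = comb(100, 16) * comb(84, 12) * comb(72, 6) * comb(66, 6)
print(f"{arrangements:.3e}")  # on the order of 10**48 arrangements
```

Any one specific pattern therefore has roughly one chance in 10^48 of turning up by a single random placement, which amply justifies the word “astronomically.”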
If the above cellular automaton happens to be running on a computer simulation at the moment we get to see the above pattern, then even if we do not have access to the internal details of the software under which it is running (which would reveal the rules being used throughout), as long as we are allowed to watch the cellular automaton evolve from the above configuration through several generations we should be able to recover, one by one, each of the rules being obeyed by the cellular automaton just by looking at how each colored cell behaves from one generation to the next. Indeed, this is how science obtains many of the rules we call “laws of nature”.
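This rule-recovery procedure is easy to demonstrate on a one-dimensional automaton (a simplification assumed here for illustration; the text’s example is two-dimensional and multi-colored). Below, we watch an automaton whose rule we pretend not to know evolve for a number of generations, record which neighborhood produced which outcome, and thereby reconstruct the rule table entry by entry, exactly as an observer of “laws of nature” would:

```python
def step(cells, rule):
    """One generation of an elementary CA; cells beyond the ends are dead."""
    padded = [0] + cells + [0]
    return [(rule >> (padded[i - 1] * 4 + padded[i] * 2 + padded[i + 1])) & 1
            for i in range(1, len(padded) - 1)]

# Pretend rule 110 is the hidden law of this little universe.
HIDDEN_RULE = 110

# Observe the automaton evolving, recording neighborhood -> outcome pairs.
observed = {}
row = [0, 1, 1, 0, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0]
for _ in range(20):
    nxt = step(row, HIDDEN_RULE)
    padded = [0] + row + [0]
    for i in range(1, len(padded) - 1):
        neighborhood = (padded[i - 1], padded[i], padded[i + 1])
        observed[neighborhood] = nxt[i - 1]
    row = nxt

# Every recovered table entry agrees with the hidden rule; neighborhoods
# that never occurred remain unknown, much like the "hidden" states above.
for (l, c, r), outcome in sorted(observed.items()):
    assert outcome == (HIDDEN_RULE >> (l * 4 + c * 2 + r)) & 1
print(f"recovered {len(observed)} of 8 rule-table entries")
```

Note that the recovery is only as complete as the observations: a neighborhood that never appears on screen leaves a gap in the reconstructed rule table, just as the text warns.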
There is another important datum we may infer from a cellular automaton in progress after many generations: the cleverness of its creator. If the patterns are stagnant and going nowhere (class 1), or are completely chaotic (class 3), it is very likely that the messy behavior was the result of poor planning by an inexperienced dilettante, or of an experimenter taking the naturalist approach and hoping that order would somehow arise out of the chaos. But if the patterns have grown into some highly structured complex configuration, or are on their way to evolving and acquiring complex shapes, then perhaps the creator took the engineering approach and had some specific purpose in mind that is now being realized. Of course, it is also possible that a person specifying initial patterns may discover, out of sheer luck, a very interesting initial pattern that produces extremely complex and interesting outcomes, and this is likely to happen in small cellular automata with simple rules (as in the case of the R-pentomino in John Conway’s game of Life). But as the cellular automaton becomes more and more complex, the odds of digging up interesting initial patterns by sheer accident become smaller and smaller, and in the long run those odds may be no better than the odds of getting a computer printing characters at random to print out the string “THE HOUSE ON THE PRAIRIE”. Yes, such a thing can happen. It is possible, just as it is possible that an honest penny thrown a billion times will come up heads 100% of the time (or tails). The rules of probability do not forbid such offbeat circumstances from happening. But it is most likely that you will never see such a thing happen in all your lifetime. Don’t bet your life on it.