When a cellular automaton is run, we have already seen that certain rules combined with certain initial patterns give rise to complex, intricate forms. Increasing the number of states that any cell can possess enormously enlarges the variety of patterns we can obtain as the cellular automaton evolves. Almost as in real life, shapes can develop from very simple initial configurations into more elaborate forms that move across the screen, eat neighboring forms, and combine with other forms to produce forms more complex still. Since the cellular automaton is fully deterministic, once we have identified those combinations of rules and initial patterns that produce the most highly organized patterns among all of the possible combinations, we must accept the fact that every time the simulation is run again from the very start with the same combination of rules and initial patterns, we will always obtain the same complex structures we expect to appear on the screen. Thus, starting from a certain set of carefully chosen initial conditions, the appearance of those patterns that resemble primitive life forms is something that is destined to occur. It cannot be otherwise.
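As a minimal illustration of this determinism (a sketch in Python, using Wolfram's well-known one-dimensional rule 110 rather than any rule discussed in the text), running the same rule twice from the same seed reproduces the entire history, generation for generation:

```python
# Minimal sketch: a one-dimensional, two-state cellular automaton driven by a
# Wolfram-style rule number.  Running it twice from the same seed with the same
# rule reproduces the history bit for bit.

def step(cells, rule):
    """Compute the next generation of a 1-D binary CA under a Wolfram rule."""
    n = len(cells)
    nxt = []
    for i in range(n):
        left, center, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
        neighborhood = (left << 2) | (center << 1) | right   # value 0..7
        nxt.append((rule >> neighborhood) & 1)               # look up the rule bit
    return nxt

def run(seed, rule, generations):
    history = [seed]
    for _ in range(generations):
        history.append(step(history[-1], rule))
    return history

seed = [0] * 40
seed[20] = 1                       # a single "live" cell in the middle
first = run(seed, rule=110, generations=50)
second = run(seed, rule=110, generations=50)
assert first == second             # same rules + same seed -> same structures, always
```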
However, having met the challenge of designing a sophisticated cellular automaton that allows the likeness of primitive life forms to emerge, we could pose ourselves an even bigger challenge, one so demanding that it might seem impossible to meet: Can we design a very elaborate cellular automaton that at some point in time will contain patterns capable of producing their own designs, engendering other patterns completely different from all of the patterns that developed before them, patterns perhaps of even greater complexity? In other words, can we vest a cellular automaton with the capability of evolving life forms that themselves have the capacity not just to survive but also to create? We are talking here about the possibility of complexity producing a new and different type of complexity entirely on its own, with no help from us.
And what about the second law of thermodynamics?
In real life, evolution is able to stack some odds in its favor by drawing on vast amounts of time and allowing many slightly different new combinations to be tested and tried out in the battlegrounds of daily survival. But evolution has its limitations: left entirely to itself it would not get much farther than producing a few primitive protozoa over millions upon millions of years, and in order to evolve more complex structures it has to receive the help of nonequilibrium thermodynamics, which, using the magic provided by nonlinearity in certain processes, gives the push necessary to produce mammals, fish, reptiles, monkeys, and all other sorts of life forms. But none of these life forms, as sophisticated as they may be, is able to create, using its survival instincts, any elaborate form of tool, much less to gather together to build a rocket or to manufacture an airplane.
This apparent surge of newer complexity out of complexity of a different kind would seem to pose a direct threat to the second law of thermodynamics. For if the creation of order somewhere must come at the expense of the destruction of order somewhere else, then as complexity increases, its demands for fresh usable energy should keep growing without bound. Even the design of a simple artifact such as a power supply for a television set would thus require the burning of a lot of coal or petroleum; yet even an amateur engineer can specify promptly, in a matter of just a few weeks (or perhaps just a few days if he is skilled enough), the components (power transformer, filter capacitors, regulating circuitry, etc.) required to build that power supply, and at no time will his brain be “overheating” because of the work being done mentally to carry out the design. The power consumed by his brain for the entire design process is but a small fraction of the two thousand or so calories he needs to keep his body working on any single day. How can all this order emerge with such a small power consumption? And this apparent creation of order does not come at the expense of the destruction of order somewhere else, for the body of the amateur engineer will still require those two thousand daily calories regardless of whether he is designing the power supply or not. What then happened to the second law of thermodynamics?
In order to give some qualitative measure of the destruction of order, we came up with the word entropy, stating that all natural isolated systems (well, almost all) go from a state of lower entropy (greater order) to a state of higher entropy (greater disorder). Yet here we have a very peculiar phenomenon: the apparent creation of an enormous amount of order almost out of nowhere, an order of rising complexity that does not necessarily come at the expense of destroying order somewhere else.
On planet Earth, only Man has this capability; it is possessed by no other life form.
As life begins to evolve from more primitive life forms, its orderly structure reflects a lower degree of entropy than that of its surroundings, yet primitive life forms are incapable of generating any more order than the order they have already acquired through evolutionary processes. As life forms become more complex, and their level of order begins to increase in comparison to less developed life forms, they are still unable to create even the simplest of tools. But at a very specific stage in the evolutionary process, the sparks begin to fly, and all of a sudden complexity begins to surge everywhere. Elaborate buildings rise, complex irrigation systems are carved out, universities are born, libraries are opened, and information begins to multiply exponentially. Thus, when life has achieved a certain critical level of order, something that is nothing short of miraculous takes place, and the second law of thermodynamics appears to be in full retreat. This minimum level of order, a creative order that can in turn beget newer complex order of a different kind, will be given a name here, and, for lack of a better term, we will identify this level of order with the symbol ζ (the lowercase Greek letter zeta, the counterpart of “z”, the last letter of our Roman alphabet).
Edward Fredkin was able to show that even though energy is still needed for information storage and retrieval, the energy required to carry out any given piece of information processing can be reduced arbitrarily; there is no lower limit to the amount of energy required. This important result can be stated as follows:
The energy required to process information can come arbitrarily close to zero.
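To put a number on the conventional thermodynamic floor that this result sidesteps: Landauer's bound applies only to the erasure of a bit, which must dissipate at least k_B·T·ln 2, while logically reversible operations of the kind studied by Fredkin and his colleagues face no such floor. A quick back-of-the-envelope computation (an illustrative sketch, not part of Fredkin's argument):

```python
import math

# Landauer's bound: the minimum energy dissipated when ONE bit is erased.
# Logically reversible operations are not subject even to this tiny floor.
k_B = 1.380649e-23          # Boltzmann constant, joules per kelvin
T = 300.0                   # roughly room temperature, kelvin

landauer_joules = k_B * T * math.log(2)
print(f"{landauer_joules:.2e} J per erased bit")            # ~2.87e-21 J
print(f"{landauer_joules / 4184:.2e} kcal per erased bit")  # ~6.9e-25 dietary calories
```

Even this floor amounts to roughly 3·10⁻²¹ joules per erased bit at room temperature, and a computation arranged to be reversible can in principle dissipate even less, which is the content of the statement above.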
In a way, Fredkin's result confirms something that we have suspected all along. Even though the daily physical activity of an adult human being may require an energy consumption ranging anywhere between one thousand and two thousand calories, the brain that is designing the next generation of supercomputers, or conceiving the next major musical trend, requires only a tiny additional amount of energy to do its job. We already see this happening in the case of personal computers: they are becoming more and more powerful, and while their information-processing power is increasing exponentially, their energy consumption is not. This conclusion is remarkably important for the long-term survival of any of our offspring as the Universe continues to age. It implies that even as the Universe continues to “cool down” and expand, the mental processes that make awareness possible can still continue to thrive for a very long time to come, with practically no consumption of whatever energy may be left after several trillions of years. With temperatures everywhere in the Universe approaching absolute zero and with almost all of the stars dying out, our reliance upon our current physical bodies will have to give way to some new developments that we cannot even imagine at the present time. Yet our basic instinct for survival is so strong that, provided Man does not destroy himself, we may begin to evolve into forms requiring less and less of the sort of physical activity that demands a heavy consumption of energy, while the mental information-processing activity that can take place with very minute amounts of energy becomes stronger and stronger.

The founder of cybernetics, Norbert Wiener, proposed long ago that instead of the old model of life based upon energy transactions in biochemical processes at the cellular level, we should interpret life as information transactions at all levels, a philosophy that makes more sense as we continue to evolve as a global society. We may be close to the dawn of a new reality that we could not have anticipated before, a new reality in which complexity arises out of chaos, and once a certain level of complexity has been reached, that level of complexity will be able to expand exponentially on its own, meeting successfully all sorts of challenges posed by its surrounding environment, giving full meaning to the old sayings “knowledge is power” and “the truth shall set you free”. And this exponential multiplication of order happens only when the complexity of the life form that carries out the activities of sophisticated information processing has reached the level of maturity we call here order ζ. So far, only Man has been granted this level of order here on Earth. This level of order cannot be considered anything less than the fundamental difference between merely surviving and creating: slightly below order ζ the life form is happy just to reproduce and survive, responding solely on instinct; whereas slightly above order ζ the life form gets involved in activities that have absolutely nothing to do with survival, such as developing the “useless” mathematics of number theory, “uselessly” scoring orchestral works such as Tchaikovsky’s Nutcracker Suite, writing “useless” dramas like Hamlet, erecting “useless” monuments like the Eiffel Tower and the Statue of Liberty, and even attempting to create order ζ from scratch.
Once order ζ has been reached, even if the biological hardware has ceased to evolve, there is an inherent capacity for cultural evolution. Our genetic makeup has remained essentially the same for at least the past 40,000 years, yet our knowledge keeps growing exponentially.
At this point, an informed reader may be wondering: Why resort to cellular automata in trying to design an artificial mind? Why not use more traditional approaches, such as writing the kind of long computer programs that have already produced impressive results? Before justifying the use of cellular automata in our attempts to model a working mind, we must first review some basics regarding our own brains.
First, although similar in shape, no two human brains are built exactly alike. But it goes much deeper than this. If we consider a human brain to be similar in some respects to a giant three-dimensional cellular automaton, then we must accept the fact that all the initial patterns thrown into the brain in its early stages of development, plus all the additional patterns thrown in thereafter over the course of many years, will be unique to each brain. No two brains in this Universe accumulate exactly the same set of experiences; no two brains have the same patterns stored in them. Assuming that the brain, like any other cellular automaton, is a highly nonlinear device, then precisely because of that nonlinearity no two brains will ever develop the same outcome, even if all human brains had been built identical to each other. The fact that human brains have a myriad of differences in the way their neurons are interconnected only makes the long-range outcome even more uncertain, but it must be realized that nonlinearity lies at the very root of the enormous variety of possible outcomes. With nonlinearity firmly entrenched, the smallest difference in the initial patterns can lead to widely different outcomes as time goes on. And just as Edward Lorenz found out in the early 1960s in his attempts to model the weather, exact prediction of the long-term behavior (or even the short-term behavior, for that matter) of human beings becomes impossible; something that psychologists and married couples have known all along, and that has forced them to settle for some kind of compromise when helping patients or trying to iron out their differences (psychologists look for general behavioral patterns that seem to recur “most often” and apply standard therapies that seem to work “most often” in “most of the cases”, but results are never 100% guaranteed).
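The role of nonlinearity here can be illustrated with a toy computation (a sketch using the logistic map, a standard textbook example of a nonlinear system; it is in no way a model of neurons or brains): two starting values that differ by only one part in a billion track each other for a while and then part company completely.

```python
# Sensitive dependence on initial conditions in a simple nonlinear system.
# The logistic map x -> r*x*(1-x) is used purely as an illustration.

def logistic_orbit(x0, r=3.9, steps=60):
    orbit = [x0]
    for _ in range(steps):
        orbit.append(r * orbit[-1] * (1.0 - orbit[-1]))
    return orbit

a = logistic_orbit(0.500000000)
b = logistic_orbit(0.500000001)   # differs by one part in a billion

for t in (0, 10, 20, 30, 40, 50):
    print(f"step {t:2d}: {a[t]:.6f} vs {b[t]:.6f}  (gap {abs(a[t] - b[t]):.2e})")
```

The gap, invisible at first, grows roughly exponentially until the two orbits bear no resemblance to each other, which is the essence of Lorenz's discovery.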
A typical human brain has about 100 billion neurons, and each neuron is connected to anywhere from 1,000 to 100,000 other neurons. Donald Hebb proposed that when pairs of neurons are active simultaneously, the connections between them become stronger and the interconnecting pathways become reinforced. It is believed that a single neuron “fires” when it recognizes a particular combination in the incoming signals provided by other neurons. In 1958, Frank Rosenblatt proposed a working model to simulate the behavior of neurons, the perceptron, which was essentially a linear model. But in 1969, Marvin Minsky and Seymour Papert wrote an influential book entitled Perceptrons, in which they proved mathematically several theorems, among them results showing that these linear devices would never be able to solve some seemingly simple problems, such as computing some of the most basic properties of an image. Thus, no machine built with perceptrons, however big, would ever be able to recognize even the letter “O”. This killed most of the interest in perceptrons and artificial neural networks, and the field lay dormant for several years. Clearly, something was amiss, since the human brain, built with nothing else but neurons, has been able to accomplish astonishing intellectual feats. Eventually, a major breakthrough came when several researchers decided it was time to abandon the linear paradigm and explore instead the possibility of nonlinear neural networks. And just as in the case of Ilya Prigogine and his “dissipative structures”, their efforts were rewarded, for they soon developed neural networks that could actually learn, that could actually be trained to recognize symbols and patterns, overcoming the deficiencies criticized by Minsky and Papert (a small numerical sketch of the limitation, and of the nonlinear fix, follows the quotation below). Early advances included Teuvo Kohonen’s self-organizing networks, the counter-propagation network, Stephen Grossberg’s adaptive resonance networks, the backpropagation model and the Hopfield network, just to cite a few. And there is now the widespread belief that the discoveries have just started. Since it was the book Perceptrons that essentially acted as the major stumbling block in the procurement of research funds to study and model computing machinery after the human brain, it is worth noting that almost twenty years after the book was published, one of its authors, a repentant Seymour Papert, expressed the following in 1988 (betraying perhaps some feelings of guilt and remorse):
“Once upon a time two daughter sciences were born to the new science of cybernetics. One sister was natural, with features inherited from the study of the brain, from the way nature does things. The other was artificial, related from the beginning to the use of computers. Each of the sister sciences tried to build models of intelligence, but from very different materials. The natural sister built models (called neural networks) out of mathematically purified neurones. The artificial sister built her models out of computer programs. In their first bloom of youth the two were equally successful and equally pursued by suitors from other fields of knowledge. They got on very well together. Their relationship changed in the early sixties when a new monarch appeared, one with the largest coffers ever seen in the kingdom of the sciences: Lord DARPA, the Defense Department’s Advanced Research Projects Agency. The artificial sister grew jealous and was determined to keep for herself the access to Lord DARPA’s research funds. The natural sister would have to be slain. The bloody work was attempted by two staunch followers of the artificial sister, Marvin Minsky and Seymour Papert, cast in the role of the huntsman sent to slay Snow White and bring back her heart as proof of the deed. Their weapon was not the dagger but the mightier pen, from which came a book – Perceptrons – purporting to prove that neural nets could never fill their promise of building models of mind: only computer programs could do this. Victory seemed assured for the artificial sister. And indeed, for the next decade all the rewards of the kingdom came to her progeny, of which the family of expert systems did best in fame and fortune. But Snow White was not dead. What Minsky and Papert had shown the world as proof was not the heart of the princess; it was the heart of a pig.”
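The limitation Minsky and Papert exploited, and the nonlinear escape from it, can be sketched in a few lines (the weights below are chosen by hand purely for illustration; no training is involved): no single linear threshold unit can compute the XOR of two inputs, but a two-layer arrangement of the very same units can.

```python
# XOR: the classic function no single-layer perceptron can compute.
# A single linear threshold unit outputs step(w1*x1 + w2*x2 + b); a coarse
# exhaustive check over a grid of weights finds no setting that reproduces XOR,
# while a tiny two-layer network built by hand does.

def step(z):
    return 1 if z > 0 else 0

XOR = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}

# 1) No single threshold unit over this coarse weight grid computes XOR.
grid = [i / 2 for i in range(-8, 9)]          # weights and bias in [-4, 4]
single_layer_hits = sum(
    all(step(w1 * x1 + w2 * x2 + b) == y for (x1, x2), y in XOR.items())
    for w1 in grid for w2 in grid for b in grid
)
print("single-layer solutions found:", single_layer_hits)   # prints 0

# 2) Two layers: h1 = OR(x1, x2), h2 = AND(x1, x2), output = h1 AND NOT h2.
def two_layer_xor(x1, x2):
    h1 = step(x1 + x2 - 0.5)      # OR
    h2 = step(x1 + x2 - 1.5)      # AND
    return step(h1 - h2 - 0.5)    # h1 AND NOT h2  ->  XOR

assert all(two_layer_xor(*x) == y for x, y in XOR.items())
```

The grid search is only a demonstration, of course; the impossibility for a single linear unit is a theorem, since no straight line can separate the points (0,1) and (1,0) from (0,0) and (1,1).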
Thus, this time in the field of artificial intelligence, nonlinearity has once again shown its enormous capability to provide results nothing short of miraculous. And, as usual, it has brought along a very high price tag, forbidding all attempts at full predictability and resisting, like a mule, any general theoretical analysis. It is as if Nature had taken care of building an almost impenetrable barrier out of which come countless wonders, but into which the scientific community, if not barred outright, will gain entry only at a very considerable sacrifice.
If the brain is indeed a highly nonlinear piece of machinery, then trying to describe it with linear models is a recipe for failure. And if it is an extremely nonlinear device, then science in general (biology, physics, artificial intelligence, psychology, etc.) may have all but lost its powers of analysis and prediction regarding the mind of Man, losing at the same time all hope of designing and building in the near future a mind like Mozart’s, for no architect, however good, is capable of even conceiving a sophisticated structure if he is completely ignorant of the properties of the most basic materials that will be available to him for the project. This prospect has sent chills through many talented researchers who are coming to grips with the somber reality that they may not witness an artificial conscious mind within their lifetimes, or even within one thousand lifetimes. The only way out would be to address directly the nonlinearity issue and its full ramifications, and this may prove to be a monumental task that can only be compared to a weatherman in Chicago trying to forecast the weather in that city at a certain hour on a certain date five hundred years later. Yes, it may be possible to accomplish such a feat in weather prediction some day. Such a thing could happen. But it is highly unlikely, and for the time being most weathermen consider themselves extremely lucky if they can forecast the weather accurately for even the next five days.
It is interesting to notice that, before the modern science of chaos had evolved, some prominent scholars were already resorting to Gödel’s incompleteness theorem to try to refute the possibility of creating an artificially conscious mind; among them we can cite the Oxford philosopher J. R. Lucas, who in 1961 wrote an article entitled Minds, Machines and Gödel, in which we can read:
“So far, we have constructed only fairly simple and predictable artifacts. When we increase the complexity of our machines, there may, perhaps, be surprises in store for us. He (Alan Turing) draws a parallel with a fission pile. Below a certain ‘critical’ size, nothing much happens: but above the critical size, the sparks begin to fly. So too, perhaps with brains and machines. Most brains and all machines are, at present, ‘sub-critical’ – they react to incoming stimuli in a stodgy and uninteresting way, have no ideas of their own, can produce only stock responses – but a few brains at present, and possibly some machines in the future, are super-critical, and scintillate on their own account. Turing is suggesting that it is only a matter of complexity, and that above a certain level of complexity a qualitative difference appears, so that ‘super-critical’ machines will be quite unlike the simple ones hitherto envisaged … This may be so. Complexity often does introduce qualitative differences. Although it sounds implausible, it might turn out that above a certain level of complexity, a machine ceased to be predictable, even in principle, and started doing things on its own account, or, to use a very revealing phrase, it might begin to have a mind of its own. It would begin to have a mind of its own when it was no longer entirely predictable and entirely docile, but was capable of doing things which we recognized as intelligent, and not just mistakes or random shots, but which we had not programmed into it … we should take care to stress that although what was created looked like a machine, it was not one really, because it was not just the total of its parts: one could not even tell the limits of what it was going to do, for even when presented with a Gödel-type question, it got the answer right … However complicated a machine we construct, it will, if it is a machine, correspond to a formal system, which in turn will be liable to the Gödel procedure for finding a formula unprovable-in-that-system. This formula the machine will be unable to produce as being true, although a mind can see it is true. And so the machine will still not be an adequate model of the mind.”
In his Pulitzer Prize-winning book Gödel, Escher, Bach: An Eternal Golden Braid, published in 1979 (almost twenty years after Edward Lorenz made the discovery that severely limits our weather-predicting capabilities), Douglas Hofstadter went to great lengths to refute the arguments of Lucas, while at the same time staunchly defending the argument that since Man’s brain is essentially a piece of biological hardware, an organic machine, nothing prevents theorists working in the field of artificial intelligence from attempting to build an artificial mind that can display consciousness and awareness – an artificial mind exhibiting order ζ. However, twenty years after Hofstadter’s book was published, with our awareness that many of the most basic mechanisms of the human brain can only be simulated rather crudely by resorting to the daunting field of nonlinearity, and Gödel’s incompleteness theorem notwithstanding, many of these theorists are awakening to the crude reality that they may have hit the end of the road. From here on, every victory will have to be extracted at such a heavy toll that it may be far less expensive to invest all those research funds in trying to make us more intelligent than we are today rather than in trying to build a machine as intelligent as us, especially when there is not even a remote semblance of a theoretical model we might follow when attempting such a difficult task. Still, many artificial intelligence scientists, refusing to accept the idea that nonlinearity may have erected a solid brick wall in their attempts to build an “artificial soul”, have tenaciously and rather naively adopted an ostrich-like attitude, believing that their dream still lies somewhere just around the corner. These attitudes reflect an observation made by Max Planck in 1949, who stated: “A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it.” Other more down-to-earth researchers have settled for much more modest goals, such as trying to model the brain of a snail (forget about coming up with order ζ!) with something like 500 nonlinear neurons (forget about the 100 billion neurons in the human brain!), and they believe that the results of their research will start paying off at least five years from now (perhaps)!
Many attempts have been made, and are still being made, to come up with machines that can display some sort of awareness and creative intelligence. One obvious – and perhaps simplistic – attempt has been based upon writing very long computer programs in some high-level language like LISP or PROLOG that will somehow mimic human behavior. Some progress has been made, as evidenced by sophisticated software such as DENDRAL (1965, the first expert system, used for molecular-structure analysis), SHRDLU (1970, a natural-language system capable of exhibiting some intelligent behavior in the world of children’s blocks), MYCIN (1974, an expert system designed to help medical practitioners in the prescription of antibiotics), CADUCEUS (1982, a medical expert system designed to help in the diagnosis of illnesses according to their symptoms) and RACTER (1984, billed as the first computer program capable of authoring a book). But after using any of these programs for some time, the user cannot avoid the impression that they are nothing more than large interconnected databases serving up “canned” responses programmed into them by their creators. And it may take just a few days for any user to become familiar enough with the programs to be able to anticipate their actions. In effect, all of these programs become fully predictable. And predictability is the very essence of linearity.
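To make the point about “canned” responses concrete, here is a deliberately crude sketch (hypothetical, and not drawn from any of the programs named above) of what such a system boils down to once the conversational surface is stripped away: a fixed lookup table whose every possible behavior can be read off in advance.

```python
# A deliberately crude "expert" responder: nothing but a fixed lookup table.
# Every possible behavior can be read off the table beforehand, which is the
# sense in which such programs are fully predictable.

CANNED = {
    "fever":    "The patient may have an infection; consider running blood tests.",
    "headache": "Recommend rest and hydration; rule out more serious causes.",
    "cough":    "A persistent cough may warrant a chest examination.",
}

def respond(user_input: str) -> str:
    for keyword, reply in CANNED.items():
        if keyword in user_input.lower():
            return reply
    return "I do not have information about that."   # the inevitable stock fallback

print(respond("My patient has a fever and chills"))
print(respond("What do you think about Hamlet?"))    # falls through to the fallback
```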
Only recently have many researchers in the field of artificial intelligence come to the conclusion that the approach of writing long computer programs to mimic the behavior of the human brain has some serious limitations, and that this approach may have gone “as fur as it kin go”. Back in the 1950s, it was argued by some prominent researchers in the field (Alan Turing has already been mentioned in this context) that as soon as the working memories of computers [by "working memory" we are referring here to the type of memory commonly known as RAM – Random Access Memory – as opposed to non-volatile memories such as the one provided by a hard disk drive] became big enough (during the early days computer memories were in the neighborhood of 1,024 bytes), the computers would be able to develop a mind of their own. Half a century later, some personal computers are quickly approaching four gigabytes of working memory, and we do not yet see any computer programmed with a “mind” of its own (even a pesky mosquito seems to have a stronger desire for survival than the most powerful mainframes running the most powerful software built to date); and with mass-produced working memories of ten gigabytes and one hundred gigabytes looming near the horizon, even many formerly optimistic “artificial soulists” are expressing serious reservations about the possibility of actually creating a thinking machine merely by writing longer computer programs and resorting to larger and larger computer memories. It may be because the human brain does not work like a very long linear (sequential) stored program performing one single machine instruction at a time. Although it may very well be outperformed when it comes to number crunching and playing chess (IBM’s “Deep Blue” computer that beat world chess champion Garry Kasparov in 1997 comes to mind), none of the artificial number crunchers and chess players have come even remotely close to displaying the ingenuity and creativity that can be displayed by their creators. On the other hand, it is an undeniable fact that the human brain employs many tiny processors (the neurons) working simultaneously in a highly coordinated fashion; and even if a vast portion of the neural machinery is destroyed because of an injury or a tumor, whatever is left may still be able to function remarkably well (considering the damage done), as opposed to the big sequential machines, where just a very small error in the software or the hardware (whereby a single computer bit of “1” among many hundreds of millions of bits may be mistakenly read or written as a “0”) is usually more than enough to send the powerful giants crashing down. But trying to build an artificial “neural” brain from scratch is currently out of the question, for at present we do not have the technological capabilities to do so, nor do we have the minimum theoretical machinery needed to make some sense of what may be going on inside the nonlinear brain.
So, in trying to come up with some form of artificial intelligence capable of displaying some kind of awareness and creativity, we are forced to look for something else, something placed “in-between”. And the closest thing we have in the shady zone lying between the long sequential stored programs and the neural networks in the human brain is precisely the cellular automaton. Perhaps an “old-fashioned” artificial intelligence scientist could argue, correctly, that at present almost all cellular automata are really nothing more than sequential stored programs executing one machine instruction at a time, and that if a “thinking” machine can be conceived of as a cellular automaton, then nothing prevents us from writing out a long sequential program that will “encode” the awareness manifested by the cellular automaton. This might be true except for some salient differences. First, going from one cellular automaton “state” to the next is not the same thing as going from one single machine instruction to the next under a sequential program. If the cellular automaton is something as simple as a rectangular grid consisting of 1,000 horizontal rows and 1,000 vertical columns, then the cellular automaton cannot be considered to have gone to the “next” state until the “next” states of every one of its one million cells have been computed under the rules assigned. For a complex cellular automaton, it will take the execution of many sequential machine-coded instructions just to determine what the “next” state of the cellular automaton will be. Second, whereas the appearance and evolution of order might be fairly simple for us to analyze and follow visually in a very complex cellular automaton, it might be nigh on impossible, without any such visual cues, to do the same thing during the execution of a long sequential program. And lastly, considering the computational time expenditures involved, for a highly complex cellular automaton there may be no alternative but to actually build hardware with a high level of integration in order to attain the massively parallel computations that the current architecture of most modern-day computers (the so-called Von Neumann architecture) is incapable of providing.
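The first of these differences can be made concrete with a sketch (a generic synchronous update loop, with Conway's Life used only as a convenient example of a local rule): the automaton has not advanced to its next state until the next state of every cell has been computed from the old grid, which is why the results are written into a separate buffer rather than updated in place.

```python
# One CA generation = recompute every cell from the OLD grid before any cell
# "advances".  Writing the results into a fresh grid (double buffering) keeps
# the update synchronous; updating in place would let early changes leak into
# later cells within the same generation.

def next_generation(grid, rule):
    """grid: list of lists of states; rule(grid, r, c) -> next state of cell (r, c)."""
    rows, cols = len(grid), len(grid[0])
    new_grid = [[0] * cols for _ in range(rows)]   # separate buffer
    for r in range(rows):                          # every cell is visited...
        for c in range(cols):
            new_grid[r][c] = rule(grid, r, c)      # ...using only old states
    return new_grid                                # only now has the CA "stepped"

def life_rule(grid, r, c):
    """Conway's Life, used here just as a concrete example of a local rule."""
    rows, cols = len(grid), len(grid[0])
    live = sum(grid[(r + dr) % rows][(c + dc) % cols]
               for dr in (-1, 0, 1) for dc in (-1, 0, 1)
               if (dr, dc) != (0, 0))
    return 1 if live == 3 or (grid[r][c] == 1 and live == 2) else 0
```

For a 1,000 by 1,000 grid the inner loop alone visits one million cells, and with the neighbor counts it performs several million reads of the old grid, before the automaton can be said to have taken a single step; that is the gap between one cellular-automaton generation and one machine instruction.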
Going back to the original question, “Can we design a cellular automaton with the capability of evolving life forms that themselves have the capacity not just to survive but also to create?”, we can state our new challenge as follows:
Can we design a cellular automaton capable of producing from a much simpler pattern at least one pattern that can achieve order ζ?
From the very outset, it should be obvious that the design of such a cellular automaton will be a very complex endeavor. We know all too well that an acceptable solution to the problem of designing order ζ should come out of a very simple set of initial conditions, since – if we consider the universe a gigantic three-dimensional cellular automaton – we ourselves are made up of no more than some fifteen basic elements (carbon, hydrogen, oxygen, nitrogen, calcium, sodium, iron, phosphorus, etc.) taken from a periodic table of natural elements (elements created artificially by Man not being considered) whose number does not exceed one hundred, and the basic physical rules (Maxwell’s equations for electromagnetism, Schrödinger’s wave equation, etc.) governing the interaction between the elements that make up our bodies are certainly within the grasp of the human intellect (otherwise, we would never have been able to find them). The initial conditions that gave birth to this Universe have already solved the problem of creating order ζ, as can be attested by our own existence and our own level of awareness. It is almost inconceivable that a random universe could have solved a problem that is far beyond the reach of the best mathematicians and artificial intelligence theorists of our day.
In contrast, Fermat’s last theorem can be considered a much “simpler” problem. This theorem can be stated simply as follows:
“There are no whole numbers (integers) x, y and z as solutions for the equation:
xⁿ + yⁿ = zⁿ
when n is greater than two.”
When n equals two in the above equation, the equation has infinitely many solutions; for example, the sum of the squares of the whole numbers three and four (which turns out to be 25) is equal to the square of another whole number, which is five. In other words:
3² + 4² = 5²
Thus, for n=2 there is at least one solution. But for n=3 we have the equation:
x³ + y³ = z³
which would state that there are whole numbers x, y, and z such that the sum of the cubes of the two whole numbers x and y equals the cube of another whole number z. Fermat’s last theorem states that no such combination of whole numbers exists for n = 3, and that we will find no such combination no matter what whole numbers we try out, starting from one and going all the way up to infinity. Notice that this is a pretty strong statement. But Fermat’s last theorem goes beyond the case n = 3. It states that the equation has no solutions for n = 4 either, nor for n = 5, nor for any value of n going all the way up to infinity. How the French mathematician Pierre de Fermat (1601-1665) came to this conviction is something that remains a mystery to this date. But as simple as the statement of this problem sounds, the search for its proof occupied many of the best mathematicians in history for more than three centuries, until it was finally settled in 1995 by Andrew Wiles in what is now considered to be one of Man’s greatest intellectual achievements. As recently as the late 1970s, the state of affairs regarding Fermat’s last theorem was not very optimistic, as we can gather by reading the following conclusions in an article by Harold M. Edwards published in the October 1978 issue of Scientific American:
“What, then, is the present status of the work on Fermat’s last theorem? It is certainly one of the most famous unsolved problems in mathematics today, but it is not the object of a great deal of mathematical research because no one knows how to attack it. It has withstood (Ernst Eduard) Kummer’s powerful methods and their many subsequent refinements, and it has still not even been proved for infinitely many prime exponents. If someone were to put forward a concept that promised to bring about some real advances on the problem, it would surely elicit a great deal of interest and activity … There is every reason to believe that in the future, as in the past, attempts to solve Fermat’s last theorem will bring about important advances in mathematics.”
The words put forth by Harold M. Edwards were prophetic, for we can read in the book Fermat’s Enigma the following comment by Simon Singh:
“During (Andrew) Wiles’s eight-year ordeal he had brought together virtually all the breakthroughs in twentieth-century number theory and incorporated them into one almighty proof. He had created completely new mathematical techniques [among the many issues that Andrew Wiles had to resolve we can cite, for example, the Taniyama-Shimura conjecture, which states, "Every single elliptic equation is related to a modular form" (an elliptic equation is an equation of the type y² = x³ + ax² + bx + c, where a, b and c are whole numbers); simple to state, but extremely hard to prove] and combined them with traditional ones in ways that had never been considered possible. In doing so he had opened up new lines of attack on a whole host of other problems.”
From the statement of the problem, a non-specialist might think that in order to verify that Fermat’s last theorem holds true for every combination of whole numbers, however large, we would require the services of an infinite computer capable of trying out all the possible combinations of whole numbers from one to infinity. Yet the problem was eventually solved without the need to resort to an infinite computer, and the solution was published in the May 1995 issue of the Annals of Mathematics. If a problem as tough as Fermat’s last theorem could be solved by the human intellect, then perhaps there is some hope that the theoretical design of order ζ can at least be approximated, even if not solved completely.
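The point about the “infinite computer” can be made concrete with a small sketch: a brute-force search for counterexamples with n = 3 (here only up to a modest bound, chosen arbitrarily) comes back empty, but no finite search of this kind, however large, could ever establish the theorem for all whole numbers; that is precisely why Wiles's proof was needed.

```python
# Brute-force search for counterexamples to x^3 + y^3 = z^3 with small x, y, z.
# Finding none proves nothing about ALL whole numbers -- which is exactly why a
# finite computation cannot settle Fermat's last theorem.

LIMIT = 200
cubes = {z ** 3: z for z in range(1, 2 * LIMIT)}   # lookup table of cubes

solutions = [(x, y, cubes[x ** 3 + y ** 3])
             for x in range(1, LIMIT + 1)
             for y in range(x, LIMIT + 1)
             if x ** 3 + y ** 3 in cubes]

print(solutions)    # prints [] -- no solutions below the bound, as Fermat asserted
```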
It would seem that, in order to find a cellular automaton capable of producing from a much simpler pattern at least one pattern that can achieve order ζ, and to prove that it will indeed generate order ζ, there might be some simplifying assumptions waiting to be discovered which, taken in conjunction with other yet-to-be-discovered mathematical tools, would spare us from the intractable task of trying out every possible combination of rules and initial patterns. However, to date there is not even the slightest hint that such a thing may be possible. If it turns out that this problem belongs to the category of problems known in the technical literature as NP-complete (nondeterministic polynomial-time complete), then for all practical purposes it is out of human reach, since for such problems no approach essentially better than trying out one possible combination of rules and initial patterns after another is known. Considering the number of possible combinations, a number close to infinity, the only way we would be able to mimic Nature would be by stumbling once every million years upon some mediocre imitation of what we are really looking for, but perhaps never really finding it. All NP-complete problems possess a very surprising characteristic: their answers can be easily verified. An answer to an NP-complete problem might come as a simple example or counterexample which, trivial as it might seem to check, may be nearly impossible for us to find on our own, especially considering that even “simpler” mathematical assertions have turned out to be mistakes committed unwittingly by talented mathematicians. Pierre de Fermat, for instance, believed that the following assertion was true:
“Every number of the form
2^(2^n) + 1
when n is a positive integer (n = 1, 2, 3, ...) is a prime number”.
If we try out the first three integers, we might be deluding ourselves into thinking that the assertion is indeed a valid assertion:
2^(2^1) + 1 = 2^2 + 1 = 5
2^(2^2) + 1 = 2^4 + 1 = 17
2^(2^3) + 1 = 2^8 + 1 = 257
and each of these numbers is indeed prime.
Considering that each result grows explosively as we try out higher-valued integers, we might be tempted to take the assertion as valid for all integers from n = 1 all the way up to infinity. However, it turns out that Fermat was wrong. For if we try the integer n = 5, we get:
2^(2^5) + 1 = 2^32 + 1 = 4,294,967,297
and it turns out that this number is not a prime number, since it can be decomposed into the product of two numbers:
4,294,967,297 = (641)(6,700,417)
This factorization was discovered by the Swiss mathematician Euler (1707-1783). How Euler was able to come up with this discovery in an age when no hand-held calculators existed becomes less mysterious once we recall that he had first proved that any prime divisor of this number must have the form 64k + 1, which reduced the search to a short list of candidates that could be tested by hand (641 corresponds to k = 10). However, there is no doubt that if the lowest non-prime number obtainable from the statement had been a number decomposable into factors of no less than some twenty or forty digits each, the answer would have remained a mystery until the development of the digital computer in modern times. Notice that this single factorization found by Euler was enough to discredit the original assertion. However, if somebody had made a stronger statement such as “Every number of the form given will generate no more than 100 prime numbers for every conceivable integer value of n”, then unless we are handed a formal procedure leading to such a conclusion, we may never know whether the statement is true or not, for we have no way of testing its validity for every conceivable integer from one to infinity.
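Both the assertion and Euler's counterexample are easy to reproduce today (a sketch; the primality test below is plain trial division, adequate only for numbers this small):

```python
# Fermat numbers F_n = 2^(2^n) + 1.  The first few are prime, but F_5 is not:
# Euler found the factor 641, which has the special form 64k + 1 that he had
# proved any prime divisor of F_5 must take.

def is_prime(m):
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

for n in range(1, 6):
    F = 2 ** (2 ** n) + 1
    print(f"n = {n}: {F} is {'prime' if is_prime(F) else 'composite'}")

F5 = 2 ** 32 + 1
print(F5 % 641 == 0, F5 // 641)    # True 6700417 -- Euler's factorization
```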
Anyway, if someone came to us saying: “I have the solution to the problem of designing a two-dimensional cellular automaton capable of generating order ζ from a much simpler initial pattern: start out with a rectangular grid of 10¹² horizontal rows and 10¹² vertical columns, allow each cell in the grid to acquire one of fourteen different possible states – which may be represented with different colors –, apply to each generation the same set of twenty different rules that will be handed down to you, start out by placing near the center of the grid an initial pattern consisting of 37,985 cells that will be given to you, and by the time you reach generation number 14,657,312 you will have a pattern that exhibits order ζ, and from there on this pattern will evolve new creations entirely on its own, each creation exhibiting a high level of complexity, a sophisticated level of complexity that only a structure with order ζ can create, and don’t ask me how I got this result!”, then in principle all we have to do to verify the veracity of the statement is to follow the recipe. The solution, as complex as it might appear to be, will certainly be far, far simpler to check than to discover by carrying out a full search through every possibility. Verifying the solution might take us anywhere from a month to ten years (depending upon our current level of technology), whereas trying to find the solution on our own might take us forever, literally.
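This asymmetry between checking a handed-down answer and finding one from scratch is the hallmark of the NP-complete problems mentioned earlier. A sketch with a different, standard toy problem (subset sum, used here only because it fits in a few lines) shows the contrast:

```python
# NP-style asymmetry: verifying a claimed answer is cheap, while finding one
# from scratch may mean wading through an exponential number of candidates.
# Subset sum is used here purely as a toy example.
from itertools import combinations

numbers = [3, 34, 4, 12, 5, 2]
target = 9

def verify(certificate):
    """Verifying the 'recipe': one membership check and one sum."""
    return all(x in numbers for x in certificate) and sum(certificate) == target

def search():
    """Searching ourselves: up to 2**len(numbers) subsets (64 here, 2**300 for 300 numbers)."""
    for r in range(len(numbers) + 1):
        for combo in combinations(numbers, r):
            if sum(combo) == target:
                return combo
    return None

print(verify([4, 5]))   # True -- trivial to check
print(search())         # (4, 5) -- found only after enumerating candidates
```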
From what has been said so far, the reader may have inferred that in trying to come up with order ζ, we are not looking to obtain something with the creativity of Picasso or Beethoven; that is out of the question. We must settle for something far more modest, something that will display a level of intelligence perhaps a little lower than that of early Homo sapiens but perhaps a little higher than that of Pithecanthropus erectus (primitive Man), very much like a four-year-old child. We cannot expect the “creature” to be able to write a long series of poems in whatever “cellular automaton language” it might be able to develop (although perhaps it could develop the ability to do so given enough time and enough memory space). And trying to come up with a series of tests that can be given to the “creature” to probe its creativity and ingenuity will certainly be another major headache.
Before trying to solve the seemingly impossible problem of designing any kind of cellular automaton capable of exhibiting order ζ, let us examine a much “simpler” problem (not that this problem is easy to solve; it is only simpler when compared to designing order ζ): the problem of designing an automaton capable of reproducing itself. We are not demanding that the offspring be any more complex than the progenitor. We will be more than happy if we can design an automaton capable of “building” another automaton in its likeness. This problem was in fact addressed by none other than the Hungarian mathematician John von Neumann, one of the most talented minds of the twentieth century. In particular, von Neumann was trying to address the following issues regarding automata:
Constructibility:
Can an automaton be constructed by another automaton?
What class of automata can be constructed by a single suitable automaton?
Construction Universality:
Is any single automaton construction universal?
Self-reproduction:
Is there a self-reproducing automaton?
Is there an automaton which can both reproduce itself and perform further tasks?
All of the above questions turn out to have affirmative answers, and von Neumann set out to establish them by constructive means, i.e., by actually designing various types of constructing and self-reproducing automata. Von Neumann realized that the above questions lead directly to the next important question:
Evolution:
Can the construction of automata by automata progress from simpler types to increasingly complicated types?
In other words, order ζ!
Unfortunately, von Neumann died before he could complete his work regarding self-reproducing automata. Fortunately, he left enough information in his draft notes to enable another scientist, Arthur W. Burks, to bring von Neumann’s project to completion. Von Neumann solved the problem with a cellular structure in which every cell is capable of acquiring any one of 29 different states. The results of the investigations carried out by von Neumann and completed by Burks are explained in detail in the book Theory of Self-Reproducing Automata, where we find the following:
“The two problems in automata theory that von Neumann concentrated on are both intimately related to complexity. These are the problems of reliability and self-reproduction. The reliability of components limits the complexity of the automata we can build, and self-reproduction requires an automaton of considerable complexity … In Chapter 5 I (Arthur Burks) show how to complete the design of Von Neumann’s self-reproducing automaton … he formulated and partially answered two basic questions of automata theory: How can reliable systems be constructed from unreliable components? What kind of logical organization is sufficient for an automaton to be able to reproduce itself? … His work on self-reproduction also belongs to the theory of complicated automata. He felt that there are qualitatively new principles involved in systems of great complexity and searched for these principles in the phenomenon of self-reproduction … [since self-reproduction is closely related] to self-repair, results on self-reproduction would help solve the reliability problem … Von Neumann believed that the lack of an adequate theory of complicated automata is an important practical barrier to building more powerful machines … He explicitly stated that until an adequate theory of automata exists there is a limit in the complexity and capacity of the automata we can fabricate … Von Neumann speculated that extremely complex systems involve new principles. He thought, for example, that below a certain level, complexity is degenerative, and self-reproduction is impossible … Von Neumann then posed analogous questions about construction … Because of the difficulties in modeling automata construction in a continuous space, he decided to work in a discrete space. In sum, he decided to carry out his automata designs in a 2-dimensional, regular, cellular structure which is functionally homogeneous and isotropic … There are many possible forms of cellular structure which may be used for self-reproduction. Von Neumann chose, for detailed development, an infinite array of square cells. Each cell contains the same 29-state finite automaton. Each cell communicates directly with its four contiguous neighbors with a delay of at least 1 unit of time.”
Thus, thanks to John von Neumann we now know that self-reproducing automata do exist, and that there are in fact many types of self-reproducing automata we can design. As to the possibility of designing an automaton capable not just of self-reproduction but also of showing signs of evolution towards greater levels of complexity, von Neumann never again addressed this issue, realizing perhaps that even if he had lived to be one hundred years old or more, this would be an extremely hard nut to crack, even for one of the greatest mathematicians of modern times.
The fact that the problem of designing self-reproducing automata did not turn out to be an NP-complete problem, and could be solved by resorting to ingenious techniques and mathematical proofs, raises the hope that perhaps the problem of designing an automaton that can exhibit order ζ is also a solvable problem; extremely tough, yes, but solvable. However, a solution is nowhere in sight, and as von Neumann himself stressed, we need an adequate theory of complicated automata before we can even dream of tackling such a major undertaking. At the end of the second millennium, such a theory does not exist, nor does anybody seem to have even the most remote idea of what such a theory might look like. The only agreement there seems to be is that something like order ζ will not be cracked by sheer experimentation; we cannot rely on serendipity alone to tackle such a tough problem, for the number of possible combinations of rules and initial patterns is so big that we could use it to represent infinity itself, yet the number of valid solutions in such a vast universe of possibilities may be so small as to make it impossible to pick them out by random trials. Any solution will most likely come as a result of major intellectual breakthroughs that will rival or surpass those required for the task of uniting general relativity with quantum mechanics or for designing an earth-based nuclear fusion reactor.
Assuming that the problem of designing an automaton capable of exhibiting order ζ has been solved somehow, somewhere, by someone, some interesting observations arise. Can a cellular automaton exhibiting order ζ also exhibit free will of its own? The answer is of course no. The rules of a cellular automaton such as the game of Life are forward deterministic, and the automaton is not allowed to deviate one iota from the inflexible rules that guide its destiny. Once a cellular automaton has been run for many generations, if we run the same automaton again from the very beginning, with exactly the same initial pattern and evolving under the same set of rules, it will follow exactly the same route that we already know is awaiting in its future. Even if an automaton of such complexity were able to acquire an awareness all of its own (and again, this is a very big if that perhaps only the strong advocates of artificial life, “A-Life”, are willing to accept), its belief in free will would be just an illusion. But if we take a close look at ourselves, we should recall that many prominent philosophers and even many theologians have also believed at one time or another in some sort of predestination, and according to them we really have no free will of our own either, constrained as we are to move about under the seemingly inflexible dictum of many natural laws. However, if the automaton exhibits no free will of its own, this is because we have not allowed any element of chance to enter into the rules. If we allow a small element of chance to enter into some of the rules (and this can most certainly be done!), then the automaton could exhibit some unanticipated behavior, selecting one of many different courses of action when confronting certain situations; it would have a capability for choosing. It is interesting to notice that, at the atomic and subatomic levels, matter and energy are governed not by classical mechanics, which is fully deterministic, but by quantum mechanics, which is probabilistic; and since every life form in this universe, however big or small, however smart or dumb, is still made up of those small discrete probabilistic units we call atoms and molecules, the apparent randomness of many decisions taken throughout a lifetime may arise precisely because there really are different courses of action that can be taken, with the decisions taking shape as the probabilistic nature of the microscopic world makes its way upwards into a different level of reality, the reality that ticks inside the crevices of our brains, investing each decision with the power to determine a completely different outcome. Thus, any given decision could reflect either a fully deterministic attitude (there are no other rational options available at the time, our fate is sealed), or a completely random attitude (let’s do whatever comes to mind, let’s toss a coin and leave it up to chance), or a combination of both (let’s take things as they come, one at a time). It all depends on how the rules are written for the automaton that will exhibit order ζ.
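How “a small element of chance” might be written into the rules can be sketched as follows (a hypothetical noisy variant of a Life-like rule, invented here purely for illustration; it is not a rule proposed in the text): with a small probability the cell ignores the deterministic verdict and takes the opposite state, so two runs from the same initial pattern need no longer coincide.

```python
import random

# "A small element of chance" written into the rules: with probability EPSILON
# a cell disobeys the deterministic Life-like verdict and flips.  Two runs from
# the same initial pattern need no longer follow the same history.

EPSILON = 0.01

def noisy_step(grid):
    rows, cols = len(grid), len(grid[0])
    new_grid = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            live = sum(grid[(r + dr) % rows][(c + dc) % cols]
                       for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                       if (dr, dc) != (0, 0))
            nxt = 1 if live == 3 or (grid[r][c] == 1 and live == 2) else 0
            if random.random() < EPSILON:
                nxt = 1 - nxt              # the rule itself now contains chance
            new_grid[r][c] = nxt
    return new_grid

def run(seed, generations=30):
    grid = [row[:] for row in seed]
    for _ in range(generations):
        grid = noisy_step(grid)
    return grid

seed = [[0] * 20 for _ in range(20)]
for r, c in [(9, 9), (9, 10), (9, 11), (8, 11), (7, 10)]:   # a small glider
    seed[r][c] = 1

print(run(seed) == run(seed))   # usually False: same seed, different outcomes
```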
Another observation that can be added is that, even though the simplest possible combination of states for doing any logical computation consists of just two states (on and off, white and black, one and zero, true and false, etc.), and even though this is not just the combination selected by John Conway for his game of Life but also the very foundation of almost every digital computer that has ever been built by Man, John von Neumann opted instead for a cellular automaton consisting not of two different states but of 29 different states in order to design his self-reproducing automaton. This may not have been accidental. Indeed, in order to design an automaton capable of self-replication, a cell having just two states appears to introduce an enormous burden that makes the problem much tougher to solve. William Poundstone seems to agree with this observation, for we can read near the end of his book The Recursive Universe, in reference to Conway’s game of Life:
“It must be stressed again that ‘living’ Life patterns would be huge. And the random field required for spontaneous generation of self-reproducing patterns would be vast beyond imagining. Let’s estimate. We’re interested mainly in the minimum size of self-reproducing Life-patterns … There is Von Neumann’s estimate of 200,000 cells for his cellular self-reproducing machine. But this was in Von Neumann’s 29-state space, not Life. A self-reproducing Life pattern would be much, much larger than 200,000 pixels. [A pixel is the smallest "dot" element available to compose an image on a monitor. Thus, a monitor with a resolution capability of 1024x768 (1024 vertical lines by 768 horizontal lines) would be able to display a total of 786,432 different "picture elements" or pixels. If you approach your monitor closely with a magnifying glass you can actually see each individual pixel.] Two states must do the work of Von Neumann’s 29. Von Neumann’s space was tailor-made for self-reproduction; Life is not … In both the Von Neumann and the Conway automaton, the most fundamental structural unit is the bit. Here, bit means the section of ‘wire’ or position in a glider stream that may or may not contain a pulse/glider. It is that area of the pattern that encodes one bit of information in the pattern’s computer circuitry. In Von Neumann’s model, a single cell encodes a bit. The effective bits of Life computers are much larger … Von Neumann’s machine is 200,000 times larger than a single bit. To within an order of magnitude or two, this ratio probably holds for minimal self-reproducing Life patterns, too. That means that the size of a self-reproducing Life pattern should be about 200,000 times 100 million pixels. This comes to 20 trillion or 2•10¹³ pixels … That’s big. The text mode of a typical home computer has about 1000 to 2000 character positions. A high-resolution graphics mode might display 100,000 pixels. Displaying a 10¹³-pixel pattern would require a video screen about 3 million pixels across at least. Assume the pixels are 1 millimeter square (which is very high resolution by the standards of home computers). Then the screen would have to be 3 kilometers (about 2 miles) across. It would have an area about six times that of Monaco … It is practical only to set a lower limit on the chances of a self-reproducing pattern forming in a random field … Clearly the chances of the region containing a self-reproducing pattern can be no worse than 1 in 2^10,000,000,000,000, even in the initial random state. We can also place an upper limit on the minimum size of field that would be required for spontaneous formation of self-reproducing patterns … It seems pretty certain that self-reproducing patterns would form in a random field of 10^1,000,000,000,000 pixels. Probably they would form in fields much, much smaller. Unfortunately, there is no easy way of telling how much smaller … Conway’s fantasy of living Life patterns forming from video snow will almost certainly never be realized on a computer. It is barely conceivable that someone could someday go to the trouble of designing a self-reproducing Life pattern in full detail. It is possible that computer technology could advance exponentially to the point where such a pattern could be tracked through its reproduction. Spontaneous creation of self-reproducing patterns is another matter, however. Not even the rosiest predictions for future computer technology are likely to make any difference.
Although 10^1,000,000,000,000 pixels may not be necessary, it seems highly unlikely that a computer/video screen any smaller than (as a guess) the solar system could suffice. Self-reproducing patterns must arise on a big enough screen, but no screen ever likely to be built will be big enough.”
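Poundstone’s screen-size figures are easy to check with a few lines of arithmetic. The sketch below is only a back-of-the-envelope verification under the assumptions stated in the quote (a square screen, 1-millimeter pixels, and the rounded 10^13-pixel pattern size); the 2.0 square-kilometer value used for Monaco’s area is my own approximation, not Poundstone’s:

```python
# Back-of-the-envelope check of Poundstone's screen-size figures.
# Assumptions (mine, for the check): a square screen, 1 mm-square pixels,
# the rounded 10^13-pixel pattern size used in the quote, and an area of
# about 2.0 square kilometers for Monaco.
import math

pattern_pixels = 1e13                        # rounded pattern size from the quote
pixel_side_mm = 1.0                          # assumed pixel size: 1 millimeter

side_pixels = math.sqrt(pattern_pixels)      # pixels across a square screen
side_km = side_pixels * pixel_side_mm / 1e6  # millimeters -> kilometers

area_km2 = side_km ** 2
monaco_km2 = 2.0                             # approximate area of Monaco

print(f"screen side: about {side_km:.1f} km ({side_pixels:,.0f} pixels across)")
print(f"screen area: about {area_km2:.0f} km^2, roughly {area_km2 / monaco_km2:.0f} times Monaco")
```

The numbers land close to those in the quote: a screen a little over 3 kilometers on a side, with an area about five to six times that of Monaco.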
If von Neumann foresaw that a cell capable of taking not just two or three or five different states but as many as 29 different states was needed to make the task of constructing a self-replicating automaton manageable, such a design choice may reflect the fact that we ourselves, considered as flesh-and-blood automatons, are made up of “cells” (atoms) that can take not just two but many different states. If this is so, and the principle remains valid for all complex automata, then the only way we may be able to design an automaton capable of exhibiting order ζ would be by resorting to cells capable of acquiring twenty or thirty or perhaps even more different states.
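To get a feel for what extra states buy the designer, consider a simple counting exercise (this is only an illustration of the size of the design space, not a description of von Neumann’s actual construction): a cell with k states carries log2(k) bits of information, and for a two-dimensional automaton whose next state depends on a five-cell von Neumann neighborhood there are k^(k^5) possible transition tables. The sketch below evaluates both quantities for k = 2 and k = 29:

```python
# Illustrative counting exercise only: how per-cell information content and the
# space of possible transition rules grow with the number of states k, assuming
# a two-dimensional automaton with a five-cell von Neumann neighborhood (the
# cell plus its four orthogonal neighbors). Not von Neumann's construction.
import math

def bits_per_cell(k: int) -> float:
    """Information carried by one cell that can be in any of k states."""
    return math.log2(k)

def log10_rule_count(k: int, neighborhood: int = 5) -> float:
    """log10 of the number of possible transition tables: k ** (k ** neighborhood)."""
    return (k ** neighborhood) * math.log10(k)

for k in (2, 29):
    print(f"k = {k:2d}: {bits_per_cell(k):5.2f} bits per cell, "
          f"roughly 10^{log10_rule_count(k):,.0f} possible transition tables")
```

With two states each cell holds a single bit and the rule space is on the order of 10^10; with 29 states each cell holds nearly five bits and the rule space swells to roughly 10^30,000,000. The information von Neumann packed into a single 29-state cell must, in Life, be spread across a large block of two-state cells, which is essentially Poundstone’s point about the size of a Life “bit”.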
In retrospect, the mathematical problem of designing a cellular automaton by determining the minimum initial conditions required for it to produce at least one lifelike pattern capable of attaining the ζ-order level of complexity is a much tougher intellectual challenge than actually assembling, molecule by molecule, the full DNA specification of a human being. Assuming we know the DNA sequence required to produce a human being built to meet certain specifications (the Human Genome Project, formally launched in 1990 and whose first rough draft was announced jointly on June 26, 2000, by US President Bill Clinton and British Prime Minister Tony Blair, has brought science that much closer to this possibility), once we have determined the position and the nature of each nucleotide base required in our target molecules we could then proceed directly to the construction of the DNA strands on a “piecemeal” basis (in Mary W. Shelley’s novel, Doctor Frankenstein took an easy shortcut by assembling his creation out of whole body parts instead of molecule by molecule, an approach that is no longer science fiction at a time when multiple organ transplants can be performed with currently available medical technology). Granted, assembling a human being at the DNA level from an original blueprint would be a monumental task that would strain current technologies to their very limits and would take a very long time to accomplish. But suppose the assembly proceeds at a non-stop, comfortable pace of, say, just one nucleotide base added to the growing DNA strand per year. Considering that the human genome contains roughly three billion nucleotide bases (once thought to encode some 100,000 proteins, though current estimates put the number of protein-coding genes closer to 20,000), if we had started about three billion years ago, roughly ten billion years after the Universe was born, we would have finished the job by now. And once the assembly process has started, it would be a very moronic, repetitive job, devoid of any creativity or ingenuity.
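The timetable above is nothing more than a division, and the sketch below restates it explicitly; every figure in it is one of the illustrative assumptions from the preceding paragraph (one base per year, roughly three billion bases, a 13.8-billion-year-old Universe), not a measured quantity:

```python
# Restating the hypothetical DNA-assembly timetable as explicit arithmetic.
# Every figure here is an illustrative assumption from the paragraph above,
# not a measured quantity.

GENOME_BASES = 3_000_000_000     # rough size of the human genome, in nucleotide bases
BASES_PER_YEAR = 1               # assumed assembly rate: one base added per year
UNIVERSE_AGE_YEARS = 13.8e9      # commonly cited age of the Universe, in years

years_needed = GENOME_BASES / BASES_PER_YEAR
start_after_big_bang = UNIVERSE_AGE_YEARS - years_needed

print(f"assembly time: about {years_needed / 1e9:.0f} billion years")
print(f"start roughly {start_after_big_bang / 1e9:.1f} billion years after the "
      f"Universe was born and the job finishes today")
```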
In contrast, the task of solving the problem of designing not a three-dimensional human being but just a two-dimensional cellular automaton capable of producing in time at least one pattern exhibiting ζ-order, having evolved from simple initial conditions (or preferably, from the simplest initial conditions that can be found among all the possible combinations), is well beyond the reach of the best mathematical minds we can gather here on Earth. Any scientist showing up right now with an acceptable solution to this problem would immediately be hailed as the greatest mathematician who has ever lived in this part of the galaxy (or perhaps in the entire universe), not just now but perhaps for many centuries to come, since the solution to the problem goes to the very heart of the mathematical definition of intelligent life. All other problems, such as Fermat’s last theorem, pale in comparison with the problem of coming up with a cellular automaton capable of producing order ζ. The best that has been done so far has been to try out, on a random though very “educated” basis, thousands upon thousands of different possible combinations of rules and initial patterns, with the vague hope that luck will be on our side and we may stumble accidentally onto something “good”. This is precisely how we have been able to come up with many interesting cellular automata and computer programs such as TIERRA, developed by Thomas Ray, a biologist at the University of Delaware. Stephen Prata gives the following discussion of TIERRA in his book Artificial Life Playhouse: Evolution at Your Fingertips:
“One A-Life program that has impressed many biologists is TIERRA. The program tests individuals, then culls the less fit. TIERRA provides a more open-ended environment in which survival is the only test. Most A-Life systems have a fixed genome, that is, a fixed number of genes controlling a fixed number of characteristics. TIERRA allows its organisms to develop their own strategies for genomes. Ray felt the essential characteristics of life are: It can self-replicate and it can evolve; these ideas shaped TIERRA. The TIERRA environment consists of a computer CPU, a block of computer memory called the ‘soup,’ and a program that provides the rules for the world. Time on the CPU is a metaphor for energy, and space in the soup is a metaphor for physical resources. The organisms of this world are short programs that reproduce themselves. Each program is a block of code residing in the soup; the code block corresponds to a simple biological cell. When the organism-program is run, it finds an empty area in the soup and copies its own code to that location, producing an identical daughter organism. Ray initially seeded the soup with a single organism, an 80-instruction block of code. Running the code causes the creation of an identical block of code elsewhere in the soup. As each new organism is created, it’s added to a circular queue of programs to be run. (A circular queue is a list in which you start again at the beginning when you reach the end of the list.) Thus, the next cycle of activity would have both the mother and the daughter cell replicate, resulting in four identical organisms, and so on. If nothing else happened, soon the soup would be filled completely by identical 80-instruction, self-replicating code fragments. But Ray arranged for other things to happen: (*) A reaper monitoring program eliminates the oldest creatures and the most defective creatures (those whose instructions generate errors most often). Thus, eventually death comes to all organisms in TIERRA. (*) Mutations randomly flip bits in memory from 0 to 1 or from 1 to 0. This is TIERRA’s equivalent of cosmic rays, mutagens and background radiation. (*) When the creatures replicate, they occasionally have copying errors. (*) Tierran computer instructions occasionally execute incorrectly. (*) Creatures that execute certain instructions delay their death. (*) The system favors self-replicating creatures that use less CPU time because they reproduce faster. This bias favors small organisms, that is, organisms built from a small number of instructions … When Ray first injected his 80-instruction, self-replicating ‘Ancestor’ into the soup in January of 1990, he was debugging his first attempt, so he didn’t expect much to happen. However, not only did the ancestor reproduce, it generated a variety of different creatures. Soon self-replicating programs that needed only 67 instructions developed. Eventually, evolution developed a 22-instruction creature that reproduced nearly six times faster than the original. TIERRA did more than evolve more efficient procedures. For example, Ray discovered a 45-instruction parasite. Lacking the code to reproduce itself, it borrowed the code from adjacent ancestor creatures! Then creatures immune to the parasite took over, only to be attacked by new, cleverer parasites. Social hyper-parasites developed that could not reproduce in isolation, but which could reproduce in cooperative communities. In short, TIERRA is a microcosm of life, death, evolution, and species competition. 
Build the right world, and life will come.”
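The mechanics Prata describes are simple enough to caricature in a few dozen lines. The sketch below is a deliberately stripped-down toy inspired by that description, not Ray’s actual Tierra code: organisms are reduced to integer “genome lengths” living in a bounded soup, a circular queue doles out CPU slices, copying occasionally mutates the length, and a crude reaper frees space when the soup fills up. Names such as Organism, Soup capacity, and the mutation rate are inventions for the illustration.

```python
# A toy, Tierra-inspired soup: NOT Ray's program, just an illustration of the
# mechanics quoted above (a circular queue of organisms, imperfect copying,
# a reaper that frees space, and an implicit bias favoring shorter "genomes"
# because they finish copying themselves sooner).
import random
from collections import deque

SOUP_CAPACITY = 2_000        # total "memory" available, in instructions
MUTATION_RATE = 0.05         # chance that a copy changes length by one instruction
ANCESTOR_LENGTH = 80         # seed organism size, echoing Ray's 80-instruction ancestor

class Organism:
    def __init__(self, length: int) -> None:
        self.length = length     # number of instructions in this organism
        self.progress = 0        # how much of its current copy it has completed

def run(cycles: int = 20_000, seed: int = 1) -> None:
    rng = random.Random(seed)
    queue = deque([Organism(ANCESTOR_LENGTH)])   # circular queue of organisms
    used = ANCESTOR_LENGTH                       # instructions currently in the soup

    for _ in range(cycles):
        org = queue.popleft()
        org.progress += 1                        # one CPU slice: copy one instruction
        if org.progress >= org.length:           # finished copying itself
            org.progress = 0
            child_len = org.length
            if rng.random() < MUTATION_RATE:     # imperfect replication
                child_len = max(10, child_len + rng.choice((-1, 1)))
            # Crude stand-in for the reaper: remove organisms from the front of
            # the queue to free space whenever the soup is full.
            while used + child_len > SOUP_CAPACITY and queue:
                used -= queue.popleft().length
            if used + child_len <= SOUP_CAPACITY:
                queue.append(Organism(child_len))
                used += child_len
        queue.append(org)                        # back to the end of the circular queue

    lengths = sorted(o.length for o in queue)
    print(f"{len(queue)} organisms alive; shortest genome now {lengths[0]} "
          f"instructions (ancestor was {ANCESTOR_LENGTH})")

if __name__ == "__main__":
    run()
```

Even this caricature exhibits the bias Prata mentions: the only “fitness” is how quickly an organism finishes copying itself inside the shared soup, so shorter genomes tend to crowd out longer ones over time, much as Ray’s 22-instruction creatures eventually out-reproduced his 80-instruction ancestor.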
In this day and age of the Internet and personal computers, the thought might have occurred to some that computer viruses are in their own right self-replicating automata, not unlike the self-replicating automaton that von Neumann was trying to design. If so, how come von Neumann’s self-replicating automaton is substantially more complex than most computer viruses, which require no more than a few hundred or perhaps a few thousand lines of computer code for their specification? The answer lies in the total amount of resources that must be allocated in order for the program to run. Indeed, a computer virus is nothing more than a short piece of malicious but cleverly written computer code. And just as in the case of its biological counterpart, it is by its very nature a parasitic contraption. It cannot reproduce on its own. It needs to infect a host, and it must use the usually complex machinery of the host to carry out its self-replication. No host, no replication. It’s that simple. And the host provides a lot more than just a small section of its memory. It provides a highly complex central processing unit that will execute the instructions contained in the machine code of the virus. It provides an operating system (such as Windows or Linux) under which the virus will be allowed to function while it carries out its replication task. And it provides a small portion of non-volatile memory (the hard drive) where the virus will survive while it waits for its chance to be downloaded or activated. The self-replicating automaton of John von Neumann, on the other hand, is self-sufficient in the sense that the initial pattern and the rules contain all of the information required to carry out self-replication, without the need of a host; von Neumann’s automaton was not designed to be a parasitic creature, it was designed to be a creature all its own.
For the above reasons, a computer virus cannot possibly be considered a precursor of artificial life, nor can it be used as a starting point for solving the problem of coming up with order ζ. The biological virus needs previously existing cells in order to reproduce, and the computer virus is incapable of doing any better. If anything, a computer virus (or a viral pattern inside a cellular automaton) is a destroyer of order, not a creator of order. It takes little more than a rather mediocre computer hacker to design a computer virus that can do a lot of damage. But coming up with order ζ, if such a thing is ever attainable, will require the most talented minds that the human race may be expected to produce for some time to come.
If a young mathematician could talk to the Universe regarding the problem of designing a three-dimensional cellular automaton capable of growing a structure exhibiting order ζ, perhaps he would say:
-“The problem is not just extremely difficult to solve; it is impossible to solve even on a time scale of many trillions upon trillions of years, even by discarding from the very outset many possible combinations of rules and initial patterns that will not work, and even by coming up with many simplifying assumptions, each of which would represent a major mathematical breakthrough in its own right.”
Then perhaps the Universe would reply:
-“Actually, it’s not that difficult. The problem has already been solved, and it took me less than some ten billion years to implement the solution. Just look at yourself. You are the living proof that the problem, as intractable as it might seem, can be solved. Otherwise, you would not be here today, carrying all those wonderful abstract formulas and thoughts in your head. As if this were not enough, the problem has been solved many times over, uniquely and differently, and each mathematician like yourself is one of a kind, with no two mathematicians being exactly alike. That’s why there was only one Gauss, only one Euclid, only one Bernoulli, only one Leibniz, only one Fermat, only one von Neumann, only one Wiles, and so on. It is the very diversity of minds exhibiting order ζ that has allowed mathematics to be enriched with so many different ideas.”
Then the mathematician would attempt to take the credit away from the Universe by adding:
-“Yeah, you solved it all right, not by exhibiting any form of intelligence but by using a brute-force approach that resorts to large numbers, trying out many possible combinations of rules here, there, and everywhere, so that it was inevitable that somewhere the solution would appear out of sheer luck.”
To which the Universe would quickly reply:
-“Au contraire, my dear mathematician, au contraire. I started out from the very beginning with a very simple combination of rules, picked out even before my creation from among a set of possibilities so huge that even I pale in comparison with such numbers. This very simple combination of rules is exactly the same on Earth as it is in the most distant galaxies you can detect. Electromagnetism is no different here on Earth than it is on Alpha Centauri. The atoms out of which you are built are exactly the same in the solar system as those found anywhere inside and beyond the Milky Way. The periodic table of the elements is identical everywhere within my confines, and your scientist friends the chemists and the physicists have not been able to find a single exception no matter how hard they look. It is the universality of the basic simple set of rules from which I was forged that gives you high hopes and expectations that there may be many other planets harboring life forms like you, self-replicating and exhibiting order ζ and discovering the same geometrical relationships, inventing calculus, developing trigonometry, designing very simple-minded cellular automata, and so forth. If I had been trying out many different possible sets of rules here and there, what you call natural laws would only be valid in your part of the galaxy, and elsewhere you would be discovering that atoms could be made in different sizes, that such things as gravity would act differently in other corners, and I the Universe would resemble a gigantic random soup out of which, somewhere, according to you, order ζ would be likely to appear. But this is not so: what you see near you behaves the same as what you see very far away from you. So, the problem of designing order ζ from scratch by starting out with a simple set of rules was already solved when I was born. You see no evidence of any brute-force approach resorting to the sheer power of large numbers in order to bring into being mathematicians such as yourself.”
-“But, what about evolution?”
-“You are kidding yourself, my dear mathematician, and you know it. Evolution was just an intermediate step on the way to achieving something else. Evolution did not spring out of nothing; it was able to function because the rules allowed it to do so. The rules came first and evolution came later, not the other way around. And the problem of creating order ζ was already solved long before stars and planets were born and had any chance to evolve. Evolution came much, much later. So, you see, when I was born the problem was already solved, and it’s up to you to enjoy the end result. The initial conditions of my birth already contained the necessary information to execute the solution.”
-“And who came up with those initial conditions, in order that they could have the precise combination of rules, values and patterns they had at your moment of birth?”
-“End of conversation.”
And the Universe goes silent.