*This is the first installment in a three-part subseries in our "Bringing the Real World to Genesis" series, curated by Jan M. Long. These three articles, written by Mailen Kootsey, address the sources of variation in biology. Previous "Bringing the Real World to Genesis" articles can be found here.*

A review in the January 25, 2013 issue of *Science* [1] estimated that there are between 2 and 8 million species of eukaryotes (animals and plants, mostly multicellular) on the earth at the present time. The number is so large that the review article questioned whether scientists would be able to name all these species before they become extinct, which they appear to be doing at a rate of a fraction of a percent per year. How did this enormous variation in life originate?

Evolution pictures a tree-like structure of inheritance of living things, with an increasing number of final branches as species differentiated over long periods of time to adapt to different environments. For the young-earth creationist, diversity begins with a relatively small number of species exiting from an ark, so a mechanism is needed for rapid differentiation to arrive at the millions of species on earth today. Whether your preference in origins involves a long or a short chronology, there is a need to understand the origins of the enormous range of variation in the world of biology.

Three mechanisms of variation have been referenced in articles and comments in this series: randomness, chaos, and emergence. In Part 1, I will discuss randomness and chaos. Part 2 will consider emergence and Part 3 the significance of the three mechanisms in thinking about origins.

**RANDOMNESS**

The primary mechanism assumed by most people to be the cause of variations in evolution is random changes in the genetic structure of cells, combined with natural selection. Random changes in genes can be caused by radiation damage (cosmic rays or local radioactivity) or by errors in copying from one generation to the next, for example.

The meaning of randomness is best understood in terms of a series of events: The events are random if a given descriptor of each event in the series has a value independent of that descriptor for the prior events in the series. For example, a series of events is random if the time to each new event is unrelated to the times between prior events. A second example would be a series of numbers where the value of each new number in the series is unrelated to the values of the preceding numbers. Randomness means no discernible pattern is present. There are statistical tests to identify randomness, so we don’t have to depend on intuition to recognize when it occurs.

Radioactive decay is perhaps the best-known example of randomness. In radioactive decay, the unstable nucleus of an atom decays spontaneously, emitting ionizing radiation and leaving the nucleus in a different state. In a large group of unstable atoms of the same type (a chunk of the material if it is a solid), each atom has the same probability of decaying in a given length of time, say one year. There is no way to tell in advance which atom is going to decay next, only that a certain fraction of all the atoms will decay in the year. The result is a series of identical events that occur at random times, times that are not related to the times of all the previous individual decays, although the average rate of the events has a known behavior: exponential decrease with time.
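The two properties just described, unpredictable individual events but a predictable average rate, can be illustrated with a small simulation. The sketch below is my own illustrative example (the function name and the 10% per-step decay probability are arbitrary choices, not figures from the article): each surviving atom decays independently with a fixed probability per time step, yet the population as a whole falls off in the familiar exponential way.

```python
import random

def simulate_decay(n_atoms, p_decay, n_steps, seed=0):
    """Simulate radioactive decay: each surviving atom decays
    independently with probability p_decay in each time step."""
    rng = random.Random(seed)
    survivors = [n_atoms]
    alive = n_atoms
    for _ in range(n_steps):
        # Which particular atom decays next is unpredictable;
        # only the probability per atom per step is fixed.
        alive -= sum(1 for _ in range(alive) if rng.random() < p_decay)
        survivors.append(alive)
    return survivors

counts = simulate_decay(n_atoms=10_000, p_decay=0.1, n_steps=20)
# The average behavior is exponential: roughly 90% survive each
# step, even though the individual decay times are random.
print(counts)
```

Rerunning with a different seed changes every individual count but not the overall exponential shape, which is exactly the point: randomness in the events, regularity in the average.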

Randomness also plays a role in chemical reactions. Take, for example, the simple reaction

A + B ⇌ C

If A, B, and C represent three species of molecules (some or all species may be ionized) floating in a solution, this reaction equation says there is a certain probability that an A will combine with a B to produce a C and also a different probability that a C will break down into an A and a B. As in the case of radioactive decay, there is no way to predict which A and which B will get together next in time or exactly when a C might break down; those are random events. Only the average rates from left to right and right to left or the net movement from one side to the other can be measured and studied.

Scientists who construct models of natural processes involving randomness have to choose between two different approaches: 1) include randomness directly in the model, or 2) consider only the instantaneous average rates (radioactive decay rate or chemical reaction rate, for example) and ignore the detailed randomness. This choice is generally made on the basis of the goals for the model. Models of the first type are given the name “Monte Carlo” because solving such a model requires generating a series of random numbers, equivalent to rolling dice repeatedly. This requirement does raise a problem for computer solutions. True randomness occurs only in natural processes. Digital computers cannot produce a truly random sequence of numbers without some external sensor input. However, a computer can be programmed to produce a “pseudo-random” sequence of numbers – a sequence that qualifies as random for a while, but eventually repeats from the beginning.
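The linear congruential generator is the textbook example of such a pseudo-random scheme. In the sketch below the parameters are chosen deliberately tiny (a modulus of 16) so the repetition is visible after only 16 values; real generators use an enormous modulus so the period, though still finite, is astronomically long.

```python
def lcg(seed, a, c, m):
    """Linear congruential generator: x_next = (a * x + c) mod m.
    The output looks random but is fully determined by the seed,
    and the sequence must repeat with period at most m."""
    x = seed
    while True:
        x = (a * x + c) % m
        yield x

# A tiny modulus makes the eventual repetition easy to see.
gen = lcg(seed=1, a=5, c=3, m=16)
values = [next(gen) for _ in range(20)]
# The sequence cycles through all 16 residues and then starts
# over: values[16] equals values[0].
print(values)
```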

**CHAOS**

The term “chaos” has a popular lay meaning that is not unlike randomness. The scientific definition of chaos is quite different, however. Edward Lorenz discovered the phenomenon accidentally while testing simple mathematical models of weather on an early computer in the early 1960s.

It had been an accepted principle of science that small influences have small effects. As James Gleick writes in his book *Chaos* [2]: “Classically, the belief…was well justified. It worked. A tiny error in fixing the position of Comet Halley in 1910 would only cause a tiny error in predicting its arrival in 1986, and the error would stay small for millions of years to come.”

One day Lorenz decided to take a shortcut. Instead of repeating a whole computer run, he decided to start over again in the middle and repeat just the last part. To his surprise, the results were very different! Computers are supposed to be exactly repeatable in their calculations! Checking the operation carefully, he discovered the source of the difference. The computer was storing its calculated numbers to six decimal places internally, but the printout of results from the original run, which he used to input the starting values for the repeated last part, only showed three decimal places to save space. So an internal value of .506127 in the complete run was replaced by .506 in the repeat of the last part. That tiny difference of about 1 part in 4000 in a starting value was enough to completely change the calculated results!

Lorenz had discovered chaos, where the output of a system – real or mathematical – is extremely sensitive to changes in input over at least some range of the input. In 1972, this phenomenon was given the popular name of the “butterfly effect”: the flapping of a butterfly’s wings in Brazil can cause a tornado in Texas (your choice of distant geographic locations).

Chaos has since been observed in many systems besides models of weather. For example, small variations in the beating rate of the human heart have been shown to be chaotic. A mathematical model of the nerve action potential went from firing its electrical spike to not firing with a stimulus change of 1 part in the 15th decimal place.

A system does not have to be complex in order to demonstrate chaos. Chaos is simply a behavioral characteristic of some systems of mathematical equations (as well as of the real physical systems the equations represent). The classic example is the set of three relatively simple ordinary differential equations in three variables named for their discoverer, Edward Lorenz [3]. Complex systems can also exhibit chaos, of course, as illustrated by recent models of weather that have hundreds of thousands of variables. While chaos may produce some apparently random results, chaos is completely unrelated to randomness in its origins; that is, a system does not have to contain any random mechanisms in order to demonstrate chaos.
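Lorenz's sensitivity to initial conditions is easy to reproduce from his three equations. The sketch below uses a simple forward-Euler integration with illustrative step sizes (my own choices, not Lorenz's original program): it runs two copies of the system whose starting points differ by one part in a million and measures how far apart they drift.

```python
def lorenz_step(state, dt, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz equations [3]:
       dx/dt = sigma * (y - x)
       dy/dt = x * (rho - z) - y
       dz/dt = x * y - beta * z"""
    x, y, z = state
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

def max_divergence(eps, dt=0.005, n_steps=4000):
    """Run two copies of the system whose starting x-values differ
    by eps, and report the largest gap between the trajectories."""
    s1 = (1.0, 1.0, 1.0)
    s2 = (1.0 + eps, 1.0, 1.0)
    max_gap = 0.0
    for _ in range(n_steps):
        s1 = lorenz_step(s1, dt)
        s2 = lorenz_step(s2, dt)
        gap = max(abs(p - q) for p, q in zip(s1, s2))
        max_gap = max(max_gap, gap)
    return max_gap

# A perturbation of one part in a million grows until it is as large
# as the attractor itself -- the signature of chaos.
print(max_divergence(eps=1e-6))
```

Note that nothing in these equations is random: the wildly different outcomes come entirely from the deterministic amplification of a tiny input difference.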

In biology, the mechanism of chaos can produce behavioral differences between individuals of a single generation or in a single individual at different times. However, chaos does not necessarily produce any inherited changes in following generations. So, if chaos is invoked to account for the development of new species, it would also be necessary to hypothesize a link between changes due to chaos in the system and genetic change.

The next installment of this series will discuss the relatively new concept “emergence.”

Mailen Kootsey has a BA from Pacific Union College and a PhD from Brown University, both in Physics. He had a 41-year career in university education, including faculty and administrative positions at Duke University, Andrews University, and Loma Linda University. His expertise is multidisciplinary, having had appointments in departments of Physics, Physiology, Computer Science, Biomedical Engineering, and Biology. Dr. Kootsey published research on ion transport and electrical activity in heart muscle, applying techniques of mathematical modeling and computer simulation. He is now COO of Alexandros, Inc., an international business company.

*Art: Josh Keyes, Burst, acrylic on panel, 2009*

**REFERENCES**

1. Mark J. Costello, Robert M. May, and Nigel E. Stork, “Can we name earth’s species before they go extinct?”, *Science* **339**:413–416 (2013).

2. James Gleick, *Chaos: Making a New Science*, Penguin Books, New York, 1987, p. 15.

3. Edward Norton Lorenz, “Deterministic nonperiodic flow,” *Journal of the Atmospheric Sciences* **20**(2):130–141 (1963).

This is a companion discussion topic for the original entry at http://spectrummagazine.org/node/5447