For those who found the last installment of this series unsettling and felt an instinctive reflex to counter-punch, I would like to suggest that there is a more productive place to put that energy: one of science's enduring puzzles, the question of biological origins. We touched on this in an earlier segment, but now I would like to take a closer look at the issue.

In the early 19th century, William Paley used the watchmaker analogy to argue for design as the basis of biology. He noted that if a pocket watch were found on the ground, then, unlike the finding of a stone, the most reasonable assumption would be that someone had dropped it there and that it had been made by a watchmaker (1). This argument has received a great deal of critical attention from scholars over the years as to the conclusions that can appropriately be drawn from it. Chief among the objections are those having to do with the inadequacy of extrapolating from a watch to biology.

Design arguments have never been completely dead, but in recent years they have rebounded in a number of creative ways. While many of these attempts are clearly inadequate, one particular form of design argument currently being made approaches the watchmaker analogy in a relatively sophisticated manner, one that correlates to some extent with advances in our understanding of information theory and statistical probability. In what follows I will attempt to provide a summation of this approach, its contributions, and its limitations.

We can begin by thinking about random occurrences, and the first thing we should note is that we are surrounded by them. For example, while visiting the city of Prague a couple of years ago, we bumped into Newt Gingrich—it was completely random. Yet the same can generally be said about every person we passed on the street. The larger point is that even though random events happen around us daily, we should not assume that all random events are mathematically equal. There are, in fact, two types of seemingly random events that are qualitatively different, and the difference affects the sort of conclusions that can be drawn from them.

First of all, let’s consider the average random event. Such occurrences will always have a degree of complexity associated with them due to the multitude of variables that will be in play. Our bumping into Newt Gingrich required an exact independent alignment of schedules, and it also involved picking his face out of a sea of people in order to make the connection.

Yet the fact that this event was improbable—but happened—did not raise any suspicion about it having any functional significance, or of being a part of some larger pattern. The reality is that random events of low probability happen with regularity and are a natural part of existence. The distinction that can be made, then, for the other type of random occurrence is that it involves a seemingly improbable event that also turns out to be functionally significant, or part of some independent pattern.

It is this subgroup of improbable occurrences that mathematician William Dembski (2) identifies as statistically important because of its capacity to raise an inference of design—that is, occurrences in which a pattern emerges that is independent of the event, or that carries functional significance, conveying meaning. He refers to such improbabilities as "specified complexity" (3). Let us take a look at some of the elements of this type of improbability.

Written language is an example of specified complexity: words, sentences, and paragraphs generally do not materialize when an alphabet soup of letters is tossed into the air and the letters randomly arrange themselves on the ground. The probability of finding a random word will necessarily differ from that of finding a randomly composed paragraph or book in the same soup. When we find a group of alphabetic letters that communicates information to us, we know immediately that it is the product of design, and our confidence in this conclusion rises as the probability of a random occurrence declines.
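The alphabet-soup intuition can be made quantitative with a back-of-the-envelope calculation. The sketch below is a hypothetical illustration, assuming letters are drawn uniformly and independently from a 26-letter alphabet and ignoring spaces and punctuation; it shows how quickly the odds of an exact random match collapse as the text grows:

```python
from fractions import Fraction

ALPHABET = 26  # assumption: uniform, independent draws from 26 letters

def random_match_probability(text: str) -> Fraction:
    """Chance that uniformly random letters reproduce `text`
    exactly, position by position (non-letters are ignored)."""
    n = sum(c.isalpha() for c in text)
    return Fraction(1, ALPHABET) ** n

word = random_match_probability("design")                  # 6 letters
phrase = random_match_probability("specified complexity")  # 19 letters

print(f"word:   1 in {word.denominator:,}")   # 1 in 308,915,776
print(f"phrase: 1 in {phrase.denominator:,}")
```

Each added letter multiplies the denominator by another factor of 26, which is why even a modest paragraph of meaningful text is never attributed to chance.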

In a similar way, the binary code on which computers operate is an example of specified complexity, being composed of only 0s and 1s. It is the specific arrangement of those 0s and 1s that enables the computer to act in intelligent ways, conveying information. Behind it, we always know, there is a designer.
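The point about arrangement can be seen in miniature. The sketch below is a hypothetical illustration using the standard 8-bit ASCII encoding (not any particular machine's internals): out of all the possible arrangements of the same 48 bits, only one decodes back to the message:

```python
message = "design"

# Encode each character as 8 bits (standard ASCII encoding).
bits = "".join(f"{ord(c):08b}" for c in message)
print(len(bits), "bits; possible arrangements:", 2 ** len(bits))

# Only this exact arrangement decodes back to the original message.
decoded = "".join(chr(int(bits[i:i + 8], 2))
                  for i in range(0, len(bits), 8))
assert decoded == message
```

Six characters already yield 2^48 (about 2.8 x 10^14) possible bit arrangements, of which exactly one carries the intended information.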

In fact, alphabetic and binary notation are quite similar to deoxyribonucleic acid (DNA), the digital code for biological organization. DNA is composed of four nucleotide bases—A, T, C, and G—but the real head-turning statistical reality is that this code runs more than 3 billion base pairs long within the human cell, and it is only the exact arrangement of this code that gives biology viability. Could this be the product of a random event characterized as mere complexity, or is it specified complexity, the product of design? The answer is sufficiently compelling to have led one of the leading atheists of the 20th century, Antony Flew, to change his mind in favor of theism.
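The scale involved can be sanity-checked with simple arithmetic. The sketch below illustrates the raw combinatorics only—it assumes each of roughly 3 billion positions can independently hold any of the 4 bases, and it says nothing about which sequences are functional:

```python
import math

BASES = 4                      # A, T, C, G
GENOME_LENGTH = 3_000_000_000  # ~3 billion positions (approximate)

# The number of distinct sequences of that length is 4^n, which has
# n * log10(4) decimal digits -- about 1.8 billion digits.
digits = GENOME_LENGTH * math.log10(BASES)
print(f"4^(3 billion) is roughly 10^{digits:,.0f}")
```

Whatever probability model one adopts, it is the sheer size of this sequence space that makes the question of functional arrangement statistically interesting.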

The statistical chances of the DNA code assembling 3 billion base pairs as a completely random event, and of that assembly creating a functional protein (specified complexity), are astronomically small. This is the dilemma that naturalistic science is up against, for such an analysis requires an accounting of *time*, *particles*, and *opportunity* all coming together to form something as complex and significant as life. Stephen Meyer observes that, given the probabilistic resources of the universe, the odds of producing a functional protein stand at one chance in a trillion, trillion (4). It is from this context that an inference of design arises as the best explanation among multiple working hypotheses. This reality should be the basis for common ground.
---
*Jan M. Long, J.D., M.H.A., works for the County of Riverside, California.*

Read all the previous articles in this series: The Search for Common Ground on Genesis.

1. See William Paley, *Natural Theology* (1802). In it he states the following: "In crossing a heath, suppose I pitched my foot against a stone, and were asked how the stone came to be there; I might possibly answer, that, for anything I knew to the contrary, it had lain there forever: nor would it perhaps be very easy to show the absurdity of this answer. But suppose I had found a watch upon the ground, and it should be inquired how the watch happened to be in that place; I should hardly think of the answer I had before given, that for anything I knew, the watch might have always been there. There must have existed, at some time, and at some place or other, an artificer or artificers, who formed [the watch] for the purpose which we find it actually to answer; who comprehended its construction, and designed its use. Every indication of contrivance, every manifestation of design, which existed in the watch, exists in the works of nature; with the difference, on the side of nature, of being greater or more, and that in a degree which exceeds all computation."
2. William Dembski holds a Ph.D. in mathematics and is considered the leading proponent of this sort of analysis. Stephen Meyer, a colleague with a doctorate in the History of Science from Cambridge University, has written a popularized account of Dembski's mathematical methodology in his book *Signature in the Cell*.
3. See generally, Stephen Meyer, *Signature in the Cell*, HarperCollins (2009).
4. See Stephen Meyer, *Signature in the Cell*, p. 218.

This is a companion discussion topic for the original entry at http://spectrummagazine.org/node/2807