Microevolution, better described as variations within a kind, can be seen all throughout nature, but to suggest that it supports macroevolution is terribly misleading.

A catastrophe, such as a global flood, could seriously alter dating method results.

Mankind comes in a variety of shapes and sizes, but we remain just that ... the same "kind". None is superior or inferior to another, for all are descendants of Adam who was made "in the image of God."

Light travels quickly, as any lightning storm will reveal. The light is first seen, and then the thunder is heard. As fast as light may seem, however, it once was faster, both according to evolutionists and creationists.

No, we don't, and in the proper context of this answer, even evolutionists will agree. What we don't see and what we can't see is macroevolution happening.

"Evolution, at least in the sense that Darwin speaks of it, cannot be detected within the lifetime of a single observer." David B. Kitts, Ph.D. (zoology) (School of Geology and Geophysics, University of Oklahoma, Norman, Oklahoma, USA), 'Paleontology and evolutionary theory', Evolution, vol. 28, September 1974, p. 466.

"No wonder paleontologists shied away from evolution for so long. It seems never to happen. Assiduous collecting up cliff faces yields zigzags, minor oscillations, and the very occasional slight accumulation of change over millions of years, at a rate too slow to really account for all the prodigious change that has occurred in evolutionary history. When we do see the introduction of evolutionary novelty, it usually shows up with a bang, and often with no firm evidence that the organisms did not evolve elsewhere! Evolution cannot forever be going on someplace else. Yet that's how the fossil record has struck many a forlorn paleontologist looking to learn something about evolution." Niles Eldredge - Chairman and Curator of Invertebrates, American Museum of Natural History. "Reinventing Darwin: The Great Evolutionary Debate," (1995), Phoenix: London, 1996, p. 95.

The question is, what is macroevolution?

Macroevolution: Change from one kind of animal to another, e.g. reptile to bird.

Microevolution: Variation within a kind, e.g. dog, wolf, and coyote are different species, but the same ‘kind’.

The evolutionist will immediately respond, "Of course we don't see macroevolution happening. It takes too much time." To this response we must in turn respond by asking, "If we don't see macroevolution happening, but only micro, is microevolution really evolution at all?"

One might wonder exactly why creationists aren’t willing to accept macro as a scientific fact if they will accept micro. Part of the problem is with the term “microevolution” itself. This term sounds like small changes are occurring that will lead to macroevolution provided there is enough time. However, when creationists say they believe microevolution occurs, what they really mean is they believe variations within a kind of animal or plant can occur. Sometimes this variation can lead to a new species, and in some cases, even a new genus. But the variations have limitations. That limitation is within the genetic information of the organism. For instance, dogs can produce numerous varieties of dogs, but they will never produce a fundamentally different kind of animal, such as a bird. It’s just not within their genetic content.

Evolutionists will bring up countless examples of what they deem to be examples of evolution in action, from drug resistance to vestigial organs to breeds of dogs to, yes, even seedless oranges. Problem is, nearly all of the examples given are a result of the loss or rearrangement of genetic information. Is that how evolution works? You lose everything until you have it all? Truth be told, there is no such thing as a vestigial organ, whether it be your appendix or tailbone or tonsils or what have you (all of which serve a variety of purposes). Even if there were vestigial organs, that would be an example of loss, not gain. While we realize evolution isn't just a series of gains, it must rely on a vast majority of gains in comparison to losses in order to go from goo to you by way of the zoo. Breed your Great Dane into a Chihuahua, but try breeding your Chihuahua into a Great Dane. It cannot be done, because the information is not available.

And therefore, one simply cannot expect a single-celled organism to reach Homo sapiens status by relying on a majority of mutational losses over billions of years. Scientists will readily admit that most mutations are harmful, and our very own Authority Explorer has reported many such cases.

A concise answer to this question could prove difficult, but perhaps not if we focus on the fundamentals. Our desire also is to make it as understandable to the lay reader as possible.

Before we get into the technicalities, consider the following observation made by J.E. O'Rourke, an evolutionist, in an article titled "Pragmatism versus Materialism in Stratigraphy," published in the American Journal of Science (vol. 276, January 1976, p. 54).

"Structure, metamorphism, sedimentary reworking, and other complications have to be considered. Radiometric dating would not have been feasible if the geologic column had not been erected first."

Some evolutionists complain that this quote is taken out of context, but even in its entirety O'Rourke's point still stands. Although he is not discounting dating methods as a whole, he is expressing how vitally important the geologic column is to them. "...some enthusiasts," he says later, "are already talking about its replacing stratigraphy entirely." That is, some think dating methods should completely nix the stratified geologic column as we know it. Most evolutionists, however, would heartily oppose such a notion, knowing full well that the geologic column, with its "index fossils", is a vitally important companion to dating methods. So much so, in fact, that dating methods cannot work without it.

And lastly, a quote which speaks for itself.

"As for having all the credit passed to the physicists and the measurement of isotopic decay, the blood boils! Certainly such studies give dates in terms of millions of years, with huge margins of error, but this is an exceedingly crude instrument with which to measure our strata and I can think of no occasion when it has been put to an immediate practical use. Apart from very 'modern' examples, which are really archaeology, I can think of no cases of radioactive decay being used to date fossils." (Ager, Derek V., "Fossil Frustrations," New Scientist, vol. 100, 1983, p. 425.)

Let's move on.

Radioisotope dating methods rely on three major assumptions. They are:

1. Constant decay rate
2. No loss or gain of parent or daughter
3. Known amounts of daughter present at start

Again, we want to make this understandable, yet you must try to concentrate as best you can.

Before delving into these three assumptions, let's explain how rocks are dated. The first and most popular radioisotope dating technique, which forms the basis for all of the others, utilizes uranium-238 (U), which eventually decays into lead-206 (Pb). Notice the terms "parent" and "daughter" mentioned above. In this case, uranium is the parent and lead is the daughter. Why? Because uranium is what you start with, and lead is what it eventually turns into. The parent is older than the daughter, and so these terms help us remember which is which.

The half-life of uranium-238 is 4.5 billion years. What's the "half-life"? The half-life, in this example, is the time it takes for half of a given number of uranium-238 atoms to turn into lead-206.
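For readers who want to see the arithmetic, the standard half-life calculation can be sketched in a few lines. This is a generic illustration of the textbook decay formula, not any laboratory's actual procedure, and the atom counts are hypothetical:

```python
import math

HALF_LIFE_U238 = 4.5e9  # half-life of uranium-238, in years

def remaining_fraction(years: float) -> float:
    """Fraction of the original uranium-238 still present after `years`."""
    return 0.5 ** (years / HALF_LIFE_U238)

def inferred_age(parent_atoms: float, daughter_atoms: float) -> float:
    """The standard age formula, which builds in all three assumptions:
    constant decay rate, no loss or gain, and zero daughter at the start."""
    ratio = parent_atoms / (parent_atoms + daughter_atoms)
    return -HALF_LIFE_U238 * math.log2(ratio)

# After one half-life, half the parent remains:
print(remaining_fraction(4.5e9))   # 0.5
# Equal parts parent and daughter imply one half-life has elapsed:
print(inferred_age(1000, 1000))    # 4.5 billion years
```

Note that the age comes entirely from the parent/daughter ratio; the formula itself cannot tell whether the assumptions behind it actually held.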

Constant decay rate

Uniformitarianism . . . this is the foundation of all dating methods. Without it, ages become void, and nothing is certain. The present is key to the past, so the theory holds, and what we see today is what, essentially, has always been. The question is, has it?

No one knows. No one can know for certain, which in and of itself raises serious questions for those who place great faith in dating methods.

“The age of our globe is presently thought to be some 4.5 billion years, based on radiodecay rates of uranium and thorium. Such ‘confirmation’ may be short-lived, as nature is not to be discovered quite so easily. There has been in recent years the horrible realization that radiodecay rates are not as constant as previously thought, nor are they immune to environmental influences. And this could mean that the atomic clocks are reset during some global disaster, and events which brought the Mesozoic to a close may not be 65 million years ago but, rather, within the age and memory of man.” (Frederic Jeuneman, “Secular Catastrophism”, 1982)

Indeed, any number of catastrophes could alter the results, not the least of which is a global flood (which in turn would affect the atmosphere), an occurrence that not only the Bible, but hundreds of legends from cultures around the world, attest to. Additionally, observations in astronomy over the past 325 years have shown a definite measured statistical decrease in the speed of light. If the speed of light has decreased substantially, this would alter all radiometric dating method results, producing billion-year-old dates for objects that are no more than a few thousand years old.

No loss or gain of parent or daughter

This assumption is equally important. It assumes that no amount of parent or daughter concentrations were lost or gained since the rock's formation. Some of the intermediate stages (i.e. the products between uranium and lead) are highly mobile gases, and leaching can easily occur.

"The above evidence conclusively demonstrates that the U/Pb system, including its intermediate daughter products, especially Ra and Rn, has been so open with repeated large scale migrations of the elements that it is impossible to be sure of the precise status-history of any piece of pitchblende selected for dating." (Snelling, Andrew, "The Age of Australian Uranium," Creation Ex Nihilo Vol. 4, No. 2, 1981, pp.44-57)

Truly, how can one be certain that no leaching or contamination has occurred with any sample when it has been in existence for millions or even billions of years? No matter what precautions are taken, one cannot conclusively declare any rock sample foolproof.

Known amounts of daughter present at start

This assumption is perhaps the most crucial of them all. It assumes that the original quantity of the daughter product, in particular, is known. However, if some of the daughter product was present at the sample's beginning, the rock would already appear to be old.
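The sensitivity of the result to this assumption is easy to demonstrate numerically. Below is a hypothetical sketch using the textbook uranium-lead age formula: if even a modest amount of daughter lead were present at formation, a brand-new rock would register an enormous apparent age. The numbers are illustrative only:

```python
import math

HALF_LIFE_U238 = 4.5e9  # half-life of uranium-238, in years

def apparent_age(parent: float, daughter: float) -> float:
    """Age computed under the assumption that ALL daughter atoms
    came from decay, i.e. zero daughter present at the start."""
    ratio = parent / (parent + daughter)
    return -HALF_LIFE_U238 * math.log2(ratio)

# Hypothetical rock that formed yesterday, but with 10% of its
# atoms already being daughter lead at formation:
print(apparent_age(900.0, 100.0))  # ≈ 684 million years, for a new rock
```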

And such has been the case in countless examples, of which we'll share two.

The eruption of Mt. Rangitoto in New Zealand was dated by radiocarbon studies of destroyed trees as occurring less than 300 years ago. Potassium-argon dating, however, gave it a date of 485,000 years. (McDougall, I., et al., "Excess Radiogenic Argon in Young Subaerial Basalts from Auckland Volcanic Field, New Zealand," Geochimica et Cosmochimica Acta, Vol. 33, 1969, pp. 1485-1520)

Sunset Crater in northern Arizona is believed to have erupted around 1065 A.D., based on tree-ring dating and the tales of local Indians (Indian artifacts and remains have also been found within the rocks formed by the volcanic eruption). Much to everyone's surprise, however, potassium-argon gave the lava flows ages of 210,000 and 230,000 years. "Excess argon gave the lava such old dates" is the explanation. The answer ... Of course! (Dalrymple, G.B., "40Ar/36Ar Analyses of Historic Lava Flows," Earth and Planetary Science Letters, Vol. 6, 1969, pp. 47-55)

"The entire hominid collection known today would barely cover a billiard table, but it has spawned a science because it is distinguished by two factors which inflate its apparent relevance far beyond its merits. First, the fossils hint at the ancestry of a supremely self-important animal - ourselves. Secondly, the collection is so tantalizingly incomplete, and the specimens themselves often so fragmentary and inconclusive, that more can be said about what is missing than about what is present. Hence the amazing quantity of literature on the subject. Very few fossils indeed afford just one, incontrovertible interpretation of their evolutionary significance. Most are capable of supporting several interpretations. Different authorities are free to stress different features with equal validity, often placing remarkable emphasis on the form they propose for the bits that are missing. Points distinguishing the various interpretations may be so slight or unclear that each depends as much upon the proponent's preconceived notions as upon the evidence of the fossil. Furthermore, since the meager collection has accumulated so slowly, the long gaps between discoveries have provided ample time for investigators to form very definite notions of what ought to be found next. Zinjanthropus boisei is a good example of this phenomenon, but ever since Darwin's work inspired the notion that fossils linking modern man and extinct ancestor would provide the most convincing proof of human evolution, preconceptions have led evidence by the nose in the study of fossil man." John Reader (photo-journalist and author of Missing Links), "Whatever happened to Zinjanthropus?" New Scientist, 26 March 1981, p. 802.

There are many highlights to this quote.

1. The entire collection of ape/man bones in the world could fit on a "billiard table".
2. The collection of ape/man bones is "tantalizingly incomplete," and more could be said about "what is missing than about what is present".
3. Most fossils can support multiple interpretations, which often rest on "preconceived notions".
4. Preconceptions "have led evidence by the nose in the study of fossil man".

Our Ancestors

Piltdown Man: Exposed as a forgery in 1953, Piltdown Man combined the skull of a modern human with the jaw of an ape whose teeth had been deceptively filed down.

Nebraska Man: Constructed from one tooth, Nebraska Man was later revealed to be a peccary, an animal similar to a pig.

Java Man: Constructed from a skullcap and thigh bone (found a year later and some fifty feet away), Java Man was first considered by most in the anthropological community as truly human, able to reproduce with any other member of the human family. There was no question about the thigh bone, as virtually every authority except Dubois himself believed it was indistinguishable from a modern human femur. They did believe it was racially inferior, however, in the same way they considered Africans, native Australians, and Southeast Asians to be inferior.

Java Man's discoverer, Dutch anatomist Eugene Dubois, insisted it was not human, nor ape, but an intermediate, pushing his case in one of his earliest papers (1895) entitled, "On Pithecanthropus erectus: a Transitional Form Between Man and the Apes." Famed Cambridge University anatomist Sir Arthur Keith commented on Dubois' paper by saying that Java Man's skullcap was distinctly human in terms of both cranial capacity and the large muscular ridges and processes connected with the chewing apparatus. Sir William Turner also pointed out that the cast of a microcephalic woman in the Edinburgh University Museum possessed a frontal flattening extremely similar to that of Java Man.

Dubois hated for anyone to disagree with him, and labored intensely to discount other fossils similar to his own Java Man that were later found (he wanted his to be unique, and searched for minuscule aspects that differentiated his bones from others). G.H.R. von Koenigswald wrote of him: "...on this point he was unaccountable as a jealous lover. Anyone who disagreed with his interpretation of Pithecanthropus was his personal enemy." (Meeting Prehistoric Man, Michael Bullock, trans., New York: Harper and Brothers, 1956, p. 32.)

In any case, the considerable distance between the two bones of Java Man raises serious doubts regarding "his" true identity. Fifty feet is a long way. Whether they belong together or not, however, matters little. Today, Java Man is called Homo erectus, as is Peking Man (whose femora differed from Java's more "modern" femur), and Homo erectus lived alongside Homo sapiens. Evolutionists as a whole will now concede this point.

As quoted in a USA TODAY article: "Finding such a link wouldn't prove sexual contact between Homo sapiens and Homo erectus ... but it would strengthen the argument that the two species were in very close proximity."(USA TODAY, Tuesday, Oct 5, 2004, 6D)

As can be seen, the evolutionary world not only allows for the coexistence of both "species", but also entertains the idea of sexual contact, which would place them in the same "kind".

The conclusion? The difference between Homo erectus and Homo sapiens is an artificial one. Both are human, and lived as contemporaries. What is called evolution by mainstream science is instead just genetic variation within the human family. Remember, according to evolutionists, Java Man had a femur more modern than that of Peking Man. Because of this, it is difficult for them to maintain a species difference between Homo erectus and Homo sapiens. The difference would be insignificant.

This is a question that frequently gets brought up when discussing the age of the universe (especially when speaking about young-earth creation). There are a few possibilities on why we see the stars even if the universe is young. Is it possible that God created the stars with their light visible from the beginning? Has the speed of light always been 186,000 miles per second? What about time itself? Is it relative or is it constant?

1. Mature creation: When God created, he may have created some things in their mature form. For example, if one were to look at Adam the day he was created, that person may come to the conclusion that Adam is somewhere around 25 years old. However, this would be wrong; Adam wouldn’t even be a day old. This is because God created Adam as a mature young adult. If this is the case with Adam, then it is possible that God created trees with rings, rivers with beds, and yes, maybe even stars with their light visible. Although this is a possibility, it won’t satisfy most critics.

2. Has the speed of light remained constant throughout all of space and time? This might be the key to answering this question. Starlight distances are calculated by how long it takes light to reach the earth based on its current speed. Today, light seems to travel at a constant speed of 186,000 miles per second. However, there is evidence and documented research which suggests that light may not travel at a constant speed:

Black holes can attract light.[1] If gravity can attract light, then it is a clue that light does not travel at a constant speed.

Danish physicist Dr. Lene Hau of Harvard University has slowed light down to just 38 mph by passing it through atoms cooled to -459.67°F (fifty billionths of a degree above absolute zero).[2]

“Prof Alan Lightman Op-Ed article appreciates achievement of physicists at Harvard and Smithsonian who have succeeded in slowing light down to a crawl and finally a dead stop.” (The New York Times, "Capturing the Light," Feb 7, 2001)

In 2003, Discover Magazine published an article based on the work of physicist João Magueijo challenging the constant speed of light:

“Extreme heat made photons—particles of light—zip along much faster than 186,282 miles per second, at almost infinite velocities. Then, as the universe began to expand and cool, its physical properties abruptly changed just as water suddenly transforms into ice at the freezing point. When the temperature dropped below a critical transition value, light ‘froze’ at the lower speed we now observe.” (Tim Folger, “At the Speed of Light,” Discover, Vol. 24, No. 4, p. 36, Apr 2003)

If light can be affected by gravity, and apparently sped up by heating and slowed by cooling, then it is obviously not a constant. We have to keep in mind that a “light year” is not a measure of time, but the distance light travels in one year. Obviously, this distance would change if the speed of light were faster or slower.
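Since a light year is just speed multiplied by time, the effect of a different speed of light on inferred travel times is simple arithmetic. The sketch below uses the standard rounded figure of 186,000 miles per second; the "ten times faster" scenario is purely illustrative of the argument, not a measured value:

```python
SPEED_OF_LIGHT_MPS = 186_000            # miles per second (rounded)
SECONDS_PER_YEAR = 365.25 * 24 * 3600   # about 31.6 million seconds

def light_year_miles(speed_mps: float = SPEED_OF_LIGHT_MPS) -> float:
    """Distance light travels in one year at the given speed."""
    return speed_mps * SECONDS_PER_YEAR

def travel_time_years(distance_miles: float, speed_mps: float) -> float:
    """Years needed for light to cross a distance at a given speed."""
    return distance_miles / (speed_mps * SECONDS_PER_YEAR)

one_ly = light_year_miles()             # ≈ 5.87 trillion miles
print(travel_time_years(one_ly, SPEED_OF_LIGHT_MPS))       # 1.0 year
print(travel_time_years(one_ly, 10 * SPEED_OF_LIGHT_MPS))  # 0.1 year
```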

3. Relative time. Physicist Russell Humphreys proposed a new creationist cosmology back in 1994 to explain this question, and has written a book on his theory.[3] His theory uses Einstein’s theory of general relativity, which teaches that time itself can be distorted by gravity. Humphreys’ theory assumes the universe has a limit, a point where there is no more matter. If the universe has an ‘edge’, then it also has a ‘center’. And, if the universe has a center, then most of the net gravitational effect in the universe would be focused toward the center, and weaker toward the edge.[4] If the earth is close to the center of the universe, then it would be in an area where the net gravitational effect is stronger. Furthermore, if the distant stars are closer to the edge of the universe, then time where the stars are might pass much faster relative to time passing on the earth.

There is also evidence that the universe is expanding. If this is the case, then it had to expand out of a previous state where it was surrounded by an event horizon (known as a ‘white hole’). As the matter expanded out of this event horizon, the horizon had to shrink (and eventually disappear) and would have been touching the earth at some point. At this moment, time on earth would be virtually frozen (though people on earth wouldn’t feel any different) relative to a point away from the event horizon. (This is similar to how the general theory of relativity teaches that at the event horizon of a ‘black hole’, time sits still.) This would mean billions of years in space could pass while only a very short amount of time on earth would pass.[5]
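The gravitational time dilation this theory appeals to follows from the Schwarzschild solution of general relativity. The sketch below computes the standard dilation factor; the mass and radii used are arbitrary textbook illustrations, not values from Humphreys' actual model:

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8    # speed of light, m/s

def dilation_factor(mass_kg: float, radius_m: float) -> float:
    """Schwarzschild time-dilation factor: a clock at radius_m from a
    mass ticks this fraction as fast as a clock far away. The factor
    approaches 1 far from the mass and 0 at the event horizon."""
    return math.sqrt(1 - 2 * G * mass_kg / (radius_m * C ** 2))

SUN_MASS = 1.989e30  # kg

# At the Sun's surface the effect is tiny (factor ≈ 0.9999979)...
print(dilation_factor(SUN_MASS, 6.96e8))

# ...but just outside the event horizon of the same mass (the
# Schwarzschild radius, ≈ 2954 m), clocks nearly stand still:
horizon = 2 * G * SUN_MASS / C ** 2
print(dilation_factor(SUN_MASS, 1.001 * horizon))
```

This is the sense in which, near an event horizon, a clock could record days while a distant clock records billions of years.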

There seems to be a lot of support for the concept that the speed of light is not constant. João Magueijo has recently published a book[6] which continues to give credence to the idea that the speed of light was much faster in the past. Russell Humphreys’ theory seems to solve the problem of how we see starlight in a young universe by questioning time itself, which fits well with Einstein’s general relativity. Whichever theory is correct, the starlight issue is not a problem for young-earth creation.