
An Essay on Radiometric Dating

By Jonathon Woolf
http://my.erinet.com/~jwoolf/rad_dat.html

 

Radiometric dating methods are the strongest direct evidence that geologists have for the age of the Earth. All these methods point to Earth being very, very old -- several billions of years old. Young-Earth creationists -- that is, creationists who believe that Earth is no more than 10,000 years old -- are fond of attacking radiometric dating methods as being full of inaccuracies and riddled with sources of error. When I first became interested in the creation-evolution debate, in late 1994, I looked around for sources that clearly and simply explained what radiometric dating is and why young-Earth creationists are driven to discredit it. I found several good sources, but none that seemed both complete enough to stand alone and simple enough for a non-geologist to understand. Thus this essay, which is my attempt at producing such a source.

Contents:

·         I. Theory of Radiometric Dating

·         II. Common Methods of Radiometric Dating

·         III. Possible Sources of Error

·         IV. Creationist Objections to Radiometric Dating

·         V. Independent Checks on Radiometric Dating

·         VI. Summary and Sources

 

I. Theory of Radiometric Dating

 

What is radiometric dating? Simply stated, radiometric dating is a way of determining the age of a sample of material using the decay rates of radioactive nuclides to provide a 'clock.' It relies on three basic rules, plus a couple of critical assumptions. The rules are the same in all cases; the assumptions are different for each method. To explain those rules, I'll need to talk about some basic atomic physics.

There are about 90 naturally occurring chemical elements. Elements are identified by their atomic number, the number of protons in the atom's nucleus. All atoms except the simplest, hydrogen-1, have nuclei made up of protons and neutrons. Hydrogen-1's nucleus consists of only a single proton. Protons and neutrons together are called nucleons, meaning particles that can appear in the atomic nucleus.

A nuclide of an element, also called an isotope of an element, is an atom of that element that has a specific number of nucleons. Since all atoms of the same element have the same number of protons, different nuclides of an element differ in the number of neutrons they contain. For example, hydrogen-1 and hydrogen-2 are both nuclides of the element hydrogen, but hydrogen-1's nucleus contains only a proton, while hydrogen-2's nucleus contains a proton and a neutron. Uranium-238 contains 92 protons and 146 neutrons, while uranium-235 contains 92 protons and 143 neutrons. To keep it short, a nuclide is usually written using the element’s abbreviation. Uranium’s abbreviation is U, so uranium-238 can be more briefly written as U238.

 

Many nuclides are stable -- they will always remain as they are unless some external force changes them. Some, however, are unstable -- given time, they will spontaneously undergo one of the several kinds of radioactive decay, changing in the process into another element.

 

There are two common kinds of radioactive decay, alpha decay and beta decay. In alpha decay, the radioactive atom emits an alpha particle. An alpha particle contains two protons and two neutrons. After emission, it quickly picks up two electrons to balance the two protons, and becomes an electrically neutral helium-4 (He4) atom. When a nuclide emits an alpha particle, its atomic number drops by 2, and its mass number (number of nucleons) drops by 4. Thus, an atom of U238 (uranium, atomic number 92) emits an alpha particle and becomes an atom of Th234 (thorium, atomic number 90).

 

A beta particle is an electron. When an atom emits a beta particle, a neutron inside the nucleus is transformed to a proton. The mass number doesn't change, but the atomic number goes up by 1. Thus, an atom of carbon-14 (C14), atomic number 6, emits a beta particle and becomes an atom of nitrogen-14 (N14), atomic number 7.

 

A third, rarer type of radioactive decay is called electron capture. In electron capture, a proton in the nucleus absorbs an electron and becomes a neutron. In other words, electron capture is the exact reverse of beta decay. The mass number doesn’t change, while the atomic number goes down by 1. So an atom of potassium-40 (K40), atomic number 19, can absorb an electron to become an atom of argon-40 (Ar40), atomic number 18.

 

The half-life of a radioactive nuclide is defined as the time it takes half of a sample of the nuclide to decay. A mathematical formula can be used to calculate the half-life from the number of breakdowns per second in a sample of the nuclide. Some nuclides have very long half-lives, measured in billions or even trillions of years. Others have extremely short half-lives, measured in tenths or hundredths of a second. The decay rate and therefore the half-life are fixed characteristics of a nuclide. They don’t change at all. That’s the first axiom of radiometric dating techniques: the half-life of a given nuclide is a constant. (Note that this doesn’t mean the half-life of an element is a constant. Different nuclides of the same element can have substantially different half-lives.)
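
To make that concrete, here is a minimal sketch in Python (illustrative numbers only) of how a constant half-life translates into the fraction of a nuclide remaining after a given time:

    def fraction_remaining(elapsed_years, half_life_years):
        """Fraction of the original nuclide still present after elapsed_years."""
        return 0.5 ** (elapsed_years / half_life_years)

    # Example: after one half-life, half remains; after two, a quarter.
    print(fraction_remaining(5730, 5730))    # 0.5
    print(fraction_remaining(11460, 5730))   # 0.25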

 

The half-life is a purely statistical measurement. It doesn’t depend on the age of individual atoms. A sample of U238 ten thousand years old will have precisely the same half-life as one that is a billion years old. So, if we know how much of the nuclide was originally present, and how much there is now, we can easily calculate how long it would take for the missing amount to decay, and therefore how long it’s been since that particular sample was formed. That’s the essence of radiometric dating: measure the amount that’s present, calculate how much is missing, and figure out how long it would take for that quantity of the nuclide to break down. Because it’s a statistical measurement, there’s always a margin of error in the age figure, but if the procedure is done properly, the margin is very small.
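
Here is the same idea as a short calculation, a sketch only: given the original amount, the amount now present, and the half-life, the elapsed time follows directly (the quantities below are invented for illustration):

    import math

    def age_from_amounts(original, remaining, half_life_years):
        """Elapsed time implied by decay from `original` down to `remaining`."""
        return half_life_years * math.log2(original / remaining)

    # If only one-eighth of the original parent nuclide is left,
    # three half-lives have passed.
    print(age_from_amounts(8.0, 1.0, 5730))   # 17190.0 years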

 

Obviously, the major question here is "how much of the nuclide was originally present in our sample?" In some cases, we don’t know. Such cases are useless for radiometric dating. We must know the original quantity of the parent nuclide in order to date our sample radiometrically. Fortunately, there are cases where we can do that.

 

In order to do so, we need a nuclide that’s part of a mineral compound. Why? Because there’s a basic law of chemistry that says "Chemical processes like those that form minerals cannot distinguish between different nuclides of the same element." They simply can’t do it. If an element has more than one nuclide present, and a mineral forms in a magma melt that includes that element, the element’s different nuclides will appear in the mineral in precisely the same ratio that they occurred in the environment where and when the mineral was formed. This is the second axiom of radiometric dating.

The third and final axiom is that when an atom undergoes radioactive decay, its internal structure and also its chemical behavior change. Losing or gaining atomic number puts the atom in a different place in the periodic table, and elements in different places behave in different ways. The new atom doesn’t form the same kinds of chemical bonds that the old one did. It may not form the same kinds of compounds. It may not even be able to hold the parent atom’s place in the compound it finds itself in, which results in an immediate breaking of the chemical bonds that hold the atom to the others in the mineral.

Why not? you might ask. Well, an atom’s chemical activity pattern is a result of its electron shell structure. (The exact details of this are rather complicated, so I won’t go into them here.) When the number of electrons changes, the shell structure changes too. So when an atom decays and changes into an atom of a different element, its shell structure changes and it behaves in a different way chemically.

 

That’s it. That’s the sum total of the chemical and physical basis of radiometric dating. That’s all you really need to know to understand radiometric dating techniques.

How do these axioms translate into useful science? In the next part of this article, I’ll examine several different radiometric dating techniques, and show how the axioms I cited above translate into useful age measurements.

 

II. Common Methods of Radiometric Dating

 

This section describes several common methods of radiometric dating. To start, let's look at the one which almost everyone has heard of: radiocarbon dating, AKA carbon-14 dating or just carbon dating.

 

Method 1: Carbon-14 Dating

 

The element carbon occurs naturally in three nuclides: C12, C13, and C14. The vast majority of carbon atoms, about 98.89%, are C12. About one atom in 800 billion is C14. The remainder are C13. Of the three, C12 and C13 are stable. C14 is radioactive, with a half-life of 5730 years. C14 is also formed continuously from N14 (nitrogen-14) in the upper reaches of the atmosphere. And since carbon is an essential element in living organisms, C14 appears in all terrestrial (landbound) living organisms in the same proportions it appears in the atmosphere.

 

Plants and protists get C14 from the environment. Animals and fungi get C14 from the plant or animal tissue they eat for food. When an organism dies, it stops taking in C14. The C14 already in the organism doesn’t stop decaying, so as time goes on there is less and less C14 left in the organism’s remains. If we measure how much C14 there currently is, we can tell how much there was when the organism died, and therefore how much has decayed. When we know how much has decayed, we know how old the sample is. Many archaeological sites have been dated by applying radiocarbon dating to samples of bone, wood, or cloth found there.
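
As a concrete sketch of that calculation (the measured value below is invented, and the original atmospheric level is treated as known, which is exactly the assumption discussed below):

    import math

    C14_HALF_LIFE = 5730.0   # years

    def radiocarbon_age(fraction_of_atmospheric_c14):
        """Years since death, given the measured C14 level as a fraction of the
        level assumed to have been in the atmosphere when the organism died."""
        return C14_HALF_LIFE * math.log2(1.0 / fraction_of_atmospheric_c14)

    # A bone retaining 25% of the atmospheric C14 level died roughly
    # two half-lives ago:
    print(radiocarbon_age(0.25))   # about 11,460 years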

 

Radiocarbon dating depends on several assumptions. One is that the thing being dated is organic in origin. Radiocarbon dating does not work on anything inorganic, like rocks or fossils. Only things that once were alive and now are dead: bones, teeth, flesh, leaves, etc. The second assumption is that the organism in question got its carbon from the atmosphere. A third is that the thing has remained closed to C14 since the organism from which it was created died. The fourth one is that we know what the concentration of atmospheric C14 was when the organism lived and died.

 

That last one is more important than it sounds. When Professor Willard Libby developed the C14 dating system in 1949, he assumed that the amount of C14 in the atmosphere was a constant. However, after a few years a number of scientists got suspicious of this assumption, because dates obtained by the C14 method weren’t tallying with dates obtained by other means. A long series of studies of C14 content produced an equally long series of corrective factors that must be taken into account when using C14 dating. So the dates derived from C14 decay had to be revised. One reference on radiometric dating lists an entire array of corrective factors for the change in atmospheric C14 over time. C14 dating serves as both an illustration of how useful radiometric dating can be, and of the pitfalls that can be found in untested assumptions.

 

Method 2: U238/U235/Th232 Series

 

U238 and U235 are both nuclides of the element uranium. U235 is well known as the major fissionable nuclide of uranium. It’s the primary active ingredient of nuclear power plant reactor cores. It has a half-life of roughly 700 million years. U238 is more stable, with a half-life of about 4.5 billion years. Th232 is the most common nuclide of the element thorium, and has a half-life of 13.9 billion years.

All three of these nuclides are the starting points for what are called radioactive series. A radioactive series is a sequence of nuclides that form one from another by radioactive decay. The series for U238 looks like this:

 

U238 --> Th234 + A

Th234 --> Pa234 + B

Pa234 --> U234 + B

U234 --> Th230 + A

Th230 --> Ra226 + A

Ra226 --> Rn222 + A

Rn222 --> Po218 + A

Po218 --> Pb214 + A

Pb214 --> Bi214 + B

Bi214 --> Po214 + B

Po214 --> Pb210 + A

Pb210 --> Bi210 + B

Bi210 --> Po210 + B

Po210 --> Pb206 + A

(Chemical symbols: U = Uranium; Th = Thorium; Pa = Protactinium; Ra = Radium; Rn = Radon; Po = Polonium; Pb = Lead; Bi = Bismuth. A indicates alpha decay; B indicates beta decay.)

 

We can calculate the half-lives of all of these elements. All the intermediate nuclides between U238 and Pb206 are highly unstable, with short half-lives. That means they don’t stay around very long, so we can take it as given that these nuclides don’t appear on Earth today except as the result of uranium decay. We can find out the normal distribution of lead nuclides by looking at a lead ore that doesn’t contain any uranium, but that formed under the same conditions and from the same source as our uranium-bearing sample. Then any excess of Pb206 must be the result of the decay of U238. When we know how much excess Pb206 there is, and we know the current quantity of U238, we can calculate how long the U238 in our sample has been decaying, and therefore how long ago the rock formed.
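
The arithmetic behind that last step can be sketched as follows; it uses the standard decay relation (daughter produced = parent remaining × (e^(λt) − 1)), the half-life figure quoted above, and purely illustrative atom counts:

    import math

    U238_HALF_LIFE = 4.5e9                        # years (figure quoted above)
    LAMBDA_U238 = math.log(2) / U238_HALF_LIFE    # decay constant, per year

    def u238_pb206_age(radiogenic_pb206, u238_now):
        """Age from the excess Pb206 relative to the U238 still present,
        using daughter = parent * (exp(lambda * t) - 1), solved for t."""
        return math.log(1.0 + radiogenic_pb206 / u238_now) / LAMBDA_U238

    # Illustrative only: equal numbers of radiogenic Pb206 atoms and U238 atoms
    # imply that one half-life has passed -- about 4.5 billion years.
    print(u238_pb206_age(1.0, 1.0))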

 

Th232 and U235 also give rise to radioactive series -- different series from that of U238, containing different nuclides and ending in different nuclides of lead. Chemists can apply similar techniques to all three, resulting in three different dates for the same rock sample. (Uranium and thorium have similar chemical behavior, so all three of these nuclides frequently occur in the same ores.) If all three dates agree within the margin of error, the date can be accepted as confirmed beyond a reasonable doubt. Since all three of these nuclides have substantially different half-lives, for all three to agree indicates the technique being used is sound.

 

But even so, radioactive-series dating could be open to question. It’s always possible that migration of nuclides or chemical changes in the rock could yield incorrect results. The rock being dated must remain a closed system with respect to uranium, thorium, and their daughter nuclides for the method to work properly. Both the uranium and thorium series include nuclides of radon, an inert gas that can migrate through rock fairly easily even in the few days it lasts. To have a radiometric dating method that is unquestionably accurate, we need a radioactive nuclide for which we can get absolutely reliable measurements of the original quantity and the current quantity. Is there any such nuclide to be found in nature? The answer is yes. Which brings us to the third method of radiometric dating . . .

 

Method 3: Potassium-Argon Dating

 

The element potassium has three naturally occurring nuclides, K39, K40, and K41. Only K40 is radioactive; the other two are stable. K40 is unusual among radioactive nuclides in that it can break down two different ways. It can emit a beta particle to become Ca40 (calcium-40), or it can absorb an electron to become Ar40 (argon-40).

 

Argon is a very special element. It’s one of the group of elements called "noble gases" or "inert gases". Argon is a gas at Earth-normal temperatures, and in any state it exists only as single atoms. It doesn’t form chemical compounds with any other element, not even the most active ones. It’s a fairly large atom, so it would have trouble slipping into a dense crystal’s molecular structure. By contrast, potassium and calcium are two of the most active elements in nature. They both form compounds readily and hold onto other atoms tenaciously.

 

What does this mean? It means that potassium can get into minerals quite easily, but argon can’t. It means that before a mineral crystallizes, argon can escape from it easily. It also means that when an atom of argon forms from an atom of potassium inside the mineral, the argon is trapped in the mineral. So any Ar40 we find deep inside a rock sample must be there as a result of K40 decay. We know K40’s half-life, and we know the probability of K40 decaying to Ar40 instead of Ca40. That and some simple calculations produce a figure for how long the K40 has been decaying in our rock sample.
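
A hedged sketch of that calculation follows. The decay constant and the fraction of K40 decays that yield Ar40 are values commonly quoted in the literature, not figures given in this essay, so treat them as assumptions:

    import math

    LAMBDA_K40 = 5.543e-10    # total K40 decay constant, per year (assumed value)
    AR40_BRANCH = 0.105       # fraction of K40 decays yielding Ar40 (assumed value)

    def k_ar_age(radiogenic_ar40, k40_now):
        """Conventional K-Ar age from measured radiogenic Ar40 and remaining K40."""
        return math.log(1.0 + (radiogenic_ar40 / k40_now) / AR40_BRANCH) / LAMBDA_K40

    # Illustrative ratio only: 0.01 atoms of radiogenic Ar40 per atom of K40.
    print(k_ar_age(0.01, 1.0))   # about 164 million years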

 

However, again it’s important to remember that we’re dealing with assumptions, and we always have to keep in mind that our assumptions may be wrong. What happens if our mineral sample has not remained a closed system? What if argon has escaped from the mineral? What if argon has found its way into the mineral from some other source?

 

If some of the radiogenic argon has escaped, then more K40 must have decayed than we think -- enough to produce what we did find plus what escaped. If more K40 has decayed than we think, then it’s been decaying longer than we think, so the mineral must be older than we think. In other words, a mineral that has lost argon will be older than the result we get says it is. In the other direction, if excess argon has gotten into the mineral, it will be younger than the result we get says it is.

 

An isochron dating method (isochron dating is described in the next section) can also be applied to potassium-argon dating under certain very specific circumstances. When isochron dating can be used, the result is a much more accurate date.

Method 4: Rubidium-Strontium Dating

 

Yet a fourth method, rubidium-strontium dating, is even better than potassium-argon dating for old rocks. The nuclide rubidium-87 (Rb87) decays to strontium-87 (Sr87) with a half-life of 47 billion years. Strontium occurs naturally as a mixture of several nuclides. If three minerals form at the same time in different regions of a magma chamber, they will have identical ratios of the different strontium nuclides. (Remember, chemical processes can’t differentiate between nuclides.) The total amount of strontium might be different in the different minerals, but the ratios will be the same. Now, suppose that one mineral has a lot of Rb87, another has very little, and the third has an in-between amount. That means that when the minerals crystallize, each of them starts with its own fixed ratio of Rb87 to Sr87. As time goes on, atoms of Rb87 decay to Sr87, resulting in a change in the Rb87:Sr87 ratio, and also in a change in the ratio of Sr87 to other nuclides of strontium. The decrease in the Rb87:Sr87 ratio is exactly matched by the gain of Sr87 in the strontium-nuclide ratio. It has to be -- the two sides of the equation must balance.

 

If we plot the change in the two ratios for these three minerals, the resulting graph comes out as a straight line with an ascending slope. This line is called an isochron. The line’s slope then translates directly into a figure for the age of the rock that contains the different minerals. The power of the Rb/Sr dating method is enormous, for it provides multiple independent ways of verifying the accuracy of the isochron. When every one of four or five different minerals from the same igneous formation matches the isochron perfectly, it can safely be said that the isochron is correct beyond a reasonable doubt. Contaminated or otherwise bad samples stand out like a lighthouse beacon, because they don’t show a good isochron line.
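
A minimal sketch of how an age is extracted from such an isochron plot; the mineral ratios below are invented, and the half-life is the figure used above:

    import math

    RB87_HALF_LIFE = 4.7e10                     # years (figure used above)
    LAMBDA_RB87 = math.log(2) / RB87_HALF_LIFE  # decay constant, per year

    def isochron_age(points):
        """Age from an Rb-Sr isochron. `points` is a list of
        (Rb87/Sr86, Sr87/Sr86) ratio pairs, one pair per mineral.
        The best-fit slope of the line equals exp(lambda * t) - 1."""
        n = len(points)
        mean_x = sum(x for x, _ in points) / n
        mean_y = sum(y for _, y in points) / n
        slope = (sum((x - mean_x) * (y - mean_y) for x, y in points)
                 / sum((x - mean_x) ** 2 for x, _ in points))
        return math.log(1.0 + slope) / LAMBDA_RB87

    # Invented minerals lying on a perfect isochron of slope 0.01:
    minerals = [(0.5, 0.705), (2.0, 0.720), (5.0, 0.750)]
    print(isochron_age(minerals))   # roughly 675 million years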

 

There are numerous other radiometric dating methods: the samarium-neodymium, lutetium-hafnium, rhenium-osmium, and lead isochron methods just to name a few. However, I simply haven’t time or room to deal with all of them. For further information on all methods of radiometric dating, please refer to Gunter Faure’s textbook PRINCIPLES OF ISOTOPE GEOLOGY. A full cite for this book is given in the bibliography.

 

III. Possible Sources of Error

 

Now, why is all this relevant to the creation-vs.-evolution debate? Every method of radiometric dating ever used points to an ancient age for the Earth. For creationists to destroy the old-Earth theory, they must destroy the credibility of radiometric dating. They have two ways to do this. They can criticize the science that radiometric dating is based on, or they can claim sloppy technique and experimental error in the laboratory analyses of radioactivity levels and nuclide ratios.

 

Option 1: Criticize the Theory

 

Is there any way to criticize the theory of radiometric dating? Well, look back at the axioms of radiometric dating methods. Are any of those open to question? Answer: yes, two of them are. Or at least, they seem to be. Do we know, for a fact, that half-lives are constant (axiom 1)? Do we know for a fact that nuclide ratios are constant (axiom 2)?

 

Regarding the first question: There are sound theoretical reasons for accepting the constancy of nuclide half-lives, but the reasons are based in the remote and esoteric reaches of quantum mechanics, and I don’t intend to get into that in this article. However, if all we had were theoretical reasons for believing axiom 1, we would be right to be suspicious of it. Do we have observational evidence?

 

The answer is yes. On several occasions, astronomers have been able to analyze the radiation produced by supernovas. In a supernova, the vast amount of energy released creates every known nuclide via atomic fusion and fission. Some of these nuclides are radioactive. We can detect the presence of the various nuclides by spectrographic analysis of the supernova’s radiation. We can also detect the characteristic radiation signatures of radioactive decay in those nuclides. We can use that information to calculate the half-lives of those nuclides. In every case where this has been done, the measured radiation intensity and the calculated half-life of the nuclide from the supernova matches extremely well with measurements of that nuclide made here on Earth.

 

Now, because light travels at a fixed rate (a bit under 300,000 kilometers per second), and because stars are so far away, when we look at a distant star we’re seeing it as it was when that light left it and headed this way. When we look at a star in the Andromeda Galaxy, 2,700,000 light-years away, we’re seeing that star as it was 2,700,000 years ago. And when we look at a supernova in the Andromeda Galaxy, 2,700,000 years old, we see nuclides with the exact same half-lives as we see here on Earth. Not just one or two nuclides, but many. For these measurements to all be consistently wrong in exactly the same way, most scientists feel, is beyond the realm of possibility.

 

What about nuclide ratios? Are they indeed constant? Well, let’s think about it: Minerals form by recognized chemical processes that depend on the chemical activity of the elements involved. The chemical behavior of an element depends on its size and the number of electrons in its outer shell. This is the foundation of the periodic table of the elements, a basic part of chemistry that has stood without challenge for well over a century.

 

The shell structure depends only on the number of electrons the nuclide has, which is the same as the number of protons in its nucleus. So the shell structure doesn’t change between different nuclides of the same element. K39 is chemically identical to K40; the only way we can distinguish between them is to use a nonchemical technique like mass spectrometry. (Note: It’s true that some natural processes favor some isotopes over others. Water molecules containing oxygen-16 are lighter and therefore evaporate faster than water molecules with oxygen-18. However, as far as is known such fractionation occurs only with light nuclides: oxygen, hydrogen, carbon. The atoms used in radiometric dating techniques are mainly heavy atoms, so we can still use the axiom that mineral-forming processes can’t distinguish between different nuclides.)

 

So the processes that are involved in mineral formation can’t distinguish between nuclides. Sr86 atoms and Sr87 atoms behave identically when they bond with other atoms to form a mineral molecule. If there are ten Sr86 atoms for every Sr87 atom in the original magma melt, there will be ten Sr86 atoms for every Sr87 atom in the minerals that crystallize from that melt.

 

Option 2: Criticize the Techniques

 

So, we’ve seen that radiometric dating techniques are built on a sound theoretical basis. The only other possible source of error is in laboratory technique. To translate theory into useful measurements, the lab procedures must be accurate. A contaminated rock sample is useless for dating. A sample that is taken from the surface, where atoms could get in and out easily, is also useless. Samples must be taken by coring, from deep within a rock mass. To date a rock, chemists must break it down into its component elements using any of several methods, then analyze nuclide ratios using a mass spectrometer. If the lab technique is sloppy, the date produced isn’t reliable.

 

There’s no way to eliminate the possibility of error. It can’t be done. Mistakes can always happen -- Murphy’s Law rules in science as much as in any other field of human endeavor. (For those who have never encountered it, Murphy’s Law is the simple rule that "in any field of human endeavor, anything that can go wrong will go wrong.")

 

But we can try to minimize error. And when we do, the dates produced can be accepted as accurate. When samples taken from different parts of a given igneous rock formation are dated by different people at different labs over many years, the possibility that all those measurements could be wrong is vanishingly small. Some may well be wrong. If nine analyses agree, and a tenth produces radically different results, the odd-man-out is usually considered a result of some kind of error and discarded. If the researcher doing the work can find and document a specific problem in analysis that could produce precisely that observed wrong result, then it’s virtually certain the odd-man-out is an error. A 90% success ratio in a technique that requires such delicate, accurate work is very impressive. And some radiometric techniques have a much better success ratio than that.

 

In any case, while it’s true that there are numerous possible sources of error, there is no source of error that could account for the enormous difference between the 6000-year age demanded by young-Earth creationists and the 3.5-billion-year radiometric age for the oldest known rocks on Earth. It’s completely illogical to think that these techniques could be wrong by that much.

 

IV. Creationist Objections to Radiometric Dating

 

Creationist objections to radiometric dating techniques basically fall into three categories:

 

1) Creationists often claim that radiometric dates are unreliable because the entire theory is based on invalid assumptions. We’ve already seen that this doesn’t hold up under examination. The assumptions that are used in radiometric dating techniques are perfectly justified given current physics.

 

2) Creationist geologists compile long lists of dates obtained by radiometric techniques that deviate widely from the normally accepted values. Creationist geologist John Woodmorappe is the best known of the creationists who attempt this approach. He’s compiled a list of several hundred radiometric dates that are widely divergent from the values accepted by conventional geology. Based on this, he claims that radiometric dating methods don't produce consistent results, that geologists conceal radiometric dates which don't match what's expected, and that therefore the whole methodology of radiometric dating is worthless.

 

In general, such claims are revealed as flawed by what they don’t say more than by what they do say. In an article for the creationist journal Creation Research Society Quarterly, Woodmorappe listed 350-odd aberrant dates, and claimed that there are many, many more. What he did not say is that those 350 were winnowed out of tens of thousands of radiometric dates which do give more reasonable results. If we run dating tests on 500 samples and 350 (70% of the total tests) are scattered all over the place, with only a few anywhere near the expected values, then we clearly have a serious problem. But if we run dating tests on 10,000 samples and get 350 aberrant results (3.5% of the total), that could easily be simple experimental error. If we test fifty samples from the same rock formation and we get 2 ages of 1 million years, 2 ages of 500 million years, and 46 ages all clustered around 175 million years, it’s not a great leap of logic to conclude that the 4 aberrant results were in error, and the 46 clustered results are probably correct.

 

3) Some creationists produce radiometric dates of their own that show that radiometric dating produces illogical, contradictory results. A classic example of this tactic is a claim by creationist geologist Steve Austin that rocks taken from Recent lava flows on the Uinkaret Plateau at the top of the Grand Canyon produce older dates than rocks taken from the bottom of the canyon, when both samples are dated using the Rb87/Sr87 isochron method. This appears to be a serious blow against the Rb/Sr isochron technique, but in his article "A Criticism of ICR's Grand Canyon Dating Project," Chris Stassen points out that Austin made a critical error in his work. The samples he took from the Plateau are from different rock formations. For any type of radiometric dating to work properly, all samples must come from the same formation. So it’s not surprising that Austin’s results make no sense.

 

V. Independent Checks on Radiometric Dating

 

We’re still not done, though. If all we had was the radiometric techniques that I’ve described, there would remain a possibility (a minuscule one, but a real possibility nonetheless) that the entire idea is grievously wrong, that some as-yet-undetected factor is throwing off all the hundreds and thousands of radiometric dates that have been produced based on rock samples from all around Earth and even beyond it.

But we have more than that. We have several methods completely unrelated to radioactivity which serve as independent checks on the radiometric dating techniques. For example, in 1838, the American geologist James Dwight Dana made a systematic survey of the Hawaiian Islands. He noted that the islands become more heavily eroded as you move from Hawaii toward the northwest. He interpreted this to mean that the islands become older as you move northwest along the chain. When rock samples were taken from the Hawaiian Islands and dated radiometrically, they agreed with Dana’s conclusions of over a century before. The islands do indeed become older as you move northwest. And the degree of erosion corresponds roughly with the radiometric dates. No island in the chain is dated as being significantly older than the erosion rate implies, nor is any island in the chain dated as being significantly younger than the erosion rate indicates.

 

Both modern corals and fossil corals deposit daily and annual growth bands. By careful analysis of these bands, we can tell how many days there were in a year when the coral was growing. For modern corals, this technique yields 365 day-bands per year, more or less, just as it should. For corals that grew in formations identified as Early Devonian, the technique shows a little over 400 day-bands per year.

Astronomers can measure the rate at which Earth’s rotation is slowing. Assuming the rate of slowing has remained constant, a day-count of 400 days per year indicates an age of roughly 400 million years. And when Early Devonian rocks are dated radiometrically, we get dates of roughly 400 million years.
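
The arithmetic behind that cross-check is simple enough to sketch. The slowing rate used below, about 2 milliseconds per century, is an assumed round figure rather than a number given in this essay:

    # Assume the length of the year is unchanged and the day lengthens
    # by about 0.002 seconds per century (an assumed round figure).
    hours_per_year = 365.25 * 24.0
    devonian_day_hours = hours_per_year / 400.0               # about 21.9 hours
    shortfall_seconds = (24.0 - devonian_day_hours) * 3600.0  # how much shorter the day was
    lengthening_per_year = 0.002 / 100.0                      # seconds of day length per year
    print(shortfall_seconds / lengthening_per_year)           # roughly 375 million years

Given how rough the assumed slowing rate is, landing within shouting distance of the 400-million-year radiometric figure is about as good as this kind of estimate gets.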

Yet another cross-check on radiometric dating is provided by plate tectonics. The plates that form the Earth’s surface are moving at a measurable rate. There are several ways of measuring this movement that themselves have nothing to do with radiometric dating. The plate that forms the Pacific basin is moving northwest at a known rate. Geologists generally accept that the Hawaiian Islands have been formed by upwelling of magma from a ‘hot spot’ in the Earth’s mantle, below the seafloor. The Pacific Plate is moving; the hot spot remains fixed; and the result is a series of volcanic islands growing upward over the hot spot. Between Hawaii and Midway (2400 km northwest of Hawaii) are some thirty volcanoes, active and extinct. Many of these volcanoes have had lava flows dated by the potassium-argon method. In all these cases, the radiometric date agrees substantially with the date derived from extrapolation of plate motion. Particularly striking is the correlation for Midway itself. By drift rate it should be about 27 million years old. By K-Ar dating, the volcanic rock that forms Midway's core is 27.7 million years old.
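
The drift-rate side of that comparison is just distance divided by speed. The plate speed used below, about 9 centimeters per year, is an assumed round figure consistent with the distance and age quoted above, not a number given in this essay:

    # Midway sits about 2400 km from the hot spot now under Hawaii.
    distance_km = 2400.0
    plate_speed_km_per_year = 0.09 / 1000.0        # 9 cm/yr, expressed in km/yr (assumed)
    print(distance_km / plate_speed_km_per_year)   # roughly 27 million years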

 

Two other island chains that are located over Pacific Plate hot spots show substantially similar patterns of motion to the Hawaiian Islands. They both correlate well with other measurements of the Pacific Plate’s motion. And their radiometric dates match as well. Three independent methods of dating these islands, and they all agree within acceptable ranges of error. What are the chances of all three being wrong in such ways as to produce the same wrong answer? There’s no way to calculate them, but I’d say they’re so minuscule that you’d have better odds of hitting the state lottery five times in a row.

One more example, just to wrap things up. In the late sixties, two geologists identified a specific reversal of the Earth’s magnetic field using the rocks in Olduvai Gorge in Tanzania. (Olduvai Gorge is famous as one of the best sites in the world for early hominid fossils.) They called this reversal the Olduvai Event. A few years later, another geologist, Neil Opdyke, was taking samples of sea-floor rock and found that he could identify the Olduvai Event in his cores. Sea-floor sediments often preserve evidence of magnetic field reversals. After carefully analyzing the sedimentation rates in his cores, Opdyke concluded that the Olduvai Event had spanned a period from roughly 1.85 million to about 1.5 million years ago. Obtaining radiometric dates from Olduvai Gorge is difficult for several reasons, but one of the few dates that’s considered solid comes from a specific layer of volcanic tuff called ‘Tuff 1B’. This tuff occurs near the bottom of the Olduvai Event. It was dated in 1959 using the potassium-argon method. The date produced was approximately 1.8 million years ago.

 

VI. Summary

 

Far from being rickety constructs full of sources of error and unproven assumptions, radiometric dating techniques actually rest on a very sound theoretical and procedural basis. To destroy that basis, creationists would have to destroy much of chemistry and a lot of atomic physics too. The periodic table is the bedrock on which modern chemistry is built. The constancy of the velocity of light is a basic axiom of Einstein’s theories of relativity, theories which have passed every test physicists could devise. The constancy of radioactive decay rates follows from quantum mechanics, which has also passed every test physicists can create. In short, everything we know in chemistry and in physics points to radiometric dating as being a viable and valuable method of calculating the ages of igneous and metamorphosed igneous rocks. To question it seems to be beyond the bounds of reason.

 

To charge thousands of chemists all over the world with mass incompetence also seems to be beyond the bounds of reason. Radiometric dating has been used ever more widely for the past forty years. The dates produced have gotten steadily more precise as lab techniques and instrumentation have been improved. There is simply no logical reason to throw this entire field of science out the window. There is no reason to believe the theory is faulty, or to believe that thousands of different chemists could be so consistently wrong in the face of every conceivable test.

 

Further, radiometric dates can be checked by other dating techniques. When they are, the dates almost always agree within the range of expected error. In cases where the dates don’t agree, it’s always been found that some natural factor was present which selectively affected one or the other dating method being used.

 

Creationists are forced to challenge radiometric dating because it stands as the most powerful and most damning evidence against their idea of a young Earth. But in the end, they are reduced to saying that "radiometric dating must be wrong, because we know it happened this way." And that is not a scientific position. If theory says it happened this way and evidence says it happened that way, theory must be revised to fit the evidence. Creationists won’t do that. That reveals creation ‘science’ to be a sham, and not any kind of science at all.

 

REFERENCES

 

Information on the nature of atoms, half-lives, and types of radioactive decay was taken from ATOM: Journey Across the Subatomic Cosmos, by Isaac Asimov (c. Nightfall, Inc., 1991).

Information on radiometric dating techniques comes from a variety of sources, including the Asimov book cited above, and these:

 

PRINCIPLES OF ISOTOPE GEOLOGY, 2nd Ed.
Gunter Faure
c. John Wiley & Sons, 1986

 

EARTH THEN AND NOW, 2nd Ed.
Carla Montgomery and David Dathe
c. Wm C. Brown, Publishers, 1994

 

"Scientific Creationism vs. Evolution: The Mislabeled Debate"
Kenneth R. Miller
Essay in SCIENCE AND CREATIONISM
Edited by Ashley Montagu
c. Oxford University Press, 1984
(This essay contains an excellent explanation of the rubidium-strontium isochron dating method.)

 

GROLIER’S ACADEMIC AMERICAN ENCYCLOPEDIA, 1994 ed.
Articles on Carbon-14, Radiometric Dating, and related subjects
Multimedia CD-ROM distributed by Grolier Electronic Publishing

 

Information on creationist critiques of radiometric dating, and the flaws in those critiques, comes primarily from these sources:

 

"The Unreliability of Radiometric Dating"
Jon Covey
Uploaded to CompuServe’s Dinosaur/Paleontology Forum Library 13 as JDATIN.

 

Evaluation of ICR Grand Canyon Research Project
Chris Stassen
This paper is available on the World Wide Web at the talk.origins FAQ Archive.

 

YECNO.
Glenn Morton
File uploaded to CompuServe Religion Forum

(Note: Glenn Morton is an old-Earth creationist, not an "evolutionist". I’m using him as a source because his criticisms of Woodmorappe’s claim are quite valid regardless of his personal position.)

 

Information on the independent cross-checks for radiometric dates comes mainly from these sources:

On the Hawaiian Island chain:

‘The Evolution of the Pacific’
Heezen, Bruce and MacGregor, Ian
c. SCIENTIFIC AMERICAN, Nov. 1973.
Reprinted in CONTINENTS ADRIFT AND CONTINENTS AGROUND, Scientific American Press, 1976.

 

On the Olduvai Event:

LUCY’S CHILD: The Discovery of a Human Ancestor
Donald Johanson and James Shreeves
c. 1989, Wm Morrow and Company, Inc.

 

On the length of the year as shown by coral growth patterns:

A TRIP THROUGH TIME: Principles of Historical Geology, 2nd Ed.
John Cooper, Richard Miller, and Jacqueline Patterson
c. Merrill Publishing, 1990
"A Calendar in the Coral", pp. 307-8

 

These sources are supplemented by references in other books, numerous threads on several CompuServe forums, and other papers and articles downloaded from CompuServe and the World Wide Web.







September 2004

Remember the essays you had to write in high school? Topic sentence, introductory paragraph, supporting paragraphs, conclusion. The conclusion being, say, that Ahab in Moby Dick was a Christ-like figure.

Oy. So I'm going to try to give the other side of the story: what an essay really is, and how you write one. Or at least, how I write one.

Mods

The most obvious difference between real essays and the things one has to write in school is that real essays are not exclusively about English literature. Certainly schools should teach students how to write. But due to a series of historical accidents the teaching of writing has gotten mixed together with the study of literature. And so all over the country students are writing not about how a baseball team with a small budget might compete with the Yankees, or the role of color in fashion, or what constitutes a good dessert, but about symbolism in Dickens.

With the result that writing is made to seem boring and pointless. Who cares about symbolism in Dickens? Dickens himself would be more interested in an essay about color or baseball.

How did things get this way? To answer that we have to go back almost a thousand years. Around 1100, Europe at last began to catch its breath after centuries of chaos, and once they had the luxury of curiosity they rediscovered what we call "the classics." The effect was rather as if we were visited by beings from another solar system. These earlier civilizations were so much more sophisticated that for the next several centuries the main work of European scholars, in almost every field, was to assimilate what they knew.

During this period the study of ancient texts acquired great prestige. It seemed the essence of what scholars did. As European scholarship gained momentum it became less and less important; by 1350 someone who wanted to learn about science could find better teachers than Aristotle in his own era. [1] But schools change slower than scholarship. In the 19th century the study of ancient texts was still the backbone of the curriculum.

The time was then ripe for the question: if the study of ancient texts is a valid field for scholarship, why not modern texts? The answer, of course, is that the original raison d'etre of classical scholarship was a kind of intellectual archaeology that does not need to be done in the case of contemporary authors. But for obvious reasons no one wanted to give that answer. The archaeological work being mostly done, it implied that those studying the classics were, if not wasting their time, at least working on problems of minor importance.

And so began the study of modern literature. There was a good deal of resistance at first. The first courses in English literature seem to have been offered by the newer colleges, particularly American ones. Dartmouth, the University of Vermont, Amherst, and University College, London taught English literature in the 1820s. But Harvard didn't have a professor of English literature until 1876, and Oxford not till 1885. (Oxford had a chair of Chinese before it had one of English.) [2]

What tipped the scales, at least in the US, seems to have been the idea that professors should do research as well as teach. This idea (along with the PhD, the department, and indeed the whole concept of the modern university) was imported from Germany in the late 19th century. Beginning at Johns Hopkins in 1876, the new model spread rapidly.

Writing was one of the casualties. Colleges had long taught English composition. But how do you do research on composition? The professors who taught math could be required to do original math, the professors who taught history could be required to write scholarly articles about history, but what about the professors who taught rhetoric or composition? What should they do research on? The closest thing seemed to be English literature. [3]

And so in the late 19th century the teaching of writing was inherited by English professors. This had two drawbacks: (a) an expert on literature need not himself be a good writer, any more than an art historian has to be a good painter, and (b) the subject of writing now tends to be literature, since that's what the professor is interested in.

High schools imitate universities. The seeds of our miserable high school experiences were sown in 1892, when the National Education Association "formally recommended that literature and composition be unified in the high school course." [4] The 'riting component of the 3 Rs then morphed into English, with the bizarre consequence that high school students now had to write about English literature-- to write, without even realizing it, imitations of whatever English professors had been publishing in their journals a few decades before.

It's no wonder if this seems to the student a pointless exercise, because we're now three steps removed from real work: the students are imitating English professors, who are imitating classical scholars, who are merely the inheritors of a tradition growing out of what was, 700 years ago, fascinating and urgently needed work.

No Defense

The other big difference between a real essay and the things they make you write in school is that a real essay doesn't take a position and then defend it. That principle, like the idea that we ought to be writing about literature, turns out to be another intellectual hangover of long forgotten origins.

It's often mistakenly believed that medieval universities were mostly seminaries. In fact they were more law schools. And at least in our tradition lawyers are advocates, trained to take either side of an argument and make as good a case for it as they can. Whether cause or effect, this spirit pervaded early universities. The study of rhetoric, the art of arguing persuasively, was a third of the undergraduate curriculum. [5] And after the lecture the most common form of discussion was the disputation. This is at least nominally preserved in our present-day thesis defense: most people treat the words thesis and dissertation as interchangeable, but originally, at least, a thesis was a position one took and the dissertation was the argument by which one defended it.

Defending a position may be a necessary evil in a legal dispute, but it's not the best way to get at the truth, as I think lawyers would be the first to admit. It's not just that you miss subtleties this way. The real problem is that you can't change the question.

And yet this principle is built into the very structure of the things they teach you to write in high school. The topic sentence is your thesis, chosen in advance, the supporting paragraphs the blows you strike in the conflict, and the conclusion-- uh, what is the conclusion? I was never sure about that in high school. It seemed as if we were just supposed to restate what we said in the first paragraph, but in different enough words that no one could tell. Why bother? But when you understand the origins of this sort of "essay," you can see where the conclusion comes from. It's the concluding remarks to the jury.

Good writing should be convincing, certainly, but it should be convincing because you got the right answers, not because you did a good job of arguing. When I give a draft of an essay to friends, there are two things I want to know: which parts bore them, and which seem unconvincing. The boring bits can usually be fixed by cutting. But I don't try to fix the unconvincing bits by arguing more cleverly. I need to talk the matter over.

At the very least I must have explained something badly. In that case, in the course of the conversation I'll be forced to come up with a clearer explanation, which I can just incorporate in the essay. More often than not I have to change what I was saying as well. But the aim is never to be convincing per se. As the reader gets smarter, convincing and true become identical, so if I can convince smart readers I must be near the truth.

The sort of writing that attempts to persuade may be a valid (or at least inevitable) form, but it's historically inaccurate to call it an essay. An essay is something else.

Trying

To understand what a real essay is, we have to reach back into history again, though this time not so far. To Michel de Montaigne, who in 1580 published a book of what he called "essais." He was doing something quite different from what lawyers do, and the difference is embodied in the name. Essayer is the French verb meaning "to try" and an essai is an attempt. An essay is something you write to try to figure something out.

Figure out what? You don't know yet. And so you can't begin with a thesis, because you don't have one, and may never have one. An essay doesn't begin with a statement, but with a question. In a real essay, you don't take a position and defend it. You notice a door that's ajar, and you open it and walk in to see what's inside.

If all you want to do is figure things out, why do you need to write anything, though? Why not just sit and think? Well, there precisely is Montaigne's great discovery. Expressing ideas helps to form them. Indeed, helps is far too weak a word. Most of what ends up in my essays I only thought of when I sat down to write them. That's why I write them.

In the things you write in school you are, in theory, merely explaining yourself to the reader. In a real essay you're writing for yourself. You're thinking out loud.

But not quite. Just as inviting people over forces you to clean up your apartment, writing something that other people will read forces you to think well. So it does matter to have an audience. The things I've written just for myself are no good. They tend to peter out. When I run into difficulties, I find I conclude with a few vague questions and then drift off to get a cup of tea.

Many published essays peter out in the same way. Particularly the sort written by the staff writers of newsmagazines. Outside writers tend to supply editorials of the defend-a-position variety, which make a beeline toward a rousing (and foreordained) conclusion. But the staff writers feel obliged to write something "balanced." Since they're writing for a popular magazine, they start with the most radioactively controversial questions, from which-- because they're writing for a popular magazine-- they then proceed to recoil in terror. Abortion, for or against? This group says one thing. That group says another. One thing is certain: the question is a complex one. (But don't get mad at us. We didn't draw any conclusions.)

The River

Questions aren't enough. An essay has to come up with answers. They don't always, of course. Sometimes you start with a promising question and get nowhere. But those you don't publish. Those are like experiments that get inconclusive results. An essay you publish ought to tell the reader something he didn't already know.

But what you tell him doesn't matter, so long as it's interesting. I'm sometimes accused of meandering. In defend-a-position writing that would be a flaw. There you're not concerned with truth. You already know where you're going, and you want to go straight there, blustering through obstacles, and hand-waving your way across swampy ground. But that's not what you're trying to do in an essay. An essay is supposed to be a search for truth. It would be suspicious if it didn't meander.

The Meander (aka Menderes) is a river in Turkey. As you might expect, it winds all over the place. But it doesn't do this out of frivolity. The path it has discovered is the most economical route to the sea. [6]

The river's algorithm is simple. At each step, flow down. For the essayist this translates to: flow interesting. Of all the places to go next, choose the most interesting. One can't have quite as little foresight as a river. I always know generally what I want to write about. But not the specific conclusions I want to reach; from paragraph to paragraph I let the ideas take their course.

This doesn't always work. Sometimes, like a river, one runs up against a wall. Then I do the same thing the river does: backtrack. At one point in this essay I found that after following a certain thread I ran out of ideas. I had to go back seven paragraphs and start over in another direction.

Fundamentally an essay is a train of thought-- but a cleaned-up train of thought, as dialogue is cleaned-up conversation. Real thought, like real conversation, is full of false starts. It would be exhausting to read. You need to cut and fill to emphasize the central thread, like an illustrator inking over a pencil drawing. But don't change so much that you lose the spontaneity of the original.

Err on the side of the river. An essay is not a reference work. It's not something you read looking for a specific answer, and feel cheated if you don't find it. I'd much rather read an essay that went off in an unexpected but interesting direction than one that plodded dutifully along a prescribed course.

Surprise

So what's interesting? For me, interesting means surprise. Interfaces, as Geoffrey James has said, should follow the principle of least astonishment. A button that looks like it will make a machine stop should make it stop, not speed up. Essays should do the opposite. Essays should aim for maximum surprise.

I was afraid of flying for a long time and could only travel vicariously. When friends came back from faraway places, it wasn't just out of politeness that I asked what they saw. I really wanted to know. And I found the best way to get information out of them was to ask what surprised them. How was the place different from what they expected? This is an extremely useful question. You can ask it of the most unobservant people, and it will extract information they didn't even know they were recording.

Surprises are things that you not only didn't know, but that contradict things you thought you knew. And so they're the most valuable sort of fact you can get. They're like a food that's not merely healthy, but counteracts the unhealthy effects of things you've already eaten.

How do you find surprises? Well, therein lies half the work of essay writing. (The other half is expressing yourself well.) The trick is to use yourself as a proxy for the reader. You should only write about things you've thought about a lot. And anything you come across that surprises you, who've thought about the topic a lot, will probably surprise most readers.

For example, in a recent essay I pointed out that because you can only judge computer programmers by working with them, no one knows who the best programmers are overall. I didn't realize this when I began that essay, and even now I find it kind of weird. That's what you're looking for.

So if you want to write essays, you need two ingredients: a few topics you've thought about a lot, and some ability to ferret out the unexpected.

What should you think about? My guess is that it doesn't matter-- that anything can be interesting if you get deeply enough into it. One possible exception might be things that have deliberately had all the variation sucked out of them, like working in fast food. In retrospect, was there anything interesting about working at Baskin-Robbins? Well, it was interesting how important color was to the customers. Kids a certain age would point into the case and say that they wanted yellow. Did they want French Vanilla or Lemon? They would just look at you blankly. They wanted yellow. And then there was the mystery of why the perennial favorite Pralines 'n' Cream was so appealing. (I think now it was the salt.) And the difference in the way fathers and mothers bought ice cream for their kids: the fathers like benevolent kings bestowing largesse, the mothers harried, giving in to pressure. So, yes, there does seem to be some material even in fast food.

I didn't notice those things at the time, though. At sixteen I was about as observant as a lump of rock. I can see more now in the fragments of memory I preserve of that age than I could see at the time from having it all happening live, right in front of me.

Observation

So the ability to ferret out the unexpected must not merely be an inborn one. It must be something you can learn. How do you learn it?

To some extent it's like learning history. When you first read history, it's just a whirl of names and dates. Nothing seems to stick. But the more you learn, the more hooks you have for new facts to stick onto-- which means you accumulate knowledge at what's colloquially called an exponential rate. Once you remember that Normans conquered England in 1066, it will catch your attention when you hear that other Normans conquered southern Italy at about the same time. Which will make you wonder about Normandy, and take note when a third book mentions that Normans were not, like most of what is now called France, tribes that flowed in as the Roman empire collapsed, but Vikings (norman = north man) who arrived four centuries later in 911. Which makes it easier to remember that Dublin was also established by Vikings in the 840s. Etc, etc squared.

Collecting surprises is a similar process. The more anomalies you've seen, the more easily you'll notice new ones. Which means, oddly enough, that as you grow older, life should become more and more surprising. When I was a kid, I used to think adults had it all figured out. I had it backwards. Kids are the ones who have it all figured out. They're just mistaken.

When it comes to surprises, the rich get richer. But (as with wealth) there may be habits of mind that will help the process along. It's good to have a habit of asking questions, especially questions beginning with Why. But not in the random way that three year olds ask why. There are an infinite number of questions. How do you find the fruitful ones?

I find it especially useful to ask why about things that seem wrong. For example, why should there be a connection between humor and misfortune? Why do we find it funny when a character, even one we like, slips on a banana peel? There's a whole essay's worth of surprises there for sure.

If you want to notice things that seem wrong, you'll find a degree of skepticism helpful. I take it as an axiom that we're only achieving 1% of what we could. This helps counteract the rule that gets beaten into our heads as children: that things are the way they are because that is how things have to be. For example, everyone I've talked to while writing this essay felt the same about English classes-- that the whole process seemed pointless. But none of us had the balls at the time to hypothesize that it was, in fact, all a mistake. We all thought there was just something we weren't getting.

I have a hunch you want to pay attention not just to things that seem wrong, but things that seem wrong in a humorous way. I'm always pleased when I see someone laugh as they read a draft of an essay. But why should I be? I'm aiming for good ideas. Why should good ideas be funny? The connection may be surprise. Surprises make us laugh, and surprises are what one wants to deliver.

I write down things that surprise me in notebooks. I never actually get around to reading them and using what I've written, but I do tend to reproduce the same thoughts later. So the main value of notebooks may be what writing things down leaves in your head.

People trying to be cool will find themselves at a disadvantage when collecting surprises. To be surprised is to be mistaken. And the essence of cool, as any fourteen year old could tell you, is nil admirari. When you're mistaken, don't dwell on it; just act like nothing's wrong and maybe no one will notice.

One of the keys to coolness is to avoid situations where inexperience may make you look foolish. If you want to find surprises you should do the opposite. Study lots of different things, because some of the most interesting surprises are unexpected connections between different fields. For example, jam, bacon, pickles, and cheese, which are among the most pleasing of foods, were all originally intended as methods of preservation. And so were books and paintings.

Whatever you study, include history-- but social and economic history, not political history. History seems to me so important that it's misleading to treat it as a mere field of study. Another way to describe it is all the data we have so far.

Among other things, studying history gives one confidence that there are good ideas waiting to be discovered right under our noses. Swords evolved during the Bronze Age out of daggers, which (like their flint predecessors) had a hilt separate from the blade. Because swords are longer, the hilts kept breaking off. But it took five hundred years before someone thought of casting hilt and blade as one piece.

Disobedience

Above all, make a habit of paying attention to things you're not supposed to, either because they're "inappropriate," or not important, or not what you're supposed to be working on. If you're curious about something, trust your instincts. Follow the threads that attract your attention. If there's something you're really interested in, you'll find they have an uncanny way of leading back to it anyway, just as the conversation of people who are especially proud of something always tends to lead back to it.

For example, I've always been fascinated by comb-overs, especially the extreme sort that make a man look as if he's wearing a beret made of his own hair. Surely this is a lowly sort of thing to be interested in-- the sort of superficial quizzing best left to teenage girls. And yet there is something underneath. The key question, I realized, is how does the comber-over not see how odd he looks? And the answer is that he got to look that way incrementally. What began as combing his hair a little carefully over a thin patch has gradually, over 20 years, grown into a monstrosity. Gradualness is very powerful. And that power can be used for constructive purposes too: just as you can trick yourself into looking like a freak, you can trick yourself into creating something so grand that you would never have dared to plan such a thing. Indeed, this is just how most good software gets created. You start by writing a stripped-down kernel (how hard can it be?) and gradually it grows into a complete operating system. Hence the next leap: could you do the same thing in painting, or in a novel?

See what you can extract from a frivolous question? If there's one piece of advice I would give about writing essays, it would be: don't do as you're told. Don't believe what you're supposed to. Don't write the essay readers expect; one learns nothing from what one expects. And don't write the way they taught you to in school.

The most important sort of disobedience is to write essays at all. Fortunately, this sort of disobedience shows signs of becoming rampant. It used to be that only a tiny number of officially approved writers were allowed to write essays. Magazines published few of them, and judged them less by what they said than who wrote them; a magazine might publish a story by an unknown writer if it was good enough, but if they published an essay on x it had to be by someone who was at least forty and whose job title had x in it. Which is a problem, because there are a lot of things insiders can't say precisely because they're insiders.

The Internet is changing that. Anyone can publish an essay on the Web, and it gets judged, as any writing should, by what it says, not who wrote it. Who are you to write about x? You are whatever you wrote.

Popular magazines made the period between the spread of literacy and the arrival of TV the golden age of the short story. The Web may well make this the golden age of the essay. And that's certainly not something I realized when I started writing this.



Notes

[1] I'm thinking of Oresme (c. 1323-82). But it's hard to pick a date, because there was a sudden drop-off in scholarship just as Europeans finished assimilating classical science. The cause may have been the plague of 1347; the trend in scientific progress matches the population curve.

[2] Parker, William R. "Where Do College English Departments Come From?" College English 28 (1966-67), pp. 339-351. Reprinted in Gray, Donald J. (ed). The Department of English at Indiana University Bloomington 1868-1970. Indiana University Publications.

Daniels, Robert V. The University of Vermont: The First Two Hundred Years. University of Vermont, 1991.

Mueller, Friedrich M. Letter to the Pall Mall Gazette. 1886/87. Reprinted in Bacon, Alan (ed). The Nineteenth-Century History of English Studies. Ashgate, 1998.

[3] I'm compressing the story a bit. At first literature took a back seat to philology, which (a) seemed more serious and (b) was popular in Germany, where many of the leading scholars of that generation had been trained.

In some cases the writing teachers were transformed in situ into English professors. Francis James Child, who had been Boylston Professor of Rhetoric at Harvard since 1851, became in 1876 the university's first professor of English.

[4] Parker, op. cit., p. 25.

[5] The undergraduate curriculum or trivium (whence "trivial") consisted of Latin grammar, rhetoric, and logic. Candidates for masters' degrees went on to study the quadrivium of arithmetic, geometry, music, and astronomy. Together these were the seven liberal arts.

The study of rhetoric was inherited directly from Rome, where it was considered the most important subject. It would not be far from the truth to say that education in the classical world meant training landowners' sons to speak well enough to defend their interests in political and legal disputes.

[6] Trevor Blackwell points out that this isn't strictly true, because the outside edges of curves erode faster.

Thanks to Ken Anderson, Trevor Blackwell, Sarah Harlin, Jessica Livingston, Jackie McDonough, and Robert Morris for reading drafts of this.



