Primary Opinion

Collected Essays: 1997-2004

Location: Portsmouth, VA

Currently a graduate student at Old Dominion University

Thursday, October 27, 2005

Table of Contents

One: Physical Science

1. Distilling Science from Philosophy: Aristotle, Galileo, and Newton
2. Black Holes, Dark Matter, and the Great Attractor
3. The Einstein-Bohr Debate: Does God Play Dice?
4. Measuring the Age of the Earth
5. Scale
6. The Relative Abundance of Chemical Elements
7. Mechanisms of Change

Two: Ethics

1. Mammon and America
2. Racism and Racialism
3. Decision to Drop the Bomb
4. Feeding the Hungry
5. Civil Disobedience
6. Natural vs. Unnatural
7. Double Effect
8. Debate on Euthanasia
9. Loyalty vs. Civic Responsibility
10. Abortion
11. Ethics in Science

Three: Politics and Ideology

1. The Enlightenment and Jeffersonian Thought
2. Securing Individual Liberties
3. Thomas Paine and Revolution
4. The Tenor of Current Political Debate
5. The Untold Story of Watergate

Four: Public Policy

1. Problems at NASA
2. Freedom of Speech
3. Comparable Worth
4. Preferential Hiring and Promotion
5. Substance Abuse on the Job
6. Performance Appraisal Systems
7. Collective Bargaining--Public & Private
8. Air Traffic Controllers vs. Reagan: Lessons Learned from the Strike

Five: Philosophy

1. Ontology and Physics
2. Rationalism and Empiricism
3. Political Theory: Argument for Moderation
4. Imposition of Islamic Law
5. Apostasy
6. The Nature of God
7. Reverends Robertson and Falwell Revisited
8. When Science Goes Too Far
9. AIDS: Biological Warfare or the Judgment of God?
10. Legislating Morality

Six: Law

1. The U.S. Supreme Court and Affirmative Action
2. Judicial Review
3. The Bar
4. Presumption of Innocence
5. Problems with Juries
6. Stare Decisis
7. Statutory Interpretation
8. Constitutional Interpretation
9. Judicial Effectiveness and Miranda
10. The Basis of Law
11. Supreme Court Decisions and Prisoners' Rights

Seven: Social Science

1. Juvenile Crime and Cognitive Dysfunction
2. Trends
3. East German Prisons
4. Juvenile Detention
5. Bail Practices

Eight: Ephemera

1. Misguided Fears: The Electronic Surveillance Debate
2. Electronic Surveillance, Law Enforcement, and the USA Patriot Act
3. Explanations for Subemployment
4. Protectionist Trade Policies
5. Small Business Solution

Nine: The Tetragrammaton

1. Coping With Terrorism: United States, Germany, and South Korea
2. Credibility Gap: Unanswered Questions on the Bush Administration's Iraq War Policy
3. Sixth Amendment Under Siege: How the U.S. Department of Justice is Undermining Attorney-Client Privilege
4. Is the Religious Right Eroding Separation of Church and State?


Introduction

This is the best of my academic writing—from the early years at Tidewater Community College (1997-2000) to completing a baccalaureate in governmental administration at Christopher Newport University (2000-2004). Whatever the intrinsic worth of such documents, one thing is clear: they are too good to just throw away. Hence this collection.

Why anyone would want to read all of this stuff is beyond me—not even I would welcome the task. But I would suggest several items: “The Enlightenment and Jeffersonian Thought”; “When Science Goes Too Far”; “Juvenile Crime and Cognitive Dysfunction”; “The Untold Story of Watergate”; “Electronic Surveillance, Law Enforcement, and the USA Patriot Act”; “Explanations for Subemployment”; “Ethics in Science.” Most of the rest might appeal to those with particular interests—philosophy, natural science, law and criminal justice, public administration.

I should say a word or two about the last chapter, “The Tetragrammaton.” During my final semester at CNU I took four classes, each of which required a term paper. I had the novel idea of writing four thematically related papers (although none of the professors knew it), dealing with various aspects of 9/11, terrorism, the Bush Administration, and the direction our country is taking. The resulting texts, while highly informative, were less than fabulous, but the project was an attempt to experiment with form—something I’ve continued in graduate school.

Throughout all of this work, a distinct worldview is discernible—a sort of philosophy—which is the only real justification for this volume. My worldview is grounded in science (which is why the “physics” essays appear in Chapter One), but it is also Theistic. Fundamentally, there is no conflict between science and asserting the existence of God, in my view. Hopefully, a moral sensibility comes through as well.

A great deal of material was omitted—literary analysis, eastern philosophy, music reviews, and such—because it didn’t quite fit the rationalist tone of the collection. I plan to include those items in a future volume. So, here it is—the good, the bad, and the ugly—all in one tome.

CHM

One: Physical Science

Distilling Science from Philosophy: Aristotle, Galileo, and Newton


I must admit the study of mechanics is a dreary science, but it is nonetheless fundamental. Take aerodynamics as an example. How could one ever succeed in flying an airplane, launching a missile, putting a spacecraft in orbit, or sending it to land safely on the moon, without a thorough understanding of motion, acceleration, inertia, gravity, and all associated forces? For this, science owes a debt of gratitude to the great minds who first explored these areas: Aristotle, Galileo, and Newton, just to name three. To this list dozens of other names might be added, but for our purposes we shall focus on the three and the chain of thought, beginning more than twenty-five centuries ago, that guided man's understanding from primitive superstition to modern physics.

The struggle between our supernatural preoccupations and a more logical, rational way of thinking is perhaps as old as the human race itself. For most of that time, supernaturalism had the greatest credence. The sun shone, the rain fell, the earth shook, all by the will of various gods (or one God). Yet even the ancients knew of certain patterns--the inevitable cycle of day and night, regular motions of stars and planets, four orderly seasons, and so on. Aristotle was not the first to apply logic to explaining the natural world--in fact, he was one of the last in the pre-Christian era to do so. Contained in the most important of his preserved writings--Physics, Metaphysics, Ethics, Poetics, Categories, and many others--is the most profound synthesis of human knowledge assembled to that time. He was heir to a great body of research, from the pre-Socratics beginning with Thales of Miletus, who successfully predicted a solar eclipse in 585 B.C., to Plato, at whose academy he studied for twenty years.

Aristotle, no doubt, annoyed his teacher Plato with his youthful brilliance and open disagreement with some cherished Platonic precepts. Plato, according to Aristotle, was too immersed in the metaphysical to adequately deal with the real physical universe. For example, when addressing the question of "what is the ideal state?", Plato waxed eloquent in the Republic. Aristotle, on the other hand, studied 158 actual constitutions to determine which of them worked the best (Mitchell, 1996, p.363). In grappling with the physical workings of the universe, Plato regarded this world as a mere "copy" of an immaterial world of ideal forms. Between the two was a "demiurge" translating perfect ideas into imperfect, tangible beings. Aristotle dismissed this notion altogether and looked for answers solely in the material world. Here, we see science beginning to separate itself from philosophy. According to Jefferson Hane Weaver:

Aristotle was less concerned than Plato about the relationship between the ideal world and the real world. One of his greatest contributions to science may have been his efforts to systematically classify plants and animals. He also sought to discover the primary forces of nature. According to Aristotle, the universe is balanced by opposite movements of the four elements. Air and fire move upward naturally while earth and water move downward, thereby preserving a general equilibrium. Aristotle used a fifth element, ether, to explain the movements of stars, which he supposed consisted of ether. (p.288)


His common sense approach, although later proven inaccurate, began the process of divorcing pure science from metaphysical speculation. How far Aristotle might have gone had he been born in a later century is a tantalizing thought, for he was severely limited by a lack of technology. Durant points out, "See, here, how inventions make history: for lack of a telescope Aristotle's astronomy is a tissue of childish romance; for lack of a microscope his biology wanders endlessly astray" (1961, p.55).

In light of later discoveries it is too easy to dismiss him, yet Aristotle's writings, once approved by Catholic authorities, assumed a status comparable to Holy Scripture. Though the rise of Christianity reasserted the dominance of supernaturalism during the Middle Ages, Aristotle continued to be studied and adhered to, despite unexplainable facts and discrepancies. It was not until the Renaissance that "natural philosophy" began to break free of Aristotelian limits. This made it possible for a pioneer like Galileo to make tremendous strides in the study of motion and astronomy.

No scientist works in a vacuum; Galileo's work was built on a thorough familiarity with the theories of Copernicus, who maintained that the sun, rather than the earth, was at the center of planetary motion, and Kepler, who demonstrated that planetary orbits were ellipses rather than perfect circles around the sun. The Church, of course, was unwilling to admit to any of this.

Galileo, a professor of mathematics at the University of Pisa, taught courses of astronomy to medical students, comparing Copernican and Ptolemaic cosmologies (Weaver, pp.424-25). In early experiments with pendulums he arrived at a primitive law of energy conservation, claiming that without an impeding force, the pendulum would continue to rise and fall forever. And despite Aristotle's claim that any moving object would naturally come to rest, Galileo held that in the absence of friction, a moving body would continue its path in a straight line indefinitely. He was the first to measure acceleration rates of spheres rolling down an incline, and he articulated a simple principle of relativity--the idea that uniform motion cannot be detected from within the moving system itself. In short, Galileo was instrumental in overturning almost all of Aristotle's time-worn assumptions. He introduced mathematics into physics and helped separate science from philosophy (Weaver, p.453). When he turned his telescope toward the night sky, Galileo observed mountain ranges on the moon, phases of the planet Venus, and moons orbiting Jupiter. Because he firmly held to the Copernican view of heliocentricity, he was admonished by the Inquisition in 1616 and, after publishing his Dialogue defending the Copernican system in 1632, was tried and forced to recant in 1633. Galileo spent his remaining years under house arrest, barred from advocating Copernicanism, although he managed to complete one last book, on mechanics, before his death. Obviously, supernaturalism was not quite ready to surrender to the truth.

It was Isaac Newton, born in 1642--the same year Galileo died--who finally nailed the coffin shut on the occult in science. Beginning his work where Kepler and Galileo left off, Newton created a system of the world that worked without the need of constant Divine Intervention. He was a sickly child, an inept farmer, and, later, an undistinguished college student. But when he retreated to his family's farm in 1665 to avoid the plague then sweeping England, he "[i]n a single year... (1) discovered the binomial theorem; (2) discovered the basic principles of the differential and integral calculus; (3) worked out the theory of gravitation; and (4) discovered the spectrum..." (Weaver, p.484).

By this time the scientific world was ready for a revolution, and Newton, elected a fellow of Trinity College in 1667, rose quickly to prominence. Eventually, what became known as "Newtonian physics" left its indelible mark on science and remained unchallenged for more than 200 years. Newton was criticized by some for refusing to assign a cause to gravity, preferring instead to identify fundamental rules of order, but old patterns of thought were slowly giving way to the new. It wasn't until 1687 that he finally published, in Latin, the summation of his life's work, the Mathematical Principles of Natural Philosophy. This remains one of the most important scientific works ever written.

Thus, classical physics reigned unchallenged and nearly unquestioned until the 20th century, providing the basis of the "clockwork universe," a view that had theological, as well as philosophical implications. The 18th century Deists, for example (American President Thomas Jefferson was a Deist), held to the Newtonian cosmos, limiting God's role to that of a "watchmaker" who, having created the universe, immediately withdrew to allow it to run by itself without further input. This left men free to pursue their own destinies.

Isaac Newton himself, however, remained an enigma. He never married, was obsessed with alchemy and the occult, and was tormented by religious questions till the end of his life. He might have been comforted somewhat by such modern discoveries as relativity and quantum mechanics, which show that at the sub-atomic level, as well as the macrocosmic, reality doesn't operate according to Newton's laws. Perhaps God did not withdraw after all.



References


Durant, W. (1961). The Story of Philosophy. New York: Washington Square.

Mitchell, H. (1996). Roots of Wisdom. Albany: ITP.

Weaver, J. (1987). The World of Physics. New York: Simon and Schuster.


***




Black Holes, Dark Matter, and the Great Attractor



When Albert Einstein published his theory of General Relativity in 1915 it was considered the end of an era--the era of "Newtonian" or classical physics. For all its elegance and simplicity, Newton's law of universal gravitation, once thought an absolute, could not explain certain phenomena under specialized conditions. But there was no need to relegate Newtonian physics to the same curio shop as, say, Ptolemaic astronomy. In the realm of ordinary experience, dealing with quantities that require measurement for practical purposes, the old theory of gravity worked perfectly well. In the Newtonian universe there were "objects" containing mass, and "forces" which acted upon the objects. Time and space were two distinct parameters within which, and through which, all measurements were made. Thus, gravity was a force that acted between all particles of matter possessing mass. Where that "force" came from or how it was generated classical physics could not speculate, but its observed effects could be calculated. For example, to determine the mass of the sun was a straightforward problem easily solved. Obviously, one could not put the sun on a scale and weigh it, but its mass could be determined indirectly by measuring the orbital speed of the planets. If the sun were more massive than it is, the planets would travel at a higher rate of speed; if it were less massive, their motions would be much slower (Morris, 1990, p.97). As it turns out, the sun's mass is roughly 333,000 times that of the earth.
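
As a back-of-the-envelope illustration of that indirect weighing, Newton's own relation M = v^2 r / G recovers the sun's mass from the Earth's orbital radius and speed. This is a minimal sketch; the constants and orbital figures below are standard reference values rather than numbers drawn from the essay.

    # Estimating the sun's mass from Earth's orbit: M = v^2 * r / G
    # (standard reference values assumed below; they are not taken from the essay)
    G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
    r = 1.496e11           # Earth's mean orbital radius, m (one astronomical unit)
    v = 2.978e4            # Earth's mean orbital speed, m/s

    sun_mass = v**2 * r / G
    earth_mass = 5.972e24  # kg

    print(f"Mass of the sun: {sun_mass:.2e} kg")                     # ~1.99e30 kg
    print(f"Sun-to-Earth mass ratio: {sun_mass / earth_mass:,.0f}")  # ~333,000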

General Relativity, of course, attributes the effects of gravity to a curvature of space and time in the vicinity of a massive body, so there is no "force" of gravity per se. To be sure, an apple falling from a tree or an asteroid hurtling toward the sun seems to be in the grip of a powerful force, but in reality these objects are merely following their natural trajectories--that is, they are following the path of least resistance. For example, a freight train traveling 100 mph due east makes a slow 90-degree turn and heads south. There is no "force" acting upon the train making it turn south (other than its own running force); it is simply following the course of the tracks. The curvature of space has a comparable effect on massive bodies. Not only is it possible to determine the velocities of nearby objects like planets, those of distant stars can also be measured. One might wonder, how? Even through a powerful telescope stars do not appear to move, but remain in fixed positions night after night, year after year. In fact, it would take many human lifetimes to observe even the slightest shift of a star's position. Yet astronomers know they are all moving at high rates of speed and can assign them accurate velocities. How is it done? By analyzing the light these objects emit.

Although it was long known that light, when passed through a prism, produced a spectrum of colors, Isaac Newton was the first to demonstrate that white light was actually an admixture of colors (it had been previously assumed that the prism somehow added colors to the light). In the 1800s it became possible to analyze the chemical composition of a light source through a device called the spectroscope. Through spectral analysis, then, scientists could determine the hydrogen/helium ratio of a distant star (and thus the star's approximate age), traces of heavier elements in interstellar gas, and many other things. It is also possible to measure the velocity of a moving light source through a phenomenon known as the Doppler Effect. When a light-emitting object is approaching, the waves are shifted toward the blue end of the spectrum; when receding, they are shifted toward the red end, thus an object's relative redshift or blueshift indicates its velocity away from or toward the earth. Not only is visible light analyzed this way (being only a tiny fraction of the electromagnetic spectrum), but all wavelengths, from low-frequency radio waves to high-frequency gamma rays, can be studied the same way. Telescope observation may represent the romantic aspect of astronomy, but instruments sensitive to the rest of the electromagnetic spectrum, radio telescopes among them, provide a far fuller range of data, from which information about the cosmos can be gleaned. With this understanding, then, we can now briefly consider some of the more puzzling questions and outright mysteries confronting scientists.
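
Before moving on, the redshift arithmetic deserves a quick sketch: for speeds well below that of light, the radial velocity is roughly the speed of light times the fractional wavelength shift. The hydrogen line and the observed wavelength below are illustrative values of my own, not measurements cited in the essay.

    # Radial velocity from the Doppler shift of a spectral line
    # v ~= c * (observed wavelength - rest wavelength) / rest wavelength
    C_KM_S = 299_792.458   # speed of light, km/s

    def radial_velocity(rest_nm, observed_nm):
        """Positive result means redshift (receding); negative means blueshift."""
        return C_KM_S * (observed_nm - rest_nm) / rest_nm

    # Illustrative numbers: hydrogen-alpha (rest 656.28 nm) observed at 657.0 nm
    print(f"{radial_velocity(656.28, 657.0):.0f} km/s")   # ~330 km/s, receding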

Implicit in Einstein's General Relativity equations is the possibility of such exotic objects as black holes. After a star exhausts its hydrogen fuel through nuclear fusion, a series of changes takes place, the end result of which depends on the mass of the star. Low mass stars eject most of their outer layers, which remain as planetary nebulae. A high mass star dies violently in a supernova explosion, and the remnant, if enough mass remains (about 3 solar masses), can collapse under the weight of its own gravity to form an object from which nothing, not even light, can ever escape. That means the escape velocity from the collapsed star exceeds the speed of light; it thus disappears from the universe, forming a black hole. It is not possible to observe a black hole directly, but its effects can be observed. Any matter in the vicinity of the object will be accelerated by the intense gravitational field. Hot interstellar gases, when accelerated, produce high-energy radiation that can be detected. Thus, the X-ray source Cygnus X-1 is strongly suspected of being a black hole. Since what we experience as gravity is really the warping of space-time, the condition reaches an extreme in a black hole. According to Kaufmann (1985, p.446):

Inside a black hole, powerful gravity distorts the shape of space and time so severely that the directions of space and time become interchanged. In a limited sense, inside a black hole you can have freedom to move through time. It does you no good, however, because you lose a corresponding amount of freedom to move through space. Whether you like it or not, you are inexorably dragged from the event horizon [surface] to the singularity [core]. Just as no force in the universe can prevent the forward march of time (past to future) outside a black hole, no force in the universe can prevent the inward march of space (event horizon to singularity) inside a black hole.

Black holes are thought to reside at the centers of many, if not most, galaxies. There is strong evidence to suggest that one of these massive objects lurks at the center of our own Milky Way galaxy.
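
The escape-velocity condition described above also fixes the size of such an object: a mass M becomes a black hole once it is compressed within its Schwarzschild radius, r_s = 2GM/c^2. A minimal sketch with standard constants (none of these figures appear in the essay):

    # Schwarzschild radius: the size below which a given mass becomes a black hole
    G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
    C = 2.998e8            # speed of light, m/s
    SUN_MASS = 1.989e30    # kg

    def schwarzschild_radius_km(mass_kg):
        return 2 * G * mass_kg / C**2 / 1000.0

    # The ~3-solar-mass remnant mentioned above must collapse inside roughly 9 km
    print(f"{schwarzschild_radius_km(3 * SUN_MASS):.1f} km")   # ~8.9 km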

As observational techniques have become more refined, it has become possible to measure relative velocities of even the most distant objects in the universe--galaxies and galaxy clusters. In 1929 American astronomer Edwin Hubble discovered that nearly all galaxies showed a significant redshift, meaning that they were receding from the earth. Moreover, he realized that the redshift was proportional to the galaxy's distance from us--the farther away it was, the faster it was receding. The inescapable conclusion was that the universe as a whole was expanding, carrying the galaxies along with it (something that was also implied by Einstein's equations). This led to the Big Bang theory of the universe's origin. All debate about the veracity of this theory was silenced in 1964 when physicists Arno Penzias and Robert Wilson discovered a background radiation of 2.7 kelvins prevalent throughout the universe. Only one explanation for this uniform microwave radiation has ever been accepted: it is a remnant of the Big Bang itself, thought to have occurred 15-20 billion years ago (Morris, p.38). Interestingly, by using the microwave background as a reference point, it is possible to measure the peculiar motions of bodies through space, separate and distinct from the general motion imparted by the expansion of the universe.
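
Hubble's proportionality is usually written v = H0 x d, so a measured redshift doubles as a rough distance estimate. The sketch below assumes a round modern value of the Hubble constant, about 70 km/s per megaparsec, which the essay does not specify; in Hubble's own day the accepted value was quite different.

    # Hubble's law: recession velocity is proportional to distance, v = H0 * d
    H0 = 70.0               # assumed Hubble constant, km/s per megaparsec
    C_KM_S = 299_792.458    # speed of light, km/s
    LY_PER_MPC = 3.262e6    # light-years per megaparsec

    def distance_light_years(z):
        """Rough distance for small redshifts, where v ~= c * z."""
        velocity_km_s = C_KM_S * z
        return velocity_km_s / H0 * LY_PER_MPC

    # A galaxy whose spectral lines are shifted by one percent (z = 0.01):
    print(f"{distance_light_years(0.01):.2e} light-years")   # ~1.4e8, about 140 million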

In 1977 it was discovered that the Milky Way galaxy, along with its small cluster--known as the Local Group--was moving at a speed of about 600 kilometers per second. According to Morris, "Astronomers soon concluded that the peculiar motion of the Local Group must be caused by the gravitational attraction of a concentration of mass that lay millions of light-years away and could have no other cause" (p.127). Scientists thus became aware of the Great Attractor. They did not have a clear idea of how far away this concentration of mass had to be, but when telescopes were aimed in that direction nothing could be seen. In 1987 a group of astrophysicists known as the Seven Samurai completed a five-year study which revealed that an enormous volume of the local universe, including two super-clusters of galaxies, was streaming at high velocity toward the (as yet undiscovered) Great Attractor (Morris, p.129). Calculations as to the mass required to attract entire galaxy clusters revealed that it must be equal to tens of thousands of galaxies and must be at least 400 million light-years away. But what is the Great Attractor? Scientists do not know.

Fact is, the existence of large quantities of mass that cannot be seen is a problem scientists have been grappling with for more than fifty years. Dutch astronomer Jan Oort was the first to note this problem while studying the motions of stars in the disk of our own galaxy. A star would occasionally veer away from the galactic plane only to be yanked back in place by gravity. Yet there was not enough mass from observed sources (stars, interstellar gases, etc.) to produce the effect. The "missing" amount of mass was at least 50% of what should have been there. Eventually it was determined that as much as 90% of a galaxy's mass resided beyond the luminous disk of stars, as a form of as yet undiscovered dark matter. Dark matter makes it possible for galaxies to maintain their beautiful spirals (computer models indicate that the spiral structure, without the unseen mass, would dissipate after only a few million years), group together in clusters and superclusters, and form the long string-like filaments that are the large scale structural features of the universe. Indeed, the mysterious dark matter may be the predominant form of matter in the cosmos. Scientists do not know for sure what dark matter is, but a clue was recently found when it was discovered that the neutrino (a particle, previously thought massless, that far outnumbers ordinary protons, neutrons, and electrons) has a small mass after all. Though it is too early to tell, dark matter may indeed be a vast ocean of neutrinos. Thus, the mystery of dark matter is one of the most intriguing questions in science.
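
Incidentally, the "missing mass" reasoning can be sketched with the same Newtonian bookkeeping used earlier for the sun: the mass enclosed within an orbit of radius r and circular speed v is roughly v^2 r / G. The galactic figures below are rough textbook values, not numbers taken from the essay.

    # Mass implied by an orbit of radius r and circular speed v: M(r) ~= v^2 * r / G
    G = 6.674e-11          # m^3 kg^-1 s^-2
    SUN_MASS = 1.989e30    # kg
    LY_M = 9.4605e15       # meters per light-year

    def enclosed_solar_masses(v_km_s, r_light_years):
        v = v_km_s * 1000.0
        r = r_light_years * LY_M
        return v**2 * r / G / SUN_MASS

    # Rough values: orbital speeds near 220 km/s persist even 50,000 ly from the center
    print(f"{enclosed_solar_masses(220, 50_000):.1e} solar masses")   # ~1.7e11
    # Several times what the visible stars and gas supply -- hence the dark matter.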


References

Kaufmann, W. J. (1985). Universe. New York: W. H. Freeman.

Morris, R. (1990). The Edges of Science: Crossing the Boundary from Physics to Metaphysics. New York: Prentice Hall.



***


The Einstein-Bohr Debate: Does God Play Dice?


Imagine, if you will, the following philosophical dilemma: one flips a coin, knowing it will come up either heads or tails. What is the likelihood that it will come up heads (or tails) each and every time? The probability of such an occurrence shrinks rapidly as the number of trials increases—each additional flip cuts the chance of an unbroken run in half, so the more times the coin is flipped, the more minute the possibility that it will always be heads (or tails). It is not impossible, but improbable. Indeed, there is a whole realm of mathematics devoted to the study of probability and statistics, and there are laws to describe such things. In the case of a coin, for example, where there are only two possible outcomes, the laws of chance state that in the long run the coin will come up heads 50% of the time and tails 50% of the time, with some small margin of variance. Again, the degree of variance decreases as the number of trials increases. A similar thing applies to biology as well—in the determination of sex, for instance. Generally, there is a 50/50 split between male and female offspring in most species. Otherwise, the species would be at a reproductive disadvantage and in danger of extinction.
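
A quick simulation makes the point about shrinking variance concrete; this is merely an illustration of the law of large numbers, with parameters chosen arbitrarily.

    # The proportion of heads drifts toward 50% as the number of flips grows
    import random

    random.seed(42)   # fixed seed so the run is repeatable

    for flips in (10, 100, 10_000, 1_000_000):
        heads = sum(random.random() < 0.5 for _ in range(flips))
        print(f"{flips:>9} flips: {heads / flips:.4f} heads")
    # Small samples wander widely; large samples hug 0.5000 ever more tightly.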

Unfortunately, the above-mentioned laws of chance fly in the face of what we now call classical, or Newtonian, physics. Isaac Newton, in his grand vision of universal order, set forth a series of postulates about the natural world that, at the time, seemed inviolable. His theory of universal gravitation provided the mathematical framework that made it possible to fully describe planetary motions and orbits—one of the all-time great achievements of science. It is important to point out, however, that mere theories do not science make. Theory must match experimental observations. That’s not to say there is no room for error, but there should be at least an acceptable degree of agreement between theory and hard fact.

Whether it was intended to or not, Newton’s theories became the springboard of what we might call absolute determinism in nature. The universe was compared to a gigantic clock and God assigned the role of clock-maker. Once the clock was built and set in motion, the Creator’s role was finished and He presumably withdrew from the workaday concerns of running the universe. Thus, if it were possible to possess omniscience—to know all there is to know about every atom and every particle, and every force that acts upon them—then it would be possible to predict, with unerring certainty, whether the coin would wind up heads or tails on any given flip, just as one can use Newton’s laws to predict the position and/or velocity of a planet at any given moment. What appears to us as “chance” is an illusion of the senses—the result of our ignorance, our understandable lack of omniscience.

Such was the argument of Albert Einstein during the late 1920s in his celebrated debate with Danish physicist Niels Bohr. Einstein famously insisted that “God does not play dice,” but Bohr, in the end, won the debate. According to Bohr there was no absolute determinism. Even if all quantities were known with unlimited precision, there would still be an element of chance in physical processes. Although there is a high degree of predictability in science, even at the atomic level, it is because the probability factor is high, not because all things are somehow predetermined. Einstein, for philosophical, aesthetic, and even religious reasons, found Bohr’s position to be completely unacceptable.


The disagreement between Albert Einstein and Niels Bohr was understandable since the two men were utter opposites. Both were major figures in the early 20th century revolution of physics, but there the similarity ends. Einstein was born March 14, 1879 in Ulm, Germany, the son of an engineer. According to Motz and Weaver (1989, p.72), "Albert [as a child] was slow to learn to speak and his propensity to daydream and ignore the world around him caused his parents to fear that he might be retarded." Although Jewish, he attended a Catholic school until the age of ten. Einstein's parents were completely secularized, so the religious orientation of the school was of no concern. At the Luitpold Gymnasium, however, Albert showed little interest in any subject but mathematics, and neglected his study of the classics. His academic performance was so poor and his distaste for authority so pronounced that he was expelled from the Gymnasium in 1894 (ibid). Unable to enroll in a university without a gymnasium certificate, he eventually attended the Federal Polytechnic Academy in Zurich, Switzerland, with the intent of becoming a teacher.

Einstein graduated in 1900, but his continued rebellious attitude toward authority--that is, toward Prussian-style regimentation--and unwillingness to devote himself fully to academics made it impossible to secure employment as a teacher. Therefore, he accepted a job in 1902 as an examiner in the Swiss Patent office in Bern. Einstein soon became a valued member of the staff, and during his seven-year stint with the patent office, he completed his dissertation and received a doctorate from the University of Zurich in 1905. That same year he published three papers in the Annalen der Physik, one explaining the cause of Brownian motion, one on the photoelectric effect, and one on the special theory of relativity. Any one of these papers would have established Einstein as a major figure in physics, but his ideas were slow to catch on. All three papers together represented a broadside against classical physics.

After 1909 Einstein held a series of professorships--at the University of Zurich, the German University of Prague, the Zurich Polytechnic, and the University of Berlin. All the while his theories were gaining greater and greater acceptance. In 1916 he published the general theory of relativity--essentially a theory of gravity--and in its aftermath became a world-famous celebrity. In 1921 Einstein received the Nobel Prize in physics for his work on the photoelectric effect. Why did he not receive the prize for his epoch-making theory of relativity? There was a little known clause in the will of Alfred Nobel that the award must go toward discoveries with practical applications. What practical use was there for knowledge of what happens to objects at or near the speed of light? Thus, the award was given for Einstein's work on the photoelectric effect. By the 1920s it was obvious that the world's most famous scientist had almost single-handedly overturned Newtonian physics.

Niels Bohr, in contrast, was as much a product of academia as Einstein was an anathema to it. Bohr was born October 7, 1885 in Copenhagen, Denmark, the son of a university professor. The Bohr home received a steady stream of distinguished visitors and young Niels would often sit and listen to the weekly conversations on topics ranging from politics to science, to art and religion. In a way, these conversations helped prepare him for the rigors of his formal education. Bohr was both an outstanding student and a good athlete who enjoyed soccer and sailing. He began his undergraduate work in 1903 at the University of Copenhagen where he excelled in all subjects, but especially in mathematics and physical sciences. There was never any doubt that he would pursue a career in science.

After receiving his doctorate in 1911, he worked with Ernest Rutherford in Manchester, England. By the time Bohr returned to Denmark in 1913, he had already finished the first of his papers on quantum mechanics, which was quietly revolutionizing the realm of particle physics. Finding little interest for his work in Denmark, Bohr returned to England to work as a reader in mathematical physics while continuing his own theoretical research. He moved back to Denmark in 1916, and in 1921 realized a long-cherished dream by founding the Institute for Theoretical Physics. Under Bohr's leadership, the Institute became one of the major centers of research in Europe (Motz & Weaver, p.198). Niels Bohr received the Nobel Prize in physics in 1922--the year after Einstein received his.

Bohr was an accomplished speaker and his lectures were well attended. This work took him to many foreign countries, including the United States. All the while, he probed the philosophical implications of modern physics, developing what came to be called the Copenhagen interpretation of quantum theory. In time, Bohr convinced nearly all leading physicists as to the merits of his views, but much to his regret he was never able to convince Einstein, whom he admired greatly. It was truly ironic that Einstein himself, in his various scientific papers, had provided Bohr with the theoretical tools to make his own breakthrough discoveries. For example, it was Einstein, years earlier, who had shown that light itself could display particle as well as wave properties.


Although we take for granted nowadays the image of the atom as a sort of miniature solar system, with electrons orbiting the nucleus just as planets orbit the sun, during the early part of the 20th century it was not so clear just what an atom was. Scientists of the 19th century thought, like the ancient Greeks, that it must be a simple, featureless sphere. Rutherford proved that the atom had a dense central core, and it was later demonstrated that electrons (discovered more or less independently) formed an outer shell. The atom's chemical properties originated largely from its electrons and each element had its own characteristic spectrum. But it was the attempt to account for the phenomenon of spectral lines (among other things) that led physicists to doubt the "planetary model" and seek an alternative in quantum mechanics. That's because if one applied strict Newtonian principles to the atom, with the electrical force playing the role of gravity, then no atom could be stable. The electrons would radiate energy constantly, thus losing energy, and would spiral into the nucleus. No element would have a specific spectrum and all atoms would rapidly decay, much like radioactive thorium. Observations told otherwise, however. Most atoms are stable, with their own spectral "fingerprint." Alternatives to the planetary model were proposed, such as the "raisin-pudding" model, with electrons embedded in the atomic mass.

It was Niels Bohr who resurrected the planetary model and who worked out the equations to account for the observed features of the hydrogen atom. Indeterminacy, however, was at the heart of his theory and there was, as a consequence, no clear distinction between the observer and the observed. In other words, the mere attempt to measure the velocity or position of quantum particles influenced the results--something classical science had never had to accept. Einstein attributed such uncertainty to the clumsiness of technology--that if more precise instruments could only be developed, eventually it would be proven that a particle's velocity and position could be ascertained, just as any larger object's could. Einstein thus rejected the central argument of quantum theory, saying, "Quantum mechanics is certainly imposing. But an inner voice tells me that it is not yet the real thing. The theory says a lot, but does not bring us any closer to the secret of the Old One. I, at any rate, am convinced that He does not throw dice" (qtd. in Clark, 1971, p.414).
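
For the record, the hydrogen spectrum Bohr accounted for can be generated from the Rydberg formula, 1/wavelength = R(1/n1^2 - 1/n2^2), which his model derived from first principles. The sketch below uses the standard value of the Rydberg constant, which the essay does not quote.

    # Hydrogen's visible (Balmer) lines from the Rydberg formula
    RYDBERG = 1.0968e7   # Rydberg constant for hydrogen, 1/m

    def balmer_wavelength_nm(n_upper):
        """Wavelength emitted when the electron drops from level n_upper to n = 2."""
        inverse_wavelength = RYDBERG * (1 / 2**2 - 1 / n_upper**2)
        return 1e9 / inverse_wavelength

    for n in (3, 4, 5):
        print(f"n = {n} -> 2: {balmer_wavelength_nm(n):.1f} nm")
    # ~656 nm (red), ~486 nm (blue-green), ~434 nm (violet): hydrogen's "fingerprint"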

Other scientists, such as Bohr, Heisenberg, and Born, accepted indeterminacy because, on that statistical basis, it was possible to account for observed phenomena in a coherent way. The implications of all this, nonetheless, were unsettling, even to the most adventurous mind. Quantum particles, far from being solid physical objects, could almost be seen as phantoms, existing everywhere at once or nowhere at all. The atom could be compared to a large insurance company that on the whole is stable and continuous. But when one gets down to the level of individual policy holders (quanta), random events make certainty and predictability a mere pipe-dream. Who can say when this policy holder dies, or that one has an accident?

Sadly, Albert Einstein made himself something of an anomaly in the scientific community due to his stubborn belief in some sort of ultimate causality. Although such causality may exist, it is beyond the reach of science. There is an intangible realm of mind and consciousness that is not reducible to physical processes. The God Einstein spoke of is not the God of Jewish and Christian theology (who is little more than a myth), but the God of Spinoza--a fundamental aspect of nature. Indeed, this whole idea that the universe sprang out of nothing, without cause or reason, and has developed since then simply on the basis of blind, random forces is patently absurd. You would have better luck convincing me that out of a roomful of monkeys, each equipped with a typewriter, one of them could, given enough time, eventually come up with something like Macbeth. It defies logic and borders on insanity. Thus, like Einstein I have little choice but to believe in the Old One. But there is no such thing as magic or the supernatural, and there is no need for constant Divine Intervention to make the universe work. In that respect, science carves a closer pathway to the truth than any religion or philosophy.



References

Clark, R. (1971). Einstein: The Life and Times. New York: Avon.

Motz, L. & Weaver, J.H. (1989). The Unfolding Universe: A Stellar Journey. New York: Plenum Press.



***



Measuring the Age of the Earth


The recent controversy over John Glenn's return to space has got me thinking about the concept of age. Much ado was made over the fact that he is seventy-seven years old, and some have been rather strident in their remarks. It was just a publicity stunt, they said--why send someone that old up on the shuttle? But it occurred to me that if Glenn had been, say, forty-seven, not a word of complaint would have been uttered. The sole reason for the criticism was the man's age, not that he wasn't qualified. It's just another example of how our present-day culture places no value on age. We even deem the elderly worthless. In other, more traditional cultures, seniors are held in high esteem. I say this not because of my own age (which I'm comfortable with), but because it doesn't seem right. I know a seventeen-year-old girl who thinks that anyone over 25 is a fossil. "But you're old..." I've heard her say to individuals still in their twenties. Is "old" necessarily bad? After all, think of the planet Earth--it is ancient beyond counting, yet we all cherish it. John Glenn reported having an emotional experience seeing the Earth from orbit once again; other astronauts have felt the same thing. How old is the Earth, anyway?

To all appearances the planet seems to be unchanging. The seasons, years, and centuries pass with scarcely an observable change. It could easily be assumed that the Earth has always existed, as it probably always will. Yet all human civilizations, from the very beginning, have had a creation myth--a specific explanation of how the world came to be. The Babylonians, for example, believed that in the beginning was chaos, or what they called Tiamat. Then the gods and goddesses were born and they shaped the formless void into heaven, earth, and ocean. Scholars believe that the Jewish people, who were held captive in Babylon during the sixth century B.C., adapted the Babylonian tale for their own use--including it in what eventually became the Old Testament. Anyway, there are striking similarities between the two accounts. All this points to a universal human need to discover our origins, which are lost in the mists of time. Most ancient peoples considered the Earth to be thousands of years old, but no one could say precisely how many thousands. The concept of a "million," even if it existed, would have been inconceivable.

According to Isaac Asimov, one of the first attempts to calculate the age of the Earth was made by Anglican bishop James Ussher (1581-1656). Ussher worked his way backward through the Bible, assigning probable dates to important events. The oldest firm date seemed to be that of the accession of Saul as King of Israel, thought to have occurred about 1020 B.C. (1989, p.21). The conquest of Canaan under Joshua was given as 1451 to 1425 B.C., the Exodus from Egypt about 1491 B.C., and the arrival of Abraham in Canaan about 2126 B.C. Noah's flood was placed by Ussher in 2349 B.C. and the whole creation at about 4004 B.C.--exactly four thousand years before the birth of Christ (ibid). Therefore, it was commonly believed that the Earth could not be much more than 6000 years old--that is, if one rigorously held to the Biblical record.

In 1715 the English astronomer Edmond Halley (1656-1742) made the first attempt to measure the Earth's age scientifically by using the uniformitarian principle--i.e. the idea that change on Earth takes place slowly, over long periods of time. The method he devised was to calculate the rate of the ocean's salinization. Rivers, as they flow into the ocean, carry with them quantities of salt and other minerals. If one were to assume that the ocean was fresh water to begin with, and if one could determine the amount of salt deposited annually, it should be possible to measure the age of the ocean by analyzing how much salt is in seawater. After all, when water is evaporated by sunlight, all minerals are left behind. Seawater, it turns out, contains 3.3% salt. Using these facts, Halley estimated the Earth's age to be 1,000 million years, which, according to Asimov, was "quite a respectable estimate for the first time around" (p.158). The religious argument against this figure, of course, was that God simply created the ocean with that much salt content.

Another way of calculating the planet's age was to measure sedimentation rates. Rivers, lakes, and oceans laid down layers of sludge, or sediment, which hardened into rock over time. Much of this rock contained the fossilized remains of extinct sea-creatures. Examining layer after layer of sedimentary rock was almost like looking into a time machine, since the more primitive organisms were found toward the bottom. It was difficult, however, to determine the rate of sedimentation, as some years may have been more turbulent than others weather-wise, thus producing varying amounts of sediment year by year. Still, geologists put forth estimates of the Earth as being at least 500 million years old, and that was good news to naturalist Charles Darwin. At that time (the mid 1800s), Darwin was formulating his theories of evolution through natural selection--a mechanism that required millions of years to work, if such evolution had in fact taken place.

But even before Darwin published his landmark book The Origin of Species in 1859, scientists were busy upsetting traditional notions of a 6000 year-old Earth by using incontrovertible physical laws. By the 1840s the First Law of Thermodynamics--that of the conservation of energy--was being formulated. For the first time, the source of the Sun's prodigious power began to be considered. Having as yet no idea about nuclear energy, the only logical explanation was, according to physicist Hermann von Helmholtz (1821-1894), that the Sun, a vast ball of hydrogen, was slowly contracting. Its inward falling mass, by weight of gravity, was constantly being converted to light and heat energy. A contraction of only 1/2000th of the Sun's radius--which was hardly noticeable--could account for all sunlight emitted since the dawn of civilization. However, the Scottish physicist Lord Kelvin (1824-1907) estimated that the Sun's radius, if contraction was the sole source of energy, would have to have been the size of the Earth's orbit a mere 50 million years ago. Geologists and biologists, convinced that the Earth was far older than that, were dismayed, to say the least (Asimov, p.161). Some other power source had to be found for the Sun.

The solution came in 1896 when physicist Antoine Henri Becquerel (1852-1908) accidentally discovered that the element uranium gave off energetic radiation. Other radioactive substances were soon found, and although the rate at which energy was thus released was slow, the yield was comparatively greater--much greater--than that of ordinary coal-burning. Scientists began to suspect that this was the source of the Sun's power. By 1911 Ernest Rutherford (1871-1937) had demonstrated that the atom, once thought to be a featureless ball, actually consisted of a tiny nucleus in which almost all the mass was concentrated, surrounded by an electron shell. It was further determined that radioactivity altered the makeup of the nucleus, and thus converted one element into another. An ordinary explosion (which is a violent release of energy) is chemical in nature--that is, a release of electron energy--and has no effect on the nucleus. A nuclear explosion, on the other hand, results from forces within the nucleus and is millions of times more powerful. Therefore, it was nuclear energy that allowed the Sun to shine for billions of years with no perceptible change.

It was by using the properties of radioactivity that scientists finally began to understand the age of the Earth. Radioactive elements have what is called a half-life--a time period over which half the substance breaks down to some other element. Uranium, for example, turns to lead over a half-life of 4,500 million years. Other elements have their own characteristic half-lives, and since a wide variety of these substances are found in rocks all over the Earth, it was possible to determine the age of such rocks by analyzing how far their radioactive decay had progressed. Some rocks were one billion years old; by 1931 some that were two billion years old had been found. Some rocks in western Greenland topped the 3-billion-year mark (Asimov, p.165). Eventually it was determined, from the changing proportions of rubidium and strontium in rocks, that the Earth was approximately 4.55 billion years old (incidentally, analysis of meteorites, moon-rocks, and other extraterrestrial artifacts indicates that the entire solar system formed at roughly the same time).
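
The arithmetic behind those rock ages is straightforward: if a mineral began with none of the daughter element, its age follows from the present daughter-to-parent ratio, t = half-life x log2(1 + D/P). A minimal sketch using the uranium-to-lead half-life quoted above:

    # Radiometric age from the daughter-to-parent ratio: t = half_life * log2(1 + D/P)
    from math import log2

    URANIUM_HALF_LIFE = 4.5e9   # years (the 4,500 million years cited above)

    def age_years(daughter_to_parent, half_life=URANIUM_HALF_LIFE):
        return half_life * log2(1 + daughter_to_parent)

    print(f"{age_years(0.5):.2e} years")   # ~2.6 billion: a third of the uranium has decayed
    print(f"{age_years(1.0):.2e} years")   # equal parts lead and uranium: one full half-life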

Now that the age of the Earth has been accurately determined, I would like to demonstrate graphically the relative lengths of its various epochs by comparing the 4.5 billion years to the length of one solar year: the Earth's entire history symbolized by 365 days--January 1st to December 31st.

January 1, of course, represents the beginning of the Earth as a recognizable entity. At this point the planet is molten. By the second week of February, the Earth has solidified its outer regions, and as a result of volcanic activity, hot gases are spewed out to form a primordial atmosphere. Among the gases are vast quantities of water vapor. Oceans begin to form (through condensation and endless rainfall), and strangely enough, life appears--nothing more complicated than single-strand RNA. By the end of February primordial viruses and microspheres exist. The middle of March sees the first prokaryotes (bacteria): some, capable of photosynthesis, give rise to blue-green algae; others are more animal-like. These micro-organisms rule the Earth for 2.2 billion years.

It isn't until late September that the planet has an oxygen-rich atmosphere--oxygen being a waste product of blue-green algae. The first true cells, or eukaryotes, with DNA, nucleus, cytoplasm, and so on, appear about mid-October. The Earth is now 3.5 billion years old. By the end of October, multicellular life finally appears--mostly porifera (sponges). November represents the start of the Cambrian period, an age of coelenterates (jellyfish). By November 15th the Ordovician period begins with the appearance of arthropods (such as the trilobites), annelids, and the like. By the 25th jawed fish arise and plant life begins to colonize the land. The end of November is the Silurian period--an age of coelacanths and rhipidistians. The latter eventually give rise to amphibians. December 1st is the Devonian period--the Age of Fish. At this time insects and arachnids follow the rich plant life onto land. December 5th is the Carboniferous period, which sees the earliest amphibians. The 7th of December is the Permian period, wherein the first reptiles evolve. December 10th is the Triassic--the beginning of the Age of Dinosaurs. December 15th is the Jurassic and the 20th the Cretaceous. By Christmas Day, December 25th, the dinosaurs are extinct and the Age of Mammals begins. On the very last day of the year, December 31st, primates, apes, and hominids finally appear. The entire history of man has only occupied the last hour or so of the last day, and recorded history, since the earliest civilizations, has all taken place in about the last minute.
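
The mapping behind this calendar is a simple linear rescaling: an event t years in the past lands on day 365 x (1 - t / 4.5 billion) of the model year. The sketch below uses standard textbook dates for a few events, so its output falls near, though not always exactly on, the rounded placements given above.

    # Compress 4.5 billion years of Earth history into a single 365-day year
    import datetime

    EARTH_AGE = 4.5e9   # years

    def calendar_date(years_ago):
        day_of_year = 365 * (1 - years_ago / EARTH_AGE)
        return datetime.date(2001, 1, 1) + datetime.timedelta(days=round(day_of_year) - 1)

    for label, years_ago in [("first prokaryotes", 3.5e9),
                             ("dinosaurs extinct", 6.5e7),
                             ("recorded history", 5.0e3)]:
        print(f"{label:>18}: {calendar_date(years_ago):%B %d}")
    # prints roughly: March 22, December 26, December 31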

As should be abundantly clear by now, a seventy-seven-year-old man like John Glenn cannot be considered truly old--for one human life is naught but a blink of the eye.




References

Asimov, I. (1989). Beginnings: The Story of Origins--of Mankind, Life, the Earth, the Universe. New York: Berkley.



***


Scale


One of the most challenging aspects of any scientific endeavor is the fact that one is confronted with numbers representing physical quantities, which are sometimes unimaginably large or incredibly small. For example, what distance is really represented by a light year? The human mind cannot easily comprehend such a distance, yet scientists assure us, without a second thought, that this star or that, this galaxy or that, is x light years away. Similarly, how can one really grasp what is meant when one refers to the relative sizes of microscopic entities—the realm of eukaryotic and prokaryotic cells, bacteria and viruses, RNA and DNA, or the diameter of an atom?

To express these kinds of numbers, scientists use an amazingly versatile tool called scientific notation. Rather than write out the entire number representing, say, the astronomical unit (AU: the average distance from the Earth to the Sun—approx. 150,000,000,000 meters), it would be expressed as 1.50 x 10^11 m. For the extremely small, a negative exponent would be used; thus the diameter of a sodium atom might be written as 8.2 x 10^-11 m.

The two parts of the scientific notation expression serve two distinct purposes. The second part, containing the positive or negative exponent, tells us the magnitude of the quantity. This is sometimes more of a problem than one might think, as explained by physicist Lawrence Krauss:

I was flabbergasted several years ago when teaching a physics course for nonscientists at Yale—a school known for literacy, if not numeracy—to discover that 35 percent of the students, many of them graduating seniors in history or American studies, did not know the population of the United States to within a factor of 10! Many thought the population was between 1 and 10 million—less than the population of New York City, located not even 100 miles away. (1993, p.28)

An America with a population of only 10 million would be a much different place than it really is—with a population approaching 300 million, or 3.0 x 10^8. The exponent tells us there are eight zeros following the one—in other words, a figure in the hundreds of millions. The first part of the expression, written as a number between one and ten, gives us the exact figure to any desired degree of accuracy (the greater the accuracy, the more digits behind the decimal point). Negative exponents, on the other hand, denote fractions. Thus, 10^-8 meters is 0.00000001 meters, or 1/100,000,000 m. A centimeter is 10^-2 meters and a millimeter is 10^-3. Therefore, 10^-8 meters is one hundred thousand times smaller than a millimeter, or 1/100,000th of a millimeter.
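
This, incidentally, is exactly the notation programming languages and calculators use; a few lines of Python (my own illustration, not the essay's) show the round trip between the written-out and exponential forms.

    # Scientific notation as code and calculators write it: mantissa, then exponent
    astronomical_unit_m = 1.496e11   # 1.496 x 10^11 meters
    sodium_diameter_m = 8.2e-11      # 8.2 x 10^-11 meters (the essay's figure)
    us_population = 3.0e8            # 3.0 x 10^8, i.e. about 300 million

    print(f"{150_000_000_000:.2e}")        # -> 1.50e+11
    print(f"{astronomical_unit_m:,.0f}")   # -> 149,600,000,000 (written out)
    print(us_population / 1e6)             # -> 300.0 (the population in millions)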

The handy thing about scientific notation, of course, is that it fits the metric system. The advantage of the metric system is primarily mathematical, since it is based on powers of 10. Whether the quantity measured is length (meters), mass (grams), liquid volume (liters), or density (grams/cm^3), every unit scales by factors of 10. In science, such numerical convenience is essential. One can always express something like the AU in terms of miles, yards, and inches, but the common system is based on the number 12—similar to the way we reckon time. Such a 12-based number system makes advanced computation unnecessarily cumbersome.

Therefore, it is not surprising that the scientific community would try to foist the metric system—devised after the French Revolution of 1789—upon all humanity, replacing the old system. Their efforts were highly successful in all nations except Great Britain and the United States. We Anglophiles stubbornly cling to our old system of inches, feet, and miles; ounces, pounds, and tons; pints, quarts, and gallons; and so on. There is a reason for that, however. Despite the obvious advantages of the metric system, the common system is tailor-made to human physiology. A foot, for example, is roughly the length of a grown man’s foot; an inch about the length of a finger’s end-joint. A yard is about the distance from the nose to the fingertips of an outstretched arm, a mile approximately 1000 paces (1 pace = 5 ft.). The fact is, these units of measurement feel much more natural to us than centimeters or kilometers ever will. For everyday use, then, I say keep the old system; but for science the metric system is invaluable. Fortunately, nature has simplified things greatly since there are only three fundamental dimensions of matter: length, time, and mass. I dealt with the aspect of time in “Measuring the Age of the Earth,” so here I will focus on length, or distance. A few observations on relative mass may also be in order.

Let’s begin with the realm of living organisms, taking ourselves as a standard measure. The human being is a comparatively large creature, though by no means the largest. As mammals, we are in the intermediate range—smaller and less massive than some (elephant, bear, whale, etc.), but larger than many (cat, mouse, rabbit, etc.). The “average” U.S. male is approximately 1.72 meters tall and weighs about 73.48 kilograms (Diagram Group, 1980, p.72). A much wider range of sizes, however, can be found among arthropods—a classification that includes insects. The longest known insect is the Tropical Stick insect, which may reach 33 centimeters, but the smallest are invisible to the unaided eye. Most of these are parasites (for whom infinitesimal size is a tremendous advantage), such as the mange mite, which measures about .25 millimeters. Smaller still are certain unicellular organisms, such as the amoeba, which is between .2 and .3 mm, or the Euglena (.15 mm). Among the smallest of these is the Chlamydomona, at .02 mm. But far smaller than all the above are bacteria, such as cocci or spirilla, which are about .001 to .002 mm in length. Far smaller than the tiniest bacteria, however, are viruses, which may be .1 to .3 micrometers in length—that is, on the order of 0.0001 mm (ibid). Viruses are so small that ordinary microscopes cannot resolve them; electron microscopes are needed. They represent the smallest known forms of life.

To demonstrate the scale of these things consider this: if a virus were the size of a flea (which looks like a tiny black speck to our eyes), then the flea would be the size of a whale—about 65 ft. long. If an amoeba were the size of an elephant, the Chlamydomona would be as big as a cat and a bacterium about the size of that flea (ibid). So we see that there is an enormous range of sizes even among organisms that are smaller than the tiniest insect.

Compared to atoms, however, all the above are unfathomably huge. Atoms may be measured in terms of nanometers (10^-9 m), which are 0.000001 mm, or one millionth of a millimeter. An iron atom may be about .2 nanometers in diameter, a sulfur atom about .08 nanometers. Smaller still are the individual components of atoms—electrons, protons, and neutrons. The nucleus of a hydrogen atom—a single proton—is about .01 picometers in diameter (0.00000000001 mm). To give a sense of this scale, if a proton were magnified to the size of a centimeter (a proton being only 1/100th of a picometer across), then a small raindrop—diameter 1.4 mm—would be the size of the Sun (ibid). There may very well be things smaller than the quanta making up atoms, but such a realm is beyond the reach of current knowledge.

The masses of these particles, though, are fascinating to consider. The mass of a proton is estimated to be 1.6726 x 10^-27 kg. That’s 0.0000000000000000000000000016726 kilograms. As minute as that seems, the fact is a proton is incredibly massive compared to its size. If a bunch of protons could be forced together side by side, into a lump one cubic centimeter big, it would weigh 133 million tons. The only reason we, or anything else, are not impossibly massive is that the atom is mostly empty space—and empty space has no mass. The mass of an electron—9.1096 x 10^-31 kg—is minuscule compared to that of a proton. The mass of a virus is about 10^-21 kg. By way of comparison, the mass of a virus is to a human being what that of a human being is to the planet Earth.
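
As a rough check on those figures, here is a short Python sketch. The proton diameter used (about 2.4 femtometers) is my own assumption for the estimate, not a value from the text:

    # Order-of-magnitude check: a cubic centimeter of protons packed side by side
    proton_mass_kg = 1.6726e-27
    proton_diameter_m = 2.4e-15                    # assumed diameter for this rough estimate
    protons_per_cm = 0.01 / proton_diameter_m      # how many fit along one centimeter
    mass_kg = protons_per_cm ** 3 * proton_mass_kg # simple cubic packing
    print(mass_kg / 1000)                          # metric tons: roughly 1.2e8, on the order of 100 million

    # The virus-to-human ratio versus the human-to-Earth ratio
    print(1e-21 / 73.48)                           # ~1.4e-23
    print(73.48 / 5.97e24)                         # ~1.2e-23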

Now we turn our attention to the other end of the spectrum—to that of the astronomical, and here we are met with numbers that are equally incomprehensible. Within the solar system, distances can be measured in millions of kilometers, but beyond that the light year is needed (a light year is defined as the distance electromagnetic waves travel in one year through a vacuum—about 9.4605 x 10^12 km). The distance from the Earth to the Moon is 384,400 km; to the Sun, as mentioned before, 1.496 x 10^8 km. Venus is 41,400,000 km away, Mercury 91,700,000 km. Going the other way, Mars is 78,300,000 km distant, Jupiter 628,700,000 km. The rest of the planets, in increasing distance, are as follows: Saturn, 1,277,400,000 km; Uranus, 2,720,000,000 km; Neptune, 4,347,000,000 km; and Pluto, 5,750,400,000 km. The diameter of the solar system at Pluto’s orbit is about 1.18 x 10^10 km or 11,800,000,000 km.

However, the solar system extends well beyond the domain of planets, as far as the Oort cloud (where comets originate). The Oort cloud lies an estimated 7.48 x 10^12 km (7,480,000,000,000 km) out from the Sun. That distance is about two trillion kilometers shy of a light year. The nearest star, Proxima Centauri, is 4.25 light years away. At the fastest speeds available to current spacecraft (between 200,000 and 300,000 km/hr), it would take nearly 20,000 years to reach Proxima Centauri. All the other stars are much farther away than that. The diameter of our Milky Way galaxy is about 100,000 ly., and the distance to the great Andromeda Galaxy is 2,250,000 ly.
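
To see where the 20,000-year figure comes from, here is a back-of-the-envelope sketch in Python, using the speeds assumed above:

    # Travel time to Proxima Centauri at the spacecraft speeds quoted in the text
    light_year_km = 9.4605e12
    distance_km = 4.25 * light_year_km            # distance to Proxima Centauri
    hours_per_year = 24 * 365.25
    for speed_km_per_hr in (200000, 300000):
        years = distance_km / speed_km_per_hr / hours_per_year
        print(f"At {speed_km_per_hr} km/hr: about {years:,.0f} years")
    # Roughly 15,000 to 23,000 years--hence "nearly 20,000 years"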

Now, to illustrate the above graphically, let’s make a light year equivalent to one kilometer and reduce everything else accordingly. At that scale, the Moon is a mere .004 cm from Earth, Mars about .83 cm, Jupiter 6.6 cm, Saturn 13.5 cm, Uranus 28.8 cm, Neptune 45.9 cm, and Pluto about 61 cm away. Thus, our scaled-down planetary system can fit comfortably in a medium sized room; the Oort cloud, however, lies roughly .8 km away. The nearest star, of course, is about 4.25 km distant, or roughly 2.6 miles. The galaxy, some 100,000 km across on this scale, spans about a quarter of the real distance to the Moon, while the Andromeda Galaxy sits nearly six times farther away than the Moon. On this scale, the most distant (visible) objects in the universe would lie on the order of ten billion kilometers out, roughly the span of the actual solar system. And there are, no doubt, vast reaches of the cosmos that are beyond visible range due to the finite velocity of light. So, whether one looks to the very small or the very big, infinity, for all intents and purposes, blurs the horizon of the unknown—something that mathematical models make clear.
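
Here is a minimal sketch, in Python, of the arithmetic behind this scale model; the real distances are the ones quoted earlier in the essay, and the conversion simply divides by the number of kilometers in a light year:

    # Shrink the universe so that one light year = one kilometer
    LY_KM = 9.4605e12                        # kilometers in one light year

    def model_cm(real_km):
        """Convert a real distance in km to centimeters in the scaled-down model."""
        return real_km / LY_KM * 1e5         # model kilometers -> centimeters

    distances_km = {"Moon": 3.844e5, "Mars": 7.83e7, "Jupiter": 6.287e8,
                    "Saturn": 1.2774e9, "Pluto": 5.7504e9, "Oort cloud": 7.48e12}
    for name, km in distances_km.items():
        print(f"{name}: {model_cm(km):,.3f} cm")
    # Moon ~0.004 cm, Mars ~0.83 cm, Jupiter ~6.6 cm, Saturn ~13.5 cm,
    # Pluto ~61 cm, Oort cloud ~79,000 cm (about 0.8 km); Proxima Centauri: 4.25 km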


References


Krauss, L.M. (1993). Fear of Physics: A Guide for the Perplexed. New York: BasicBooks.

The Diagram Group. (1980). Comparisons. New York: St. Martin’s.


***




The Relative Abundance of Chemical Elements


There is a certain fascination with classification schemes, and science produces these aplenty. For example, the division of living organisms into kingdom, phylum, class, order, family, genus, and species, not only shows the similarities of one creature to another, but also provides support for a widely-held theory—that of biological evolution. Similarly, the periodic table of chemical elements, with its seven periods and various groupings, lends itself to another great theory—the atomic theory of matter. The fact that Mendeleev created the periodic table before the atom was really understood speaks volumes for the validity of the theory. If the atomic theory were not true, then it is hard to see how the periodic table could exist at all.

An element, of course, is defined as “[a] substance which cannot be decomposed by simple chemical processes into two or more different substances" (qtd. in Moore, 1987, p.127). There are 92 naturally occurring elements, all but two of which have been found on earth (technetium, Tc 43, and promethium, Pm 61, have not yet been discovered in their natural state). They have a wide variety of physical and chemical properties; some occur naturally in their elemental state (iron, copper, gold, etc.), others can only be found in compounds (silicon, calcium, sodium, etc.). Some are radioactive, most are not; some are quite abundant in nature—oxygen, for instance—others are exceedingly rare.

What accounts for the relative abundance or rarity of an element? The answer, it turns out, can only come from the relatively new branch of science known as astrophysics—the merger of astronomy/cosmology and nuclear physics. I will explore that area momentarily; but first, how did we come by our knowledge of the elements?

As many as ten elements have been known from antiquity: copper and tin, for instance, were used to make bronze--a strong alloy. Their Latin names were cuprum (a reference to the Island of Cyprus) and stannum, respectively. Ferrum, or iron, later became extremely important in the manufacture of tools and weapons. Other known metals included aurum (gold), plumbum (lead), hydrargyrum (mercury), and argentum (silver). Antimony, carbon, and sulfur were also well known and commonly used. The only other element discovered before the age of science was arsenic, isolated by Albertus Magnus in 1250 A.D. Phosphorus was identified in 1669, and during the 1700s these elements were named: bismuth, chlorine, chromium, cobalt, hydrogen, manganese, molybdenum, nickel, nitrogen, oxygen, platinum, tellurium, titanium, tungsten, uranium, zinc, and zirconium. And during the 1800s, every year it seemed, new substances were being added to the list: aluminum, barium, beryllium, boron, bromine, cadmium, calcium, cerium, iodine, iridium, lithium, magnesium, niobium, osmium, palladium, potassium, silicon, sodium, tantalum, and so on. Thus, at least 50 different elements had been positively identified at the time Mendeleev began putting together the periodic table. It was perhaps inevitable that such a table be created at some point--some sense of order had to be imposed on the chaos.

The seven periods of the table are the horizontal rows representing the elements in ascending order according to atomic number--that is, the number of protons in the nucleus. An element may gain or lose electrons to become ionized, or occur in several isotopes (according to the number of neutrons), but the number of protons, and thus the strength of its positive charge, defines the element. Its chemical properties arise chiefly from the electrons swarming about the nucleus.

The vertical columns of the table indicate periodicity, or the point where the various properties begin to repeat themselves. Thus, the elements of column IA--H 1, Li 3, Na 11, K 19, Rb 37, Cs 55, Fr 87--are similar, as are those of column 0--He 2, Ne 10, Ar 18, Kr 36, Xe 54, Rn 86. Generally, elements fall into three categories: metals, metalloids, and non-metals. There are a number of subdivisions as well.

As far as relative abundance goes, group IA contains the most plentiful of all elements, hydrogen (making up 88.6% of all the matter in the universe) as well as one of the rarest, francium. There are only an estimated 15-25 grams of francium on earth. According to Kaufmann (1985, pp.110-11), "From chemical analysis of Earth rocks, Moon rocks, and meteorites, scientists have been able to determine relative abundances of the elements in our part of the galaxy." They are (in order of abundance): 1) hydrogen, 2) helium, 3) carbon, 4) nitrogen, 5) oxygen, 6) neon, 7) magnesium, 8) silicon, 9) sulfur, 10) iron, 11) sodium, 12) aluminum, 13) argon, 14) calcium, and 15) nickel (ibid). Here on earth, though, we have what might be called a "specialized environment," and so the most abundant substances do not reflect cosmic ratios.

The 21 most abundant elements in the earth's crust are scattered throughout the table [note: rankings in ascending order are presented along with periodicity--i.e. elements in the same vertical column]: vanadium, which ranks 21st, makes up .017% (by weight); strontium is 20th at .019% and is in group IIA with calcium (5th) at 3.63%, magnesium (8th) at 2.09%, and barium (13th) at .05%; nickel is 19th at .02%, which is in group VIII with iron (4th) at 5.01%. Zirconium is 18th at .026% and shares group IVB with titanium (9th) at .63%; group VIIA includes fluorine (17th) at .03% and chlorine (14th) at .048%; carbon is the 16th most abundant at .034% and is in group IVA with silicon, which is ranked 2nd overall at 27.72%; chromium is 15th at .037%; group VIA contains sulfur (12th) at .052% and 1st ranked oxygen, which accounts for 46.59% of the earth's crust--by far the most abundant element; group VIIB includes manganese (11th) at .10%; surprisingly, hydrogen accounts for only .13% and shares 10th place with phosphorus; hydrogen, though a gas, is part of group IA, the alkali metals, along with sodium (6th) at 2.85% and potassium (7th) at 2.6%; finally, aluminum is ranked 3rd at 8.13% (Chen, 1975, p.77). All other elements combined account for only .056% of the earth's crust. These abundances are summarized below:





21 Most Abundant Elements in the Earth's Crust


Rank   Element   % of Earth’s Crust (by weight)
1 Oxygen 46.59
2 Silicon 27.72
3 Aluminum 8.13
4 Iron 5.01
5 Calcium 3.63
6 Sodium 2.85
7 Potassium 2.6
8 Magnesium 2.09
9 Titanium 0.63
10 Hydrogen, Phosphorus 0.13
11 Manganese 0.10
12 Sulfur 0.052
13 Barium 0.05
14 Chlorine 0.048
15 Chromium 0.037
16 Carbon 0.034
17 Fluorine 0.03
18 Zirconium 0.026
19 Nickel 0.02
20 Strontium 0.019
21 Vanadium 0.017


The relative abundances of elements on earth do not, of course, reflect their occurrences throughout the universe. Again, 88.6% of all matter in the universe (matter, that is, that can be accounted for) is hydrogen and 11.3% helium. All the other elements, atomic number 3 and up, account for only 0.1%. Hydrogen and helium are scarce on earth because of the way the solar system formed--most of the hydrogen went into the sun's makeup as well as that of the massive outer planets. The small inner planets simply did not have sufficient gravity to retain the lighter elements. They are, as a result, made up of heavier elements--albeit the rarer ones.

Now, how did the elements originate in the first place? Hydrogen and helium were created in the aftermath of the Big Bang--the hypothetical origin of the universe. These simple atoms were the fundamental stuff out of which all else came. Heavier elements were formed as byproducts of thermonuclear reactions within stars--for only there are the temperatures and pressures great enough to induce fusion. Our own sun, for example, began with a composition of about 75% hydrogen and 25% helium, with trace amounts of heavier elements. Nuclear fusion, which causes the star to shine and radiate energy, is the transformation of hydrogen nuclei into helium nuclei, a process called core hydrogen burning (Kaufmann, p.392). Thus, helium is the "ash" of hydrogen burning. It is the outward push of radiation that halts a complete gravitational collapse, giving the star its equilibrium during what astronomers call the "main sequence."

After some five billion years, there is now more helium than hydrogen in the sun's core. In the final years of its life (about another five billion years from now), the sun's hydrogen fuel will be almost depleted and its energy output will decrease. Then gravity will dominate and begin to collapse the star. This causes increased pressure and makes the temperature rise. When the core temperature reaches 100 million Kelvins, helium burning is ignited: two helium nuclei fuse to form an isotope of beryllium. The beryllium is unstable and decays almost immediately unless it captures a third helium nucleus, forming carbon. Then some of the carbon combines with helium to form oxygen, so carbon and oxygen are the "ash" of helium burning (ibid).

In stars that are more massive than the sun, a greater variety of elements can be formed. When a star's core temperature reaches 600 million Kelvins, carbon burning begins, and this produces neon, magnesium, oxygen, and helium. At one billion Kelvins, neon burning begins, which produces more oxygen and magnesium. At 1.5 billion Kelvins, oxygen burning begins, the principal product of which is sulfur. Oxygen burning also produces silicon, phosphorus, and more magnesium. At 3 billion Kelvins, silicon burning is ignited. This is a furious process in which hundreds of complex nuclear reactions take place, but the final result is a stable isotope of iron (ibid). So iron is the end of the line for this particular path of element creation.
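
For reference, the burning stages just described can be collected into one small summary; the temperatures and products below are simply the figures cited from Kaufmann above, restated as a short Python sketch:

    # Advanced burning stages in stars, as described in the text
    burning_stages = [
        (100e6, "helium",  ["carbon", "oxygen"]),
        (600e6, "carbon",  ["neon", "magnesium", "oxygen", "helium"]),
        (1.0e9, "neon",    ["oxygen", "magnesium"]),
        (1.5e9, "oxygen",  ["sulfur", "silicon", "phosphorus", "magnesium"]),
        (3.0e9, "silicon", ["iron (a stable isotope)"]),
    ]
    for ignition_K, fuel, ash in burning_stages:
        print(f"{ignition_K:.0e} K: {fuel} burning -> {', '.join(ash)}")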

It is obvious, however, that iron is not the end of the periodic table. Whence come elements of atomic number 27 and above? These, it turns out, are created in the violent reactions of super-massive stars at the end of their lives--supernova explosions. The beautiful Crab Nebula (NGC 1952), for instance, is the remnant of a supernova that was observed in 1054 A.D. The types of nuclear reactions that take place in these events are unpredictable. The heavier the nucleus of the atom, the higher the energies required for fusion to occur, and the energies associated with supernovae depend upon the mass of the star.

On earth, elements with even atomic numbers are about ten times more abundant than those with odd numbers (Moore, p.128). A graph of the elements' relative abundance (with the even-odd fluctuation adjusted) shows a smoothly descending curve with definite peaks and troughs. There is, for example, a carbon, nitrogen, oxygen peak, an iron peak, and a lead peak. This is somewhat a reflection of the table's periodicity.

One might wish that gold or silver were more common than they are (ignoring the fact that such over-abundance would lessen their monetary value) or that tungsten were more easily obtained, but in the cosmic scheme, all of these--aside from hydrogen and helium--are of little consequence. All the heavier elements put together are little more than trace impurities. Without the stars and their various life-cycles, no heavier element would ever exist. Out of the death-throes of stars comes the stuff of life, literally.


References

Chen, P.S. (1975). A New Handbook of Chemistry. Camarillo, CA: Chemical Elements Publishing.

Kaufmann, W. J. (1985). Universe. New York: Freeman.

Moore, P. (Ed.). (1987). The International Encyclopedia of Astronomy. New York: Orion Books.



***


Mechanisms of Change


It is often said that truth is stranger than fiction, and nowhere is this more the case than in the discovery of continental drift. Nothing in anyone’s experience would tend to suggest that the face of the earth is changing in any significant way. That’s not to say that minor changes are not taking place: rivers change course, mountains and hills wear down, coastlines slowly erode. But the overall configuration of oceans and land-mass seems just as permanent as the stars in the night sky—something the Bible alludes to in Ecclesiastes 1:4: “A generation goes, and a generation comes, but the earth remains forever.”

This kind of permanence is deeply desired by all people who have precious little in the way of security. Political, social, and cultural upheavals never seem to end, and the earth itself is often a hostile environment: earthquakes, floods, droughts, violent storms. Indeed, history seems to be the tale of one catastrophe after another. But man-made disasters notwithstanding, one must question the real process of change in the natural world. Is the earth evolving, slowly changing over time, or is it the hapless victim of continual catastrophism?

Historically, catastrophe and natural disaster were held to be the main agents of change in this world. The best known example of this is the Great Flood described in Genesis 6:13-17: “And God said to Noah, ‘I have determined to make an end of all flesh; for the earth is filled with violence through them… behold, I will bring a flood of waters upon the earth to destroy all flesh in which is the breath of life from under heaven; everything that is on the earth shall die.’” Noah’s flood was a global catastrophe of epic proportions, and if not for the efforts of one man all would have been lost. From a scientific perspective, however, the story of the Great Flood is implausible at best. If there was such a planet-wide deluge, drowning every continent and submerging even the tallest mountains, where did the water go? The sheer volume of it has to be accounted for somehow. There is also a problem with the ark, whose dimensions are given very specifically in the Bible. How could two specimens of every animal on earth possibly be crammed into such a vessel? Clearly, it’s not possible. The modern view is to dismiss the story altogether as myth.

Whence comes the story of the Flood, anyhow? It is derived from a classic of ancient Mesopotamian literature, The Epic of Gilgamesh, which describes a great prehistoric flood. And indeed, such floods were commonplace in the Tigris-Euphrates valley (what is now southern Iraq). Archaeological excavations have revealed evidence of a particularly devastating flood in that region in remote antiquity; thus, the story of Noah’s Ark likely has some basis in fact. Is it so surprising that the tale would grow more elaborate through constant retelling before it was finally written down?

The earth does appear to be unchanging, nonetheless. The first hint that the planet’s past might have been vastly different from anything one could imagine came in the 16th century, shortly after the Europeans’ discovery of the Americas. Rough-hewn maps of the North and South American coastlines were being made. Portuguese explorers were concurrently doing the same for the African continent. It did not take long for those who studied the new maps to come to a startling realization: that the east coast of South America and the west coast of Africa matched one another like pieces of a jigsaw puzzle. The first to write about this was philosopher Francis Bacon (1561-1626). In 1620 he recorded the observation in his book Novum Organum, saying that it could not be mere coincidence. Africa and South America must have once been part of a single landmass that was somehow torn asunder. The question is, how? The clergy, never too comfortable with philosophers and scientists anyhow, were quick to attribute it to the biblical Flood—the force of the deluge must have ripped the continents apart, if indeed they were ever joined. The legendary Flood, it seemed, was a ready-made answer for any incongruity. For instance, the ancient Greeks had once noted that sea shells and fossils of what was obviously marine life were to be found high in the mountains. They speculated that what was now dry land must have at one time been under the sea. The clerical answer to that? The Flood, of course.

In 1784 American statesman Benjamin Franklin (1706-1790) suggested that the earth’s solid crust might be a relatively thin shell floating on an ocean of molten rock. Thus, it could break up into sections that could slowly drift around over time. It was a brilliant speculation, but what Franklin lacked was a mechanism to make it work.

In Great Britain during the 18th century, retired chemist James Hutton (now considered the “Father of Geology”) published a book, Theory of the Earth, in which he ascribed large-scale changes on the planet’s surface to natural processes—sedimentation, volcanic eruption, erosion by wind and water—and judging by the rates of change, concluded that the earth must be millions of years old.

As Hutton’s views gained wider acceptance and the science of geology became firmly established, additional evidence of continental drift was being gathered from around the globe. Not only were the coastlines of Africa and South America similar, their geologies were virtually identical. Moreover, many species of plant and animal, which could not have crossed the Atlantic Ocean, were common to both continents. Similarly, the Island of Madagascar, off the coast of Africa, had few species in common with Africa but many in common with India, which was much further away. How could that be? Since the concept of continental drift was not universally accepted at that time, the notion of “land bridges” came into vogue.

In 1909 Austrian geologist Eduard Suess completed a three-volume work called The Face of the Earth in which he postulated the existence of a super-continent called Gondwanaland. It included South America, Africa, India, Australia, and Antarctica, all in their present positions but joined together by land bridges. The assumption was that continents could rise or sink vertically, as if they were corks bobbing in water. This is the phenomenon of isostasy, which holds that landmasses are made of lighter rock (granite) than ocean basins (basalt). Because they are less dense, continents naturally float—that’s why they are continents. But can they move sideways too?

Alfred Wegener (1880-1930) came to the conclusion that they could, that indeed they must. He dismissed the idea of land bridges that somehow once existed (to account for observed data) then conveniently disappeared. The very concept of isostasy ruled that out because any land bridge would continue to float—it would not just disappear. Therefore, the continents must be slowly drifting across the face of the earth. For evidence there was the fit of the shorelines; and when the continental shelves were considered instead of the coastlines, the fit was even better. It was also known that Antarctica had not always been ice-bound. According to James Dyson:

Throughout the long, intervening periods, the Earth’s climate was mild and uniform. The tropics frequently extended into high latitudes; reef-building corals spread through warm seas much further north than they do today, sometimes to the latitude of Greenland and Alaska; palms, tree ferns, breadfruit trees, and other subtropical plants grew equally far north over a time span counted in millions of years. During much of Earth’s history even the Antarctic continent was free of ice. At times it was covered with dense forests.

In fact, fossils of creatures that could only have lived in the tropics were to be found throughout Antarctica. Thus, either the South Pole once enjoyed a warm climate (which is unlikely) or Antarctica has not always occupied its current location. Wegener proposed that all the continents had once been part of a single mass called Pangea, which had long ago broken up and dispersed. But since he was unable to provide a mechanism for the continental drift theory, his ideas were ridiculed. Unlike many of his predecessors, Wegener refused to fall back on legendary floods or other cataclysms to account for it. The continents just drifted.

Knowledge of the all-important mechanism that drove continental drift, it turned out, finally came from undersea exploration. During the mid-1800s an attempt was made to lay a telegraph cable across the ocean floor, connecting the United States and Great Britain. For this purpose, information about the ocean bottom was needed. Oceanographer Matthew Maury (1806-1873) was commissioned to collect data on the depths. His methods (using weighted ropes) were laborious and costly, but by 1854 it was clear that the middle of the ocean was much shallower than either side. One would think that the deepest part of the ocean would be the very center, but not so. Maury called it Telegraph Plateau.

After World War I the invention of sonar made it possible to obtain a detailed picture of the ocean bottom, and it revealed more than just a plateau: there was a mountain range—higher, longer, and more rugged than any on land—running the length of the ocean. This came to be called the Mid-Atlantic Ridge. After World War II it was discovered that the ridge curved around southern Africa and extended into the Indian Ocean. There it divided and worked its way around Australia and then formed a vast circle in the Pacific Ocean. Moreover, in the midst of this globe-spanning mountain range, there were deep, deep canyons—what appeared to be cracks in the earth’s crust. Here it was that sea-floor spreading was discovered as molten basalt from the earth’s mantle constantly welled up, building the Mid-Oceanic Ridge and expanding, centimeter by centimeter, the ocean floor. This is what was driving the continents apart, and Wegener’s theory at last had to be taken seriously.

Therefore, it was through persistence, acute observation, and refusal to rely on apocryphal tales of ancient catastrophes that the truth was finally laid bare. The earth was revealed to be a dynamic, living planet, constantly in motion. If it appears unchanging it is because a human life is so short. In fact, humanity has occupied the planet for only a minute fraction of geological time. Compared to the earth’s 4.5 billion-year history, that’s no time at all.

Although there are catastrophes and natural disasters, these are only local events and insignificant in the long-term. The real mechanisms of change are natural, slow moving, and inexorable. That is why, for example, I doubt the newly popular theory of dinosaur extinction due to an asteroid collision. Perhaps such a collision did take place, but I question if that in itself wiped out the reptiles. Rather, I believe that the break-up of Pangea (which began in the Triassic) and the repositioning of the continents, resulting in climatic changes, was what really triggered their demise. In other words, I feel more comfortable with a uniformitarian explanation than a catastrophic one.

Two: Ethics

Mammon and America

When I was growing up I often heard my father refer to the "Almighty Dollar," making me think that money was some sort of religion. Indeed, money is a god for some--if not money, then the tangible things it buys. This crass materialism--which dominated our nation in the '50s, was temporarily rejected during the '60s, and enjoyed a resurgence in the '80s--is a kind of mammonism: a love of money. In the movie Oh, God! John Denver asks God (played by George Burns) why He created man naked. God replies that when one has clothes one also has pockets, and where there are pockets there has to be something to put in them. The implication is that material wealth has little to do with God. In Matthew 22:17 a group of Pharisees confronts Jesus, asking him whether it is lawful to pay taxes to Caesar. Jesus replies, "Show me the money for the tax." They bring him a coin and he says, "Whose likeness and inscription is this?" They reply that it is Caesar's, and Jesus says, "Render therefore to Caesar the things that are Caesar's, and to God the things that are God's."

Nonetheless, so long as one is alive in this world, one's livelihood is a real concern. Let us not think that abject poverty is the guaranteed road to virtue. In fact, just the opposite seems to be the case. The impoverished areas of this nation are invariably hotbeds of crime, drug abuse, prostitution, and crushing despair. It is debatable whether poverty causes crime, but it is certainly true that crime begets poverty. Businesses simply cannot operate in crime-ridden neighborhoods, so they pack up and leave--taking badly needed jobs with them. Is this ethical? Like any other entity, a business's first instinct is to survive. It can hardly be blamed for that, but is there a point where ethical lines are crossed and profits are gained through people's ruin? Historically, there have been two opposing ethical approaches to American businesses: one based on the concept of Social Darwinism (survival of the fittest), the other on the Protestant work ethic (an honest wage for an honest day's work), but in recent years a third approach may be seen--a more utilitarian view.

The emergence of the United States as the world's wealthiest nation began shortly after the Civil War, centering on the burgeoning railroads. According to Nevins and Commager, America during Abraham Lincoln's day "was a nation of small enterprises. A monopoly was practically unknown..., furniture came from the local cabinet maker, shoes from the neighborhood shoemaker" (1981, pp.267-68). Within forty years, however, all this had changed. Companies like International Harvester and Standard Oil owned virtual monopolies in their fields, driving small businesses (and their owners) to ruin. One of the most spectacular successes was the United States Steel Corporation, born in 1901 with a capitalization of 1.4 billion dollars--a sum that was "larger than the total national wealth a century earlier" (ibid). The accumulation of such unbelievable wealth was the result of setting up a corporation--an entity that could "enjoy the legal advantages but avoid most of the moral responsibilities of a human being" (ibid). Then came the trust. A trust was a joining of corporations acting in concert to monopolize resources, eliminate less profitable subdivisions, bargain with labor, compete with foreign interests, and most importantly, control prices. Because of their vast reserves of capital, the trusts came to have an undue influence over the government. By the end of the 19th century democracy itself was an endangered species because most of the country's natural resources, industries, railroads, and utilities were all generating profit for a handful of men. According to Nevins and Commager:

Exorbitant charges, discrimination, and wholesale land grabs by the railroads, the malpractices of Rockefeller, Carnegie, and others in crushing competitors,
the savage power with which many giant corporations beat down labor, the pocketing by the trusts of the savings that came from science and invention,
the spectacle of corporation agents lobbying favorable laws through state legislatures and corporate lawyers finding loopholes in state tax or regulation laws, all aroused widespread alarm and bitterness. (p.273)

Such business practices, described above, were operating according to what has been called Social Darwinism. Charles Darwin, of course, was the English naturalist who spent years studying wildlife all over the world. In 1859 he published The Origin of Species, the central thesis of which was that most animals produce far too many offspring to subsist on available food supplies, therefore they must compete amongst themselves and against other species to survive. In this way, the weak, the sickly, and the maladapted get weeded out. It is nature's method of insuring that only the strongest, most intelligent, cunning, and energetic creatures live to mate and pass on their genes. It was not difficult for the Rockefellers and the Carnegies of the world to adopt Darwin's theory of natural selection as a means to justify their ruthlessness.

This kind of brutality, after all, is nothing new--it is as old as the human race. All the Caesars, Khans, and Hitlers have operated according to the basic principle of "might makes right...” Ethics, the philosophical consideration of right and wrong, hardly enters into the discussion. Probably the most brilliant exposition of this kind of thinking was that of the philosopher Friedrich Nietzsche (1844-1900). Nietzsche was a pious youth--schoolmates derided him as a "Jesus in the Temple" (Durant, 1961, p.403). But at some point he rejected his reverent past and spent the remainder of his life bitterly denouncing Christianity. In such books as Beyond Good and Evil (1886) and Thus Spake Zarathustra (1883), Nietzsche taught that Christian morality was slave morality, a system devised by the weak to fetter the strong. Thus, he pronounced, God is dead, and in His place arises the Superman (Ubermensch), the source of what he called "hero morality"--the ethos of the conqueror and the dictator. That which we call "good" is whatever triumphs, whatever succeeds, whatever obtains victory. "Evil" is what fails, gives way, or is overcome by superior force. The world as a whole operates according to what Nietzsche called the Will to Power. Here, there is no room for compassion, weakness, or sentiment. Altruism is a sham and Christian charity hypocrisy.

Such thinking, obviously, is nothing more than the application of Darwin's theory to human value systems, and, frankly, it is a little hard to argue with its basic premise. Is not strength better than weakness, victory better than defeat? And nature is appallingly indifferent to the welfare of any living thing, evidenced by the fact that 99% of all species that ever existed on earth are now extinct--driven to destruction by environmental pressures. The question is, should we translate these indisputable facts to our own species? Are we human beings to scratch and claw and compete, just as animals do?

Apparently, the U.S. government disagreed, and stepped in to enact antitrust laws in the early 1900s. In his First Inaugural address, President Woodrow Wilson said:

The evil has come with the good, and much fine gold has been corroded. With riches has come inexcusable waste. We have squandered a great part of what we might have used, and have not stopped to conserve the exceeding bounty of nature... scorning to be careful, shamefully prodigal as well as admirably efficient. We have been proud of our industrial achievements but we have not hitherto stopped thoughtfully enough to count the human cost, the cost of lives snuffed out, of energies overtaxed and broken, the fearful physical and spiritual cost to the men and women and children upon whom the dead weight and burden of it all has fallen pitilessly the years through... (qtd. in Nevins & Commager, p.338)

Wilson was perhaps the very embodiment of a more traditional approach to business: that of the Protestant work ethic--so named to differentiate it from more aristocratic Roman Catholic values. Hierarchies are, after all, inherent within Catholicism. Europe was historically a society divided by class, so the Protestant Reformation was as much an economic struggle as a religious one. America is much more egalitarian, although vestiges of class-consciousness remain. But the Protestant work ethic was at the heart of colonial and early American entrepreneurialism. In a social climate preoccupied with "salvation"--the benefits of which are supposedly unobtainable until after death--it was considered best to occupy one's time with useful work. Laziness, sloth, gluttony, and self-indulgence (so beloved of aristocracy) were sins to be avoided. There may have been a time when a man's word was his bond, when written contracts were mere formalities, when business owner and employee were not sundered by vast oceans of wealth and privilege, but those days seemed to vanish in the wake of Andrew Carnegie, John D. Rockefeller, and J.P. Morgan.

The Protestant value system is based upon the simple--but surprisingly radical--teachings of Jesus on the subject of money. In Matthew 6:24 he says, "No one can serve two masters; for either he will hate the one and love the other, or he will be devoted to the one and despise the other. You cannot serve God and mammon." Thus, in Jesus’ opinion, men are too concerned about money and material--the externals of life. In Mark 10:23-25 he says, "How hard it will be for those who have riches to enter the kingdom of God!" and, "It is easier for a camel to go through the eye of a needle than for a rich man to enter the kingdom of God" (in that passage the word "camel" is probably a mistranslation of the original Greek word meaning "rope"). There was a time in American history, I'm sure, when many took those sayings to heart and therefore dealt honestly with others, seeking simply to feed themselves and their families through honorable means. The idea of building a financial empire and living like royalty must have seemed downright un-American. Indeed, the U.S. government had to step in and pass antitrust laws because if it had not, the country would have become an oligarchy and the government little more than a rubber stamp. Democracy itself was at stake.

In the 20th century we have passed through world wars, a cold war, civil rights struggles, political assassinations, and shifts between conservative and liberal agendas, but American business continues unabated. The Protestant work ethic seems hopelessly outdated now, and although Social Darwinism is still practiced, most people do not feel comfortable with it. These days a more utilitarian approach to ethics is the desired norm. Companies want to maintain profitability but workers must be able to make a decent living too. In 1906 a novel by Upton Sinclair called The Jungle shocked the nation with its vivid descriptions of the Chicago stockyards and the plight of (mostly immigrant) workers. Here is a brief passage:

Marija and Elzbieta and Ona, as part of the machine, began working fifteen or sixteen hours a day. There was no choice about this--whatever work there was to be done they had to do, if they wished to keep their places; besides that it added another pittance to their incomes, so they staggered on with the awful load. They would start work every morning at seven, and eat their dinners at noon, and then work until ten or eleven at night without another mouthful of food... When they got home they were always too tired either to eat or to undress; they would crawl into bed with their shoes on, and lie like logs. (p.142)

After The Jungle was published, public indignation forced the government to enact pure-food laws. Labor laws and occupational safety laws soon followed.

There are some who believe that socialism is the only solution to economic woes, but if the results are like those in Russia or North Korea--where economies are in shambles and, in the latter case, famine is so bad that people are eating the bark off trees--there is little to recommend that route. The United States, European Union, and Japan seem to be the best models of prosperity for the present. No one will deny that there are inequities to be resolved, but such problems are best addressed in an environment of freedom and opportunity, rather than oppression and fear.



References

Durant, W. (1961). The Story of Philosophy. New York: Washington Square.

Nevins, A. & Commager, H.S. (1981). A Pocket History of the United States. New York: Washington Square.

Sinclair, U. (1906). The Jungle. New York: Signet.


***



Racism and Racialism

Racism is racism, one could argue, but the distinction between extrinsic and intrinsic racism presented by Kwame Appiah (2003, pp.264-279) appears to be a matter of the depth to which such views are held—the latter being more insidious than the former (if I understand Appiah correctly). The former—extrinsic racism—is best expressed by the classic and ironic declaration, “Some of my best friends are…(you fill in the blank).” It’s ironic because, if you truly consider someone a “friend,” you would not, it seems to me, identify him or her in ethnic terms. This cuts to the heart of the problem of racism (or bigotry) in general: the failure, or outright refusal, to regard other human beings primarily as individuals rather than as members of particular groups. We often consider it normal, even admirable, to treat certain persons according to their “status.” For example, we deal with small children, in every respect, much differently than we do with adults. This is not limited to one’s own children, but all children. It is considered normal, and morally correct, to take especial care and concern when relating to children because of their vulnerability as minors. Similarly, a white person—particularly one of the older generation—may consider it perfectly natural to regard blacks or other non-whites as subordinates; there may be no real malice involved, simply an ingrained assumption (white superiority) that remains unchallenged. Such assumptions are usually unconscious, unless brought to the fore through some shocking challenge.

Intrinsic racism, on the other hand, seems to be more deeply rooted, and thus has the potential to cause greater harm. It is expressed by the Crummell quote on p. 273: “Races, like families, are the organisms and ordinances of God: and race feeling, like family feeling, is of divine origin. The extinction of race feeling is just as possible as the extinction of family feeling. Indeed, a race is a family.” Here, no matter how many of your “best friends” may be (blank), you would never consider giving preference to them over others of your own ethnic background. If you’re white, you treat other whites—indeed all whites—differently than you do non-whites, for no other reason than “race.” Here, I should point out, as Appiah does in his essay, that intrinsic and extrinsic racism may not be so easily separated or distinguished; in fact it seems best to regard the whole psychology of it as a spectrum with extreme intrinsic on one end and extreme extrinsic on the other. But all racism, I believe, emanates from a core of false assumptions that Appiah refers to as “racialism”—the notion that there are identifiable and heritable characteristics that a) all members of a particular “race” have in common, and b) make it possible to subdivide humanity into distinct racial groups.

Actually, I am not unfamiliar with these lines of thought that are so clearly discussed in “Racisms." Racialism, which holds that “traits and tendencies characteristic of a race constitute…a sort of racial essence,” and give rise to “what the nineteenth century called the ‘Races of Man,’” has resulted in all sorts of dubious and pseudo-scientific theory. The most notorious example would be the elaborate scheme created by the Nazis to identify Jews according to their relative genealogies (e.g. one Jewish grandparent on the mother’s side—German; two Jewish great-grandparents on the father’s side—Jew; etc.). But in all actuality, there is no scientific basis for any of it. The whole idea of “race”—as expressed in racialism—is itself a fallacy. There is no “sub-species” of human being; there is only one, remarkably homogeneous, species. Genetically, ALL humans are virtually identical, and in fact lineally related. Samples of human DNA taken from every ethnic group in the world—from European to Chinese to Arab to African to Australian aborigine—have proven that all humans alive today are lineally descended from a single woman (sometimes whimsically referred to as “Eve”) who lived in sub-Saharan Africa about 200,000 years ago. I could go on and on about this, of course, but such arguments mean little to those who insist on holding their (irrational) racial views.


References

Appiah, Kwame Anthony. "Racisms." Rpt. in The Right Thing to Do: Basic Readings in Moral Philosophy. ed. James Rachels. Boston: McGraw Hill, 2003. 264-279.


***


Decision to Drop the Bomb

The debate over the “morality” of war is essentially a conflict between two value systems: the Greco-Roman and Judeo-Christian. Greco-Roman values, of course, accept the inevitability of war, and prize such virtues as courage, honor, obedience to authority, adherence to duty, and so on. Very chivalrous. But Christianity, as Lackey points out, was originally understood to be pacifist: "The early Christians, living at the time the New Testament was being written and shortly afterward, thought that Jesus' teaching was perfectly unambiguous. He did not permit meeting violence with violence, period" (2003, p.221). Such a view appeals to us intuitively; deep down we know it is wrong to fight amongst ourselves. It’s idealistic, even if a bit naïve. I would point out, however, that even Jesus was capable of violent acts: in the Gospel of John he fashions a whip out of bits of rope and uses it to drive the money-changers out of the Temple; in another passage he urges his followers to “pluck out your eye” if it causes you to sin, or cut off your hand if that causes you to sin. These are not pacifist notions. So there are some things worth resisting, by force if necessary. If one were to insist upon utter pacifism, then we would have to eliminate the police and other forms of law enforcement; we would have to let criminals run free and terrorize whomever they wished. Pacifism, carried to such an extreme, can be just as dangerous and harmful as the most strident jingoism. The rational “Golden Mean” is what we need.

As for the use of nuclear weapons in war, specifically the decision to bomb Hiroshima and Nagasaki—I see little that makes one form of destruction morally superior to another. Is it better to kill someone by putting him in front of a firing squad, hanging him from the gallows, or sending him to the guillotine? Our conventional “humanitarian” view says that death should be as quick and painless as possible. Extrapolate that to the international scene during the final days of World War II, and the U.S. decision to drop the Bomb on those Japanese cities seems just. Otherwise, the war would have likely dragged on for another six months or so. The fire-bombing of Tokyo several months earlier left more than 100,000 Japanese civilians dead in a single night—but nobody debates the ethics of that. The debate is over nuclear weapons, and whether they should be used at all. It is the fear of nuclear energy that fuels the ongoing argument.

Should nuclear energy be feared? Wouldn’t it be better if there were no such thing? Nuclear power is what makes the Sun shine, and all the stars. Most of us know that atoms are composed of nuclei with orbiting electrons. Conventional explosives, such as TNT or nitro, release only chemical energy--the energy of the electrons. These can be deadly enough, but the atomic nuclei remain untouched. When atomic nuclei are either split (fission) or forced together (fusion), the far greater energies bound up in the nuclei are released. Theoretically, nuclear power should not be feared any more than sunlight pouring through your kitchen window. The reason there has not been a World War III is that nuclear weapons guarantee destruction to any nation attacking a country that has them. Humanity has, in effect, been pushed into a corner with two options: abandon war altogether or perish in a nuclear holocaust. Some have argued that the decision to deploy atomic bombs against the Japanese was intended to “send a message” to the Soviet Union. But according to Dueck, "Although Truman and his advisors spoke of gaining diplomatic leverage with Stalin through possession of the bomb, there is no reason to believe that the primary reason for dropping the bomb on Japan was anything other than what Truman said it was--to end the war as soon as possible and save American lives" (1997, p.21). All fears aside, in terms of sheer utility, using the Bomb was the quickest way to end the War.



References

Dueck, Colin. "Alternatives to the Bomb." Rpt. in Ethics & Politics: Cases and Comments. eds. Amy Gutmann and Dennis Thompson. Chicago: Nelson Hall, 1997. 16-25.

Lackey, Douglas P. "The Ethics of War and Peace." Rpt. in The Right Thing to Do: Basic Readings in Moral Philosophy. ed. James Rachels. Boston: McGraw Hill, 2003. 221-229.


***




Feeding the Hungry

The respective articles by Singer and Narveson are diametrically opposed—with the latter appearing, at first glance, to be lacking all compassion. Anyone with an ounce of compassion for his or her fellow human being would surely agree that feeding the hungry (or healing the sick, clothing the naked, housing the homeless, etc.) is a moral imperative. Yet reason—and the sheer reality of life on Planet Earth—contradicts Singer’s decidedly self-righteous pronouncements at every turn. To save needy children’s lives, he says, you have only to donate to Unicef or Oxfam America. In fact, you should donate every penny that’s not absolutely essential to your own survival. Otherwise, you’re practicing “the kind of ethics that led many Germans to look away when the Nazi atrocities were being committed" (2003, p.158). Well, there is certainly nothing wrong with donating to Unicef or Oxfam America or any other charity, and I would encourage those who feel so inclined to donate. But are these the only means of assisting others in need? Singer’s unbelievably simplistic view defies common sense in the following ways:

1) Individual conscience—by attempting to foist one, and only one (his own), moral choice on everyone, he denies the primacy of individual conscience. Those who are too selfish or callous to help others, shame on them. But for those who do feel compelled, should it not be their own personal choice how to help, and to what extent?

2) Too great a need—no matter how much you do, you cannot single-handedly save everyone. Someone will be left outside the boundaries of your charity. What about them? How do you justify helping x number of people and no more?

3) Defining “essential”—according to Singer most families in America can subsist on $30 K per annum, and whatever is left should be donated. Fine. But why stop there? What we call “poverty” in America would be tremendous wealth in many other nations. Will Rogers said, “America is the only country in the world where people drive to the poorhouse in their cars!” And I see people in public housing projects walking around talking on their cell phones. If you want to go down to “bare essentials” then you should sleep in a cardboard box on the street and beg for your meals.

4) Charity does not address root problems—charities are great for emergencies, like floods or earthquakes, but the root problems which cause poverty, disease, and suffering in the world can never be solved that way. Such problems have political, economic, cultural, and religious roots, and can only be addressed in those ways. Rather than simply give food to the hungry, one should ask the question: Why are they hungry? It is better to teach someone how to provide for his own needs than to simply provide for him.

I agree with Jan Narveson: “In fact, all of the incidence of substantial starvation (as opposed to the occasional flood) has been due to politics, not agriculture" (2003, p.173). When I was in high school I participated in a world hunger study project and came to the same realizations. There is ample food supply to feed the world’s population several times over. The fundamental difficulty is political strife between nations and the insurmountable problems associated with lack of adequate infrastructure: even if you get the food there, how do you get it to those in remote areas who are starving? How do you stop corrupt regimes, such as North Korea, from confiscating the aid and using it to feed only the military? A wealthy nation like America has abundance because the system of government protects private property, a free market economy, civil rights and civil liberties, and so on. Without such political and economic stability, it would be impossible for individuals to amass much wealth. And we do have a de facto redistribution of wealth in the form of progressive tax scales and government assistance to the needy. I would argue that poorer nations, instead of resenting America or other Western democracies, learn from our example. There are many inequities associated with capitalism and free market economies, granted, but so far nothing seems to work better.



References

Narveson, Jan. "Feeding the Hungry." Rpt. in The Right Thing to Do: Basic Readings in Moral Philosophy. ed. James Rachels. Boston: McGraw Hill, 2003. 162-173.

Singer, Peter. "The Singer Solution to World Poverty." Rpt. in The Right Thing to Do: Basic Readings in Moral Philosophy. ed. James Rachels. Boston: McGraw Hill, 2003. 154-160.


***



Civil Disobedience

Regardless of how powerful the U.S. government may appear to be, it is still a government beholden to the will (or the whims) of the people: senators, congressmen, governors, and presidents always have at least one eye on public opinion polls. Politicians know that they must maintain at least the appearance of acting on behalf of their constituencies. That’s why Senator John Stennis of Mississippi, for example, could bargain with the Nixon Administration to assist in delaying implementation of public school desegregation—a clear violation of federal law. That’s also why Governor George Wallace of Alabama made a spectacle of standing in the doorway of one of his public schools to stop blacks from gaining access to white education. Despite their respective personal opinions (although it’s hard to imagine a middle-aged white Southerner of that era having anything other than a traditional view of blacks), both men were acting to please their constituents. This also goes a long way toward explaining why the Supreme Court found itself unable—or unwilling—to deal with the serious issues of racial discrimination in American society for nearly one hundred years after emancipation: the notion of white supremacy was too deeply embedded in the national psyche. That’s one reason for the necessity of civil disobedience as an integral part of the Civil Rights movement.

This is not mere academic analysis on my part. First, I’ve actually lived in these places—Jackson, Mississippi; Birmingham, Alabama; Columbia, South Carolina—and know a thing or two about the racism in those regions. Second, I happen to be half of an inter-racial couple—my wife of sixteen years is African American. We, and others like us, seem to represent a veritable stake through the heart of that racist Dracula. I am well aware of the fact that not too many years ago, under Virginia law, my marriage would have been a felony, punishable by steep fines and imprisonment. Thus, laws are not always just and actions taken by the government are not ipso facto legitimate. In Greenberg’s article “Revolt at Justice,” he and his colleagues found themselves in a quandary: the boss, Attorney General John Mitchell, expected his employees to toe the line on the administration’s Civil Rights policy—a policy which, to all appearances, violated Constitutional and Supreme Court mandates (1997, pp.143-151). Nevertheless, lawyers take oaths to support the Constitution, not simply the directives of whoever occupies the position of authority. Clearly, men of conscience and legal training are to be held accountable for individual moral choices, political expediency notwithstanding. On these grounds, civil disobedience is not only justified, but necessary.

And all the above, curiously enough, is now being replayed in some of the strange emanations coming from the current administration. For example, where once Colin Powell opposed U.S. military intervention in Iraq (during the 1991 Gulf War), even to repel invading Iraqi armies from Kuwait, he now trumpets the Bush Administration’s official party line. I suppose it’s either that or lose his job. And Condoleezza Rice goes on national television splitting the finest hairs to the nth degree on whether she personally supports this new impending war (it was quite clear to me that she has reservations but will not express them). Okay… she wants to keep her job too. Or consider the unnamed Bush Administration officials who informed CBS that this year’s Grammy Awards show was NOT to be used as an anti-war forum for Grammy-winning artists—a directive that CBS duly heeded. What happened to freedom of speech, anyway? Without debating the merits or necessity of this looming war, I have to say that one grows weary of having one’s intelligence insulted by this administration, or one’s “patriotism” called into question for opposing its policy toward Iraq—a nation which, so far as I know, has never attacked the United States. The growing anti-war movement, which probably will not prevail, is therefore another form of civil disobedience that must not be taken lightly.


References

Greenberg, Gary J. "Revolt at Justice." Rpt. in Ethics & Politics: Cases and Comments. eds. Amy Gutmann and Dennis Thompson. Chicago: Nelson Hall, 1997. 143-151.


***


Natural vs. Unnatural

In today’s prevailing political climate it is nearly impossible to criticize homosexuality as a form of behavior without sounding like some gay-basher. Essentially, the argument against it is a religious one. It has to do with one’s sense of morality, and thus ethics. But the natural vs. unnatural argument stems from the medieval rapprochement of Christian theology and Aristotelian philosophy—what was considered science in that era. Later came the development of real science, and through the work of such figures as Newton, Galileo, Copernicus, and many others, the “natural law” era of political and social theory began. Problems with moral condemnations of homosexuality, I suppose, led to the “violation of nature” argument. In his article Leiser (2003, pp.144-152) does a good job of picking it apart, a task made all the easier by the archaic nature of the concept. First, he makes a distinction between the “descriptive” laws of nature and the “prescriptive” laws of men: “These ‘laws’ merely describe the manner in which physical substances actually behave. They differ from municipal and federal laws in that they do not prescribe behavior.” The distinction here is valid, but the first assertion is not. For example, the “description” of the manner in which substances actually behave is not law, but a language-construct making the “law” understandable to human minds. The physical law itself is seemingly inherent within matter and energy. But there is a sharp contrast between physics (the laws of which seem inviolable) and biology, the science of life. In physics all phenomena can be reduced to mathematical formulae; not so in biology. Life itself resists any such neat reduction, so applying “natural law” to ecology, sociology, and so on has always been problematic.

The assertion “anything uncommon or abnormal is unnatural” is closer to the meaning of the anti-homosexual argument. Again, I take issue with his apparent equating of “uncommon” with “abnormal.” Uncommon, as a matter of fact, is usually considered a virtue, as in “uncommon valor.” Uncommon may be thought of as anything that falls more than two or three standard deviations from the mean of a normal (bell) curve, and it can be good or bad. Consider I.Q., for example. Uncommonly high I.Q. is admired, while uncommonly low I.Q. (or idiocy) is not. Sometimes, however, persons with a genius-level I.Q. suffer profound psychological effects. But that which is abnormal may indeed be considered unnatural. In this sense, homosexuality can be considered a form of deviant behavior, not unlike pedophilia or countless other perversions. Many forms of heterosexual behavior may also be considered abnormal, such as sadomasochism. We can be glad that there are no “morality police” spying on people’s bedrooms, and what consenting adults do behind closed doors is no one’s business. But the issue of what is morally acceptable—and in this respect “normal”—is, as I said, a religious one. I’ll say this: homosexuality is incompatible with Judeo-Christian values. Whether one accepts the practice as normal or natural depends upon one’s spiritual orientation.
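
To make the “two or three standard deviations” benchmark concrete, here is a minimal sketch of my own (not part of the original essay) showing how rare such scores are under a normal distribution; the I.Q. mean of 100 and standard deviation of 15 are the conventional values, assumed here for illustration.

import math

def fraction_beyond(k):
    # Two-tailed share of a normal distribution lying more than k
    # standard deviations from the mean: erfc(k / sqrt(2)).
    return math.erfc(k / math.sqrt(2))

# Conventional I.Q. scale assumed: mean 100, standard deviation 15.
for k in (2, 3):
    low, high = 100 - 15 * k, 100 + 15 * k
    print(f"beyond {k} SD (I.Q. below {low} or above {high}): {fraction_beyond(k):.2%}")

# Roughly 4.6% of people fall beyond two standard deviations, and only
# about 0.27% beyond three: uncommon indeed, in either direction.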


References

Leiser, Burton. "Is Homosexuality Unnatural?" Rpt. in The Right Thing to Do: Basic Readings in Moral Philosophy. ed. James Rachels. Boston: McGraw Hill, 2003. 144-152.


***



Double-Effect

As the United States is, apparently, gearing up for war with Iraq (it may be only a week or two away), I’ve been hearing discussion of some of the same issues raised in Simone Sandy’s piece, “Bombing the Bunker in Baghdad” (1997, pp.27-30). For example, there is talk of sending in U.N. peacekeepers to assist the weapons inspectors, and fears that Saddam Hussein will use them as “human shields,” thinking that the United States will not attack if it has to cut through innocents to get to Iraqi forces. In the 1991 bunker-bombing incident, it seems as though U.S. military planners made reasonable efforts to determine that the structure did indeed serve a military purpose and was a legitimate target. I don’t believe a known civilian building would be deliberately chosen (although targeting civilians was routine during World War II—e.g. Dresden, Hiroshima). No one likes war or wants war—myself included. It baffles me that in the 21st century—2003 A.D.—nations have not yet found a better way to settle their differences. War itself is an abomination, even when unavoidable. Clearly, the world could not just sit on its hands in 1939 as Hitler’s armies overran Europe and tried to exterminate its Jewish population. War was inevitable. So it’s my view that if a war MUST be fought, it’s better to go all out. Half-hearted efforts (Vietnam) solve nothing—they just prolong the agony. If George Bush Sr. had continued Desert Storm until Saddam Hussein was deposed, we wouldn’t be having this new confrontation today. Leaving Hussein in power virtually guaranteed another war.

And another thing: our enemies do not always share our humanitarian concerns (as exemplified by the Geneva Conventions on warfare). The Japanese demonstrated during WWII that no atrocity was too horrible to be used. Study the history of the Japanese occupation of Korea—1910-1945—and you’ll get a sense of what I’m saying. Similarly, our new enemies, Islamic militants (terrorists), care nothing for the welfare of humanity in the general sense. They are too caught up in their delusional, religious fantasies to see non-Muslims as fellow human beings. Saddam Hussein, from what I can tell, is motivated by a more secular ambition for raw power, but it would be a mistake to think he has any humanitarian notions similar to ours. The Bush Administration nowadays likes to compare Hussein to Hitler and says the situation is analogous—but that’s ridiculous. Iraq may pose a threat to its neighbors; how can it possibly threaten the United States? Political assassinations are not the official policy of this country, but realistically, there may arise situations where an unofficial assassination will have to be carried out. If there is a war, it seems likely that chemical or biological weapons will be used against American troops. I’d rather see Hussein assassinated than have tens of thousands of our troops perish horribly. The faster this thing is over, the better off we’ll all be.

References

Sandy, Simone. "Bombing the Bunker in Baghdad." Rpt. in Ethics & Politics: Cases and Comments. eds. Amy Gutmann and Dennis Thompson. Chicago: Nelson Hall, 1997. 27-30.


***



Debate on Euthanasia

I doubt that there is any kind of definitive, absolute answer to the dilemmas of euthanasia. In the abstract, I tend to agree with Richard Doerflinger’s position: if life comes (involuntarily) from God, then only God really has the authority to end it (2003, pp.180-188). But such abstractions become meaningless when one is faced with the kind of human suffering described in the Rachels piece (2003, pp.175-179). It seems somewhat disingenuous for those of us who are not terminally ill or subjected to excruciating pain to debate the philosophy and morality of euthanasia. One would think that the victim of such an illness should have the final word in the matter. According to the Alsop story (in “The Morality of Euthanasia”), “‘If Jack were a dog,’ I thought, ‘what would be done with him?’” The problem is that human beings are not equivalent to animals—not legally, morally, spiritually, or philosophically. Otherwise, there would be no point in having this discussion. A few observations:

As a theist I will readily agree that God (or whatever you wish to call it) created all life. But death is an integral part of the equation. God “created” that too, and one would think for very compelling reasons. Humans have the power of decision in helping to create new life—as when we choose to procreate. So there should be no inherent prohibition against the ending of life either, not in the absolute sense.

In certain circumstances “killing” is officially sanctioned, and even mandated. Where capital punishment is practiced, the State legally ends the lives of condemned persons. In the military, soldiers are trained for combat and expected to kill in times of war. Police and other law enforcement officers are authorized to use deadly force under certain conditions. Even ordinary citizens can be legally exonerated in cases of “justifiable homicide” (self-defense).

What about suicide? There is indeed a strong moral taboo against the taking of one’s own life—and rightly so. It is, in most cases, a “permanent solution to a temporary problem.” Take teen suicide, for example. A fifteen-year-old may feel that life’s problems are insurmountable, to the point where he or she considers ending it all. In 99.9% of those cases, however, simply growing up will resolve the problem. But there is a stark difference between the “coward’s way out” and instances where people willingly sacrifice their lives for some cause. A soldier on a battlefield may throw himself upon a live grenade to save the lives of his unit… is this not suicide? Or what about Christian martyrs in ancient Rome who chose death rather than renounce their faith in Christ? Or the police and firefighters who perished in the World Trade Center on 9/11? Patrick Henry said, “Give me liberty, or give me death!” Some things are worth dying for. Therefore, there is no inherent prohibition against “suicide” either.

I don’t buy any of Doerflinger’s “slippery slope” arguments, which are all based on assumptions. Not that the issues he raises are unworthy of discussion—they deserve serious attention. The position that favors euthanasia is bound to divide the medical profession. And Dr. Death (Jack Kevorkian) does give me the creeps. But do we have the right to deny terminally ill patients a choice in their own fate? I say that such individuals should be able to legally request the means to end their own suffering.



References

Doerflinger, Richard. "Assisted Suicide: Pro-Choice or Anti-Life." Rpt. in The Right Thing to Do: Basic Readings in Moral Philosophy. ed. James Rachels. Boston: McGraw Hill, 2003. 180-188.

Rachels, James. "The Morality of Euthanasia." Rpt. in The Right Thing to Do: Basic Readings in Moral Philosophy. ed. James Rachels. Boston: McGraw Hill, 2003. 175-179.


***


Loyalty vs. Civic Responsibility

In cases of conflicting values--as in that of loyalty vs. truth-telling (to authority figures, that is)--problems arise when we take an absolute position on either side. There is also what is known as the "either-or fallacy"--i.e. presenting two, and only two, options when others may be available. I found myself in this quandary in a job situation a few years ago. There were security issues in the store, and the District Manager (supervisor) came in one day to conduct personal interviews with each employee. I was among the last to be interviewed, but when my time came he told me, "I already know everything that is going on here. I know that x is doing this and y is doing that, so if you try to cover up for them I'll know you're lying." I told him that I was very uncomfortable informing on friends and co-workers and that I thought their misdeeds were "none of my business." The DM disagreed and said, "If you know something is going on and don't say something about it, you're just as guilty as they are." He had a valid point, of course, and since I had no new revelations to offer, I confirmed what he already knew. The store manager eventually lost his job and accused me and several others of "conspiring" against him (which was ridiculous).

But there is indeed a conflict between what we call our "civic responsibility" and the natural loyalties that arise among human beings in their everyday encounters. Both are needed and both, in fact, are sanctioned in law. Civic responsibility demands that ordinary citizens cooperate with law enforcement in the apprehension and prosecution of criminals. In other words, crime is everybody's problem and everybody's responsibility--not just the responsibility of those directly involved. It is understandable that some may fear for their lives--and with reason--and who can blame somebody who opts for self-preservation? But the mindless "I don't want to get involved" attitude is contemptible. Outright refusal to give testimony or serve as a witness in court could be considered an obstruction of justice. It might even be possible, under some circumstances, for an uncooperative witness to be treated more harshly than the accused: you could wind up sitting in jail on a contempt charge while the offender goes free! So our collective responsibility must be taken seriously.

Loyalty, on the other hand, is much more primal and basic. Essentially, it is a survival mechanism by which human beings bond together for common purposes and goals. No one should understand this better than police officers, whose "blue wall of silence" is the stuff of legend. The reason is simple: officers trust one another with their lives on a daily basis; loyalty is born of that trust, a kind of comradeship that is hard to find anywhere else. And the law recognizes that essential loyalty in a number of ways. For instance, husbands and wives cannot be compelled to testify against one another, even when the charges are grave. There is doctor-patient privilege, priest-confessor privilege, and most significantly, attorney-client privilege [note: under certain conditions that privilege can be broken]. So even if an accused confesses to his lawyer and says, "I did it," the lawyer cannot reveal the fact and must still defend his client--even to the point of seeking acquittal.

Perhaps it would surprise you to learn, therefore, that privileged communications are now subject to government eavesdropping by authority of the USA Patriot Act. According to the ACLU, "the Justice Department, unilaterally, without judicial oversight, and without meaningful standards, has issued rules that give it the power to decide when to eavesdrop on the confidential attorney-client conversations of a person whom the Justice Department itself may be seeking to prosecute. This regulation, implemented without the usual opportunity for prior public comment, is an unprecedented frontal assault on the attorney-client privilege and the right to counsel guaranteed by the Constitution. It is especially disturbing that these provisions for monitoring confidential attorney-client communications apply not only to convicted prisoners in the custody of the Bureau of Prisons, but to all persons in the custody of the Department of Justice, including pretrial detainees who have not yet been convicted of crime and are presumed innocent, as well as material witnesses and immigration detainees, who are not accused of any crime. 28 C.F.R. § 501.3(f) (proposed amendment)" (1). Initially, this invasion was believed to apply only to non-U.S. citizens deemed "enemy combatants," but in the two years since its introduction it has been used against natural-born citizens as well--e.g. John Walker Lindh. Apparently, all it takes is to be labeled a "suspected terrorist" by the DOJ, and your constitutional rights are voided.

Source:

(1) http://archive.aclu.org/congress/l112801a.html


***



Abortion

One is loath to enter the abortion debate--the emotions generated are too far beyond rational argument to make the exercise meaningful. No one who is firmly committed to one side or the other will yield an inch. The debate masquerades as a religious or philosophical disagreement. There are certainly religious overtones and justifications, but I contend it is something other than religion that compels a "man of God" to pick up a shotgun and murder a doctor and his bodyguard as they arrive at an abortion clinic (e.g. Rev. Paul Hill); and philosophical disagreements do not usually involve the planting of high explosives (e.g. Eric Rudolph). Some other psychological force is at work here, a force that manifests itself as terrorism (which also has religious justifications). Let me give an example: Islamic radicals rail against the "immorality" of the West--the United States in particular--yet think nothing of slaughtering innocent people, including women and children. Similarly, those who vehemently oppose abortion, even to the point of violence, belong to the same conservative bent that supports cutting welfare programs, government assistance to the needy, and so on. While defending the "unborn" with raised fists, knives, and firearms, they show a curious disregard for the born--i.e. the children of impoverished mothers. Obviously, the debate has more to do with the self-perception of the protester than with the object of protest.

Having noted the secondary importance of the religious/philosophical argument, we should, nevertheless, briefly consider its form. The question is not so much "when does life begin" as "when does one become a complete and independent human being"--and as such come under the full protection of law. Obviously, life begins at conception and must be carefully considered from that point. But a complete and independent human being is, by definition, one that is born--a separate entity from its mother. Although a premature baby can be removed from the womb and survive, its status as an independent human being depends on separation from the mother. The pro-life movement insists, however, that not only does life begin at conception, so does the individual's status as an independent human being--with citizenship, equal rights, civil liberties, the whole enchilada. If that is the extent of one's religious faith or philosophy, so be it. But there are innumerable difficulties that arise from equating unborn fetuses with born human beings, not the least of which concern the rights of the mother. Personally, I do not regard an unborn fetus as "fully human" until the moment of birth--when the umbilical is cut and the child draws its first breath. Before that point, abortion is a medical procedure. Thus, I see no problem with the government funding abortions for poor women--so long as it is a legal procedure. Roe v. Wade established its legality; the moral arguments against it are a complete distraction so far as I'm concerned.

On the basis of the personal conviction stated above (my own, that is), I have to say that legislation regarding reproductive technology is both moralistic and paternalistic--and an issue the government should steer away from. The government's legitimate concern begins once an individual is born, but not before. In communist China, for instance, the government adopted a policy of no more than one child per family; those who ignored the policy were subject to sanctions. Should we have similar laws in the United States? I would not deny anyone their religious persuasion (so long as it is not imposed upon me), and there is a certain religious bias in favor of large families. My mother is from a large family (six brothers and sisters), and those I've met from similar families have attracted my admiration and a bit of envy. As for cloning, that is another non-issue. Nature has been cloning for billions of years, to good effect. Human clones occur naturally--that's what identical twins are, after all. They have the very same DNA. But even though identical twins--with identical DNA--may be regarded as the "same person" in a technical sense, that equivalence abruptly comes to an end once they are born. Differing experiences and social distinctions, as well as legal distinctions (no matter how identical they may be, from a legal standpoint they are separate individuals and could just as well be unrelated), sunder them into unique entities. Thus, even if you could clone 100 identical copies of, say, Paul Trible, each would develop into a unique and particular human being--just as identical twins do. There's no difference at all, so what business is it of government?

The crux of these issues--abortion and human cloning--is this: should human beings emulate, manipulate, or in any way circumvent nature? Both things occur naturally, without human intervention. For example, miscarriages and stillbirths--which happen frequently--are natural forms of abortion; and, as stated, monozygotic twins are naturally occurring clones. There is some cloning in the plant and animal worlds as well. The fact that human beings "play God" by manipulating nature's methods is nothing new. After all, what is agriculture? What is aviation? What is chemistry? These are all examples of man emulating what nature already does on its own. About 100 years ago there was a great deal of opposition to the birth of aviation: "If man were meant to fly he'd have wings" went the saying. Before that there was even more opposition (religiously based) against science. Although the development of agriculture occurred in pre-historic times (i.e. before the invention of writing), one can imagine that some must have regarded it as a threat to traditional hunter-gatherer cultures. In short, there is no advance of technology or science unaccompanied by controversy or outright hostility.

***


Ethics in Science

I have often pondered the enormous popularity of science fiction as a literary genre—what is the source of its appeal? It is only occasionally good fiction, and much of it is distressingly hackneyed. Nevertheless, I cut my literary teeth on books by H.G. Wells, Isaac Asimov, Ray Bradbury, and Robert A. Heinlein. Its point of fascination must be the emphasis on science, applied technology, and futurism. In short, science is interesting. A cursory reading of history will show that it is science, more than anything else, that has defined the world we know: spaceflight, computer technology, global communications, nuclear power—all are made possible by science. Yet we still do not live in a technological paradise such as that depicted in the mythos of Star Trek. Why is that? It is because science is subject to misuse. As the machine becomes more and more important, human beings themselves seem to shrink to insignificance. It is as if we are mere caretakers (and poor ones at that).

Thus emerges someone like the Unabomber—thinking to single-handedly halt what he sees as a rush toward environmental Armageddon. He is obviously a disturbed individual, but should we perhaps read his manifesto anyway? Aren’t there at least some who agree with his premise? Nowadays, for example, people fear the so-called “millennium bug” as if it were the Apocalypse. Even Pat Robertson of CBN has gotten in on the act, predicting nothing short of global catastrophe. The controversy over nuclear power seems to have subsided in recent years, but the weapons are still with us. What if one should fall into the hands of terrorists? What if some self-appointed prophet should think it his divine mission to liquidate half a city, half a nation, or half the world? Whom do we have to thank for these and other dilemmas? Again, modern science.

In many ways, science occupies a place in society that was once filled by religion, and though the two seem to be completely at odds, there are striking similarities between them. Both offer a distinct worldview. Both explain the origin of the universe and project its long-term fate. Professional scientists are almost like clergy, operating in realms inaccessible to the layperson. Scientific journals function much like holy scripture, and the giants of bygone centuries (Galileo, Copernicus, Newton, etc.) like saints. Some of these men even suffered persecution for their “faith”—like Galileo before the Inquisition in 1615. Science has its own Holy Grail-type obsessions, too—the Human Genome Project, for instance, or the search for a Grand Unified Theory (GUT) in physics. Is all this mere coincidence? Although we are long accustomed to thinking of religion and science as mutually inimical, they share a common purpose, pursued by contrasting means: 1) the removal of human ignorance, and 2) the overcoming of a state of discord with some desired natural order.


What Is Science?

According to physicist Morris Shamos, science is “our formal contact with nature, our window on the universe, so to speak. It is a very special way that humans have devised for looking at ordinary things and trying to understand them” (1995, p.46). It is “special” because our traditional ways of explaining natural phenomena are inadequate. After all, thunder and lightning do not result from Zeus hurling thunderbolts or God “moving His furniture around.” Science is the art of accurate description, of extending our senses through instrumentation, but more importantly, it is “the design of conceptual schemes, models, and theories that serve to account for major segments of our experience with nature” (ibid).

Science cannot accept a supernatural or magical explanation for anything, which, unfortunately, cuts against the grain of religion. Experience shows that nature is orderly, that under the same conditions the same phenomena are likely to occur. There is an element of predictability. For example, the sun always rises in the East, table salt is always a compound of sodium and chlorine, and apple seeds always produce apple trees, never some other kind of tree. Science is thus a search for verifiable truth (ibid). For this reason, truth tests become important, and one such test involves the concept of falsifiability. In other words, for a premise to be scientific, one must be able to prove it wrong. The statement “there is no life on other planets” can be proven wrong—the discovery of extra-terrestrial life would accomplish that. However, the statement “life exists on other worlds” is not quite scientific because although one can prove it correct, it can never be proven wrong (if one probed the galaxy and found no life anywhere, that does not preclude the possibility of it being found somewhere else).

Why is it insufficient merely to prove a theory “correct”? Because obtaining the same result from an experiment 100 times does not mean that a contradictory result will not be observed should one perform it a 101st time—repetition only increases the probability that a theory is correct. Both tests (verification and falsification) are essential for verifying scientific truth, and they are the indispensable tools for exposing what can be called pseudoscience: astrology, flying saucers, alien abductions, and the like. Interestingly enough, two of the most cherished and widely accepted scientific theories cannot be regarded as absolutes due to the non-applicability of both truth tests—namely, the atomic theory of matter and the theory of evolution (ibid). In these matters, one could say that the jury is still out. It is important to understand these rigorous (and often inconvenient) tests, because failure to adhere to the rules results in embarrassing mistakes.
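
To put a rough number on that intuition, here is a minimal sketch of my own (not drawn from Shamos) using Laplace’s rule of succession, which assumes a uniform prior over the unknown chance of a contradictory outcome: after n trials that all agree, the probability that the next trial disagrees works out to 1/(n + 2).

def chance_of_contradiction(n):
    # Laplace's rule of succession under a uniform prior: after n agreeing
    # trials, the probability that trial n + 1 disagrees is 1 / (n + 2).
    return 1.0 / (n + 2)

for n in (10, 100, 1000):
    print(f"after {n:>4} confirmations: {chance_of_contradiction(n):.3%} chance the next trial disagrees")

# Even after 100 identical results the figure is still about 0.98%: each
# repetition raises the probability of correctness but never delivers certainty.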

Regarding ethics, the first and foremost consideration is one of integrity: science must be true to itself. Here, a distinction can be made between “good science” and “bad science.” The former, building on its noble foundations in classical Greece, duly adheres to its own restrictions, does not jump to hasty conclusions, and offers only theories that are of the highest order of probability. Thus, Isaac Newton, who discovered the law of universal gravitation, waited some twenty years to publish his findings (he wanted to make sure there were no mistakes in his calculations). However, in this age of publicity-seeking and Nobel Prize-coveting, the competition among scientists to be “the first” to make some breakthrough discovery is so rabid that inexcusable ethical lapses occur.


Good Science / Bad Science

Consider the story of cold fusion—one of the more embarrassing moments in modern physics. On March 23, 1989, two chemists at the University of Utah announced that they had achieved the impossible—cold fusion, or nuclear fusion in a test tube. If true, this would certainly have been the discovery of the century. Science writer Gary Taubes (1993, p.xviii) puts it this way:

It was considered the energy source that would save humankind: the mechanism that powers the sun and stars, harnessed to provide limitless amounts of electricity. Since shortly after the Second World War, physicists had worked to induce, tame, and sustain fusion reactions by re-creating the hellish heat and pressure at the center of the sun in a controlled setting. The conventional wisdom was that sustained nuclear fusion could only be achieved in the laboratory with enough heat—tens of millions of degrees—and extraordinary technological wizardry.

There are, of course, two ways to release nuclear energy. Fission involves splitting the atomic nucleus and setting up a chain reaction, usually in a dense radioactive metal such as uranium or plutonium. Fusion is the forcible joining of two nuclei under extreme conditions—such as those found at the centers of stars. Albert Einstein’s equation E = mc² states that matter can be converted to energy, and vice versa; but in nature fusion reactions occur only when heat and pressure have reached critical levels. At the center of our sun, for example, the temperature is an estimated 15.5 million kelvins and the density, under billions of tons of pressure, about 160 grams per cubic centimeter. By comparison, the Earth’s mean density is only 5.52 grams per cubic centimeter (Kaufmann, 1985, pp.158,335). Fusion releases far more energy, gram for gram, than fission, making the latter seem insignificant by comparison. To make this point clear, one should realize that a hydrogen (fusion) bomb uses an atomic (fission) bomb as its detonator; it is like comparing a stick of dynamite to a firecracker. Thus, while stable fission reactors are feasible for domestic use, fusion reactors remain beyond the reach of current technology. The March 23rd press conference announcing fusion reactions at room temperature, understandably, sent shock waves of excitement and disbelief through the scientific community.
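
To give a sense of the scale implied by E = mc², here is a back-of-the-envelope sketch of my own (not taken from Kaufmann or Taubes) computing the energy equivalent of a single gram of matter:

# Rough illustration of E = m * c**2 for one gram of matter (illustrative only).
c = 2.998e8      # speed of light in meters per second
m = 1.0e-3       # one gram, expressed in kilograms

energy_joules = m * c ** 2
print(f"{energy_joules:.2e} joules")   # about 9.0e13 J

# For scale: one kiloton of TNT is roughly 4.184e12 J, so complete conversion
# of a single gram would release the energy of about 21 kilotons of TNT.

Actual fission and fusion reactions convert only a small fraction of the fuel’s mass into energy, but the same equation governs both.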

The experiment at the University of Utah, which was conducted “for the fun of it” (qtd. in Taubes, p.4), was the brainchild of Stanley Pons and Martin Fleischmann. An electrode—a solid block of palladium suspended on a wire—had been submerged in a large beaker filled with heavy water and lithium. An electric current was passed between the palladium electrode and a platinum one. The apparatus was continuously charged for seven months, until “one fateful evening when young Joey Pons, who had been running the experiments for his father, lowered the current and left for the night” (ibid). A meltdown of some sort occurred during the night. No one witnessed it. From this point there were varying reports of the result: some said that half the palladium cube had dissolved, others that the entire apparatus had been destroyed, still others that an enormous hole had been blown through several feet of concrete (ibid). There were also conflicting reports about radiation levels (if fusion had indeed occurred, lethal doses of radiation should have been released). To make matters worse, there was virtually no data accompanying the experiment.

None of this violated the basic rules of science, however. It was by publicizing the (unconfirmed) results of the experiment, with little hope of verifying that cold fusion had actually taken place, that the scientists crossed the line. They had submitted a proposal to the Department of Energy (DOE) to request funding. The DOE sent the proposal for review to physicist Steven Jones at Brigham Young University, who promptly tried to duplicate the experiment. When rumors began to circulate that Jones was planning to call a press conference, University of Utah president Chase Peterson hastily scheduled his own, and the discovery of cold fusion was announced to the world. In just a few short years, the reporters were confidently assured, commercial fusion reactors using this technology would be built—reactors fueled by ordinary seawater (ibid).

Needless to say, this extravagant promise was never fulfilled. Cold fusion power plants running on seawater have never materialized. What is the reason? Simply put, cold fusion had never actually occurred, and there was no real scientific breakthrough at all. As the months went by, every independent attempt to repeat the results of the original experiment failed miserably. According to Taubes: “Cold fusion—as defined by Stanley Pons and Martin Fleischmann, or Steve Jones… or whomever—did not exist. It never had. There was at least as much empirical evidence, if not more, to support the existence of any number of pseudoscientific phenomena, from flying saucers to astrology” (ibid).

This whole fiasco was an example of bad science—a failure to follow the rules, compounded by the ego-driven motives of would-be Einsteins. But an even more astounding collapse of ethics occurred in the early part of the twentieth century, one that involved not just sloppiness but outright chicanery: the Piltdown hoax.

In 1911 lawyer and amateur paleontologist Charles Dawson unearthed a most unusual specimen at Piltdown, in southern England. It consisted of what appeared to be a human skull with a decidedly apelike jaw. Several non-human teeth were also found. It was an amazing discovery because although hominid (pre-human) remains had been found in such places as Java, China, Africa, and continental Europe, very little had been found in England. The creature was given the scientific name Eoanthropus dawsoni (“Dawson’s dawn man”) and was thought to have lived about two million years ago. The press dubbed it “the First Englishman” (although why any self-respecting Englishman would want to claim descent from a half-ape is beyond me). From the beginning, though, Piltdown Man had its critics. According to researcher John Evangelist Walsh:

The Piltdown mandible (jaw), especially, precipitated loud disagreement, as it had from the first. A jaw so thoroughly apelike, critics insisted, simply did not belong with a cranium (brain case) so undeniably human. Piltdown, it was charged, had been mistakenly manufactured from two separate creatures, a fossil man and fossil ape: the remains of the two just happened to come together in the ground, a freakish prank of nature. Combining them only created a monstrosity that never in fact existed. (1996, p.6)

In the fossilized record of hominid development it was clear that cranium and mandible evolved together—that as the brain case increased in size, the jaw became less and less apelike.

But the Piltdown Man had staunch defenders also, respected and reputable men like Sir Arthur Smith Woodward of the Natural History Museum in London, and physician/author Sir Arthur Conan Doyle. They argued that the odds of the fossils being deposited separately, from two different animals, were astronomical—they had to have come from the same creature. Nevertheless, the debate raged on for some forty years.

As more and more was learned of hominid evolution, the Piltdown fossils posed more of a puzzle. They did not seem to fit in with the rest of the slowly emerging picture. Piltdown Man was eventually regarded as an exceedingly strange evolutionary dead-end. In the end, however, it was the rigorous tests of developing science that revealed the truth.

In 1949 the new technique of fluorine testing was applied to all the available Piltdown artifacts. During the slow process of fossilization, trace amounts of fluorine are absorbed by the bones from the soil. The relative amounts of that substance in a fossil can give a rough estimate of its age. If different ages for the cranium and jawbone could be established, it would prove that they came from two separate creatures. As it turned out, the fluorine content was the same for all the artifacts, but, unexpectedly, they were revealed to be of much more recent origin than anyone imagined—they were no more than 50,000 years old (ibid). This was puzzling because at that date, anatomically modern humans were widespread on earth. Piltdown Man was “not anywhere near a ‘dawn man,’ let alone a missing link. He was a shocking anachronism, an impossible survival out of a dim and far distant past" (ibid).

Finally, in 1953 a painstaking examination by anthropologist Joseph Weiner confirmed the skeptics’ suspicions. The teeth were primate in origin but had been skillfully filed down to resemble human wear patterns. All the fossils had been chemically treated to give the appearance of great antiquity. The jawbone was positively identified as that of an orangutan, the cranium as that of a modern human. A more refined radiocarbon dating technique revealed that the skull was about 620 years old, the jawbone slightly younger. Thus, the Piltdown Man was a forgery, a hoax, and “the most famous creature ever to grace the prehistoric scene, had been ingeniously manufactured from a medieval Englishman and a Far-Eastern ape” (ibid).
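
For readers curious how a radiocarbon date like “about 620 years” is obtained, here is a minimal sketch of my own (the laboratory procedure is, of course, far more involved) using the standard decay law and the conventional carbon-14 half-life of 5,730 years:

import math

HALF_LIFE_C14 = 5730.0   # conventional carbon-14 half-life, in years

def radiocarbon_age(fraction_remaining):
    # Standard decay law: t = -(half-life / ln 2) * ln(fraction of C-14 remaining).
    return -(HALF_LIFE_C14 / math.log(2)) * math.log(fraction_remaining)

# A sample retaining about 92.8% of its original carbon-14 dates to roughly 620 years.
print(round(radiocarbon_age(0.928)))   # about 618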

Blame for the Piltdown forgery has never been conclusively assigned, though most suspect Dawson himself. Since Dawson died in 1916, we will never know for sure; but in this case “good science” triumphed over an unscrupulous attempt to muddy the waters of research and perhaps discredit the whole field of anthropology.



Ethical Considerations

Obviously, it is essential for science to maintain its own integrity, but does it have an obligation to the larger human community? If so, to whom should it answer—the government? Industry? Whoever provides its funding? In this regard, science shares another role with religion: it is, by nature, an independent force. Just as no government can mandate belief in God or define an acceptable theology (at least not from an American point of view), none can alter the nature of science. The laws of physics, thermodynamics, mathematics, and so on are not subject to legislation.

Until recently, scientists were usually content to pursue their work, taking no thought for what was done with their discoveries. That mindset changed during World War II, however, as physicists edged closer and closer to the development of nuclear power. Speculation about such power began as early as 1903, when atomic structure was first being understood. As more and more was learned, it gradually became not a theoretical problem but one of technological capability. The most significant progress was made in Germany, France, and England during the 1930s.

In 1933—the year Hitler came to power—physicist Leo Szilard conceived of the nuclear chain reaction and patented the idea in England, but according to Einstein biographer Ronald W. Clark, the British War Office was “not interested” (1971, p.664). Meanwhile, Enrico Fermi, later a refugee from fascist Italy, was conducting experiments with uranium. These experiments were repeated at the Kaiser Wilhelm Institute in Berlin by Lise Meitner, Otto Hahn, and Fritz Strassmann, and were simultaneously performed in Paris by Irène and Frédéric Joliot-Curie (ibid). When Hitler annexed Austria, Meitner and many other Jewish scientists fled for their lives (Albert Einstein had already left Germany for America). When Danish physicist Niels Bohr flew to America to attend the Fifth Washington Conference on Theoretical Physics, he caused a mild sensation in the scientific community with news of the Berlin and Paris experiments. In 1939 the dean of graduate faculties at Columbia University wrote to Admiral Hooper of the United States Navy, warning him of “the possibility that uranium might be used as an explosive that would liberate a million times as much energy per pound as any known explosive” (qtd. in Clark, p.666). Similar warnings were being given in Holland, France, Belgium, and England, and these countries began scrambling to obtain stockpiles of uranium. All of this activity preceded the famous letter to President Roosevelt, signed by Einstein, which eventually resulted in the top-secret Manhattan Project.

Apparently, the scientific community was anxious to keep the secret of nuclear power out of Hitler’s hands, but this raises ethical questions: should science really care who benefits from its research? Why shouldn’t the results simply go to the highest bidder? In too many cases they do, but the story of nuclear energy, described above, shows why science cannot afford to be careless. Whatever one may think about nuclear weapons—dreadful though they are—no one will disagree that keeping them out of Hitler’s arsenal was the right thing to do.

After the development of the Bomb, Albert Einstein—perhaps the century’s most important scientific figure—headed a committee of concerned scientists who wrestled with their ethical obligations. Clearly, science has a moral obligation to serve humanity and promote the betterment of the world. It must rise above nationalistic, ideological, and economic constraints.

Again, science fulfills a role comparable to that of religion or philosophy, and as it marches forward into the realm of the purely theoretical, it encroaches upon territory once reserved for mystics and dreamers. In his book The Edges of Science, physicist Richard Morris writes:

… there are some scientific fields in which the frontiers have been pushed so far forward that scientists have found themselves asking questions that have always been considered to be metaphysical, not scientific, in nature. Nobel-prize winning physicists have been so taken aback by some of their colleague’s speculation that they call some of the new theories nonsense, or even compare them to exercises in medieval theology. (1990, p.x)

As modern science edges closer and closer to religion, religion must recognize its need for science. That is because, nowadays, men of intellect and rational thought cannot accept reliance upon magic and the supernatural. Such an incongruity causes many to reject faith in God altogether. But if God did create the universe, it seems likely that He did so not through magic, but through the very physical, chemical, and biological laws that science endeavors to explain. Once it is understood that God is a God of science—not magic—another ethical dimension emerges. Science is not the enemy of religion or of faith; it should be religion’s principal partner in building a viable future for all.


References

Clark, R.W. (1971). Einstein: The Life and Times. New York: Avon.

Kaufmann, W. J. (1985). Universe. New York: Freeman.

Morris, R. (1990). The Edges of Science: Crossing the Boundary from Physics to Metaphysics. New York: Prentice Hall.

Shamos, M.H. (1995). The Myth of Scientific Literacy. New Brunswick: Rutgers.

Taubes, G. (1993). Bad Science: The Short Life and Weird Times of Cold Fusion. New York: Random House.

Walsh, J.E. (1996). Unraveling Piltdown: The Science Fraud of the Century and its Solution. New York: Random House.