Distilling Science from Philosophy: Aristotle, Galileo, and Newton

I must admit the study of mechanics is a dreary science, but it is nonetheless fundamental. Take aerodynamics as an example. How could one ever succeed in flying an airplane, launching a missile, putting a spacecraft in orbit, or sending it to land safely on the moon, without a thorough understanding of motion, acceleration, inertia, gravity, and all associated forces? For this, science owes a debt of gratitude to the great minds who first explored these areas: Aristotle, Galileo, and Newton, just to name three. To this list dozens of other names might be added, but for our purposes we shall focus on these three and the chain of thought, beginning more than twenty-five centuries ago, that guided man's understanding from primitive superstition to modern physics.
The struggle between our supernatural preoccupations and a more logical, rational way of thinking is perhaps as old as the human race itself. For most of that time, supernaturalism had the greatest credence. The sun shone, the rain fell, the earth shook, all by the will of various gods (or one God). Yet even the ancients knew of certain patterns--the inevitable cycle of day and night, regular motions of stars and planets, four orderly seasons, and so on. Aristotle was not the first to apply logic to explaining the natural world--in fact, he was one of the last in the pre-Christian era to do so. Contained in the most important of his preserved writings--Physics, Metaphysics, Ethics, Poetics, Categories, and many others--is the most profound synthesis of human knowledge assembled to that time. He was heir to a great body of research, from the pre-Socratics beginning with Thales of Miletus, who successfully predicted a solar eclipse in 585 B.C., to Plato, at whose academy he studied for twenty years.
Aristotle, no doubt, annoyed his teacher Plato with his youthful brilliance and open disagreement with some cherished Platonic precepts. Plato, according to Aristotle, was too immersed in the metaphysical to adequately deal with the real physical universe. For example, when addressing the question of "what is the ideal state?", Plato waxed eloquent in the Republic. Aristotle, on the other hand, studied 158 actual constitutions to determine which of them worked the best (Mitchell, 1996, p.363). In grappling with the physical workings of the universe, Plato regarded this world as a mere "copy" of an immaterial world of ideal forms. Between the two was a "demiurge" translating perfect ideas into imperfect, tangible beings. Aristotle dismissed this notion altogether and looked for answers solely in the material world. Here, we see science beginning to separate itself from philosophy. According to Jefferson Hane Weaver:
Aristotle was less concerned than Plato about the relationship between the ideal world and the real world. One of his greatest contributions to science may have been his efforts to systematically classify plants and animals. He also sought to discover the primary forces of nature. According to Aristotle, the universe is balanced by opposite movements of the four elements. Air and fire move upward naturally while earth and water move downward, thereby preserving a general equilibrium. Aristotle used a fifth element, ether, to explain the movements of stars, which he supposed consisted of ether. (p.288)
His common sense approach, although later proven inaccurate, began the process of divorcing pure science from metaphysical speculation. How far Aristotle might have gone had he been born in a later century is a tantalizing thought, for he was severely limited by a lack of technology. Durant points out, "See, here, how inventions make history: for lack of a telescope Aristotle's astronomy is a tissue of childish romance; for lack of a microscope his biology wanders endlessly astray" (1961, p.55).
In light of later discoveries it is too easy to dismiss him, yet Aristotle's writings, once approved by Catholic authorities, assumed a status comparable to Holy Scripture. Though the rise of Christianity reasserted the dominance of supernaturalism during the Middle Ages, Aristotle continued to be studied and adhered to, despite unexplainable facts and discrepancies. It was not until the Renaissance that "natural philosophy" began to break free of Aristotelian limits. This made it possible for a pioneer like Galileo to make tremendous strides in the study of motion and astronomy.
No scientist works in a vacuum; thus Galileo's work was preceded by a thorough familiarization with the theories of Copernicus, who maintained that the sun, rather than the earth, was at the center of planetary motion, and Kepler, who demonstrated that planetary orbits were elliptical rather than perfect circles around the sun. The Church, of course, was unwilling to admit to any of this.
Galileo, a professor of mathematics at the University of Pisa, taught courses in astronomy to medical students, comparing Copernican and Ptolemaic cosmologies (Weaver, pp.424-25). In early experiments with pendulums he arrived at a primitive law of energy conservation, claiming that without an impeding force, the pendulum would continue to rise and fall forever. And despite Aristotle's claim that any moving object would naturally come to rest, Galileo held that in the absence of friction, a moving body would continue its path in a straight line indefinitely. He was the first to measure acceleration rates of spheres down an incline and actually believed in a simple theory of relativity--a notion that Newton later rejected. In short, Galileo was instrumental in overturning almost all of Aristotle's time-worn assumptions. He introduced mathematics into physics and helped separate science from philosophy (Weaver, p.453). Later in life, when he turned his telescope toward the night sky, Galileo observed mountain ranges on the moon, phases of the planet Venus, and moons orbiting Jupiter. Because he firmly held to the Copernican view of heliocentricity, he was warned by the Inquisition in 1616 to abandon that doctrine; after he published his Dialogue defending the Copernican system in 1632, he was tried by the Inquisition in 1633, at which time he recanted. Galileo spent his remaining years under house arrest, barred from further astronomical advocacy, although he managed to complete one last book, on mechanics, in 1638. Obviously, supernaturalism was not quite ready to surrender to the truth.
It was Isaac Newton, born in 1642--the same year Galileo died--who finally nailed the coffin shut on the occult in science. Beginning his work where Kepler and Galileo left off, Newton created a cosmology that worked without the need of constant Divine Intervention. He was a sickly child, an inept farmer, and, later, an undistinguished college student. But when he retreated to his family's farm in 1665, after Cambridge University closed its doors during the Great Plague, he "[i]n a single year... (1) discovered the binomial theorem; (2) discovered the basic principles of the differential and integral calculus; (3) worked out the theory of gravitation; and (4) discovered the spectrum..." (Weaver, p.484).
By this time the scientific world was ready for a revolution, and Newton, elected a fellow of Trinity College in 1667, rose rapidly to prominence. Eventually, what became known as "Newtonian physics" left its indelible mark on science and remained unchallenged for more than 200 years. Newton was criticized by some for refusing to insert causality in his system, preferring instead to identify fundamental rules of order, but old patterns of thought were slowly giving way to the new. It wasn't until 1687 that he finally published, in Latin, the summation of his life's work, the Mathematical Principles of Natural Philosophy. This remains one of the most important scientific works ever written.
Thus, classical physics reigned unchallenged and nearly unquestioned until the 20th century, providing the basis of the "clockwork universe," a view that had theological as well as philosophical implications. The 18th century Deists, for example (American President Thomas Jefferson was a Deist), held to the Newtonian cosmos, limiting God's role to that of a "watchmaker" who, having created the universe, immediately withdrew to allow it to run by itself without further input. This left men free to pursue their own destinies.
Isaac Newton himself, however, remained an enigma. He never married, was obsessed with alchemy and the occult, and was tormented by religious questions till the end of his life. He might have been comforted somewhat by such modern discoveries as relativity and quantum mechanics, which show that at the sub-atomic level, as well as the macrocosmic, reality does not operate according to Newton's laws. Perhaps God did not withdraw after all.
References

Durant, W. (1961). The Story of Philosophy. New York: Washington Square.
Mitchell, H. (1996). Roots of Wisdom. Albany: ITP.
Weaver, J.H. (1987). The World of Physics. New York: Simon and Schuster.
***
Black Holes, Dark Matter, and the Great Attractor

When Albert Einstein published his theory of General Relativity in 1915 it was considered the end of an era--the era of "Newtonian" or classical physics. The elegance and simplicity of Newton's law of universal gravitation, once thought an absolute, could not explain certain phenomena under specialized conditions. But there was no need to relegate Newtonian physics to the same curio shop as, say, Ptolemaic astronomy. In the realm of ordinary experience, dealing with quantities that require measurement for practical purposes, the old theory of gravity worked perfectly well. In the Newtonian universe there were "objects" containing mass, and "forces" which acted upon the objects. Time and space were two distinct parameters within which, and through which, all measurements were made. Thus, gravity was a force that acted between all particles of matter possessing mass. Where that "force" came from or how it was generated classical physics could not say, but its observed effects could be calculated. For example, determining the mass of the sun was a straightforward problem easily solved. Obviously, one could not put the sun on a scale and weigh it, but its mass could be determined indirectly by measuring the orbital speed of the planets. If the sun were more massive than it is, the planets would travel at a higher rate of speed; if it were less massive, their motions would be much slower (Morris, 1990, p.97). As it turns out, the sun's mass is roughly 333,000 times that of the earth.
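The calculation itself is short. A minimal sketch, balancing gravitational pull against centripetal force (the constants are standard modern values, not Morris's figures):

    from math import pi

    # Estimate the sun's mass from the earth's orbital motion: M = v^2 * r / G.
    G = 6.674e-11               # gravitational constant, m^3 kg^-1 s^-2
    r = 1.496e11                # earth-sun distance, m (one astronomical unit)
    year = 365.25 * 24 * 3600   # seconds in a year

    v = 2 * pi * r / year       # earth's orbital speed, ~29,800 m/s
    sun_mass = v**2 * r / G     # gravity supplies the centripetal force

    print(f"sun: {sun_mass:.2e} kg, about {sun_mass / 5.972e24:,.0f} earth masses")

One measured speed and one measured distance suffice to weigh a star.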
General Relativity, of course, attributes the effects of gravity to a curvature of space and time in the vicinity of a massive body, so there is no "force" of gravity per se. To be sure, an apple falling from a tree or an asteroid hurtling toward the sun seems to be in the grip of a powerful force, but in reality these objects are merely following their natural trajectories--that is, they are following the path of least resistance. For example, a freight train traveling 100 mph due east makes a slow 90 degree turn and heads south. There is no "force" acting upon the train making it turn south (other than its own running force); it is simply following the course of the tracks. The curvature of space has a comparable effect on massive bodies. Not only is it possible to determine the velocities of nearby objects like planets, but those of distant stars can also be measured. One might wonder, how? Even through a powerful telescope stars do not appear to move, but remain in fixed positions night after night, year after year. In fact, it would take many human lifetimes to observe even the slightest shift of a star's position. Yet astronomers know they are all moving at high rates of speed and can assign them accurate velocities. How is it done? By analyzing the light these objects emit.
Although it was long known that light, when passed through a prism, produced a spectrum of colors, Isaac Newton was the first to demonstrate that white light was actually an admixture of colors (it had previously been assumed that the prism somehow added colors to the light). In the 1800s it became possible to analyze the chemical composition of a light source through a device called the spectroscope. Through spectral analysis, then, scientists could determine the hydrogen/helium ratio of a distant star (and thus the star's approximate age), traces of heavier elements in interstellar gas, and many other things. It is also possible to measure the velocity of a moving light source through a phenomenon known as the Doppler Effect. When a light-emitting object is approaching, its waves are shifted toward the blue end of the spectrum; when receding, they are shifted toward the red end. An object's relative redshift or blueshift thus indicates its velocity toward or away from the earth. Nor is such analysis limited to visible light, which is only a tiny fraction of the electromagnetic spectrum; all wavelengths from low-frequency radio waves to high-frequency gamma rays can be studied. Optical observation may represent the romantic aspect of astronomy, but radio telescopes provide the fullest range of data from which information about the cosmos can be gleaned. With this understanding, then, we can now briefly consider some of the more puzzling questions, and outright mysteries, confronting scientists.
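First, though, the Doppler relation deserves a concrete look; a minimal sketch, under the usual assumption that speeds are small compared with light (the wavelength figures are illustrative):

    # Radial velocity from Doppler shift: v ~ c * (observed - rest) / rest.
    c = 299_792.458                     # speed of light, km/s
    rest, observed = 656.28, 656.50     # hydrogen-alpha line, nm (illustrative)

    v = c * (observed - rest) / rest
    print(f"velocity: {v:+.0f} km/s (positive means receding)")

A shift of barely a fifth of a nanometer in a single spectral line thus betrays a velocity of about a hundred kilometers per second.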
Implicit in Einstein's General Relativity equations is the possibility of such exotic objects as black holes. After a star exhausts its hydrogen fuel through nuclear fusion, a series of changes takes place, the end result of which depends on the mass of the star. Low mass stars eject most of their outer layers, which remain as planetary nebulae. A high mass star dies violently in a supernova explosion, the remnant of which, if enough mass remains (about 3 solar masses), can collapse under its own gravity to form an object from which nothing, not even light, can ever escape. The escape velocity from the collapsed star exceeds the speed of light; thus it disappears from the universe, forming a black hole. It is not possible to observe a black hole directly, but its effects can be observed. Any matter in the vicinity of the object will be accelerated by the intense gravitational field. Hot interstellar gases, when accelerated, produce high energy radiation that can be detected. Thus, the x-ray source Cygnus X-1 is strongly suspected of being a black hole. And since what we experience as gravity is really the warping of space-time, that warping reaches an extreme in a black hole. According to Kaufmann (1985, p.446):
Inside a black hole, powerful gravity distorts the shape of space and time so severely that the directions of space and time become interchanged. In a limited sense, inside a black hole you can have freedom to move through time. It does you no good, however, because you lose a corresponding amount of freedom to move through space. Whether you like it or not, you are inexorably dragged from the event horizon [surface] to the singularity [core]. Just as no force in the universe can prevent the forward march of time (past to future) outside a black hole, no force in the universe can prevent the inward march of space (event horizon to singularity) inside a black hole.
Black holes are thought to reside at the centers of many, if not most, galaxies. There is strong evidence to suggest that one of these massive objects lurks at the center of our own Milky Way galaxy.
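The critical size at which all this occurs follows directly from the escape-velocity condition mentioned above; a minimal sketch:

    # Schwarzschild radius r = 2GM/c^2: the size below which a given mass
    # becomes a black hole (escape velocity reaches the speed of light).
    G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
    c = 2.998e8        # speed of light, m/s
    SUN = 1.989e30     # one solar mass, kg

    def schwarzschild_radius_km(mass_kg):
        return 2 * G * mass_kg / c**2 / 1000

    print(f"3 solar masses collapse inside {schwarzschild_radius_km(3 * SUN):.0f} km")

A three-solar-mass remnant, in other words, vanishes once it shrinks within a sphere only a few kilometers across.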
As observational techniques have become more refined, it has become possible to measure relative velocities of even the most distant objects in the universe--galaxies and galaxy clusters. In 1929 American astronomer Edwin Hubble discovered that nearly all galaxies showed a significant redshift, meaning that they were receding from the earth. Moreover, he realized that the redshift was proportional to a galaxy's distance from us--the farther away it was, the faster it was receding. The inescapable conclusion was that the universe as a whole was expanding, carrying the galaxies along with it (something that was also implied by Einstein's equations). This led to the Big Bang theory of the universe's origin. Most remaining debate over the theory was silenced in 1964, when physicists Arno Penzias and Robert Wilson discovered a background radiation of 2.7 kelvins prevalent throughout the universe. Only one explanation for this uniform microwave radiation has ever been widely accepted: it is a remnant of the Big Bang itself, thought to have occurred 15-20 billion years ago (Morris, p.38). Interestingly, by using the microwave background as a reference point, it is possible to measure the peculiar motions of bodies through space, separate and distinct from their general motions.
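The "general motion" is the Hubble flow, a simple proportionality between distance and recession velocity; a sketch using a present-day value of the Hubble constant (an assumption, not Morris's figure):

    # Hubble's law: v = H0 * d.
    H0 = 70.0   # Hubble constant, km/s per megaparsec (assumed modern value)

    for d_mpc in (10, 100, 1000):
        print(f"a galaxy {d_mpc:>4} Mpc away recedes at ~{H0 * d_mpc:,.0f} km/s")

Peculiar motion is whatever velocity remains after this uniform expansion is subtracted out.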
In 1977 it was discovered that the Milky Way galaxy, along with its small cluster--known as the Local Group--was moving at a speed of about 600 kilometers per second. According to Morris, "Astronomers soon concluded that the peculiar motion of the Local Group must be caused by the gravitational attraction of a concentration of mass that lay millions of light-years away and could have no other cause" (p.127). Scientists thus became aware of the Great Attractor. They did not have a clear idea of how far away this concentration of mass had to be, but when telescopes were aimed in that direction nothing could be seen. In 1987 a group of astrophysicists known as the Seven Samurai completed a five-year study which revealed that an enormous volume of the local universe, including two superclusters of galaxies, was streaming at high velocity toward the (as yet undiscovered) Great Attractor (Morris, p.129). Calculations of the mass required to attract entire galaxy clusters revealed that it must equal tens of thousands of galaxies and must lie at least 400 million light-years away. But what is the Great Attractor? Scientists do not know.
The fact is, the existence of large quantities of mass that cannot be seen is a problem scientists have been grappling with for more than fifty years. Dutch astronomer Jan Oort was the first to note the problem while studying the motions of stars in the disk of our own galaxy. A star would occasionally veer away from the galactic plane only to be yanked back in place by gravity. Yet there was not enough mass from observed sources (stars, interstellar gases, etc.) to produce the effect; the "missing" mass amounted to at least 50% of what should have been there. Eventually it was determined that as much as 90% of a galaxy's mass resides beyond the luminous disk of stars, in the form of some as yet unidentified dark matter. Dark matter makes it possible for galaxies to maintain their beautiful spirals (computer models indicate that the spiral structure, without the unseen mass, would dissipate after only a few million years), group together in clusters and superclusters, and form the long string-like filaments that are the large scale structural features of the universe. Indeed, this mysterious dark matter may be the predominant form of matter in the cosmos. Scientists do not know for sure what dark matter is, but a clue was recently found when it was discovered that the neutrino (a particle, previously thought massless, that far outnumbers ordinary protons, neutrons, and electrons) has a small mass after all. Though it is too early to tell, dark matter may indeed be a vast ocean of neutrinos. Thus, the mystery of dark matter remains one of the most intriguing questions in science.
References

Kaufmann, W.J. (1985). Universe. New York: W.H. Freeman.

Morris, R. (1990). The Edges of Science: Crossing the Boundary from Physics to Metaphysics. New York: Prentice Hall.
***
The Einstein-Bohr Debate: Does God Play Dice?

Imagine, if you will, the following philosophical dilemma: one flips a coin, knowing it will come up either heads or tails. What is the likelihood that it will come up heads (or tails) each and every time? The probability of such an occurrence shrinks rapidly as the number of trials increases—it is halved with every additional flip—so the more times the coin is flipped, the more minute the possibility that it will always be heads (or tails). It is not impossible, but improbable. Indeed, there is a whole realm of mathematics devoted to the study of probability and statistics, and there are laws to describe such things. In the case of a coin, for example, where there are only two possible outcomes, the laws of chance state that in the long run the coin will come up heads 50% of the time and tails 50% of the time, with some small margin of variance. Again, the degree of variance decreases as the number of trials increases. A similar thing applies in biology as well—in the determination of sex, for instance. Generally, there is a 50/50 split between male and female offspring in most species. Otherwise, the species would be at a reproductive disadvantage and in danger of extinction.
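Both claims, the vanishing probability of an unbroken streak and the convergence toward 50%, are easy to check; a minimal sketch:

    from random import random

    # Probability that n fair flips all come up heads: (1/2)^n.
    for n in (1, 10, 100):
        print(f"{n:>3} flips, all heads: {0.5 ** n:.3g}")

    # The long-run frequency converging on 50% as trials increase.
    for trials in (100, 10_000, 1_000_000):
        heads = sum(random() < 0.5 for _ in range(trials))
        print(f"{trials:>9,} flips: {100 * heads / trials:.2f}% heads")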
Unfortunately, the above-mentioned laws of chance fly in the face of what we now call classical, or Newtonian, physics. Isaac Newton, in his grand vision of universal order, set forth a series of postulates about the natural world that, at the time, seemed inviolable. His theory of universal gravitation provided the mathematical framework that made it possible to fully describe planetary motions and orbits—one of the all-time great achievements of science. It is important to point out, however, that mere theories do not science make. Theory must match experimental observations. That’s not to say there is no room for error, but there should be at least an acceptable degree of correspondence between theory and hard fact.
Whether it was intended to or not, Newton’s theories became the springboard of what we might call absolute determinism in nature. The universe was compared to a gigantic clock and God assigned the role of clock-maker. Once the clock was built and set in motion, the Creator’s role was finished and He presumably withdrew from the workaday concerns of running the universe. Thus, if it were possible to possess omniscience—to know all there is to know about every atom and every particle, and every force that acts upon them—then it would be possible to predict, with unerring certainty, whether the coin would wind up heads or tails on any given flip, just as one can use Newton’s laws to predict the position and/or velocity of a planet at any given moment. What appears to us as “chance” is an illusion of the senses—the result of our ignorance, our understandable lack of omniscience.
Such was the argument of Albert Einstein during the late 1920s in his celebrated debate with Danish physicist Niels Bohr. Einstein famously insisted that “God does not play dice,” but Bohr, in the end, won the debate. According to Bohr there is no absolute determinism. Even if all quantities were known with unlimited precision, there would still be an element of chance in physical processes. Although there is a high degree of predictability in science, even at the atomic level, it is because the probability factor is high, not because all things are somehow predetermined. Einstein, for philosophical, aesthetic, and even religious reasons, found Bohr’s position to be completely unacceptable.
The disagreement between Albert Einstein and Niels Bohr was understandable, since the two men were polar opposites. Both were major figures in the early 20th century revolution in physics, but there the similarity ends. Einstein was born March 14, 1879 in Ulm, Germany, the son of an engineer. According to Motz and Weaver (1989, p.72), "Albert [as a child] was slow to learn to speak and his propensity to daydream and ignore the world around him caused his parents to fear that he might be retarded." Although Jewish, he attended a Catholic school until the age of ten. Einstein's parents were completely secularized, so the religious orientation of the school was of no concern. At the Luitpold Gymnasium, however, Albert showed little interest in any subject but mathematics, and neglected his study of the classics. His academic performance was so poor and his distaste for authority so pronounced that he was expelled from the Gymnasium in 1894 (ibid). Unable to enroll in a university without a gymnasium certificate, he eventually attended the Federal Polytechnic Academy in Zurich, Switzerland, with the intent of becoming a teacher.
Einstein graduated in 1900, but his continued rebellious attitude toward authority--that is, toward Prussian-style regimentation--and his unwillingness to devote himself fully to academics made it impossible to secure employment as a teacher. Therefore, he accepted a job in 1902 as an examiner in the Swiss Patent Office in Bern. Einstein soon became a valued member of the staff, and during his seven-year stint with the patent office, he completed his dissertation and received a doctorate from the University of Zurich in 1905. That same year he published three papers in the Annalen der Physik: one explaining the cause of Brownian motion, one on the photoelectric effect, and one on the special theory of relativity. Any one of these papers would have established Einstein as a major figure in physics, though his ideas were slow to catch on. Taken together, the three papers represented a broadside against classical physics.
After 1909 Einstein held a series of professorships--at the University of Zurich, the German University of Prague, the Zurich Polytechnic, and the University of Berlin. All the while his theories were gaining greater and greater acceptance. In 1916 he published the general theory of relativity--essentially a theory of gravity--and in its aftermath became a world-famous celebrity. In 1921 Einstein received the Nobel Prize in physics for his work on the photoelectric effect. Why did he not receive the prize for his epoch-making theory of relativity? There was a little known clause in the will of Alfred Nobel that the award must go toward discoveries with practical applications. What practical use was there for knowledge of what happens to objects at or near the speed of light? Thus, the award was given for Einstein's work on the photoelectric effect. By the 1920s it was obvious that the world's most famous scientist had almost single-handedly overturned Newtonian physics.
Niels Bohr, in contrast, was as much a product of academia as Einstein was an anathema to it. Bohr was born October 7, 1885 in Copenhagen, Denmark, the son of a university professor. The Bohr home received a steady stream of distinguished visitors and young Niels would often sit and listen to the weekly conversations on topics ranging from politics to science, to art and religion. In a way, these conversations helped prepare him for the rigors of his formal education. Bohr was both an outstanding student and a good athlete who enjoyed soccer and sailing. He began his undergraduate work in 1903 at the University of Copenhagen where he excelled in all subjects, but especially in mathematics and physical sciences. There was never any doubt that he would pursue a career in science.
After receiving his doctorate in 1911, he worked with Ernest Rutherford in Manchester, England. By the time Bohr returned to Denmark in 1913, he had already finished the first of his papers on the quantum theory of the atom--work that was quietly revolutionizing atomic physics. Finding little interest in his work in Denmark, Bohr returned to England to serve as a reader in mathematical physics while continuing his own theoretical research. He moved back to Denmark in 1916, and in 1921 realized a long-cherished dream by founding the Institute for Theoretical Physics. Under Bohr's leadership, the Institute became one of the major centers of research in Europe (Motz & Weaver, p.198). Niels Bohr received the Nobel Prize in physics in 1922--the year after Einstein received his.
Bohr was an accomplished speaker, and his lectures were well attended. Speaking engagements took him to many foreign countries, including the United States. All the while, he probed the philosophical implications of modern physics, developing what came to be called the Copenhagen interpretation of quantum theory. In time, Bohr convinced nearly all leading physicists of the merits of his views, but much to his regret he was never able to convince Einstein, whom he admired greatly. It was truly ironic that Einstein himself, in his various scientific papers, had provided Bohr with the theoretical tools to make his own breakthrough discoveries. For example, it was Einstein, years earlier, who had suggested that light, long regarded as a pure wave, could display particle as well as wave properties.
Although we nowadays take for granted the image of the atom as a sort of miniature solar system, with electrons orbiting the nucleus just as planets orbit the sun, during the early part of the 20th century it was not at all clear just what an atom was. Scientists of the 19th century thought, like the ancient Greeks, that it must be a simple, featureless sphere. Rutherford proved that the atom had a dense central core, and it had earlier been shown that atoms contained electrons (identified by J.J. Thomson in 1897). The atom's chemical properties originated largely in its electrons, and each element had its own characteristic spectrum. But it was the attempt to account for the phenomenon of spectral lines (among other things) that led physicists to doubt the "planetary model" and seek an alternative in quantum mechanics. If one applied strict Newtonian principles to the atom, with the electrical force playing the role of gravity, then no atom could be stable. The electrons would radiate energy constantly, losing momentum, and would spiral into the nucleus. No element would have a specific spectrum, and all atoms would rapidly decay. Observations told otherwise, however: most atoms are stable, each with its own spectral "fingerprint." Other models had been proposed, such as Thomson's "raisin-pudding" atom, with electrons embedded in a diffuse, positively charged mass, but they fared no better against the evidence.
It was Niels Bohr who resurrected the planetary model and worked out the equations to account for the observed features of the hydrogen atom. Indeterminacy, however, was at the heart of his theory, and there was, as a consequence, no clear distinction between the observer and the observed. In other words, the mere attempt to measure the velocity or position of a quantum particle influenced the result--something classical physics could not tolerate. Einstein attributed such uncertainty to the clumsiness of technology--if more precise instruments could be developed, he believed, eventually it would be proven that a particle's velocity and position could both be ascertained, just as any larger object's could. Einstein thus rejected the central argument of quantum theory, saying, "Quantum mechanics is certainly imposing. But an inner voice tells me that it is not yet the real thing. The theory says a lot, but does not bring us any closer to the secret of the Old One. I, at any rate, am convinced that He does not throw dice" (qtd. in Clark, 1971, p.414).
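Whatever its philosophical costs, Bohr's model delivered the numbers. A minimal sketch of the kind of result it made possible, using the Rydberg formula for hydrogen's visible (Balmer) lines:

    # Balmer series: 1/wavelength = R * (1/2^2 - 1/n^2) for n = 3, 4, 5, ...
    R = 1.0968e7   # Rydberg constant for hydrogen, per meter

    for n in (3, 4, 5, 6):
        wavelength_nm = 1e9 / (R * (1 / 2**2 - 1 / n**2))
        print(f"n = {n}: {wavelength_nm:.1f} nm")

The first line, at about 656 nm, is the red hydrogen-alpha line--exactly where it is observed.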
Other scientists, such as Bohr, Heisenberg, and Born, accepted indeterminacy because, on that statistical basis, it was possible to account for observed phenomena in a coherent way. The implications of all this, nonetheless, were unsettling, even to the most adventurous mind. Quantum particles, far from being solid physical objects, could almost be seen as phantoms, existing everywhere at once or nowhere at all. The atom could be compared to a large insurance company that on the whole is stable and continuous. But when one gets down to the level of individual policy holders (quanta), random events make certainty and predictability a mere pipe-dream. Who can say when this policy holder dies, or that one has an accident?
Sadly, Albert Einstein made himself something of an anomaly in the scientific community due to his stubborn belief in some sort of ultimate causality. Although such causality may exist, it is beyond the reach of science. There is an intangible realm of mind and consciousness that is not reducible to physical processes. The God Einstein spoke of is not the God of Jewish and Christian theology (who is little more than a myth), but the God of Spinoza--a fundamental aspect of nature. Indeed, this whole idea that the universe sprang out of nothing, without cause or reason, and has developed since then simply on the basis of blind, random forces is patently absurd. You would have better luck convincing me that out of a roomful of monkeys, each equipped with a typewriter, one of them could, given enough time, eventually come up with something like Macbeth. It defies logic and borders on insanity. Thus, like Einstein I have little choice but to believe in the Old One. But there is no such thing as magic or the supernatural, and there is no need for constant Divine Intervention to make the universe work. In that respect, science carves a closer pathway to the truth than any religion or philosophy.
References

Clark, R. (1971). Einstein: The Life and Times. New York: Avon.
Motz, L. & Weaver, J.H. (1989). The Unfolding Universe: A Stellar Journey. New York: Plenum Press.
***
Measuring the Age of the Earth

The recent controversy over John Glenn's return to space has gotten me thinking about the concept of age. Much ado was made over the fact that he is seventy-seven years old, and some have been rather strident in their remarks. It was just a publicity stunt, they said--why send someone that old up on the shuttle? But it occurred to me that if Glenn had been, say, forty-seven, not a word of complaint would have been uttered. The sole reason for the criticism was the man's age, not that he wasn't qualified. It's just another example of how our present-day culture places no value on age; we even deem the elderly worthless. In other, more traditional cultures, seniors are held in high esteem. I say this not because of my own age (which I'm comfortable with), but because it doesn't seem right. I know a seventeen-year-old girl who thinks that anyone over 25 is a fossil. "But you're old..." I've heard her say to individuals still in their twenties. Is "old" necessarily bad? After all, think of the planet Earth--it is ancient beyond counting, yet we all cherish it. John Glenn reported having an emotional experience seeing the Earth from orbit once again; other astronauts have felt the same thing. How old is the Earth, anyway?
To all appearances the planet seems to be unchanging. The seasons, years, and centuries pass with scarcely an observable change. It could easily be assumed that the Earth has always existed, as it probably always will. Yet all human civilizations, from the very beginning, have had a creation myth--a specific explanation of how the world came to be. The Babylonians, for example, believed that in the beginning was chaos, or what they called Tiamat. Then the gods and goddesses were born and they shaped the formless void into heaven, earth, and ocean. Scholars believe that the Jewish people, who were held captive in Babylon during the sixth century B.C., adapted the Babylonian tale for their own use--including it in what eventually became the Old Testament. Anyway, there are striking similarities between the two accounts. All this points to a universal human need to discover our origins, which are lost in the mists of time. Most ancient peoples considered the Earth to be thousands of years old, but no one could say precisely how many thousands. The concept of a "million," even if it existed, would have been inconceivable.
According to Isaac Asimov, one of the first attempts to calculate the age of the Earth was made by Anglican bishop James Ussher (1581-1656). Ussher worked his way backward through the Bible, assigning probable dates to important events. The oldest firm date seemed to be that of the ascension of Saul as King of Israel, thought to have occurred about 1020 B.C. (1989, p.21). The conquest of Canaan under Joshua was given as 1451 to 1425 B.C., the Exodus from Egypt about 1491 B.C., and the arrival of Abraham in Canaan about 2126 B.C. Noah's flood was placed by Ussher in 2349 B.C. and the whole creation at about 4004 B.C.--exactly four thousand years before the birth of Christ (ibid). Therefore, it was commonly believed that the Earth could not be much more than 6000 years old--that is, if one rigorously held to the Biblical record.
In 1715 the English astronomer Edmond Halley (1656-1742) made the first attempt to measure the Earth's age scientifically, using the uniformitarian principle--i.e., the idea that change on Earth takes place slowly, over long periods of time. The method he devised was to calculate the rate of the ocean's salinization. Rivers, as they flow into the ocean, carry with them quantities of salt and other minerals. If one were to assume that the ocean was fresh water to begin with, and if one could determine the amount of salt deposited annually, it should be possible to measure the age of the ocean by analyzing how much salt is in seawater. After all, when water is evaporated by sunlight, all minerals are left behind. Seawater, it turns out, contains about 3.5% salt. Using these facts, Halley estimated the Earth's age to be 1,000 million years, which, according to Asimov, was "quite a respectable estimate for the first time around" (p.158). The religious argument against this figure, of course, was that God simply created the ocean with that much salt content.
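A crude sketch of this "salt clock," with round modern figures that are my assumptions, not Halley's or Asimov's; the answer depends entirely on the numbers fed in, which is why such estimates varied so widely:

    # Halley's salt-clock logic: ocean age ~ total dissolved salt / annual input.
    ocean_mass = 1.4e21      # kg of seawater (rough modern figure, assumed)
    salt_fraction = 0.035    # ~3.5% salt by weight
    annual_input = 3e12      # kg of salt carried in by rivers per year (assumed)

    age_years = ocean_mass * salt_fraction / annual_input
    print(f"implied ocean age: ~{age_years:.0e} years")

With these particular inputs the clock reads only tens of millions of years; the method's weakness, we now know, is that salt is also removed and recycled.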
Another way of calculating the planet's age was to measure sedimentation rates. Rivers, lakes, and oceans laid down layers of sludge, or sediment, which hardened into rock over time. Much of this rock contained the fossilized remains of extinct sea-creatures. Examining layer after layer of sedimentary rock was almost like looking into a time machine, since the more primitive organisms were found toward the bottom. It was difficult, however, to determine the rate of sedimentation, as some years may have been more turbulent than others weather-wise, thus producing varying amounts of sediment year by year. Still, geologists put forth estimates of the Earth as being at least 500 million years old, and that was good news to naturalist Charles Darwin. At that time (the mid 1800s), Darwin was formulating his theories of evolution through natural selection--a mechanism that required millions of years to work, if such evolution had in fact taken place.
But even before Darwin published his landmark book The Origin of Species in 1859, scientists were busy upsetting traditional notions of a 6000-year-old Earth by using incontrovertible physical laws. By the 1840s the First Law of Thermodynamics--the conservation of energy--was being formulated, and for the first time the source of the Sun's prodigious power began to be considered. Since nothing was yet known of nuclear energy, the only logical explanation, according to physicist Hermann von Helmholtz (1821-1894), was that the Sun, a vast ball of hydrogen, was slowly contracting, the gravitational energy of its infalling mass being steadily converted to light and heat. A contraction of only 1/2000th of the Sun's radius--hardly noticeable--could account for all the sunlight emitted since the dawn of civilization. However, the physicist Lord Kelvin (1824-1907) estimated that if contraction were the sole source of energy, the Sun's radius would have to have been the size of the Earth's orbit a mere 50 million years ago. Geologists and biologists, convinced that the Earth was far older than that, were dismayed, to say the least (Asimov, p.161). Some other power source had to be found for the Sun.
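Kelvin's figure can be reproduced in a few lines; a sketch of the contraction timescale, gravitational energy divided by the rate at which it is radiated away, using modern values for the constants:

    # Kelvin-Helmholtz timescale: t ~ G * M^2 / (R * L).
    G = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
    M = 1.989e30     # solar mass, kg
    R = 6.957e8      # solar radius, m
    L = 3.828e26     # solar luminosity, W

    t_years = G * M**2 / (R * L) / 3.156e7   # seconds -> years
    print(f"contraction could power the Sun for only ~{t_years / 1e6:.0f} million years")

The answer, a few tens of millions of years, is exactly the order of magnitude that so troubled the geologists.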
The solution came in 1896, when physicist Antoine Henri Becquerel (1852-1908) accidentally discovered that the element uranium gave off energetic radiation. Other radioactive substances were soon found, and although the rate at which energy was thus released was slow, the total yield, weight for weight, was far greater than that of ordinary coal-burning. Scientists began to suspect that this was the source of the Sun's power. In 1911 Ernest Rutherford (1871-1937) demonstrated that the atom, once thought to be a featureless ball, actually consisted of a tiny nucleus in which almost all the mass was concentrated, surrounded by an electron shell. It was further determined that radioactivity altered the makeup of the nucleus, and thus converted one element into another. An ordinary explosion (which is a violent release of energy) is chemical in nature--that is, a release of electron energy--and has no effect on the nucleus. A nuclear explosion, on the other hand, results from forces within the nucleus and is millions of times more powerful. It was nuclear energy, then, that allowed the Sun to shine for billions of years with no perceptible change.
It was by using the properties of radioactivity that scientists finally began to pin down the age of the Earth. Radioactive elements have what is called a half-life--a time period over which half of a given quantity breaks down into some other element. Uranium, for example, turns to lead with a half-life of 4,500 million years. Other elements have their own characteristic half-lives, and since a wide variety of these substances are found in rocks all over the Earth, it became possible to determine the age of such rocks by analyzing the extent of their radioactive decay. Some rocks were one billion years old; by 1931 some that were two billion years old had been found, and some rocks in western Greenland topped the 3 billion-year mark (Asimov, p.165). Eventually it was determined, from the changing proportions of rubidium and strontium in rocks, that the Earth is approximately 4.55 billion years old (incidentally, analyses of meteorites, moon rocks, and other extraterrestrial materials indicate that the entire solar system formed at roughly the same time).
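The arithmetic is simple once the ratio of parent element to decay product is measured; a minimal sketch (the ratio below is illustrative):

    from math import log2

    # Radiometric age: t = half_life * log2(1 + daughter/parent).
    def age_years(half_life, daughter_per_parent):
        return half_life * log2(1 + daughter_per_parent)

    # Uranium-238 decaying to lead-206, half-life ~4.5 billion years.
    print(f"{age_years(4.5e9, 1.0) / 1e9:.1f} billion years")  # equal parts -> one half-life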
Now that the age of the Earth has been accurately determined, I would like to demonstrate graphically the relative lengths of its various epochs by comparing the 4.5 billion years to the length of one solar year: the Earth's entire history symbolized by 365 days--January 1st to December 31st.
January 1, of course, represents the beginning of the Earth as a recognizable entity. At this point the planet is molten. By the second week of February, the Earth has solidified its outer regions, and as a result of volcanic activity, hot gases are spewed out to form a primordial atmosphere. Among the gases are vast quantities of water vapor. Oceans begin to form (through condensation and endless rainfall), and strangely enough, life appears--nothing more complicated than single-strand RNA. By the end of February primordial viruses and microspheres exist. The middle of March sees the first prokaryotes (bacteria): some of them photosynthetic, the forerunners of the algae; others more animal-like. These micro-organisms rule the Earth for 2.2 billion years.
It isn't until late September that the planet has an oxygen-rich atmosphere--oxygen being a waste product of blue-green algae. The first eukaryotes--true cells with a nucleus, organized cytoplasm, and so on--appear about mid-October. The Earth is now 3.5 billion years old. By the end of October, multicellular life finally appears--mostly porifera (sponges). November represents the start of the Cambrian period, an age of coelenterates (jellyfish). By November 15th the Ordovician period begins with the appearance of arthropods (the trilobites among them), annelids, and the like. By the 25th jawed fish arise and plant life begins to colonize the land. The end of November is the Silurian period--an age of coelacanths and rhipidistians. The latter eventually give rise to amphibians. December 1st is the Devonian period--the Age of Fish. At this time insects and arachnids follow the rich plant life onto land. December 5th is the Carboniferous period, which sees the earliest amphibians. The 7th of December is the Permian period, wherein the first reptiles evolve. December 10th is the Triassic--the beginning of the Age of Dinosaurs. December 15th is the Jurassic and the 20th the Cretaceous. By Christmas Day, December 25th, the dinosaurs are extinct and the Age of Mammals begins. On the very last day of the year, December 31st, primates, apes, and hominids finally appear. The entire history of man occupies only the last hour or so of the last day, and recorded history, since the earliest civilizations, has all taken place in about the last minute.
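All the dates above come from one proportional mapping between years ago and days of the calendar; a minimal sketch (the event list is illustrative):

    from datetime import date, timedelta

    EARTH_AGE = 4.5e9   # years

    def cosmic_date(years_ago):
        """Map an age in years onto a 365-day 'cosmic year' (any non-leap year)."""
        day = 365 * (1 - years_ago / EARTH_AGE)
        return date(1999, 1, 1) + timedelta(days=day)

    for event, years in (("first prokaryotes", 3.6e9),
                         ("dinosaurs extinct", 65e6),
                         ("recorded history", 5000)):
        print(f"{event}: {cosmic_date(years):%B %d}")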
As should be abundantly clear by now, a seventy-seven-year-old man like John Glenn cannot be considered truly old--for one human life is naught but a blink of the eye.
References

Asimov, I. (1989). Beginnings: The Story of Origins--of Mankind, Life, the Earth, the Universe. New York: Berkley.
***
Scale

One of the most challenging aspects of any scientific endeavor is the fact that one is confronted with numbers representing physical quantities, which are sometimes unimaginably large or incredibly small. For example, what distance is really represented by a light year? The human mind cannot easily comprehend such a distance, yet scientists assure us, without a second thought, that this star or that, this galaxy or that, is x light years away. Similarly, how can one really grasp what is meant when one refers to the relative sizes of microscopic entities—the realm of eukaryotic and prokaryotic cells, bacteria and viruses, RNA and DNA, or the diameter of an atom?
To express these kinds of numbers, scientists use an amazingly versatile tool called scientific notation. Rather than write out the entire number representing, say, the astronomical unit (AU: the average distance from the Earth to the Sun—approx. 150,000,000,000 meters), it would be expressed as 1.50 x 10^11 m. For the extremely small, a negative exponent is used; thus the diameter of a sodium atom might be written as 8.2 x 10^-11 m.
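Most programming languages read and write this notation directly; a quick sketch:

    # Scientific notation is built into Python's number literals and formatting.
    au = 1.496e11        # astronomical unit in meters, entered directly
    sodium = 8.2e-11     # diameter of a sodium atom, m (figure from the text)

    print(f"{150_000_000_000:.2e} m")   # the written-out AU -> 1.50e+11
    print(f"{au:.2e} m")                # same value, same notation
    print(f"{sodium:.1e} m")            # -> 8.2e-11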
The two parts of the scientific notation expression serve two distinct purposes. The second part, containing the positive or negative exponent, tells us the magnitude of the quantity. This is sometimes more of a problem than one might think, as explained by physicist Lawrence Krauss:
I was flabbergasted several years ago when teaching a physics course for nonscientists at Yale—a school known for literacy, if not numeracy—to discover that 35 percent of the students, many of them graduating seniors in history or American studies, did not know the population of the United States to within a factor of 10! Many thought the population was between 1 and 10 million—less than the population of New York City, located not even 100 miles away. (1993, p.28)
An America with a population of only 10 million would be a much different place than it really is—with a population approaching 300 million, or 3.0 x 10^8. The exponent tells us there are eight zeros following the one—in other words, a number in the hundreds of millions. The first part of the expression, a number between one and ten, gives us the exact figure to any desired degree of accuracy (the greater the accuracy, the more digits behind the decimal). Negative exponents, on the other hand, denote fractions. Thus, 10^-8 meters is 0.00000001 meters, or 1/100000000 m. A centimeter is 10^-2 meters and a millimeter is 10^-3. Therefore, 10^-8 meters is one hundred thousand times smaller than a millimeter, or 1/100000th of a millimeter.
The handy thing about scientific notation, of course, is that it fits the metric system. The advantage of the metric system is primarily mathematical, since it is based on a scale of 10. Whether the quantity measured is length (meters), mass (grams), liquid volume (liters), or density (grams/cm^3), all units scale by factors of 10. In science, such numerical convenience is essential. One can always express something like the AU in terms of miles, yards, and inches, but the common system is based largely on the number 12—similar to the way we reckon time. Such a number system makes advanced computation unnecessarily cumbersome.
Therefore, it is not surprising that the scientific community would try to foist the metric system—devised after the French Revolution of 1789—upon all humanity, replacing the old system. Their efforts were highly successful in all nations except Great Britain and the United States. We Anglophiles stubbornly cling to our old system of inches, feet, and miles; ounces, pounds, and tons; pints, quarts, and gallons; and so on. There is a reason for that, however. Despite the obvious advantages of the metric system, the common system is tailor-made to human physiology. A foot, for example, is about the length of a grown man’s foot; an inch about the length of a finger’s end-joint. A yard is about the distance from the nose to the fingertips of an outstretched arm, a mile approximately 1000 paces (1 pace = 5 ft.). The fact is, these units of measurement feel much more natural to us than centimeters or kilometers ever will. For everyday use, then, I say keep the old system; but for science the metric system is invaluable. Fortunately, nature has simplified things greatly since there are only three fundamental dimensions of matter: length, time, and mass. I dealt with the aspect of time in “Measuring the Age of the Earth,” so here I will focus on length, or distance. A few observations on relative mass may also be in order.
Let’s begin with the realm of living organisms, taking ourselves as a standard measure. The human being is a comparatively large creature, though by no means the largest. As mammals, we are in the intermediate range—smaller and less massive than some (elephant, bear, whale, etc.), but larger than many (cat, mouse, rabbit, etc.). The “average” U.S. male is approximately 1.72 meters tall and weighs about 73.48 kilograms (Diagram Group, 1980, p.72). A much wider range of sizes, however, can be found among arthropods—a classification that includes insects. The longest known insect is the tropical stick insect, which may reach 33 centimeters, but the smallest are invisible to the unaided eye. Most of these are parasites (for whom infinitesimal size is a tremendous advantage), such as the mange mite, which measures about .25 millimeters. Smaller still are certain unicellular organisms, such as the amoeba, which is between .2 and .3 mm, or the Euglena (.15 mm). Among the smallest of these is the Chlamydomonas, at .02 mm. But far smaller than all the above are bacteria, such as cocci or spirilla, which are about .001 to .002 mm in length. Another order of magnitude smaller than the tiniest bacteria, however, are viruses, which may be .1 to .3 micrometers in length—that is, .0001 to .0003 mm (ibid). Viruses are so small that ordinary microscopes cannot resolve them; electron microscopes are needed. They represent the smallest known forms of life.
To demonstrate the scale of these things consider this: if a virus were the size of a flea (which looks like a tiny black speck to our eyes), then the flea would be the size of a whale—about 65 ft. long. If an amoeba were the size of an elephant, the Chlamydomonas would be as big as a cat and a bacterium about the size of that flea (ibid). So we see that there is an enormous range of sizes even among organisms that are smaller than the tiniest insect.
Compared to atoms, however, all the above are unfathomably huge. Atoms may be measured in terms of nanometers (10^-9 m), each of which is 0.000001 mm, or one millionth of a millimeter. An iron atom may be about .2 nanometers in diameter, a sulfur atom about .08 nanometers. Smaller still are the individual components of atoms—electrons, protons, and neutrons. The nucleus of a hydrogen atom—a single proton—is only about .002 picometers across (a picometer being one thousandth of a nanometer). To give a sense of this scale, if a picometer were magnified to the size of a meter, then a small raindrop—diameter 1.4 mm—would swell to the size of the Sun (ibid). There may very well be things smaller than the particles making up atoms, but such a realm is beyond the reach of current knowledge.
The masses of these particles, though, are fascinating to consider. The mass of a proton is estimated to be 1.6726 x 10^-27 kg. That’s 0.0000000000000000000000000016726 kilograms. As minute as that seems, a proton is incredibly massive compared to its size. If protons could be packed together side by side into a lump one cubic centimeter big, it would weigh 133 million tons. The only reason we, or anything else, are not impossibly massive is that the atom is mostly empty space—and empty space has no mass. The mass of an electron—9.1096 x 10^-31 kg—is minuscule compared to that of a proton. The mass of a virus is about 10^-21 kg. By way of comparison, the mass of a virus is to a human being what that of a human being is to the planet Earth.
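That closing comparison is easy to verify; a sketch with round figures:

    # Check: a virus is to a human roughly what a human is to the Earth (by mass).
    virus, human, earth = 1e-21, 73.0, 5.97e24   # kg

    print(f"human / virus: {human / virus:.0e}")   # ~7e+22
    print(f"earth / human: {earth / human:.0e}")   # ~8e+22 -- same order of magnitude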
Now we turn our attention to the other end of the spectrum—to the astronomical—and here we are met with numbers that are equally incomprehensible. Within the solar system, distances can be measured in millions of kilometers, but beyond that the light year is needed (a light year is defined as the distance electromagnetic waves travel in one year through a vacuum—about 9.4605 x 10^12 km). The distance from the Earth to the Moon is 384,400 km; to the Sun, as mentioned before, 1.496 x 10^8 km. Venus, at its nearest, is some 41,400,000 km away; Mercury, 91,700,000 km. Going the other way, Mars is 78,300,000 km distant, Jupiter 628,700,000 km. The rest of the planets, in increasing distance, are as follows: Saturn, 1,277,400,000 km; Uranus, 2,720,000,000 km; Neptune, 4,347,000,000 km; and Pluto, 5,750,400,000 km. The diameter of the solar system at Pluto’s orbit is about 1.18 x 10^10 km, or 11,800,000,000 km.
However, the solar system extends well beyond the domain of the planets, as far as the Oort cloud (where comets originate). The Oort cloud lies an estimated 7.48 x 10^12 km—that is, 7,480,000,000,000 km—out from the Sun. That distance is only about two trillion kilometers shy of a light year. The nearest star, Proxima Centauri, is 4.25 light years away. At the fastest speeds available to current spacecraft (between 200,000 and 300,000 km/hr), it would take nearly 20,000 years to reach Proxima Centauri. All the other stars are much farther away than that. The diameter of our Milky Way galaxy is about 100,000 ly, and the distance to the great Andromeda Galaxy is 2,250,000 ly.
Now, to illustrate the above graphically, let’s make a light year equivalent to one kilometer and shrink everything else accordingly. At that scale, the Moon is just .004 cm away, Mars .83 cm, Jupiter 6.6 cm, Saturn 13.5 cm, Uranus 28.8 cm, Neptune 46 cm, and Pluto 60.8 cm. Thus, our scaled-down planetary system fits comfortably on a tabletop, though the Oort cloud still reaches some 790 meters out. The nearest star, of course, is about 2.5 miles away. The galaxy, 100,000 km across at this scale, extends far beyond the atmosphere (roughly a quarter of the distance to the Moon), while the Andromeda Galaxy is nearly six times more distant than the Moon. On this scale, the most distant (visible) objects in the universe would lie two to three times farther away than Pluto actually is. And there are, no doubt, vast reaches of the cosmos that are beyond visible range due to the finite velocity of light. So, whether one looks to the very small or the very big, infinity, for all intents and purposes, blurs the horizon of the unknown—something that mathematical models make clear.
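The scale-model arithmetic is easy to reproduce; a minimal sketch:

    # Shrink the cosmos so that one light year equals one kilometer.
    LY_KM = 9.4605e12   # kilometers in a light year

    def model_cm(real_km):
        """Model distance in centimeters for a real distance in kilometers."""
        return real_km / LY_KM * 1e5   # model kilometers -> centimeters

    for name, km in (("Moon", 3.844e5), ("Mars", 7.83e7),
                     ("Jupiter", 6.287e8), ("Pluto", 5.7504e9)):
        print(f"{name}: {model_cm(km):.3g} cm")
    print(f"Oort cloud: {model_cm(7.48e12) / 1e5:.2f} km")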
References

Krauss, L.M. (1993). Fear of Physics: A Guide for the Perplexed. New York: BasicBooks.
The Diagram Group. (1980). Comparisons. New York: St. Martin’s.
***
The Relative Abundance of Chemical Elements

There is a certain fascination with classification schemes, and science produces these aplenty. For example, the division of living organisms into kingdom, phylum, class, order, family, genus, and species, not only shows the similarities of one creature to another, but also provides support for a widely-held theory—that of biological evolution. Similarly, the periodic table of chemical elements, with its seven periods and various groupings, lends itself to another great theory—the atomic theory of matter. The fact that Mendeleev created the periodic table before the atom was really understood speaks volumes for the validity of the theory. If the atomic theory were not true, then it is hard to see how the periodic table could exist at all.
An element, of course, is defined as "[a] substance which cannot be decomposed by simple chemical processes into two or more different substances" (qtd. in Moore, 1987, p.127). There are 92 elements up through uranium, all but two of which have been found in nature (technetium, Tc 43, and promethium, Pm 61, have yet to be discovered in their natural state). They have a wide variety of physical and chemical properties; some occur naturally in their elemental state (iron, copper, gold, etc.), others can only be found in compounds (silicon, calcium, sodium, etc.). Some are radioactive, most are not; some are quite abundant in nature—oxygen, for instance—others are exceedingly rare.
What accounts for the relative abundance or rarity of an element? The answer, it turns out, can only come from the relatively new branch of science known as astrophysics—the merger of astronomy/cosmology and nuclear physics. I will explore that area momentarily; but first, how did we come by our knowledge of the elements?
As many as ten elements had been known since antiquity: copper and tin, for instance, were used to make bronze--a strong alloy. Their Latin names were cuprum (a reference to the island of Cyprus) and stannum, respectively. Ferrum, or iron, later became extremely important in the manufacture of tools and weapons. Other known metals included aurum (gold), plumbum (lead), hydrargyrum (mercury), and argentum (silver). Antimony, carbon, and sulfur were also well known and commonly used. The only other element discovered before the age of science was arsenic, isolated by Albertus Magnus in 1250 A.D. Phosphorus was identified in 1669, and during the 1700s these elements were named: bismuth, chlorine, chromium, cobalt, hydrogen, manganese, molybdenum, nickel, nitrogen, oxygen, platinum, tellurium, titanium, tungsten, uranium, zinc, and zirconium. During the 1800s, new substances were added to the list nearly every year: aluminum, barium, beryllium, boron, bromine, cadmium, calcium, cerium, iodine, iridium, lithium, magnesium, niobium, osmium, palladium, potassium, silicon, sodium, tantalum, and so on. Thus, at least 50 different elements had been positively identified by the time Mendeleev began putting together the periodic table. It was perhaps inevitable that such a table be created at some point--some sense of order had to be imposed on the chaos.
The seven periods of the table are the horizontal rows representing the elements in ascending order according to atomic number--that is, the number of protons in the nucleus. An element may gain or lose electrons to become ionized, or occur in several isotopes (according to the number of neutrons), but the number of protons, and thus the strength of its positive charge, defines the element. Its chemical properties arise chiefly from the electrons swarming about the nucleus.
The vertical columns of the table indicate periodicity, or the point where the various properties begin to repeat themselves. Thus, the elements of column IA--H 1, Li 3, Na 11, K 19, Rb 37, Cs 55, Fr 87--are similar, as are those of column 0, the noble gases--He 2, Ne 10, Ar 18, Kr 36, Xe 54, Rn 86. Generally, elements fall into three categories: metals, metalloids, and non-metals. There are a number of subdivisions as well.
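The repetition shows up directly in the atomic numbers themselves: the gaps between successive members of a column trace out the lengths of the periods. A small sketch using the column-0 figures above:

```python
# Differences between successive noble-gas atomic numbers
# reproduce the period lengths after the first: 8, 8, 18, 18, 32.
noble_gases = {"He": 2, "Ne": 10, "Ar": 18, "Kr": 36, "Xe": 54, "Rn": 86}

numbers = list(noble_gases.values())
gaps = [b - a for a, b in zip(numbers, numbers[1:])]
print(gaps)   # [8, 8, 18, 18, 32]
```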
As far as relative abundance goes, group IA contains the most plentiful of all elements, hydrogen (making up 88.6% of all the matter in the universe) as well as one of the rarest, francium. There are only an estimated 15-25 grams of francium on earth. According to Kaufmann (1985, pp.110-11), "From chemical analysis of Earth rocks, Moon rocks, and meteorites, scientists have been able to determine relative abundances of the elements in our part of the galaxy." They are (in order of abundance): 1) hydrogen, 2) helium, 3) carbon, 4) nitrogen, 5) oxygen, 6) neon, 7) magnesium, 8) silicon, 9) sulfur, 10) iron, 11) sodium, 12) aluminum, 13) argon, 14) calcium, and 15) nickel (ibid). Here on earth, though, we have what might be called a "specialized environment," and so the most abundant substances do not reflect cosmic ratios.
The 21 most abundant elements in the earth's crust are scattered throughout the table [note: rankings run from least to most abundant, with elements sharing a vertical column grouped together]: vanadium, which ranks 21st, makes up .017% (by weight); strontium is 20th at .019% and sits in group IIA with calcium (5th) at 3.63%, magnesium (8th) at 2.09%, and barium (13th) at .05%; nickel is 19th at .02% and shares group VIII with iron (4th) at 5.01%; zirconium is 18th at .026% and shares group IVB with titanium (9th) at .63%; group VIIA includes fluorine (17th) at .03% and chlorine (14th) at .048%; carbon is 16th at .034% and sits in group IVA with silicon, ranked 2nd overall at 27.72%; chromium is 15th at .037%; group VIA contains sulfur (12th) at .052% and 1st-ranked oxygen, which accounts for 46.59% of the earth's crust--by far the most abundant element; group VIIB includes manganese (11th) at .10%; surprisingly, hydrogen accounts for only .13%, sharing 10th place with phosphorus; hydrogen, though a gas, belongs to group IA, the alkali metals, along with sodium (6th) at 2.85% and potassium (7th) at 2.6%; finally, aluminum ranks 3rd at 8.13% (Chen, 1975, p.77). All other elements combined account for only .056% of the earth's crust. These abundances are summarized below:
21 Most Abundant Elements in the Earth's Crust

Rank   Element                 % of Earth's Crust
 1     Oxygen                  46.59
 2     Silicon                 27.72
 3     Aluminum                 8.13
 4     Iron                     5.01
 5     Calcium                  3.63
 6     Sodium                   2.85
 7     Potassium                2.6
 8     Magnesium                2.09
 9     Titanium                 0.63
10     Hydrogen, Phosphorus     0.13 (each)
11     Manganese                0.10
12     Sulfur                   0.052
13     Barium                   0.05
14     Chlorine                 0.048
15     Chromium                 0.037
16     Carbon                   0.034
17     Fluorine                 0.03
18     Zirconium                0.026
19     Nickel                   0.02
20     Strontium                0.019
21     Vanadium                 0.017
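As a quick sanity check, the percentages in the table, plus the .056% attributed to all other elements, should close to 100%. A minimal sketch (treating the tied 10th place as two entries of 0.13% each):

```python
# Crustal abundances from the table above (percent by weight).
crust = {
    "O": 46.59, "Si": 27.72, "Al": 8.13, "Fe": 5.01, "Ca": 3.63,
    "Na": 2.85, "K": 2.6, "Mg": 2.09, "Ti": 0.63, "H": 0.13,
    "P": 0.13, "Mn": 0.10, "S": 0.052, "Ba": 0.05, "Cl": 0.048,
    "Cr": 0.037, "C": 0.034, "F": 0.03, "Zr": 0.026, "Ni": 0.02,
    "Sr": 0.019, "V": 0.017,
}
ALL_OTHERS = 0.056   # everything beyond the listed elements, per Chen

total = sum(crust.values()) + ALL_OTHERS
print(f"{total:.3f}%")   # 99.999% -- the figures close to ~100%
```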
The relative abundances of elements on earth do not, of course, reflect their occurrences throughout the universe. Again, 88.6% of all matter in the universe (matter, that is, that can be accounted for) is hydrogen, and 11.3% is helium; all the other elements, atomic number 3 and up, account for only 0.1%. Hydrogen and helium are scarce on earth because of the way the solar system formed--most of the hydrogen went into the makeup of the sun and of the massive outer planets. The small inner planets simply did not have sufficient gravity to retain the lighter elements. They are, as a result, made up of heavier elements--albeit the rarer ones.
Now, how did the elements originate in the first place? Hydrogen and helium were created in the aftermath of the Big Bang--the hypothetical origin of the universe. These simple atoms were the fundamental stuff out of which all else came. Heavier elements were formed as byproducts of thermonuclear reactions within stars, for only there are the temperatures and pressures great enough to induce fusion. Our own sun, for example, was originally about 75% hydrogen and 25% helium, with trace amounts of heavier elements. The nuclear fusion that makes a star shine and radiate energy is the transformation of hydrogen nuclei into helium nuclei, a process called core hydrogen burning (Kaufmann, p.392). Thus, helium is the "ash" of hydrogen burning. It is the outward push of radiation that halts a complete gravitational collapse, giving the star its equilibrium during what astronomers call the "main sequence."
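The energy source here is the slight mass difference between four hydrogen nuclei and the single helium nucleus they become; roughly 0.7% of the mass is converted to energy via E = mc². A rough sketch (the particle masses are standard reference values, not drawn from the sources above):

```python
# Mass lost when four hydrogen nuclei (protons) fuse into helium-4.
m_proton = 1.6726e-27     # kg
m_helium4 = 6.6447e-27    # kg
c = 2.9979e8              # speed of light, m/s

dm = 4 * m_proton - m_helium4                 # mass deficit per reaction
print(f"fraction converted: {dm / (4 * m_proton):.4f}")   # ~0.0068
print(f"energy per reaction: {dm * c**2:.3e} J")          # ~4.1e-12 J
```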
After some five billion years, there is now more helium in the sun's core than hydrogen. In the final years of its life (about another five billion years from now), the sun's hydrogen fuel will be almost depleted and its energy output will decrease. Then gravity will dominate and begin to collapse the star, raising the pressure and, with it, the temperature. When the core temperature reaches 100 million Kelvins, helium burning ignites--two helium nuclei fuse to form an isotope of beryllium. But the beryllium is unstable and rapidly decays, and only through additional bombardment by helium nuclei does it finally form carbon. Then some of the carbon combines to form oxygen, so carbon and oxygen are the "ash" of helium burning (ibid).
In stars more massive than the sun, a greater variety of elements can be formed. When a star's core temperature reaches 600 million Kelvins, carbon burning begins, producing neon, magnesium, oxygen, and helium. At one billion Kelvins neon burning begins, producing more oxygen and magnesium. At 1.5 billion Kelvins oxygen burning begins; its principal product is sulfur, along with silicon, phosphorus, and more magnesium. At 3 billion Kelvins silicon burning is ignited--a furious process in which hundreds of complex nuclear reactions take place, but the final result is a stable isotope of iron (ibid). So iron is the end of the line for this particular path of element creation.
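Laid out as data, this ladder of burning stages is easy to see at a glance (a sketch; the ignition temperatures and products are simply the figures quoted above, while the hydrogen-burning entry is an assumed typical value):

```python
# Stellar burning stages and their principal "ashes" (per the text).
burning_stages = [
    ("hydrogen", 15e6,  ["helium"]),      # ~15 million K core (assumed)
    ("helium",   100e6, ["carbon", "oxygen"]),
    ("carbon",   600e6, ["neon", "magnesium", "oxygen", "helium"]),
    ("neon",     1.0e9, ["oxygen", "magnesium"]),
    ("oxygen",   1.5e9, ["sulfur", "silicon", "phosphorus", "magnesium"]),
    ("silicon",  3.0e9, ["iron"]),        # end of the fusion line
]

for fuel, ignition_k, ash in burning_stages:
    print(f"{fuel:>8} burning at {ignition_k:.1e} K -> {', '.join(ash)}")
```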
It is obvious, however, that iron is not the end of the periodic table. Whence come elements of atomic number 27 and above? These, it turns out, are created in the violent reactions of super-massive stars at the end of their lives--supernova explosions. The beautiful Crab Nebula (NGC 1952), for instance, is the remnant of a supernova that was observed in 1054 A.D. The types of nuclear reactions that take place in these events are unpredictable. The heavier the nucleus of the atom, the higher the energies required for fusion to occur, and the energies associated with supernovae depend upon the mass of the star.
On earth, elements with even atomic numbers are about ten times more abundant than those with odd numbers (Moore, p.128). A graph of the elements' relative abundance (with the even-odd fluctuation smoothed out) shows a generally descending curve with definite peaks and troughs--a carbon-nitrogen-oxygen peak, an iron peak, and a lead peak, for example. This is, in part, a reflection of the table's periodicity.
One might wish that gold or silver were more common than they are (ignoring the fact that such over-abundance would lessen their monetary value) or that tungsten were more easily obtained, but in the cosmic scheme, all of these--aside from hydrogen and helium--are of little consequence. All the heavier elements put together are little more than trace impurities. Without the stars and their various life-cycles, no heavier element would ever exist. Out of the death-throes of stars comes the stuff of life, literally.
References
Chen, P.S. (1975). A New Handbook of Chemistry. Camarillo, CA: Chemical Elements Publishing.
Kaufmann, W. J. (1985). Universe. New York: Freeman.
Moore, P. (Ed.). (1987). The International Encyclopedia of Astronomy. New York: Orion Books.
***
Mechanisms of Change

It is often said that truth is stranger than fiction, and nowhere is this more the case than in the discovery of continental drift. Nothing in anyone’s experience would suggest that the face of the earth is changing in any significant way. That’s not to say that minor changes are not taking place: rivers change course, mountains and hills wear down, coastlines slowly erode. But the overall configuration of oceans and landmass seems just as permanent as the stars in the night sky—something the Bible alludes to in Ecclesiastes 1:4: “A generation goes, and a generation comes, but the earth remains forever.”
This kind of permanence is deeply desired by people who otherwise have precious little in the way of security. Political, social, and cultural upheavals never seem to end, and the earth itself is often a hostile environment: earthquakes, floods, droughts, violent storms. Indeed, history seems to be the tale of one catastrophe after another. But man-made disasters notwithstanding, one must ask what the real process of change in the natural world is. Is the earth evolving, slowly changing over time, or is it the hapless victim of continual catastrophism?
Historically, catastrophe and natural disaster were held to be the main agents of change in this world. The best-known example of this is the Great Flood described in Genesis 6:13-17: “And God said to Noah, ‘I have determined to make an end of all flesh; for the earth is filled with violence through them… behold, I will bring a flood of waters upon the earth to destroy all flesh in which is the breath of life from under heaven; everything that is on the earth shall die.’” Noah’s flood was a global catastrophe of epic proportions, and if not for the efforts of one man all would have been lost. From a scientific perspective, however, the story of the Great Flood is implausible at best. If there was such a planet-wide deluge, drowning every continent and submerging even the tallest mountains, where did the water go? The sheer volume of it has to be accounted for somehow. There is also a problem with the ark, whose dimensions are given very specifically in the Bible. How could two specimens of every animal on earth possibly be crammed into such a vessel? Clearly, it’s not possible. The modern view is to dismiss the story altogether as myth.
Whence comes the story of the Flood, anyhow? It derives from a classic of ancient Mesopotamian literature, The Epic of Gilgamesh, which describes a great prehistoric flood. And indeed, such floods were commonplace in the Tigris-Euphrates valley (what is now southern Iraq). Archaeological excavations have revealed evidence of a particularly devastating flood in that region in remote antiquity; thus, the story of Noah’s Ark likely has some basis in fact. Is it so surprising that the tale grew more elaborate through constant retelling before it was finally written down?
Nonetheless, the earth does appear to be unchanging. The first hint that the planet’s past might have been vastly different from anything one could imagine came in the 16th century, shortly after the Europeans’ discovery of the Americas, when rough-hewn maps of the North and South American coastlines were being made. Portuguese explorers were concurrently doing the same for the African continent. It did not take long for those who studied the new maps to come to a startling realization: the east coast of South America and the west coast of Africa matched one another like pieces of a jigsaw puzzle. The first to write about this was philosopher Francis Bacon (1561-1626). In 1620 he recorded the observation in his book Novum Organum, saying that it could not be mere coincidence; Africa and South America must once have been part of a single landmass that was somehow torn asunder. The question is, how? The clergy, never too comfortable with philosophers and scientists anyhow, were quick to attribute it to the biblical Flood—the force of the deluge must have ripped the continents apart, if indeed they were ever joined. The legendary Flood, it seemed, was a ready-made answer for any incongruity. The ancient Greeks, for instance, had noted that sea shells and fossils of what was obviously marine life were to be found high in the mountains, and speculated that what was now dry land must at one time have been under the sea. The clerical answer to that? The Flood, of course.
In 1784 American statesman Benjamin Franklin (1706-1790) suggested that the earth’s solid crust might be a relatively thin shell floating on an ocean of molten rock. Thus, it could break up into sections that could slowly drift around over time. It was a brilliant speculation, but what Franklin lacked was a mechanism to make it work.
In Great Britain during the 18th century, retired chemist James Hutton (now considered the “Father of Geology”) published a book, Theory of the Earth, in which he ascribed large-scale changes on the planet’s surface to natural processes—sedimentation, volcanic eruption, erosion by wind and water—and judging by the rates of change, concluded that the earth must be millions of years old.
As Hutton’s views gained wider acceptance and the science of geology became firmly established, additional evidence of continental drift was gathered from around the globe. Not only were the coastlines of Africa and South America similar, their geologies were virtually identical. Moreover, many species of plant and animal, which could not have crossed the Atlantic Ocean, were common to both continents. Similarly, the island of Madagascar, off the coast of Africa, had few species in common with Africa but many in common with India, which was much farther away. How could that be? Since the concept of continental drift was not universally accepted at that time, the notion of “land bridges” came into vogue.
In 1909 Austrian geologist Eduard Suess completed a three-volume work called The Face of the Earth in which he postulated the existence of a super-continent called Gondwanaland. It included Africa, South America, India, Australia, and Antarctica, all in their present positions but joined together by land bridges. The assumption was that continents could rise or sink vertically, as if they were corks bobbing in water. This is the phenomenon of isostasy, which holds that landmasses are made of lighter rock (granite) than ocean basins (basalt). Because they are less dense, continents naturally float—that’s why they are continents. But can they move sideways too?
Alfred Wegener (1880-1930) came to the conclusion that they could, that indeed they must. He dismissed the idea of land bridges that somehow once existed (to account for observed data) then conveniently disappeared. The very concept of isostasy ruled that out because any land bridge would continue to float—it would not just disappear. Therefore, the continents must be slowly drifting across the face of the earth. For evidence there was the fit of the shorelines; and when the continental shelves were considered instead of the coastlines, the fit was even better. It was also known that Antarctica had not always been ice-bound. According to James Dyson:
Throughout the long, intervening periods, the Earth’s climate was mild and uniform. The tropics frequently extended into high latitudes; reef-building corals spread through warm seas much further north than they do today, sometimes to the latitude of Greenland and Alaska; palms, tree ferns, breadfruit trees, and other subtropical plants grew equally far north over a time span counted in millions of years. During much of Earth’s history even the Antarctic continent was free of ice. At times it was covered with dense forests.
In fact, fossils of creatures that could only have lived in the tropics were to be found throughout Antarctica. Thus, either the South Pole once enjoyed a warm climate (which is unlikely) or Antarctica has not always occupied its current location. Wegener proposed that all the continents had once been part of a single mass called Pangea, which had long ago broken up and dispersed. But since he was unable to provide a mechanism for the continental drift theory, his ideas were ridiculed. Unlike many of his predecessors, Wegener refused to fall back on legendary floods or other cataclysms to account for it. The continents just drifted.
Knowledge of the all-important mechanism that drove continental drift, it turned out, finally came from undersea exploration. During the mid-1800s an attempt was made to lay a telegraph cable across the ocean floor, connecting the United States and Great Britain. For this purpose, information about the ocean bottom was needed. Oceanographer Matthew Maury (1806-1873) was commissioned to collect data on the depths. His methods (using weighted ropes) were laborious and costly, but by 1854 it was clear that the middle of the ocean was much shallower than either side. One would think that the deepest part of the ocean would be the very center, but not so. Maury called it Telegraph Plateau.
After World War I the invention of sonar made it possible to obtain a detailed picture of the ocean bottom, and it revealed more than just a plateau: there was a mountain range—higher, longer, and more rugged than any on land—running the length of the ocean. This came to be called the Mid-Atlantic Ridge. After World War II it was discovered that the ridge curved around southern Africa and extended into the Indian Ocean. There it divided and worked its way around Australia and then formed a vast circle in the Pacific Ocean. Moreover, in the midst of this globe-spanning mountain range there were deep canyons—what appeared to be cracks in the earth’s crust. Here it was that sea-floor spreading was discovered: molten basalt from the earth’s mantle constantly wells up, building the mid-oceanic ridge and widening, centimeter by centimeter, the ocean floor. This is what was driving the continents apart, and Wegener’s theory at last had to be taken seriously.
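The pace is tiny, but the timescales are immense. A rough illustration of how far a few centimeters a year can carry two continents (the 3 cm/yr rate and 150-million-year age are typical textbook figures, not drawn from the sources above):

```python
# How far does sea-floor spreading carry two continents apart?
spreading_cm_per_yr = 3      # typical mid-ocean ridge rate (assumed)
years = 150e6                # rough age of the South Atlantic (assumed)

separation_km = spreading_cm_per_yr * years / 1e5   # cm -> km
print(f"{separation_km:,.0f} km")   # ~4,500 km, about the Atlantic's width
```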
Therefore, it was through persistence, acute observation, and refusal to rely on apocryphal tales of ancient catastrophes that the truth was finally laid bare. The earth was revealed to be a dynamic, living planet, constantly in motion. If it appears unchanging it is because a human life is so short. In fact, humanity has occupied the planet for only a minute fraction of geological time. Compared to the earth’s 4.5 billion-year history, that’s no time at all.
Although there are catastrophes and natural disasters, these are only local events, insignificant in the long term. The real mechanisms of change are natural, slow-moving, and inexorable. That is why, for example, I doubt the newly popular theory of dinosaur extinction by asteroid collision. Perhaps such a collision did take place, but I question whether that in itself wiped out the great reptiles. Rather, I believe that the break-up of Pangea (which began in the Triassic) and the repositioning of the continents, with the climatic changes that resulted, was what really triggered their demise. In other words, I feel more comfortable with a uniformitarian explanation than a catastrophic one.