Primary Opinion

Collected Essays: 1997-2004


Thursday, October 27, 2005

Two: Ethics

Mammon and America

When I was growing up I often heard my father refer to the "Almighty Dollar," making me think that money was some sort of religion. Indeed, money is a god for some--if not money, then the tangible things it buys. This crass materialism--which dominated our nation in the '50s, was temporarily rejected during the '60s, and enjoyed a resurgence in the '80s--is a kind of mammonism: a love of money. In the movie Oh, God! John Denver asks God (played by George Burns) why He created man naked. God replies that when one has clothes one also has pockets, and where there are pockets there has to be something to put in them. The implication is that material wealth has little to do with God. In Matthew 22:17-21 a group of Pharisees confronts Jesus, asking him whether it is lawful to pay taxes to Caesar. Jesus replies, "Show me the money for the tax." They bring him a coin and he says, "Whose likeness and inscription is this?" They reply that it is Caesar's, and Jesus says, "Render therefore to Caesar the things that are Caesar's, and to God the things that are God's."

Nonetheless, so long as one is alive in this world, one's livelihood is a real concern. Let us not think that abject poverty is the guaranteed road to virtue. In fact, just the opposite seems to be the case. The impoverished areas of this nation are invariably hotbeds of crime, drug abuse, prostitution, and crushing despair. It is debatable whether poverty causes crime, but it is certainly true that crime begets poverty. Businesses simply cannot operate in crime-ridden neighborhoods, so they pack up and leave--taking badly needed jobs with them. Is this ethical? Like any other entity, a business instinctively seeks to survive. It can hardly be blamed for that, but is there a point where ethical lines are crossed and profits are gained through people's ruin? Historically, there have been two opposing ethical approaches to American business: one based on the concept of Social Darwinism (survival of the fittest), the other on the Protestant work ethic (an honest wage for an honest day's work). In recent years, however, a third, more utilitarian approach has emerged.

The emergence of the United States as the world's wealthiest nation began shortly after the Civil War, centering on the burgeoning railroads. According to Nevins and Commager, America during Abraham Lincoln's day "was a nation of small enterprises. A monopoly was practically unknown..., furniture came from the local cabinet maker, shoes from the neighborhood shoemaker" (1981, pp.267-68). Within forty years, however, all this had changed. Companies like International Harvester and Standard Oil owned virtual monopolies in their fields, driving small businesses (and their owners) to ruin. One of the most spectacular successes was the United States Steel Corporation, born in 1901 with a capitalization of 1.4 billion dollars--a sum that was "larger than the total national wealth a century earlier" (ibid). The accumulation of such unbelievable wealth was the result of setting up a corporation--an entity that could "enjoy the legal advantages but avoid most of the moral responsibilities of a human being" (ibid). Then came the trust. A trust was a joining of corporations acting in concert to monopolize resources, eliminate less profitable subdivisions, bargain with labor, compete with foreign interests, and most importantly, control prices. Because of their vast reserves of capital, the trusts came to have an undue influence over the government. By the end of the 19th century democracy itself was an endangered species because most of the country's natural resources, industries, railroads, and utilities were all generating profit for a handful of men. According to Nevins and Commager:

Exorbitant charges, discrimination, and wholesale land grabs by the railroads, the malpractices of Rockefeller, Carnegie, and others in crushing competitors, the savage power with which many giant corporations beat down labor, the pocketing by the trusts of the savings that came from science and invention, the spectacle of corporation agents lobbying favorable laws through state legislatures and corporate lawyers finding loopholes in state tax or regulation laws, all aroused widespread alarm and bitterness. (p.273)

Such business practices operated according to what has been called Social Darwinism. Charles Darwin, of course, was the English naturalist who spent years studying wildlife all over the world. In 1859 he published The Origin of Species, the central thesis of which was that most animals produce far too many offspring to subsist on available food supplies; therefore they must compete amongst themselves and against other species to survive. In this way, the weak, the sickly, and the maladapted get weeded out. It is nature's method of ensuring that only the strongest, most intelligent, cunning, and energetic creatures live to mate and pass on their genes. It was not difficult for the Rockefellers and the Carnegies of the world to adopt Darwin's theory of natural selection as a justification for their ruthlessness.

This kind of brutality, after all, is nothing new--it is as old as the human race. All the Caesars, Khans, and Hitlers have operated according to the basic principle of "might makes right." Ethics, the philosophical consideration of right and wrong, hardly enters into the discussion. Probably the most brilliant exposition of this kind of thinking was that of the philosopher Friedrich Nietzsche (1844-1900). Nietzsche was a pious youth--schoolmates derided him as a "Jesus in the Temple" (Durant, 1961, p.403). But at some point he rejected his reverent past and spent the remainder of his life bitterly denouncing Christianity. In such books as Thus Spake Zarathustra (1883) and Beyond Good and Evil (1886), Nietzsche taught that Christian morality was slave morality, a system devised by the weak to fetter the strong. Thus, he pronounced, God is dead, and in His place arises the Superman (Ubermensch), the source of what he called "hero morality"--the ethos of the conqueror and the dictator. That which we call "good" is whatever triumphs, whatever succeeds, whatever obtains victory. "Evil" is what fails, gives way, or is overcome by superior force. The world as a whole operates according to what Nietzsche called the Will to Power. Here, there is no room for compassion, weakness, or sentiment. Altruism is a sham and Christian charity hypocrisy.

Such thinking, obviously, is nothing more than the application of Darwin's theory to human value systems, and, frankly, it is a little hard to argue with its basic premise. Is not strength better than weakness, victory better than defeat? And nature is appallingly indifferent to the welfare of any living thing, as evidenced by the fact that 99% of all species that ever existed on earth are now extinct--driven to destruction by environmental pressures. The question is, should we apply these indisputable facts to our own species? Are we human beings to scratch and claw and compete, just as animals do?

Apparently, the U.S. government disagreed, stepping in to enact and enforce antitrust laws around the turn of the 20th century. In his First Inaugural Address, President Woodrow Wilson said:

The evil has come with the good, and much fine gold has been corroded. With riches has come inexcusable waste. We have squandered a great part of what we might have used, and have not stopped to conserve the exceeding bounty of nature... scorning to be careful, shamefully prodigal as well as admirably efficient. We have been proud of our industrial achievements but we have not hitherto stopped thoughtfully enough to count the human cost, the cost of lives snuffed out, of energies overtaxed and broken, the fearful physical and spiritual cost to the men and women and children upon whom the dead weight and burden of it all has fallen pitilessly the years through... (qtd. in Nevins & Commager, p.338)

Wilson was perhaps the very embodiment of a more traditional approach to business: that of the Protestant work ethic--so named to differentiate it from more aristocratic Roman Catholic values. Hierarchies are, after all, inherent within Catholicism. Europe was historically a society divided by class, so the Protestant Reformation was as much an economic struggle as a religious one. America is much more egalitarian, although vestiges of class-consciousness remain. But the Protestant work ethic was at the heart of colonial and early American entrepreneurialism. In a social climate preoccupied with "salvation"--the benefits of which are supposedly unobtainable until after death--it was considered best to occupy one's time with useful work. Laziness, sloth, gluttony, and self-indulgence (so beloved of aristocracy) were sins to be avoided. There may have been a time when a man's word was his bond, when written contracts were mere formalities, when business owner and employee were not sundered by vast oceans of wealth and privilege, but those days seemed to vanish in the wake of Andrew Carnegie, John D. Rockefeller, and J.P. Morgan.

The Protestant value system is based upon the simple--but surprisingly radical--teachings of Jesus on the subject of money. In Matthew 6:24 he says, "No one can serve two masters; for either he will hate the one and love the other, or he will be devoted to the one and despise the other. You cannot serve God and mammon." Thus, in Jesus’ opinion, men are too concerned about money and material--the externals of life. In Mark 10:23-25 he says, "How hard it will be for those who have riches to enter the kingdom of God!" and, "It is easier for a camel to go through the eye of a needle than for a rich man to enter the kingdom of God" (in that passage the word "camel" is probably a mistranslation of the original Greek word meaning "rope"). There was a time in American history, I'm sure, when many took those sayings to heart and therefore dealt honestly with others, seeking simply to feed themselves and their families through honorable means. The idea of building a financial empire and living like royalty must have seemed downright un-American. Indeed, the U.S. government had to step in and pass antitrust laws because if it had not, the country would have become an oligarchy and the government little more than a rubber stamp. Democracy itself was at stake.

In the 20th century we have passed through world wars, a cold war, civil rights struggles, political assassinations, and shifts between conservative and liberal agendas, but American business continues unabated. The Protestant work ethic seems hopelessly outdated now, and although Social Darwinism is still practiced, most people do not feel comfortable with it. These days a more utilitarian approach to ethics is the desired norm. Companies want to maintain profitability but workers must be able to make a decent living too. In 1906 a novel by Upton Sinclair called The Jungle shocked the nation with its vivid descriptions of the Chicago stockyards and the plight of (mostly immigrant) workers. Here is a brief passage:

Marija and Elzbieta and Ona, as part of the machine, began working fifteen or sixteen hours a day. There was no choice about this--whatever work there was to be done they had to do, if they wished to keep their places; besides that it added another pittance to their incomes, so they staggered on with the awful load. They would start work every morning at seven, and eat their dinners at noon, and then work until ten or eleven at night without another mouthful of food... When they got home they were always too tired either to eat or to undress; they would crawl into bed with their shoes on, and lie like logs. (p.142)

After The Jungle was published, public indignation forced the government to enact pure-food laws. Labor laws and occupational safety laws soon followed.

There are some who believe that socialism is the only solution to economic woes, but if the results are like those in Russia or North Korea (where the famine is so severe that people are eating the bark off trees)--economies in shambles--there is little to recommend that route. The United States, the European Union, and Japan seem to be the best models of prosperity for the present. No one will deny that there are inequities to be resolved, but such problems are best addressed in an environment of freedom and opportunity, rather than one of oppression and fear.



References

Durant, W. (1961). The Story of Philosophy. New York: Washington Square.

Nevins, A. & Commager, H.S. (1981). A Pocket History of the United States. New York: Washington Square.

Sinclair, U. (1906). The Jungle. New York: Signet.


***



Racism and Racialism

Racism is racism, one could argue, but the distinction between extrinsic and intrinsic racism presented by Kwame Appiah (2003, pp.264-279) appears to be a matter of the depth to which such views are held—the latter being more insidious than the former (if I understand Appiah correctly). The former—extrinsic racism—is best expressed by the classic and ironic declaration, “Some of my best friends are…(you fill in the blank).” It’s ironic because, if you truly consider someone a “friend,” you would not, it seems to me, identify him or her in ethnic terms. This cuts to the heart of the problem of racism (or bigotry) in general: the failure, or outright refusal, to regard other human beings primarily as individuals rather than as members of particular groups. We often consider it normal, even admirable, to treat certain persons according to their “status.” For example, we deal with small children, in every respect, much differently than we do with adults. This is not limited to one’s own children, but all children. It is considered normal, and morally correct, to take especial care and concern when relating to children because of their vulnerability as minors. Similarly, a white person—particularly one of the older generation—may consider it perfectly natural to regard blacks or other non-whites as subordinates; there may be no real malice involved, simply an ingrained assumption (white superiority) that remains unchallenged. Such assumptions are usually unconscious, unless brought to the fore through some shocking challenge.

Intrinsic racism, on the other hand, seems to be more deeply rooted, and thus has the potential to cause greater harm. It is expressed by the Crummell quote on p. 273: “Races, like families, are the organisms and ordinances of God: and race feeling, like family feeling, is of divine origin. The extinction of race feeling is just as possible as the extinction of family feeling. Indeed, a race is a family.” Here, no matter how many of your “best friends” may be (blank), you would never consider giving preference to them over others of your own ethnic background. If you’re white, you treat other whites—indeed all whites—differently than you do non-whites, for no other reason than “race.” Here, I should point out, as Appiah does in his essay, that intrinsic and extrinsic racism may not be so easily separated or distinguished; in fact it seems best to regard the whole psychology of it as a spectrum with extreme intrinsic on one end and extreme extrinsic on the other. But all racism, I believe, emanates from a core of false assumptions that Appiah refers to as “racialism”—the notion that there are identifiable and heritable characteristics that a) all members of a particular “race” have in common, and b) make it possible to subdivide humanity into distinct racial groups.

Actually, I am not unfamiliar with these lines of thought that are so clearly discussed in “Racisms.” Racialism, which holds that “traits and tendencies characteristic of a race constitute…a sort of racial essence,” giving rise to “what the nineteenth century called the ‘Races of Man,’” has resulted in all sorts of dubious and pseudo-scientific theory. The most notorious example would be the elaborate scheme created by the Nazis to identify Jews according to their relative genealogies (e.g. one Jewish grandparent on the mother’s side—German; two Jewish great-grandparents on the father’s side—Jew; etc.). In actuality, there is no scientific basis for any of it. The whole idea of “race”—as expressed in racialism—is itself a fallacy. There is no “sub-species” of human being; there is only one, remarkably homogeneous, species. Genetically, ALL humans are virtually identical, and in fact lineally related. Samples of human DNA taken from every ethnic group in the world—from European to Chinese to Arab to African to Australian aborigine—have proven that all humans alive today are lineally descended from a single woman (sometimes whimsically referred to as “Eve”) who lived in sub-Saharan Africa about 200,000 years ago. I could go on and on about this, of course, but such arguments mean little to those who insist on holding their (irrational) racial views.


References

Appiah, Kwame Anthony. "Racisms." Rpt. in The Right Thing to Do: Basic Readings in Moral Philosophy. Ed. James Rachels. Boston: McGraw-Hill, 2003. 264-279.


***


Decision to Drop the Bomb

The debate over the “morality” of war is essentially a conflict between two value systems: the Greco-Roman and Judeo-Christian. Greco-Roman values, of course, accept the inevitability of war, and prize such virtues as courage, honor, obedience to authority, adherence to duty, and so on. Very chivalrous. But Christianity, as Lackey points out, was originally understood to be pacifist: "The early Christians, living at the time the New Testament was being written and shortly afterward, thought that Jesus' teaching was perfectly unambiguous. He did not permit meeting violence with violence, period" (2003, p.221). Such a view appeals to us intuitively; deep down we know it is wrong to fight amongst ourselves. It’s idealistic, even if a bit naïve. I would point out, however, that even Jesus was capable of violent acts: in the Gospel of John he fashions a whip out of bits of rope and uses it to drive the money-changers out of the Temple; in another passage he tells his listeners to pluck out the eye that causes them to sin, or to cut off the hand that does the same. These are not pacifist notions. So there are some things worth resisting, by force if necessary. If we were to insist upon utter pacifism, we would have to eliminate the police and other forms of law enforcement; we would have to let criminals run free and terrorize whomever they wished. Pacifism, carried to such an extreme, can be just as dangerous and harmful as the most strident jingoism. The rational “Golden Mean” is what we need.

As for the use of nuclear weapons in war, specifically the decision to bomb Hiroshima and Nagasaki—I see little that makes one form of destruction morally superior to another. Is it better to kill someone by putting him in front of a firing squad, hanging him from the gallows, or sending him to the guillotine? Our conventional “humanitarian” view says that death should be as quick and painless as possible. Extrapolate that to the international scene during the final days of World War II, and the U.S. decision to drop the Bomb on those Japanese cities seems just. Otherwise, the war would have likely dragged on for another six months or so. The fire-bombing of Tokyo several months earlier left more than 100,000 Japanese civilians dead in a single night—but nobody debates the ethics of that. The debate is over nuclear weapons, and whether they should be used at all. It is the fear of nuclear energy that fuels the ongoing argument.

Should nuclear energy be feared? Wouldn’t it be better if there were no such thing? Nuclear power is what makes the Sun shine, and all the stars. Most of us know that atoms are composed of nuclei with orbiting electrons. Conventional explosives, such as TNT or nitro, release only the chemical energy of electrons and their bonds. These can be deadly enough, but the atomic nuclei remain untouched. When atomic nuclei are either split (fission) or forced together (fusion), the far greater energies bound up in the nuclei are released (the rough comparison below gives a sense of the scale). Theoretically, nuclear power should not be feared any more than sunlight pouring through your kitchen window. The reason there has not been a World War III is that nuclear weapons guarantee destruction to any nation attacking a country that has them. Humanity has, in effect, been pushed into a corner with two options: abandon war altogether or perish in a nuclear holocaust. Some have argued that the decision to deploy atomic bombs against the Japanese was intended to “send a message” to the Soviet Union. But according to Dueck, "Although Truman and his advisors spoke of gaining diplomatic leverage with Stalin through possession of the bomb, there is no reason to believe that the primary reason for dropping the bomb on Japan was anything other than what Truman said it was--to end the war as soon as possible and save American lives" (1997, p.21). All fears aside, in terms of sheer utility, using the Bomb was the quickest way to end the War.
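To make that scale concrete, here is a rough order-of-magnitude comparison of my own (an illustration, not something drawn from Dueck or Lackey). Chemical reactions rearrange electrons and release a few electron-volts (eV) per atom; nuclear reactions rearrange the particles of the nucleus and release energies measured in millions of electron-volts (MeV):

$$\frac{E_{\text{nuclear}}}{E_{\text{chemical}}} \sim \frac{\text{a few MeV per nucleus}}{\text{a few eV per atom}} \approx 10^{6}.$$

That factor of roughly a million is why kilograms of fissile material can release as much energy as kilotons of conventional explosive.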



References

Dueck, Colin. "Alternatives to the Bomb." Rpt. in Ethics & Politics: Cases and Comments. Eds. Amy Gutmann and Dennis Thompson. Chicago: Nelson Hall, 1997. 16-25.

Lackey, Douglas P. "The Ethics of War and Peace." Rpt. in The Right Thing to Do: Basic Readings in Moral Philosophy. Ed. James Rachels. Boston: McGraw-Hill, 2003. 221-229.


***




Feeding the Hungry

The respective articles by Singer and Narveson are diametrically opposed—with the latter appearing, at first glance, to be lacking all compassion. Anyone with an ounce of compassion for his or her fellow human being would surely agree that feeding the hungry (or healing the sick, clothing the naked, housing the homeless, etc.) is a moral imperative. Yet reason—and the sheer reality of life on Planet Earth—contradicts Singer’s decidedly self-righteous pronouncements at every turn. To save needy children’s lives, he says, you have only to donate to UNICEF or Oxfam America. In fact, you should donate every penny that’s not absolutely essential to your own survival. Otherwise, you’re practicing “the kind of ethics that led many Germans to look away when the Nazi atrocities were being committed" (2003, p.158). Well, there is certainly nothing wrong with donating to UNICEF or Oxfam America or any other charity, and I would encourage those who feel so inclined to donate. But are these the only means of assisting others in need? Singer’s unbelievably simplistic view defies common sense in the following ways: 1) Individual conscience—by attempting to foist one, and only one (his own), moral choice on everyone, he denies the primacy of individual conscience. Those who are too selfish or callous to help others, shame on them. But for those who do feel compelled, should it not be their own personal choice how to help, and to what extent? 2) Too great a need—no matter how much you do, you cannot single-handedly save everyone. Someone will be left outside the boundaries of your charity. What about them? How do you justify helping x number of people and no more? 3) Defining “essential”—according to Singer, most families in America can subsist on $30,000 per annum, and whatever is left should be donated. Fine. But why stop there? What we call “poverty” in America would be tremendous wealth in many other nations. Will Rogers said, “America is the only country in the world where people drive to the poorhouse in their cars!” And I see people in public housing projects walking around talking on their cell phones. If you want to go down to “bare essentials,” then you should sleep in a cardboard box on the street and beg for your meals. 4) Charity does not address root problems—charities are great for emergencies, like floods or earthquakes, but the root problems which cause poverty, disease, and suffering in the world can never be solved that way. Such problems have political, economic, cultural, and religious roots, and can only be addressed in those ways. Rather than simply give food to the hungry, one should ask the question: Why are they hungry? It is better to teach someone how to provide for his own needs than to simply provide for him.

I agree with Jan Narveson: “In fact, all of the incidence of substantial starvation (as opposed to the occasional flood) has been due to politics, not agriculture” (2003, p.173). When I was in high school I participated in a world hunger study project and came to the same realizations. There is an ample food supply to feed the world’s population several times over. The fundamental difficulty is political strife between nations and the insurmountable problems associated with lack of adequate infrastructure: even if you get the food there, how do you get it to those in remote areas who are starving? How do you stop corrupt regimes, such as North Korea, from confiscating the aid and using it to feed only the military? A wealthy nation like America has abundance because the system of government protects private property, a free market economy, civil rights and civil liberties, and so on. Without such political and economic stability, it would be impossible for individuals to amass much wealth. And we do have a de facto redistribution of wealth in the form of progressive tax scales and government assistance to the needy. I would argue that poorer nations, instead of resenting America or other Western democracies, should learn from our example. There are many inequities associated with capitalism and free market economies, granted, but so far nothing seems to work better.



References

Narveson, Jan. "Feeding the Hungry." Rpt. in The Right Thing to Do: Basic Readings in Moral Philosophy. Ed. James Rachels. Boston: McGraw-Hill, 2003. 162-173.

Singer, Peter. "The Singer Solution to World Poverty." Rpt. in The Right Thing to Do: Basic Readings in Moral Philosophy. Ed. James Rachels. Boston: McGraw-Hill, 2003. 154-160.


***



Civil Disobedience

Regardless of how powerful the U.S. government may appear to be, it is still a government beholden to the will (or the whims) of the people: senators, congressmen, governors, and presidents always have at least one eye on public opinion polls. Politicians know that they must maintain at least the appearance of acting on behalf of their constituencies. That’s why Senator John Stennis of Mississippi, for example, could bargain with the Nixon Administration to assist in delaying implementation of public school desegregation—a clear violation of federal law. That’s also why Governor George Wallace of Alabama made a spectacle of standing in the doorway of the University of Alabama to stop black students from gaining access to white education. Despite their respective personal opinions (although it’s hard to imagine a middle-aged white Southerner of that era having anything other than a traditional view of blacks), both men were acting to please their constituents. This also goes a long way toward explaining why the Supreme Court found itself unable—or unwilling—to deal with the serious issues of racial discrimination in American society for nearly one hundred years after emancipation: the notion of white supremacy was too deeply embedded in the national psyche. That’s one reason for the necessity of civil disobedience as an integral part of the Civil Rights movement.

This is not mere academic analysis on my part. First, I’ve actually lived in these places—Jackson, Mississippi; Birmingham, Alabama; Columbia, South Carolina—and know a thing or two about the racism in those regions. Second, I happen to be half of an interracial couple—my wife of sixteen years is African American. We, and others like us, seem to represent a veritable stake through the heart of that racist Dracula. I am well aware of the fact that not too many years ago, under Virginia law, my marriage would have been a felony, punishable by steep fines and imprisonment. Thus, laws are not always just and actions taken by the government are not ipso facto legitimate. In Greenberg’s article “Revolt at Justice,” he and his colleagues found themselves in a quandary: the boss, Attorney General John Mitchell, expected his employees to toe the line on the administration’s Civil Rights policy—a policy which, to all appearances, violated Constitutional and Supreme Court mandates (1997, pp.143-151). Nevertheless, lawyers take oaths to support the Constitution, not simply the directives of whoever occupies the position of authority. Clearly, men of conscience and legal training are to be held accountable for individual moral choices, political expediency notwithstanding. On these grounds, civil disobedience is not only justified, but warranted.

And all the above, curiously enough, is now being replayed in some of the strange emanations coming from the current administration. For example, where once Colin Powell opposed U.S. military intervention in Iraq (during the 1991 Gulf War), even to repel invading Iraqi armies from Kuwait, he now trumpets the Bush Administration’s official party line. I suppose it’s either that or lose his job. And Condoleezza Rice goes on national television splitting the finest hairs on whether she personally supports this new impending war (it was quite clear to me that she has reservations but will not express them). Okay… she wants to keep her job too. Then there are the unnamed Bush Administration officials who informed CBS that this year’s Grammy Awards show was NOT to be used as an anti-war forum for Grammy-winning artists—a directive that CBS duly heeded. What happened to freedom of speech, anyway? Without debating the merits or necessity of this looming war, I have to say that one grows weary of having one’s intelligence insulted by this administration, or one’s “patriotism” called into question for opposing its policy toward Iraq—a nation which, so far as I know, has never attacked the United States. The growing anti-war movement, which probably will not prevail, is therefore another form of civil disobedience that must not be taken lightly.


References

Greenberg, Gary J. "Revolt at Justice." Rpt. in Ethics & Politics: Cases and Comments. Eds. Amy Gutmann and Dennis Thompson. Chicago: Nelson Hall, 1997. 143-151.


***


Natural vs. Unnatural

In today’s prevailing political climate it is nearly impossible to criticize homosexuality as a form of behavior without sounding like some gay-basher. Essentially, the argument against it is a religious one. It has to do with one’s sense of morality, and thus ethics. But the natural vs. unnatural argument stems from the medieval rapprochement of Christian theology and Aristotelian philosophy—what was considered science in that era. Later came the development of real science, and through the work of such figures as Newton, Galileo, Copernicus, and many others, the “natural law” era of political and social theory began. Problems with moral condemnations of homosexuality, I suppose, led to the “violation of nature” argument. In his article Leiser (2003, pp.144-152) does a good job of picking it apart, a task made all the easier by the archaic nature of the concept. First, he makes a distinction between the “descriptive” laws of nature and the “prescriptive” laws of men: “These ‘laws’ merely describe the manner in which physical substances actually behave. They differ from municipal and federal laws in that they do not prescribe behavior.” The distinction here is valid, but the first assertion is not. For example, the “description” of the manner in which substances actually behave is not law, but a language-construct making the “law” understandable to human minds. The physical law itself is seemingly inherent within matter and energy. But there is a sharp contrast between physics (the laws of which seem inviolable) and biology, the science of life. In physics all phenomena can be reduced to mathematical formulae; not so in biology. Life itself appears to violate the laws of physics, so the “natural law” application to ecology, sociology, and so on, has always been problematic.

The assertion “anything uncommon or abnormal is unnatural” is closer to the meaning of the anti-homosexual argument. Again, I take issue with his apparent equating of “uncommon” with “abnormal.” Uncommon, as a matter of fact, is usually considered a virtue, as in “uncommon valor.” Uncommon may be thought of as anything that lies more than two or three standard deviations away from the mean of a normal (bell) curve—the brief numerical sketch below shows how rare that actually is. Uncommon can be good or bad. Consider I.Q., for example. Uncommonly high I.Q. is admired, while uncommonly low I.Q. (or idiocy) is not. Sometimes, however, persons with a genius-level I.Q. can suffer profound psychological effects. But that which is abnormal may indeed be considered unnatural. In this sense, homosexuality can be considered a form of deviant behavior, not unlike pedophilia or countless other perversions. Many forms of heterosexual behavior may also be considered abnormal, such as sadomasochism. We can be glad that there are no “morality police” spying on people’s bedrooms, and what consenting adults do behind closed doors is no one’s business. But the issue of what is morally acceptable—and in this respect “normal”—is, as I said, a religious view. I’ll say this: homosexuality is incompatible with Judeo-Christian values. Whether one accepts the practice as normal or natural depends upon one’s spiritual orientation.
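To put numbers on “two or three standard deviations,” here is a minimal sketch of my own (assuming a normal distribution; the function name is mine, not anything from Leiser). It uses the standard error function to compute how much of a population lies beyond a given number of standard deviations from the mean:

```python
# A minimal sketch: the two-tailed fraction of a normally distributed
# population lying more than k standard deviations from the mean.
from math import erfc, sqrt

def fraction_beyond(k: float) -> float:
    """Fraction of a normal population farther than k standard deviations
    from the mean (both tails combined)."""
    return erfc(k / sqrt(2))

for k in (1, 2, 3):
    print(f"more than {k} SD from the mean: {fraction_beyond(k):.2%}")

# Approximate output:
#   more than 1 SD from the mean: 31.73%
#   more than 2 SD from the mean: 4.55%
#   more than 3 SD from the mean: 0.27%
```

So “two or three standard deviations” singles out roughly the rarest one-in-twenty to one-in-four-hundred of a trait’s distribution—which, as the paragraph above notes, says nothing by itself about whether the trait is admired or condemned.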


References

Leiser, Burton. "Is Homosexuality Unnatural?" Rpt. in The Right Thing to Do: Basic Readings in Moral Philosophy. Ed. James Rachels. Boston: McGraw-Hill, 2003. 144-152.


***



Double-Effect

As the United States is, apparently, gearing up for war with Iraq (it may be only a week or two away), I’ve been hearing some of the same issues discussed as those in Simone Sandy’s piece, “Bombing the Bunker in Baghdad” (1997, pp.27-30). For example, there is talk of sending in U.N. peacekeepers to assist the weapons inspectors, and fears that Saddam Hussein will use them as “human shields,” thinking that the United States will not attack if it has to cut through innocents to get to Iraqi forces. In the 1991 bunker-bombing incident, it seems as though U.S. military planners made reasonable efforts to determine that the structure did indeed serve a military purpose and was a legitimate target. I don’t believe a known civilian building would be deliberately chosen (although targeting civilians was routine during World War II—e.g. Dresden, Hiroshima). No one likes war or wants war—myself included. It baffles me that in the 21st century—2003 A.D.—nations have not yet found a better way to settle their differences. War itself is an abomination, even when unavoidable. Clearly, the world could not just sit on its hands in 1939 and do nothing as Hitler’s armies annexed all of Europe and tried to exterminate the Jewish population. War was inevitable. So it’s my view that if a war MUST be fought, it’s better to go all out. Half-hearted efforts (Vietnam) solve nothing—they just prolong the agony. If George Bush Sr. had continued Desert Storm until Saddam Hussein was deposed, we wouldn’t be having this new confrontation today. Leaving Hussein in power virtually guaranteed another war.

And another thing: our enemies do not always share our humanitarian concerns (as exemplified by the Geneva Conventions on warfare). The Japanese demonstrated during WWII that no atrocity was too horrible to be used. Study the history of the Japanese occupation of Korea—1910-1945—and you’ll get a sense of what I’m saying. Similarly, our new enemies, Islamic militants (terrorists), care nothing for the welfare of humanity in the general sense. They are too caught up in their delusional, religious fantasies to see non-Muslims as fellow human beings. Saddam Hussein, from what I can tell, is motivated by a more secular ambition for raw power, but it would be a mistake to think he has any humanitarian notions similar to ours. The Bush Administration nowadays likes to compare Hussein to Hitler and says the situation is analogous—but that’s ridiculous. Iraq may pose a threat to its neighbors; how can it possibly threaten the United States? Political assassinations are not the official policy of this country, but realistically, there may arise situations where an unofficial assassination will have to be carried out. If there is a war, it seems likely that chemical or biological weapons will be used against American troops. I’d rather see Hussein assassinated than have tens of thousands of our troops perish horribly. The faster this thing is over, the better off we’ll all be.

References

Sandy, Simone. "Bombing the Bunker in Baghdad." Rpt. in Ethics & Politics: Cases and Comments. Eds. Amy Gutmann and Dennis Thompson. Chicago: Nelson Hall, 1997. 27-30.


***



Debate on Euthanasia

I doubt that there is any kind of definitive, absolute answer to the dilemmas of euthanasia. In the abstract, I tend to agree with Richard Doerflinger’s position: if life comes (involuntarily) from God, then only God really has the authority to end it (2003, pp.180-188). But such abstractions become meaningless when one is faced with the kind of human suffering described in the Rachels piece (2003, pp.175-179). It seems somewhat disingenuous for those of us who are not terminally ill or subjected to excruciating pain to debate the philosophy/morality of euthanasia. One would think that the victim of such an illness should have the final word in this matter. According to the Alsop story (in “The Morality of Euthanasia”), “‘If Jack were a dog,’ I thought, ‘what would be done with him?’” The problem is, human beings are not equivalent to animals. Not legally, morally, spiritually, or philosophically. Otherwise, there would be no point in having this discussion. A few observations:

As a theist I will readily agree that God (or whatever you wish to call it) created all life. But death is an integral part of the equation. God “created” that too, and one would think for very compelling reasons. Humans have the power of decision in helping to create new life—as when we choose to procreate. So there should be no inherent prohibition against the ending of life either, not in the absolute sense.

In certain circumstances “killing” is officially sanctioned, and even mandated. Where capital punishment is practiced, the State legally ends the lives of condemned persons. In the military, soldiers are trained for combat and expected to kill in times of war. Police officers and other law enforcement personnel are authorized to use deadly force under certain conditions. Even ordinary citizens can be legally exonerated in cases of “justifiable homicide” (self-defense).

What about suicide? There is indeed a strong moral taboo against the taking of one’s own life—and rightly so. It is, in most cases, a “permanent solution to a temporary problem.” Take teen suicide for example. A fifteen-year-old may feel that life’s problems are insurmountable, to the point where he/she considers ending it all. In 99.9% of those cases, however, simply growing up will resolve the problem. But there is a stark difference between the “coward’s way out” and instances where people willingly sacrifice their lives for some cause. A soldier on a battlefield may toss himself upon a live grenade to save the lives of his unit…is this not suicide? Or what about Christian martyrs in ancient Rome who chose death rather than renounce their faith in Christ? Or what about the police and firefighters who perished in the World Trade Center on 9/11? Patrick Henry said, “Give me liberty, or give me death!” Some things are worth dying for. Therefore, there is no inherent prohibition against “suicide” either.

I don’t buy any of Doerflinger’s “slippery slope” arguments, which are all based on assumptions. Not that the issues he raises are unworthy of argument—they are worth debating. The position that favors euthanasia is bound to divide the medical profession. And Dr. Death (Jack Kevorkian) does give me the creeps. But do we have the right to deny terminally ill patients a choice in their own fate? I say that such individuals should be able to legally request the means to end their own suffering.



References

Doerflinger, Richard. "Assisted Suicide: Pro-Choice or Anti-Life." Rpt. in The Right Thing to Do: Basic Readings in Moral Philosophy. Ed. James Rachels. Boston: McGraw-Hill, 2003. 180-188.

Rachels, James. "The Morality of Euthanasia." Rpt. in The Right Thing to Do: Basic Readings in Moral Philosophy. Ed. James Rachels. Boston: McGraw-Hill, 2003. 175-179.


***


Loyalty vs. Civic Responsibility

In cases of conflicting values--as in the case of loyalty vs. truth-telling (to authority figures, that is)--problems arise when we take an absolute position on either side. There is also what is known as the "either-or fallacy"--i.e. presenting two, and only two, options when others may be available. I found myself in this quandary in a job situation a few years ago. There were security issues in the store and the District Manager (supervisor) came in one day to conduct personal interviews with each employee. I was among the last to be interviewed, but when my time came he told me, "I already know everything that is going on here, I know that x is doing this and y is doing that, so if you try to cover up for them I'll know you're lying." I told him that I was very uncomfortable informing on friends and co-workers and that I thought their misdeeds were "none of my business." The DM disagreed and said, "If you know something is going on and don't say something about it, you're just as guilty as they are." He had a valid point, of course, and since I had nothing new to reveal, I confirmed what he already knew. The store manager eventually lost his job and accused me and several others of "conspiring" against him (which is ridiculous).

But there is indeed a conflict between what we call our "civic responsibility" and the natural loyalties that arise among human beings in their everyday encounters. Both are needed and both, in fact, are sanctioned in law. Civic responsibility demands that ordinary citizens cooperate with law enforcement in the apprehension and prosecution of criminals. In other words, crime is everybody's problem and everybody's responsibility--not just those who are directly involved. Some may understandably fear for their lives, and how can you blame somebody who opts for self-preservation? But the mindless "I don't want to get involved" attitude is contemptible. Outright refusal to give testimony or serve as a witness in court could be considered an obstruction of justice. It might even be possible, under some circumstances, for an uncooperative witness to be treated more harshly than the accused. You could wind up sitting in jail on a contempt charge while the offender goes free! So our collective responsibility must be taken seriously. Loyalty, on the other hand, is much more primal and basic. Essentially, it is a survival mechanism by which human beings bond together for common purposes and goals. No one should understand this better than police officers, whose "blue wall of silence" is the stuff of legend. The reason is simple: officers trust one another with their lives on a daily basis; loyalty is born out of that trust, a kind of comradeship that is hard to find anywhere else. And the law recognizes that essential loyalty in a number of ways. For instance, husbands and wives cannot be compelled to testify against one another, even when the charges are grave. There is doctor-patient privilege, priest-penitent privilege, and most significantly, attorney-client privilege [note: under certain conditions that privilege can be broken]. So even if an accused confesses to his lawyer and says, "I did it," the lawyer cannot reveal the fact and must still defend his client--even to the point of winning an acquittal.

Perhaps it would surprise you to learn, therefore, that privileged communications are now subject to government eavesdropping by authority of the USA Patriot Act. According to the ACLU, "the Justice Department, unilaterally, without judicial oversight, and without meaningful standards, has issued rules that give it the power to decide when to eavesdrop on the confidential attorney-client conversations of a person whom the Justice Department itself may be seeking to prosecute. This regulation, implemented without the usual opportunity for prior public comment, is an unprecedented frontal assault on the attorney-client privilege and the right to counsel guaranteed by the Constitution. It is especially disturbing that these provisions for monitoring confidential attorney-client communications apply not only to convicted prisoners in the custody of the Bureau of Prisons, but to all persons in the custody of the Department of Justice, including pretrial detainees who have not yet been convicted of crime and are presumed innocent, as well as material witnesses and immigration detainees, who are not accused of any crime. 28 C.F.R. § 501.3(f) (proposed amendment)" (1). Initially, this invasion was believed to apply only to non-U.S. citizens deemed "enemy combatants," but since its introduction two years ago it has been used against natural-born citizens as well--e.g. John Walker Lindh. Apparently, all it takes is to be labeled a "suspected terrorist" by the DOJ and your constitutional rights are voided.

Source:

(1) http://archive.aclu.org/congress/l112801a.html


***



Abortion

One is loath to enter this abortion debate--the emotions generated are too far beyond rational argument to make the exercise meaningful. No one who is firmly committed to one side or the other will yield an inch. The debate masquerades as a religious or philosophical disagreement. This means there are religious overtones and justifications, but I contend it is something other than religion that compels a "man of God" to pick up a shotgun and murder a doctor and his bodyguard as they arrive at an abortion clinic (e.g. Rev. Paul Hill); and philosophical disagreements do not usually involve the planting of high explosives (e.g. Eric Rudolph). Some other psychological force is at work here, a force that manifests itself as terrorism (which also has religious justifications). Let me give an example: Islamic radicals rail against the "immorality" of the West--the United States in particular--yet think nothing of slaughtering innocent people, including women and children. Similarly, those who vehemently oppose abortion, even to the point of violence, belong to the same conservative camp that supports cutting welfare programs, government assistance to the needy, and so on. While defending the "unborn" with raised fists, knives, and firearms, they show a curious disregard for the born--i.e. the children of impoverished mothers. Obviously, the debate has more to do with the self-perception of the protester than the object of protest.

Having stated the secondary importance of the religious/philosophical argument, we should, nevertheless, briefly consider the form. It is not so much "when does life begin" as it is "when does one become a complete and independent human being"--and as such under the full protection of law. Obviously, life begins at conception and must be carefully considered from that point. But a complete and independent human being is, by definition, one that is born--a separate entity from its mother. Although a premature baby can be removed from the womb and survive, its status as an independent human being depends on separation from the mother. The pro-life movement insists, however, that not only does life begin at conception, so does the individual's status as an independent human being--with citizenship, equal rights, civil liberties, the whole enchilada. If that is the extent of one's religious faith or philosophy, so be it. But there are innumerable difficulties that arise from equating unborn fetuses to born human beings, not the least of which include the rights of the mother. Personally, I do not regard an unborn fetus as "fully human" until the moment of birth--when the umbilical is cut and the child draws its first breath. Before that point, then, abortion is a medical procedure. Thus, I see no problem with the government funding abortions for poor women--so long as it is a legal procedure. Roe v. Wade established the legality of it; the moral arguments against it are a complete distraction so far as I'm concerned.

On the basis of the personal conviction stated above (my own, that is), I have to say that legislation regarding reproductive technology is both moralistic and paternalistic--and an issue the government should steer away from. The government's legitimate concern begins once an individual is born, but not before. In communist China, for instance, the government once adopted a policy of no more than one child per family; those who ignored the policy were subject to criminal sanctions. Should we have similar laws in the United States? I would not deny anyone their religious persuasion (as long as it is not inflicted upon me), and there is a certain religious bias in favor of large families. My mother is from a large family (six brothers and sisters) and those that I've met from similar families have attracted my admiration and a bit of envy. As for cloning, that is another non-issue. Nature has been cloning for billions of years, to good effect. Human clones occur naturally--that's what identical twins are, after all. They have the very same DNA. But even though identical twins--with identical DNA--may be regarded as the "same person" in the technical sense, that abruptly comes to an end after they are born. Differing experiences, social distinctions, as well as legal distinctions (no matter how identical they may be, from a legal standpoint they are separate individuals and could just as well be unrelated) sunder them into unique entities. Thus, even if you could clone 100 identical copies of, say, Paul Trible, each would develop into a unique and particular human being--just as identical twins do. There's no difference at all, so what business is it of government?

The crux of these issues--abortion and human cloning--is this: should human beings emulate, manipulate, or in any way circumvent nature? Both things occur naturally, without human intervention. For example, miscarriages and stillbirths--which happen frequently--are natural forms of abortion; and, as stated, monozygotic twins are naturally occurring clones. There is some cloning in the plant and animal worlds as well. The fact that human beings "play God" by manipulating nature's methods is nothing new. After all, what is agriculture? What is aviation? What is chemistry? These are all examples of man emulating what nature already does on its own. About 100 years ago there was a great deal of opposition to the birth of aviation: "If man were meant to fly he'd have wings" went the saying. Before that there was even more opposition (religiously based) against science. Although the development of agriculture occurred in pre-historic times (i.e. before the invention of writing), one can imagine that some must have regarded it as a threat to traditional hunter-gatherer cultures. In short, there is no advance of technology or science unaccompanied by controversy or outright hostility.

***


Ethics in Science

I have often pondered the enormous popularity of science fiction as a literary genre—what is the source of its appeal? It is only occasionally good fiction, and much of it is distressingly hackneyed. Nevertheless, I cut my literary teeth on books by H.G. Wells, Isaac Asimov, Ray Bradbury, and Robert A. Heinlein. Its point of fascination must be the emphasis on science, applied technology, and futurism. In short, science is interesting. A cursory reading of history will show that it is science, more than anything else, which has defined the world we know: space-flight, computer technology, global communications, nuclear power—all are made possible by science. Yet we still do not live in a technological paradise such as that depicted in the mythos of Star Trek. Why is that? It is because science is subject to misuse. As the machine becomes more and more important, human beings themselves seem to shrink to insignificance. It is as if we are mere caretakers (and poor ones at that).

Thus emerges someone like the Unabomber—thinking to single-handedly halt what he sees as a rush to environmental armageddon. He is obviously a disturbed individual, but should we perhaps read his manifesto anyway? Aren’t there at least some who agree with his premise? Nowadays, for example, people fear the so-called “millennium bug” as if it were the Apocalypse. Even Pat Robertson of CBN has got in on the act, predicting nothing short of global catastrophe. The controversy over nuclear power seems to have subsided in recent years, but the weapons are still with us. What if one should fall into the hands of terrorists? What if some self-appointed prophet should think it his Divine mission to liquidate half a city, half a nation, or half the world? Who do we have to thank for these and other dilemmas? Again, modern science.

In many ways, science occupies a place in society that was once filled by religion, and though the two seem to be completely at odds, there are striking similarities between them. Both offer a distinct worldview. Both explain the origin of the universe and project its long-term fate. Professional scientists are almost like clergy, operating in realms inaccessible to the layperson. Scientific journals function much like holy scripture, and the giants of bygone centuries (Galileo, Copernicus, Newton, etc.) like saints. Some of these men even suffered persecution for their “faith”—like Galileo before the Inquisition in 1615. Science has its own Holy Grail-type obsessions, too—the Human Genome Project, for instance, or the search for a Grand Unified Theory (GUT) in physics. Is all this mere coincidence? Although we are long accustomed to thinking of religion and science as being mutually inimical, they do share a common, albeit contrasting, purpose: 1) the removal of human ignorance, and 2) the overcoming of a state of discord with some desired natural order.


What Is Science?

According to physicist Morris Shamos, science is “our formal contact with nature, our window on the universe, so to speak. It is a very special way that humans have devised for looking at ordinary things and trying to understand them" (1995, p.46). It is “special” because our traditional way of explaining natural phenomena is inadequate. After all, thunder and lightning do not result from Zeus hurling thunderbolts or God “moving His furniture around.” Science is the art of accurate description, of extending our senses through instrumentation, but more importantly, it is “the design of conceptual schemes, models, and theories that serve to account for major segments of our experience with nature" (ibid).

Science cannot accept a supernatural or magical explanation for anything, which, unfortunately, cuts against the grain of religion. Experience shows that nature is orderly, that under the same conditions the same phenomena are likely to occur. There is an element of predictability. For example, the sun always rises in the East, table salt is always a compound of sodium and chlorine, and apple seeds always produce apple trees, never some other kind of tree. Science is thus a search for verifiable truth (ibid). For this reason, truth tests become important, and one such test involves the concept of falsifiability. In other words, for a premise to be scientific, one must be able to prove it wrong. The statement “there is no life on other planets” can be proven wrong—the discovery of extra-terrestrial life would accomplish that. However, the statement “life exists on other worlds” is not quite scientific because although one can prove it correct, it can never be proven wrong (if one probed the galaxy and found no life anywhere, that does not preclude the possibility of it being found somewhere else).

Why is it insufficient merely to prove a theory “correct”? It is because obtaining the same result from an experiment 100 times does not mean that a contradictory result will not be observed should one perform it a 101st time—it only increases the probability that the theory is correct (a small calculation below makes the point). Both tests are essential for verifying scientific truth, and are the indispensable tools for exposing what can be called pseudoscience: astrology, flying saucers, alien abductions, and the like. Interestingly enough, two of the most cherished and widely accepted scientific theories cannot be regarded as absolutes due to the non-applicability of both truth tests—namely, the atomic theory of matter and the theory of evolution (ibid). In these matters, one could say that the jury is still out. It is important to understand these rigorous (and often inconvenient) tests because failure to adhere to the rules results in embarrassing mistakes.
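A small calculation of my own (not from Shamos) makes the asymmetry plain. If an experiment has some true probability $p$ of yielding a contradictory result, the chance of seeing 100 confirmations in a row is

$$(1-p)^{100}, \qquad \text{e.g.}\ (1-0.01)^{100} = 0.99^{100} \approx 0.37,$$

so even a one-in-a-hundred exception would slip past 100 successful trials more than a third of the time. Confirmation raises the probability of a theory; only a failed prediction can rule it out.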

Regarding ethics, the first and foremost consideration is one of integrity: science must be true to itself. Here, a distinction can be made between “good science” and “bad science.” The former, true to its noble foundations in classical Greece, duly adheres to its own restrictions, does not jump to hasty conclusions, and offers only theories of the highest order of probability. Thus Isaac Newton, who discovered the law of universal gravitation, waited some twenty years to publish his findings, wanting to be certain there were no mistakes in his calculations. In this age of publicity-seeking and Nobel Prize-coveting, however, the competition among scientists to be “the first” to make some breakthrough discovery is so rabid that inexcusable ethical lapses occur.


Good Science / Bad Science

Consider the story of cold fusion—one of the more embarrassing moments in modern physics. On March 23, 1989, two chemists at the University of Utah announced that they had achieved the impossible—cold fusion, or nuclear fusion in a test tube. If true, this would certainly have been the discovery of the century. Science writer Gary Taubes (1993, p.xviii) puts it this way:

It was considered the energy source that would save humankind: the mechanism that powers the sun and stars, harnessed to provide limitless amounts of electricity. Since shortly after the Second World War, physicists had worked to induce, tame, and sustain fusion reactions by re-creating the hellish heat and pressure at the center of the sun in a controlled setting. The conventional wisdom was that sustained nuclear fusion could only be achieved in the laboratory with enough heat—tens of millions of degrees—and extraordinary technological wizardry.

There are, of course, two ways to release nuclear energy. Fission involves splitting the atomic nucleus and setting up a chain reaction, usually in a dense radioactive metal such as uranium or plutonium. Fusion is the forcible joining of two nuclei under extreme conditions, such as those found at the centers of stars. Albert Einstein’s equation E = mc² states that matter can be converted to energy, and vice-versa; but in nature fusion reactions occur only when heat and pressure have reached critical levels. At the center of our sun, for example, the temperature is an estimated 15.5 million kelvins and the density, under billions of tons of pressure, about 160 grams per cubic centimeter. By comparison, the Earth’s mean density is only 5.52 grams per cubic centimeter (Kaufmann, 1985, pp.158, 335). Fusion reactions release far more energy, pound for pound, than fission reactions, making the latter seem insignificant by comparison. To make this point clear, one should realize that a hydrogen (fusion) bomb uses an atomic (fission) bomb merely as its detonator. It is like comparing a stick of dynamite to a firecracker. Thus, while stable fission reactors are feasible for domestic use, fusion reactors remain beyond the reach of current technologies. The March 23rd press conference announcing fusion reactions at room temperature, understandably, sent shock waves of excitement and disbelief through the scientific community.
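
To get a rough sense of the energy scale involved, consider what Einstein’s equation implies for a single gram of matter converted entirely into energy (the arithmetic below is my own back-of-the-envelope illustration, not Kaufmann’s):

E = mc^2 = (10^{-3}\ \text{kg}) \times (3 \times 10^{8}\ \text{m/s})^{2} = 9 \times 10^{13}\ \text{J}

That is on the order of twenty kilotons of TNT, comparable to the Hiroshima bomb, from one gram of matter. Real fission and fusion reactions convert only a small fraction of their fuel’s mass, but the equation shows why even that fraction dwarfs any chemical explosive.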

The experiment at the University of Utah, which was conducted “for the fun of it” (qtd. in Taubes, p.4), was the brainchild of Stanley Pons and Martin Fleischmann. An electrode, a solid block of palladium suspended on a wire, was submerged in a large beaker filled with heavy water and lithium. An electric current was passed between the palladium electrode and a platinum one. The apparatus was continuously charged for seven months, until “one fateful evening when young Joey Pons, who had been running the experiments for his father, lowered the current and left for the night” (ibid). A meltdown of some sort occurred during the night. No one witnessed it. From this point there were varying reports of the result: some said that half the palladium cube had dissolved, others that the entire apparatus had been destroyed, still others that an enormous hole had been blown through several feet of concrete (ibid). There were also conflicting reports about radiation levels (if fusion had indeed occurred, lethal doses of radiation should have been released). To make matters worse, there was virtually no data accompanying the experiment.

None of this violates the basic rules of science, however. It was by publicizing the (unconfirmed) results of the experiment, with little hope of verifying that cold fusion had actually taken place, that the scientists crossed the line. They had submitted a proposal to the Department of Energy (DOE) to request funding. The DOE sent the proposal for review to physicist Steven Jones at Brigham Young University, who promptly tried to duplicate the experiment. When rumors began to circulate that Jones was planning to call a press conference, University of Utah president Chase Peterson hastily scheduled his own, and the discovery of cold fusion was announced to the world. In just a few short years, the reporters were confidently assured, commercial fusion reactors using their technology would be built—reactors fueled by ordinary seawater (ibid).

Needless to say, this extravagant promise was never fulfilled. Cold fusion power plants running on seawater have never materialized. What is the reason? Simply put, cold fusion had never actually occurred, and there was no real scientific breakthrough at all. As the months went by, every independent attempt to repeat the results of the original experiment failed miserably. According to Taubes: “Cold fusion—as defined by Stanley Pons and Martin Fleischmann, or Steve Jones… or whomever—did not exist. It never had. There was at least as much empirical evidence, if not more, to support the existence of any number of pseudoscientific phenomena, from flying saucers to astrology” (ibid).

This whole fiasco was an example of bad science—a failure to follow the rules, compounded by the ego-driven motives of those would-be Einsteins. But an even more astounding collapse of ethics occurred in the early part of the twentieth century, one that involved not just sloppiness, but outright chicanery: namely, the Piltdown hoax.

In 1911 lawyer and amateur paleontologist Charles Dawson unearthed a most unusual specimen in Piltdown, southern England. It consisted of what appeared to be a human skull with a decidedly apelike jaw. Several non-human teeth were also found. It was an amazing discovery because although hominid (pre-human) remains had been found in such places as Java, China, Africa, and continental Europe, very little had been found in England. This creature was given the scientific name Eoanthropus dawsoni (which means “Dawson’s dawn man”), and was thought to have lived about two million years ago. The press dubbed it “the First Englishman” (although why any self-respecting Englishman would want to claim descent from a half-ape is beyond me). From the beginning, though, Piltdown Man had its critics. According to researcher John Evangelist Walsh:

The Piltdown mandible (jaw), especially, precipitated loud disagreement, as it had from the first. A jaw so thoroughly apelike, critics insisted, simply did not belong with a cranium (brain case) so undeniably human. Piltdown, it was charged, had been mistakenly manufactured from two separate creatures, a fossil man and fossil ape: the remains of the two just happened to come together in the ground, a freakish prank of nature. Combining them only created a monstrosity that never in fact existed. (1996, p.6)

In the fossil record of hominid development it was clear that cranium and mandible evolved together—that as the brain case increased in size, the jaw became less and less apelike.

But the Piltdown Man had staunch defenders also, respected and reputable men like Sir Arthur Smith Woodward of the Natural History Museum in London, and physician/author Sir Arthur Conan Doyle. They argued that the odds of the fossils being deposited separately, from two different animals, were astronomical—they had to have come from the same creature. Nevertheless, the debate raged on for some forty years.

As more and more was learned of hominid evolution, the Piltdown fossils posed more of a puzzle. They did not seem to fit in with the rest of the slowly emerging picture. Piltdown Man was eventually regarded as an exceedingly strange evolutionary dead-end. In the end, however, it was the rigorous tests of developing science that revealed the truth.

In 1949 the new technique of fluorine testing was applied to all the available Piltdown artifacts. During the slow process of fossilization, trace amounts of fluorine are absorbed by the bones from the soil. The relative amounts of that substance in a fossil can give a rough estimate of its age. If different ages for the cranium and jawbone could be established, it would prove that they came from two separate creatures. As it turned out, the fluorine content was the same for all the artifacts, but, unexpectedly, they were revealed to be of much more recent origin than anyone imagined—they were no more than 50,000 years old (ibid). This was puzzling because at that date, anatomically modern humans were widespread on earth. Piltdown Man was “not anywhere near a ‘dawn man,’ let alone a missing link. He was a shocking anachronism, an impossible survival out of a dim and far distant past" (ibid).

Finally, in 1953 a painstaking examination by anthropologist Joseph Weiner confirmed the skeptics’ suspicions. The teeth were primate in origin but skillfully filed down to resemble human wear patterns. All the fossils had been chemically treated to give the appearance of great antiquity. The jawbone was positively identified as that of an orangutan, the cranium as that of a modern human. A more refined radiocarbon dating technique revealed that the skull was about 620 years old, the jawbone slightly younger. Thus, the Piltdown Man was a forgery, a hoax, and “the most famous creature ever to grace the prehistoric scene, had been ingeniously manufactured from a medieval Englishman and a Far-Eastern ape” (ibid).
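
For readers unfamiliar with the method, radiocarbon dating rests on a simple exponential decay law; the figures below are my own illustration rather than Walsh’s:

N(t) = N_0\, e^{-\lambda t}, \qquad \lambda = \frac{\ln 2}{5730\ \text{yr}} \approx 1.21 \times 10^{-4}\ \text{yr}^{-1}

A sample only about 620 years old therefore retains e^{-(1.21 \times 10^{-4})(620)} \approx 0.93, or roughly 93 percent, of its original carbon-14, whereas a genuinely ancient specimen would retain almost none. That is why the test so decisively unmasked the forgery.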

Blame for the Piltdown forgery has never been adequately determined, though most suspicion falls on Dawson himself. We will never know for sure, since Dawson died in 1916. In this case, however, “good science” triumphed over an unscrupulous attempt to muddy the waters of research and perhaps discredit the whole field of anthropology.



Ethical Considerations

Obviously, it is essential for science to maintain its own integrity, but does it have an obligation to the larger human community? If so, to whom does it answer: the government? Industry? Whatever the source of its funding happens to be? In this regard, science shares another role with religion: it is, by nature, an independent force. Just as no government can mandate belief in God or define an acceptable theology (at least not from an American point of view), neither can it alter the nature of science. The laws of physics, thermodynamics, mathematics, and so on are not subject to legislation.

Until recently, scientists were usually content to pursue their work, giving little thought to what was done with their discoveries. That mindset changed during World War II, however, as physicists edged closer and closer to the development of nuclear power. Speculation about such power began as early as 1903, when atomic structure was just beginning to be understood. As more was learned, harnessing it gradually became not a theoretical problem but one of technological capability. The most significant progress was made in Germany, France, and England during the 1930s.

In 1933—the year Hitler came to power—physicist Leo Szilard conceived of the nuclear chain reaction and soon filed a patent on the idea in England, but according to Einstein biographer Ronald W. Clark, the British War Office was “not interested” (1971, p.664). Meanwhile, Enrico Fermi, who would later flee fascist Italy, was conducting experiments with uranium. These experiments were later repeated at the Kaiser Wilhelm Institute in Berlin by Lise Meitner, Otto Hahn, and Fritz Strassmann, and performed in Paris by Irène and Frédéric Joliot-Curie (ibid). When Hitler annexed Austria, Meitner fled for her life, joining the many Jewish scientists, Albert Einstein among them, who had already escaped the Nazis. When Danish physicist Niels Bohr traveled to America to attend the Fifth Washington Conference on Theoretical Physics, he caused a mild sensation in the scientific community with news of the Berlin and Paris experiments. In 1939 the dean of graduate faculties at Columbia University wrote to Admiral Hooper of the United States Navy, warning him of “the possibility that uranium might be used as an explosive that would liberate a million times as much energy per pound as any known explosive” (qtd. in Clark, p.666). Similar warnings were being given in Holland, France, Belgium, and England, and these countries began scrambling to obtain stockpiles of uranium. And all of this activity preceded that famous letter to President Roosevelt, signed by Einstein, which eventually resulted in the top-secret Manhattan Project.

Apparently, the scientific community was anxious to keep the secret of nuclear power out of Hitler’s hands, but this raises ethical questions: should science really care who benefits from its research? Why shouldn’t the results simply go to the highest bidder? In too many cases they do, but the story of nuclear energy, described above, shows why science cannot afford to be careless. Whatever one may think about nuclear weapons—dreadful though they are—no one will disagree that keeping them out of Hitler’s arsenal was the right thing to do.

After the development of the Bomb, Albert Einstein—perhaps the century’s most important scientific figure—headed a committee of concerned scientists to grapple with the ethical obligations of their profession. Clearly, science has a moral obligation to serve humanity and promote the betterment of the world. It must rise above nationalistic, ideological, and economic constraints.

Again, science fulfills a role comparable to that of religion or philosophy, and as it marches forward into the realm of the purely theoretical, it encroaches upon territory once reserved for mystics and dreamers. In his book The Edges of Science, physicist Richard Morris writes:

… there are some scientific fields in which the frontiers have been pushed so far forward that scientists have found themselves asking questions that have always been considered to be metaphysical, not scientific, in nature. Nobel-prize winning physicists have been so taken aback by some of their colleague’s speculation that they call some of the new theories nonsense, or even compare them to exercises in medieval theology. (1990, p.x)

As modern science edges closer and closer to religion, religion must recognize its need for science. That is because nowadays, men of intellect and rational thought cannot accept reliance upon magic and the supernatural. Such an incongruity causes many to reject faith in God altogether. But if God did create the universe, it seems likely that He did so, not through magic, but through the very physical, chemical, and biological laws that science endeavors to explain. Once it is understood that God is a God of science—not magic—then another ethical dimension emerges. Science is not the enemy of religion or of faith, but should be its principal partner in building a viable future for all.


References

Clark, R.W. (1971). Einstein: The Life and Times. New York: Avon.

Kaufmann, W. J. (1985). Universe. New York: Freeman.

Morris, R. (1990). The Edges of Science: Crossing the Boundary from Physics to Metaphysics. New York: Prentice Hall.

Shamos, M.H. (1995). The Myth of Scientific Literacy. New Brunswick, NJ: Rutgers University Press.

Taubes, G. (1993). Bad Science: The Short Life and Weird Times of Cold Fusion. New York: Random House.

Walsh, J.E. (1996). Unraveling Piltdown: The Science Fraud of the Century and Its Solution. New York: Random House.
