Speaking Truth to Power

From Hewitt:

Bishop Thomas Tobin opens a can of whoop-ass on Congressman Patrick Kennedy over his “I’m pro-choice and a good Catholic, too” shtick:

“The fact that I disagree with the hierarchy on some issues does not make me any less of a Catholic.” Well, in fact, Congressman, in a way it does. …

There’s lots of canonical and theological verbiage there, Congressman, but what it means is that if you don’t accept the teachings of the Church your communion with the Church is flawed, or in your own words, makes you “less of a Catholic.”

But let’s get down to a more practical question; let’s approach it this way: What does it mean, really, to be a Catholic? After all, being a Catholic has to mean something, right?

Well, in simple terms … being a Catholic means that you’re part of a faith community that possesses a clearly defined authority and doctrine, obligations and expectations. It means that you believe and accept the teachings of the Church, especially on essential matters of faith and morals; that you belong to a local Catholic community, a parish; that you attend Mass on Sundays and receive the sacraments regularly; that you support the Church, personally, publicly, spiritually and financially.

Congressman, I’m not sure whether or not you fulfill the basic requirements of being a Catholic, so let me ask: Do you accept the teachings of the Church on essential matters of faith and morals, including our stance on abortion? Do you belong to a local Catholic community, a parish? Do you attend Mass on Sundays and receive the sacraments regularly? Do you support the Church, personally, publicly, spiritually and financially?

In your letter you say that you “embrace your faith.” Terrific. But if you don’t fulfill the basic requirements of membership, what is it exactly that makes you a Catholic? Your baptism as an infant? Your family ties? Your cultural heritage?

Bravo. Look, if you’re pro-choice, fine. But spare us the hypocrisy of claiming to be a “faithful Catholic” and pro-abortion at the same time. That dog won’t hunt, and it’s long past time our vaunted political leadership got called on it.

Cult of Death or Heart of Man

Today is the fifth anniversary of the massacre at Beslan. The following post was written shortly thereafter. Michelle Malkin also has a remembrance of this horror.

 
David Brooks, in his NY Times Op-Ed piece, Cult of Death, says the following about the Muslim terrorists and the Beslan school massacre:

We should be used to this pathological mass movement by now. We should be able to talk about such things. Yet when you look at the Western reaction to the Beslan massacres, you see people quick to divert their attention away from the core horror of this act, as if to say: We don’t want to stare into this abyss. We don’t want to acknowledge those parts of human nature that were on display in Beslan. Something here, if thought about too deeply, undermines the categories we use to live our lives, undermines our faith in the essential goodness of human beings.

It should come as no surprise to me – yet it still does – that people have any confidence remaining in the idea of the “essential goodness of human beings.” Yet this is perhaps one of the most durable myths of our modern secular age. It underlies both public policy and private perception, and forms the basis of many failed government and social programs. If you have the stomach for it and the honesty to look objectively, even a brief glance at human history, both ancient and modern, reveals vastly more evidence of the depravity of man than of his essential goodness. Consider briefly the following examples: the Inquisition, slavery, Genghis Khan, the Holocaust, the Bataan Death March, the Cambodian killing fields, Rwanda, Idi Amin, Columbine, Saddam’s rape rooms and shredders, suicide bombers on school buses and in pizza parlors, the rape of Nanking, the gulags, and Wounded Knee. And these are only the large historical events, easy to bring to mind. Left unmentioned but vastly outnumbering these are the countless murders, rapes, child molestations, serial killings, drug deals, and any number of other smaller – but still profoundly evil – acts which now barely if ever make the news.

I am not a misanthrope, and am fully aware of the potential for man to achieve great goodness and nobility. From the selfless volunteer at an inner city school to Mother Teresa, countless examples of such goodness and nobility exist, often hidden and far less noticed than deeds of evil. The issue is the natural inclination, the deep inner nature of man – is it toward good, or rather toward evil? Your answer to this question profoundly affects your worldview.

By taking the position that man is essentially good, you are left with the problem of understanding inexplicable evil, such as torturing school children and shooting them in the back as they flee, as occurred at Beslan. In evil of lesser scope, psychology and social theory are often recruited for this task: the child molester or rapist was abused as a child; inner city crime is a result of racism; the root of terrorism is poverty, injustice, and the oppression of the Palestinians by the Jews. Even there the answers fall short. But could any such combination of social liabilities give rise to such extreme evil, as seen at Beslan or Auschwitz – particularly in beings whose natural bent is toward goodness?

The Judeo-Christian viewpoint on man’s essential nature is that man is fallen: created by a good God to be by nature good, but given free will either to submit to the good or to choose evil. Having rejected the good for personal autonomy independent of God, the natural gravity of the soul is away from God, not toward Him. In God is an unspeakable and unimaginable goodness; in His rejection is the potential for equally unimaginable evil. The Judeo-Christian solution is redemption, not psychology; inner transformation, not social programs.

To resist evil, you must know the face of evil, and recognize the face of good. The secularist denies the existence of God (or counts Him or it irrelevant), and therefore all goodness must have its source within man. The religious liberal believes God is good, but impotent, and therefore man is responsible to do the heavy lifting of all good works. The traditional Christian or Jew understands that man, though created by God with enormous potential for good, is corrupted by his failure to submit to God, and is therefore by nature far more prone to evil than good.

Religious affiliation is an unreliable indicator of good or evil behavior. The combination of evil motives with the compulsion of legalistic religion is a potent and dangerous mix, where men pursue their evil goals under the lash of, and laboring for, an angry god of their own making.

Man’s tendency to evil can be restrained by force of law, by force of arms, or, ideally, by inner transformation: repentance and submission to the power of humility and service. Wishful thinking and false assumptions about the goodness of man will prove woefully inadequate for the encroaching and fearsome evil of our current century.

Killing Mercy

The ethics of euthanasia, an issue which generally stays just barely on our radar screens given the host of contentious social issues taking up our political and cultural bandwidth, may nevertheless ultimately prove to be an enormous dilemma, with profound impact on our lives both as a society and as individuals. While the issue has only occasionally nosed into the political limelight–usually associated with some initiative regarding physician-assisted suicide–the underlying currents which keep this matter very much alive are powerful and unlikely to be resolved easily or painlessly.

There is broad appeal to the idea of euthanasia. It seems to fit perfectly into our Western democratic principles of the autonomy of the individual, rights and freedom, and the desire to control our own destinies. It seems as well an ideal solution to an out-of-control health care system, where technology and advances in life-sustaining capabilities seem to have taken on a life of their own, driving health care costs to extraordinary levels in the final years of our lives, and seemingly removing much of the dignity we believe should be the inherent right of the dying. Patients’ families watch helplessly as their loved ones appear to be strung along in their dying days, tubes and wires exiting from every orifice, a relentless train of unknown physicians and ever-changing nurses breezing in and out of their rooms to tweak this medication or that machine. We all wish for something different for ourselves as well as our loved ones, but seem incapable of bringing that vision to fruition.

Euthanasia offers what appears to be an ideal solution to many of these difficulties. We love the idea that the individual may choose the time and place of their own demise; we see an easy and painless exit from prolonged suffering; we visualize a measure of mastery returning to a situation where all seems out of control; we see a solution to pointless expenditures of vast sums of money on patients with little or no hope of recovery. It is for these reasons that initiatives to legalize this process are commonly called “death with dignity” or some similar euphemism reflecting these positive aspects–and, when put forward, they often find a substantial degree of public approval as a result.

This appeal grows ever stronger as our culture increasingly emphasizes personal autonomy and de-emphasizes social responsibility. We are, after all, the captains of our own ship, are we not? A culture which believes that individual behavior should be virtually without limit as long as “no one is harmed” can see little or no rational reason why such individual autonomy should not be extended to end-of-life decisions.

The reality, unfortunately, is that “no one is harmed” is a uniquely inadequate standard for human behavior, and our autonomy is far less than we would like to believe, for it assumes that human behavior occurs in a vacuum. Thus we hear that sexual relations between consenting adults are entirely reasonable if “no one is harmed”–a standard commonly applied to relationships outside of marriage, for example, which often end up having a profound and destructive effect on the spouse, and particularly on the children. “No one is harmed” serves as mere justification for autonomous behavior while denying or minimizing the inevitable adverse consequences of that behavior. When Joe has an affair with Susie at the office, and ends up in divorce court as a result, there can be little question that many are harmed: Joe’s children, not least; his wife; perhaps the husband and children of the woman with whom he has had the affair. Yet in the heat of passion, “no one is harmed” is self-evident–believed even if false. And to mention these obvious ramifications of a supposedly “harmless” behavior is to be “judgmental”–and therefore to be assiduously avoided.

But the consequences are real, and their ripple effect throughout society is profound. To cite one simple example, children from broken homes are far more prone to become involved in gangs or crime; to be abused sexually or physically; to initiate early sexual activity and become unwed mothers; to under-perform academically; and to have greater difficulty with relationships as teenagers and adults. These consequences–particularly when magnified on a society-wide scale–have effects vastly broader than the personal lives of those who have made such autonomous choices.

Similarly, libertarians (and others) often argue for drug legalization using this same hold-harmless rationale. After all, who could argue with personal drug use in the privacy of your home, since “no one is harmed”? No one is harmed, of course–unless the residual, unrecognized effects of your drug use affect your reflexes while driving the next day, resulting in an accident; or impair your judgment at work, costing your employer money or resulting in a workplace injury; or when, in the psychotic paranoia of PCP use, you decide your neighbor is trying to kill you, and beat him senseless with a baseball bat; or when the drug itself, in those so physiologically prone, leads to addictive behavior which proves destructive not merely to the individual, but to family, fellow workers, and society as a whole. Burning up every spare dollar of a family’s finances to support a drug habit, and stealing to sustain it–surely not an unusual scenario–can hardly be characterized as “no one is harmed.” To claim that there is no societal impact from such individual autonomous behavior is profoundly naive, and represents nothing more than wishful thinking.

But what about euthanasia? Surely it is reasonable to end the life of someone who is suffering unbearably, who is beyond the help of medical science, and who has no hope of survival, is it not? This, of course, is the scenario most commonly presented when legalization of euthanasia is promoted. It should be stated without equivocation that such cases do indeed exist, and represent perhaps the most difficult circumstances in which to argue against euthanasia. But it should also be said that such cases are becoming far less common as pain management techniques and physician training in terminal care improve: in my experience, and in the experience of many of my peers who care for the terminally ill, it is a rare occurrence indeed that a patient’s pain–even severe, intractable pain–cannot be managed successfully.

But the core arguments used in support of euthanasia in such dire circumstances are easily extended to other terminal situations–or situations not so very terminal at all. Intractable terminal pain merges seamlessly into hopeless prognosis, regardless of time frame; then flows without interruption to chronic diseases such as multiple sclerosis or severe disabilities. Once the principle of death as compassion becomes the guiding rule, the Grim Reaper will undergo metamorphosis into an angel of light, ready to serve one and all who suffer needlessly.

To mitigate the risk of this so-called “slippery slope,” it has been suggested that safeguards against such mission creep be crafted. Such measures may involve mandatory second opinions, waiting periods, or committee review prior to approval of an act of euthanasia. That such measures are ultimately doomed to fail is self-evident: in effect, they impose a roadblock between patient autonomy and the relief of suffering through euthanasia–and thus run counter to the core principle sustaining it. It is not difficult to foresee that such roadblocks will quickly be made less “burdensome,” if not rendered utterly impotent, by relentless pressure to prevent patients from suffering needlessly, regardless of their underlying disease.

Perhaps more importantly, the process of assessing and approving an act of euthanasia through second opinions or committee review is not some ethically neutral decision, such as vetting budget items or inventory purchases. Those who serve in such advisory or regulatory capacity must by necessity be open to–indeed supportive of–the idea of euthanasia, lest all reviewed cases be denied. As demand for euthanasia increases, such approvals will become rubber-stamped formalities, existing solely to provide defensive cover for unrestricted assisted termination.

But such arguments against euthanasia are in essence process-oriented, and miss the much larger picture of the effects of individual euthanasia on our collective attitudes about life and death, and on our societal constitution. There can be little question that the practice of actively terminating ill or dying patients will have a profound effect on the physicians who engage in it. The first few acts of euthanasia may be performed in a spirit of compassion and mercy–but repetition deadens the soul and makes the process routine. This is routinely seen in many areas of health care training and practice: the first cut of a novice surgeon is frightening and intimidating; the thousandth incision occurs with nary a thought. One’s first autopsy is ghoulish; the hundredth merely objective fact-finding. Euthanasia, practiced regularly, becomes simply another tool: this can be readily seen in the statistics from the Netherlands, where even 15 years ago a startling percentage of reported cases of euthanasia by physicians took place without explicit patient request — reflecting far more a utilitarian attitude toward euthanasia than some diabolical conspiracy to terminate the terminal. The detached clinicians, utterly desensitized to the act of taking a life, now employ it as they would the initiation of parenteral nutrition or the decision to remove a diseased gallbladder.

Such false assumptions about the objective impartiality of the decision-making process leading to euthanasia can be seen as well when looking at the family dynamics of this process. We are presented with the picture of the sad but compassionate family, quietly and peacefully coming to the conclusion that Dad–with his full assent, of course–should mercifully have his suffering ended with a simple, painless injection. Lost in this idyllic fantasy is the reality of life in families. Anyone who has gone through the death of a parent and the settlement of an estate knows first-hand the fault lines such a life crisis can expose: old grievances brought back to life, old hot buttons pushed, greed and avarice bubbling to the surface like a toxic witch’s brew. Does brother John want Dad’s dignified death so he can cop the insurance cash for his gambling habit? Does sister Sue, who hates her father and hasn’t spoken to him in years, now suddenly want his prompt demise out of genuine concern for his comfort and dignity? Are the children–watching the estate get decimated by the costs of terminal care–really being objective about their desire for Mom’s peaceful assisted death? And does Mom, who knows she’s dying, feel pressured to ask for the needle so she won’t be a burden to her children? Bitter divisions will arise between those family members who favor euthanasia and those who oppose it–whether because of their relationship, good or bad, with the parent, or because of their moral and ethical convictions. To make euthanasia the solution to difficult problems of death and dying, as suggested by its proponents, will instead require the death of our spirits: a societal hardness of heart whose effects will reach far and wide, into areas of life and culture far beyond the dying process. Mercy killing will kill our mercy; death with dignity so delivered will leave us not dignified but degraded.

The driving force behind legalized euthanasia and physician-assisted suicide is patient autonomy: the desire to maintain control over the dying process, by which, it is hoped, we will maintain our personal dignity. But the end result of legalized euthanasia will instead, in many cases, be the loss of patient autonomy. When legalized, medical termination of life will by necessity be instituted with a host of safeguards to prevent its abuse. Such safeguards will include restricting the procedure to those in dire straits: intolerable suffering, a few months to live, and the like. Inherent in these safeguards are the seeds of the death of patient autonomy: such determinations must rely on medical judgments–and therefore will ultimately lie in the hands of physicians rather than patients. It will be physicians who decide what constitutes intractable pain; physicians who judge how long you have to live; physicians who have the last say on whether your life has hope or is no longer worth living. Such decisions may well be contested–but the legal system will defer to the judgment of the health care profession in these matters. Patient autonomy will quickly become physician autocracy. For those who request euthanasia, it will be easy; for those who do not wish it, but fit the criteria, it will also be far too easy.

This has been the legal and practical evolution of euthanasia in the Netherlands. The legal progression from patient autonomy with safeguards to a virtual absence of restrictions on euthanasia is detailed in a superb paper from Brooklyn Law School’s Journal of International Law (available here as a PDF), which traces this evolution:

Soon after the Alkmaar case was decided, the Royal Dutch Medical Association (KNMG) published a set of due care guidelines that purported to define the circumstances in which Dutch physicians could ethically perform euthanasia.

The KNMG guidelines stated that, in order for a physician to respond to a euthanasia request with due care,

  • The euthanasia request must be voluntary, persistent, and well-considered.
  • The patient must suffer from intolerable and incurable pain and a discernible, terminal illness.

Thereafter, Dutch courts adopted the KNMG guidelines as the legal prerequisites of due care in a series of cases between 1985 and 2001. Despite the integration of the KNMG’s due care provisions, courts remained confused regarding what clinical circumstances satisfied the requirements of due care. In 1985, a court acquitted an anesthesiologist who provided euthanasia to a woman suffering from multiple sclerosis. The court thereby eliminated the due care requirement that a patient must suffer from a terminal illness. By 1986, courts decided that a patient need not suffer from physical pain; mental anguish would also satisfy the intolerable pain due care requirement.

Similarly, all reported prosecutions of euthanasia prior to 1993 involved patients who suffered from either physical or mental pain. Then, in the 1993 Assen case, a district court acquitted a physician who had performed active voluntary euthanasia on an otherwise healthy, forty-three year old woman. The patient did not suffer from any diagnosable physical or mental condition, but had recently lost both of her sons and had divorced her husband. With the Assen case, Dutch courts seemed to abandon the requirement that a patient suffer from intolerable pain or, for that matter, from any discernible medical condition as a pre-condition for the noodtoestand [necessity] defense.

The requisite ambiguity of all such safeguards will invariably result in their legal dilution to the point of meaninglessness–a process which increasingly facilitates the expansion not only of voluntary, but also of involuntary euthanasia. This is inevitable when one transitions from a fixed, inviolable principle (it is always wrong for a physician to kill a patient) to a relative standard (a physician may end a patient’s life under certain circumstances). The “certain circumstances” are negotiable, and once established, will evolve, slowly but inexorably, toward few or no standards at all. When the goalposts are movable, we should not be surprised when they actually get moved.

Another effect rarely considered by those favoring euthanasia is its effect on the relationship between patients and their physicians. The physician-patient relationship at its core depends upon trust: the confidence which a patient has that their physician always has their best interests at heart. This is a critical component of the medical covenant–which may involve inflicting pain and hardship (such as surgery, chemotherapy, or other painful or risky treatments) on the patient for their ultimate benefit. Underlying this trust is the patient’s confidence that the physician will never deliberately do them harm.

Once physicians are empowered to terminate life, this trust will invariably erode. This erosion will take place even were involuntary euthanasia never to occur–a highly unlikely scenario, given the Dutch experience. It will erode because the patient will now understand that the physician has been given the power to cause them great harm–to kill them–with the full legal and ethical sanction of the law. And the knowledge of this will engender fear: fear that the physician may abuse this power; fear that he may misinterpret your end-of-life wishes; fear that he may end your life for improper motives, yet justify it later as a legal and ethical act. The inevitable occurrence of involuntary euthanasia–which in an environment of legalized voluntary euthanasia will rarely if ever be prosecuted–will only augment this fear, especially among the elderly and the disabled. In the Netherlands, many seniors carry cards specifying that they do not wish to have their lives terminated–a reflection of a widespread concern that such an occurrence is not uncommon, and is feared.


The Children Whom Reason Scorns

Nazi German euthanasia poster

In the years following the Great War, a sense of doom and panic settled over Germany. Long concerned about a declining birth rate, the country faced the loss of 2 million of its fine young men in the war, the crushing burden of an economy devastated by war and the Great Depression, further compounded by the economic body blow of reparations and the loss of the German colonies imposed by the Treaty of Versailles. Many worried that the Nordic race itself was threatened with extinction.

The burgeoning new sciences of psychology, genetics, and medicine provided a glimmer of hope in this darkness. An intense fascination developed with strengthening and improving the nation through Volksgesundheit–public health. Many physicians and scientists promoted “racial hygiene” – better known today as eugenics. The Germans were hardly alone in this interest – 26 states in the U.S. had forced sterilization laws for criminals and the mentally ill during this period; Ohio debated legalized euthanasia in the ’20s; and even Oliver Wendell Holmes, in Buck v. Bell, famously upheld forced sterilization with the declaration: “Three generations of imbeciles are enough!” But Germany’s dire circumstances and its robust scientific and university resources proved a most fertile ground for this philosophy.

These novel ideas percolated rapidly through the social and educational systems steeped in Hegelian deterministic philosophy and social Darwinism. Long lines formed to view exhibits on heredity and genetics, and scientific research, conferences, and publication on topics of race and eugenics were legion. The emphasis was often on the great burden which the chronically ill and mentally and physically deformed placed on a struggling society striving to achieve its historical destiny. In a high school biology textbook – pictured above – a muscular German youth bears two such societal misfits on a barbell, with the exhortation, “You Are Sharing the Load!–a hereditarily-ill person costs 50,000 Reichsmarks by the time they reach 60.” Math textbooks tested students on how many new housing units could be built with the money saved by elimination of long-term care needs. Parents often chose euthanasia for their disabled offspring, rather than face the societal scorn and ostracization of raising a mentally or physically impaired child. This widespread public endorsement and pseudo-scientific support for eugenics set the stage for its wholesale adoption — with horrific consequences — when the Nazi party took power.

The Nazis co-opted medicine fully in their pursuit of racial hygiene, even coercing physicians in occupied countries to provide health and racial information on their patients to occupation authorities, and to participate in forced euthanasia. In a remarkably heroic professional stance, the physicians of the Netherlands steadfastly refused to provide this information, forfeiting their medical licenses as a result, and no small number of physicians were deported to concentration camps for their principled stand. As a testimony to their courage and integrity, not a single episode of involuntary euthanasia was performed by Dutch physicians during the Nazi occupation.

Would that it were still so.

The Netherlands was the first country in the world in which euthanasia and assisted suicide were legally performed, having fully legalized the practice in 2002 after several decades of widespread illegal–but universally unpunished–practice. The Dutch have come into the public consciousness periodically over the past 30 years, initially with the consideration of assisted suicide laws in Oregon, Washington, Michigan and elsewhere in the early ’90s, and again with their formal legalization of physician-assisted suicide and euthanasia in 2001. Once again they are on the ethical radar, with the disclosure last week of the Groningen Protocol for involuntary euthanasia of infants and children.

The Groningen Protocol is not a government regulation or legislation, but rather a set of hospital guidelines for involuntary euthanasia of children up to age 12:

The Groningen Protocol, as the hospital’s guidelines have come to be known, would create a legal framework for permitting doctors to actively end the life of newborns deemed to be in similar pain from incurable disease or extreme deformities.

The guideline says euthanasia is acceptable when the child’s medical team and independent doctors agree the pain cannot be eased and there is no prospect for improvement, and when parents think it’s best.

Examples include extremely premature births, where children suffer brain damage from bleeding and convulsions; and diseases where a child could only survive on life support for the rest of its life, such as severe cases of spina bifida and epidermolysis bullosa, a rare blistering illness.

The hospital revealed last month it carried out four such mercy killings in 2003, and reported all cases to government prosecutors. There have been no legal proceedings against the hospital or the doctors.

While some are shocked and outraged at this policy of medical termination of sick or deformed children (the story has been widely ignored by the mainstream media, and has gotten only limited attention on the Internet), it is merely a logical extension of a philosophy of medicine widely practiced and condoned in the Netherlands for many years, much as it was in Germany between the world wars. It is a philosophy where the Useful is the Good, whose victims are the children whom Reason scorned.

Euthanasia is the quick fix to man’s ageless struggle with suffering and disease. The Hippocratic Oath — taken in widely varying forms by most physicians at graduation — was originally administered to a minority of physicians in ancient Greece, who swore to prescribe neither euthanasia nor abortion — both common recommendations by healers of the age. The rapid and widespread acceptance of euthanasia in pre-Nazi Germany occurred because it was eminently reasonable and rational. Beaten down by war, economic hardship, and limited resources, logic dictated that those who could not contribute to the betterment of society cease being a drain on its lifeblood. Long before its application to ethnic groups and enemies of the State, it was administered to those who made us most uncomfortable: the mentally ill, the deformed, the retarded, the social misfit. While invariably promoted as a merciful means of terminating suffering, the suffering relieved is far more that of the enabling society than of its victims. “Death with dignity” is the gleaming white shroud on the rotting corpse of societal fear, self-interest and ruthless self-preservation.

It is sobering and puzzling to ponder how the profession of medicine – whose core article of faith is healing and comfort of the sick – could be so effortlessly transformed into a calculating instrument of judgment and death. It is chilling to read the cold scientific language of Nazi medical experiments or Dutch studies on optimal techniques to minimize complications in euthanasia. Yet this devolution of medicine, with some contemplation, is not hard to discern. It is the natural gravity of man detached from higher principles, operating out of the best his reason alone has to offer, with its inevitable disastrous consequences. Contributing to this march toward depravity:

 ♦ The power of detachment and intellectualization: Physicians by training and disposition are intellectualizers. Non-medical people observing surgery are invariably squeamish, personalizing the experience and often repulsed by the apparent trauma to the patient. Physicians overcome this natural response by detaching themselves from the personal, and transforming the experience into a study in technique, stepwise logical processes, and fascination with disease and anatomy. Indeed, it takes some effort to overcome this training to develop empathy and compassion. It is therefore a relatively small step with such training to turn even killing into another process to be mastered.

 ♦ The dilution of personal responsibility: In Germany, the euthanasia of children was performed with an injection of Luminal, a barbiturate also used for seizures and sedation of the agitated. As a result, it was difficult to determine who was personally responsible for the deed: was it the nurse, who gave too much? The doctor, who ordered too large a dose? Was the patient overly sensitive to the drug? Was the child merely sedated, or in a terminal coma? Of course, all the participants knew what was going on, but responsibility was diluted, giving rationalization and justification full rein. The societal endorsement and widespread practice of euthanasia provided additional cover. When all are culpable, no one is culpable.

 ♦ Compartmentalization: an individual involved in the de-Baathification of Iraq said the following:

There is a duality in Baathists. You can find a Baathist who is a killer, but at home he’s completely normal. It’s like they split their day into two twelve-hour blocks. When people say about someone I know to be a Baathist criminal, ‘No, he’s a good neighbor!’, I believe him.

Humans have the remarkable ability to utterly separate disparate parts of their lives, to accommodate cognitive dissonance. Indeed, there is probably no other way to maintain sanity in the face of enormous personal evil.

 ♦ The banality of evil: Great evil springs in countless small steps from lesser evil. Jesus Christ was doubtless not the first innocent man Pilate condemned to death; soft porn came before child porn, snuff films, and rape videos; in the childhood of the serial killer lies cruelty to animals. Small evils harden the heart, making greater evil easier, more routine, less chilling. We marvel at the hideousness of the final act, but the descent to depravity is a gentle slope downwards.

 ♦ The false optimism of expediency: Solve the problem today, deny any future consequences. We are nearsighted creatures in the extreme, seeing only the benefits of our current actions while dismissing the potential for unknown, disastrous ramifications. When Baby Knauer, an infant with blindness, mental retardation and physical deformities, became the first child euthanized in Germany, who could foresee the horrors of Auschwitz and Dachau? We are blind to the horrendous consequences of our wrong decisions, but see infinite visions of hope for their benefits. As a child I watched television shows touting peaceful nuclear energy as the solution to all the world’s problems, little imagining the fears of the Cuban missile crisis, Chernobyl and Three Mile Island, the minutes before midnight of the Cold War, and the current ogre of nuclear terrorism.

Reason of itself is morally neutral; it can kill children or discover cures for their suffering and disease. Reason tempered by humility, faith, and guidance by higher moral principles has enormous potential for good – and without such restraints, enormous potential for evil.

The desire to end human suffering is morally good. Despite popular misconception, the Judeo-Christian tradition does not view suffering as something good, but rather something evil which exists, but which may be transformed and redeemed by God and grace, to ultimately produce a greater good. This is a difficult sell to a materialistic, secular world, which does not accept the transformational power of God or the existence of spiritual consequences, or principles higher than human reason.

Yet the benefits of suffering, subtle though they may be, can be discerned in many instances even by the unskilled eye. What are the chances that Dutch doctors will find a cure for late-stage cancer or early childhood disease, when they now so quickly and “compassionately” dispose of their sufferers with a lethal injection? Who will teach us patience, compassion, unselfish love, endurance, tenderness, and tolerance, if not those who provide us with the opportunity through their suffering, or mental or physical disability? These are character traits not easily learned, though enormously beneficial to society as well as individuals. How will we learn them if we liquidate our teachers?

Higher moral principles position roadblocks to our behavior, warning us that grave danger lies beyond. When in our hubris and unenlightened reason we crash through them, we do so at great peril, for we do not know what evil lies beyond. The Netherlands will not be another Nazi Germany, as frightening as the parallels may be. It will be different, but it will be evil in some unpredictable way, impossible to foresee when rationalism took the first step across that boundary to kill a patient in mercy.

The Epiphany of Evil

Gargoyle

Roger L. Simon recently had an epiphany. While reporting from the Durban II conference, he encountered the face of evil: President Ahmadinejad of Iran. He describes this encounter thus:

I heard screaming sirens followed by shrieking motor cycles when Ahmadinejad himself entered … and marched straight across the lobby in what seemed at the time like a goose step a few feet away from me, staring directly at me while waving and smiling in my direction.

I did not wave or smile back.

I couldn’t. Indeed, I was frozen. I felt suddenly breathless and nauseated, as if I had been kicked brutally in the stomach. I was also dizzy. I wanted to throw up. But no one had touched me and I hadn’t eaten anything for hours.

It was then, I think, that I found, or noticed, or understood, religion personally for a moment.

Here’s what I mean.

For most of my life I had rationalized the existence of “bad people” or, more specifically, placed them in therapeutic categories. They were aberrant personalities, psychologically disturbed. It wasn’t that I thought better economic conditions or psychoanalysis or medication or whatever could fix everyone. I was long over that. Some people–serial killers, etc.–had to be locked away forever. They would never get better. But they were simply insane. That’s what they were.

Still, I had seen whacked murderers like Charles Manson, late OJ Simpson, up close and this wasn’t the same. This was more than the mental illness model. Far more. For one thing, I had never before had this intense physical sensation when confronted with another human being. Nor had I wanted to vomit. Not for Manson. Not for anyone. This was different.

It was almost unreal, like being in a movie, in a certain way. I know comparisons to Hitler are invidious, in fact usually absurd, but I was feeling the way I imagined I would have felt opposite Hitler.

I was in the presence of pure Evil.

In the seemingly seamless garment which is secular rationalism, there is no place for evil. Oh, to be sure, the word is flung about like sweat from a boxer’s well-placed uppercut — slathered and spit upon all who deviate from progressive secular orthodoxy. But true evil — that inexplicable behavior which chills the soul and touches that primal inner fear — finds no satisfactory solution in our modern world. The salve of psychology is oft applied — the perpetrators are invariably “loners”, “abused”, “neglected”, “rejected”, “oppressed”, or “victimized” — but the hatred which spawns such unspeakable actions cannot be so easily trivialized or dismissed. It rises up like a hideous ogre, demanding acknowledgment and rebuke — and yet we, in response, simply slap our banal therapeutic band-aids on while frantically averting our eyes to the never-ending distractions which numb the inner terror and allow us to move on, undisturbed, our materialistic narrative intact, unperturbed, and unchallenged.

But evil cannot be so easily confined to the therapist’s couch. Our shallow rationalism shoves evil into the overstuffed closet of the therapeutic, where irrationality, mental illness, and all forms of perplexing puzzles are placed, quickly bolting the door before it can escape. Yet evil is in its own way coldly rational, progressive, efficient: the years of planning behind a Columbine; the detailed protocols and meticulous records of Nazi medical experiments; the systematic efficiency of the Holocaust; the careful coordination of a Beslan. All these display neither mental instability nor unhinged psychosis, but rather highly rational, intelligent, goal-directed purpose. If anything, evil is often more creative, more ingenious, better organized and executed than the pursuit of good. In the hard calculus of rational materialism, there is unspoken contempt for the foolishness of caring for the weak, protecting the vulnerable, elevating the dignity of the imperfect, nurturing the neglected.

When we envision evil, we evoke the ghastly: the school massacre, the genocide, the imprisonment and torture of political prisoners, the rape and abuse of children. But though we long to sequester our discomfort in the realm of the rare and horrible, evil will not be thus constrained. It is alive and well in the corporate boardroom, in the street gang, on the drug dealer’s corner, in the steamy affair which destroys a family. It reaches into every corner of our lives — though we struggle to deny and rationalize the monster as it draws nigh to our souls. Indeed, it dwells quite close to home, in the dark rooms of the mind, the dank cellars of the soul, in whispered desires and demons in the depths of the spirit. The newspaper headlines are but harbingers of the heart; what horrifies without dwells within, though hidden deep beneath denial and jaded self-justification. We are what we fear — and we tremble to acknowledge it.

Yet evil, for all its pervasiveness, does not stand alone as a distinct entity. Like one hand clapping, it is meaningless except in the context of a moral framework, a system of absolutes against which it is measured and found wanting. There can be no “evil” where there is no “good.” Yet our secular age ridicules such a position, rejecting the universal for the relative — we determine our own standards of good and evil, in harmony with our individual and cultural narrative, where the notion of truth is nothing more than an instrument of and a means to power. And thus we have no reference by which to comprehend and measure the phenomenon of evil. We know it when we see it — at least in its more egregious and hideous forms — yet have an inadequate and conflicted worldview with which to grasp it. Our evolutionary mindset should provide some cold comfort, as the prime directive of survival of the fittest predicts the destruction of the weak and the triumph of the strong — yet in our heart of hearts we know this to be foolish, and frightening, and fraught with incongruity — for we know we too are among the weak. The resulting cognitive dissonance leads to a pitiful and wholly inadequate response to the horrors which confront us almost daily. When a Columbine occurs, we immediately call in the counselors — when we should be crying out for the priests.

Our materialism and technology, and the secular relativism they have spawned, have given rise to the delusion that we may control the metaphysical just as we control the physical, through science and technology. Hence we each determine our own morality, deciding for ourselves what is right and wrong — a calculus which always favors ourselves over others. Yet in a reality based on transcendent absolutes, the consequences of their violation — evil — are just as inviolable as the laws of physics. We hope to bend the metaphysical to our wants and desires — and the results are entirely predictable. When evil results, we resort to the only tools in our arsenal: education, knowledge, psychology, sociology. Their inevitable failure at resolving the catastrophe only deepens the dilemma. Our cultural witch doctors dance and cant, shaking their shaman wands in fevered frenzy, hoping to drive off the demons with the magical sayings and sacred books of science and sociology. Yet the evil persists, empowered and enlarged by our enfeebled response.

C.S. Lewis, writing in The Abolition of Man, finds in our materialistic scientific mindset much of the magic of old:

There is something which unites magic and applied science while separating both from the wisdom of earlier ages. For the wise men of old, the cardinal problem had been how to conform the soul to reality, and the solution had been knowledge, self-discipline, and virtue. For magic and applied science alike, the problem is how to subdue reality to the wishes of men: the solution is a technique; and both, in the practice of this technique, are ready to do things hitherto regarded as disgusting and impious…

Evil is indeed real, and growing, and we are poorly equipped to grasp or grapple with it. It is a greedy demon whose goal is destruction and whose power is immense. We would be wise to seek the proper antidote lest its poison destroy us all. Our rare glimpses into the heart of darkness, as Roger L. Simon experienced, are a wake-up call we ignore at our peril.

Truth & Consequences

Jesus before Pilate
In the trial of Jesus, ancient texts have recorded this exchange:

Pilate replied, “You are a king then?” “You say that I am a king, and you are right,” Jesus said. “I was born for that purpose. And I came to bring truth to the world. All who love the truth recognize that what I say is true.”

“What is truth?” Pilate asked.

Some questions are truly timeless.

We live in an age where the notion of truth, of absolutes which transcend the individual and society, is increasingly under assault. Ours is an age of radical individualism, wherein man alone becomes the sole arbiter of what is right or wrong, where moral relativism reigns, where postmodernism trades absolute truth for “narratives”, which vary from individual to individual, culture to culture, and age to age.

It is no small irony that ours is an age of science and technology — disciplines which depend by their very nature on the absolute, unchanging, and permanent laws of nature. Yet this same age rejects or disdains the concept of absolutes and transcendent truth. No one questions the speed of light, or the Pythagorean theorem, or the laws of gravity, or the quirky and counter-intuitive physics of subatomic particles. The postmodernist whose narrative does not accept the law of gravity will still need a sidewalk cleanup crew when he flings himself from a tall building, believing he can fly.

The difference, of course, is that the absolutes of physics and science apply to the physical world, quantifiable and tangible in greater or lesser measure, while the absolutes of ethics, morality, and religion touch on the metaphysical, the invisible, the philosophical, the theological. The materialist rejects such notions outright: as superstition, as “values” (i.e., individual beliefs or preferences based on nothing more than feelings or bias), as mindless evolutionary survival skills, or as the dying remnants of an age of ignorance. Absolutes are rejected because of the presuppositions of constricted materialism, the arrogance and conceits of intellectualism, the notion that if it cannot be weighed or measured it does not exist. But the deeper and more fundamental reason for the rejection of transcendent absolutes is simply this: such absolutes make moral claims upon us.

In truth, man cannot exist without transcendent absolutes, even though he denies their existence. Our language and thought are steeped in such concepts, in notions of good and evil, love and hate, free will and coercion, purpose and intentionality. We cannot think, or communicate, or be in any way relational without using the intangible, the metaphysical, the conventions, the traditions. We are by our very nature creatures who compare: we judge, and accept or reject; we prefer or disapprove; we love or hate, criticize or applaud. All such choices involve the will as a free agent — and free will is meaningless if it is not used in the context of an ethereal yet unchanging standard against which a choice is measured. We say a rose smells beautiful and a rotten egg rotten, because we judge those smells against an invisible standard which determines one to be pleasant and the other offensive. We cannot measure the love of a child, or weigh the sorrow of a death, or calculate the anger at an injustice or the beauty of a Bach concerto; yet such reactions, and the standards by which we recognize and judge such intangibles, are every bit as real as the photons and protons, the law of gravity or the principles of physics. Even the most hardened Darwinist, atheistic to the core, by necessity must speak the language of purpose and transcendence and choice, as Mother Nature “selects”, and “chooses”, and “intends”, and “prefers” this genetic trait or that survival skill. We are incapable of describing even the purported randomness, mindlessness, and purposelessness of evolutionary biology without concepts and language of intentionality, preference, good and evil.

No, the rejection of absolutes is the rejection of their claim upon our wills. To reject that absolute truth exists, to deny that standards and principles stand apart from mere constructs of human imagination, is to affirm the absolute that we are absolutely autonomous, answerable to nothing and no one, masters and gods accountable only to ourselves. To deny absolutes is to deny free will — and to deny the consequences of choices which violate the very principles we dismiss as foolish, ignorant, prejudiced, and superstitious. To deny dogma is to be dogmatic; to reject absolutes absolutely is to affirm absolutes, even if unknowingly. Transcendent absolutes define our very humanity; dogs do not have dogmas, nor are cats categorical.

G.K. Chesterton, prescient and insightful as ever in his vision of the foolishness of man in his intellectual hubris, said:

Man can be defined as an animal that makes dogmas. As he piles doctrine on doctrine and conclusion on conclusion in the formation of some tremendous scheme of philosophy and religion, he is, in the only legitimate sense . . . becoming more and more human. When he drops one doctrine after another in a refined skepticism, when he says that he has outgrown definitions, when he says that he disbelieves in finality, when, in his own imagination, he sits as God, holding to no form of creed and contemplating all, then he is by that very process sinking slowly backwards into the vagueness of the vagrant animals and the unconsciousness of grass. Trees have no dogmas. Turnips are singularly broad-minded.

Ideas have consequences, philosophies have predicates, and the rejection of absolutes absolutely dehumanizes us, for we devolve from a species of high principles and moral light to denizens of a depravity far lower than the animals. For animals have rational restraints on behavior, brutish though it may be, while there is no end of evil for the human mind unleashed from absolutes.

Speaking of the fall of Carthage, with its materialism, wealth, and power, steeped in a religion whose worship sacrificed infants in the fires of Moloch, Chesterton says thus:

This sort of commercial mind has its own cosmic vision, and it is the vision of Carthage. It has in it the brutal blunder that was the ruin of Carthage. The Punic power fell, because there is in this materialism a mad indifference to real thought. By disbelieving in the soul, it comes to disbelieving in the mind … Carthage fell because she was faithful to her own philosophy and had followed out to its logical conclusion her own vision of the universe. Moloch had eaten her own children.

The rejection of absolutes, with the resulting moral relativism and narcissistic nihilism, is no mere intellectual folly nor faddish foolishness. It is instead a corrosive toxin, appealing in its seeming rationality and reasonableness, but pervasive and deadly for both person and polity.

If the Truth will set you free — and it most surely will — its rejection will surely enslave you. Slavery or freedom: your choice.

A Life Not Long

sunset

Recently, I’ve been ruminating on a topic which is a frequent subject of discussion online and elsewhere: the endless pursuit of a longer, or eternal, life.

Here’s the question I’ve been pondering: is it an absolute good to be continually striving for a longer life span? Such a question may seem a bit odd coming from a physician, whose mission it is to restore and maintain health and prolong life. But a recent article describing the striking changes in health and longevity of our present age seemed to present this achievement as an absolute good, and thereby left me a tad uneasy — perhaps because I find myself increasingly ambivalent about this unceasing pursuit of longer life.

Of course, long life and good health have always been considered blessings, as indeed they are. But long life in particular seems to have become a goal unto itself — and from where I stand is most decidedly a mixed blessing.

Many of the most difficult health problems with which we battle, which drain our limited resources, are largely a function of our longer life spans. Pick a problem: cancer, heart disease, dementia, crippling arthritis, stroke — all of these increase significantly with age, and can result in profound physical and mental disability. In many cases, we are living longer, but doing so restricted by physical or mental limitations which make such a longer life burdensome, both to ourselves and to others. Is it a positive good to live to age 90, spending the last 10 or more years with dementia, not knowing who you are nor recognizing your own friends or family? Is it a positive good to be kept alive by aggressive medical therapy for heart failure or emphysema, yet barely able to function physically? Is it worthwhile undergoing highly toxic chemotherapy or disfiguring surgery to cure cancer, thereby sparing a life then severely impaired by the treatment which saved that life?

These questions, in some way, cut to the very heart of what it means to be human. Is our humanity enriched simply by living longer? Does longer life automatically imply more happiness–or are we simply adding years of pain, disability, unhappiness, burden? The breathlessness with which authors often speak of greater longevity, or the cure or solution to these intractable health problems, seems to imply a naive optimism, both from the standpoint of likely outcomes, and from the assumption that a vastly longer life will be a vastly better life. Ignored in such rosy projections are key elements of the human condition — those of moral fiber and spiritual health, those of character and spirit. For we who live longer in such an idyllic world may not live better: we may indeed live far worse. Should we somehow master these illnesses which cripple us in our old age, and thereby live beyond our years, will we then encounter new, even more frightening illnesses and disabilities? And what of the spirit? Will a man who lives longer thereby have a longer opportunity to do good, or rather to do evil? Will longevity increase our wisdom, or augment our depravity? Will we, like Dorian Gray, awake to find our ageless beauty but a shell for our monstrous souls?

Such ruminations bring to mind a friend, a good man who died young. Matt was a physician, a tall, lanky lad with sharp bony features and deep, intense eyes. He was possessed of a brilliant mind and was a superb physician, but he left his mark on life not solely through medicine nor merely by intellect. A convert to Christianity as a young adult, Matt embraced his new faith with a passion rarely seen. His medical practice became a mission field. His flame burned so brightly it was uncomfortable to draw near: he was as likely to diagnose your festering spiritual condition as your daunting medical illness — and had no compunction about drilling to the core of what he perceived to be the root of the problem. Such men make you uneasy, for they sweep away the veneer of polite correction and diplomatic encouragement which we physicians are trained to deliver. Like some gifted surgeon of the soul, he cast sharp shadows rather than soft blurs, brandishing his brilliant insight on your now-naked condition. The polished conventions of medicine were never his strength — a characteristic which endeared him not at all to many in his profession. But his patients — those who could endure his honesty and strength of character — were passionate in their devotion to him, personally and professionally. For he was a man of extraordinary compassion and generosity, seeing countless patients at no charge, giving generously of his time and finances far beyond the modest means earned from his always-struggling practice.

The call I received from another friend, a general surgeon, requesting an assist at his surgery, was an unsettling one: Matt had developed a growth in his left adrenal gland. The surgery went smoothly, and we were confident the lesion had been fully excised. The pathology proved otherwise: Matt had an extremely rare, highly aggressive form of adrenal cancer. Fewer than 100 cases had been reported worldwide, and there was no known successful treatment. Nevertheless, as much for his wife and two boys as for himself, he underwent highly toxic chemotherapy, which sapped his strength and left him enfeebled. In spite of this, the tumor grew rapidly, causing extreme pain and rapid deterioration, bulging like some loathsome demon seeking to burst forth from his frail body. I saw him regularly, although in retrospect not nearly often enough, and never heard him complain; his waning energies were spent with his family, and he never lost the intense flame of faith. Indeed, as his weakened body increasingly became no more than life support for his cancer, wasting him physically and leaving him pale and sallow, there grew in him a spirit so remarkable that one was drawn to him despite the natural repulsion of watching death’s demonic march.

Matt died at age 38, alert and joyful to the end. His funeral was a most remarkable event: at an age when most would be happy to have friends enough to bear their casket, his service at a large church was filled to overflowing — thousands of friends, patients, and professional peers paying their respects in a ceremony far more celebration than mourning. There was an open time for testimony — and such a time it was, as one after another took to the lectern to speak through tears of how Matt had touched their lives; of services rendered, small and large, unknown before that day; of funny anecdotes and sad remembrances which left few eyes dry, and not one soul in that large crowd untouched or unmoved.

A journey such as his casts critical light on our mindless pursuit of life lived only to endure longer. In Matt’s short life he brought more good into the world, touched more people, changed more lives, than I could ever hope to do were I to live a century more. It boils down to purpose: mere years are no substitute for a life lived with passion, striving for some goal greater than self, with transcendent purpose multiplying and compounding each waking moment. This is a life well-lived, whether long or short, whether weakened or well.

Like all, I trust, I hope to live life long, and seek a journey lived in good health and sound mind. But even more — far more indeed — do I desire that those days yet remaining — be they long or short — be rich in purpose, wise in time spent, drenched in prayer, and graced by love for others and for God.

Redefining Humanity


Gerard Vanderleun recently posted a thoughtful and moving essay on the topic of abortion, and his own personal reflections and experiences with it.

The crux of the abortion dispute is, as mentioned above, the question of when human life begins. At this point, we all know the opposing political and religious positions. At some point, human life begins and the fate of the fetus is either at the absolute will of the mother or it is not. Nevertheless, it is still hard to say exactly when humanness happens since: 1) We do not agree on the term “human,” and 2) as a result, all evidence on this issue remains anecdotal once you strip away the slant of the “research” that supports your preferred result.

When does the fetus become human?

This question, on one hand, seems all-important; on the other, it seems absurd beyond belief. It is a question which would never be asked were it not for the idea of ending a pregnancy by abortion. What reason would there be for such a question? A woman becomes pregnant, and is expecting a baby: this has been the expectation of motherhood since man and woman first began procreating. In its natural course, barring unforeseen problems, a child is born — a unique instance of humanity, a living being like none other before or after. It is only in the context of deliberately interrupting this process — terminating the pregnancy — that the question of the humanity of the unborn fetus has been raised.

That such a question is raised with any seriousness is evidence of a profound denial — the denial required to end an unborn child’s life in the womb. To raise the issue of the humanity of those not yet born, to imply that the fetus is anything other than a human being, is to salve the deep discomfort of the soul inherent in the termination of a life. For we know, innately, that the unborn is alive, and human, and to justify its extinction we must engage in extraordinary contortions of conscience. Thus we say the fetus is an extension of the mother’s body, which it clearly is not; we refer to it as a blob of tissue or protoplasm, dehumanizing its unique and extraordinary human potential; we call it a “potential human”, as if at some magic point a switch is thrown to turn on its humanity — while never stopping to define what that humanity is, or why there is no humanity in the split second before our chosen transition time. We draw false and foolish analogies: the fetus is no different than a skin cell, or a “sacred sperm”, or a tumor — thus denying the extraordinary creation which occurs when the genetic map of two parents fuses into a new life, with an infinite capacity for uniqueness, change, experience, and creativity of its own. For we are created to create; we are engendered to engender; we are conceived to conceive again in an endless and infinite way: to conceive new ideas, new works, new accomplishments, new relationships, new failures and successes, and new life itself, in the generation which we ourselves engender.

From the moment of its conception, that which we so dismissively call a “fetus” begins a journey extraordinary beyond imagination. Using the inscrutable road map of its unique DNA, the developing human undergoes constant change and growth — a process which ends not at birth but some 25 years later, when full physical maturity is reached. Organs form; primitive cells differentiate into complex systems dedicated to tasks both present and future. Before its mother knows of the pregnancy, at 6 weeks, the heart and circulatory system are formed, and the heart is beating; the primitive cells forming the brain and spinal cord are in place and developing; facial features, including eyes, ears, mouth, and nose, are evident. By 8 weeks, fingers, toes, and fingernails are present, as is the digestive system. By 12 weeks, virtually every organ system is formed and differentiated; the rest of the pregnancy is almost entirely about growth and the maturing of these intact systems. The information map for this extraordinary yet orderly complexity — and for far more, including intellect, personality, gifts and skills, and, yes, liabilities — is contained in the fertilized egg in its entirety. We are what we will be, from the instant of our conception.

We deny what is self-evidently human for many reasons. Our secular and utilitarian culture has lost its sense of wonder at the miracle that is the creation of a new human life. Our children are no longer gifts but burdens, impeding our acquisitive materialism and imposing themselves on our pursuit of self-interest and self-gratification. We must dehumanize first, then destroy, the unborn child, that we may live out the delusional fantasy of unrestricted sexual license without consequences; that we may continue the self-deception that somehow we are masters of our own destiny; that we may perpetuate the fraudulent vision that our relationships are about self-fulfillment rather than sacrifice for the good of our progeny and the society and culture of which they will partake.

In introspective moments of regret we may mourn the potential loss: the wistful thought that we have aborted a Beethoven or a Ben Franklin. Yet even this mild melancholy misses the point, showing the shallowness of our own humanity, as we find comfort in the rarity of such genius while dismissing a loss far more tragic: the loss of the common, in all its richness and variety. It is not the loss of a Mozart we should mourn; it is the empty place where a merchant, a mechanic, a muse, a minstrel might have stood. It is the compassionate mother, the inspirational teacher, the clever repairman or comical co-worker who will never live to enrich the lives of others in ways trivial and transcendent. Our losses are incalculable, for we have destroyed these lives before we ever knew their worth. We sacrifice our hope and our future on the altar of calculated convenience and cold rationality.

It is not merely the loss of those who might have lived which we suffer; it is we who survive, who make these mortal choices, who are changed as well. For if the humanity of our children is fungible, redefined, discarded and spent on the expediency of convenience and self-interest, such expediency will not long remain confined to the dark chambers of the abortion suite. We will, in banal, measured, rational steps, soon judge the humanity of all with the same jaundiced eye. The disabled, the mentally ill, the elderly and frail will soon find that cold and rational gaze cast upon them, as we find their lives ever more a burden, ever more useless and wasted, all too easily discarded as we pursue our utopian vision of perfection through self-worship.

Yet our Darwinian dream marches on, leaving the weakest to fall by the wayside in our evolution from compassionate humans to rational beasts. Survive we may — but at the ghastly price of our humanity, wagered and lost.