
CHAPTER ONE

ON SEPTEMBER 12, 1876, the crowd overflowing the auditorium of Baltimore's Academy of Music was in a mood of hopeful excitement, but excitement without frivolity. Indeed, despite an unusual number of women in attendance, many of them from the uppermost reaches of local society, a reporter noted, “There was no display of dress or fashion.” For this occasion had serious purpose. It was to mark the launching of the Johns Hopkins University, an institution whose leaders intended not simply to found a new university but to change all of American education; indeed, they sought considerably more than that. They planned to change the way in which Americans tried to understand and grapple with nature. The keynote speaker, the English scientist Thomas H. Huxley, personified their goals.

The import was not lost on the nation. Many newspapers, including the New York Times, had reporters covering this event. After it, they would print Huxley's address in full.

For the nation was then, as it so often has been, at war with itself; in fact it was engaged in different wars simultaneously, each being waged on several fronts, wars that ran along the fault lines of modern America.

One involved expansion and race. In the Dakotas, George Armstrong Custer had just led the Seventh Cavalry to its destruction at the hands of primitive savages resisting encroachment of the white man. The day Huxley spoke, the front page of the Washington Star reported that “the hostile Sioux, well fed and well armed” had just carried out “a massacre of miners.”

In the South a far more important but equally savage war was being waged as white Democrats sought “redemption” from Reconstruction in anticipation of the presidential election. Throughout the South “rifle clubs,” “saber clubs,” and “rifle teams” of former Confederates were being organized into infantry and cavalry units. Already accounts of intimidation, beatings, whippings, and murder directed against Republicans and blacks had surfaced. After the murder of three hundred black men in a single Mississippi county, one man, convinced that words from the Democrats’ own mouths would convince the world of their design, pleaded with the New York Times, “For God's sake publish the testimony of the Democrats before the Grand Jury.”

Voting returns had already begun to come in—there was no single national election day—and two months later Democrat Samuel Tilden would win the popular vote by a comfortable margin. But he would never take office as president. Instead the Republican secretary of war would threaten to “force a reversal” of the vote, federal troops with fixed bayonets would patrol Washington, and southerners would talk of reigniting the Civil War. That crisis would ultimately be resolved through an extra-constitutional special committee and a political understanding: Republicans would discard the voting returns of three states—Louisiana, Florida, South Carolina—and seize a single disputed electoral vote in Oregon to keep the presidency in the person of Rutherford B. Hayes. But they also would withdraw all federal troops from the South and cease intervening in southern affairs, leaving the Negroes there to fend for themselves.

The war involving the Hopkins was more muted but no less profound. The outcome would help define one element of the character of the nation: the extent to which the nation would accept or reject modern science and, to a lesser degree, how secular it would become, how godly it would remain.

Precisely at 11:00 A.M., a procession of people advanced upon the stage. First came Daniel Coit Gilman, president of the Hopkins, and on his arm was Huxley. Following in single file came the governor, the mayor, and other notables. As they took their seats the conversations in the audience quickly died away, replaced by expectancy of a kind of declaration of war.

Of medium height and middle age—though he already had iron-gray hair and nearly white whiskers—and possessed of what was described as “a pleasant face,” Huxley did not look the warrior. But he had a warrior's ruthlessness. His dicta included the pronouncement: “The foundation of morality is to have done, once and for all, with lying.” A brilliant scientist, later president of the Royal Society, he advised investigators, “Sit down before a fact as a little child, be prepared to give up every preconceived notion. Follow humbly wherever and to whatever abysses nature leads, or you shall learn nothing.” He also believed that learning had purpose, stating, “The great end of life is not knowledge but action.”

To act upon the world himself, he became a proselytizer for faith in human reason. By 1876 he had become the world's foremost advocate of the theory of evolution and of science itself. Indeed, H. L. Mencken said that “it was he, more than any other man, who worked that great change in human thought which marked the Nineteenth Century.” Now President Gilman gave a brief and simple introduction. Then Professor Huxley began to speak.

Normally he lectured on evolution, but today he was speaking on a subject of even greater magnitude. He was speaking about the process of intellectual inquiry. The Hopkins was to be unlike any other university in America. Aiming almost exclusively at the education of graduate students and the furtherance of science, it was intended by its trustees to rival not Harvard or Yale—neither of them considered worthy of emulation—but the greatest institutions of Europe, and particularly Germany. Perhaps only in the United States, a nation ever in the act of creating itself, could such an institution come into existence both so fully formed in concept and already so renowned, even before the foundation of a single building had been laid.

“His voice was low, clear and distinct,” reported one listener. “The audience paid the closest attention to every word which fell from the lecturer's lips, occasionally manifesting their approval by applause.” Said another, “Professor Huxley's method is slow, precise, and clear, and he guards the positions which he takes with astuteness and ability. He does not utter anything in the reckless fashion which conviction sometimes countenances and excuses, but rather with the deliberation that research and close inquiry foster.”

Huxley commended the bold goals of the Hopkins, expounded upon his own theories of education—theories that soon informed those of William James and John Dewey—and extolled the fact that the existence of the Hopkins meant “finally, that neither political nor ecclesiastical sectarianism” would interfere with the pursuit of the truth.

In truth, Huxley's speech, read a century and a quarter later, seems remarkably tame. Yet Huxley and the entire ceremony left an impression on the country deep enough that Gilman would spend years trying to edge away from it, even while simultaneously trying to fulfill the goals Huxley applauded.

For the ceremony's most significant word was one not spoken: not a single participant uttered the word “God” or made any reference to the Almighty. This spectacular omission scandalized those who worried about or rejected a mechanistic and necessarily godless view of the universe. And it came in an era in which American universities had nearly two hundred endowed chairs of theology and fewer than five in medicine, an era in which the president of Drew University had said that, after much study and experience, he had concluded that only ministers of the Gospel should be college professors.

The omission also served as a declaration: the Hopkins would pursue the truth, no matter to what abyss it led.

In no area did the truth threaten so much as in the study of life. In no area did the United States lag behind the rest of the world so much as in its study of the life sciences and medicine. And in that area in particular, the influence of the Hopkins would be immense.

By 1918, as America marched into war, the nation had come not only to rely upon the changes wrought largely, though certainly not entirely, by men associated with the Hopkins; the United States Army had mobilized these men into a special force, focused and disciplined, ready to hurl themselves at an enemy.

● ● ●

The two most important questions in science are “What can I know?” and “How can I know it?”

Science and religion in fact part ways over the first question, what each can know. Religion, and to some extent philosophy, believes it can know, or at least address, the question “Why?”

For most religions the answer to this question ultimately comes down to the way God ordered it. Religion is inherently conservative; even one proposing a new God only creates a new order.

The question “why” is too deep for science. Science instead believes it can only learn “how” something occurs.

The revolution of modern science and especially medical science began as science not only focused on this answer to “What can I know?” but more important, changed its method of inquiry, changed its answer to “How can I know it?”

This answer involves not simply academic pursuits; it affects how a society governs itself, its structure, how its citizens live. If a society does set Goethe's “Word ... supremely high,” if it believes that it knows the truth and that it need not question its beliefs, then that society is more likely to enforce rigid decrees, and less likely to change. If it leaves room for doubt about the truth, it is more likely to be free and open.

In the narrower context of science, the answer determines how individuals explore nature—how one does science. And the way one goes about answering a question, one's methodology, matters as much as the question itself. For the method of inquiry underlies knowledge and often determines what one discovers: how one pursues a question often dictates, or at least limits, the answer.

Indeed, methodology matters more than anything else. Methodology subsumes, for example, Thomas Kuhn's well-known theory of how science advances. Kuhn gave the word “paradigm” wide usage by arguing that at any given point in time, a particular paradigm, a kind of perceived truth, dominates the thinking in any science. Others have applied his concept to nonscientific fields as well.

According to Kuhn, the prevailing paradigm tends to freeze progress, indirectly by creating a mental obstacle to creative ideas and directly by, for example, blocking research funds from going to truly new ideas, especially if they conflict with the paradigm. He argues that nonetheless researchers eventually find what he calls “anomalies” that do not fit the paradigm. Each one erodes the foundation of the paradigm, and when enough accrue to undermine it, the paradigm collapses. Scientists then cast about for a new paradigm that explains both the old and the new facts.

But the process—and progress—of science is more fluid than Kuhn's concept suggests. It moves more like an amoeba, with soft and ill-defined edges. More important, method matters. Kuhn's own theory recognizes that the propelling force behind the movement from one explanation to another comes from the methodology, from what we call the scientific method. But he takes as an axiom that those who ask questions constantly test existing hypotheses. In fact, with a methodology that probes and tests hypotheses—regardless of any paradigm—progress is inevitable. Without such a methodology, progress becomes merely coincidental.

Yet the scientific method has not always been used by those who inquire into nature. Through most of known history, investigators trying to penetrate the natural world, penetrate what we call science, relied upon the mind alone, reason alone. These investigators believed that they could know a thing if their knowledge followed logically from what they considered a sound premise. In turn they based their premises chiefly on observation.

This commitment to logic coupled with man's ambition to see the entire world in a comprehensive and cohesive way actually imposed blinders on science in general and on medicine in particular. The chief enemy of progress, ironically, became pure reason. And for the bulk of two and a half millennia—twenty-five hundred years—the actual treatment of patients by physicians made almost no progress at all.

One cannot blame religion or superstition for this lack of progress. In the West, beginning at least five hundred years before the birth of Christ, medicine was largely secular. While Hippocratic healers—the various Hippocratic texts were written by different people—did run temples and accept pluralistic explanations for disease, they pushed for material explanations.

Hippocrates himself was born in approximately 460 B.C. On the Sacred Disease, one of the more famous Hippocratic texts and one often attributed to him directly, even mocked theories that attributed epilepsy to the intervention of gods. He and his followers advocated precise observation, then theorizing. As the texts stated, “For a theory is a composite memory of things apprehended with sense perception.” “But conclusions which are merely verbal cannot bear fruit.” “I approve of theorizing also if it lays its foundation in incident, and deduces its conclusion in accordance with phenomena.”

But if such an approach sounds like that of a modern investigator, a modern scientist, it lacked two singularly important elements.

● ● ●

First, Hippocrates and his associates merely observed nature. They did not probe it.

This failure to probe nature was to some extent understandable. To dissect a human body then was inconceivable. But the authors of the Hippocratic texts did not test their conclusions and theories. A theory must make a prediction to be useful or scientific—ultimately it must say, If this, then that —and testing that prediction is the single most important element of modern methodology. Once that prediction is tested, it must advance another one for testing. It can never stand still.

Those who wrote the Hippocratic texts, however, observed passively and reasoned actively. Their careful observations noted mucus discharges, menstrual bleeding, watery evacuations in dysentery, and they very likely observed blood left to stand, which over time separates into several layers, one nearly clear, one of somewhat yellowy serum, one of darker blood. Based on these observations, they hypothesized that there were four kinds of bodily fluids, or “humours”: blood, phlegm, bile, and black bile. (This terminology survives today in the phrase “humoral immunity,” which refers to elements of the immune system, such as antibodies, that circulate in the blood.)

This hypothesis made sense, comported with observations, and could explain many symptoms. It explained, for example, that coughs were caused by the flow of phlegm to the chest. Observations of people coughing up phlegm certainly supported this conclusion.

In a far broader sense, the hypothesis also conformed to the ways in which the Greeks saw nature: they observed four seasons, four aspects of the environment—cold, hot, wet, and dry—and four elements—earth, air, fire, and water.

Medicine waited six hundred years for the next major advance, for Galen, but Galen did not break from these teachings; he systematized them, perfected them. Galen claimed, “I have done as much for medicine as Trajan did for the Roman Empire when he built the bridges and roads through Italy. It is I, and I alone, who have revealed the true path of medicine. It must be admitted that Hippocrates already staked out this path.... He prepared the way, but I have made it possible.”

Galen did not simply observe passively. He dissected animals and, although he did not perform autopsies on humans, served as a physician to gladiators whose wounds allowed him to see deep beneath the skin. Thus his anatomic knowledge went far beyond that of any known predecessor. But he remained chiefly a theoretician, a logician; he imposed order on the Hippocratic body of work, reconciling conflicts, reasoning so clearly that, if one accepted his premises, his conclusions seemed inevitable. He made the humoral theory perfectly logical, and even elegant. As the historian Vivian Nutton notes, Galen raised the theory to a truly conceptual level, separating the humours from direct correlation with bodily fluids and making them invisible entities “recognizable only by logic.”

Galen's works were translated into Arabic and underlay both Western and Islamic medicine for nearly fifteen hundred years before facing any significant challenge. Like the Hippocratic writers, Galen believed that illness was essentially the result of an imbalance in the body. He also thought that balance could be restored by intervention; a physician thus could treat a disease successfully. If there was a poison in the body, then the poison could be removed by evacuation. Sweating, urinating, defecating, and vomiting were all ways that could restore balance. Such beliefs led physicians to recommend violent laxatives and other purgatives, as well as mustard plasters and other prescriptions that punished the body, that blistered it and theoretically restored balance. And of all the practices of medicine over the centuries, one of the most enduring—yet least understandable to us today—was a perfectly logical extension of Hippocratic and Galenic thought, and recommended by both.

This practice was bleeding patients, one of the most common therapies employed to treat all manner of disorders.

Hippocrates and most of those who followed him—even deep into the nineteenth century—also believed that natural processes must not be interfered with. The various kinds of purging were meant to augment and accelerate natural processes, not resist them. Since pus, for example, was routinely seen in all kinds of wounds, pus was seen as a necessary part of healing. Until the late 1800s, physicians routinely would do nothing to avoid the generation of pus, and were reluctant even to drain it. Instead they referred to “laudable pus.”

Similarly, Hippocrates scorned surgery as intrusive, as interfering with nature's course; further, he saw it as a purely mechanical skill, beneath the calling of physicians who dealt in a far more intellectual realm. This intellectual arrogance would subsume the attitude of Western physicians for more than two thousand years.

This is not to say that for two thousand years the Hippocratic texts and Galen offered the only theoretical constructs to explain health and disease. Many ideas and theories were advanced about how the body worked, how illness developed. And a rival school of thought gradually developed within the Hippocratic-Galenic tradition that valued experience and empiricism and challenged the purely theoretical.

It is impossible to summarize all these theories in a few sentences, yet nearly all of them did share certain concepts: that health was a state of equilibrium and balance, and that illness resulted either from an internal imbalance within the body or from external environmental influences such as an atmospheric miasma, or from some combination of both.

But in the early 1500s three men began to challenge at least the methods of medicine. Paracelsus declared he would investigate nature “not by following that which those of old taught, but by our own observation of nature, confirmed by ... experiment and by reasoning thereon.”

Vesalius dissected human corpses and concluded that Galen's findings had come from animals and were deeply flawed. His De humani corporis fabrica, likely illustrated by a student of Titian, became a cornerstone of the Renaissance.

Fracastorius, an astronomer, mathematician, botanist, and poet, meanwhile hypothesized that diseases had specific causes and that contagion “passes from one thing to another and is originally caused by infection of the imperceptible particle.” One medical historian called his body of work “a peak maybe unequalled by anyone between Hippocrates and Pasteur.”

The contemporaries of these three men included Martin Luther and Copernicus, men who changed the world. In medicine the new ideas of Paracelsus, Vesalius, and Fracastorius did not change the world. In the actual practice of medicine they changed nothing at all.

But the approach they called for did create ripples while the scholasticism of the Middle Ages that stultified nearly all fields of inquiry was beginning to decay. In 1605 Francis Bacon in The Advancement of Learning attacked the purely deductive reasoning of logic, calling “Aristotle ... a mere bondservant to his logic, thereby rendering it contentious and well nigh useless.” He also complained, “The logic now in use serves rather to fix and give stability to the errors which have their foundation in commonly received notions than to help the search after truth. So it does more harm than good.”

In 1628 Harvey traced the circulation of the blood, arguably the single greatest achievement of medicine—and certainly the greatest achievement until the late 1800s. And Europe was in intellectual ferment. Half a century later Newton revolutionized physics and mathematics. Newton's contemporary John Locke, trained as a physician, emphasized the pursuit of knowledge through experience. In 1753 James Lind conducted a pioneering controlled experiment among British sailors and demonstrated that scurvy could be prevented by eating limes—ever since, the British have been called “limeys.” David Hume, after this demonstration and following Locke, led a movement of “empiricism.” His contemporary John Hunter made a brilliant scientific study of surgery, elevating it from a barber's craft. Hunter also performed model scientific experiments, including some on himself—as when he infected himself with pus from a gonorrheal case to prove a hypothesis.

Then in 1798 Edward Jenner, a student of Hunter's—Hunter had told him “Don't think. Try.”—published his work. As a young medical student Jenner had heard a milkmaid say, “I cannot take the smallpox because I have had cowpox.” The cowpox virus resembles smallpox so closely that exposure to cowpox gives immunity to smallpox. But cowpox itself only rarely develops into a serious disease. (The virus that causes cowpox is called “vaccinia,” taking its name from vaccination.)

Jenner's work with cowpox was a landmark, but not because he was the first to immunize people against smallpox. In China, India, and Persia, different techniques had long since been developed to expose children to smallpox and make them immune, and in Europe at least as early as the 1500s laypeople—not physicians—took material from a pustule of those with a mild case of smallpox and scratched it into the skin of those who had not yet caught the disease. Most people infected this way developed mild cases and became immune. In 1721 in Massachusetts, Cotton Mather took the advice of an African slave, tried this technique, and staved off a lethal epidemic. But “variolation” could kill. Vaccinating with cowpox was far safer than variolation.

From a scientific standpoint, however, Jenner's most important contribution was his rigorous methodology. Of his finding, he said, “I placed it upon a rock where I knew it would be immoveable before I invited the public to take a look at it.”

But ideas die hard. Even as Jenner was conducting his experiments, despite the vast increase in knowledge of the body derived from Harvey and Hunter, medical practice had barely changed. And many, if not most, physicians who thought deeply about medicine still saw it in terms of logic and observation alone.

In Philadelphia, twenty-two hundred years after Hippocrates and sixteen hundred years after Galen, Benjamin Rush, a pioneer in his views on mental illness, a signer of the Declaration of Independence, and America's most prominent physician, still applied logic and observation alone to build “a more simple and consistent system of medicine than the world had yet seen.”

In 1796 he advanced a hypothesis as logical and elegant, he believed, as Newtonian physics. Observing that all fevers were associated with flushed skin, he concluded that this was caused by distended capillaries and reasoned that the proximate cause of fever must be abnormal “convulsive action” in these vessels. He took this a step further and concluded that all fevers resulted from disturbance of capillaries, and, since the capillaries were part of the circulatory system, he concluded that a hypertension of the entire circulatory system was involved. Rush proposed to reduce this convulsive action by “depletion,” i.e., venesection—bleeding. It made perfect sense.

He was one of the most aggressive of the advocates of “heroic medicine.” The heroism, of course, was found in the patient. In the early 1800s praise for his theories was heard throughout Europe, and one London physician said Rush united “in an almost unprecedented degree, sagacity and judgment.”

A reminder of the medical establishment's acceptance of bleeding exists today in the name of the British journal The Lancet, one of the leading medical journals in the world. A lancet was the instrument physicians used to cut into a patient's vein.

But if the first failing of medicine, a failing that endured virtually unchallenged for two millennia and then only gradually eroded over the next three centuries, was that it did not probe nature through experiments, that it simply observed and reasoned from observation to a conclusion, that failing was—finally—about to be corrected.

● ● ●

What can I know? How can I know it?

If reason alone could solve mathematical problems, if Newton could think his way through physics, then why could not man reason out the ways in which the body worked? Why did reason alone fail so utterly in medicine?

One explanation is that Hippocratic and Galenic theory offered a system of therapeutics that seemed to produce the desired effect. The Hippocratic-Galenic model lasted so long not only because of its logical consistency, but because its therapies seemed to work.

Indeed, bleeding—today called “phlebotomy”—can actually help in some rare diseases, such as polycythemia, a rare genetic disorder that causes people to make too much blood, or hemochromatosis, when the blood carries too much iron. And in far more common cases of acute pulmonary edema, when the lungs fill with fluid, it could relieve immediate symptoms and is still sometimes tried. For example, in congestive heart failure excess fluid in the lungs can make victims extremely uncomfortable and, ultimately, kill them if the heart cannot pump the fluid out. When people suffering from these conditions were bled, they may well have been helped. This reinforced theory.

Even when physicians observed that bleeding weakened the patient, that weakening could still seem positive. If a patient was flushed with a fever, it followed logically that bleeding, by making the patient pale, alleviated the most visible symptom. If it made the patient pale, it seemed to work.

Finally, a euphoric feeling sometimes accompanies blood loss. This too reinforced theory. So bleeding both made logical sense in the Hippocratic and Galenic systems and sometimes gave physicians and patients positive reinforcement.

Other therapies also did what they were designed to do—in a sense. As late as the nineteenth century—until well after the Civil War in the United States—most physicians and patients still saw the body only as an interdependent whole, still saw a specific symptom as a result of an imbalance or disequilibrium in the entire body, still saw illness chiefly as something within and generated by the body itself. As the historian Charles Rosenberg has pointed out, even smallpox, despite its known clinical course and the fact that vaccination prevented it, was still seen as a manifestation of a systemic ill. And medical traditions outside the Hippocratic-Galenic model—from the “subluxations” of chiropractic to the “yin and yang” of Chinese medicine—have also tended to see disease as a result of imbalance within the body.

Physicians and patients wanted therapies to augment and accelerate, not block, the natural course of disease, the natural healing process. The state of the body could be altered by prescribing such toxic substances as mercury, arsenic, antimony, and iodine. Therapies designed to blister the body did so. Therapies designed to produce sweating or vomiting did so. One doctor, for example, when confronted with a case of pleurisy, gave camphor and recorded that the case was “suddenly relieved by profuse perspiration.” His intervention, he believed, had cured.

Yet a patient's improvement, of course, does not prove that a therapy works. For example, the 1899 edition of the Merck Manual of Medical Information recommended one hundred treatments for bronchitis, each one with its fervent believers, yet the current editor of the manual recognizes that “none of them worked.” The manual also recommended, among other things, champagne, strychnine, and nitroglycerin for seasickness.

And when a therapy clearly did not work, the intricacies—and intimacies—of the doctor-patient relationship also came into play, injecting emotion into the equation. One truth has not changed from the time of Hippocrates until today: when faced with desperate patients, doctors often do not have the heart—or, more accurately, they have too much heart—to do nothing. And so a doctor, as desperate as the patient, may try anything, including things he or she knows will not work as long as they will not harm. At the least, the patient will get some solace.

One cancer specialist concedes, “I do virtually the same thing myself. If I'm treating a teary, desperate patient, I will try low-dose alpha interferon, even though I do not believe it has ever cured a single person. It doesn't have side effects, and it gives the patient hope.”

Cancer provides other examples as well. No truly scientific evidence shows that echinacea has any effect on cancer, yet it is widely prescribed in Germany today for terminal cancer patients. Japanese physicians routinely prescribe placebos in treatment. Steven Rosenberg, a National Cancer Institute scientist who was the first person to stimulate the immune system to cure cancer and who led the team that performed the first human gene therapy experiments, points out that for years chemotherapy was recommended to virtually all victims of pancreatic cancer even though not a single chemotherapy regimen had ever been shown to prolong their lives for one day. (At this writing, investigators have just demonstrated that gemcitabine can extend median life expectancy by one to two months, but it is highly toxic.)

● ● ●

Another explanation for the failure of logic and observation alone to advance medicine is that unlike, say, physics, which uses a form of logic—mathematics—as its natural language, biology does not lend itself to logic. Leo Szilard, a prominent physicist, made this point when he complained that after switching from physics to biology he never had a peaceful bath again. As a physicist he would soak in the warmth of a bathtub and contemplate a problem, turn it in his mind, reason his way through it. But once he became a biologist, he constantly had to climb out of the bathtub to look up a fact.

In fact, biology is chaos. Biological systems are the product not of logic but of evolution, an inelegant process. Life does not choose the logically best design to meet a new situation. It adapts what already exists. Much of the human genome includes genes which are “conserved”; i.e., which are essentially the same as those in much simpler species. Evolution has built upon what already exists.

The result, unlike the clean straight lines of logic, is often irregular, messy. An analogy might be building an energy efficient farmhouse. If one starts from scratch, logic would impel the use of certain building materials, the design of windows and doors with kilowatt-hours in mind, perhaps the inclusion of solar panels on the roof, and so on. But if one wants to make an eighteenth-century farmhouse energy efficient, one adapts it as well as possible. One proceeds logically, doing things that make good sense given what one starts with, given the existing farmhouse. One seals and caulks and insulates and puts in a new furnace or heat pump. The old farmhouse will be—maybe—the best one could do given where one started, but it will be irregular; in window size, in ceiling height, in building materials, it will bear little resemblance to a new farmhouse designed from scratch for maximum energy efficiency.

For logic to be of use in biology, one has to apply it from a given starting point, using the then-extant rules of the game. Hence Szilard had to climb out of the bathtub to look up a fact.

Ultimately, then, logic and observation failed to penetrate the workings of the body not because of the power of the Hippocratic hypothesis, the Hippocratic paradigm. Logic and observation failed because neither one tested the hypothesis rigorously.

Once investigators began to apply something akin to the modern scientific method, the old hypothesis collapsed.

● ● ●

By 1800 enormous advances had been made in other sciences, beginning centuries earlier with a revolution in the use of quantitative measurement. Bacon and Descartes, although opposites in their views of the usefulness of pure logic, had both provided a philosophical framework for new ways of seeing the natural world. Newton had in a way bridged their differences, advancing mathematics through logic while relying upon experiment and observation for confirmation. Joseph Priestley, Henry Cavendish, and Antoine-Laurent Lavoisier created modern chemistry and penetrated the natural world. Particularly important for biology was Lavoisier's decoding of the chemistry of combustion and use of those insights to uncover the chemical processes of respiration, of breathing.

Still, all these advances notwithstanding, in 1800 Hippocrates and Galen would have recognized and largely agreed with most medical practice. In 1800 medicine remained what one historian called “the withered arm of science.”

In the nineteenth century that finally began to change—and with extraordinary rapidity. Perhaps the greatest break came with the French Revolution, when the new French government established what came to be called “the Paris clinical school.” One leader of the movement was Xavier Bichat, who dissected organs, found them composed of discrete types of material often found in layers, and called them “tissues”; another was René Laennec, inventor of the stethoscope.

Meanwhile, medicine began to make use of other objective measurements and mathematics. This too was new. Hippocratic writings had stated that the physician's senses mattered far more than any objective measurement, so despite medicine's use of logic, physicians had always avoided applying mathematics to the study of the body or disease. In the 1820s, two hundred years after the discovery of thermometers, French clinicians began using them. Clinicians also began taking advantage of methods discovered in the 1700s to measure other bodily functions precisely.

By then in Paris, Pierre Louis had taken an even more significant step. In the hospitals, where hundreds of charity cases awaited help, using the most basic mathematical analysis—nothing more than arithmetic—he correlated the different treatments patients received for the same disease with the results. For the first time in history, a physician was creating a reliable and systematic database. Physicians could have done this earlier. To do so required neither microscopes nor technological prowess; it required only taking careful notes.

Yet the real point at which modern medicine diverged from the classic was in the studies of pathological anatomy by Louis and others. Louis not only correlated treatments with results to reach a conclusion about a treatment's efficacy (he rejected bleeding patients as a useless therapy), he and others also used autopsies to correlate the condition of organs with symptoms. He and others dissected organs, compared diseased organs to healthy ones, learned their functions in intimate detail.

What he found was astounding, and compelling, and helped lead to a new conception of disease as something with an identity of its own, an objective existence. In the 1600s Thomas Sydenham had begun classifying diseases, but Sydenham and most of his followers continued to see disease as a result of imbalances, consistent with Hippocrates and Galen. Now a new “nosology,” a new classification and listing of disease, began to evolve.

Disease began to be seen as something that invaded solid parts of the body, as an independent entity, instead of being a derangement of the blood. This was a fundamental first step in what would become a revolution.

Louis's influence and that of what became known as “the numerical system” could not be overstated. These advances—the stethoscope, laryngoscope, ophthalmoscope, the measurements of temperature and blood pressure, the study of parts of the body—all created distance between the doctor and the patient, as well as between patient and disease; they objectified humanity. Even though no less a personage than Michel Foucault condemned this Parisian movement as the first to turn the human body into an object, these steps had to come to make progress in medicine.

But the movement was condemned by contemporaries also. Complained one typical critic, “The practice of medicine according to this view is entirely empirical, is shorn of all rational induction, and takes a position among the lower grades of experimental observations and fragmentary facts.”

Criticism notwithstanding, the numerical system began winning convert after convert. In England in the 1840s and 1850s, John Snow began applying mathematics in a new way: as an epidemiologist. He had made meticulous observations of the patterns of a cholera outbreak, noting who got sick and who did not, where the sick lived and how they lived, where the healthy lived and how they lived. He tracked the disease down to a contaminated well in London. He concluded that contaminated water caused the disease. It was brilliant detective work, brilliant epidemiology. William Budd borrowed Snow's methodology and promptly applied it to the study of typhoid.

Snow and Budd needed no scientific knowledge, no laboratory findings, to reach their conclusions. And they did so in the 1850s, before the development of the germ theory of disease. Like Louis's study that proved that bleeding was worse than useless in nearly all circumstances, their work could have been conducted a century earlier or ten centuries earlier. But their work reflected a new way of looking at the world, a new way of seeking explanations, a new methodology, a new use of mathematics as an analytical tool.

● ● ●

At the same time, medicine was advancing by borrowing from other sciences. Insights from physics allowed investigators to trace electrical impulses through nerve fibers. Chemists were breaking down the cell into its components. And when investigators began using a magnificent new tool—the microscope equipped with new achromatic lenses, which came into use in the 1830s—an even wider universe began to open.

In this universe Germans took the lead, partly because fewer French than Germans chose to use microscopes and partly because French physicians in the middle of the nineteenth century were generally less aggressive in experimenting, in creating controlled conditions to probe and even manipulate nature. (It was no coincidence that the French giants Pasteur and Claude Bernard, who did conduct experiments, were not on the faculty of any medical school. Echoing Hunter's advice to Jenner, Bernard, a physiologist, told one American student, “Why think? Exhaustively experiment, then think.”)

In Germany, meanwhile, Rudolf Virchow—both he and Bernard received their medical degrees in 1843—was creating the field of cellular pathology, the idea that disease began at the cellular level. And in Germany great laboratories were being established around brilliant scientists who, more than elsewhere, did actively probe nature with experiments. Jacob Henle, the first scientist to formulate the modern germ theory, echoed Francis Bacon when he said, “Nature answers only when she is questioned.”

And in France, Pasteur was writing, “I am on the edge of mysteries and the veil is getting thinner and thinner.”

Never had there been a time so exciting in medicine. A universe was opening.

Still, with the exception of the findings on cholera and typhoid—and even these won only slow acceptance—little of this new scientific knowledge could be translated into curing or preventing disease. And much that was being discovered was not understood. In 1868, for example, a Swiss investigator isolated deoxyribonucleic acid, DNA, from a cell's nucleus, but he had no idea of its function. Not until three-quarters of a century later, at the conclusion of some research directly related to the 1918 influenza pandemic, did anyone even speculate, much less demonstrate, that DNA carried genetic information.

So the advances of science actually, and ironically, led to “therapeutic nihilism.” Physicians became disenchanted with traditional treatments, but they had nothing with which to replace them. In response to the findings of Louis and others, in 1835 Harvard's Jacob Bigelow had argued in a major address that in “the unbiased opinion of most medical men of sound judgment and long experience ... the amount of death and disaster in the world would be less, if all disease were left to itself.”

His address had impact. It also expressed the chaos into which medicine was being thrown and the frustration of its practitioners. Physicians were abandoning the approaches of just a few years earlier and, less certain of the usefulness of a therapy, were becoming far less interventionist. In Philadelphia in the early 1800s Rush had called for wholesale bloodletting and was widely applauded. In 1862 in Philadelphia a study found that, out of 9,502 cases, physicians had cut a vein “in one instance only.”

Laymen as well were losing faith in and becoming reluctant to submit to the tortures of heroic medicine. And since the new knowledge developing in traditional medicine had not yet developed new therapies, rival ideas of disease and treatment began to emerge. Some of these theories were pseudoscience, and some owed as little to science as did a religious sect.

This chaos was by no means limited to America. Typical was Samuel Hahnemann, who developed homeopathy in Germany, publishing his ideas in 1810, just before German science began to emerge as the dominant force on the Continent. But nowhere did individuals feel freer to question authority than in America. And nowhere was the chaos greater.

Samuel Thomson, founder of a movement bearing his name that spread widely before the Civil War, argued that medicine was simple enough to be comprehended by everyone, so anyone could act as a physician. “May the time soon come when men and women will become their own priests, physicians, and lawyers—when self-government, equal rights and moral philosophy will take the place of all popular crafts of every description,” argued his movement's publication. His system used “botanic” therapeutics, and he charged, “ False theory and hypothesis constitute nearly the whole art of physic.”

Thomsonism was the most popular layman's medical movement but hardly the only one. Dozens of what can only be called sects arose across the countryside. A Thomsonian rhyme summed up the attitude: “The nest of college-birds are three, / Law, Physic and Divinity; / And while these three remain combined, / They keep the world oppressed and blind / ... Now is the time to be set free, / From priests’ and Doctors’ slavery.”

As these ideas spread, as traditional physicians failed to demonstrate the ability to cure anyone, as democratic emotions and anti-elitism swept the nation with Andrew Jackson, American medicine became as wild and democratic as the frontier. In the 1700s Britain had relaxed licensing standards for physicians. Now several state legislatures did away with the licensing of physicians entirely. Why should there be any licensing requirements? Did physicians know anything? Could they heal anyone? Wrote one commentator in 1846, “There is not a greater aristocratic monopoly in existence, than this of regular medicine—neither is there a greater humbug.” In England the title “Professor” was reserved for those who held university chairs, and, even after John Hunter brought science to surgery, surgeons often went by “Mister.” In America the titles “Professor” and “Doctor” went to anyone who claimed them. As late as 1900, forty-one states licensed pharmacists, thirty-five licensed dentists, and only thirty-four licensed physicians. A typical medical journal article in 1858 asked, “To What Cause Are We to Attribute the Diminished Respectability of the Medical Profession in the Esteem of the American Public?”

By the Civil War, American medicine had begun to inch forward, but only inch. The brightest lights involved surgery. The development of anesthesia, first demonstrated in 1846 at Massachusetts General Hospital, helped dramatically, and, just as Galen's experience with gladiators taught him much anatomy, American surgeons learned enough from the war to put them a step ahead of Europeans.

In the case of infectious and other disease, however, physicians continued to attack the body with mustard plasters that blistered the body, along with arsenic, mercury, and other poisons. Too many physicians continued their adherence to grand philosophical systems, and the Civil War showed how little the French influence had yet penetrated American medicine. European medical schools taught the use of thermometers, stethoscopes, and ophthalmoscopes, but Americans rarely used them and the largest Union army had only half a dozen thermometers. Americans still relieved pain by applying opiate powders on a wound, instead of injecting opium with syringes. And when Union Surgeon General William Hammond banned some of the violent purgatives, he was both court-martialed and condemned by the American Medical Association.

After the Civil War, America continued to churn out prophets of new, simple, complete, and self-contained systems of healing, two of which, chiropractic and Christian Science, survive today. (Evidence does suggest that spinal manipulation can relieve musculoskeletal conditions, but no evidence supports chiropractic claims that disease is caused by misalignment of vertebrae.)

Medicine had discovered drugs—such as quinine, digitalis, and opium—that provided benefits, but, as one historian has shown, they were routinely prescribed indiscriminately, for their overall effect on the body, not for a specific purpose; even quinine was prescribed generally, not to treat malaria. Hence Oliver Wendell Holmes, the physician father of the Supreme Court justice, was not much overstating when he declared, “I firmly believe that if the whole materia medica, as now used, could be sunk to the bottom of the sea, it would be all the better for mankind—and all the worse for the fishes.”

There was something else about America. It was such a practical place. A nation bursting with energy, it had no patience for dalliance or daydreaming or the waste of time. In 1832, Louis had told one of his most promising protégés—an American—to spend several years in research before beginning a medical practice. The student's father was also a physician, James Jackson, a founder of Massachusetts General Hospital, who scornfully rejected Louis's suggestion and protested to Louis that “in this country his course would have been so singular, as in a measure to separate him from other men. We are a business doing people.... There is a vast deal to be done and he who will not be doing must be set down as a drone.”

In America the very fact that science was undermining therapeutics made institutions uninterested in supporting it. Physics, chemistry, and the practical arts of engineering thrived. The number of engineers particularly was exploding—from 7,000 to 226,000 from the late nineteenth century to just after World War I—and they were accomplishing extraordinary things. Engineers transformed steel production from an art into a science, developed the telegraph, laid a cable connecting America to Europe, built railroads crossing the continent and skyscrapers that climbed upward, developed the telephone—with automobiles and airplanes not far behind. The world was being transformed. Whatever was being learned in the laboratory about biology was building basic knowledge, but with the exception of anesthesia, laboratory research had only proven actual medical practice all but useless while providing nothing with which to replace it.

Still, by the 1870s, European medical schools required and gave rigorous scientific training and were generally subsidized by the state. In contrast, most American medical schools were owned by a faculty whose profits and salaries—even when they did not own the school—were paid by student fees, so the schools often had no admission standards other than the ability to pay tuition. No medical school in America allowed medical students to routinely either perform autopsies or see patients, and medical education often consisted of nothing more than two four-month terms of lectures. Few medical schools had any association with a university, and fewer still had ties to a hospital. In 1870 even at Harvard a medical student could fail four of nine courses and still get an M.D.

In the United States, a few isolated individuals did research—outstanding research—but it was unsupported by any institution. S. Weir Mitchell, America's leading experimental physiologist, once wrote that he dreaded anything “removing from me the time or power to search for new truths that lie about me so thick.” Yet in the 1870s, after he had already developed an international reputation, after he had begun experiments with snake venom that would lead directly to a basic understanding of the immune system and the development of antitoxins, he was denied positions teaching physiology at both the University of Pennsylvania and Jefferson Medical College; neither had any interest in research, nor a laboratory for either teaching or research purposes. In 1871 Harvard did create the first laboratory of experimental medicine at any American university, but that laboratory was relegated to an attic and paid for by the professor's father. Also in 1871 Harvard's professor of pathologic anatomy confessed he did not know how to use a microscope.

But Charles Eliot, a Brahmin with a birth defect that deformed one side of his face—he never allowed a photograph to show that side—had become Harvard president in 1869. In his first report as president, he declared, “The whole system of medical education in this country needs thorough reformation. The ignorance and general incompetency of the average graduate of the American medical Schools, at the time when he receives the degree which turns him loose upon the community, is something horrible to contemplate.”

Soon after this declaration, a newly minted Harvard physician killed three successive patients because he did not know the lethal dose of morphine. Even with the leverage of this scandal, Eliot could push through only modest reforms over a resistant faculty. Professor of Surgery Henry Bigelow, the most powerful faculty member, protested to the Harvard Board of Overseers, “[Eliot] actually proposes to have written examinations for the degree of doctor of medicine. I had to tell him that he knew nothing about the quality of the Harvard medical students. More than half of them can barely write. Of course they can't pass written examinations.... No medical school has thought it proper to risk large existing classes and large receipts by introducing more rigorous standards.”

Many American physicians were in fact enthralled by the laboratory advances being made in Europe. But they had to go to Europe to learn them. Upon their return they could do little or nothing with their knowledge. Not a single institution in the United States supported any medical research whatsoever.

As one American who had studied in Europe wrote, “I was often asked in Germany how it is that no scientific work in medicine is done in this country, how it is that many good men who do well in Germany and show evident talent there are never heard of and never do any good work when they come back here. The answer is that there is no opportunity for, no appreciation of, no demand for that kind of work here.... The condition of medical education here is simply horrible.”

● ● ●

In 1873, Johns Hopkins died, leaving behind a trust of $3.5 million to found a university and hospital. It was to that time the greatest gift ever to a university. Princeton's library collection was then an embarrassment of only a few books—and the library was open only one hour a week. Columbia was little better: its library opened for two hours each afternoon, but freshmen could not enter without a special permission slip. Only 10 percent of Harvard's professors had a Ph.D.

The trustees of Hopkins's estate were Quakers who moved deliberately but also decisively. Against the advice of Harvard president Charles Eliot, Yale president James Burrill Angell, and Cornell president Andrew D. White, they decided to model the Johns Hopkins University after the greatest German universities, places thick with men consumed with creating new knowledge, not simply teaching what was believed.

The trustees made this decision precisely because there was no such university in America, and precisely because they recognized the need after doing the equivalent of market research. A board member later explained, “There was a strong demand, among the young men of this country, for opportunities to study beyond the ordinary courses of a college or a scientific school.... The strongest evidence of this demand was the increased attendance of American students upon lectures of German universities.” The trustees decided that quality would sell. They intended to hire only eminent professors and provide opportunities for advanced study.

Their plan was in many ways an entirely American ambition: to create a revolution from nothing. For it made little sense to locate the new institution in Baltimore, a squalid industrial and port city. Unlike Philadelphia, Boston, or New York, it had no tradition of philanthropy, no social elite ready to lead, and certainly no intellectual tradition. Even the architecture of Baltimore seemed exceptionally dreary, long lines of row houses, each with three steps, crowding against the street and yet virtually no street life—the people of Baltimore seemed to live inward, in backyards and courtyards.

In fact, there was no base whatsoever upon which to build ... except the money, another American trait.

The trustees hired as president Daniel Coit Gilman, who left the presidency of the newly organized University of California after disputes with state legislators. Earlier he had helped create and had led the Sheffield Scientific School at Yale, which was distinct from Yale itself. Indeed, it was created partly because of Yale's reluctance to embrace science as part of its basic curriculum.

At the Hopkins, Gilman immediately recruited an internationally respected—and connected—faculty, which gave it instant credibility. In Europe, people like Huxley saw the Hopkins as combining the explosive energy and openness of America with the grit of science; the potential could shake the world.

To honor the Hopkins upon its beginnings, to honor this vision, to proselytize upon this new faith, Thomas Huxley came to America.

The Johns Hopkins would have rigor. It would have such rigor as no school in America had ever known.

The Hopkins opened in 1876. Its medical school would not open until 1893, but it succeeded so brilliantly and quickly that, by the outbreak of World War I, American medical science had caught up to Europe and was about to surpass it.

● ● ●

Influenza is a viral disease. When it kills, it usually does so in one of two ways: either quickly and directly with a violent viral pneumonia so damaging that it has been compared to burning the lungs; or more slowly and indirectly by stripping the body of defenses, allowing bacteria to invade the lungs and cause a more common and slower-killing bacterial pneumonia.

By World War I, those trained directly or indirectly by the Hopkins already did lead the world in investigating pneumonia, a disease referred to as “the captain of the men of death.” They could in some instances prevent it and cure it.

And their story begins with one man.
