
Poverty strikes again: cholera


Cholera and poverty: motors of public health

Historians traditionally viewed the 19th-century public health movement as a direct response to the series of cholera pandemics of the period. The first epidemic to reach Europe certainly raised consciousness about communal disease (the first pandemic, of 1817–1823, had petered out after spreading from India to the Middle East and northern Africa). From 1827, when the second pandemic began to spread out from its ordinary home in eastern India, Europe watched anxiously as the disease moved ever closer. Many European nations sent delegates at some stage during the four-year waiting game to investigate the disease and make recommendations on how best to prevent its reaching Europe.

There were two main sources of concern. First, the disease was new to the West, an ‘exotic’ disease with which only tropical colonials would have had previous experience. The second pandemic moved throughout Europe and into North America, and introduced the medical profession to a serious new disorder with alarming symptoms and a high mortality rate. Its newness and epidemic character led many commentators to speak of the return of the plague, all the more disturbing since old-style bubonic plague seemed to have disappeared permanently from the West.

Second, the pattern of spread was puzzling. Two polarized explanatory paradigms were current to explain epidemic diseases: miasmatic and contagious. Miasmatists argued that communal diseases were spread through the air, the result of atmospheric conditions or particles contained in the air. The most commonly postulated source of the disease was rotting organic matter, such as refuse, faeces – anything, in fact, that was oppressive or smelled bad. The power of this paradigm is easily appreciated: the air is a common feature of a locality and could explain why many individuals might be affected. It also helped differentiate ‘healthy’ from ‘unhealthy’ localities, within a paradigm that would have been familiar to the author of the Hippocratic treatise Airs, Waters, Places. It was the dominant explanation for the complex of diseases, many of them unknown in the Old World, which Europeans encountered in tropical areas. They were generally known simply as ‘diseases of warm climates’, and oppressive heat and humidity and exotic vegetation were so obvious that evoking them to explain disease patterns made rational sense.

Contagionists postulated that epidemic diseases were spread from one afflicted individual to another. This could account for many aspects of epidemic disease, such as the fact that people nursing sick individuals often came down with the disease themselves. Contagionism justified the instinctive wish to avoid contact with people suffering from dangerous diseases, and underlay the practice of quarantine. It also preyed on collective fears of the origin of plague and other frightful diseases in marginalized groups.

A middle position, ‘contingent contagionism’, was less hard-line, and more easily adaptable to the anomalies that both the main positions had difficulty explaining. Contingent contagionists argued that diseases might be either miasmatic or contagious, depending on the circumstances. For instance, a disease might enter the community through corrupt air but some individuals could develop the disease in such a way that they then became foci of contagious spread. This mixed the categories in ways that the observations required, and covered all fronts. Unfortunately, theories that explain everything often explain very little.

A few diseases, such as smallpox and measles, were always viewed as contagious, but most communicable diseases had patterns of incidence and spread that were sufficiently complicated to leave much room for debate. Germ theory was later to offer a new paradigm for communicable and epidemic diseases, although there were still anomalies: why could two people exposed to the same source of infection react in such different ways, so that one came down with the disease and the other remained completely well?

Before germ theory, there was little consensus, and in practice communities often covered both alternatives. In plague outbreaks, for instance, quarantine and isolation were accompanied by fires, to purify the air, and nosegays, to sweeten the air immediately inhaled. When in doubt, do both.

Cholera threw up these age-old issues in an urgent manner. The observers who went to watch its westward march came back with mixed reactions. Some thought that it was contagious and Europe’s best response was isolation and quarantine. Others believed that the air was the vehicle and that ordinary sanitary improvements – improving drainage, keeping the streets clean – were the best defence. European governments listened to the variety of opinions but mostly fell back on the time-honoured solution of quarantine and inspection of people and goods arriving from the infected areas.

Even Britain, home of laissez-faire, dabbled with quarantine during the first pandemic to reach Western Europe, from 1830. Cholera arrived in Britain in late 1831, in Sunderland, a port in the northeast, and then travelled gradually in all directions, reaching London in early 1832. Its pattern of spread convinced miasmatists that the air was the culprit, and contagionists that it was propagated by human beings. Almost everyone had to conclude, after the epidemic had played itself out, that the system of quarantine had not done its job. Thereafter, British policy relied primarily on port inspection and isolation of suspicious cases, covering both paradigms. Britain had then by far the largest maritime commitment, and therefore the most to lose by costly and disruptive employment of quarantine. A regular series of International Sanitary Conferences was held from 1851, primarily concerned with cholera. Britain and British India stood firm together in opposing quarantine as a routine agent of disease control. The economic consequences of such a policy were clear to all, and Britain’s scientific policy was blatantly dictated by commercial considerations.

The miasmatic position was consolidated by the leading figure in the early British public health movement, Edwin Chadwick (1800–1890). A lawyer by training, Chadwick had been the last secretary of the utilitarian philosopher and reformer Jeremy Bentham (1748–1832). From Bentham, Chadwick absorbed the doctrines of efficiency and the simple equation of good with happiness (‘the greatest good for the greatest number’ is the slogan of utilitarianism). Chadwick came to public health through his concern with poverty, and in particular the operation of the Poor Law, the legislative means of dealing with issues relating to the relief of poverty and destitution. The Old Poor Law, dating back to the late 16th century, had become woefully inadequate in a society undergoing rapid industrialization and urbanization. Britain was the first industrial nation, and the older ways of dealing with the poor were inappropriate in an industrial wage economy, with seasonal unemployment, urban poverty, and a growing class consciousness.

The brunt of the first European cholera epidemic was felt in 1832, an eventful year in other ways. A Reform Bill in Parliament went some way towards redressing unequal Parliamentary representation, the result of population shifts consequent on the rapid growth of industrial cities; the Bill also extended the franchise. Parliament set up a Poor Law Commission to examine how the Old Poor Law operated and make recommendations for its reform. This came after years of intense debate, part of it stimulated by T. R. Malthus’s Essay on the Principle of Population (first edition, 1798; sixth edition, 1826). Malthus had pointed out the double-edged nature of poor relief: keeping the poor alive could simply compound the misery of penury in later generations, when breeding paupers reproduced yet more dependency. The ‘law of population’ that Malthus elaborated stated that throughout nature, the capacities of organisms to reproduce always outstripped the number of offspring that could actually survive. Human beings were not exempt from this stern law, with the disparity caused by geometrical population increase set against the arithmetical increase in the means of subsistence. Disease, misery, war, vice, and want kept human populations down, and interfering with the system by keeping more pauper children alive did no good in the long run.
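Malthus’s contrast is easy to make concrete. The sketch below, in Python with invented starting values (nothing here is Malthus’s own calculation), simply prints a doubling (geometrical) population series against a fixed-increment (arithmetical) subsistence series:

```python
# Malthus's disparity, with invented starting values:
# population grows geometrically, subsistence arithmetically.
population = 1.0   # arbitrary units
subsistence = 1.0  # arbitrary units

for generation in range(1, 9):
    population *= 2   # geometrical: 2, 4, 8, 16, ...
    subsistence += 1  # arithmetical: 2, 3, 4, 5, ...
    print(f"generation {generation}: population {population:g}, "
          f"subsistence {subsistence:g}")
```

After eight doublings the population term has outrun subsistence by a factor of more than 25, which is the whole of the argument in miniature.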

The Malthusian dilemma was merely one of the issues that the 1832 Poor Law Commission had to consider. Chadwick was its secretary and dominant figure, masterminding the systematic survey of how the 15,000 local parishes actually administered the Old Poor Law. Initiated in the time of Elizabeth I, in the late 16th century, this statute was designed to provide, from local taxes, a last safeguard for people who could not support themselves, through sickness, injury, unemployment, or other misfortunes. Designed for an overwhelmingly static, rural society, the Law had become increasingly inadequate, as Britain became more mobile, industrial, and urban, and reached a crisis after the close of the Napoleonic Wars in the 1810s, when thousands of military men returned home and could not find work. With 15,000 different local authorities administering it, there was wide disparity, something which deeply offended Chadwick’s utilitarian leanings. The Commissioners’ Report, published in 1834 and the basis for the New Poor Law of the same year, recommended streamlining and unifying its operations, so that similar rules and regulations extended over the whole country.

This New Poor Law, so hated by many for its harshness, served as the mechanism for poor relief until its abolition in 1929. Chadwick wanted to be a Commissioner of the new government department but had to content himself with being its paid Secretary. Administering the New Poor Law on a daily basis inevitably confronted Chadwick with the relationships between poverty and disease. Doctors had long noticed that epidemic diseases generally afflicted the poor more than the rich, and assumed that this was associated with their overcrowded living conditions, sparse diet, and other trappings of want.

Chadwick’s initial concern was with the fact that many of the demands on the Poor Law arose because the breadwinner had fallen sick and could not work. Disease could thus impoverish a family. The reverse proposition was more subtle: does poverty itself cause disease? Chadwick and many of his contemporaries preferred to put a moral spin on poverty per se, arguing that its ultimate cause lay in individual failing: imprudent marriages, failure to save, spending on drink and other vices. Nevertheless, since disease was a major factor in the causation of poverty, it followed that preventing what he called ‘filth diseases’ would ease the burden on the Poor Rate. As an ardent miasmatist, he attributed filth diseases such as cholera, typhus, and scarlet fever to the bad smells of rotting organic matter. The solution was easy: cleanliness. Dirt caused disease; cleanliness prevented it.

Chadwick’s journey from a Poor Law reformer to one obsessed with preventing disease occurred over the few years from 1834 to 1842, when he published a classic text of the early public health movement: Report on the Sanitary Condition of the Labouring Population of Great Britain. He used the new statistical approaches of the day (the civil registration of births, marriages, and deaths had started in 1837) to quantify the staggering differences in mortality rates and average expectation of life at birth between overcrowded urban areas and rural ones, and between the rich and the poor. To solve the problem of filth diseases, Chadwick proposed what he called an arterio-venous system of water supply and sewage disposal. If running water under pressure were supplied to households, cleanliness would be easier; if sewage were taken away in glazed pipes impervious to leakage, the problems of cesspits and ground contamination would be solved. Further, if the sewage were taken away from cities to treatment plants, it could be turned into fertilizer and sold to farmers at a profit, and crops would be increased, thereby improving nutrition. It was a neat engineering solution to public health, good in its context, though it could not solve all the problems, given Chadwick’s limited view of disease causation.

He got his chance to influence public health in 1848, when cholera returned, and a Board of Health was established, with Chadwick one of three members (a fourth, a doctor, was added later). The Parliamentary Act setting up the Board was largely permissive, allowing communities to appoint a Medical Officer of Health (MoH) if 10% of their rate-payers petitioned for it. The MoH was obligatory only if the crude death rate in the area was greater than 23 per 1,000. The permissive clause was something of a Trojan Horse, since the MoHs raised the profile of prevention, and agitated for such officers throughout the country, on a statutory basis. This passage from permissive to statutory legislation became the pattern in liberal, laissez-faire societies, in ways that are still resonant.
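The 23 per 1,000 trigger mentioned above was simple arithmetic. A minimal sketch, with invented figures for a hypothetical district, shows the calculation an official might have made:

```python
# Hypothetical district figures; the 23 per 1,000 threshold is the one
# described above for making a Medical Officer of Health obligatory.
deaths = 1_150       # deaths in one year (invented)
population = 46_000  # district population (invented)

crude_death_rate = deaths / population * 1_000  # deaths per 1,000 living
moh_obligatory = crude_death_rate > 23

print(f"crude death rate: {crude_death_rate:.1f} per 1,000")
print(f"Medical Officer of Health obligatory: {moh_obligatory}")
```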

Investigating almost any social issue uncovers others that need attention. Throughout his long life, Chadwick never abandoned his notion of filth disease, nor his faith in the healing power of cleanliness. He left office against his will in 1854, despite the return of cholera: his dictatorial style had made too many enemies, and he wanted compulsory legislation to enter through the front door. It came, piecemeal and gradually, through the back one.

In the meantime, the nature of filth diseases was being reconceptualized. Only in hindsight did people realize that the Italian microscopist Filippo Pacini (1812–1883) had described the causative organism of cholera during the 1854 pandemic. Of equal moment, the London anaesthetist, epidemiologist, and general practitioner John Snow (1813–1858) demonstrated that cholera is borne not by air but by water. Snow was a medical apprentice during the original cholera outbreak in 1831–1832, and studied the disease as an established and ambitious practitioner during the 1848 and 1854 London epidemics. He provided good evidence from the 1848 epidemic that the disease was transmitted through water contaminated by faeces; he nailed his case through two classic community experiments during 1854. The Broad Street Pump is the most famous – the stuff of legends. This pump, in Soho, central London (the street is now called Broadwick Street), served many houses, most of which had no direct access to running water. By systematically investigating, house to house, the cases that occurred in the area of the pump, and tracing cases further afield to people who had drunk its water, he incriminated it as the source of the disease. An open sewer drained into it. The dramatic removal of the pump handle was more symbolic than effective, since the epidemic was already on the wane, but the incident attracted a good deal of attention.

His second epidemiological investigation was more impressive. He compared the incidence of cholera among people buying Thames water from two separate companies: one filtered its water and drew it upstream, above the point where the sewers of London emptied; the other supplied unfiltered water from downstream, sewage and all. In some instances, people in the same streets, living in similar housing and breathing the same air, had contracts with each of the two companies. He showed that people using the water of the ‘bad’ company had 13 times the chance of coming down with cholera compared with people using the better supply.
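Snow’s comparison is, in modern terms, a risk ratio. A minimal sketch with invented counts (chosen only so that the ratio comes out at 13; these are not Snow’s actual tabulations, which were by houses supplied and cholera deaths per company) shows the arithmetic:

```python
# Invented counts, scaled so the ratio comes out at 13.
cases_downstream, houses_downstream = 260, 20_000  # unfiltered supply
cases_upstream, houses_upstream = 20, 20_000       # filtered, upstream supply

rate_downstream = cases_downstream / houses_downstream
rate_upstream = cases_upstream / houses_upstream
risk_ratio = rate_downstream / rate_upstream  # how many times more likely

print(f"attack rate, downstream company: {rate_downstream:.4f}")
print(f"attack rate, upstream company:   {rate_upstream:.4f}")
print(f"risk ratio: {risk_ratio:.0f}x")
```

Because the two customer groups shared streets, housing, and air, the comparison approximates what we would now call a natural experiment: only the water supply differed.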

Snow’s evidence seems obvious to us. It wasn’t to most of his contemporaries, and the nature and cause of cholera continued to be debated for decades, even, it turns out, after Robert Koch described the organism in 1884, in an age of bacteriology. Old ways of thinking die hard, although when cholera struck Hamburg during the 1890s pandemic, more people listened to Koch than had to Snow four decades previously. His evidence was impressive, but so was Snow’s.

Establishing the public health bureaucracy

‘In the beginning was the Word’, St John’s Gospel has it. Now, there is mostly the number. We live by the clock, follow the ups and downs of the stock markets or mortgage rates, and experience the hottest, or wettest, month since records began. Contemporary society is permeated with numbers; they rule our lives.

Public health evidence is inevitably numerical. If the public health movement was in large measure a product of the industrialization and urbanization that transformed the Western world from the late 18th century, it also relied on the numerical mentality that accompanied the profits and losses of the factory system, the harnessing of steam, double-entry book-keeping, and the national census. Like us, the Victorians felt overwhelmed with facts and data.

Three dimensions to the quantification of medicine (and society more generally) should be highlighted: surveys, surveillance, and significance. The survey is the most basic. The 1832 Poor Law Commission has been described as the pioneering national survey, and it certainly was novel for its times. Chadwick and his fellow commissioners sent out a detailed questionnaire to each of the parishes responsible for Poor Law relief, and attempted to coordinate the replies. In the late 1830s, Chadwick commissioned surveys of the relationship between poverty, overcrowding, and filth diseases. One of the first acts of Chadwick’s successor as leader of the British public health movement, John Simon (1816–1904), was a Europe-wide survey of vaccination and its effectiveness, in relation to the issue of enforcing compulsory vaccination. This survey convinced him that the way to prevent smallpox was to have an active policy of free vaccination. During his years in office, Simon gradually became disillusioned with persuasion as a tool to achieve public health ends, and under his leadership, Britain acquired a vaccination system that was publicly funded, free, universal, and compulsory, with penalties for non-compliance.

Throughout the developed world, during the middle decades of the 19th century, the power of the number became appreciated. Social issues with medical ramifications were repeatedly investigated by surveys. Issues of poverty, child labour, and factory conditions, food adulteration, water supply, prostitution, building standards, and, of course, epidemic diseases, all came under scrutiny. Investigating one issue often threw up others that called out for attention. For instance, concern with the employment of young children in poorly paid and grinding jobs raised more general issues of education and child health. Charles Dickens’s Mr Gradgrind was not the only one in 19th-century Europe who wanted ‘the facts’, and ‘facts’ increasingly came in a table or other quantitative form.

If surveys threw up all kinds of medical and social issues, surveillance was a complementary strategy, aimed at systematically following trends or following up on troubling problems. Many surveillance structures have long histories. For example, from medieval times, French butchers could expect periodic visits from inspectors examining the meat they were selling. Markets and fairs were conducted under regulations. Borders, ports, and walled towns were manned, especially during outbreaks of plague and other epidemic diseases; people and goods could expect to be inspected. In any case, absolute monarchs and despots needed information about the comings and goings of their enemies. The FBI, CIA, MI5, and KGB have many forerunners, although most earlier networks of surveillance were concerned with security and control rather than with health.

Once statutes are on the books, they need to be policed, and Medical Officers of Health, factory surgeons, port medical authorities, and the host of other individuals concerned with the public’s health became a visible part of 19th-century Western society. The starkest instance of the police functions of public health officials, as well as ordinary medical practitioners, is seen in the development of the concept of the notifiable disease. A number of local communities had insisted that cases of smallpox had to be reported to central authorities. From the 1880s, in the wake of bacteriology, national schemes were inaugurated and several diseases were identified as contagious and public health risks.

Smallpox, scarlet fever, typhoid fever, and, eventually, tuberculosis and syphilis became diseases in which the risk to the general public was deemed greater than the value of privacy and individual treatment by a medical man. Medical practitioners were required to add surveillance to their other tasks (resistance to the bureaucracy lessened after they were paid for filling out the forms), and although MoHs and equivalent officials in different countries occupied the front line, all doctors were expected to serve in the ranks.

The range of legal, medical, and ethical issues involved in surveillance is starkly seen in the famous case of Mary Mallon (1869–1938), ‘Typhoid Mary’. This Irish-born woman served as a cook for a series of wealthy New York families in the first decade of the 20th century. She was completely well but displayed all the characteristics of what Robert Koch had recently identified as the ‘carrier state’: that is, she shed the bacteria of typhoid fever without suffering from the symptoms herself. She infected members of several families, and the isolated outbreaks were investigated by public health officials. A female immigrant, with limited education, and conscious of no wrong-doing, Mary was nevertheless a public health hazard, and she was incarcerated for her ‘crime’.

Surveying was the activity of officials intent on uncovering new associations; surveillance became the duty of all doctors who encountered a patient with a notifiable disease. Statistics became the expertise of those especially trained to understand the nature of correlations and causations. The modern public health movement emerged simultaneously with statistical societies, and for many of the same reasons. Both were responses to industrialization, and the movement and the societies were peopled by many of the same concerned individuals.

Although the mathematics of probability had been developed from the late 17th century, its contemporary mathematical partner ‘statistics’ was in the early 19th century much less sophisticated. Statistical societies were mostly devoted to collecting many observations and presenting these in tabular form. The introduction of civil death registration in many European countries led to annual presentation of tabular causes of death, and at the same time required international attempts to standardize diagnostic categories. Although many of the symptom-based disease categories (such as ‘fever’ or ‘jaundice’) had to be abandoned as diseases in their own right, nosology still maintained its importance, as doctors both nationally and internationally wanted to be certain of the diseases that were put on death certificates or annual hospital reports.

Of equal lasting importance, ‘significance’ entered statistics, originally through the work of Charles Darwin’s cousin, Francis Galton (1822–1911). Galton became intrigued with the nature of heredity, and developed mathematical methods to examine the relative contributions of parents, as well as grandparents and other ancestors, to the inherited makeup of an individual. As the father of eugenics, he was especially concerned with what he perceived as the differential birth rate between feckless poor and responsible middle-class parents. He measured many human attributes, such as height, longevity, muscle strength, and ‘success’ in life. He put inheritance into the public health equation, in a field that had hitherto mostly concerned itself with environmental issues such as overcrowding and dirt. After Galton, both ‘nature’ and ‘nurture’ had to be considered.

Although Galton trained in mathematics and medicine (he never practised), it was his disciple Karl Pearson (1857–1936) who placed statistics at the centre of both experimental science and clinical medicine. Our notion of significance, with its ‘p’ value (conventionally, the 5% threshold below which an observed result is judged unlikely to be the product of chance alone), owes much to Pearson. He studied inheritance in tuberculosis and alcoholism, but he was mostly interested in the role of inheritance in evolutionary biology. His pupils and followers placed mathematics at the centre of epidemiology and of the evaluation of new therapies through the development of the clinical trial.
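A modern descendant of Pearson’s methods can be shown in a few lines. The sketch below uses invented parent and offspring heights (the kind of data Galton and Pearson collected) and the scipy library to compute a correlation coefficient and its p-value:

```python
from scipy import stats

# Invented data: heights (cm) of eight parents and their offspring.
parent = [165, 170, 172, 168, 180, 175, 160, 178]
offspring = [167, 169, 174, 170, 178, 176, 163, 177]

r, p = stats.pearsonr(parent, offspring)
print(f"Pearson r = {r:.2f}, p = {p:.4f}")
# Conventionally, p < 0.05 is called 'significant': an association this
# strong would be unlikely in data with no real underlying relation.
```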

These 20th-century developments have transformed the simple surveys and tabulations of earlier public health advocates. But the 19th-century message of those concerned with diseases within the community has stuck: facts are important, and so are numbers. The méthode numérique that Louis had used so well within the hospital had resonance outside of it. Data had to be evaluated, in the hospital, the community, and the laboratory, and the mathematical and statistical tools to effect this have gained increasing importance in modern health research and disease prevention.

Medicine in the laboratory

Making medicine scientific

Western medicine has always fancied that it was ‘scientific’, but what that means has changed. The Hippocratics would have counted themselves in the ranks of science (the Greeks would have used words like ‘natural philosophy’). So would the many followers of Galen. The medicine they practised had two fundamental ‘scientific’ attributes.

The first was an underlying rationality, which assumed that, given their world views, their actions – the diagnoses and therapies – made sense. This is of course a relativistic view of science, since astrological medicine is also rational, assuming that one accepts the influence of the planets and stars on human behaviour and earthly events. To dismiss it, one needs to discount the underlying principles, not the rationality that governed the whole process of reasoning. The second was that medical practice has always been rooted in ‘experience’, from which we also derive the word ‘experiment’. ‘Experience’ told doctors and their patients that bloodletting, for instance, helped, or that a thousand other remedies that seem ineffective, even disgusting, to us were just what the doctor ordered. Historians can attribute these encounters to the healing power of nature, to the patient getting better despite, not because of, his or her treatment, or to the old logical fallacy we have already encountered: post hoc, ergo propter hoc. These retrospective judgements do not invalidate what historical participants interpreted as ‘rational’, ‘scientific’ medicine.

From the early-modern period, however, experience came increasingly to incorporate experiment, which was often situated in a laboratory. The word literally means a place where someone works, and laboratories were initially in people’s homes, simply rooms set aside by those with sufficient leisure to enquire into the secrets of nature. The quintessential early laboratory, and the one most frequently illustrated, was that of the alchemist, as natural philosophers sought to learn how to turn base metals into gold. The alchemist’s tools were the furnace, distiller, reagents, balance, and flasks of various sizes. Those interested in anatomy, physiology, and other life sciences would possess dissecting tables, surgical instruments, and other equipment to measure whatever parameter was under investigation.

The Belgian physician J. B. Van Helmont (1579–1644) kept a young sapling in a pot for five years, watering it regularly with rain water. He then weighed the tree and its surrounding soil. The soil was more or less the same weight as when he had planted the sapling, whereas the tree had gained some 164 pounds, an increase Van Helmont attributed to the water. In Italy, Santorio Santorio (1561–1636) designed a chair in which he could carefully weigh himself, keeping a thorough tally of the weight of food and drink he ingested, and the weight of his excreta. The difference was what he lost in ‘insensible perspiration’, as he called it. William Harvey dissected snakes, toads, and other cold-blooded creatures, the better to observe the details of the heart-beat, in his quest to understand the ‘motion of the heart’ and the circulation of the blood. Albrecht von Haller (1708–1777) conducted an extensive series of experiments on living animals in his differentiation of irritability (the capacity to react to external stimuli, a property of muscles) and sensitivity (the capacity to feel, the result of nervous function). The experimental impulse in medicine has a long tradition, often involving the quantitative spirit. What could be measured could be known.
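Santorio’s procedure amounts to a mass balance. A minimal sketch, with invented daily figures in place of his actual measurements, shows the bookkeeping:

```python
# Invented daily figures, in pounds. Whatever intake is not accounted
# for by excreta or by a change in body weight is, on Santorio's
# reasoning, lost as 'insensible perspiration'.
intake = 3.5         # food and drink ingested
excreta = 1.4        # measurable outputs
weight_change = 0.1  # net change in body weight over the day

insensible_perspiration = intake - excreta - weight_change
print(f"insensible perspiration: {insensible_perspiration:.1f} lb/day")
```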

One tool among many that might be found in these early scientific workplaces was the microscope. There were problems, realized at the time, of distortion and aberration, and historians have sometimes dismissed microscopy before the 19th century as a plaything of rich dilettantes. Recent scholarship has shown how important microscopy was in serious scientific debates from its early use in the 17th century, above all by Antoni van Leeuwenhoek (1632–1723), a self-taught microscopist who worked as a draper in the Netherlands, and Robert Hooke (1635–1703), also from humble origins but a man who rivalled Isaac Newton in the breadth of his research. Hooke coined the word ‘cell’ in his Micrographia (1665). Once the microscope allowed individuals to witness the new world that it revealed, the technical problems were set aside as an inconvenience, compared to the possibilities its use opened up. In the 19th century, the microscope became the symbol of the medical scientist, occupying the identical role that the stethoscope had for the progressive clinician.

Cells: ever smaller

The basic unit of medical understanding of disease has become steadily more refined. Humoralism worked with whole bodies; Morgagni used the organs as his default mode; Bichat noticed how important tissues were for classifying and analysing pathological changes. Cells then became the key, and have remained central, even as sub-cellular units and molecules have since been identified as crucial constituents of the dynamics of disease processes.

The cell theory that finally triumphed from the 1830s can be seen as the foundation stone of both modern medical science and biology. The word ‘biology’ dates from 1801, whereas ‘scientist’ was not coined until 1833. These two words suggest that something fundamental changed during those decades. In the early 19th century, several theories proposed some kind of microscopic unit from which whole organisms were composed. Some of these units, such as ‘globules’, were actually artefacts of the microscopes then in use. The technical problems were largely resolved in the late 1820s, and descriptions appeared regularly of units that are recognizable as our ‘cells’, as well as of their contents, especially the nucleus. Then, in the successive years of 1838 and 1839, two German scientists, Matthias Schleiden (1804–1881) and Theodor Schwann (1810–1882), proposed that cells are the building blocks of plants and animals, respectively. That they were both German is no accident, for much of modern biomedical research originated in Germany, within the German university system.

Schleiden was an academic botanist, but Schwann, trained as a doctor, was the pupil of the most important teacher of medical science of the period, Johannes Müller (1801–1858). Schwann had a fabulously successful early research career, making fundamental discoveries about the nature of fermentation and digestion, as well as elaborating his cell theory. He argued that complex organisms were collections of integrated cells, and that function, both normal and pathological, therefore had to be understood in terms of the living characteristics of these entities. He believed that primitive cells, for instance in early embryological development, or in tissues that were inflamed, could crystallize out from an amorphous fluid which he called the ‘blastema’. This theory seemed to square what the microscope could reveal with his notion that life was the product of an essentially physical process. Schwann soon abandoned his confident materialism and spent the last decades of his life in religious and philosophical speculations.

His cell theory found general favour, however, and was modified and applied to medicine by others, especially by Rudolf Virchow (1821–1902), the dominant figure within 19th-century German medical science. Virchow was a life-long liberal in an increasingly militant German society, and in his youth had a touch of the political radical about him. He spearheaded a reformist group of young doctors during the revolutions of 1848, which coincided with a cholera epidemic, spending a bit of time on the barricades that were thrown up by revolutionaries in Berlin. To remove him to a backwater, the Prussian authorities sent him to investigate an epidemic of typhus in Upper Silesia, now part of Poland but then within the Prussian sphere of influence. He wrote a report that the authorities did not wish to read, blaming the epidemic on social deprivation, poverty, illiteracy, and political inequality. These and similar epidemics were best controlled, he argued, through democracy, education, and economic justice. He believed that one important role of doctors was simply to campaign for such reforms. Doctors were the natural advocates of the poor, since their profession brought them into intimate contact with the economic and social causes of disease.

Virchow always maintained his interest in politics and sanitary reform, serving in the German parliament and on the Berlin public health council. He liked to compare the body politic with the human body, cells becoming the body’s citizens. Doctors had to confront in their daily work the adverse effects of poverty on health. This man of incredible energy also pursued his interests in anthropology and archaeology, as well as editing several journals and multi-volume books. The pathology journal he founded and edited for more than half a century is still published, known as Virchows Archiv.

It is primarily as a pathologist that he is remembered. Always convinced that the microscope was central to understanding disease processes (‘Learn to see microscopically’, he taught his students), Virchow took previous cell theories and applied them to medicine. He came to doubt that Schwann’s ‘blastema’ was the source of new cells, such as those in early embryological development, or in inflammatory responses in the tissues, arguing instead that all cells come from mother cells (Omnis cellula e cellula). Although the slogan was not originally his, Virchow convinced the scientific world that cells do not crystallize or otherwise originate spontaneously, but are always the result of cell division. He elaborated his cellular pathology in the 1850s, in a series of articles, mostly in his own journals, and in 1858, then back in Berlin after seven years as professor of pathology in Würzburg, published a series of lectures as Cellular Pathologie. In it he showed how cells were the fundamental units of physiological and pathological activity, and that routine clinical events, such as acute and chronic inflammation, cancer growth and spread, and bodily reactions to external stimulation such as irritation or pressure, could be fruitfully conceptualized in cellular terms. He placed the cell at the centre of pathology, even as he elaborated a more general biological principle.

Virchow made many important observations on a variety of diseases, such as phlebitis, embolism, cancers, and amyloidosis, a rare disease that is still not well understood. He was also the most influential teacher of pathology in the 19th century, and many of the subsequent leaders in the field passed through his Institute in Berlin. He carried out some animal experiments, but much of his work was spent examining pathological tissues and cells, and relating his findings to the clinical events of the patient’s lifetime. He witnessed the development of new microscopical techniques, such as microtomes to cut thin slices of tissue, the better to observe it, and stains to highlight features of cells, such as the nucleus and bodies in the cytoplasm. Although he was something of an experimentalist, experimental pathology came into its own only late in Virchow’s life, with bacteriology. Virchow followed the new discipline with interest but never wholeheartedly endorsed germ theory.

Germs: the new gospel

In the medical pantheon, there are few saints more revered than St Louis – Louis Pasteur (1822–1895). That he was not even a qualified doctor, but trained in physics and chemistry, says much about the increasing importance of science for medicine. That he worked mostly in the laboratory, coming to the bedside only late in his life, to watch while doctors injected his rabies vaccine, reminds us of the place of the laboratory in our total picture of modern medicine.

Traditionally, the germ theory has been seen as the beginnings of effective, and therefore modern, medicine. Revisionist historians sometimes point out that the discovery that micro-organisms cause many of the most important historical diseases – typhus, tuberculosis, syphilis, cholera, malaria, smallpox, influenza, and many others – took decades of debate before some sort of consensus was reached. Further, so this revisionist account emphasizes, medicine remained therapeutically inept long after Pasteur was dead. The emergence of new diseases, such as HIV infection, Lassa fever, and Legionnaires’ disease, the widespread development of drug resistance among micro-organisms, and the increasing prevalence of non-infectious chronic diseases in Western societies, have put germ theory into another perspective. From the 1950s, Thomas McKeown (1912–1988), a professor of social medicine at Birmingham, published a series of influential studies arguing that the decline in mortality rates in Western societies was due primarily to improvements in nutrition and general standards of living, and that organized medicine had contributed little, at least until the very recent past.

Within these readings of 19th-century medicine, Pasteur, Robert Koch (1843–1910), and the other proponents of microbiology, bacteriology, and their attendant laboratory disciplines may have been doing interesting research, but its fundamental significance for patients and life expectancies has been exaggerated. What exactly did they find out, and did it matter all that much?

Pasteur was not the first to see bacteria and other micro-organisms, nor the first to talk about the ‘germs of disease’. But his researches, from the late 1850s, had a wonderful logic to them, and for few scientists is it easier to see an entire career as a connected series of chance observations and opportunities in which the whole is greater than the considerable sum of its parts. He became interested in micro-organisms while studying crystallization, and showed that crystals of tartaric acid (a by-product of wine-making) made by ordinary chemical means were always optically inactive, whereas those he obtained after micro-organisms had been at work rotated polarized light. This convinced him that living organisms had special capacities, and led him to study the properties of yeast and other industrially important organisms used in baking, brewing, and fermentation. His iconic experiments on spontaneous generation occupied him for several years in the early 1860s, and had special resonance in the wake of Darwin’s Origin of Species (1859). His famous swan-necked flasks, designed to exclude air-borne contamination after the solutions had been boiled to sterilize them, are part of our affectionate image of him.

To him, these experiments showed that spontaneous generation of micro-organisms does not occur, and he won the public debates with his rival Félix Pouchet, who repeated his experiments and often found organisms swarming in the fluid. Analysis of Pasteur’s laboratory notebooks has shown that Pasteur’s experiments also sometimes ‘failed’ (i.e. had flasks with organisms in them), but that he quietly discarded these results. Pouchet was working with the hay bacillus (akin to the causative agent of anthrax), and the spore form of this bacterium is resistant to heat, so one would expect ‘negative’ results in such experiments. By suppressing his own, Pasteur got the better of his opponents. He always had the most amazing knack of backing the right horse, and of sticking to his guns.

Alongside his spontaneous generation experiments, Pasteur worked actively on the role of yeasts and other micro-organisms as the cause of various fermentations: of beer, of wine, or the souring of milk. Liebig and other German chemists had concluded that these important everyday reactions were merely chemical, but Pasteur insisted that they required living organisms, and hence were vital processes. He provided important practical knowledge for wine-makers and brewers, as well as introducing ‘pasteurization’ as a means of sterilizing substances like milk, to retard their spoilage.

Such was his reputation that, in the mid-1860s, he was asked by the French government to investigate an apparently infectious disease of silkworms that was threatening the silk industry. He took his family with him, put them to work, and identified the two micro-organisms responsible, then showed how they could be prevented from doing their mischief. After this work, he increasingly began to talk about a ‘germ theory’ of disease, and to work on the disease-causing capacity of bacteria. Fittingly, for this non-medically qualified scientist, he tackled a disease common to animals and man: anthrax. Anthrax is unusual: unlike in most bacterial infections, the bacterium can routinely be seen on slides made from the blood of affected animals or human beings (the ‘blood smear’). Accepting that these bacteria were the causative agents, he (and several rival workers) sought ways to ‘attenuate’ the bacterium, so that it might produce immunity without causing the disease. Having what he thought was a satisfactory anthrax vaccine, Pasteur did a daring thing (he was a skilled publicist, perhaps the first major scientist to court the media): he invited journalists to see the inoculation of farm animals with his vaccine, and then to witness the injection of live, virulent anthrax bacilli. The public result was the death of many of the unprotected animals, but not of those vaccinated (a term he generalized in honour of Jenner). It was reported worldwide.

After anthrax, Pasteur lived in the public domain. He was ready for it, for his final major discovery was a treatment for rabies, a relatively rare disease, but one which killed so horribly that it provoked fear and trembling. Pasteur had to work at rabies blind, for rabies is (we now know) caused by viruses, tiny organisms which in Pasteur’s time were known only through their effects. Smallpox, yellow fever, measles, influenza, and a host of other viral diseases had already made their presence known. The word ‘virus’ had long been used in a general sense, as a ‘poison’ that caused disease, but it was given its more precise biological meaning early in the 20th century, as a ‘filterable virus’, that is, a small agent that passed through filters that would trap bacteria and other larger biological causes of disease. Tissue culture methods and, eventually, the electron microscope made identification and classification of viruses possible.

For Pasteur, dealing with the rabies ‘virus’ also meant working with an agent that he did not know how to cultivate. Instead, recognizing that the symptoms of rabies pointed to some kind of infection of the nervous system, he worked with the spinal cords of rabbits, and by passing the infected material serially, learned how to make the ‘poison’ of rabies more or less virulent. The latent period between the bite of a rabid dog or other animal and the development of symptoms in the victim meant that there might be time to stimulate resistance in the person who had been bitten. There were so many imponderables that such a grant application would not get past the first hurdle of a modern funding agency, and Pasteur’s whole enterprise, given what he and his contemporaries knew about rabies and viruses, would have been attempted only by a person possessed of what the Greeks called hubris.

Unlike Greek tragic heroes, however, Pasteur brought off his rabies treatment, and went from scientific stardom to scientific sainthood. His first public patient, Joseph Meister, survived after being bitten by a dog which was probably rabid, and other patients were soon treated. The rabies treatment created international acclaim, with patients coming to Paris from all over Europe (time was of the essence) to receive the course of injections. It also convinced many members of the public that medical research was worthwhile, and they voted with their pocketbooks. The Pasteur Institute in Paris was funded largely by public subscription. It opened in 1888 to great fanfare and was followed by many more, throughout the French area of influence and beyond. Many of these peripheral Instituts Pasteur were devoted largely to making vaccines and other biological products: the Paris headquarters manufactured vaccines, but it also had (and has) medical research as its primary objective. Pasteur spent the last seven years of his life presiding over his eponymous institute, where he also lived, died, and is buried.

Robert Koch headed a couple of institutes as well, although his were mostly funded by the German state, symptomatic of the differences in outlook towards scientific research between Germany and the rest of the world. Relations between France and Germany were frosty after the swift defeat of France by Bismarck’s Prussian forces in the Franco-Prussian War (1870–1871). Science was (and is) supposed to be international and objective, cutting across barriers of race, religion, nationality, or gender. The reality has always been different, and Koch and Pasteur actually played out these national antipathies in their personal and professional relationships. Pasteur sent back his honours from German states after the Franco-Prussian War, and refused to drink German beer, and Koch was eager to score as many points as he could when confronted with French microbiological and immunological findings.

Their meetings at international conferences were formal but frosty. There were ample scientific spoils to satisfy them both in the rich pickings of early bacteriology, but they possessed completely different scientific styles. Pasteur preferred to cultivate his microorganisms in flasks, constantly changing the nutrients in the culture soup. He kept much of his research private to himself and his closest colleagues. Koch, a generation younger, was much more precise in his techniques. He introduced photomicrography, the better to present objective data to the world. He cultivated bacteria on agar-agar, a solid medium that minimized the problems of contamination. He pioneered the use of sterilization equipment, and his pupil Petri introduced the eponymous dish. Koch was a medical bacteriologist; Pasteur was a microbiologist whose fascination was with this world of the very small. Pasteur went from triumph to triumph, whereas Koch enjoyed a couple of decades of brilliant achievement and an old age in which he could not recapture the glories of his scientific youth.

Koch’s first significant work involved anthrax, and as a general practitioner just after the Franco-Prussian War, he worked out this complex bacterium’s life cycle, which has a spore form accounting for its ability to lie dormant in the soil for many years. The research so impressed one of his old teachers that he secured research facilities for Koch. The early results were little short of amazing: the technical innovations mentioned above, important work on the role of bacteria in the genesis of wound infections, and, crowning it all, the identification of the causative organisms of the most important disease of the 19th century, tuberculosis (1882), and of the most anxiety-provoking, cholera (1884).

Both identifications were considerable technical achievements. The tubercle bacillus is fastidious, slow-growing, and difficult to stain. Nor was tuberculosis an obvious candidate for a bacteriological cause: it was a chronic disease, with an extensive literature relating it to a variety of constitutional and environmental factors.

Koch reported his cholera work from India, where he had gone after French and German expeditions had travelled to Egypt in 1883 to investigate a cholera outbreak there. The French expedition was disastrous, one of its promising young Pasteurians dying, and the expedition returning without any positive results. Koch believed that he and his group had identified the cholera organism in Egypt, but being certain of any specific organism in the gut is tricky, since there are so many bacteria always living there. Koch then went on to India, the traditional home of cholera, and identified a comma-shaped organism in both water supplies and the excretions of cholera victims. Cholera had been perceived so much as a disease of filth, foul water, and high water tables that Koch’s identification of a specific organism was only slowly accepted. The leading German hygienist, Max von Pettenkofer (1818–1901), had his own theory of the necessary interaction of several causative factors, of which the ‘germ’ was only one. In a famous gesture, he publicly swallowed a flask of Koch’s bacillus, and developed only mild diarrhoea, nothing like the full-blown disease of cholera.

The pros and cons of Koch’s bacillus were still being learnedly debated in the 1890s. A partially effective cholera vaccine, prepared in India from the bacillus by the Russian-born bacteriologist Waldemar Haffkine (1860–1930), helped to turn the tide, and the organism’s spread by the faecal-oral route seemed to answer most of the epidemiological issues. By the 1890s, scientifically attuned medical opinion on germ theory had shifted, and most debates were about whether some specific organism caused some specific disease or, as more was learned about immunology and the pathophysiology of infection, about the nature of bacterial toxins. The principle of the germ theory had been integrated into medical textbooks, and most medical students would have learned it in their studies. Some medical men still rejected it, of course, and others thought that bacteria might be partially instrumental in infectious diseases, but hardly sufficient. The gold standard of causation was Koch’s postulates, implied but never as concisely articulated by him as by his student Friedrich Löffler (1852–1915), who wrote of diphtheria: If diphtheria is a disease caused by a micro-organism, it is essential that three postulates are fulfilled. The fulfilment of these postulates is necessary in order to demonstrate strictly the parasitic nature of a disease:

1. The organism must be shown to be constantly present in characteristic form and arrangement in the diseased tissue.

2. The organism which, from its behaviour, appears to be responsible for the disease must be isolated and grown in pure culture.

3. The pure culture must be shown to induce the disease experimentally.

But the gold standard was hard to achieve in many diseases, and the more bacteriologists and immunologists learned about the pathophysiology of infection, the more subtle the whole process was revealed to be. Bacteria could easily be cultivated from the skin, gut, pharynx, or bodily fluids of people with no obvious signs of disease, and many of these bacteria were identical with those that in other individuals were implicated in disease. Sceptics could point out that many germs which one doctor identified as causative were doubted by other doctors. Germs were associated with many conditions that were later assigned to other causes. Koch himself identified the ‘carrier’ state, important in the case of Typhoid Mary, in which a pathogenic germ was ‘harboured’ by a completely healthy individual. Outbreaks of many diseases, when investigated, threw up intricate issues of why some people succumbed to a disease while others, similarly exposed, went scot-free. The viral diseases behaved like ‘germ’ ones, but their agents could not be seen. A number of diseases that we now recognize as viral had bacteria proposed as their causative agents. Much had to be taken on trust, and doctors did disagree.