The Wonderful Century

The ATLANTIC’S hundred years have seen science and technology transform our world not once but repeatedly. BERNARD COHEN, Harvard’s distinguished historian of science, tells the story in terms of the very different worlds of 1850, of 1900, and of 1950.

by I. BERNARD COHEN

THE achievements of the nineteenth century in science and technology — “especially as regards man’s increased power over nature, and the applications of that power to the needs of his life” — were described in 1898 in terms of an “importance and grandeur” never known before. It seemed as if so much had been accomplished in a hundred years that there was hardly any basis for a comparison with what man had done “in any preceding century . . . perhaps even . . . the whole period that has elapsed since the Stone Age.” The author of these phrases, Alfred Russel Wallace, wrote with a sense of participation in the accomplishments of what he called “the wonderful century,” since he was an independent co-discoverer of the principle of organic evolution by natural selection.

In order to prove his point, Wallace enumerated the major inventions and scientific discoveries from the Stone Age to the end of the eighteenth century and compared them with those made during the nineteenth century. It may seem surprising that he listed “lucifer matches” (or friction matches) among the great inventions of his time. Yet Wallace could recall that in his boyhood the first morning’s task had been to search among the ashes in the fireplace for a glowing coal from the previous evening’s fire. If he failed to find one, it would be his job to go to a neighbor’s house with a bucket to “borrow some fire”—a procedure that had not changed appreciably since the Stone Age.

High on his list of major inventions Wallace placed the railroad and the steamboat, and the conveyance of intelligence by the application of electricity to the telegraph and telephone, enabling men to send messages faster than the wind over continents and even oceans. Finally he drew attention to the revolution in illumination: gas lighting and the new infant prodigy, illumination by electricity.

Reviewing the list of inventions made in the nineteenth century, we cannot help noticing that they were largely the products of mechanical ingenuity rather than the results of the application of newly discovered scientific principles and that, furthermore, they were not produced by organized or systematic research. Even the telephone and the telegraph, though both harnessed the power of the electric current (discovered in the opening year of the nineteenth century), did not depend for their operation on any principles that were not generally known. Neither Samuel F. B. Morse nor Alexander Graham Bell had or needed a profound knowledge of the latest advances in electrical science.

But today we take it for granted that large-scale scientific research and development are the necessary conditions for technological, agricultural, and medical progress. A hundred years ago science was not yet conceived to be the pathfinder of the practical arts, and even fifty years ago it was not the custom for industrial concerns to support large-scale research establishments, or for governments to expend huge sums of money in scientific research for public health or national defense.

A few years ago, on a visit to the laboratory of the General Electric Company in Schenectady, I met Dr. Willis R. Whitney, the founder of this laboratory and its first director. Dr. Whitney was still busily engaged in research, as was his successor, Dr. William D. Coolidge. The then director of the laboratory, Dr. C. Guy Suits, was the third in the line of succession. Thus the whole history of this laboratory was encompassed by the span of three careers and the active life of a single man. So new was the idea of industrial scientific research at the beginning of the present century that when Dr. Whitney accepted the new post at General Electric, he suggested that he retain his professorship at M.I.T. on a half-time basis. Today it seems inconceivable that he could have believed the new job would not occupy a man’s full time.

Because scientific research today conditions our health, our standard of living, our national wealth, and our military security, the scientist is no longer — as he still was in the nineteenth century — an academic recluse who can spend his days in ignorance of the travails of society or its pressing needs. Darwin could live out his life in solitude and in isolation, for he did not have the experience that Einstein had of being aware that his additions to knowledge might provide the basis for a new weapon, a new technology, or a new approach to any of the practical problems of the world.

IN CHEMISTRY we see the dramatic demonstration of how even the most abstract and theoretical advances might be applied to practical purposes. Modern scientific chemistry dates only from the “chemical revolution” associated with Lavoisier at the end of the eighteenth century. By the middle of the nineteenth century, the question was not yet fully resolved as to whether the compounds found in animals and plants could be made synthetically in the chemist’s test tube or whether they required for their manufacture some sort of “vital force.” The chemists, their imaginations fortified by new methods of analysis, began to envisage the synthesis of living substances in their laboratories and even the production of wholly new substances with more useful properties than those found in nature in plants and animals. In 1856, a young chemistry student, William Henry Perkin, devoted his Easter holiday to synthesizing quinine — that is, to producing quinine in his test tube from aniline, a product of the distillation of coal to make illuminating gas. That the eighteen-year-old student failed in his endeavor is not surprising. But the experiment that failed yielded a colorant of mauve hue, the first synthetic, coal-tar, or aniline dye.

At the time of Perkin’s discovery, the sources of coloring matter for dyeing cloth were still chiefly of animal and vegetable origin. Black dye was obtained from logwood, blue from the indigo plant, and “Turkey red” or alizarin from the roots of the madder plant. Animal dyes were obtained from such insects as the kermes and cochineal, while the most famous and most costly, “royal Tyrian purple,” was squeezed one drop at a time from the Mediterranean shellfish Murex brandaris — 12,000 shellfish yielding only 23 grains of coloring matter. The beautiful purple dye mauve, synthesized by Perkin in 1856, was not found in animal and plant material, and its discovery opened up to the chemist the possibility of creating an endless variety of colorants whose hues would outrival nature herself.

In 1869, Perkin made a second revolutionary discovery: he synthesized alizarin. Starting with coal tar, he made in the laboratory a chemical substance in every way identical with the alizarin then obtained from the madder plant. Until about 1870, some 400,000 acres of arable land in Southern Europe and Asia Minor were under cultivation for the production of alizarin, France alone having 50,000 acres devoted to growing the madder plant. From this natural supply, there were obtained some 750 tons of pure dye. But by World War I, 2000 tons were being produced synthetically in chemical plants, and madder had become a botanical curiosity of no commercial importance. At the same time the cost of the dye had been greatly reduced, since the chemist found he could make alizarin more efficiently than nature could. The sudden change in the economy of the madder-producing regions showed, for perhaps the first time, the vast social consequences of a single scientific discovery.

Even more dramatic was the story of synthetic indigo. The source of the world’s indigo until the end of the nineteenth century was India, where over a million acres were set aside for the dye-producing plant. In 1897 India produced more than 8000 tons of indigo dye, which yielded an annual income of about $20 million. But by 1914 India had only a few thousand acres still devoted to indigo culture, and Germany had achieved a 96 per cent monopoly. What had happened between 1897 and 1914 was that Germany had captured the market by the force of research in organic chemistry.

The exact composition and structure of the indigotin molecule (the active coloring part of indigo) was elucidated by the German chemist Adolf Baeyer after about fifteen years of research, and in 1880 he produced a small quantity synthetically in his laboratory. Baeyer’s method of synthesizing indigotin was a triumph of pure science and showed the power of the new organic chemistry. Yet the initial cost of making indigo dye in this way so far exceeded the cost of obtaining the dye from the indigo plant that Baeyer’s achievement must have appeared to many practical men as an academic curiosity. Germany was then a rising national power, and the great British indigo monopoly in India appeared a prize well worth capturing. For seventeen years German chemists worked to find a way to make synthetic indigotin cheaply, and some $35 million was spent before the new product was ready for the market. This was the first large-scale program of organized applied research in history.

After the event, one of the foremost German chemists claimed that the success of this unprecedented venture was due to some special “moral” qualities possessed by German science and required specifically for research in organic chemistry. Delivered at a dinner in London to honor Perkin and to celebrate the fiftieth anniversary of the discovery of mauve, this German address must have outraged the British scientists, who were told that they could pursue inorganic chemistry but that they lacked the “moral” qualities that had enabled German organic chemists to wrest the dye industry from the British grasp.

Germany was the first to give a convincing demonstration of the decisive role that long-term applied scientific research might play in the economic contests between nations. Furthermore, the gigantic growth of the synthetic dye industry in Germany was not without military consequences. Since unstable dyes are explosives, Germany had built up before 1914 the vastest potential explosives industry the world had ever known.

Since that time it has been clear that no nation can face the future militarily or economically without a sound system of applied scientific research.

Apart from its effects upon society, the most striking difference between science as it was a hundred years ago and as it is now is the sheer magnitude of the enterprise. In 1800, scientists still published their discoveries in books and pamphlets or in the proceedings of learned societies or academies, such as the Royal Society of London, the Academy of Sciences at Paris, or the American Philosophical Society.

In 1857, it was still possible for a scientist to read all the articles published in his field. Today the physical magnitude of such a job would make it quite impossible. Shortly before World War II, there were about 50,000 scientific periodicals published throughout the world, involving the publication of about 1,000,000 articles a year, or 20,000 per week. The number has greatly increased since then, and it may be conjectured that now at least some 30,000 scientific articles are published every week. Such is the colossus of present-day science.

A CENTURY ago two contrasting ideas stirred men’s minds. The first was the doctrine of conservation of energy, announced in 1848; the second, the doctrine of evolution through natural selection, announced in 1858. Each of these concepts was a product of multiple discovery, three independent investigators claiming the conservation of energy and two conceiving independently the doctrine of evolution through natural selection. The bare fact that each of these major discoveries was made simultaneously and independently by two or more investigators shows us the closeness of the chase with which scientists pursue similar ideas, which has become a rather constant feature of modern scientific enterprise.

The doctrine of the conservation of energy was very important for the contemplation of the life of man, because it implied that the total energy of the universe is constant, that no new energy will be created. The principles of thermodynamics require that the present store of available energy be gradually used up or transformed into a lower or “degraded” state, where it will no longer be usable. According to this sober doctrine, therefore, the life of the universe insofar as man is concerned is finite and limited, and it is even possible to predict the date on which life as we know it on this earth must cease to exist — when the sun’s energy will fail, as it ultimately must.

A conservative doctrine, this principle implied a constancy in the universe. Human events, such as those that marked the year 1848, in which the principle was announced, might betoken change and instability, but the guiding principle for understanding the universe, its very regulating principle, was constancy. How this contrasted with the doctrine of evolution by natural selection! Here was a law of continuous change, which stated in effect that even the forms of life were not constant, but had evolved through centuries. This new doctrine bespoke the limitless possibilities of variation and the survival of those variations most suited to their environment. Evolution thus implied a dynamic world of almost infinite possibilities of change and so ran counter to general modes of thought in the mid-nineteenth century. Much more harmonious with the generally accepted world view was the point of view of the physical scientist, who saw all change as mere rearrangement of the matter, motion, electric charges, and energy that had existed since the beginning of the world.

The harmonious world of physics did not last very long. In the decade 1895-1905, the basic principles of classical physical science were shattered. This decade saw events that threatened the foundations of the biological sciences, too. Probably the major discovery in the life sciences at this time was in the field of genetics. In 1900 the laws of genetics were established independently by Correns, Tschermak, and De Vries. To their amazement each of these co-discoverers soon found that he had been anticipated by the work of an Augustinian monk named Gregor Mendel, whose careful experiments and theoretical deductions had been published in 1865 and had nestled comfortably in the literature like a butterfly in its cocoon — present but out of sight. One of the reasons for the neglect of Mendel’s work is that it must have seemed out of harmony with the ideas of evolution. Mendel emphasized a conservative aspect of plant and animal species, a tendency toward constancy rather than toward the change or variation which is a necessary condition for evolution. In Mendel’s experiments one could discern the constant heritable characters which might disappear for a time in hybrids only to reappear again unchanged. It was almost as if the physical basis of heredity were a set of particles which maintained their identity somewhat as atoms do when they combine to form molecules. In the last fifty years the work of Thomas H. Morgan and his associates, and of Curt Stern, Theodosius Dobzhansky, C. D. Darlington, and others, has firmly established the theory of the gene (as the Mendelian factor is now called).
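
To make the Mendelian behavior concrete, here is a minimal sketch in Python; nothing of the kind appears in the article, and the factor labels "A" and "a", the population sizes, and the random seed are all invented for the illustration. It shows a recessive character vanishing in the hybrid generation and reappearing, unchanged, in about a quarter of the next.

```python
import random

# A toy illustration of the Mendelian behavior described above.  The factor
# labels "A" (dominant) and "a" (recessive), the population sizes, and the
# random seed are invented for the example; they come from no experiment.

def cross(parent1, parent2, rng):
    """Each parent passes on one of its two factors, chosen at random."""
    return rng.choice(parent1) + rng.choice(parent2)

def fraction_showing_recessive(population):
    """Only the pure recessive ("aa") individuals show the recessive character."""
    return sum(genotype == "aa" for genotype in population) / len(population)

rng = random.Random(1)
f1 = [cross("AA", "aa", rng) for _ in range(10_000)]                      # all hybrids "Aa"
f2 = [cross(rng.choice(f1), rng.choice(f1), rng) for _ in range(10_000)]  # hybrids crossed

print(f"recessive character visible in F1: {fraction_showing_recessive(f1):.1%}")  # 0.0%
print(f"recessive character visible in F2: {fraction_showing_recessive(f2):.1%}")  # near 25%
```

The factors keep their identity through the hybrid generation, much as the paragraph above says atoms do when they combine into molecules.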

Today the study of genetics is a major part of the subject of evolution. But in the first years of our century the establishment of the principles of genetics seemed to threaten the foundations of evolutionary theory. If a new variation or mutation were to appear in nature, and if it were adapted for survival, genetics could account for the transmission of the new characters; but how could these new characters have appeared in the first place if heredity is governed by genes? In part this question has been answered by the realization that the genes produced in each plant or animal, which transmit the heritable characters to the individuals of the succeeding generation, may on occasion be imperfect copies of the prototype; the result will be an alteration of the character or characters controlled by that gene in future generations. This new gene will remain constant through generation after generation until it mutates once again. Since, as H. J. Muller discovered, the frequency of mutation is increased by X rays and the radiations of radioactive substances, one can readily understand the concern of many scientists lest a rise in the radioactive level of the earth upset the normal process of evolution and rapidly produce mutations which may survive even though they carry undesirable heritable characters.
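
The copying argument lends itself to an equally hedged sketch. The mutation rate below is an invented figure, not a measured one, and the integer standing in for a gene is only a bookkeeping device; raising the rate, as X rays do in Muller's discovery, simply makes the rare change less rare.

```python
import random

# A sketch of the copying argument above.  The "gene" is an integer that
# changes whenever a copy is imperfect; the mutation rate is assumed, not
# measured.  Between the rare miscopies, the gene breeds true.

MUTATION_RATE = 0.0005        # assumed chance of an imperfect copy per generation

def copy_gene(gene, rng, rate=MUTATION_RATE):
    """Usually an exact copy; occasionally an altered gene that then breeds true."""
    return gene + 1 if rng.random() < rate else gene

rng = random.Random(7)
gene = 0
lineage = []
for generation in range(20_000):
    gene = copy_gene(gene, rng)
    lineage.append(gene)

changes = sum(1 for a, b in zip(lineage, lineage[1:]) if a != b)
print(f"mutation events in 20,000 generations: {changes}")   # a handful, at this rate
# Between those rare events the gene is copied without change, generation
# after generation, just as the paragraph above describes.
```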

One of the great puzzles of present biology is why genes should be so stable — why mutations do not occur more frequently. It is generally agreed that the gene is a very complex protein molecule; if so, how does it come to be “copied” so exactly from one generation to the next? The protein molecule itself, the fundamental structural unit of living matter, is not yet very fully understood. We have not, for example, been able to synthesize the natural proteins in our test tubes. But some of the other mysteries of living matter have, in the twentieth century, been resolved by the methods of science.

ONE of the greatest mysteries in science at the end of the nineteenth century was the cause of the apparent heat in radioactive substances. A small block of radium is always one or two degrees higher in temperature than its environment, and this is true whether such a piece of radium is kept at room temperature, placed in boiling water, or put in a deep freeze. According to the established principles of conservation in physics, energy is never created or destroyed, but is only transferred or transformed. Heat energy in particular is always transferred from hot bodies to cold bodies, so that a hot body will always lose heat until it reaches the temperature of its environment. This was the line of reasoning that led to the notion of the “heat death” of the universe, the heat energy of the sun and the stars being transferred to colder bodies in the solar system and to the colder environment, until eventually the available heat becomes degraded throughout the whole of space. From the classical point of view, to produce energy on this earth it is necessary to have a source; hence if a piece of radium stays hotter than its environment, it must either receive the heat from somewhere else or produce it by means of chemical action such as the burning of coal — neither of which can apply. In radium, energy appeared in an unlimited quantity and seemed to have been created ad hoc.

Even more disturbing was the observation that these mysterious radioactive substances — chiefly uranium, thorium, radium, polonium — seemed to have some power within them whereby matter was destroyed. A uranium atom suddenly dies and in its place there appear a thorium atom and a helium atom. Not only did matter seem able to change spontaneously from one kind of chemical material into another, contrary to all established rules, but this process proved to be one regulated by chance and the laws of probability rather than by causal or determinative principles. If causality were to apply to radioactive disintegration, it would be necessary that the uranium atoms which disintegrate be noticeably different from the uranium atoms which do not disintegrate; but all of man’s art in exploring physics has not been able to show any discernible difference between the atom of uranium that will die in the next minute, the one that will die one year from now, and the one that will not die until one hundred years have passed. All that can be said about the death of uranium or radium atoms is that each one has the same statistical probability of death and disintegration as any other, and that according to the laws of pure chance fate selects some atoms rather than others to die and leave wholly new kinds of atoms in their places.
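
A rough simulation may make the statistical picture vivid. The decay probability used here is arbitrary and stands for no particular element; the sketch shows only the chance-governed scheme described above, in which every surviving atom, however long it has already lived, runs the same risk of disintegrating in the next interval of time.

```python
import random

# A rough illustration of the chance-governed picture sketched above.  The
# decay probability is assumed for the example and corresponds to no real
# element; the point is only that the risk per interval never changes.

P_DECAY = 0.01                      # assumed chance that any one atom decays in one step

rng = random.Random(0)
atoms = 50_000                      # number of atoms still undecayed
decayed_per_step = []
for step in range(300):
    decays = sum(1 for _ in range(atoms) if rng.random() < P_DECAY)
    decayed_per_step.append(decays / atoms)
    atoms -= decays

print(f"atoms remaining after 300 steps: {atoms}")
print(f"fraction decaying in step   1: {decayed_per_step[0]:.3%}")
print(f"fraction decaying in step 300: {decayed_per_step[-1]:.3%}")
# Both fractions hover around 1 per cent: the sample dwindles, but the chance
# of death for any one surviving atom never changes with its age.
```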

In a sample of radioactive material, it appears at first that energy is coming from nothing. Such is not really the case. What actually happens is explained by the theory of relativity of 1905, in which Einstein showed that energy and mass are related by the famous equation E = mc². It is now held true that energy is not always conserved, that energy may disappear, and that energy may be created; but whenever energy does appear or does disappear, there must be a corresponding loss or gain in mass. Thus what seemed originally to be a simple violation of the principle of the conservation of energy turned out to be but the first example of a wider conservation principle, according to which neither energy nor mass (matter) is conserved by itself, but the total amount of matter and energy together is constant within any given system.
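
As a worked illustration (the figures are standard rounded values, not the author's), the energy equivalent of a single gram of matter:

```latex
% The energy equivalent of one gram of matter, with rounded values used
% purely for illustration: m = 10^{-3} kg and c = 3 x 10^{8} m/s.
\[
  E = m c^{2}
    = \left(10^{-3}\,\mathrm{kg}\right)\left(3\times 10^{8}\,\mathrm{m/s}\right)^{2}
    = 9\times 10^{13}\,\mathrm{J}.
\]
```

That is roughly the heat of combustion of some three thousand tons of coal, which suggests why the accompanying loss of mass passes unnoticed in ordinary chemical changes yet dominates a nuclear one.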

The notable equation relating mass and energy was but one of the results of the famous theory of relativity of 1905, the others being the conclusion that space, time, and matter have no meaning in an absolute sense — that space (in the form of distance) and time can only be measured relative to an observer. The result is that if two systems or frames of reference are in relative motion with respect to one another, the observable quantities of size and shape, and of time and matter, cannot be conceived in independence of the relative motion. The conclusions of this theory are confirmed daily in cyclotrons and other particle accelerators throughout the world, and in every nuclear reaction produced in the laboratory. This particular theory has had a greater popular appeal than any other scientific theory in the history of thought with the exception of Darwinian evolution and Newtonian mechanics.

It may seem surprising, therefore, that according to Einstein himself, it was another theory of the year 1905 which was truly revolutionary. Many physicists had almost hit upon the theory of relativity when Einstein announced it in 1905, and Einstein said that if he had not published the theory in that year, it would shortly thereafter have been discovered by the French physicist Paul Langevin. But the theory of photons, published by Einstein in that same year, would not have been discovered by anyone else, at least not for a number of years. How revolutionary that theory was can be seen in the fact that it took about twenty years for it to become generally accepted by physicists, and its implications are not even fully clear at the present time. It involves the basic dualism between the continuous and the discontinuous, between wave and particle, and marks the beginning of our complete inability to conceive the minute physical world on the atomic scale in terms of mechanical models or pictorial concepts of tiny billiard balls.

The triumphs of modern physics have been of a magnitude without precedent in the history of thought. Thanks to the labors of J. J. Thomson, Rutherford, Bohr, and their pupils, we have penetrated within the atom and we have succeeded in tapping the resources of energy stored in the atomic nucleus. We have even realized the alchemist’s dream of changing one type of chemical element into another. But I believe that anyone who appreciates the growth of scientific thought will consider the most interesting aspect of science in our time to be its mysteries. In physics there are the as yet unknown forces that hold the parts of the atomic nucleus together and the question of the meaning and function in the scheme of nature of the rapidly growing number of known cosmic particles. In biology there is still the problem of the basic chemistry of life, and in astronomy the mystery of the outer reaches of space and the creation and destruction of matter.

There are some among us who envisage a time when all the major questions will be solved. But the history of science to date would rather indicate that as long as the human imagination may contemplate the world of nature, the solution of one problem will always give rise to another problem of a different sort. The more experiments that we perform, the greater is our respect for the creative mind of man which can invent a “natural order” to encompass the data of experience in a systematic array. Today it seems clear, as it did to men at the birth of science, that in the pursuit of science, truth is a direction rather than a destination.