3 THE REVEREND EVANS’S UNIVERSE

WHEN THE SKIES are clear and the Moon is not too bright, the Reverend Robert Evans, a quiet and cheerful man, lugs a bulky telescope onto the back deck of his home in the Blue Mountains of Australia, about fifty miles west of Sydney, and does an extraordinary thing. He looks deep into the past and finds dying stars.

Looking into the past is of course the easy part. Glance at the night sky and what you see is history and lots of it—the stars not as they are now but as they were when their light left them. For all we know, the North Star, our faithful companion, might actually have burned out last January or in 1854 or at any time since the early fourteenth century and news of it just hasn’t reached us yet. The best we can say—can ever say—is that it was still burning on this date 680 years ago. Stars die all the time. What Bob Evans does better than anyone else who has ever tried is spot these moments of celestial farewell.

By day, Evans is a kindly and now semiretired minister in the Uniting Church in Australia, who does a bit of freelance work and researches the history of nineteenth-century religious movements. But by night he is, in his unassuming way, a titan of the skies. He hunts supernovae.

Supernovae occur when a giant star, one much bigger than our own Sun, collapses and then spectacularly explodes, releasing in an instant the energy of a hundred billion suns, burning for a time brighter than all the stars in its galaxy. “It’s like a trillion hydrogen bombs going off at once,” says Evans. If a supernova explosion happened within five hundred light-years of us, we would be goners, according to Evans—“it would wreck the show,” as he cheerfully puts it. But the universe is vast, and supernovae are normally much too far away to harm us. In fact, most are so unimaginably distant that their light reaches us as no more than the faintest twinkle. For the month or so that they are visible, all that distinguishes them from the other stars in the sky is that they occupy a point of space that wasn’t filled before. It is these anomalous, very occasional pricks in the crowded dome of the night sky that the Reverend Evans finds.

To understand what a feat this is, imagine a standard dining room table covered in a black tablecloth and someone throwing a handful of salt across it. The scattered grains can be thought of as a galaxy. Now imagine fifteen hundred more tables like the first one—enough to fill a Wal-Mart parking lot, say, or to make a single line two miles long—each with a random array of salt across it. Now add one grain of salt to any table and let Bob Evans walk among them. At a glance he will spot it. That grain of salt is the supernova.

Evans’s is a talent so exceptional that Oliver Sacks, in An Anthropologist on Mars, devotes a passage to him in a chapter on autistic savants—quickly adding that “there is no suggestion that he is autistic.” Evans, who has not met Sacks, laughs at the suggestion that he might be either autistic or a savant, but he is powerless to explain quite where his talent comes from.

“I just seem to have a knack for memorizing star fields,” he told me, with a frankly apologetic look, when I visited him and his wife, Elaine, in their picture-book bungalow on a tranquil edge of the village of Hazelbrook, out where Sydney finally ends and the boundless Australian bush begins. “I’m not particularly good at other things,” he added. “I don’t remember names well.”

“Or where he’s put things,” called Elaine from the kitchen.

He nodded again and grinned, then asked me if I’d like to see his telescope. I had imagined that Evans would have a proper observatory in his backyard—a scaled-down version of a Mount Wilson or Palomar, with a sliding domed roof and a mechanized chair that would be a pleasure to maneuver. In fact, he led me not outside but to a crowded storeroom off the kitchen where he keeps his books and papers and where his telescope—a white cylinder that is about the size and shape of a household hot-water tank—rests in a homemade, swiveling plywood mount. When he wishes to observe, he carries telescope and mount in two trips to a small deck off the kitchen. Between the overhang of the roof and the feathery tops of eucalyptus trees growing up from the slope below, he has only a letter-box view of the sky, but he says it is more than good enough for his purposes. And there, when the skies are clear and the Moon not too bright, he finds his supernovae.

The term supernova was coined in the 1930s by a memorably odd astrophysicist named Fritz Zwicky. Born in Bulgaria and raised in Switzerland, Zwicky came to the California Institute of Technology in the 1920s and there at once distinguished himself by his abrasive personality and erratic talents. He didn’t seem to be outstandingly bright, and many of his colleagues considered him little more than “an irritating buffoon.” A fitness buff, he would often drop to the floor of the Caltech dining hall or other public areas and do one-armed pushups to demonstrate his virility to anyone who seemed inclined to doubt it. He was notoriously aggressive, his manner eventually becoming so intimidating that his closest collaborator, a gentle man named Walter Baade, refused to be left alone with him. Among other things, Zwicky accused Baade, who was German, of being a Nazi, which he was not. On at least one occasion Zwicky threatened to kill Baade, who worked up the hill at the Mount Wilson Observatory, if he saw him on the Caltech campus.

But Zwicky was also capable of insights of the most startling brilliance. In the early 1930s, he turned his attention to a question that had long troubled astronomers: the appearance in the sky of occasional unexplained points of light, new stars. Improbably he wondered if the neutron—the subatomic particle that had just been discovered in England by James Chadwick, and was thus both novel and rather fashionable—might be at the heart of things. It occurred to him that if a star collapsed to the sort of densities found in the core of atoms, the result would be an unimaginably compacted core. Atoms would literally be crushed together, their electrons forced into the nucleus, forming neutrons. You would have a neutron star. Imagine a million really weighty cannonballs squeezed down to the size of a marble and—well, you’re still not even close. The core of a neutron star is so dense that a single spoonful of matter from it would weigh 200 billion pounds. A spoonful! But there was more. Zwicky realized that after the collapse of such a star there would be a huge amount of energy left over—enough to make the biggest bang in the universe. He called these resultant explosions supernovae. They would be—they are—the biggest events in creation.

On January 15, 1934, the journal Physical Review published a very concise abstract of a presentation that Zwicky and Baade had given the previous month at Stanford University. Despite its extreme brevity—one paragraph of twenty-four lines—the abstract contained an enormous amount of new science: it provided the first reference to supernovae and to neutron stars; convincingly explained their method of formation; correctly calculated the scale of their explosiveness; and, as a kind of concluding bonus, connected supernova explosions to the production of a mysterious new phenomenon called cosmic rays, which had recently been found swarming through the universe. These ideas were revolutionary to say the least. Neutron stars wouldn’t be confirmed for thirty-four years. The cosmic rays notion, though considered plausible, hasn’t been verified yet. Altogether, the abstract was, in the words of Caltech astrophysicist Kip S. Thorne, “one of the most prescient documents in the history of physics and astronomy.”

Interestingly, Zwicky had almost no understanding of why any of this would happen. According to Thorne, “he did not understand the laws of physics well enough to be able to substantiate his ideas.” Zwicky’s talent was for big ideas. Others—Baade mostly—were left to do the mathematical sweeping up.

Zwicky also was the first to recognize that there wasn’t nearly enough visible mass in the universe to hold galaxies together and that there must be some other gravitational influence—what we now call dark matter. One thing he failed to see was that if a neutron star shrank enough it would become so dense that even light couldn’t escape its immense gravitational pull. You would have a black hole. Unfortunately, Zwicky was held in such disdain by most of his colleagues that his ideas attracted almost no notice. When, five years later, the great Robert Oppenheimer turned his attention to neutron stars in a landmark paper, he made not a single reference to any of Zwicky’s work even though Zwicky had been working for years on the same problem in an office just down the hall. Zwicky’s deductions concerning dark matter wouldn’t attract serious attention for nearly four decades. We can only assume that he did a lot of pushups in this period.

Surprisingly little of the universe is visible to us when we incline our heads to the sky. Only about 6,000 stars are visible to the naked eye from Earth, and only about 2,000 can be seen from any one spot. With binoculars the number of stars you can see from a single location rises to about 50,000, and with a small two-inch telescope it leaps to 300,000. With a sixteen-inch telescope, such as Evans uses, you begin to count not in stars but in galaxies. From his deck, Evans supposes he can see between 50,000 and 100,000 galaxies, each containing tens of billions of stars. These are of course respectable numbers, but even with so much to take in, supernovae are extremely rare. A star can burn for billions of years, but it dies just once and quickly, and only a few dying stars explode. Most expire quietly, like a campfire at dawn. In a typical galaxy, consisting of a hundred billion stars, a supernova will occur on average once every two or three hundred years. Finding a supernova therefore was a little bit like standing on the observation platform of the Empire State Building with a telescope and searching windows around Manhattan in the hope of finding, let us say, someone lighting a twenty-first-birthday cake.
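As a back-of-the-envelope check, using the low end of Evans’s numbers: with some 50,000 galaxies in view and one supernova per galaxy roughly every 300 years, his patch of sky should host, on average,

\[
\frac{50{,}000\ \text{galaxies}}{300\ \text{years per supernova}} \approx 170\ \text{supernovae a year},
\]

yet each announces itself only as a single faint new pinprick, noticeable solely to someone who remembers how that exact spot looked before.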

So when a hopeful and soft-spoken minister got in touch to ask if anyone had usable field charts for hunting supernovae, the astronomical community thought he was out of his mind. At the time Evans had a ten-inch telescope—a very respectable size for amateur stargazing but hardly the sort of thing with which to do serious cosmology—and he was proposing to find one of the universe’s rarer phenomena. In the whole of astronomical history before Evans started looking in 1980, fewer than sixty supernovae had been found. (At the time I visited him, in August of 2001, he had just recorded his thirty-fourth visual discovery; a thirty-fifth followed three months later and a thirty-sixth in early 2003.)

Evans, however, had certain advantages. Most observers, like most people generally, are in the northern hemisphere, so he had a lot of sky largely to himself, especially at first. He also had speed and his uncanny memory. Large telescopes are cumbersome things, and much of their operational time is consumed with being maneuvered into position. Evans could swing his little sixteen-inch telescope around like a tail gunner in a dogfight, spending no more than a couple of seconds on any particular point in the sky. In consequence, he could observe perhaps four hundred galaxies in an evening while a large professional telescope would be lucky to do fifty or sixty.

Looking for supernovae is mostly a matter of not finding them. From 1980 to 1996 he averaged two discoveries a year—not a huge payoff for hundreds of nights of peering and peering. Once he found three in fifteen days, but another time he went three years without finding any at all.

“There is actually a certain value in not finding anything,” he said. “It helps cosmologists to work out the rate at which galaxies are evolving. It’s one of those rare areas where the absence of evidence is evidence.”

On a table beside the telescope were stacks of photos and papers relevant to his pursuits, and he showed me some of them now. If you have ever looked through popular astronomical publications, and at some time you must have, you will know that they are generally full of richly luminous color photos of distant nebulae and the like—fairy-lit clouds of celestial light of the most delicate and moving splendor. Evans’s working images are nothing like that. They are just blurry black-and-white photos with little points of haloed brightness. One he showed me depicted a swarm of stars with a trifling flare that I had to put close to my face to see.

This, Evans told me, was a star in a constellation called Fornax from a galaxy known to astronomy as NGC1365. (NGC stands for New General Catalogue, where these things are recorded. Once it was a heavy book on someone’s desk in Dublin; today, needless to say, it’s a database.) For sixty million silent years, the light from the star’s spectacular demise traveled unceasingly through space until one night in August of 2001 it arrived at Earth in the form of a puff of radiance, the tiniest brightening, in the night sky. It was of course Robert Evans on his eucalypt-scented hillside who spotted it.

“There’s something satisfying, I think,” Evans said, “about the idea of light traveling for millions of years through space and just at the right moment as it reaches Earth someone looks at the right bit of sky and sees it. It just seems right that an event of that magnitude should be witnessed.”

Supernovae do much more than simply impart a sense of wonder. They come in several types (one of them discovered by Evans), and of these one in particular, known as a type Ia supernova, is important to astronomy because it always explodes in the same way, with the same critical mass. For this reason it can be used as a standard candle to measure the expansion rate of the universe.
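In outline, the standard reasoning runs like this (a gloss on the method; the symbols are the conventional ones, not from the text): because every type Ia flares to essentially the same intrinsic brightness $M$, its observed brightness $m$ betrays its distance $d$ through the distance-modulus relation, while its redshift gives its recession velocity $v$, and Hubble’s law ties the two together:

\[
m - M = 5\log_{10}\!\left(\frac{d}{10\ \mathrm{pc}}\right), \qquad v \approx H_0\, d.
\]

Measure $m$ and the redshift for enough type Ia supernovae at different distances, and the expansion rate $H_0$ falls out.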

In 1987 Saul Perlmutter at the Lawrence Berkeley Laboratory in California, needing more type Ia supernovae than visual sightings were providing, set out to find a more systematic method of searching for them. Perlmutter devised a nifty system using sophisticated computers and charge-coupled devices—in essence, really good digital cameras. It automated supernova hunting. Telescopes could now take thousands of pictures and let a computer detect the telltale bright spots that marked a supernova explosion. In five years, with the new technique, Perlmutter and his colleagues at Berkeley found forty-two supernovae. Now even amateurs are finding supernovae with charge-coupled devices. “With CCDs you can aim a telescope at the sky and go watch television,” Evans said with a touch of dismay. “It took all the romance out of it.”
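The core of the idea, sketched minimally below in Python, is image differencing: subtract an earlier reference frame of a galaxy field from tonight’s frame, and anything that has newly brightened stands out. This is an illustration under simplifying assumptions, not Perlmutter’s actual pipeline; it assumes the two frames are already aligned and scaled to the same exposure.

    # A minimal sketch of supernova hunting by image differencing.
    # Not Perlmutter's actual pipeline: real searches must also align
    # the frames, match their blur, and reject cosmic-ray hits.
    import numpy as np

    def new_bright_spots(reference, tonight, n_sigma=5.0):
        """Return pixel coordinates that brightened significantly."""
        diff = tonight.astype(float) - reference.astype(float)
        noise = np.std(diff)              # crude noise estimate
        return np.argwhere(diff > n_sigma * noise)

    # Hypothetical demonstration with synthetic 100 x 100 frames:
    rng = np.random.default_rng(0)
    ref = rng.normal(100.0, 5.0, size=(100, 100))
    new = ref + rng.normal(0.0, 5.0, size=(100, 100))
    new[42, 17] += 300.0                  # a "new star" appears
    print(new_bright_spots(ref, new))     # [[42 17]]

The point of the design is exactly what dismayed Evans: once the reference frames exist, the comparison needs no memory and no judgment, only arithmetic repeated over thousands of frames.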

I asked him if he was tempted to adopt the new technology. “Oh, no,” he said, “I enjoy my way too much. Besides”—he gave a nod at the photo of his latest supernova and smiled—“I can still beat them sometimes.”

The question that naturally occurs is “What would it be like if a star exploded nearby?” Our nearest stellar neighbor, as we have seen, is Alpha Centauri, 4.3 light-years away. I had imagined that if there were an explosion there we would have 4.3 years to watch the light of this magnificent event spreading across the sky, as if tipped from a giant can. What would it be like if we had four years and four months to watch an inescapable doom advancing toward us, knowing that when it finally arrived it would blow the skin right off our bones? Would people still go to work? Would farmers plant crops? Would anyone deliver them to the stores?

Weeks later, back in the town in New Hampshire where I live, I put these questions to John Thorstensen, an astronomer at Dartmouth College. “Oh no,” he said, laughing. “The news of such an event travels out at the speed of light, but so does the destructiveness, so you’d learn about it and die from it in the same instant. But don’t worry because it’s not going to happen.”

For the blast of a supernova explosion to kill you, he explained, you would have to be “ridiculously close”—probably within ten light-years or so. “The danger would be various types of radiation—cosmic rays and so on.” These would produce fabulous auroras, shimmering curtains of spooky light that would fill the whole sky. This would not be a good thing. Anything potent enough to put on such a show could well strip away the ozone layer, the thin shield high in the atmosphere that normally protects us from ultraviolet rays. Without it, anyone unfortunate enough to step into sunlight would pretty quickly take on the appearance of, let us say, an overcooked pizza.

The reason we can be reasonably confident that such an event won’t happen in our corner of the galaxy, Thorstensen said, is that it takes a particular kind of star to make a supernova in the first place. A candidate star must be ten to twenty times as massive as our own Sun and “we don’t have anything of the requisite size that’s that close. The universe is a mercifully big place.” The nearest likely candidate, he added, is Betelgeuse, whose various sputterings have for years suggested that something interestingly unstable is going on there. But Betelgeuse is a comfortably remote several hundred light-years away.

Only half a dozen times in recorded history have supernovae been close enough to be visible to the naked eye. One was a blast in 1054 that created the Crab Nebula. Another, in 1604, made a star bright enough to be seen during the day for over three weeks. The most recent was in 1987, when a supernova flared in a zone of the cosmos known as the Large Magellanic Cloud, but that was only barely visible and only in the southern hemisphere—and it was a comfortably safe 169,000 light-years away.

Supernovae are significant to us in one other decidedly central way. Without them we wouldn’t be here. You will recall the cosmological conundrum with which we ended the first chapter—that the Big Bang created lots of light gases but no heavy elements. Those came later, but for a very long time nobody could figure out how they came later. The problem was that you needed something really hot—hotter even than the middle of the hottest stars—to forge carbon and iron and the other elements without which we would be distressingly immaterial. Supernovae provided the explanation, and it was an English cosmologist almost as singular in manner as Fritz Zwicky who figured it out.

He was a Yorkshireman named Fred Hoyle. Hoyle, who died in 2001, was described in an obituary in Nature as a “cosmologist and controversialist,” and both of those he most certainly was. He was, according to Nature’s obituary, “embroiled in controversy for most of his life” and “put his name to much rubbish.” He claimed, for instance, and without evidence, that the Natural History Museum’s treasured fossil of an Archaeopteryx was a forgery along the lines of the Piltdown hoax, causing much exasperation to the museum’s paleontologists, who had to spend days fielding phone calls from journalists from all over the world. He also believed that Earth was seeded from space not only with life but with many of its diseases, such as influenza and bubonic plague, and suggested at one point that humans evolved projecting noses with the nostrils underneath as a way of keeping cosmic pathogens from falling into them.

It was he who coined the term “Big Bang,” in a moment of facetiousness, for a radio broadcast in 1949. He pointed out that nothing in our understanding of physics could account for why everything, gathered to a point, would suddenly and dramatically begin to expand. Hoyle favored a steady-state theory in which the universe was constantly expanding and continually creating new matter as it went. Hoyle also realized that if stars imploded they would liberate huge amounts of heat—100 million degrees or more, enough to begin to generate the heavier elements in a process known as nucleosynthesis. In 1957, working with others, Hoyle showed how the heavier elements were formed in supernova explosions. For this work, W. A. Fowler, one of his collaborators, received a Nobel Prize. Hoyle, shamefully, did not.

According to Hoyle’s theory, an exploding star would generate enough heat to create all the new elements and spray them into the cosmos where they would form gaseous clouds—the interstellar medium as it is known—that could eventually coalesce into new solar systems. With the new theories it became possible at last to construct plausible scenarios for how we got here. What we now think we know is this:

About 4.6 billion years ago, a great swirl of gas and dust some 15 billion miles across accumulated in space where we are now and began to aggregate. Virtually all of it—99.9 percent of the mass of the solar system—went to make the Sun. Out of the floating material that was left over, two microscopic grains floated close enough together to be joined by electrostatic forces. This was the moment of conception for our planet. All over the inchoate solar system, the same was happening. Colliding dust grains formed larger and larger clumps. Eventually the clumps grew large enough to be called planetesimals. As these endlessly bumped and collided, they fractured or split or recombined in endless random permutations, but in every encounter there was a winner, and some of the winners grew big enough to dominate the orbit around which they traveled.

It all happened remarkably quickly. To grow from a tiny cluster of grains to a baby planet some hundreds of miles across is thought to have taken only a few tens of thousands of years. In just 200 million years, possibly less, the Earth was essentially formed, though still molten and subject to constant bombardment from all the debris that remained floating about.

At this point, about 4.5 billion years ago, an object the size of Mars crashed into Earth, blowing out enough material to form a companion sphere, the Moon. Within weeks, it is thought, the flung material had reassembled itself into a single clump, and within a year it had formed into the spherical rock that companions us yet. Most of the lunar material, it is thought, came from the Earth’s crust, not its core, which is why the Moon has so little iron while we have a lot. The theory, incidentally, is almost always presented as a recent one, but in fact it was first proposed in the 1940s by Reginald Daly of Harvard. The only recent thing about it is people paying any attention to it.

When Earth was only about a third of its eventual size, it was probably already beginning to form an atmosphere, mostly of carbon dioxide, nitrogen, methane, and sulfur. Hardly the sort of stuff that we would associate with life, and yet from this noxious stew life formed. Carbon dioxide is a powerful greenhouse gas. This was a good thing because the Sun was significantly dimmer back then. Had we not had the benefit of a greenhouse effect, the Earth might well have frozen over permanently, and life might never have gotten a toehold. But somehow life did.

For the next 500 million years the young Earth continued to be pelted relentlessly by comets, meteorites, and other galactic debris, which brought water to fill the oceans and the components necessary for the successful formation of life. It was a singularly hostile environment and yet somehow life got going. Some tiny bag of chemicals twitched and became animate. We were on our way.

Four billion years later people began to wonder how it had all happened. And it is there that our story next takes us.
