Cosmic Acceleration:
Hounding the White Whale of Cosmology
Philip Petersen, PhD
Empyrean Quest Publishers
PO Box 24
Knightsen, CA 94548
empyreanquest.bodymindandpassion.com
©2004 by Philip Petersen
Library of Congress Cataloging-in-Publication Data
Petersen, P. Stephen
Cosmic Acceleration/Philip Petersen, PhD
ISBN 1-890711-19-5
First Empyrean Quest Edition 2004
Printed in the United States of America
TABLE OF CONTENTS
PREFACE
Ishmael: the Mask of the Universe
INTRODUCTION
Chapter 1
Captain Ahab: Cosmology With A Vengeance
The open sea: Newton’s infinite universe
How Ahab lost his leg: Einstein’s greatest blunder
Maps: is the universe closed, open, or flat?
Chapter 2
Starbuck: the Voice of Reason–
Standard Big Bang Theory
The sameness of the voyage:
homogeneous and isotropic
The origin of the ocean: the Big Bang
Symbolism of the ocean: there is no ether--
Einstein himself
Starbuck warns of the Moby Dick obsession: LeMaitre’s Universe
Chapter 3
The Lost Boat: Steady State Theory capsizes
First sighting of a School of Whales:
the DeSitter Universe
Rescued crew members:
Hoyle, Gold, and Bondi rescue Einstein
A School of whales:
Quasars signal doom for continuous creation
Fedallah--the subordinate phantom:
Cosmic Microwave Background
Chapter 4
The Captain of the British Ship and his lost arm:
other failed theories
How dangerous is Moby Dick?
Conflict over The Hubble Constant
Is Moby Dick immortal?
The age of the universe in doubt
The legend lives on: the Virgo Infall
Chapter 5
Pip goes missing: Missing Matter
Where is that black boy?
Do galaxies have missing mass?
He could be anywhere: galactic clusters
The ocean appears calm:
Is the universe flat?
Pip becomes schizophrenic:
many possibilities for dark matter
Baryonic Dark Matter
Non-baryonic Matter
and Other Possibilities
Chapter 6
Ahab’s ivory leg breaks:
Inflation in the Early Universe
A new course: Linde and Steinhardt’s Revised Inflation
The track of the whales–the immortal ocean:
Chaotic and Eternal Inflation
Chapter 7
The Chase: The nature of the ‘White Whale’–
the Cosmological Constant
The Ship, the Ocean, and the Whale--
Matter, Curvature, and the Cosmological Constant
The inferred existence of the ‘White Whale’,
the Cosmological Constant
The wake of the ‘White Whale’:
the effects of a Cosmological Constant
Former observations of the ‘White Whale’:
Truth or Fiction
How to ‘kill’ the ‘White Whale’:
Theory and the Cosmological Constant
Chapter 8
Starbuck almost kills Ahab:
the Universal Expansion is Accelerating
Ahab will not give up:
Supernova Observations and Cosmic Repulsion
The Pequod and the Rachel:
The Two Supernova Teams Provide the First Equation
Ahab is forever Ahab:
The Cosmic Background Provides the Second Equation
Chapter 9
The Hidden Terrors of Moby Dick: Quintessence
The Harpoon is not yet forged:
Time variable cosmological constant?
Ahab as Prometheus:
Does the Fire of Space Expand the Universe?
Explanations for Saint Elmo’s Fire--
conservation: the energy of space and matter
All fired up to hunt Moby Dick:
Andrei Sakharov: is space spongy?
Chapter 10
The High Velocity of the Breach: Variable Speed of Light Theories
Chapter 11
The Sphinx in the Desert–The Earliest Stages in the Universe and How We Know Them
Particles and Light–the Flesh and Blood of the Universe
Astronomers Talk about the Epochs
Chapter 12
Fate’s Lieutenant: The Destiny of the Universe
Chapter 13
Harborless Immensities:
Extra Dimensions
Chapter 14
The final pursuit and demise of the Pequod crew: Repulsion Kills Cosmology
The ship sinks: old views of cosmology fall
A sequel to ‘Moby Dick’?
what is next for cosmology?
PREFACE
“All visible objects, man, are but as pasteboard masks. But in each event--in the living act, the undoubted deed--there, some unknown but still reasoning thing puts forth the mouldings of its features from behind the unreasoning mask.”
Captain Ahab
The Mask of the Universe
Though not a social creature, I attended a few parties with fellow graduate students and faculty at the University of California, San Diego. I recall that a non-academic young lady at one of the parties approached me and asked what I did.
“I am a cosmologist,” I replied with some pomp, proud that I had initiated a journey of great peril and scope.
“Great!” she said, “would you do my face sometime?”
Had I been in full possession of my faculties, I might have replied:
“I don’t make up faces, I make up the Universe!”
The face the universe presents taunts us. As the white whale, ‘Moby Dick’, taunted Ahab and his crew, the facts we gather about the large scale structure and evolution of the universe have led us from one disappointing plateau of false understanding to another. From the comfort of Einstein’s Static Homogeneous Universe, we moved to an expanding one as the data demanded. Hoyle and Bondi matched sameness over space with sameness over time: their Steady State theory of continuous creation. It went down with observations that the universe had indeed evolved. We then confronted the non-Newtonian behavior of galaxies and galactic clusters and proposed that it be explained by matter which does not shine--dark matter. This seemed to be corroborated by Guth’s Inflationary Theory of a short-lived accelerated expansion, which required the universe to have more matter than we see.
Now it seems this type of cosmic acceleration is happening once again. Data from ancient exploding stars--supernovae--reveal a force of cosmic repulsion. This is not what most cosmologists expected.
If God were a NASCAR driver piloting the car of the universe, here’s what would happen with time. At about 10⁻³⁵ seconds, it’s pedal to the metal. He puts his foot down hard on the accelerator. This inflationary acceleration expands the universe about 10⁶⁰ times. This has the effect of smoothing out space-time and its energy content.
An instant later, at about 10⁻³⁰ seconds, he takes his foot off the accelerator and puts it gently on the brake for about 7 billion years. The deceleration that follows results from the gravity of the matter in the universe. A baseball thrown straight up into the air slows down. In a way the deceleration period is like rounding a sharp turn on a NASCAR track, where the brake must be applied.
About 7 billion years ago we reach the next phase. As if coming out of a turn, God puts his foot back on the accelerator, not as hard this time. This is the moment astronomers now call ‘the big jerk’. The speeding up is more gentle this second time. A small, nearly constant acceleration is maintained. As the matter is diluted by universal expansion, its gravitational effect weakens until it is negligible, and the funny energy in space continues the acceleration.
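For readers who want the shorthand behind this picture, the expansion scale a(t) grows roughly as follows in the three phases (a textbook sketch, not a fit to the data; H_inf and H_Λ are simply labels I introduce here for the very different expansion rates during inflation and during the present gentle acceleration):

\[ a(t) \propto e^{H_{\rm inf}\,t}\ \ \text{(inflation)}, \qquad a(t) \propto t^{2/3}\ \ \text{(matter-dominated braking)}, \qquad a(t) \propto e^{H_{\Lambda}\,t}\ \ \text{(late-time acceleration)} \]

The middle, braking phase is the one a baseball thrown upward mimics; the first and last are the two turns of the accelerator.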
Synopsis of Moby Dick
The story of cosmology is presented in parallel to Herman Melville’s Moby Dick. Here is the plot of that book.
A schoolteacher named Ishmael decides to take a break from his teaching and spend a year at sea aboard a whaling ship. He arrives at the whaling port, where at the inn he is made to room with a ferocious and superstitious south sea island harpooner by the name of Queequeg. They become friends, and Queequeg’s talent as a harpooner lands the teacher a berth as a novice aboard the whaler, the Pequod.
The ship is captained by the obsessed Ahab who has lost a leg to the white whale he calls ‘Moby Dick’. The white whale is known for his elusive intelligence and strength.
The obsession of Ahab to retaliate against the white whale grows as the search for ordinary whales goes on. He diverts the ship’s focus toward finding and ending the life of the white whale. Always on the lookout for clues as to the whale’s whereabouts, they follow those clues until a sighting is made. In fact, they follow the whale to the ends of the earth, suffering the cold of the arctic and the storms of the tropics.
On the journey, the ship’s first mate, Starbuck, realizes that Ahab’s obsession is preventing them from making a living, and leading them to ultimate destruction. Starbuck chides the captain for leading them astray, but Ahab is firm in his resolve.
When they encounter a British ship, the captain, who lost his arm to Moby Dick, reveals that he has seen the whale recently.
The chase is on. On the way the black cabin boy, Pip, is temporarily lost at sea, and when found, he seems to have lost his mind, ranting incoherently about the whale. Queequeg, the noble savage, becomes ill and blames it on the whale. During his illness, he sleeps in a coffin.
The white whale is eventually sighted and the battle is on. Ahab eventually rides the back of the leviathan. The whale capsizes the ship and the Captain and the entire crew are lost, with the exception of the schoolteacher, Ishmael, who lives to tell the tale.
Why Moby Dick and Cosmology?
The encounter with the Type Ia supernova observations, and the realization that a dominating energy component drives the universal expansion, has devastated our prior beliefs regarding the history of the cosmos. We have encountered the white whale and it has sunk our ‘Pequod’, the 20th century cosmological ship of thought.
In this book we chronicle the journey of this ‘Pequod’. In a way, the characters in this story are not so much individuals like Einstein, Lemaitre, Hoyle, Bondi, and Guth. Rather, they are the different personalities which live in the psyche of a cosmologist--and perhaps each one of us.
There is ‘Captain Ahab’, the compulsive self which inordinately longs to solve the mystery of the origin and evolution of the cosmos. The dedicated cosmologist strives to wrest the mystery ‘from the hands of God’ with a vigor which can become obsessive. I gave up a spiritual vocation to go on my personal cosmological quest.
There is ‘Queequeg’, the savage mystical part, which longs to intuit the true nature of the cosmos by directly experiencing its relationship to the ‘inner cosmos’ of a man or woman. Drugs, near death experiences, fasting, contemplation of omens (all used by Melville’s South Sea island harpooner), all may be considered products of the desire of the soul to reunite with its source. Again, as in my case, some cosmologists might take up their ‘calling’ in response to a mystical, or at least transforming experience.
Certainly, all scientists can recognize ‘Starbuck’ in themselves--the voice of responsibility and reason: conservative, careful science. He tempers the ‘Ahab’ in us by reminding us our models must toe the line of the observational data. Otherwise we will be abandoning our reason and perhaps our ‘living’ and our families. Being a theorist, however, I have often gone on flights of mathematical fancy, ignoring the voice of the ‘Starbuck’ in me.
My own story began much as Ishmael’s in ‘Moby Dick’. A schoolteacher, going back to graduate school, saunters into ‘port’. The goal is to “sail about a little and see the watery part of the world.” In my case, this was a journey into the sea of space, the domain of astronomy and astrophysics, rather than whaling and charting.
In telling this tale, I will relate it to the order of events and appearances of characters in Melville’s story. Chapters and subheadings will be headed by major characters or events in ‘Moby Dick’ and their similarity to cosmologists’ personalities and the events which shaped them.
If the metaphors and similes become strained, forgive me. Hopefully the drama and adventure of the quest for the ‘White Whale’ of Cosmology will compensate the reader. Perhaps we both might understand a little better the “little lower layer” of the mask, the “makeup” of the universe.
INTRODUCTION
This book divides neatly in two. Just as the story of Moby Dick is halved by the Pequod’s first sighting of the white whale, our story of cosmology nicely separates into what was known before the discovery of the accelerating universe in 1998, and what has happened since. There are seven chapters exploring each of the two halves. They are thus divided in time by this watershed revelation.
The supernova observations published in 1998 by Harvard and Berkeley teams brought about a revolution in cosmology unparalleled in the history of science. Before that time, it was thought that gravity dominated the current expansion of the universe and was in the process of slowing it down. In fact, an important quantity in the theory of the cosmos was named the ‘deceleration parameter’ in honor of gravity’s role.
However, the theory was entirely wrong. Though gravity still appears to have some sway, our new view of the universe includes a ‘funny energy’ in space which currently dominates gravity as a cause of motion. This ‘dark energy’, as it is sometimes called, is responsible for an overall acceleration of galactic super-clusters away from each other.
The first seven chapters present cosmology before 1998. Because many people are already acquainted with the principles of the ‘Old Cosmology’, and because those principles are more accessible, most readers will find the first half easy going. The second half is necessarily a bit tougher.
We begin with Chapter 1 and Newton’s infinite universe, pioneered a century earlier by the Italian monk Giordano Bruno.
We proceed to give a view of Einstein’s General Theory of Relativity and how the Russian, Friedmann, showed it provided for two versions of the cosmos: one infinite and saddle-shaped and the other finite and spherical.
Explored also is Einstein’s realization that if the universe were finite and static, as astronomers believed before the expansion was discovered, then a repulsive force would be necessary to keep the matter from falling together as a result of gravitational attraction. The repulsion was provided for by a constant in Einstein’s Field equations, somewhat like a spring constant in Hooke’s law. This is Einstein’s cosmological constant.
As Captain Ahab loses his leg to the white whale, Einstein loses his credibility in proposing the cosmological constant, when the universe is found to be expanding.
In alignment with the character of Starbuck, the first mate who is always practical and reasoned, Chapter 2 gives a sound basis for the Big Bang, ruled by gravity. First we lay out the principles of Homogeneity and Isotropy for a ‘dust’ inhabited universe. Then the characteristics of an expanding universe are sketched in terms of motion and temperature for the three possible cases: open, closed, and flat.
In this second chapter we also explore very fundamental reasons scientists (up until 1998) believed space should be empty, then explore LeMaitre’s 1930 model in which it is not. The possibilities multiply if we include a cosmological constant in an expanding universe, as he suggested.
Relevant to the history leading up to the acceleration discovery is the Steady State Universe of Hoyle, Bondi, and Gold. It is a lost theory, like the first rowboat lost when it left the home ship, the Pequod, to chase a school of whales. Chapter 3 outlines DeSitter expansion and that theory, which proposed a universe uniform in time, perpetuated by continuous creation. Though observations eventually showed that the universe has evolved, Steady State prepared the way for an understanding of an energy in space which may not have evolved. We also discuss the Cosmic Microwave Background: a manifestation of the evolution.
As Ahab confronts a British Captain who also lost his arm to the white whale, so also Chapter 4 confronts other failed theories and observations that changed with time. One side of the conflict over the Hubble Constant, the rate of expansion of the universe, is particularly prescient with regard to a possible energy content in space. When it is discovered that we are falling into the Virgo cluster of galaxies at a certain rate, a cosmological constant seems to be implicated, but is rejected by the mainstream.
Chapter 5 talks of Pip, the ‘black boy, lost in the black night, on the black sea’. Just as hard to find is the dark matter--matter not in stars--which is necessary to explain why stars move the way they do in galaxies, and why galaxies move the way they do in galactic clusters. At this point in the history, most of the matter in the universe is thought to be dark matter.
Then Ahab’s ivory leg breaks, and his carpenter must make a new one. Similarly, the standard Big Bang must be revised to include an early rapid inflation to handle observations. The inflationary theory is the topic of Chapter 6. We also explore the difficulty in finding a reasonable theory of inflation.
In Chapter 7 we explore the Cosmological Constant up close–theory and observations suggestive of it. We will finally be giving chase to the ‘white whale’.
The half of the book beyond Chapter 7 takes us beyond the Supernova discovery of the accelerating universe in 1998.
Chapter 8 details the observations of the ‘whale’ up close. How did we determine that there is about 70% dark energy and 30% matter? This mystery and the Supernova observations will be explained.
Chapter 9 reveals the hidden terrors of our ‘Moby Dick’. Einstein’s revision of General Relativity may not have been right. A cosmological constant representing the ‘funny energy in space’ may have to vary with time. It also may vary throughout space. Theories explaining it may have to be more complex. These we call ‘quintessence’ theories, relating to the fifth element beyond earth, air, fire, and water.
In Chapter 10 the whale moves rapidly. We consider variable speed of light theories providing not only inflation, but also possible variable energy in space.
Chapter 11 talks of the epochs in cosmology. How do we know how the universe came to be the way it is? The past of the universe haunts us like the face of the Sphinx. Can we understand how particles making up matter came about?
The future of the universe is the topic of Chapter 12. Fate plays a great role in the story of Moby Dick. Is the fate of the universe a whimper or a cosmic rip?
Extra dimensions play a role in superstring theory. Is it possible that they relate to the nature of the recent acceleration of the universe? M Theory, the Randall-Sundrum Model, and Ed Witten’s great ideas are discussed in Chapter 13.
Chapter 14 ties the threads of the story of our ‘white whale’ together. Our ship of cosmology has been devastated, nay destroyed! In the end, all theories will be lost but one. What will it be? New observations via planned projects--the Planck observatory and the James Webb Space Telescope--may tell the entire story of our ‘Moby Dick’ from beginning to end.
Chapter 1
Captain Ahab: Cosmology With A Vengeance
“Already we are boldly launched upon the deep, but soon we shall be lost in its unshored, harbourless immensities.”
The Open Sea--An Infinite Universe?
We head out to sea, the Pequod, our ship of Cosmology, full of sailors--parts of ourselves who relate to the adventure ahead. What is the nature of the sea of space? The world was once thought to be flat, and sailors feared falling off the edge of a finite plane. Columbus had discovered once again that the world was spherical long before Melville wrote ‘Moby Dick’.
The spherical nature of the planet was understood by Pythagoras nearly 2,000 years earlier. Though we don’t know if he was the first to have this knowledge, we do know that he noticed ships appear to sink as they went out to sea, and the circular nature of the earth’s shadow in a partial lunar eclipse. These facts led him to conclude the earth was ‘round’.
What of space? Pythagoras thought of the stars as embedded on a celestial sphere. After all, didn’t they have circular tracks in the sky at night? Simplicity would demand that the stars occupy a surface reflective of the earth itself: a sphere. That meant that all objects in the heavens were less than the distance to that sphere away. The planets--Greek for ‘wandering stars’--turned on imaginary spheres inside the sphere of the fiery element of the stars. Aristotle, generations down the line, carried on and expanded this earth-centered, finite universe picture.
Even Copernicus, who in 1543 AD ‘changed the world forever’ with his picture of planets revolving around the sun, still had the stars pasted on a finite sphere. Though he increased their distance so as to conform with observational facts, such as the size of Saturn’s orbit, his universe was still finite.
Aristarchus in the third century BC first lent reasonable support to a sun-centered universe, infinite in extent. However, observations seemed to contradict his great intuitive glimpse. Aristotle, before him, had pointed out that if the sun were at the center of the universe, the angular location of the stars would change with the seasons as the earth orbited the sun. This ‘stellar parallax’ was not observed, so when Aristarchus asserted the universe to be heliocentric, reasonable scientists dismissed it. The problem was with Aristotle’s small distance to the stellar sphere. If stars were much further away, they would shift by too small an amount to be seen as the earth shifted its position.
The universe seemed ‘made for man’ instead of the modern idea of man evolving from the universe. This stigma still clung to the picture of Copernicus. Fifty years after Copernicus passed, the infinite universe was again proclaimed. The monk Giordano Bruno gave us a vast, untrammeled view:
“The stars are suns like our own and there are countless suns freely suspended in limitless space, all of them surrounded by planets like our own earth, peopled with living beings. The sun is only one star among many, singled out because it is so close to us. The sun has no central position in the boundless infinite.”
Such a profound, revolutionary set of hypotheses has no comparison in the history of science. In the mid-1990s, we found evidence of the first planets around other stars. Bruno spoke that hypothesis over 400 years before it was proven. Though many believe UFOs are evidence of extraterrestrial life, the hypothesis of ‘living beings’ still remains to be openly verified. In addition, cosmologists are still contending over whether the universe is finite or infinite. However, the supernova results of early 1998 and the WMAP results from 2003 seem to favor a universe which is at least very, very large--much larger than the portion we can see.
Giordano Bruno, arguably the first claimant to the infinite universe, was burned at the stake as a heretic.
Isaac Newton, with Ahabian abandon, also supported the ‘infinite universe’ theory.
Newton was attending Cambridge University when the plague swept the countryside. His school was shut down, but this didn’t deter his studies. He retreated to his mother’s care at Woolsthorpe, and discovered the laws of motion; but more important to our discussion is his discovery of the nature of gravity.
I often ask my students what they would do if an epidemic of the proportion of the plague were to shut down all colleges. It could be AIDS, ebola virus, or something else. Would they have the initiative to follow their search for knowledge anyway? Would their dream just languish, or perish?
Newton’s contribution to the understanding of gravity was profoundly important. His famous second law of motion was really the definition of mass, the source of gravity. No one before him had quantified matter, made it amenable to experiment. He defined mass as the ratio of the total applied force on an object to its acceleration.
Second, Newton realized that gravity acted from the center of a symmetric spherical object like the earth. It generated an attractive pull that he deduced, from Kepler’s laws of planetary motion, to be proportional to the masses of the two objects involved. Gravity was found to be a very weak force. It takes the entire earth to produce a pull on your body with a force equal to your weight.
Here is another way of demonstrating how weak gravity is. Take a blown-up balloon and hold it against the wall. Newton said that the wall should attract the balloon in proportion to its mass and that of the wall. This is the ‘Universal’ of the ‘Universal Law of Gravitation’--all bodies attract this way.
However, the gravity of the earth overpowers that attraction and the balloon falls to the floor. Nevertheless, if you charge the balloon by rubbing it against your clothing, it will stick to the wall by inducing an opposite charge in it. That is, with a little bit of charge in the balloon and wall, electricity has exerted a force much stronger than the gravity between the balloon and wall. Gravity is weaker than the electric force.
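To put a rough number on this comparison (an illustration of my own, using two protons as the test case): as the next paragraph explains, both forces fall off with distance in the same way, so their ratio does not depend on how far apart the protons are.

\[ \frac{F_{\rm electric}}{F_{\rm gravity}} \;=\; \frac{e^{2}}{4\pi\epsilon_{0}\,G\,m_{p}^{2}} \;\approx\; 10^{36} \]

Gravity only wins in everyday life because bulk matter is electrically neutral, while every bit of its mass pulls.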
The third thing Newton realized, again by deriving it from Kepler’s laws, was that gravity falls off as the square of the distance between the centers of symmetric spherical objects. This inverse-square law of falloff holds, strangely enough, for light intensity from a spherical source, and for electrical force as well. It is as if gravity is an emanation like light--a field, if you will. The fall-off was discovered around the same time by an English contemporary of Newton’s: Robert Hooke. Newton and Hooke had a bitter dispute about the discovery of gravity. However, Newton won out, as Hooke had only discovered that one fact about gravity, whereas Newton had attained more profound knowledge.
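Put as formulas (the modern textbook statements, not Newton’s own notation; the numerical value of G was only measured later, by Cavendish):

\[ m = \frac{F}{a}, \qquad F_{\rm gravity} = G\,\frac{m_{1}m_{2}}{r^{2}}, \qquad G \approx 6.67\times10^{-11}\ {\rm N\,m^{2}/kg^{2}} \]

The first relation is the definition of mass just described; the second is the attraction between two masses m₁ and m₂ separated by a distance r.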
Newton used his grasp of gravity to reason that the universe must be infinite. He assumed stars to be distributed somewhat uniformly throughout space. Each star attracted every other by gravity. If such a universe were finite, the stars would fall together, attracted by the force. This could be stopped by assuming an infinite universe, since each star would have equal pull to all sides. Milne and McCrea in the 1930s discovered that Newton’s laws of motion and gravity could be used to describe the universe, giving the same results as the more profound General Theory of Relativity of Einstein. Only when observing very distant objects must we use Einstein’s theory.
How Ahab Lost His Leg--Einstein’s Greatest Blunder
It was this static, infinite ‘Ahabian’ universe to which Einstein and most physicists clung before the era of the First World War. This was shortly before Edwin Hubble and his assistant Milton Humason discovered the galaxies were flying apart. After all, when we look at the sky, the stars appear to hang there motionless except for the motion related to the earth turning under them. Also, observations on the then-new 100 inch Mount Wilson telescope indicated a universe orders of magnitude more vast than previous instruments had revealed. For all intents and purposes, it seemed as if it would all go on forever.
Having just formulated his new theory of gravity, General Relativity, in 1915, Einstein found a static universe puzzling. In this theory of gravity it was possible for the universe to be finite. If the matter in it were dense enough, space and time would bend in response--into a closed form. This is analogous to the surface of a sphere, but in an extra dimension. In this ‘spherical’ closed finite universe, gravity would make it collapse. Being static, as in Newton’s example, there would be nothing to prevent the force from drawing all the stars together. However, we don’t see this, Einstein surmised.
He concluded that gravity must be countered by a repulsive force. He included this as an addendum to his General Theory of Relativity. There are many ways of thinking of this force, but perhaps the most graphic is to compare space to a compressed sponge when you let it expand. An ideal gas has a similar outward pressure. It is sort of like having compressed springs between the galaxies. In fact, our friend Robert Hooke would have been pleased to find that the force necessary is proportional to the expansion scale, as in his famous ‘Hooke’s Law’ of spring extension. However, instead of being attractive, like a spring’s, it is repulsive. As with a spring, the strength of the force is set by a constant. This universal ‘spring’ constant is lambda (Λ), the ‘cosmological constant’.
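For the curious, the Hooke-like statement can be written in the standard notation of modern cosmology (a sketch for pressureless matter; a is the expansion scale, ρ the matter density, G Newton’s constant, and c the speed of light):

\[ \ddot{a} \;=\; -\frac{4\pi G}{3}\,\rho\,a \;+\; \frac{\Lambda c^{2}}{3}\,a \]

The first term is gravity’s braking; the second is the Λ ‘spring’, a push that grows in proportion to the scale a itself.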
There are very convincing mathematical reasons why the force must be represented by a lambda term in Einstein’s Field Equations, but for many years, scientists thought there were observational and theoretical reasons that force should be zero (Λ = 0). We will discuss that later in the book.
When the expanding universe was discovered, the need for a force to balance gravity was done away with. Einstein began to think of the cosmological constant as his ‘Greatest Blunder’. ‘Ahab’ lost his leg in the first encounter with the ‘white whale’.
It was thought that the universe would slow down as it expanded, much like a ball slows down when thrown straight up from the earth. It was believed that gravity was the only force acting on the galaxies and tended to decelerate them. A term was devised as a measure for the amount of deceleration. Thus was born what cosmologists call the ‘deceleration parameter’, q. We will later see, however, that the supernova observations of the expansion rate do not match a state of deceleration. Rather we must hypothesize a force similar to Einstein’s to explain the universal acceleration, and q must be a negative number at present.
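In the same notation as above (a standard textbook definition, not anything peculiar to this book), the deceleration parameter is

\[ q \;=\; -\,\frac{\ddot{a}\,a}{\dot{a}^{2}} \]

The minus sign is a historical relic of the expectation that the expansion had to be slowing: q comes out positive for a decelerating universe and negative for an accelerating one.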
Maps: Is the Universe Closed or Open?
Ahab consulted detailed maps and was familiar with the lay of the oceans. Einstein’s General Theory of Relativity gives us possible maps of the universe. It says that the gravitational nature of matter is equivalent to curved space-time. The more matter there is in space-time, the more curved it is. Generally in cosmology we assume that the matter density at the largest scale is uniform throughout space. Later we will see just how uniform it really is. But for now, consider adding matter uniformly, a little at a time, to originally flat space-time. The more we add, the more curved it becomes.
Note that for the time being, we are leaving out any additional components, like the cosmological constant, contributing to the curvature of space. Though we now know that this is probably not valid, such a discussion is at least simple, and will give us ideas which will be useful later.
The addition of mass increasing curvature is analogous to trying to bend a thin rod of steel by applying increasing force at its two ends. Such a compressive force will be uniformly distributed. The more pressure (gravity due to mass), the more the rod will bend. A critical amount of pressure (mass) may be reached where the steel bends into a circle.
In a similar way, if a critical mass density is exceeded for models of the universe in Einstein’s theory, space-time is bent into a hyper-sphere. This is one dimension up from the surface of a sphere like the earth, and the universe is finite in extent, a finite hyper-spherical surface. We call this a ‘closed universe’. The Soviet physicist Friedmann found the first General Relativistic solutions for the expanding universe. His equations implied that such a closed universe, if only the force of gravity were important, would expand for a time, then recollapse into what we now call ‘the Big Crunch’.
If the present mass density is less than the critical amount, space-time is saddle-shaped and infinite, sort of like an extended ‘Pringle’ potato chip. This is called an ‘open universe’. Friedmann’s mathematics (without cosmological constant) said that this type of universe would expand forever.
The expansion in these models can be grasped if we think of throwing a ball off successively larger asteroids. If your arm can accelerate the ball to a certain velocity, you can throw it straight up at that speed in every case. Newton’s laws tell us that the escape velocity from a gravitating body goes up as the mass goes up.
The gravity of a small asteroid is insufficient to prevent the ball from escaping. In a nearly empty universe, expansion under the influence of gravity only will slow in rate and eventually continue outward at a nearly constant velocity. In fact, this is true for a universe with any density less than the critical amount.
When we try the asteroid with just enough gravity to almost turn the ball around, this is like critical mass density for the universe. All larger asteroids return the ball because escape velocity exceeds the velocity thrown. Thus a universe with enough density doesn’t originally have escape velocity from itself, and recollapses.
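The analogy can be made quantitative (a back-of-the-envelope sketch; the numerical value assumes a Hubble constant of about 70 km/s/Mpc, a figure we return to in Chapter 4). A ball escapes an asteroid of mass M and radius R only if it is thrown faster than the escape velocity, and the corresponding dividing line for the universe is the critical density:

\[ v_{\rm esc} = \sqrt{\frac{2GM}{R}}, \qquad \rho_{c} = \frac{3H_{0}^{2}}{8\pi G} \approx 10^{-26}\ {\rm kg/m^{3}} \]

That critical density amounts to only about five hydrogen atoms per cubic meter of space.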
However, what if gravity is not the only force important to changes in expansion? Then the entire picture must be modified. LeMaitre, the Catholic priest, did calculations in the late 1920s for a repulsive force added to gravity, and found the behavior to be more complex. We will talk about these possibilities later, in our next encounter with the ‘white whale’ of cosmology.
Chapter 2
Starbuck: the Voice of Reason--
Standard Big Bang Theory
“Vengeance on a dumb brute!” cried Starbuck, “that simply smote thee from blindest instinct! Madness! To be enraged with a dumb thing, Captain Ahab, seems blasphemous.”
The Sameness of the Voyage: Homogeneous and Isotropic
As we look out at the expanse of stars, the band of stars we call the Milky Way takes on the character of an imagined, semi-uniform veil. Thus it was that early cosmologists assumed the universe to be uniformly sprinkled with matter at the largest scale. Perhaps the assumption was also constrained by the fact that calculations for a non-uniform universe were outrageously difficult in Einstein’s General Theory. Though knowing that proof of such a sweeping generalization was not attainable with the telescopes of the 20's and 30's, Robertson and Walker obtained in that epoch a description of the curvature of a universe which not only looked the same everywhere, but the same in any direction. They called it ‘homogeneous and isotropic’.
A homogeneous universe is invariant under translation from one position in space to another. The density in one locality is identical to that in another locality. How can this be if the universe contains lumps like stars, galaxies, galactic clusters, galactic superclusters, and perhaps complexes of superclusters? Evidence now indicates that beyond the scale of superclusters this is nearly true. Although the universe has not been thoroughly mapped, the overall picture in our locality is that the universe is smooth at the very largest scale.
An Australian-based group called the 2-Degree Field Galaxy Redshift Survey announced, in June of 2000, the mapping of 100,000 galaxies surrounding us in space. This allowed us, for the first time, to get a good look at the largest structures and beyond. There is filigree inhomogeneity at the level of structure we call superclusters, and perhaps one level of clumping some call ‘continents’ at a larger scale. Beyond that, however, galactic clustering reaches the ‘end of greatness’ and is smooth.
The Sloan Digital Sky Survey has finished a survey of a million galaxies. This has pinned down the nature of the smoothness even further. Nevertheless, this is a drop in the bucket compared with the estimated 50 billion galaxies in the visible universe.
‘Largest scale’ is like looking at the lights of Los Angeles at night from a helicopter at a great height. They all blend into a smooth glow.
There is at least one exception to this smoothness called the Great Attractor, a mass at least a thousand times larger than that of the largest galaxy observed. Hidden by the dust in our galaxy, behind its center, this large gravity source is drawing thousands of galaxies toward it in ‘violation’ of the expanding universe. However, even the Great Attractor pales to insignificance as a lump, if the scale imagined is large enough. This is particularly so if the universe is Open and infinite.
The universe is also isotropic, invariant under rotations. The density we observe at a given distance is the same no matter what direction we look. Maps of the universe at the largest scale look the same wherever we turn.
Thus these two assumptions, very much ad hoc at first, are now fairly firmly founded in observations. They are as rational as Starbuck’s goal to hunt whales without risking the unknown in an encounter with the leviathan, Moby Dick. And, as with Starbuck, one’s livelihood as a cosmologist may depend on these reasonable conditions.
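For readers who want to see what ‘homogeneous and isotropic’ looks like in Einstein’s mathematics, the Robertson-Walker description is usually written in the standard textbook form (t is cosmic time, a(t) the expansion scale, and k the curvature constant):

\[ ds^{2} = -c^{2}dt^{2} + a(t)^{2}\left[\frac{dr^{2}}{1-kr^{2}} + r^{2}\,d\theta^{2} + r^{2}\sin^{2}\theta\, d\phi^{2}\right] \]

The three choices k = +1, 0, -1 correspond to the closed, flat, and open universes of Chapter 1.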
The Origin of the Ocean: the Big Bang
The discovery of the expanding universe began with Vesto Slipher’s measurements of what we call the ‘redshifts’ of galaxies in 1914. Working at Lowell Observatory in Arizona, he noticed the Doppler shifting of spectral lines in 11 out of 15 galaxies was toward the red end of the rainbow, indicating that they were moving away from us. This motion indicated that what were thought to be gas clouds called ‘nebulae’ were in some cases not caught up in the motion of our galaxy. Curtis used this fact to try to convince the scientific world that this ‘island universe’ of the Milky Way was not the only one--that there were other galaxies of stars besides our own. His 1920 debate on this issue with Harlow Shapley, who first proved we don’t inhabit the center of the universe, was inconclusive, however.
Obtaining spectra of galaxies at that time was an all night, six hour time-exposure affair, so the sample was limited. This small sample was by no means proof that the universe was expanding. Four out of the 15 galaxies had blueshifts, indicating motion towards the observer.
Edwin Hubble, a law student turned astronomer, was the first to have the opportunity to take a larger sample on the Mt. Wilson observatory scope. The actual all-night vigils were accomplished by his assistant, Milton Humason. This is sort of like having a crew member up in the crow’s nest searching for spouting whales.
Humason was a mule team driver who carted the telescope parts up the narrow mountain roads to Mt. Wilson. He had only an elementary school education, and was an addicted gambler who was hired on as a janitor in the observatory. His curiosity led Hubble to teach him how to operate the telescope and photographic equipment.
They noticed a trend in the spectra of the more distant of their sample. A little over thirty galaxies seemed to lie along a line in a plot of Doppler-shift-implied recession velocity versus distance. The Doppler shift in frequency can be experienced when a fire engine races away from us. The greater its speed, the greater the shift. Shift implies speed. Blue light from a receding source will be shifted toward the red end of the rainbow. This astronomers call a ‘redshift’.
Using distances derived from other sources, they concluded that the farther away a distant galaxy was, the faster it was moving away from us. The exact proportionality of distance and recession velocity meant something absolutely astounding. It meant that the universe was expanding uniformly. Wherever in the universe an observer would view moderately distant galaxies, they would all lie along the same Hubble’s Law straight line. If one thinks backwards, considering gravity as a retarding force to this expansion, the universe must have come from a ball of matter and energy in an explosion named by Fred Hoyle twenty years later--‘The Big Bang’.
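In formula form (the standard statement of Hubble’s Law; the numerical value of the Hubble constant below is assumed only for illustration):

\[ v = H_{0}\,d, \qquad z \equiv \frac{\Delta\lambda}{\lambda} \approx \frac{v}{c}\ \ \text{(for nearby galaxies)} \]

With H₀ of about 70 km/s per megaparsec, a galaxy 100 megaparsecs away recedes at roughly 7,000 km/s.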
The exact mathematical relationship of Hubble’s Law--and its implications--was worked out by Robertson, a theoretical cosmologist. He then took this uniform expansion and made up what is called a ‘metric’ or scaling geometry for it in Einstein’s General Theory of Relativity. The Soviet Friedmann had also solved Einstein’s equations, though without connecting them to the observed expansion.
Since Hubble, Humason, Robertson, and Friedmann were all responsible for understanding the data, we probably should call Hubble’s Law, the Hubble-Humason-Robertson-Friedmann Law. :>)
Symbolism of the ocean: there is no ether--Einstein himself
One key to Friedmann’s Gravitational theory was that gravity was the only force involved in regulating the expansion. Certain discoveries over twenty years earlier made this assumption at least temporarily reasonable.
Nineteenth Century physicists tried to understand the nature of the propagation of light. Light had been discovered to be an electromagnetic wave by the theory of Maxwell and the light-generating circuits of Hertz. The thinking was that all waves require a medium for the transmission of energy.
If that medium were still with respect to the ‘fixed stars’, the earth moving through it should have created a substantial ether wind. Using a device with twice the accuracy needed to detect such a wind, Michelson and Morley in 1887 bounced light off mirrors in different directions, but noticed no change in the speed of light with direction. This meant that the earth was not ploughing through some ‘stationary’ stuff in space, and that the speed of light could not be ‘boosted’ or retarded.
Albert Einstein was developing a theory which harmonized with the Michelson-Morley experiment. In 1905, as a clerk in the Patent Office in Bern, Switzerland, he published a theory that would transform our understanding of motion forever. His special relativity required that the speed of light would be the same for all observers, no matter how fast they were moving. This required that there could be no motion through a medium that would add or subtract from that speed.
This began to cement the idea in physicists’ minds that space was indeed empty. Light required no medium, it was thought, for its transmission. Could there be a medium for light that did not behave like an ordinary fluid? Is there something in empty space? Is there something spongy that could act as a repulsive force? We will explore these questions later in this book.
This blanket denial of stuff in space was reasonable. It was like the rational Starbuck observing that Ahab’s obsession with hunting the white whale was interfering with making a living by killing the much more abundant darker whales (like the dark matter hypothesis, as we will see later). As Starbuck thought it absurd to strike out after the white leviathan, physicists and astronomers for nine decades, for the most part, had thought it folly to search for a compressible ether in space.
Starbuck warns of the Moby Dick obsession:
LeMaitre’s Universe
Friedmann’s three expanding universes--open, closed, and flat--are obtained by setting the cosmological constant Λ = 0. However, many more possibilities for expanding universes open up with the possibility of a non-zero cosmological constant. Georges LeMaitre, a priest in the Catholic Church, obtained a PhD in Physics from the Massachusetts Institute of Technology in 1927. His thesis involved the first understanding of the Big Bang from a theoretical point of view, utilizing Einstein’s General Relativity. He was the first to state that the universe may have emanated from a hot dense state. His cosmological equations yielded, however, a bewildering variety of universes behaving in response to gravity and the cosmic ‘spring’ force. Those with expansion periods could be classified by the following adaptation of Edward R. Harrison:
‘Bang’--an expansion period, with space-time emanating from a point.
‘Static’--a finite period with no expansion or contraction.
‘Whimper’--a period of expansion to infinity.
‘Crunch’--a period of contraction ending in a point. (My addition to Harrison’s treatment.)
Possible Universal Behaviors
(if Λ is allowed to be positive, negative, or zero):
The first three were the only ones allowed in Friedmann’s picture with Λ = 0, although they remain possible for Λ ≠ 0.
1. Bang-Crunch (closed universe--universe recollapses in finite time)
2. Bang-Static (flat universe--asymptotic--gradually approaches a halt)
3. Bang-Whimper (open universe--universe expands forever)
4. Bang-Static (universe comes to a dead halt non-asymptotically)
5. Bang-Static-Crunch (universe expands, stays static a while, then contracts)
6. Bang-Static-Whimper (LeMaitre’s Universe)
7. Static-Whimper (Eddington’s Universe)
8. Whimper (DeSitter Universe, gravity ignorable, in the usual way of thinking)
9. Crunch-Whimper (universe contracts to a finite size, then expands)
10. Crunch-Static-Whimper
Each of these 10 possibilities has a period of expansion. However, if the universe is presently accelerating, as recent supernova data indicate, this narrows the field to leave only models 3 and 6-10. All of these models allow a wider range of ages for the universe than the limiting Friedmann models do.
LeMaitre’s Universe (6) has a long quasi-stationary period, which was first thought to explain an epoch where Quasars were formed. Many of them were found clustering at a certain distance which corresponded to several billion years ago. (That was at a redshift of z = 2. Note: redshift, the change in wavelength divided by the original wavelength, relates to look-back time.) Later, though, the need for this stable epoch evaporated as more Quasars were discovered at much greater distances.
Eddington’s Universe (7) did away with the need for a beginning, a creation event, with an early static phase extending into the infinite past. During the static phase, no galaxies could have formed. It lasted until about 10 billion years ago. Thus this model also is not viable, since we have observed galaxies as much as 13 billion years old.
‘Crunch-whimper’ (9) is only allowed for a cosmological constant much smaller than Einstein’s original value.
‘Crunch-Static-Whimper’ (10), like LeMaitre’s universe, has a long stable static period, for which we see no evidence.
Though it will take more evidence to motivate it, we will argue that with the latest supernova observations, all that seems to be left is the Bang-Whimper (3) and the DeSitter Universe (8). Most cosmologists rule out a currently DeSitter Universe, however, because it is an ‘empty’ universe with no matter in it, and the universe seems to have had a phase where it decelerated, a phase when matter dominated.
Chapter 3
The Lost Boat: Steady State Theory capsizes.
“There are certain times and occasions in this strange mixed affair we call life when a man takes this whole universe for a vast practical joke, though the wit thereof he but dimly discerns, and more than suspects that the joke is at nobody’s expense but his own.”
Melville’s comment after a whale chase in Moby Dick that resulted in the temporary loss of a boat.
“The Steady State Theory has a sweep and beauty that for some unaccountable reason the architect of the universe appears to have overlooked. The universe is in fact a botched job, but I suppose we shall have to make the best of it.”
Dennis Sciama, 1967
First sighting of a School of Whales: the DeSitter Universe
“As he stood hovering over you half suspended in air, so wildly and eagerly peering toward the horizon, you would have thought him some prophet or seer beholding the shadows of Fate, and by those wild cries announcing their coming.”
“‘There she blows! there! there! there! she blows! she blows!’”
A friend of Einstein’s, a Dutch theoretician by the name of DeSitter, in 1917 proposed a surprising cosmology. He showed that the universe could expand ever more rapidly--exponentially--if it had no matter in it, only ‘funny energy’ in space described by Einstein’s cosmological constant. Hubble had not discovered the expanding universe yet. That came more than ten years later.
DeSitter’s suggestion was a wild stab in the dark with no evidence to back it up. However, if one believes in intuition, it was a great sign of what was to come. Not only was it the first proposed expanding universe, but the expansion paved the way for an important phase in cosmological wandering--the Steady State Universe.
The emptiness of the DeSitter universe could be justified by saying that the ‘funny energy’ of the Cosmological Constant dominated over the effects of gravity, and it just may be what our universe will end up like. So his theory is very likely an asymptotic description of our own universe, according to present cosmological knowledge. At least in the future, the universe will probably break free of its gravitational bonds, and expand under the influence of the cosmological ‘funny energy’ in space itself. That is, as long as the funny energy obeys Einstein’s requirement that its density remain constant, a cosmological constant. We will see later that other types of ‘funny energy’ have been proposed which would accelerate universal expansion, but none of them at this point seem as likely as a cosmological constant, simply for the reason that they would violate Einstein’s General Theory of Relativity--the comfort zone of many physicists.
DeSitter’s universe is a flat universe. The curvature constant, k, is zero, and the mass density is zero in the Friedmann equations. The Hubble constant is proportional to the square root of the Cosmological Constant, Λ, implying that the scale of distances increases as an exponential function of time. An important measure is the deceleration parameter, q, describing how fast the universe is slowing down. This remains constant at q = -1. The negative value means the universe is speeding up, not decelerating. This is the very value which the Adam Riess group from Berkeley Labs reported with the first big batch of redshifts of moderately distant supernovae, and it is consistent with the current WMAP value of q < -0.8. Could it be that the universe--at least currently--behaves as if it is empty... even though it has matter in it?
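In the notation used earlier (standard results for the pure-Λ case, quoted here without derivation):

\[ H = \sqrt{\frac{\Lambda c^{2}}{3}}, \qquad a(t) \propto e^{Ht}, \qquad q = -\frac{\ddot{a}\,a}{\dot{a}^{2}} = -1 \]

An exponential always accelerates, so the deceleration parameter sits permanently at -1.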
In a way, the DeSitter Universe is a steady state universe. In the conventional way of looking at it, the density of matter remains constant at zero, even though two observers will drift apart exponentially. The universe always looks the same. This is all mathematical and hypothetical--don’t ask me how two observers get to exist in a universe without matter.
The Rescued Crew Members:
Hoyle, Gold, and Bondi rescue Einstein
One of the fascinating features of a pure DeSitter universe is that it has no beginning. It is a way of putting off the question, ‘where does the universe come from’. It never was infinitesimally small, so there is no cause required. At least that was the argument Gold and Bondi gave for the first De Sitter-like cosmological model which proposed to match the evidence--the Steady State Universe with matter in it.
To produce such a universe, Gold and Bondi proposed not only that the universe should look the same throughout all space, but throughout all time. Sameness throughout all space had been tagged ‘The Cosmological Principle’. Space-time sameness they called the ‘Perfect Cosmological Principle’. In the form they gave it, however, it was not so perfect. We have found that the universe does evolve. Nevertheless, the concept prompted the search for cosmic evolution, and gave us some mathematical tools which may help us to model possible objects called ‘white holes’, for example.
To maintain a constant matter density, the Steady State Universe had to involve continuous creation of matter in space at a rate too small to be detected. The rate of matter creation in the theory had to exactly fill in the empty space left by the expansion of the universe. This assumption was completely ad hoc, and thought by some to be a flaw in the theory.
Pascual Jordan in 1933 had modified General Relativity with new mathematical objects called scalar-tensors which he said would produce ‘drops’ of created matter in space. However, there was no evidence indicating that Einstein’s theory needed to be modified.
Fred Hoyle created new objects within Einstein’s General Relativity called C operators which did the job. He is thus credited with inventing the theoretical equipment to make Steady State work. However, his theory gave no indication of what form the created matter would take.
The Steady State universe doubles its matter content in about one third of the Hubble time (1/H0). It expands exponentially as in the DeSitter Universe, and the deceleration parameter is q = -1. One third of the Hubble time is about 4 or 5 billion years, and is indicative of the average age of galaxies in the theory. The Milky Way galaxy is most likely about 12-13.5 billion years old, making it quite out of the ordinary. In the regular big bang universe, most galaxies are the age of the Milky Way, more or less. This was not clear at the time the theory was proposed, however.
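To keep the density fixed while any expanding region grows in volume, matter must appear at a rate that exactly offsets the dilution. In formula form (a back-of-the-envelope figure; the number assumes roughly the critical density and a Hubble constant near 70 km/s/Mpc):

\[ \left.\frac{d\rho}{dt}\right|_{\rm creation} = 3H_{0}\,\rho \;\sim\; \text{one hydrogen atom per cubic meter per billion years} \]

A rate that small is hopelessly beyond any laboratory test, which is why the assumption could be neither confirmed nor directly refuted.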
A School of whales:
quasars signal doom for continuous creation.
There are two stronger reasons the Steady State Universe has been rejected. First, there were more strong radio sources in the past than in the present. Many of these radio sources are quasars, the nuclei of ancient galaxies pouring out vast amounts of energy. In fact we see no quasars nearby, but many as we look out in space, and thus back in time. At least some large spiral galaxies’ cores may even evolve from powerful gushers (quasars) in the distant past, through active galaxies (less potent gushers) in the median past, to normal present-day spirals (not as bright in the core). The first glimpses of this evolution were discovered in the mid ‘60s, and meant it was time to move on from Steady State cosmological theories.
Fedallah--the subordinate phantom:
Cosmic Microwave Background
Fedallah, the turbaned assistant of Captain Ahab, came from one of “those insulated, immemorial, unalterable countries, which even in these modern days still preserve much of the ghostly aboriginalness of earth’s primal generations.”
The second reason for the rejection of Steady State theories came in the mid ‘60s with the discovery of the Cosmic Microwave Background (CMB) by Penzias and Wilson. They were working on microwave communications at Bell Labs, utilizing a horn-shaped antenna. They noticed they were receiving a mysterious noise, no matter what direction they pointed it.
Thinking it was caused by pigeon droppings, they cleared the horn--to no avail. They soon realized that the wavelength of this radiation corresponded closely to the energy predicted by McKellar and Gamow 20 years earlier as a remnant of the fireball of the Big Bang. They had discovered the cosmic microwave background.
Some 280,000 years after the Big Bang, in most theories, the universe was a soup composed mainly of light, protons, and electrons. Enough space had evolved so that the energy of the light couldn’t keep the electrons from going into ‘orbit’ about the protons. As the electrons fell into orbit, they released ultraviolet photons of a specific energy. The universe expanded, and space with it, increasing the wavelength of the light from ultraviolet to microwaves. Ultraviolet light has a wavelength of a few billionths of a meter, and microwaves a few centimeters, so this was a substantial stretch. I usually illustrate this in my classes by pulling on the ends of a coat-hanger wire I have bent into a sine wave.
The era that generated the background is called the ‘Recombination Era’ because electrons are recombining with protons. The light radiation released is also sometimes called ‘recombination radiation’.
The CMB is thus one of the great proofs of the Big Bang. The microwave background is difficult to explain in a universe that is never dense enough in the past to prevent electrons and protons from being together. Thus the Steady State Universe requires some special circumstance to explain the background: super-massive stars or masses varying with time, for example.
CMB has so many implications that it really deserves a chapter by itself. It is actually a spread of wavelengths with a peak at a specific wavelength. We call this a Black Body Spectrum because a black body, a star, a hole in an electromagnetic cavity resonator, all give off this characteristic spread of photon energies. A dense cloud of gas in equilibrium with itself produces it, and leaves a clue as to its temperature. Wien’s law tells us that the peak wavelength is related to that temperature. For CMB the corresponding temperature is 2.7 Kelvin above absolute zero--very low indeed.
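Wien’s law, mentioned above, can be written compactly (b is the standard laboratory constant):

\[ \lambda_{\rm peak} = \frac{b}{T}, \qquad b \approx 2.9\times10^{-3}\ {\rm m\cdot K} \]

For T = 2.7 K this gives a peak wavelength of about 1.1 millimeters, squarely in the microwave band.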
One of the first tasks was to check whether the radiation followed the black body curve that would be expected if it were a remnant of the Big Bang. It did, and remarkably the radiation was coming from all over the sky with very close to the same temperature, betraying a uniformity reminiscent of the primordial gas.
In the early ‘90s, however, George Smoot and his team from UC Berkeley investigated the microwave background with COBE, the Cosmic Background Explorer satellite. They found something even more remarkable. There were small variations of temperature of about one part in a hundred thousand, just large enough to be tracers of the original lumpiness of the matter at the Recombination Era. These are just the right size, under the influence of long-term gravity, to become galaxies, clusters of galaxies, and stars. In addition, the spread of lump sizes was just that expected in the Inflationary Theory of the quantum generation of the universe. We will have much to say about this later.
The important fact for now is that the CMB is a nearly indigestible fact for most Steady State Theories, though some, including Philip Peebles, don't consider it a devastating blow to the theory. Cosmic evolution is.
Chapter 4
The Captain of the British Ship and his lost arm: other failed theories
"The White Whale," said the Englishman, pointing his ivory arm toward the east, and taking a rueful sight along it, as if it had been a telescope; "there I saw him on the Line, last season."
"And he took that arm off, didn't he?" asked Ahab.
How dangerous is Moby Dick? Conflict over The Hubble Constant
There are ten numbers now reputed by some to be necessary to determine a cosmology. Cosmologists call it the 'ten parameter' theory. We have talked about the mass density of the universe, the size of the cosmological constant energy density, Λ, and the current deceleration parameter, q0, but the first number we expected to pin down was the Hubble Constant, H0. This is because it can be determined using fairly nearby galaxies.
The Hubble Constant is the slope of the Hubble's Law plot, the ratio of recession velocity to distance. We get recession velocity from the redshift of a galactic spectrum, so if we can find a galaxy's distance accurately we get a good value for H0. This tells us how fast the universe is expanding in the present epoch. It is the first approximation to the low-redshift, nearby end of the Hubble plot, which is as far as Hubble himself got. It is of prime importance because it starts our quest for cosmology, being essential for the first approximation of how the universe has expanded. It answers the question: how is the universe expanding now?
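As a minimal sketch of that straight-line relation, assuming the roughly 70 km/s/Megaparsec value discussed later in this chapter (the function name is mine, for illustration only):

    # Hubble's law: recession velocity equals the Hubble constant times distance.
    H0 = 70.0                          # km/s per megaparsec (illustrative value)
    def recession_velocity(distance_mpc):
        return H0 * distance_mpc       # km/s
    print(recession_velocity(100.0))   # a galaxy 100 megaparsecs away recedes at about 7000 km/s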
If we could get q0, the current deceleration parameter, we would get the second approximation, or how the universe expanded in the moderately recent past. It tells us how the Hubble plot is shaped at moderate redshifts. However, up until a decade ago there was much dispute over both quantities.
Let’s consider H0. Some history is in order, to give us a clear perspective of how difficult the task of determining it is.
Hubble found the distance to the nearby Andromeda galaxy by observing what are called Cepheid Variable stars in that conglomeration. In about 1912, Henrietta Leavitt from Harvard gave us one of our most powerful distance-determining techniques. Cepheid Variables oscillate in brightness over a period of 1 to 50 days, but each star dims and brightens with regularity. Leavitt found that the longer the period of variation, the greater the star's luminosity--power output at the source. From luminosity and apparent brightness at earth (called flux) one can obtain a star's distance (now called the luminosity distance, dL).
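A minimal sketch of the inverse-square reasoning behind the luminosity distance, with made-up numbers purely for illustration:

    import math
    # Inverse-square law: flux falls off as the square of distance, so
    # distance follows from a known luminosity and a measured flux.
    def luminosity_distance_m(luminosity_watts, flux_watts_per_m2):
        return math.sqrt(luminosity_watts / (4.0 * math.pi * flux_watts_per_m2))
    L_cepheid = 1.0e4 * 3.8e26         # a Cepheid of 10,000 solar luminosities (illustrative)
    flux = 1.0e-12                     # hypothetical measured flux in watts per square meter
    print(luminosity_distance_m(L_cepheid, flux) / 9.46e15)   # distance in light years, ~58,000 here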
By using other methods to obtain the luminosity of closer Cepheids, Leavitt hoped to be able to determine distances to more distant ones. Harlow Shapley did just that. However, his research and that which followed was flawed, because intervening dust obscured and dimmed the light from the clusters used to calibrate these stars. There was also confusion, because Cepheid variable stars in globular clusters turned out to obey a different period-luminosity scale than those in the spiral arms of spiral galaxies.
Up until the late forties and early fifties, it was thought that the time since the Big Bang was much shorter than we now believe. This was because the Cepheid variable star scale for luminosity was off by about a factor of 5. Astronomers were using Type II Cepheid standards for observing Type I Cepheids. This made the related distances off by about a factor of 3. The main yardstick for nearby galaxies was off by that factor, and thus more distant galactic distances were off too, because those distances were based on the nearby measuring technique.
Thus for 20 or 30 years it was thought that the universe was only about 3 billion years old, which was fine because we hadn't encountered anything older. Then in the early '50s, radioactive dating put the age of the earth and its meteorites at 4.5 billion years. This 'time conflict' was another factor which led Bondi and Gold to their infinite-universal-age Steady State theories. Ordinary Friedmann cosmology wouldn't allow such antiquity.
More accurate determinations of the Hubble constant were made, implying the universe was also about 5 times larger than once thought--distance scales were up by 5. Allan Sandage, Hubble's successor at the Mt. Wilson Observatory, managed this refinement. However, confidence in the Sandage value of H0 eroded in the 1980s when new determinations of the Hubble Constant emerged which were higher by about 50%. In the mid-1990s the debate heated up, with Wendy Freedman and her group using what is thought to be the most accurate method of scaling the universe--Cepheid variables.
Before the Hubble Space Telescope (HST) was fully operational, it was impossible to see Cepheid variable stars in galaxies outside the Local Group. However, HST allowed Wendy and her Carnegie-based group to work with Cepheids in the nearby Virgo Cluster of galaxies.
Since Cepheid variables are thought to be accurate to within about 5%, we now have most astronomers settling on a Hubble Constant of about 70 km/s/Megaparsec. This is much larger than Sandage's stubborn claim of 55. Cosmologists drifted into the Freedman camp, and later Type Ia supernova data came out closer to Freedman's value.
Is Moby Dick Immortal? The Age of the Universe In Doubt
“‘Whom call ye Moby Dick?’ ‘A very white, and famous, and most deadly immortal monster...’”
Freedman's large Hubble Constant makes the age of the universe problematic, if one ignores a cosmological constant. In standard General Relativistic cosmological models, the new H0 means the universe cannot have been much older than 13 billion years, and that's really stretching it. When Freedman first announced her results, the age was forced under 11 billion years, but under pressure and supposed 'closer examination', H0 was given a smaller value--a good thing too, as WMAP Microwave Background measurements announced in 2003 gave an age of the universe of 13.7 billion years.
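To see why a larger Hubble Constant squeezes the age, here is a hedged back-of-the-envelope in Python. It assumes only the textbook results that an empty, coasting universe has age 1/H0 and a flat, matter-only universe two-thirds of that; the numbers are rough.

    # Rough ages implied by a given Hubble constant, with no cosmological constant.
    seconds_per_billion_years = 3.156e16
    km_per_megaparsec = 3.086e19
    def hubble_time_gyr(H0):                   # H0 in km/s per megaparsec
        return (km_per_megaparsec / H0) / seconds_per_billion_years
    for H0 in (55.0, 70.0):
        t = hubble_time_gyr(H0)
        print(H0, round(t, 1), round(2.0 * t / 3.0, 1))
    # H0 = 55 gives about 17.8 and 11.9 billion years; H0 = 70 gives about 14.0 and 9.3.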
When the 11 billion year age figure was announced, all hell broke loose in the cosmological world. After all, the stars in the globular clusters of the Milky Way were at the very least 12 billion years old, a figure difficult to fudge. Additionally, the oldest galaxy we have found is at minimum 13 billion years old, and some suggest there are quasars as old as 14 billion years! We may still have a problem as we look for older objects. We call this the 'time problem' in cosmology.
The time problem was made even worse if one accepted theoretical demands that the density of the universe be as large as the critical value necessary for the Inflationary Scenario of Alan Guth and others. At critical density with no cosmological constant, the age drops to two-thirds of the Hubble time--about 9 billion years--incredibly small for those espousing a particle physics-driven early period of exponential expansion. It seemed that the 'inner space--outer space' connection--relating particle physics to cosmology--was absurd.
It was at that point that the particle physics cosmologists like Michael Turner from the University of Chicago began to seriously consider the return of Einstein’s ‘greatest blunder’, the cosmological constant. Michael Turner’s lectures had become very popular, with humor and entertaining graphical illustrations, and he became the spokesman for the ‘inner space’ connection. The reemerging cosmological constant was required by observations to be very small. This was still an embarrassment for particle theory. Theorists had calculated that empty space should have an absurdly large cosmological constant or none at all. Nevertheless, they swallowed their pride in the face of necessity.
It’s like the old man who bought a pair of shoes (time scale) too small to fit in a second hand store. He paid half the price of a new pair to get them stretched, and they didn’t fit. He tried getting them stretched a second time, but found he couldn’t even use a new shoehorn to get his feet in them. In desperation he bought a new pair (with cosmological constant), paying double the price he would have if he had bought the new ones in the beginning.
A new cosmological theory was needed. We had outgrown our cosmological ‘shoes’ several times and we still clung to shoes that didn’t fit. The oldest objects we are seeing now are claimed to be products of a just-born universe, but are they? When the cosmological constant is thrown into the mix, the time problem is not as severe. However, it will be humorous to look back on the difficulty in the paradigm shift if we start discovering we need theories with objects 17, 20, or even 25 billion years old.
The Legend Lives On–The Virgo Infall
“‘Avast’ roared Ahab, ... ‘Man the boat! Which way heading?’
‘Good God!’ cried the English captain... ‘What’s the matter? He was heading east, I think–is your captain crazy?’”
The first reasonably recent sighting of the cosmological constant 'white whale' was reported by two researchers at the University of Hawaii in about 1985. They were investigating the peculiar velocity of the Milky Way relative to the Hubble flow and found that the local group of galaxies was falling into a nearby rich cluster--the Virgo Cluster.
The Virgo galactic cluster has hundreds of galaxies grouped about a supergiant elliptical galaxy called M87, one of the largest galaxies we have seen. Tully and Shaya from Hawaii were able to plot the motion of the Milky Way and found it falling into that massive cluster. Thus to measure the rate of universal expansion using the redshift of galaxies in the Virgo cluster, it was necessary to add the velocity of infall to the recession velocity to get the speed of Hubble expansion.
With the universe thus expanding faster than was thought before, the flat universe needed for inflationary theories required additional complexity to avoid having too little time. Tully and Shaya claimed that a cosmological constant was necessary to resolve the time problem created by a more rapidly expanding universe. The particle cosmologists felt the bite as clearly as if they had lost their left arm to the leviathan.
I recall that the reaction of some inflationary theorists was to go into a kind of denial. They ignored the warning sign for a long time, and still went on looking for extra matter--dark matter--to add to visible matter. They needed exactly the critical density to make their theories work, and only about a fifth of the matter required had been sighted or inferred gravitationally. Never mind that this was another weakness of the theory.
The theoretical reasons for inflation, however, were impelling, as we will see in the next and following chapters.
Chapter 5
Pip Goes Missing: Missing Matter
“Pip jumped again,... and was left behind on the sea... Out from the center of the sea, poor Pip turned his crisp, curling, black head to the sun, another lonely castaway...”
‘Where is that black boy?’ Do Galaxies have missing Mass?
The first indication of dark matter in the Milky Way was encountered by the Dutch astronomer Jan Oort. Oort measured the velocities of stars that gravity makes bob up and down through the plane of our galaxy's disk like horses on a carousel. He could not account for their high bobbing speeds without hypothesizing more matter than was seen in stars. He suggested an amount of invisible matter equal to the visible matter. He would later propose the region beyond Pluto where most comets spend most of their time--the famous Oort Cloud, another conglomeration of unseen matter.
This suggestion was confirmed by observations of galactic rotation. In about 1970, Vera Rubin and W. Kent Ford at the Carnegie Institution were first able to get data accurately plotting rotational speeds versus distance from the center in the nearby spiral Andromeda galaxy. To Vera's surprise, the run of orbital speeds did not obey Kepler's laws of planetary motion. This meant that the gravity of the stars they could see was not enough to account for the rotation of the galaxy. The velocities remained fairly constant as one observed further and further out from the center, whereas Keplerian motion would have had them tail off, radically departing from what was seen.
To observe the rotation curve of a galaxy, one has to block all but the portion one is observing and find the component of rotational speed in the line of sight from a Doppler redshift or blueshift. If the galaxy is not seen edge-on, a geometric correction must be made to find the actual rotation speed.
A few years later, Vera Rubin collaborated with David Burstein and Bradley Whitmore and observed and analyzed dozens more inclined spirals. They all had an anomalous rotational spread similar to Andromeda's. The conclusion--not reached right away--was that unseen matter's gravity was producing this. The only reasonable way was to have an enormous amount of 'dark matter' interspersed throughout each galaxy. Perhaps 1% of the matter needed to produce a flat universe had been seen in stars at that time. Galaxy rotation curves upped the ante substantially, at least in principle.
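The Newtonian reasoning can be sketched in a few lines of Python: for a circular orbit, the mass enclosed within radius r is v squared times r over G, so a rotation curve that stays flat implies enclosed mass growing in proportion to radius. The speeds and radii below are illustrative, not Rubin's data.

    # Enclosed mass from a circular orbit: M(r) = v^2 * r / G.
    G = 6.674e-11                      # m^3 / (kg s^2)
    kpc = 3.086e19                     # meters in a kiloparsec
    M_sun = 1.989e30                   # kilograms
    def enclosed_mass_solar(v_km_s, r_kpc):
        v = v_km_s * 1.0e3
        return v * v * (r_kpc * kpc) / G / M_sun
    print(enclosed_mass_solar(220.0, 10.0))   # ~1.1e11 solar masses inside 10 kpc
    print(enclosed_mass_solar(220.0, 50.0))   # ~5.6e11 inside 50 kpc: five times more, since the curve stays flat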
More than 200 galaxies have been measured since 1978, firmly establishing the necessity for dark matter within them. This is in a proportion of about ten to one, lifting the matter inferred to occupy the universe to about 10% of the critical density. Since then we have found some of the dark mass necessary, and although some of it still eludes our view, only a radical dissenter like Milgrom (in the early '80s) has claimed that Newton's laws (equivalent to Kepler's for orbital motion) break down at the very small accelerations found in the outskirts of galaxies.
In 1973, however, Jeremy Ostriker and Philip Peebles constructed a computer model of the spiral Milky Way, and found that stars could not remain in a disk of matter alone. Some went flying off, and others went into very eccentric orbits. To maintain the spiral disk, they had to model it as being embedded in a much larger spherical 'halo' of matter. Observations have somewhat borne out the theory, and the halo is now thought to be five to ten times the diameter of the visible star disk. So it is likely that a nearly equal amount of dark matter resides in the halo, far outside the dark matter in the disk.
He could be anywhere: Galactic Clusters.
Call him a rebel or a stubborn individualist, Fritz Zwicky established one of the most memorable careers at Caltech. His work led to the understanding of the neutron star. He was a master of crystal and liquid physics. Along with his collaborator Walter Baade, he identified supernovae as a source of cosmic rays.
He was able to have a special 18-inch wide-field telescope constructed for him at Palomar, with which he scanned the skies for supernovae. He boasted that the only two who knew how to use such a small telescope were he and Galileo. In the process, he produced two catalogs of galaxies: one of 30,000 galaxies in the northern hemisphere, and one of active galaxies with bright cores.
He rightfully, however, holds the claim to being the first to discover the necessity for dark matter. In the early 1930s he studied the motion of galaxies in the Coma Cluster, about 300 million light years away. This rich cluster was found to have its hundreds of members swarming around each other much faster than the masses of its members and Newtonian gravity would indicate. To understand this, one merely has to note that remaining in orbit around a planet more massive than the earth requires a higher orbital velocity.
It didn’t take Zwicky long to realize that there was extra matter in that galactic cluster. In a Swiss physics journal he called it ‘dunkle materie’ (dark matter). That was 1933, many decades before Vera Rubin’s encounter with extra matter in spiral galaxies.
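A minimal sketch of the kind of estimate Zwicky made, using the rough virial relation M of order sigma squared times R over G; the velocity spread and cluster radius below are round illustrative numbers, not his.

    # Rough virial mass of a bound cluster: M ~ sigma^2 * R / G.
    G = 6.674e-11
    Mpc = 3.086e22                 # meters in a megaparsec
    M_sun = 1.989e30
    sigma = 1.0e6                  # galaxy velocity spread, ~1000 km/s (illustrative)
    R = 1.0 * Mpc                  # cluster radius of roughly a megaparsec (illustrative)
    print(sigma**2 * R / G / M_sun)   # a few times 10^14 solar masses, far more than the visible galaxies supply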
Not long after that, Sinclair Smith at Mt. Wilson Observatory applied Zwicky’s techniques and dark matter conclusion to the rich Virgo cluster of galaxies, somewhat closer than Coma. He concluded that there was “a great mass of internebular material within the cluster”.
We now know that if we use Rubin's masses, including the dark matter internal to galaxies and their halos, it is still necessary to postulate about an equal amount of dark matter between the galaxies--that is, if we wish to rescue Newton's Laws. This brings the tally of inferred dark matter, at most, to about 20% of the critical density.
Zwicky also paved the way to an independent means of assessing intergalactic dark matter. In 1937, he suggested for the first time that a galaxy might be weighed by noting its effect as a 'gravitational lens'. Later he made the seemingly outrageous claim, 40 years ahead of the actual observation, that galaxies and large masses could create multiple images or even a ring of images by the gravitational bending of light. This is just the way General Relativity says light bends around the sun, on a far grander scale.
Looking at images of how light from distant galaxies passes through a large cluster of galaxies, J. Anthony Tyson and his colleagues from Bell Labs utilized this gravitational lensing effect to map the dark matter in the cluster Abell 1689. A photograph taken in the early '90s shows the distant galaxies as small circular arcs around the center of the cluster. All told, then, there is 10-20 times more material acting gravitationally, internal and external to galaxies, than exists in stars.
The ocean appears calm: Is the Universe Flat?
Is there more dark matter than the 10-20% of critical density accounted for? And even considering the missing mass of galaxies and galactic clusters, is it possible that this is not an effect of matter at all?
The argument for even more dark matter is mainly theoretical. As the universe expands, any deviation from flat, Euclidean space-time is magnified by the expansion. The fact that we are at about 20% of critical density today is remarkably close, considering that if we work this backward, the universe must have been within roughly one part in a hundred trillion of the critical value in the beginning. Such a finely tuned initial state of the universe is highly improbable. So why not postulate that things were simple--exactly critical density for all time? Or at least for a long time after a brief, rapid inflationary period?
The inflationary theory, explained in the next chapter, does just that. It resolves this dilemma, called the ‘flatness problem’, by postulating a nearly eternally flat universe: zero curvature.
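A hedged sketch of how the fine tuning arises, assuming the standard textbook scalings that the deviation of the density from critical grows in proportion to t to the two-thirds power during the matter era and to t during the earlier radiation era; all numbers are rough and purely illustrative.

    # Running |Omega - 1| backward in time with the standard scalings.
    year = 3.156e7                       # seconds
    t_now = 14.0e9 * year                # roughly the present age
    t_equality = 5.0e4 * year            # rough matter-radiation equality, ~50,000 years
    offset_now = 0.7                     # |Omega - 1| today if Omega is about 0.3
    offset_at_equality = offset_now * (t_equality / t_now) ** (2.0 / 3.0)
    offset_at_one_second = offset_at_equality * (1.0 / t_equality)
    print(offset_at_equality)            # about 1e-4
    print(offset_at_one_second)          # about 1e-16: the kind of fine tuning the flatness argument points to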
In a way, however, without additional arguments, this is much like learning that someone shot a powerful bow and arrow 5 miles. You are able to see through powerful binoculars that the target was hit, but not which part of it. You then conclude that, since it's so unlikely the target was hit at all, there must have been a way of aiming the arrow perfectly--that the arrow hit the exact center. This is absurd. In other words, it is more reasonable to conclude that the 'arrow' did not hit the center of the target. Similarly, reason suggests that it is possible there is no more than about 30% of critical density in matter, and modern observations confirm that.
Philip Peebles, the Princeton author of the standard book on the evidence--Physical Cosmology--went through earlier phases in which he doubted the flatness argument for dark matter. It is, however, because of other impelling theoretical arguments that this flatness resolution makes more sense, as we will see in the next chapter.
Pip becomes schizophrenic: many possibilities for dark matter.
When models are computed for the maximum amount of normal matter formed in the Big Bang, theorists can account for about 20% of critical density. This normal matter, sometimes called 'baryonic matter', is made up primarily of baryons: protons and neutrons. Electrons make up a negligible portion by mass. Thus normal objects we know and love, which are composed of atoms, could conceivably be all there is. And the majority of it might be hidden. I joke with my students, saying that 'God put most of the matter in the universe where the sun don't shine'.
It is usually assumed that when we look for the matter accounting for the gravitational anomalies in galaxies and clusters, we are looking for baryonic matter. However, 'it ain't necessarily so'. Some of the baryonic matter may be distributed in realms between the galactic clusters. It is reasonably certain, though, that to reach critical density we must look for exotic, non-baryonic matter.
Baryonic Dark Matter
Until the cosmic repulsion discovery in 1998, the most conservative among astronomers believed that the only dark matter was to be found internal to galaxies and clusters. Not only did they suggest that it was only in the amount necessary to explain galactic motion, but that all of it was in ordinary, baryonic form.
Think of the many possibilities: dim or obscured galaxies, faint or failed stars, planets and black holes, and interstellar gas and dust. And we keep discovering new forms.
In the late '70s, the Einstein X-ray observatory satellite discovered that galactic clusters had globs of x-ray emitting gas in and around them. The hydrogen gas inhabiting the regions between galaxies and the voids between superclusters is visible in the radio. We also keep seeing more and more stars in galaxies as the light-gathering capabilities of our telescopes increase with the area of their apertures and the efficiency of detection technology.
Spiral galactic disks seen before and after this increased ability to see them look like doughnuts before and after deep frying. There are also dark galaxies, ineffective in producing stars and composed mainly of gas. All of these finds reduce the amount of unexplained matter needed to account for cluster motions, but together they still make up only a small fraction of the mass needed for galaxies and clusters to be Newtonian.
In addition, IRAS, a 1983 satellite viewing in the infrared, observed dust and gas around newly forming stars. The protoplanetary disks of stars like Vega and Beta Pictoris became visible. These are young stars with pre-planetary dust and rock surrounding them in doughnut-shaped disks. Such disks were estimated to accompany perhaps 50% of young stars, making them another candidate contribution to dark matter.
However, we can do a rough calculation. Only about one thousandth of the mass of our solar system is in planets and debris, and stars themselves supply only about 1% of critical density. Half of young stars carrying such disks would then contribute roughly half of a thousandth of a percent of critical density in planets and protoplanetary material. That is tiny enough to make dark matter scavenger hunters go stark raving mad.
Speculative forms of dark matter often don't stand up to theoretical tests. Cold gas in a halo around galaxies would have fallen in since the Big Bang, and hot gas would radiate and be seen. Enough dust to make up a goodly amount of dark matter would obscure the stars. So if there is something in the halos surrounding spiral galaxies (of which there are many, especially early on), it must be in a more condensed form.
The most likely candidates for this condensed matter up until the year 2000 were MACHOS (massive compact halo objects).
In the formation of stars, it is reasonable that some clumps of matter will not gather the 0.08 solar masses necessary to kindle nuclear fusion at 10 million kelvins. We know now that gas clouds come in all sizes, and like watery dough dropped into rapidly boiling oil, they break up into a spectrum of sizes, some of them not quite large enough to kindle hydrogen burning. These failed stars may exist in an abundance comparable to ordinary stars. However, since their mass is much smaller than the average star's, they cannot make up a substantial fraction of the missing mass in the disk or even the core of a galaxy.
Nevertheless, halo matter is probably much more sparse, and some astronomers say that failed stars called brown dwarfs exist in the halo in the trillions and thus could make up as much as half of the missing matter in galaxies. This hope is fading somewhat, however, as the search for brown dwarfs progresses. Several such objects have been found. Gliese 229B, for example, is 30-55 times the mass of Jupiter, has a surface temperature of 1200 K, and shines with six millionths the luminosity of the sun. First viewed at Palomar, this object shows no metal spectral lines.
The brown dwarf search also goes on utilizing the gravitational lens effect. When a brown dwarf passes in front of a more distant star, it acts like a magnifying glass, focusing that star's light and making it appear much brighter for a while. The duration of the lensing event might give us the mass of the MACHO. Such events affect only about one star in a million per year, and the bright stars toward the halo are sparse to begin with.
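A hedged sketch of how event duration traces lens mass: the lensing 'magnifying glass' has a characteristic size, the Einstein radius, which grows as the square root of the lens mass, and the event lasts roughly the time the lens takes to cross it. The source distance, transverse speed, and lens position below are illustrative assumptions, not survey parameters.

    import math
    # Einstein-radius crossing time for a compact lens passing in front of a background star.
    G, c = 6.674e-11, 3.0e8
    kpc = 3.086e19
    M_sun = 1.989e30
    def crossing_time_days(lens_mass_solar, source_kpc=50.0, v_km_s=200.0, x=0.5):
        # x is the lens position as a fraction of the source distance (illustrative)
        D_s = source_kpc * kpc
        R_E = math.sqrt(4.0 * G * lens_mass_solar * M_sun * x * (1.0 - x) * D_s) / c
        return R_E / (v_km_s * 1.0e3) / 86400.0
    print(crossing_time_days(0.05))    # a brown-dwarf lens: the brightening lasts of order weeks
    print(crossing_time_days(1.0))     # a solar-mass lens: of order months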
However, estimates from the number of lensing events seen in a year indicate that brown dwarfs are not living up to early expectations. There are probably far fewer than a trillion in the Milky Way.
The conclusions of Katherine Freese from the University of Michigan even cast severe doubt on the lensing observations. Such stars would have to show the signatures of large mass loss, heavy elements, and infrared radiation. None of these have been seen. Besides, a large amount of baryons would not be consistent with our present understanding of the elements formed in the Big Bang. She concludes, “It’s looking very likely that 50-90 percent of our galaxy is non-baryonic.”
We come to a rankling conclusion: baryonic matter doesn't even explain the non-Keplerian rotation curves of spiral galaxies. In addition, baryonic matter is even more sparse between galaxies, and thus we are even further from explaining the 'Zwicky anomaly'--galaxies zipping around in clusters as if they were a swarm of hornets.
I was once stung by such an angry swarm of these creatures over a dozen times on the top of my head. I won’t forget it. In a way, gravitational astrophysics has been stung by the missing matter hypothesis. It is sad for observational astronomers to have to rely on theorists for explanations, but that is apparently what must be done, at least for now.
When I explain the above paucity of baryonic matter in galaxies, my students invariably come back with what they think is a solution: “What about black holes, Dr. Petersen?” I sadly have to tell them that stars with three solar masses left after losing 30-90% of their mass in supergiant phases are very rare. They are the exception, not the rule. “What about the supermassive black holes in the centers of galaxies?” they might retort. I would have to tell them that if those made up a substantial portion, the stars in and near the core would be moving much, much faster than we observe. And what of the tell-tale traces of mini-black holes left over from the Big Bang? They have never been observed. After all, Stephen Hawking has pointed out that the smaller a black hole is, the faster it evaporates. Evaporation in a puff of high-temperature black body radiation would be seen in the x-ray, and it hasn't been.
Non-Baryonic Matter and Other Possibilities.
I used to tell my students, somewhat tongue-in-cheek, that the search for dark matter is not very scientific. If it is truly dark we will never see it. So how can we hypothesize something we will never observe? Seriously, though, it may also violate Popper’s modern adaptation of Francis Bacon’s scientific method which says that an hypothesis must be disprovable. How can you ever show something you can’t observe doesn’t exist?
For this reason, the challenge I hurl to the inflationary theorists, and even the missing matter enthusiasts, is to show that their hypothesized form of dark matter, or (even worse) critical density, can be observed or strongly inferred. It looks as though now it can be inferred if we assume there is something substantial in space which is probably not matter, WIMPS, or axions. But before I get into that mystery, let's talk a bit about these exotic forms of non-baryonic matter.
It was thought for quite a while that the number of neutrinos visible from the sun was only about a third of the number predicted by ordinary nuclear physics. However, neutrinos were discovered to have a very small mass, and the theory of the massive neutrino requires that there are three types. The type that was being detected was changing into the other two about two-thirds of the time. A neutrino is a very penetrating particle, difficult to detect, because it passes through matter with a very, very low probability of interacting. However, we have been able to detect a fair number of them utilizing big tanks of chlorine-rich cleaning fluid (perchloroethylene, a relative of the carbon tetrachloride once used for cleaning until it was found to cause liver damage).
Neutrinos interact by means of the weak nuclear force, the weakly-coupling interaction which adjudicates beta decay--the decay of a neutron into a proton plus an electron plus a neutrino. Running that reaction backwards--inverse beta decay--requires rare circumstances, and so neutrinos are only rarely captured. When one is, it changes a neutron into a proton and performs the nuclear alchemy of turning an element into the next one up the periodic table--here, chlorine into argon. This can be determined chemically by examining the cleaning fluid for the expected trace of argon.
One of the hypotheses invoked to explain the missing solar neutrino mystery and to tie together a new unifying theory of physics called Supersymmetry is the WIMP (weakly interacting massive particle) theory. It's amusing that the two most famous candidates for dark matter are MACHOS and WIMPS, the states men fear most as the extremes of male behavior :>). If this Supersymmetry Theory is correct, WIMPS are electrically neutral and really wimpy: betrayed only by their gravity and by exceedingly rare weak-force collisions. Some speculative calculations indicate that WIMP matter would be just enough to provide the critical density needed for the inflationary theory.
A controversial development in the year 2000 was the 'discovery' of a 'WIMP wind'. The sun orbits the center of the galaxy at about 140 miles per second, and the earth's motion around the sun alternately adds to and subtracts from that speed. If WIMP dark matter forms a more or less stationary sea, our changing velocity through it should produce an excess of particle detections in one season over another.
An Italian-Chinese group, headed by Dr. Alessandro Bottino at the University of Turin, made 13 excess detections in a matter of two years.
Are these really WIMP detections? A U.S. group, represented by Berkeley physicist Richard Gaitskell, ran a similar experiment at Stanford. They, too, recorded detections at a consistent rate. However, they are fairly sure that theirs are caused by neutrons knocked loose from the surrounding rock.
To me and others, this reasoning for the existence of something we cannot hope to see is flim-flam. Primack, who put forth the theory, once asked Martin Rees, a sensible, conservative astrophysical theorist, whether he believed in WIMPS. He said, “I give it about a 20% chance.” It's hard to say whether the 'WIMP wind' observations are for real.
If other indications of the Supersymmetry theory pan out, I might consider WIMPS. Strong men like me couldn’t think that wimps even exist, after all.
Pulling my tongue out of my cheek, the same arguments hold for axions. Going out on a theoretical limb, Wilczek and Weinberg postulated a particle named after a laundry bleach alternative. The going theory of the strong interaction suggested that the neutron should have a slight separation of charge--an electric dipole moment--that would make it line up in an electric field. It doesn't, so they invented the axion to explain why. The 'new and improved' version of the axion suggests they are a billion times lighter than the electron and that a billion of them should occupy every cubic inch of space. This would provide a substantial contribution to dark matter.
It’s all very theoretical, and it may be that WIMPS and AXIONS will go the way of ‘quark nuggets’, shadow matter, magnetic monopoles, and other bizarre suggestions. At one point, particle physicists were wandering around beaches, metal detector-looking devices in hand, searching for quark nuggets. No one ever found any.
The quest for exotic dark matter has now been all but abandoned, with the discovery that the universe may achieve critical density by being composed of about 70% funny energy in space. I had to laugh that 13 years before that was discovered I was suggesting exactly that. Of course it was too speculative at the time for my paper to be published in the Astrophysical Journal, but at least they put my abstract in the back of the volume. It is also humorous that my paper was rejected because the reviewer thought space acting like negative mass was an 'impossibility'. Now that is exactly the term being used. The funny energy can contribute to the curvature of the universe in the same way matter does.
The thing some cosmologists don’t get yet, which I suggested a year later in a paper submitted to Nature magazine and rejected for being too speculative, is the possibility that this funny energy in space can be bunched up by galaxies and clusters and act like positive mass, thus providing a substantial component of the galactic and cluster missing mass. This is an idea pioneered by Andrei Sakharov, the great Soviet dissident and physicist. He contended that space when squeezed acts like mass. The question is: if this is true, what is its bulk modulus (measure of compressibility)? And, in addition, how can we reconcile lumps in space with the Special Theory of Relativity? We will discuss these questions later.
Will we ever find something to fill the void? The following poem by Hughes Mearns expresses the frustration of something that is nothing:
“As I was going up the stair
I met a man who wasn’t there.
He wasn’t there again today.
I wish, I wish he’d stay away.”
Chapter 6
Ahab’s Ivory Leg Breaks: Inflation in the Early Universe
“The precipitating manner in which Captain Ahab had quitted the Samuel Enderby of London, had not been unattended with some small violence to his person. He had alighted with such energy on a thwart of his boat that his ivory leg had received a half-splintering shock... he took plain practical procedures: he called the carpenter. And when that functionary had appeared before him, he bade him without further delay set about making a new leg.”
As Ahab was challenged by an untrustworthy leg, cosmology in the ‘80s encountered its challenges too. In 1980, cosmologists were coming to grips with the reality and implications of the Cosmic Microwave Background (CMB). Regions opposite one another in the sky had basically the same intensity of radiation coming toward the earth in opposite directions.
The journey of these 'tired' photons had taken nearly as long as the time since the Big Bang. This meant that they came from regions separated by twice as many light years as the years since the Big Bang. Thus, at the speed the universe was expanding, these regions could never have been in contact at the time the radiation was emitted.
However, they are at the same temperature. How could they have come to equilibrium? They are, in effect, beyond each other’s light speed horizon. This cosmologists call the horizon problem.
Another cosmological realization was that with the luminous matter density being about 2% of critical and the inferred dynamical missing mass in galaxies and clusters adding roughly another 25%, the energy density of the universe was uncomfortably close to critical density. In fact, using the Friedmann equations, it was possible to think back to the beginning and see that the energy density at the Planck time (10 to the minus 43rd power seconds) must have been excruciatingly close to critical. The inferred value was so close that it seemed ludicrous not to guess that it was exactly critical from the beginning--that the universe was always flat. This they called the flatness problem.
This was the 'outer space' connection. Meanwhile, particle physics had gone through an 'inner space' revolution. The exchange W and Z particles, which 'unified' the weak nuclear and electromagnetic forces into one back in an extremely early epoch, had been discovered by Rubbia and his crew. Work was proceeding toward the next higher level of unification, called 'Grand Unification'. Even earlier than that epoch was projected an epoch in which the strong nuclear force unified with the other two. Scientists were looking squarely into the eye of a phase change in space very early on which separated out the two families of particles: leptons (involved in the electroweak force) and quarks (the purveyors of the strong nuclear force).
In addition, particle theorists predicted that kinks in space-time called magnetic monopoles should have been produced in abundance by the Big Bang. Magnetic monopoles are separate magnetic poles: a south without a north or vice versa. Friedmann expansion alone could not spread them out enough to keep them from being numerous enough to detect. Yet none were detected. This one could call the 'magnetic monopole problem'.
The 'inner space' theorists also calculated that quantum contributions to the energy of empty space should be very large unless they were canceled by some unknown renormalization. These calculations were a factor of 10 to the 120th power larger than the present limits on the cosmological constant. Therefore, Λ, the cosmological constant, was assumed to be completely canceled out (zero) by some unknown mechanism. This meant that theorists could not believe that the other 70% of critical density needed for a flat universe could possibly be provided by a cosmological constant. Thus began the search for dark matter, even beyond the confines of that necessary for Newtonian behavior of galaxy rotation and cluster dynamics.
Looking back on this time, it was like a dark night before the first glimmer of an 'outer space--inner space' connection. This was provided by Alan Guth of the Massachusetts Institute of Technology. Because his idea of inflation in the early universe addressed three of these problems, it became the rallying point of cosmologists and particle physicists, even though it required tremendous refinements to become even a workable hypothesis, much less a theory.
Guth was contemplating what Grand Unification theories might have to say about the problems indicated above when he realized that the breaking of the symmetry of the unified quark-lepton phase was like a phase transition from a false vacuum. This false vacuum was not empty but full of energy density--something the Russian Andrei Linde had suggested some months earlier.
Before this phase transition and after the quantum foam period, matter should exist in this strange state, and it does a very strange thing. Its energy scales with the volume, leaving the energy density constant as space grows. This implies that the universe underwent a very short period of rapid exponential expansion; the universe should have been DeSitter at that time.
The odd thing was that this meant the space of the false vacuum actually ended up expanding faster than light speed. This was space itself expanding, not objects moving past each other above light speed, and therefore it was not prohibited by Einstein's Special Theory of Relativity. That gave those now-distant regions of space emitting the CMB (microwave background) a boost which sent them more than double the light-travel distance away from each other, solving the horizon problem.
Also, the tremendously rapid expansion diluted the matter and stretched out the curves in space like great circles on a balloon blown up by a highly pressurized helium tank. The flatness problem was resolved, but only if the density of the universe is presently critical. This meant that a search for the 70% non-dynamical dark matter was on, since almost no one clung to a cosmological constant. However, the resistance to hypothesizing something with such shaky justification was substantial, particularly among observational astronomers.
The monopole problem also was solved by inflation spreading out the many monopoles so thinly that it was likely none would be detected.
I must admit that when I learned of the Inflationary Theory, the search for non-dynamical dark matter made me feel very queasy. Immediately I sided with the necessity for a cosmological constant to make up the deficit, in spite of the fact that only a few considered it even a possibility. I did not like critical dark matter because of the non-disprovability of the hypothesis. How could you ever prove that 70% of critical density was not out there as non-dynamical dark matter as distinct from a cosmological constant? Therefore, the theory, in my mind, did not live up to Karl Popper's addendum to the scientific method, which demanded that every hypothesis be disprovable.
In addition, I had reservations about inflationary theory with a cosmological constant. Though no one else was saying it at the time, because most were denying a non-zero Λ, I claimed in a lecture I gave as a grad student at the University of California at San Diego in 1986 that the need for lambda presented a fine tuning problem much like the closeness to critical density. I figured that it was very mysterious that the dynamical matter density was so close in value to the necessary lambda. This meant a super-humongous fine tuning shortly after the Big Bang. I called it the 'lambda problem', and presented a theory I will discuss later in the book, with some adaptations.
This ‘lambda problem’ is now considered to be a big mystery, with supernova observations indicating it is non-zero with over a 99% probability.
Though Guth’s inflation solved these three major problems in cosmology, it was found to be ultimately unworkable as it stood. To understand why, we need a deeper picture of his scenario. Energy-wise, the false vacuum which made his universe briefly expand exponentially was like a marble in a dent on top of a Mexican sombrero. The marble could roll down to the brim ending inflation, if it could get out of the dent. Classically, it didn’t have enough energy to do that. However, quantum theory says that there is a probability that the marble will tunnel through the circular bump around the dent in a given time.
Another way of thinking about it is that the marble is uncertain about its energy by the uncertainty principle just enough to leap over the lump on the edge after a certain probabilistic waiting period. Statistically, it is like radioactive decay, with a half-life in which the probability is 50% it will tunnel. Before it does, the universe expands with humongous inflation, due to the false vacuum energy.
The ‘marble’-ous universe then rolls into the brim somewhere on its circumference, ending inflation, but the extra potential energy it had at the top is converted into kinetic energy. It oscillates in the brim as that energy is converted into the particles that make up the universe.
Sounds great, but like water boiling when it changes phase, normal space first forms ‘bubbles’ in the false vacuum. To make the present universe, the bubbles must join up in a process of percolation, like in rapidly boiling water, except that it is going from a high temperature phase to a low temperature phase. Careful calculations showed that percolation does not occur.
The idea is that the false vacuum is expanding faster than the speed of light and the bubbles are expanding slower. Though it requires mathematical proof, this means that the space in between the bubbles grows too fast to allow bubbles to combine in large numbers. Without this process, it was thought that there was no way to produce our universe with an inflationary scenario. There was no ‘graceful exit’ to inflation.
Stephen Hawking suggested that if the Mexican Hat potential were sufficiently flat in slope, one bubble could grow large enough to make the universe, because the inflationary period would be longer. However, experts in particle theory had a hard time finding the right 'hat'.
A New Course: Linde, Steinhardt, and Albrecht’s ‘New Inflation’
"Avast!" roared Ahab... "Man the boat! Which way heading?"
"Good God!" cried the English captain... "What's the matter? He was heading east, I think--is your captain crazy?"... In a moment he was standing in the boat's stern, and the Manilla men were springing to their oars.
The general excitement of swatting three 'flies' with one blow--solving the three cosmological problems with inflation--made particle physicists hungry for the hunt for a slow-rolling descent down the 'hat'. Andrei Linde, a cheerful Soviet particle expert, had earlier beaten Guth to the basic idea of inflation. He first suggested that a phase transition might drive the expanding universe, but did not note how it would solve the three 'problems'.
Now he cut to the chase, and found, about the same time as Steinhardt and Albrecht from the University of Pennsylvania, that Coleman and Weinberg had a Grand Unified Theory (GUT) potential that was slow rolling enough to allow the universe time to expand from one large bubble. In addition, it did not require a dented ‘hat’ and quantum tunneling. The ball merely rolled down the hill slowly enough to allow a large bubble to form. It’s sort of like blowing a large soap bubble by proceeding slowly and calmly. This slow-rolling potential became the archetype for most future inflationary theories, though there were other problems with the particular potential they chose and the version of GUT it was based on.
The Coleman-Weinberg potential was natural in a brand of Grand Unified Theory in vogue at the time, which predicted that the proton--the cornerstone of atomic theory–should decay. If protons decay, matter eventually transforms into non-atomic forms, and the chemistry we know and love becomes extinct. Nevertheless, such a transformation was predicted to take an inordinately long time.
At first, experiments utilizing large tanks of material seemed to produce a fair number of decays, but when larger, more reliable systems were used, none were found. Thus the original observations were considered spurious, and the proton lifetime predicted by that particular brand of GUT was ruled out. The search for a new slow-rolling potential was on.
The other objection to the new inflation was more telling, and seemed to legislate against new brands of GUT as well. With everyone madly laboring night and day on the implications of Guth’s paradigm, a conference was held which focused on one problem: would Guth’s inflation provide the right initial conditions for forming stars and galaxies?
The Nuffield Workshop in 1982 brought it all to a head. It had been found by several researchers that inflation produces a scale-free spectrum of primordial lumps. This is a boon for the production of matter as we know it: all sizes would be produced with roughly equal probability, and that is exactly what we see. This was the fourth great victory for inflation. However, the difficult calculations gave an amplitude for these perturbations substantially off from what was thought to be necessary.
Though the COBE satellite a decade later would change the expectations slightly, they were roughly correct, and this is a result that still stands. GUT theories cannot give the right amplitude to primordial density fluctuations. They are off by a whopping factor of 10 to the 5th. This meant that the slow rolling potential needed to be 10 trillion times flatter. In fact, no known particle theory can accommodate the measured amplitude of the fluctuations of the Cosmic Microwave Radiation.
Since inflation was so impelling a paradigm, the ad hoc ‘inflaton field’ was invented, based not on particle theory, as we know it, but on the required density fluctuation amplitude alone. The hope was that the particle theory would come later. The inflaton field replaced the Higgs field of GUT as the reason for inflation.
At least in its simple form, no one has been able to justify the inflaton field in terms of ‘inner space’ particle physics. It remains ad hoc, standing on the basis of the four major cosmological problems which inflation resolves, not on any established particle physics, although extra-dimensional superstring explanations are being explored. This is sad, in a way, for the ‘inner space--outer space’ dream: that particle physics and cosmology should harmonize in some fundamental fashion.
It has become so sad that some physicists have proposed Variable Speed of Light (VSL) theories as an alternative to inflation. We will discuss these later.
The track of the whales–the immortal ocean: Chaotic and Eternal Inflation.
Andrei Linde has raised some of the most fascinating issues relating to the inflation paradigm. Not only was he the first to conceive of the inflation idea, but he also gave birth to two ideas which expand our vista to include the universe beyond our own observable region.
His first great contribution to our understanding of the ‘unobservable universe’ was ‘chaotic inflation’. In this theory, the inflaton field is a random fluctuation with a varying magnitude throughout different regions of space. The shape of the potential causing the ‘roll down’ is like a bowl, and different locations in space, having different values of the field, would be higher or lower up the side of the bowl to start with. Thus the potential energy of different regions is greater or smaller, providing more or less inflation.
In some regions of the universe, like ours, where there is sufficient inflation, the curvature is flat. Where there is less inflation, the universe would look quite different. We just happen to be in a flat portion. This takes away the necessity of providing an initial value for the false vacuum energy--it is randomly chosen by quantum fluctuations.
The problem, of course, with such a speculation is that it is not provable, unless we are able to observe a region of the universe which had different initial conditions. However, the microwave background looks quite uniform, ruling that out.
This raises the interesting question as to whether such theories are worth proposing. They do not pass the Popper ‘disprovability’ test. However, Occam’s razor says that as an hypothesis it is better because it is simpler. Without it, we must come up with a complicated scenario to provide an initial height to the hill. Occam was no ‘Popper’, but he was very sharp.
Superstring theory is another instance of a violation of our usual scientific method, in that we cannot yet make a single prediction (though we are on the verge) using the complex superstring equations, and yet they include all we presently know about particles. Prediction and verification are the essence of the Scientific Method of Francis Bacon. However, we may eventually discover a form of superstring theory or the related M theory which yields predictable results.
Linde’s second great coup is his concept of ‘eternal inflation’, again non-disprovable, but perhaps comforting and completing the inflation picture. If we consider the possibility that a region of false vacuum could appear as a random quantum fluctuation in a normal universe, then normal universes could spawn inflationary ‘babies’. If one takes that process backward, the overall universe could be composed of relatively isolated regular ‘pocket universes’ resulting from regions where inflation has ended. Parts of a false vacuum universe might ‘decay’ first, while the space in between expands rapidly, remaining in the false vacuum. The rapid expansion of space in between regular universes would keep them from joining together, i. e., percolating.
In such a scenario, it is possible to conceive that the overall universe never had a beginning. The regular universes spawn false vacuums which lead to regular universes, ‘worlds without end’. It is also possible that our region of the universe might suddenly become a false vacuum and inflate rapidly–what a way to go!
In addition, it is conceivable that we could create a false vacuum artificially and give birth to a Big Bang. The density of energy necessary for such an artificial creation, however, is way beyond our technological means. We would have to create a density equivalent to 10 to the 80th power grams per cubic centimeter, heated to a temperature of 10 to the 29th power kelvins and then rapidly cooled. Maybe after another million years of progress?
The ‘no beginning’ speculation of Linde has been challenged by the work of Borde and Vilenkin, which says that in a ‘pocket universe’ picture an open universe must have a beginning. That perhaps gives comfort to those who have religious inclinations. One could have an interesting panel discussion as to whether scientific hypotheses are at all correlated to the beliefs of their ‘creators’, and as to whether that biases the research.
Chapter 7
The Chase: The nature of the ‘White Whale’–the Cosmological Constant
The Ship, the Ocean, and the Whale–Matter, Curvature, and the Cosmological Constant
In Einstein's General Theory of Relativity, three different possible effects drive the expansion of the universe: 1) the gravitational effect of the matter in the universe, slowing it down; 2) the effect of the overall curvature of the universe, which can slow it down, speed it up, or do nothing, depending on the sign of what is called the curvature constant and the relative size of the other two effects; and 3) the cosmological constant, speeding up the expansion. It is the interplay and magnitude of these three effects which determines the rate of expansion at any time.
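A minimal sketch of how the three terms compete, written against the standard Friedmann relation in which the square of the expansion rate is the sum of a matter term falling as the cube of the scale factor, a curvature term falling as its square, and a constant lambda term; the density values are illustrative.

    import math
    # The three competing contributions to the expansion rate: matter, curvature, and lambda.
    H0 = 70.0                                   # km/s per megaparsec
    Omega_matter, Omega_lambda = 0.3, 0.7       # illustrative densities in units of critical
    Omega_curvature = 1.0 - Omega_matter - Omega_lambda    # zero here: a flat universe
    def hubble_rate(a):                         # a is the scale factor, a = 1 today
        return H0 * math.sqrt(Omega_matter / a**3 + Omega_curvature / a**2 + Omega_lambda)
    for a in (0.5, 1.0, 2.0):                   # past, present, and future scale factors
        print(a, round(hubble_rate(a), 1))
    # Matter dominates when a is small; the unchanging lambda term wins as the universe grows.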
The amount of matter in the universe is certainly not zero, and we estimate that about 10% of critical density has been found and another 20% (these estimates vary) is inferred to exist as dark matter from the rotation curves of galaxies and motions of galactic clusters as well as in the intergalactic medium.
In the accepted picture (prior to the supernova Ia revolution), matter retards the expanding universe like the gravity of the Earth slows down a ball thrown upward. However, this does not have to be the dominating effect, as was thought by many cosmologists prior to 1998.
It could be that a cosmological constant either overpowers the gravitational retardation with a repulsive force, or did so in the past. At any rate, the influence of the matter term in the cosmological equation is usually assumed to be decreasing, simply because as space expands the matter within it is diluted. However, it is conceivable that this might not be the case if matter is being created, say by a time-varying energy in space. In fact, in the usual inflationary theories the energy invested in the 'false vacuum' actually changes into matter when inflation terminates. This obeys the conservation of energy.
However, for such a conversion to be occurring presently, the Cosmological ‘Constant’ must vary with time, an effect not allowed in the usual way of looking at Einstein’s relativity. (There is a way to rescue Einstein, I have found. But I will consider that later. For now, I am giving you the conventional ‘wisdom’.)
Since we do not observe the laws of Special Relativity to be violated in the present, some say that such a time variation of the cosmological constant could not be happening now. Nevertheless, there might be violations too small to be observed.
Further, the Cosmological Constant is believed to originate in the virtual vacuum states of the particle species. The Heisenberg Uncertainty Principle of quantum theory allows particle-antiparticle pairs to appear and disappear 'without cause', so to speak, their mass energy existing for only a correspondingly brief length of time.
The curvature term in the Friedmann expansion equation, if it is not zero, depends on the inverse square of the scale of space-time. Thus it is sure that at least some time in the future it will become negligible. This is sort of like observing a very large balloon being blown up. Eventually the balloon reaches the size of the earth, for example, and appears to be flat to a dweller on the surface. Although the universe may be more like an infinite saddle shape rather than a balloon in space-time, the same effect holds. Inflationary theorists say that the rapid expansion of the universe during an early inflationary period already has made the curvature term negligible. However, it is possible--but not likely--that we are under continual inflation at a much lower rate and thus it may not be so.
To most astronomers and particle physicists, the key issue is whether the Cosmological Constant is non-zero at present. With the supernova observations reported in 1998, it is likely that it is. The consensus is, however, that ‘life’ on the cosmic ocean may not be as simple as General Relativity suggests, particularly in the early universe where it is suspected that quantum effects are important and perhaps objects like superstrings and membranes called ‘branes’ play a part in the universal evolution. In fact, one could say: ‘In the beginning, the universe blew out its branes’.
The inferred existence of the ‘White Whale’,
the Cosmological Constant
In Moby Dick, those who have never seen the white whale assume that he exists because of wild stories and injured limbs. In physics it has been theoretically inferred that the cosmological constant should be 120 orders of magnitude larger than the observational upper limit on its value.
A vacuum is usually considered to be the ground state, or lowest energy, of a quantum system. For space in this ground state to obey the Special Theory of Relativity, it must look the same to all inertial (unaccelerated) observers, regardless of their motion. This requires that the metric functions representing the curvature of space-time all be constants. This is called the Minkowski Metric. A metric is like a map of the ‘terrain’ of the curvature of space-time, and obedience to Special Relativity is called Lorentz invariance.
A perfectly Lorentz-invariant vacuum acts as a constant cosmological constant, and also behaves as what is called ‘a perfect fluid’. This implies that its pressure is the negative of its energy density. This relationship between energy density and pressure is a peculiar equation of state required by Einstein’s Special Theory.
A negative pressure is like a pressure applied from the inside pushing out–cosmic repulsion. In the force equation for General Relativistic cosmic expansion, pressure is three times as effective as density (it involves ρ + 3p, where ρ is the vacuum density). Thus the force caused by the vacuum is overall repulsive because of the negative sign of pressure.
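As a rough illustration (my own sketch, not the author's calculation, with c set to 1 and densities in energy units), the sign of ρ + 3p is what decides whether a component pulls or pushes:

# Acceleration equation, schematically:  (a_ddot / a) is proportional to -(rho + 3p).
# A component with pressure p = w * rho contributes rho * (1 + 3w).

def drives_acceleration(w):
    """True if a component with equation of state w pushes the expansion."""
    return (1.0 + 3.0 * w) < 0.0     # negative (rho + 3p) means repulsion

for name, w in [("matter", 0.0), ("radiation", 1/3), ("vacuum (Lambda)", -1.0)]:
    verdict = "accelerates" if drives_acceleration(w) else "decelerates"
    print(f"{name:16s} w = {w:+.2f}  ->  {verdict} the expansion")

With p = -ρ for the vacuum, the combination 1 + 3w comes out to -2: its contribution to the acceleration is positive.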
The work done by the pressure is negative, since work is pressure times change in volume. To conserve energy, this negative work must be balanced by the creation of extra vacuum energy as the universe expands--just enough to keep the vacuum energy density constant.
No energy is assumed to flow in or out of the universe. This requirement is that of what is called an ‘adiabatic’ expansion. The cosmological constant is a constant multiple of the constant vacuum energy density.
When we do classical mechanics a la Newton, we generally think of the vacuum as containing no energy. However, we actually could set its potential energy to be anything we want, as long as it is constant. Such an arbitrary constant does not affect the motion, and it is usually chosen for convenience in the problem.
However, in particle physics, quantum theory applies and the vacuum has a zero point energy, a ground state which does affect the motion of the universe (if it is non-zero) by manifesting as a cosmological constant. Quantum Field Theory suggests that vibrations of all different energies manifest as fluctuations creating virtual particle pairs in space.
Think of a very strange grassy field in which grass grows to all possible lengths, all the way to infinity. The length of a blade represents the energy of a zero point oscillation, and no two blades have the same ‘energy’. A specialized ‘lawn mower’ comes along and pulls out by the root all blades over a certain length. If all lengths were allowed, without this selective pruning, the sum of all the lengths--the total energy--would be infinite. Because the trouble comes from the short-wavelength, high-energy end of the spectrum, beyond the ultraviolet, this is called the ‘ultraviolet divergence’.
The energy at which the ‘cutoff’ occurs is called the ‘Planck energy’. This is the point at which field theory is expected to break down due to quantum fluctuations of space-time itself. When the energy of all the little ‘vibrators’ up to that cutoff is added up, the energy density gives a value for the cosmological constant about 120 powers of 10 larger than the observational upper limit for that constant. Something is rotten in Denmark, or should I say on the grassy knoll. This discrepancy between theory and observation is sometimes called ‘the physicist’s cosmological constant problem’.
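A back-of-the-envelope version of that mismatch, with rounded constants (my own arithmetic, not the original calculation): compare the Planck-scale density the naive cutoff suggests with the vacuum density observation actually allows.

import math

# Rough constants (SI units)
hbar = 1.055e-34      # J s
G    = 6.674e-11      # m^3 kg^-1 s^-2
c    = 3.0e8          # m/s
H0   = 2.3e-18        # Hubble constant of ~70 km/s/Mpc, expressed in 1/s

# Planck density: one Planck mass per Planck volume
m_planck = math.sqrt(hbar * c / G)
l_planck = math.sqrt(hbar * G / c**3)
rho_planck = m_planck / l_planck**3            # around 1e96 kg/m^3

# Observed vacuum density: roughly 0.7 of the critical density
rho_crit = 3 * H0**2 / (8 * math.pi * G)       # around 1e-26 kg/m^3
rho_vacuum_obs = 0.7 * rho_crit

orders = math.log10(rho_planck / rho_vacuum_obs)
print(f"Naive cutoff exceeds observation by about 10^{orders:.0f}")
# Within a few powers of ten of the famous 120, depending on exactly
# where one places the cutoff.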
In addition, we know there is some kind of vacuum energy because when we move two conducting plates very close together, they experience a weak but measurable attractive force known as the Casimir effect, predicted by Casimir in 1948 and first measured by Sparnaay in 1958.
The force becomes unmeasurably small when we move them far apart, however, so this doesn’t help us determine the ‘bare’ cosmological constant.
In fact, the observational upper limit for the cosmological constant corresponds to a particle-pair energy of about 1/100th of an electron volt. That is the kinetic energy an electron would gain in being accelerated through a potential difference of 0.01 volts. What a non-functional TV set electrons like that would make--the phosphor wouldn’t even light up as the electrons hit the screen!
No one has a clear idea how to do an accurate calculation of the cosmological constant from first principles. Some fabulous cancellations probably must occur, however, to make it so small. Perhaps a complete quantum gravity theory will provide some insight, and the current candidate is superstring theory, or a more general class called M theory. Other theorists are examining the supersymmetry idea, in which all particles become one at high energies.
Also, if we are to fully accept a variant of Guth’s early inflationary theory, we must conceive of some reasonable way the universe could have had such a large cosmological constant during the brief period of rapid inflation and such a small one now. We will discuss some of the speculative theories being developed later in the book.
The wake of the ‘White Whale’–the effects of a Cosmological Constant
The ‘weight’ of tales about the whale itself and the ‘weight’ of tales of his havoc upon mankind influence the overall dynamics of white whale hunting. In the same way, the relative density of the cosmological constant added to the relative density of matter is thought to determine whether the universe will expand forever, recollapse, or asymptotically approach a state with zero expansion velocity, a static universe.
This is expressed by a very simple equation:
matter density/critical density + cosmological constant density/critical density = 1?
Restated,
Ωtot = Ωm + ΩΛ = 1?     (1)
Or is this sum less than or greater than 1? This formulation characterizes the motion of the universe for all time in terms of two constants which could be presently measurable. It should be noted that a factor representing space-time curvature is assumed to be currently negligible.
There is, however, an older way of characterizing the dynamics which gives a nice feel for the universal motion. It utilizes Ωm and a related constant called the current ‘deceleration parameter’, qo = Ωm/2 - ΩΛ. This constant tells us the universe is decelerating if it is positive and accelerating if it is negative.
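For instance (a quick check, using the values favored later in this chapter), Ωm = 0.3 and ΩΛ = 0.7 give a negative deceleration parameter, which is to say, acceleration:

def deceleration_parameter(omega_m, omega_lambda):
    """q0 = Omega_m / 2 - Omega_Lambda for a matter plus Lambda universe."""
    return 0.5 * omega_m - omega_lambda

q0 = deceleration_parameter(0.3, 0.7)
print(f"q0 = {q0:+.2f}  ->  {'accelerating' if q0 < 0 else 'decelerating'}")
# prints q0 = -0.55 -> accelerating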
Both formulations assume that matter influences the dynamics of the universe in the normal General Relativity way. This dynamics can also be derived using Newton’s Laws alone, as McCrea showed in the 1930s. We introduce you to the simple equation (1) above, because with its aid alone, you can understand the destiny of the universe.
We can plot the expansion histories for various values of ΩΛ and Ωm . This is indicated in the figure below.
Note that the circled area is the region where these parameters most likely lie, to within about two standard deviations: centered on ΩΛ = 0.7 and Ωm = 0.3, or Ωtot = 1.0.
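A minimal way to generate such expansion histories for yourself (a sketch assuming standard Friedmann dynamics and a crude integration step, with illustrative parameter values) is to step the scale factor forward from today:

import math

def expansion_history(omega_m, omega_lambda, H0=70.0, t_end_gyr=30.0, dt_gyr=0.01):
    """Integrate the scale factor a(t) forward from a = 1 (today).

    Uses (H/H0)^2 = Om/a^3 + Ok/a^2 + OL with a simple Euler step;
    fine for a qualitative picture, not for precision work.
    """
    H0_per_gyr = H0 * 1.022e-3          # 1 km/s/Mpc is about 1.022e-3 per Gyr
    omega_k = 1.0 - omega_m - omega_lambda
    a, t, history = 1.0, 0.0, []
    while t < t_end_gyr:
        arg = omega_m / a**3 + omega_k / a**2 + omega_lambda
        if arg <= 0:                     # expansion halts: recollapse begins
            break
        a += a * H0_per_gyr * math.sqrt(arg) * dt_gyr
        t += dt_gyr
        history.append((t, a))
    return history

for om, ol in [(0.3, 0.7), (1.0, 0.0), (0.3, 0.0)]:
    final_t, final_a = expansion_history(om, ol)[-1]
    print(f"Om={om}, OL={ol}:  a = {final_a:6.2f} after {final_t:.0f} Gyr")

The run with a positive ΩΛ pulls away from the others at late times, which is the exponential DeSitter behavior described just below.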
If the amount of matter is constant and smaller than critical (extremely likely), its gravitational influence will be diluted by the ever-growing volume of space. The net effect is that if the value of Λ (or ΩΛ) is positive, the universe will eventually engage in an unbounded exponential expansion. This is asymptotically a DeSitter Universe--with the density of matter so low it is virtually non-existent. This is exactly what present supernovae data seem to indicate for the final fate of the universe.
Loitering and no Big Bang regions seem to be excluded because Ωm ≥ 0.2 and Ωtot ≤1.5. The conclusion is that the universe came from a Big Bang with no loitering and is headed toward a DeSitter exponential expansion, functionally identical to the inflationary early universe, but not identical in magnitude. This still leaves leeway for discovering other forms of dark matter in an abundance ≅0.2 critical density.
How long has the great white whale roamed the earth? We would like to know the age of the universe. Cosmologists express the universal age in terms of what is called ‘The Hubble Time’. This is just TH = 1/Hubble Constant = 1/Ho. The most probable models of expansion we have been discussing allow ages comparable to, or even greater than, TH. With current values for the Hubble constant and limits on globular cluster ages, that puts the age somewhere between 12 and 22 billion years, though observations by the WMAP (Wilkinson Microwave Anisotropy Probe) satellite reported in 2003 indicate that the universe is 13.7 billion years old.
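As a concrete check (my own arithmetic, assuming a flat matter-plus-Λ universe and round numbers), the Hubble time and the corresponding age can be computed from the closed-form result for such a universe:

import math

def hubble_time_gyr(H0):
    """Hubble time 1/H0 in billions of years, with H0 in km/s/Mpc."""
    seconds_per_gyr = 3.156e16
    H0_si = H0 * 1000.0 / 3.086e22       # convert km/s/Mpc to 1/s
    return 1.0 / H0_si / seconds_per_gyr

def age_flat_lcdm_gyr(H0, omega_m):
    """Age of a flat matter plus Lambda universe (standard closed-form result)."""
    omega_l = 1.0 - omega_m
    factor = (2.0 / 3.0) / math.sqrt(omega_l)
    return hubble_time_gyr(H0) * factor * math.asinh(math.sqrt(omega_l / omega_m))

print(f"Hubble time (H0=70): {hubble_time_gyr(70):.1f} Gyr")         # about 14.0
print(f"Age (H0=70, Om=0.3): {age_flat_lcdm_gyr(70, 0.3):.1f} Gyr")  # about 13.5

With H0 near 70 km/s/Mpc and Ωm near 0.3, the age comes out within a few percent of the WMAP figure quoted above.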
However, if matter does not influence dynamics in the usual Newtonian way, as some (Milgrom and VSL theorists) have suggested, it is possible the universe is even older. In fact, an infinite age is obtained for an eternally DeSitter universe.
Thus it is that the cosmological constant changes not only the dynamical expansion of the universe and its fate, but also increases its age. This is a relief, as you will recall, because without it time became extraordinarily cramped.
The dynamical quantities of matter density and vacuum (Λ) density can be found if one knows, for example, the redshift and what is called the ‘luminosity distance’ of an object. As regards type Ia (white dwarf) supernovae, we know the luminosity, wattage, or power output at the source which, along with the amount of that power we intercept with our telescopes (flux), yields luminosity distance. They are ‘standard candles’ within about 6% in brightness of each other.
I tell my beginning astronomy students that flux, distance, and luminosity form a triangle. If we know two of them, we get the third as a bonus. We always can determine flux by measuring photons, or light particles, hitting telescope surface area. Thus luminosity gives distance and vice versa. Distance determined from luminosity is called ‘luminosity distance’.
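In code, the ‘triangle’ is just the inverse-square law. Here is a sketch with made-up numbers (the luminosity and flux below are placeholders, not measured values):

import math

def luminosity_distance(luminosity_watts, flux_w_per_m2):
    """Distance from the inverse-square law: flux = L / (4 pi d^2)."""
    return math.sqrt(luminosity_watts / (4.0 * math.pi * flux_w_per_m2))

# Illustrative numbers only: a bright source of 1e36 W whose flux
# at the telescope is measured to be 1e-14 W per square metre.
d_meters = luminosity_distance(1e36, 1e-14)
print(f"luminosity distance ~ {d_meters / 9.46e15:.2e} light-years")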
Thus, conceivably, measurements of flux and redshift of supernovae type Ia alone could completely determine the dynamics of a matter- and cosmological constant-dominated universe. This is exactly what the supernova studies by Riess’s and Perlmutter’s groups have begun to do.
Another implication of the existence of a cosmological constant is that it should leave its traces on the numbers of objects per unit volume seen at differing epochs. Galaxies are an example of countable objects. If we know that the numbers of objects haven’t changed with time, this too can allow us to determine universal dynamics. The number of objects per unit solid angle and per unit redshift interval is related to what is called the ‘comoving density’. This density depends on the Hubble constant, luminosity distance, and Ωtot.
This is one way we can find out if Ωtot is 1, for example, as the inflationary theory suggests. Without knowing redshifts, but knowing number counts, we can also get both dynamical numbers, Ωm and ΩΛ, and hence the universal dynamics. Loh attempted to narrow down the possibilities in 1986, but the data were woefully inconclusive.
The effect of the cosmological constant on the formation of lumps of matter which result in galaxies is also appreciable. The variations of density which make up galaxies must be fine-tuned to produce what we see. Various models of dark matter and inflation seem to work well in providing the ‘scale free’ lumpiness we observe. However, they have to be adapted to make room for the cosmological constant.
Variations of velocity around lumps of matter (‘peculiar velocities’) also are related to the dynamical values of Ωm and ΩΛ. Thus measuring their variation at different redshifts is also a way of discriminating universal models. This method becomes sensitive to a cosmological constant at redshifts greater than 1.
Perhaps the method of greatest promise is the ability of the gravitational lensing effect to determine the value of the cosmological constant. Gravity bends light, and as the light from a quasar bends around a large galaxy, it sometimes makes for multiple images. The probability of this happening at different redshift (ages) varies in a certain way for a certain cosmological model. Presently, this is our second best way of getting at Λ, and it seems to indicate that it is nonzero.
Former observations of the ‘White Whale’–Truth or Fiction
How do we sort out the truth about the white whale? It is foundational to think through what we reliably know from past observations. This is the Starbuck approach. What do our past observations tell us about the cosmological constant? Even before the supernova discoveries of 1998 there were some hints that the ‘white whale’ of cosmology was alive and well.
One of the greatest hints of the existence of a cosmological constant pre-Perlmutter deals with the age of the universe. Not only did the pinning down of the Hubble constant at a high value by Wendy Freedman and her group yield a universe suspiciously too young for the stars in it without the cosmological constant, but also the ages of other objects signaled trouble for energy-free space.
In 1991, Schneider had measured quasar redshifts as high as 4.89. They are now at a whopping z = 6.4. This indicates an even greater age of the universe, since ‘lookback time’ is a function of redshift, and without the ‘white whale’ there isn’t enough time since the Big Bang to look back to earlier quasar epochs. That large a redshift implies a matter density < 0.01 critical density without a cosmological constant. Since 0.1 had already been observed, there was a definite hint of energy in space.
Recently galaxies older than 13 billion years (in cosmological constant-free time) have been found by UC Santa Cruz researchers headed by Sandy Faber--working with the Keck telescope. These also exceed the Wendy Freedman age of 12 billion years maximum (without Λ).
The ages of the oldest globular stellar clusters given by Allan Sandage, Hubble’s successor at the Mt. Wilson Observatory, are 15-18 billion years. If this range holds up to scrutiny, it lends compelling evidence for a cosmological constant--though less compellingly if we also accept Sandage’s much lower value for the Hubble constant. (Remember that the Hubble time is the inverse of Ho.)
The accuracy of age results is only half as good as the accuracy to which the distance is known. Thus, for example, a Freedman accuracy of 7% for the Hubble constant and distances implies a 14% age uncertainty for ages of globular clusters around somewhat distant galaxies.
Heavy elements, found not in globular clusters but in spiral galaxy disks, provide a very certain lower limit for the universal age. The oldest heavy elements we find in the universe are at least 9.6 billion years old. However, that does leave room for the Freedman results without Λ.
Galaxy counts at various epochs could pin down a value for the cosmological constant, if there weren’t such strong evidence for galactic evolution. In fact, there seems to be a strong anticorrelation between galaxy redshift and surface brightness. Though galactic evolution might eventually be accounted for, it is by no means evident how to do that now.
One might hope that the clustering of galaxies might be disturbed by the cosmic repulsive force, but as Martel and Wasserman showed in 1990, such structures would be insensitive to ΩΛ.
Michael Turner, from the University of Chicago, however, indicated early in the ‘90s that a cosmological constant might be a way of saving the CDM (cold dark matter) theory, which seemed to work well in producing the right-sized lumps for galaxies and clusters. This seems to indicate a matter density of about 0.2 critical density. Since measurements of the microwave background in 1999 seemed to point to a flat universe with Ωtot = 1, the cosmological constant in such a theory must provide about 80% of the total density. Such a result is very close to the independent determinations from recent supernova investigations. However, much refinement of the CDM hypothesis is needed. Turner remained the outstanding champion of a cosmological constant throughout the ‘90s, in the period preceding the supernova discovery of 1998.
From 1995 to 1997 work was done by several groups on the mass, light, x-ray emission, motions, and numbers of galaxies in galactic clusters. The conclusions seemed to center around a value of Ωm = 0.2. Thus, again, the cosmological constant (or some other component) was concluded to provide the other 80% required for a flat, inflationary universe.
At the moment, quasar gravitational lensing is perhaps the best alternative to supernovae studies for determining a cosmological constant. Light from QSO’s bending around massive galaxies creates multiple image effects. The statistics of the numbers of such observations versus redshift is sensitive to ‘funny energy’ in space. In 1996 Kochanek found a limit of vacuum energy density < 0.66 critical density for a flat universe.
We have focused on a cosmological constant, in part, because it is the simplest explanation for the observed dynamics. To have a component behave any other way would violate the General Theory of Relativity. However, it is not out of the realm of possibility that we are missing the right theory. Consequently, there have been a number of time-varying cosmological constant theories, time-varying speed of light (VSL) theories, and superstring or exotic-object hypotheses, some of which still match the ten-parameter model.
How to ‘kill’ the ‘White Whale’–Theory and the Cosmological Constant
“Launched at length upon these almost final waters, and gliding toward the Japanese cruising-ground, the old man’s purpose intensified itself... in his very sleep, his ringing cry ran through the vaulted hull, “Stern all! The White Whale spouts thick blood!”
Quantum theory alone is not sufficient to justify a contribution to an accelerating universe by a well-determined cosmological constant or some other manifestation. The first-pass quantum field theory calculation was off by a factor of 10 to the 120th power. Also, General Relativity alone allows for a cosmological constant, but does not give a way of finding its value. We must navigate into the ‘final waters’ of quantum gravity to find the ‘beast’.
Before the nineties, when few ‘whalers’ believed the constant was non-zero, there were many attempts to prove that the constant was exactly zero. Some of the more widely-touted ones utilized the phenomenon of wormholes.
Formerly called ‘the Einstein-Rosen Bridge’, the wormhole was conceived to be a black hole connected to a white hole by a ‘tube’ out of space and time and back again. A black hole is like a one-way membrane allowing matter and energy to leave the universe and not enter it. A white hole is the theoretical opposite: if you took a movie of a black hole and played it backward, stuff would enter the universe unexpectedly. The white hole is a one way membrane into the universe. Both are singularities in space-time involving a large concentrated mass in Einstein’s General Theory.
Einstein and Rosen showed that wormholes or connecting tubes between different locations on the space-time ‘surface’ were allowed if the universe is closed (or marginally closed–‘flat’). One way of thinking of a closed universe is to think of a sphere like a balloon. Although the dimensionality is lower by one than the actual closed universe, one can picture hollow cup handle-like objects connecting to its surface, and get a picture of the geometry.
Amazingly, quantum gravity theorists asserted that there may be many closed universes connected by these wormhole tubes. Each would have a different set of physical laws determined by quantum probability. Some even asserted that one universe could ‘give birth’ to another–With an umbilical cord ‘wormhole’ connecting the two.
If one imagines large spherical universes like balloons in space with any reasonable number of tubes connecting them, and cup-handles tacked on to individual ‘balloons’ in random places, one gets the idea of the construction Coleman used in 1988 to ‘prove’ that the cosmological constant was zero. This was a heralded achievement in physics, as the physicists--the ‘inner space’ scientists--believed it could only be very large or zero.
Though Coleman’s work now looks to be wrong in several ways, his technique is illustrative of an approach to quantum gravity due to DeWitt and Hawking. They utilized a method of calculation invented by Richard Feynman in the late 1940s: the path integral technique. It is best illustrated for a single free particle. A particle, according to Feynman, may take any possible path in getting from one place to another. Each path has a probability. However, one path is ‘stationary’, or most probable.
In the same way, the universe, DeWitt claimed, could produce any geometry of space-time allowed by the General Theory of Relativity. Each had a probability, and one particular type might be stationary or ‘preferred’.
When Coleman did his calculation, he found that Λ = 0 was preferred. This calculation is very difficult and must be accomplished by making time imaginary. This is Hawking’s Euclidean analytic extension of space-time. Though many have been critical of Coleman’s calculation, it represents a milestone in ways of thinking about quantum gravity.
Another interesting idea is the Anthropic Principle. This idea, simplistically, is that the universe must contrive to have its constants and physical laws such that it will evolve conscious beings to observe it. Physical constants must have ‘friendly values’. Weinberg argued in 1989 that the limit on the cosmological constant necessary for intelligent life was close to the observational limit at the time. This argument, then, is still valid since the discovered value for Λ is below that limit.
There have also been not a few astronomer types who have suggested a time varying cosmological constant, Philip Peebles, Geoffrey Burbidge, and myself among them. Primordial inflationary theories have such an epoch as the universe slow rolls down the potential hill to a more sedate expansion. However, many of these theories run into conflict with cosmological nucleo-synthesis (making up of the elements) and observations of the cosmic microwave background. There have been arguments to attempt to circumvent these difficulties, however.
Supersymmetry and superstring theories--our best hopes at a true quantum gravity--may someday confirm the existence of a cosmological constant and its size, or lead us to an understanding of some other component of the ‘funny energy’ which makes the universe presently accelerate its expansion. Much work is underway in these directions.
One can classify the different alternatives to a cosmological constant by using a simple ratio which expresses the equation of state of that component of the universe. The constant w = P/ρ expresses the proportionality of the pressure, P, of a universal dynamical component to its energy density, ρ.
Here are some examples of that constant for different types of universal ‘stuff’. You might say that w represents the behavior of the material in universal expansion. Matter density has w = 0. This just means that matter (baryons, neutrinos, and dark matter) has no pressure. For radiation (light) w = 1/3. The curvature of the universe has w = -1/3. We expect that neither of these last two contributes significantly to universal expansion at present.
There are, however, other less conventional possibilities. The cosmological constant weighs in with w = -1. You’ll recall that it creates a negative pressure on the universe--it pushes out. Vilenkin, Spergel, and Pen calculated that certain types of networks of cosmic strings would have an effective w = -1/3. Cosmic strings are like stringy fractures in the fabric of space-time coming from certain string theories in quantum gravity. Domain walls--the edges of a pocket universe--if they come in a network would have w = -2/3, according to earlier work by Vilenkin. There is even speculation about the existence of variable mass particles (VAMPs), which would redshift at a slower rate than ordinary matter. They would have a w < 0. The unknown accelerating component was dubbed ‘quintessence’ by Caldwell et al. in 1998, perhaps a better term than ‘funny energy’ or the recent ‘dark energy’, since it may not be energy in the ordinary sense. However, the word ‘quintessence’ has now become more specifically assigned to a time-varying component of spatial energy density, a time-varying cosmological ‘constant’.
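The practical consequence of w is how each component’s density dilutes as space expands: the density scales as the scale factor raised to the power -3(1 + w). A short sketch (my own illustration, using the w values just listed):

# Density scaling with scale factor a for an equation of state parameter w:
#   rho(a) = rho_today * a ** (-3 * (1 + w))
components = {
    "matter (w=0)":          0.0,    # dilutes as 1/a^3
    "radiation (w=1/3)":     1/3,    # dilutes as 1/a^4
    "curvature (w=-1/3)":   -1/3,    # dilutes as 1/a^2
    "domain walls (w=-2/3)": -2/3,   # dilutes as 1/a
    "Lambda (w=-1)":        -1.0,    # does not dilute at all
}

for a in (1.0, 2.0, 10.0):
    print(f"\nscale factor a = {a}")
    for name, w in components.items():
        print(f"  {name:22s} rho/rho_today = {a ** (-3 * (1 + w)):8.4f}")

The more negative w is, the more slowly that component fades, which is why anything with w near -1 eventually takes over the expansion.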
Garnavich and his team published their analysis of the recent supernova data in December of 1998 and concluded that the quintessential w is bounded below by -1 and above by about -0.4, and the bounds have been narrowed more since. This weakly rules out a lot of possibilities, but more work needs to be done to narrow the scope.
Many researchers will probably go with the cosmological constant, as it can be represented within the General Theory of Relativity itself without recourse to the sketchier quantum gravity theories. This is with the reservation that several types of ‘stuff’ may contribute.
We now approach our first direct confrontation with the ‘whale’.
Chapter 8
Starbuck almost kills Ahab: the Universal Expansion is Accelerating
“...slowly, stealthily, and half sideways looking, he placed the loaded musket’s end against the door.
‘...if I wake thee not to death, old man, who can tell to what unsounded deeps Starbuck’s body this day may sink, with all the crew...’ ‘Stern all! Oh, Moby Dick, I clutch thy heart at last!’ Such were the sounds that now came hurtling from out the old man’s tormented sleep... Starbuck... placed the death tube in its rack, and left the place.”
Ahab will not give up: Supernova Observations and Cosmic Repulsion
For a time reason (Starbuck) seemed poised to overcome obsession (Ahab). There was a long period when it was unreasonable to pursue a cosmological constant or any type of ‘quintessence’ in space. However, haunting peripheral evidence in the ‘90s, like mutterings from the subconscious, stopped the ‘reasonable’ rejection of ‘Einstein’s greatest blunder’.
First, the extra matter density needed to satisfy inflationary demands was not found. In fact only about 30% of critical density was accounted for indirectly--in terms of its influence on the dynamics of galaxies, clusters, and superclusters. The only possibility was some component of dark matter between the superclusters, and it became apparent that such a component was extremely unlikely. Modeling the clumping of matter with cold dark matter or even a mix with some hot dark matter did not seem compatible with a higher matter density in the voids.
Second, gravitational lensing was yielding a weak indication of a cosmological constant somewhere between 0.2 and 0.7 of critical density.
At that juncture, even the coordinator of inflationary enthusiasts, Michael Turner from the University of Chicago, was beginning to include a cosmological constant in his thinking, leaving open the possibility that another dark matter component’s traces might still be hidden.
Meanwhile, paralleling these developments was a great stream of advance in technology. It would lead to accurate determinations of the nature of Hubble’s law out to a distance adequate to determine how the universe had expanded in the past.
Richard Muller from Lawrence Berkeley Laboratory (LBL) worked with Luis Alvarez in discovering the reason for the extinction of the dinosaurs--a cataclysmic impact by an asteroidal body. In addition, he set out to find the cause of the apparent 26-million-year periodic extinction of earth species. He suspected the cause to be a brown dwarf or ‘failed’ star, called Nemesis, with a 26-million-year orbital period. He hypothesized that once every orbit it swung through the outer reaches of the solar system on its elongated elliptical path, catapulting debris from the Oort cloud and asteroid belt toward our home.
To see if he could find ‘Nemesis’, he needed to scan large regions of sky intelligently. To do this, he masterminded robotic control of a telescope, and software analysis of its data. The program was able to compare a given region of the sky at two widely separated times, to see if a moving ‘star’ could be found.
Muller soon realized that the technique could easily be applied to finding exploding stars--supernovae--in distant galaxies. He could store observations of many galaxies at one time, and then about a month later compare observations of the same patches of sky. If a bright object appeared, it could be investigated as a possible supernova. In fact, the light from one observation could be subtracted from the other, leaving only the bright exploding stars. The spectra and other characteristics of these potential supernovae then had to be carefully analyzed to be sure they weren’t other objects.
In the mid ‘80s, Saul Perlmutter joined Muller’s supernova group at LBL. Perlmutter, as an undergrad at Harvard, chose physics over philosophy, and went on to study particle physics at Berkeley. He was later to head up the ‘Supernova Cosmology Project’, which first gathered a preponderance of evidence to pin down the universal acceleration. Before he joined, the group was already successfully identifying supernovae in galaxies at moderate distances.
Several dozen supernovae were identified by 1989. However, these were not at a great enough distance to find out whether the universe was accelerating or decelerating. Besides, the issue of distances to supernovae was not well resolved at the time. However, a certain type of supernova--supernova Ia--was identified as a possible candidate for being a ‘standard candle’.
Recognizing a ‘standard candle’ is like knowing a lightbulb is always 100 watts, no matter how far away it is. Light falls off as the square of the distance, so one could find the distance to the lightbulb by using a light-meter to measure the light received at the observer.
Similarly, since supernovae Ia all seemed to have nearly the same source wattage (luminosity) at the peak of their explosion, astronomers agreed they should be able to use them as standard candles, or distance markers, judging by how bright they appeared in their telescopes.
A star goes supernova Ia when matter from a red giant companion is drawn onto the surface of a white dwarf. When the total mass of the dwarf approaches about 1.4 times the mass of the sun, a peculiar thing happens. The dwarf gets hot enough to start burning the carbon of which it is made, rapidly fusing it to heavier elements, but at first only in a small portion of the star.
From then on, it’s sort of like an atom bomb triggering a hydrogen bomb explosion, because once the burning starts, it spreads to the whole star. The total burning of a white dwarf of 1.4 solar masses will produce an explosion of a predictable peak luminosity or brightness.
To understand a supernova, one must not only look at its spectrum but at its ‘light curve’. The ‘light curve’ is a plot of received light flux from the star versus time. A supernova typically brightens to a peak luminosity in a few days, and then gradually tapers down over a period of several weeks.
It’s much like climbing a mountain with a steep incline on the near side, but a more gradual descent on the far side.
The problem with identifying Ia’s as standard candles, however, was that the ‘mountains’ of apparent brightness were not always of exactly the same slopes, and therefore might not have the same peak luminosities. However, in 1993, an American astronomer named Mark Phillips, working in Chile, showed that their peak luminosities were correlated with the rate of descent of flux. That’s like saying: the more gradual the descent, the taller the mountain. And, by noting how fast the flux from the exploding star trailed off, one could find the value of its standard-candle luminosity at the peak.
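Schematically, and with purely illustrative coefficients (the real calibration is the empirical content of Phillips’ work, which I am not reproducing here), the correction looks like this:

def corrected_peak_magnitude(decline_rate_15d, a=-19.3, b=0.8):
    """Toy version of a Phillips-style light-curve correction.

    decline_rate_15d: how many magnitudes the supernova fades in the first
    15 days after peak (often written Delta m_15).
    a, b: placeholder calibration constants, illustrative only.
    Faster decline (larger Delta m_15) implies a dimmer peak.
    """
    reference_decline = 1.1        # a 'typical' decline rate, also illustrative
    return a + b * (decline_rate_15d - reference_decline)

for dm15 in (0.9, 1.1, 1.5):
    print(f"Delta m_15 = {dm15}: corrected peak magnitude ~ {corrected_peak_magnitude(dm15):.2f}")

(Recall that in the astronomers’ magnitude system, a less negative number means a dimmer object.)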
This was very exciting, because it meant that hope was raised for finding precise distances to more and more distant galaxies, as telescopes improved their reach. In 1988, Danish astronomers found the first distant supernova Ia at a redshift of z = 0.32.
In the ‘90s, more were being discovered by Perlmutter’s team with each observation. The Hubble Space Telescope was in the sky, and it was gradually realized that supernova standard candles were a key to the unfoldment of a deeper knowledge of cosmology.
As supernovae Ia were discovered at greater and greater distances, astronomers could use their luminosities to get their distances, and the redshifts of spectral lines to get the recession velocities of the associated galaxies. An extension of the Hubble law plot seemed feasible. With much difficulty, Perlmutter’s group and a second major supernova group were able to get Hubble telescope time.
One of the reasons the large-redshift supernovae observed are considered to be truly ancient is ‘time dilation’. The time to brighten and fade is longer the larger the redshift.
The expansion of the universe is the cause. Photons emitted later in a single explosion must travel farther than the earlier ones, so the interval from light-curve rise to fall appears stretched, and the stretch is greater for more distant epochs. The time dilation factor is 1 + z.
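In numbers (a trivial sketch): a light curve that takes 20 days to rise and fall in the supernova’s own frame appears stretched by exactly the factor 1 + z.

def observed_duration(rest_frame_days, z):
    """Cosmological time dilation: observed duration = rest duration * (1 + z)."""
    return rest_frame_days * (1.0 + z)

for z in (0.0, 0.5, 1.0):
    print(f"z = {z}: a 20-day light curve appears to last {observed_duration(20, z):.0f} days")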
The Pequod and the Rachel–The Two Supernova Teams Provide the First Equation
The Rachel, another whaler, had battled the white whale. Its captain lost his son in the encounter. They met at sea and Ahab learned of the encounter.
“The story told, the stranger captain immediately went on to reveal his object in boarding the Pequod. He desired that ship to unite with his own in the search; by sailing over the sea some four or five miles apart, on parallel lines, and so sweeping a double horizon, so to speak.”
It became a competition and cooperation between the two coasts of the USA, one group centered (at first) on Harvard and the other centered on UC Berkeley. The Harvard group was spearheaded by Robert Kirshner (a student of Fritz Zwicky--the cluster dark matter man). He recognized early the significance of supernovae Ia: that they could lead us out of an ‘age of ignorance’ in cosmology. He collaborated with Brian Schmidt, who soon moved to Australia to be with his wife.
Schmidt became titular head of the ‘Harvard-spawned’ endeavor. They called themselves the ‘High-z Supernova Search Team’. This team became distinguished by the accomplishments of its members, many of whom started at Harvard. Particularly notable for their contributions were Adam Riess and Alex Filippenko. Riess found a way to stretch and fit supernova Ia light curves so that standard peak luminosities could be extracted. Filippenko had become a renowned observer, teacher, and pioneer supernova researcher. Both of these ‘Harvard’ boys moved to Berkeley, blurring the lines bounding the two groups geographically.
Here is a rundown on the two groups, listing some of the members noted in their now famous 1998 publications:
‘Supernova Cosmology Project’ (32 members), ‘Berkeley centered’: Richard Muller (originator of the observing computer program and robotics), Saul Perlmutter (head of the group), Warrick Couch (hardware--CCD expert), Gerson Goldhaber (examined the time-stretching effect), C. R. Pennypacker, and a host of physicists from the Lawrence Berkeley Lab and worldwide. Alex Filippenko also published with this group in the 1998 revelation of 42 high-z supernovae Ia (he participated in both groups).
‘High-Z Supernova Search Team’ (24 members) ‘Harvard Initiated’ : Robert Kirshner (initiator), Brian Schmidt (head), Pete Garnavich (Harvard-Smithsonian), Mark Phillips (standard candle pioneer), Adam Riess (stretch fit for the light curves), Alexei Filippenko (again), Alan Dressler, and a worldwide team. Their first important release in 1998 included the data for 14 high-z Supernovae Ia.
What did the combined 1998 data for 56 high-z supernova Ia’s show? To understand this, we must understand the goals of cosmology in 1998. By the way, the current number is well above 100, with quite a few at a much greater distance than the original batch. The most distant supernovae are now placed at nearly 12 billion light years distance.
The fondest hope of the researchers was to reach closer to the dream of pinning down Ωm, the matter density for the universe. Along with this was the enigma of ΩΛ, related to the cosmological constant. Was it zero, as physicists and most astronomers had asserted for decades? If not, what could it be? There was a surprisingly precise answer to both these questions, though we will see it could not be fully accomplished by supernova observations alone. It required detailed observations of the lumps in the Cosmic Background Radiation (CBR). We will discuss the impact of recent observations of the CBR after we address the ‘Supernova Revolution’.
The supernova Ia observations of relatively close-range detections in the late nineties were shaping up a value for the Hubble constant, H0. The procedure was simple in concept, challenging in practice. Redshifts of spectral lines provided recession velocities for galaxies containing relatively nearby SN Ia’s, and peak-luminosity standard candles provided distances. A plot of recession velocity versus distance for nearby supernovae yields a straight line. The slope of the line is the Hubble constant. It took some time for the supernova Hubble constant to reasonably agree with that derived from the Cepheid variable data, but by 1998, they both converged to a range of about 65-70 km/s/Mpc.
The Hubble constant gives the current rate of expansion of the universe, but reveals nothing of how the universe was expanding in the past. Doing that with supernovae required extending the observations to larger ‘cosmological’ redshifts of 0.3-0.7. This was the great work the two supernova groups accomplished. The mass of data, gathered and analyzed by two separate scientific groups in slightly different ways, was very convincing. Although the error bars in luminosity-determined distance were large for each supernova, larger amounts of data shrank the uncertainty in the overall determination.
Also significant were the corrections that were used to gradually shrink the uncertainty of individual measurements. Trumpler, in the 1930s, had demonstrated a correlation between the reddening of light passing through dust and the overall extinction of that light. Thus ordinary dust could be mathematically accommodated as a factor changing the apparent brightness of supernovae.
Some astronomers wonder, however, if there might be a form of large dust particles called ‘grey dust’ which would make the results far from reliable. Grey dust, if it exists, would absorb without reddening. Therefore its effect would not be easily corrected for. Nevertheless, recent studies make it clear that this is a very unlikely possibility.
After six years--as of this writing--the data stand firm, and observations are being extended to many more supernovae. The stretch factor and light-curve form corrections from Mark Phillips and Adam Riess yielded almost identical light curves for all supernovae Ia. The question is whether supernovae at low z are the same in content, and thus luminosity, as those of mid-range z. Could white dwarfs have had lower metallicities or different structures in the past, and thus explode with a different bang?
It’s sort of like comparing how they made firecrackers in my youth to how they are made today. They sound the same, but were they made the same? This question also seems, for supernovae Ia, to be answered in the almost certain affirmative.
Just as Captain Ahab questioned the observations of passing ships that had seen the white whale, so are these revolutionary conclusions being questioned. However, time and alternate lines of discovery seem to confirm the uncanny picture they present. Additional information from gravitational lensing and the Cosmic Background Radiation (CBR) confirms the existence of a cosmological constant-like component to space itself.
What exactly did the supernova data give us? High-z supernova velocities and distances show that the Hubble curve deviates from a straight line at about z = 0.3 and beyond. If it had maintained a straight line at great distances, there would be no effect related to the matter in the universe. It was expected by many astronomers that the curve of v versus d should have bent upward, proving the decelerating influence of the gravity of the matter in the universe.
However, the Hubble curve (in the v versus d form) turned decisively down instead, indicating that the universe has accelerated. We know this was true for the brief instant of rapid expansion in the inflationary theory. But to assert that this was true for billions of years after that was revolutionary. The data was so good, in fact, that they asserted it to be 99% certain the universe will expand forever, in the simplest model. Thus were wiped out the dreams of those who believed the cosmos underwent a cyclic expansion and collapse.
The balance of matter and cosmological constant ‘funny energy’ was also narrowed in scope by these observations. The way the Hubble curve turns down is related to the difference of Ωm and ΩΛ. In fact, the combined supernova data indicate that Ωm - ΩΛ = -0.4 (to within about 0.1). This is like having an equation in algebra, where x - y = 5. One immediately craves another independent equation like x + y = 8 to determine what x and y are.
Ahab is forever Ahab–The Cosmic Background Provides the Second Equation
“There she blows!–there she blows! A hump like a snow-hill! It is Moby Dick!”
The big news of the year 2000 in astronomy was a clear picture of a hump that has given us an accurate value for the total density of the universe including cosmological constant. (In astronomy, ‘accurate’ usually means less than 10% uncertainty.) We have sighted the ‘white whale’ in the year 2000. Its ‘hump’ is a lump typically found in the latest detailed measurements of the Cosmic Microwave Background map of the sky.
I had a friend in New York City in the ‘70s, a rather large man, who fancied himself a sea captain, though he was not. One day we went to the somewhat crowded beach at Coney Island, he sporting a captain’s cap and me my Neil Young folk-singer haircut. He instructed me to walk a good distance on the beach away from him and be ready to respond with a certain phrase to his predetermined query. I walked several hundred yards distance and waited for his shout.
“Have you seen the white whale?” he bellowed.
I waited a couple of seconds and replied in my loudest, most sea-worthy voice.
“Yes! We’ve slain him yesterday.”
The cosmic microwave background with its second harpoon has almost certainly ‘slain’ the white whale. What have our balloon-borne observations shown? The universe is flat. It’s sort of anticlimactic, like a beer drinker’s pronunciation on opening a can left in the refrigerator for years. The beer is flat. ‘I could have told you so’, you say. The ‘inner space’ contingent of particle physicists who live and breathe inflation could have told us the universe was flat.
Inflation, at least the type invented by Guth, and perpetuated by generations of physicists gone astronomers, requires that the universe currently be flat. This is the other equation we need to get matter and space densities separately: Ωm + ΩΛ = 1. However, before the observations reported by the MAXIMA and BOOMERanG groups April 27 and May 10, 2000, it had not been nearly as certain that they were right.
Let’s assume that these observations are correct, use them to solve our set of two equations, two unknowns, and then talk about the reasons we strongly believe in flatness.
Let x = Ωm , and y = ΩΛ. The two equations then become:
x - y = -0.4 (Supernova Ia observations),
x + y = 1.0 (CMB observations).
You may check that the solutions are x = 0.3 and y = 0.7. Translated, Ωm = 0.3 and ΩΛ = 0.7 (within about 0.1, it turns out). That is, matter currently makes up about 30% of critical density and ‘funny energy’ 70%. Note that these are the values of these parameters for the current epoch. Earlier in the universe, both the relative matter density and relative cosmological constant density were different than they are today. The matter density was much larger and the relative space density much smaller (that is, if we go for the standard interpretation of a cosmological constant).
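The same solution in a few lines of Python (a sketch; the quoted uncertainties of about 0.1 are not propagated here):

import numpy as np

# Two equations in two unknowns, x = Omega_m and y = Omega_Lambda:
#   x - y = -0.4   (supernova Ia constraint)
#   x + y =  1.0   (CMB flatness constraint)
A = np.array([[1.0, -1.0],
              [1.0,  1.0]])
b = np.array([-0.4, 1.0])

omega_m, omega_lambda = np.linalg.solve(A, b)
print(f"Omega_m = {omega_m:.1f}, Omega_Lambda = {omega_lambda:.1f}")   # 0.3 and 0.7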
This is a subtle matter which may be confusing for beginning cosmology students. Remember that the omegas (Ω with a subscript) are the true densities divided by the critical density at the epoch. That critical density may vary with time. What makes the universe flat now is most probably not what made it flat in the past. This makes ΩΛ vary with time, though the cosmological constant density does not.
Now, as to the reasons the Cosmic Background data indicate a flat universe. The sum of the matter and energy in the universe is what decides curvature in Einstein’s General Theory of relativity. Curvature determines the movement of matter. It is only natural then that Ωm + ΩΛ, the sum of the matter and energy components that contribute to curvature, can be determined by looking at the lumpiness of the microwave background.
About 380,000 years after the big bang, the universe had finally cooled and thinned enough that light was free to propagate without constantly interacting with the free protons and electrons. Before this time, when an electron and proton would combine to make a hydrogen atom, a photon would come along and liberate the electron. The end of this epoch is called the recombination era, then, because the electrons were finally free to combine with protons to make lasting, electrically neutral hydrogen.
In the process of electrons falling into the electric grasp of protons and making hydrogen, ultraviolet photons with up to 13.6 electron volts of energy were released. The stretching of space in the expanding universe also expanded the wavelengths of these photons until they are currently microwaves with a much lower energy.
Since the moment of recombination was the last time the light in the universe was in equilibrium, having a common temperature and a Planck, or black body, curve, that black body distribution of light was later shifted to a much lower temperature. The Wien peak of the curve is theoretically and observationally associated with a black body at 2.7 K.
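A quick check of that claim, using rounded constants (my own arithmetic): Wien’s law, peak wavelength = b/T, puts the peak of a 2.7 K black body at about a millimetre, squarely in the microwave band.

WIEN_CONSTANT = 2.898e-3   # metre kelvin

def wien_peak_wavelength(temperature_k):
    """Wavelength of peak emission for a black body at the given temperature."""
    return WIEN_CONSTANT / temperature_k

peak = wien_peak_wavelength(2.7)
print(f"Peak wavelength at 2.7 K: {peak * 1000:.2f} mm")   # about 1.07 mm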
The microwave background itself was discovered by Penzias and Wilson at Bell Labs in the mid-'60s. Then, in 1992, the Cosmic Background Explorer (COBE) team, whose anisotropy instrument was led by George Smoot of Berkeley, was able to see that the microwave background was lumpy, varying in temperature by about 1 part in 100,000 in lumps of various sizes. This seemed to confirm a lumpiness at the time of recombination which could produce the matter structures we see today.
This lumpiness was good enough to see a spectrum of sizes from the end of the inflationary period much prior to Recombination. However, the detail was not good enough to see even smaller-sized lumps which could determine the curvature of the universe, and which had occurred as a natural result of Recombination.
The discovery of the largest of these lumps in temperature variation, like the appearance of the hump of Moby Dick when he surfaced, created quite a stir among the crew of the ship of cosmology. This lump relates to the size of the ‘last scattering volume’ at the time of recombination.
Before recombination, light generally bounced off the protons and electrons, keeping the whole gas in equilibrium at a common temperature. The matter and radiation thus maintained homogeneity, or smoothness, but only within a volume of the size that sound could travel across in the time from the big bang to the recombination era. Disturbances could not communicate with regions larger than that.
How big in angular size this original lump is now depends on the curvature of space, Ωm + ΩΛ, because the curvature determines the rate of expansion of space and thus the Microwave Background lumps.
A lump is measured by a small local increase of temperature. The largest variations are tiny: not over 50 millionths of a Kelvin degree. Consequently, early measurements had large error bars, because the lumps were very small.
However, the measurements released in mid-2000 had improved enough to definitely see a big ‘lump’ the size of the expanded last-scattering region. The lumps were just the right angular size to confirm, within about 10%, that Ωm + ΩΛ = 1. The universe is flat. In 2003, WMAP narrowed that uncertainty to about 2%.
How does one make generalizations about the sizes of lumps, when we see so many of different sizes? The answer is a process called multipole expansion. Each multipole is labeled by a number, L = 1, 2, 3, 4, ... (I use a capital L so it can be understood in print, but researchers use a lower-case letter).
In the 1800s, the mathematician Legendre pioneered the angular decomposition of functions, utilizing the Legendre polynomials. In our case, we wish to analyze the surface of a sphere (or part of one) over which the variations of CMB temperature are plotted. Each increase in L divides each previous sector of the sphere in two.
L = 1 is called the monopole term. It represents a lump the size of the whole sky.
L = 2 is the dipole term. Divide the sphere in two, and find the temperature difference between the two hemispheres. The COBE satellite was able to find that we are moving in the direction of one hemisphere at a rate of about 550 km/s, because of a slight temperature difference between one direction and the opposite. This is our motion, riding along with the Milky Way, relative to the ‘comoving’ expanding universe.
L = 3 is the quadrupole term. The sphere is divided in 4 pieces.
L = 4 is the octopole term, dividing the sphere in 8 pieces.
The number of pieces is thus 2 raised to the power L - 1.
The key term for the biggest lump was calculated to be about L = 210, if the universe is flat. This corresponds to a lump about 9/10ths of a degree across. The BOOMERanG group found it to be L = 200, and the MAXIMA balloon-borne experiment was in close agreement at L = 220. This produces a combined uncertainty in Ωm + ΩΛ = 1 of about ±9%. This uncertainty was substantially narrowed, to about 2%, by the WMAP satellite early in 2003. In a few years, the Planck satellite may narrow it more.
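Roughly speaking (a sketch using the simple small-angle relation and ignoring projection subtleties), the angular scale that corresponds to a multipole number is about 180 degrees divided by L:

def angular_scale_degrees(multipole):
    """Approximate angular size corresponding to a multipole number."""
    return 180.0 / multipole

for L in (200, 210, 220):
    print(f"L = {L}: about {angular_scale_degrees(L):.2f} degrees across")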
The Planck satellite, among other things, will survey the microwave background with microkelvin temperature accuracy. Due to be launched later this decade, it will be placed at the outer (second) Lagrangian point of the earth-sun system. That is, out beyond the earth on the side away from the sun, there is a point where the combined gravity of the earth and sun keeps a properly placed object orbiting in step with the earth. There the satellite can achieve greater temperature stability than WMAP.
The height of the first lump in the microwave background temperature variations can directly give Ωm, the density of matter in the universe. I have not seen figures on this determination, and it remained somewhat ill-determined until the WMAP (Wilkinson Microwave Anisotropy Probe) satellite did its work while orbiting the sun. This means we now have fairly precise values (within about 3%) of Ωm and ΩΛ, the matter and dark energy components of the universe. These values harmonize well with the supernova data and are mutually confirming. Thank you, Penzias and Wilson!
There was one ‘fly in the ointment’, however. Smaller peaks at predicted values of higher L are expected if we believe the inflationary theorists. For a while, it looked as though BOOMERanG and Maxima were not finding them. Nevertheless, the new WMAP satellite did find them, confirming inflation with a Cold Dark Matter model plus funny energy in space. Michael Turner should be very happy.
It may still be that the cosmological ‘constant’ varies with time. This is the quintessence idea and most cosmologists believe it requires abandoning Einstein’s Theory of General Relativity. However, as indicated in the next chapter, there is at least one outside possibility that does not.
Chapter 9
The Hidden Terrors of Moby Dick–Quintessence
“Moby Dick moved on, still withholding from sight the full terrors of his submerged trunk, entirely hiding the wretched hideousness of his jaw. But soon the fore part of him slowly rose from the water... “
Though the present data work harmoniously with the idea of a cosmological constant, life for future cosmologists may not be that simple. Pinning down the matter density and the space energy density at present doesn’t necessarily reveal what it was in the distant past. We have only seen part of the whale, so to speak. Though it may be the main goal of cosmology, the dynamics of the universe from beginning to end has not yet been revealed.
One possibility is that there are more than two contributions to universal curvature. Perhaps we will need matter, the cosmological constant, and something else to explain future supernova data. However, adding extra components to the equation is hardly in obedience to Occam’s razor--the maxim that ‘simpler is better’.
The search for SN Ia objects at higher z is going on as I write. The announcements prior to December 1998 included supernovae only up to about z = 0.7. However, the record holder in 2000 was Albinoni at z = 1.2, exploding at an amazing 10 Billion years in the past. By the way, the record holder for distance for all types of objects as of April 2000 was a quasar with redshift of 5.8, which is seen gushing at 13-14 billion years ago (in a cosmological constant model).
How will we know when we have the right understanding of the movement of the cosmos? When we have a fundamental theory which describes the velocities we see at all distances, and the ripples we see in the CMB. The simplest hypothesis is that there is a single component additional to matter. Cosmologists usually assume a simple equation of state: that the pressure of the component is proportional to density. We call the proportionality constant w, the ‘equation of state parameter’.
As indicated in a previous chapter, different types of ‘stuff’ have different w, and w may even depend on time.
Over 10 years ago, the term ‘quintessence’ was applied to dark matter in clusters and galaxies. Lawrence Krauss from Case Western Reserve was thinking of the 5 elements of the Greeks: earth, air, fire, water, and the fifth–ether, or quintessence.
However, his appellation didn’t stick.
The current general consensus among cosmologists is the following:
1. Earth--baryonic or nuclear matter, made of protons and neutrons (baryons).
2. Air--hot dark matter, smaller particles with large velocities.
3. Water--cold dark matter of the non-baryonic type. Slower moving particles, which are not baryons.
4. Fire--photons or light.
5. Quintessence--anything else. At first this term was applied to the cosmological constant as well, but Caldwell and Steinhardt (author of the ‘new inflation’) from the University of Pennsylvania have limited the word to a time-varying cosmological component. This component is not any of the above, nor is it a cosmological constant.
The Harpoon is not yet forged–Time variable cosmological constant?
“Hast seen the White Whale?”
“Look!” replied the hollow-cheeked captain from his taffrail; and with his trumpet he pointed to the wreck.
“Hast killed him?”
“The harpoon is not yet forged that will ever do that... “
We are looking for a fundamental theory to describe the motion of the universe from beginning to end. Most astronomers, but not all, accept that there was probably a brief period of inflation, a rapid expansion for a fraction of a second, which ended. However, as we have pointed out, we do not have a fundamental theory which exactly fits the numbers we need to produce not only inflation, but the lumpiness we need to produce the matter aggregations we see today as a result of it.
For lack of a deeper theory, particle physicists invoke the unexplained ‘inflaton field’. The universal marble had to roll very slowly down the hill--too slowly for any of our current particle theories to produce such a roll.
We also know a great deal about the dynamics of our universe as far back as 12 billion years into the past. However, very little is known about universal movement in between inflation and then--except that there was a gravity dependent deceleration--and so a truly fundamental theory of the whole ‘she-bang’ is out of our grasp, at the moment.
This should change with the Next Generation Space Telescope, now renamed the James Webb Space Telescope (JWST), to be launched in about 2010. It will observe objects with such large redshifts that their visible light reaches us as infrared. Thus the time from the recombination era until about a billion years into the evolution of the universe will be explored in detail for the first time. It will have the capability of observing objects 400 times fainter than the Keck telescopes on earth can see. It will give us a detailed view of the formation of galaxies in what is called the ‘first light’ epoch.
We don’t know for sure that we need one, but let’s explore the possibility of a time-varying cosmological ‘constant’, in hopes that if we do need one, we will have some well thought-out scenarios to choose from.
The first public appearance of a reasonable time-varying quintessence was the theory of Philip Peebles and Bharat Ratra in 1988. I had mentioned an ‘unreasonable’ time-varying theory to Peebles in 1985, but I’ll get into that later. Being a grad student, and not seeing any time variable theories in print, I thought I had staked claim to the territory, and I was a little chagrined that Peebles and Ratra beat me to the publication draw.
Their theory harmonized with the usual inflationary scenario. It related to the potential energy hill of inflation (the slope of the Mexican hat) caused by what is called a ‘scalar quantum field’. The kinetic energy from coming down the main part of the hill in inflation is thought to have fed into making the matter we know and love today.
However, they speculated, the hill might have a 'foothill'. They gave it the simplest form possible, calling it a 'power law tail'. This tail meant that the energy in the vacuum would gradually decrease, like a time-decreasing cosmological constant. Such a tail might give a fundamental reason why the present cosmological constant is about the same magnitude as the normal matter density, a seeming coincidence. One form of the model also allows for Ωm = 0.3 in a flat universe, so it is still in the running.
Several forms of quintessence have been ruled out to a high degree of certainty by the supernova Ia observations. The Perlmutter group results limit the equation of state parameter to w < -0.6 with 95% certainty. This completely eliminates the cosmic string network theory of Vilenkin, Spergel, and Pen, with w = -1/3. Vilenkin's comoving domain walls (a bubbly universe) remain an unlikely possibility at w = -2/3, while the 'globally wound texture' theory of Davis, Kamionkowski, and Toumbas is ruled out with its w = -1/3. Anderson and Carroll's idea of VAriable Mass Particles (VAMPS) is still in the running, as are some other theories.
J. M. Overduin, from the University of Waterloo in Ontario, Canada, would have us believe it possible that our universe is the result of a previous universe having met its 'Waterloo'. He pointed out in May 1999 that some time-variable quintessence theories allow an avoidance of the Big Bang, instead allowing for a 'big bounce'.
This simply means the universe could have originated from a finite size rather than a cosmic 'point' or singularity. It's called a 'bounce' because a big crunch or collapsing phase could have preceded the expanding phase. One of these possibilities has the cosmological constant scaling as the square of the Hubble constant for its epoch. This Λ ∼ H² model, promoted by Carvalho, Lima, and Waga, predicts that if Ωm = 0.3, ΩΛ should be 0.7. This is a nice fit to the currently held balance of matter and space energy.
The spotlight in quintessence theory, however, is now on variable cosmological 'constants' inspired by string theory. In a variant of the M class of string theories, it is thought that our universe is one of many islands of three-dimensional space in a four-dimensional supra-verse. In our locality the fourth space dimension is curled up, but not so tiny that it doesn't have an influence.
It is connected to other island universes whose gravity acts through that fourth ‘connecting dimension’. The influence of gravity through the fourth space dimension may appear as a time varying cosmological constant. The string theory basis for this was announced by Nima Arkani-Hamed and two Berkeley physicists in 2000.
However, Andreas Albrecht and Constantinos Skordis from U. C. Davis have taken the potential hills possible with this string theory and have been able to calculate possible approaches to a nearly constant cosmological space energy density. Their theory involves a time-variable w.
As the universe rolls down the potential energy hill, it gets stuck in a cul-de-sac. In the process of rolling, the energy in space from the other dimension goes from having an attractive effect like other matter to having a repulsive effect. It ends up having its state parameter w = -1. In other words, it ends up behaving like a cosmological constant.
Albrecht’s theory thus explains why the cosmological constant density is about the size of the matter density at present. It also escapes being ruled out by nucleosynthesis restrictions on expansion. However, it involves specially selecting the shape of the hill from among the string theory possibilities. It is a beginning at working on a fundamental theory of quintessence. A finer view of the CMB (microwave background) may allow a less arbitrary determination. This will come soon.
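One way to see how a field stuck in a cul-de-sac ends up with w = -1 is the standard expression for the equation of state of a homogeneous scalar field: w is the ratio of (kinetic minus potential) energy density to (kinetic plus potential) energy density. Here is a tiny sketch in Python (my own illustration; the numbers are invented for the demonstration):

    # Equation of state of a homogeneous scalar field:
    #   w = (kinetic - potential) / (kinetic + potential)

    def w_scalar_field(kinetic, potential):
        return (kinetic - potential) / (kinetic + potential)

    print(w_scalar_field(1.0, 0.0))     # kinetic-dominated (fast roll): w = +1
    print(w_scalar_field(1.0, 1.0))     # equal shares: w = 0, gravitates like matter
    print(w_scalar_field(0.001, 1.0))   # nearly frozen field: w is close to -1

As the rolling slows and the kinetic energy dies away, w slides toward -1, and the field mimics a cosmological constant.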
Also notable in the quintessence field is the idea of the 'tracker field'. Zlatev, Wang, and Steinhardt (again) explained that a large range of initial conditions approach the same evolutionary track when there is a coupling between the scalar field and cosmic curvature. This is 'inevitable quintessence' and is sometimes called an 'attractor' solution. Spatial inhomogeneities in the scalar field do not destroy the production of a currently reasonable quintessence energy.
Philip Peebles from Princeton and Bharat Ratra from Kansas State use a potential energy that is an inverse power law of the field. Their model is called a 'tracker' solution. Tracker potentials usually produce a funny energy that decreases asymptotically to the current 70% value. The field involved must, however, come from new physics which transcends ordinary quantum field theory.
Takeshi Chiba of the University of Tokyo has coupled the quintessence field to gravity and found that a time-variable gravitational constant is a possibility. From limits on the variability of G given by measurements of young and old neutron stars in binary pulsars, one can narrow down the numbers involved in such a theory.
There are also Variable Speed of Light Theories (VSL) which tie the variability to light’s speed rather than the energy in space. We will discuss these in a later chapter.
Chapter 10
The High Velocity of the Breach: Variable Speed of Light Theories
“Rising with utmost velocity from the furthest depths, the Sperm Whale thus booms his entire bulk into the pure element of air, and piling up a mountain of dazzling foam, shows his place to the distance of seven miles and more.”
There are problems with the inflationary theory. A reasonable particle physics explanation for the rapid expansion period has not been found. In addition, the fact that the universe is flat means that we must also explain why the lambda energy density is close to the critical density and comparable to the matter density. Our universe seems very contrived indeed. We should be able to understand this quasi-flatness and quasi-lambda problem in addition to the original lambda problem.
In addition, data is coming in from various sources indicating that the coupling constant, α, for the electromagnetic force may have been smaller in the past. This means a smaller value for electric and magnetic forces. Since
α = e²/ℏc (where ℏ is Planck's constant divided by 2π),
one possibility is that the speed of light could have been greater in the past. Such theories that suggest a different speed of light in the past are called VSL (Variable Speed of Light) theories.
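To put a number on α, here is a quick check in Python using the standard SI values of the constants (in SI units the formula carries an extra factor of 4πε₀ in the denominator). It also shows that, holding the other constants fixed, a larger speed of light in the past would have meant a smaller α, which is the direction the quasar data point.

    # Fine structure constant in SI units: alpha = e**2 / (4*pi*eps0*hbar*c).
    import math

    e = 1.602176634e-19       # electron charge, coulombs
    eps0 = 8.8541878128e-12   # vacuum permittivity, F/m
    hbar = 1.054571817e-34    # reduced Planck constant, J*s
    c = 2.99792458e8          # speed of light, m/s

    alpha = e**2 / (4 * math.pi * eps0 * hbar * c)
    print(alpha, 1.0 / alpha)             # about 0.007297, i.e. about 1/137

    # A speed of light larger by one part in 100,000 (other constants fixed)
    # gives an alpha smaller by about the same fraction:
    alpha_past = e**2 / (4 * math.pi * eps0 * hbar * (c * 1.00001))
    print((alpha_past - alpha) / alpha)   # about -1e-5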
M. T. Murphy and colleagues from the University of New South Wales in Australia have reported that quasar absorption lines, whose wavelengths depend on this coupling constant, suggest it was smaller by about a part in 100,000 at moderate cosmological distances, when the universe was about half its present size. The data show a departure from constancy of over five standard deviations, which corresponds to far better than 99.99% confidence that this 'constant' was different in the past. Though there are questions about some error creeping into the data, at this time most of them have been answered.
Also, WMAP satellite data reported early in 2003 indicate a strong possibility that α might have been smaller still at an epoch when the universe was 1,000 times smaller than it is today. Though the data is far less compelling than the Australian results, the uncertainty is six percent toward the side of lower α and only two percent toward the higher side, with the data peaking about 2% away from the present-day 'constant'.
The first suggestion from theory that the speed of light might have been different in the past came from John Moffat in 1993. Though Moffat talked about a sudden change, John Barrow suggested that the change might track the expansion of the universe. Since that time, Albrecht, Alexander, Magueijo, Smolin, and others have suggested reasonable theories based on modifications of Einstein's relativity, drawing on hints from the evolving M Theory and other theories of quantum gravity. All of them have been geared to be alternatives to the inflationary theory.
One of the most interesting theories, in my opinion, is one presented by Stephon Alexander and Joao Magueijo from Imperial College, London. It is based on non-commuting space-time coordinates. In the emerging M or ‘Matrix’-‘Membrane’ Theory the D0 membrane coordinates are non-commuting. One way of thinking about this is that it looks as though space-time is quantized.
Space-time quantization suggests that space-time may come in small units. In the Non-Commuting VSL theory, space and time coordinates do not commute, indicating that either space OR time is quantized. One way of saying this is that there is a minimum time between events. This time is most likely in the realm of 10⁻⁴³ second--the Planck time--a very small time indeed. In addition, an alternate developing quantum gravity theory called 'Loop Quantum Gravity' asserts the other possibility: that there might be a quantization of volume and area, related to the Planck length of about 10⁻³³ centimeters.
The Alexander-Magueijo (AM) theory suggests that photons, or light quanta, behave very much like sound vibrations in an ordinary crystal. These sound vibrations are called phonons. Thus both phonons and AM photons exhibit strange behavior at high temperatures–they become very sensitive to the crystal structure. In the latter case it is sensitivity to the ‘crystalline’ space-time which causes a change in the speed of light.
The 'deformed' radiation also causes a distorted black body curve. If so, this distortion should be measurable if and when we are able to measure the Graviton Background radiation (analogous to the Microwave Background). This graviton-distorted black-body curve might be a remnant of the decoupling of gravitational radiation from matter at the Planck time. However, no one has detected gravitational radiation directly. Nevertheless, we have indirectly measured its effect on the slowing of the orbital speed of a pulsar companion star in a binary system. Thus there is hope that our future technology will detect this gravity wave picture of the universe from its very earliest epoch. A picture of the distribution and motion of energy during this ancient time would be a big leap toward confirming possible quantum gravity theories as well. Confirmation of the AM theory or another black body distortion theory would obviate the need for an inflationary scenario, solving the smoothness, horizon, lambda, quasi-flatness, and quasi-lambda problems in one sweep.
Investigations into other circumstances in which the variations in the speed of light are visible are being conducted. Further studies of quasar emission lines and a deeper probe into the structure of the Microwave Background utilizing the Planck telescope should begin the weeding out process, discriminating between VSL theories or perhaps denying the need for them altogether, though current results seem to weakly support VSL.
Magueijo seems to have his finger in almost all pots. With Kimberly and Medeiros he suggested a reasonable modification of relativity in which the Planck time is an invariant. That is, the duration of the Planck time is the same for all observers. This theory is similar in form to the usual formulation of special relativity. Such a formulation enables what is called 'doubly special relativity'.
There is, however, one difficulty. Inflation solves one problem that is more difficult to solve with VSL theories. That problem is explaining why the current universe is isotropic. Differences in direction from early space time distortions must be removed to make our universe the same in every direction from every point in space. Inflation does this, but as John Barrow points out, most naive VSL theories are less effective.
There may be modifications of VSL theories which solve the isotropy problem, or it may be that the universe is not isotropic. In fact, some data arriving in late 2002 seemed to indicate that the universe MAY be toroidal in shape. This is one of the simplest anisotropic distortions. If the universe is doughnut-shaped, light sent out in one direction would curve around and return to its origin in a finite time. More data is needed to prove or disprove this possibility.
For these reasons, some, like Barrow, have proposed that it is not the speed of light which varies, but rather the charge on the electron.
The data for an anisotropic universe is very weak at present, and most cosmologists are still working with the assumption of isotropy.
Chapter 11
The Sphinx in the Desert–The Earliest Stages in the Universe and How We Know Them
“It was a black and hooded head; and hanging there in the midst of so intense a calm, it seemed the Sphinx’s in the desert. ‘Speak, thou vast and venerable head,’ muttered Ahab, ‘...and tell us the secret thing that is in thee. Of all divers, thou has dived the deepest. That head upon which the upper sun now gleams, has moved amid this world’s foundations.’”
While gazing at the Sphinx I have often wondered what she could tell us about earth's past if she could speak. For the universe, the Sphinx might be the ball the universe was in at the Planck time (10⁻⁴³ seconds). Before that, space and time certainly did not exist as we know them. What did? Or is it superfluous to ask the question?
M theorists like Ed Witten from Princeton--the first to consider an 11-dimensional universe as essential--might say membranes (one meaning of the 'M'). These are surfaces of any number of dimensions up to one less than 11. With the interactions of these 'branes', as they are called, the shape and dimensionality of space and time as we know them may have originated. The lowest dimension brane is a zero brane.
I often jokingly say when I talk about the Big Bang, that the universe begins by ‘blowing out its branes.’ I guess because it has zero branes, that’s a no braner.
It gets even more riotous when you consider that a p-dimensional brane is called a ‘p-brane’. That way it doesn’t take much intelligence to think about them.
What happens when a bunch of p-branes collide with the Earth? Your choice: either 1) our ‘civilization’, or 2) p’s on Earth.
So much for frivolity. I have been accused of excesses in that area, and I don’t want to be in trouble with my cherished readers.
So the universe's origin is quite wacky. However, after the Planck time, the physics we know and love starts to kick in. We suspect that the distortions of space-time and energy existing at the Planck time created gravitational waves, embodied in enigmatic gravitons, just as light is embodied in photons.
The neat thing is that this graviton background provides a picture of the universe immediately after the Planck time, that is, if we are able to detect it. We have only been able to do this indirectly so far, in the slowdown of companions to neutron stars. Most physicists have faith that a future technology of graviton detection will emerge. This means that just as the microwave background radiation (MBR) gives a picture of the universe at recombination, about 240,000 years ABB (after the Big Bang), the gravitational background radiation (GBR) will yield the details of the distortions in space-time present as the universe emerges from behind the veil of the Planck Matrix (one meaning of the 'M' in M theory). Then the rescue of Zion will be nigh. Pardon my movie reference.
What about a timeline of the universe as we know it after the Planck time? Steven Weinberg, in his famous book from the late '70s, The First Three Minutes, gave us a picture of these earliest times from the point of view of the standard theory of particle physics.
Particles and Light–the Flesh and Blood of the Universe
“Stern all, the White Whale spouts thick blood!”
In a sense, time itself may have come into existence out of the quantum foam of space-time distortions, perhaps even from just a single bubble. Thus it may not be meaningful to speak of 10⁻⁴³ seconds after the Big Bang, the Planck time, as the beginning of time. However, for lack of a better phrase, we will.
One of the great quests in physics is the search for the ultimate symmetry which will unite the four forces of physics at or near this beginning. We know that electromagnetism and the weak nuclear force governing the decay of neutrons are truly one force that now appears to be two because of the breaking of symmetry at a very early epoch. Theories called Grand Unified Theories (GUTs) have been concocted to describe the unification of the electroweak force with the strong nuclear force, which holds neutrons and protons together and binds them to each other. This symmetry was broken even earlier in time. Though no one knows whether we have a decent GUT theory, physicists have faith in the possibility.
At an earlier time it may be that supersymmetry was broken. Before then particles were playing identity change into their more massive superpartners and back again. In the theory, particles with half-integer spins (Fermions) have partners that are particles with integer spins (Bosons), and vice versa.
Theories like supergravity and superstrings attempt to use supersymmetry to unify gravity with all the other forces. We are on the verge of finding or not finding predicted superpartners with our particle accelerators.
Here then are the epochs of the breaking of symmetry separating out the forces:
Supersymmetry: gravity breaks away, after about 10⁻⁴³ s
Grand Unification: the strong force separates, about 10⁻³⁷ to 10⁻³⁵ s
Electroweak: the electromagnetic and weak forces split, about 10⁻¹¹ s
Though we are not clear how matter may have originated, we have somewhat of an idea how it transformed as time went on. One idea for the origin is that the period of inflation may have played a part. The vacuum energy or funny energy in space at the time may have rolled slowly down a hill of potential and reached a bump at the end and oscillated. The energy of oscillation may have produced matter. If you roll a marble off the center of a sombrero with a curled-up brim, the marble may oscillate when it reaches the brim.
Thus matter may have come into existence somewhere around 10⁻³⁵ seconds, at the end of inflation. At that time particles and antiparticles were in equilibrium with photons. A particle and antiparticle can annihilate to make a pair of photons. With so little elbow room early on, they did it quite often. However, the photons almost immediately produced back the pairs of particles.
During this era we had what one would call a 'quark soup'. The energy was high enough to keep neutrons and protons from being formed by condensation of quarks. This state may have been produced at the RHIC collider at Brookhaven National Laboratory in New York in 2002, by slamming gold atoms together at speeds near the speed of light, but the matter is still under debate. The quarks combined when the energy was low enough. This left photons and newly formed neutrons and protons, because the temperature had lowered enough to prevent them from being destroyed.
At this point the symmetry between matter and antimatter had been broken, and matter particles outnumbered antimatter by about one part in a billion. At a time after the electroweak phase transition, at about 10⁻⁶ seconds, protons and neutrons annihilated their antimatter counterparts, leaving behind the one-billionth part of excess matter. This explains why we don't detect antimatter and its remnants in appreciable amounts in cosmic rays from the cosmos. We have to create it artificially in the lab.
At this time neutrons and protons changed back and forth into one another, but neutrons have more mass and are harder to create. Thus the proportion of protons to neutrons settled in to about four to one.
When the universe was about one second old, antielectrons (positrons) annihilated electrons. The photons had lowered in energy as their wavelengths lengthened with expanding space, and thus they had too little energy to produce new electron-positron pairs. During this era the number of remaining electrons was nearly equal to the number of remaining protons. Had this not been true, matter in nature would not be neutral on the average, and this would have resulted in outrageous electric forces.
The neutron supply gradually depleted as they decayed into protons, electrons, and anti-neutrinos. Protons thus further outnumbered neutrons.
At about 100 seconds the universal temperature was a billion degrees, and hydrogen was fused into deuterium, helium, and lithium. By about ten minutes, the temperature and density had dropped enough that this fusion stopped. During this epoch of primordial fusion, the entire universe was like one big stellar core, with a sufficient temperature to make the process work. We call this epoch's process 'primordial nucleosynthesis'. During this time there were about 7 protons for every neutron.
Think of 16 nucleons. About 2 of them were neutrons. These combined with 2 protons to make a helium nucleus. This used up 4 out of the 16, so that the universe became about 25% helium by mass. Only a trace of deuterium and lithium was formed. This, then, was the 'primordial soup'. This is the proportion we still find hanging around in gas that never formed stars.
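The 25% figure follows from a few lines of arithmetic. Here is that back-of-the-envelope estimate as a Python sketch (a toy version of the argument above, not a real nucleosynthesis calculation):

    # Helium mass fraction from the neutron-to-proton ratio, assuming
    # essentially all neutrons end up bound in helium-4 (2 protons + 2 neutrons).

    n_per_p = 1.0 / 7.0                          # about 1 neutron for every 7 protons
    helium_mass_fraction = 2 * n_per_p / (1 + n_per_p)
    print(helium_mass_fraction)                  # 0.25, i.e. about 25% helium by mass

    # Or, counting 16 nucleons (2 neutrons + 14 protons), with 2 neutrons and
    # 2 protons locked into one helium nucleus:
    print(4 / 16)                                # 0.25 again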
Later–scientists now say about 200 million years later–stars were formed and began to fuse to make heavier elements as well. The massive stars did it very quickly, exploding their ‘heavy metal’ in the universe. The ‘Metallica’ era had begun.
Astronomer Talk about the Epochs–Redshift, z
Let us uplevel our discussion of the early universe by using redshift terminology. This will give us an idea of the scale of the universe at each epoch.
To help visualize this, consider a reference cube of space; space itself is expanding, dragging the matter along with it. For such a cube, the redshift satisfies

1 + z = size now/size then,

so for the large redshifts of the early universe, z ≈ size now/size then.

Thus when we say that the recombination era happened at z = 1,300, we not only mean that light emitted at that time arrives with its wavelength stretched to about 1,300 times its starting value, but also that the universe itself has stretched by a factor of about 1,300 since then. It was 1,300 times more compact then than now. This happened at a time of roughly 240,000 years ABB (After the Big Bang).
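Here is the stretch-factor bookkeeping as a short Python sketch (my own illustration), using the exact relation 1 + z = size now/size then:

    # Redshift bookkeeping: the universe, and every wavelength travelling through
    # it, is stretched by the factor (1 + z) between emission and observation.

    def stretch_factor(z):
        return 1.0 + z

    def observed_wavelength(emitted_wavelength_nm, z):
        return emitted_wavelength_nm * stretch_factor(z)

    print(stretch_factor(1300))              # the universe was about 1,300 times smaller
    print(observed_wavelength(121.6, 5.8))   # Lyman-alpha (121.6 nm) from the z = 5.8
                                             # quasar arrives at about 827 nm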
Here is a list of important phases for which we know the redshift and consequently the time (all times are ABB):
Inflationary era: z_infl ≈ 10²⁷, t = 10⁻³⁴ s
Thermalization of matter and antimatter ends ('The Cosmic Photosphere'): z_th ≈ 10⁶, t = 1 yr
Equality of matter and radiation: z_eq = 3233, t = 50,000 yrs
Recombination: z_rec = 1250, t = 240,000 yrs
Decoupling: z_dec = 1089, t = 379,000 yrs
Last scattering surface: z_lss = 1055, t = 400,000 yrs
Reionization--stars first form ('First Light'): z_reion ≈ 20, t = 180,000,000 yrs
The first three are determined theoretically, as we have no radiation that betrays the conditions and times. We have excellent data for the last three from the WMAP satellite in its probes of the lumps in the microwave background.
Here is a description of these different benchmarks.
The Inflation epoch has been described in detail in a previous chapter. WMAP has confirmed it, or some alternative theory equivalent in effect (like Variable Speed of Light--VSL), by noting the scale-free nature of the lumps.
We have described the thermalization of matter and antimatter as a thermal equilibrium between the two and the light they form by annihilating. The epoch at which it ends is called the Cosmic Photosphere, because this is where the majority of the light in the universe was produced by matter-antimatter annihilation without the pairs being re-created.
Recombination is when electrons combine with protons to make neutral hydrogen. They recombine without being ionized again, for the most part. In the process, an ultraviolet photon is emitted by each atom. Universal expansion stretches its wavelength into the microwave.
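The radiation temperature stretches by the same factor, which gives a quick check that recombination happens when the list above says it does. A rough sketch (the 2.725 K figure is the measured temperature of the microwave background today):

    # The radiation temperature scales as T = T_now * (1 + z).
    T_now = 2.725                  # CMB temperature today, kelvin
    z_dec = 1089                   # decoupling redshift from the list above
    print(T_now * (1 + z_dec))     # about 2970 K: cool enough for neutral hydrogen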
When space has thinned out enough for light to pass through matter without much scattering off the electrons, this is called 'decoupling'. Light no longer interacts with matter. Light and matter go through an independent evolution, and matter is allowed to condense by means of its self-gravitation.
The very end of this decoupling process is when the photons last scatter off electrons. This is called 'The Last Scattering Surface', and it is somewhat nebulous--that is, different for each photon, but statistically centered on z ≈ 1055.
We cannot see beyond this epoch because the direction of photons would be changed by scattering before then. It’s very much like noting how far you can see in a fog. Though different photons may have a direct line to the observer, there is a kind of average distance for last being scattered off water droplets. Replace water droplets by electrons and you have a picture of the process. Only neutrinos or gravitational waves could penetrate this ‘fog’ to see what was going on beyond the last scattering surface.
Reionization of the neutral hydrogen atoms occurs when stars first form, or earlier if other energetic processes intervene. A type of star called Population III was theorized to produce reionization. This is a very massive clump of gas--far more massive than a normal star--collapsing directly into a black hole. It was once thought that these were the source of gamma ray bursts from the early universe. Now that theory has been discounted by observations from the Chandra X-ray satellite.
Chapter 12
Fate’s Lieutenant: The Destiny of the Universe
“Ahab is for ever Ahab, man. This whole act’s immutably decreed. ‘Twas rehearsed by thee and me a billion years before this ocean rolled. Fool! I am the Fates’ Lieutenant; I act under orders.”
Captain Ahab
Is there such a thing as the ‘fate’ of the universe? If so, can we figure it out from what we know now? These are extremely difficult questions to answer.
If the universe is really flat and remains so, it would expand forever. However, we can in principle never know that it is exactly flat. Including both matter and funny energy, the density of the universe is currently known to be within about 2% of the critical density. Still, the universe is likely to be Marginally Closed or Marginally Open.
A hair to the higher side of critical density (Marginally Closed) would mean the universe is closed but immense, a monstrous three sphere in 4-D space that only appears to be flat because we can’t see much of it. It would take light perhaps trillions of years to circuit the sphere and return to the same point–but it could. Also, it is worthwhile noting that wormholes connecting distant portions of the universe are only possible for Marginally Closed universes.
A hair to the lower side of critical density (Marginally Open) means the universe is like a very large extended Pringle potato chip–what we call ‘saddle shaped’. It would expand forever and ever and is possibly infinite. A Marginally Open universe would more rapidly approach what we call ‘heat death’ than a Marginally Closed.
However, both of these scenarios are applying what we know about now and the recent (soon to be distant) past to the future. Things may not be that simple.
Here are some possibilities which make the fate of the universe uncertain.
A pocket universe could be created in our locality which would change the shape and acceleration of the universe. Such a pocket universe could be created by a random quantum fluctuation, or a determined raging scientist with lots more energy at his or her disposal than we will have in the near future.
There is no guarantee that other unknown components changing the expansion might not come into play in the future.
One of the most fascinating scenarios I've heard is called 'The Big Rip'. R. R. Caldwell from Dartmouth College, in a paper intriguingly entitled 'Phantom Energy and Cosmic Doomsday', describes the effect of a runaway 'phantom energy'--an energy for which the sum of density and pressure is negative. It does something really wild--drives itself to infinity in a finite time, 'rips apart the Milky Way, solar system, Earth, and ultimately the molecules, atoms, nuclei, and nucleons of which we are composed, before the death of the universe in a Big Rip.' This paper, released in 2003, actually finds that in light of our current understanding such an ultimate disaster is a possibility, and he should know--he is a primary investigator into the implications of the WMAP microwave background data.
Fortunately, there are plenty of other post WMAP possibilities in the running. The funny energy could still be a cosmological constant, for example. This is the simplest possibility. There are surviving ‘tracker’ quintessence models and other forms of exotic time-varying cosmological constants. However, with the exception of the Big Rip, if we assume a single component of funny energy, the various models just speed up or slow down the following scenario, which holds for marginally open or closed, nearly flat, or flat universes:
In this simplest of all possible worlds, and perhaps the most likely, we would pass through the following stages.
We are currently in the stellar era where there are lots of stars. By a star we mean an astronomical object still burning by means of nuclear fusion.
The approximate timetable in this stage (expressed in units of ten billion years--TBY):
The sun becomes a white dwarf: about 0.5 TBY
The longest-lived stars (M stars) die: about 100 TBY
Gas supplies are used up: about 1,000 TBY
All stars gone to white dwarfs or beyond: about 1,200 TBY
The next stage of universal disorder or entropy is the degenerate era. The universal content will include white dwarfs (at first), brown dwarfs, and black dwarfs–cool carbon cinders resulting from white dwarfs, planets, neutron stars, and black holes. As the entropy in the universe increases, energy becomes unusable.
The timetable in the Degenerate Era (expressed as a power n, where time = TBYⁿ):
Most stars and planets ejected from galaxies by n = 2
Black holes swallow everything in galaxies by n = 3
Objects outside galaxies disintegrate by n = 4? (The means is proton decay.)
It should be said that the timing of this epoch is very uncertain. It is thought that the Grand Unified Theory (GUT) of particles, which would unify all forces of physics but gravity, requires the proton to decay into perhaps a pion and a positron, the antiparticle of the electron. However, a workable GUT has not been found. The first GUT prediction of the proton's very long lifetime (about n = 3 in the notation above) failed experimental test. Since then, GUT theories suggest an even longer lifetime for this particle vital to matter as we know it. Without protons we wouldn't have atoms and chemistry as we know them.
The black hole era comes next. The only composite discrete objects will be black holes.
Stellar mass black holes evaporate by n = 7
Supermassive black holes evaporate by n = 10 (a 'googol', 10¹⁰⁰, years)
The dark era comes next. In the evaporation process lots of photons are created. Most massive particles will have found antiparticle counterparts to annihilate with, producing more photons. Thus the content of the universe will be mostly photons, neutrinos, and lightly bound positronium atoms.
These final positronium 'atoms' have the fascinating property that they are bound, orbiting electron-positron pairs whose orbits are light-years in size. They will eventually decay by n = 11, that is, 10¹¹⁰ years.
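For the record, here is how I read the n notation used in these timetables, as a short Python sketch: an era labeled n corresponds to ten billion years raised to the power n, that is, 10 to the power 10n years.

    # The 'n' notation: time = (ten billion years)**n = 10**(10*n) years.

    def years(n):
        return 10.0 ** (10 * n)

    print(years(3))     # 1e30 years: black holes have swallowed their galaxies
    print(years(10))    # 1e100 years (a googol): supermassive black holes evaporate
    print(years(11))    # 1e110 years: the last positronium 'atoms' decay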
If we have a Marginally Open or Critical universe, life will most likely be gone in a thousand trillion years. It becomes a cold, dark universe indeed. The Marginally Closed possibility has not been studied in detail.
Would there be a Big Bounce? Another interesting question. Some scientists say that all black holes might lead to baby universes. This sounds almost like 'cosmic reincarnation'. A universe's information is kind of its 'karma'. If information survives the big black hole stage, as some quantum gravity theories suggest, this might mean the birth of a baby universe with something of a destiny--'Fate's lieutenant'.
Chapter 13
Harborless Immensities:
Extra Dimensions
“Already we are boldly launched upon the deep; but soon we shall be lost in its unshored, harborless immensities.”
“The strong, unstaggering breeze abounded so, that sky and air seemed vast outbellying sails; the whole world boomed before the wind.”
Early in the book we mentioned the ‘inner-space, outer-space’ connection, a term coined by Michael Turner. The claim is that our study of particle physics should illuminate our study of the cosmos, and vice versa. After all, our particle accelerators probe smaller and smaller regions of space as they increase the energy of particles. In addition, the further back we look in time by probing deep space, the hotter and more energetic the universe was. So there is a parallel between phenomena occurring at high energy and in the early universe.
One thing that is agreed upon is that in the early universe gravity is no longer described by General Relativity. Quantum rules, and the unplumbed depths of Quantum Gravity, apply.
We experience life as if we live in four dimensions: three of space and one of time. However, it has gradually dawned on physicists that our understanding of nature may depend on comprehending more dimensions.
The Fifth Dimension–a perpendicular leap of the whale
In 1919, a mathematician by the name of Theodor Kaluza stunned Einstein by sending him a paper suggesting that his four-dimensional theory of space-time might be improved by adding a fifth dimension.
He showed that electromagnetism and gravity could be simultaneously described by five curved dimensions. This was the first unification since Maxwell had brought together electricity and magnetism in the 1860s. Einstein was reluctant to recognize this feat, as it seemed to somehow make his 4-D theory obsolete. He contemplated Kaluza's paper for two years before recommending it for publication.
However, other physicists had a valid criticism. Why don't we experience or measure this fifth dimension? In 1926, the Swedish physicist Oskar Klein came up with the solution: the fifth dimension is compact.
Consider a straight line along the surface of a cylinder as representing the four dimensions we know and love. Kaluza's fifth dimension is like the circular one around the cylinder. If the cylinder is shrunk down arbitrarily small, so we don't experience it, we say it has undergone 'compactification'. Later quantum theory would require that it not be infinitesimally small, but rather at least the size of the Planck length, or 10⁻³³ centimeters. This is still unmeasurably small.
In 1927, Fritz London from Stuttgart recognized that quantum theory also implicitly contained the full theory of electromagnetism. It was embodied in the freedom of an extra dimension implicit in the theory--called the gauge dimension.
London's quantum theory and the Kaluza-Klein extension of general relativity had something in common: electromagnetism. It was decades later that these two theories, plus the theory of nuclear forces, were amalgamated into 10 dimensions in what we now call Superstring Theory.
Superstrings became the hope for a Theory of Everything in 1984, when Schwarz and Green showed that the infinities in the quantization of gravity did not occur in superstrings. I was a graduate student at the time and I remember the excitement when one of them came to lecture at the University of California at San Diego.
In ten dimensions, then, superstrings were string-like warps in space, the warp extending into 6 compact spatial dimensions. The strings were either loops or open-ended, and their vibrations and windings represented the known particles. Closed loop strings are sort of like vibrating rubber bands. The different vibrations represent the different particles of nature.
It was understood that the infinities that crept into attempts to quantize gravity were related to the fact that particles were considered to be points. However, in the superstring description they were loops with a finite size. Remember that the smaller the scale observed, the larger the energy. Infinite energies were necessary to probe a point particle and they showed up in the math, but since superstring loops had a finite size, it did not require an infinite energy to observe them. There was a natural energy cutoff in the calculations.
Formerly, especially in the Feynman diagrams which describe particle interactions, particles moved along what we call world lines. Now superstrings (at least the loops) moved and created world tubes. This finite size of path removed the unrenormalizability of gravity.
Some scientists, like Robert Brandenberger from Brown University, were seriously considering how such a theory might impact the search for the inflaton field of the inflationary theory of the universe. Even as early as 1985, I recall him trying to extract the perfect ‘slow rolling’ hill of potential energy for the universe to create inflation from the equations of superstrings.
Brandenberger and Cumrun Vafa, in 1988, came across something extraordinary in the history of extra-dimensional physics. They gave us the superstring explanation for why we experience four dimensions instead of the 10 of superstrings. This rests on the theory of what is called the T-duality of superstrings.
Superstrings can have two kinds of energy: one because of their vibrations, and another related to the number of times they are wound around compact dimensions, like a tetherball rope around a pole. This was called the 'winding number'. It was found that when the radius of the pole was changed from R to 1/R, the roles of the two types of energy interchanged and the total energy of the string was the same. This is called T-duality.
If you make the pole smaller, the string can wind around it more times but have fewer vibrational nodes, in a perfectly compensating way. Thus they were able to show that a very small universe was equivalent in all ways to a very large one. Thus the necessity that the universe began in a point or singularity was removed.
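T-duality can be checked with a toy calculation. In string units, and ignoring the oscillator (vibration-level) contributions, the momentum and winding contributions to a string state's mass-squared are (n/R)² and (wR)²; swapping the two integers while replacing R by 1/R leaves the total unchanged. A minimal sketch in Python (my own toy version):

    # Toy T-duality check for a string on a circle of radius R (string units,
    # oscillator modes ignored): momentum number n, winding number w.

    def mass_squared(n, w, R):
        return (n / R) ** 2 + (w * R) ** 2

    R = 0.3
    print(mass_squared(2, 5, R))          # a state on the small circle
    print(mass_squared(5, 2, 1.0 / R))    # its dual on the big circle: the same number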
Also, particles and antiparticles are represented by strings vibrating in opposite directions around the ‘pole’ of a curled up dimension. When they come together, they annihilate, releasing their force of constriction around the dimension and allowing it to uncurl or decompactify. It’s like two boa constrictors fighting to the death, then releasing their prey.
Consider that the universe may have started as a 'nugget' of 10 curled-up dimensions. Brandenberger and Vafa proved that particle and antiparticle strings were only likely to come together in large quantities in three spatial dimensions, not when more are involved. Thus the annihilation of the wrapping strings allowed only three spatial dimensions to uncurl, the three dimensions we move about in today.
An alternative picture, in which the universe started out with 10 uncurled dimensions and 6 of them then curled up, can also be supported.
Though Brandenberger and Vafa started out with superstring theory, they did a similar thing with the M Theory. Recall that the M theory ups the dimensional ante by one. The universe is suggested to be 11 dimensional instead of 10.
Why make life more complicated? There were five independent superstring theories which worked to contain all of particle physics. However, in the early nineties it was realized that they were related to one another and to another theory called 11 dimensional supergravity. These related superstring theories were all perturbation theories on a fixed background, that is, approximate calculations with successively smaller terms.
For this to work, the coupling constant for the string force has to be less than one. This is like the way using just a few terms from a binomial expansion of (1 + x)ⁿ requires x to be smaller than 1.
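The binomial analogy can be made explicit. A short Python sketch (my own illustration) compares a few terms of the expansion of (1 + x) to the one-half power with the exact square root: for small x the partial sum is excellent, while for x greater than 1 the series is useless.

    # Partial sums of the binomial series for (1 + x)**p, for a general power p.

    def binomial_partial_sum(x, p, terms):
        total, coeff = 0.0, 1.0
        for k in range(terms):
            total += coeff * x**k
            coeff *= (p - k) / (k + 1)    # next generalized binomial coefficient
        return total

    print(binomial_partial_sum(0.1, 0.5, 5), 1.1 ** 0.5)   # 1.04881 vs 1.04881: converges
    print(binomial_partial_sum(2.0, 0.5, 5), 3.0 ** 0.5)   # 1.375 vs 1.732: fails for x > 1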
In 1995, Ed Witten of Princeton University suggested that by extending the theory to a coupling constant greater than one, one type of superstring theory transformed into 11D supergravity. This, then, would only work for an exact theory, not one of successive approximations. Also, the different string theories seem to be related to each other by T-Duality.
This meant that there should be some theory of the universe in 11 dimensions that is the centerpost for these 6 theories. Ed Witten called it the M Theory. Though he didn’t give us the meaning of ‘M’, the M theory, in the form most employed, realizes the importance of membranes, or branes for short.
I like to think of membranes as bubble-like surfaces. However, these can be as complicated as 9-dimensional surfaces in 10-dimensional space. The idea, in one description, suggests that strings are like surgical tubing instead of being infinitely thin. An open string can be connected to branes on either end, just like intravenous tubing can be connected to a bag of saline solution and to a patient's arm.
With the extra dimension, physics becomes more interesting and complex. Branes play an integral part in the formation and destruction of superstrings which are related to particles.
In 2002, Brandenberger, Alexander, and Easson from Brown University put out a paper claiming that a gas of these multidimensional branes still would allow only three dimensions to decompactify from a 10-dimensional space nugget. This explains why we experience three dimensions from the more complete perspective of M Theory.
There are also excellent anthropic arguments against us experiencing more or fewer spatial dimensions than three. Basically, we couldn't functionally exist in two dimensions, because an alimentary canal in 2-D would divide our body in two, with no communication possible between the two parts--no blood could flow. In more than three dimensions gravity would be stronger than centrifugal force and planets would fall into their stars, if they could form at all. Even electrons would collapse into nuclei, and chemistry as we know it wouldn't exist. If intelligent life were to exist it would be radically different.
The Ekpyrotic/Cyclic Universe
“The whale... bore down upon its (the ship’s) advancing prow, smiting his jaws amid fiery showers of foam.”
Did the universe really begin by blowing out its branes?
One theory is that the brane (membrane) we live on was formed by the collision of two branes. (Sounds sexy doesn’t it?) Again, as in the case of the Kaluza Klein theory, a fifth dimension plays the key role.
Ed Witten and Petr Horava from Rutgers University argued in the mid ‘90s, that although the universe is four dimensional including time, a fifth dimension may open up for a finite length of time. If such a thing happens it is possible that there are other ‘brane’ universes along the extra dimension. This is the Heterotic M Theory.
Our universe and another universe could be three dimensional surfaces in four dimensional space. As a lower-dimensional analogy, imagine two sheets hung on two parallel clotheslines. Then visualize bringing the two clotheslines together rapidly so the sheets collide, rippling in the wind. Not all parts of the sheet would collide at the same time.
For the two colliding branes, or 3-D universes, one with negative tension and one with positive, the collision provides a heating effect and a resulting expansion. This is the theory called the Ekpyrotic Universe of Turok, Ovrut, Steinhardt, and Khoury. 'Ekpyrotic' comes from the Greek 'ekpyrosis', meaning fiery conflagration.
According to the promoters of the idea, it solves all the problems the inflationary theory was invented for. It produces no magnetic monopoles. The two colliding universes start out flat and in equilibrium so no flatness and horizon problems are encountered. The two universes pass through one another and our universe is left with scale free density lumps related to the ripples. Thus the invention of the imaginary inflaton field of inflation would be made irrelevant (pardon my alliteration).
Paul Steinhardt and Neil Turok now promote a particular version of the Ekpyrotic Theory in which the two colliding branes periodically recollide. They are under the influence of an attractive force, somewhat as if connected by a stretched spring. This results in cyclic big bangs.
During the cycle, the dark energy decays and our universe recollapses, intensifying the attractive force on the other brane, and a collision occurs at the big crunch, creating a new big bang. This is the 'Cyclic Universe' idea. It has undergone many revisions at the hands of critics. The problem is that it presents a challenge to the inflationary theory, which is the darling of the particle physicists and has become the darling of astrophysicists as well.
Andrei Linde, transplanted from Russia to Stanford University, and author of the chaotic inflation theory, came out with a scathing criticism of such theories in May of 2002. The criticism is unparalleled in the annals of science in that it involves dozens of unique critical statements. Nevertheless, at the end of this blistering paper, in which he suggests that the only way to save the Cyclic Theory is by tacking on his own chaotic inflation at either end of the cycle, he admits: "The Ekpyrotic/Cyclic scenario is the best alternative to inflation that I am aware of."
As in the early stages of the development of the inflationary theory, there are many reasonable challenges to be overcome. However, who knows, these may be met, as they were–for the most part--for inflation. But right now, Linde says these theories suffer from ‘brane damage’.
Other Dimensional Excesses–Welcome to Braneworld
Granted: what we have been discussing in this chapter is very speculative. However, almost all great theories have started out with wild speculation.
In 1999, Lisa Randall and Raman Sundrum began a search for quintessence by suggesting that our 3-D world could be embedded in a 4-D universe. Call the larger-dimensional world 'the bulk' (the hulk?), and our world 'the brane'. Then the tension of the brane, due to conditions at the bulk-brane boundary--the 'edges' of our universe--turns the roll down the potential hill into an inflation of the universe. However, large gravity waves which conflict with big bang nucleosynthesis are created in such a theory.
One of the difficulties in these types of quintessence theories is deriving the correct potential energy function from braneworld structure and dynamics, which is essentially uncertain. However, with the Planck observatory going into space in about 2007, gravitational lensing observations, galaxy surveys, and the possibility of a dedicated supernova telescope in space (SNAP), we may be able to observationally pin down the correct potential. That would be somewhat like providing an elevation map for a trip descending from Mt. Everest.
We may find ourselves working backwards–trying to understand braneworld–the realm of the M Theory–from the potential function derived from cosmological observations.
Chapter 14
The final pursuit and demise of the Pequod crew: Repulsion Kills Cosmology
“And now, concentric circles seized the lone boat itself, and all its crew, and each floating oar, and every lance pole, and spinning, animate and inanimate, all round and round in one vortex, carried the smallest chip of the Pequod out of sight.”
If a cosmologist were proven right in the risky game of cosmology, it would most probably be only for a time.
My own ship of thought and all of my cosmological inner crew of personalities will probably sink and perish leaving only the schoolteacher. Perhaps teaching is more valuable to life than musing on the cosmos, anyway. If so, I will be content to have survived such a soul-deepening adventure. Such is the perspective of just one--may I call myself?--cosmologist. I wanted to know the answer to the question: where did we come from?
Pardon me for ignoring, for the most part, the diatribes on what is called the ‘anthropic principle’ which fill the pages of many a modern cosmology book. I find such discussions highly speculative. To me, they are like trying to figure out why you have won the lottery. Too much mystery, too much vagary leave me with an uneasy feeling in the pit of my stomach. I have stopped reading those parts of those books.
Will all present cosmological theories come to naught? To risk playing the anthropic game, I’ll say the chances are about 50/50.
The ship sinks: old views of cosmology fall
I hate reading and writing summaries. I like to look for the meat in a book, ferreting it out from the fat. What good is a picture of the meal?
However, I will restrain my reluctance in the interest of those who do want a summing up.
The view, held for over 60 years, that space provides no impetus in the cosmic expansion has fallen with the observation that Supernovae Ia out to moderate distances indicate the universe is accelerating. Only a repulsive force could make it do so. The gravity of matter is attractive.
Many astronomers now pin the tail on a constant contribution of space we call the cosmological constant. We don't know enough to secure that donkey's position, however. My guess is that donkeys move, albeit ever so slowly. Hence a slow approach to a constant value for the cosmological 'constant'. We need to find a theory that will 'transcend' general relativity in the early universe.
A sequel to 'Moby Dick'? What is next for cosmology?
The next step is to find Supernova Ia as far away as we can, until we plumb the nature of the primeval acceleration.
We also want to carefully examine the Cosmic Microwave background. The Planck satellite goes up in a few years and will extend beyond the WMAP results to a much greater precision.
Also, our explorations into the ‘inner space’ of particle theory may reveal more of the origin of matter in the universe. Our accelerators now slam gold atoms together at speeds near the speed of light, producing a state of matter similar to that in existence a fraction of a second after the big bang. Discovery of new particles, particularly pairs of supersymmetric partners of the usual particles called ‘sparticles’ would be useful in piecing together the puzzle. But maybe this will take us in an entirely new direction. This area of study and superstring theory provide hope for a more complete understanding. Particle physicists aren’t out of the cosmology running yet.
One of the most exciting future events I could imagine, observationally, would be the discovery of the graviton background. This could give us a picture of the universe 10⁻⁴³ seconds after the Big Bang.
Our Pequod has sunk, and only Ishmael, the school teacher, survived. If one’s theory doesn’t float there is always a job enlightening the next generation to the mysteries of the cosmos. Perhaps one of my students or yours will slay the ‘white whale’ within the next century. The possibility is not ignorable.
Glossary
accelerated expansion–in cosmology, this means that the scale factor for distance between galactic superclusters is increasing at a greater rate as time progresses. This defies the decelerating effect of gravitating matter in the universe.
active galaxies–these are galaxies with cores presenting an intermediate level of activity between quasars and present day galactic cores.
adiabatic expansion–an adiabatically expanding universe is one in which there is no inflow or outflow of energy.
age of the universe–time since the big bang. For an accelerating universe this is greater than 1/H0, the Hubble time. For a decelerating universe it would be less than Hubble time.
Andromeda galaxy–the nearest large spiral, ‘twin’ of the Milky Way. It is 2.2 million light years away.
anthropic principle–the universe has the characteristics it does so that we could be here to observe it. In a universe with physical laws and constants slightly different, intelligent life would not be possible.
axions–particles dreamed up by theorists to account for the neutron not lining up its spin in a magnetic field. They are an hypothesized form of non-baryonic dark matter.
baryons–neutron or protons, the main component by mass of the ordinary matter of which our bodies are made.
baryonic dark matter–matter made of neutrons and/or protons.
big bang–the hot, dense initial state of the universe hypothesized to precede the observed expansion.
big crunch–an eventual recollapse of the universe to a hot, dense state. It would disappear inside the event horizon of an immense black hole.
Big Rip--R. R. Caldwell from Dartmouth College describes the effect of a runaway 'phantom energy', an energy for which the sum of density and pressure is negative. It does something really wild--drives itself to infinity in a finite time, 'rips apart the Milky Way, solar system, Earth, and ultimately the molecules, atoms, nuclei, and nucleons of which we are composed.'
binary pulsar–a pulsar inside a dual star system. A pulsar is a neutron star which periodically beams radio waves toward the earth.
black body–in astronomy, a hot gas in equilibrium with itself achieves a common temperature (e.g., the surface of a star) and radiates with a black body spectrum.
black body spectrum–a black body (see above) produces all wavelengths in a Planck distribution of intensity, which has a peak, or brightest wavelength.
black hole–infinite warp in space-time produced by infinitely collapsed matter. Its escape speed exceeds light speed.
Braneworld–the picture in which our universe is embedded in a higher-dimensional universe containing other membranes.
brown dwarf–a failed star. Usually considered larger than Jupiter, but smaller than the 0.08 solar masses needed to ignite nuclear fusion at 10 million K temperature.
bubbles in the false vacuum–a portion of space time undergoes rapid inflation while its surroundings have not changed state.
bulk modulus–the index of compressibility of a substance.
Cepheid variable stars–stars which vary in brightness with a period of one to 50 days. A Cepheid with a given period is an excellent standard candle, yielding distance.
chaotic inflation–proposed by Andrei Linde in 1983. Energy density potential is like a bowl. The initial value of the field is randomly chosen (a random height up the sides) and rolls to the center producing an inflationary period.
closed universe–a universe whose density is greater than critical. In the absence of a cosmological constant it would eventually recollapse.
cold dark matter–slow moving, heavy particles with low energy needed for the dynamics of galaxies, clusters, or the universe.
comoving density–the number of objects per solid angle and redshift.
Cosmic Microwave Background--the radiation remnant of the big bang, composed of photons produced when electrons combined with protons to make hydrogen for the last time. These photons were originally ultraviolet, but their wavelength was stretched by universal expansion.
cosmological constant–a dynamical factor in universal expansion producing a repulsive force, and related to a constant vacuum energy density.
critical density–the total energy density necessary to produce a flat universe. Presently it is about 10⁻²⁹ g/cm³.
Cyclic Universe–universes undergoing periodic collisions, expanding then contracting, only to collide again. Related to an extra dimension of space.
dark energy–a component of universal density not related to matter. A cosmological constant or quintessence.
dark matter–matter which isn’t in the form of stars which is deemed necessary to gravitationally produce the motion of galaxies, clusters, superclusters, or the universe.
deceleration–a slowing down of the expansion of the universe.
deceleration parameter, q–a measure of the rate of slowing down of the universe. If q is negative at some time, the universe is accelerating. With q positive, the universe decelerates.
decoupling era–radiation and matter decouple at about 380,000 years after the big bang. Before then, light is interacting with matter. After that epoch, light is free to propagate without scattering or interaction.
De Sitter universe–a universe with no matter, but possibly a cosmological constant or constant vacuum energy density. It expands exponentially.
Doppler shift–a shift of frequency higher for relative motion toward, lower for relative motion away. Works for sound, light, or any periodic wave.
Einstein’s static universe–Einstein sought to justify a static universe in his General Theory of Relativity. If the universe were closed, he reasoned, it would be finite and a static universe would attract itself by gravity and collapse. He invented the cosmological constant and its associated repulsive force to counter the attraction. Unfortunately, such a picture was unstable and didn’t account for later observations of universal expansion. Einstein later thought of this as his ‘greatest blunder’.
Ekpyrotic Universe–the big bang occurs when two universes (branes) collide along an extra dimension of space in M-Theory.
Euclidean space-time–flat space, total density = critical density.
eternal inflation--If we consider the possibility that a region of false vacuum could appear as a random quantum fluctuation in a normal universe, then normal universes could spawn inflationary ‘babies’. Sometimes called the ‘pocket universe’ or ‘baby universe’ theory of Andrei Linde.
ether–a ‘substance’ in space. It used to be thought of as a medium for light, but now must be considered a quintessence or cosmological constant-related essence.
false vacuum–when empty space has an equilibrium energy larger than the ground state. When the vacuum is pumped up like that, it behaves like a cosmological constant, and causes inflation.
filigree structure–the structure of superclusters of galaxies which is filamentary and bubbly. This is the last structure seen as one moves to larger and larger scales of view.
first light–the epoch when stars first ignite.
flatness problem–the universe has a total density very close to critical density at present. This means that the initial conditions in the universe had to be finely tuned to produce it.
flat universe–a universe with total density = critical density.
flux–the wattage incident on a square meter observing surface.
‘funny energy’–another name for the cosmological constant or quintessence component of universal energy density.
galactic halo–a component of the galaxy predicted by Peebles and Ostriker to exist as dark matter in a spherical shell beyond the extent of the galactic disk.
galaxy rotation curves–plots of the orbital velocity of stars versus distance from the center of a spiral galaxy. The deviation from Keplerian behavior signals the need for galactic dark matter.
‘graceful exit’ to inflation--in Guth’s original theory, bubbles of true vacuum form within the expanding false vacuum. The false-vacuum space in between the bubbles grows too fast to allow the bubbles to combine in large numbers (percolation). Without this process, it was thought that there was no way to produce our universe with an inflationary scenario.
Grand Unification Theory (GUT)–the particle theory first proposed to unify the electroweak and strong forces into one. This is the theory producing a phase change which suggested inflation to Alan Guth.
gravitational lens–in the General Theory of Relativity, light bends in a gravitational field. Around a large galaxy or cluster of galaxies, multiple images of an object behind it, such as a quasar, may be seen as light bends around several sides of the large gravity field.
gravity–the weakest of the forces of nature. In the weak-field limit, its intermediating particle is the graviton. It was thought to be only attractive until the repulsive effect of the cosmological constant was required to explain Supernova Ia observations. This repulsive effect, however, is due to a related negative (outward) pressure of space.
Great Attractor–a concentrated mass 1,000 times that of the largest galaxy. It is pulling hundreds of galaxies toward it, and it lies hidden behind the plane of our own galaxy, preventing us from finding out what is causing this tremendous gravity field. It was first identified by Alan Dressler, Sandy Faber, et al., a group who called themselves ‘The Seven Samurai’.
grey dust–a hypothetical type of dust with large grains. If it exists, it would absorb light from Supernova Ia without reddening it, making them seem dimmer than they actually are. This would make the claim of universal acceleration spurious. However, it seems likely that a substantial grey dust component would be inconsistent with other observations.
Hawking radiation–Stephen Hawking showed that the curvature energy near a black hole’s horizon can produce virtual particle pairs. One member of the pair may fall into the black hole while the other escapes as real radiation. This reduces the mass of the black hole and has a black body signature. Smaller black holes radiate faster than large ones.
Heterotic M Theory–for a time, 4 space dimensions are allowed to be uncurled instead of just 3.
higher harmonics in the background radiation–these are like the higher-frequency harmonics in an organ pipe, but existent within the ‘surface of last scattering’ (see below).
homogeneity–the assumption, marginally proven, that the universe has a constant density at the largest scale.
horizon problem–the microwave background from opposite ends of the sky could not have been in contact to equilibrate to the same black body temperature distribution without a faster-than-light expansion period for the universe. Inflation provides such a rapid expansion.
Hubble Constant, H–the ratio of the expansion speed to the scale factor (equivalently, a galaxy’s recession speed divided by its distance); the slope of the Hubble curve at any epoch. Its present value is H0.
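In the usual notation, with a(t) the scale factor and d a galaxy’s distance,
\[ H \equiv \frac{\dot{a}}{a}, \qquad v = H_0\,d \ \ \text{(Hubble’s law at the present epoch)}. \]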
inflation–a brief, terminated period of rapid universal expansion caused by a cosmological constant-like energy related to an inflaton field (see below).
Inflationary theory–a theory constructed to resolve the horizon, flatness, and monopole problems (see definitions) by hypothesizing inflation (above definition).
inflaton field–to have percolation, the coming together of inflating bubbles to produce our universe, a slow-rolling potential is required. In fact, it must roll so slowly that no known particle theory could produce an inflation giving density fluctuations of the amplitude needed to yield the matter distributions we see today. The inflaton field was invented (ad hoc) to do this.
inhomogeneity–a matter/energy density lump in the universe.
‘inner space’–the pursuit of fundamental particle theory as it relates to cosmology.
intergalactic dark matter–dark matter between the galaxies. Either it is dynamically inferred by motions of visible matter in clusters or superclusters, or by the motion of the universe itself.
isotropic–the universe looks the same in all directions from all points.
lambda, Λ–the famed cosmological constant in Einstein’s relativity. It is within a constant factor of a false vacuum energy density, and if positive, tends to accelerate the universe.
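In one common convention (assumed here), the vacuum mass density tied to Λ is
\[ \rho_\Lambda = \frac{\Lambda c^2}{8\pi G}, \]
which is the ‘constant factor’ relationship just mentioned.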
lambda problem–the fact that we coincidentally live in a time when the cosmological constant energy density is about the same as the matter energy density. Initial conditions for the universe require fine tuning for such a happenstance.
last scattering surface–acoustic waves caused by matter (and vacuum) energy density lumps leave their final imprint when light last traces these inhomogeneities at the recombination era. The characteristic scale on this surface is the distance sound could travel between the big bang and recombination, and so it is sometimes called the ‘acoustic horizon’.
Legendre polynomials–angular functions, closely related to spherical harmonics, used to produce a multi-pole expansion (see below).
lensing events–when a MACHO (e.g., a brown dwarf) passes in front of a distant star, the MACHO acts as a gravitational lens and the light of the star brightens momentarily.
light curve–a plot of light output received over time. For a supernova Ia, the light curve increases rapidly to a peak then decreases over several weeks.
Local Group–a poor (small) cluster of galaxies in which our Milky Way galaxy resides.
lookback time–time from the present back to a specified era.
Lorentz Invariance–obeying the laws of Special Relativity; the Lorentz transformation relates space and time in a moving frame to those in a rest frame. For cosmology, it seems important that empty space have either no energy or a cosmological constant-like behavior in the present era, so that it harmonizes with Special Relativity.
luminosity distance–the distance inferred from an object’s luminosity (power output at the source) and its apparent brightness by means of the inverse-square law of light.
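In the standard notation, with L the luminosity and F the measured flux,
\[ F = \frac{L}{4\pi d_L^2} \quad\Longrightarrow\quad d_L = \sqrt{\frac{L}{4\pi F}}. \]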
MACHOS–MAssive Compact Halo ObjectS; brown dwarfs by another name. These are failed stars between the mass of Jupiter and 8 percent of the sun’s mass.
magnetic monopoles–kinks in spacetime arising out of the quantum foam. North magnetic poles without a south, or vice versa. None observed.
magnetic monopole problem–in standard big bang theory the universe has not expanded rapidly enough to get rid of the primordial magnetic monopoles described above. Rapid inflation is necessary to explain the paucity of monopole detections.
matter density–the average density of matter on the homogeneous scale. It appears to be about 30% of critical density from observations of distant Supernovae Ia, the CMB, and other data.
maximal symmetry–Lorentz invariance (see above) of matter + vacuum, and conservation of energy of matter + vacuum. This implies density of matter + density of vacuum = a constant, and provides a Steady State-like exponential DeSitter expansion for the universe. This includes, in Maximal cosmology, an increasing and asymptotic quintessence.
metric–the mathematical form of any particular curved space-time in Einstein’s General Theory of Relativity.
mini black holes–small black holes which may have emerged out of the quantum foam at Planck time shortly after the big bang. They evaporate rapidly. However, none have been detected.
missing mass–matter inferred to exist from the dynamics of galaxies, clusters, superclusters, or the universe itself. Sometimes called ‘dark matter’.
M Theory–A generalized class of theories relating to superstring theories, where extra dimensions beyond the four are allowed more freedom, and do not necessarily need to be Planck compact. Some versions involve ‘branes’ (membranes), which are surfaces embedded in a higher dimensionality.
multi-pole expansion–an expansion of a field of numbers over the sky (or any spherical surface) into components; the order-L term has 2L+1 pieces and measures clumping on angular scales of roughly 180°/L. Temperature fluctuations of the Cosmic Microwave Background can be analyzed in this fashion.
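In the usual notation (assumed here), the temperature fluctuations are expanded as
\[ \frac{\Delta T}{T}(\theta,\phi) = \sum_{L,m} a_{Lm}\,Y_{Lm}(\theta,\phi), \qquad C_L = \left\langle |a_{Lm}|^2 \right\rangle, \]
with each multipole order L probing angular scales of roughly 180°/L.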
negative pressure–in the case of a cosmological constant, adiabatic expansion requires the vacuum energy density to carry a pressure equal to minus its energy density. In General Relativity it is this negative pressure, acting like an outward push on the universe, that produces the repulsive effect.
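A standard way to state this uses the equation-of-state parameter w and the acceleration equation (textbook forms, with ρ the mass density and p the pressure):
\[ w = \frac{p}{\rho c^2}, \qquad \frac{\ddot{a}}{a} = -\frac{4\pi G}{3}\left(\rho + \frac{3p}{c^2}\right), \]
so the expansion accelerates when w < -1/3; a cosmological constant has w = -1.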
neutrinos–particles emitted in the weak nuclear interaction. They move at nearly the speed of light and have almost no mass. They are very penetrating, and can be ‘eaten up’ by WIMPS (see below).
New Inflation–a version of inflation which solved the non-percolation problem of Guth’s version. It used a potential with a very low inclination, providing the slow roll necessary for our universe to be formed out of a single bubble, as Stephen Hawking suggested. Unfortunately, it did not provide the right amplitude of density fluctuations needed to produce our present-day lumps of matter. However, the slow-rolling idea became the paradigm for later inflationary theories.
non-baryonic matter–matter that is not neutrons or protons.
non-dynamical dark matter–matter that cannot be inferred by the motions in galaxies, clusters, or superclusters. An example is the matter thought necessary (before cosmological constant discovery) to make the density exactly critical in the inflationary theory.
nucleosynthesis–the formation of elements heavier than hydrogen in the big bang. Wagoner, Fowler, and Hoyle gave us a theory of the formation of deuterium, helium, and lithium in the cosmic fireball that matches observations well. This places restrictions on the dynamics and content of the universe at temperatures greater than 10 million K.
Occam’s razor–in the fourteenth century, William of Occam said that if two theories describe the same data equally well, the simpler one should be chosen.
Oort cloud–a region far beyond Pluto where most comets spend most of their time.
percolation–referring to the inflation of bubbles and how well they unite into bigger bubbles as the universe expands.
Perfect Cosmological Principle–in the Steady State Theory, the assumption that the matter energy density remains the same over all time, and thus provides for a DeSitter expansion.
Planck energy–the energy scale achieved at the Planck time (see below). E = 2 x 10^19 GeV.
Planck time–the time at which the universe emerges from a realm in which quantum theory takes over from General Relativity. The universe emerged (as a bubble?) from the quantum foam at t = 10^-44 sec.
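In terms of fundamental constants (standard definitions),
\[ t_P = \sqrt{\frac{\hbar G}{c^5}} \approx 10^{-44}\ \mathrm{s}, \qquad E_P = \sqrt{\frac{\hbar c^5}{G}} \approx 10^{19}\ \mathrm{GeV}. \]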
pocket universes–baby universes formed from bubbles in the quantum foam of a ‘momma’ universe.
primordial density fluctuations–the emerging lumps from the quantum foam at the Planck era. These are expanded and changed by inflation. The lumps have a scale free spectrum (come in a variety of sizes).
quasars–cosmic gushing cores of ancient galaxies. These have been seen at large redshifts.
quintessence–a time variable cosmological ‘constant’. We may find this is necessary as we explore the expansion of the universe at earlier epochs with the CMB and supernova Ia observations.
quantum foam–randomly reprocessed space time, forming irregular curvature structures, including black and white holes, wormholes, kinks (magnetic monopoles), and possibly cosmological strings and branes. You might say the universe blows out its ‘branes’ at the Planck time.
recombination era–the time, about 280,000 years after the big bang, when the universe became transparent to radiation and hydrogen was formed.
renormalization–in quantum field theory infinities can often be subtracted off to yield a useful theory of an interaction. For example, the electroweak field theory is renormalizable.
repulsive force–in cosmology, this is caused by a quintessence or cosmological constant with a negative state parameter, w. This provides a negative pressure–repulsive force.
Robertson-Walker metric–the space time curvature description for a perfect fluid which is homogeneous and isotropic, like the universe, and is used in cosmology.
scale free spectrum–in the COBE observations of the microwave background, no preferred size of lumps was found. This is what an inflationary universe provides, and what our observations of present-day matter lumping demand.
secondary Doppler peaks (in the CMB). These are like higher acoustic harmonics (higher octaves in an organ pipe) on the sphere which is the ‘last scattering surface’. If they exist they will tell us much about the nature of the substance early in the universe. This bears a similarity to the quality of a musical instrument relating to the mix of harmonics its makeup provides. A piano, for example, has a different quality for a given note than a flute. So also one type of quintessence will have different amplitudes (and possibly frequencies) of secondary Doppler peaks from another.
Special theory of relativity–Einstein’s theory of relative motion, stating that the speed limit in the universe is the speed of light. It is mathematically expressed by Lorentz transformations of space and time.
standard candle–an astronomical object with a known luminosity or wattage. Its wattage and apparent brightness together tell us how far away it is.
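Astronomers usually book-keep this with the distance modulus (standard convention):
\[ m - M = 5\log_{10}\!\left(\frac{d_L}{10\ \mathrm{pc}}\right), \]
where m is the apparent magnitude, M the absolute magnitude fixed by the candle’s known wattage, and d_L the luminosity distance.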
Steady state theory–the universe was hypothesized to be the same in appearance throughout time. However, no cosmological constant or quintessence component was invoked. Matter maintained its density by ‘magic’, acting like a cosmological constant itself.
superclusters–clusters of clusters of galaxies, often having a filamentary or bubbly structure.
supermassive black holes–large black holes, thought to inhabit the centers of galaxies. These are phenomenal energy engines, perhaps powering quasars and active galaxies.
Supernova Ia–matter from a red giant in a binary star system accretes onto a white dwarf, until it reaches 1.4 times the mass of the sun. It then ignites nuclear fusion at a rapid rate, and blows up with a given amount of ‘bang’, making it an excellent standard candle and measuring device for universal expansion.
superstring theory–the many dimensions of space and time it requires collapse to the familiar four of space and time. The other dimensions curl up, perhaps to smaller than the Planck length (10^-33 cm). Particles are represented as different vibrations on either looped (closed) or open strings. String theory is our best hope for a fundamental description of a cosmological constant or quintessence, if we need it.
Supersymmetry–a theory in which bosons (integer spin) and fermions (half integer spin) become one early in the universe. Particles thus have supersymmetric pairs at lower energies. These theories have their difficulties, but there is still hope for a renormalizable quantum gravity.
T-duality–In string theory we replace the size of a dimension by 1/size and nothing changes.
ten parameter theory–there are ten numbers thought to describe the dynamics and structure of the universe.
‘the time problem’–without a cosmological constant, the inflationary theory and other theories don’t allow enough time for the development of globular clusters with ages of at least 12 billion years, or of the galaxies and quasars observed at 13-14 billion years. With a cosmological constant the universe is at least 14-15 billion years old.
time variable cosmological ‘constant’–perhaps we will discover that a cosmological constant can’t describe the nuances of ancient accelerations in the universe. We may have to resort to a time varying vacuum energy density. Perhaps it is the slow rolling down-the-hill type. Or perhaps it shifts in its nature from attractive to repulsive. Time and new observations will tell.
ultraviolet divergence–at small wavelengths, particle theory has infinities. Since ultraviolet is a smaller wavelength than visible, we call those ‘ultraviolet divergences’.
Vacuum energy density–the energy per unit volume invested in a false vacuum with a non-ground state energy. The cosmological constant is a constant multiple of this density.
VAMPS–VAriable Mass ParticleS. One theory of quintessence requires particles to vary in mass.
VSL (Variable Speed of Light) Theories–alternatives or adjuncts to Inflation, in which the speed of light was different in the past.
W and Z particles–the intermediating particles of the electroweak theory, discovered by Rubbia et al. and predicted in the electroweak unification theory of Weinberg, Salam, and Glashow.
white dwarf–the end state of a low-mass star: a carbon cinder with all nuclear fusion turned off. About Earth-sized, these retired stars blow up in Supernova Ia explosions.
white hole–like a time-reversed black hole. In behavior very much like a black hole. They accumulate accretion disks and radiate. However, they are thought to blow up by a process of collecting light and matter at their event horizon.
Wien’s Law–the surface temperature of a black body is inversely proportional to its peak wavelength.
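Numerically (the standard value),
\[ \lambda_{\rm peak}\,T \approx 2.9\times 10^{-3}\ \mathrm{m\cdot K}, \]
so, for example, the 2.7 K microwave background peaks near a wavelength of 1 mm.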
WIMPS–Weakly Interacting Massive Particles. These are predicted by supersymmetry theory, and may eat up neutrinos in the sun. They are a leading candidate for non-baryonic dark matter.
WIMP wind–a variation in particle detection at different seasons supposedly caused by the earth moving through a field of WIMP dark matter.
WMAP (Wilkinson Microwave Anisotropy Probe)–a satellite placed where the gravity of the sun and earth balance, which in 2001 began measuring the lumps in the microwave background with unprecedented accuracy.
wormholes–a space-time tube between a black and a white hole, leading to a different place and time.
zero point oscillation–virtual quantum vibrations of empty space whose contributions could lead to a cosmological constant or quintessence.
Bibliography
Basic Cosmology:
Harrison, Edward R, Cosmology: the Science of the Universe, (Cambridge University Press: Cambridge), 1981. (Elementary.)
Narlikar, J. V., Introduction to Cosmology, (Cambridge University Press: Cambridge), 1993. (Intermediate.)
Linder, Eric V., First Principles of Cosmology, (Addison Wesley: Reading, Massachusetts), 1997. (More advanced.)
Cosmic Microwave Background:
Gawiser, Eric, and Silk, Joseph, The Cosmic Microwave Background Radiation (a great review of the observations and theory), preprint, Feb. 2, 2000. Earlier version in Science, Vol. 280, p. 1405, 1998.
White, M., CMB Anisotropy:
www-cfa.harvard.edu/~mwhite/rosetta/node2.html.
Hanany, S., et al., Maxima-1: A Measurement of the Cosmic Microwave Background Anisotropy on Angular Scales of 10' to 5°, preprint, May 10, 2000.
Cosmological Constant:
Carroll, Sean M., and Press, William H., The Cosmological Constant, Annual Review of Astronomy and Astrophysics, Vol. 30, pp. 499-542, 1992.
Goldsmith, Donald, Einstein's Greatest Blunder?: The Cosmological Constant and Other Fudge Factors in the Physics of the Universe, (Harvard University Press: Cambridge, Massachusetts), 1997.
Zichichi, Antonino, et al., eds., Gravitation and Modern Cosmology: The Cosmological Constant Problem (Ettore Majorana International Science Series, Physical Sciences, Vol. 56).
Dark Matter:
Bartusiak, Marcia, Through a Universe Darkly, (Avon: New York), 1993.
Krauss, Lawrence M., The Fifth Essence: Dark Matter In The Universe, (Basic Books: Chicago), 1989.
Inflation:
Guth, Alan H., The Inflationary Universe: A Quest for a New Theory of Cosmic Origins, (Helix Books: Reading, Massachusetts), 1997.
Abbott, L. F., and So-Young Pi, eds., Inflationary Cosmology, (World Scientific: Philadelphia), 1986.
Quintessence:
Peebles, P. J. E., and Ratra, Bharat, Cosmology With a Time-Variable Cosmological “Constant”, Astrophysical Journal, Vol. 325, pp. L17-L20, Feb. 15, 1988.
Albrecht, Andreas, and Skordis, Constantinos, Phenomenology of a Realistic Accelerating Universe Using Only Planck-Scale Physics, Physical Review Letters, Vol. 84, No. 10, Mar. 6, 2000.
Krauss, Lawrence, Quintessence: The Mystery of Missing Mass in the Universe, (Basic Books: New York), 2000.
Overduin, J. M., Nonsingular Models With a Variable Cosmological Term, Astrophysical Journal, Vol. 517, pp. L1-L4, May 20, 1999.
Related Particle Physics:
Barrow, John D., Theories of Everything: Quest For Ultimate Explanation, (Ballantine Books: New York), 1991.
Greene, Brian, The Elegant Universe: Superstrings, Hidden Dimensions, and the Quest for the Ultimate Theory, (Vintage Books: New York), 2000.
Kaku, Michio, and Thompson, Jennifer, Beyond Einstein: The Cosmic Quest for the Theory of the Universe, (Anchor Books: New York), 1995.
Kaku, Michio, Hyperspace: A Scientific Odyssey Through Parallel Universes, Time Warps, and the 10th Dimension, (Anchor Books: New York), 1994.
Supernova Observations:
Garnavich, Peter M., et al., Supernova Limits on the Cosmic Equation of State, Astrophysical Journal, Vol. 509, pp. 74-79, Dec. 10, 1998.
Perlmutter, S., et al., Measurements of Ω and Λ from 42 High-Redshift Supernovae, Astrophysical Journal, Dec. 1998.
Riess, Adam G., et al., Observational Evidence from Supernovae for an Accelerating Universe and a Cosmological Constant, obtained in preprint; appeared in the Astronomical Journal in 1998.
Schmidt, Brian P., et al., The High-Z Supernova Search: Measuring Cosmic Deceleration and Global Curvature of the Universe Using Type Ia Supernovae, Astrophysical Journal, Vol. 507, pp. 46-63, Nov. 1, 1998.
Time and Cosmology:
Boslough, John, Masters of Time: Cosmology at the End of Innocence, (Addison Wesley: Reading, Massachusetts), 1992.
Gribbin, John, Unveiling the Edge of Time: Black Holes, White Holes, Wormholes, (Harmony Books: New York), 1992.
Novikov, Igor D., The River of Time, (Cambridge University Press: Cambridge, U. K.), 1998.
Rees, Martin, Before the Beginning: Our Universe and Others, (Helix Books: Reading, Massachusetts), 1997.
Universal Acceleration:
Livio, Mario, The Accelerating Universe: Infinite Expansion, the Cosmological Constant, and the Beauty of the Cosmos, (John Wiley and Sons: New York), 2000.
Goldsmith, Donald, The Runaway Universe: the Race to Find the Future of the Cosmos, (Perseus Books: Cambridge, Massachusetts), 2000. (Great new book on the subject.)