The Alternate View

The Inconstant Hubble Constant

by John G. Cramer

Edwin Hubble of the Mount Wilson Observatory established his reputation as a pioneer astronomer by demonstrating that there were other galaxies in addition to our Milky Way. In 1929 he published a paper, somewhat anticipated by Friedmann and Lemaître, providing evidence that the observed red-shift of distant galaxies was proportional to their distance from the Earth. In the light of more recent research, we would have to say that his data set was much too small to be conclusive, and his distance estimates were too small by a significant factor. Nevertheless, the essential distance vs. red-shift relation that he discovered was correct.

At the time, Hubble was very cautious about associating the red-shift with recession velocity and thought there might be another explanation. However, the contemporary view is that the observed red-shift is produced by the recession velocity, which Doppler shifts the observed light to lower frequencies and longer wavelengths. The observed proportionality between distance and recession velocity, now known as Hubble’s Law, is direct evidence of the expansion of the Universe. The ratio of recession velocity to distance is called the Hubble constant, and it is one of the most fundamental parameters describing our Universe.
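Hubble's Law is simple enough to state in a few lines of code. Here is a minimal sketch; the value of H0 used is a round illustrative number between the two measurements discussed below, not a measured result.

```python
# Hubble's Law: recession velocity is proportional to distance, v = H0 * d.
H0 = 70.0  # km/s per megaparsec; an illustrative round value

def recession_velocity(distance_mpc, h0=H0):
    """Recession velocity in km/s for a galaxy at the given distance in Mpc."""
    return h0 * distance_mpc

# A galaxy 100 megaparsecs away recedes at about 7,000 km/s.
print(recession_velocity(100.0))  # 7000.0
```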

In 1995, when I attended the 17th Texas Symposium on Relativistic Astrophysics, held in Munich that year (see AV-73 in the August 1995 issue of Analog), the value of the Hubble constant was known only to a precision of about 50%. Since that time, however, astronomy and astrophysics have made great strides in providing more accurate observations and measurements. But along with these improved measurements, a problem has arisen: they don’t agree.

There are many ways by which the Hubble constant can be determined, but we will focus on the two methods that provide the best precision: analysis of the cosmic microwave background radiation, and analysis of “standard candle” astronomical observations. The current problem is that these two trusted methods give inconsistent values for the Hubble constant, values that differ by four or more standard deviations. To examine this problem in more detail, let’s look more closely at each of these methods for measuring the Hubble constant.

*   *   *

The photons of the cosmic microwave background (CMB) were released from the hot ambient plasma of the Universe a few hundred thousand years after the Big Bang. As the Universe cooled, the free protons and electrons forming the hot plasma filling the Universe combined to form hydrogen atoms, causing the Universe to undergo an optical phase transition from opaque to transparent. Light, which previously had been scattered and absorbed by the surrounding charged particles, was suddenly able to stream freely as electrically neutral hydrogen atoms replaced them. The emission of CMB radiation reached its intensity peak about 379 thousand years after the Big Bang, and it has been freely streaming in transit to our detectors from the most remote parts of the Universe ever since. The intervening expansion of the Universe has red-shifted the CMB radiation from visible and near-infrared light to microwaves with a characteristic black-body temperature of 2.73 kelvins. One can think of the stretching of space as the Universe expands as also stretching the wavelengths of these photons. Thus, the Hubble constant determines the overall Doppler shift from source to detector that these ancient photons have received.
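The wavelength stretching described above is a simple multiplicative rule: wavelengths grow by a factor (1 + z), so the black-body temperature falls by the same factor. A quick sketch, using the commonly quoted CMB release red-shift z ≈ 1090:

```python
# Expansion multiplies wavelengths by (1 + z), so the black-body
# temperature drops by the same factor: T_then = T_now * (1 + z).
T_CMB_NOW = 2.73   # kelvins, observed today
Z_CMB = 1090       # commonly quoted red-shift of the CMB's release

T_at_release = T_CMB_NOW * (1 + Z_CMB)
print(round(T_at_release))  # 2978, i.e. roughly 3,000 K
```

That recovered temperature of about 3,000 K is just cool enough for hydrogen atoms to survive, consistent with the optical phase transition described above.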

The European Space Agency’s Planck Mission, using the baryonic oscillations that modulate the angular intensity and temperature structure of the observed CMB, has provided the most precise determination of the Hubble constant from CMB analysis. The most recent version of the Planck analysis deduced the Hubble constant for the very early Universe to be H0 = 67.36 ± 0.54 km/sec per megaparsec, a 0.74% measurement. (Note: The parsec, 3.26 light-years or 3.086 × 10^16 meters, is the standard measure of distance used by astronomers and is related to the angular parallax shift of the apparent positions of nearby stars as the Earth orbits the Sun.)
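One way to get a feel for this number: the inverse of the Hubble constant has units of time and sets the rough age scale of the Universe. A small sketch using the Planck value:

```python
# The "Hubble time" 1/H0 sets the rough age scale of the Universe.
KM_PER_MPC = 3.086e19     # one megaparsec in kilometers
SECONDS_PER_YEAR = 3.156e7

H0 = 67.36                # km/s per Mpc (Planck)
hubble_time_s = KM_PER_MPC / H0
hubble_time_gyr = hubble_time_s / SECONDS_PER_YEAR / 1e9

print(round(hubble_time_gyr, 1))  # 14.5 (billion years)
```

This is only an order-of-magnitude age estimate; the actual age of the Universe in the standard cosmology also depends on how the expansion rate has changed over time.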

Another way of determining the Hubble constant is the “standard candle” method, using astronomical observations of the red-shift and brightness of stars of known luminosity to deduce distance with the inverse square law. The brightest of the standard candles is the type Ia supernova. A type Ia supernova begins as a white dwarf, the non-luminous burned-out remnant of a star, in a binary orbit around another star, from which it receives hydrogen gas that builds up on its surface. After enough hydrogen has accumulated and is compressed by gravity, it suddenly detonates in a thermonuclear explosion that shines with extraordinary brilliance for a month or less, then fades away. Such supernovas occur in all galaxies and can be observed (during their period of brilliance) in our galactic neighbors and also in galaxies halfway across the Universe. There are significant differences in the falloff times of the light from these supernovas, varying from about 10 days to over 30 days. The falloff time can be used to correct for variations in the supernova luminosity, providing a standard candle of extraordinary brightness.
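The inverse square law behind the standard-candle method can be inverted directly: an object of known luminosity L observed with flux F lies at distance d = sqrt(L / 4πF). A minimal sketch, sanity-checked against the Sun:

```python
import math

# Standard-candle distance from the inverse square law: F = L / (4*pi*d^2),
# so d = sqrt(L / (4 * pi * F)).
def standard_candle_distance(luminosity_w, flux_w_per_m2):
    """Distance in meters from luminosity (W) and measured flux (W/m^2)."""
    return math.sqrt(luminosity_w / (4.0 * math.pi * flux_w_per_m2))

# Sanity check: the Sun's luminosity (3.828e26 W) and the solar flux at
# Earth (~1361 W/m^2) should recover about 1 AU (1.496e11 m).
d = standard_candle_distance(3.828e26, 1361.0)
print(f"{d:.3e}")  # ~1.496e+11 meters
```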

One of the most trusted standard candles is provided by a type of star called a Cepheid variable. Cepheids are variable stars that have a regular cycle of rising and decreasing luminosity. Henrietta Swan Leavitt worked at the Harvard College Observatory as a “computer.” She was given the job of examining photographic plates in order to measure and catalog the brightness of stars. In 1908, she discovered a relation between the luminosity and the period of Cepheid variables: the observed period of a Cepheid’s cycle accurately predicts its luminosity. Leavitt’s discovery provided astronomers with the first “standard candle” with which to measure the distance to faraway galaxies.
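The Cepheid method works in two steps: the period gives the absolute brightness, and the difference between absolute and apparent brightness gives the distance. The sketch below uses representative V-band period-luminosity coefficients for illustration only; they are not the calibration any particular survey uses.

```python
import math

# Step 1: a Leavitt-law sketch. The coefficients are representative
# illustrative values for Cepheids in the V band, not a survey calibration.
def cepheid_abs_magnitude(period_days):
    return -2.43 * (math.log10(period_days) - 1.0) - 4.05

# Step 2: the distance modulus, m - M = 5 * log10(d / 10 pc),
# inverted to give distance in parsecs.
def distance_parsecs(apparent_mag, absolute_mag):
    return 10.0 ** ((apparent_mag - absolute_mag + 5.0) / 5.0)

# A 10-day Cepheid observed at apparent magnitude 10 would lie at ~6.5 kpc.
M = cepheid_abs_magnitude(10.0)   # -4.05 with these coefficients
d = distance_parsecs(10.0, M)
```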

Current results from the SH0ES Collaboration, which used Cepheids in the Large Magellanic Cloud to calibrate their distance ladder of type Ia supernova brightness vs. red-shift observations, found the value H0 = 73.5 ± 1.4 km/sec per megaparsec for the later Universe, a 1.92% measurement that is grossly inconsistent with the CMB value. Astronomical observations of several other phenomena in the later Universe after star formation, e.g., red giant stars, gravitational lensing, gravitational wave emission, etc., tend to give values of H0 that agree with the SH0ES result, usually with somewhat larger measurement uncertainties.
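Just how inconsistent are the two numbers? Treating the quoted uncertainties as independent and Gaussian, the tension in standard deviations is the difference divided by the quadrature sum of the errors:

```python
import math

# Tension between two measurements with independent Gaussian errors:
# |H0_local - H0_cmb| / sqrt(sigma_local**2 + sigma_cmb**2)
h0_cmb, sigma_cmb = 67.36, 0.54      # Planck (CMB)
h0_local, sigma_local = 73.5, 1.4    # SH0ES (Cepheids + type Ia supernovas)

tension = abs(h0_local - h0_cmb) / math.hypot(sigma_local, sigma_cmb)
print(round(tension, 1))  # 4.1 (standard deviations)
```

This simple estimate reproduces the "four or more standard deviations" quoted above.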

*   *   *

In other words, the early-Universe CMB value of the Hubble constant differs from the later-Universe local values derived after star formation by four or more standard deviations. The implications of this “tension” between the early-Universe value of H0 and the later-Universe local values are now a matter of debate. Is there some error in the methods of analysis that is producing the discrepancy? Was the Universe actually expanding at a different rate during the first few hundred thousand years after the Big Bang than in more recent eras? The ΛCDM Standard Model of Cosmology (Cold Dark Matter with cosmological constant Λ), which has until now provided very reliable predictions of the evolution of the Universe from the Big Bang to the present, now has a serious problem. In its present form, it predicts that the Hubble constant should be a true constant and should not depend on the era in which it is acting. This challenging result suggests that the ΛCDM Standard Model is in need of modification to accommodate a varying Hubble constant.

*   *   *

Theorists love such discrepancies, because they provide license to propose modifications of established theories that can accommodate new phenomena. The discrepancy in measurements of the Hubble constant is no exception, and many theoretical ideas are emerging. One of the most promising of these is the concept of Early Dark Energy.

To consider this concept, we’ll start by considering the “standard” dark energy of the ΛCDM Model, which is estimated to account for about 69% of the mass-energy of the Universe. This mysterious dark energy is the intrinsic energy of a given volume of empty space, and it has the property that if the volume doubles, the dark energy also doubles, preserving the same energy per unit volume. This energy-creation property has the counterintuitive effect of producing a negative pressure that drives the expansion of the Universe and accounts for the accelerating expansion of the Universe in late times, as observed from the red-shift of the most distant supernovas.

The hypothesis of Early Dark Energy (EDE), in contrast, proposes that there is another form of dark energy that is not locked to the volume of space but instead dilutes as the Universe expands, even faster than the energy of electromagnetic radiation does. The effect of EDE is to initially exert a positive pressure on space, thereby reducing the expansion rate of the early Universe until the early dark energy is diluted away by expansion and its effect ceases.

The authors of the EDE hypothesis, Vivian Poulin and colleagues of Johns Hopkins University, suggest that at early times characterized by red-shifts of greater than 3,000, the EDE accounted for about 5% of the energy of the Universe, but it dissipated to become negligible as the expansion increased. They have modified the ΛCDM model to include EDE effects and have refitted the data from the Planck measurements of CMB and the SH0ES analysis of supernovas calibrated with Cepheids. They find that the disagreement in H0 is explained, and fits to both data sets are somewhat improved. While some improvement in fitting might be expected simply from adding fit parameters, this result nevertheless suggests that the Early Dark Energy hypothesis should be taken seriously and should be subjected to additional testing as observational data sets improve. At present, EDE seems to be the “best game in town” for resolving the H0 puzzle.

In closing, I would have to say that I am not delighted that the ΛCDM Model seems to need to have an extra element added. It already has too many poorly understood components, including inflation, dark matter, and dark energy. Now it may have another component—Early Dark Energy—that is unlikely to be accessible to any direct observational tests. The Universe is growing stranger.

*   *   *

References:

The Hubble Constant Problem:

“Investigating the Hubble Tension—Two Numbers in the Standard Cosmological Model,” Weikang Lin, Katherine J. Mack, and Liqiang Hou, arXiv:1910.02978 [astro-ph.CO] (2019).

Early Dark Energy:

“Early Dark Energy can Resolve the Hubble Tension,” Vivian Poulin, Tristan L. Smith, Tanvi Karwal, and Marc Kamionkowski, Phys. Rev. Lett. 122, 221301 (2019).

 

John G. Cramer’s 2016 nonfiction book describing his transactional interpretation of quantum mechanics, The Quantum Handshake—Entanglement, Nonlocality, and Transactions, (Springer, January-2016) is available online as a hardcover or eBook at: http://www.springer.com/gp/book/9783319246406. Book editions of John Cramer’s hard SF novels Twistor and Einstein’s Bridge are available from the Book View Café co-op at: http://bookviewcafe.com/bookstore/?s=Cramer. Electronic reprints of 202 or more “The Alternate View” columns written by John G. Cramer and previously published in Analog are currently available online at: http://www.npl.washington.edu/av.

 

Copyright © 2020 John G. Cramer
