Cosmologists Debate How Fast the Universe Is Expanding

New measurements could upend the standard theory of the cosmos that has reigned since the discovery of dark energy 21 years ago.

Wendy Freedman and Adam Riess recently made their case for different expansion rates of the universe at a meeting of top cosmologists in Santa Barbara, California.

Patryk Hardziej for Quanta Magazine

Introduction

In 1998, two teams of cosmologists observed dozens of distant supernovas and inferred that they’re racing away from Earth faster and faster all the time. This meant that — contrary to expectations — the expansion of the universe is accelerating, and thus the fabric of space must be infused with a repulsive “dark energy” that comprises more than two-thirds of everything. For this discovery, the team leaders, Saul Perlmutter of the Supernova Cosmology Project and Brian Schmidt and Adam Riess of the High-Z Supernova Search Team, won the 2011 Nobel Prize in Physics.

Fast forward to July of this year.

On a Monday morning three weeks ago, many of the world’s leading cosmologists gathered in Santa Barbara, California, to discuss a major predicament. Riess, now 49, strolled to the front of a seminar room to give the opening talk. A bulldog of a man in a short-sleeved box-check shirt, Riess laid out the evidence, gathered by himself and others, that the universe is currently expanding too fast — faster than theorists predict when they extrapolate from the early universe to the present day. “If the late and early universe don’t agree, we have to be open to the possibility of new physics,” he said.

At stake is the standard theory of the cosmos that has reigned since the discovery of dark energy. The theory, called ΛCDM, describes all the visible matter and energy in the universe, along with dark energy (represented by the Greek letter Λ, or lambda) and cold dark matter (CDM), showing how they evolve according to Albert Einstein’s theory of gravity. ΛCDM perfectly captures features of the early universe — patterns best seen in ancient microwaves coming from a critical moment when the cosmos was 380,000 years old. Since the Planck space telescope’s first map of this “cosmic microwave background” was released in 2013, scientists have been able to precisely infer a distance scale in the young universe and use ΛCDM to fast-forward from the 380,000-year mark to now, to predict the current rate of cosmic expansion — known as the Hubble constant, or H0.

The Planck team predicts that the universe should expand at a rate of 67.4 kilometers per second per megaparsec. That is, as you look farther into space, space should be receding 67.4 kilometers per second faster for each megaparsec of distance, just as two Sharpie marks on an expanding balloon separate faster the farther apart they are. Measurements of other early-universe features called “baryon acoustic oscillations” yield exactly the same prediction: H0 = 67.4. Yet observations of the actual universe by Riess’s team have suggested for six years that the prediction is off.
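
To get a feel for what those numbers imply, here is a minimal sketch in Python (our own illustration, not any team’s code; the 100-megaparsec distance is an arbitrary example). It also computes 1/H0, a crude characteristic age scale for the universe:

```python
KM_PER_MPC = 3.0857e19      # kilometers in one megaparsec
SECONDS_PER_YEAR = 3.156e7

def recession_speed(h0_km_s_mpc, distance_mpc):
    """Hubble's law: v = H0 * d. Returns speed in km/s."""
    return h0_km_s_mpc * distance_mpc

def hubble_time_gyr(h0_km_s_mpc):
    """1/H0, converted to billions of years: a rough cosmic age scale."""
    return KM_PER_MPC / h0_km_s_mpc / SECONDS_PER_YEAR / 1e9

for h0 in (67.4, 74.0):
    v = recession_speed(h0, 100.0)   # a galaxy 100 Mpc away
    print(f"H0 = {h0}: recedes at {v:.0f} km/s, 1/H0 = {hubble_time_gyr(h0):.1f} Gyr")
# H0 = 67.4: recedes at 6740 km/s, 1/H0 = 14.5 Gyr
# H0 = 74.0: recedes at 7400 km/s, 1/H0 = 13.2 Gyr
```

By this crude reckoning, the gap between 67.4 and 74.0 amounts to roughly a billion years of cosmic history, which is part of why the disagreement matters.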

Adam Riess, a cosmologist at Johns Hopkins University, giving his opening talk at the Kavli Institute for Theoretical Physics in Santa Barbara, California.

Monica Almeida for Quanta Magazine

That July morning in a room with an obstructed view of the Pacific, Riess seemed to have a second Nobel Prize in his sights. Among the 100 experts in the crowd — invited representatives of all the major cosmological projects, along with theorists and other interested specialists — nobody could deny that his chances of success had dramatically improved the Friday before.

Ahead of the conference, a team of cosmologists calling themselves H0LiCOW had published their new measurement of the universe’s expansion rate. By the light of six distant quasars, H0LiCOW pegged H0 at 73.3 kilometers per second per megaparsec — significantly higher than Planck’s prediction. What mattered was how close H0LiCOW’s 73.3 fell to measurements of H0 by SH0ES — the team led by Riess. SH0ES measures cosmic expansion using a “cosmic distance ladder,” a stepwise method of gauging cosmological distances. SH0ES’ latest measurement in March pinpointed H0 at 74.0, well within H0LiCOW’s error margins.

“My heart was aflutter,” Riess told me, of his early look at H0LiCOW’s result two weeks before Santa Barbara.

For six years, the SH0ES team claimed that it had found a discrepancy with predictions based on the early universe. Now, the combined SH0ES and H0LiCOW measurements have crossed a statistical threshold known as “five sigma,” which typically signifies a discovery of new physics. If the Hubble constant is not 67 but actually 73 or 74, then ΛCDM is missing something — some factor that speeds up cosmic expansion. This extra ingredient added to the familiar mix of matter and energy would yield a richer understanding of cosmology than the rather bland ΛCDM theory provides.
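
For the curious, the “sigma” level of a disagreement is conventionally the gap between two independent measurements divided by their combined uncertainty. As a rough illustration, using the article’s central values and our approximations of the published error bars (about 1.4 for SH0ES and 0.5 for Planck):

$$n_\sigma = \frac{\left|H_0^{\mathrm{late}} - H_0^{\mathrm{early}}\right|}{\sqrt{\sigma_{\mathrm{late}}^2 + \sigma_{\mathrm{early}}^2}} \approx \frac{74.0 - 67.4}{\sqrt{1.4^2 + 0.5^2}} \approx 4.4$$

That is roughly where SH0ES alone stood against Planck; folding in independent late-universe results such as H0LiCOW’s is what pushed the combination past five sigma.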

During his talk, Riess said of the gulf between 67 and 73, “This difference appears to be robust.”

“I know we’ve been calling this the ‘Hubble constant tension,’” he added, “but are we allowed yet to call this a problem?”

He put the question to fellow Nobel laureate David Gross, a particle physicist and the former director of the Kavli Institute for Theoretical Physics (KITP), where the conference took place.

“We wouldn’t call it a tension or problem, but rather a crisis,” Gross said.

“Then we’re in crisis.”

To those trying to understand the cosmos, a crisis is the chance to discover something big. Lloyd Knox, a member of the Planck team, spoke after Riess. “Maybe the Hubble constant tension is the exciting breakdown of ΛCDM that we’ve all been, or many of us have been, waiting and hoping for,” he said.

The Hubble Constant Surd

When talks ended for the day, many attendees piled into a van bound for the hotel. We drove past palm trees with the ocean on the right and the Santa Ynez Mountains to the distant left. Wendy Freedman, a decorated Hubble constant veteran, perched in the second row. A thin, calm woman of 62, Freedman led the team that made the first measurement of H0 to within 10% accuracy, arriving at a result of 72 in 2001.

The driver, a young, bearded Californian, heard about the Hubble trouble and the issue of what to call it. Instead of tension, problem or crisis, he suggested “surd,” meaning nonsensical or irrational. The Hubble constant surd.

Freedman, however, seemed less giddy than the average conferencegoer about the apparent discrepancy and wasn’t ready to call it real. “We have more work to do,” she said quietly, almost mouthing the words.

Freedman spent decades improving H0 measurements using the cosmic distance ladder method. For a long time, she calibrated her ladder’s rungs using cepheid stars — the same pulsating stars of known brightness that SH0ES also uses as “standard candles” in its cosmic distance ladder. But she worries about unknown sources of error. “She knows where all the skeletons are buried,” said Barry Madore, Freedman’s white-whiskered husband and close collaborator, who sat up front next to the driver.

Freedman said that’s why she, Madore and their Carnegie-Chicago Hubble Program (CCHP) set out several years ago to use “tip of the red giant branch” stars (TRGBs) to calibrate a new cosmic distance ladder. TRGBs are what stars like our sun briefly turn into at the end of their lives. Bloated and red, they grow brighter and brighter until they reach a characteristic peak brightness caused by the sudden igniting of helium in their cores. Freedman, Madore and Myung Gyoon Lee first pointed out in 1993 that these peaking red giants can serve as standard candles. Now Freedman had put them to work. As we unloaded from the van, I asked her about her scheduled talk. “It’s the second talk after lunch tomorrow,” she said.

“Be there,” said Madore, with a gleam in his eye, as we parted ways.

When I got to my hotel room and checked Twitter, I found that everything had changed. Freedman, Madore and their CCHP team’s paper had just dropped. Using tip-of-the-red-giant-branch stars, they’d pegged the Hubble constant at 69.8 — notably short of SH0ES’ 74.0 measurement using cepheids and H0LiCOW’s 73.3 from quasars, and more than halfway to Planck’s 67.4 prediction. “The Universe is just messing with us at this point, right?” one astrophysicist tweeted. Things were getting surd.

Dan Scolnic, a bespectacled young member of SH0ES based at Duke University, said that he, Riess and two other team members had gotten together, “trying to figure out what was in the paper. Adam and I then went out to dinner and we were pretty perplexed, because in what we had seen up to this point, the cepheids and TRGBs were in really good agreement.”

They soon homed in on the key change in the paper: a new way of measuring the effects of dust when gauging the intrinsic brightness of TRGBs — the first rung of the cosmic distance ladder. “We had a bunch of questions about this new method,” Scolnic said. Like other participants scattered throughout the Best Western Plus, they eagerly awaited Freedman’s talk the next day. Scolnic tweeted, “Tomorrow is going to be interesting.”

To Build a Distance Ladder

Tension, problem, crisis, surd — there has been a Hubble constant something for 90 years, ever since the American astronomer Edwin Hubble’s plots of the distances and recessional speeds of galaxies showed that the farther away a galaxy is, the faster it recedes, a sign that space itself is expanding (Hubble’s own refusal to accept this conclusion notwithstanding). One of the all-time greatest cosmological discoveries, cosmic expansion implies that the universe has a finite age.

The ratio of an object’s recessional speed to its distance gives the Hubble constant. But whereas it’s easy to tell how fast a star or galaxy is receding — just measure the Doppler shift of its frequencies, an effect similar to a siren dropping in pitch as the ambulance drives away — it’s far harder to tell the distance of a pinprick of light in the night sky.
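
In symbols, with v the recessional speed, d the distance, c the speed of light and z the measured fractional shift in the light’s wavelength:

$$H_0 = \frac{v}{d}, \qquad z \equiv \frac{\lambda_{\mathrm{obs}} - \lambda_{\mathrm{emit}}}{\lambda_{\mathrm{emit}}} \approx \frac{v}{c} \quad (v \ll c)$$

The redshift z comes almost for free from a spectrum; nearly all the difficulty in measuring H0 lives in d.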

It was Henrietta Leavitt, one of the human “computers” at the Harvard College Observatory, who discovered in 1908 that cepheid stars pulsate at a rate tied to their luminosity: big, bright cepheids pulsate more slowly than small, dim ones (just as a big accordion is harder to compress than a tiny one). And so, from the pulsation period of a distant cepheid, you can read off how intrinsically bright it is. Compare that to how faint the star appears, and you can tell its distance — and the distance of the galaxy it’s in.
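
A minimal sketch of that logic in Python. The period-luminosity coefficients below are illustrative placeholders, not any team’s actual calibration; the distance-modulus formula in the second function is the standard one:

```python
import math

def cepheid_absolute_magnitude(period_days):
    """Illustrative Leavitt-law form: longer period, brighter star.
    The coefficients are placeholder values, not a real calibration."""
    return -2.4 * (math.log10(period_days) - 1.0) - 4.0

def distance_parsecs(apparent_mag, absolute_mag):
    """Standard distance modulus: m - M = 5 * log10(d / 10 pc)."""
    return 10 ** ((apparent_mag - absolute_mag + 5) / 5)

# A hypothetical cepheid pulsing every 10 days, appearing at magnitude 20:
M = cepheid_absolute_magnitude(10.0)   # -4.0: its intrinsic brightness
d = distance_parsecs(20.0, M)          # about 630,000 parsecs
print(f"absolute magnitude {M:.1f}, distance {d:.3g} pc")
```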

In the 1920s, Hubble used cepheids and Leavitt’s law to infer that Andromeda and other “spiral nebulae” (as they were known) are separate galaxies, far beyond our Milky Way. This revealed for the first time that the Milky Way isn’t the whole universe — that the universe is, in fact, unimaginably vast. Hubble then used cepheids to deduce the distances to nearby galaxies, which, plotted against their speeds, revealed cosmic expansion.

Hubble overestimated the rate, pegging it at 500 kilometers per second per megaparsec, but the number dropped as cosmologists used cepheids to calibrate ever more accurate cosmic distance ladders. From the 1970s on, the eminent observational cosmologist and Hubble protégé Allan Sandage argued that H0 was around 50. His rivals claimed a value around 100, based on different astronomical observations. The vitriolic 50-versus-100 debate was raging in the early ’80s when Freedman, a young Canadian working as a postdoc at the Carnegie Observatories in Pasadena, California, where Sandage also worked, set out to improve cosmic distance ladders.

To build a distance ladder, you start by calibrating the distance to stars of known luminosity, such as cepheids. These standard candles can be used to gauge the distances to fainter cepheids in farther-away galaxies. This gives the distances of “Type Ia supernovas” in the same galaxies — predictable stellar explosions that serve as much brighter, though rarer, standard candles. You then use these supernovas to gauge the distances to hundreds of farther-away supernovas, in galaxies that are freely moving in the current of cosmic expansion, known as the “Hubble flow.” These are the supernovas whose ratio of speed to distance gives H0.
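
Schematically, the ladder’s final step is simple division. A sketch, with all numbers hypothetical and the hard calibration work on the lower rungs assumed done:

```python
# Rung 1: geometry (e.g., parallax) anchors nearby cepheid luminosities.
# Rung 2: those cepheids give distances to galaxies hosting Type Ia
#         supernovas, calibrating the supernovas' intrinsic brightness.
# Rung 3: calibrated supernovas in the Hubble flow give speed/distance pairs.

def h0_from_hubble_flow(supernovas):
    """Average v/d over Hubble-flow supernovas; v in km/s, d in Mpc."""
    return sum(v / d for v, d in supernovas) / len(supernovas)

# Hypothetical Hubble-flow sample (velocity in km/s, distance in Mpc):
sample = [(7400, 100.0), (14650, 198.0), (22300, 301.0)]
print(f"H0 = {h0_from_hubble_flow(sample):.1f} km/s/Mpc")  # about 74.0
```

Real analyses fit all the rungs jointly rather than averaging ratios, but the punchline is the same: H0 is a slope, speed over distance.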

But although a standard candle’s faintness is supposed to tell its distance, dust also dims stars, making them look farther away than they are. Crowding by other stars can make them look brighter (and thus closer). Furthermore, even supposed standard-candle stars have inherent variations due to age and metallicity that must be corrected for. Freedman devised new methods to deal with many sources of systematic error. When she started getting H0 values higher than Sandage’s, he became antagonistic. “To him, I was a young upstart,” she told me in 2017. Nevertheless, in the ’90s she assembled and led the Hubble Space Telescope Key Project, a mission to use the new Hubble telescope to measure distances to cepheids and supernovas with greater accuracy than ever before. The H0 value of 72 that her team published in 2001 split the difference in the 50-versus-100 debate.

Freedman was named director of Carnegie Observatories two years later, becoming Sandage’s boss. She was gracious and he softened. But “until his dying day,” she said, “he believed that the Hubble constant had a very low value.”

A few years after Freedman’s measurement of 72 to within 10% accuracy, Riess, who is a professor at Johns Hopkins University, got into the cosmic distance ladder game, setting out to nail H0 to within 1% in hopes of better understanding the dark energy he had co-discovered. Since then, his SH0ES team has steadily tightened the ladder’s rungs — especially the first and most important: the calibration step. As Riess put it, “How far away is anything? After that, life gets easier; you’re measuring relative things.” SH0ES currently uses five independent ways of measuring the distances to their cepheid calibrators. “They all agree quite well, and that gives us a lot of confidence,” he said. As they collected data and improved their analysis, the error bars around H0 shrank to 5% in 2009, then 3.3%, then 2.4%, then 1.9% as of March.

Meanwhile, since 2013, the Planck team’s increasingly precise iterations of its cosmic microwave background map have enabled it to extrapolate the value of H0 ever more precisely. In its 2018 analysis, Planck found H0 to be 67.4 with 1% accuracy. With Planck and SH0ES more than “four sigma” apart, a desperate need arose for independent measurements.

Tommaso Treu, one of the founders of H0LiCOW and a professor at the University of California, Los Angeles, had dreamed ever since his student days in Pisa of measuring the Hubble constant using time-delay cosmography — a method that skips the rungs of the cosmic distance ladder altogether. Instead, you directly determine the distance to quasars — the flickering, glowing centers of faraway galaxies — by painstakingly measuring the time delays between the multiple images of a quasar that form as its light bends around intervening matter.
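
The underlying idea, in rough strokes: the light paths behind the different images have different lengths and cross different depths of the lens’s gravity, so a flicker in the quasar shows up in one image before another. Given a model of the lensing mass, the measured delay fixes an absolute “time-delay distance,” and because all cosmological distances scale inversely with the Hubble constant, so does the delay:

$$\Delta t = \frac{D_{\Delta t}}{c}\,\Delta\phi, \qquad D_{\Delta t} \equiv (1 + z_{\mathrm{lens}})\,\frac{D_{\mathrm{lens}}\,D_{\mathrm{source}}}{D_{\mathrm{lens\text{-}source}}} \propto \frac{1}{H_0}$$

Here Δφ is the modeled difference in the lensing potential between image paths; a longer observed delay implies a lower H0.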

Tommaso Treu (center), a cosmologist at the University of California, Los Angeles, and a member of the H0LiCOW team, chatting with colleagues at the KITP conference.

Monica Almeida for Quanta Magazine

But while Treu and his colleagues were collecting quasar data, Freedman, Madore and their graduate students and postdocs were pivoting to tip-of-the-red-giant-branch stars. Whereas cepheids are young and found in the crowded, dusty centers of galaxies, TRGBs are old and reside in clean galactic outskirts. Using the Hubble Space Telescope to observe TRGB stars in 15 galaxies that also contain Type Ia supernovas, Freedman’s CCHP team was able to extend their ladder to supernovas in the Hubble flow and measure H0, as an additional point of comparison for Planck’s 67.4 and SH0ES’ 74.0.

“At some level I guess the expectation in your own head is, ‘OK, you’re going to come out one way or the other,’ right?” Freedman told me. “And you sort of … fall in the middle. And, ‘Oh! That’s interesting. OK.’ And that’s where we came out.”

Stuck in the Middle

My seatmate on the van the morning after Freedman’s paper dropped was a theorist named Francis-Yan Cyr-Racine, of the University of New Mexico. Earlier this year, he, Lisa Randall of Harvard University, and others proposed a possible solution to the Hubble constant tension. Their idea — a new, short-lived field of repulsive energy in the early universe — would speed up cosmic expansion, matching predictions to observations, though this and all other proposed fixes strike experts as a bit contrived.

When I brought up Freedman’s paper, Cyr-Racine seemed unsurprised. “It’s probably 70,” he said of H0 — meaning he thinks early-universe predictions and present-day observations might ultimately converge in the middle, and ΛCDM will turn out to work fine. (He later said he was half kidding.)

In the seminar room, Barry Madore sat down by me and another reporter and asked, “So, where do you think all this is heading?” To the middle, apparently. “You know that song, ‘Stuck in the Middle With You’?” he said. “Do you know the lyrics before? ‘Clowns to the left of me, jokers to the right. Here I am, stuck in the middle with you.’”

Another curveball came before lunch. Mark Reid of the Harvard-Smithsonian Center for Astrophysics presented new measurements of four masers — laserlike effects in galaxies that can be used to determine distances — that he had performed in the preceding weeks. Combined, the masers pegged H0 at 74.8, give or take 3.1. Adam Riess took a picture of the slide. Scolnic tweeted, “This week is too much. Go home H0, you’re drunk.”

When I spoke with Riess during the midday break, he seemed overwhelmed by all the new measurements. For several years, he said, he and his SH0ES colleagues had their “necks stuck out” in claiming a discrepancy with Planck’s Hubble constant value. “At that time, it was tension, and it was discrepancy, and, you know, we also got a lot of grief about it,” he said. But in two weeks, he had gone from “feeling fairly lonely” to having three new numbers to consider. Overall, Riess said, “the tension is getting greater because, you know, nobody is coming out below the Planck value.” If it was all a mistake, why didn’t some teams measure an expansion rate of 62 or 65?

As for that 69.8, Riess had questions about Freedman’s method of calibrating the first rung of her distance ladder using TRGBs in the Large Magellanic Cloud. “Now the Large Magellanic Cloud is not a galaxy; it’s a cloud. It’s a dusty, amorphous thing,” Riess said. “This is the great irony of it. They went to TRGBs to escape dust,” but they have to calibrate them somewhere — “that is, they have to pick some TRGBs where they say we know the distance by some other method. And the only place that they have done that in is the Large Magellanic Cloud.”

An hour later, Freedman, looking serene in a flower-print skirt, made her case. “If we put all our eggs in the cepheid basket, we will never uncover the unknown unknowns,” she said.

Wendy Freedman, a professor at the University of Chicago, argues that new and better measurements are needed before the Hubble constant discrepancy can be confirmed.

Monica Almeida for Quanta Magazine

She explained that she and her colleagues had used TRGBs in the Large Magellanic Cloud as their calibrators because the cloud’s distance has been measured extremely precisely in multiple ways. And they employed a new approach to correct for the effect of dust on the brightness of the TRGBs — one that utilized the stars themselves, leveraging their changes in brightness as a function of color. She noted that her paired TRGBs and supernovas, on the second rung of her distance ladder, show less variation than Riess’s paired cepheids and supernovas, suggesting that her dust measurement may be more accurate.

Freedman stressed during the discussion that better measurements are still needed to rule out systematic errors. “I think that’s where we are,” she said. “That’s just reality.”

From here, the discussion turned into a sparring contest between Freedman and Riess. “Wendy, to answer your question,” Riess said, though she hadn’t asked one, “there have been five fairly independent results presented so far. The dream of getting there is — getting there.”

The Room Where It Happens

Scolnic, the SH0ES scientist and Riess collaborator, suggested we go outside. We sat on a sunny bench near the peach stucco building. A salty breeze blew in from the Pacific. “Definitely a day unlike any day I’ve experienced,” he said.

H0LiCOW’s new result felt to him like a year ago, what with Freedman’s TRGBs and Reid’s masers. “That’s three big hits all within the last week. And I don’t really know where we stand,” he said. Even if the discrepancy is real, “there’s no good story now which explains everything, on the theory or the observation side. And that’s what makes this so puzzling.”

“In ‘Hamilton’-speak,” he said, “this is the room where it happens right now.”

Freedman appeared from the direction of the bluffs overlooking the ocean.

“Hey, Wendy,” Scolnic said. “Wendy, I was just saying, doesn’t this feel like the room where it happens, in ‘Hamilton’-speak? Like, as a little kid, wouldn’t you want to be in this room?”

“Isn’t this where we want to be?” Freedman said. “We’re working on pretty amazing data. Things that are telling us something about how the universe is evolving.”

“And the numbers are this close; we’re arguing about a few percent,” Scolnic said. “For all the sociological drama, it’s funny that it’s about 3 kilometers per second per megaparsec.”

“You have the right attitude,” Freedman said.

It was time to attend the conference dinner, so they went to figure out how to get back in the building, which was locked after hours.

New Physics

Day three brought two new measurements of the Hubble constant: A cosmic distance ladder calibrated with “Mira” stars gave 73.6, and galactic surface brightness fluctuations gave 76.5, both plus or minus 4. Adam Riess took more photos, and by the end of the day a plot had been created reflecting all the existing measurements.

An infographic titled “The Hubble Constant Controversy.” The current expansion rate of the universe, called the Hubble constant (H₀), is tied to the universe’s fundamental ingredients and laws. But when cosmologists calculate the value of H₀ based on measurements of the early universe, they predict a lower value than when they actually measure how quickly various distant objects are moving away from us in the present-day universe. However, a new measurement of expansion in the present-day universe finds an intermediate H₀, complicating the cosmological debate. The plotted estimates, all in kilometers per second per megaparsec: from the early universe, Planck (anisotropies in the cosmic microwave background) and DES (baryon acoustic oscillations), both at 67.4; from the present-day universe, H0LiCOW (gravitationally lensed quasars) at 73.3, MIRAS (Mira variable stars) at 73.6, SH0ES (cepheid variable stars) at 74.0, MCP (megamasers) at 74.8 and SBF (galaxy surface brightness fluctuations) at 76.5; and, in between, CCHP (tip-of-the-red-giant-branch stars) at 69.8, also a present-day measurement.

Quanta Magazine; Source: Vivien Bonvin

The two early-universe predictions studded the left side of the plot, with tight error bars around 67.4. Five late-universe measurements lined up on the right, around 73 or 74. And there in the middle was Freedman’s 69.8, the wrench in the works, the hole in the narrative, the painful conciliatory suggestion that all the measurements might come together in the end, leaving us with the mysteries of ΛCDM and nothing new to say about nature.

Then again, all the late-universe measurements of H0, even Freedman’s, fall to the right of 67.4. Erroneous measurements should come out low as well as high. So maybe the discrepancy is real.

The last speaker, Cyr-Racine, held a vote about what the discrepancy should be called. Most people voted for “tension” or “problem.” Graeme Addison, an expert on baryon acoustic oscillations, said in an email after the conference, “My feeling is that the Hubble discrepancy is a real problem, and that we are missing some important physics somewhere. But the solutions people have put together so far are not super convincing.”

Addison finds the consistency of H0LiCOW and SH0ES especially compelling. And although Freedman’s paper suggested “that uncertainties associated with the SH0ES cepheids may have been underestimated,” he said there are also questions about the TRGB calibration in the Large Magellanic Cloud. Freedman claims to have improved the dust measurement, but Riess and colleagues contest this.

This past Monday, in a paper posted on arxiv.org, Riess and company argued that Freedman and her team’s calibration of TRGBs relied on some low-resolution telescope data. They wrote that swapping it out for higher-resolution data would increase the H0 estimate from 69.8 to 72.4 — in range of SH0ES, H0LiCOW and the other late-universe measurements. In response, Freedman said, “There seem to be some very serious flaws in their interpretation” of her team’s calibration method. She and her colleagues have redone their own analysis using the newer data and, she wrote in an email, “We DO NOT find what [Riess and coauthors] are claiming.”

If the four new H0 measurements on the right can’t quite seem to overcome Freedman’s middle value in some people’s minds, it’s due partly to her usual equanimity. Additionally, “she is extremely well respected, and has a reputation for doing meticulous and thorough work,” said Daniel Holz, a Chicago astrophysicist who uses neutron star collisions as “standard sirens,” a promising new technique for measuring H0.

Meanwhile, the next data release from the Gaia space telescope, due in two or three years, will enable researchers to calibrate cepheids and TRGBs geometrically based on their parallax, or how far apart they look from different positions in the sky. The James Webb Space Telescope, Hubble’s successor, will also yield a wellspring of new and better data when it launches in 2021. Cosmologists will know the value of H0 — probably within the decade — and if there is still a discrepancy with predictions, by decade’s end they could be well on their way to discovering why. They’ll know it’s a tension or a crisis and not a surd.
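
Parallax is the one fully geometric rung: for a nearby star, the distance in parsecs is simply the reciprocal of its parallax angle in arcseconds,

$$d\,[\mathrm{pc}] = \frac{1}{p\,[\mathrm{arcsec}]}$$

which is why Gaia’s ultra-precise astrometry can anchor the cepheid and TRGB calibrators directly, with no standard candles involved.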
