Modern physics recognizes four fundamental forces: electromagnetism, the weak nuclear force, the strong nuclear force, and gravity. While the first three are described by quantum field theory (the Standard Model of Particle Physics), gravity is described by Einstein's general theory of relativity. Surprisingly, gravity remains the biggest challenge for physicists. While the theory accurately describes how gravity works for planets, stars, galaxies, and clusters, it does not appear to apply perfectly at all scales.
While general relativity has been validated repeatedly over the past century (beginning with the Eddington eclipse experiment in 1919), gaps still appear when scientists try to apply it at the quantum scale and to the universe as a whole. In a new study led by Simon Fraser University, an international team of researchers tested general relativity on the largest of scales and concluded that it may need a few tweaks. Their method could help scientists resolve some of the biggest mysteries facing astrophysicists and cosmologists today.
The team consisted of researchers from Simon Fraser University, the Institute of Cosmology and Gravitation at the University of Portsmouth, the Center for Particle Cosmology at the University of Pennsylvania, the Osservatorio Astronomico di Roma, the UAM-CSIC Institute for Theoretical Physics, the Lorentz Institute at Leiden University, and the Chinese Academy of Sciences (CAS). Their results appeared in a paper titled "Imprints of cosmological tensions in reconstructed gravity," recently published in Nature Astronomy.
According to Einstein's field equations of general relativity, the universe could not be static: it had to be either expanding or contracting (otherwise gravity would cause it to collapse). While Einstein initially resisted this idea and added a term to his equations to keep the universe in balance (his "cosmological constant"), Edwin Hubble's observations in the 1920s showed that the universe is indeed expanding. Quantum theory also predicts that the vacuum of space is filled with energy, which goes unnoticed because conventional methods can only measure changes in energy (rather than the total amount).
In the 1990s, new observatories such as the Hubble Space Telescope (HST) pushed the boundaries of astronomy and cosmology. Thanks to surveys like the Hubble Deep Fields (HDF), astronomers could see objects as they appeared more than 13 billion years ago (less than a billion years after the Big Bang). To their surprise, they found that the rate of expansion has been accelerating for the past several billion years. This led to what is known as "the old cosmological constant problem": either gravity is weaker on cosmological scales, or some mysterious force is driving the cosmic expansion.
Lead author Levon Pogosian (Professor of Physics, Simon Fraser University) and co-author Kazuya Koyama (Professor of Cosmology, University of Portsmouth) summarized the issue in a recent article in The Conversation. As they explained it, the cosmological constant problem boils down to a single question with drastic implications:
"[W]hether the vacuum energy actually gravitates – exerting a gravitational force and changing the expansion of the universe. If so, why is its gravity so much weaker than predicted? If the vacuum does not gravitate at all, what is causing the cosmic acceleration? We don't know what dark energy is, but we need to assume it exists in order to explain the universe's expansion. Similarly, we must also assume that some kind of invisible matter, called dark matter, is present to explain how galaxies and clusters evolved to be the way we observe them today."
The existence of dark energy is part of the standard cosmological theory known as the Lambda Cold Dark Matter (LCDM) model, where Lambda represents the cosmological constant/dark energy. According to this model, the mass-energy density of the universe consists of roughly 70% dark energy, 25% dark matter, and 5% normal (visible or "luminous") matter. While this model has successfully matched the observations collected by cosmologists over the past 20 years, it assumes that most of the universe is made up of components that have never been directly detected.
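As a rough illustration (not a calculation from the study itself), the density fractions quoted above already imply that the expansion must accelerate under the LCDM model. The standard deceleration parameter q0 for a flat universe is negative whenever the dark energy fraction exceeds half the matter fraction:

```python
# Sketch: do the quoted LCDM density fractions imply acceleration?
omega_m = 0.25 + 0.05   # dark matter + normal matter
omega_lambda = 0.70     # dark energy (cosmological constant)

# Deceleration parameter today for a flat universe:
# q0 = omega_m / 2 - omega_lambda; q0 < 0 means accelerating expansion.
q0 = omega_m / 2 - omega_lambda
print(f"q0 = {q0:.2f}")  # negative, so the expansion accelerates
```

With these numbers q0 comes out to -0.55, comfortably negative, which is why dark energy dominates the dynamics of the universe today.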
Hence, some physicists have ventured that general relativity may need adjustments to explain the universe as a whole. In addition, astronomers noticed a few years ago that measuring the rate of cosmic expansion in different ways yields different values. This problem, Pogosian and Koyama explained, is known as the Hubble tension:
“The disagreement, or tension, is between two values of the Hubble constant. One is the number predicted by the LCDM cosmological model, which was developed to match the light left over from the big bang (the cosmic microwave background radiation). The other is the expansion rate measured by observing exploding stars known as supernovae in distant galaxies.”
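To give a sense of the size of this disagreement, the commonly cited values are roughly 67.4 km/s/Mpc from the CMB-based prediction and 73.0 km/s/Mpc from the supernova distance ladder (illustrative figures, not taken from this study). A quick sketch of how the discrepancy is expressed in standard deviations:

```python
import math

# Illustrative (approximate) published values, not from this study:
h0_cmb, err_cmb = 67.4, 0.5   # CMB + LCDM prediction, km/s/Mpc
h0_sne, err_sne = 73.0, 1.0   # local supernova measurement, km/s/Mpc

# Combined standard deviations separating the two values.
tension_sigma = abs(h0_sne - h0_cmb) / math.hypot(err_cmb, err_sne)
print(f"{tension_sigma:.1f} sigma")  # roughly a 5-sigma discrepancy
```

A gap of around five combined standard deviations is far too large to dismiss as a statistical fluke, which is why the tension is taken so seriously.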
Many theoretical ideas have been proposed to adapt the LCDM model to explain the Hubble tension. Among them are alternative gravitational theories, such as Modified Newtonian Dynamics (MOND), a modified version of Newton's law of universal gravitation that does away with the need for dark matter. For more than a century, astronomers have been testing general relativity by observing how the curvature of spacetime changes in the presence of gravitational fields. These tests have become particularly extreme in recent decades, including observations of how supermassive black holes (SMBHs) affect orbiting stars and how gravitational lensing magnifies and bends the path of light.
For their study, Pogosian and colleagues used Bayesian inference, a statistical method for calculating the probability of a hypothesis as more data becomes available. From there, the team modeled cosmic expansion based on three inputs: CMB data from the ESA's Planck satellite; supernova and galaxy catalogs such as the Sloan Digital Sky Survey (SDSS) and the Dark Energy Survey (DES); and the predictions of the LCDM model.
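The core idea of Bayesian inference can be sketched in a few lines. This is a minimal toy version, not the study's actual pipeline: each new measurement multiplies the current belief (the posterior) by a Gaussian likelihood, and the distribution narrows as data accumulate. The H0 measurements below are hypothetical values chosen for illustration:

```python
import numpy as np

# Toy Bayesian update on the Hubble constant (hypothetical data).
h0_grid = np.linspace(60.0, 80.0, 2001)   # candidate H0 values, km/s/Mpc
posterior = np.ones_like(h0_grid)
posterior /= posterior.sum()              # flat prior, normalized

def update(posterior, measurement, sigma):
    """Apply Bayes' rule: posterior is prior times a Gaussian likelihood."""
    likelihood = np.exp(-0.5 * ((h0_grid - measurement) / sigma) ** 2)
    posterior = posterior * likelihood
    return posterior / posterior.sum()    # renormalize

# As data are added, the posterior sharpens around a best value.
for value, sigma in [(70.0, 3.0), (68.0, 1.5), (67.5, 1.0)]:
    posterior = update(posterior, value, sigma)

best = h0_grid[np.argmax(posterior)]
print(f"posterior peaks near H0 = {best:.2f} km/s/Mpc")
```

The final estimate is pulled toward the most precise measurements, weighted by their uncertainties, which is exactly the behavior that lets cosmologists combine CMB, supernova, and galaxy-survey data into a single constraint.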
"Together with a team of cosmologists, we put the basic laws of general relativity to the test," said Pogosian and Koyama. "We also explored whether modifying Einstein's theory could help solve some of cosmology's outstanding problems, such as the Hubble tension. To determine whether GR is correct on the largest scales, we examined three aspects of it simultaneously for the first time. These were the expansion of the universe, the effects of gravity on light, and the effects of gravity on matter."
Their results showed some inconsistencies with Einstein’s predictions, although they had fairly low statistical significance. They also found that solving the Hubble tension problem was difficult simply by tweaking the theory of gravity, suggesting that an additional force may be needed or there may be errors in the data. If the former is true, Pogosian and Koyama said, it’s possible that this force was present during the early universe (c. 370,000 years after the Big Bang) when protons and electrons first came together to create hydrogen.
Several possibilities have been proposed in recent years, ranging from a special form of dark matter and an early type of dark energy to primordial magnetic fields. In any case, this latest study indicates the need for future research that may lead to a revision of the most widely accepted cosmological model. Said Pogosian and Koyama:
"[O]ur research has shown that it is possible to test the validity of general relativity over cosmological distances using observational data. While we haven't solved the Hubble problem yet, in a few years we will have a lot more data from new probes. This means that we can use these statistical methods to continue testing general relativity, explore the limits of modifications, and pave the way toward solving some of the outstanding challenges in cosmology."