When one set of data fails to agree with another over the same phenomenon, things can get interesting. It’s in such inconsistencies that new discoveries are sometimes made, and when the inconsistency involves the expansion of the universe, there are plenty of reasons to resolve the problem. Lately the rate of expansion has been at issue, given the discrepancy between measurements of the cosmic microwave background and estimates based on Type Ia supernovae. The result: the so-called Hubble Tension.
It’s worth recalling that it was a century ago that Edwin Hubble measured extragalactic distances by using Cepheid variables in the galaxy NGC 6822. The measurements were necessarily rough because they were complicated by everything from interstellar dust to a lack of the necessary resolution, so that the Hubble constant was not known to better than a factor of 2. Refinements in instrumentation tightened up the constant considerably as work progressed over the decades, but the conflict with values inferred from the cosmic microwave background remained an open question.
Now we have new work that looks at the rate of expansion using data from the James Webb Space Telescope, doubling the sample of galaxies used to calibrate the supernova results. The paper’s lead author, Wendy Freedman of the University of Chicago, argues that the JWST results resolve the tension. With Hubble data included in the analysis as well, Freedman calculates a Hubble value of 70.4 kilometers per second per megaparsec, plus or minus 3%. This brings the supernova results into statistical agreement with recent cosmic microwave background data showing 67.4, plus or minus 0.7%.
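To see why those figures amount to agreement, it helps to convert the percentage uncertainties into absolute ones. Here is a quick back-of-the-envelope check, a sketch in Python using only the numbers quoted above, not the paper’s full error analysis:

```python
# Convert the quoted percentage uncertainties into km/s/Mpc.
h0_cchp, pct_cchp = 70.4, 0.03    # Freedman et al. (JWST + HST), +/- 3%
h0_cmb,  pct_cmb  = 67.4, 0.007   # CMB-based value, +/- 0.7%

sig_cchp = h0_cchp * pct_cchp     # ~2.1 km/s/Mpc
sig_cmb  = h0_cmb * pct_cmb       # ~0.5 km/s/Mpc

print(f"CCHP: {h0_cchp:.1f} +/- {sig_cchp:.1f} km/s/Mpc")
print(f"CMB:  {h0_cmb:.1f} +/- {sig_cmb:.1f} km/s/Mpc")
# The 3.0 km/s/Mpc difference is only ~1.4 times the combined uncertainty,
# i.e. within the conventional 2-sigma bound for statistical agreement.
```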
Image: The University of Chicago’s Freedman, a key player in the ongoing debate over the value of the Hubble Constant. Credit: University of Chicago.
While the cosmic microwave background tells us about conditions early in the universe’s expansion, Freedman’s work on supernovae is aimed at pinning down how fast the universe is expanding in the present, which demands accurate measurements of extragalactic distances. Knowing the intrinsic peak brightness of these supernovae allows their apparent brightness to be used to calculate their distance. Type Ia supernovae are consistent in brightness at their peak, making them, like the Cepheid variables Hubble used, helpful ‘standard candles.’
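The arithmetic behind a standard candle is the inverse-square law, usually expressed as the distance modulus m − M = 5 log10(d / 10 pc). A minimal sketch, using the commonly quoted approximate peak absolute magnitude for Type Ia supernovae and a hypothetical observed magnitude (illustrative values, not figures from the paper):

```python
def distance_pc(m_app, M_abs):
    """Distance from the distance modulus: m - M = 5 * log10(d / 10 pc)."""
    return 10 ** ((m_app - M_abs + 5) / 5)

# Type Ia supernovae peak near absolute magnitude M ~ -19.3 (approximate,
# commonly quoted value). Suppose one is observed peaking at m = 16.0:
d_pc = distance_pc(16.0, -19.3)
print(f"{d_pc / 1e6:.0f} Mpc")   # ~115 Mpc
```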
The same complications that plagued Hubble, such as dimming by interstellar dust and other effects on observed brightness, have to be accounted for, but JWST has four times the resolution of the Hubble Space Telescope and is roughly 10 times as sensitive, making its measurements a new gold standard. Co-author Taylor Hoyt (Lawrence Berkeley Laboratory) notes the result:
“We’re really seeing how fantastic the James Webb Space Telescope is for accurately measuring distances to galaxies. Using its infrared detectors, we can see through dust that has historically plagued accurate measurement of distances, and we can measure with much greater accuracy the brightnesses of stars.”
Image: Scientists have made a new calculation of the speed at which the universe is expanding, using the data taken by the powerful new James Webb Space Telescope on multiple galaxies. Above, Webb’s image of one such galaxy, known as NGC 1365. Credit: NASA, ESA, CSA, Janice Lee (NOIRLab), Alyssa Pagan (STScI).
A lack of agreement between the CMB findings and the supernova data could have been pointing to interesting new physics, but according to this work, the Standard Model of cosmology holds up. In a way, that’s too bad for anyone hoping to use the discrepancy to probe mysterious phenomena like dark energy and dark matter; it seems we’ll have to look elsewhere for answers to their origin. Ahead for Freedman and team are new measurements of the Coma cluster that she suggests could fully resolve the matter within years.
As the paper notes:
While our results show consistency with ΛCDM (the Standard Model), continued improvement to the local distance scale is essential for further reducing both systematic and statistical uncertainties.
The paper is Freedman et al., “Status Report on the Chicago-Carnegie Hubble Program (CCHP): Measurement of the Hubble Constant Using the Hubble and James Webb Space Telescopes,” The Astrophysical Journal Vol. 985, No. 2 (27 May 2025), 203 (full text).
Hubble’s Loaf
by J. O.
The universe, they say,
is dough.
Not matter, not thought, not silence—
but dough,
stretching in the heat of something we mistake for time.
Galaxies are raisins.
Not drifting,
but drifting nonetheless,
because the dough itself exhales.
Each raisin
remains where it was kneaded,
and yet it flees from all others
like distance awakening in the flesh of space.
They call it a constant.
Seventy kilometers per second per megaparsec,
as if numbers could cradle awe.
As if the sky were made of rulers and rules.
But what they mean is this:
for every three million light-years between us,
the space between grows
by seventy kilometers every second.
Not because the raisins race apart,
but because what is
becomes more.
You are not moving.
But the stars are receding.
The breath of space
pulls you from every other center.
And you thought you had none.
Then—
the tension.
Two bakers,
measuring the loaf.
One whispers: seventy-three.
The other: sixty-seven point four.
The dough, it seems, tells two truths.
But they cannot both be true.
Unless the oven is haunted.
Unless time itself is fractured.
Unless there are recipes
written in the dark
in tongues we have not yet remembered.
So now the head breaks open.
Because if space itself is expanding,
then what is motion?
What is speed,
when even the ground beneath the clock
dissolves?
The loaf rises not into emptiness,
but into itself.
Its curvature swells
in silence.
And gravity? Perhaps
not a force
but the residue
of something curling inward
to avoid being named.
Perhaps dark energy
is not energy,
but doubt
woven into the yeast.
The more you measure,
the more the universe hesitates
to agree with itself.
And so we bake
in a mystery
whose heat comes not from fire
but from
a question
that never cools.
And yet another breathless “it could be new physics” bites the dust. Whether “new physics,” “aliens,” or some other fanciful wish fulfillment, I am encouraged that the careful application of science tends to dispel such claims and find more prosaic explanations.
I hesitate to suggest it is something in the zeitgeist, like “witch-finding”, but I wonder if future observers will look back at our time and find some explanation, whether it is a need for attention amid the cacophony of our information ecosystem, the crises we are facing, or something else.
I find it almost calming that phenomena that appear confusing resolve to “normalcy”. Now, when something truly strange turns up that leads to a genuinely new discovery that changes our accepted knowledge of the universe, perhaps the impact will be that much greater, at least for me.
Something in a zeitgeist, or a trend of thought and feeling in a period, could be unconscious and biased in favor of a specific world view, such as a religious zeitgeist. In 2009, I came up with the idea that if we can’t find any physical evidence for the existence of dark energy and dark matter, they must be completely imaginary constructs that are the unconscious projection of evil into outer space, or its concretization into a physical particle.
The witch-finding might well apply here philosophically or psychologically, since the unknown is always dark: it doesn’t reflect, emit, or absorb any light, because it is not a physical concept. Physicists, astronomers, and astrophysicists use imagination in theory-making and don’t always consider that inner processes might result in bias. Consequently, there really might not be any new physics, but only a refinement and fuller knowledge of the capability of the old physics, the first principles. The poem seems to put the mystery within us.
There are many flaws in the Lambda Cold Dark Matter (ΛCDM) Big Bang theory, which is not at all a five-sigma result. The Hubble flow could be an illusion.
“Wendy Freedman of the University of Chicago, argues that the JWST results resolve the tension.”
This is such an extraordinary claim that I just had to drop everything and skim the paper for particulars. My initial response: no.
To be fair, the paper does list many sources of error or uncertainty. But those do not excuse the claim. The two that immediately jump out are these:
1) They calibrate on just one distant galaxy. They therefore inherit all the measurement and systematic errors associated with that one data point, and then add their own. Which brings us to…
2) The error bars are really wide. So wide that they encompass everything contributing to the Hubble tension. That is not unification, just one great big fuzzy PDF (probability density function) that resolves nothing. The conclusions just refer back to Figure 1, which made me sad when I first came across it early in the paper.
The Hubble tension remains unresolved, IMHO.
While you laugh at the standard deviations in Figure 1, they do indicate that the “tension” due to different methods with tighter ranges may be incorrect. IOW, the methods of direct stellar observations and the CMB may be much closer than previously thought, allowing for their convergence to a single value, rather than truly separate estimates with less uncertainty.
Clearly, the mean values are not yet very close, although closer than before, and the ranges now overlap. I don’t claim to understand the nuances, but it does seem that there are real uncertainties between the stars used for direct observation, both of magnitude and distance. The higher resolution JWST data compared to the HST does imply greater accuracy, especially with the ability to measure different stellar types.
This Wikipedia entry has a very nice image showing the tension as a result of different measurement methods, and I note the CMB values for the H_0 constant have very tight error bars. This new data pulls the direct observation data for the H_0 from 74 down to 70. The CMB data is anchored at 67, but some of those CMB methods have means that push closer to the new data on direct observations.
Therefore, I see it as more appropriate to be conservative and suggest that there is no “tension”, and that the values really will converge, or that the differences will be explained by known physics, rather than jumping to the possibility of “new physics”.
Indeed, I would even consider removing the “?” in the essay title, as conventionally this is answered by a “no”, whilst in this case, there is reason to believe the answer is “yes”.
Either further measurements will recreate the “tension” or effectively eliminate it, resolving the Hubble constant to a value that is encompassed by the best indirect and direct measurements with relatively narrow error bars. Either way, let’s not succumb to the call for “new physics” any time there appear to be differences in observations that cannot be explained. It just leads to later disappointment when a prosaic explanation turns out to be sufficient.
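To put numbers on the before-and-after picture, here is a small sketch quantifying “tension” as the separation between two measurements in units of their combined uncertainty. The inputs are commonly quoted illustrative values (a pre-JWST Cepheid calibration of roughly 73 ± 1 and a CMB value of 67.4 ± 0.5), not figures from this paper:

```python
def tension_sigma(m1, s1, m2, s2):
    """Separation of two measurements in units of their combined 1-sigma error."""
    return abs(m1 - m2) / (s1 ** 2 + s2 ** 2) ** 0.5

# Illustrative, commonly quoted values (km/s/Mpc):
print(f"Cepheid (pre-JWST) vs CMB: {tension_sigma(73.0, 1.0, 67.4, 0.5):.1f} sigma")  # ~5.0
print(f"CCHP (JWST)        vs CMB: {tension_sigma(70.4, 2.1, 67.4, 0.5):.1f} sigma")  # ~1.4
```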
“While you laugh…”
No I don’t. Where do you get that? The study they did is perfectly fine, and it properly lays out the limits of the study and how it can be improved in the future. My concern was with the lead author’s comment, which is why I zeroed in on that.
This study does not resolve the tension. Clearly there will have to be a resolution, eventually. Perhaps additional data from JWST observations will help us get there. But we are not there yet. That is all.
Wikipedia entry on the Hubble constant
There are extragalactic maser sources with well-determined parallaxes. A recent-ish (2013) paper is “Toward a New Geometric Distance to the Active Galaxy NGC 4258. III. Final Results and the Hubble Constant”
https://iopscience.iop.org/article/10.1088/0004-637X/775/1/13/meta
They got a distance of 7.60 ± 0.17 ± 0.15 Mpc. Using the Cepheids in NGC 4258 and their distance value, they got H0 = 72.0 ± 3.0 km s⁻¹ Mpc⁻¹.
Maybe they can use Webb to get better magnitudes for the Cepheids in this galaxy and recalibrate H0. Or get parallaxes for more distant masers.
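As an aside on the two error terms in that distance: the first is statistical and the second systematic, and a common convention (assuming they are independent) is to combine them in quadrature, e.g.:

```python
# Combine statistical and systematic uncertainties in quadrature.
stat, syst = 0.17, 0.15                       # Mpc, as quoted above
total = (stat ** 2 + syst ** 2) ** 0.5
print(f"distance: 7.60 +/- {total:.2f} Mpc")  # ~7.60 +/- 0.23 Mpc
```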
So strange that this should come out now, just days after a big article in The Atlantic on Adam Riess (Nobel prize for the acceleration of the universe’s expansion) and his growing belief that the Hubble tension is real, and that dark energy is not constant. Freedman is quoted in that article too. Here is an archive link (the article is otherwise behind a paywall): https://archive.ph/FoLGh
I believe this issue is less about science and more about open minds versus closed wallets. Grant funding tends to favor those who conform to established norms, while the real tension lies with those who think independently.
September 23, 2024
Quantized redshift and challenges to Big Bang hypothesis.
The Hoyle–Narlikar variable mass hypothesis provides a framework in which quasars are ejected from galactic nuclei, starting with low mass and high redshift, extracting energy from the surrounding system, and over time, increasing in mass and decreasing in redshift, with redshift being related to mass rather than distance.
The recently observed GNz7q is a dusty starburst galaxy from which a luminous quasar is apparently emerging, a situation that is not consistent with the presence of a still young central black hole in a less massive phase at high redshift.
GN-z11 is observed as it was about 400 million years after the Big Bang and appears to be a moderately massive, metal-free, second-generation galaxy. JWST has observed a supermassive black hole forming approximately 200 million years after the Big Bang, which raises the question as to how this SMBH formed so quickly just after the birth of the universe. Many other observations raise fundamental questions about the formation and evolution of both galaxies and quasars.
Recent observations by JWST and other telescopes violate standard cosmology’s fundamental principle that the universe is on average homogeneous and isotropic. Also violated are standard model predictions about high redshift, high luminosity, metallicity, and carbon evolution in the early universe, including the presence of large filamentary objects, galaxies like the Milky Way, and unexpected morphologies of objects at both high and low redshift.
https://phys.org/news/2024-09-quantized-redshift-big-hypothesis.html
[Submitted on 12 Aug 2024]
Quantized Redshift and its significance for recent observations.
“With the recent observational evidence in extra galactic astronomy, the interpretation of the nature of quasar redshift continues to be a research interest. Very high redshifts are being detected for extragalactic objects that are presumably very distant and young while also exhibiting properties that are characteristic of a more mature galaxy such as ours. According to Halton Arp and Geoffrey Burbidge, redshift disparities consist of an intrinsic component and are related to an evolutionary process. Karlsson observed redshift periodicity at integer multiples of 0.089 in log scale and Burbidge observed redshift periodicity at integer multiples of 0.061 in linear scale. Since Singular Value Decomposition (SVD) based periodicity estimation is known to be superior for noisy data sets, especially when the data contains multiple harmonics and overtones, mainly irregular in nature, we have chosen it to be our primary tool for analysis of the quasar-galaxy pair redshift data. We have observed a fundamental periodicity of 0.051 with a confidence interval of 95% in linear scale with the site-available Sloan Digital Sky Survey data release 7 (SDSS DR7) quasar-galaxy pair data set. We have independently generated quasar-galaxy pair data sets from both 2dF and SDSS and found fundamental periodicities of 0.077 and 0.089 in log scale with a confidence interval of 95%.”
https://arxiv.org/abs/2408.07101
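For readers wondering what “SVD-based periodicity estimation” looks like in practice, below is a minimal, self-contained sketch of the general technique (a Hankel/trajectory-matrix embedding followed by SVD, as in singular spectrum analysis), run on synthetic data built with an assumed period of 0.051. It is an illustration of the method, not the authors’ pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "redshifts" clustered at integer multiples of an assumed
# period dz = 0.051, plus noise (purely illustrative data).
dz = 0.051
z = rng.integers(1, 40, size=2000) * dz + rng.normal(0, 0.005, size=2000)

# Bin the redshifts: periodic clustering becomes an oscillating histogram.
bin_width = 0.005
counts, _ = np.histogram(z, bins=np.arange(0.0, 2.1, bin_width))
signal = counts - counts.mean()

# Embed the histogram in a Hankel (trajectory) matrix and take its SVD.
# The leading singular vectors capture the dominant oscillation even in noise.
L = 100
X = np.column_stack([signal[i:i + L] for i in range(len(signal) - L + 1)])
U, s, Vt = np.linalg.svd(X, full_matrices=False)

# Read the period off the leading left singular vector with an FFT.
spectrum = np.abs(np.fft.rfft(U[:, 0]))
freqs = np.fft.rfftfreq(L, d=bin_width)      # cycles per unit redshift
peak = np.argmax(spectrum[1:]) + 1           # skip the DC bin
print(f"estimated periodicity: {1 / freqs[peak]:.3f}")  # ~0.05
```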
So that is the abstract. I have no idea what this means. Can you explain what it is in a way that a 10-year-old can understand, and what its significance is?
I put it to the AI and got this: https://www.perplexity.ai/search/https-arxiv-org-abs-2408-07101-pq_coOpwRdmfQY0hs0PH8A Apparently people have proposed that redshift is quantized, and this paper generated results consistent with that from a larger dataset. There’s a Wikipedia article: https://en.wikipedia.org/wiki/Redshift_quantization
Interesting thought… redshift is really odd because it seems to have special license to just _eat_ energy right out of the cosmos in defiance of conservation law, and now there’s something else odd to matchmake with that. (The AI flailed around but I don’t think it hit any marks that time)
So if redshift is quantized rather than continuous, how does that affect the Hubble constant? Increase/decrease and by how much? Is it even relevant if true, or a phenomenon that impacts other features of the cosmos?
Thank you, that was exactly what I was looking for. It would be beneficial if someone used AI to analyze the data sets now.
My perspective on why this occurs is likely too controversial for most cosmologists:
Alternative Cosmological Models:
The Hoyle–Narlikar variable mass hypothesis and related models propose that quasars are ejected from galactic nuclei, and their redshift is related to their mass and evolutionary stage, not just their distance. This can naturally lead to intrinsic redshift components and quantization effects.
Halton Arp provided substantial evidence from actual observations, showing that galaxy clusters were ejected from the cores of large Seyfert galaxies. His book, “Seeing Red,” details these observations.
Centaurus A, located 12 million light-years away, is the nearest Seyfert galaxy and quasar in an active state.
https://www.icrar.org/wp-content/uploads/2021/12/All-wavelengths_B.jpg
“Centaurus A is a giant elliptical active galaxy 12 million light-years away. At its heart lies a black hole with a mass of 55 million suns. This composite image shows the galaxy and the surrounding intergalactic space at several different wavelengths. The radio plasma is displayed in blue and appears to be interacting with hot X-ray emitting gas (orange) and cold neutral hydrogen (purple). Clouds emitting Halpha (red) are also shown above the main optical part of the galaxy which lies in between the two brightest radio blobs. The ‘background’ is at optical wavelengths, showing stars in our own Milky Way that are actually in the foreground. Credit: Connor Matherne, Louisiana State University (Optical/Halpha), Kraft et al. (X-ray), Struve et al. (HI), Ben McKinley, ICRAR/Curtin. (Radio).”
Astronomers capture black hole eruption spanning 16 times the full Moon in the sky.
https://www.icrar.org/centaurus/
The video on this site illustrates the scale of this quasar, showing how young quasars/galaxies are ejected from the core of the larger quasar.
Prompt: What impact does redshift quantization have on the measurements for the Hubble constant?
Gemini response:
## Redshift Quantization: A Fringe Theory with No Impact on Mainstream Hubble Constant Measurements
**Redshift quantization, the controversial hypothesis that the redshifts of galaxies are not smoothly distributed but fall into discrete, preferred values, holds no sway in mainstream cosmology and therefore has no impact on the measurement of the Hubble constant.** While proponents of this idea argue it would fundamentally invalidate our understanding of cosmic expansion, the broader scientific community has largely dismissed the evidence for redshift quantization as a product of statistical misinterpretation and observational biases.
The Hubble constant ($H_0$) is a fundamental parameter in cosmology that describes the rate at which the universe is expanding. It is determined by measuring the redshift of distant galaxies and their distances. The standard interpretation, underpinned by a wealth of observational evidence, is that redshift is a continuous indicator of a galaxy’s recessional velocity due to the expansion of space.
### The Radical Claim of Quantized Redshifts
The theory of redshift quantization posits that the observed redshifts ($z$) of galaxies tend to cluster around specific, periodic values. This suggests that the distances to galaxies are not random but are arranged in concentric shells around our own galaxy.
If redshift quantization were real, it would have a profound and revolutionary impact on our measurement of the Hubble constant. The very foundation of the Hubble-Lemaître law, which states that a galaxy’s recessional velocity is proportional to its distance ($v = H_0 d$), would be undermined. If redshift is not primarily a measure of recessional velocity but is instead an intrinsic property of galaxies or a result of some unknown physical law, then the entire methodology for calculating $H_0$ from redshift surveys would be incorrect. The Hubble constant, as we know it, would cease to be a meaningful measure of the universe’s expansion rate.
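As a concrete illustration of that relation (an aside with round numbers, not part of the quoted response): a hypothetical galaxy 100 megaparsecs away, with $H_0 = 70$ km/s/Mpc, recedes at about 7,000 km/s, a redshift of roughly 0.023.

```python
# Illustrative check of the Hubble-Lemaitre law v = H0 * d (round numbers).
H0 = 70.0            # km/s/Mpc, assumed round value
d = 100.0            # Mpc, hypothetical galaxy distance
v = H0 * d           # recessional velocity in km/s
c = 299792.458       # speed of light, km/s
print(f"v = {v:.0f} km/s, z ~ v/c = {v / c:.3f}")   # v = 7000 km/s, z ~ 0.023
```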
### Mainstream Scientific Consensus: No Credible Evidence for Quantization
Despite the potentially groundbreaking implications, the vast majority of astronomers and cosmologists do not accept the claims of redshift quantization. The consensus is that the observed periodicities are not a real physical phenomenon but rather the result of several factors:
* **Selection Effects and Observational Biases:** Early studies that reported redshift quantization were often based on small and heterogeneously selected galaxy samples. The way galaxies are chosen for observation can inadvertently introduce patterns. For instance, different observational techniques and instruments are optimized for specific redshift ranges, which can lead to apparent clustering in the data. Subsequent, much larger and more systematic surveys, like the Sloan Digital Sky Survey (SDSS), have failed to find statistically significant evidence for redshift quantization when these biases are accounted for.
* **Statistical Artifacts:** The methods used to identify periodicities in redshift data have been a subject of debate. Critics argue that the statistical significance of the claimed periodicities is often overstated. What may appear as a non-random pattern can, in many cases, be attributed to chance fluctuations in the data, especially when dealing with a limited number of data points.
* **Large-Scale Structure of the Universe:** The universe is not uniformly filled with galaxies. Instead, they are distributed in a cosmic web of filaments and voids. When we observe a particular line of sight, we are essentially taking a core sample through this complex structure. This can naturally lead to apparent groupings of galaxies at certain distances, and therefore certain redshifts, which can be misinterpreted as quantization.
In conclusion, while the concept of redshift quantization presents a fascinating “what if” scenario for cosmology, it is a hypothesis that has failed to gain traction within the scientific community. The evidence presented in its favor is widely considered to be weak and better explained by known observational and statistical effects. Therefore, in the context of mainstream cosmology, redshift quantization has no impact on the ongoing efforts to precisely measure the Hubble constant and understand the expansion of our universe.