Revealing the 3D Cosmic Web through Gravitationally Constrained Neural Fields

Brandon Zhao1   Aviad Levis2,3   Liam Connor4   Pratul P. Srinivasan5    Katherine L. Bouman1,6

1Department of Computing and Mathematical Sciences, California Institute of Technology
2Department of Computer Science, University of Toronto
3David A. Dunlap Department of Astronomy & Astrophysics, University of Toronto
4Center for Astrophysics, Harvard & Smithsonian
5Google DeepMind
6Departments of Astronomy and Electrical Engineering, California Institute of Technology



Abstract

Weak gravitational lensing is the slight distortion of galaxy shapes caused primarily by the gravitational effects of dark matter in the universe. In our work, we seek to invert the weak lensing signal from 2D telescope images to reconstruct a 3D map of the universe’s dark matter field. While inversion typically yields a 2D projection of the dark matter field, accurate 3D maps of the dark matter distribution are essential for localizing structures of interest and testing theories of our universe. However, 3D inversion poses significant challenges. First, unlike standard 3D reconstruction that relies on multiple viewpoints, in this case, images are only observed from a single viewpoint. This challenge can be partially addressed by observing how galaxy emitters throughout the volume are lensed. However, this leads to the second challenge: the shapes and exact locations of unlensed galaxies are unknown and can only be estimated with a very large degree of uncertainty. This introduces an overwhelming amount of noise that nearly drowns out the lensing signal. Previous approaches tackle this by imposing strong assumptions about the structures in the volume. We instead propose a methodology using a gravitationally-constrained neural field to flexibly model the continuous matter distribution. We take an analysis-by-synthesis approach, optimizing the weights of the neural network through a fully differentiable physical forward model to reproduce the lensing signal present in image measurements. We showcase our method on simulations, including realistic simulated measurements of dark matter distributions that mimic data from upcoming telescope surveys. Our results show that our method not only outperforms previous methods but, importantly, also recovers potentially surprising dark matter structures.
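To make the analysis-by-synthesis idea concrete, below is a minimal JAX sketch of the optimization loop: a coordinate MLP models the 3D matter density, a toy differentiable projection maps it to a 2D convergence map, and the network weights are fit to noisy measurements by gradient descent. This is an illustrative toy, not the paper's released code: the MLP architecture, grid resolution, the lensing kernel `weights`, and the noise level are placeholder assumptions, and the gravitational constraints and shear modeling described in the paper are omitted.

```python
# A minimal sketch (not the authors' code) of analysis-by-synthesis lensing
# inversion with a neural field and a differentiable forward model in JAX.
import jax
import jax.numpy as jnp

def init_mlp(key, sizes=(3, 64, 64, 1)):
    """Initialize MLP parameters as a list of (W, b) pairs."""
    params = []
    for din, dout in zip(sizes[:-1], sizes[1:]):
        key, sub = jax.random.split(key)
        params.append((jax.random.normal(sub, (din, dout)) / jnp.sqrt(din),
                       jnp.zeros(dout)))
    return params

def density_field(params, xyz):
    """Neural field: 3D coordinates -> scalar matter density."""
    h = xyz
    for W, b in params[:-1]:
        h = jnp.tanh(h @ W + b)
    W, b = params[-1]
    return (h @ W + b).squeeze(-1)

# Toy forward model: sample the field on a grid and integrate along the
# line of sight (z-axis) with per-slice lensing-efficiency weights. The
# paper's forward model additionally imposes gravitational constraints and
# predicts the per-galaxy shear; both are omitted in this sketch.
N = 32
grid = jnp.stack(jnp.meshgrid(
    jnp.linspace(-1, 1, N), jnp.linspace(-1, 1, N), jnp.linspace(-1, 1, N),
    indexing="ij"), axis=-1)                 # (N, N, N, 3) coordinates
weights = jnp.linspace(1.0, 0.1, N)          # placeholder lensing kernel

def forward(params):
    rho = density_field(params, grid.reshape(-1, 3)).reshape(N, N, N)
    # Weighted line-of-sight projection -> 2D convergence map.
    return jnp.tensordot(rho, weights, axes=([2], [0]))

def loss(params, kappa_obs, sigma):
    return jnp.mean((forward(params) - kappa_obs) ** 2 / sigma ** 2)

# Synthetic experiment: generate noisy measurements from a random "true"
# field, then fit a fresh neural field to them by plain gradient descent.
key = jax.random.PRNGKey(0)
kappa_obs = forward(init_mlp(jax.random.PRNGKey(1)))
kappa_obs += 0.5 * jax.random.normal(key, (N, N))
params = init_mlp(jax.random.PRNGKey(2))
step = jax.jit(lambda p: jax.tree_util.tree_map(
    lambda x, g: x - 1e-2 * g, p, jax.grad(loss)(p, kappa_obs, 0.5)))
for _ in range(200):
    params = step(params)
```

Because the whole pipeline (field, projection, loss) is differentiable, gradients of the measurement mismatch flow back to the network weights, which is the core mechanism the abstract describes; the paper's gravitational constraints act as an additional physical prior on the recovered field.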

Paper [PDF]   Code [GitHub]


Video


Citation

@inproceedings{zhao2025revealing,
  title={Revealing the 3D Cosmic Web through Gravitationally Constrained Neural Fields},
  author={Zhao, Brandon and Levis, Aviad and Connor, Liam and Srinivasan, Pratul P and Bouman, Katherine},
  booktitle={The Thirteenth International Conference on Learning Representations},
  year={2025}
}


Acknowledgements

This work was supported by NSF award 2048237, Google, a Carver Mead New Adventure Fund Award, a Stanback Innovation Fund Award, an Amazon AI4Science Discovery Award, and a Caltech S2I Award. A.L. was supported by the Natural Sciences & Engineering Research Council of Canada (NSERC). We thank Francois Lanusse for helpful discussions regarding weak lensing and for guidance on the JaxPM code, Carolina Cuesta-Lazaro for her input on dark matter mass mapping applications, and Olivier Doré for helpful discussions about cosmology.