Saturday, February 14, 2026

Could we make a Gaussian Splat capture of Sagittarius A*?

A couple of years ago I was studying the technique of capturing Gaussian Splats (3D holographs) with a group based out of England. We used software from the Google spinoff Niantic Labs to stitch thousands of photos together in a technique called photogrammetry. The compelling thing about Gaussian Splat captures is that, unlike a “mesh” or point-cloud 3D capture, every captured point in a Gaussian Splat can take on a different hue depending on the direction you view it from. (The splats are soft, overlapping blobs of color rather than hard voxels, the 3D counterpart of 2D pixels.) So as you move through the hologram, every point in space around you can shift color as light reflects or refracts differently through each captured point in the scene. The benefit of this is that the holographs look volumetrically real, like the holodeck concept from Star Trek.
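
If it helps to see the mechanism, here is a minimal sketch (in Python, with NumPy) of how a single splat's view-dependent color is typically evaluated: each splat stores spherical-harmonic coefficients, and the viewing direction picks out the hue. The degree-1 truncation, the 0.5 offset and the coefficient layout are assumptions of this sketch; real renderers usually carry higher-order terms.

```python
import numpy as np

def sh_color(sh_coeffs, view_dir):
    """Evaluate the first two spherical-harmonic bands for one Gaussian splat.
    sh_coeffs has shape (4, 3): a DC term plus three degree-1 terms per RGB
    channel. The constants are the standard real-SH basis factors; the layout
    follows common splatting code but is an assumption of this sketch."""
    x, y, z = np.asarray(view_dir, dtype=float) / np.linalg.norm(view_dir)
    basis = np.array([ 0.2820948,           # l = 0 (view-independent base color)
                      -0.4886025 * y,       # l = 1, m = -1
                       0.4886025 * z,       # l = 1, m =  0
                      -0.4886025 * x])      # l = 1, m =  1
    return np.clip(basis @ sh_coeffs + 0.5, 0.0, 1.0)   # RGB in [0, 1]

# Same splat, two viewing directions, two different hues:
rng = np.random.default_rng(1)
coeffs = rng.normal(0.0, 0.3, (4, 3))
print(sh_color(coeffs, [0.0, 0.0, 1.0]))
print(sh_color(coeffs, [1.0, 0.0, 0.0]))
```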

Around that time I was discussing Gaussian Splats with my friend Ben, who, coincidentally, is also from the UK. He had worked in light-field capture at Lytro, maker of a kind of 3D camera that let you adjust focus within a captured light field after the photo was taken. He had also worked for two other companies building Augmented Reality scene creation for mobile devices, glasses and head-mounted displays. So he had thought about and worked in depth with these concepts of 3D photography and scene rendering across viewing platforms far longer than I had as a hobbyist.

In a discussion about neural radiance fields (NeRFs, a predecessor of Gaussian Splats), we got onto how stitching captures of the same place across time can remove ephemeral items from the scene. Watch this scene (from the Waymo & UC Berkeley “Block-NeRF” study), where a NeRF was generated from multiple Waymo cars' 2D photos taken as they drove through San Francisco and then stitched into a 3D model. By fusing many different captures over time, you see only what lasts, while things that are moving or ephemeral disappear because their existence is not confirmed between different measurements. This is a boon for privacy purposes, because what people generally want kept out of public view is their own ephemeral presence in the world. So cars, people, birds and leaves, for instance, are not reinforced in a light-field capture once the captures are stitched over time rather than over points in space at a single time. Note that in the Block-NeRF video actual time has been subtracted; the sense of time in the fly-through is applied later and is artificial. Everything temporary was removed simply by not being there when another photographic pass was made.
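
A toy way to express that fusion idea in code: align several passes over the same scene and take a per-pixel median, so anything that only shows up in a minority of passes drops out. This is a simplification of what NeRF/splat training actually does (which reasons per ray, with uncertainty), offered just to make the intuition concrete.

```python
import numpy as np

def fuse_static_scene(passes):
    """Per-pixel median across several aligned captures of the same place
    (shape: [num_passes, H, W, 3]). Whatever persists across passes survives;
    cars, pedestrians, birds and other transients that appear in only a few
    passes are suppressed. Real NeRF/splat pipelines reason per ray with
    uncertainty terms, but the median carries the same intuition."""
    stack = np.stack(passes).astype(np.float32)
    return np.median(stack, axis=0)

# e.g. five drives past the same block on different days:
passes = [np.random.default_rng(i).integers(0, 255, (4, 4, 3)) for i in range(5)]
print(fuse_static_scene(passes).shape)   # (4, 4, 3)
```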


Studying further on this, you’ll find dozens of resources in the radiance-field community about visual-fidelity improvements that come from taking in multiple perspectives and using the consistencies between them to clarify blurry or low-light images. You can think of this like the dragonfly’s compound eye: each individual photoreceptor may register only a crude, broad-spectrum sample of the light, but averaged in a mosaic with tens of thousands of other photoreceptors it creates a detailed and precisely accurate capture of the light field, one that in some respects (such as motion detection) exceeds human-level precision.
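
Here is a tiny numerical illustration of why that averaging helps, assuming the noise in each look is independent: the error falls roughly as one over the square root of the number of looks.

```python
import numpy as np

# N noisy "looks" at the same scene, each individually poor; averaging them
# cuts the error roughly by the square root of N when the noise is independent.
rng = np.random.default_rng(42)
truth = np.linspace(0.0, 1.0, 200)                    # stand-in "scene"
looks = truth + rng.normal(0.0, 0.5, (20_000, 200))   # 20,000 crude receptors
print(np.abs(looks[0] - truth).mean())                # one look: ~0.4 average error
print(np.abs(looks.mean(axis=0) - truth).mean())      # averaged: ~0.003 average error
```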

The extrapolated stitching between perspectives can be used to create “novel views”: synthetic but plausible renderings of what an interstitial vantage point between two or more real, mutually consistent perspectives would look like in reality. A great example is a study a friend made of his drone flying through the woods and running into a tree limb. The drone couldn’t see the limb it crashed into in time to avoid it. But assembling hundreds of images into a Gaussian Splat depiction of the scene lets a novel-view perspective reveal the obstacle that the drone could not see head-on.
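
The camera-path part of novel-view synthesis is simple enough to sketch: interpolate positions linearly and orientations with a quaternion slerp, then hand each intermediate pose to the renderer. The pose format here (xyz position plus xyzw quaternion) is an assumption of the sketch, and the splat renderer itself is not shown.

```python
import numpy as np
from scipy.spatial.transform import Rotation, Slerp

def interpolate_poses(pos_a, quat_a, pos_b, quat_b, steps=10):
    """Camera poses for intermediate 'novel views' between two captured views:
    linear interpolation of position, slerp of orientation."""
    slerp = Slerp([0.0, 1.0], Rotation.from_quat([quat_a, quat_b]))
    ts = np.linspace(0.0, 1.0, steps)
    positions = (1 - ts)[:, None] * np.asarray(pos_a) + ts[:, None] * np.asarray(pos_b)
    return positions, slerp(ts).as_quat()

# Ten in-between poses from the doorway to the far corner of a room:
pos, quat = interpolate_poses([0, 0, 0], [0, 0, 0, 1], [2, 0, 3], [0, 0.7071, 0, 0.7071])
print(pos.shape, quat.shape)   # (10, 3) (10, 4)
```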

I have done some amazing Gaussian Splat captures at home where I’ve focused the capture inside the house, on a porch, or on a scene with a window, and then rotated the output .ply file to see what the inside of the house looks like from a novel view 20 feet outside the house looking back. It’s hard to explain how striking this experience is if you haven’t seen it yourself. What happens is that my moving camera creates a depiction of the entire volume of the house, and of the outside as it can be seen from within. I could then move outside the house, go down to the scale of a blade of grass captured in that scene and walk behind the blade of grass, which the camera had identified as a solid object in the distance. I could then see what looking back at the house might appear like to an insect emerging from behind that blade of grass. It’s astounding to see how powerful this technology is.

My conversation with Ben one day went down a fascinating rabbit hole as we talked about the concept of time exposures across cameras. Typically, a time exposure is one camera holding its shutter open for a longer duration in low light, letting more photons through the aperture to achieve the desired exposure on the film. Time exposures don’t just get brighter, they get more accurate by taking in more of the photons from the reference scene, becoming sharper and truer to the scene as seen with the eyes. In our conversational framing we imagined a hypothetical lens as big as any number of people with any number of cameras at any number of times through a period. If you abstract away the lens, and abstract away the times and positions of capture based on where the photographers or observers stood, you’d get an ultra-high-resolution image, like a UHD photo, but taken as a 3D volume. (You could even throw other spectra like X-ray or infrared into the mix!) Naturally, if you sprinkle time-lapsing into it, across derived novel perspectives you could re-animate the scene not as it was, but as it would have appeared from perspectives never actually captured, only inferred. As Ben pointed out, you could even color-correct and optimize known objects in the scene. That image of the moon not showing up well? Over-dub your blurry moon with a high-resolution NASA image of the moon, color- and size-adjusted down to fit in the appropriate place in your captured photo. You could re-apply sunny-day hues to identifiable items in the scene based on their statistically averaged colors across a month or a specific season and the position of the sun in the sky.
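
A crude sketch of that last idea, assuming you already have an aligned reference capture of the same object under the lighting you want: match each channel's mean and standard deviation (a Reinhard-style statistics transfer). Real pipelines would segment objects and work in a decorrelated color space; the variable names in the usage comment are hypothetical.

```python
import numpy as np

def transfer_color_stats(target, reference):
    """Reinhard-style statistics transfer: shift each channel of `target`
    (an HxWx3 uint8 image) so its mean and standard deviation match those of
    `reference`. A crude stand-in for "re-apply sunny-day hues from a
    statistical average"."""
    t = target.astype(np.float64)
    r = reference.astype(np.float64)
    for ch in range(3):
        t_mean, t_std = t[..., ch].mean(), t[..., ch].std() + 1e-8
        r_mean, r_std = r[..., ch].mean(), r[..., ch].std()
        t[..., ch] = (t[..., ch] - t_mean) / t_std * r_std + r_mean
    return np.clip(t, 0, 255).astype(np.uint8)

# Hypothetical usage: recolor an overcast capture toward a sunny reference patch.
# corrected = transfer_color_stats(overcast_capture, sunny_reference_patch)
```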

Riffing on this, we discussed the concept of a Gaussian Splat of a house photographed not in visible light but in invisible radio spectra. Phones, Wi-Fi and Bluetooth devices are in nearly all homes and buildings now, bathing our environment in short-range, high-frequency radio waves. So hypothetically, you could conduct the same Gaussian Splat capture I did with photons and replicate it using only the invisible spectra, getting a view of the house in terms of how each device’s radio waves bounce off or pass through the various walls depending on their physical density and their reflective or absorptive properties.
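
For a sense of the raw ingredient such a capture would work from, here is a toy log-distance path-loss model with per-wall attenuation, a standard first-order way to reason about indoor radio propagation. Every constant in it (reference loss, path-loss exponent, per-wall dB figures) is an illustrative assumption, not a measurement.

```python
import math

def received_power_dbm(tx_dbm, distance_m, walls=(), n=2.0, ref_loss_db=40.0):
    """Toy log-distance path-loss model with per-wall attenuation.
    ref_loss_db is the assumed loss at 1 m, n the path-loss exponent, and each
    entry in `walls` is that wall's attenuation in dB."""
    path_loss = ref_loss_db + 10 * n * math.log10(max(distance_m, 1.0))
    return tx_dbm - path_loss - sum(walls)

# A 20 dBm Wi-Fi router 8 m away, through one drywall (~3 dB) and one brick wall (~10 dB):
print(received_power_dbm(20, 8, walls=(3, 10)))   # ≈ -51 dBm
```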

All these things are on the edges of our emerging technologies now. Many don’t have tremendous utility at present, but eventually they may. We don’t have a use for mantis-shrimp-level spectral Gaussian Splats or radio-wave radiance fields of homes, except perhaps for engineering purposes. But it’s fun to think about how these new advances could apply to new sectors.

Artist's rendition of a pulsar illuminating a gas cloud

I was reading about the discovery of BLPSR, a pulsar believed to be near the center of our galaxy. As Carl Sagan explained at length in his series Cosmos in the 1980s, pulsars are rapidly spinning neutron stars, left behind when massive stars collapse in the final throes of their existence, and they serve as blinking beacons we see all over the cosmos. They are not particularly exciting in general; all they do is blink rapidly in the sky. But what is exciting about them is that their pulsing, set by the period of their spin, is extremely regular. So their signals can be used as precise clocks to detect aberrations in spacetime caused by gravitational-wave ripples radiating from massive events between us and them, or passing through us locally, ripples we otherwise couldn't detect because our bodies ripple right along with the spacetime of the environment containing us. The article in Scientific American pointed out that we could create a galaxy-scale Laser Interferometer Gravitational-Wave Observatory that would be far better than the ones we built on Earth, because a Milky Way-scale LIGO would have a baseline spanning the 26,000 light-years between us and the galactic center. (LIGO's arms on Earth are only 4 kilometers long.)
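
As a rough sense of scale for that comparison, here is the back-of-envelope arithmetic, taking a strain amplitude of the order LIGO detects. A pulsar-timing "arm" is read out as timing residuals rather than laser fringes, so treat this purely as an intuition for why a longer baseline helps, not as how such a detector would actually work.

```python
# Rough arithmetic only: a gravitational-wave strain h stretches a baseline L
# by dL = h * L, so the same strain produces a vastly larger displacement over
# a galactic baseline.
ly = 9.461e15                    # metres per light-year
L_ligo = 4e3                     # LIGO arm length, metres
L_galactic = 26_000 * ly         # Earth-to-galactic-centre baseline, metres
h = 1e-21                        # strain of the order LIGO detects

print(h * L_ligo)                # ~4e-18 m, a small fraction of a proton's width
print(h * L_galactic)            # ~0.25 m over the galactic baseline
print(L_galactic / L_ligo)       # baseline ratio, about 6e16
```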

The change in the interval of a pulsar's flashes over time would help us measure the bending of time and warping of space between us and the light source. We could use it as a telescope for super-massive events in the pulsar's proximity and beyond, like a microphone at the center of the galaxy! We could perhaps even detect aftershocks of the Big Bang 13.8 billion years ago, thought to be echoing back from the edges of time's beginning. (We can already see light from the edge of the fireball of our origin. We may soon be able to hear it as well.) When LIGO captured the gravitational-wave ripple of two neutron stars colliding, the team rendered the wave frequencies into the audible range in this demonstration. A gravitational-wave microphone at our galactic center might capture some fantastic cacophony from our immediate neighborhood. And an interferometer the size of half the galaxy would pick up much longer-wavelength variations, with higher fidelity, than any interferometer that can be built on Earth.
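
The "rendering into audible range" step is mostly just treating the strain time series as an audio waveform. The snippet below synthesizes a toy rising chirp (not a real LIGO waveform; the sweep parameters are made up) and writes it to a WAV file, to show how small that step is.

```python
import numpy as np
import wave

# Synthesize a toy rising "chirp" and write it to a WAV file.
rate = 44100
t = np.linspace(0.0, 2.0, int(rate * 2.0), endpoint=False)
f0, f1 = 50.0, 500.0                       # start/end frequencies in Hz (made up)
freq = f0 + (f1 - f0) * (t / t[-1]) ** 3   # sweep that accelerates toward the end
phase = 2.0 * np.pi * np.cumsum(freq) / rate
samples = (np.sin(phase) * (t / t[-1]) ** 2 * 32000).astype(np.int16)

with wave.open("toy_chirp.wav", "wb") as w:
    w.setnchannels(1)          # mono
    w.setsampwidth(2)          # 16-bit samples
    w.setframerate(rate)
    w.writeframes(samples.tobytes())
```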

Implementing a BLPSR gravitational observatory could be a fascinating development over the coming decades. But I had another funny thought inspired by my discussions with Ben. Just as the Wi-Fi/Bluetooth signal is a proxy for a light source that can traverse walls, if we have an opportunity to monitor BLPSR over a number of years, and it happens to transit behind Sagittarius A*, our galaxy’s black hole, then we could make a capture of Sagittarius A* with much finer volume-based precision, isolating its Schwarzschild radius (half the diameter of its event horizon) very precisely by comparing the gravitational lensing that would occur when BLPSR is directly behind our black hole. We could also gauge the full width of Sagittarius A*'s accretion disc by noting exactly when BLPSR’s red-shifted pulses disappear and when her blue-shifted pulses re-emerge. Just as Henrietta Swan Leavitt's Cepheid variables gave astronomy a “standard candle” of brightness across the cosmos, which Edwin Hubble later used to measure the rate of our universe’s expansion, the so-called Hubble constant, we might use a single-source pulsar like BLPSR and its perturbations to scan through and around the Milky Way's environs and generate a multi-point-referenced volumetric view that was impossible before.
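
For scale, the quantity being isolated comes from one formula, r_s = 2GM/c². Plugging in the commonly cited mass and distance of Sgr A* (the exact values below are assumptions for this back-of-envelope) gives a radius of roughly thirteen million kilometers, subtending only about ten microarcseconds on our sky:

```python
import math

# Back-of-envelope only, assuming a Sgr A* mass of roughly 4.3 million solar
# masses and a distance of about 26,000 light-years.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
M_sun = 1.989e30     # solar mass, kg
ly = 9.461e15        # metres per light-year

M = 4.3e6 * M_sun                 # Sgr A* mass
d = 26_000 * ly                   # distance to the galactic centre
r_s = 2 * G * M / c**2            # Schwarzschild radius
theta_uas = math.degrees(r_s / d) * 3600 * 1e6   # apparent size, microarcseconds

print(f"Schwarzschild radius ≈ {r_s / 1e9:.1f} million km")   # ~13 million km
print(f"apparent angular size ≈ {theta_uas:.0f} µas")         # ~10 microarcseconds
```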

Beyond the LIGO-like applications, we may be able to use BLPSR, or other pulsars near the galactic center if any are found, as a camera flash to make time exposures of their light bending around their invisible companions. We could do so by merging frames from across BLPSR's transit along her orbit and using her light blinks, with intervals corrected for time warping, to reveal Sagittarius A* in a whole new light, a view that today can only be inferred from the motion of other objects. A Gaussian scan of Sagittarius A* may even give us a better sense of visible versus invisible matter in the proximity of a black hole.
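
The bookkeeping behind "using her light blinks" is the standard pulsar-timing trick of epoch folding: wrap the arrival times at the pulse period and watch how the folded profile shifts as the spacetime between us and the pulsar changes. A minimal sketch, with made-up numbers rather than BLPSR's real parameters:

```python
import numpy as np

def fold_pulses(arrival_times, period, n_bins=64):
    """Epoch folding: wrap pulse arrival times at a trial period to build an
    average pulse profile."""
    phases = (np.asarray(arrival_times) % period) / period   # phase in [0, 1)
    bins = (phases * n_bins).astype(int)
    return np.bincount(bins, minlength=n_bins)

# Toy usage: a 0.5 s pulsar observed for an hour with a little timing jitter.
rng = np.random.default_rng(0)
true_period = 0.5
ticks = np.arange(0.0, 3600.0, true_period) + rng.normal(0.0, 0.005, 7200)
profile = fold_pulses(ticks, true_period)
print(profile.argmax(), profile.max())   # counts pile up in a narrow band of phase bins
```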

GAL-CLUS-022058s is nicknamed the "Molten Ring."
We know what gravitational lensing around spacetime warps looks like outside our galaxy from Hubble Space Telescope and JWST images. But those objects are profoundly distant from us, as are the background light sources whose light bends around the warps and illuminates the spacetime curvature. So a single-color flash from a known pulsar, acting like a standard candle throughout its long-term journey, could illuminate an orbit of our galactic center and give us a beautifully precise view of the hottest, densest part of our galaxy. Whether it looks like Kip Thorne's theoretical depiction of an up-close view of an accretion disk (from the movie Interstellar, below) or not remains to be seen. But if we are able to make a Gaussian Splat volumetric capture of our galaxy in the coming years, that would be a fantastic way for the next generation to explore the mysteries of dark matter and dark stars in our proximity.

Kip Thorne's theoretical model of light bending around a black hole