In the past few weeks a new reconstruction technique called Gaussian Splatting has been taking the community by storm. It is something of an evolution of NeRFs, and mathematically it is not dissimilar from the kind of reconstructions we do for spatial audio.
Luckily, a couple of companies have already implemented Gaussian Splatting pipelines, so here we compare two of them against a photogrammetry reconstruction.
Here is Luma AI:
And here is Polycam:
And finally a photogrammetry reconstruction computed with PhotoCatch (which uses Apple's ObjectCapture API, like Polycam's photogrammetry), converted to GLB in Blender to stay under Sketchfab's size limit. In practice the only modification is that the textures are in JPEG instead of PNG.
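For reference, here is a minimal sketch of that conversion step using Blender's Python API. It assumes the PhotoCatch mesh has already been imported into the scene, and the output path is only illustrative; the GLB format and JPEG texture options are what keep the file size down.

```python
# Minimal sketch of the GLB export, assuming the PhotoCatch mesh is already
# imported into the current Blender scene (the output path is hypothetical).
import bpy

bpy.ops.export_scene.gltf(
    filepath="/tmp/scan.glb",
    export_format='GLB',          # single binary .glb file
    export_image_format='JPEG',   # re-encode textures as JPEG to shrink the file
)
```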
The difference is particularly evident in the textures on the building, and it’s just staggering on the vegetation.
Also, a small caveat: Luma was given a video, Polycam was given 200 photos, and PhotoCatch was given around 250 photos. This reflects differences in the platforms themselves.
Update 2023-10-12: There is already an A-Frame component for self-hosting Gaussian Splats; we're going to try it very soon.