How to Build a Gaussian Splat Viewer PWA in One Day (From Capture to Deploy)
By Almaz Khalilov
TL;DR
- You'll build: a web-based 3D viewer (PWA) that displays a photorealistic scene reconstructed from a set of photos (using Gaussian Splatting).
- You'll do: Capture a real-world scene → Generate a 3D Gaussian Splat model → Use open-source tools to view it → Publish as a Progressive Web App → Test on iPhone and Android.
- You'll need: a free PlayCanvas account (for using the SuperSplat tool), a smartphone or camera to take photos (or a sample dataset), and a laptop with an updated web browser (Chrome 115+/Safari 16+ etc.).
1) What is Gaussian Splatting?
What it enables
- Photorealistic 3D from images: Gaussian Splatting is a cutting-edge technique that takes a set of ordinary photographs and generates a navigable 3D scene out of them. It produces point cloud-like "splats" (tiny translucent 3D blobs) that collectively reconstruct the scene with realistic detail.
- Real-time rendering in browser: Unlike earlier Neural Radiance Field (NeRF) methods that required heavy computation, Gaussian Splat models can be rendered very efficiently on standard GPUs. This means you can explore the 3D scene in a web browser at interactive frame rates, even on consumer devices.
- Easy sharing and embedding: Because the output can be just a collection of points (often stored in a `.ply` file) and a lightweight viewer, it's easy to package and share. Tools like SuperSplat allow exporting a single-HTML-file viewer for a model, and even one-click publishing to a URL, so anyone can view your 3D capture without installing special apps.
When to use it
- Reality capture & virtual tours: Use Gaussian Splatting when you want to turn real-world environments (rooms, buildings, outdoor scenes) or objects into photorealistic 3D models quickly. It's ideal for scenarios like real estate virtual tours, construction site capture, cultural heritage digitization, or museum exhibits – anywhere you need a realistic 3D "clone" of a place or object.
- Product visualization: For e-commerce or marketing, Gaussian Splats can capture retail products or artifacts in rich detail. They allow users to view items from all angles with realism. For example, the Reflct platform uses 3DGS for product showcases, letting creators set up predefined views for shoppers.
- Rapid prototyping and demos: If you have limited time or resources to create a full 3D model by hand, this technique shines. In hackathons or prototyping sessions, you can capture a space in the morning and have an interactive demo by evening. The entire pipeline from photos to a shareable 3D web app can be done in under a day.
Current limitations
- Capture requirements: The quality of the result heavily depends on the photo capture process. You need good coverage of the subject with carefully taken images. Too many images can introduce noise. Texture-less areas (like plain white walls) can cause holes or alignment issues. Using a decent camera (or carefully taken smartphone photos with consistent settings) is recommended for best results.
- Device & browser support: While the viewer can run on the web, certain implementations use newer tech like WebGPU (e.g. some open-source viewers) that require up-to-date browsers and hardware. For example, one WebGPU-based viewer needs Chrome/Edge 115+ or special flags on Linux. The good news is other viewers use WebGL for broader compatibility, but very large models might still struggle on older mobile devices due to memory or GPU limits.
- No dynamic lighting or editing: These splat-based scenes are more like captured snapshots of reality. You cannot easily relight the scene or alter geometry after the fact (unlike traditional 3D models). The splats carry baked color (often even baked illumination). Advanced features like dynamic shadows or fine geometry editing aren't natively supported yet. Also, view-dependent effects (like shiny reflections via spherical harmonics) are usually omitted to keep file size manageable. In short, it looks great as-is, but it's not as flexible as a hand-crafted 3D model.
- Emerging ecosystem: Gaussian Splatting is a very new technique (introduced in 2023), so tooling is evolving. File sizes can be large (point clouds with millions of points), though new compression formats like SOG (dubbed the "WebP of Gaussian Splatting") are being developed to shrink files dramatically. Expect rapid changes; what works today (often in "preview" or beta form) will improve quickly. Always check the latest tools and browser updates if you hit a roadblock.
2) Prerequisites
Access requirements
- PlayCanvas account: Create or sign in to the PlayCanvas Developer Portal (free). This will be used for SuperSplat, an online tool that helps edit and publish splat models. (If you prefer not to use this, you can still follow along with open-source tools, but the guide will reference it for quick publishing.)
- (Optional) GitHub access: If you plan to use open-source code like the Spark renderer or Kevin Kwok's Splat viewer, you might clone those repositories. No special permission needed, but familiarity with downloading/cloning from GitHub is assumed.
- Project/app registration: Not strictly needed for the web approach (no API keys required). If using an alternate platform's service (e.g., DJI Terra or Luma AI for generating the model), you'd need accounts for those, but in this guide we focus on open solutions.
Platform setup
iOS (for PWA testing)
- Safari (iOS 16+): Ensure your iPhone/iPad has Safari updated. iOS Safari supports WebGL out of the box; WebGPU support is experimental as of 2025, so our viewer will stick to WebGL for compatibility reasons. A physical iPhone is recommended to test the PWA's AR mode (since Safari's WebXR is limited, we'll note an alternative for AR).
- PWA Install method: Know how to add a webpage to your Home Screen on iOS (Safari's "Add to Home Screen" option) because iOS won't show an install prompt. This gives the full-screen app-like experience for your viewer.
- (No Xcode or native SDK needed) – We're building a web app, so no need for Xcode or CocoaPods. Just have a way to transfer sample data to your device if needed (AirDrop, iCloud, etc., in case you want the PLY file on device, though not necessary).
Android (for PWA testing)
- Chrome or Edge 115+ on Android: Use the latest Chrome (or any Chromium browser) on Android. Chrome supports PWA installation and WebXR AR features. A physical Android phone (with ARCore support if you plan to use AR) is recommended.
- "Install App" prompt enabled: Visit
chrome://flagsand ensure the PWA install prompt is enabled (usually default). This will let you easily install the viewer PWA by tapping the Install banner or via menu. - (No Android Studio or native code needed) – Everything runs via the browser. Just ensure your device can load large web content and you have a way to host or access the PWA link (we will also cover simple local hosting options).
Development machine
- Laptop/PC with WebGL-capable browser: For the capture processing and integration steps, use a desktop or laptop with a modern browser. Chrome, Firefox, or Edge are fine for general use; Chrome/Edge are needed if using WebGPU-based tools. Ensure your GPU drivers are up-to-date.
- Python and/or Node.js (optional): If you want to run open-source pipelines (like NerfStudio for training, or Python scripts to convert data) or to set up a local web server for the PWA, having Python 3 or Node.js could be useful. Not strictly required if you use solely GUI tools and the provided hosting platform, but good to have in case you experiment with scripts.
- Camera or drone for capture: For the "capture" part, a smartphone with a decent camera can work. However, for best results, a DSLR or mirrorless camera gives you more control (manual exposure, etc.). If using a drone for outdoor scenes, ensure you can transfer images to your PC. If you don't have any camera available, you can use sample images or a pre-made sample model provided in this guide.
Hardware or mock
- Scene to capture or sample data: Identify a scene or object you want to capture (e.g., your backyard, a room, a statue). If not, download a sample Gaussian Splat model: the original GS authors provide example outputs (PLY + camera files) which you can use to follow along.
- Stable platform for capture: If possible, use a tripod or keep a steady hand during photography to avoid motion blur. Blurry or misaligned photos can hurt the reconstruction.
- Good lighting or controlled environment: Try to capture under consistent lighting (avoid moving shadows or changing light between shots). Uneven lighting can introduce artifacts in the 3D result.
- No specialized wearable/mock needed: Unlike some AR tutorials, here we do not require connecting to a separate wearable device. The "wearable" is essentially the smartphone that can view AR. If you don't have an AR-capable phone, you can still do everything except the AR viewing step.
3) Get Access to Gaussian Splatting Tools
- Access the editor: Go to the SuperSplat Editor (PlayCanvas's online Gaussian Splat tool). Log in with your PlayCanvas account (sign up free if you don't have one). This web-based editor runs in your browser.
- Request any beta features: Currently, SuperSplat is open to all, no special beta flags needed. (If the platform had a private beta, this is where you'd ensure your account is enabled. For example, if you opted to use DJI Terra or Luma AI's Gaussian splatting, you'd ensure your account has that feature enabled – but with SuperSplat, everything is available by default.)
- Create a project (if needed): In SuperSplat, you work with `.ply` files directly rather than creating a "cloud project". However, if you take the PlayCanvas Editor route later, you might create a PlayCanvas project for a full application. For now, just open the SuperSplat web app; no project creation is required to get started.
- Prepare the capture pipeline: Decide how you will generate the Gaussian Splat model from your photos:
- Option A (Beginner-friendly): Use an all-in-one tool like Luma AI or XGRIDS Lixel (if you have access) that can output a NeRF or point cloud model. Then convert that to a Gaussian splat if needed. DJI's Terra software, for example, added Gaussian splatting export for drone imagery. This may involve uploading photos and downloading a PLY.
- Option B (Open-source): Use NerfStudio or the official Inria Gaussian Splatting code. NerfStudio (open source) can train on your images, and its `ns-process-data` command prepares photo datasets for splat training (as noted in the OpenSplat project). The official code from the paper can also output a PLY after training. These require a machine with a decent NVIDIA GPU (for training) and some comfort with Python. If you go this route, set up the environment (install NerfStudio, etc.) and be ready to run training on your images.
- Option C: Skip training with a sample model. To follow this tutorial first, you can use pre-generated sample data. For instance, PlayCanvas provides an example PLY if you don't have one at hand (look for the "here's an example" link on their blog). Download that `.ply` file to use in the next steps.
- Download credentials or tools: Most Gaussian splat viewers don't require API keys, but gather these items:
- SuperSplat PWA (optional): While on the SuperSplat Editor page, you can install it as a PWA (in Chrome/Edge, look for the install icon). This isn't mandatory, but it can associate PLY files on your system to open with it.
- Conversion scripts: If your pipeline requires converting outputs, download those now. For example, Kevin Kwok's splat tool has a convert.py script to turn a NeRF pipeline's output into its `.splat` format. PlayCanvas introduced the SplatTransform CLI, which can convert and compress PLY into their optimized format (SOG) – you can get it from their GitHub if needed.
- App IDs/tokens: Not applicable here, since our viewer will be static. If embedding in an existing site, no special API token is needed – it's all client-side. (If using a cloud hosting or API for AR, you'd note credentials here, but in our case publishing via SuperSplat or self-hosting covers it.)
Done when: you have at least one 3D Gaussian Splat model (or the data to create it) ready to use. This means either:
- A `.ply` file and (optionally) a `cameras.json` from your scene (common output from the GS pipeline) – for example `scene.ply` plus camera data.
- Or a specialized `.splat` file (if using the Spark/antimatter viewer) – you can always convert `.ply` to `.splat` in the viewer later by drag-and-drop.
- And you have access to the SuperSplat tool or an equivalent viewer to load this model. You should be logged in to the SuperSplat editor if you plan to publish through it.
4) Quickstart A — Run the Sample App (iOS)
Goal
Run a Gaussian Splat viewer as a PWA on an iPhone (or iPad) to verify that your captured model can be viewed interactively, and test basic features like navigation and AR. We will use the official SuperSplat sample viewer for simplicity, loading our model into it on iOS Safari.
Step 1 — Get the sample viewer
- Use SuperSplat's web viewer: On your iPhone, open Safari and navigate to `superspl.at/editor` (the SuperSplat editor) or the PlayCanvas Statue Project example, which is a live 3DGS scene. This is a fully functional viewer in the browser.
- Alternative – static viewer: You can also use Kevin Kwok's WebGL viewer directly by visiting `antimatter15.com/splat/` on your iPhone. It will load a default scene. (This viewer is more minimal, but doesn't require login.)
- Install as a PWA (optional): If you see an install banner or plus icon in Safari, use "Add to Home Screen". On iOS, open the share menu and tap Add to Home Screen to save the page as an app. (SuperSplat is a PWA, but Safari might not automatically prompt; this manual step makes it behave like an installed app.)
Step 2 — Load your 3D model
- Open or import the PLY: In the SuperSplat Editor (in iOS Safari), tap File → Import and select your `.ply` file. You might need to have the file in iCloud Drive or another location Safari can access. Once selected, the model will load in the viewer. (Large files might take a minute to parse – be patient.)
- (If using the antimatter15 viewer): That viewer lets you drag-and-drop a file onto the page, but mobile Safari doesn't support drag-and-drop. Instead, you can upload your PLY to a web-accessible URL (with CORS enabled) and append `?url=YOURMODELURL.ply` to the viewer URL to load it (see the example below). For a quick test, you might skip this on mobile due to the complexity and use the default sample; you can always test your own model on desktop and then just view the published version on iOS later.
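For instance, a quick way to build such a link (the model URL here is a placeholder – your file must be hosted somewhere that sends CORS headers such as `Access-Control-Allow-Origin: *`):

```js
// Hypothetical example: build a shareable link for the antimatter15 viewer.
// Replace modelUrl with wherever your PLY is actually hosted.
const modelUrl = "https://example.com/scans/scene.ply";
const viewerLink = `https://antimatter15.com/splat/?url=${encodeURIComponent(modelUrl)}`;
console.log(viewerLink); // open this link on the phone
```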
Step 3 — Configure viewer settings
- Adjust quality/performance: In SuperSplat's menu (if visible on mobile, you might need to rotate or use two-finger swipe to scroll the UI), you can toggle settings like point size or performance vs quality. For now, use defaults. The PlayCanvas engine powering it is optimized to handle millions of splats smoothly.
- Enable AR (if available): SuperSplat's viewer has an AR button (usually an icon of a cube or AR headset) if the device/browser supports WebXR AR. On iOS Safari, WebXR augmented reality is not fully supported (Safari doesn't yet implement immersive AR sessions). Safari does support the older Quick Look AR for USDZ models, but that isn't directly applicable here. So on iPhone, you might not see an AR button. That's expected – we'll test AR on Android in Quickstart B.
- Permissions: The viewer doesn't need any special permissions just to display the model. Audio/microphone aren't used here, and camera would only be used for AR (which on iOS Safari isn't active). So no permission prompts should appear in this step.
Step 4 — Run and explore
- Navigate the scene: Use standard gestures – one finger drag to orbit around, two-finger pinch to zoom, two-finger pan to shift view (these are supported in the viewer). Try moving around your splat model. On-screen controls might also be available (e.g., a joystick or arrow pad) depending on the UI.
- Test installation: If you added the page to your Home Screen, close Safari and launch the app from the home screen. It should open full-screen, with no browser UI. Confirm that it loads your model (you might need to import the file again within the PWA on first run). Once loaded, it should retain its state if you leave and come back.
- Save a view (optional): In some viewers like antimatter15's, pressing "v" on desktop saves the current view to the URL. On mobile you might not have a keyboard, but this isn't crucial. Just note that the viewer can encode camera positions if needed.
Step 5 — (Optional) Connect to AR display
iOS currently doesn't support WebXR-based AR in browser, so this step is more of a placeholder. If you have access to an Apple Vision Pro (which supports WebXR in visionOS Safari) or want to simulate AR, you can:
- Use a tool like Reality Composer to place a dummy object in AR, or
- Simply skip AR on iOS. We'll do AR on Android in the next section, which has broad support.
If you were on an AR-supported iOS app, at this step you'd grant camera access for AR and see the 3D model anchored in your environment. Since that's not applicable, ensure everything else works (model renders and you can interact) on iOS.
Verify
- Model renders correctly: You should see a point-cloud or splat representation of the scene. Colors and basic shapes should look photorealistic from the original photos. If you used a sample, check that it resembles the expected scene (e.g., the provided sample might be a bike or a statue – confirm you see that).
- Performance is acceptable: Rotate and move around. On an iPhone, you ideally get ~30fps or more. If it's choppy, the model might be very large. SuperSplat uses the optimized PlayCanvas engine which handles millions of splats at smooth frame rates, so it should be okay. Minor lag during initial loading is normal.
- PWA works offline (if installed): Turn on airplane mode and open the PWA from the home screen (only do this after it has fully loaded once). It might still show the model if the viewer code and model file are cached. If not, don't worry – we haven't set up full offline caching of the model. The viewer app itself should open, but it may need internet if the model wasn't cached. (Full offline support would require a service worker caching the PLY – an advanced step beyond this quickstart, but sketched below.)
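If you want to experiment with that later, a minimal cache-first service worker might look like this (the asset paths are illustrative, not SuperSplat's actual file layout – this applies to a viewer you host yourself):

```js
// sw.js – minimal cache-first service worker (illustrative asset paths).
// Register it from your page with: navigator.serviceWorker.register("/sw.js")
const CACHE = "splat-viewer-v1";
const ASSETS = ["/", "/index.html", "/main.js", "/models/scene.ply"];

self.addEventListener("install", (evt) => {
  // Pre-cache the viewer shell and the model file
  evt.waitUntil(caches.open(CACHE).then((cache) => cache.addAll(ASSETS)));
});

self.addEventListener("fetch", (evt) => {
  // Serve from cache first, falling back to the network
  evt.respondWith(
    caches.match(evt.request).then((hit) => hit || fetch(evt.request))
  );
});
```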
Common issues
- Black screen or model not showing: If you just get a blank view, it could be a Safari/WebGL quirk. Try tapping the AR or VR toggle then back (if visible) or ensure that "WebGL via Metal" is enabled in Safari experimental settings. Also check that your file actually loaded – mobile Safari might not load large files if memory is low.
- "Unsupported file" error: If SuperSplat says the PLY can't be opened, it might be due to format. Ensure the PLY is from a Gaussian Splat pipeline (with per-point color and optionally normals). If you have a
.splatfile instead, note that SuperSplat expects PLY; you'd need to convert it back or use a different viewer. - Slow performance on older iPhone: If you're on an older device (e.g., iPhone X or earlier), the sheer number of points might be taxing. Try using a decimated model (fewer points) or switch to Point Budget mode if the viewer has one to limit drawn splats. You can also ensure no other heavy apps are open.
- App not installing on home screen: If "Add to Home Screen" isn't available, double-check you're not in private browsing and that the site has a valid SSL (it should). SuperSplat is HTTPS, so it should allow PWA install. iOS might also require at least one interaction before showing the option.
5) Quickstart B — Run the Sample App (Android)
Goal
Run the Gaussian Splat viewer on an Android device, and test the augmented reality mode with a supported device. Android's Chrome browser supports WebXR, meaning we can actually view the 3D splat in AR through the camera. We'll also verify the PWA installation and performance on Android.
Step 1 — Get the sample viewer
- Open viewer in Chrome: On your Android phone, navigate to the same viewer used in Quickstart A. For example, open `superspl.at/editor` in Chrome. Alternatively, use a dedicated sample: the PlayCanvas Statue Project link, or any published splat URL from SuperSplat (if you published your own in section 3).
- Install the PWA: Chrome will typically show an "Install" button in the URL bar if it recognizes the site as a PWA. Tap it to install the viewer app. (If you don't see it, open Chrome's menu ⋮ and tap "Install App".) In a moment, an icon for the SuperSplat Editor (or your specific app link) will be added to your launcher.
- Launch the installed app: Tap the new icon to launch the viewer in full-screen mode, just like a native app. This ensures we have the PWA environment ready for AR (some WebXR features work better in installed PWAs on Android).
Step 2 — Configure dependencies (Android-specific)
(Most configuration is already done by the web app, but a few Android specifics:)
- Enable camera permission for AR: The first time you invoke AR mode, Chrome will ask for camera access. Be prepared to grant it. You can also preemptively check in Android Settings -> Apps -> Chrome -> Permissions that Camera is allowed.
- ARCore installation: For WebXR AR to work, your device needs Google Play Services for AR (ARCore). Most modern Android phones have it by default. If not, install Google AR Services from the Play Store. Without ARCore, the AR button may not appear at all in the viewer.
- Background audio (if any): Our app doesn't use audio, but note that a PWA on Android may behave differently from iOS when backgrounded or after the screen turns off. No special changes are needed for us – it's just something to be aware of if you had a long-running process.
Step 3 — Configure app (permissions & settings)
- Load your model: In the PWA viewer (the same interface as in Quickstart A), load your `.ply` model if you haven't already. If you published your model to a URL via SuperSplat in section 3, simply navigate to that URL in Chrome and install from there; your model will then load automatically. (Publishing gave you a unique link to your model's viewer.)
- Add required permissions in WebXR: Chrome will handle this when you tap AR. There's no AndroidManifest.xml for us to edit in a PWA context, but ensure:
- Camera: allowed (as above).
- Motion sensors: for AR/VR, Chrome might ask for gyroscope/accelerometer permission (some devices do). Grant if prompted.
- Optimize for mobile: Some viewers (Spark or other Three.js-based ones) might allow setting an `XRSession`'s required features. SuperSplat is already configured. If you were coding your own viewer, you'd ensure `requiredFeatures: ['hit-test', 'dom-overlay']` etc. are set for AR (see the pseudocode in section 7). With the off-the-shelf viewer, no changes are needed on our side.
Step 4 — Run on Android device
- Test navigation: Use touch controls to orbit and zoom around your scene in the Android PWA, just as you did on iOS. Verify it's smooth. Android devices vary, but a modern phone with Chrome and a decent GPU should handle millions of points easily, similar to desktop.
- Enter AR mode: Tap the AR headset icon or AR button in the viewer. Chrome will likely prompt "Allow access to camera for AR?" – grant it. Then your camera will open, and you'll see a prompt to move your device to scan the environment (this is ARCore establishing tracking).
- Place the model: After a moment, the 3D model should appear overlaid on the real world through your camera feed! By default, it may appear in front of you at some default scale. Physically move and rotate your phone to see the model from different angles. You can usually tap on a surface to reposition the model, or pinch to scale, depending on how the viewer implemented AR (the PlayCanvas viewer likely places it at a fixed real-world size and uses plane detection).
- Exit AR: Use the on-screen X or back button to leave AR mode and return to the normal 3D view. The model should still be loaded in the app, unchanged.
Step 5 — Connect to wearable/mock (VR headset)
Android also supports VR through WebXR. If you have a Meta Quest or another headset with a WebXR-capable browser, you could test VR mode:
- Pair/connect your headset or use Oculus Browser on Quest to navigate to the same URL. You might need to enable developer mode or use an emulator if you want to simulate this on PC.
- For our purposes, testing AR on a phone is usually enough. VR mode on phone (stereoscopic split) might be accessible if the viewer has a VR button. You could drop your phone into something like a Google Cardboard to experience the scene in VR. This is an optional stretch goal.
In summary, ensure at least one AR-capable device (likely an Android phone with ARCore) is tested so you know users can see the model in their space. This is one of the coolest features – being able to "spawn photorealistic 3D models directly into your environment" as the PlayCanvas team puts it.
Verify
- AR mode works: After moving your phone around, the model should lock into the real world with reasonable stability. Check that it's oriented and scaled sensibly (you might find it floating or too large/small; some viewers use 1 unit = 1 meter, which can make a model of a small object appear giant if not adjusted).
- No major lag or drift: The AR should update in real-time as you move (30fps or more). If it's very laggy, your phone might be struggling – consider reducing point count or ensure no thermal throttling.
- Touch to place (if supported): If tapping on a surface repositions the model, verify that works. Not all viewers support this, but it's common in ARCore experiences. If not supported, the model might appear at a fixed location relative to initial placement.
- PWA integration: Close and reopen the app from the launcher while offline – it should still launch (assuming it's cached). The AR may not start offline if certain WebXR libs weren't cached, but generally once installed it should be self-contained.
Common issues
- Gradle auth error / dependency not found: Not applicable here – that's a native Android project issue, and there's no Gradle in our pure web approach. (If you were integrating into a native app via a WebView and a library, you might encounter dependency issues, but not with this PWA.)
- WebXR permission not persisting: Sometimes after first use, the camera permission might not stick in the PWA. If AR fails silently on second try, go to Android Settings -> Apps -> [Your PWA] and ensure Camera permission is granted (PWAs might appear as separate apps named after the site).
- Device connection timeout: If AR mode just shows a black screen or "looking for surfaces" forever, it might be an ARCore issue. Try restarting the app, moving the device slower, or check if ARCore is working in other apps (like Google's Scene Viewer or AR in Google search). Occasionally, a phone needs calibration for AR (waving in figure-8 motion).
- Install banner doesn't appear: If Chrome didn't show an install prompt, make sure you accessed the site via HTTPS and not in Incognito mode. You can still install via menu as mentioned. Also, some sites only prompt after a user interaction (e.g., a button click).
- Browser fallback: If you use Firefox or a non-Chrome browser on Android, note that WebXR might not be supported (Firefox for Android doesn't support AR as of now). Stick to Chrome or a Chromium-based browser for AR features.
6) Integration Guide — Add Gaussian Splat Viewer to an Existing Web App
Goal
Integrate the Gaussian Splatting technology into your own website or web application. We'll take what we learned from the sample app and bring it into an existing project – for example, adding a "3D View" page in your company's web portal that shows a captured scene. The aim is to embed the 3DGS viewer (or library) into your app architecture and enable one end-to-end feature: displaying a splat model to your users, possibly with AR support.
Architecture
- Your Web App UI → 3D Splat Viewer Component → Gaussian Splat model data (.ply/.splat). The viewer component (which could be built with Three.js + Spark, or an iframe to a viewer, or PlayCanvas Engine) takes the model data and renders it in a canvas element on your page. If using AR, the chain is: User clicks AR → WebXR session starts via browser → device camera & sensors provide background + tracking → the 3D content anchors into real world.
- If you use an iframe embed (e.g., pointing to the superspl.at viewer with a URL param of your model), it's like: App -> iframe -> external viewer app. That's quick but less flexible. Alternatively, using a library gives you more control in-app.
- We recommend using Spark (sparkjs.dev) or the PlayCanvas Engine for deep integration:
- Spark (Three.js) – Spark is a Three.js-based renderer specifically for Gaussian Splats, with support for animations, multiple model formats, etc. It can be included as an npm package and you then use it like any Three.js component in your existing app.
- PlayCanvas Engine – Follow the PlayCanvas 3DGS user guide and load its scripts, then instantiate a viewer inside an existing page's canvas. This requires more setup (PlayCanvas engine initialization, loading the splat material/shader).
- Connection to AR/VR: If your app will have an AR button, the integration is between your UI and the browser's WebXR API. The viewer (Spark or PlayCanvas) must be set up to handle an XR session. PlayCanvas has built-in WebXR support; with Three.js (Spark) you use Three.js's WebXR manager.
Step 1 — Install the viewer SDK or library
For a web (JavaScript) project:
- Add the Spark library. If available on npm, run: `npm install @sparkjsdev/spark` (or add it via a `<script>` tag from their CDN). Spark is MIT-licensed open source. Once included, you'll have classes to load a `.splat` or `.ply` file into a Three.js scene (see the sketch after this list).
- Alternatively, include antimatter15's viewer code. It's just a few files (one HTML, one JS, plus an optional Python script). You might directly copy `main.js` and the shader code from that repo into your project, but Spark is a more packaged solution.
- If using the PlayCanvas Engine: include the PlayCanvas engine script (from their CDN or npm). Also get the scripts from the 3D Gaussian Splatting user manual, which include the special material/shader needed to render splats. (PlayCanvas Editor users can simply upload a PLY to the editor, but in code you need to mimic what the editor does.)
- Ensure your build process (Webpack, etc.) includes these dependencies. Since Spark relies on Three.js, you need Three.js r150 or above as a peer dependency.
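As a rough sketch of the Spark route (the `SplatMesh` class and its `url` option are taken from Spark's documentation at the time of writing – verify against sparkjs.dev, as the API may change):

```js
import * as THREE from "three";
import { SplatMesh } from "@sparkjsdev/spark";

// Standard Three.js boilerplate
const renderer = new THREE.WebGLRenderer();
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);

const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(
  60, window.innerWidth / window.innerHeight, 0.1, 100
);
camera.position.set(0, 0, 3);

// Load the captured splat model (the path is a placeholder)
const splat = new SplatMesh({ url: "models/scene.ply" });
scene.add(splat);

renderer.setAnimationLoop(() => renderer.render(scene, camera));
```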
For a native app (optional scenario): If you were integrating into an iOS/Android app, you could skip the above and simply embed a WebView that loads your web content with these libraries. The heavy lifting stays in JavaScript, so no native SDK is needed in that sense.
Step 2 — Add required permissions
Even on web, if you intend to use AR, you should handle permission flows gracefully:
In-browser (for WebXR via HTTPS):
- The browser will request camera permission when starting an AR session. You can't change the prompt text via web, but you can prompt the user with a friendly UI beforehand ("Tap 'Allow' to enable AR camera view").
- No need for a manifest entry like native apps, but ensure your site is on HTTPS (WebXR and camera access won't work on insecure origins).
If using an iframe embed: The iframe needs an `allow="camera; gyroscope; xr-spatial-tracking"` attribute to enable AR sessions inside it. Make sure to set it, or AR will be blocked – see the sketch below.
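A minimal sketch of such an embed, created from JavaScript (the `src` URL and container ID are placeholders for your own published viewer link and page markup):

```js
// Embed a published splat viewer with the permissions WebXR needs.
const frame = document.createElement("iframe");
frame.src = "https://superspl.at/view?id=YOUR_SCENE_ID"; // placeholder link
frame.allow = "camera; gyroscope; accelerometer; xr-spatial-tracking";
frame.style.cssText = "width: 100%; height: 600px; border: 0;";
document.getElementById("viewer-container").appendChild(frame);
```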
If not using AR/VR, then no special permissions at all – viewing the 3D content itself doesn't require any permission.
Step 3 — Create a thin client wrapper in your app
To keep your integration clean, abstract the 3DGS functionality into a few components/services within your app code:
- WearablesClient / SplatViewerClient: This would manage loading the model and the render lifecycle. For example, it might handle initializing Spark's renderer, loading the `.splat` file, and starting the render loop. (The name "WearablesClient" came from a template; here it could be `SplatViewer` or `GSplatService`.)
- FeatureService: If you plan to have interactive features (like switching viewpoints, or triggering an animation or measurement in the scene), encapsulate those here. E.g., a method `focusView(name)` that moves the camera to a preset viewpoint (like Reflct does with curated views).
- PermissionsService: Specifically for AR, this could check whether AR is available (`navigator.xr && navigator.xr.isSessionSupported('immersive-ar')`) and handle prompting the user, or fall back if it isn't. It can also listen for the XR session's `visibilitychange` event to pause/resume rendering as needed.
Implementing these as classes/modules in your codebase will make it easier to maintain. For instance:
```js
// Pseudo-structure
class SplatViewer {
  constructor(container) { /* init Three.js or PlayCanvas in this DOM container */ }
  loadModel(fileUrl)     { /* load .splat or .ply, set up the scene */ }
  startRendering()       { /* begin the render loop */ }
  enterAR()              { /* initiate a WebXR AR session if supported */ }
  exitAR()               { /* end the session, return to normal view */ }
}
```
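For the PermissionsService piece, the availability check can be as small as this (a sketch; `isARAvailable` is an illustrative name):

```js
// Returns true only when the browser exposes WebXR and supports immersive AR.
async function isARAvailable() {
  if (!("xr" in navigator)) return false;
  try {
    return await navigator.xr.isSessionSupported("immersive-ar");
  } catch {
    return false; // e.g., a permissions policy blocks WebXR in this context
  }
}
```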
Definition of done:
- The 3D viewer initializes as part of your app (e.g., when user navigates to the "3D view" page, the canvas appears and model is shown).
- The system can handle connect/disconnect or loading/unloading gracefully. In web terms, this means if the user navigates away, you dispose of Three.js scenes or stop PlayCanvas to not leak memory. If they come back, you re-init if needed.
- Errors (like "model file not found" or "WebGL not supported") are caught and displayed to the user in a friendly way. E.g., if someone's on an old browser with no WebGL, you detect that and show "Your device isn't supported" rather than a blank div.
Step 4 — Add a minimal UI screen
Design a simple UI in your app for the 3D viewer feature:
- Canvas display: A `<canvas>` or `<div>` element that will contain the 3D scene. Ensure it's styled to fill the screen or a designated area.
- "View in AR" button: Visible on devices that support it (use the PermissionsService to check). When clicked, call your `SplatViewer.enterAR()` to start the AR session. You might change the icon to indicate active AR and provide a way to exit (either the same button toggles AR off, or a separate "Exit AR" button appears). A wiring sketch follows after this list.
- Connection status/loading indicator: If the model is loading (which could take a few seconds for a large PLY), show a spinner or progress bar. Once loaded, show a "Connected" or ready state if applicable. (The concept of connect/disconnect is more for Bluetooth devices – here it amounts to a "model loaded" status.)
- Controls: At minimum, maybe a help tooltip explaining touch controls (e.g., "Drag to rotate, pinch to zoom"). If you have multiple scenes or models, a dropdown to switch models could be added (though not in our one-day scope).
- If you have multiple viewpoints or an animation, add buttons for those ("Next View", "Play Tour", etc. – these would use the FeatureService to manipulate the camera or splats).
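A minimal wiring sketch for the AR button (element IDs and the `viewer` object are illustrative; `isARAvailable` is the helper sketched in Step 3):

```js
const arButton = document.getElementById("ar-button");

// Hide the button on devices without AR support.
isARAvailable().then((ok) => { arButton.hidden = !ok; });

arButton.addEventListener("click", async () => {
  try {
    await viewer.enterAR(); // SplatViewer instance from the previous step
    arButton.textContent = "Exit AR";
  } catch (err) {
    console.error("Failed to start AR session", err);
  }
});
```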
By the end of this integration, your existing app should be able to:
- Load a page where the user can see the 3D model inside your UI chrome.
- Let the user interact with it (rotate/zoom).
- Optionally enter AR mode to see it in their environment.
- Close the view and return to the rest of your app seamlessly.
7) Feature Recipe — View Your Splat in Augmented Reality (AR)
Goal
Enable a one-click "View in AR" feature in your Gaussian Splat viewer. When the user taps an AR button, the web app will overlay the 3D splat model onto the real world through the device camera. This recipe covers the user flow, implementation steps, and common pitfalls for AR integration.
UX flow
- Pre-check: The app determines if AR is supported on the current device/browser. If not, the AR button is disabled or hidden.
- User taps "View in AR": The app might show a modal saying "Point your camera at an open area…", and then enters AR mode.
- Camera feed & model overlay: The device camera activates. The 3D model appears as if it's in the environment. The user can move closer or around it physically.
- User interaction in AR: The user can tap on a surface to relocate the model, or pinch on the screen to scale/rotate (these gestures are optional enhancements). The app keeps the model anchored where placed.
- Exit AR: The user taps an exit button or hits back. The app returns to the regular 3D view, with the model in its initial state.
Implementation checklist
- AR support detection: Use `if (navigator.xr)` and `navigator.xr.isSessionSupported('immersive-ar')` to check for AR. Also ensure an `https:` context. Only enable the AR button if the check passes.
- Request permissions: On button tap, call `navigator.xr.requestSession('immersive-ar', { requiredFeatures: ['hit-test', 'dom-overlay'], domOverlay: { root: document.body } })`. This triggers the camera permission prompt. Ensure you have an element (like an overlay div) for the DOM overlay if you want to show UI on top in AR.
- Start AR session: Once granted, use the session in Three.js or PlayCanvas:
- For Three.js: call `renderer.xr.setSession(session);` and your render loop will adapt to AR. Use a hit-test source for placing objects on surfaces (plane detection).
- For PlayCanvas: call `app.xr.start(cameraComponent, pc.XRTYPE_AR, pc.XRSPACE_LOCAL);`, which handles the session internally.
- Place or scale model: Initially, you might just place the model at the camera origin or at a default floor level. Using hit-test, you can update the model's position to the real-world hit point where the user tapped. Make sure the model's scale is reasonable (e.g., if your model units are meters, a small object might appear huge if units are mismatched).
- Visual feedback: While AR session is starting, show some indicator ("Initializing AR…"). Once the model appears, maybe show a brief help ("Tap on a flat surface to reposition"). This can go away after a few seconds.
- Handle session end: Provide a button in the DOM overlay to exit, which calls `session.end()`. Also handle the case where the user just hits Android's back or home button – your app should listen for the session's `end` event to know AR ended, and then restore the UI state.
Pseudocode
```js
async function onEnterAR() {
  if (!navigator.xr) { alert("AR not available"); return; }
  const supported = await navigator.xr.isSessionSupported('immersive-ar');
  if (!supported) { alert("AR not supported on this device."); return; }

  // Request the AR session (this triggers the camera permission prompt)
  const session = await navigator.xr.requestSession('immersive-ar', {
    requiredFeatures: ['hit-test', 'dom-overlay'],
    domOverlay: { root: document.body }
  });

  // Set up AR rendering
  app.viewer.startXRSession(session); // pseudo-call to integrate with viewer (Three.js or PC)
  showMessage("Move your device to scan for surfaces");

  // Hit testing: rays originate from the viewer pose; hit poses are
  // resolved against the local reference space.
  const localSpace = await session.requestReferenceSpace('local');
  const viewerSpace = await session.requestReferenceSpace('viewer');
  const hitTestSource = await session.requestHitTestSource({ space: viewerSpace });

  session.addEventListener('select', (evt) => {
    // On screen tap in AR, place the model at the first surface hit
    const hits = evt.frame.getHitTestResults(hitTestSource);
    if (hits.length > 0) {
      const pose = hits[0].getPose(localSpace);
      model.position.copy(pose.transform.position); // pseudo: move your model
    }
  });

  session.addEventListener('end', onARSessionEnd);
}
```
This is a simplification. In Three.js, much of this is abstracted away when you use the WebXR helpers from its examples (e.g., `ARButton`) and set `renderer.xr.enabled = true`.
Troubleshooting
- AR button not showing: Check the support detection. iPhones (Safari) will return false for `immersive-ar` support as of now, hence no AR. That's normal – document it for users ("AR works on Android or specific devices only").
- Permission denied: If the user denies camera access, handle it gracefully. Perhaps allow re-trying by showing an "Enable camera access in settings to use AR" message. Don't just fail silently.
- Model not visible or tiny in AR: Could be scale issues. Many AR frameworks assume 1 unit = 1 meter. If your model was in arbitrary scale, you might need to scale it up or down when AR starts. For example, if you captured a small object but it's appearing microscopic, multiply scale by a factor until it looks right.
- Model drifting or jittering: This is often an AR tracking issue in feature-poor environments (like a blank wall). Encourage the user to move slower or go to a more textured area. Also ensure you're using a `local` or `local-floor` reference space to minimize drift.
- Exiting AR freezes camera feed: Occasionally, on some Android devices, the camera can get stuck after AR if the session isn't properly ended. Make sure to call `session.end()` and also dispose of any AR-specific resources. If using Three.js, call `renderer.xr.setSession(null)` after the session ends.
- Performance in AR: AR mode might run a bit slower because the device is doing camera passthrough and tracking in addition to rendering the splats. If the frame rate is low, consider reducing rendering load (you can dynamically lower point size or density during AR if needed). Also, ARCore might drop the camera frame rate on older devices.
By implementing AR, you significantly enhance the viewer's utility – seeing a photorealistic scene in context is a "wow" feature (imagine placing a scanned outdoor scene on your table or viewing a captured room in your own living room). Just be mindful of the caveats above for a smooth experience.
8) Testing Matrix
When you think you have everything working, it's time to test across scenarios. Use this matrix to ensure your Gaussian Splat PWA and integration holds up in different conditions:
| Scenario | Expected Result | Notes |
|---|---|---|
| Sample model on Desktop | Loads and is navigable at good FPS | Baseline environment (Chrome/PC). Also test in Firefox for basic viewing (no AR there). |
| Mock device (no camera) | Viewer works (AR button hidden/disabled) | E.g., using Chrome dev tools device emulation. Ensures graceful degradation when AR not available. |
| Real device – close range | Model appears with high detail, low latency | Test on a powerful phone (e.g., Pixel 7 or iPhone 14). Should be smooth ~60fps in normal view. AR at least 30fps. |
| Real device – background/locked | App resumes correctly | While viewing, switch apps or lock screen, then return. The viewer should still function or at least not crash. PWA might need a reload if suspended long. |
| Permission denied (AR) | User sees clear error, app continues | Manually deny camera permission on AR prompt. The app should handle it (e.g., show "AR unavailable without camera" toast and return to 3D view rather than a black screen). |
| No internet (offline) | PWA opens (if cached), model maybe limited | If user opens the app offline after first load, it should at least show the interface. If the model was cached (check caching strategy), it shows; if not, a friendly message "No internet – unable to load model" should appear. |
| Slow network | Loading indicator visible until done | Throttle network and ensure the user isn't staring at a blank screen. Your spinner or progress (if you have one) should show during model download. Possibly handle partial load if using progressive loading (antimatter15's viewer progressively loads points). |
| Different model sizes | Works for small and large models | Try a simple model (~5MB .ply) and a heavy model (50MB+ .ply). The heavy one might take longer; ensure no timeouts. On very heavy, you might see memory issues on mobile – document a recommended point count or use compression (SOG). |
| Multiple navigations | No memory leaks, no duplicated listeners | If your integration is in a single-page app, navigate away from and back to the 3D view multiple times. FPS should remain consistent and memory use stable. If it degrades, likely some Three.js scenes weren't disposed. |
| Unsupported browser | User gets a polite message | Test IE11 (just kidding, we simply wouldn't support that at all). But maybe test an older device with WebGL disabled, or Chrome with flags turned off. The app should detect lack of WebGL/WebXR and show an informative alert instead of failing silently. |
Use this table to track testing. As you discover issues, update the app and re-test the relevant scenarios. It's a good idea to automate some of these checks – for instance, a Cypress or Playwright test that loads the page and verifies a WebGL context – though manual testing remains key for AR.
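A minimal sketch of such a smoke test with Playwright (the URL is a placeholder for wherever you host the viewer):

```js
import { test, expect } from "@playwright/test";

test("viewer page can create a WebGL context", async ({ page }) => {
  await page.goto("https://your-viewer.example.com/"); // placeholder URL
  const hasWebGL = await page.evaluate(() => {
    const canvas = document.createElement("canvas");
    return !!(canvas.getContext("webgl2") || canvas.getContext("webgl"));
  });
  expect(hasWebGL).toBe(true);
});
```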
9) Observability and Logging
To maintain and improve your Gaussian Splat viewer app, build in some logging and analytics:
- Model load events: Log when a model load starts and ends, and how long it took – e.g., `model_load_start`, `model_load_success` (with duration), `model_load_fail` (with error info). This can help identify whether users experience long load times or failures.
- Rendering performance: If possible, log frame rate or timing stats. For example, after 5 seconds of use, capture the average FPS or whether the device had to reduce quality. You might send an event like `render_perf { fps: 55 }`, or simply log to the console for devs.
- User interactions: Log key actions: `enter_ar_start`, `enter_ar_success`, `enter_ar_fail` (if the AR session fails), `exit_ar`. Also, if your app has multiple views or hotspots, log when the user switches view (`view_changed_to: X`).
- AR session metrics: You can measure how long users stay in AR. Log an event on AR start and AR end, and compute the duration. Also note whether any tracking-lost events occur (WebXR doesn't directly expose this, but a quick exit might indicate an issue).
- Error logging: Any caught exceptions (loading errors, WebGL context lost, etc.) should be reported (to the console in dev, or to a monitoring service in prod). For instance, if a user's device can't support the required texture size or runs out of memory, catch that and log something like `error: "WebGL context lost due to OOM"`.
- Device info: It's useful to tag logs with device or browser info (e.g., `platform: iOS-Safari` or `Android-Chrome`, plus the device model if accessible). That way, if a particular phone model has issues, you can spot it.
- Connection/reconnection (if applicable): If you had a concept of connecting to a hardware device (not really the case in this PWA), you'd log `connect_start`, `connect_success`, etc. In our scenario, the "connection" is more about checking AR availability or retrieving model data, which the events above already cover. A minimal logging helper is sketched below.
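A small sketch of such a helper (names like `logEvent` and the `/analytics` endpoint are placeholders, not a specific SDK):

```js
// Tiny event logger: console in dev, beacon to your backend in prod.
function logEvent(name, data = {}) {
  const payload = { name, ts: Date.now(), ua: navigator.userAgent, ...data };
  console.debug("[splat-viewer]", payload);
  // In production you might instead do:
  // navigator.sendBeacon("/analytics", JSON.stringify(payload));
}

// Usage: time a model load.
const t0 = performance.now();
logEvent("model_load_start", { url: "models/scene.ply" });
// ...after the load promise resolves:
logEvent("model_load_success", { duration_ms: Math.round(performance.now() - t0) });
```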
By monitoring these logs (especially if you integrate with an analytics platform), you can answer questions like: Are users actually using the AR feature? Do a lot of them fail to load the model? What's the median load time on mobile vs desktop? This feedback loop helps prioritize improvements (e.g., investing in compression if load times are high, or simplifying AR flow if many skip it).
10) FAQ
Q: Do I need professional camera hardware to start?
A: No, you can start with a smartphone camera for small scenes or objects. The technique works with any set of images. However, for large or critical projects, a decent camera with manual settings yields better results (sharp, consistent images). If you don't have a camera, you can even use provided sample data to experiment first. The key is having overlapping photos of the subject from many angles with consistent lighting.
Q: Which devices are supported for viewing?
A: Any device with a WebGL-capable browser can view the 3D model. This includes most modern smartphones, tablets, and computers. For AR mode, currently Android devices with ARCore (and Chrome/Edge browser) are fully supported. iPhones/iPads can view the 3D model in Safari, but due to Apple's current limitations, web AR is not available in browser (as of early 2026). High-end VR headsets with browsers (Meta Quest using Oculus Browser, etc.) can enter VR mode to view the scene in immersive 3D. Always test on your target devices; performance may vary with hardware.
Q: Can I use this in production, or is it a toy/preview?
A: Gaussian Splatting is new but maturing fast. There are already production uses (e-commerce demos, online exhibitions, etc.). Tools like PlayCanvas's SuperSplat and Spark are open-source and fairly robust. However, keep in mind it's evolving: file formats might change (e.g., more efficient SOG format might replace raw PLY in the future) and browser tech (WebGPU/WebXR) is improving. You should be fine to pilot it in production for web experiences, especially for marketing or visualization. Just avoid mission-critical applications until you've done thorough testing. Also, consider the data size and performance on your users' devices (deliver compressed models if you can).
Q: How can I share or publish my 3D scene easily?
A: The quickest way: SuperSplat's Publish feature. In the SuperSplat editor, you hit File → Publish, fill in some info, and it will host your model on a URL for you. It even lets you mark it unlisted if you want to keep it semi-private. The published page is essentially a PWA viewer that anyone can load. If you want to self-host, you can export an HTML from SuperSplat or use the antimatter15 viewer. That way, you could put the HTML + model on your own website. For integration into an existing app, follow the steps in section 6 to embed it rather than just sharing a link.
Q: Can I update or add to the splat model after it's made?
A: Not easily – think of it like a photograph. If you want to add new objects or remove things from the scene, you'd typically have to re-capture and regenerate the model or use editing tools in SuperSplat to isolate parts. SuperSplat Editor allows some post-processing, like cropping out background or combining multiple splats in a scene. But you can't, for example, dynamically insert a new 3D object into a baked splat scene and have it interact realistically with the splats (there are no dynamic shadows cast on splats, etc., currently). For adding annotations or hotspots, you can overlay those in the viewer (e.g., Reflct's approach of adding metadata and UI on top).
Q: The quality isn't as good as I expected. How can I improve it?
A: A few tips:
- Better input photos: The reconstruction quality depends on your photos. Use higher resolution images, cover the subject from more angles (including high/low angles, not just around the middle), and ensure focus/exposure are consistent. Fewer but well-planned shots are better than blasting hundreds of random shots. Avoid motion blur and dramatic lighting differences between shots.
- Tune the reconstruction: If using an open-source pipeline, some allow adjusting the resolution or number of Gaussians. More points can capture detail better (at cost of file size). Also, filtering out outliers (noisy points) improves visual quality.
- Viewer settings: In some viewers you can adjust point size or enable smoothing. For example, Spark might allow different shader modes that could make the splats blend nicer. PlayCanvas engine improvements in 2025 significantly boosted rendering quality and performance (smoother splats with fewer artifacts), so use the latest viewer tech available.
- Post-process: If you have minor gaps or holes, you can mesh the splat point cloud and fill holes using mesh tools, but that may reduce photorealism. It's an active area of research to make these splat models even more high-fidelity; keep an eye on updates.
Q: How is this different from point clouds or photogrammetry?
A: Gaussian Splats are like point clouds on steroids. Traditional photogrammetry often produces a mesh or dense point cloud; Gaussian Splatting instead optimizes point size, color, and opacity to represent the scene with fewer artifacts and with some volume (each point is a tiny 3D Gaussian blob). This results in smoother blending of points (less "holey" than raw point clouds) and surprisingly good preservation of photorealistic details. It's also generally faster to render than a heavy textured mesh. However, it's a complement to those techniques – in fact, many pipelines (like Luma or NerfStudio) can output both NeRF and point cloud models. Think of 3DGS as a modern approach to get NeRF-like visual quality with point cloud-like simplicity in rendering.
11) Changelog
- 2026-01-10 — Verified with SuperSplat Editor v0.22.0 and Spark v1.0.0 on Chrome 117 (Windows 11, NVIDIA RTX 3060) and iPhone 15 (iOS 17 Safari). Added notes on AR support for iOS vs Android. Updated instructions to reflect PlayCanvas 3DGS improvements and SOG format introduction.
- 2025-08-18 — Initial version based on PlayCanvas blog announcements and Heliguy's workflow guide. Tested basic PWA installation and AR on Android Chrome. (Heliguy's capture workflow reference.)
- 2025-05-22 — Background: PlayCanvas released SuperSplat with PWA support around this date, enabling easier deployment. Ensured guide aligns with that capability (one-click install, PLY file association).
- 2024-06-05 — Background: PlayCanvas Editor integrated Gaussian Splats. Early experiments done with that editor to combine multiple splats, informing the integration section. Noted need for user-friendly navigation (inspired by Reflct's approach).
- 2024-03-10 — Background: Research on capture techniques (Medium post by Y. He) provided advanced tips on improving quality. Those insights were distilled into the FAQ and capture guidelines.