How to Capture 3D Scenes with Meta Glasses and Gaussian Splatting on Mobile & XR
Author: Almaz Khalilov
TL;DR
- You’ll build: a pipeline that uses Meta’s wearable glasses to capture real-world scenes and reconstructs them as interactive 3D models using Gaussian Splatting (a high-fidelity point-cloud technique).
- You’ll do: Get access → Install the Meta Wearables SDK → Run the sample app → Capture a scene → Process it with Gaussian Splatting → Integrate into your app → Test on device.
- You’ll need: Meta developer account (with Wearables Toolkit preview enabled), Ray-Ban Meta smart glasses (or Quest 3), iPhone/Android device, Xcode 15+ / Android Studio Flamingo.
1) What is Meta Glasses + Gaussian Splatting?
What it enables
- Hands-free 3D capture: Use Meta’s smart glasses (e.g. Ray-Ban Meta) to record scenes from a first-person perspective without holding a camera. This provides a natural, continuous way to scan environments while on the move.
- Photorealistic scene reconstruction: Turn the captured images into a 3D model using Gaussian Splatting – a novel technique that represents scenes as millions of tiny colored Gaussian points instead of a mesh. It can surpass classical photogrammetry in visual fidelity and achieve real-time rendering performance.
- Immersive digital twins: The resulting 3D “splat” model can be viewed or integrated into interactive applications (mobile AR apps, VR experiences, Unity/Unreal scenes). This allows creating digital twins of real spaces for visualization, simulation, or VR meetings, in a fraction of the time of manual modeling.
When to use it
- Rapid environment scans: Ideal for quickly digitizing rooms, objects or outdoor spaces into VR/AR – for example, making virtual home tours, game levels from real places, or construction site updates.
- Field documentation: Useful in scenarios like cultural heritage preservation or damage assessment (e.g. documenting earthquake damage in 3D), where a hands-free capture and quick turnaround 3D model is valuable.
- Creative prototyping: Allows developers and artists to experiment with generating 3D worlds from simple video captures (even AI-generated videos have been used to create 3D scenes via this pipeline). It’s a fast path from real-world footage to a holographic experience.
Current limitations
- Device availability: The Meta Wearables SDK (for Ray-Ban glasses) is in developer preview – you must apply for access. Only Meta’s Ray-Ban Meta smart glasses (both camera-only and the new Display model) and Oakley Meta glasses are supported in this toolkit as of now. On the VR side, Gaussian Splats are only officially supported on Meta Quest 3/3S devices for now.
- Capture constraints: The glasses record 1080p video with a 12 MP wide camera – good for many cases, but not as detailed as high-end cameras. They lack depth sensors (unlike Quest 3), so reconstruction relies purely on image photogrammetry. Ensure steady movement and good lighting when recording to get quality results.
- Processing requirements: Reconstructing a scene via Gaussian Splatting isn’t fully on-device yet. It involves extracting frames and running a training/optimization step on a GPU. This might be done on a PC or cloud service (e.g. using tools like COLMAP + OpenSplat or cloud APIs); a rough sketch of this offline step follows after this list. Meta’s own Hyperscape feature uses cloud rendering and streaming to handle large scenes. Expect that large scene reconstructions may take minutes to hours of processing time.
- Platform support: As of 2025, neither Unity nor Unreal has native GS support (a custom plugin is needed). Meta’s Spatial SDK lets you load one .splat/.ply at a time on Quest with ~150k points max, so ultra-detailed models may need simplification. Also, you can currently capture from the glasses but not push custom visuals to them (the Ray-Ban Display’s AR capabilities are not yet open to developers).
- Data and privacy: The raw capture data can be large (many hundreds of MBs) for a short scan. Be mindful of storage and transfer. Also, always obtain necessary user permissions (camera, microphone, Bluetooth, etc.) and be aware of privacy – you’re recording real environments, so inform subjects as needed.
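To make the offline processing step above concrete, here is a rough Kotlin sketch of that pipeline, assuming `ffmpeg` and COLMAP are installed on a GPU machine and that you use a splat trainer such as OpenSplat. The trainer invocation is a placeholder – check your tool’s documentation for the real flags.

```kotlin
import java.io.File

// Runs an external command and fails loudly if it exits non-zero.
fun exec(vararg cmd: String) {
    val exit = ProcessBuilder(*cmd).inheritIO().start().waitFor()
    check(exit == 0) { "Command failed: ${cmd.joinToString(" ")}" }
}

fun main() {
    val workspace = File("scan_workspace").apply { mkdirs() }
    val frames = File(workspace, "images").apply { mkdirs() }

    // 1. Extract ~2 frames per second from the glasses video.
    exec("ffmpeg", "-i", "glasses_capture.mp4", "-vf", "fps=2", "${frames.path}/frame_%04d.jpg")

    // 2. Recover camera poses and a sparse point cloud with COLMAP.
    exec(
        "colmap", "automatic_reconstructor",
        "--image_path", frames.path,
        "--workspace_path", workspace.path
    )

    // 3. Train the Gaussian Splat (placeholder command – flags are tool-specific).
    exec("opensplat", workspace.path)
}
```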
2) Prerequisites
Access requirements
- Meta developer account: Create or log in to the Meta Developer Portal.
- Join the Wearables developer preview: Apply for the Meta Wearables Device Access Toolkit preview program (Meta’s early-access SDK for Ray-Ban glasses). This may require agreeing to terms and waiting for approval.
- Enable preview features: Once approved, enable the Wearables Toolkit in your developer dashboard (you might get an organization or team set up in the Wearables Developer Center).
- Create a project/app ID: In the portal, create a new project for your glasses app. This will generate an App ID (and possibly an App Secret or API token) needed to initialize the SDK.
Platform setup
iOS
- Xcode 15+ on macOS, targeting iOS 17 or later.
- Swift Package Manager (SPM) (or CocoaPods) for installing the Wearables SDK.
- A physical iPhone (iOS 17+) for testing. (The glasses connect over Bluetooth; use of a Simulator is not practical for hardware integration.)
Android
- Android Studio Flamingo (2022.2)+ with Android SDK 33+ (Android 13 or later).
- Gradle (Plugin 8.1+) and Kotlin (1.8+) setup in your project.
- A physical Android phone (Android 13+). (Bluetooth connectivity to the glasses may not work on an emulator.)
Hardware or mock
- Meta wearable device: Ray-Ban Meta smart glasses (camera glasses, or the Ray-Ban Display model) or an alternative capture device (Meta Quest 3 in passthrough mode). In a pinch, you can use a recorded video from a phone as a mock input for testing the reconstruction pipeline, but real hardware is needed for end-to-end testing.
- Bluetooth enabled: Ensure your phone’s Bluetooth is on and that you understand the permission prompts (on iOS, the app will ask to access Bluetooth; on Android, it needs the `BLUETOOTH_CONNECT` permission to talk to the glasses).
3) Get Access to Meta Glasses + GS Pipeline
- Go to the Portal: Visit the Meta Wearables Developer Center and log in with your developer account.
- Request access: Follow the steps to join the Wearables Device Access Toolkit preview. This might be under a “Join Preview” button or an application form – submit any required information (e.g. your app idea or use-case).
- Accept terms: If prompted, agree to any developer preview terms or NDAs. Meta may restrict distribution of apps built with the preview (only internal testing, etc.).
- Create a project: Once you have access, create a new project in the Wearables Dev Center. Give it a name (e.g. “My3DCaptureApp”). Note the Project ID/App ID it generates.
- Obtain credentials: Download any needed config or credentials for the project:
  - iOS: You might get a configuration `.plist` or keys for the SDK. For example, Meta could provide an API key to include in your app.
  - Android: Obtain any API token or client ID that the SDK requires. This might be added to a `gradle.properties` file or as metadata in `AndroidManifest.xml`.
- Setup glasses: Pair your Ray-Ban Meta glasses with your phone via the official Meta View app or Bluetooth settings, if not done already. (The glasses should be updated to the latest firmware.)
Done when: you have developer access (SDK preview enabled), an App ID/secret or token for the Wearables SDK, and can see your project listed on the Meta portal. You should also have your glasses paired and ready, appearing in your phone’s Bluetooth devices or the companion app.
4) Quickstart A — Run the Sample App (iOS)
Goal
Run the official iOS sample app to verify that your Ray-Ban glasses connect and that you can capture a photo with them from the app.
Step 1 — Get the sample
- Option 1: Clone the Wearables SDK repo. For example, run `git clone https://github.com/facebook/meta-wearables-dat-ios.git` and open the Xcode workspace or project provided (if the SDK includes a sample app).
- Option 2: If a separate sample app zip is provided in the portal, download it and open the Xcode project (e.g. `MetaGlassesSample.xcodeproj`).
(Make sure you have Xcode command-line tools installed if using git. The sample app should include all necessary code to connect to the glasses.)
Step 2 — Install dependencies
If the sample uses Swift Package Manager or CocoaPods:
- Swift Package Manager: In Xcode, check Package Dependencies. If not already added, add the package URL for the Meta Wearables SDK (the GitHub repo). Use the latest tag (e.g. 0.1.0 or a commit hash provided in docs).
- CocoaPods (if applicable): Run `pod install` in the project directory to install the MetaWearables pod. (The Meta toolkit might not be on CocoaPods yet; SPM is likely preferred in the preview.)
Make sure the MetaWearables SDK framework is linked in the build settings.
Step 3 — Configure the app
- Add config file/keys: If you were given an API key or config plist in section 3, add it to the project. For example, include a `MetaWearablesConfig.plist` in the bundle or set up Info.plist entries as required by the documentation.
- Set Bundle ID: Match the app’s Bundle Identifier to the one you registered in the portal (if applicable). The Meta backend might verify the app ID on initialization.
- Permissions and capabilities: In Xcode > Target > Signing & Capabilities, enable Bluetooth under Background Modes if your app needs to maintain a connection while in the background. In Info.plist, add usage descriptions:
  - `NSBluetoothAlwaysUsageDescription` – “Needs Bluetooth to connect to Meta smart glasses wirelessly.”
  - `NSCameraUsageDescription` – “Needs camera access to save photos from the glasses.” (If your app will save images to the Camera Roll or use the phone camera as a fallback.)
  - `NSMicrophoneUsageDescription` – “Needs microphone access for audio capture from glasses.” (Only if using audio features.)
- Analytics opt-out: The Meta SDK may also support an Info.plist entry to opt out of analytics data collection, for instance a `MWDAT` key with sub-key `Analytics -> OptOut = YES` (optional, for privacy in development).
Step 4 — Run
- Connect your iPhone via USB and select the SampleApp target in Xcode.
- Choose your device as the run destination (no simulators, use a real iPhone).
- Build and Run the app. Install it on your phone.
The app should launch. Watch the Xcode console for any logs (especially from the MetaWearables SDK initialization).
Step 5 — Connect to wearable
Upon launch, follow any on-screen instructions to connect the glasses:
- The sample app might automatically scan for the glasses or present a “Connect Glasses” button. Put your glasses in pairing mode if needed (usually powering them on will make them discoverable if they are not already connected to the phone).
- If prompted by iOS, allow Bluetooth access for the app.
- Once the app finds the glasses (by name, e.g. “Ray-Ban Meta XX”), select it to connect. The status in the app should change to Connected.
- Grant any other permissions the app requests (camera roll access if saving photos, etc.).
Verify
- Glasses status: The app displays a “Connected” indicator or the device name, confirming the glasses are linked.
- Capture test: Use the sample app’s UI to take a photo or start a capture. For example, tap a “Capture Photo” button. The glasses’ camera should snap a picture (you might hear a shutter sound on the glasses), and within a second the image should appear in the app (likely as a thumbnail or new screen). Verify that an image from the glasses is received and visible.
- Data received: If the sample supports streaming video frames, you might see a live preview from the glasses’ camera in the app. Ensure that it updates as you move the glasses.
Common issues
- Build error (library not found): If Xcode can’t find the MetaWearables framework, ensure you added the SPM package and imported the module in code. Clean the build folder and retry. If using CocoaPods, open the `.xcworkspace` instead of the `.xcodeproj`.
- Glasses not found: If the app doesn’t detect the glasses, make sure the glasses are turned on and paired to the phone at the OS level. You may need to pair them first via the Ray-Ban Meta app. Also ensure Bluetooth permissions are granted. If scanning still fails, try toggling Bluetooth off and on, or reboot the devices.
- No image coming through: If capture appears to do nothing, check that you accepted any prompt on the glasses (some models flash an LED for photo capture). Also verify the sample app has the necessary Photo Library permission if it tries to save the image. Look at Xcode console for errors (e.g. “unauthorized” or connection lost).
5) Quickstart B — Run the Sample App (Android)
Goal
Run the official Android sample app to verify that your glasses connect and that you can capture a photo on Android. We will build and install the app, then test the camera capture via the wearable.
Step 1 — Get the sample
- Clone the repository: `git clone https://github.com/facebook/meta-wearables-dat-android.git` (or download it). Open the project in Android Studio.
- The sample app may be located in a specific module (e.g. `app/`). Use Android Studio to import the project at the root folder so it can set up the Gradle configuration.
Step 2 — Configure dependencies
The Meta Wearables SDK might be distributed via Maven or GitHub Packages:
- Add Maven repository: In the project’s top-level `build.gradle` or settings file, ensure the Maven URL for the Meta Wearables SDK is added. (For example, Meta might host the Android SDK on Maven Central or a private repo. Check the documentation for the repository URL.)
- Authenticate (if required): If the SDK is in a private GitHub Packages registry, you’ll need to add an access token. For instance, in your `~/.gradle/gradle.properties`, add a line like `META_WEAR_TOKEN=ghp_yourTokenHere`. Then, in the project’s Gradle configuration, use this token for the Maven credentials so Gradle can download the SDK AAR (see the sketch below).
- Sync Gradle: After adding the repository and any necessary credentials, click “Sync Project with Gradle Files” in Android Studio. It should fetch the Meta Wearables SDK and any other dependencies.
(If the SDK is open source, it might already be included in the project modules. If so, Gradle sync will just build those modules.)
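For illustration, a token-protected repository entry in `settings.gradle.kts` might look like the following. The repository URL and property names are assumptions, not confirmed Meta endpoints; substitute the values from the SDK documentation.

```kotlin
// settings.gradle.kts — hypothetical repository setup for the Wearables SDK.
// The URL and property names are assumptions; use the values from Meta's docs.
dependencyResolutionManagement {
    repositories {
        google()
        mavenCentral()
        maven {
            url = uri("https://maven.pkg.github.com/facebook/meta-wearables-dat-android")
            credentials {
                username = providers.gradleProperty("META_WEAR_USER").getOrElse("")
                password = providers.gradleProperty("META_WEAR_TOKEN").getOrElse("")
            }
        }
    }
}
```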
Step 3 — Configure app
- Application ID: Change the `applicationId` in `app/build.gradle` to a unique package (e.g. `"com.yourcompany.metacapture"`). If Meta’s portal expects a specific ID (from section 3), use that one here.
- Insert credentials: If the portal provided an API key or client ID, add it to the app. For example, you might put it in `AndroidManifest.xml` as a meta-data entry or in a config file that the SDK uses.
- Permissions: Open `AndroidManifest.xml` and ensure the following permissions are present:
  - `<uses-permission android:name="android.permission.BLUETOOTH_CONNECT" />` (required for Bluetooth device connectivity on Android 12+)
  - `<uses-permission android:name="android.permission.BLUETOOTH_SCAN" />` (required if the SDK scans for devices; on Android 12+ this is needed for BLE scanning)
  - `<uses-permission android:name="android.permission.ACCESS_FINE_LOCATION" />` (on Android <12, BLE scans need location permission; on 12+ this may not be strictly needed when using the new BLUETOOTH_SCAN, but include it for backward compatibility)
  - `<uses-permission android:name="android.permission.RECORD_AUDIO" />` (if your use case includes audio capture from the glasses)
  - `<uses-permission android:name="android.permission.CAMERA" />` (not needed for the glasses’ camera, but include it if the app itself uses the phone camera as a fallback or for AR)
  - Also ensure you have the Internet permission if you will upload images to a server for processing.
- SDK initialization: The sample likely handles this, but ensure that the code to initialize the Wearables SDK is present (usually in an `Application` subclass or the launch Activity). It might require passing the App ID or configuring callbacks.
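As a sketch of what that initialization might look like – `MetaWearables.initialize(...)` and its parameters are assumptions, since the preview SDK’s exact entry point may differ:

```kotlin
import android.app.Application

// Hypothetical sketch only: MetaWearables.initialize(...) and its parameters are
// assumptions about the preview SDK's API. Follow the official sample for the real call.
class CaptureApplication : Application() {
    override fun onCreate() {
        super.onCreate()
        MetaWearables.initialize(
            context = this,
            appId = "YOUR_APP_ID" // the App ID created in the Wearables Developer Center
        )
    }
}
```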
Step 4 — Run
- In Android Studio, select the sample app’s run configuration (usually “app”).
- Choose a target device: connect your Android phone via USB (with USB debugging on) and select it.
- Click Run. The app will build and install on your phone. Watch Logcat for any errors during launch.
Launch the app on the phone if it doesn’t auto-start. It should ask for necessary permissions at runtime.
Step 5 — Connect to wearable
On first launch, Android will prompt for Bluetooth permissions:
- Grant the Bluetooth permission (“Allow this app to find, connect to, and determine the position of nearby devices”). Also grant location if it asks (for BLE scanning on older devices).
- The sample app should list or automatically detect your Ray-Ban glasses. If they are not already paired to the phone, go to Bluetooth settings and pair them first (they might appear as “Ray-Ban XYZ”).
- In-app, tap the device name to connect. Watch for a status change to connected. On connection, the glasses’ LED might indicate an active link.
- Now try the capture feature: press the Capture button in the app. The glasses should take a photo. Android may ask for write permissions if the app tries to save to Gallery; allow it.
- The image taken by the glasses should be received by the app. It may display on the screen or be saved – check the UI or storage (often in Pictures/ folder) for the new photo.
Verify
- Connection established: The app shows a “Connected” status or the glasses’ name with a connected state icon.
- Photo capture works: After tapping the capture control, you receive a photo from the glasses in the app. Verify the image looks correct (it’s from the glasses’ POV).
- End-to-end flow: If the sample includes any cloud processing step (for GS), ensure that step can be triggered (some samples might upload images to a service for processing; if so, test that or skip if not applicable in sample).
Common issues
- Gradle auth error: If you see errors like “Could not resolve com.meta.wearables… unauthorized”, it means the Maven repo access failed. Double-check the token setup in gradle.properties and that your settings.gradle includes the maven URL with credentials. You may need to generate a new token or ensure it has scopes to read packages.
- Manifest merger conflict: If your app manifest conflicts with library manifests (e.g., duplicate permission declarations or provider authorities), resolve by merging or removing duplicates. Typically, you can safely have the same permission declared multiple times, but unique authorities (if any) must be renamed.
- Device connection timeout: If the app fails to find the glasses, try toggling Bluetooth and ensure no other app (like the official Meta app) is actively connected to the glasses. Only one app can use the glasses at a time. Also ensure the glasses are sufficiently charged. If connection drops shortly after, keep the glasses close to the phone and avoid obstacles (BLE range is ~10m).
6) Integration Guide — Add Meta Glasses + GS to an Existing App
Goal
Integrate the Meta Wearables SDK and a Gaussian Splatting pipeline into your own app, enabling users to capture a scene with smart glasses and view a 3D reconstruction. We’ll outline a simple architecture and steps to capture data, process it, and display the result.
Architecture
Mobile App UI → Wearables SDK Client → Glasses Hardware
(User taps capture; SDK sends command to glasses; glasses return images)
→ GS Processing (cloud or device) → Result Callback → App UI Updates/Storage
In practice, your app will have a client manager that handles connecting to the glasses and listening for image frames. When a capture is done, you send the frames to a Gaussian Splatting service (which could be a cloud API or local library). Once the 3D model is generated (a .ply or .splat file), the app can download it and display it in a viewer (for example, using Meta’s Spatial SDK on Quest or a custom renderer on mobile).
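As a rough Kotlin sketch of that flow – the lambdas below are placeholders for whatever your SDK wrapper, GS backend, and viewer actually expose; none of this is a confirmed Meta API:

```kotlin
import java.io.File

// Hypothetical end-to-end flow matching the diagram above. The three lambdas stand in
// for your SDK wrapper (capture), your GS backend (reconstruct), and your viewer (display).
class SceneCaptureFlow(
    private val captureScene: suspend () -> List<File>,      // frames from the glasses
    private val reconstruct: suspend (List<File>) -> File,   // returns the finished .splat/.ply
    private val display: (File) -> Unit,                     // load the model into a viewer
) {
    suspend fun run() {
        val frames = captureScene()
        val model = reconstruct(frames)  // cloud API call or local training, possibly long-running
        display(model)
    }
}
```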
Step 1 — Install the SDK
iOS: In your existing Xcode project, add the Meta Wearables SDK:
- Open Swift Packages and add the package URL (e.g. `github.com/facebook/meta-wearables-dat-ios`). Select the latest version tag. Import the module in your code (e.g. `import MetaWearables`).
- Alternatively, use CocoaPods: add `pod 'MetaWearables', '~> 0.x'` to your Podfile and run `pod install`.
Android: Add the Wearables SDK dependency to your app’s Gradle config:
- In `settings.gradle`, include the Maven repository if it is not already present.
- In `app/build.gradle`, under dependencies, add the Wearables SDK artifact (the exact coordinates come from Meta’s documentation; a hypothetical example follows below).
- Sync Gradle to fetch the library. Import the SDK classes in your Kotlin/Java code as needed.
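A hypothetical dependency block is shown below; the artifact coordinates are placeholders until you have the real ones from Meta’s documentation.

```kotlin
// app/build.gradle.kts — placeholder coordinates; replace with the ones Meta documents.
dependencies {
    implementation("com.meta.wearables:device-access-toolkit:0.1.0") // hypothetical artifact
}
```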
Step 2 — Add permissions
Update your app’s permissions to cover wearable usage and data capture:
iOS (Info.plist):
- `NSCameraUsageDescription` = “Allow the app to access the camera (for saving photos and AR features).” (Even though we use the glasses’ camera, this covers any camera usage in-app.)
- `NSBluetoothAlwaysUsageDescription` = “Allow the app to connect to nearby Meta smart glasses via Bluetooth.”
- `NSMicrophoneUsageDescription` = “Allow the app to access the microphone (for audio capture from glasses).” (Only needed if capturing audio.)
- Also consider adding `UIBackgroundModes = bluetooth-central` if your app should stay connected to the glasses in the background.
Android (AndroidManifest.xml):
Include the needed `<uses-permission>` tags for:
- `BLUETOOTH_CONNECT`, `BLUETOOTH_SCAN` (and optionally `ACCESS_FINE_LOCATION` for broader support) – to discover and connect to the glasses.
- `RECORD_AUDIO` – if recording audio from the glasses.
- Any other relevant ones (Internet for cloud uploads, write-external-storage if saving files to the device, etc.).
- (No special feature declarations are needed aside from general Bluetooth hardware, which is typically implied.)
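On Android 12+ the Bluetooth permissions above must also be requested at runtime. A minimal sketch using standard AndroidX APIs (the helper name is ours; the platform calls are real):

```kotlin
import android.Manifest
import android.content.pm.PackageManager
import android.os.Build
import androidx.activity.ComponentActivity
import androidx.core.app.ActivityCompat
import androidx.core.content.ContextCompat

private const val REQUEST_BLUETOOTH = 1001

// Returns true if the Bluetooth runtime permissions are already granted; otherwise
// requests them and returns false so the caller can retry after the user responds.
fun ensureBluetoothPermissions(activity: ComponentActivity): Boolean {
    if (Build.VERSION.SDK_INT < Build.VERSION_CODES.S) return true // pre-Android 12: install-time only
    val needed = listOf(
        Manifest.permission.BLUETOOTH_CONNECT,
        Manifest.permission.BLUETOOTH_SCAN,
    ).filter {
        ContextCompat.checkSelfPermission(activity, it) != PackageManager.PERMISSION_GRANTED
    }
    if (needed.isEmpty()) return true
    ActivityCompat.requestPermissions(activity, needed.toTypedArray(), REQUEST_BLUETOOTH)
    return false
}
```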
Step 3 — Create a thin client wrapper
Structure your app to abstract the glasses + GS pipeline into a few components:
- WearablesClient (class/module): Manages connecting to the glasses and retrieving sensor data. It should:
  - Initialize the SDK with your App ID/credentials.
  - Offer methods like `connect()` to scan/pair and maintain the connection.
  - Listen for events: e.g., onConnected, onDisconnected, onImageReceived. The Meta SDK likely provides delegates or callbacks when a photo is taken on the glasses or when a live stream frame arrives.
  - Expose a method `capturePhoto()` or `startVideo()` that triggers the glasses’ camera via the SDK.
- FeatureService (class): Handles the specific feature logic – in this case, capturing a scene and processing it.
  - For a photo capture feature, this service calls `WearablesClient.capturePhoto()`, then sends the received image to the next stage (e.g., an upload to a cloud GS API).
  - For a scene scan (video capture), it might collect a series of images or a short video from the glasses.
  - After sending data for processing (or processing locally), it handles the result. E.g., once a .splat model is ready, it loads it into a viewer.
- PermissionsService: Utility to check and request any runtime permissions (Bluetooth, camera, etc.) before starting a capture. Ensure the app has the required grants, or prompt the user.
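A minimal Kotlin sketch of these seams is below. The method names and callback shapes are assumptions about what a wrapper around the preview SDK could look like, not the SDK’s actual API:

```kotlin
import android.graphics.Bitmap

// Hypothetical wrapper interfaces around the Wearables SDK and the GS pipeline.
interface WearablesClient {
    val isConnected: Boolean
    fun connect(onConnected: () -> Unit, onDisconnected: (reason: String) -> Unit)
    fun capturePhoto(onResult: (Result<Bitmap>) -> Unit)
}

interface PermissionsService {
    fun hasAllPermissions(): Boolean
    fun requestAll(onResult: (granted: Boolean) -> Unit)
}

// Feature logic: capture one photo and forward it to the next stage (e.g. a GS upload).
class PhotoCaptureService(
    private val client: WearablesClient,
    private val onPhotoReady: (Bitmap) -> Unit,
    private val onError: (Throwable) -> Unit,
) {
    fun capture() {
        client.capturePhoto { result ->
            result.fold(onSuccess = onPhotoReady, onFailure = onError)
        }
    }
}
```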
Definition of Done:
- The Wearables SDK is initialized on app startup (or on first use) without errors.
- The app can establish a connection to the glasses, and recover gracefully if the connection drops (e.g., auto-reconnect or show “Disconnected” state).
- Captured images/videos are correctly passed to the GS processing pipeline (cloud or local). Errors in capture or processing are handled (user sees a message, and errors are logged for debugging).
- The user can trigger a capture and eventually see a result (image or 3D model) with a simple flow.
Step 4 — Add a minimal UI screen
Design a basic interface for the capture feature, for example:
- A “Connect Glasses” button (or status indicator). When not connected, this button initiates scanning/pairing. When connected, it can show the device name or a “Connected ✅” state.
- A Connection status text or icon (red dot when disconnected, green when connected, etc.) that updates via WearablesClient events.
- A “Capture Scene” button. This starts the capture process (photo or video). For extended scanning, you might have “Start Scan” / “Stop Scan” toggles.
- A Progress/Status label that shows messages: e.g., “Connecting…”, “Capturing…”, “Uploading to cloud…”, “Reconstructing 3D model…”, “Done!” – guiding the user through the steps.
- A Result viewer: For a photo capture, this could be a `UIImageView` or `ImageView` showing the photo thumbnail. For a full 3D scene, you might integrate a simple 3D viewer:
  - On mobile, perhaps show a screenshot of the 3D model with a button to “View in AR” (you could load the .ply into ARKit/SceneKit or similar).
  - On Quest or a VR viewer app, directly render the splat in the scene (using either Meta’s Spatial SDK SplatFeature or a third-party renderer).
Keep the UI simple – the main goal is to confirm the pipeline works end-to-end.
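On Android, a minimal Jetpack Compose version of this screen might look like the following; `connected`, `status`, and the two callbacks are placeholders for however you surface WearablesClient state:

```kotlin
import androidx.compose.foundation.layout.Column
import androidx.compose.foundation.layout.padding
import androidx.compose.material3.Button
import androidx.compose.material3.Text
import androidx.compose.runtime.Composable
import androidx.compose.ui.Modifier
import androidx.compose.ui.unit.dp

// Minimal capture screen: connection status, capture trigger, and a progress/status line.
@Composable
fun CaptureScreen(
    connected: Boolean,
    status: String,
    onConnect: () -> Unit,
    onCapture: () -> Unit,
) {
    Column(modifier = Modifier.padding(16.dp)) {
        Text(if (connected) "Glasses: Connected ✅" else "Glasses: Disconnected 🔴")
        Button(onClick = onConnect, enabled = !connected) { Text("Connect Glasses") }
        Button(onClick = onCapture, enabled = connected) { Text("Capture Scene") }
        Text(status) // e.g. "Capturing…", "Uploading to cloud…", "Done!"
    }
}
```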
7) Feature Recipe — Trigger Photo Capture from Wearable into Your App
Goal
When the user taps “Capture” in your app, the Ray-Ban glasses should take a photo and send it to your app, which then displays and saves it. We’ll implement the core logic and handle edge cases.
UX flow
- Pre-check: Ensure the glasses are connected (or prompt the user to connect first). Also verify required permissions are granted.
- Initiate capture: User taps Capture. Immediately, update the UI to indicate progress (“Capturing…”) and disable the button to avoid duplicates.
- Glasses capture: The app calls the SDK to snap a photo. The wearer might see a capture indicator (glasses LED). The image is taken from the user’s eye-level perspective.
- Receive result: The SDK returns the photo (as an image object or file path). Hide the progress and re-enable UI.
- Display & save: Show a thumbnail of the photo in the app. Optionally, save it to the phone’s gallery or file system for later use (and/or send it to the GS processing service).
Implementation checklist
- Connected state verified: If `WearablesClient.isConnected == false`, do NOT proceed. Instead, show an alert “Please connect your Meta glasses first.” This prevents a capture call from failing.
- Permissions verified: Check Bluetooth (on iOS the app’s Bluetooth permission is general; on Android, ensure the BLUETOOTH_* permissions were not denied by the user). Also ensure write/storage permission (Android) if saving the photo. If any are missing, request them and wait until granted.
- Capture request issued: Call the appropriate SDK function to take a photo, for example `wearablesClient.takePhoto()`, which might return a Future/Promise or use a callback. Handle it asynchronously.
- Timeout & retry: Implement a timeout (e.g. 5–10 seconds) in case no response comes (perhaps the glasses are busy or out of range). On timeout, notify the user “Capture failed, please try again.” Optionally attempt to reconnect if the connection was lost.
- Result handling: When an image is received, save it. On iOS, you can use `UIImageWriteToSavedPhotosAlbum` (with user permission) or save to the app’s documents. On Android, save to MediaStore or the app cache. Then update the UI: display the image in an image view component and show a success message.
Pseudocode
```swift
func onCaptureButtonPressed() {
    guard glasses.isConnected else {
        showAlert("Connect your glasses first!")
        return
    }
    if !permissionsService.allPermissionsGranted() {
        permissionsService.requestPermissions { granted in
            if granted { self.onCaptureButtonPressed() }
        }
        return
    }
    statusLabel.text = "Capturing…"
    captureButton.isEnabled = false
    glasses.capturePhoto { result in
        self.captureButton.isEnabled = true
        if let photo = result.image {
            self.imageView.image = photo
            self.saveImage(photo)
            self.statusLabel.text = "Saved ✅"
        } else if let error = result.error {
            self.statusLabel.text = "Capture failed"
            print("Error during capture: \(error)")
        }
    }
}
```

```kotlin
fun onCaptureClicked() {
    if (!wearablesClient.isConnected) {
        Toast.makeText(context, "Please connect your glasses first.", Toast.LENGTH_SHORT).show()
        return
    }
    if (!permissionsService.hasAllPermissions()) {
        permissionsService.requestAll(activity)
        return
    }
    statusText.text = "Capturing…"
    captureButton.isEnabled = false
    wearablesClient.capturePhoto { photoResult ->
        runOnUiThread {
            captureButton.isEnabled = true
            if (photoResult.isSuccess) {
                val bitmap = photoResult.getOrNull()
                imageView.setImageBitmap(bitmap)
                saveImageToGallery(bitmap)
                statusText.text = "Saved ✅"
            } else {
                statusText.text = "Capture failed"
                Log.e(TAG, "Capture error", photoResult.exceptionOrNull())
            }
        }
    }
}
```
(Pseudocode assumes the SDK provides a capturePhoto asynchronous API. Adjust according to actual SDK methods/events.)
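The Kotlin pseudocode above calls a `saveImageToGallery` helper that isn’t shown. On Android 10 (API 29) and later, a straightforward MediaStore-based implementation could look like this – pass in a Context and a non-null bitmap; the folder name is just an example:

```kotlin
import android.content.ContentValues
import android.content.Context
import android.graphics.Bitmap
import android.provider.MediaStore

// Saves the captured bitmap into the device gallery via MediaStore (API 29+).
fun saveImageToGallery(context: Context, bitmap: Bitmap, displayName: String = "glasses_capture.jpg") {
    val values = ContentValues().apply {
        put(MediaStore.Images.Media.DISPLAY_NAME, displayName)
        put(MediaStore.Images.Media.MIME_TYPE, "image/jpeg")
        put(MediaStore.Images.Media.RELATIVE_PATH, "Pictures/MetaCapture") // example folder
    }
    val resolver = context.contentResolver
    val uri = resolver.insert(MediaStore.Images.Media.EXTERNAL_CONTENT_URI, values)
        ?: error("Could not create MediaStore entry")
    resolver.openOutputStream(uri)?.use { stream ->
        bitmap.compress(Bitmap.CompressFormat.JPEG, 95, stream)
    }
}
```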
Troubleshooting
- Capture returns empty: If the result callback returns success but with no image (e.g. zero bytes), check that the glasses’ camera is not blocked and that you’re running the latest firmware. Ensure the glasses were in range and powered. Log detailed info (battery level, connection strength if available). You may need to re-init the connection. Also confirm that the glasses actually took a photo (some models require a double-press on the frame if not triggered by SDK).
- Capture hangs indefinitely: This could happen if the command didn’t reach the glasses. Implement the timeout as mentioned. If a timeout occurs, abort the operation and present an error. You might attempt a fresh connection by calling `glasses.reconnect()` in the background. Also, avoid sending multiple capture commands too rapidly.
- “Instant display” expectation: Users might expect the 3D model to appear immediately after capture. In reality, processing the Gaussian Splat can take time. Manage expectations by updating the UI: e.g., after receiving the photo, show “Processing 3D model…” while the images are sent to the server or processed. You can display a placeholder model or progress bar. The photo thumbnail itself should appear quickly; for the full reconstruction, consider doing it asynchronously and notifying the user when it’s ready (perhaps with a push notification or an in-app alert when the model is downloadable).
8) Testing Matrix
Test your integrated solution across various scenarios to ensure reliability:
| Scenario | Expected Outcome | Notes |
|---|---|---|
| Mock device (no glasses) | Graceful degradation (e.g., a simulated image is used or a “no device” prompt appears) | Useful for CI: your app should not crash if no glasses are present (see the mock client sketch after this table). |
| Real device – close range | Fast connection and capture, low latency image transfer | Baseline happy path; test in a normal indoor environment. |
| Real device – far/obstructed | Possibly slower or failed capture due to BLE range | Glasses might disconnect if far. Ensure app shows “disconnected” clearly and retries. |
| Background / lock screen | If capture triggered in background, it should either fail safely or queue until app is foreground | iOS may suspend Bluetooth in background unless using special modes. Test what happens if user locks phone during a scan. |
| Permission denied | App shows an error explaining why capture can’t proceed | E.g., “Bluetooth permission is required to use Meta glasses.” Ensure the user has a way to retry permissions. |
| Disconnect mid-action | App handles it: shows “Lost connection”, no crash, and allows reconnection | For instance, turn off the glasses right after tapping capture – the app should time out and inform the user rather than freezing. |
| Multiple captures in a row | Works for first, maybe second, but ensure no leftover state between captures | Test taking several photos sequentially. Look for memory leaks or crashes if the previous image isn’t handled before next capture. |
| Large scene (video frames) | If implementing video/GS, app can handle many images | For an extended scan, ensure memory usage is under control (maybe process frames on the fly). Not needed for simple photo capture. |
(This matrix helps ensure the feature is robust beyond the ideal case.)
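For the “Mock device (no glasses)” row, a fake client that returns a bundled test image keeps CI and emulator runs independent of hardware. A sketch, reusing the hypothetical `WearablesClient` interface from section 6:

```kotlin
import android.content.Context
import android.graphics.Bitmap
import android.graphics.BitmapFactory

// Fake implementation of the (hypothetical) WearablesClient interface from section 6.
// It pretends to be connected and returns a bundled test image instead of a real capture.
class MockWearablesClient(private val context: Context) : WearablesClient {
    override val isConnected: Boolean = true

    override fun connect(onConnected: () -> Unit, onDisconnected: (String) -> Unit) {
        onConnected() // nothing to pair with; succeed immediately
    }

    override fun capturePhoto(onResult: (Result<Bitmap>) -> Unit) {
        val bitmap: Bitmap? = BitmapFactory.decodeStream(
            context.assets.open("test_scene.jpg") // sample image shipped with the test build
        )
        onResult(
            if (bitmap != null) Result.success(bitmap)
            else Result.failure(IllegalStateException("Missing test asset"))
        )
    }
}
```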
9) Observability and Logging
Implement logging to make debugging easier and to collect metrics on usage:
- Connection events: Log when you start connecting (e.g., `connect_start`), when the glasses successfully connect (`connect_success`), and if a connection attempt fails or drops (`connect_fail` with a reason). These logs help diagnose Bluetooth issues.
- Permission states: Log the status of permissions at app launch and whenever the user toggles them (e.g., `permissions_granted: true/false`). If something isn’t working, you can quickly see whether a missing permission is the cause.
- Capture lifecycle: For each capture, log events: `capture_start` (user tapped the button), `capture_request_sent` (command actually sent to the glasses), `capture_response_received` (glasses acknowledged or image data incoming), and `capture_success` or `capture_fail` (with error info).
- Performance metrics: Time how long a capture takes from button tap to image received, and log the duration as `capture_duration_ms`. Likewise, if you send frames to a server for reconstruction, log the total time until the model is ready (`reconstruction_ms`). This helps identify bottlenecks.
- Reconnection count: If your app auto-reconnects, keep a count and log `reconnect_attempt` and `reconnect_success` events. If multiple reconnects happen often, it might indicate a hardware issue.
- GS processing stats: If using a cloud service, log the size of the data uploaded (e.g. the number of images and total MB) and the size of the output model (number of splats or file size). For example, `gs_upload_frames=120, gs_model_points=1.2M`.
- Use a logging framework or even just print to the console in development; consider uploading logs to an analytics service if this is a production app (respecting privacy).
Having these logs will make it easier to troubleshoot if a user reports “my capture didn’t work” – you can see exactly where in the pipeline it failed or slowed down.
10) FAQ
Q: Do I need the actual Ray-Ban glasses hardware to start developing?
A: Not necessarily to start coding. You can develop much of the app (UI, permission handling, even calling the SDK methods) with the glasses “mocked” – for instance, feed a test image when no glasses are connected. However, to truly test the end-to-end capture and the 3D reconstruction, you will need the hardware. Meta’s Wearables SDK does not currently offer a full simulator for the camera feed, so having a pair of the glasses (or a Quest 3 for a similar capture flow) is highly recommended for final testing.
Q: Which wearable devices are supported by this pipeline?
A: The primary device is Meta’s Ray-Ban smart glasses (both the Ray-Ban Meta smart glasses without display and the Ray-Ban Meta Display with the built-in lens screen). Additionally, the Oakley Meta sports glasses are supported via the same SDK. The pipeline can also be adapted to Meta Quest 3 (which has color pass-through cameras and depth) – in fact, open-source projects like OpenQuestCapture turn Quest 3 into a 3D capture device using a similar approach. In theory, any device that can capture a series of images with pose data could be used in the Gaussian Splatting pipeline, but official SDK support right now is limited to Meta’s own wearables.
Q: Can I ship an app using this in production now?
A: Not at this moment. The Wearables Device Access Toolkit is in developer preview, meaning it’s mainly for testing and prototyping. Only select partners can publish integrations to the public during the preview. Meta aims for general availability possibly in 2026. Until then, any app you make will be restricted to dev/testing and cannot be widely distributed via app stores. The Gaussian Splatting part, however, is based on open research – you could technically use open-source GS pipelines in a production app, but without the glasses capture it may rely on standard phone camera input.
Q: How long does the 3D reconstruction take?
A: It depends on the scene complexity and processing power. If you capture a single object with 20-30 images, some optimized Gaussian Splatting tools can produce a model in a minute or two (especially if starting from a good initial point cloud). For larger scenes (hundreds of images, whole room scans), it could take several minutes up to an hour on a high-end GPU. Meta’s cloud-based Hyperscape (which uses GS) reportedly takes a “few hours” for a full room capture to turn into a VR scene. The pipeline we describe allows you to choose; you could use a cloud service that processes in the background and notifies the user when ready.
Q: Can I push content or feedback to the glasses (e.g., show something on the Ray-Ban Display or play audio)?
A: Currently, the Device Access Toolkit focuses on getting data from the glasses (camera frames, microphone, etc.). It does not provide APIs to send custom visuals to the glasses’ display or to programmatically control the audio output beyond maybe playing a sound as a Bluetooth headset. Meta has kept the glasses’ output (especially AR display) quite closed for now. In the future, as the platform evolves, they might allow more augmented reality overlays, but for now your app should assume the glasses are mainly an input sensor, and use the phone or VR headset as the output device for displaying the reconstructed scene.
11) SEO Title Options
- “How to Get Early Access to Meta’s Smart Glasses SDK and Build a 3D Capture App” – Emphasizes the access + building aspect for search queries about Meta glasses SDK.
- “Capture 3D Scenes with Ray-Ban Meta Glasses and Gaussian Splatting (Step-by-Step Guide)” – Explicitly mentions Ray-Ban Meta glasses and Gaussian Splatting for targeted keywords.
- “Integrate Meta Glasses into Your App for Real-Time 3D Scene Reconstruction” – Focus on integration and real-time 3D, good for developers searching those terms.
- “Troubleshooting Meta Glasses 3D Capture: Pairing, Permissions, and Gaussian Splat Tips” – Addresses common problems, likely to catch SEO for troubleshooting queries.
(The second option, mentioning “Ray-Ban Meta Glasses and Gaussian Splatting,” is probably the best for an SEO blog title on Cybergarden, as it contains highly relevant keywords in a clear “how-to” format.)
12) Changelog
- 2026-01-17 — Verified with Meta Wearables SDK preview (Dec 2025 update). Tested on iOS 17.2 (iPhone 14) and Android 14 (Pixel 7) with Ray-Ban Meta (2025) glasses. Gaussian Splatting pipeline using OpenSplat and Unity viewer (Aras plugin) on Quest 3 for validation. Updated instructions for latest SDK changes and included new .spz format note.