How to Choose the Right Capture Pipeline (Photo vs Video) with Meta Wearables Toolkit on iOS & Android
Author: Almaz Khalilov
TL;DR
- You'll build: a simple mobile app that connects to Meta's Ray-Ban smart glasses and captures images either by taking a high-res photo or grabbing a video frame (with a trade-off between quality and latency).
- You'll do: Get preview access → Install the Meta Wearables SDK → Run the sample app to test photo vs. video capture → Integrate the SDK into your own app → Experiment on a device or use the provided mock glasses.
- You'll need: a Meta developer account (with Wearables Toolkit access), a pair of Ray-Ban Meta smart glasses (Gen 2) or the mock simulator, and a development device (iPhone with iOS 17+ or Android phone with Android 13+).
1) What is the Meta Wearables Device Access Toolkit?
What it enables
- Hands-free camera access: Allows your mobile app to connect to Meta's AI glasses (Ray-Ban Meta smart glasses) and utilize their on-board camera for video streaming and photo capture. In practice, this means you can programmatically start a live video feed from the glasses or trigger a still photo capture, all from your app.
- Flexible capture pipelines: Supports both real-time video streams and high-resolution still images. The Media APIs let you stream video and capture photos in real time from the glasses. This dual capability is key for choosing the right pipeline: a continuous video stream (for low latency) vs. single photo captures (for maximum image quality).
- Audio and sensor integration: In addition to the camera, the toolkit also provides access to the glasses' microphone array and other sensors. This can enable POV (point-of-view) experiences such as hands-free video recording with audio and live streaming. The sensor data can also help your app stay aware of the wearer's context, extending its functionality into the physical world.
When to use it
- High-detail photo needs: Use the photo capture pipeline when your app requires the highest image quality or resolution – for example, saving a sharp photo, performing detailed image analysis, or capturing a moment to share in full quality. The Ray-Ban Meta glasses can take 12 MP still photos, which offers much more detail than a video frame. This is ideal if you need clarity (e.g. reading small text or QR codes from the image) and can tolerate a slight shutter delay.
- Real-time or rapid capture: Use the video stream pipeline when low latency is crucial – for instance, real-time computer vision tasks (object detection, live translation) or capturing unpredictable action. Grabbing frames from a 30fps video stream means you won't miss a spontaneous moment; the glasses' video (up to ~1920×1440 "3K" resolution at 30 FPS) starts almost immediately and provides continuous up-to-date frames. This is also useful for live streaming content (e.g. broadcasting to social media) where continuous data is more important than maximum resolution.
- Hybrid use cases: Your app can also combine approaches. For example, you might run a live video feed for analysis or preview, and then trigger a full-resolution photo at a key moment (or when the user presses a shutter button) to save the best-quality snapshot. The toolkit supports both modes within one session via its camera API (handled by a StreamSession that manages streaming and photo capture) – allowing flexibility depending on user action or context (see the sketch below).
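To make the hybrid idea concrete, here is a minimal Kotlin sketch. The GlassesStreamSession interface and its methods are assumptions standing in for the toolkit's real session API (which the preview docs describe only at a high level), so treat this as a shape to adapt, not the actual SDK surface:

```kotlin
import android.graphics.Bitmap

// Assumed, simplified surface of the toolkit's streaming session; real names may differ.
interface GlassesStreamSession {
    fun startVideoStream(onFrame: (Bitmap) -> Unit)
    fun stopVideoStream()
    fun capturePhoto(onPhoto: (Bitmap?) -> Unit)
}

class HybridCaptureController(private val session: GlassesStreamSession) {
    private var lastFrame: Bitmap? = null

    // Keep the low-latency stream running for preview or on-device analysis.
    fun startPreview(onPreviewFrame: (Bitmap) -> Unit) {
        session.startVideoStream { frame ->
            lastFrame = frame              // ~2-3 MP frames, arriving continuously
            onPreviewFrame(frame)
        }
    }

    // At a key moment, request a full-resolution still (slower, but ~12 MP).
    fun captureBestQuality(onResult: (Bitmap?) -> Unit) {
        session.capturePhoto { photo ->
            // Fall back to the most recent video frame if the photo capture fails.
            onResult(photo ?: lastFrame)
        }
    }

    fun stopPreview() = session.stopVideoStream()
}
```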
Current limitations
- Device support: As of the developer preview, the toolkit works with Meta's AI glasses (Ray-Ban Meta smart glasses, Gen 2). Earlier Ray-Ban Stories (Gen 1) are not officially supported in the SDK preview. Also, only select developers have publishing capabilities during the preview – general public distribution of apps using this toolkit isn't available until 2026.
- Photo vs video trade-offs: Still-photo capture on the glasses may have a slight shutter lag (on the order of ~0.5–1 second) from when you trigger it to when the image is actually taken. This means fast-moving subjects or blink-and-miss moments could be missed with photo mode. Video streaming, by contrast, has minimal shutter lag (it starts capturing almost instantly) but yields lower resolution frames (approx 2–3 MP per frame) that are compressed (H.264/265). In other words, a frame grab from video will not match the full quality of a dedicated photo – it will be a bit less sharp and noisier due to compression. The quality of a photo is generally better than a paused video frame. Developers need to choose appropriately: use photo mode for quality, video mode for speed.
- Thermal and battery constraints: Continuous video streaming is power-intensive. The glasses can record or stream video for up to about 30 minutes at a time under normal conditions before thermal limits kick in. This also drains the battery faster (e.g. roughly 10% battery for ~6 minutes of streaming in one test). Photo capture is more sporadic and might be more battery-friendly for occasional images. Keep in mind the tiny battery in the glasses (154 mAh) – your app should be judicious in how long it keeps the camera active.
- Permissions and privacy: The toolkit does not yet support accessing the glasses' display or voice assistant in this preview (no pushing content to the glasses). It's focused on sensors (camera, mic). Also, using the glasses' camera and mic should respect user privacy: the glasses have an outward-facing LED that lights up during capture by design. Your app must handle permission prompts on the phone for things like Bluetooth (to connect to the glasses) and microphone access. There may also be platform-specific rules (for example, iOS will terminate the app if the NSBluetoothAlwaysUsageDescription key is missing from Info.plist when attempting Bluetooth operations).
2) Prerequisites
Access requirements
- Create or sign in to the Meta Wearables Developer Portal. You'll need a Meta developer account (use your existing Facebook/Meta login on the developers site).
- Join the Wearables Toolkit preview – since this SDK is in developer preview, you must enroll for access. Navigate to the wearables developer center and request access to the Meta Wearables Device Access Toolkit (DAT) preview. (If required, sign any beta NDAs or accept terms.)
- Once approved, set up an Organization/Team in the portal and invite any team members (if you're collaborating).
- Enable preview features for your org if needed (the portal will guide you if any flags need toggling for your account to use the glasses SDK).
- Create a Project/App ID in the wearables developer center. This will represent your mobile app integration with the glasses. Take note of any App ID or keys – you might need to include them in your app or allow your glasses to connect to this project.
Platform setup
iOS
- macOS with Xcode 15+ and an iPhone running iOS 17 or later (the glasses require fairly recent iOS support).
- Swift Package Manager (built into Xcode) or CocoaPods (if provided) for managing the SDK. (SPM is recommended since the SDK is available as a Swift Package on GitHub.)
- A physical iPhone is recommended. (While you can build the app on a Simulator for basic development, a real iPhone is needed to connect over Bluetooth to the glasses. The toolkit does provide a Mock Device option if you have no hardware, which can run with the Simulator, but for end-to-end testing of latency and camera, real hardware is best.)
Android
- Android Studio Giraffe (2022.3.1)+ with an Android device running Android 13 (API 33)+. (Android 14 is supported as well. Ensure your targetSdk is up to date to handle the new Bluetooth permission model.)
- Gradle (latest version compatible with Android Studio) and Kotlin (the sample code uses Kotlin). The SDK is distributed via GitHub Packages, so you'll configure Gradle to fetch it.
- A physical Android phone (Android 13+). Using a real device is strongly recommended because Bluetooth connectivity to the glasses won't work on an emulator. (If absolutely necessary, you could use the Toolkit's mock device on an emulator for non-Bluetooth testing.)
Hardware or mock
- Ray-Ban Meta smart glasses (Gen 2) – the supported wearable device. Make sure they are charged and have the latest firmware (use the Meta View app to update if needed).
- OR: Wearables Mock Device kit – if you don't have the physical glasses, Meta's SDK includes a mock device simulator. This allows you to simulate a glasses connection and test the APIs (you won't get real images, but dummy data can be used to ensure your integration works). This is useful for CI or early development without hardware.
- Bluetooth enabled on your phone/laptop. Ensure you understand the Bluetooth pairing flow of the glasses (you may need to put the glasses in pairing mode or have them in range). Also be aware of any permissions (location services on Android for BLE scanning, etc.) needed for Bluetooth discovery.
3) Get Access to Meta Wearables Toolkit
- Go to the Wearables Developer Center: Visit the Meta Wearables Developer Center portal (at wearables.developer.meta.com). Log in with your Meta developer credentials. In the dashboard, locate the Device Access Toolkit preview program.
- Request access: Follow the instructions to request access to the Wearables Device Access Toolkit developer preview. This may be a form or a button to join the waitlist/preview. You might need to specify your use case or agree to certain terms since it's a limited release.
- Accept terms: Once approved, accept any developer preview agreement and enable the toolkit for your account. You should see confirmation that you have access to the SDK and documentation.
- Create a project: In the Wearables Developer Center, create a new project (or select an existing one) for your glasses integration. This will generate an App ID (and possibly an App Secret or key). For example, you might create a project called "GlassesCamDemo". Within this project, register an iOS bundle ID and/or Android package name for your app if required (to later associate your app with the glasses).
- Download credentials (if any): After project creation, download any configuration or keys:
- iOS: You might obtain a config file (e.g. a plist) or an entitlement that enables the glasses access. (Check if the portal provides an entitlement file or provisioning profile – if so, download it and note where to add it in your Xcode project.)
- Android: You may get an API token or Client ID for the project. Alternatively, since the SDK itself is obtained via GitHub, the main "credential" is your GitHub personal access token (to fetch the package) – make sure you have a PAT with `read:packages` scope ready. (If the portal provides any JSON config or keys, download those for later use in the app.)
- Enable glasses dev mode: On your physical glasses, enable any developer mode if required. (Some versions of Ray-Ban Meta glasses have a setting in the companion app to allow third-party connections or to put the device in pairing mode for the toolkit. Ensure this is toggled on according to Meta's instructions.)
Done when: you have the Meta Wearables SDK access confirmed, an App/Project set up in the portal, and any necessary keys or IDs. At this point, you should be able to log into the Wearables portal, see your project listed, and proceed to integrate the SDK in an app.
4) Quickstart A — Run the Sample App (iOS)
Goal
Run Meta's official iOS sample app to verify that you can connect to the glasses and exercise both photo capture and video streaming features with either a real device or the mock. You should see that the app can take a photo through the glasses and/or display a video feed, demonstrating the quality vs latency difference.
Step 1 — Get the sample
- Option 1: Clone the repo. On your development Mac, run `git clone https://github.com/facebook/meta-wearables-dat-ios.git`. Open the Xcode project located at samples/CameraAccess/CameraAccess.xcodeproj (or the .xcworkspace if one is provided).
- Option 2: Download the zip. On the GitHub page for meta-wearables-dat-ios, download the repository as a ZIP (Code -> Download ZIP). Unzip it, then open the sample app project under samples/CameraAccess in Xcode.
Step 2 — Install dependencies
The sample app uses Swift Package Manager to include the Meta Wearables SDK:
- In Xcode, check the Swift Packages for the project. If not already resolved, add the package manually:
  - Go to File > Add Packages... and enter the repository URL: https://github.com/facebook/meta-wearables-dat-ios.
  - Select the latest version (e.g. 0.3.0 or the newest tag available) and add the package. Xcode will fetch the SDK.
- If the sample uses CocoaPods (check for a Podfile in the sample directory), then run `pod install` to install pods. However, as of the preview, SPM is the default distribution for iOS.
- After adding, you should see frameworks like MetaWearablesCore, MetaWearablesCamera, etc., in the project's Package Dependencies.
Step 3 — Configure app
Before running, adjust a few settings:
- Update Bundle ID: If required, change the sample app's bundle identifier to match one registered in your Meta project (in case the toolkit enforces allowed bundle IDs). For example, set it to com.yourcompany.GlassesCamDemo if that's what you added on the portal.
- Insert config/keys: If the developer portal provided an iOS config file or key, add it now. This could be copying a plist into the project or setting an entitlement. (For instance, if there's a MetaWearablesConfig.plist, add it to the app bundle and ensure the sample code knows where to load it.)
- Info.plist permissions: Add usage descriptions for Bluetooth (required to connect to the glasses) and microphone if your app will use audio:
  - NSBluetoothAlwaysUsageDescription = "This app uses Bluetooth to connect to your Meta smart glasses for camera streaming."
  - NSMicrophoneUsageDescription = "Accesses the glasses' microphone for audio when recording video."
  - (If your app also intends to use the iPhone camera or save photos, include NSCameraUsageDescription and photo library usage descriptions as appropriate.)
- Capabilities: In Signing & Capabilities, you might need to enable Background Modes -> Uses Bluetooth LE Accessories if you want the connection to remain active when the app is backgrounded (not strictly needed for running the sample in foreground). Also ensure that under Privacy the Bluetooth capability is checked if using newer Xcode versions.
Step 4 — Run
- Select the target – in Xcode's scheme picker, choose the sample app target (e.g. CameraAccess).
- Choose a device – select your iPhone as the run destination (connect it via USB or network). Using a real iPhone is important for Bluetooth; do not choose a Simulator unless you plan to use the mock device (in which case the mock can simulate a connection without real BT).
- Build & Run the app. Xcode should compile and install it on your iPhone. On first launch, grant any prompted permissions (Bluetooth permission alert, microphone permission if asked).
Step 5 — Connect to wearable/mock
- Pairing the glasses: Put your Ray-Ban Meta glasses in pairing mode (usually, turning them on near the app for the first time, or use the Meta View app to allow a new device connection). In the sample app, tap the "Connect Glasses" button (exact wording may vary). The Meta Wearables SDK has a built-in flow that will scan and discover the glasses. When the glasses are found, select them. You might see an LED blink or hear a sound – follow on-screen prompts to complete pairing. (If a pairing code or Bluetooth pairing request appears on the phone, accept it.)
- Using the mock device: If you don't have hardware, the sample might allow enabling a Mock Device mode. This simulates a glasses connection. Activate mock mode (perhaps a toggle in the app, or it auto-enables on the Simulator). The mock device will act as a virtual pair of connected glasses.
- Grant permissions: The first time connecting, iOS will likely ask "App wants to use Bluetooth" – make sure to allow it, otherwise the app won't see the glasses. If you plan to use audio, also accept any microphone permission dialog. The glasses themselves do not require a separate permission on iOS beyond Bluetooth, since they are an external accessory.
Verify
- Connected status: The app should indicate when the glasses (or mock) are connected – e.g. an on-screen label "Connected ✅" or a change in the connect button state. If you have the real glasses, you might also see the status LED on them change or hear a chime upon connection.
- Photo capture works: Try the photo feature first. In the sample app's UI, tap the "Capture Photo" button (or similar). This should trigger the glasses to take a still photo. You might notice a brief delay (the glasses' shutter lag) and the LED on glasses blinking as it takes a photo. The app should then receive the image – for example, it might display a thumbnail or print a log message that a photo was received. Verify that an image appears (if real glasses, you'll see the actual photo it took; on mock, a placeholder image or data).
- Video streaming works: Start the video stream mode (there might be a toggle or a "Start Stream" button). Upon starting, the app should begin receiving frames from the glasses' camera in real-time. This could be shown as a live preview in the app (if the sample has a video view) or simply indicated via a frame counter/log. Verify that you see movement (if you move the glasses or wave in front of them, the preview updates). The latency should be low (a fraction of a second). On real glasses, note the resolution differences – the video will be lower resolution than a still photo. On the mock device, dummy frames will stream.
- Switch pipelines (if available): Some sample apps might let you compare capturing a photo vs grabbing a frame. If you capture a photo while streaming, check that it either stops the stream to take the photo or handles both. Verify your app didn't crash and both modes function.
Common issues
- Build error (iOS): "Unauthorized access to GitHub package" or SPM failing – This means the Meta Wearables package couldn't download. Ensure you have a GitHub token configured if required. For example, add your GitHub PAT in Xcode's package configuration or in .netrc. Also verify your internet connection. If using CocoaPods, check that the pod repo is accessible.
- App crashes on connect: If the app crashes immediately when trying to scan for the device, check that you added the NSBluetoothAlwaysUsageDescription key in Info.plist. iOS will terminate the app if this key is missing when attempting Bluetooth operations. Add the key with a user-friendly message and run again.
- No glasses found: If the scan doesn't find your glasses, make sure the glasses are powered on and in pairing mode. On Ray-Ban Meta glasses, they should not already be connected to another app (like the Meta View app – disconnect that first). On Android devices you may need location services on for BLE discovery; on iOS just ensure Bluetooth is on. If using mock mode, ensure you enabled the mock and aren't waiting for real hardware.
- Photo capture returns null: If tapping capture yields no image or an error, ensure the glasses are connected and have storage available (the glasses buffer the image). Also, this can happen if the glasses went to sleep – try waking them up or reconnecting. For mock mode, check if the sample requires enabling a "mock photo" file.
- Video stream is black: If you see a black screen or no frames, it could be a permissions issue (e.g., on Android, a missing CAMERA permission – not needed for an external camera, but check the log). It could also be that the glasses timed out – try restarting the stream. Lastly, ensure your app's UI is set up to display the frames (the sample should handle this).
- Bluetooth disconnects or times out: If the connection drops frequently, keep the phone and glasses close (within a few meters). Avoid areas with heavy Wi-Fi/Bluetooth interference. If a disconnect happens mid-capture, the SDK should attempt reconnection – watch for logs. Sometimes simply restarting the app and glasses helps.
- Permissions were denied: If you accidentally denied Bluetooth or microphone, the app may not function correctly. On iOS, you'll need to go to Settings > Privacy to re-enable Bluetooth for your app. On Android, you can usually re-trigger the permission dialog or enable in settings. Always handle the case where user denies and provide a prompt explaining why it's needed.
5) Quickstart B — Run the Sample App (Android)
Goal
Run the official Android sample app to verify camera streaming and photo capture on Meta glasses. You should confirm that your Android app can connect to the glasses (or mock) and that capturing a photo and receiving a video feed both work. This will demonstrate the quality vs latency trade-off in action on Android.
Step 1 — Get the sample
- Clone the repository: `git clone https://github.com/facebook/meta-wearables-dat-android.git`. Open the project in Android Studio (use Import Project if needed and select the samples/CameraAccess folder, which contains the sample app).
- (Alternatively, download the repo as a ZIP from GitHub and open the samples/CameraAccess project.)
Once opened, let Gradle sync. You should see an app module with the sample code that uses the Wearables SDK.
Step 2 — Configure dependencies
The Wearables SDK is distributed via GitHub Packages, so you need to add the appropriate repository and authentication:
- Add the Maven repo: In the project's settings.gradle or build.gradle, add the GitHub Packages repository. For example, in settings.gradle (Kotlin DSL):

```kotlin
dependencyResolutionManagement {
    repositories {
        maven {
            url = uri("https://maven.pkg.github.com/facebook/meta-wearables-dat-android")
            credentials {
                username = "" // not used
                password = System.getenv("GITHUB_TOKEN") ?: localProperties.getProperty("github_token")
            }
        }
        // ... other repositories like google(), mavenCentral()
    }
}
```

This points Gradle to Meta's GitHub package registry and uses an env variable or local.properties entry for the token.
- Authenticate: Ensure you have a GitHub Personal Access Token set up (with at least read:packages scope) as a GITHUB_TOKEN env var or github_token in local.properties. This token is required to fetch the preview SDK artifacts.
- Add the SDK dependencies: In your app module's build.gradle (or Gradle version catalog), add the Wearables SDK libraries. For example:

```gradle
implementation("com.meta.wearable:mwdat-core:0.3.0")
implementation("com.meta.wearable:mwdat-camera:0.3.0")
implementation("com.meta.wearable:mwdat-mockdevice:0.3.0") // optional, for testing without hardware
```

Include the core and camera libraries at minimum, and mockdevice if you plan to use the simulator. After this, click "Sync Project" in Android Studio. It should download the SDK. (If prompted for credentials, double-check the token setup.)
Step 3 — Configure app
Before running the sample, adjust the app configuration:
- Application ID: Change the sample app's applicationId (in app/build.gradle) to your own package name if you created one in the Meta portal (for example, com.yourcompany.glassescamdemo). This step might not be strictly required for running locally, but if the glasses enforce app whitelisting, it should match what you set up online.
- Permissions (AndroidManifest.xml): Add the required permissions for Bluetooth and (if needed) audio:
  - `<uses-permission android:name="android.permission.BLUETOOTH_CONNECT" />` – needed to connect and communicate with the glasses over BLE.
  - `<uses-permission android:name="android.permission.BLUETOOTH_SCAN" />` – needed if the SDK scans for nearby glasses devices.
  - `<uses-permission android:name="android.permission.BLUETOOTH" />` – optional, for backward compatibility if targeting below SDK 31.
  - `<uses-permission android:name="android.permission.RECORD_AUDIO" />` – if you will capture audio from the glasses' mics (for video recordings or voice).
  - You do not need the phone's camera permission to get images from the glasses; the data comes via Bluetooth/Wi-Fi from the external device. However, if you plan on also using the phone camera or storing images, include those permissions as appropriate (e.g., WRITE_EXTERNAL_STORAGE if saving files on older Android).
- Bluetooth features (optional): You can declare `<uses-feature android:name="android.hardware.bluetooth_le" android:required="true" />` to indicate the app uses BLE. This isn't strictly necessary, but if your app is solely for the glasses, it makes sense.
- Gradle settings: Ensure your minSdkVersion is set according to the SDK's requirements (the glasses SDK likely requires minSdk 26 or higher). Also, enable Jetpack Compose or other libraries if the sample uses them (check the sample's Gradle for any required settings).
Step 4 — Run
- Build the app: In Android Studio, click Build > Make Project to ensure everything compiles. If you set up the dependencies correctly, it should build successfully.
- Select Run configuration: Use the provided run configuration (if one exists for the sample app) or create a new one for the app module.
- Choose a device: Connect your Android phone via USB (enable USB debugging) or use wireless ADB. Select that device as the deployment target. As noted, prefer a real device over an emulator for Bluetooth.
- Run the app: Click Run (▶️). The app will install on your phone. Grant permissions when prompted:
- On Android 12+, the system will ask for Bluetooth Connect/Scan permissions at runtime (these are runtime permissions in modern Android). Approve them so the app can find and talk to the glasses.
- If asked for microphone (when you first start a video with audio), grant that as well.
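Below is a small Kotlin helper for that Android 12+ runtime Bluetooth permission flow. It uses only standard AndroidX APIs; the function name and request code are just illustrative:

```kotlin
import android.Manifest
import android.content.pm.PackageManager
import android.os.Build
import androidx.appcompat.app.AppCompatActivity
import androidx.core.app.ActivityCompat
import androidx.core.content.ContextCompat

private const val REQUEST_GLASSES_PERMS = 42

// Request the runtime Bluetooth permissions introduced in Android 12 (API 31)
// before asking the SDK to scan for or connect to the glasses.
fun AppCompatActivity.ensureBluetoothPermissions(): Boolean {
    if (Build.VERSION.SDK_INT < Build.VERSION_CODES.S) return true // pre-Android 12: manifest-only
    val missing = listOf(
        Manifest.permission.BLUETOOTH_SCAN,
        Manifest.permission.BLUETOOTH_CONNECT
    ).filter {
        ContextCompat.checkSelfPermission(this, it) != PackageManager.PERMISSION_GRANTED
    }
    if (missing.isEmpty()) return true
    ActivityCompat.requestPermissions(this, missing.toTypedArray(), REQUEST_GLASSES_PERMS)
    return false // wait for onRequestPermissionsResult before connecting
}
```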
Step 5 — Connect to wearable/mock
- Pairing (Android): On first use, you may need to pair the glasses. The Meta SDK likely provides a connect function that handles BLE scanning and pairing. In the sample app, tap the "Connect" or "Pair Glasses" button. The app should request permission to scan (grant it), then list the available glasses (identified by name, e.g., "Ray-Ban Meta ****"). Tap your device in the list. A system Bluetooth pairing dialog might appear – confirm the PIN if shown and pair. Once paired, the SDK establishes a session.
- Grant any dialogs: During pairing, Android might also prompt "Allow location access?" for Bluetooth scanning (if on Android < 12, because BLE scan is tied to location). Grant it, as it's needed to discover devices.
- Use the mock device (optional): If you have no physical glasses, ensure you enabled the mwdat-mockdevice dependency and that the sample has a way to activate it (some samples might automatically use a mock if no device is found, or offer a toggle in settings). The mock device behaves as a virtual pair of glasses — the app should indicate it's connected to a simulated device.
- Finalize connection: Once connected, the sample app will typically update the UI (e.g., a status text "Glasses connected"). Now you're ready to test the camera features.
Verify
- Connection status: The app should clearly indicate when the glasses or mock are connected (for example, a "Connected to Glasses" message or icon). On a real device, you can also verify in your phone's Bluetooth settings that the glasses show as connected. The Secure Session mechanism of the SDK ensures you have an optimized link.
- Photo capture end-to-end: In the sample, press the "Capture Photo" button. The glasses should snap a picture (on real Ray-Ban glasses, you'll see the capture LED flash). The app will receive the photo data via the SDK. Check that the photo is displayed or saved in the app (some samples might show a thumbnail or just log the image size). If connected to real glasses, compare the photo's quality vs a video frame by perhaps also testing the stream. The photo should be higher resolution and clarity. On mock, verify the flow still calls back with a dummy image object.
- Video streaming end-to-end: Tap the "Start Video Stream" (or similar) button. The app should begin streaming video frames. If the sample has a preview surface, you'll see live video from the glasses' POV on your phone screen with minimal lag. Move the glasses around to ensure the feed updates. If there is no visual preview in the sample, look for log output indicating frames are being received (e.g., "Frame #123 received"). The streaming demonstrates the lower latency pipeline. On a real device, wave your hand quickly in front of the glasses – you should see only ~1/30th of a second delay. The trade-off is that if you pause and zoom into a frame, it's lower detail than a still photo.
- Simultaneous usage: (If applicable) try taking a photo while the video stream is running. The SDK might allow it by briefly freezing the stream, or it might require stopping the stream first. Ensure that whichever is required, the app and SDK handle it without crashing. You might get an error if not allowed ("Busy" or similar), which is okay – just know the limitation.
Common issues
- Gradle authentication error: If you see errors like "Could not find com.meta.wearable:mwdat-core:0.X" or 401 Unauthorized during sync, it means the GitHub Packages auth failed. Double-check your PAT in local.properties or the env var. Make sure the token is valid (no extra scopes needed beyond read:packages). Also ensure you added the Maven URL exactly as provided. If behind a proxy, ensure Gradle can access GitHub.
- Manifest merger conflict: If your project's manifest had existing `<application>` attributes or permissions, you might run into conflicts when adding new ones. For example, the library might require a specific android:allowBackup value. Use tools:replace in the manifest merge if needed. Commonly, if two different values for an attribute clash, Gradle will log which one to fix. Resolve by either adopting the SDK's requirement or merging the attributes.
- Connection timeout or error: Sometimes the initial connection can take a bit. If it times out, try toggling Bluetooth off/on on the phone, and power-cycle the glasses. Also try initiating the connection from the glasses side (e.g., pressing the capture button might advertise the device). The SDK's built-in connect flow should handle retries, but environment factors (range, interference) matter.
- No video frames or slow frames: If streaming starts but you see very choppy or no frames, check whether the glasses switched Wi-Fi/Bluetooth roles. The Ray-Ban glasses use Bluetooth for control and Wi-Fi (provisioned over BLE) for heavy data when possible. Ensure your phone is either on the same network or that the glasses are permitted to use Wi-Fi Direct. If bandwidth is low, you might only get a few frames. On the mock, if you see slow "frames", it could just be the dummy data – that's normal.
- App crashes on orientation change or background: Some sample apps might not handle rotation or app backgrounding while connected. As a quick fix, test in portrait locked mode. If the app goes background (home button) and then returns, you might need to re-init the session in some cases. This is something to keep in mind for production integration.
6) Integration Guide — Add the Wearables SDK to an Existing Mobile App
Goal
Now that you've seen the sample work, you'll integrate the Meta Wearables Device Access Toolkit into your own app. We'll walk through initializing the SDK, connecting to the glasses, and triggering one end-to-end feature (photo capture) in an existing app. By the end, your app's architecture will include a link to the wearable glasses, and you'll be able to capture images from the glasses into your app.
Architecture
- Mobile App UI → Meta Wearables SDK client → Ray-Ban Meta Glasses (camera & mic) → Callbacks/Events → App UI/Storage.
In practice, your app's UI will call into an SDK client to connect and request data. The SDK handles the low-level Bluetooth/Wi-Fi communication with the glasses. As frames or photos come in, the SDK uses callbacks or delegates to deliver those to your app, where you can then update the UI or save content. The idea is to keep your business logic high-level while the SDK abstracts the device communication.
Step 1 — Install the SDK
iOS Integration:
- In your Xcode project, add the Meta Wearables Toolkit via Swift Package Manager. Open Swift Packages and add the GitHub repo URL: facebook/meta-wearables-dat-ios. Choose the latest release (e.g. 0.3.0). Xcode will fetch the package. Then, import the relevant modules in code (perhaps `import MetaWearablesCamera`, etc.).
- Alternatively, if a CocoaPod is available, add `pod 'MetaWearablesCamera', '~> 0.3'` (and corresponding pods for core, etc.) to your Podfile and run `pod install`.
- After installation, verify you have frameworks like MetaWearablesCore and MetaWearablesCamera in your project. These will be used to interact with the glasses.
Android Integration:
- Add the Wearables SDK as a dependency in your app's Gradle files. Ensure the GitHub Packages repository is configured (as in Quickstart B Step 2). Then include:

```gradle
implementation("com.meta.wearable:mwdat-core:0.3.0")
implementation("com.meta.wearable:mwdat-camera:0.3.0")
```

(Add the mwdat-mockdevice artifact as well if you plan to include a build flavor for testing without hardware.)
- Sync your project to download the libraries. You may need to add packaging options if any native libs conflict, but generally it should integrate smoothly.
Step 2 — Add permissions
Ensure your app has the necessary permissions and entitlements to use the glasses:
iOS (Info.plist & Capabilities):
- Add Privacy descriptions in Info.plist:
  - NSBluetoothAlwaysUsageDescription – e.g. "We need Bluetooth to connect to your smart glasses and stream camera content."
  - NSMicrophoneUsageDescription – e.g. "Allows capturing audio from the glasses during video recording."
  - (Add NSCameraUsageDescription only if using the phone's camera too; not needed just for glasses.)
- Under Signing & Capabilities, enable any needed background modes. If you want the connection to persist in background, enable Uses Bluetooth LE Accessories. Also consider Background Fetch if your app expects to receive data while backgrounded (though continuous streaming likely won't run long in background due to OS limitations).
- Make sure your provisioning profile allows Bluetooth (most do by default if you include the usage description).
Android (AndroidManifest.xml):
- Include required permissions:

```xml
<uses-permission android:name="android.permission.BLUETOOTH_SCAN" />
<uses-permission android:name="android.permission.BLUETOOTH_CONNECT" />
<uses-permission android:name="android.permission.RECORD_AUDIO" />
```

Also add `<uses-permission android:name="android.permission.ACCESS_FINE_LOCATION" />` if your app targets Android < 12, because BLE scanning needs location on older versions. For Android 12+, BLUETOOTH_SCAN covers it.
- If your app might use the glasses while in the background (not common), you'd also declare ACCESS_BACKGROUND_LOCATION for BLE in the background – but typically, you'll keep the app in the foreground for camera streaming.
- (No camera permission is strictly needed for the external camera, but if you plan to also capture from the phone or use the images, handle those permissions accordingly.)
- Add an intent filter for Bluetooth if you want to be notified when the glasses connect, or use the SDK's built-in mechanisms.
Step 3 — Create a thin client wrapper
It's good practice to encapsulate the wearable functionality in a small set of classes or services in your app, rather than spreading SDK calls throughout your code. For example, you might implement:
- WearablesClient (or GlassesManager): a singleton or central class that manages the connection to the glasses. It would handle: initializing the SDK, starting a session, scanning/connecting to a device, reconnection logic, and exposing state (connected/disconnected) to the app. This might wrap SDK calls like Wearables.startSession() or similar.
- CameraFeatureService: a class responsible for camera actions (photo capture, video stream start/stop). It would use the SDK's camera APIs. For instance, a capturePhoto() method that calls the SDK's photo capture function and handles the callback with the image. Similarly, methods to start or stop streaming video frames.
- PermissionsService: (especially on Android) a helper to check and request permissions (Bluetooth, audio). This can be used to ensure all necessary permissions are granted before attempting to connect or capture.
By structuring your code this way, if the SDK changes or if you want to add another wearable in future, you only need to modify this layer, keeping the rest of your app decoupled.
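As a starting point, here is a skeleton of such a wrapper in Kotlin. The SDK touchpoints are left as comments because the preview API's exact names (e.g. Wearables.startSession()) may differ; only the structure is the point:

```kotlin
import android.graphics.Bitmap

enum class GlassesState { DISCONNECTED, CONNECTING, CONNECTED }

// Thin wrapper that isolates all Wearables SDK calls from the rest of the app.
class WearablesClient(private val onStateChanged: (GlassesState) -> Unit) {

    var state: GlassesState = GlassesState.DISCONNECTED
        private set(value) {
            field = value
            onStateChanged(value)
        }

    fun connect() {
        if (state != GlassesState.DISCONNECTED) return
        state = GlassesState.CONNECTING
        // TODO: initialize the SDK and start a session (e.g. Wearables.startSession(...)),
        //       then set state = CONNECTED on success or DISCONNECTED on failure.
    }

    fun disconnect() {
        // TODO: tear down the SDK session and release resources.
        state = GlassesState.DISCONNECTED
    }

    fun capturePhoto(onResult: (Result<Bitmap>) -> Unit) {
        if (state != GlassesState.CONNECTED) {
            onResult(Result.failure(IllegalStateException("Glasses not connected")))
            return
        }
        // TODO: forward to the SDK's photo-capture API and map its callback to onResult.
    }
}
```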
Definition of done:
- The SDK is initialized when your app starts (or at least before you attempt to connect). For example, on app launch you might call Wearables.initialize(context) (depending on the SDK's init needs – check documentation).
- Connect/disconnect logic is implemented: e.g., a "Connect" button in the UI calls your WearablesClient.connect(), which handles scanning and pairing. It should update state and allow a graceful disconnect (perhaps on app exit or a "Disconnect" button) to release resources.
- Error handling: All major operations (connecting, capturing, streaming) have their errors caught and communicated. For instance, if a photo capture fails, you might toast "Capture failed, try again." If connecting fails, you retry or show "Unable to find glasses." Also log these events (for debugging and user support).
- The rest of the app can query the WearablesClient for status (like "isConnected") and doesn't need to know implementation details of the SDK. This makes it easier to integrate into your existing app architecture (MVVM, MVC, etc., you can have a ViewModel observe connection status from the WearablesClient).
Step 4 — Add a minimal UI screen
Design a simple interface for the glasses features, which you can later blend into your app's UI. Key elements to include:
- Connect button: Allows the user to initiate connection to the glasses. E.g., "Connect to Glasses". When tapped, it triggers the scanning/pairing flow via your WearablesClient. This can also double as "Disconnect" once connected (toggling state).
- Connection status indicator: Show a small indicator or text that reflects the connection state (Disconnected/Connecting/Connected). This could be as simple as a green dot or a text label that updates ("Status: Connected").
- Capture Photo button: A button that says "📷 Capture Photo". When pressed, it calls your CameraFeatureService to take a photo using the glasses. While the capture is in progress, you might disable the button or show a loading indicator ("Capturing…").
- Live Stream toggle/view: (Optional but useful) – a switch or button to start/stop video streaming. If streaming, you could show the video feed in an ImageView or TextureView updated with frames from the SDK callbacks. For simplicity, you might start with just the photo capture; but if streaming is a key part of your app, include this as well.
- Image display or log: Reserve a small ImageView to display the last photo captured from the glasses. After a successful capture, update this view with the returned image so the user sees what was taken. If doing streaming, this view could double as the preview.
- Result feedback: A small text area or toast messages to show outcomes. E.g., "Photo saved to gallery" or "Stream started (30 FPS)".
This minimal UI will help in testing and ensures the end-to-end integration is working. You can always hide or automate these in a production scenario (for instance, automatically connect on app launch if the device is known, etc., or integrate the capture into your app's existing camera UI).
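If you use Jetpack Compose, a stateless sketch of this screen might look like the following. The onConnect/onCapturePhoto lambdas stand in for calls into your WearablesClient (or equivalent wrapper); hoisting state into a ViewModel is omitted for brevity:

```kotlin
import android.graphics.Bitmap
import androidx.compose.foundation.Image
import androidx.compose.foundation.layout.Column
import androidx.compose.material3.Button
import androidx.compose.material3.Text
import androidx.compose.runtime.Composable
import androidx.compose.ui.graphics.asImageBitmap

// Stateless screen: the caller supplies connection status, the last captured photo,
// and callbacks that delegate to the wearables wrapper.
@Composable
fun GlassesScreen(
    statusText: String,          // e.g. "Disconnected" / "Connecting…" / "Connected"
    lastPhoto: Bitmap?,          // last photo received from the glasses, if any
    onConnect: () -> Unit,
    onCapturePhoto: () -> Unit
) {
    Column {
        Text("Status: $statusText")
        Button(onClick = onConnect) { Text("Connect to Glasses") }
        Button(onClick = onCapturePhoto, enabled = statusText == "Connected") {
            Text("📷 Capture Photo")
        }
        lastPhoto?.let {
            Image(bitmap = it.asImageBitmap(), contentDescription = "Last capture from glasses")
        }
    }
}
```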
7) Feature Recipe — Trigger Photo Capture from Wearable into Your App
Goal
Implement the flow where a user taps a "Capture" button in your app, the glasses take a photo, and the image is delivered back to your app for display or saving. We'll outline the steps and considerations to get this working reliably. (This is the high-quality capture pipeline; for an instant frame grab, the steps would be similar except you'd already have a stream running.)
UX flow
- Ensure connected: The UI should only allow the capture action if the glasses are connected. (If not, prompt the user to connect first.)
- User taps Capture: The user presses the "Capture Photo" button.
- Show progress: The app gives immediate feedback – e.g., disable the button and show a "Capturing…" status – because there will be a short delay (the glasses need to actually take the photo).
- Photo is taken: The glasses capture the image and send it to the phone via the SDK.
- Receive result: The SDK calls back with the photo data (image bitmap or file).
- Display and save: The app shows a thumbnail of the photo on screen and optionally saves it to the camera roll or app storage.
- Reset UI: Re-enable the capture button, remove "Capturing…" text, and possibly show a success message ("Saved ✓").
Implementation checklist
- Connected state verified: In your button's onClick handler, first check `if (!wearablesClient.isConnected()) { alert("Please connect your glasses first"); return; }`. This prevents trying to capture without a device.
- Permissions check: Ensure required permissions are granted before capturing. For instance, on Android check `if (ContextCompat.checkSelfPermission(this, RECORD_AUDIO) != PERMISSION_GRANTED) { requestPermissions(...); return; }`. Although taking a photo might not need the mic, if your SDK call might implicitly use the mic or if you plan to record something, handle it. On iOS, by this point the Bluetooth permission is already dealt with on connect, so just ensure the app isn't in the background.
- Issue capture request: Call the SDK's photo capture API. For example, wearablesClient.camera?.takePhoto() or similar. This will likely be asynchronous. Immediately update the UI to indicate progress (set a TextView to "Capturing…", show a spinner, etc.).
- Handle timeout/retry: Set a reasonable timeout (maybe a few seconds) in case no response arrives. The glasses might be asleep; if no callback in (say) 5 seconds, you could cancel and inform the user. Also, handle the callback in a way that if it errors (exception or error code), you reset the UI so the user can try again.
- Receive the photo result: In the SDK callback (e.g., a delegate method like onPhotoCaptured(bitmap)), implement logic to:
* Save the image: you can write it to the gallery or app-specific storage. (Ensure you have permission if writing to public storage on Android.) Maybe generate a filename with timestamp.
* Update UI: stop the spinner, and load the bitmap into your ImageView to show the user.
* Provide success feedback: e.g., a small ✅ icon or a toast "Photo captured!".
- Error handling in callback: If the SDK returns an error (e.g., onPhotoCaptureError(error)), display a user-friendly message. Possibly suggest trying again or reconnecting if the error is something like "device not ready".
Pseudo-code illustrating the flow:
```swift
@IBAction func onCaptureButtonPressed(_ sender: UIButton) {
    guard wearablesClient.isConnected else {
        showAlert("Connect your glasses first.")
        return
    }
    guard permissionsService.allPermissionsGranted() else {
        permissionsService.requestPermissions()
        return
    }
    statusLabel.text = "Capturing…"
    captureButton.isEnabled = false
    wearablesClient.camera.takePhoto { result in
        DispatchQueue.main.async {
            self.captureButton.isEnabled = true
            if let image = result.photo {
                self.imageView.image = image
                self.statusLabel.text = "Saved ✅"
                saveImageToGallery(image)
            } else if let error = result.error {
                self.statusLabel.text = "Capture failed 😞"
                NSLog("Glasses capture error: \(error)")
            }
        }
    }
}
```
And similarly in Kotlin/Java for Android using callbacks or coroutines.
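Here is a rough Kotlin counterpart, written against the WearablesClient wrapper sketched in section 6 (its capturePhoto signature is an assumption, not the real SDK API); the view references and gallery-save helper are app-specific:

```kotlin
import android.graphics.Bitmap
import android.util.Log
import android.widget.Button
import android.widget.ImageView
import android.widget.TextView
import androidx.appcompat.app.AppCompatActivity

class CaptureActivity : AppCompatActivity() {
    private lateinit var client: WearablesClient
    private lateinit var statusText: TextView
    private lateinit var captureButton: Button
    private lateinit var imageView: ImageView

    fun onCaptureButtonClicked() {
        statusText.text = "Capturing…"
        captureButton.isEnabled = false
        client.capturePhoto { result ->
            runOnUiThread {
                captureButton.isEnabled = true
                result.onSuccess { photo: Bitmap ->
                    imageView.setImageBitmap(photo)
                    statusText.text = "Saved ✅"
                    saveImageToGallery(photo)      // e.g. a MediaStore insert
                }
                result.onFailure { error ->
                    statusText.text = "Capture failed"
                    Log.e("Glasses", "Capture error", error)
                }
            }
        }
    }

    private fun saveImageToGallery(photo: Bitmap) { /* app-specific persistence */ }
}
```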
Troubleshooting
- Capture returns empty: If the photo callback returns success but the image data is empty or null, it could mean the glasses didn't actually capture anything. Ensure the glasses were awake – on some devices, if they've been idle, the first capture might wake the camera but not return an image. A workaround is to issue a dummy request or simply inform the user to try again. Also check that the glasses have storage space (the SDK might fail if internal storage is full, though 500+ photos capacity is available).
- Capture hangs (no response): If takePhoto() never returns, you likely have a lost connection or a bug. Implement a timeout: e.g., if no result in ~5-10 seconds, cancel the request. You might call a cancel API if available. If not, you may need to disconnect and reconnect the session in some cases. Also verify that you called the capture on the correct thread or with proper context; some SDKs require calls on the main thread.
- Image is low quality but you expected high: Ensure you indeed used the photo capture API rather than grabbing a frame from a video stream. If you accidentally used a stream frame, you'll get lower quality. The dedicated photo should be higher resolution (12MP). You can compare the output dimensions to be sure – a photo might be ~4000x3000 px, whereas video frames are ~1920x1440 px. If you are getting smaller images, double-check that you invoked the correct method.
- "Instant display" expectation: Users might expect an instant shutter. Manage expectations in UI – e.g., keep the "Capturing…" indicator visible until the image truly arrives. If you want to simulate instant capture, you could immediately show a placeholder thumbnail (like a flash overlay or a frame from the video if you have one) and then update it with the full-res photo when it arrives. This way the UI feels snappier. However, be careful to differentiate it so they know final image might slightly differ (especially if they moved). For fastest results, some apps simply grab the last video frame to show instantly, then later replace with the high-res photo – best of both worlds if appropriate for your use case.
8) Testing Matrix
Because we are dealing with hardware integration and real-world conditions (lighting, motion, wireless connectivity), it's important to test various scenarios:
| Scenario | Expected Outcome | Notes |
|---|---|---|
| Mock device (no glasses) | App behaves as if connected; photo/video calls succeed with dummy data. | Use this for continuous integration tests. No Bluetooth issues to worry about. |
| Real device, close range | Low latency streaming, reliable photo capture. | This is the baseline happy path (glasses and phone in the same room, good signal). Measure latency informally by comparing real-world motion to the on-screen preview – it should be very low. |
| Moving subject (action) | Photo mode captures a clear image (provided subject isn't too fast), video frames might show motion blur. | Test taking photos of a moving object vs extracting a video frame. Verify that photo (with faster shutter) is less blurry. If doing CV, test if motion affects detection. |
| Background / Lock screen | (If supported) Define behavior – likely the session will pause or disconnect. | iOS might suspend the session when app in background; Android could keep BLE running in foreground service if implemented. Ensure no crashes if app goes background during capture. |
| Low-light environment | Photo vs video quality differences become pronounced. | In low light, photos might employ HDR or longer exposure (but glasses lens is small) whereas video frames will be noisy. Test both pipelines in a dark room. |
| Permission denied flow | App gracefully handles missing permission. | E.g., deny Bluetooth permission and try to connect – app should show an error and guide user to enable it. No crashes. |
| Disconnect during action | App handles it: perhaps auto-retry or show message. | E.g., turn off glasses power while streaming or capturing – the SDK should timeout and error. Your app should catch this and not crash. Possibly attempt reconnection if appropriate. |
| Multiple sessions (if any) | Only one session at a time, additional attempts blocked or queued. | The SDK likely only supports one active session. Test that starting a second stream or capture while one is ongoing is prevented or queued by your logic, to avoid conflicts. |
Use this matrix to ensure robustness. Particularly focus on the quality vs latency trade: verify that in scenarios where quality is needed (still scenes, well-lit, or when user can wait a second) the photo pipeline delivers superior images, and where latency is key (fast action, unpredictable events) the video pipeline doesn't miss the moment.
9) Observability and Logging
To facilitate debugging and to measure performance, add logging for key events in your glasses integration:
- Connection events: Log when you start connecting (connect_start), when the device successfully connects (connect_success), and if it fails or drops (connect_fail, along with error info). Include timestamps.
- Permission state: Log the status of permissions at app launch and when changed (permission_ok, or specific events like bluetooth_perm_denied) to quickly diagnose user setup issues.
- Capture events: For photo – log photo_capture_start when the user taps capture, and photo_capture_success or photo_capture_fail when the callback returns. If it fails, include the error code (timeout, etc.). For video – log video_stream_start and video_stream_stop, and any errors in the stream (video_stream_error).
- Latency metrics: Measure the time taken for key operations:
  - From user tap to photo received. Log this as photo_capture_ms = X milliseconds. This helps quantify the shutter lag and can be used to improve or set expectations.
  - If doing live processing of frames, measure the frame interval or end-to-end latency from glasses to app.
- Reconnection count: If you implement auto-reconnect, log how many times a session dropped and reconnected (reconnect_attempt, with count).
- Battery or signal (if available): The SDK might provide the glasses' battery level or signal strength. It's useful to log these periodically (glasses_battery=80% or rssi=-70dBm), especially if you encounter performance issues (e.g., stream lagging when battery is low or out of range).
- Storage events: If your app saves photos or videos, log the success/failure of saving (file path, any errors).
By instrumenting these, you not only can debug problems but also gather data to answer questions like "How long does an average capture take?" and "How often do we lose connection?". This is invaluable for improving the user experience over time.
10) FAQ
- Q: Do I need the actual Ray-Ban Meta glasses hardware to start developing? A: Not initially – Meta provides a Mock Device in the SDK so you can simulate a glasses connection without hardware. This means you can implement and test most of the code flows (connection, capture calls, etc.) on the simulator or a device without owning the glasses. However, to test real-world performance (image quality, latency, wireless connectivity), you will eventually need the real device. The mock kit is great for quick iteration and CI tests, but before shipping you'd validate on real glasses.
- Q: Which smart glasses are supported by the toolkit? A: The developer preview currently supports the Ray-Ban Meta Smart Glasses (Gen 2) – these are the 2023 model co-developed by Meta and Luxottica, with the 12MP camera and Meta AI features. The older Ray-Ban Stories (Gen 1) aren't officially supported in the SDK documentation. Future Meta wearables (or other models in the "AI Glasses" lineup) are expected to be supported as the toolkit evolves, but as of now, focus on the Meta/Ray-Ban glasses launched in late 2023.
- Q: Can I ship an app using this to the public (App Store/Play Store) now? A: Not yet for broad distribution. The Wearables Device Access Toolkit is in developer preview, meaning it's mainly for prototyping and testing within your team or with selected testers. Meta has stated that only select partners can publish integrations to general users during the preview phase. General availability for all developers is targeted for 2026. That said, you can certainly build your app and even sideload or privately distribute it to test users (TestFlight, internal testing on Play, etc.) as long as those users have the necessary hardware and you have enabled their devices in the portal.
- Q: How is the image data transmitted – is it secure and fast enough? A: The glasses use a combination of Bluetooth Low Energy and Wi-Fi direct to transmit data. The SDK establishes a secure session for sensor data. Photos are likely transferred as files (which might take a second given size, hence the slight delay) while videos stream perhaps over a continuous socket. Meta improved the glasses' wireless capabilities: Gen 2 uses Wi-Fi 6 and BT 5.3 which provide better throughput and stability. All communications are presumably encrypted between the app and device. In our testing, the latency for video frames is very low (almost realtime), and a full-resolution photo takes around 1 second to appear on the phone after capture – quite reasonable for 12 megapixels.
- Q: Can I also push content or notifications to the glasses from my app? A: Currently, no – the developer preview is focused on accessing the glasses' sensors (camera, microphone). There is no API yet to send visuals or info to the glasses' display in this toolkit. (The Ray-Ban Meta glasses do have a limited LED display for notifications in one model, but that functionality isn't opened up to third-party apps in this SDK.) You also cannot arbitrarily control the glasses (like turning on LED or speakers for output) beyond using the provided camera/audio capture functions. Meta has hinted that voice command integration may come in future updates, but for now, output to the glasses is not part of the public API.
- Q: What about using the glasses for computer vision (CV) tasks in real-time? A: This is a great use case for the video pipeline. You can start the video stream and feed the frames into your CV models on the phone. Keep in mind the frames are ~1920x1440 and 30fps max. If you don't need full resolution, consider down-sampling to speed up processing. The phone's CPU/GPU will handle the analysis since the glasses themselves don't run your code. Also, note the power impact: streaming video and running CV can drain battery on both the glasses and phone quickly. Optimize by perhaps using grayscale or lower resolution for analysis. The AR1 chip on the glasses is quite powerful, but at this time third-party apps can't offload vision models to it – all CV happens on the phone side.
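A tiny, self-contained helper for that down-sampling step (standard Android APIs only; how you obtain each frame depends on the SDK's stream callback):

```kotlin
import android.graphics.Bitmap

// Scale an incoming ~1920x1440 stream frame down before running it through a CV model.
fun prepareFrameForAnalysis(frame: Bitmap, targetWidth: Int = 640): Bitmap {
    val scale = targetWidth.toFloat() / frame.width
    val targetHeight = (frame.height * scale).toInt()
    // filter = true applies bilinear filtering, which most detectors handle well
    return Bitmap.createScaledBitmap(frame, targetWidth, targetHeight, true)
}
```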
11) SEO Title Options
- "How to Get Access to Meta's Wearables SDK and Run Your First Smart Glasses App (iOS & Android)"
- "Integrate Meta Smart Glasses Camera into Your Mobile App: A Step-by-Step Guide"
- "Photo vs Video Frames on Meta Glasses – Choosing the Right Pipeline for Quality and Latency"
- "Meta Wearables Toolkit Troubleshooting Guide: Pairing, Permissions, and Camera Quality Tips"
(The third title highlights the key topic of photo vs video pipeline and would be a strong choice for an SEO-focused blog, as it contains keywords like "Meta Glasses", "Photo vs Video", "Quality", and "Latency".)
12) Changelog
- 2025-12-23 — Verified with Meta Wearables Device Access Toolkit v0.3.0 (Developer Preview), using Ray-Ban Meta Smart Glasses Gen 2 on iOS 17.1 (Xcode 15.1) and Android 14 (API 34). Ensured steps and APIs align with the latest documentation and sample apps. All information on limitations and features is current as of this date.