Meta Wearables Device Access Toolkit: Quickstart Guide for AI Glasses Integration (2025)
By Almaz Khalilov
Want to extend your mobile app into a glasses form factor—without guessing the connection flows, permissions, or event handling lifecycle? This guide gives you the fastest path to a working demo on Meta’s AI glasses, along with a production-ready checklist for when you’re scaling up.
Watch the Quickstart Video
If you prefer a visual jumpstart, check out the official Meta Connect session on the Wearables Device Access Toolkit (AI Glasses SDK) – it walks through capabilities and an integration demo on iOS and Android. (Video: "Meta Wearables Device Access Toolkit" by Meta Developers, Connect 2025.) By the end of it, you’ll see a sample app connect to the glasses, stream video, and capture photos.
What This Guide Covers
- How to get access to the toolkit (developer preview)
- How to install the SDK and run the sample apps (iOS + Android)
- The core building blocks (built-in connect UI, secure sessions, events, actions)
- A reusable “first feature” pattern you can adapt to any use case
- Basics of testing, privacy, and rollout for production
(In preview, the toolkit supports direct mobile-to-glasses integration, live video streaming & photo capture, a mock glasses simulator for testing, and more.)
Before You Start
Access & Accounts
Make sure you have the following:
- Meta developer account: Sign up or use an existing account on the Meta for Developers portal (required for the wearables preview).
- Wearables Toolkit preview access: Apply for the Meta Wearables Device Access Toolkit developer preview (e.g. via the Wearables Developer Center). Approval is required to use the SDK and glasses integration.
- Documentation hub: Access to the Wearables Developer Center docs (will be available after you’re in the preview).
- Support forums: Join the community on GitHub Discussions (for iOS and Android SDKs) or Meta’s developer forums to get help and share feedback.
Devices & Environment
| Requirement | iOS (Apple) | Android (Google) |
|---|---|---|
| Phone | iPhone (iOS 15.2+; e.g. iOS 18) | Recent Android device (Android 10+) |
| Dev tools | Xcode 15+ (Swift 6); Xcode 16 recommended | Android Studio (latest); SDK: Android 13 (API 33)+ |
| Toolkit SDK | Preview SDK (v0.3.0) via Swift Package Manager | Preview SDK (v0.3.0) via Gradle (GitHub Packages) |
| Glasses device | Ray-Ban Meta (Gen 2), Oakley Meta HSTN (+ upcoming supported models) | Ray-Ban Meta (Gen 2), Oakley Meta HSTN (+ upcoming supported models) |
If you’re using React Native or Flutter, you can still use the toolkit – plan to write a thin native module or plugin that wraps the SDK on each platform. Keep your business logic cross-platform, and handle the glasses integration natively in iOS/Android.
Quickstart: Install + Run the Samples
Step 1) Download SDK + Sample Apps
- Developer Center: Log in to the Wearables Developer Center and download the SDK (or get access tokens for package installation).
- Documentation: Review the official Meta Wearables DAT docs (preview site) for integration steps and API reference.
- Sample apps: Clone the sample app projects from GitHub:
- iOS Sample (CameraAccess) – Swift example in the samples/CameraAccess folder.
- Android Sample (CameraAccess) – Kotlin example in the samples/CameraAccess folder.
Step 2) iOS: Build & Run
# Add the SDK via Swift Package Manager:
# In Xcode: File > Add Packages...
# Enter GitHub URL: facebook/meta-wearables-dat-ios (choose latest 0.3.0 tag)
# After SPM resolves, the SDK packages (Core, Camera, etc.) are added to your project.
# Prepare the project:
# - Update your app's Info.plist with required permissions (Camera, Microphone, Bluetooth).
# - Ensure your development team is set for code signing.
# Run the sample:
# - Build and run on a real iPhone (glasses must be paired with this phone via Meta AI app).
# - Follow on-screen prompts to connect the glasses.
Checklist (iOS)
- Project compiles and builds successfully (after adding the Swift Package).
- App launches on the iPhone (device, not simulator).
- Required privacy descriptions are in Info.plist (e.g. camera, mic, Bluetooth usage).
- Upon launch, you can reach the toolkit’s Connect button/screen in the app.
Step 3) Android: Build & Run
# Add the SDK via Gradle (Maven Central is not used; use GitHub Packages):
# 1. In settings.gradle, add Meta's GitHub Maven repo with a GITHUB TOKEN (see docs).
# (The repo URL is https://maven.pkg.github.com/facebook/meta-wearables-dat-android)
# 2. In gradle/libs.versions.toml, add:
# mwdat = "0.3.0" (version)
# mwdat-core, mwdat-camera, mwdat-mockdevice libraries (group "com.meta.wearable")
# 3. In app/build.gradle, add dependencies:
# implementation(libs.mwdat.core)
# implementation(libs.mwdat.camera)
# implementation(libs.mwdat.mockdevice)
# Build and run the app on an Android device:
# - Make sure the Ray-Ban/Oakley glasses are paired via the Meta AI app on the same device.
# - Launch the sample app and follow prompts to connect to glasses.
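To make the Gradle steps above concrete, here is a sketch of the settings.gradle.kts and app/build.gradle.kts pieces using the Kotlin DSL. The repository URL and library aliases come from the steps above; the credential property names (gpr.user, gpr.token) are placeholders – follow whatever the preview docs specify for authentication.

```kotlin
// settings.gradle.kts — add Meta's GitHub Packages repository (requires a GitHub token).
dependencyResolutionManagement {
    repositories {
        google()
        mavenCentral()
        maven {
            url = uri("https://maven.pkg.github.com/facebook/meta-wearables-dat-android")
            credentials {
                // Placeholder property names; keep the token out of version control.
                username = providers.gradleProperty("gpr.user").get()
                password = providers.gradleProperty("gpr.token").get()
            }
        }
    }
}
```

```kotlin
// app/build.gradle.kts — pull in the toolkit modules declared in libs.versions.toml.
dependencies {
    implementation(libs.mwdat.core)
    implementation(libs.mwdat.camera)
    implementation(libs.mwdat.mockdevice)
}
```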
Checklist (Android)
- Project syncs and builds (Gradle configured with the toolkit packages).
- App launches on the Android device (with the Meta AI app installed for pairing).
- All required permissions are declared in AndroidManifest.xml (Bluetooth, mic, etc.) and requested at runtime as needed.
- You can reach the “Connect to Glasses” screen in the app.
Note: On Android, you must add Meta’s GitHub Maven repository and configure GitHub token credentials to fetch the SDK packages. This is a one-time setup to access the preview toolkit.
Step 4) Connectivity Smoke Test
Goal: Verify your app can connect to the glasses and receive at least one data event from them.
- Launch the app and use the provided Connect button to pair with the glasses (this hands off to Meta’s app for authentication and then returns to your app). You should see a “Connected” status in your app once the session is established.
- Try a basic action – for example, start a camera stream or trigger a photo capture from the sample app UI. The expected result is that you see live video or a photo thumbnail coming from the glasses camera.
- Expected outcome: The app reports a connected state and receives a camera frame or event from the glasses (e.g. stream started).
- If it fails: Double-check that your glasses are paired via the Meta AI app, that you approved the connection prompt, and that all necessary permissions were granted (Bluetooth, etc.). On Android, ensure the Meta AI companion app is running (as it bridges the connection). On iOS, if nothing happens, verify your app’s integration bundle (from the developer center) is configured, or enable Developer Mode on the glasses to bypass integration checks during development.
Once you have a successful connection and data flowing, you’re ready to start building features!
Core Concepts You’ll Use in Every Feature
Understanding a few key concepts will make development much smoother. The Meta Wearables toolkit introduces some new patterns to mobile development:
1) Connection Lifecycle (Session-based)
The foundation is a session between your app and the glasses. Each session is a secure, optimized link to the glasses’ sensors. The session is your primary way to start and stop glasses features, and it’s designed to manage the lifecycle for you. For example, the toolkit’s session API will automatically handle interruptions (like an incoming phone call or the user taking the glasses off) and recover when possible.
Best practices:
- Connect/Disconnect: Use the provided APIs (e.g. a DeviceSession or similar class) to initiate and end sessions cleanly. The Android SDK, for instance, provides a DeviceSession class that manages connecting and disconnecting as devices become available. (A minimal connect/disconnect sketch follows this list.)
- Background/Foreground: Decide how your app should behave if it goes to the background. You might keep the session alive for a short time or pause the stream, depending on your use case. (The current preview does not support long-running background sessions by default, so plan accordingly.)
- Reconnect strategy: If the connection drops (e.g., Bluetooth issue or glasses power off), implement a retry or prompt the user to reconnect. The toolkit’s design is to make sessions robust, but in a production app you’ll want to handle edge cases (loss of signal, etc.) gracefully.
- Timeouts: For certain operations (like waiting for a stream to start), consider adding a timeout so your UI isn’t stuck indefinitely if something goes wrong. In the sample, a timer is used to auto-stop the stream after a set duration as a safety net.
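Here is a minimal lifecycle sketch in Kotlin that captures the connect/disconnect and timeout advice above. DeviceSession is the name the Android SDK uses, but the connector and close calls below are illustrative stand-ins, not the toolkit’s actual signatures.

```kotlin
import kotlinx.coroutines.CoroutineScope
import kotlinx.coroutines.launch
import kotlinx.coroutines.withTimeout

// Illustrative placeholder for the toolkit's session object; the real DeviceSession API will differ.
interface GlassesSession : AutoCloseable

class GlassesConnectionManager(
    private val scope: CoroutineScope,
    private val connector: suspend () -> GlassesSession,   // wraps the SDK's connect flow
) {
    private var session: GlassesSession? = null

    fun connect(onConnected: () -> Unit, onFailed: (Throwable) -> Unit) {
        scope.launch {
            try {
                // Guard the connect with a timeout so the UI is never stuck waiting.
                session = withTimeout(15_000) { connector() }
                onConnected()
            } catch (t: Throwable) {
                onFailed(t)   // surface "couldn't connect" to the UI instead of hanging
            }
        }
    }

    fun disconnect() {
        session?.close()      // end the session cleanly when the feature is dismissed
        session = null
    }
}
```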
2) Permissions & Capabilities
When extending your app to glasses, you are essentially accessing new hardware (camera, microphones, etc.) through the phone. This means handling permissions carefully:
- Bluetooth & device permission: The glasses connect via Bluetooth, so your app needs Bluetooth permission. The toolkit’s built-in Connect flow will trigger the system’s Bluetooth pairing prompt and any OS permission prompts needed, but you must still declare Bluetooth usage in your app’s config (Info.plist on iOS, Manifest on Android).
- Camera & mic: Interestingly, you’re not using the phone’s camera, but you are streaming video from a camera (the glasses). On iOS, you should include a Camera usage description string just in case (and for clarity to users). On Android, you might not need the CAMERA permission for basic use of the glasses camera, since frames come via the companion app. However, if your app will also capture from the phone or process audio, include the Microphone permission. The glasses’ microphones can be accessed as a Bluetooth input device (just like a wireless headset). Summary: declare NSCameraUsageDescription, NSMicrophoneUsageDescription, and Bluetooth usage on iOS; and in AndroidManifest, include BLUETOOTH_CONNECT (and optionally RECORD_AUDIO if you use audio). A runtime-request sketch follows this list.
- Glasses capabilities: At preview launch, the toolkit supports the camera stream and photo capture. You can also use the glasses’ microphones and speakers, but those are accessed through standard Bluetooth audio profiles on each platform (e.g., the glasses act as an audio input/output device). Display capabilities (for glasses with a HUD) are not yet available to third-party apps in the toolkit. Be mindful of these limits – your feature set should target camera and audio for now.
- Graceful degradation: Always design your app such that if permissions are denied or the glasses are not connected, it still functions. For example, if camera permission is denied (or the glasses aren’t available), your app could fall back to using the phone’s camera or simply disable the AR feature with a message. Also consider that some users may revoke permissions or unpair the device mid-use.
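As a sketch of the Android side of this, the snippet below requests the runtime permissions mentioned above before you show the Connect button. It uses the standard AndroidX ActivityCompat call (also referenced in the gotchas later); in a new app you might prefer the ActivityResultContracts API, but the idea is the same.

```kotlin
import android.Manifest
import android.content.pm.PackageManager
import android.os.Build
import androidx.appcompat.app.AppCompatActivity
import androidx.core.app.ActivityCompat
import androidx.core.content.ContextCompat

// Ask for the permissions the glasses flow needs; BLUETOOTH_CONNECT only exists on Android 12+.
fun AppCompatActivity.requestGlassesPermissions(requestCode: Int = 42) {
    val wanted = buildList {
        if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.S) add(Manifest.permission.BLUETOOTH_CONNECT)
        add(Manifest.permission.RECORD_AUDIO)   // drop this if you never process glasses audio
    }
    val missing = wanted.filter {
        ContextCompat.checkSelfPermission(this, it) != PackageManager.PERMISSION_GRANTED
    }
    if (missing.isNotEmpty()) {
        ActivityCompat.requestPermissions(this, missing.toTypedArray(), requestCode)
    }
}
```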
3) Events → Actions Pattern
Developing for glasses is an event-driven affair. The app will receive events from the glasses (or the companion app) and should respond with actions in your app:
- Events: These could be sensor events, status changes, or user inputs. For example, an event might indicate “glasses have started streaming video” or that a hardware button was pressed on the glasses. The Meta toolkit also surfaces standard events like pause/resume/stop for media capture (via gesture or voice controls) – for instance, if the user presses the physical button to stop recording, your app gets notified.
- Actions: Your app’s response to those events. Using the above example, when a “stream started” event comes in, your app could begin displaying the live video feed on screen. Or if a “pause” event comes (user took off the glasses, perhaps), your app might pause any ongoing processes or show a notification. Actions can also be initiated by the app (e.g., user taps a button in the app to trigger an image capture event on the glasses).
- Design for quick feedback: Glasses are all about immediate, hands-free use. So when an event comes in (say, user says a voice command that your app detects via the mic), try to perform the action with minimal delay (on-device if possible) and provide feedback. For example, if the user says “Capture” and you trigger a photo, maybe speak back “Photo taken” through the glasses speakers or show a subtle confirmation on the phone UI.
- Example: A simple event→action loop is: “Glasses button pressed” → event delivered to app → app action: navigate to a specific screen (or call an API). Another: “Voice command recognized” → event → app action: start an AI query and read the results via glasses audio. The toolkit gives you the hooks for these interactions; it’s up to your app to implement the logic that makes them useful (a dispatcher sketch in this shape follows below).
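To make the pattern concrete, here is a tiny Kotlin dispatcher in the events → actions shape. The event types are illustrative – the toolkit’s real callbacks and names will differ – but the structure (one place that maps incoming events to app actions) is what you would reuse.

```kotlin
// Illustrative event model — not the toolkit's actual types.
sealed interface GlassesEvent {
    object StreamStarted : GlassesEvent
    object ButtonPressed : GlassesEvent
    object Paused : GlassesEvent
    data class PhotoCaptured(val bytes: ByteArray) : GlassesEvent
}

class GlassesEventDispatcher(
    private val showLiveFeed: () -> Unit,
    private val openCaptureScreen: () -> Unit,
    private val pauseOngoingWork: () -> Unit,
    private val handlePhoto: (ByteArray) -> Unit,
) {
    fun dispatch(event: GlassesEvent) = when (event) {
        GlassesEvent.StreamStarted -> showLiveFeed()        // start rendering frames on the phone
        GlassesEvent.ButtonPressed -> openCaptureScreen()   // physical trigger → app navigation
        GlassesEvent.Paused -> pauseOngoingWork()           // e.g. user took the glasses off
        is GlassesEvent.PhotoCaptured -> handlePhoto(event.bytes)
    }
}
```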
Build Your First Glasses-Enabled Workflow
Now, let’s apply those concepts to create a basic feature. This section is intentionally generic so you can imagine plugging in any specific use case.
Pick a “First Feature” That’s Easy to Validate
Not sure what to build first? Aim for something observable, low-risk, and useful. You want to quickly prove that your app’s glasses integration works end-to-end:
- Observable: You should be able to tell immediately when it works. (E.g. a UI change, a sound, a notification – something obvious.)
- Low-risk: Don’t start with your most mission-critical workflow. Choose something simple that won’t break other app functionality.
- Useful: Ideally, even a basic demo should do something a real user might value, however small, so you can get feedback.
Good first-feature examples:
- “Tap on the glasses → open a specific screen in the phone app.” (Physical trigger on wearable, response in app UI.)
- “Voice command → trigger an in-app message or API call.” (Leverage the glasses mic for a simple voice interaction.)
- “Glasses capture event → send a placeholder data payload to the phone app.” (Use the camera to capture or just simulate an event, and ensure the app handles it.)
Each of these can be accomplished with minimal UI and logic, but they prove out the whole stack (device event, app receives it, app does something).
Implementation Template (Pseudo-Code)
To integrate the glasses for any feature, you’ll typically do something like this:
1. Initialize the Wearables SDK in your app (setup any required singletons or managers).
2. Request and verify permissions (camera, mic, bluetooth) at runtime.
3. Connect to the glasses (e.g. when user taps "Connect", use the SDK’s connect flow).
4. Once connected, start a session and subscribe to relevant events (stream started, button pressed, etc.).
5. On event, execute your app’s action (e.g. run a function, navigate UI, call a service).
6. Provide user feedback for the action (UI update, sound, or haptic feedback in app).
7. Log key states and errors (to console or telemetry) for debugging and support.
This template can guide any feature: just replace the event type and the action. For instance, for a “tap-to-open” feature, step 4 subscribes to a “glasses tap” event, and step 5 opens an activity or view in the app. For a “voice command” feature, step 4 might continuously listen for a keyword (through the mic input) and step 5 executes a command when heard.
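Here is the same template as a Kotlin skeleton. Everything named below (WearablesSdk, FeatureUi, GlassesTap) is a hypothetical stand-in for your own wrapper around the toolkit and your UI layer; the numbered comments map back to the steps above.

```kotlin
import android.util.Log
import kotlinx.coroutines.flow.*

// Hypothetical stand-ins — replace with your real wrapper around the SDK and your UI code.
interface WearablesSdk { suspend fun initialize(); suspend fun connect(): GlassesSession? }
interface GlassesSession { fun events(): Flow<Any> }
interface FeatureUi {
    suspend fun ensurePermissions(): Boolean
    fun openTargetScreen()
    fun confirm(message: String)
    fun showError(message: String)
}
object GlassesTap   // illustrative event for a "tap-to-open" first feature

class FirstFeature(private val sdk: WearablesSdk, private val ui: FeatureUi) {
    suspend fun run() {
        sdk.initialize()                                   // 1. initialize the SDK
        if (!ui.ensurePermissions()) return                // 2. request/verify permissions
        val session = sdk.connect() ?: run {               // 3. connect via the SDK's flow
            ui.showError("Could not connect to glasses"); return
        }
        session.events().collect { event ->                // 4. subscribe to events
            if (event is GlassesTap) {
                ui.openTargetScreen()                      // 5. execute the app action
                ui.confirm("Opened from glasses tap")      // 6. user feedback
            }
            Log.d("FirstFeature", "event: $event")         // 7. log key states
        }
    }
}
```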
Minimal UX Requirements (Don’t Skip These)
Even for a quick prototype, implement some basic UX safeguards:
- Connection status indicator: Show clearly in the app whether glasses are Connected, Connecting, or Disconnected. Users (and testers) need to know if the system is ready. For example, use a simple green/red dot or a status label.
- Error messages: Provide meaningful error or permission prompts. If the glasses aren’t found or Bluetooth is off, tell the user “Please turn on Bluetooth to connect to your glasses.” If a permission is denied, explain why the feature won’t work (“Camera access is needed to stream from your glasses.”).
- Fallback path: Ensure the user isn’t completely blocked if glasses are not available. They should be able to cancel or skip the glasses feature and still use the app. For example, if a workflow normally uses the glasses camera, allow an option to use the phone camera as a fallback, or simply allow the user to continue without that step. This is important for real-world rollout since not everyone will have the glasses or they might be unpaired/out of battery.
Remember, early adopters might demo your app without fully understanding the setup. Good status and error UX will save you a lot of headache in support and will make the demo feel polished even if it’s simple.
Testing & Troubleshooting
Bringing a new device into the mix means you should test a variety of scenarios. Here’s a quick test matrix and some common pitfalls to watch out for:
Test Matrix
| Scenario | Expected Behaviour | Notes |
|---|---|---|
| First-time setup | App guides user through pairing & permissions. Glasses connect and basic feature works. | Test on a fresh install: does the Connect flow prompt the Meta app and OS permissions properly? Any confusing steps? |
| App backgrounded | Session either stays alive (if short) or gracefully disconnects. User is informed if needed. | Try connecting, then switch apps. Does the stream pause? Resume when foreground? Ensure no crashes if backgrounded. |
| Permission denied | User sees an error and instructions to enable permission. | E.g., deny Bluetooth or Camera permission and attempt connect/stream – your app should catch this and explain how to fix in Settings. |
| Disconnect mid-flow | App notices the disconnect and stops the feature safely. Optionally, it retries or prompts user. | Simulate by turning off glasses or disabling Bluetooth. The app should not freeze. It should cancel the ongoing action and show “Disconnected”. Reconnecting should be a one-tap action if possible. |
| Multiple sessions | (If applicable) Starting a second session after ending the first works normally. | Some SDK versions might have bugs where a second connect fails if the first wasn’t closed. Make sure in testing that you can connect, do something, disconnect, and then connect again. |
Tip: The toolkit provides a Developer Mode for testing without the full registration every time. When enabled on the glasses (via the Meta app’s settings) it skips certain authentication steps, which is handy in dev. Use this in your internal testing to iterate faster, but remember to test with it off at least once to cover the real user flow.
Common Gotchas
- Forgetting permissions at runtime: It’s easy to add the usage descriptions in your config and assume it’s done. You also need to request permissions in-app (especially on Android). For example, on Android call ActivityCompat.requestPermissions for BLUETOOTH_CONNECT (and RECORD_AUDIO if using the mic) on startup. On iOS, the first use of an API will prompt for you (e.g., accessing the mic triggers the mic permission), but Bluetooth might require a manual prompt if using CoreBluetooth APIs. Double-check that the first connect attempt isn’t silently failing due to a missing permission prompt.
- Not updating App ID/Integration info: If you registered your app in the Wearables Developer Center, you likely got an integration bundle or key. Make sure it’s included in your app (typically via a file or config entry). If this is missing, the connection may fail to authenticate. Developer Mode can bypass this, but your app won’t work for normal users without the proper integration config.
- Build and signing issues (iOS): Because the sample uses capabilities like camera and possibly background modes, you might need a paid Apple Developer account to run on device (to get provisioning for those entitlements). Also ensure the bundle ID you register in Meta’s portal matches your Xcode project bundle ID.
- Battery optimization (Android): Some Android phones aggressively kill background processes. During testing, if your app or the Meta companion app gets killed, the session will drop. You might need to disable battery optimization for the Meta app and your app to test long-running sessions. Advise users similarly if applicable.
- Missing reconnection logic: The sample apps handle basic connect/disconnect, but if you only ever test the “happy path”, you might not realize your integration doesn’t reconnect if the connection is lost. Plan to handle onDisconnected events by attempting to reconnect a few times (with backoff), or at least notify the user with a “Tap to reconnect” message – see the backoff sketch below. Otherwise you’ll have “works once” syndrome where the feature works the first time but not after a drop until the app is restarted.
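A minimal backoff loop looks like the sketch below. tryConnect stands in for whatever reconnect call your integration layer exposes; the point is the bounded retry shape, not the names.

```kotlin
import kotlinx.coroutines.delay

// Retry a dropped connection a few times with exponential backoff before giving up.
suspend fun reconnectWithBackoff(
    tryConnect: suspend () -> Boolean,   // stand-in for your wrapper's reconnect call
    maxAttempts: Int = 3,
    initialDelayMs: Long = 1_000,
): Boolean {
    var delayMs = initialDelayMs
    repeat(maxAttempts) {
        if (tryConnect()) return true    // reconnected — stop retrying
        delay(delayMs)                   // wait before the next attempt
        delayMs *= 2                     // 1s, 2s, 4s, ...
    }
    return false                         // give up; show a "Tap to reconnect" prompt instead
}
```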
Finally, use the Mock Device Kit for automated testing. The toolkit provides a Mock Device framework so you can simulate a glasses device in your unit tests or UI tests. This is extremely useful for CI/CD – you can write tests that simulate connecting to fake glasses, feed in a fake video stream, and assert that your app responds correctly. (For example, Meta’s demo showed using a video file as a simulated camera feed and verifying the app logic.)
Privacy, Security, and AU Notes
Building for wearable cameras and mics means thinking about privacy and data security from day one. Here are some practical defaults and tips:
Practical Defaults
| Area | Recommended Default | Why it Matters |
|---|---|---|
| Data minimisation | Only collect what you need. For example, if you just need an image classification, don’t store the whole video stream by default. | Lowers the compliance burden and risk in case of data leaks. Users are more comfortable if you’re not stockpiling data. |
| Storage | Avoid saving raw sensor data unless necessary. For images/videos, consider processing in-memory and discarding, or using encrypted storage if needed. | Reduces the risk if a device is lost or if an attacker gains access – less sensitive data sitting around. Also aligns with the principle of least retention. |
| Logging | Redact or avoid personal data in logs. For example, don’t log the content of a voice command, just log that a command was received. | Logs often end up in analytics or crash reports. By keeping them clean, you protect user privacy even in debug info. |
The Wearables SDK itself will collect some usage data (analytics about how the glasses are used with your app) by default. If your policy demands it, you can opt out of this data collection. For instance, on iOS you add a flag in your Info.plist (MWDAT/Analytics/OptOut = YES) to disable the toolkit’s analytics. Be sure to check Meta’s developer terms, Acceptable Use Policy, and privacy policy for what’s collected, and get user consent in your app’s privacy notice if appropriate.
Australian context: If you’re operating in Australia, remember that any personal data (photos, audio, etc.) you collect could be subject to the Privacy Act 1988. That means you should have a clear privacy policy, only use data for the purpose you stated, and secure it properly. Australia also has the Essential Eight cybersecurity strategies – while mostly aimed at government, they’re good practice. For example, “application control” and “restrict administrative privileges” might translate to ensuring your app’s use of glasses hardware is locked down to expected behaviours, and not requesting more permission than it needs.
Moreover, be mindful of the public perception of wearable cameras in Australia: transparency is key. The Ray-Ban Meta glasses have an LED that lights up during recording – don’t attempt to suppress or obscure such indicators. Culturally, being upfront about recording (and offering opt-outs) will build trust with your users.
Production Readiness Checklist
Before you ship a glasses-enabled feature to real users, run through this checklist:
- Robust connection handling: Your app cleanly handles disconnects and reconnections. No crashes or undefined states if the glasses go offline unexpectedly.
- Permission recovery: If a user initially denied permissions, your app should detect that and provide a way to enable them (e.g., a message with an “Enable Bluetooth” link to settings, or a re-prompt if appropriate).
- Feature flags / kill switches: Since this is a preview SDK, it’s wise to be able to disable or tweak glasses features via a remote config or feature flag. If something goes wrong in production, you can turn it off without an app update.
- Analytics & crash logging: Integrate analytics to track usage of the glasses features (e.g., how often users connect, any errors). Also ensure crashes or errors in the glasses workflow are reported to your crash logging solution. This will help in troubleshooting issues in the field.
- Device and version notes: Document which glasses models and phone OS versions you support. For example, you might say “Tested on Ray-Ban Meta (Gen 2) with iOS 17 and Pixel 7 (Android 14)”. Users will thank you if they know their device should work. Also handle gracefully if an unsupported device tries to use it (e.g., if Meta releases new glasses not yet supported by your app, detect and warn or update the app).
- Onboarding & documentation: Provide a quick in-app onboarding for the glasses feature. The first time a user tries it, give a short tutorial or link to a help article. This should include how to pair the glasses, what commands or actions they can do, and any tips (like “make sure your glasses are charged”). For internal purposes, also document the integration in your codebase so other devs can understand how it works.
- Compliance check: Ensure your use of the glasses meets any platform rules or legal requirements. E.g., if your app records audio or video, are you notifying users appropriately? Does your privacy policy explicitly cover this new data collection? It should, before you go live.
By covering these bases, you’ll move from a hacky demo to a solid feature that can be rolled out widely with confidence.
Next Steps
Congratulations on getting a basic integration working! From here, the possibilities open up:
- Choose your next glasses feature: Maybe it’s integrating an AI vision model to recognize objects, or building a hands-free barcode scanner for inventory. Use the same Events → Actions pattern: what event do I get from the glasses, and what action should my app take?
- Replace placeholders with real logic: In your prototype, you might have shown a dummy alert or logged something to the console when an event happened. Now wire that up to real business logic – call your API, open the real screen with actual data, etc.
- Expand platform support: If you started on iOS, consider implementing on Android (or vice versa) to reach more users. The toolkit is similar across platforms, and a lot of your learnings will carry over.
- Internal beta testing: Release the glasses-enabled app to a small group (maybe your team or friendly users) via TestFlight or a beta track. Collect feedback on the experience – was it smooth? Were there points of confusion? Use this feedback to iterate on UX or fix bugs. Since this is new tech, real-world usage might surface things you didn’t anticipate (like certain phone models handling Bluetooth weirdly, etc.).
- Stay updated: The Wearables Toolkit will evolve (Meta has hinted at more sensor access and future display support). Keep an eye on updates from Meta – join the developer preview forums and check the GitHub repo for new releases. Each version’s changelog (for example, the recently released v0.3.0) might bring new APIs or require changes to your code.
Need help hardening the integration or brainstorming advanced use cases? Cybergarden can help you go from prototype → production. We specialize in emerging tech integrations – whether it’s writing native wrapper modules, designing event-driven architectures, rigorous testing, or privacy-by-design consulting, we’d love to assist in making your AI glasses project a success.
FAQs
Q: Do I need preview access to use the toolkit?
A: Yes. As of now, the Wearables Device Access Toolkit is in developer preview, which means you must apply and be approved to get the SDK and enable it for your app. Make sure you use the official sign-up and wait for access before planning a public launch. (Only select partners can publish glasses-integrated apps to the public during the preview – see Meta’s FAQ for details.)
Q: Can I build with React Native or Flutter instead of native iOS/Android?
A: Absolutely. The approach is to use a thin native layer for the glasses part. For example, on iOS you’d create a small Swift module that uses the Toolkit SDK and exposes functions/events to your React Native app. Same on Android with a Kotlin module for React Native or a MethodChannel for Flutter. The key is to keep all the glasses-specific code in that layer, and call it from your cross-platform code. Many developers have successfully integrated things like ARKit or Bluetooth by this method – this toolkit is no different. Just note that the sample apps provided by Meta are native, so you’ll use those as a reference to implement your own module.
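As a rough sketch of that thin native layer on the Flutter side (the React Native version follows the same shape with a native module), here is a Kotlin MethodChannel that forwards calls into a hypothetical GlassesBridge wrapper around the toolkit. The channel plumbing is standard Flutter API; the bridge is yours to write.

```kotlin
import io.flutter.embedding.android.FlutterActivity
import io.flutter.embedding.engine.FlutterEngine
import io.flutter.plugin.common.MethodChannel

class MainActivity : FlutterActivity() {
    private val channel = "app/glasses"

    override fun configureFlutterEngine(flutterEngine: FlutterEngine) {
        super.configureFlutterEngine(flutterEngine)
        MethodChannel(flutterEngine.dartExecutor.binaryMessenger, channel)
            .setMethodCallHandler { call, result ->
                when (call.method) {
                    "connect" -> GlassesBridge.connect { ok -> result.success(ok) }
                    "disconnect" -> { GlassesBridge.disconnect(); result.success(true) }
                    else -> result.notImplemented()
                }
            }
    }
}

// Hypothetical wrapper around the toolkit; keep all SDK-specific code in one place like this.
object GlassesBridge {
    fun connect(onDone: (Boolean) -> Unit) { /* call the toolkit's connect flow here */ onDone(true) }
    fun disconnect() { /* end the session */ }
}
```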
Q: What’s the fastest path to a working demo?
A: Use the provided sample app as your baseline. The fastest way to see results is:
- Get the sample app running (as in the Quickstart above). This gives you a known-good baseline where you can connect to the glasses and stream video.
- Modify the sample slightly: for instance, add a simple UI element or log in one of the event callbacks to trigger your own code. A trivial example: when the glasses capture a photo in the sample app, add a line to upload that photo to your server or to toggle a UI element. This way, you confirm you can run custom logic on glasses events.
- Now you have a minimal end-to-end demo: you connect the glasses, perform an action (like capture), and your custom code runs (e.g., uploading or showing a message). This is often impressive enough to share with stakeholders.
- From there, iterate on that prototype – maybe refine the voice commands or the specific action. But you’ve avoided starting from scratch.
Meta’s sample apps already demonstrate core functionality (connect → stream video → capture photo) in a straightforward way. Leveraging them is the quickest way to avoid boilerplate and get to the “cool part” – implementing your unique feature.
Q: Any tips for optimizing performance or battery?
A: Since the glasses offload processing to the phone, your phone app will be doing heavy lifting (especially for video processing). A few tips:
- Use efficient image processing: If you’re running computer vision on the frames, use optimized libraries and consider processing at a lower resolution if 720p is too heavy. The toolkit caps streaming at 720p/30fps due to wireless bandwidth limits, so you won’t get full HD anyway – use that to your advantage and don’t upsample needlessly.
- Manage session length: Don’t keep the camera streaming if you don’t need it. The sample uses a timer to auto-stop the stream after a set period – a good practice to prevent draining the battery. Encourage users to connect only when needed and disconnect afterwards. Perhaps provide a “Disconnect” button in the UI for peace of mind.
- Optimize UI updates: If you’re showing live video on the phone from the glasses, make sure that your UI rendering (e.g., a Canvas or ImageView update) is efficient. Use appropriate frame rate rendering techniques (throttle if needed to match the ~30fps input).
- Battery usage messaging: Consider informing the user if using the glasses feature is intensive. For instance, if a user starts a long livestream via your app, a small note like “Streaming in progress – this may use additional battery on phone and glasses” is good UX. It sets expectations that both devices will drain faster.
Q: How do I handle updates to the toolkit?
A: Keep an eye on the GitHub repository and Meta’s developer announcements. The toolkit is evolving – new versions might add features (or fix bugs). For example, if version 0.4 comes out with support for a new glasses model or new API, plan to allocate some time to integrate it. The GitHub changelog will detail what changed. Given this is a preview, expect breaking changes and plan your development cycle accordingly (i.e., don’t hard-code too much that can’t be changed when the SDK updates). It’s a good idea to wrap the toolkit calls in an abstraction in your code, so if the SDK API changes, you update it in one place. And always test thoroughly when updating the SDK version.