Suggestion for Developing an SDK for Meta Ray-Ban Glasses
Hey everyone, I'm curious: will there ever be a developer kit (SDK) available for Meta glasses? I think the glasses are great, but there's not much to do with them right now. So I thought, why not build a community around them? If Meta released an API that other apps could call or interact with indirectly (there already seems to be a service running in the background), third-party apps could receive commands, and users could develop their own apps and customize the experience further.

For example, say I want to do something custom, like turning on my smart lights with a "Hey Meta" command. My app could register one or more commands so that whenever I say "Hey Meta, [do something]", the glasses fire a trigger (a POST request or a deep link, for instance) that my app receives and acts on, for that command only. And if the API also allowed live camera access, the way Instagram livestreams do, the possibilities would be endless.

Think about it, everyone. I really want to build custom experiences with this, so please consider releasing an SDK for Meta glasses. Thanks!
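To make the idea concrete, here is a minimal sketch of what the receiving side could look like if Meta ever forwarded registered "Hey Meta" commands to third-party apps as deep links. The `heymeta://` scheme, the `name` query parameter, and the registration step are all invented for illustration; no such Meta API exists today. Only the Android deep-link handling itself is standard:

```kotlin
// Hypothetical sketch: assumes the glasses forwarded "Hey Meta, <command>" to the phone
// as a deep link such as heymeta://command?name=lights_on. The URI scheme and parameter
// are invented; only the Android deep-link plumbing below is standard.
import android.app.Activity
import android.os.Bundle
import android.util.Log

class VoiceCommandActivity : Activity() {
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)

        // The deep link URI arrives on the launching intent.
        val uri = intent?.data
        val command = uri?.getQueryParameter("name")

        when (command) {
            "lights_on" -> turnOnSmartLights() // your own integration, e.g. a LAN request
            else -> Log.w("VoiceCommand", "Unknown or missing command: $command")
        }
        finish() // nothing to show; handle the command and return
    }

    private fun turnOnSmartLights() {
        // Placeholder: call your smart-home bridge here.
        Log.i("VoiceCommand", "Turning on the lights")
    }
}
```

The corresponding `<intent-filter>` for the custom scheme would live in the app's manifest; the point here is just how little the receiving app would need if such a trigger existed.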
Technical Questions for a LBS Game: Disabling System Gestures, Spatial Mapping & Remote Control API
Hello, I'm looking to create a multi-user, large-scale, location-based (offline) game and have a few questions:
1. Is there a way to disable system-level gestures so players can't accidentally exit the application and return to the home screen during gameplay?
2. Is there a method for scanning a large physical space (approximately 10x10 meters) to generate a persistent, shareable map file?
3. Is it possible to enable or provide some kind of control API? We need an interface that lets a central controller remotely start and stop the application on all devices, as well as manage installation and updating of game content (a rough sketch of this kind of control follows below).
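On question 3: as far as I know there is no official fleet-control API, but while the headsets have developer mode and USB debugging enabled, plain adb already covers remote install, start, and stop. A minimal sketch, assuming `adb` is on the PATH and the device serials are known; the serials, package name, and APK path are placeholders:

```kotlin
// Runs one adb command against a specific device and returns its combined output.
fun adb(serial: String, vararg args: String): String {
    val process = ProcessBuilder("adb", "-s", serial, *args)
        .redirectErrorStream(true)
        .start()
    val output = process.inputStream.bufferedReader().readText() // blocks until adb exits
    process.waitFor()
    return output
}

fun main() {
    val devices = listOf("1WMHH123456789", "1WMHH987654321") // placeholder adb serials
    val pkg = "com.example.lbsgame"                           // placeholder package name

    for (serial in devices) {
        // Install or update the build on every headset (-r = reinstall, keep data).
        println(adb(serial, "install", "-r", "build/lbsgame.apk"))

        // Stop any running instance, then launch the app's main activity.
        adb(serial, "shell", "am", "force-stop", pkg)
        println(adb(serial, "shell", "monkey", "-p", pkg,
            "-c", "android.intent.category.LAUNCHER", "1"))
    }
}
```

adb over Wi-Fi (`adb tcpip 5555`, then `adb connect <ip>:5555`) removes the cable requirement, which matters for a 10x10 m play space with many headsets.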
Exploring ChatGPT Integration for VR UI/UX: Has Anyone Tried It?
Hi everyone, I'm currently experimenting with ways to integrate conversational AI (like ChatGPT) into VR environments to enhance user onboarding, contextual help, or dynamic NPC interactions. I'm working with Unity and targeting Quest 2/3. My goal isn't to run the model locally, but to connect to external APIs (e.g., OpenAI) via secure HTTPS requests, ideally triggered by user input or voice in immersive menus. I'd love to hear from other devs:
- Have you tested anything similar in a VR context?
- What are your thoughts on latency and user flow when connecting to external services in VR?
- Any advice on best practices for handling text responses (display, formatting, user focus)?
- Are there any sample projects or SDKs that play well with Meta Quest platforms for this use case?
This is still a prototype phase for me, but I believe natural language interaction could really improve the way users explore VR content, especially for onboarding or educational apps. Looking forward to your insights!
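Not a Unity/C# snippet, but for the request shape and latency handling, here is a standard-library Kotlin sketch of a chat-completion call. The endpoint and JSON layout follow OpenAI's chat completions API as currently documented, the model name is a placeholder, and in Unity you would do the equivalent with UnityWebRequest inside a coroutine or async task so the render thread never blocks on the network:

```kotlin
import java.net.HttpURLConnection
import java.net.URL

// Minimal chat-completion call; keep it off the main/render thread in a real app.
fun askAssistant(apiKey: String, userText: String): String {
    val url = URL("https://api.openai.com/v1/chat/completions")
    val body = """
        {"model": "gpt-4o-mini",
         "messages": [{"role": "user", "content": ${jsonString(userText)}}]}
    """.trimIndent()

    val conn = (url.openConnection() as HttpURLConnection).apply {
        requestMethod = "POST"
        setRequestProperty("Authorization", "Bearer $apiKey")
        setRequestProperty("Content-Type", "application/json")
        connectTimeout = 5_000   // fail fast so the VR UI can show a fallback state
        readTimeout = 20_000
        doOutput = true
    }
    conn.outputStream.use { it.write(body.toByteArray()) }
    return conn.inputStream.bufferedReader().readText() // raw JSON; parse before display
}

// Very small JSON string escaper for the sketch; use a real JSON library in practice.
fun jsonString(s: String): String =
    "\"" + s.replace("\\", "\\\\").replace("\"", "\\\"") + "\""
```

In practice the round trip to a hosted model is often one to several seconds, so the UI should show an immediate "thinking" state and reveal the reply incrementally rather than freezing the menu.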
Ping data overlay
Hi. I'm fairly new to development and I'm stuck on a project. I'm trying to develop a program that displays network information to the user (ping, jitter, packet loss, etc.). I would like it to run over any 3D application in real time, similar to how the OVR Metrics Tool displays headset information. Any documentation or assistance would be greatly appreciated.
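Unprivileged apps usually can't send raw ICMP, so a common stand-in is to time TCP connects and derive jitter and loss from repeated samples. A minimal sketch of the measurement side (the target host, port, and probe count are placeholders); note that drawing an overlay on top of arbitrary third-party 3D apps is restricted on Quest as far as I know, so the display side may be the harder part:

```kotlin
import java.net.InetSocketAddress
import java.net.Socket
import kotlin.math.abs

data class NetStats(val avgMs: Double, val jitterMs: Double, val lossPercent: Double)

// Times TCP connects to host:port as a stand-in for ping (raw ICMP needs privileges).
fun measure(host: String, port: Int = 443, samples: Int = 10, timeoutMs: Int = 1000): NetStats {
    val rtts = mutableListOf<Long>()
    var lost = 0
    repeat(samples) {
        val start = System.nanoTime()
        try {
            Socket().use { it.connect(InetSocketAddress(host, port), timeoutMs) }
            rtts += (System.nanoTime() - start) / 1_000_000 // ns -> ms
        } catch (e: Exception) {
            lost++
        }
        Thread.sleep(200) // space the probes out a little
    }
    val avg = if (rtts.isEmpty()) 0.0 else rtts.average()
    // Simple jitter estimate: mean absolute difference between consecutive RTTs.
    val jitter = rtts.zipWithNext { a, b -> abs(b - a).toDouble() }
        .ifEmpty { listOf(0.0) }
        .average()
    return NetStats(avg, jitter, 100.0 * lost / samples)
}

fun main() {
    println(measure("www.google.com")) // placeholder target; use your game server instead
}
```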
Virtual 3D world anchored to real-world landmarks

## Introduction

In an era where immersive technologies have struggled to gain widespread adoption, we believe there is a compelling opportunity to rethink how users engage with digital content and applications. By anchoring a virtual world to the physical environment and seamlessly integrating 2D and 3D experiences, we could create a platform that offers enhanced productivity, intuitive interactions, and a thriving ecosystem of content and experiences.

We build upon our previous vision for an AR virtual world by introducing an additional key capability: virtual identity augmentation. This feature allows users to curate and project their digital personas within the shared virtual environment, unlocking new dimensions of social interaction, self-expression, and the blending of physical and virtual realms.

## Key Concepts

The core of our proposal revolves around an AR virtual world that is tightly integrated with the physical world, yet maintains its own distinct digital landscape. This environment would be anchored to specific real-world landmarks, such as the Pyramids of Giza, using a combination of GPS, AR frameworks, beacons, and ultra-wideband (UWB) technologies to ensure consistent and precise spatial mapping.

Within this virtual world, users would be able to interact with a variety of 2D and 3D elements, including application icons, virtual objects, and portals to immersive experiences. As we previously described, the key differentiator lies in how these interactions are handled for 2D versus 3D devices:

1. **2D Interactions**: When a user with a 2D device (e.g., smartphone, tablet) interacts with a virtual application icon or object, it would trigger an animated "genie out of a bottle" effect, summoning a 2D window or screen that is locked to a fixed position in the user's view.
2. **3D Interactions**: For users with 3D devices (e.g., AR glasses, VR headsets), interacting with a virtual application icon or object would also trigger the "genie out of a bottle" effect, but instead of a 2D window, it would summon a 3D portal or window that the user can physically move around and even enter.

## Virtual Identity Augmentation

One of the key new features we are proposing for the AR virtual world is the ability for users to place virtual objects, like hats, accessories, or digital avatars, on themselves. These virtual objects would be anchored to the user's position and movements, creating the illusion of the item being physically present.

The critical distinction is that 2D users (e.g., on smartphones, tablets) would be able to see the virtual objects worn by other users in the shared virtual world, but they would not be able to place virtual objects on themselves. This capability would be reserved for 3D device users, who can leverage the spatial awareness and interaction capabilities required for virtual object placement.

These virtual objects placed on a user would persist across devices and sessions, creating a consistent virtual identity or "avatar" for that user within the AR virtual world. This virtual identity would be visible to all other users, regardless of their device capabilities (2D or 3D).

Importantly, the virtual objects used to create this virtual identity could also be leveraged to partially or completely obscure a user's real-world appearance from 2D video, photo, and 3D scanning. This would allow users to control how they are represented and perceived in the blended physical-virtual environment, providing greater privacy and security.
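Returning to the landmark anchoring described under Key Concepts: GPS only gets a device into the neighborhood of a landmark, so a coarse geofence check like the sketch below would typically be the first filter, with AR anchors, beacons, or UWB refining the position afterwards. The coordinates are rough placeholders for the Giza area, and the haversine formula assumes a spherical Earth, which is fine at this scale:

```kotlin
import kotlin.math.*

// Great-circle distance between two lat/lon points in metres (haversine formula).
fun distanceMetres(lat1: Double, lon1: Double, lat2: Double, lon2: Double): Double {
    val r = 6_371_000.0 // mean Earth radius in metres
    val dLat = Math.toRadians(lat2 - lat1)
    val dLon = Math.toRadians(lon2 - lon1)
    val a = sin(dLat / 2).pow(2) +
            cos(Math.toRadians(lat1)) * cos(Math.toRadians(lat2)) * sin(dLon / 2).pow(2)
    return 2 * r * asin(sqrt(a))
}

// Coarse geofence: is the user close enough to a landmark to load its virtual layer?
fun nearLandmark(userLat: Double, userLon: Double,
                 landmarkLat: Double, landmarkLon: Double,
                 radiusMetres: Double = 150.0): Boolean =
    distanceMetres(userLat, userLon, landmarkLat, landmarkLon) <= radiusMetres

fun main() {
    // Placeholder coordinates roughly at the Giza pyramid complex.
    println(nearLandmark(29.9790, 31.1340, 29.9792, 31.1342)) // true: within ~30 m
}
```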
## Enhanced 2D Interfaces for 3D Users

Building on our previous concept, we can further enhance the user experience for 2D applications, particularly for 3D users. By leveraging the depth and spatial characteristics of the 3D interface blocks, we can unlock new ways for users to interact with and manage their virtual applications and content. Some of the key capabilities include:

1. **Contextual Controls and Information Panels**: The sides of the 3D interface blocks could display shortcut controls, supplementary information panels, and other contextual elements that 3D users can access and interact with as they navigate around the application window.
2. **Dynamic Layouts and Customization**: 3D users would be able to resize, rotate, and reposition the side panels and controls, enabling personalized layouts and ergonomic arrangements tailored to their preferences and workflows.
3. **Multi-Dimensional Interactions**: The 3D interface blocks could support advanced interaction methods beyond basic clicking and scrolling, such as gestures (grabbing, pinching, swiping) and voice commands to interact with the contextual controls and information.
4. **Seamless Transition between 2D and 3D**: Despite these enhanced capabilities for 3D users, the 2D application windows would still function as regular 2D interfaces for users without 3D devices, maintaining a seamless collaborative experience across different device types.

## Potential Benefits and Use Cases

The enhanced AR virtual world concept we propose offers several potential benefits and use cases:

1. **Increased Productivity and Ergonomics**: By providing 3D users with enhanced controls, contextual information, and customizable layouts, we can improve their efficiency and ergonomics when working with 2D applications.
2. **Intuitive Spatial Interactions**: The ability to physically move and interact with 3D portals and windows, as well as the option to place virtual objects on oneself, can lead to more natural and immersive ways of engaging with digital content and applications.
3. **Virtual Identity and Self-Expression**: The virtual identity augmentation system allows users to curate and project their digital personas, enabling new forms of social interaction, status signaling, and even monetization opportunities.
4. **Privacy and Security**: The option to obscure one's real-world appearance through virtual identity augmentation can provide users with greater control over their digital privacy, especially in public spaces.
5. **Collaborative Experiences**: The seamless integration of 2D and 3D interactions within the same virtual environment can enable users with different device capabilities to collaborate on tasks and projects.
6. **Extensibility and Customization**: Providing tools and APIs for developers to integrate their own applications and content into the virtual world can foster a thriving ecosystem of experiences.
7. **Anchored to the Real World**: Tying the virtual world to specific real-world landmarks can create a sense of spatial awareness and grounding, making the experience feel more meaningful and connected to the user's physical environment.
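As a rough sketch of how the interface blocks from the Enhanced 2D Interfaces section could be modelled, independent of any engine: each 2D application window becomes a block whose faces can carry contextual panels, 3D clients read and write the panel transforms, and 2D clients only render the front face. All type and field names are invented for illustration:

```kotlin
// Invented data model for the "3D interface block" concept; not tied to any engine.
data class Transform(
    val position: Triple<Float, Float, Float> = Triple(0f, 0f, 0f),
    val rotationDegrees: Triple<Float, Float, Float> = Triple(0f, 0f, 0f),
    val scale: Float = 1f,
)

enum class Face { FRONT, BACK, LEFT, RIGHT, TOP, BOTTOM }

data class SidePanel(
    val face: Face,
    val title: String,        // e.g. "Shortcuts", "Document info"
    val transform: Transform, // 3D users may reposition/resize this
)

data class InterfaceBlock(
    val appId: String,        // the 2D application this block wraps
    val blockTransform: Transform,
    val panels: List<SidePanel>,
) {
    // 2D clients only see the front face, i.e. the ordinary application window.
    fun viewFor2D(): String = "window:$appId"

    // 3D clients get the full block plus any contextual side panels.
    fun viewFor3D(): List<SidePanel> = panels
}

fun main() {
    val block = InterfaceBlock(
        appId = "spreadsheet",
        blockTransform = Transform(),
        panels = listOf(SidePanel(Face.LEFT, "Shortcuts", Transform(scale = 0.5f))),
    )
    println(block.viewFor2D())
    println(block.viewFor3D().map { it.title })
}
```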
**Robotics Safety Integration**
- Real-time visualization of robot operational boundaries
- Dynamic safety zone mapping visible to all platform users
- Automated alerts for boundary violations
- Integration with existing robotics control systems
- Unified space mapping for multi-robot environments

**Environmental Monitoring**
- Visualization of invisible environmental factors:
  - Air pollution particle mapping
  - CO2 concentration levels
  - Temperature gradients
  - Electromagnetic fields
- Real-time data integration from environmental sensors
- Historical data visualization for trend analysis
- Alert systems for dangerous condition levels

**Construction and Infrastructure**
- Real-time 3D blueprint visualization
- Infrastructure mapping:
  - Electrical wiring paths
  - Plumbing systems
  - HVAC ducts
  - Network cables
- Safety feature highlighting for drilling and renovation
- Progress tracking and documentation
- Client visualization tools for project understanding
- Augmented safety checks and compliance monitoring

**Inventory and Asset Management**
- AI-powered real-time inventory tracking
- Integration with camera-based stock management systems
- 3D spatial mapping of warehouse spaces
- Automated photogrammetry for stock visualization
- Real-time update of virtual inventory models
- Cross-reference with ordering systems
- Predictive analytics for stock management

## Conclusion

By combining the core concepts of an AR virtual world with the added capability of virtual identity augmentation, we believe we can create a compelling platform that addresses the shortcomings of past immersive technology efforts. This vision not only offers enhanced productivity, intuitive interactions, and a thriving ecosystem, but also unlocks new dimensions of social interaction, self-expression, and the blending of physical and virtual realms. By including 2D phones, it creates a shift toward a 3D society and paves the way for a new 3D app store.

We invite you to explore this concept further and consider its potential impact on the future of computing and human-computer interaction. Together, we can shape a new era of spatial computing that bridges the gap between the physical and digital worlds.
Oculus quest 2 developer mode not working properly
Hello, since yesterday at 15:30 (Amsterdam time) the Developer Hub app has stopped working properly. I've done a total of 3 factory resets, re-added all my accounts, and reinstalled every app on my computer that I use for Oculus development, but the problem persists.

The problem: I connect the Oculus to the computer through the USB cable. The Hub detects it as usual, but about 6 seconds later it kicks the device right off the Hub and tells me: "Device detected but unauthorized. Please put on your headset to confirm the USB Debugging dialog." The funny thing is, I already allowed this. I have to restart the device to see that dialog pop up again, and it's set to always allow for this computer, but the problem still persists: the device gets kicked after exactly 6 seconds and I get the same message over and over again.

Lovely problem. I've been working on this for 6 hours, but I can't seem to find anyone on any forum with the same problem. I haven't found a workaround, so if someone finds a solution or knows what is wrong, please let me know.

Just want to add that this is one of the worst devices I have worked with. I can't seem to get this device to work stably for development. Mark, fix your company.
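One hedged suggestion: this "unauthorized again after a few seconds" loop is sometimes caused by a stale adb key pair on the host, so clearing the host-side keys and restarting adb forces the headset to be asked to authorize a fresh key. A sketch of that sequence; the `.android` folder under the user's home directory is adb's default key location and may differ on your setup:

```kotlin
import java.io.File

// Helper: run a host-side command and print its output.
fun run(vararg cmd: String) {
    val p = ProcessBuilder(*cmd).redirectErrorStream(true).start()
    println(p.inputStream.bufferedReader().readText())
    p.waitFor()
}

fun main() {
    val androidDir = File(System.getProperty("user.home"), ".android")

    run("adb", "kill-server")

    // Delete the host's adb key pair so a fresh authorization dialog is generated.
    listOf("adbkey", "adbkey.pub").forEach { name ->
        val f = File(androidDir, name)
        if (f.exists()) println("deleting ${f.path}: ${f.delete()}")
    }

    run("adb", "start-server")
    run("adb", "devices") // expect "unauthorized" once, then "device" after accepting on the headset
}
```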
[Issue] Total uninstallation
Hi, I've run into this issue multiple times, mostly because I'm developing a VR game and have to install/uninstall often. It seems that most of a game's files are removed when we uninstall it, but not all of them, which leads to errors when trying to install an APK again. Some of the errors are version-related: if we try to downgrade, even after uninstalling, we still get errors. The solution would be to remove all references to the game when it is uninstalled. (Meta Quest Developer Hub's uninstallation process seems to remove more, because I have fewer issues with it.)

PS: Another request: being able to remove uninstalled games from the library. If I've installed a great number of free games for testing, I don't want those games to haunt my library forever...
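Until uninstallation gets more thorough, one workaround is to clean up explicitly before reinstalling: uninstall the package, remove whatever might be left under its shared-storage folders, then install with the downgrade flag (which typically only works for debuggable development builds). A sketch; the package name and APK path are placeholders:

```kotlin
// Sketch of a "really uninstall, then reinstall" helper.
fun adb(vararg args: String): String {
    val p = ProcessBuilder("adb", *args).redirectErrorStream(true).start()
    val out = p.inputStream.bufferedReader().readText()
    p.waitFor()
    return out
}

fun main() {
    val pkg = "com.example.myvrgame" // placeholder
    val apk = "build/myvrgame.apk"   // placeholder

    println(adb("uninstall", pkg))

    // Remove anything left behind on shared storage (harmless if the dirs are already gone).
    println(adb("shell", "rm", "-rf", "/sdcard/Android/data/$pkg"))
    println(adb("shell", "rm", "-rf", "/sdcard/Android/obb/$pkg"))

    // -r reinstall, -d allow version downgrade (debuggable builds), -g grant runtime permissions.
    println(adb("install", "-r", "-d", "-g", apk))
}
```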
ovr upload fails due to missing android exported in manifest
I started receiving the following failure from ovr-platform-util when uploading Quest APKs for multiple apps, with no changes to my manifest file:

"The AndroidManifest.xml contains an <intent-filter>, but android:exported was not set on its parent node. Add it to your manifest and try again. See https://developer.android.com/about/versions/12/behavior-changes-12#exported for more details."

Is anyone else experiencing this?
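For reference, the underlying requirement is Android 12's rule that any component declaring an `<intent-filter>` must set `android:exported` explicitly; ovr-platform-util is surfacing that same check. In a Unity project this usually means editing the custom AndroidManifest.xml (or whichever plugin manifest owns the activity). A rough example of the change, with the activity name and VR category as placeholders for whatever your manifest actually declares:

```xml
<!-- Any activity (or service/receiver) that declares an <intent-filter> must set
     android:exported explicitly when targeting Android 12+. Activity name and the
     extra category below are placeholders for your project's real values. -->
<activity
    android:name="com.unity3d.player.UnityPlayerActivity"
    android:exported="true">
    <intent-filter>
        <action android:name="android.intent.action.MAIN" />
        <category android:name="android.intent.category.LAUNCHER" />
        <category android:name="com.oculus.intent.category.VR" />
    </intent-filter>
</activity>
```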