cannot connect headset
hi all, using a Win 11 PC with a Quest 3S. I downloaded Meta Quest Developer Hub on the PC and installed it fine. I verified my account and toggled on developer mode in the headset. I plugged the USB cable into the PC, the headset prompted me to accept the connection/terms, I accepted, and I could see that the headset was indeed connected within the hub. Great!

However, I came back a day later and now I cannot connect the headset any more. I go through the same steps: I tried removing the headset from "devices" in the hub, then adding it fresh. It finds the 3S and connects successfully, but when I plug the cable into the headset I hear the chime in the headset and there is no prompt for accepting the connection/terms. The headset does not appear in the devices section of the hub. I have tried reinstalling the hub, toggling dev mode on/off in the headset, etc. Any help is appreciated.

Virtual 3D world anchored to real-world landmarks
## Introduction

In an era where immersive technologies have struggled to gain widespread adoption, we believe there is a compelling opportunity to rethink how users engage with digital content and applications. By anchoring a virtual world to the physical environment and seamlessly integrating 2D and 3D experiences, we could create a platform that offers enhanced productivity, intuitive interactions, and a thriving ecosystem of content and experiences.

We build upon our previous vision for an AR virtual world by introducing an additional key capability: virtual identity augmentation. This feature allows users to curate and project their digital personas within the shared virtual environment, unlocking new dimensions of social interaction, self-expression, and the blending of physical and virtual realms.

## Key Concepts

The core of our proposal revolves around an AR virtual world that is tightly integrated with the physical world, yet maintains its own distinct digital landscape. This environment would be anchored to specific real-world landmarks, such as the Pyramids of Giza, using a combination of GPS, AR frameworks, beacons, and ultra-wideband (UWB) technologies to ensure consistent and precise spatial mapping.

Within this virtual world, users would be able to interact with a variety of 2D and 3D elements, including application icons, virtual objects, and portals to immersive experiences. As we previously described, the key differentiator lies in how these interactions are handled for 2D versus 3D devices:

1. **2D Interactions**: When a user with a 2D device (e.g., smartphone, tablet) interacts with a virtual application icon or object, it would trigger an animated "genie out of a bottle" effect, summoning a 2D window or screen that is locked to a fixed position in the user's view.
2. **3D Interactions**: For users with 3D devices (e.g., AR glasses, VR headsets), interacting with a virtual application icon or object would also trigger the "genie out of a bottle" effect, but instead of a 2D window, it would summon a 3D portal or window that the user can physically move around and even enter.

## Virtual Identity Augmentation

One of the key new features we are proposing for the AR virtual world is the ability for users to place virtual objects, like hats, accessories, or digital avatars, on themselves. These virtual objects would be anchored to the user's position and movements, creating the illusion of the item being physically present.

The critical distinction is that 2D users (e.g., on smartphones, tablets) would be able to see the virtual objects worn by other users in the shared virtual world, but they would not be able to place virtual objects on themselves. This capability would be reserved for 3D device users, who can leverage the spatial awareness and interaction capabilities required for virtual object placement.

These virtual objects placed on a user would persist across devices and sessions, creating a consistent virtual identity or "avatar" for that user within the AR virtual world. This virtual identity would be visible to all other users, regardless of their device capabilities (2D or 3D).

Importantly, the virtual objects used to create this virtual identity could also be leveraged to partially or completely obscure a user's real-world appearance from 2D video, photo, and 3D scanning. This would allow users to control how they are represented and perceived in the blended physical-virtual environment, providing greater privacy and security.

## Enhanced 2D Interfaces for 3D Users

Building on our previous concept, we can further enhance the user experience for 2D applications, particularly for 3D users.
By leveraging the depth and spatial characteristics of the 3D interface blocks, we can unlock new ways for users to interact with and manage their virtual applications and content. Some of the key capabilities include:

1. **Contextual Controls and Information Panels**: The sides of the 3D interface blocks could display shortcut controls, supplementary information panels, and other contextual elements that 3D users can access and interact with as they navigate around the application window.
2. **Dynamic Layouts and Customization**: 3D users would be able to resize, rotate, and reposition the side panels and controls, enabling personalized layouts and ergonomic arrangements tailored to their preferences and workflows.
3. **Multi-Dimensional Interactions**: The 3D interface blocks could support advanced interaction methods beyond basic clicking and scrolling, such as gestures (grabbing, pinching, swiping) and voice commands to interact with the contextual controls and information.
4. **Seamless Transition between 2D and 3D**: Despite these enhanced capabilities for 3D users, the 2D application windows would still function as regular 2D interfaces for users without 3D devices, maintaining a seamless collaborative experience across different device types.

## Potential Benefits and Use Cases

The enhanced AR virtual world concept we propose offers several potential benefits and use cases:

1. **Increased Productivity and Ergonomics**: By providing 3D users with enhanced controls, contextual information, and customizable layouts, we can improve their efficiency and ergonomics when working with 2D applications.
2. **Intuitive Spatial Interactions**: The ability to physically move and interact with 3D portals and windows, as well as the option to place virtual objects on oneself, can lead to more natural and immersive ways of engaging with digital content and applications.
3. **Virtual Identity and Self-Expression**: The virtual identity augmentation system allows users to curate and project their digital personas, enabling new forms of social interaction, status signaling, and even monetization opportunities.
4. **Privacy and Security**: The option to obscure one's real-world appearance through virtual identity augmentation can provide users with greater control over their digital privacy, especially in public spaces.
5. **Collaborative Experiences**: The seamless integration of 2D and 3D interactions within the same virtual environment can enable users with different device capabilities to collaborate on tasks and projects.
6. **Extensibility and Customization**: Providing tools and APIs for developers to integrate their own applications and content into the virtual world can foster a thriving ecosystem of experiences.
7. **Anchored to the Real World**: Tying the virtual world to specific real-world landmarks can create a sense of spatial awareness and grounding, making the experience feel more meaningful and connected to the user's physical environment.
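As a purely illustrative sketch of the landmark anchoring described under Key Concepts, the snippet below models a minimal anchor record and a coarse GPS gate. All names and thresholds here are hypothetical (this is not any real AR SDK); in practice, UWB beacons and visual relocalisation would refine the pose once a device passes the coarse check.

```python
# Hypothetical sketch: a landmark anchor resolved in two stages.
# Stage 1 (shown): coarse GPS gating via great-circle distance.
# Stage 2 (not shown): UWB / visual relocalisation for precise alignment.
import math
from dataclasses import dataclass

@dataclass
class LandmarkAnchor:
    name: str
    lat: float              # WGS-84 latitude, degrees
    lon: float              # WGS-84 longitude, degrees
    resolve_radius_m: float # within this radius, hand off to fine tracking

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two WGS-84 points."""
    r = 6371000.0  # mean Earth radius, metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

giza = LandmarkAnchor("Pyramids of Giza", 29.9792, 31.1342, 500.0)

def can_resolve(anchor, device_lat, device_lon):
    # True when the device is close enough for the anchor's fine-grained
    # content (UWB beacons, visual features) to take over.
    return haversine_m(anchor.lat, anchor.lon, device_lat, device_lon) <= anchor.resolve_radius_m
```

The two-stage design reflects the text above: GPS alone is too imprecise to pin virtual content to a monument, so it only decides *whether* to attempt resolution; the precise spatial mapping comes from the local beacon/vision layer.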
## Robotics Safety Integration

- Real-time visualization of robot operational boundaries
- Dynamic safety zone mapping visible to all platform users
- Automated alerts for boundary violations
- Integration with existing robotics control systems
- Unified space mapping for multi-robot environments

## Environmental Monitoring

- Visualization of invisible environmental factors:
  - Air pollution particle mapping
  - CO2 concentration levels
  - Temperature gradients
  - Electromagnetic fields
- Real-time data integration from environmental sensors
- Historical data visualization for trend analysis
- Alert systems for dangerous condition levels

## Construction and Infrastructure

- Real-time 3D blueprint visualization
- Infrastructure mapping:
  - Electrical wiring paths
  - Plumbing systems
  - HVAC ducts
  - Network cables
- Safety feature highlighting for drilling and renovation
- Progress tracking and documentation
- Client visualization tools for project understanding
- Augmented safety checks and compliance monitoring

## Inventory and Asset Management

- AI-powered real-time inventory tracking
- Integration with camera-based stock management systems
- 3D spatial mapping of warehouse spaces
- Automated photogrammetry for stock visualization
- Real-time update of virtual inventory models
- Cross-reference with ordering systems
- Predictive analytics for stock management

## Conclusion

By combining the core concepts of an AR virtual world with the added capability of virtual identity augmentation, we believe we can create a compelling platform that addresses the shortcomings of past immersive technology efforts. This vision not only offers enhanced productivity, intuitive interactions, and a thriving ecosystem, but also unlocks new dimensions of social interaction, self-expression, and the blending of physical and virtual realms. By including 2D phones, it creates a shift toward a 3D society, leading to a new 3D app store. We invite you to explore this concept further and consider its potential impact on the future of computing and human-computer interaction.
Together, we can shape a new era of spatial computing that bridges the gap between the physical and digital worlds.

Get info about hardware connected to an Oculus Quest headset
Hey, nice to meet you. I'm doing some experiments with an Oculus Quest 3 and I wonder if I can get a list of the hardware connected via Bluetooth and via the headset's USB-C port, using C# code in Unity. Is that possible?

Why not use the Quest 3 controllers' cameras for better hand tracking too?
I've been exploring the capabilities of various VR systems and their approaches to tracking accuracy and user interaction. Specifically, I've been intrigued by the Quest 3's use of controllers with built-in cameras, primarily aimed at navigating and interacting within the virtual environment. However, this got me thinking about the potential for these cameras to be utilized not just for their intended purpose, but also for enhancing hand tracking capabilities, similar to how the Valve Index leverages its base stations for precise tracking.

Given that the Quest 3 controllers are already equipped with cameras, has there been any consideration or experimental development towards using these controllers as a sort of "mobile base station"? Essentially, this would not replace the standard tracking functionalities but could potentially offer developers and users an optional, more precise tracking method when needed.

I understand that the primary function of these cameras is not for tracking in the same capacity as base stations. However, considering the flexibility and innovation in VR development, the prospect of harnessing these cameras for dual purposes seems like an intriguing possibility. This could potentially reduce the need for external hardware in environments where precision is key but the setup of traditional base stations is impractical.

Are there any technical limitations or developmental considerations that would prevent the Quest 3 controllers' cameras from being utilized in this dual capacity? Could software updates or developer tools enable this functionality, or is it more a matter of hardware limitations?

I'm looking forward to hearing your thoughts and insights on this possibility. The idea of expanding the functionality of existing hardware for enhanced user experience and developer flexibility is exciting and could open new avenues for VR interactions and immersion.

Metaverse integrations
hello folks, I am particularly interested in the technical aspects of Meta's Metaverse platform integration, and I would like to inquire about two features:

• Is it possible to connect/recall web views (iframes) in metaverse scenes? The underlying aim is to be able to open a navigable window onto browser pages while remaining within the metaverse.
• If I need to use a private or custom pre-existing login system, can it be integrated?

Thank you in advance for your time and support! Filippo

If it works on Q2 and Q3, will it systematically work on Quest Pro?
Hi everybody, will a tool that functions correctly on Q2 and Q3 systematically function on Quest Pro, please? It's about a project on the way to App Lab. The particularity is that only hand tracking is used... Thanks in advance for your interest, Serge / France

Enhancing Viewing Experience on Quest 3 with Passthrough Feature for VR Applications
I’d like to spark a discussion on a potential feature that could significantly enhance our experience with VR applications like Netflix or Amazon Prime on the Quest 3. Currently, engaging with real-world tasks while immersed in full VR applications disrupts the viewing experience. For instance, if you’re watching a movie and need to grab a glass of water, you’re faced with two options: either double-tap the headset to activate passthrough or walk out of your play area boundaries. Both actions, unfortunately, pause your movie, pulling you out of the immersive experience.

Imagine if future software updates introduced a passthrough option that activates when we step outside a predefined boundary while still allowing the movie to play. This feature would let viewers seamlessly interact with their physical environment without missing a beat of their film—imagine walking to the kitchen to grab some popcorn while still being able to watch your movie through a passthrough view.

Incorporating such a feature could be a game-changer, merging the convenience of the physical world with the immersive experience of VR. Not only would it enhance user convenience, but it could also pave the way for new types of interactive content and applications that fluidly integrate VR with real-world activities.

I’d like to add a note on the current passthrough capabilities of the Quest 3. Many of us have already discovered a “sweet spot” when moving between the boundaries of our play area. If you stop halfway, you can find yourself in an intriguing space that blends the VR experience with the physical world. This existing feature hints at the Quest 3’s potential to merge VR with reality more seamlessly. It suggests that the hardware might already possess the capability to integrate our virtual activities with real-world tasks more fluidly.
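A minimal sketch of the boundary-triggered behaviour proposed above (all names here are hypothetical; this is not a real Meta SDK API): the video keeps playing, and only the passthrough blend fades in as the viewer crosses the boundary, which also reproduces the "halfway sweet spot" as a 50% blend.

```python
# Hypothetical logic sketch: fade passthrough in based on how far the
# user has stepped past the play-area boundary, without pausing video.

def passthrough_opacity(distance_to_boundary_m: float,
                        fade_band_m: float = 0.5) -> float:
    """Return 0.0 (fully virtual) while inside the play area, ramping to
    1.0 (full passthrough) once the user is fade_band_m past the boundary.
    distance_to_boundary_m is positive inside, negative outside."""
    if distance_to_boundary_m >= 0:
        return 0.0                       # inside: stay fully immersive
    overshoot = -distance_to_boundary_m  # metres beyond the boundary
    return min(1.0, overshoot / fade_band_m)
```

Standing a quarter-metre past the boundary would give `passthrough_opacity(-0.25) == 0.5`, i.e. the half-and-half blend described in the "sweet spot" note.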
This underutilized aspect of passthrough could be the foundation for developing the kind of feature I proposed, enhancing our engagement with both VR content and our environment simultaneously. What are your thoughts on this?

[Feature Request] Integrate Tracked Keyboard with Meta Remote Desktop
I've been experimenting with Tracked Keyboard (Logitech MX Keys Mini) and Workrooms recently and have noticed a few UX/usability issues. The tracked keyboard communicates directly with the Quest, which makes perfect sense if you plan to use it as a direct input while using the Quest. However, in Workrooms, when you bring your PC (or Mac in my case) into the workroom, the keyboard does not control the PC. This is a major source of friction from a usability/UX perspective.

A possible solution, and hence the feature request, would be to integrate the tracked keyboard with Meta's Remote Desktop app, as the intention in this scenario is to control and input directly into the PC. This way the user sees the keyboard (passthrough), the keystrokes are inputted directly to the PC, and Remote Desktop can pass keyboard events, positioning, etc. to the Quest.

It does throw up the issue of two modes of Tracked Keyboard (one directly to the Quest, the other to Remote Desktop). However, cognitively this is easier for a user to manage than having two separate keyboards on the same desk (one for the Quest, another for the PC), or having the disconnect that the Tracked Keyboard doesn't directly control the applications on the PC that are brought into Workrooms.

I hope this helps. rgds Dave

Transfer rates and connections
From a PC perspective, DisplayPort 1.4 seems to have the best transfer rates. Would the connector not fit in the headset, or is it too much code to load to utilise the PC's power (CPU/GPU etc.)? I feel there is quite a big market there for average PCs like mine to use headsets; for instance, Half-Life: Alyx is pretty awesome graphically, even on the headset, or Project CARS, and loads of other PC games. Also, with DisplayPort I could turn off my monitor to save it being rendered on both screens, and I can run the line straight out of my graphics card. Dunno if anyone knows a workaround for this? USB 3 maxes out at about 1.2-ish, yet DP is 35-ish Gbps. It seems a logical thing to utilise if Meta wishes to increase global markets; 3D gaming is the way forward. Apologies if this isn't strictly a developer topic. I will get coding soon x
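As a rough sanity check on those figures, here is a back-of-envelope sketch (assumed numbers: a Quest-3-class per-eye resolution of 2064x2208, 90 Hz, 24 bits per pixel, and published nominal link rates). Uncompressed stereo video comfortably exceeds USB 3.0's 5 Gbit/s signalling rate but fits within DisplayPort 1.4's effective payload, which is why USB-based PC VR streaming relies on video compression.

```python
# Back-of-envelope bandwidth check with assumed figures (see lead-in):
# raw uncompressed stereo video vs. common link rates.
width, height = 2064, 2208   # per-eye resolution (approximate, Quest-3-class)
eyes, fps, bpp = 2, 90, 24   # stereo, refresh rate, bits per pixel

raw_gbps = width * height * eyes * fps * bpp / 1e9
print(f"uncompressed stereo video: {raw_gbps:.1f} Gbit/s")  # ~19.7 Gbit/s

usb3_gbps = 5.0      # USB 3.0 (Gen 1) signalling rate
dp14_gbps = 25.92    # DisplayPort 1.4 effective payload (HBR3 x4 lanes, after 8b/10b)
print(f"fits USB 3.0? {raw_gbps < usb3_gbps}")   # False -> must compress
print(f"fits DP 1.4?  {raw_gbps < dp14_gbps}")   # True  -> uncompressed is feasible
```

So the ~35 Gbps figure in the post overstates DP 1.4 slightly (32.4 Gbit/s raw, 25.92 Gbit/s effective), but the conclusion stands: a DisplayPort link could carry the panel uncompressed, while a USB connection cannot.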