Cannot connect headset
Hi all, I'm using a Windows 11 PC with a Quest 3S. I downloaded Meta Quest Developer Hub (MQDH) on the PC and installed it fine. I verified my account and toggled on developer mode in the headset. I plugged the USB cable into the PC, and the headset prompted me to accept the connection/terms. I did so, and could then see that the headset was indeed connected within the hub. Great! However, I came back one day later and now I cannot connect the headset any more. I go through the same steps: I tried removing the headset from "Devices" in the hub, then adding it fresh. It finds the 3S and connects successfully, but when I plug the cable into the headset I hear the bell sound in the headset and there is no prompt to accept the connection/terms. The headset does not appear in the Devices section of the hub. I have tried reinstalling the hub, toggling dev mode on and off in the headset, etc. Any help is appreciated.
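Not an official fix, but since MQDH talks to the headset over ADB, checking the raw `adb devices` state from the PC often narrows this down: a device stuck in the `unauthorized` state means exactly "connected, but the consent prompt was never accepted". The helper below only parses that output; the serial number and sample text are illustrative stand-ins for a real run (with adb on your PATH you would feed it `"$(adb devices)"`):

```shell
# Classify the first attached device from `adb devices` output.
# States: "device" (authorized), "unauthorized" (prompt not accepted),
# "offline" (bad cable/port or stale adb server -- try `adb kill-server`).
device_state() {
  # $1: raw output of `adb devices`; prints the state column of the first device
  printf '%s\n' "$1" | awk 'NR > 1 && NF >= 2 { print $2; exit }'
}

# Illustrative sample only -- not output from a real headset.
sample='List of devices attached
2G0YC1ZF000000  unauthorized'

device_state "$sample"   # prints: unauthorized
```

If the state is `unauthorized`, put the headset on and watch for the dialog; rebooting both ends or toggling developer mode off and on sometimes re-triggers it.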
Hi everyone, I'm Felix from Haply Robotics. We specialize in creating haptic force-feedback technology that brings a tangible sense of touch to VR experiences. Our work is all about pushing the boundaries of what's possible in virtual reality, and you can see some of our projects at Haply.co.

We've had great success integrating our tech with the Meta Quest 2 controllers by designing an adapter to connect the controller to our robot. It was pretty annoying to do a good job of this using the mesh models. We are now trying to do the same for the Meta Quest 3. To make this happen, we need the surface or solid models of the Quest 3 controllers, anything usable in CAD, so ideally not a mesh. This would significantly accelerate our design process and ensure seamless compatibility with the latest Quest technology.

I'm reaching out to this community in hopes of connecting with someone from Meta who can assist us with this, or someone who could share those files. If anyone can point me in the right direction or provide the contact of someone who can help, it would be greatly appreciated. Thanks for reading, and I'm looking forward to any guidance or connections you can offer! Felix

Virtual 3D world anchored to real-world landmarks
## Introduction

In an era where immersive technologies have struggled to gain widespread adoption, we believe there is a compelling opportunity to rethink how users engage with digital content and applications. By anchoring a virtual world to the physical environment and seamlessly integrating 2D and 3D experiences, we could create a platform that offers enhanced productivity, intuitive interactions, and a thriving ecosystem of content and experiences. We build upon our previous vision for an AR virtual world by introducing an additional key capability: virtual identity augmentation. This feature allows users to curate and project their digital personas within the shared virtual environment, unlocking new dimensions of social interaction, self-expression, and the blending of physical and virtual realms.

## Key Concepts

The core of our proposal revolves around an AR virtual world that is tightly integrated with the physical world, yet maintains its own distinct digital landscape. This environment would be anchored to specific real-world landmarks, such as the Pyramids of Giza, using a combination of GPS, AR frameworks, beacons, and ultra-wideband (UWB) technologies to ensure consistent and precise spatial mapping. Within this virtual world, users would be able to interact with a variety of 2D and 3D elements, including application icons, virtual objects, and portals to immersive experiences. As we previously described, the key differentiator lies in how these interactions are handled for 2D versus 3D devices:

1. **2D Interactions**: When a user with a 2D device (e.g., smartphone, tablet) interacts with a virtual application icon or object, it would trigger an animated "genie out of a bottle" effect, summoning a 2D window or screen that is locked to a fixed position in the user's view.
2. **3D Interactions**: For users with 3D devices (e.g., AR glasses, VR headsets), interacting with a virtual application icon or object would also trigger the "genie out of a bottle" effect, but instead of a 2D window, it would summon a 3D portal or window that the user can physically move around and even enter.

## Virtual Identity Augmentation

One of the key new features we are proposing for the AR virtual world is the ability for users to place virtual objects, like hats, accessories, or digital avatars, on themselves. These virtual objects would be anchored to the user's position and movements, creating the illusion of the item being physically present. The critical distinction is that 2D users (e.g., on smartphones, tablets) would be able to see the virtual objects worn by other users in the shared virtual world, but they would not be able to place virtual objects on themselves. This capability would be reserved for 3D device users, who can leverage the spatial awareness and interaction capabilities required for virtual object placement.

These virtual objects placed on a user would persist across devices and sessions, creating a consistent virtual identity or "avatar" for that user within the AR virtual world. This virtual identity would be visible to all other users, regardless of their device capabilities (2D or 3D). Importantly, the virtual objects used to create this virtual identity could also be leveraged to partially or completely obscure a user's real-world appearance from 2D video, photo, and 3D scanning. This would allow users to control how they are represented and perceived in the blended physical-virtual environment, providing greater privacy and security.

## Enhanced 2D Interfaces for 3D Users

Building on our previous concept, we can further enhance the user experience for 2D applications, particularly for 3D users.
By leveraging the depth and spatial characteristics of the 3D interface blocks, we can unlock new ways for users to interact with and manage their virtual applications and content. Some of the key capabilities include:

1. **Contextual Controls and Information Panels**: The sides of the 3D interface blocks could display shortcut controls, supplementary information panels, and other contextual elements that 3D users can access and interact with as they navigate around the application window.
2. **Dynamic Layouts and Customization**: 3D users would be able to resize, rotate, and reposition the side panels and controls, enabling personalized layouts and ergonomic arrangements tailored to their preferences and workflows.
3. **Multi-Dimensional Interactions**: The 3D interface blocks could support advanced interaction methods beyond basic clicking and scrolling, such as gestures (grabbing, pinching, swiping) and voice commands to interact with the contextual controls and information.
4. **Seamless Transition between 2D and 3D**: Despite these enhanced capabilities for 3D users, the 2D application windows would still function as regular 2D interfaces for users without 3D devices, maintaining a seamless collaborative experience across different device types.

## Potential Benefits and Use Cases

The enhanced AR virtual world concept we propose offers several potential benefits and use cases:

1. **Increased Productivity and Ergonomics**: By providing 3D users with enhanced controls, contextual information, and customizable layouts, we can improve their efficiency and ergonomics when working with 2D applications.
2. **Intuitive Spatial Interactions**: The ability to physically move and interact with 3D portals and windows, as well as the option to place virtual objects on oneself, can lead to more natural and immersive ways of engaging with digital content and applications.
3. **Virtual Identity and Self-Expression**: The virtual identity augmentation system allows users to curate and project their digital personas, enabling new forms of social interaction, status signaling, and even monetization opportunities.
4. **Privacy and Security**: The option to obscure one's real-world appearance through virtual identity augmentation can provide users with greater control over their digital privacy, especially in public spaces.
5. **Collaborative Experiences**: The seamless integration of 2D and 3D interactions within the same virtual environment can enable users with different device capabilities to collaborate on tasks and projects.
6. **Extensibility and Customization**: Providing tools and APIs for developers to integrate their own applications and content into the virtual world can foster a thriving ecosystem of experiences.
7. **Anchored to the Real World**: Tying the virtual world to specific real-world landmarks can create a sense of spatial awareness and grounding, making the experience feel more meaningful and connected to the user's physical environment.
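As a concrete illustration of the landmark anchoring above, the first, coarse step is simply a GPS geofence test: decide whether a user is close enough to a registered anchor for its content to load, with beacons/UWB refining the pose afterwards. A minimal sketch, where all names, coordinates, and the 75 m radius are illustrative rather than part of any real API:

```java
// Coarse GPS geofence: a first-pass filter before finer beacon/UWB alignment.
// All names and thresholds here are illustrative.
public class LandmarkAnchor {
    static final double EARTH_RADIUS_M = 6_371_000.0;

    /** Great-circle (haversine) distance in metres between two lat/lon fixes. */
    static double distanceM(double lat1, double lon1, double lat2, double lon2) {
        double dLat = Math.toRadians(lat2 - lat1);
        double dLon = Math.toRadians(lon2 - lon1);
        double a = Math.sin(dLat / 2) * Math.sin(dLat / 2)
                 + Math.cos(Math.toRadians(lat1)) * Math.cos(Math.toRadians(lat2))
                 * Math.sin(dLon / 2) * Math.sin(dLon / 2);
        return 2 * EARTH_RADIUS_M * Math.asin(Math.sqrt(a));
    }

    /** True when the user is within radiusM of the anchor, i.e. its world should load. */
    static boolean insideGeofence(double userLat, double userLon,
                                  double anchorLat, double anchorLon, double radiusM) {
        return distanceM(userLat, userLon, anchorLat, anchorLon) <= radiusM;
    }

    public static void main(String[] args) {
        // Great Pyramid of Giza, with a user standing roughly 60 m away.
        double pyramidLat = 29.9792, pyramidLon = 31.1342;
        System.out.println(insideGeofence(29.9797, 31.1344, pyramidLat, pyramidLon, 75.0));
    }
}
```

In practice the geofence only decides *which* anchor is active; the precise registration of virtual content to the landmark would come from the AR framework's tracking plus the beacon/UWB refinements mentioned above.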
### Robotics Safety Integration

- Real-time visualization of robot operational boundaries
- Dynamic safety zone mapping visible to all platform users
- Automated alerts for boundary violations
- Integration with existing robotics control systems
- Unified space mapping for multi-robot environments

### Environmental Monitoring

- Visualization of invisible environmental factors: air pollution particle mapping, CO2 concentration levels, temperature gradients, electromagnetic fields
- Real-time data integration from environmental sensors
- Historical data visualization for trend analysis
- Alert systems for dangerous condition levels

### Construction and Infrastructure

- Real-time 3D blueprint visualization
- Infrastructure mapping: electrical wiring paths, plumbing systems, HVAC ducts, network cables
- Safety feature highlighting for drilling and renovation
- Progress tracking and documentation
- Client visualization tools for project understanding
- Augmented safety checks and compliance monitoring

### Inventory and Asset Management

- AI-powered real-time inventory tracking
- Integration with camera-based stock management systems
- 3D spatial mapping of warehouse spaces
- Automated photogrammetry for stock visualization
- Real-time updates of virtual inventory models
- Cross-referencing with ordering systems
- Predictive analytics for stock management

## Conclusion

By combining the core concepts of an AR virtual world with the added capability of virtual identity augmentation, we believe we can create a compelling platform that addresses the shortcomings of past immersive technology efforts. This vision not only offers enhanced productivity, intuitive interactions, and a thriving ecosystem, but also unlocks new dimensions of social interaction, self-expression, and the blending of physical and virtual realms. By including 2D phones, it can drive a shift toward a 3D society and, in turn, a new 3D app store. We invite you to explore this concept further and consider its potential impact on the future of computing and human-computer interaction.
Together, we can shape a new era of spatial computing that bridges the gap between the physical and digital worlds.

Request for Enhanced Audio Routing Controls with External Microphones on Meta Quest
Hello Meta Team, I'm reaching out to request improvements in audio routing control for the Meta Quest, especially when using external USB-C microphones. Currently, the Quest OS automatically routes both audio input and output through the USB-C port when an external microphone is connected. This setup unfortunately limits flexibility, as audio playback through the device's internal speakers is disabled, which contrasts with the more granular audio routing available on many Android smartphones.

Use Case and Rationale: This limitation impacts a wide range of VR use cases, such as:

- Recording and Streaming: Many creators and developers require the ability to record or stream with an external microphone while still using the Quest's built-in speakers for monitoring or providing immersive soundscapes.
- Accessibility and Communication: Granular audio routing control could greatly benefit users with specific accessibility needs, or those using VR for social applications, where communicating through an external microphone and hearing output via the speakers would enhance the experience.
- Application Flexibility: Several VR and AR applications rely on custom audio setups that would benefit from per-device audio control, similar to standard Android settings where users can select preferred input and output sources individually.

Feature Suggestions: To enhance audio routing control, I'd like to suggest the following features:

- Option to Separate Input/Output Routing: Allow users to select audio input (e.g., an external mic) and output (e.g., internal speakers) independently, akin to Android's audio settings.
- Persistent Device Preference: Provide a setting to enable/disable automatic audio routing to USB-C devices upon connection, similar to the "Disable USB audio routing" option on Android.
- In-App Routing Controls: Ideally, allow apps to query and set audio routing preferences directly, enabling custom audio experiences within VR applications.
I believe that these enhancements would not only improve accessibility but also expand the creative and practical applications of the Quest headset. This type of control is becoming increasingly essential as more users integrate VR into production, education, and social environments. Thank you for considering this request. If there's any way to provide feedback or be part of testing new audio features, I'd be happy to participate!

Pitch, roll, and yaw not accurate
I am trying to use headset and controller pitch, roll, and yaw values in my latest application, and I need very accurate values. I have measured the exact rotation values and compared them with the ones from ovrCameraRig->CenterEyeAnchor->Euler rotation. Unfortunately, the values do not match. For example, for headset pitch the difference can exceed 10 degrees at an actual pitch of 80 degrees, and for angles close to 90 degrees the values look stranger still. Do you know how I can get the exact headset and controller rotations?
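For what it's worth, per-axis discrepancies that blow up as pitch approaches 90 degrees are the classic signature of Euler-angle extraction near the gimbal-lock singularity, where yaw and roll become mutually indeterminate, compounded by wrap-around (350° vs. −10°) when comparing readouts. It is usually more robust to compare orientations as quaternions and only convert to angles for display. A generic sketch of the conversion (Tait-Bryan yaw-pitch-roll order; this illustrates the singularity handling and is not necessarily the exact convention Unity's `eulerAngles` uses):

```java
// Quaternion -> Tait-Bryan angles (yaw-pitch-roll, ZYX order), with the
// asin argument clamped so pitch stays numerically stable near +/-90 deg.
public class QuatEuler {
    /** Pitch in degrees from a unit quaternion (w, x, y, z). */
    static double pitchDeg(double w, double x, double y, double z) {
        double s = 2.0 * (w * y - z * x);
        s = Math.max(-1.0, Math.min(1.0, s)); // float error can push |s| past 1
        return Math.toDegrees(Math.asin(s));
    }

    /** Yaw in degrees (rotation about the vertical axis in this convention). */
    static double yawDeg(double w, double x, double y, double z) {
        return Math.toDegrees(Math.atan2(2.0 * (w * z + x * y),
                                         1.0 - 2.0 * (y * y + z * z)));
    }

    /** Roll in degrees. */
    static double rollDeg(double w, double x, double y, double z) {
        return Math.toDegrees(Math.atan2(2.0 * (w * x + y * z),
                                         1.0 - 2.0 * (x * x + y * y)));
    }

    public static void main(String[] args) {
        // A pure 80-degree pitch: q = (cos 40deg, 0, sin 40deg, 0)
        double half = Math.toRadians(40.0);
        System.out.println("pitch = "
                + pitchDeg(Math.cos(half), 0, Math.sin(half), 0) + " deg"); // ~80.0
    }
}
```

Near 90 degrees of pitch, any yaw/roll pair describing the same orientation is valid, so comparing those axes individually against a physical measurement can show large "errors" even when the tracked orientation itself is fine; comparing the angle between the two quaternions avoids that ambiguity.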
Making Quest 3 discoverable through local network

I'm trying to use a multicast DNS library (zeroconf protocol) to register the Quest 3 device on the local network. I've tried the following methods in Unity:

Importing this library: https://github.com/jdomnitz/net-mdns. It seems to work in the editor when hitting the play button, but not when the app is built on the Quest 3. So I thought maybe the library was not compatible with Android, and I implemented this other library as a Java plugin: https://github.com/jmdns/jmdns. This plugin seems to execute correctly, judging by logcat, but it won't publish the service, even though I don't get any errors, using this code:

```java
public void startService(String serviceType, String serviceName, int port, String serviceDescription) {
    try {
        InetAddress address = InetAddress.getLocalHost();
        jmDNS = JmDNS.create(address);
        ServiceInfo serviceInfo = ServiceInfo.create(serviceType, serviceName, port, serviceDescription);
        jmDNS.registerService(serviceInfo);
        // This line logs with the correct information, but the service doesn't show up
        Log.d("JmDNSPlugin", "Service Started: " + serviceName + " with type: " + serviceType
                + "; port: " + port + "; description: " + serviceDescription);
    } catch (IOException e) {
        Log.e("JmDNSPlugin", "Error starting service: " + e.getMessage());
        e.printStackTrace();
    }
}
```

I'm not sure if it is a permission thing.
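One thing worth checking in the snippet above: on Android, `InetAddress.getLocalHost()` often resolves to the loopback address, in which case JmDNS binds to 127.0.0.1 and announces the service on an interface nobody else can see, while the log line still looks correct. A hedged sketch of picking a routable interface address instead (plain `java.net`, so it runs anywhere; the class name `LanAddress` is just for illustration):

```java
import java.net.Inet4Address;
import java.net.InetAddress;
import java.net.NetworkInterface;
import java.net.SocketException;
import java.util.Collections;

// Pick a non-loopback IPv4 address to hand to JmDNS.create(...) instead of
// InetAddress.getLocalHost(), which may return 127.0.0.1 on Android.
public class LanAddress {
    /** First usable site-local IPv4 address (e.g. 192.168.x.x), or null if none is up. */
    static InetAddress pick() {
        try {
            for (NetworkInterface nif : Collections.list(NetworkInterface.getNetworkInterfaces())) {
                if (!nif.isUp() || nif.isLoopback()) continue;
                for (InetAddress addr : Collections.list(nif.getInetAddresses())) {
                    if (addr instanceof Inet4Address && addr.isSiteLocalAddress()) {
                        return addr; // the Wi-Fi interface address on a headset
                    }
                }
            }
        } catch (SocketException e) {
            // fall through: treat enumeration failure as "no LAN interface"
        }
        return null;
    }

    public static void main(String[] args) {
        InetAddress a = pick();
        System.out.println(a == null ? "no LAN interface up" : a.getHostAddress());
    }
}
```

You would then call `JmDNS.create(LanAddress.pick())` in place of `JmDNS.create(InetAddress.getLocalHost())`. Note also that declaring `CHANGE_WIFI_MULTICAST_STATE` in the manifest only *permits* multicast; the app still has to acquire a `WifiManager` multicast lock (`createMulticastLock(...).acquire()`, Android-specific and not shown here) or incoming mDNS traffic may be filtered by the Wi-Fi stack.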
I have the following manifest code:

```xml
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
          xmlns:tools="http://schemas.android.com/tools"
          android:installLocation="auto">
  <application android:label="@string/app_name" android:icon="@mipmap/app_icon" android:allowBackup="false">
    <activity android:theme="@android:style/Theme.Black.NoTitleBar.Fullscreen"
              android:configChanges="locale|fontScale|keyboard|keyboardHidden|mcc|mnc|navigation|orientation|screenLayout|screenSize|smallestScreenSize|touchscreen|uiMode"
              android:launchMode="singleTask"
              android:name="com.unity3d.player.UnityPlayerActivity"
              android:excludeFromRecents="true"
              android:exported="true">
      <intent-filter>
        <action android:name="android.intent.action.MAIN" />
        <category android:name="android.intent.category.LAUNCHER" />
        <category android:name="com.oculus.intent.category.VR" />
      </intent-filter>
      <meta-data android:name="com.oculus.vr.focusaware" android:value="true" />
    </activity>
    <meta-data android:name="unityplayer.SkipPermissionsDialog" android:value="false" />
    <meta-data android:name="com.samsung.android.vr.application.mode" android:value="vr_only" />
    <meta-data android:name="com.oculus.handtracking.frequency" android:value="HIGH" />
    <meta-data android:name="com.oculus.handtracking.version" android:value="V2.0" />
    <meta-data android:name="com.oculus.ossplash.background" android:value="black" />
    <meta-data android:name="com.oculus.supportedDevices" android:value="eureka" />
  </application>
  <uses-feature android:name="android.hardware.vr.headtracking" android:version="1" android:required="true" />
  <uses-feature android:name="oculus.software.handtracking" android:required="false" />
  <uses-permission android:name="com.oculus.permission.HAND_TRACKING" />
  <uses-permission android:name="com.oculus.permission.USE_ANCHOR_API" />
  <uses-permission android:name="com.oculus.permission.IMPORT_EXPORT_IOT_MAP_DATA" />
  <uses-feature android:name="com.oculus.feature.PASSTHROUGH" android:required="false" />
  <uses-permission android:name="com.oculus.permission.USE_SCENE" />
  <uses-feature android:name="android.hardware.wifi" android:required="true" />
  <uses-permission android:name="android.permission.INTERNET" />
  <uses-permission android:name="android.permission.CHANGE_NETWORK_STATE" />
  <uses-permission android:name="android.permission.ACCESS_NETWORK_STATE" />
  <uses-permission android:name="android.permission.ACCESS_WIFI_STATE" />
  <uses-permission android:name="android.permission.CHANGE_WIFI_STATE" />
  <uses-permission android:name="android.permission.CHANGE_WIFI_MULTICAST_STATE" />
  <uses-feature android:name="com.oculus.feature.CONTEXTUAL_BOUNDARYLESS_APP" android:required="true" />
</manifest>
```

I thought that the line `<uses-permission android:name="android.permission.CHANGE_WIFI_MULTICAST_STATE" />` should do the trick, but it doesn't seem to work either. So I'm out of ideas; I would really appreciate any help I can get. Thank you in advance.

I need more space to move - how to deactivate the limitation
Hello everyone. I'm experimenting with a training app in MR on the Quest 3. I use an authoring app for this; I'm not a real programmer who can write code... 🙈 My problem is that I want to project a real-sized helicopter onto an empty landing pad, or into the hangar, in MR. However, the permitted range of motion is so small that I cannot walk around the entire helicopter or move far enough away to position the helicopter correctly as a 3D object (for this, I have to stand outside the "box" of the helicopter). I need a space of around 100 m². I found instructions on how to deactivate the movement limitation (boundary) in developer mode, but this option seems to be no longer available (developer mode is active). Is there any way to deactivate the boundary limit on the Quest 3?

Detect batteries and haptic suits only in Android with Unity
I am currently experimenting with the Oculus Quest implementation package and Unity, and I have some questions regarding the following points. All my game and data-retrieval methods run on Android on my Quest 3. I would like to find out during gameplay which devices are connected to the USB-C port (the only port it has) and also via Bluetooth. By devices, I mainly mean batteries and haptic suits. For example, I want to know which battery is being used when it is connected to my Quest headset, and in case a user connects a haptic suit to the Quest 3, I also want to know in-game which haptic suit is being used.

Get info about hardware connected to an Oculus Quest headset
Hey, nice to meet you. I'm doing some experiments with an Oculus Quest 3, and I wonder if I can get a list of all hardware connected via Bluetooth and the USB-C port of the headset, using C# code in Unity. Is that possible?