Subject: Request for Guidance - Innovation Proposal and Strategic Partnership Request
Dear Meta Community/Support Team,

My name is AS33; I am a Strategic Designer and Independent Developer. I am currently researching innovation modules that may be relevant to several Meta teams, including but not limited to Meta Horizon, Generative AI, LLaMA, and Experimental Interface Research.

I am seeking guidance on the following:

1. What department, contact, or channel is best for submitting innovation proposals or partnership ideas?
2. Is there a dedicated team within Meta (e.g. Horizon Labs, Research, R&D, Co-Design) that reviews early-stage concept proposals from external independent authors?
3. Are there any internal innovation or consulting programs (e.g. Co-design Program, Meta Open Research, Meta Quest Creators Hub) currently accepting new participants or promising collaborations?

I am particularly interested in hybrid models where I can contribute not as a permanent team member, but as an external signal architect, designer, or creative collaborator. My goal is to explore mutually beneficial options that could include:

- Strategic consulting on symbolic systems, neuro-alignment, or immersive signal architectures
- Early testing collaboration with the Meta Horizon or Generative AI teams

If you can forward this to the appropriate team or share the relevant contact paths or application portals, I would greatly appreciate your help.

With respect and gratitude,
AS33

MRUK not found despite it being created...?

I'm currently using Quest 3 v62 (now v63) and Unity 2022.3.10f1. I'm working on a random spawn mechanic in an MR environment where objects can spawn on the ceiling. The feature worked fine when I tested it in Play Mode in the Unity Editor, but once I built it for the standalone Quest 3 (or simply hooked it up with Quest Link), the scene could no longer be loaded. The room setup does indicate that I have my tables and walls, but there's no ceiling. I presume the spatial data didn't transfer properly (I did write a script to grant the Quest 3 permission for spatial data, and I enabled Permission Requests On Startup). I have no idea where it all went south. Any ideas?
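One thing worth double-checking is whether the spatial data permission is actually granted in the standalone build; over Quest Link the Editor path can behave differently from an installed .apk. Below is a minimal sketch, assuming the standard `com.oculus.permission.USE_SCENE` permission string used by the Quest Scene API and Unity's Android permission API, that requests the permission at startup and logs the result so adb logcat can confirm the state on device:

```csharp
using UnityEngine;
using UnityEngine.Android;

// Minimal sketch: request the Quest spatial data (Scene) permission at startup
// and log whether it was granted, so the state can be verified via adb logcat.
public class ScenePermissionCheck : MonoBehaviour
{
    // Permission string used by the Quest Scene API for spatial data access.
    private const string ScenePermission = "com.oculus.permission.USE_SCENE";

    private void Start()
    {
        if (Permission.HasUserAuthorizedPermission(ScenePermission))
        {
            Debug.Log("[ScenePermissionCheck] Spatial data permission already granted.");
            return;
        }

        var callbacks = new PermissionCallbacks();
        callbacks.PermissionGranted += _ =>
            Debug.Log("[ScenePermissionCheck] Spatial data permission granted.");
        callbacks.PermissionDenied += _ =>
            Debug.LogWarning("[ScenePermissionCheck] Spatial data permission denied; " +
                             "scene anchors (walls, ceiling, tables) will not load.");

        Permission.RequestUserPermission(ScenePermission, callbacks);
    }
}
```

If the permission is granted and the ceiling still never shows up, it may simply be missing from the saved room setup on the headset, so re-running Space Setup on the device is worth a try.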

No scene mesh when building in .apk

Greetings. When using the Meta XR All-in-One SDK v69.0.0.0 with the "Scene Mesh" building block in a passthrough (i.e. MR) project over Quest Link, everything works properly and the scene mesh is generated. But when I build the project for Android as an .apk, the scene mesh stops working and no mesh is generated. How can this be fixed? I need it to work specifically from an .apk.

Unity version: 2022.3.44f1. I'm using an Oculus Quest 3. Thanks

Virtual 3D world anchored to real-world landmarks

## Introduction

In an era where immersive technologies have struggled to gain widespread adoption, we believe there is a compelling opportunity to rethink how users engage with digital content and applications. By anchoring a virtual world to the physical environment and seamlessly integrating 2D and 3D experiences, we could create a platform that offers enhanced productivity, intuitive interactions, and a thriving ecosystem of content and experiences.

We build upon our previous vision for an AR virtual world by introducing an additional key capability: virtual identity augmentation. This feature allows users to curate and project their digital personas within the shared virtual environment, unlocking new dimensions of social interaction, self-expression, and the blending of physical and virtual realms.

## Key Concepts

The core of our proposal revolves around an AR virtual world that is tightly integrated with the physical world, yet maintains its own distinct digital landscape. This environment would be anchored to specific real-world landmarks, such as the Pyramids of Giza, using a combination of GPS, AR frameworks, beacons, and ultra-wideband (UWB) technologies to ensure consistent and precise spatial mapping.

Within this virtual world, users would be able to interact with a variety of 2D and 3D elements, including application icons, virtual objects, and portals to immersive experiences. As we previously described, the key differentiator lies in how these interactions are handled for 2D versus 3D devices:

1. **2D Interactions**: When a user with a 2D device (e.g., smartphone, tablet) interacts with a virtual application icon or object, it would trigger an animated "genie out of a bottle" effect, summoning a 2D window or screen that is locked to a fixed position in the user's view.
2. **3D Interactions**: For users with 3D devices (e.g., AR glasses, VR headsets), interacting with a virtual application icon or object would also trigger the "genie out of a bottle" effect, but instead of a 2D window, it would summon a 3D portal or window that the user can physically move around and even enter.

## Virtual Identity Augmentation

One of the key new features we are proposing for the AR virtual world is the ability for users to place virtual objects, such as hats, accessories, or digital avatars, on themselves. These virtual objects would be anchored to the user's position and movements, creating the illusion of the item being physically present (a minimal attachment sketch follows at the end of this section).

The critical distinction is that 2D users (e.g., on smartphones or tablets) would be able to see the virtual objects worn by other users in the shared virtual world, but they would not be able to place virtual objects on themselves. This capability would be reserved for 3D device users, who can leverage the spatial awareness and interaction capabilities required for virtual object placement.

These virtual objects placed on a user would persist across devices and sessions, creating a consistent virtual identity or "avatar" for that user within the AR virtual world. This virtual identity would be visible to all other users, regardless of their device capabilities (2D or 3D).

Importantly, the virtual objects used to create this virtual identity could also be leveraged to partially or completely obscure a user's real-world appearance from 2D video, photo, and 3D scanning. This would allow users to control how they are represented and perceived in the blended physical-virtual environment, providing greater privacy and security.
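To make the body-anchored object idea concrete, here is a minimal, hypothetical Unity sketch (not part of the proposal's actual implementation) that parents a virtual accessory to the user's head pose so it follows their movements. It assumes `Camera.main` tracks the HMD, as it does in a typical VR camera rig, and the `hatPrefab` field is a placeholder asset:

```csharp
using UnityEngine;

// Hypothetical sketch: attach a virtual accessory (e.g. a hat) to the user's
// head pose so it appears anchored to their body as they move around.
public class VirtualAccessoryAnchor : MonoBehaviour
{
    [SerializeField] private GameObject hatPrefab;                            // placeholder accessory asset
    [SerializeField] private Vector3 headOffset = new Vector3(0f, 0.12f, 0f); // just above the head

    private GameObject _hatInstance;

    private void Start()
    {
        // In a typical VR rig the main camera follows the HMD pose.
        Transform head = Camera.main != null ? Camera.main.transform : null;
        if (head == null || hatPrefab == null) return;

        // Parenting to the head keeps the accessory locked to the user's movements.
        _hatInstance = Instantiate(hatPrefab, head);
        _hatInstance.transform.localPosition = headOffset;
        _hatInstance.transform.localRotation = Quaternion.identity;
    }
}
```

For other users to see the accessory, the same anchor pose would need to be replicated over the network, which is outside the scope of this sketch.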
## Enhanced 2D Interfaces for 3D Users

Building on our previous concept, we can further enhance the user experience for 2D applications, particularly for 3D users. By leveraging the depth and spatial characteristics of the 3D interface blocks, we can unlock new ways for users to interact with and manage their virtual applications and content. Some of the key capabilities include:

1. **Contextual Controls and Information Panels**: The sides of the 3D interface blocks could display shortcut controls, supplementary information panels, and other contextual elements that 3D users can access and interact with as they navigate around the application window.
2. **Dynamic Layouts and Customization**: 3D users would be able to resize, rotate, and reposition the side panels and controls, enabling personalized layouts and ergonomic arrangements tailored to their preferences and workflows.
3. **Multi-Dimensional Interactions**: The 3D interface blocks could support advanced interaction methods beyond basic clicking and scrolling, such as gestures (grabbing, pinching, swiping) and voice commands to interact with the contextual controls and information.
4. **Seamless Transition between 2D and 3D**: Despite these enhanced capabilities for 3D users, the 2D application windows would still function as regular 2D interfaces for users without 3D devices, maintaining a seamless collaborative experience across different device types.

## Potential Benefits and Use Cases

The enhanced AR virtual world concept we propose offers several potential benefits and use cases:

1. **Increased Productivity and Ergonomics**: By providing 3D users with enhanced controls, contextual information, and customizable layouts, we can improve their efficiency and ergonomics when working with 2D applications.
2. **Intuitive Spatial Interactions**: The ability to physically move and interact with 3D portals and windows, as well as the option to place virtual objects on oneself, can lead to more natural and immersive ways of engaging with digital content and applications.
3. **Virtual Identity and Self-Expression**: The virtual identity augmentation system allows users to curate and project their digital personas, enabling new forms of social interaction, status signaling, and even monetization opportunities.
4. **Privacy and Security**: The option to obscure one's real-world appearance through virtual identity augmentation can provide users with greater control over their digital privacy, especially in public spaces.
5. **Collaborative Experiences**: The seamless integration of 2D and 3D interactions within the same virtual environment can enable users with different device capabilities to collaborate on tasks and projects.
6. **Extensibility and Customization**: Providing tools and APIs for developers to integrate their own applications and content into the virtual world can foster a thriving ecosystem of experiences.
7. **Anchored to the Real World**: Tying the virtual world to specific real-world landmarks can create a sense of spatial awareness and grounding, making the experience feel more meaningful and connected to the user's physical environment (a coordinate-anchoring sketch follows this list).
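To ground the landmark-anchoring idea from the Key Concepts section, here is a minimal, hypothetical sketch that converts a GPS fix into flat local coordinates relative to a known landmark using an equirectangular approximation. The landmark latitude and longitude are placeholders, and a production system would fuse this with UWB/beacon data rather than rely on GPS alone:

```csharp
using UnityEngine;

// Hypothetical sketch: map a GPS fix to local coordinates (metres) relative to a
// real-world landmark using an equirectangular approximation. Adequate near the
// landmark; not a substitute for UWB/beacon refinement.
public static class LandmarkAnchor
{
    private const double EarthRadiusMetres = 6371000.0;

    // Placeholder landmark: approximate coordinates of the Great Pyramid of Giza.
    private const double LandmarkLatitude = 29.9792;
    private const double LandmarkLongitude = 31.1342;

    // Returns an offset in metres: x = east, z = north, relative to the landmark.
    public static Vector3 GpsToLocal(double latitude, double longitude)
    {
        double latRad = latitude * Mathf.Deg2Rad;
        double lonRad = longitude * Mathf.Deg2Rad;
        double landmarkLatRad = LandmarkLatitude * Mathf.Deg2Rad;
        double landmarkLonRad = LandmarkLongitude * Mathf.Deg2Rad;

        // Equirectangular approximation: scale the longitude delta by cos(latitude).
        double x = (lonRad - landmarkLonRad) * System.Math.Cos(landmarkLatRad) * EarthRadiusMetres;
        double z = (latRad - landmarkLatRad) * EarthRadiusMetres;

        return new Vector3((float)x, 0f, (float)z);
    }
}
```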
Beyond these benefits, the platform could support a range of domain-specific applications:

**Robotics Safety Integration**
- Real-time visualization of robot operational boundaries
- Dynamic safety zone mapping visible to all platform users
- Automated alerts for boundary violations
- Integration with existing robotics control systems
- Unified space mapping for multi-robot environments

**Environmental Monitoring**
- Visualization of invisible environmental factors:
  - Air pollution particle mapping
  - CO2 concentration levels
  - Temperature gradients
  - Electromagnetic fields
- Real-time data integration from environmental sensors
- Historical data visualization for trend analysis
- Alert systems for dangerous condition levels

**Construction and Infrastructure**
- Real-time 3D blueprint visualization
- Infrastructure mapping:
  - Electrical wiring paths
  - Plumbing systems
  - HVAC ducts
  - Network cables
- Safety feature highlighting for drilling and renovation
- Progress tracking and documentation
- Client visualization tools for project understanding
- Augmented safety checks and compliance monitoring

**Inventory and Asset Management**
- AI-powered real-time inventory tracking
- Integration with camera-based stock management systems
- 3D spatial mapping of warehouse spaces
- Automated photogrammetry for stock visualization
- Real-time updates of virtual inventory models
- Cross-referencing with ordering systems
- Predictive analytics for stock management

## Conclusion

By combining the core concepts of an AR virtual world with the added capability of virtual identity augmentation, we believe we can create a compelling platform that addresses the shortcomings of past immersive technology efforts. This vision not only offers enhanced productivity, intuitive interactions, and a thriving ecosystem, but also unlocks new dimensions of social interaction, self-expression, and the blending of physical and virtual realms. By including 2D phones, it creates a shift toward a 3D society and leads to a new 3D app store.

We invite you to explore this concept further and consider its potential impact on the future of computing and human-computer interaction. Together, we can shape a new era of spatial computing that bridges the gap between the physical and digital worlds.

Hand tracking update root scale not working

I'm trying to use hand tracking in my app, and no matter what, the hand scale stays at 1, even with a friend's hands that are much smaller. After some investigation and a lot of debugging, I found that the hand scale is calculated only for the first frame of the application, sitting at around 1.1 before it gets switched back to 1 forever. A "solution" I found is to switch off the Update Root Scale parameter on my hands; I could then scale them based on that initial value, but according to the documentation, the root scale is supposed to be updated during runtime. (The documentation is pretty sparse on everything, though, and it never details how the scale is supposed to be measured.) Has anyone managed to get the root scale to update for their hand tracking? If yes, could you share some insight with me?
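For reference, here is a minimal sketch of the workaround described above, assuming the `OVRHand` component from the Meta/Oculus integration and its `HandScale` property, and that Update Root Scale has been disabled on the corresponding `OVRSkeleton` in the Inspector. It captures the scale reported shortly after tracking starts (before it collapses back to 1) and keeps applying that value to the hand root:

```csharp
using UnityEngine;

// Sketch of the workaround: with Update Root Scale disabled on the OVRSkeleton,
// grab the hand scale reported once tracking starts (before it resets to 1)
// and keep applying it to the hand root manually.
// Assumes OVRHand.HandScale is the value that gets reset; adjust as needed.
[RequireComponent(typeof(OVRHand))]
public class ManualHandScale : MonoBehaviour
{
    private OVRHand _hand;
    private float _capturedScale = -1f;

    private void Awake()
    {
        _hand = GetComponent<OVRHand>();
    }

    private void Update()
    {
        if (!_hand.IsTracked) return;

        // Capture the first plausible, non-default reading once tracking is up.
        if (_capturedScale <= 0f)
        {
            float reported = _hand.HandScale;
            if (reported > 0f && !Mathf.Approximately(reported, 1f))
            {
                _capturedScale = reported;
            }
        }

        if (_capturedScale > 0f)
        {
            transform.localScale = Vector3.one * _capturedScale;
        }
    }
}
```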

Dynamic shadows? Shadow map... Please help

Hola! After messing around with the MR Utility Kit sample, I really liked the "Shadow map" version of Oppy's shadow. I have been trying to reproduce this shadow in my own project, copying every single Blueprint and setting from the PTRL sample map (the map that showcases all the interactions with light and shadows in VR), and I just cannot make it work. Do you guys know how to implement a simple movable shadow in VR using this method? Thanks a lot in advance

Meta please expand React Native to support augmenting 2D UI with 3D objects

Apple's visionOS allows developers to create non-gaming-focused mixed reality apps using common iOS UI toolkits (e.g. SwiftUI), and Apple has expanded SwiftUI to support "Volumes", where 2D UI can live in harmony with 3D objects in a virtualized cube of real estate in the user's volumetric space. The way forward to create a counterpart to this is to expand React Native to support behavior like visionOS volumes. React Native is a widely adopted, community-supported, and well-loved multi-platform UI framework that Meta already plays a huge role in maintaining, so this would be a great fit. Because Meta Quest devices run on Horizon OS, which is a forked version of Android, and because React Native already supports Android, it should be a small leap for Meta's development team to make React Native with Android Studio a new option for building Meta Quest mixed reality productivity apps. Please consider.

International keyboards

How can it be that there is still no support for non-English Bluetooth keyboards?! The Meta Quest could be a nice productivity tool, except this makes it useless outside English-speaking countries. It's such a basic feature that I was surprised it wasn't supported when I bought a keyboard. A little searching shows this has been a known issue for years! It can't be that hard to add. I hope this gets some attention at some point.

When will we get object and image classification (Computer Vision) for Quest 3 and Quest Pro?

If I wanted to build a mixed reality app that can detect when a certain brand logo is visible on a poster, coffee cup coaster, etc., and then allow spatial anchoring relative to that logo, there seems to be no way to achieve this today. Computer vision for Quest 3 and Quest Pro developers is limited to a very restricted list of "semantic classification" labels, all of them room architecture and furniture related (ceiling, floor, wall, door fixture, lamp, desk, etc.); the full list is here: https://developer.oculus.com/documentation/unity/unity-scene-supported-semantic-labels/?fbclid=IwAR3KeVSJCLX977HPLKVDkFM3YqG71p_Blo_eoC7onKkax7wyCafLV0gXTCc

This also prohibits any kind of AR/MR training experience where some physical-world object (e.g. a bulldozer operations panel) could be detected and spatial anchors augmented relative to specific control panel features to provide dialogs, etc.: all the things you'd expect from industrial AR applications. But this is not just useful for enterprise/industrial AR; image and object classification is a core AR/MR feature required to build compelling experiences. Without it, we just have novelty use cases.

Looking at the competition, I see ByteDance is solving this by simply allowing camera feed access on the enterprise Pico 4, while blocking it on the retail version. I doubt Meta will provide camera feed access, as they are no longer selling enterprise-specific hardware and this would require a special firmware update to enable. Apple has provided camera access to iOS developers using ARKit for years; for Vision Pro's ARKit implementation they are restricting camera feed access, but they still provide image classification/detection and their computer vision models, allowing developers to add their own images for recognition. Here's a page from their docs: https://developer.apple.com/documentation/visionos/tracking-images-in-3d-space

I am really surprised that Quest Pro has been out almost a year and this sort of core AR/MR functionality is completely absent. With Quest 3 now released, more attention will be on AR/MR experiences, and Meta has great in-house AI technology. They have computer vision models, and they could build a closed pipeline where the raw image feed is not accessible but the classifier model is compiled, so that through a closed system the detection can happen in Unity3D or Unreal apps. Regardless of how they achieve it, this is very important to future MR/AR apps. Without it, basically all you can do is very simple spatial anchoring, which may be suitable for novelty games but is very restrictive and not reflective of the power of MR/AR.
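For anyone exploring what is available today, here is a minimal sketch of reading the current semantic classification labels from loaded scene anchors. It assumes the Scene API's `OVRSemanticClassification` component and that the room has already been loaded into the Unity scene; there is no supported way to add custom classes such as a brand logo.

```csharp
using UnityEngine;

// Minimal sketch: enumerate the semantic labels the Scene API currently exposes
// (walls, floor, ceiling, desk, etc.). Only the predefined room/furniture labels
// are available; custom object or image classes cannot be added.
public class SemanticLabelDump : MonoBehaviour
{
    private void Start()
    {
        var classifications = FindObjectsOfType<OVRSemanticClassification>();
        Debug.Log($"[SemanticLabelDump] Found {classifications.Length} classified anchors.");

        foreach (var classification in classifications)
        {
            // Each anchor can carry one or more labels from the supported list.
            Debug.Log($"[SemanticLabelDump] {classification.name}: " +
                      string.Join(", ", classification.Labels));
        }
    }
}
```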

Real-world objects impact virtual world

I want real-world objects to impact the virtual world. For example, I want to get data/events when a real-world ball crosses virtual-world score lines, so I can count the score. Any ideas on how to achieve this?

My thoughts:

1. Capture the camera feed with scrcpy and do some computer vision magic on a PC. Do you think it's possible and worth digging into?
2. The same CV magic, but capturing video from an external camera, such as a smartphone pointed at my play area. This has a benefit: there's no need to look at the ball all the time to get visual data, so players could keep their eyes up, which is a good skill to develop for hockey (and other) players.
3. Track the real-world hockey stick with an attached controller and detect collisions of the virtual hockey stick with the score lines (see the sketch below). I don't like this approach due to the lack of movement freedom: a player can't score by pushing the ball off the hockey stick blade.

As a player, I want to feel that it's the real-world ball crossing the score lines.
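If the controller-tracking route (option 3) were attempted despite its limitations, the Unity side is a plain trigger check. Here is a minimal sketch, assuming the score line is a GameObject with a trigger collider and the tracked stick or ball proxy carries a Rigidbody and a known tag (the tag name is a placeholder and must exist in the project's tag list):

```csharp
using UnityEngine;

// Minimal sketch: count a score whenever the tracked stick/ball proxy passes
// through the virtual score line. Attach this to the score line object, which
// needs a Collider with "Is Trigger" enabled; the moving proxy needs a Rigidbody.
public class ScoreLine : MonoBehaviour
{
    [SerializeField] private string proxyTag = "BallProxy"; // placeholder tag

    public int Score { get; private set; }

    private void OnTriggerEnter(Collider other)
    {
        if (!other.CompareTag(proxyTag)) return;

        Score++;
        Debug.Log($"[ScoreLine] Goal! Current score: {Score}");
    }
}
```

The CV-based options (1 and 2) would still be needed to make the ball itself, rather than the stick, the scoring object.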