Shared Mode - storing app data across sessions
We’re working on integrating a new license system for our VR training app and hope to get insights on two related issues we’re facing, particularly in the context of Meta for Work (MHMS).

Accessing MHMS Distributed Files
We’d like to support large-scale deployments by allowing customers to deploy encrypted license files to headsets via Horizon managed services (there is an option there to deploy files to devices). File distribution itself works fine, but we haven’t found a way to read these files from within the app at runtime, i.e. to have the proper permissions to read them. Do you have any guidance on how to access these files from the app?

Storing License Info Locally
We also want to store license data locally on the device after activation while online (for customers who do not want to distribute license files), so the app can validate the license even when the device is offline. This is essentially the same encrypted data as the distributed file, just generated online and then downloaded to the device. Is there a supported way to persist such data locally on MHMS-managed devices? We’re aware that local storage is very restricted across sessions, so we’d love to know if there is a way (now or with a future update). A workaround is keeping users in the same session, but that’s not ideal, as users always have the option to end a session. For now we will require MHMS devices to be online, but that’s not a viable long-term solution once devices are also used offline; MHMS itself now supports offline use. If there is a solution to my first question (and maybe to my second), it could also be used for trainer-led courses where the trainer sets the training options for a session, or even for the organization as a whole (think language and other regional options). Hopefully someone can provide some insights.

Virtual 3D world anchored to real-world landmarks
## Introduction

In an era where immersive technologies have struggled to gain widespread adoption, we believe there is a compelling opportunity to rethink how users engage with digital content and applications. By anchoring a virtual world to the physical environment and seamlessly integrating 2D and 3D experiences, we could create a platform that offers enhanced productivity, intuitive interactions, and a thriving ecosystem of content and experiences.

We build upon our previous vision for an AR virtual world by introducing an additional key capability: virtual identity augmentation. This feature allows users to curate and project their digital personas within the shared virtual environment, unlocking new dimensions of social interaction, self-expression, and the blending of physical and virtual realms.

## Key Concepts

The core of our proposal revolves around an AR virtual world that is tightly integrated with the physical world, yet maintains its own distinct digital landscape. This environment would be anchored to specific real-world landmarks, such as the Pyramids of Giza, using a combination of GPS, AR frameworks, beacons, and ultra-wideband (UWB) technologies to ensure consistent and precise spatial mapping.

Within this virtual world, users would be able to interact with a variety of 2D and 3D elements, including application icons, virtual objects, and portals to immersive experiences. As we previously described, the key differentiator lies in how these interactions are handled for 2D versus 3D devices:

1. **2D Interactions**: When a user with a 2D device (e.g., smartphone, tablet) interacts with a virtual application icon or object, it would trigger an animated "genie out of a bottle" effect, summoning a 2D window or screen that is locked to a fixed position in the user's view.
2. **3D Interactions**: For users with 3D devices (e.g., AR glasses, VR headsets), interacting with a virtual application icon or object would also trigger the "genie out of a bottle" effect, but instead of a 2D window, it would summon a 3D portal or window that the user can physically move around and even enter.

## Virtual Identity Augmentation

One of the key new features we are proposing for the AR virtual world is the ability for users to place virtual objects, like hats, accessories, or digital avatars, on themselves. These virtual objects would be anchored to the user's position and movements, creating the illusion of the item being physically present.

The critical distinction is that 2D users (e.g., on smartphones, tablets) would be able to see the virtual objects worn by other users in the shared virtual world, but they would not be able to place virtual objects on themselves. This capability would be reserved for 3D device users, who can leverage the spatial awareness and interaction capabilities required for virtual object placement.

These virtual objects placed on a user would persist across devices and sessions, creating a consistent virtual identity or "avatar" for that user within the AR virtual world. This virtual identity would be visible to all other users, regardless of their device capabilities (2D or 3D). Importantly, the virtual objects used to create this virtual identity could also be leveraged to partially or completely obscure a user's real-world appearance from 2D video, photo, and 3D scanning. This would allow users to control how they are represented and perceived in the blended physical-virtual environment, providing greater privacy and security.

## Enhanced 2D Interfaces for 3D Users

Building on our previous concept, we can further enhance the user experience for 2D applications, particularly for 3D users.
By leveraging the depth and spatial characteristics of the 3D interface blocks, we can unlock new ways for users to interact with and manage their virtual applications and content. Some of the key capabilities include:

1. **Contextual Controls and Information Panels**: The sides of the 3D interface blocks could display shortcut controls, supplementary information panels, and other contextual elements that 3D users can access and interact with as they navigate around the application window.
2. **Dynamic Layouts and Customization**: 3D users would be able to resize, rotate, and reposition the side panels and controls, enabling personalized layouts and ergonomic arrangements tailored to their preferences and workflows.
3. **Multi-Dimensional Interactions**: The 3D interface blocks could support advanced interaction methods beyond basic clicking and scrolling, such as gestures (grabbing, pinching, swiping) and voice commands to interact with the contextual controls and information.
4. **Seamless Transition between 2D and 3D**: Despite these enhanced capabilities for 3D users, the 2D application windows would still function as regular 2D interfaces for users without 3D devices, maintaining a seamless collaborative experience across different device types.

## Potential Benefits and Use Cases

The enhanced AR virtual world concept we propose offers several potential benefits and use cases:

1. **Increased Productivity and Ergonomics**: By providing 3D users with enhanced controls, contextual information, and customizable layouts, we can improve their efficiency and ergonomics when working with 2D applications.
2. **Intuitive Spatial Interactions**: The ability to physically move and interact with 3D portals and windows, as well as the option to place virtual objects on oneself, can lead to more natural and immersive ways of engaging with digital content and applications.
3. **Virtual Identity and Self-Expression**: The virtual identity augmentation system allows users to curate and project their digital personas, enabling new forms of social interaction, status signaling, and even monetization opportunities.
4. **Privacy and Security**: The option to obscure one's real-world appearance through virtual identity augmentation can provide users with greater control over their digital privacy, especially in public spaces.
5. **Collaborative Experiences**: The seamless integration of 2D and 3D interactions within the same virtual environment can enable users with different device capabilities to collaborate on tasks and projects.
6. **Extensibility and Customization**: Providing tools and APIs for developers to integrate their own applications and content into the virtual world can foster a thriving ecosystem of experiences.
7. **Anchored to the Real World**: Tying the virtual world to specific real-world landmarks can create a sense of spatial awareness and grounding, making the experience feel more meaningful and connected to the user's physical environment.
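The landmark-anchoring idea above ultimately depends on converting geodetic coordinates (GPS fixes) into a stable local frame around a landmark. As a rough illustration of that first step, here is a minimal sketch using a flat-earth (equirectangular) approximation that holds over short distances; the class and method names are hypothetical, not from any particular AR framework:

```java
// Sketch: convert GPS fixes into local east/north offsets (meters) around a
// landmark anchor. Uses an equirectangular approximation, which is accurate
// enough within a few kilometers of the anchor.
public class LandmarkAnchor {
    private static final double EARTH_RADIUS_M = 6371000.0; // mean Earth radius

    private final double anchorLatRad;
    private final double anchorLonRad;

    public LandmarkAnchor(double latDeg, double lonDeg) {
        this.anchorLatRad = Math.toRadians(latDeg);
        this.anchorLonRad = Math.toRadians(lonDeg);
    }

    /** Returns {eastMeters, northMeters} of a GPS fix relative to the anchor. */
    public double[] toLocal(double latDeg, double lonDeg) {
        double latRad = Math.toRadians(latDeg);
        double lonRad = Math.toRadians(lonDeg);
        // A degree of longitude shrinks with latitude, so scale by cos(latitude).
        double east = (lonRad - anchorLonRad) * Math.cos(anchorLatRad) * EARTH_RADIUS_M;
        double north = (latRad - anchorLatRad) * EARTH_RADIUS_M;
        return new double[] { east, north };
    }

    public static void main(String[] args) {
        // Great Pyramid of Giza as the anchor (approximate coordinates).
        LandmarkAnchor giza = new LandmarkAnchor(29.9792, 31.1342);
        double[] offset = giza.toLocal(29.9800, 31.1342);
        System.out.printf("east=%.1f m, north=%.1f m%n", offset[0], offset[1]);
    }
}
```

In practice GPS alone drifts by meters, which is why the proposal pairs it with beacons and UWB for refinement, and relies on the AR framework's own spatial anchors to keep virtual content locked once the user is coarsely localized.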
### Robotics Safety Integration

- Real-time visualization of robot operational boundaries
- Dynamic safety zone mapping visible to all platform users
- Automated alerts for boundary violations
- Integration with existing robotics control systems
- Unified space mapping for multi-robot environments

### Environmental Monitoring

- Visualization of invisible environmental factors
  - Air pollution particle mapping
  - CO2 concentration levels
  - Temperature gradients
  - Electromagnetic fields
- Real-time data integration from environmental sensors
- Historical data visualization for trend analysis
- Alert systems for dangerous condition levels

### Construction and Infrastructure

- Real-time 3D blueprint visualization
- Infrastructure mapping
  - Electrical wiring paths
  - Plumbing systems
  - HVAC ducts
  - Network cables
- Safety feature highlighting for drilling and renovation
- Progress tracking and documentation
- Client visualization tools for project understanding
- Augmented safety checks and compliance monitoring

### Inventory and Asset Management

- AI-powered real-time inventory tracking
- Integration with camera-based stock management systems
- 3D spatial mapping of warehouse spaces
- Automated photogrammetry for stock visualization
- Real-time update of virtual inventory models
- Cross-reference with ordering systems
- Predictive analytics for stock management

## Conclusion

By combining the core concepts of an AR virtual world with the added capability of virtual identity augmentation, we believe we can create a compelling platform that addresses the shortcomings of past immersive technology efforts. This vision not only offers enhanced productivity, intuitive interactions, and a thriving ecosystem, but also unlocks new dimensions of social interaction, self-expression, and the blending of physical and virtual realms. By including 2D phones, it can create a shift toward a 3D society and lead to a new 3D app store. We invite you to explore this concept further and consider its potential impact on the future of computing and human-computer interaction.
Together, we can shape a new era of spatial computing that bridges the gap between the physical and digital worlds.
Educational institution needs developer accounts for teaching.

Hello dear Meta Community,

We have the following problem: we are an educational institution and have developed courses on developing VR applications. We now face the problem that we want to send 10 Meta Quests + laptops to the participants of the course (since these are remote courses). The participants have to use the desktop app to use the developer functions, which means they need a developer account. But we can't give the participants a cell phone for two-factor authentication or provide the account with a credit card (I think this is self-explanatory). What options are there? I hope that someone can help me or that this request is taken up by support.

Translated with DeepL.com (free version)

Master Thesis (fake profile problematic)
Hello everyone,

I'm a computer science student at the University of Namur (Belgium). During this academic year, I conducted a systematic literature review (SLR) on the issue of fake profiles (focused on detection systems). I would like to continue on this topic for my master's thesis. I know that at some point I will need to contact the Facebook services that deal with this problem, but so far I have not found any way to contact Facebook. If anyone could tell me how to reach them, it would be greatly appreciated.

Best regards,
Denis

Where to begin?
Hi All,

I want to ask you all a question. I am NOT a developer; however, I want to learn how to build apps for Meta Quest. I am a teacher who is getting into VR (for my class) and I want to teach students how to build/programme for their VR. If I get it right, I will secure funding for more headsets, as I need to show what can be done by building apps for Meta. If I get it wrong, then it's tools down. So as a complete beginner, not just in terms of Meta but also programming, where can I start, and what do I need to learn to build something simple (sounds like a game show: "Build Something Simple") for the Meta Quest I have? I have a Meta Quest 3. Can anyone advise?

Kind regards
N

need advice on online education platform
We have a small team building medical training for doctors in hospitals.

Question: after building the XR scene with Unity/OpenXR, what is the best practice for integrating the XR program with an online course platform? Is openEdx a good online education platform for delivering an XR program? Please help.

What are your future Plans for 360 Videographers
Just curious what your future plans are for 360 video developers like me. I have hundreds of thousands in watch time on your platform, but you keep expecting content for free. Pico pays me for content, YouTube pays me for content, but I am thousands of dollars in the hole with you from ads, the Oculus launch, and other issues. Are you ever going to value us as creators and help us like Pico or YouTube do? Your response would be greatly appreciated, so I know whether I will continue to upload to your platform and what I will tell other 360 developers, or whether I will delete my channel from your platform.

Shauna from VRGetaways
Using VR Headsets to Transport you on Adventures to Beautiful Places. 肖纳历险记
shaunasadventures1@gmail.com
Website: ShaunasAdventures.com or www.vrgetaway.org
https://creator.oculus.com/community/109816967492217/
YouTube Link: https://www.youtube.com/@VRGetaway
Thinking about 'presence' when developing a game
"I feel like I'm in this place, not just looking at this place" Not sure how many of you watched the recent Lex Fridman interview with Mark Zuckerberg, showcasing the Meta CODEC avatars? https://www.youtube.com/watch?v=MVYrJJNdrEg It got me thinking about the sense of presence in VR. It's a topic I posted about on a private discord server recently and thought it might be worth sharing here too. I work in a psychology VR lab where we use the fact that immersion triggers real physiological responses to help patients with mental health problems. When you get it right, you really can trigger involuntary fight or flight responses in a participant, or something like a palpable sense of vertigo (think Richie's Plank Experience). All the while, part of your conscious brain knows that you're still in a simulation and that duality - feeling true to life emotions in a safe virtual environment - can be used to help people try things they might not be able to do in real life. It's a form of exposure therapy that can be incredibly powerful. Giving your players that same sense of immersion is also incredibly powerful and is, for me, what differentiates VR from any other medium. Here's that post about presence: Someone asked me recently what qualities I thought a VR application should have. As a developer, my mind initially went to technical requirements like keeping a solid framerate, but I also think it's important to think about designing with the medium in mind. In other words, to think at a higher level about how best to trully immerse your players in your VR experience. Something well worth thinking about is the sense of presence (immersion in the virtual world). There's a lot of research into this, with one of the most established being by a research scientist called Mel Slater. He talks about immersion (the sense of being there) as being like a house of cards that's built on two principles: the Place Illusion and the Plausibility Illusion. 
In simple terms, Place is whether things look right (it's about the visual representation and graphical fidelity, which also encompasses things like keeping a solid frame rate). Plausibility is whether things act as they should (based on the world I see, does it react and behave in the way I expect it to behave?). That's not to say it has to be realistic; it's more about following its own internal logic. If the VR experience depicts a fantastical world, then it can behave in non-realistic ways. It also needs to be consistent. If you've allowed your player to pick up one object and interact with it, then they will expect to be able to pick up any and all objects that they come into contact with. Being able to pick up one object but not another will immediately break the Plausibility Illusion. The house of cards will collapse and your players will lose that sense of presence, of 'being there'. Mel Slater suggests that the Place Illusion is actually quite resilient (you can get away with low-poly environments/art styles) and when broken it rebuilds itself relatively quickly. The Plausibility Illusion, by comparison, is much more fragile and perhaps more important to immersion. For example, a low-poly/cartoony game with lots of physics interactions (think Job Simulator) provides a very strong sense of presence, whereas it's much harder for a photorealistic VR simulation to hold that in place: given the hyper-realistic visual representation, players expect equally realistic interactions with that world (this includes not just hand interactions but also interactions with NPCs, movement in the space, etc.). Any jankiness will quickly break the illusion. Thinking about the balance between the two is particularly important for game design, UX, and art direction. Slater has also done a lot of research into locomotion and embodiment (that sense of inhabiting someone else's body).
This raises questions about whether you need a full-body avatar, or just arms, or just hands. This is the body illusion. If you have feet and they don't move the way you expect them to move, or the way you know they are moving in the real world (because of proprioception, we're still aware of our real body while in VR), then again the house of cards comes tumbling down. It's the reason so many games have avatars that stop at the waist. But then again... Slater found that we have a surprisingly fluid sense of body ownership. Our brain very quickly accepts an alien body (tentacles for hands), a fatter or thinner body, or a different gender than our own. This really is just a brief oversimplification of Mel Slater's work. For those interested, I'd highly recommend seeking out some of his research papers, and at the very least start thinking about these issues as part of your game and app development.

backwards compatibility
Would love to see future interest in developing backwards compatibility between headsets. I owned a Rift and a Rift S before upgrading to a Quest 2, but none of the Rift games are compatible with the Quest. In order to play those same games I would have to repurchase them, which would be much easier with an exchange program or a way to convert old purchases to the newer version. As technology evolves, almost everything gets outdated.