Shared Mode - storing app data across sessions
We’re working on integrating a new license system for our VR training app and are hoping to get insights on two related issues we’re facing, particularly in the context of Meta for Work (MHMS).

**Accessing MHMS Distributed Files**

We’d like to support large-scale deployments by allowing customers to deploy encrypted license files to headsets via Horizon managed services (there is an option there to deploy files to devices). File distribution itself works fine, but we haven’t found a way to read these files from within the app at runtime, i.e. to obtain the permissions needed to read them. Do you have any guidance on how to access these files from the app?

**Storing License Info Locally**

We also want to store license data locally on the device after activation (while online, in the case where customers do not want to distribute license files), so the app can validate the license even when the device is offline. This is essentially the same encrypted data as the distributed file, just generated online and then downloaded to the device. Is there a supported way to persist such data locally on MHMS-managed devices? We’re aware that local storage is very restricted across sessions, so we’d love to know if there is a way (now or with a future update). A workaround is keeping users in the same session, but that’s not ideal, since users always have the option to end a session. For now we will require MHMS devices to be online, but that’s not a viable long-term solution, since devices will also be used offline; MHMS itself now supports offline use.

If there is a solution (to my first question, and maybe to my second), it could also be used for trainer-led courses, where the trainer sets the training options for a session or even for the organization as a whole (think language and other regional options).
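To make the offline-validation half concrete, independent of where the blob ends up being persisted, the shape of the check is straightforward. Below is a minimal sketch in Python; it is not MHMS-specific, the field names are illustrative assumptions, and the shared-secret MAC stands in for what a production scheme would more likely do with an asymmetric signature from the activation server (a device-side shared secret can be extracted):

```python
import base64
import hashlib
import hmac
import json
from datetime import datetime, timezone

# Illustrative only: a real scheme would embed a public key and verify an
# asymmetric signature rather than ship a shared secret with the app.
LICENSE_MAC_KEY = b"example-shared-secret"

def validate_license(blob: bytes) -> bool:
    """Validate a stored license blob offline: check its MAC, then its expiry."""
    doc = json.loads(blob)
    payload = doc["payload"]              # hypothetical field names
    tag = base64.b64decode(doc["mac"])

    # Constant-time comparison guards against timing attacks on the tag.
    expected = hmac.new(LICENSE_MAC_KEY, payload.encode(), hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        return False  # tampered, or signed with a different key

    license_data = json.loads(payload)
    expiry = datetime.fromisoformat(license_data["expires"])
    return datetime.now(timezone.utc) < expiry

# Building a matching blob, as the online activation server would:
payload = json.dumps({"expires": "2099-01-01T00:00:00+00:00", "seats": 50})
mac = hmac.new(LICENSE_MAC_KEY, payload.encode(), hashlib.sha256).digest()
blob = json.dumps({"payload": payload, "mac": base64.b64encode(mac).decode()}).encode()

assert validate_license(blob)
```

The open question is only the persistence step: on a normally managed device this blob would go to app-private storage, but whether such storage survives across Shared Mode sessions on MHMS-managed devices is exactly what we are asking about.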
Hopefully someone can provide some insights.

Subject: Request for Guidance - Innovation Proposal and Strategic Partnership Request
Dear Meta Community/Support Team,

My name is AS33; I am a Strategic Designer and Independent Developer. I am currently researching innovation modules that may be relevant to several Meta teams, including but not limited to Meta Horizon, Generative AI, LLaMA, and Experimental Interface Research.

I am seeking guidance on the following:
1. What department, contact, or channel is best for submitting innovation proposals or partnership ideas?
2. Is there a dedicated team within Meta (e.g. Horizon Labs, Research, R&D, Co-Design, etc.) that reviews early-stage concept proposals from external independent authors?
3. Are there any internal innovation or consulting programs (e.g. Co-design Program, Meta Open Research, Meta Quest Creators Hub) that are currently accepting new participants or promising collaborations?

I am particularly interested in hybrid models where I can contribute not as a permanent team member, but as an external signal architect, designer, or creative collaborator. My goal is to explore mutually beneficial options that could include:
- Strategic consulting on symbolic systems, neuro-alignment, or immersive signal architectures
- Early testing collaboration with the Meta Horizon or Generative AI teams

If you can forward this to the appropriate team or share the appropriate contact paths or application portals, I would greatly appreciate your help.

With respect and gratitude,
AS33

Exploring ChatGPT Integration for VR UI/UX: Has Anyone Tried It?
Hi everyone,

I'm currently experimenting with ways to integrate conversational AI (like ChatGPT) into VR environments to enhance user onboarding, contextual help, or dynamic NPC interactions. I'm working with Unity and targeting Quest 2/3. My goal isn't to run the model locally, but to connect to external APIs (e.g., OpenAI) via secure HTTPS requests, ideally triggered by user input or voice in immersive menus.

I'd love to hear from other devs:
- Have you tested anything similar in a VR context?
- What are your thoughts on latency and user flow when connecting to external services in VR?
- Any advice on best practices for handling text responses (display, formatting, user focus)?
- Are there any sample projects or SDKs that play well with Meta Quest platforms for this use case?

This is still a prototype phase for me, but I believe natural language interaction could really improve the way users explore VR content, especially for onboarding or educational apps. Looking forward to your insights!

Virtual 3D world anchored to real-world landmarks
## Introduction

In an era where immersive technologies have struggled to gain widespread adoption, we believe there is a compelling opportunity to rethink how users engage with digital content and applications. By anchoring a virtual world to the physical environment and seamlessly integrating 2D and 3D experiences, we could create a platform that offers enhanced productivity, intuitive interactions, and a thriving ecosystem of content and experiences.

We build upon our previous vision for an AR virtual world by introducing an additional key capability: virtual identity augmentation. This feature allows users to curate and project their digital personas within the shared virtual environment, unlocking new dimensions of social interaction, self-expression, and the blending of physical and virtual realms.

## Key Concepts

The core of our proposal revolves around an AR virtual world that is tightly integrated with the physical world, yet maintains its own distinct digital landscape. This environment would be anchored to specific real-world landmarks, such as the Pyramids of Giza, using a combination of GPS, AR frameworks, beacons, and ultra-wideband (UWB) technologies to ensure consistent and precise spatial mapping.

Within this virtual world, users would be able to interact with a variety of 2D and 3D elements, including application icons, virtual objects, and portals to immersive experiences. As we previously described, the key differentiator lies in how these interactions are handled for 2D versus 3D devices:

1. **2D Interactions**: When a user with a 2D device (e.g., smartphone, tablet) interacts with a virtual application icon or object, it would trigger an animated "genie out of a bottle" effect, summoning a 2D window or screen that is locked to a fixed position in the user's view.
2.
**3D Interactions**: For users with 3D devices (e.g., AR glasses, VR headsets), interacting with a virtual application icon or object would also trigger the "genie out of a bottle" effect, but instead of a 2D window, it would summon a 3D portal or window that the user can physically move around and even enter.

## Virtual Identity Augmentation

One of the key new features we are proposing for the AR virtual world is the ability for users to place virtual objects, like hats, accessories, or digital avatars, on themselves. These virtual objects would be anchored to the user's position and movements, creating the illusion of the item being physically present.

The critical distinction is that 2D users (e.g., on smartphones, tablets) would be able to see the virtual objects worn by other users in the shared virtual world, but they would not be able to place virtual objects on themselves. This capability would be reserved for 3D device users, who can leverage the spatial awareness and interaction capabilities required for virtual object placement.

These virtual objects placed on a user would persist across devices and sessions, creating a consistent virtual identity or "avatar" for that user within the AR virtual world. This virtual identity would be visible to all other users, regardless of their device capabilities (2D or 3D). Importantly, the virtual objects used to create this virtual identity could also be leveraged to partially or completely obscure a user's real-world appearance from 2D video, photo, and 3D scanning. This would allow users to control how they are represented and perceived in the blended physical-virtual environment, providing greater privacy and security.

## Enhanced 2D Interfaces for 3D Users

Building on our previous concept, we can further enhance the user experience for 2D applications, particularly for 3D users.
By leveraging the depth and spatial characteristics of the 3D interface blocks, we can unlock new ways for users to interact with and manage their virtual applications and content. Some of the key capabilities include:

1. **Contextual Controls and Information Panels**: The sides of the 3D interface blocks could display shortcut controls, supplementary information panels, and other contextual elements that 3D users can access and interact with as they navigate around the application window.
2. **Dynamic Layouts and Customization**: 3D users would be able to resize, rotate, and reposition the side panels and controls, enabling personalized layouts and ergonomic arrangements tailored to their preferences and workflows.
3. **Multi-Dimensional Interactions**: The 3D interface blocks could support advanced interaction methods beyond basic clicking and scrolling, such as gestures (grabbing, pinching, swiping) and voice commands to interact with the contextual controls and information.
4. **Seamless Transition between 2D and 3D**: Despite these enhanced capabilities for 3D users, the 2D application windows would still function as regular 2D interfaces for users without 3D devices, maintaining a seamless collaborative experience across different device types.

## Potential Benefits and Use Cases

The enhanced AR virtual world concept we propose offers several potential benefits and use cases:

1. **Increased Productivity and Ergonomics**: By providing 3D users with enhanced controls, contextual information, and customizable layouts, we can improve their efficiency and ergonomics when working with 2D applications.
2. **Intuitive Spatial Interactions**: The ability to physically move and interact with 3D portals and windows, as well as the option to place virtual objects on oneself, can lead to more natural and immersive ways of engaging with digital content and applications.
3.
**Virtual Identity and Self-Expression**: The virtual identity augmentation system allows users to curate and project their digital personas, enabling new forms of social interaction, status signaling, and even monetization opportunities.
4. **Privacy and Security**: The option to obscure one's real-world appearance through virtual identity augmentation can provide users with greater control over their digital privacy, especially in public spaces.
5. **Collaborative Experiences**: The seamless integration of 2D and 3D interactions within the same virtual environment can enable users with different device capabilities to collaborate on tasks and projects.
6. **Extensibility and Customization**: Providing tools and APIs for developers to integrate their own applications and content into the virtual world can foster a thriving ecosystem of experiences.
7. **Anchored to the Real World**: Tying the virtual world to specific real-world landmarks can create a sense of spatial awareness and grounding, making the experience feel more meaningful and connected to the user's physical environment.
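As a rough illustration of the landmark-anchoring idea from the Key Concepts section, the first step is mapping a GPS fix into the landmark's local coordinate frame. Here is a minimal Python sketch (the anchor coordinates are an illustrative approximation for the Great Pyramid) using a local-tangent-plane, equirectangular approximation:

```python
import math

EARTH_RADIUS_M = 6_371_000  # mean Earth radius in meters

def gps_to_local_offset(anchor_lat, anchor_lon, lat, lon):
    """Approximate east/north offset in meters of (lat, lon) from an anchor.

    Uses an equirectangular (local tangent plane) approximation, which is
    adequate over the few hundred meters a landmark-anchored scene spans.
    """
    d_lat = math.radians(lat - anchor_lat)
    d_lon = math.radians(lon - anchor_lon)
    north = d_lat * EARTH_RADIUS_M
    east = d_lon * EARTH_RADIUS_M * math.cos(math.radians(anchor_lat))
    return east, north

# Illustrative anchor: Great Pyramid of Giza (approximate coordinates).
GIZA = (29.9792, 31.1342)

# A user 0.001 degrees of latitude north of the anchor lands ~111 m north.
east, north = gps_to_local_offset(*GIZA, 29.9802, 31.1342)
```

GPS alone is of course too coarse for stable anchoring, which is why the proposal pairs it with beacons and UWB to refine the position; the point of the sketch is that once a shared local frame exists at the landmark, every device (2D or 3D) can place the same virtual content at the same physical spot.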
### Robotics Safety Integration
- Real-time visualization of robot operational boundaries
- Dynamic safety zone mapping visible to all platform users
- Automated alerts for boundary violations
- Integration with existing robotics control systems
- Unified space mapping for multi-robot environments

### Environmental Monitoring
- Visualization of invisible environmental factors:
  - Air pollution particle mapping
  - CO2 concentration levels
  - Temperature gradients
  - Electromagnetic fields
- Real-time data integration from environmental sensors
- Historical data visualization for trend analysis
- Alert systems for dangerous condition levels

### Construction and Infrastructure
- Real-time 3D blueprint visualization
- Infrastructure mapping:
  - Electrical wiring paths
  - Plumbing systems
  - HVAC ducts
  - Network cables
- Safety feature highlighting for drilling and renovation
- Progress tracking and documentation
- Client visualization tools for project understanding
- Augmented safety checks and compliance monitoring

### Inventory and Asset Management
- AI-powered real-time inventory tracking
- Integration with camera-based stock management systems
- 3D spatial mapping of warehouse spaces
- Automated photogrammetry for stock visualization
- Real-time updates of virtual inventory models
- Cross-referencing with ordering systems
- Predictive analytics for stock management

## Conclusion

By combining the core concepts of an AR virtual world with the added capability of virtual identity augmentation, we believe we can create a compelling platform that addresses the shortcomings of past immersive technology efforts. This vision not only offers enhanced productivity, intuitive interactions, and a thriving ecosystem, but also unlocks new dimensions of social interaction, self-expression, and the blending of physical and virtual realms. By including 2D phones, it creates a shift toward a 3D society, leading to a new 3D app store. We invite you to explore this concept further and consider its potential impact on the future of computing and human-computer interaction.
Together, we can shape a new era of spatial computing that bridges the gap between the physical and digital worlds.

Urgent: Need Assistance with Disabled Instagram Business Account and Lack of Support Options
Hi Meta Support,

I’m writing to seek urgent assistance regarding my Instagram business account, which was disabled on August 24th, 2024. This account has been an essential part of my business for the past three years, and losing access to it has severely impacted my livelihood.

I received a message from Instagram stating that my account was disabled for allegedly selling or promoting counterfeit goods. This claim is entirely false. I have never engaged in any activity that violates Instagram’s policies, and I take great care to ensure my business operates with integrity. I have already submitted an appeal, but I was informed that Instagram still believes the account was used for prohibited activities. This is incredibly unfair, especially considering the account’s long history of compliance and the fact that I was in the process of obtaining my LLC to further legitimize my business.

To make matters worse, I have found no clear way to contact Instagram support directly to resolve this issue. The help center link provided does not offer any further options for appealing or escalating my case. This lack of support leaves me feeling completely helpless, as this account is the only platform I use for my business, and all of my clients are connected through it.

I urgently request your assistance in forwarding this matter to the appropriate team at Instagram or providing a direct contact who can help resolve this issue. I am more than willing to provide any additional information or documentation needed to prove my compliance with Instagram’s policies.

Thank you for your understanding and attention to this matter. I sincerely hope to resolve this as quickly as possible.

Instagram handle: bpxkicks

International keyboards
How can it be that there is still no support for non-English Bluetooth keyboards?! The Meta Quest could be a nice productivity tool, except this makes it useless outside English-speaking countries. It's such a basic feature that I was surprised it wasn't supported when I bought a keyboard. A little searching shows this has been a reported issue for years! It can't be that hard to add? I hope this gets some attention at some point.

marketplace has no feelings
What would happen if all Facebook users could feel more connected to the website? It could be made different, but not so different from a conventional profile: with editable sections, so that users can design, arrange, and categorize each and every aspect.

What documents can be submitted to pass the organization verification
We are developers from China who want to publish an app on Quest. However, during the Organization Verification process, we followed the Verify Business prompts to upload our business license and the phone-number proof associated with the business, but we were unable to pass the review.

[Feature Request] Integrate Tracked Keyboard with Meta Remote Desktop
I've been experimenting with the Tracked Keyboard feature (Logitech MX Keys Mini) and Workrooms recently and have noticed a few UX/usability issues. The tracked keyboard communicates directly with the Quest, which makes perfect sense if you plan to use it as a direct input device for the Quest. However, in Workrooms, when you bring your PC (or Mac, in my case) into the workroom, the keyboard does not control the PC. This is a major source of friction from a usability/UX perspective.

A possible solution, and hence the feature request, would be to integrate the tracked keyboard with Meta's Remote Desktop app, since the intention in this scenario is to control and input directly into the PC. This way the user sees the keyboard (passthrough), the keystrokes go directly to the PC, and Remote Desktop can pass keyboard events, positioning, etc. to the Quest.

It does raise the issue of two modes for the Tracked Keyboard (one direct to the Quest, the other to Remote Desktop). However, cognitively this is easier for a user to manage than having two separate keyboards on the same desk (one for the Quest, another for the PC), or the disconnect of a Tracked Keyboard that doesn't directly control the applications on the PC brought into Workrooms.

I hope this helps.

rgds
Dave