Which AR glasses to buy for research with LLMs and raw data
Hello there,

I'm writing because, together with a group of PhD and Master's students at my university, we are exploring the development of new mobile or web-based applications that can interface with the Meta Ray-Ban or Aria smart glasses via SDK. Our goal is to test our own vision-language models (VLMs) by accessing the raw data streams (specifically video and audio) from the glasses directly and delivering contextualized responses through the device's built-in speakers using our own LLMs.

We are particularly interested in whether it is possible to develop a mobile app, or even gain access through a web browser, that can:

- Collect and transmit raw sensor data (video/audio)
- Send processed responses back to the glasses
- Use Bluetooth or an Android mobile app (possibly via XRCore or Unity) as the communication bridge

(A rough sketch of the round trip we have in mind is attached at the end of this post.)

If this is feasible, could you kindly advise:

- Which smart glasses model(s) you recommend for this type of development (we want to buy a few glasses to start with)
- What plugins, SDKs, or frameworks would be most suitable

We appreciate your guidance and thank you in advance for your support.

Best regards,
Luis F.
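For concreteness, here is a minimal browser-side sketch of that round trip, assuming a hypothetical `VLM_ENDPOINT` on our own server and assuming the glasses' camera and microphone are exposed as ordinary capture devices; both assumptions are exactly the access we are asking about.

```typescript
// Hypothetical round trip: capture a raw frame, send it with a question to our
// own VLM service, and play the synthesized answer back out loud. The endpoint
// URL, request shape, and response format are placeholders we would define.
const VLM_ENDPOINT = "https://example.edu/vlm/answer"; // hypothetical server

async function askGlasses(question: string): Promise<void> {
  // Request whatever camera/microphone the browser can reach. Whether the
  // glasses themselves appear as a capture device here is the open question.
  const stream = await navigator.mediaDevices.getUserMedia({ video: true, audio: true });

  // Grab a single video frame as a JPEG snapshot.
  const video = document.createElement("video");
  video.srcObject = stream;
  video.muted = true;
  await video.play();
  const canvas = document.createElement("canvas");
  canvas.width = video.videoWidth;
  canvas.height = video.videoHeight;
  canvas.getContext("2d")?.drawImage(video, 0, 0);
  const frame: Blob = await new Promise((resolve) =>
    canvas.toBlob((b) => resolve(b as Blob), "image/jpeg")
  );

  // Send frame + question to the VLM backend and expect synthesized speech back.
  const form = new FormData();
  form.append("frame", frame, "frame.jpg");
  form.append("question", question);
  const response = await fetch(VLM_ENDPOINT, { method: "POST", body: form });
  const speech = await response.blob(); // e.g. audio produced by our own TTS

  // Play the answer through whatever audio output is active (ideally the glasses).
  const audio = new Audio(URL.createObjectURL(speech));
  await audio.play();

  stream.getTracks().forEach((t) => t.stop());
}

askGlasses("What am I looking at right now?").catch(console.error);
```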
Virtual 3D world anchored to real-world landmarks

## Introduction

In an era where immersive technologies have struggled to gain widespread adoption, we believe there is a compelling opportunity to rethink how users engage with digital content and applications. By anchoring a virtual world to the physical environment and seamlessly integrating 2D and 3D experiences, we could create a platform that offers enhanced productivity, intuitive interactions, and a thriving ecosystem of content and experiences.

We build upon our previous vision for an AR virtual world by introducing an additional key capability: virtual identity augmentation. This feature allows users to curate and project their digital personas within the shared virtual environment, unlocking new dimensions of social interaction, self-expression, and the blending of physical and virtual realms.

## Key Concepts

The core of our proposal revolves around an AR virtual world that is tightly integrated with the physical world, yet maintains its own distinct digital landscape. This environment would be anchored to specific real-world landmarks, such as the Pyramids of Giza, using a combination of GPS, AR frameworks, beacons, and ultra-wideband (UWB) technologies to ensure consistent and precise spatial mapping.

Within this virtual world, users would be able to interact with a variety of 2D and 3D elements, including application icons, virtual objects, and portals to immersive experiences. As we previously described, the key differentiator lies in how these interactions are handled for 2D versus 3D devices:

1. **2D Interactions**: When a user with a 2D device (e.g., smartphone, tablet) interacts with a virtual application icon or object, it would trigger an animated "genie out of a bottle" effect, summoning a 2D window or screen that is locked to a fixed position in the user's view.
2. **3D Interactions**: For users with 3D devices (e.g., AR glasses, VR headsets), interacting with a virtual application icon or object would also trigger the "genie out of a bottle" effect, but instead of a 2D window, it would summon a 3D portal or window that the user can physically move around and even enter.

## Virtual Identity Augmentation

One of the key new features we are proposing for the AR virtual world is the ability for users to place virtual objects, like hats, accessories, or digital avatars, on themselves. These virtual objects would be anchored to the user's position and movements, creating the illusion of the item being physically present.

The critical distinction is that 2D users (e.g., on smartphones or tablets) would be able to see the virtual objects worn by other users in the shared virtual world, but they would not be able to place virtual objects on themselves. This capability would be reserved for 3D device users, who can leverage the spatial awareness and interaction capabilities required for virtual object placement.

These virtual objects placed on a user would persist across devices and sessions, creating a consistent virtual identity or "avatar" for that user within the AR virtual world. This virtual identity would be visible to all other users, regardless of their device capabilities (2D or 3D).

Importantly, the virtual objects used to create this virtual identity could also be leveraged to partially or completely obscure a user's real-world appearance from 2D video, photos, and 3D scanning. This would allow users to control how they are represented and perceived in the blended physical-virtual environment, providing greater privacy and security.
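To make the anchoring idea concrete, here is a minimal sketch of how landmark-anchored virtual elements could be gated by a coarse GPS check before finer alignment takes over. All names, coordinates, and thresholds are hypothetical.

```typescript
// Illustrative data model for a landmark-anchored virtual element. A production
// system would refine the coarse GPS gate with AR frameworks, beacons, and UWB
// as described above.
interface GeoAnchor {
  id: string;
  landmark: string;          // e.g. "Pyramids of Giza"
  latitude: number;          // degrees
  longitude: number;         // degrees
  altitudeMeters: number;    // height of the virtual element above ground
  kind: "appIcon" | "object" | "portal";
}

// Great-circle distance between two GPS fixes (haversine formula).
function distanceMeters(latA: number, lonA: number, latB: number, lonB: number): number {
  const R = 6371000; // mean Earth radius in meters
  const toRad = (d: number) => (d * Math.PI) / 180;
  const dLat = toRad(latB - latA);
  const dLon = toRad(lonB - lonA);
  const a =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(toRad(latA)) * Math.cos(toRad(latB)) * Math.sin(dLon / 2) ** 2;
  return 2 * R * Math.asin(Math.sqrt(a));
}

// Coarse GPS gate: only anchors within range are handed to the AR layer,
// which would then perform the fine-grained beacon/UWB alignment.
function visibleAnchors(userLat: number, userLon: number, anchors: GeoAnchor[], rangeMeters = 150): GeoAnchor[] {
  return anchors.filter(
    (a) => distanceMeters(userLat, userLon, a.latitude, a.longitude) <= rangeMeters
  );
}

// Example: a portal anchored near the Great Pyramid.
const anchors: GeoAnchor[] = [
  { id: "giza-portal-1", landmark: "Pyramids of Giza", latitude: 29.9792, longitude: 31.1342, altitudeMeters: 2, kind: "portal" },
];
console.log(visibleAnchors(29.979, 31.134, anchors));
```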
## Enhanced 2D Interfaces for 3D Users

Building on our previous concept, we can further enhance the user experience for 2D applications, particularly for 3D users. By leveraging the depth and spatial characteristics of the 3D interface blocks, we can unlock new ways for users to interact with and manage their virtual applications and content. Some of the key capabilities include:

1. **Contextual Controls and Information Panels**: The sides of the 3D interface blocks could display shortcut controls, supplementary information panels, and other contextual elements that 3D users can access and interact with as they navigate around the application window.
2. **Dynamic Layouts and Customization**: 3D users would be able to resize, rotate, and reposition the side panels and controls, enabling personalized layouts and ergonomic arrangements tailored to their preferences and workflows.
3. **Multi-Dimensional Interactions**: The 3D interface blocks could support advanced interaction methods beyond basic clicking and scrolling, such as gestures (grabbing, pinching, swiping) and voice commands to interact with the contextual controls and information.
4. **Seamless Transition between 2D and 3D**: Despite these enhanced capabilities for 3D users, the 2D application windows would still function as regular 2D interfaces for users without 3D devices, maintaining a seamless collaborative experience across different device types.

(A small data-model sketch of such an interface block follows the benefits list below.)

## Potential Benefits and Use Cases

The enhanced AR virtual world concept we propose offers several potential benefits and use cases:

1. **Increased Productivity and Ergonomics**: By providing 3D users with enhanced controls, contextual information, and customizable layouts, we can improve their efficiency and ergonomics when working with 2D applications.
2. **Intuitive Spatial Interactions**: The ability to physically move and interact with 3D portals and windows, as well as the option to place virtual objects on oneself, can lead to more natural and immersive ways of engaging with digital content and applications.
3. **Virtual Identity and Self-Expression**: The virtual identity augmentation system allows users to curate and project their digital personas, enabling new forms of social interaction, status signaling, and even monetization opportunities.
4. **Privacy and Security**: The option to obscure one's real-world appearance through virtual identity augmentation can provide users with greater control over their digital privacy, especially in public spaces.
5. **Collaborative Experiences**: The seamless integration of 2D and 3D interactions within the same virtual environment can enable users with different device capabilities to collaborate on tasks and projects.
6. **Extensibility and Customization**: Providing tools and APIs for developers to integrate their own applications and content into the virtual world can foster a thriving ecosystem of experiences.
7. **Anchored to the Real World**: Tying the virtual world to specific real-world landmarks can create a sense of spatial awareness and grounding, making the experience feel more meaningful and connected to the user's physical environment.
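As promised above, a minimal sketch of how the same application window could serve both audiences: a block whose side faces carry contextual controls for 3D users, flattened to its front face for 2D clients. All type and field names here are hypothetical.

```typescript
// Rough sketch of a 3D "interface block": the front face carries the ordinary
// 2D application surface, while the side faces can carry contextual controls
// that only 3D users can reach by moving around the block.
type Face = "front" | "left" | "right" | "top" | "bottom" | "back";

interface SidePanel {
  face: Exclude<Face, "front">;
  title: string;
  controls: string[];      // e.g. shortcut actions exposed on that face
}

interface InterfaceBlock {
  appId: string;
  widthMeters: number;
  heightMeters: number;
  depthMeters: number;     // depth is what gives 3D users the extra surfaces
  frontContentUrl: string; // the regular 2D application window
  sidePanels: SidePanel[];
}

// A 2D client flattens the block to just its front face; a 3D client gets the
// full block so the user can walk around it and reach the side panels.
function projectForDevice(block: InterfaceBlock, device: "2d" | "3d") {
  if (device === "2d") {
    return { appId: block.appId, frontContentUrl: block.frontContentUrl };
  }
  return block;
}

const docsBlock: InterfaceBlock = {
  appId: "docs",
  widthMeters: 1.2,
  heightMeters: 0.8,
  depthMeters: 0.1,
  frontContentUrl: "https://example.com/docs-window", // hypothetical
  sidePanels: [{ face: "right", title: "Shortcuts", controls: ["comment", "share", "export"] }],
};
console.log(projectForDevice(docsBlock, "2d"));
```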
### Robotics Safety Integration

- Real-time visualization of robot operational boundaries
- Dynamic safety zone mapping visible to all platform users
- Automated alerts for boundary violations (see the sketch at the end of this post)
- Integration with existing robotics control systems
- Unified space mapping for multi-robot environments

### Environmental Monitoring

- Visualization of invisible environmental factors:
  - Air pollution particle mapping
  - CO2 concentration levels
  - Temperature gradients
  - Electromagnetic fields
- Real-time data integration from environmental sensors
- Historical data visualization for trend analysis
- Alert systems for dangerous condition levels

### Construction and Infrastructure

- Real-time 3D blueprint visualization
- Infrastructure mapping:
  - Electrical wiring paths
  - Plumbing systems
  - HVAC ducts
  - Network cables
- Safety feature highlighting for drilling and renovation
- Progress tracking and documentation
- Client visualization tools for project understanding
- Augmented safety checks and compliance monitoring

### Inventory and Asset Management

- AI-powered real-time inventory tracking
- Integration with camera-based stock management systems
- 3D spatial mapping of warehouse spaces
- Automated photogrammetry for stock visualization
- Real-time update of virtual inventory models
- Cross-referencing with ordering systems
- Predictive analytics for stock management

## Conclusion

By combining the core concepts of an AR virtual world with the added capability of virtual identity augmentation, we believe we can create a compelling platform that addresses the shortcomings of past immersive technology efforts. This vision not only offers enhanced productivity, intuitive interactions, and a thriving ecosystem, but also unlocks new dimensions of social interaction, self-expression, and the blending of physical and virtual realms. By including 2D phones, it creates a shift toward a 3D society and leads to a new 3D app store.

We invite you to explore this concept further and consider its potential impact on the future of computing and human-computer interaction. Together, we can shape a new era of spatial computing that bridges the gap between the physical and digital worlds.
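To ground the robotics safety use case listed above, here is a minimal sketch of a boundary-violation check. The zones, positions, and alert channel are all hypothetical; a real deployment would consume live poses from the robot control system and push alerts to every headset viewing the shared space.

```typescript
// Minimal sketch of "automated alerts for boundary violations": each robot
// advertises a circular operational boundary on the floor plane, and any
// tracked person inside that boundary raises an alert.
interface SafetyZone {
  robotId: string;
  center: { x: number; y: number }; // floor-plane coordinates in meters
  radiusMeters: number;             // dynamic operational boundary
}

interface TrackedEntity {
  id: string;
  kind: "person" | "robot";
  position: { x: number; y: number };
}

function violations(zones: SafetyZone[], entities: TrackedEntity[]): string[] {
  const alerts: string[] = [];
  for (const zone of zones) {
    for (const e of entities) {
      if (e.kind === "robot" && e.id === zone.robotId) continue; // a robot may occupy its own zone
      const dx = e.position.x - zone.center.x;
      const dy = e.position.y - zone.center.y;
      if (Math.hypot(dx, dy) < zone.radiusMeters) {
        alerts.push(`${e.id} is inside the safety zone of ${zone.robotId}`);
      }
    }
  }
  return alerts;
}

// Example: a worker has stepped inside an arm's 2 m operational boundary.
const zones: SafetyZone[] = [{ robotId: "arm-1", center: { x: 0, y: 0 }, radiusMeters: 2 }];
const entities: TrackedEntity[] = [{ id: "worker-7", kind: "person", position: { x: 1.2, y: 0.5 } }];
console.log(violations(zones, entities)); // ["worker-7 is inside the safety zone of arm-1"]
```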
Meta Quest Media Studio - How Do I View Drafts?

I've uploaded a draft video to Meta Quest Media Studio. The documentation seems woefully out of date, as it references the Go:

> Selecting "Save as Draft" will allow only other people in your Oculus Organization to preview and review the content in headset. The draft content will show up under the "Media Studio - My Videos" shelf in the Meta Quest TV application on Go.

I don't see such an option under Meta TV on my account. Other users in our Organization (all Admins) don't see such an option either. The only way I was able to view this draft in my headset was:

1. "View Meta Quest Page"
2. Click "Watch in Device"
3. Click "Open" next to my chosen device
4. Watch it in the headset

Am I missing a setting or option to be able to view drafts in my headset? What about other people in my Organization?
What are your future Plans for 360 Videographers

Just curious what your future plans are for 360 video developers like me. I have hundreds of thousands of watch time on your platform, but you keep expecting content for free. Pico pays me for content, YouTube pays me for content, but I am thousands of dollars in the hole with you from ads, Oculus Launch, and other issues. Are you ever going to value us as creators and help us the way Pico and YouTube do?

Your response would be greatly appreciated, so I know whether I will continue to upload to your platform and what I will tell other 360 developers, or whether I will delete my channel from your platform.

Shauna from VRGetaways
Using VR Headsets to Transport you on Adventures to Beautiful Places. 肖纳历险记
Email: shaunasadventures1@gmail.com
Website: ShaunasAdventures.com or www.vrgetaway.org
Creator page: https://creator.oculus.com/community/109816967492217/
YouTube: https://www.youtube.com/@VRGetaway
VR video player performance question and pixel based rendering

Hey guys,

We are looking to boost the image quality and performance of our DeoVR player, and we are not sure where to start. It would be really appreciated if you could help us work out the most efficient rendering engine in our case. Mail: ivan@deovr.com

We use AVPro, which is integrated with ExoPlayer and other media engines. We are primarily looking into 8K 60 FPS playback with videos like https://deovr.com/tevrud on Oculus Quest 2, Quest Pro, and Windows headsets. We are thinking of pixel-based rendering to get better performance. Our immediate plan is to proceed with:

- Integration of the new Oculus SDK with the new sharpening feature
- A/B testing different image settings: new sharpness shader, saturation, etc.
- Playing with eye texture scale, although this could degrade performance (HS has it optional); the quick arithmetic at the end of this post shows why
- A pure native implementation on the Oculus SDK for testing 180 video, to see if Unity is the bottleneck
- Playing with shaders and other image adjustment settings
- A custom render pipeline in C++ for Unity

We are looking for your help to understand the nature of the situation and greatly boost our rendering engine.
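For reference, the fill-rate side of the eye texture scale experiment is simple back-of-the-envelope arithmetic: pixel count (and thus fragment work) grows with the square of the scale factor. The 1832x1920 per-eye base resolution below is the commonly cited Quest 2 panel value and is an assumption, not a measured figure.

```typescript
// Rough fill-cost estimate for different eye texture scale values.
function perEyePixels(baseWidth: number, baseHeight: number, eyeTextureScale: number): number {
  return Math.round(baseWidth * eyeTextureScale) * Math.round(baseHeight * eyeTextureScale);
}

const base = { width: 1832, height: 1920 }; // assumed Quest 2 per-eye resolution
for (const scale of [0.8, 1.0, 1.2, 1.5]) {
  const pixels = perEyePixels(base.width, base.height, scale);
  const relative = pixels / perEyePixels(base.width, base.height, 1.0);
  console.log(`scale ${scale}: ${pixels.toLocaleString()} px/eye (${relative.toFixed(2)}x fill cost)`);
}
```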
Allow predefined HTML attribute in <video> tag to define video as 2D / 3D and 180VR / 360VR

I have a small website with VR videos that are exclusively for Oculus devices. The video clips are embedded with the video tag.

HTML code:

```html
<video controls>
  <source src="video.mp4" type="video/mp4">
</video>
```

Every time, the user has to manually select the following parameters: 3D Side-by-Side, 180° VR, or 360° VR. It would be awesome if Meta could introduce a custom HTML attribute for the video tag to preselect them, e.g.:

HTML code:

```html
<video controls>
  <source vrangle="180" vrmode="3DSide" src="video.mp4" type="video/mp4">
</video>
```

Or with better naming 🙂

Thank you.
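To illustrate the proposal, here is a sketch of how a player script could honor such (currently non-standard) `vrangle` / `vrmode` attributes. The attribute names come from the post above; the `ProjectionConfig` shape and `applyProjection` hook are hypothetical stand-ins for whatever the browser's VR player would actually expose.

```typescript
// Read the proposed attributes off the <source> element and map them to a
// projection configuration for an immersive video player.
interface ProjectionConfig {
  stereo: "mono" | "side-by-side" | "top-bottom";
  fieldOfViewDegrees: 180 | 360;
}

function readProjection(source: HTMLSourceElement): ProjectionConfig {
  const angle = source.getAttribute("vrangle") === "360" ? 360 : 180;
  const mode = source.getAttribute("vrmode") ?? "";
  const stereo =
    mode === "3DSide" ? "side-by-side" :
    mode === "3DTopBottom" ? "top-bottom" : "mono";
  return { stereo, fieldOfViewDegrees: angle };
}

// Hypothetical hook: hand the config to whatever immersive player is in use.
function applyProjection(video: HTMLVideoElement, config: ProjectionConfig): void {
  console.log(`Playing ${video.currentSrc || "video"} as ${config.stereo}, ${config.fieldOfViewDegrees}°`);
}

document.querySelectorAll<HTMLVideoElement>("video").forEach((video) => {
  const source = video.querySelector<HTMLSourceElement>("source[vrangle], source[vrmode]");
  if (source) applyProjection(video, readProjection(source));
});
```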
Spaceship Bridge Simulator - Custom Workrooms

I am a school teacher who uses a spaceship bridge sim, and I am looking to take it into VR, but I need to find a developer who can design a Workrooms room that looks like a spaceship bridge. I am not sure if Meta allows custom Workrooms, but if so, and you have the skills to create it, let me know and we can discuss it further.

Thank you,
Todd
Video Recording Tools

It would be great to have a few more video recording tools. The most basic tool I'm currently looking for is safe titles: a digital box that appears when I begin to record and shows me what's in frame and whether my headset is level or not. There are a ton of options I wish I had in terms of various aspect ratios, frame rates, lens options, etc., but maybe we could start with safe titles 😃
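The geometry behind a "safe titles" box is simple, as the sketch below shows. The 90%/80% action-safe and title-safe fractions are conventional broadcast values, and the 1920x1080 capture size is an assumption, not anything Meta currently ships; drawing the box in-headset would depend on whatever overlay API existed.

```typescript
// Compute centered action-safe and title-safe rectangles for a capture frame.
interface Rect { x: number; y: number; width: number; height: number; }

function safeArea(frameWidth: number, frameHeight: number, fraction: number): Rect {
  const width = Math.round(frameWidth * fraction);
  const height = Math.round(frameHeight * fraction);
  return {
    x: Math.round((frameWidth - width) / 2),
    y: Math.round((frameHeight - height) / 2),
    width,
    height,
  };
}

// Example for an assumed 1920x1080 recording resolution.
const actionSafe = safeArea(1920, 1080, 0.9); // { x: 96, y: 54, width: 1728, height: 972 }
const titleSafe = safeArea(1920, 1080, 0.8);  // { x: 192, y: 108, width: 1536, height: 864 }
console.log(actionSafe, titleSafe);
```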