Shared Mode, Browser app access?
I need to allow my Shared Mode headsets to access the Browser app. Support told me this is currently not an option. Please allow this app to be added to the managed apps page on the work.meta.com site > Apps & content page. It should be up to me (the admin) to decide whether that app is secure for my use. I'd also really like to be able to set the starting page of the Browser app to a specific URL in this scenario.

My use case: we're a construction MEP engineering firm, primarily using the headsets with the Autodesk Workshop XR app, but we also need to access Matterport's website to view 3D scans in VR. I would prefer to set up our headsets in Shared Mode so that many different users can simply hop on a shared headset and easily get right into these specific workflows.

Building NE9: A Runtime and App for AI-Driven Interactive 3D and XR Worlds
Hey everyone, I wanted to share what I'm currently building and open it up for discussion. I'm developing NE9, also known as NastyEngine 9. It's a modular, real-time runtime designed to integrate AI systems, 3D environments, and interactive applications into a single live pipeline.

Alongside NE9, I'm building a companion app that interfaces directly with the runtime. The goal is to use it as a control and integration layer where scene logic, agents, and interaction can be composed and updated live instead of being locked into a traditional editor workflow.

The core idea is to treat AI, rendering, networking, and interaction as runtime-orchestrated systems rather than isolated tools. This approach makes it easier to experiment, iterate, and eventually extend into XR and VR environments. This is an active build and the architecture is evolving quickly. I'll be sharing progress, experiments, and lessons learned as things continue to come together.

This screenshot shows where we are right now in the development process. This was our first full session using Meta Quest 3 connected to our desktop via USB, running the Meta desktop app as our development workspace. We were viewing and working directly with our existing tools inside the headset to get a real sense of scale, comfort, and workflow. It was our first serious hands-on development session this way, and getting our feet wet was a lot of fun. Even just working from the desktop inside the headset made it clear that this is a platform we're excited to build for. We're looking forward to transitioning from desktop-based development into deeper Horizon and native XR workflows as NE9 continues to evolve.

If you'd like to connect, feel free to check out my LinkedIn. Thanks for stopping by, and I'm excited to see what we can build together. https://www.linkedin.com/in/daniel-harris-0745b8374/

Deep link from web
I'm building a WebXR experience and I'd like to have a link to a Quest app with the following behaviour:

- If the user has not installed the application, it takes them to the store page.
- If they have installed the application, it opens the application.

Is this / any of this possible?
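I don't know of an official web-to-app deep-link API for Quest, but the usual web pattern is to attempt the app's URI scheme and fall back to the store page if nothing handles it within a timeout. A sketch under assumptions: the `myapp://open` scheme and the store URL format are placeholders, not confirmed Quest behaviour.

```javascript
// Hypothetical app URI scheme and store app ID; replace with real values.
const APP_SCHEME = "myapp://open";

// Build a Meta store page URL for a given app ID (format is an assumption;
// verify against your app's actual store page).
function storeUrl(appId) {
  return `https://www.meta.com/experiences/${appId}/`;
}

// Try to open the app; if the scheme isn't handled within `timeoutMs`,
// assume the app isn't installed and navigate to the store page instead.
function openAppOrStore(appId, timeoutMs = 1500) {
  const timer = setTimeout(() => {
    window.location.href = storeUrl(appId);
  }, timeoutMs);
  // If the app opens, the page is backgrounded and the fallback never fires.
  window.addEventListener("pagehide", () => clearTimeout(timer), { once: true });
  window.location.href = APP_SCHEME;
}
```

Whether the Meta Quest Browser honours custom URI schemes at all is exactly the open question here, so treat this as a pattern to test rather than a known-working recipe.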
Anyone know the expected date for Meta Quest's WebGPU-WebXR layer? I just purchased a Meta Quest 3, to complement my Quest 2, for WebXR development *with* WebGPU (a compute-shader-only voxel/SDF engine), and found that the Meta Quest Browser doesn't support WebGPU-WebXR, a stable feature since Chromium 134. Surprising, since the Quest 3 is a flagship XR device in terms of sales, popularity, and development. Reference check here: https://immersive-web.github.io/webxr-samples/webgpu/

I've web-searched extensively but haven't found a workaround or flag to set, or anything to do other than the suggestion to render with WebGPU and copy the result into a WebGL context (wasting bandwidth/VRAM on copying the XR canvas?). Am I missing anything? Thanks.

webXR on Meta Quest scan QR code?
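To put a rough number on the copy overhead that the WebGPU-into-WebGL workaround implies: copying an RGBA8 eye buffer every frame costs about width × height × 4 bytes per frame. A back-of-the-envelope sketch (the resolution and refresh rate below are illustrative assumptions, not Quest 3's exact render-target defaults):

```javascript
// Rough per-frame and per-second cost of copying an RGBA8 XR buffer
// from a WebGPU context into a WebGL context. Figures are illustrative.
function copyBandwidth(width, height, fps) {
  const bytesPerFrame = width * height * 4; // RGBA8 = 4 bytes per pixel
  return { bytesPerFrame, bytesPerSecond: bytesPerFrame * fps };
}

// Example: a 2064x2208 per-eye buffer, two eyes, at 90 Hz.
const perEye = copyBandwidth(2064, 2208, 90);
const totalPerSecond = perEye.bytesPerSecond * 2;
console.log((totalPerSecond / 1e9).toFixed(2), "GB/s of copy traffic");
```

Even if the exact numbers differ, the point stands: the copy path burns a multi-GB/s slice of memory bandwidth that direct WebGPU-WebXR layer support would avoid.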
I have a WebXR project with passthrough, and it's working perfectly on the Meta Quest 3S with the Meta Quest Browser.

Now I want to scan a QR code (printed on paper, lying on a table in the real world) in order to place digital content on top of it. My browser is on the latest version, 40.2. I've read that WebXR does not yet have access to the camera API. Has there been any progress on this? When can we get camera access?

Is there another solution for QR scanning? I don't need full camera access, only QR code detection. We got QR scanning with Meta Quest working in Unity, since the camera API is available there, but we're really looking for a simple QR scanning solution for WebXR. We work with three.js.

I really hope QR scanning becomes available; it would make AR tours with QR codes much easier. Scanning would only need to run about twice a second, just to place the content on top of a QR code in near-real-time. I really hope to hear back.

Canvas size (IWSDK)
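On the "twice a second" point: whichever detection path eventually becomes available, throttling it to 2 Hz is easy to enforce from the render loop. A minimal sketch, where `detectQRCode` is a hypothetical detection call standing in for whatever API ends up existing:

```javascript
// Wrap `fn` so it runs at most `hz` times per second; extra calls
// (e.g. one per rendered frame) are silently skipped.
function rateLimited(fn, hz) {
  const minIntervalMs = 1000 / hz;
  let last = -Infinity;
  return (nowMs, ...args) => {
    if (nowMs - last < minIntervalMs) return undefined;
    last = nowMs;
    return fn(...args);
  };
}

// In a three.js render loop this might look like (detectQRCode is hypothetical):
// const scanQR = rateLimited(detectQRCode, 2);
// renderer.setAnimationLoop((time) => {
//   scanQR(time);                  // runs at most twice per second
//   renderer.render(scene, camera);
// });
```

Passing the loop's timestamp in explicitly keeps the limiter in sync with the XR frame clock rather than `Date.now()`.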
Is there a way to adjust the canvas size in IWSDK? I'd like to create a story-telling site with Scrollama, but I can't find a way to change the size of the HTML; it's all handled automatically. I tried using window.removeEventListener('resize', () => {}) but couldn't control it.

SDK: 0.2.2
Meta Spatial Editor: v11.0.0.10

Component disappears in Editor
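One side note that may explain part of the problem: `window.removeEventListener('resize', () => {})` can never remove anything, because the arrow function passed to removeEventListener is a different function object from whichever listener was originally added. Removal only works with the same function reference. A minimal demonstration using a plain EventTarget (the same rule applies to window), though note this alone may not unlock IWSDK's automatic sizing:

```javascript
// removeEventListener only removes a listener when given the SAME
// function reference that addEventListener received.
const target = new EventTarget();
let hits = 0;
const onResize = () => { hits++; };

target.addEventListener("resize", onResize);
target.removeEventListener("resize", () => {}); // no-op: different function object
target.dispatchEvent(new Event("resize"));      // handler still fires -> hits = 1

target.removeEventListener("resize", onResize); // works: same reference
target.dispatchEvent(new Event("resize"));      // handler no longer fires
console.log(hits); // 1
```

So if the goal is to detach IWSDK's own resize handling, that handler lives inside the SDK and can't be removed from outside this way regardless.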
I've been learning IWSDK for a couple of weeks. I like it: it's really cool. I finished the tutorial and almost everything works, but there's one thing I'm not clear on: when I try to create a "prop" of Entity or Enum type on a component, Meta Spatial Editor doesn't compile (or at least the component disappears) after pressing the Refresh button. It definitely looks like a bug, because the documentation cites examples with Entities and Enums. I copied and ran them; the code side gives no errors, but the component disappears on the editor side. All other types are visible and their parameters are accessible from the UI. I'd like to make a tutorial explaining how to migrate from Unity to IWSDK, but I'm missing this step.

Here Locomotion is available in Meta Spatial Editor:

```typescript
export const Locomotion = createComponent('Locomotion', {
  test: { type: Types.Int8, default: 1 },
});
```

Here Locomotion is NOT available anymore in Meta Spatial Editor:

```typescript
export const MovementMode = {
  Walk: 'walk',
  Fly: 'fly',
} as const;

export const Locomotion = createComponent('Locomotion', {
  test: { type: Types.Int8, default: 1 },
  mode: { type: Types.Enum, enum: MovementMode, default: MovementMode.Walk }, // <--
});
```

Where am I wrong?

SDK: 0.2.2
Meta Spatial Editor: v11.0.0.10

Many thanks in advance.

#IWSDK #Types.Entity #Types.Enum #metaspatialeditor

IWSDK level loading: GLXF only? Performance concerns with Spatial Editor restrictions
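Until Enum props round-trip through the editor, one possible workaround (an assumption on my part; I haven't verified it against IWSDK internals) is to keep the editor-facing component primitive-only, storing the mode as an Int8 and doing the string-to-code mapping in app code:

```javascript
// Workaround sketch: map the string enum to/from an Int8 code so the
// component only exposes a type the editor is known to display.
const MovementMode = { Walk: "walk", Fly: "fly" };
const MODE_CODES = { walk: 0, fly: 1 };
const CODE_MODES = ["walk", "fly"];

function encodeMode(mode) { return MODE_CODES[mode]; }
function decodeMode(code) { return CODE_MODES[code]; }

// The component declaration then avoids Types.Enum entirely:
// export const Locomotion = createComponent('Locomotion', {
//   test: { type: Types.Int8, default: 1 },
//   mode: { type: Types.Int8, default: encodeMode(MovementMode.Walk) },
// });
```

The downside is that the editor shows a raw number instead of "walk"/"fly", but the component at least stays visible and editable while the Enum issue is open.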
Hi, I'm working with IWSDK and currently loading environments using world.loadLevel with .glxf files exported from Meta Spatial Editor. This works functionally, but I'm running into performance and file size issues. Meta Spatial Editor enforces very restrictive export rules (no Draco, no KTX2/ETC1S, no mesh quantization, etc.), which makes GLXF-based levels significantly heavier than optimized GLTF/GLB assets.

My questions are:

- Is .glxf the only supported format for loading a "level" via world.loadLevel in IWSDK, or is there (or will there be) support for treating a plain GLTF/GLB as a level root?
- From a performance-oriented workflow perspective, is the intended approach to use large GLXF files as full levels, or to skip level loading entirely and assemble environments at runtime (loading optimized GLTF/GLB assets and creating entities in code)?
- Are there any plans or recommendations for authoring or exporting GLXF outside of Meta Spatial Editor, or for relaxing the compression restrictions in the editor?

I'd like to understand the recommended best practice when performance and download size are a priority. Thanks in advance.

Meta Quest Browser Bug: INVALID_ENUM using XRProjectionLayer
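On the runtime-assembly option: a common pattern is to describe the level in a small JSON manifest (optimized GLB URLs plus transforms) and spawn entities from it in code, sidestepping GLXF entirely. A sketch under assumptions: the manifest shape is mine, and the actual entity-creation calls are left out because I don't know IWSDK's exact API:

```javascript
// Hypothetical level manifest: optimized GLB assets with transforms.
const levelManifest = {
  assets: [
    { url: "env/floor.glb", position: [0, 0, 0], scale: 1 },
    { url: "env/machines.glb", position: [2, 0, -3], scale: 1 },
  ],
};

// Normalize the manifest into spawn descriptors; the actual entity
// creation and GLB loading is left to whatever IWSDK exposes.
function planSpawns(manifest) {
  return manifest.assets.map((a) => ({
    url: a.url,
    position: a.position ?? [0, 0, 0],
    scale: a.scale ?? 1,
  }));
}

// At startup you'd fetch the manifest, then for each descriptor load the
// GLB (Draco/KTX2-compressed, since the editor is no longer in the path)
// and attach it to an entity created in code.
console.log(planSpawns(levelManifest).length, "assets to spawn");
```

This keeps the per-asset pipeline (gltfpack, Draco, KTX2) fully under your control, at the cost of re-authoring scene composition outside the editor.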
This issue has already been posted here:

https://github.com/immersive-web/webxr/issues/1418
https://issues.chromium.org/issues/465487204

Steps to reproduce the problem:

1. Launch Meta Quest Browser (tested on v41.0) on a Meta Quest 3.
2. Start a WebXR session that initializes an XRProjectionLayer with { textureType: "texture-array" }.
3. In the render loop, iterate through the views (without using the OVR_multiview2 extension).
4. For each view, attach the subImage.colorTexture to the framebuffer using gl.framebufferTextureLayer.
5. Call gl.clear(gl.COLOR_BUFFER_BIT).
6. Call gl.getError().

I bind the colorTexture from a subimage of a projection layer with textureType: "texture-array" via:

```javascript
gl.framebufferTextureLayer(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, subImage.colorTexture, 0, subImage.imageIndex);
```

I get a GL_INVALID_ENUM (1280) error on the first GL rendering operation (only for the first/left eye) when checking for errors. This does not happen if the subimage texture is of type textureType: "texture". Is this a bug in the Meta Quest Browser Chromium fork?
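For anyone reproducing this, the raw code from gl.getError() can be decoded against the standard GL constants: 1280 is GL_INVALID_ENUM (0x0500). A tiny helper for logging readable names from the repro loop (the constant values are fixed by the OpenGL ES / WebGL specs):

```javascript
// Standard WebGL / OpenGL ES error codes and their names.
const GL_ERRORS = {
  0x0000: "NO_ERROR",
  0x0500: "INVALID_ENUM",                   // 1280 -- the error seen here
  0x0501: "INVALID_VALUE",                  // 1281
  0x0502: "INVALID_OPERATION",              // 1282
  0x0505: "OUT_OF_MEMORY",                  // 1285
  0x0506: "INVALID_FRAMEBUFFER_OPERATION",  // 1286
};

function glErrorName(code) {
  return GL_ERRORS[code] ?? `UNKNOWN(0x${code.toString(16)})`;
}

// In the render loop, after the clear:
// const err = gl.getError();
// if (err !== gl.NO_ERROR) console.warn("GL error:", glErrorName(err));
console.log(glErrorName(1280)); // "INVALID_ENUM"
```

Logging the name per-view also makes it easy to confirm the report's detail that only the first (left) eye triggers the error.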