Forum Discussion

KovalevBoris
Explorer
2 months ago

Binding objects in reality using spatial anchors

Hi, I'm new to development, so my problem may be a very basic one. Still, thanks in advance for any recommendations!
 
I'm trying to create a very simple AR app for Meta Quest 3, following the official instructions on Meta's website. However, the result so far leaves much to be desired.
 
My goal is to anchor an object, instantiated from a prefab, onto a table in the real room. I also plan to add UI buttons to the object to enable interactions, but that comes later.
 
I'm using Unity version 6000.1.6f1. I've installed the Meta XR All-in-One SDK (v77) and decided to start with the provided Building Blocks. I also installed the OpenXR plugin, as recommended.
 
In Player Settings, under Active Input Handling, I selected Input System Package.
 
In XR Plug-in Management, I selected OpenXR with the Meta XR feature group.
Under the OpenXR settings, in Enabled Interaction Profiles, I added: Oculus Touch Controller Profile, Meta Quest Touch Pro Controller Profile, and Meta Quest Touch Plus Controller Profile.
 
In OpenXR Feature Groups, I enabled:

- Meta XR Feature
- Meta XR Foveation
- Meta Subsampled Layout
- Meta Quest Anchors
- Boundary Visibility
- Bounding Boxes
- Camera (Passthrough)
- Display Utilities
- Meshing
- Occlusion
- Planes
- Raycast
- Session
 
In the scene, I added the following Building Blocks: Camera Rig, Passthrough, MR Utility Kit, Find Spawn Positions, Controller Tracking Left, and Controller Buttons Mapper.
 
I added my prefab (currently without UI buttons) and connected it to Find Spawn Positions, where I also configured the surface labels (such as “table”) and the other placement settings.
 
When I run the app, the room scan loads and the object appears as if it’s placed on a table — but the position is different every time, and sometimes the object ends up on the floor instead of the table.
 
I also tried writing a simple script: on pressing the B button on the controller, a ray should be cast and an anchor placed where the ray hits a surface, using my prefab as the anchored object. But nothing happened when I ran it, so I set the idea aside and looked for something ready-made in the Building Blocks. The sketch below shows roughly what I attempted.
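For reference, this is roughly what I was attempting (a minimal, untested sketch: `rayOrigin` and `prefab` are placeholder fields I would assign in the Inspector, and I am assuming the MRUK room raycast API here):

```csharp
using Meta.XR.MRUtilityKit;
using UnityEngine;

// Rough sketch of the placement script (untested). rayOrigin should be
// the RightControllerAnchor of the OVRCameraRig, assigned in the Inspector.
public class AnchorPlacer : MonoBehaviour
{
    [SerializeField] private Transform rayOrigin; // e.g. RightControllerAnchor
    [SerializeField] private GameObject prefab;   // object to place

    private void Update()
    {
        // Button.Two is B on the right Touch controller.
        if (!OVRInput.GetDown(OVRInput.Button.Two, OVRInput.Controller.RTouch))
            return;

        // MRUK.Instance is null until the MR Utility Kit has initialized.
        var room = MRUK.Instance != null ? MRUK.Instance.GetCurrentRoom() : null;
        if (room == null)
            return; // room scan not loaded yet

        // Cast against the scanned room geometry.
        var ray = new Ray(rayOrigin.position, rayOrigin.forward);
        if (room.Raycast(ray, 10f, out RaycastHit hit))
        {
            // Orient the object to the hit surface and pin it in place
            // by adding an OVRSpatialAnchor component.
            var placed = Instantiate(prefab, hit.point,
                Quaternion.LookRotation(hit.normal));
            placed.AddComponent<OVRSpatialAnchor>();
        }
    }
}
```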
 
Eventually, I found the Instant Content Placement tool. I added all three recommended objects to the scene: an Environment Raycast Manager, a Cube, and a Raycast Visualizer.
 
However, when I launched the scene, the controller-based placement didn’t work. The cube simply appeared at the player’s origin point and stayed on the floor, without any movement or interactivity.
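For what it's worth, this is the behavior I expected from the tool, sketched by hand (untested, and assuming the Environment Raycast Manager API; the fields are placeholders I would assign in the Inspector):

```csharp
using Meta.XR;
using UnityEngine;

// Sketch of the placement I expected the tool to perform (untested):
// raycast from the controller against the real environment and snap
// the cube to the hit point.
public class InstantPlacer : MonoBehaviour
{
    [SerializeField] private EnvironmentRaycastManager raycastManager;
    [SerializeField] private Transform rayOrigin; // e.g. RightControllerAnchor
    [SerializeField] private Transform cube;      // the object to move

    private void Update()
    {
        // Cast against the physical environment each frame and move the
        // cube to the hit point, oriented to the surface normal.
        var ray = new Ray(rayOrigin.position, rayOrigin.forward);
        if (raycastManager.Raycast(ray, out EnvironmentRaycastHit hit))
        {
            cube.SetPositionAndRotation(hit.point,
                Quaternion.LookRotation(hit.normal));
        }
    }
}
```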
 
Of course, I was also hoping to make the placed object stick to real-world surfaces and persist across sessions by saving its anchored position in the room scan, though I understand that this may still be far off. My rough understanding of that flow is sketched below.
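From what I have read, persistence would look something like this (an untested sketch of the OVRSpatialAnchor save/load flow; storing the UUID in PlayerPrefs is just my placeholder, a real app might store it differently):

```csharp
using System;
using System.Collections.Generic;
using UnityEngine;

// Sketch of anchor persistence (untested): save the anchor, remember its
// UUID, and on the next session load it, localize it, and bind it to a
// freshly instantiated object.
public class AnchorPersistence : MonoBehaviour
{
    [SerializeField] private GameObject prefab; // object to restore

    public async void Save(OVRSpatialAnchor anchor)
    {
        var result = await anchor.SaveAnchorAsync();
        if (result.Success)
        {
            // The app must store the UUID itself; PlayerPrefs is the
            // simplest placeholder for that.
            PlayerPrefs.SetString("anchorUuid", anchor.Uuid.ToString());
        }
    }

    public async void Load()
    {
        if (!PlayerPrefs.HasKey("anchorUuid"))
            return;

        var uuid = Guid.Parse(PlayerPrefs.GetString("anchorUuid"));
        var unbound = new List<OVRSpatialAnchor.UnboundAnchor>();
        var result = await OVRSpatialAnchor.LoadUnboundAnchorsAsync(
            new[] { uuid }, unbound);
        if (!result.Success)
            return;

        foreach (var u in unbound)
        {
            // Localization resolves the anchor's pose in the current space.
            if (!await u.LocalizeAsync())
                continue;
            var pose = u.Pose;
            var go = Instantiate(prefab, pose.position, pose.rotation);
            u.BindTo(go.AddComponent<OVRSpatialAnchor>());
        }
    }
}
```

If I understand correctly, the restored anchor should reattach to the same physical spot once the headset recognizes the space again.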
 
Could you please suggest a clear algorithm or sequence of steps for achieving this?
 
Placing and saving an object in a room seemed like it should be a simple task. But after spending a whole week trying to understand the Building Blocks and how everything connects, all I could manage was to place the object, and even that is inconsistent, with its position shifting slightly every time.

4 Replies

  • pmcgvr
    Meta Employee

    Your question seems to be related to Unity and not to Meta Spatial SDK. Please ask your question over in the Unity forums (https://communityforums.atmeta.com/t5/Unity-Development/bd-p/dev-unity), thanks!

     

    For a bit more info on what Spatial SDK is (https://developers.meta.com/horizon/documentation/spatial-sdk/spatial-sdk-explainer):
    "Spatial SDK is a new way to build immersive apps for Meta Horizon OS. Spatial SDK lets you combine the rich ecosystem of Android development and the unique capabilities of Meta Quest via accessible APIs. It is Kotlin based, allowing you to use the mobile development languages, tools, and libraries you’re already familiar with."

    • KovalevBoris
      Explorer

      Thank you!

      But it seems to me that the main problem I encountered was connecting the controllers to the Meta headset with the Meta Spatial SDK, and configuring their input. I still have not found any specific, correct information on using the Building Blocks; everywhere they are described vaguely and evasively, in terms of how this or that block simplifies development for beginners.

      • RiverExplorer
        Member

        Building blocks: Yep. Without docs, it is what we call a smoke and mirrors product. Looks pretty, makes you dream, and does nothing.

         

    • KovalevBoris
      Explorer

      In the blog post linked below, Meta describes the possibilities for placing objects in physical reality. Why not give a simple description of how to use the blocks to implement this? What I actually needed is a short manual on using the current Building Blocks, with Unity 6 and the Meta XR All-in-One SDK v77, to create one object placed on a real surface. Is it possible to get this? One more thing: it should work with the controllers.

      Link: Link to blog