04-24-2025 05:15 AM
Hello, I am working on a mixed-reality application which requires precise tracking. This means that virtual objects should stay at the physical positions they were placed at, at all times (1-2 mm of shifting is fine, but not much more, due to safety reasons). I have been working with anchors for quite a while and still can't eliminate the shifting. Each of the lines you will see in the videos represents 2.5 physical meters: Meta advises placing any virtual object within 3 meters of its anchor, so this spacing ensures there is an anchor every 2.5 m. I have 4 videos:
1) https://drive.google.com/file/d/1-hfXLVSe0Fv3mARyZID42A0BEx2Vi9ri/view?usp=sharing This video is at my university. I place 3 anchors and walk around the big room. When I came back, the first two anchors had shifted a bit, but the third one, which I placed next to a window, had shifted by a few cm.
2) https://drive.google.com/file/d/1UeSeMKn7ShEYxzzXYfDf2OHc7yLpoKh8/view?usp=sharing This video is again at the university, and I use the flashlight on my phone to light up the physical marker, hoping the cameras will recognize it more easily and place the anchor at the correct position. Again, shifting...
3) https://drive.google.com/file/d/1Q4cbiKRU-Kop69jeYbAb-P-BvLfrwyb1/view?usp=sharing This video is in the university's basement. I place a few anchors and create 2 depth objects. I attach each depth object to the closest anchor, which is within 2.5 m. One of the anchors shifted a lot more than the others, which caused its depth object to shift as well.
4) https://drive.google.com/file/d/18AJZ1tTE8d_WtaqpGqMjAtl7wp1S9CxS/view?usp=sharing The last video is in my own room. I place 3 anchors and create 1 depth object, and neither the anchors nor the depth object changed position over time or after reloading.
My question is: what should I do next? What should I try out? I don't understand why it works in my room, half works at the university, and doesn't work at all in the basement. Looking for any advice at this point...
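For context, my nearest-anchor attachment looks roughly like the sketch below (simplified; the component and field names are placeholders rather than my exact project code, and the anchor transforms are assumed to come from the spatial anchors placed in the scene):

```csharp
using UnityEngine;

// Simplified sketch: parent a virtual object to the nearest anchor transform
// and warn if it ends up outside Meta's recommended 3 m radius.
// "anchorTransforms" is a placeholder for however the anchors are tracked
// in the scene (e.g. the transforms of the placed spatial anchors).
public class NearestAnchorAttacher : MonoBehaviour
{
    [SerializeField] private Transform[] anchorTransforms;
    [SerializeField] private float maxAnchorDistance = 3.0f;

    public void Attach(Transform virtualObject)
    {
        Transform nearest = null;
        float nearestDistance = float.MaxValue;

        foreach (var anchor in anchorTransforms)
        {
            float distance = Vector3.Distance(virtualObject.position, anchor.position);
            if (distance < nearestDistance)
            {
                nearestDistance = distance;
                nearest = anchor;
            }
        }

        if (nearest == null)
        {
            Debug.LogWarning("No anchors available to attach to.");
            return;
        }

        if (nearestDistance > maxAnchorDistance)
        {
            Debug.LogWarning($"Nearest anchor is {nearestDistance:F2} m away, beyond the recommended {maxAnchorDistance} m.");
        }

        // Parenting keeps the object's current world pose while tying it to the
        // anchor, so later anchor corrections propagate to the object automatically.
        virtualObject.SetParent(nearest, worldPositionStays: true);
    }
}
```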
04-26-2025 10:43 AM
I did a similar experiment, putting both real and virtual markers on the floor and walking around my house to see how they drift, and with similarly disappointing results.
Are you using the room scan or MRUK at all in this? In terms of minimizing error over large distances, like walking between markers on opposite ends of the house, I had the best results by scanning all rooms, using the MRUK prefab with World Locking turned on, and then setting each marker's position relative to the nearest scene anchor (like a wall or the floor). Unfortunately there also seem to be issues with the scene's room anchors drifting as you move between rooms, which ultimately makes this method unviable, not to mention it's a bit jarring how world locking "corrects" all the positions as you switch rooms.
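To illustrate what I mean by "relative to the nearest scene anchor", the sketch below stores a marker's pose in the local space of the closest wall/floor anchor transform and rebuilds it from there, so any world-locking correction applied to that anchor carries the marker along. The sceneAnchors array is a stand-in for however you pull the anchors out of MRUK (I'm not reproducing the exact MRUK calls here), so treat it as a sketch rather than drop-in code:

```csharp
using UnityEngine;

// Sketch: keep a marker's pose expressed in the local space of the nearest
// scene anchor (wall, floor, ...), then rebuild its world pose from that
// anchor. "sceneAnchors" is a placeholder; with MRUK you would fill it from
// the scanned room's anchor transforms.
public class SceneAnchorRelativeMarker : MonoBehaviour
{
    [SerializeField] private Transform[] sceneAnchors;

    private Transform referenceAnchor;
    private Vector3 localPosition;
    private Quaternion localRotation;

    // Capture the marker's pose relative to the closest scene anchor.
    public void Bind()
    {
        referenceAnchor = FindClosestAnchor(transform.position);
        if (referenceAnchor == null) return;

        localPosition = referenceAnchor.InverseTransformPoint(transform.position);
        localRotation = Quaternion.Inverse(referenceAnchor.rotation) * transform.rotation;
    }

    // Re-apply the stored relative pose; call this after the anchors update
    // (e.g. after a world-locking correction) so the marker follows them.
    public void Reapply()
    {
        if (referenceAnchor == null) return;

        transform.position = referenceAnchor.TransformPoint(localPosition);
        transform.rotation = referenceAnchor.rotation * localRotation;
    }

    private Transform FindClosestAnchor(Vector3 position)
    {
        Transform closest = null;
        float best = float.MaxValue;
        foreach (var anchor in sceneAnchors)
        {
            float distance = Vector3.Distance(position, anchor.position);
            if (distance < best) { best = distance; closest = anchor; }
        }
        return closest;
    }
}
```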
Maybe Meta is working on making it more reliable, but I'm not very optimistic. Remember, the Augments feature was supposed to go live shortly after the Q3 launch more than a year ago. It's a very simple feature that only relies on consistent world anchors to work, and yet Meta still hasn't released it, nor have they even mentioned it since sometime last year.
I will say, based on my tests, "only" getting 1-2 cm of drift is about the best you can hope for. If you are relying on sub-1 cm precision, I don't think this will ever work for your use case. Maybe you can cook something up using the Depth API or Passthrough API to correct it, or make your own anchor system, but that sounds like a long shot.
04-26-2025 11:21 AM
I have started researching and experimenting with MRUK, but the lack of clear documentation makes it a bit troublesome. Something I am going to try in the next few days is, if possible, to extract the Depth API room data and make a prefab out of it. Then I could place the content I want to have in the room based on that data and make use of the world locking tool.
The things that concern me are whether it's possible to get the Depth API data as a .json file and whether I can align that prefab with the room. Another concern is that the room might be too big (my testing environment is 30x30), and older versions of the room scan were struggling to scan more than 70% of the room.
I am curious if you have any other suggestions. I looked into Dynamic Spawning as well, but my use case requires placing objects in mid-air, not only at the raycast position (which can be worked around). It also requires saving and loading virtual object transform data, and aligning the same virtual content the way it was previously would require an origin cube, which is built with anchors, which don't work as expected. I have therefore put that solution aside for now.
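To make the saving/loading part more concrete, something along the lines of the sketch below is what I have in mind: store each object's pose relative to a single origin anchor, write it out with Unity's JsonUtility, and rebuild the poses from wherever that origin anchor ends up after the next load. The names are illustrative, and it obviously still stands or falls with how reliably the origin anchor itself relocalizes:

```csharp
using System.Collections.Generic;
using UnityEngine;

// Sketch: persist object poses relative to a single origin anchor as JSON.
// On load, the poses are rebuilt from wherever the origin anchor ends up,
// so accuracy depends entirely on that anchor relocalizing correctly.
[System.Serializable]
public class RelativePose
{
    public string id;
    public Vector3 localPosition;
    public Quaternion localRotation;
}

[System.Serializable]
public class SceneLayout
{
    public List<RelativePose> poses = new List<RelativePose>();
}

public static class LayoutSerializer
{
    // Express each object's pose in the origin anchor's local space and dump to JSON.
    public static string Save(Transform origin, IEnumerable<Transform> objects)
    {
        var layout = new SceneLayout();
        foreach (var obj in objects)
        {
            layout.poses.Add(new RelativePose
            {
                id = obj.name,
                localPosition = origin.InverseTransformPoint(obj.position),
                localRotation = Quaternion.Inverse(origin.rotation) * obj.rotation
            });
        }
        return JsonUtility.ToJson(layout, true); // pretty-printed JSON
    }

    // Rebuild world poses from the (possibly re-localized) origin anchor.
    public static void Load(string json, Transform origin, IDictionary<string, Transform> objectsById)
    {
        var layout = JsonUtility.FromJson<SceneLayout>(json);
        foreach (var pose in layout.poses)
        {
            if (!objectsById.TryGetValue(pose.id, out var obj)) continue;
            obj.position = origin.TransformPoint(pose.localPosition);
            obj.rotation = origin.rotation * pose.localRotation;
        }
    }
}
```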