Forum Discussion
ApooBG
9 months ago · Protege
OVRSpatialAnchors position shifting over time
Hello, I am working on a mixed-reality application that requires precise tracking. This means that the virtual objects should stay at the physical position they were placed at, at all times (1-2mm shif...
mcgeezax4
9 months ago · Protege
I did a similar experiment, putting both real and virtual markers on the floor and walking around my house to see how they drift, and with similarly disappointing results.
Are you using the room scan or MRUK at all in this? In terms of minimizing error over large distances, like walking between markers on opposite ends of the house, I had the best results by scanning all rooms, using the MRUK prefab with World Locking turned on, and then setting each marker's position relative to the nearest scene anchor (like a wall or the floor). Unfortunately, there also seem to be issues with the scene's room anchors drifting as you move between rooms, which ultimately makes this method unviable, not to mention it's a bit jarring how world locking "corrects" all the positions as you switch rooms.
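For anyone trying the anchor-relative approach above: the bookkeeping boils down to expressing each marker's pose in the local frame of the nearest scene anchor, so the marker follows whenever the anchor is corrected. Here is a minimal sketch of that math in plain Python (not the actual MRUK/OVR API; in Unity you would use `Transform.InverseTransformPoint`/`TransformPoint` instead):

```python
def quat_mul(a, b):
    # Hamilton product of two quaternions given as (w, x, y, z).
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def quat_conj(q):
    # Conjugate equals inverse for a unit quaternion.
    w, x, y, z = q
    return (w, -x, -y, -z)

def rotate(q, v):
    # Rotate vector v by unit quaternion q.
    p = quat_mul(quat_mul(q, (0.0,) + tuple(v)), quat_conj(q))
    return p[1:]

def to_anchor_frame(anchor_pos, anchor_rot, world_pos):
    # Express a world-space point in the anchor's local frame
    # (store this once, when the marker is placed).
    delta = tuple(w - a for w, a in zip(world_pos, anchor_pos))
    return rotate(quat_conj(anchor_rot), delta)

def from_anchor_frame(anchor_pos, anchor_rot, local_pos):
    # Map the stored local point back to world space using the
    # anchor's *current* (possibly corrected) pose, every frame.
    r = rotate(anchor_rot, local_pos)
    return tuple(a + x for a, x in zip(anchor_pos, r))
```

When world locking snaps the anchor to a corrected pose, re-running `from_anchor_frame` with the new anchor pose moves the marker by the same correction, which is exactly the behavior (jarring or not) described above.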
Maybe Meta is working on making it more reliable, but I'm not very optimistic. Remember that the Augments feature was supposed to go live shortly after the Quest 3 launch more than a year ago; it's a very simple feature that only relies on consistent world anchors, and yet Meta still hasn't released it, nor have they even mentioned it since sometime last year.
I will say, based on my tests, "only" getting 1-2cm of drift is about the best you can hope for. If you are relying on sub-1cm precision, I don't think this will ever work for your use case. Maybe you can cook something up using the Depth API or Passthrough API to correct it, or build your own anchor system, but that sounds like a long shot.
ApooBG
9 months ago · Protege
I have started researching and experimenting with MRUK, but the lack of clear documentation makes it a bit troublesome. Something I am about to try in the next few days is, if possible, to extract the Depth API room data and make a prefab out of it. Then I could place the content I want in the room based on the Depth API data and make use of the world-locking tool.
What concerns me is whether it's possible to get the Depth API data in the form of a .json file, and whether I can align that prefab with the room. Another concern is that the room might be too big (my testing environment is 30x30), since older versions of the room scan struggled to scan more than 70% of the room.
I am curious if you have any other suggestions. I looked into Dynamic Spawning as well, but my use case requires placing objects in mid-air, not only at the raycast position (which can be worked around). It also requires loading and saving virtual object transform data, and aligning the same virtual content as it was previously placed would require an origin cube, which is made with anchors, which don't work as expected. I have therefore put that solution aside for now.
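The save/load part of the origin-cube idea can be prototyped independently of the anchor problems. A minimal sketch of storing object transforms relative to an origin anchor in a .json file, in Python for illustration (the function names are my own, and it's translation-only; a full version would also store each object's rotation relative to the anchor's orientation):

```python
import json

def save_layout(path, origin_pos, objects):
    # objects: iterable of (name, world_position) pairs. Store each
    # object's offset from the origin anchor rather than its raw world
    # position, so the layout can be re-aligned in a later session.
    data = [{"name": name,
             "offset": [p - o for p, o in zip(pos, origin_pos)]}
            for name, pos in objects]
    with open(path, "w") as f:
        json.dump(data, f)

def load_layout(path, origin_pos):
    # Rebuild world positions from the stored offsets and wherever the
    # origin anchor has been re-localized in the current session.
    with open(path) as f:
        data = json.load(f)
    return [(d["name"],
             tuple(o + off for o, off in zip(origin_pos, d["offset"])))
            for d in data]
```

The catch, as noted above, is that the scheme is only as accurate as the re-localized origin anchor itself: any anchor drift shifts the whole restored layout by the same amount.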