I have been using the Quest Pro for a while. Although it doesn't have a
depth sensor like LiDAR, it does have spatial understanding of its
environment. For example, if you create a roomscale boundary, you can see
objects being detected with red dots and blue lines. I...
Thanks a lot @JeffNik for your detailed answer. I was wondering, have you
worked with the Quest Depth API? Since the Depth API is used to perform
dynamic occlusion (you can look here), maybe we can detect a surface and
get its depth map?
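To illustrate the idea of detecting a surface from a depth map, here is a minimal, hypothetical sketch in plain NumPy. It does not use Meta's Depth API at all; it just assumes you already have a per-pixel depth image (in meters) and known camera intrinsics (`fx`, `fy`, `cx`, `cy` are placeholder values), back-projects the pixels to 3D points, and fits a plane to them:

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth map (meters) to 3D camera-space points."""
    h, w = depth.shape
    us, vs = np.meshgrid(np.arange(w), np.arange(h))
    zs = depth
    xs = (us - cx) * zs / fx
    ys = (vs - cy) * zs / fy
    return np.stack([xs, ys, zs], axis=-1).reshape(-1, 3)

def fit_plane(points):
    """Least-squares plane fit: returns a unit normal and a point on the plane."""
    centroid = points.mean(axis=0)
    # SVD of the centered points: the right singular vector with the
    # smallest singular value is the direction of least variance,
    # i.e. the plane normal.
    _, _, vt = np.linalg.svd(points - centroid)
    return vt[-1], centroid

# Synthetic flat depth map 2 m in front of the camera (stand-in for
# real Depth API output).
depth = np.full((4, 4), 2.0)
pts = depth_to_points(depth, fx=100.0, fy=100.0, cx=2.0, cy=2.0)
normal, origin = fit_plane(pts)
# For a constant-depth map the fitted normal is the camera z-axis.
```

On a real headset you would feed in the actual depth frame instead of the synthetic one; a robust version would also need segmentation or RANSAC to separate multiple surfaces rather than fitting one plane to everything.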
@JeffNik, one more question. The room data you get using MRUK and the
EffectMesh prefab is static data, right? I mean, it shouldn't change
based on the headset's position or direction. Technically that makes
sense, because you are accessing room-scan data, ...
But I can only access the dimensions and bounding-box coordinates of the
room anchors (walls, desk, etc.) that I defined at space-setup time
using the MRUK prefab and its corresponding scripts, not even the point
cloud data from the space setup. Or by "point cloud...
@JeffNik Do you mean the point cloud of the surrounding environment, or
the point cloud of only the room objects the user defined while doing
the space setup?