How can I recognize a dynamic gesture sequence (e.g. a wave) rather than an individual static pose?
Hello everyone, I'm wondering how to record a complete gesture sequence for the Quest 3 to recognize, for example detecting when I stretch out my palm and wave it back and forth, rather than only recognizing a palm held still in front of me. Could anyone help me with this? Thank you so much.
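One common approach (not Quest-specific) is to sample the tracked palm position every frame and classify the gesture from the recent motion history instead of from a single frame's pose. A wave, for instance, shows up as repeated reversals of horizontal direction within a short time window. Below is a minimal, engine-agnostic sketch of that idea in Python; in a real Unity/Meta project you would feed it palm positions sampled from the hand-tracking API each frame. The function name, thresholds, and parameters are all hypothetical choices for illustration, not part of any SDK.

```python
from typing import Sequence

def detect_wave(x_positions: Sequence[float],
                min_reversals: int = 3,
                min_swing: float = 0.05) -> bool:
    """Return True if the sampled horizontal palm positions look like a wave.

    x_positions   -- palm x-coordinates (metres) sampled at a fixed frame rate
    min_reversals -- how many direction changes count as a deliberate wave
    min_swing     -- minimum travel (metres) per swing, to filter out jitter
    """
    reversals = 0
    direction = 0                 # +1 moving right, -1 moving left, 0 unknown
    swing_start = x_positions[0] if x_positions else 0.0

    for prev, cur in zip(x_positions, x_positions[1:]):
        step = cur - prev
        if abs(step) < 1e-4:      # ignore tiny sensor noise between frames
            continue
        new_dir = 1 if step > 0 else -1
        if direction == 0:
            direction = new_dir
        elif new_dir != direction:
            # Direction flipped: count it only if the swing was large enough.
            if abs(prev - swing_start) >= min_swing:
                reversals += 1
                swing_start = prev
            direction = new_dir

    return reversals >= min_reversals
```

In practice you would keep a rolling buffer of the last second or so of palm positions (projected onto the head's right axis so "left/right" is relative to the user), call a check like this each frame, and gate it on a static pose check (palm open, facing outward) so that only an open-hand oscillation counts as a wave. For more complex gesture sequences, the same buffered-samples idea generalizes to template matching (e.g. dynamic time warping) or a small trained classifier over the recorded joint trajectories.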
To be honest, the Quest 3's gesture interaction has left me a bit frustrated. I have to rely on my thumb and index finger for extended periods to perform operations: making a fist and extending my index finger to click, or pinching the objects or interfaces I want between my thumb and index finger. It was fine at first, but prolonged use puts significant strain on my fingers, and it isn't smooth enough to blend naturally into everyday use.
So I want to experiment. I hope the Quest 3's gesture interaction can become as smooth as a well-built web front end, integrating seamlessly into daily life the way AR concept videos show, rather than just floating a few virtual screens in front of you. At the very least, it shouldn't keep straining my index finger; my fingers are really sore.
If you share the same thoughts or want to communicate with me, feel free to send an email to luoyiheng2005@outlook.com. I look forward to your emails.