Francisco Rojas - Launch Pad "PHOBOS: Phobia VR Therapy" - Week 6

This week I am happy to have solved two challenges that make PHOBOS a much better experience.

Challenge One - Easier interaction with objects for PHOBOS newbies using the gamepad controller

Over the Independence Day holiday I had my visiting relatives try out PHOBOS with the Oculus Rift, and I realized that people unfamiliar with the app do not know what to do in the scenes, especially how to use the Xbox controller to interact and move around the environment. This happens even though messages in the HMD tell them exactly which button to press for each action. They clearly do not know their way around a gamepad (which button is where), which is understandable because they cannot see the Xbox controller itself in VR with its button labels.

Now, if this is how my relatives interact with PHOBOS on first encounter, imagine all the patients who will be using it. I do not expect them to be gamers who know how to use a gamepad, so eventually, when the Oculus Touch controllers come out, I am going to add more intuitive controls to PHOBOS with better affordances, built on interactions people already understand from the real world.

In the meantime, we are stuck with just the gamepad controller, so I've changed some interactions to make PHOBOS easier to use. In most scenes I relied entirely on trigger boxes that detected the user's capsule collider, so interaction only worked in close proximity to the interactable item, such as a door. Some triggers used button A for interaction, and others used button B.
The problem was that my relatives did not move close enough to the door (into the trigger zone) to open it, nor were they sure which button to press on the controller, even though the UI box inside the HMD said so explicitly.
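
To make the old setup concrete, a trigger-based door in Unity looks roughly like the sketch below (the class, tag, and animator parameter names are my own illustration, not the actual PHOBOS scripts):

```csharp
using UnityEngine;

// Sketch of the old proximity approach: the door only reacts while the
// player's capsule collider sits inside this trigger box, and different
// objects listened for different buttons (A here, B elsewhere).
public class ProximityDoor : MonoBehaviour
{
    public Animator doorAnimator;   // assumed to expose an "Open" bool parameter
    private bool playerInRange;
    private bool isOpen;

    void OnTriggerEnter(Collider other)
    {
        if (other.CompareTag("Player"))
            playerInRange = true;    // show the "Press A to open" prompt in the HMD
    }

    void OnTriggerExit(Collider other)
    {
        if (other.CompareTag("Player"))
            playerInRange = false;   // hide the prompt again
    }

    void Update()
    {
        // JoystickButton0 is typically the A button on an Xbox gamepad.
        if (playerInRange && Input.GetKeyDown(KeyCode.JoystickButton0))
        {
            isOpen = !isOpen;
            doorAnimator.SetBool("Open", isOpen);
        }
    }
}
```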

So how did I solve these shortcomings? My goal was to reduce the learning curve of PHOBOS as much as possible. Telling people what to do while they are in VR is not easy (and I do not want clinicians to have the frustrating job of explaining it to every new patient either). The assumption here is that the user must still use the Xbox controller for movement and interaction, guided only by the clinician's spoken instructions about what to do next in each scene.

To make things easier, I removed the reliance on trigger boxes for interacting with most objects. Instead, I used gaze-based object selection, which makes a big difference. There is now far less frustration for the user, because one only needs to look at an object and always press the exact same button, button A. The cursor used for gaze-based interaction only detects objects with colliders on the gazable layer, and it disappears when no gazable collider is detected. The user therefore knows what he or she can interact with simply by gazing and pressing only button A every time, without needing any UI to say which button to press.
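
Conceptually, the gaze cursor boils down to a raycast from the HMD camera that only accepts hits on a dedicated layer, plus the single-button toggle. Here is a minimal Unity sketch of the idea (the "Gazable" layer name, the field names, and the SendMessage call are assumptions for illustration, not the actual PHOBOS code):

```csharp
using UnityEngine;

// Gaze cursor sketch: raycast from the HMD view, only accept colliders on the
// "Gazable" layer, and toggle whatever is hit when button A is pressed.
public class GazeCursor : MonoBehaviour
{
    public Transform hmdCamera;       // the VR camera transform
    public GameObject cursorVisual;   // small reticle placed at the hit point
    public float maxDistance = 5f;

    private int gazableMask;

    void Start()
    {
        // Only colliders on the "Gazable" layer can be selected by gaze.
        gazableMask = LayerMask.GetMask("Gazable");
    }

    void Update()
    {
        Ray gazeRay = new Ray(hmdCamera.position, hmdCamera.forward);
        RaycastHit hit;

        if (Physics.Raycast(gazeRay, out hit, maxDistance, gazableMask))
        {
            // Something gazable is in view: show the cursor on its collider.
            cursorVisual.SetActive(true);
            cursorVisual.transform.position = hit.point;

            // Always the same button: A toggles whatever is being gazed at.
            // JoystickButton0 is typically the A button on an Xbox gamepad.
            if (Input.GetKeyDown(KeyCode.JoystickButton0))
                hit.collider.SendMessage("Toggle", SendMessageOptions.DontRequireReceiver);
        }
        else
        {
            // Nothing gazable under the gaze: hide the cursor entirely.
            cursorVisual.SetActive(false);
        }
    }
}
```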

While gazing at a door, pressing button A opens it, and pressing button A again closes it. Looking away from the door while pressing button A does not affect it at all. The user can usually guess an object's main function before pressing button A, and this kind of toggling interaction is now standard across all the scenes (e.g. open/close, push/pull, on/off).
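
On the object side, each interactable then only needs a Toggle() method that flips between its two states; a minimal sketch for a door (again with a hypothetical "Open" animator parameter, not the shipped script):

```csharp
using UnityEngine;

// Example toggleable: the door flips between open and closed every time the
// gaze cursor sketch above sends it a Toggle() message.
public class ToggleDoor : MonoBehaviour
{
    public Animator doorAnimator;   // assumed to expose an "Open" bool parameter
    private bool isOpen;

    public void Toggle()
    {
        isOpen = !isOpen;
        doorAnimator.SetBool("Open", isOpen);
    }
}
```

Drawers, the faucet, and the other interactables can follow the same pattern, so button A always means "toggle whatever I am looking at."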

[Screenshot: Open/close a door using gaze-based object interaction.]

[Screenshot: The cursor aligns with the mesh collider of the stairs.]

[Screenshot: Easy push/pull of drawers just by gazing at each and pressing button A.]

[Screenshot: Precise interaction to turn the faucet on and off.]

The scene that benefited the most from these changes is the claustrophobic apartment, since so many interactable objects are clustered together in tight spaces. Trigger boxes there were unreliable and frustrating, because the user had to stand inside a small area and press various buttons. Worse, because the user formerly had to stand so close to an object, pulling open a drawer or door would push the user backwards, and the object rushing toward the eyes could be genuinely uncomfortable.

The user must still use the analog sticks for continuous movement, which can still be tricky for many people. Perhaps an option for gaze-based teleporting is in order, but that is something to consider for another day.

Challenge Two - Scene filtering based on phobia

Last week, I showed a screenshot of the scene selector. The menu on the left lists various phobias for treatment, but gazing at each one had no effect at all on the scenes on the right (they stayed unfiltered). In just a few hours last night, I implemented a mechanism that filters the scenes based on the last gazed phobia treatment option, and it is awesome!

While I am not going to explain the mechanism in detail, I will say the implementation ties into the text of the treatment explanation in the orange box, using a dictionary with the phobia name as the key and a comma-separated string of scene canvas references as the value. I added an extra button at the top to show all scenes again in case a filter was applied by gazing at a fear type button. By default, when this scene loads, no filtering is applied and all available scenes are shown.
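
As a rough sketch of that idea (the phobia keys, canvas names, and method names below are placeholders for illustration, not the actual PHOBOS data):

```csharp
using System.Collections.Generic;
using UnityEngine;

// Scene filtering sketch: each phobia name maps to a comma-separated string
// of scene canvas names; only those canvases stay visible when that phobia
// button is gazed at, and ShowAll() restores the unfiltered view.
public class SceneFilter : MonoBehaviour
{
    public GameObject[] sceneCanvases;   // every scene tile in the selector

    private readonly Dictionary<string, string> scenesByPhobia =
        new Dictionary<string, string>
        {
            { "Claustrophobia",  "ApartmentCanvas,ElevatorCanvas" },
            { "Fear of heights", "BalconyCanvas,RooftopCanvas" }
        };

    // Called when a phobia treatment button is gazed at.
    public void FilterBy(string phobia)
    {
        string[] allowed = scenesByPhobia[phobia].Split(',');

        foreach (GameObject canvas in sceneCanvases)
            canvas.SetActive(System.Array.IndexOf(allowed, canvas.name) >= 0);
    }

    // Hooked to the "show all scenes" button at the top of the selector.
    public void ShowAll()
    {
        foreach (GameObject canvas in sceneCanvases)
            canvas.SetActive(true);
    }
}
```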

[Screenshot: All scenes shown, unfiltered.]

[Screenshot: Scenes filtered by claustrophobia.]

[Screenshot: Scenes filtered by fear of heights.]

That is all I have to share for now. See you next week.