Forum Discussion

🚨 This forum is archived and read-only. To submit a forum post, please visit our new Developer Forum. 🚨
GlimpseGroupDev
5 months ago

The Meta Full body avatars are kind of a nightmare to work with

This post is not a question so much as general feedback after working with the SDK. I want to gauge whether other people have had a similar experience.

I work on a multi-user app that was originally built with half-body avatars in mind, but when Meta announced it would discontinue support for them, we had no choice but to migrate to full body. Our app is very versatile and covers many different movement and sitting types, so solving for each scenario individually is not ideal.

What I expected: All current VR hardware (with the exception of additional trackers) gives us three fundamental inputs on the position of the user: the head and both hands. Logically, every avatar system I have worked with up to this point takes that into account: it takes a head position and two hand positions as input, and voilà, the avatar is in the right place relative to the floor. In the case of a sitting avatar, you would probably supply one more location that defines where the avatar's butt should be so it sits in the chair. I fully expected the full body avatars to work like this. Having a body and legs adds the extra difficulty of using IK to animate those other parts, but I figured Meta had good algorithms for that and would provide a simple way of giving inputs and getting an animated avatar as a result.
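To make the expected contract concrete, here is a minimal Python sketch of it. All names here are hypothetical, invented for illustration; this is not Meta SDK API, just "head + two hands (+ optional seat) in, grounded avatar root out":

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Pose:
    x: float
    y: float
    z: float
    yaw: float = 0.0  # heading in radians

def place_avatar(head: Pose, left_hand: Pose, right_hand: Pose,
                 floor_y: float = 0.0, seat: Optional[Pose] = None) -> Pose:
    """Return the avatar root. Standing: the root sits on the floor directly
    under the head. Sitting: the root goes to the supplied seat ("butt")
    target. A downstream IK solver would then pose the spine and arms so the
    head and hands hit the three tracked inputs exactly -- in first AND
    third person."""
    if seat is not None:
        return Pose(seat.x, seat.y, seat.z, head.yaw)
    return Pose(head.x, floor_y, head.z, head.yaw)
```

Under a contract like this, crouching needs no special case: the head input moves down, the solver bends the legs, and nobody's head or hands get silently repositioned.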

What I got: Meta decided that bifurcating first- and third-person avatar animation, and restricting the third-person avatar to realistic poses, was more important than the reliability of the position data. The entire system is built around displaying different head and hand positions in third person than in first person. Crouching does not simply match the avatar's head to the user's head position; it measures how far down your head is and then plays a crouching animation at a level that mostly lines up. Hand positions are then placed relative to the head to maintain a normal body structure. This causes a tornado of problems if you want to do anything that does not come prepackaged in the SDK. Even just letting the user sit in a chair and freely look around 360 degrees is not included. The user pointing at a specific spot in the environment, or at a specific place on another user's body, becomes a confusing multiverse conversation about everyone seeing different things.

 

Meta's hybrid of IK and normal rigged animations is a nightmare if you want to accommodate more than one scenario. Our app allows users to seamlessly switch between standing, walking, sitting on the floor, and sitting in chairs of various sizes and shapes. We also have many objects the player can grab and move, which track their position independently and synchronize it over the network. The provided sitting behavior in the LegsNetworkLoopback scene is totally unusable for me. All movement of the user's head is clamped to stay still in third person, which strips out a lot of body language; worse, objects they are interacting with appear to float around, unconnected to their hands, because the first-person hand position the local user sees gets completely altered by the sitting animation. I had to make my own alteration to all of the crouching animations to get a more versatile sitting animation in which the user could actually move their head and be seen doing so.

One of the things we rely on to make all of our seating scenarios work is applying offsets within the rig to raise or lower the user so their head ends up in the right place. You can remain sitting in real life, but we adjust where your head ought to end up relative to the floor by shifting your play space around. This wreaks total havoc on the Meta system. Just making it possible for the user to transition between virtually sitting and standing, without changing position in real life, and KEEP THE AVATAR HEAD IN SYNC with their actual head position was a large undertaking. I think one of the biggest problems is that the rig that applies animations to the avatar does not even match the position of the rig in the scene. You have this totally invisible rig off in the middle of nowhere that defines what the avatar will look like to others but does not actually line up with anything in the local scene.
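The offset trick itself is trivial arithmetic, which is what makes fighting the SDK over it so frustrating. A sketch of the idea (hypothetical helper, not SDK code):

```python
def playspace_y_offset(real_head_y: float, target_head_y: float) -> float:
    """Vertical offset to add to the tracking-space root so the user's head
    lands at target_head_y above the virtual floor, whatever their real
    posture. E.g. a user sitting in a real chair (head ~1.2 m above the
    real floor) can occupy a standing avatar (head ~1.7 m) by shifting the
    play space up 0.5 m."""
    return target_head_y - real_head_y
```

With a straight retargeting system, this one subtraction is essentially the whole feature; with an animation-driven rig that re-derives its own head height, the third-person body drifts out of sync the moment you apply it.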

There are just so many scenarios in which the third-person rendering of the avatar deviates greatly from what the user is actually doing in first person that networking a sensible world, where all users experience the same thing, becomes a struggle. We used to have high fives that worked pretty well; now everyone's hands render slightly differently in third person, and it ruins the feature. Meta has abandoned the one idea I would have thought was the most obviously critical: in both first and third person, the head and hands of the avatar should always match the inputs given by the user.

TLDR: Zuckerberg was clearly scared by everyone making fun of the avatars before, so Meta ended up sacrificing absolutely everything to put heavy restrictions on the movement of the third-person avatar to keep it from looking silly. For Horizon that's great; for a bunch of apps that were built on a different system and would like to be able to provide the same inputs, it's a nightmare.

My request: Please add a normal IK system, where all I do is tell the avatar where my head, hands, and butt should end up, and it does the rest. I understand that I'll get some funny, VRChat-looking stiff movement or stretched limbs from this, but at least the position data will be reliable, and I won't have to figure out this complicated puppeteering system that only renders for other users.
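For what it's worth, the core of this request is decades-old, well-understood math. The "make the hand reach the tracked controller" step of a plain IK limb is a single law-of-cosines solve. A generic two-bone sketch (illustrative only, not Meta code):

```python
import math

def two_bone_elbow_angle(upper_len: float, lower_len: float,
                         target_dist: float) -> float:
    """Interior elbow angle (radians) for a two-bone limb whose end effector
    must reach a target at target_dist from the shoulder, via the law of
    cosines. Over-reach is clamped: the arm simply straightens (angle = pi)
    instead of popping or detaching from the tracked hand."""
    d = min(target_dist, upper_len + lower_len)
    cos_elbow = (upper_len ** 2 + lower_len ** 2 - d ** 2) \
                / (2 * upper_len * lower_len)
    return math.acos(max(-1.0, min(1.0, cos_elbow)))
```

Stiff and occasionally funny-looking, sure, but deterministic: the hand ends up exactly where the tracked controller is, for every observer.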


3 Replies

Replies have been turned off for this discussion
  • I fully agree with this.
    But also, Meta, come on, guys.
    If developers are building custom VR training systems for industrial customers (because not all use cases are games with strangers hanging out on a cartoon spaceship or whatever)...
    Then HeadsHands was such a quick and easy solution. A trainer doesn't want their body obscuring the view of the thing two or more people are actively interacting with...

  • I had the same problems with the new full-body Meta avatars. Compared to Ready Player Me, they are a nightmare to position correctly. The IK system had a limit on how far the head and body would ascend if, for example, the person was standing, and the avatar was always on the ground. Meta full-body avatars are always floating, and it's impossible to get them to sync with the ground.

  • Anonymous

    I appreciate the feedback here, and I'll pass it on to the Avatars team. I encourage you also to submit this as feedback through the Meta Quest Developer Hub, as that will make sure that the feedback is effectively tracked.