(Unity) OVRPlayerController - How to get OVRPlayerController to move with OVRCameraRig
I'm working off the standard OVRPlayerController, which has an OVRCameraRig as its child. This is a game where I need thumbstick locomotion (which the OVRPlayerController provides), but I also need roomscale movement. In other words, when the player moves physically, his in-game avatar should move too. Currently, when the player moves physically, the OVRCameraRig moves with him, but the parent OVRPlayerController does not. This is an issue because I need the OVRPlayerController to move with the player at all times for proper collision tracking and targeting by hostile AI. What is the best way to achieve this? I've tried a few approaches, but I'm wondering what the cleanest solution is. I'll also need hand tracking for this game. Perhaps I should simply use the Avatar SDK standard avatar and make it a child of a Character Controller for thumbstick movement? Thanks for the help!
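One common approach (a sketch of the general technique, not an official Oculus sample) is to keep thumbstick locomotion on the parent and, each frame, slide the CharacterController under the tracked head while pulling the tracking space back by the amount actually moved, so the camera doesn't move twice. It assumes the stock OVRCameraRig fields centerEyeAnchor and trackingSpace and a CharacterController on the OVRPlayerController; the class name and field names are placeholders:

using UnityEngine;

[RequireComponent(typeof(CharacterController))]
public class RoomscaleFollower : MonoBehaviour
{
    public OVRCameraRig cameraRig;          // assign the child OVRCameraRig in the Inspector
    private CharacterController controller;

    void Awake()
    {
        controller = GetComponent<CharacterController>();
    }

    void LateUpdate()
    {
        // Horizontal drift of the tracked head away from the controller's pivot.
        Vector3 delta = cameraRig.centerEyeAnchor.position - transform.position;
        delta.y = 0f;

        // Move the capsule with collision, then compensate the tracking space by the
        // distance actually moved so the camera stays where the player physically is.
        Vector3 before = transform.position;
        controller.Move(delta);
        cameraRig.trackingSpace.position -= (transform.position - before);
    }
}

With something like this in place, the parent (and its collider) stays under the player for collision and AI targeting, while movement blocked by geometry simply holds the capsule against the wall.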
How to make Oculus Avatar appear personalised

Hello everyone, we're struggling to make the Oculus Avatar SDK work with Unity and Photon 2. Everything works except that avatars appear non-personalized (white). We deliver the build through the App Lab release channels. The Data Use Checkup passed, and it even shows the data has been used recently. Users are entitled correctly. The app even receives components of our created version 2.0 avatars such as "beard", "glasses", etc. We also tried building a PC version and had no luck: all-white, non-personalized avatars every time. Any ideas on how to make them work? Thank you!

Custom Oculus Avatar Mesh on Unity
I am using the following documentation to load a personalized avatar into my Unity project: https://developer.oculus.com/documentation/avatarsdk/latest/concepts/avatars-sdk-unity/ We have followed steps 1-4 of this documentation, including creating an Oculus App ID and pasting it into both OculusAvatars and OculusPlatform. I am running into two issues:
1. A "PlatformManager.cs" already exists in the project (it is included in the OvrAvatar folder).
2. When I use a unique name like "PlatformManager1.cs" and continue with the remaining steps, my personalized avatar does not load; the standard blue avatar remains. The only console error I get is "Unrecognized message type: 78859427".
Am I doing something wrong? Is there an updated guide for including the personalized avatar in Unity? Please help!
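For reference, what the documentation's manager script does is roughly the following: check the entitlement, fetch the logged-in user's ID from the Platform SDK, and hand that ID to the OvrAvatar component. This is a minimal sketch of that wiring, not the exact script from the guide; the class name and the localAvatar field are placeholders, and in some SDK versions the ID must be set before the avatar's own Start() runs (or the avatar instantiated afterwards):

using UnityEngine;
using Oculus.Platform;
using Oculus.Platform.Models;

public class AvatarIdentity : MonoBehaviour
{
    public OvrAvatar localAvatar;   // the OvrAvatar in your scene

    void Awake()
    {
        Core.AsyncInitialize();   // uses the App ID from OculusPlatformSettings
        Entitlements.IsUserEntitledToApplication().OnComplete(OnEntitled);
    }

    void OnEntitled(Message msg)
    {
        if (msg.IsError) { Debug.LogError("Entitlement check failed"); return; }
        Users.GetLoggedInUser().OnComplete(OnLoggedIn);
    }

    void OnLoggedIn(Message<User> msg)
    {
        if (msg.IsError) return;
        // Without a valid ID the SDK falls back to the generic default avatar.
        localAvatar.oculusUserID = msg.Data.ID.ToString();
    }
}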
Please create more Oculus SDK Blueprints

Please work with Unreal to implement more Oculus SDK features as Blueprints for Unreal: things like VoIP, Oculus Avatar integration, inviting friends to sessions / joining friend sessions, and other SDK/OSS features. This would really kickstart a lot of development teams in creating multiplayer VR games using the Oculus Subsystems and SDK features. If there is any news on this topic that an Oculus dev can share, or any ETA for new Blueprints, I'd love to hear it. :)

Expressive Avatars: Lipsync, VoIP and Android Mic permissions
Hey folks, I'm sure you've seen the latest updates to Oculus Avatars, which introduced OVRLipsync-driven mouth movement, eye gaze simulation and micro-expressions. I wanted to flag something we came up against while working on this update, in case it is causing issues for folks building multiplayer Quest and Go experiences.

Android only allows access to the microphone from a single process. This wasn't an issue when networking avatars previously, as the mic input wasn't being used. But with the expressive update, we specifically need to run the mic through the OVRLipsync plugin to generate blendshapes and drive the mouth shapes. Trying to hook the mic up to both VoIP and Lipsync therefore causes an inevitable race condition: the loser gets a buffer of zeros, so you end up with either no networked audio or no blendshapes.

Fortunately, Oculus VoIP and Photon both have a workaround available, in the form of the ability to relay the mic buffer using SetMicrophoneFilterCallback() (for Oculus VoIP), as documented here: https://developer.oculus.com/documentation/platform/latest/concepts/dg-cc-voip/ We're in the process of documenting the specifics of how this can then be wired up to Lipsync and Avatars in more detail, but in the meantime, please refer to Social Starter in the Avatar SDK Unity samples, which has implemented the Avatar / Lipsync / VoIP stack correctly.
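As a rough illustration of the filter-callback approach (a sketch only: the exact FilterCallback delegate shape can vary between Platform SDK versions, and the OnMicSamples event is our own hypothetical hook, not an Oculus API), the idea is to let VoIP keep ownership of the microphone and tee a copy of each buffer off to whatever feeds OVRLipsync:

using System;
using AOT;
using UnityEngine;
using Oculus.Platform;

public class MicTee : MonoBehaviour
{
    // Hypothetical hook: subscribe your lipsync driver to receive mic samples.
    public static event Action<float[], int> OnMicSamples;   // (samples in [-1, 1], frequency)

    void Start()
    {
        // VoIP still owns the mic; the filter just lets us observe (or modify) the buffer.
        Voip.SetMicrophoneFilterCallback(MicFilter);
    }

    [MonoPInvokeCallback(typeof(CAPI.FilterCallback))]
    static void MicFilter(short[] pcmData, UIntPtr pcmDataLength, int frequency, int numChannels)
    {
        int count = (int)pcmDataLength.ToUInt32();
        var samples = new float[count];
        for (int i = 0; i < count; i++)
            samples[i] = pcmData[i] / 32768f;   // 16-bit PCM -> float

        OnMicSamples?.Invoke(samples, frequency);
    }
}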
Must call get_signature first

I'm trying to use the Oculus Avatar SDK on the Oculus Quest 2 with Unity. When I try to get the Platform User ID, I get the following error: "Must call get_signature first". I cannot find in the documentation what the proper way is to configure this in order to be able to make use of the ID. Also, manually changing the Avatar ID to the ones provided for testing doesn't seem to have any effect on the avatar.

Graphics.CopyTexture Debug Log bloat
Hi guys, this issue has been discussed in a disconnected fashion in several places (including the community forum), but never explicitly here in its own thread. To review: whenever an instance of an OvrAvatar is spawned in any scene, in any context, it spams the Unity log with roughly 100 instances of this message:

Graphics.CopyTexture with a region will not copy readable texture data for compressed formats (source texture format 50)

As mentioned in one of the previously linked threads, where this error comes from has already been tracked down: it's in OvrAvatarMaterialManager.cs, lines 185-191 of ProcessTexturesWithMips(). The only way to suppress the bloat is to comment out the entire TextureCopyManager.CopyTexture() call, which makes every part of the avatar except the hands invisible. I can't speak for other people's experiences, but taking the content of the message literally, it appears to be complaining about copying a mip for a texture to ASTC, since I'm developing for the Go.

Again, this scales linearly: every time a new avatar is added, you get another firehose of ~100 error messages. When avatar spawning happens during the initialization of a multiplayer session (and why wouldn't it, in this case?), you get ~100x however many avatars you're spawning in. In my workflow, that makes it impossible to pick up on other things I'm trying to debug in logcat at the same time; most of the time the buffer on my CLI shell is completely full and I can't see anything that happened before the blast of CopyTexture warnings.

In general, the Avatar SDK needs a lot of love on the mobile side, as it's very expensive and inefficient with the marshaling from the CAPI, and it really seems to have been a quick proof of concept for making the Oculus Home / full PC SDK "work" on mobile. But at the very least, can you look into this issue and patch it in an upcoming release? It's very, very annoying to have to deal with. Thanks

Unreal Engine - Avatar Expressive Update
CONFIDENTIAL - We appreciate your discretion during this early access period. Please don't discuss these updates or share publicly any images or video of expressive Avatars in action. We'd love your feedback. Please keep all comments / discussions regarding bugs or work in progress with expressive avatars restricted to the Avatar SDK Developer Preview discussion, and avoid posting to the broader public avatar discussion thread.

Hey folks, as promised, we've made the PC UE4 update for Expressive Avatars available. The Quest update is coming shortly, but we wanted to get the PC functionality out the door.

Release Notes

SDK
- Updated avatar meshes to include added eye / mouth geometry and blendshapes for expression and speech
- Updated avatar textures for skin, including mask information in the green and blue channels of the roughness texture
- Added runtime parameters to the avatar spec for eye color, lip color, brow color and lash color
- Added simulation models for eye gaze, mouth movement and general facial animation

UE
- Added support for different methods of handling alpha. We recommend the Masked material: it performs better and blends into the scene better than the previous translucency effect.
- Added support for Expressive avatars, including integration with Lipsync, gaze targets, etc.
- Added the ability to set static or moving gaze targets

Known issues
- This is PC only. Brave is the developer who tries to make this work on Quest. (As above, Quest support is coming soon.)
- Currently supports UE 4.21 only
- This build is tested against the 1.36 software update. While avatar support for expressive was also shipped in the Oculus runtime with update 1.35, some functionality was not enabled. If you haven't seen the 1.36 update, consider opting into the Public Test Channel.

Instructions
Dropbox link, here: https://www.dropbox.com/sh/l5limp8zm5wyetj/AAAWf3gVNOwK5v4_DYf6qBkia?dl=0
Unzip the packages to:
<ENGINE>/Plugins/Runtime/Oculus/OculusAvatar/
<ENGINE>/Source/ThirdParty/Oculus/LibOVRAvatar/
You'll also need to download and integrate the OvrLipSync plugin into your project's plugin folder and update your uproject to use it. See AvatarSamples, where it's currently included, or download the latest from the Oculus developer downloads page. LocalAvatar.h/cpp shows how to wire up an expressive avatar, below (see code block). Build and run AvatarSamples. In the editor you will see a number of new options exposed to configure the Avatar (Image #1, below). Ensure that 'Enable Expressive' is set.

User IDs
Please conduct initial testing using one of the following user IDs to retrieve an Avatar with an updated face mesh with expressive blends and no eyewear.
10150030458727564
10150030458738922
10150030458747067
10150030458756715
10150030458762178
10150030458769900
10150030458775732
10150030458785587
10150030458806683
10150030458820129
10150030458827644
10150030458843421
Going forward, all users will automatically be backfilled with an updated face mesh that takes advantage of both gaze and facial expressions. They'll have the option to remove their current eyewear, exposing the eyes, as we launch this update into the avatar editors on PC and Go.

Gaze Targets
We've added the ability to tag objects of interest in the scene. To do so, attach a UOvrAvatarGazeTarget to an object. This is exposed through the editor for convenience (see Image #2, below). There are four types of tags to apply:
- Avatar Head - This is automatically attached to avatars in the scene.
- Avatar Hands - Also automatically attached, to the hands.
- Object - A moving object in the scene.
- Static Object - A still object in the scene.

Unless otherwise specified in code using:

void SetGazeTransform(USceneComponent* sceneComp)

...the transform will point to the root scene component of the object it is attached to. It may be desirable to tune that on larger objects to refine the point of interest.

Image #1: UE Avatar options

LocalAvatar.h/cpp

// Routes viseme data from whichever lipsync component is active into the avatar.
void ALocalAvatar::LipSyncVismesReady()
{
    if (UseCannedLipSyncPlayback)
    {
        AvatarComponent->UpdateVisemeValues(PlayBackLipSyncComponent->GetVisemes());
    }
    else
    {
        AvatarComponent->UpdateVisemeValues(LipSyncComponent->GetVisemes());
    }
}

void ALocalAvatar::PreInitializeComponents()
{
    Super::PreInitializeComponents();

    if (UseCannedLipSyncPlayback)
    {
        // Canned playback: load a pre-baked viseme sequence and the matching audio clip.
        FString playbackAssetPath = TEXT("/Game/Audio/vox_lp_01_LipSyncSequence");
        auto sequence = LoadObject<UOVRLipSyncFrameSequence>(nullptr, *playbackAssetPath, nullptr, LOAD_None, nullptr);
        PlayBackLipSyncComponent->Sequence = sequence;

        FString AudioClip = TEXT("/Game/Audio/vox_lp_01");
        auto SoundWave = LoadObject<USoundWave>(nullptr, *AudioClip, nullptr, LOAD_None, nullptr);
        if (SoundWave)
        {
            SoundWave->bLooping = 1;
            AudioComponent->Sound = SoundWave;
        }
    }
#if PLATFORM_WINDOWS
    else
    {
        auto SilenceDetectionThresholdCVar = IConsoleManager::Get().FindConsoleVariable(TEXT("voice.SilenceDetectionThreshold"));
        SilenceDetectionThresholdCVar->Set(0.f);
    }
#endif

    // TODO SW: Fetch Player Height from Oculus Platform?
    BaseEyeHeight = 170.f;

    AvatarComponent->SetVisibilityType(
        AvatarVisibilityType == AvatarVisibility::FirstPerson
            ? ovrAvatarVisibilityFlag_FirstPerson
            : ovrAvatarVisibilityFlag_ThirdPerson);

    AvatarComponent->SetPlayerHeightOffset(BaseEyeHeight / 100.f);
    AvatarComponent->SetExpressiveCapability(EnableExpressive);
    AvatarComponent->SetBodyCapability(EnableBody);
    AvatarComponent->SetHandsCapability(EnableHands);
    AvatarComponent->SetBaseCapability(EnableBase);
    AvatarComponent->SetBodyMaterial(GetOvrAvatarMaterialFromType(BodyMaterial));
    AvatarComponent->SetHandMaterial(GetOvrAvatarMaterialFromType(HandsMaterial));
}

ALocalAvatar::ALocalAvatar()
{
    RootComponent = CreateDefaultSubobject<USceneComponent>(TEXT("LocalAvatarRoot"));
    PrimaryActorTick.bCanEverTick = true;

    AvatarComponent = CreateDefaultSubobject<UOvrAvatar>(TEXT("LocalAvatar"));
    PlayBackLipSyncComponent = CreateDefaultSubobject<UOVRLipSyncPlaybackActorComponent>(TEXT("CannedLipSync"));
    AudioComponent = CreateDefaultSubobject<UAudioComponent>(TEXT("LocalAvatarAudio"));
    LipSyncComponent = CreateDefaultSubobject<UOVRLipSyncActorComponent>(TEXT("LocalLipSync"));
}

void ALocalAvatar::EndPlay(const EEndPlayReason::Type EndPlayReason)
{
    LipSyncComponent->OnVisemesReady.RemoveDynamic(this, &ALocalAvatar::LipSyncVismesReady);
    PlayBackLipSyncComponent->OnVisemesReady.RemoveDynamic(this, &ALocalAvatar::LipSyncVismesReady);

    if (!UseCannedLipSyncPlayback)
    {
        LipSyncComponent->Stop();
    }
}

void ALocalAvatar::BeginPlay()
{
    Super::BeginPlay();

    uint64 UserID = FCString::Strtoui64(*OculusUserId, NULL, 10);

#if PLATFORM_ANDROID
    ovrAvatarAssetLevelOfDetail lod = ovrAvatarAssetLevelOfDetail_Three;
    if (AvatarComponent)
    {
        AvatarComponent->RequestAvatar(UserID, lod, UseCombinedMesh);
    }
#else
    ovrAvatarAssetLevelOfDetail lod = ovrAvatarAssetLevelOfDetail_Five;
    IOnlineIdentityPtr IdentityInterface = Online::GetIdentityInterface();
    if (IdentityInterface.IsValid())
    {
        OnLoginCompleteDelegateHandle = IdentityInterface->AddOnLoginCompleteDelegate_Handle(0,
            FOnLoginCompleteDelegate::CreateUObject(this, &ALocalAvatar::OnLoginComplete));
        IdentityInterface->AutoLogin(0);
    }
#endif

    if (UseCannedLipSyncPlayback)
    {
        PlayBackLipSyncComponent->OnVisemesReady.AddDynamic(this, &ALocalAvatar::LipSyncVismesReady);
    }
    else
    {
        LipSyncComponent->OnVisemesReady.AddDynamic(this, &ALocalAvatar::LipSyncVismesReady);
        LipSyncComponent->Start();
    }
}

Image #2: Gaze target configuration

Expressive Update - Launch, updated Unity integration
CONFIDENTIAL - We appreciate your discretion during this early access period. Please don't discuss these updates or share publicly any images or video of expressive Avatars in action. We'd love your feedback. Please keep all comments / discussions regarding bugs or work in progress with expressive avatars restricted to the Avatar SDK Developer Preview discussion, and avoid posting to the broader public avatar discussion thread.

Hey all, thank you for your ongoing participation in the developer preview. I wanted to keep everyone updated on the latest developments (and share a Unity package) and chat through launch timings.

Launch timings
We're launching expressive next week (April 3rd at 9am). At this time we will enable all users to access the updated avatar editors on Go and Rift, where they'll be able to remove their eyewear and augment their avatar with eye color, brows, lipstick and lash coloring. We've already backfilled and updated all users to an expressive avatar. This means that whenever you publish an app that requests Expressive avatars, their mouths will start working. Prior to 4/3, however, users won't have the ability to remove eyewear. Please note that we've made a change to how gaze targets work (below); you'll want to verify your implementation using our test IDs (which don't have eyewear). Reminder that the launch date, and the assets, are all confidential until we go live.

Media coverage
We will be proactively communicating with media outlets over the next few days about the upcoming update and readying our blogs and other comms. If you are confident that your app will be updated by 4/3, please message me directly to let me know, as we will look to mention live apps that users can take their expressive avatars to as of the launch date. We'll also be mentioning a subset of apps that will update shortly, so let me know either way! We'll be publishing a blog post specific to this update, and we'll also prepare some assets to share, along with relevant copy, if you'd like to participate on your social media channels.

Playtesting
The avatars team is happy to playtest your app and provide critique on the gaze modeling implementation or any custom shader work you've done (acknowledging that we now do some fun things with masking). Please message me and we can coordinate adding emails to release channels on the Oculus store.

Latest drop (1.36 SDK)
LINK: https://www.dropbox.com/sh/qvegtvnvqbrbec4/AAC-Z4FbfAYNK0lNW5BSbC0Ma?dl=0

Release notes
This is the SDK 1.36 package that's going live roughly around the same time as the SDK release (it will also be available via the Unity store, albeit a little too late for you to take the updates before our live date). You won't need to copy over the avatars DLL, as the Oculus Runtime for 1.36 should now have rolled out to your Rift. The runtime for Quest should be 3.60; you should have this version by now if you're getting the developer updates. For Steam, specifically, you can just copy the DLL from your runtime (search your PC for libovravatar.dll and OvrAvatarAssets.zip). Let me know if you can't find these; I can upload them for you.

Change list
We've updated gaze targets to move away from a tag system (based on feedback) and toward component-based gaze tagging.
You can now assign targets using the Unity interface (pictured below), or via code:

GazeTarget newGazeTarget = gameObject.AddComponent<GazeTarget>();
newGazeTarget.Type = ovrAvatarGazeTargetType.AvatarHead;

The system allows you to specify four levels of gaze saliency (in descending order): Avatar Head, Avatar Hand, Object, Stationary Object.

We've also fixed some issues that were affecting how you implement avatars:
- Resolved a broken behavior-simulation state when no mic permissions are granted on Quest (though you should ensure you're requesting mic permissions; see the permission sketch at the end of this post!)
- Resolved an issue wherein IL2CPP was throwing errors
- Resolved an issue which was throwing debug errors when 'Third Person' was set to True
- Added the ability to configure the avatar to use the transparent render queue (image below)

Some implementation gotchas we've come across:
- Unity seems to default to enabling OpenGL 3.0 / Vulkan support, which isn't supported on Go / Quest. You'll want to disable this.
- For Android, ensure you're setting ASTC compression.

Image: Opacity option
Image: Gaze Targets
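On the mic permission point above, a minimal Unity-side sketch of requesting it at startup (assuming Unity 2018.3+ and its UnityEngine.Android permission API; the class name is a placeholder, and it should run before VoIP / Lipsync are initialised):

using UnityEngine;
#if UNITY_ANDROID && !UNITY_EDITOR
using UnityEngine.Android;
#endif

public class MicPermissionRequester : MonoBehaviour
{
    void Awake()
    {
#if UNITY_ANDROID && !UNITY_EDITOR
        // Without RECORD_AUDIO granted, the lipsync simulation falls back to its no-mic state.
        if (!Permission.HasUserAuthorizedPermission(Permission.Microphone))
            Permission.RequestUserPermission(Permission.Microphone);
#endif
    }
}

Your Android manifest still needs the android.permission.RECORD_AUDIO entry; the Oculus plugins that use the microphone normally add it for you.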