Random spike in CPU usage and longer render times
Hello, I am a developer on a Unity app for Quest 2. We have a performance problem that causes huge spikes in CPU usage. The app performs well at first, as captured here by OVR Metrics: All of the systems in the app test well individually and do not cause problems, but after a certain amount of time doing various things, a user has a random chance of the framerate plummeting. In this next picture, I experienced the bug and then returned to the login screen, where the bug persisted: You can see that CPU usage has skyrocketed to a locked 100%. You would think there is some process taking up frame time in the background, so I took a look at the profiler on a build to see if there was a key culprit. Here is a frame under good conditions, where performance is as expected: And after the bug, the frame looks like this: You can see that the player loop has almost doubled in duration, but the scripts portion of the frame is essentially unchanged. The change is entirely in EarlyUpdate.XRUpdate and PostLateUpdate.FinishFrameRendering, which take 5x as long as before. So why am I getting a huge CPU usage spike while rendering the same workload suddenly takes so much longer? Is there any chance the Quest is downclocking under stress? Once this happens, there is no way to get the old performance back until you restart the app completely. There is no surefire way to trigger this; it just randomly takes place when there's a decent amount going on, and then stays forever. I also do not understand what APP T and TW T mean in the OVR Metrics panel, in case those might point to a culprit. Any ideas would be much appreciated, thank you.
CPU level not increasing fast enough
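In case it helps anyone reproduce this, one way to catch the exact moment the spike begins is a small watchdog that logs when the smoothed frame time stays over budget; this is a minimal sketch using only standard Unity APIs, with the 16.6 ms budget, smoothing factor, and sustain window chosen arbitrarily as assumptions:

```csharp
using UnityEngine;

// Logs when the smoothed CPU frame time exceeds a budget for a
// sustained number of frames, so the spike onset shows up in logcat.
public class FrameTimeWatchdog : MonoBehaviour
{
    const float BudgetMs = 16.6f;   // ~60 Hz frame budget (assumption)
    const int SustainFrames = 90;   // ~1.25 s of bad frames before logging

    float smoothedMs;
    int badFrames;

    void Update()
    {
        float frameMs = Time.unscaledDeltaTime * 1000f;
        // Exponential moving average to ignore one-frame hitches.
        smoothedMs = Mathf.Lerp(smoothedMs, frameMs, 0.1f);

        if (smoothedMs > BudgetMs)
        {
            if (++badFrames == SustainFrames)
                Debug.LogWarning($"Frame time degraded: {smoothedMs:F1} ms sustained");
        }
        else
        {
            badFrames = 0;
        }
    }
}
```

Correlating that log line with what the user was doing at the time can narrow down which system tips the app over the edge.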
Hi, I noticed that Dynamic Clock Throttling is sometimes not fast enough to react to CPU performance demand, and it even lowers the CPU level when it shouldn't. If I put the headset to sleep and wake it after a few seconds, I get a few seconds where the FPS is around 36 while the CPU and GPU levels are both at 0. Only after a few seconds (3 s to 25 s) does the level increase to match the required performance. If you have any solutions, please let me know. I tried setting the CPU and GPU levels to 3 on application focus, which solves the problem until I remove the high level; then Dynamic Clock Throttling lowers the CPU level to zero again, which causes the FPS to drop to 36 until it re-increases the level to match what's needed. I have attached a photo of what's happening, where the blue arrow is where I get the low FPS drop due to the CPU level being 0 (in this screenshot it only lasts a few seconds because I was taking the picture, but it normally lasts much longer). Background: Using Unity 2020.3.25. Happens in multiplayer using Photon Fusion. On Quest 2.
suggestedCpuPerfLevel and suggestedGpuPerfLevel appear to do nothing
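For reference, the focus-time workaround described above can be sketched like this; `OVRManager.suggestedCpuPerfLevel` is the newer hint-based API (older integrations used the fixed `cpuLevel`/`gpuLevel` properties instead), and the 10-second grace period before handing control back to dynamic throttling is an arbitrary assumption:

```csharp
using System.Collections;
using UnityEngine;

// Temporarily requests high CPU/GPU clock levels when the app regains
// focus (e.g. after sleep), then hands control back to the system.
public class WakeBoost : MonoBehaviour
{
    const float BoostSeconds = 10f; // arbitrary grace period (assumption)

    void OnApplicationFocus(bool focused)
    {
        if (focused)
            StartCoroutine(BoostThenRelease());
    }

    IEnumerator BoostThenRelease()
    {
        OVRManager.suggestedCpuPerfLevel = OVRManager.ProcessorPerformanceLevel.SustainedHigh;
        OVRManager.suggestedGpuPerfLevel = OVRManager.ProcessorPerformanceLevel.SustainedHigh;
        yield return new WaitForSeconds(BoostSeconds);
        // Drop back to a lower hint so dynamic throttling can save power.
        OVRManager.suggestedCpuPerfLevel = OVRManager.ProcessorPerformanceLevel.SustainedLow;
        OVRManager.suggestedGpuPerfLevel = OVRManager.ProcessorPerformanceLevel.SustainedLow;
    }
}
```

These are hints, not guarantees, so the OS may still clamp the levels based on thermals.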
Setting OVRManager.suggestedCpuPerfLevel and OVRManager.suggestedGpuPerfLevel to OVRManager.ProcessorPerformanceLevel.Boost has no effect on CPU and GPU levels when observed through the OVR Metrics tool. Both are locked to CPU = 1 and GPU = 1. How do I give the app control over these levels so it can increase or decrease them as required by the game? Using Unity 2019.3.34 and Oculus Integration 39.0.0.0. Thanks in advance.
GPU Compatibility
Hello, I have an MSI GP72VR 7RF LEOPARD Pro laptop that meets all the requirements for Oculus Link (for Oculus Quest 2) except the graphics card, which is a GTX 1060 3 GB. My question is: is it impossible for me to use Oculus Link, or is it just a loss of performance? My PC cost 1500 euros two years ago, so I find it strange that it might not work, especially since the main selling point of this PC was its "VR Ready" label. Thank you.
Oculus Lipsync (1.43.0 Unity) seems to consume AMD CPU
Hello, I am building a Unity application (for Windows) with Oculus Lipsync in Unity 2018.4.13f1, and I found that my lip-sync app behaves as follows: 1) On Intel CPUs, the SDK does not consume much CPU: a Core i7 6700 (3.4 GHz) takes about 1 ms, and a Core i7 2600 (3.4 GHz) takes about 0.2 ms. 2) On AMD CPUs (Ryzen), the SDK consumes much more CPU: it usually takes several ms, and sometimes over 10 ms (I tried with ADE-62C1 / ADE-6291). I also tried simplifying the app to focus only on the lip-sync processing; even the simplified app takes 3-5 ms for the lip-sync calculation. I want to do lip sync in real time (with a mic as the audio source) and support AMD CPUs, so please advise me on how to reduce CPU consumption on AMD CPUs. Thank you for your cooperation!
(Android) Unity got Poor CPU Performance compared with Native apk
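When comparing the Intel and AMD machines, it may help to put a precise number on the per-callback cost by timing the lip-sync call itself. This sketch uses only `System.Diagnostics.Stopwatch` and Unity's audio callback; `ProcessLipSync` is a hypothetical placeholder standing in for whatever Oculus Lipsync call the app actually makes:

```csharp
using System.Diagnostics;
using UnityEngine;

// Measures the average cost of the lip-sync processing call across
// audio callbacks, so per-CPU differences can be quantified.
public class LipSyncTimer : MonoBehaviour
{
    readonly Stopwatch watch = new Stopwatch();
    double totalMs;
    int samples;

    void OnAudioFilterRead(float[] data, int channels)
    {
        watch.Restart();
        ProcessLipSync(data, channels); // placeholder for the actual Lipsync SDK call
        watch.Stop();

        totalMs += watch.Elapsed.TotalMilliseconds;
        if (++samples == 100)
        {
            UnityEngine.Debug.Log($"lip-sync avg: {totalMs / samples:F2} ms over {samples} callbacks");
            totalMs = 0;
            samples = 0;
        }
    }

    void ProcessLipSync(float[] data, int channels)
    {
        // Replace with the real Oculus Lipsync processing for your setup.
    }
}
```

Averaging over 100 callbacks smooths out scheduler noise, which matters when the difference you are chasing is fractions of a millisecond on one CPU and tens of milliseconds on another.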
Hello everyone, my team has been developing a "view-dependent media player", which uses a single thread to decode a 2048 x 1536 base layer with the Android MediaCodec (we call it BASE TEX) and four threads to decode 512 x 256 x 24 split blocks, for the region the viewer is directly looking at, with ffmpeg (we call it HD TEX), very similar to VR_5K_PLAYER. This framework makes CPU usage heavier than GPU usage: the CPU has to decode the full 2048 x 1536 frame and upload YUV data from memory to video memory every frame, while the GPU only renders it. After a few tests with a Samsung S8 and a Quest, I found that the S8 and the Quest show different performance: the S8 gets a higher FPS because the Quest has less CPU availability than the S8. (PS: the S8 and the Quest are both equipped with a Snapdragon 835. As far as I know, the 835 chip contains 8 cores, 4 of which are big cores responsible for heavy work.) To confirm my point of view, I made a test project using Unity and the latest Gear SDK that adds heavy CPU work on several threads:

using System;
using System.Threading;
using UnityEngine;

public class NewBehaviourScript : MonoBehaviour
{
    // Spawn 8 busy threads at startup.
    void Start()
    {
        for (int i = 0; i < 8; i++)
        {
            ThreadStart method = () => threadWork();
            Thread thread = new Thread(method);
            thread.Start();
        }
    }

    // Additional busy work on the main thread every frame.
    void Update()
    {
        long sum = 0;
        for (int i = 0; i < 1000000; ++i)
            sum = (int)((sum + i) * 2.0f / 2.0f);
    }

    void threadWork()
    {
        while (true)
        {
            long sum = 0;
            for (int i = 0; i < 100000000; ++i)
                sum = (int)((sum + i) * 2.0f / 2.0f);
            Debug.LogFormat("TestThread End sum:{0} curr thread id:{1} cur timeStamp:{2}",
                sum, Thread.CurrentThread.ManagedThreadId, DateTime.Now.ToString());
        }
    }
}

The enclosed Python file uses "adb shell cat /proc/stat" to calculate the usage of all 8 cores. The first result above is the S8, where the last four values are the big-core usage and 3 big cores run at full load. The second result is the Quest, where only 2 big cores run at full load.
My team also made a native Android project, built with Gradle and Android Studio and without the Gear SDK, loading my dynamic library through JNI and running the view-dependent media player. The first result below is the S8 CPU usage; the second result is the S8 running the same dynamic library through Unity and the Gear SDK. The two tests show that the native APK gets better and more even CPU usage under heavy CPU work than the Unity + Gear APK. My questions are: Is there a way for a Unity + Gear APK to control the device's CPU availability? Why does the native APK perform differently from the Unity + Gear APK? And why does the Quest have fewer cores available for CPU work than the S8?
Haswell or Skylake CPU for my HTC Vive VR headset?
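The per-core calculation that the Python file performs can be sketched in C# as well: take two samples of a core's "cpuN ..." line from /proc/stat (e.g. captured a second apart via "adb shell cat /proc/stat") and compute the busy fraction from the field deltas. This is a standalone sketch following the proc(5) field layout, not the original script:

```csharp
using System;
using System.Globalization;
using System.Linq;

// Computes per-core CPU utilization from two samples of one
// "cpuN ..." line of /proc/stat.
static class ProcStat
{
    // Parses the numeric fields of one "cpuN ..." line:
    // user nice system idle iowait irq softirq steal ...
    static long[] Fields(string line) =>
        line.Split(new[] { ' ' }, StringSplitOptions.RemoveEmptyEntries)
            .Skip(1)
            .Select(s => long.Parse(s, CultureInfo.InvariantCulture))
            .ToArray();

    // Utilization of one core between two samples, in [0, 1].
    public static double Usage(string before, string after)
    {
        long[] a = Fields(before), b = Fields(after);
        long idleA = a[3] + a[4], idleB = b[3] + b[4]; // idle + iowait
        long total = b.Sum() - a.Sum();
        return total == 0 ? 0 : 1.0 - (double)(idleB - idleA) / total;
    }
}
```

Running this over each of the eight cpuN lines gives the per-core bars shown in the screenshots; the last four entries correspond to the big cores on the Snapdragon 835.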
What CPU will work best with my Nvidia GeForce GTX 980: Haswell or Skylake? Keep in mind I just blew a month's worth of pay on my current setup plus the Rift, so I'm not looking to spend too much. Right now I'm running an i5-2400, the cheapest of the i5s. Not sure if it's bottlenecking or not, but I still want something at least Haswell to complement the GPU and use its full potential. Is it worth the $200+ upgrade, or will I not even notice the difference?