Forum Discussion
Anonymous
7 years ago
Oculus DK2 - display a video at a specific time
Dear all, my goal is to play a video on the Oculus DK2, selecting the time instant of the video to be displayed using the current position of a Novint Falcon robot. I have tried using TouchDes...
volgaksoy
7 years ago
Meta Employee
There are various samples out there beyond our SDK. FFMPEG is one example (check the licensing to make sure it works for you), as is Microsoft's Media Session, which is part of the Media Foundation API: https://docs.microsoft.com/en-us/windows/desktop/medfound/media-session. Although they're deprecated, you can find compilable video playback samples here: https://github.com/pauldotknopf/WindowsSDK7-Samples/tree/master/multimedia/mediafoundation
I believe SimplePlay is one such sample, but you would not want to use it directly, since Microsoft has since updated its APIs. Still, it should get you started. Be aware that almost none of those samples are efficient: they do not use D3D11 or the GPU properly. For that, you'd want to look at this doc: https://docs.microsoft.com/en-us/windows/desktop/medfound/supporting-direct3d-11-video-decoding-in-media-foundation
What you need is this: once a video frame is decoded, convert it to a regular RGB color format our SDK supports, then copy it into an SDK quad layer (or render your own quad into an EyeFov layer directly). For how to render a quad layer or EyeFov layer, see our OculusRoomTiny (ORT) samples in the native SDK download zip; GL, D3D11, D3D12, and Vulkan versions of the same sample are packaged there. ORT only uses a single EyeFov layer. For quad layers and more advanced use of the SDK, see the OculusWorldDemo sample, also in the SDK download zip.