
Using 360 images for previsual work in accurate real world scale

ArchieAndrews
Explorer

Hi

I have some heavy 3D environments/scenes that weren't originally optimised for VR, so for obvious reasons they simply will not render at a performant frame rate.

Ideally I'd like to take 360 images of these scenes, which I have done many times, and map them onto a 360 sphere to emulate the environment. Obviously you can't move around or interact, but that's fine.

The problem is the scale of the 360s: they don't look correct when viewed in VR. Usually everything appears huge.

I know this is a challenging subject, but does anyone have any idea how to get 360 images to appear at real-world scale when viewed in a VR headset?

The idea is to render 360 images from my large scenes and use them as a set of view-only scenes in VR. These would play as part of a storytelling narrative where the user is immersed in the environment at the correct scale and perspective as the 360 images are sequenced.

Thanks

8 REPLIES

jtriveri
Adventurer

The problem with 360 images is that there is no scale to perceive. Because there is no stereo depth perception everything looks infinitely far away. It looks like you're standing inside a giant sphere because you are. Rendering the view at the correct head-height can help, but you really need a stereo 360 representation in order to perceive scale. 

jtriveri Thanks. And a stereo 360 representation? Is that doable? Do you have any information on such?

The reason this is critical for me is that I recreate fairly detailed (high poly count, high material/texture count) environments related to mining disasters, and up to now these have mostly been desktop-based, running on a GTX 1080 GPU. With VR many of the scenes would obviously be too heavy, so my rationale was to use 360s rendered from the desktop version purely as viewpoints within VR, avoiding the resource overhead of a full 3D scene. Think of them as a chained sequence of 360s that the user views while listening to a storytelling narrative.

I did look at PCVR as an option, which would use the PC's GPU, but it's not the way I really want to do this.

Any help appreciated!

A

One thing I’ve seen done before is rendering a 360 depth map along with the color texture and using that to displace the image sphere. This would also offer some (very limited) 6DOF movement. I don’t know what software you’re using to set up the scene, but rendering this depth map should be doable in any mainstream 3D program. You could even stack multiple concentric spheres in layers to avoid stretched faces on the sides of objects. I have never done this myself, so I can’t tell you exactly how I’d set it up, but I hope it points you in a good direction.
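For illustration only, here is a minimal CPU-side sketch of that depth-displacement idea in Unity C#. It is an untested outline, not the poster's method: it assumes the sphere's UVs use a standard equirectangular mapping that matches the rendered colour/depth images, that the depth texture is readable (Read/Write enabled), and that linear distance is stored in the red channel normalised by a chosen maximum.

```csharp
using UnityEngine;

// Sketch: displace an inward-facing, equirectangular-mapped sphere mesh
// using a 360 depth map, so nearby surfaces sit closer to the viewer and
// give a little parallax. Assumptions (not from the original thread):
//  - the sphere's UVs match the equirectangular colour/depth renders,
//  - the depth texture is readable and stores linear distance in the red
//    channel, normalised by maxDepthMeters.
[RequireComponent(typeof(MeshFilter))]
public class DepthDisplacedSphere : MonoBehaviour
{
    public Texture2D depthMap;          // 360 (equirectangular) depth render
    public float maxDepthMeters = 50f;  // distance encoded as depth == 1.0
    public float minDepthMeters = 0.5f; // clamp to keep vertices off the eye

    void Start()
    {
        MeshFilter mf = GetComponent<MeshFilter>();
        Mesh mesh = Instantiate(mf.sharedMesh); // work on a copy of the sphere
        Vector3[] verts = mesh.vertices;
        Vector2[] uvs = mesh.uv;

        for (int i = 0; i < verts.Length; i++)
        {
            // Sample the depth map at this vertex's UV (bilinear filtering).
            float d01 = depthMap.GetPixelBilinear(uvs[i].x, uvs[i].y).r;
            float meters = Mathf.Clamp(d01 * maxDepthMeters, minDepthMeters, maxDepthMeters);

            // Push the vertex out along its direction from the sphere centre
            // to the distance recorded in the depth map.
            verts[i] = verts[i].normalized * meters;
        }

        mesh.vertices = verts;
        mesh.RecalculateBounds();
        mf.mesh = mesh;
    }
}
```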

Thanks again

I managed to render to an inward-facing sphere instead of a skybox, using the panoramic shader, in the hope that it might allow some minor movement within the scene, but it still has the static follow effect when you move your head, and hence a tendency towards nausea with too much movement. I'm going to see if I can have the sphere compensate for the head's positional movement, i.e. when the head moves position (not rotation), the sphere moves towards the headset by the same offset, mimicking a small amount of 6DOF movement. In general, for the story narrative using 360s, the user would be seated without interactions of any kind.
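A rough sketch of that compensation idea, under assumptions not stated in the thread (the 360 image is on an inward-facing sphere, and the XR camera's transform moves as the user moves their head). Shifting the sphere by a fraction of the head offset makes the image behave more like a distant backdrop than a surface a few metres away; it cannot create real parallax.

```csharp
using UnityEngine;

// Sketch: move the 360 image sphere by (a fraction of) the head's positional
// offset. compensationFactor = 1 makes the image behave like a skybox at
// infinity; 0 leaves the sphere static. Smoothing softens the follow.
public class SphereOffsetCompensation : MonoBehaviour
{
    public Transform headCamera;        // XR camera (centre eye) - assumed setup
    public Transform imageSphere;       // inward-facing 360 sphere
    [Range(0f, 1f)]
    public float compensationFactor = 0.8f;
    public float smoothing = 8f;        // higher = snappier follow

    Vector3 initialHeadPos;
    Vector3 initialSpherePos;

    void Start()
    {
        initialHeadPos = headCamera.position;
        initialSpherePos = imageSphere.position;
    }

    void LateUpdate()
    {
        // Positional offset only; rotation needs no compensation because the
        // sphere already surrounds the camera.
        Vector3 headOffset = headCamera.position - initialHeadPos;
        Vector3 target = initialSpherePos + headOffset * compensationFactor;

        imageSphere.position = Vector3.Lerp(
            imageSphere.position, target, smoothing * Time.deltaTime);
    }
}
```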

I know there's no ideal solution here but worth a try.

 

Late to the discussion, sorry, but for what it's worth...

I starred this gist on github many years ago: https://gist.github.com/khadzhynov/24f26234b7ffc9e683049e13143b450e
It's a shader for adjusting the scale and y offset (horizon height) of a skybox cubemap in Unity. I haven't used it for a while, but it should still work.

The further discussion you had about enabling 6DOF within 'flat' stereoscopically rendered views is also interesting. There are perhaps other ways of doing what you're trying to do (if I understand the problem correctly). Google Seurat would have been exactly what you needed, if it hadn't been deprecated by Google: https://developers.google.com/vr/discover/seurat (see: https://www.youtube.com/watch?v=FTI_79f02Lg). It's possibly still doable if you're able to refactor the code from the archived GitHub repos: https://github.com/googlevr/seurat and the Unity plug-in: https://github.com/googlevr/seurat-unity-plugin.
There's a demo on the Quest store (sorry, Horizon Store 😁) that I'm pretty sure uses this technology: Forest – Oniri Tech Demo. I think the company has essentially taken Seurat and built it into their own system, a paid platform very much geared towards enterprise (architectural visualisation, that kind of thing): https://www.oniri.space/ By trying the demo you'll be able to experience both its strengths (very high quality pseudo-3D on mobile) and its weakness (the illusion only holds within a small area, which is why you have to teleport from point to point).

The other technologies that come to mind, if the environments you want to capture are real rather than CG, are lightfields (Google's Welcome to Lightfields demo was done in Unity), and an example project can be found here: https://github.com/PeturDarri/Fluence-Unity-Plugin.
Also worth a look is what Varjo have just announced, 'Teleport': https://varjo.com/teleport/ / https://varjo.com/press-release/varjo-demonstrates-teleport-a-powerful-new-service-for-turning-real-... which looks like it could be a game changer for getting 3D Gaussian splatting to run on any standalone headset.

ArchieAndrews
Explorer

baroquedub many thanks for a concise bit of info!

The best result (still not ideal, and maybe not viable) I've achieved so far with a pretty basic approach was to write a script that smoothly compensates for head movement by moving the sphere towards or away from the camera as the user moves their head position. It only works for a small amount of movement, of course.

All this being said, without a great deal of technical knowledge and skill I think the challenge versus the reward might not be worth it. I'm an old indie dev, and an even older coal miner from my early years, and some of the scenes I develop tend to be heavy all round, but that's never been a massive issue on desktop with a decent GPU. My hope was to capitalise on my existing work (a lot of scenes and assets built over 10 years) to generate stereo 360s and use them as environments for VR storytelling. But overall it's like old-school VR with a static sphere, inviting nausea when the user moves around and the world moves with them.

On that note, optimisation it is! The good thing is that the tech and techniques are improving rapidly, so maybe in the near future I can revisit this.

Many thanks

Alan


I'm very much an old school dev too and tbh I see that as an advantage over the generation of younger devs who can rely on everyone having RTX 4000 series GPUs or XR2 Gen 2 chipsets on standalone. They have it easy compared to those of us who learned our trade optimising for GearVR and other antiquated platforms 😄

If you do opt for pure 360, perhaps a simple SBS (side-by-side) stereoscopic cubemap might do the trick ( https://www.youtube.com/watch?v=yDpOOOF0Xlk / https://github.com/Unity-Technologies/SkyboxPanoramicShader/blob/master/Skybox-PanoramicBeta.shader ), but of course you'll never get the parallax right or properly mitigate that head-on-a-stick view inside the skydome.
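As a hedged sketch of how a stereo panorama could be captured and shown, the snippet below uses Unity's Camera.RenderToCubemap per-eye overload and RenderTexture.ConvertToEquirect (both standard Unity APIs), which produce an over/under rather than side-by-side layout. The "_Layout" property name and its value are assumptions based on the built-in Skybox/Panoramic shader; check the properties of the linked beta shader before relying on them.

```csharp
using UnityEngine;

// Sketch: capture a stereo 360 image of the current scene and assign it to a
// panoramic skybox material. The equirect ends up in an over/under stereo
// layout; the material property "_Layout" (and the value 2 for over/under)
// is an assumption about the panoramic shader being used.
public class StereoPanoCapture : MonoBehaviour
{
    public Camera captureCamera;
    public Material panoramicSkybox;   // material using a panoramic skybox shader
    public int cubemapSize = 2048;

    public void Capture()
    {
        var cubeLeft  = new RenderTexture(cubemapSize, cubemapSize, 24) { dimension = UnityEngine.Rendering.TextureDimension.Cube };
        var cubeRight = new RenderTexture(cubemapSize, cubemapSize, 24) { dimension = UnityEngine.Rendering.TextureDimension.Cube };
        // Over/under stereo equirect: each eye is 2:1, stacked vertically.
        var equirect  = new RenderTexture(cubemapSize * 2, cubemapSize * 2, 24);

        captureCamera.stereoSeparation = 0.064f; // ~64 mm IPD
        captureCamera.RenderToCubemap(cubeLeft,  63, Camera.MonoOrStereoscopicEye.Left);
        captureCamera.RenderToCubemap(cubeRight, 63, Camera.MonoOrStereoscopicEye.Right);

        cubeLeft.ConvertToEquirect(equirect,  Camera.MonoOrStereoscopicEye.Left);
        cubeRight.ConvertToEquirect(equirect, Camera.MonoOrStereoscopicEye.Right);

        panoramicSkybox.mainTexture = equirect;        // shader's main texture slot
        panoramicSkybox.SetFloat("_Layout", 2f);       // assumed: 2 = over/under
        RenderSettings.skybox = panoramicSkybox;
    }
}
```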

Some advice I came across years ago re. optimisation that really stuck with me was the concept of treating your scenes in terms of near-field, mid-range and distance. Optimising is not about decimating every element in your environment in the same way; it's about treating each asset depending on how far it lies from the player (which of course requires you to know where the player is able to move). The Home environments are a good masterclass in this design approach. So, just to prove there's no sell-by date on quality, this video might be old but it's still pure gold in terms of explaining this concept of 'the range of perception': https://www.youtube.com/watch?v=KYYdvtf2DhQ&list=PL4p5NRamFPnitdsEKV8GpY3DllccH-mzl&index=40&t=909s
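One small, concrete way to apply that near/mid/far thinking in Unity is per-layer cull distances (Camera.layerCullDistances is a standard Unity API). The layer names below are illustrative assumptions, not anything from the thread: small detail props get culled early, medium props later, and large landmarks fall back to the far plane.

```csharp
using UnityEngine;

// Sketch: cull small and medium detail props beyond chosen distances so only
// near-field assets need full detail everywhere. Layers "SmallProps" and
// "MediumProps" are assumed to exist in the project; anything on other
// layers keeps the camera's normal far plane.
public class RangeOfPerceptionCulling : MonoBehaviour
{
    public Camera targetCamera;
    public float smallPropsCullDistance = 40f;
    public float mediumPropsCullDistance = 150f;

    void Start()
    {
        float[] distances = new float[32];  // one entry per layer; 0 = use far plane

        int small = LayerMask.NameToLayer("SmallProps");
        int medium = LayerMask.NameToLayer("MediumProps");
        if (small >= 0) distances[small] = smallPropsCullDistance;
        if (medium >= 0) distances[medium] = mediumPropsCullDistance;

        targetCamera.layerCullDistances = distances;
        targetCamera.layerCullSpherical = true;  // cull by distance, not frustum depth
    }
}
```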

Having been in IT for the longest part of my career, I agree. I've seen everything since my first machine, the ZX Spectrum, up to now on my trusty 1080, LOL, and that's showing my age. I'm not from the gaming/dev sector by any stretch, but the skills are all transferable.

The big challenge is always resources: the time and energy of one man doing everything from 3D builds to dev code and everything in between. The learning curves of new approaches eat a great deal of time, and often that time gets spent trying and testing concepts, which you have to do.

If anyone is interested, my YouTube channel and Facebook page have a lot of content related to my mining projects. Some are game-like, some represent the seriousness of mine disasters. I have a new VR project on App Lab in testing right now as an ongoing project: https://youtu.be/vRTBULMMvsE?feature=shared

https://youtu.be/-p126Xi_lyw


https://www.youtube.com/channel/UCS4xEu-iSdR_pPPkfBp20AA

Thanks again for the great advice !!

A