Forum Discussion
ganzuul
12 years ago · Honored Guest
/r/CrossView
This subreddit really made me think about the level of detail I'd want in VR games:
http://www.reddit.com/r/CrossView
My favourite so far has to be http://www.flickr.com/photos/ytf/5087152269/
What if indie designers could scan objects like these, maybe with some sort of Steadicam+HUD device, and just have them appear in-game at the max detail the end-user's hardware supports? Artists could be freed to spend their time designing plot devices and characters instead of so many background props, resulting in a more believable game world.
I don't think the engineering aspect would be too difficult with modern techniques. A portable set of gear could be something like a Hydra duct-taped to a stereoscopic video camera, or a Hydra+Kinect+camera combo. The video and tracking data would be reconstructed as a point cloud, and you'd run marching cubes over it to generate an extremely high-res model. Then you'd simply apply some magic and know-how, and maybe leverage some core competencies, to make the textures and decimate the model for export into render-world... Perhaps those panorama-stitching algorithms are dimension-agnostic?
But either way, this subreddit seems relevant to our interests.
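The scan-to-mesh pipeline sketched above (point cloud in, occupancy volume out, then marching cubes) can be illustrated with a minimal numpy snippet. This is my own sketch, not any particular scanner's code: it just bins a synthetic "scan" into a voxel grid, which is the kind of volume a marching-cubes implementation (e.g. `skimage.measure.marching_cubes`) would then turn into a high-res mesh.

```python
import numpy as np

def voxelize(points, grid_size=32):
    """Bin a point cloud into a dense occupancy grid.

    points: (N, 3) array of XYZ samples, e.g. from a depth camera.
    Returns a (grid_size, grid_size, grid_size) float array with 1.0
    in occupied cells. A surface-extraction step such as marching
    cubes could then be run over this grid to produce a mesh.
    """
    lo = points.min(axis=0)
    hi = points.max(axis=0)
    # Normalize points into [0, grid_size - 1] and truncate to cell indices.
    scaled = (points - lo) / (hi - lo + 1e-9) * (grid_size - 1)
    idx = scaled.astype(int)
    grid = np.zeros((grid_size,) * 3, dtype=np.float32)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = 1.0
    return grid

# Synthetic "scan": points sampled on the surface of a unit sphere.
rng = np.random.default_rng(0)
v = rng.normal(size=(5000, 3))
pts = v / np.linalg.norm(v, axis=1, keepdims=True)
grid = voxelize(pts, grid_size=32)
```

In a real pipeline the tracked camera poses would place each depth frame's points into a shared world frame before this binning step, and you'd accumulate a density or signed-distance value per cell rather than a hard 0/1.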
3 Replies
- MrGeddings · Explorer
Nice. The low-res dev kit screen means you can't do super-detailed stuff, though it seems that with the Rift dev kit things do look a bit better up close than they do far off :-)
- ganzuul · Honored Guest
Yup. And you get a better stereoscopic view when you're close, and people will try to look at things up close again even if they learned it was pointless in earlier 3D games.
- geekmaster · Protege
"ganzuul" wrote:
... Perhaps those panorama-stitching algorithms are dimension-agnostic?
Stereoscopic panorama stitching requires finding matching "landmarks" in both images, and warping them vertically while viewing. There is a lot of research and many examples on the net.
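The landmark-matching-and-vertical-warp step described above can be sketched in a few lines. This is a minimal numpy illustration under my own assumptions (the function names and the toy landmark data are mine, not from any stitching library): given matched feature coordinates from the left and right images, estimate the systematic vertical disparity and shift one image's rows to remove it, leaving only the horizontal disparity that carries the depth.

```python
import numpy as np

def vertical_alignment_shift(left_pts, right_pts):
    """Estimate the vertical offset between matched landmarks.

    left_pts, right_pts: (N, 2) arrays of (row, col) coordinates of
    the same scene features found in each eye's image. In a clean
    stereo pair only horizontal disparity should remain; systematic
    vertical disparity causes eye strain, so we estimate it here.
    """
    dy = left_pts[:, 0] - right_pts[:, 0]
    # Median is robust to a few bad feature matches.
    return int(round(np.median(dy)))

def warp_vertically(image, shift):
    """Shift image rows so its landmarks line up with the other eye."""
    return np.roll(image, shift, axis=0)

# Toy matches: the right image's features sit 3 rows lower and
# 12 columns to the right of the left image's.
left = np.array([[10.0, 5.0], [40.0, 80.0], [72.0, 33.0]])
right = left + np.array([3.0, 12.0])
shift = vertical_alignment_shift(left, right)
```

A real stitcher would find the matches automatically (SIFT/ORB-style features) and apply a smoothly varying warp rather than a single global shift, but the principle is the same.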
Try this one, but change the view to SBS wide-eye with the buttons in the upper right corner:
http://www.3dpan.org/3d/40629-40633-201-201
More information:
http://www.mtbs3d.com/phpBB/viewtopic.php?f=138&t=16502
http://www.stereopanoramas.com/blog/
http://www.3d-360.com/
http://www.calit2.net/~jschulze/publications/Ainsworth2011.pdf
http://www.disneyresearch.com/project/megastereo/
And of course, what has Paul Bourke NOT gotten his hands into?
http://paulbourke.net/stereographics/stereopanoramic/