Exporting 3D Scans Made on the Quest 3
It would be really cool to be able to export the models and textures made while 3D scanning environments with the Quest 3. This would let us import the scans into a 3D modelling program like Blender and set them up for use in another VR game or experience. In a way it would help expand the content available in the metaverse by letting users get creative: even if the 3D model isn't particularly high detail, users could add the missing detail in a 3D modelling program. We could also add extra detail by modifying the textures, normal maps, height maps, etc.

I would love to make a map of my gaming room for use in a social VR game! Years ago people used to make maps in Counter-Strike called RAT maps, where they would model an environment in which the player was tiny, playing in a space that was massive to them. Having this feature available would make it much more accessible to create fun content like that. Progress is good ^_^, and the fewer limitations we have on contributing content, the better! Unlock people's creativity ^_^

Virtual Glasses
I'm not certain whether this counts as accessibility, but it's in the same ballpark. If all glasses do is warp the way light hits our eyes to make our vision clearer, would it be possible for a VR headset to take a glasses prescription and change the way it presents images, mimicking the glasses and clearing the wearer's vision in virtual space? The fact that we have glasses prescriptions means we know how those measurements affect our vision, so why can't we program the headset to make the same corrections to the images it presents? Combined with eye tracking and foveated rendering, it doesn't seem like it should be impossible.
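To make the idea concrete, here is a minimal sketch of the optics behind it, using only the standard relationship between a prescription's sphere value (in diopters) and focal distance. The function names are illustrative, not part of any headset SDK, and this only models simple myopia (negative sphere), ignoring cylinder/axis (astigmatism):

```python
# Sketch: relating a glasses prescription to where a headset would need to
# place its virtual image. Assumes simple myopia; names are hypothetical.

def far_point_m(sphere_diopters: float) -> float:
    """Furthest distance (metres) a myopic eye can focus: 1 / |D|.
    A headset rendering its image plane at or inside this distance
    would appear sharp without glasses."""
    if sphere_diopters >= 0:
        raise ValueError("this sketch handles myopia (negative sphere) only")
    return 1.0 / abs(sphere_diopters)

def residual_blur_d(sphere_diopters: float, display_distance_m: float) -> float:
    """Leftover defocus (diopters) when the headset's optical image plane
    sits at display_distance_m instead of at the wearer's far point."""
    return abs(sphere_diopters) - 1.0 / display_distance_m

print(far_point_m(-2.0))  # 0.5
```

So a -2.0 D myope focuses clearly only out to 0.5 m; a headset whose optics place the display at that distance (or that pre-compensates by the residual diopters) could in principle show a sharp image without glasses, which is what the post is asking for.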
Oculus Design Improvement Concept
So I included this in some of the public-facing forums as well, but I figured it was worth sharing on the developer side too, and I was interested to hear what people have to say about it. This would increase the cost of the device, but it would also improve its capability to natively support augmented reality. The Rift currently only natively supports VR, with AR possible through external attachments such as the OVRVision device. That device, on its own, has frankly subpar performance compared to the Rift proper, so I have to wonder whether the OVR team could design a system that handles this better.

My suggestion is a system in which the front opaque panel can shift between opaque and transparent, or can move out of the viewer's way by either folding or moving around the user's head. One example method would be to shift the hardware in the front-facing lens laterally or upward to make room. Assuming you'd prefer the transparent-to-dark backing transition with no motors or movement (which I expect), add to the API a single function that takes the desired transparency value and automatically sets it in the backing hardware. This may also require changes to your display technology, based on my understanding of how it works. I would argue that the technology to handle this exists and it should be doable, but I'm not sure whether you would deem the necessary R&D costs appropriate. Another concern might be latency, if you also want the feature to feed back an image of what the player can see through the lens, but regardless I do think this is worthy of consideration.

Anywho, I'm Forrest Shooster, a VR/AR/MR game dev who just had a passing thought I wanted to share with the Oculus team. I'll be sharing this elsewhere as well. Thanks.

Cheers,
Forrest Z. Shooster AKA Argzero
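The "single function which takes in the transparency value" that the post proposes could be sketched like this. Everything here is hypothetical: set_panel_transparency and _write_panel_register are made-up names standing in for an imagined SDK call and driver hook, not part of any real Oculus API:

```python
# Hypothetical sketch of the proposed API: one function that accepts a
# desired front-panel transparency, clamps it to a valid range, and hands
# it to the (imaginary) backing hardware.

def _write_panel_register(value: float) -> None:
    """Stand-in for the driver call that would drive a switchable panel."""
    print(f"panel transparency set to {value:.2f}")

def set_panel_transparency(alpha: float) -> float:
    """Clamp alpha to [0.0, 1.0] (0 = fully opaque VR mode,
    1 = fully transparent AR mode), apply it, and return the value used."""
    clamped = max(0.0, min(1.0, alpha))
    _write_panel_register(clamped)
    return clamped

set_panel_transparency(1.5)  # out-of-range request is clamped to 1.0
```

Clamping in the API (rather than trusting callers) keeps the backing hardware from ever receiving an invalid drive value, which matters if the panel state is set directly by applications.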