Forum Discussion
thousel
13 years ago · Honored Guest
Using a camera inside the Rift
For some time I've been interested in creating a more immersive communication experience than something like a video call in Skype would allow - something that feels more like a face-to-face encounter.
I was thinking of using a Kinect sensor with the open source Kinfu (http://pointclouds.org/documentation/tutorials/using_kinfu_large_scale.php) to create real-time texture-mapped 3D models of people, then manipulating the models to, say, put people around a virtual table (or whatever else) that's displayed on a Rift. However, since all these people would be wearing Rifts, no one could see each other's eyes. Since eye contact is such an important part of communication, that wouldn't work very well.
I was thinking about putting one or two very small cameras inside the Rift to get facial images and trying (perhaps with the help of some photos taken beforehand) to paste the facial images into the texture-mapped model.
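Roughly what I have in mind for the pasting step, as a sketch: assume the face occupies a known four-corner quad in the scan's texture atlas (that mapping would have to come from the reconstruction or a one-off calibration) and that the in-Rift camera frame is already cropped to the face. Everything here is illustrative, not something I've tested:

```python
import cv2
import numpy as np

def paste_face_into_atlas(atlas, face_frame, face_quad_uv, blend=0.7):
    """Warp a cropped face-camera frame onto the face region of the scan's
    texture atlas. `face_quad_uv` is the 4-corner quad (in atlas pixel
    coordinates) that the face occupies in the UV layout."""
    h, w = face_frame.shape[:2]
    src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])  # corners of the camera frame
    dst = np.float32(face_quad_uv)                       # corresponding corners in the atlas
    M = cv2.getPerspectiveTransform(src, dst)
    warped = cv2.warpPerspective(face_frame, M, (atlas.shape[1], atlas.shape[0]))
    mask = cv2.warpPerspective(np.full((h, w), 255, np.uint8), M,
                               (atlas.shape[1], atlas.shape[0]))
    out = atlas.copy()
    region = mask > 0
    # Simple alpha blend of the live face over the pre-scanned texture.
    out[region] = (blend * warped[region] + (1 - blend) * atlas[region]).astype(atlas.dtype)
    return out
```

Filling in the parts the camera can't see from the photos taken beforehand would be a separate step on top of this.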
Is there enough space or light inside the Rift for the cameras to work?
I realize this would probably be easier with a CAVE VR system, but that would of course be much more expensive, and I'd like to make something useful for normal people (if people who have a Kinect, Rift, and beefy GPU are normal).
11 Replies
- Anonymous: I think the problem of 'no eye contact' could be bypassed fairly easily.
I want to make clear that this is just a vision of mine (no pun intended), and I haven't received my Rift yet, so I can't say for sure. But my idea is that you make a faint dot in the center of the screen, and that is where the model's eyes would always look.
So all a player would have to do to make eye contact would be to line up that dot with roughly the center of the other target's head.
It won't look flawless, because of the known "bug" of appearing to look toward someone but not actually at them. However, it will look better than nothing.
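Roughly, the check each client would run might look like this (just a sketch of the idea; the function name and the 5-degree threshold are things I'm making up):

```python
import numpy as np

def looking_at(head_pos, head_forward, target_pos, max_angle_deg=5.0):
    """True if the viewer's forward direction passes within max_angle_deg
    of the target's head -- i.e. the centre dot is 'lined up' on them."""
    to_target = np.asarray(target_pos, dtype=float) - np.asarray(head_pos, dtype=float)
    to_target /= np.linalg.norm(to_target)
    forward = np.asarray(head_forward, dtype=float)
    forward /= np.linalg.norm(forward)
    angle = np.degrees(np.arccos(np.clip(np.dot(forward, to_target), -1.0, 1.0)))
    return angle < max_angle_deg

# Each client could then aim its avatar's eyes: if the remote player is
# "looking at" us, point their avatar's eyes straight at our viewpoint;
# otherwise leave them aimed along that player's head-forward direction.
```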
Thanks.
- zalo (Explorer): Imagine using one of these systems inside of a Rift:
The dichroic mirror would be between the lens and the screen.
- lionleaf (Meta Employee): The Emotiv EEG can detect facial expressions, I've heard it does so pretty well. I haven't tried it myself, but it might be worth looking into? http://emotiv.com/
- snapdata (Honored Guest):
"lionleaf" wrote:
The Emotiv EEG can detect facial expressions, I've heard it does so pretty well. I haven't tried it myself, but it might be worth looking into?
http://emotiv.com/
I'm confident that any company selling a machine which they claim can read people's thoughts is completely full of shit.
- thousel (Honored Guest): I found the Emotiv stuff interesting, but not for this.
For a conversation to feel realistic, I think you'd need to use real images of the other person's face (just translated a bit onto a texture map). But since each party would be wearing a Rift covering their face, well, you see the problem. It seems like there are going to be limits on how realistic I could make it.
- mechmouse (Honored Guest): Did anyone ever try this, to see whether cameras can be placed inside the Rift? One potential project requires me to know the direction of the operator's eyes. I assume it should be possible.
- kingtut (Honored Guest): I was thinking about this recently as well (while re-reading Snow Crash). The only way I could think of to sense facial movement well when using an HMD was EMG [1]. Unfortunately, I can't see many people being willing to attach a load of sensors to their faces.
Depending on the scope and quality of the emotions to be detected (and also things like lip-sync for voice), you may be able to get away with only a few sensors, mapping their readings to specific codes using something like FACS [2]; those standardized codes could then be used to update meshes. But still, I can't picture people being willing to put on something like a balaclava, let alone glue transducers/sensors to their face.
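As a toy sketch of what I mean by mapping a few channels to codes (the channel names, action units, and gains below are placeholders, not real calibration data):

```python
# Toy mapping from a few (hypothetical) EMG channel readings to FACS action
# units, then to blendshape weights on an avatar mesh. Real use would need
# per-user calibration.
EMG_TO_AU = {
    "zygomaticus": "AU12",   # lip corner puller (smile)
    "corrugator":  "AU4",    # brow lowerer (frown)
    "masseter":    "AU26",   # jaw drop (rough lip-sync proxy)
}

AU_TO_BLENDSHAPE = {
    "AU12": "mouthSmile",
    "AU4":  "browDown",
    "AU26": "jawOpen",
}

def emg_to_blendshapes(emg_levels, gain=1.0):
    """emg_levels: dict of channel name -> normalised activation in [0, 1]."""
    weights = {}
    for channel, level in emg_levels.items():
        au = EMG_TO_AU.get(channel)
        if au is None:
            continue
        weights[AU_TO_BLENDSHAPE[au]] = max(0.0, min(1.0, level * gain))
    return weights

# e.g. emg_to_blendshapes({"zygomaticus": 0.8, "corrugator": 0.1})
#      -> {"mouthSmile": 0.8, "browDown": 0.1}
```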
Maybe you could mount sensors onto the HMD and HMD strap - that could capture a few of the key muscle groups. Facial EMG research appears to be pretty immature (and expensive) at the moment though.
[1] http://www.facialemg.com/
[2] http://en.wikipedia.org/wiki/Facial_Action_Coding_System
- ZeroWaitState (Honored Guest): Whilst not obviously for CV1, in the 5-10 year range I think scanning directly onto the retina with laser scanners will free up the hardware real estate around the face. Eye tracking will be an integral part of making that tech usable, and at that point having your avatar's eyes respond accordingly should be cake.
Pangolin have just made some major innovative design changes in building high-performance scanners, improving thermal performance (and therefore speed) enormously while massively reducing cost by securing orders of something in the range of 5 million units a year for the next 5 years (probably from the automotive industry). The impact of the ScannerMAX scanners on the laser lighting industry (my background) is no doubt going to be profound: providing scanners that are better than the traditional industry-standard Cambridge Technology scan sets, which cost 1200+ per set, at the price of average Chinese knock-offs is a clever trick.
I digress, my apologies.
- saviornt (Protege):
"snapdata" wrote:
"lionleaf" wrote:
The Emotiv EEG can detect facial expressions, I've heard it does so pretty well. I haven't tried it myself, but it might be worth looking into?
http://emotiv.com/
I'm confident that any company selling a machine which they claim can read people's thoughts is completely full of shit.
It doesn't read people's thoughts per se. It's more or less a "consumer-friendly" EEG, and you have to train the crap out of it, and even then it's not exactly accurate.
From what I've heard, that is.
- hellary (Protege): I'm sure an eye-tracking system will end up in a future Rift design; it's not a complicated thing to do, and with enough time it can be miniaturised into a Rift. Such a system (like the one in that dubious video above) could track eye movement, pupil dilation, and how open someone's eyes are, and coupled with a high-FPS, high-resolution camera that can read mouth movements (smiles, etc.), this could all be put together for conference settings. Of course, it'd be much more interesting in things like multiplayer gaming, in my opinion.
An interesting by-product of this could be that future Rifts with >4K resolution and sufficiently reduced latency could have a system whereby only the areas people are looking at are rendered in 'full' quality, with peripheral areas rendered at a reduced quality setting. This could allow games to provide a good increase in modelled complexity without sacrificing noticeable quality for the user, i.e. wherever they're looking, it looks great.
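Very roughly, deciding the quality per screen tile could come down to something like this (the degree bands and scale factors are numbers I've invented just to show the idea):

```python
import math

def tile_quality(gaze_dir, tile_dir, inner_deg=10.0, outer_deg=30.0):
    """Pick a render-quality scale for a screen tile from its angular
    distance to the gaze direction. Both directions are unit vectors in
    view space; the degree bands are made-up numbers."""
    cos_e = max(-1.0, min(1.0, sum(g * t for g, t in zip(gaze_dir, tile_dir))))
    eccentricity = math.degrees(math.acos(cos_e))
    if eccentricity < inner_deg:
        return 1.0    # full resolution where the user is actually looking
    if eccentricity < outer_deg:
        return 0.5    # half-resolution ring
    return 0.25       # coarse periphery
```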