Forum Discussion
LokiMortmagus
13 years ago · Explorer
IMHO Spectrum Analysis is the way forward
Spectrum analysis.
As in, use existing, simple, ready-made tools that have already proven themselves a million times over to read our minds, in a close-to-literal sense, without lots of extra hardware, without half your room eaten up by a big treadmill rig from the 90's, and without a scuba suit like the one Vivid built back in the 90's and gave up on because the insurance liability from overheating in the suit made the financial metrics not worth it.
If you use the Windows sound tool to analyze your room for audio setup, you get a 3D image of the room.
So there's your treadmill taken care of, all software-based and virtualized. You just compare frame differentials on the FLIR spectrum, client side, to detect physical motion. FLIR stands for Forward-Looking Infrared.
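Frame differencing itself is a real, well-worn motion-detection technique, whatever sensor you point it at. A minimal sketch in Python, using small NumPy arrays as stand-in frames; the `detect_motion` function name and the threshold value are purely illustrative, not from any actual product:

```python
import numpy as np

def detect_motion(prev_frame, curr_frame, threshold=10.0):
    """Return True if the mean absolute per-pixel change between two
    frames exceeds the threshold -- basic frame differencing."""
    diff = np.abs(curr_frame.astype(float) - prev_frame.astype(float))
    return float(diff.mean()) > threshold

# Two synthetic 4x4 "frames": the second has a bright blob appear.
still = np.zeros((4, 4))
moved = np.zeros((4, 4))
moved[1:3, 1:3] = 100.0  # 4 hot pixels out of 16 -> mean diff = 25

print(detect_motion(still, still))  # no change
print(detect_motion(still, moved))  # change exceeds threshold
```

Real thermal or depth frames would just be bigger arrays fed through the same comparison.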
Then you have basic biometric data: measurements of how engrossed, excited, focused, and immersed in the experience you are. That's an easy one. It just needs a mic at the bottom of the headset, combined with eye data collected from the goggles as you watch the screen.
That same mic is your mind-reading device.
Microtwitches. Hyper-accurate data on both sound and airflow, captured in real time using active echo-sonar techniques, combined with a two-pass recursive loop that injects signal cancellation into the environment to mask the sonar noise.
Your facial muscles and breath, driven by the vocal cords in your throat, make tiny subconscious twitches that correspond to words and sentences when we think about things.
So sonar can pick up on this, make a good guess at which phoneme is being thought, then the next phoneme, until the engine can predict the word, then the next word, and so on.
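Set aside the sonar part: the phoneme-chaining step described above is, on its own, ordinary greedy sequence prediction. A toy sketch, with single letters standing in for phonemes and a bigram count as the "engine" (every name here is hypothetical, invented for illustration):

```python
from collections import Counter, defaultdict

def train_bigrams(words):
    """Count which phoneme most often follows each phoneme."""
    follows = defaultdict(Counter)
    for w in words:
        for a, b in zip(w, w[1:]):
            follows[a][b] += 1
    return follows

def predict_next(follows, phoneme):
    """Greedy guess: the most frequent successor, or None if unseen."""
    if phoneme not in follows:
        return None
    return follows[phoneme].most_common(1)[0][0]

# Toy "phoneme" sequences (letters stand in for real phonemes).
model = train_bigrams(["hello", "help", "hero"])
print(predict_next(model, "h"))  # 'e' follows 'h' in every training word
```

A real system would use a proper acoustic model and language model, but the predict-one-unit-then-the-next loop is the same shape.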
It's been around since 2006, and in use by special forces as Silent Talker since 2008.
I believe Microsoft's AudioDg.exe and the audio endpoint mapper can get you what you need, if you take the data and process it the right way.
There are many spectrum-analysis tools on Google Play, plus an echo amplifier that should get your samples amped enough to measure frame differentials accurately enough to predict words in real time...
Keep in mind, you can now easily sample at 192 kHz, and at frame rates in excess of 200-240 fps.
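For the spectrum-analysis step itself, running an FFT on 192 kHz samples is straightforward on any modern machine. A minimal sketch, assuming a NumPy environment and a synthetic test tone rather than real mic capture:

```python
import numpy as np

SAMPLE_RATE = 192_000  # 192 kHz, as mentioned above

def dominant_frequency(samples, rate=SAMPLE_RATE):
    """Return the strongest frequency (Hz) in a real-valued signal via FFT."""
    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / rate)
    return freqs[int(np.argmax(spectrum))]

# One second of a pure 440 Hz tone, sampled at 192 kHz.
t = np.arange(SAMPLE_RATE) / SAMPLE_RATE
tone = np.sin(2 * np.pi * 440.0 * t)
print(dominant_frequency(tone))  # → 440.0
```

With one second of audio the FFT bins are 1 Hz apart, so the peak lands exactly on the tone.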
This DOES work. It's been used on me, extensively, for weeks.
Mind reading, of surface-level thoughts.
For real.
And it can tell what my body position is, and how, when, how far, and how fast I move.
So fuck the 20,000 cyborg uniforms, lol
use the force, Luke...
1 Reply
- zalo (Explorer): I'm gonna need some links.
Specifically supporting the 3D mapping and mind reading bits.