
EMG + EEG + EOG = Path to BCI

Markystal
Explorer
I've been working on a Virtual Reality rig design since last year and plan to build it within this year, with EMGs as the primary input method for the experience. EMGs (electromyograms) read the electrical signals that skeletal muscles produce when they're activated by our movements, giving a picture of what the body is doing; that information can then be used as input for computer applications. Along the same lines, EOG (electrooculogram) data would be used to detect where the eye is looking, and an EEG (electroencephalogram) could gather information on the player's mental state for additional data.
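To make the EMG-as-input idea concrete, here is a minimal sketch of the usual first step: rectify the raw signal, take a moving RMS envelope, and threshold it to decide whether the muscle is active. Everything here is synthetic and illustrative (the sample rate, window length, and threshold are assumptions, not values from any particular sensor):

```python
import numpy as np

def emg_envelope(signal, fs, win_ms=50):
    """Rectify the raw EMG and compute a moving RMS envelope."""
    win = max(1, int(fs * win_ms / 1000))
    kernel = np.ones(win) / win
    # Moving RMS: square root of the moving average of the squared signal.
    return np.sqrt(np.convolve(np.abs(signal) ** 2, kernel, mode="same"))

def muscle_active(signal, fs, threshold):
    """Boolean mask: True wherever the envelope exceeds the threshold."""
    return emg_envelope(signal, fs) > threshold

# Synthetic example: 1 s of quiet baseline, then 0.5 s of "contraction".
fs = 1000  # Hz (assumed sample rate)
rng = np.random.default_rng(0)
quiet = rng.normal(0, 0.05, fs)
burst = rng.normal(0, 1.0, fs // 2)
signal = np.concatenate([quiet, burst])

active = muscle_active(signal, fs, threshold=0.3)
print(active[:fs].mean(), active[fs:].mean())  # mostly False, then mostly True
```

A real pipeline would band-pass filter the signal first and calibrate the threshold per electrode site, but the envelope-and-threshold shape is the same.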

While I've yet to test out the functionality (funds, waiting for products to be released, lack of time), I've already started to look ahead to where the technology could be taken and how the design could be improved for accuracy, ease of use, and practical applications. Oftentimes, great revolutions are only waiting for someone to realize what's possible (see the Oculus Rift), and I think EMGs, EOGs, and EEGs are such a technology, at the very least within the realm of computer input devices. In the near term, I'd like to think of my rig's (https://developer.oculusvr.com/forums/viewtopic.php?t=5358) appendage-mounted EMGs as training wheels for an eventual headpiece-only system.

Still, it's best to think of the path rather than the destination, and I think development should target increasing the amount, accuracy, and comprehension of the data we get from EEGs. The best way to do that, I think, is a steady progression: collect data from people using EEGs, correlate it to particular movements, then scale up EEG data collection in the major areas. However, knowing what people are doing from an EEG alone is currently impractical, and getting enough research done in this field will likely be difficult without a lot of funding. This is where the EMGs come in. With EMGs and EOGs serving to determine what activity someone is doing while wearing an EEG, we can build a large, precisely labelled pool of data to tie the EEG readings to. If we then get EEGs of greater accuracy, we get a better-defined data pool for inferring what action the brain intends, and thus reduce the need for the EMGs. Should EEG data ever become accurate enough to remove the need for the EMGs entirely, then I believe we'll have found the ultimate interface for virtual reality until direct conscious uploads or mind reading come about, if I do say so myself. Hence why I'm aiming all my development efforts at current EEGs and EMGs by supporting the Emotiv Insight and Thalmic Labs MYO.
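The bootstrapping idea above (use EMG to label what the body is doing, then train an EEG decoder on those labels) can be sketched very simply. Here the "EEG features" are synthetic two-dimensional vectors and the classifier is a nearest-centroid decoder; real features would come from a headset, so everything below is an assumed toy setup:

```python
import numpy as np

rng = np.random.default_rng(1)

# Each row stands in for a band-power feature vector from one EEG window;
# the label records whether EMG showed movement in that window.
def make_trials(n, moving):
    base = np.array([1.0, 2.0]) if moving else np.array([2.0, 1.0])
    return base + rng.normal(0, 0.3, size=(n, 2))

X = np.vstack([make_trials(100, False), make_trials(100, True)])
y = np.array([0] * 100 + [1] * 100)  # 0 = rest, 1 = movement (from EMG)

# Nearest-centroid "decoder": about the simplest possible EEG classifier.
centroids = np.array([X[y == c].mean(axis=0) for c in (0, 1)])

def decode(x):
    return int(np.argmin(np.linalg.norm(centroids - x, axis=1)))

# Once trained on EMG-labelled data, the decoder runs on EEG alone.
test_trials = make_trials(50, True)
accuracy = np.mean([decode(x) == 1 for x in test_trials])
print(accuracy)
```

The point isn't the classifier (a real system would use something stronger); it's that the EMG supplies the ground truth so the EEG side never needs hand labelling.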

Lindblum
Explorer
I do happen to own an Emotiv EPOC, but I haven't used it very much, because the electrodes are tricky to set up properly, and I haven't had much luck controlling it. I had originally planned to combine it with the Rift, but it would be basically impossible to wear both at the same time, and I imagine the Rift might even interfere with the sensitive electrodes.

Markystal
Explorer
"Lindblum" wrote:
I do happen to own an Emotiv EPOC, but I haven't used it very much, because the electrodes are tricky to set up properly, and I haven't had much luck controlling it. I had originally planned to combine it with the Rift, but it would be basically impossible to wear both at the same time, and I imagine the Rift might even interfere with the sensitive electrodes.

Good points. Right now, I'm actually looking more in the direction of something like the Emotiv Insight, due to be released basically this month, because of its dry sensors (no saline solution needed) and smaller form factor. That should make it easier to put on and use, but at the same time I can see what you mean about wearing both simultaneously. I usually like to approach this from the design perspective of having both items within one package, a la a helmet system. That would negate the need for the Rift's headbands that interfere with electrode placement and make it easier to quickly get into the game. However, using EEGs like the EPOC, Insight, and MindWave on their own doesn't seem practical to me due to the current latency and imprecision of the tools. But I think it would be in our best interests to explore creating and gathering data while using EEGs and EMGs in tandem, due to possible optimizations in operation. One such optimization would be emphasizing the placement of electrodes over the motor cortex to get a better picture of what actions the user is performing. As CPU tech advances, latency should drop as the number crunching of the EEG data is sped up. If the tech matures, we may even get better at optimizing the code for extracting movement data. Thanks for the input; I'd honestly forgotten a bit about the older Emotiv models in thinking about this. They may be clunky, but they could still have their uses.
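For the motor-cortex idea above, the classic signal is the mu rhythm (roughly 8–12 Hz), which is suppressed over the motor cortex during movement or motor imagery. A minimal sketch of the relevant feature, band power via an FFT, is below; the signals, sample rate, and amplitudes are all simulated assumptions, not real recordings:

```python
import numpy as np

def band_power(eeg, fs, lo, hi):
    """Average spectral power of a single-channel signal in [lo, hi] Hz."""
    freqs = np.fft.rfftfreq(len(eeg), d=1 / fs)
    power = np.abs(np.fft.rfft(eeg)) ** 2 / len(eeg)
    mask = (freqs >= lo) & (freqs <= hi)
    return power[mask].mean()

fs = 250  # Hz (assumed consumer-EEG sample rate)
t = np.arange(fs * 2) / fs  # a 2 s analysis window

# Simulate a strong 10 Hz mu rhythm at rest and a suppressed one during
# movement, each with a little noise on top.
rest = np.sin(2 * np.pi * 10 * t) \
    + 0.1 * np.random.default_rng(2).normal(size=t.size)
moving = 0.2 * np.sin(2 * np.pi * 10 * t) \
    + 0.1 * np.random.default_rng(3).normal(size=t.size)

print(band_power(rest, fs, 8, 12) > band_power(moving, fs, 8, 12))
```

A decoder built on electrodes over the motor cortex would watch for exactly this drop in mu-band power as its movement cue.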

ideamachinedev
Honored Guest
"Lindblum" wrote:
I do happen to own an Emotiv EPOC, but I haven't used it very much, because the electrodes are tricky to set up properly, and I haven't had much luck controlling it. I had originally planned to combine it with the Rift, but it would be basically impossible to wear both at the same time, and I imagine the Rift might even interfere with the sensitive electrodes.


I think OpenBCI might be a better choice, along with the electrode cloth hat.

ideamachinedev
Honored Guest
"Markystal" wrote:
I've been working on a Virtual Reality rig design since last year and plan on building it within this year, with the primary foundation being the use of EMGs as the primary input method for the experience. Essentially, the EMGs (Electromyograms) read the electrical signals that skeletal muscles make when they're activated by our movements to get an idea of what the body is doing and thus, use that information as an input for computer applications. Along the same lines, EOG (electrooculogram)) data would be used to detect where the eye is looking and an EEG (Electroencephalogram) could be used to gather information on the players mental state for additional data.

While I've yet to test out the functionality (funds, waiting for products to be released, lack of time), I've already started to look ahead to where the technology could be taken in the future and how the design could be improved for more accuracy, ease of use, and practical applications. Often times, great revolutions are only waiting for someone to realize what's possible (see Oculus Rift) and I think EMGs, EOGs, and EEGs are such a technology, at the very least within the realm of computer input devices. In the near term, I'd like to think of my rig's (https://developer.oculusvr.com/forums/viewtopic.php?t=5358) appendage situated EMGs as training wheels for an eventual head piece only system.

Still, best think of the path rather than the destination, and I think that development should target increasing the amount of data, the accuracy of data, and the comprehension of data that we get from EEGs. To do that, I think that having a steady progression of data that gained from people using EEGs while correlating it to particular movement, then increasing the EEG performance data collection in major areas, is the best way. However, knowing what people are doing with an EEG alone right now is impractical and getting enough research done in this field will likely be difficult without a lot of funding. This is where the EMGs come in. With EMG's and EOGs serving to determine what activity someone is doing while wearing an EEG, we can get a great pool of data that gives a precise data set of information to tie EEG data to. If we get EEGs of greater accuracy, we can by extension get a more defined data pool with which to know what action the brain intends to do, and thus reduce the need to have the EMGs. Should the EEG data ever reach the point where it's accurate enough to remove the need for the EMGs to begin with, then I believe we'll have found the ultimate interface for use in virtual reality until direct conscious uploads or mind reading come about, if I do say so myself. Hence why I'm aiming all my development efforts on current EEGs and EMGs by supporing the EMOTIV Insight and Thalmic Labs MYO.


I am planning a similar act, and I'd be happy to discuss it with you in detail; BCI has been my undergrad fantasy 🙂 Although using EEG/EOG as inputs is going to be very, very difficult, since current BCI technology is based on the P300 wave, which arrives about 300 ms after the stimulus: huge latency there. Alternate techniques are underway, but they depend heavily on the FFT, a windowed approach that requires a certain number of data points before the algorithm can run, which in turn adds latency. EMG might be the best bet when it comes to latency, but if you have EMG you can add accelerometer input far more easily, and that takes artifact removal and expensive filter/algorithm design out of the picture.
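The windowing latency mentioned above is easy to put numbers on: an N-point analysis window at sample rate fs cannot produce its first estimate until N/fs seconds of data have arrived, before any processing time. The sample rate and window sizes below are illustrative assumptions:

```python
def window_latency_ms(n_points, fs_hz):
    """Minimum latency before an N-point FFT window is full, in ms."""
    return 1000 * n_points / fs_hz

# e.g. at an assumed 128 Hz sample rate, a 256-point window needs 2 s of
# data before the first spectrum even exists -- far above what responsive
# VR input tolerates, and a P300 response adds ~300 ms on top of that.
for n in (64, 128, 256):
    print(n, window_latency_ms(n, 128))
```

Sliding the window forward sample by sample helps update rate but not this initial fill time, which is why short windows (and hence coarse frequency resolution) are the usual compromise.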

My theory is... the Oculus is very useful for BCI, not as an input but as feedback to close the loop. The Oculus's power for rendering realistic simulations could be used to easily train models/neural networks, starting with very slow motions and moving forward.
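One way to read the closed-loop idea above is as online adaptation: the headset renders what the decoder predicted, the visible discrepancy from the intended motion is the error signal, and the decoder updates from it. Here is a toy sketch using a least-mean-squares update on synthetic features; the "true" mapping and the user's intent are simulated, which is the big assumption:

```python
import numpy as np

rng = np.random.default_rng(4)
true_w = np.array([0.8, -0.3])  # unknown mapping from features to motion
w = np.zeros(2)                 # decoder weights, learned online
lr = 0.05

# Closed loop: decode a motion, render it, observe the error the user
# sees on screen, and nudge the decoder. Starting with slow motions
# keeps the updates small and well behaved.
for step in range(2000):
    x = rng.normal(size=2)       # one window of (synthetic) features
    intended = true_w @ x        # what the user meant to do
    rendered = w @ x             # what the headset actually showed
    error = intended - rendered  # the visible discrepancy closes the loop
    w += lr * error * x          # LMS update

print(np.allclose(w, true_w, atol=0.05))
```

In a real system the "intended" signal is the hard part (it would have to come from the task design, e.g. guided slow movements), but the update structure is the same.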

Feel free to m3ssage me at ideamachinedev at Gm41L