We are happily migrating to UE 4.19. This version fixes an important bug in the Oculus subsystem (present in UE 4.17 and 4.18) that prevented us from establishing a connection between a server and one or more clients. UE 4.19 also brings some interesting features we would like to use, among them native voice spatialization.
Before UE 4.19 we had to modify the engine's source code to expose references to the AudioComponents and correctly associate each AudioComponent with the right player. With a little work we got the new UVOIPTalker working; as usual we tried it first with the Null subsystem. Once it worked there, we tried it with the Oculus subsystem, and got a bitter surprise: native voice spatialization does not work at all with it. To understand why, we looked into the subsystems' source code and found that, apart from the Null subsystem, none of the other online subsystems uses this new feature.
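For context, this is roughly how we set up the talker when testing with the Null subsystem. It is a minimal sketch, not our full code: the character class name and the `VoiceAttenuation` property are hypothetical, while `UVOIPTalker` and its `Settings` struct are the engine's own 4.19 API (from `VoiceConfig.h`).

```cpp
#include "Net/VoiceConfig.h"
#include "GameFramework/Character.h"
#include "GameFramework/PlayerState.h"

// AVoiceCharacter is a hypothetical character class used for illustration.
void AVoiceCharacter::SetupVoice()
{
    if (PlayerState) // APawn's PlayerState, valid once replication has caught up
    {
        // Create (or fetch) the talker component bound to this player state.
        if (UVOIPTalker* Talker = UVOIPTalker::CreateTalkerForPlayer(PlayerState))
        {
            // Attach the incoming voice audio to the character's root so it
            // is spatialized at the character's position.
            Talker->Settings.ComponentToAttachTo = GetRootComponent();

            // VoiceAttenuation is a hypothetical USoundAttenuation* property
            // assigned in the editor; it drives the 3D attenuation of the voice.
            Talker->Settings.AttenuationSettings = VoiceAttenuation;
        }
    }
}
```

With the Null subsystem this is enough to get positional voice; with the Oculus subsystem the same setup produces no spatialization, which is what prompted this thread.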
So, this thread is to ask whether you are considering implementing support for this new feature in your subsystem in the short term.
We already used your OnlineVoiceOculus, but since your subsystem (like the others) does not expose a way to obtain the association between the UniquePlayerId and the player's AudioComponent, until now we had to modify the engine's source code (including OnlineVoiceOculus) to expose this association, retrieve the AudioComponent, and spatialize it by attaching it to the right player character.
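To make the request concrete, here is a sketch of what we do once the engine modification gives us the association. `GetCharacterForPlayer` is a hypothetical helper (our own lookup from unique net id to the spawned character); `UAudioComponent::AttachToComponent` and `bAllowSpatialization` are standard engine API.

```cpp
#include "Components/AudioComponent.h"
#include "GameFramework/Character.h"

// Hypothetical helper: resolves a remote talker's unique net id to the
// character actor spawned for that player (implementation omitted).
ACharacter* GetCharacterForPlayer(const FUniqueNetId& TalkerId);

// Given the talker's id and the voice AudioComponent the subsystem created
// for them, attach the component to the talker's character so the voice
// plays from the character's position.
void SpatializeTalker(const FUniqueNetId& TalkerId, UAudioComponent* VoiceComp)
{
    if (ACharacter* TalkerChar = GetCharacterForPlayer(TalkerId))
    {
        VoiceComp->AttachToComponent(TalkerChar->GetRootComponent(),
            FAttachmentTransformRules::SnapToTargetNotIncludingScale);
        VoiceComp->bAllowSpatialization = true;
    }
}
```

The only missing piece on the subsystem side is a public way to obtain `VoiceComp` for a given `TalkerId`; today that association lives in internals we can only reach with a custom engine build.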
The engine now provides a standard way to spatialize players' voices, which lets developers do this without maintaining a custom build of the engine. So, if you do not plan to support this new feature, please at least consider exposing a way to associate each player's AudioComponent with that player's character.