Voice SDK - Wit.ai General Questions Thread

mouse_bear
Retired Support

Do you have any general questions regarding the wit.ai framework within the Voice SDK? Drop your questions, concerns, and feedback here!

12 REPLIES

If a word I am saying is incorrectly recognised (e.g. I said 'Close' but the text in the utterance shows as 'Lows'), the guidance seems to imply that I should correct the text and then validate. Is that correct? And if a word is consistently misinterpreted, is it OK to add the incorrect word to the entity's keyword list?

How can you add context to the interpretation of utterances? I'm specifically thinking of ones that might be using the same words but have different meanings (or 'intents').

For example, "Let's play ball" is assigned to the scene_load intent, with 'ball' as the entity, which has the role scene_name and resolves to 'basketball' (the Unity scene name).
However, in the same app, I also want to detect "Pass me the ball" mapped to another intent (e.g. npc_command), with 'ball' this time being an entity with the role object_name. Or even "Let's play ball" uttered in this new scene, but this time needing to trigger a different intent (perhaps npc_command, with the entity resolving to approach_player).

Hope that makes sense! Any thoughts on how best to approach this?
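Since each Wit.ai request is interpreted largely without conversational context, one common pattern is to keep a single intent schema and disambiguate on the app side using the active scene. A minimal Python sketch of that idea; the intent names, scene names, and the override rule are hypothetical, and the response shape mirrors the JSON returned by Wit's /message endpoint (an "intents" list plus an "entities" dict keyed "entity:role"):

```python
# Sketch: disambiguating a Wit.ai response with app-side context.
# Intent/scene names below are made up for illustration.

def resolve_intent(wit_response: dict, current_scene: str) -> str:
    """Pick an app-level action, letting the active scene override Wit's top intent."""
    intents = wit_response.get("intents", [])
    top = intents[0]["name"] if intents else "unknown"
    # Context rule: inside the basketball scene, "Let's play ball" should
    # drive an NPC command rather than reload the scene.
    if top == "scene_load" and current_scene == "basketball":
        return "npc_command"
    return top

# Fake /message response for "Let's play ball"
response = {
    "intents": [{"name": "scene_load", "confidence": 0.97}],
    "entities": {"ball:scene_name": [{"value": "basketball"}]},
}
print(resolve_intent(response, current_scene="menu"))        # scene_load
print(resolve_intent(response, current_scene="basketball"))  # npc_command
```

The same approach extends to entity roles: read the `entities` keys and branch on which role Wit assigned, with the scene acting as the tie-breaker.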

Some of my older Wit.ai applications have 'live utterances', i.e. audio recordings in the Understanding section that need to be listened to and confirmed before they can be trained.
I've a few questions about this:
1. Why does my current app not do this? Is it because it is set to public whereas the others are private? If not, why not? (Has it been disabled recently? Is there a setting that needs to be enabled?)
2. What does making an app public actually mean?

3. Are there not some serious privacy implications in being able to replay the actual audio from a user's run through your app? In my initial tests, when I was still learning how to properly signpost that the mic was activated, I was getting recordings of people talking in their homes without them really knowing this was happening. It seems to me that there should be some very clear guidance to developers that they need to make this explicit to players. The generic "Do you allow this app to have access to your mic?" app manifest warning doesn't quite seem to cover this.

asquare.devs
Explorer

Not sure which forum to post this to. I already reached out here: https://forums.oculusvr.com/t5/SDK-Feedback/Voice-SDK-Feedback-Issues/m-p/890996 so if this is a duplicate post, please disregard it.

-----------------------------------------------------------------------------------------------------------

We are currently developing a VR simulation for Emergency Services to train users in the language and vocabulary used by personnel responding to active shooter situations & other mass casualty events. Our application relies heavily on speech recognition, and we have implemented Oculus's Voice SDK, which uses Wit.ai, and it works perfectly for our needs.

 

Unfortunately, due to the nature of the training program's delivery, we cannot guarantee internet access at the training locations, so we are looking to implement an offline solution, which to our knowledge is not supported.

 

We would like to inquire:
1.) Is an offline-capable package for Oculus's Voice SDK using Wit.ai available?
2.) We use a mobile server to network our training; is there a way we can host the required Wit.ai software on that server?

 

Please let us know if either of these is possible, or if there is another solution that would support this speech-to-text functionality offline for the purposes of our Emergency Services simulation.

biggi.509754
Honored Guest

I'm having issues with missing parts of my transcripts. If the user talks slowly, or there is a hint of hesitancy between words, the transcription is cut off at the slightest pause. The expected behaviour, quoting the documentation: "After activate is called it will listen indefinitely until the minimum volume threshold is hit. Once the minimum volume threshold is hit it will send up to 20 seconds of audio to Wit for processing."

I've tuned the Wit voice config back and forth with no luck. This happens for me even with the volume threshold at zero. The debug wave file seems fine, but the transcript from Wit ends too early if there are any slight pauses in the speech.
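The symptom is consistent with endpointing that uses a silence "hold" shorter than a natural pause: once enough consecutive quiet frames accumulate, capture ends mid-sentence. A self-contained Python sketch of that mechanism; the frame levels, threshold, and hold values are invented for illustration and do not reflect the Voice SDK's actual internals:

```python
# Sketch of energy-based endpointing with a configurable silence hold,
# showing why a short hold truncates speech at a natural pause.
# All numbers here are hypothetical.

def frames_captured(frames, threshold=0.1, hold_frames=5):
    """Return how many frames are kept before `hold_frames` consecutive
    sub-threshold frames end the capture."""
    kept, quiet = 0, 0
    for level in frames:
        kept += 1
        quiet = quiet + 1 if level < threshold else 0
        if quiet >= hold_frames:
            break
    return kept

speech = [0.5, 0.6, 0.0, 0.0, 0.0, 0.7, 0.6]  # a 3-frame pause mid-utterance
print(frames_captured(speech, hold_frames=2))  # 4: capture stops inside the pause
print(frames_captured(speech, hold_frames=5))  # 7: capture survives the pause
```

If the SDK exposes a minimum keep-alive or silence-timeout setting in its voice config, lengthening it is the analogue of raising `hold_frames` above.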


 

cazforshort
Expert Protege

Is it just me or is the dictation experience completely broken?

AppVoiceExperience works no problem. Dictation returns nothing.

Also, the dictation API is quite broken as well; replacing the endpoint with the speech endpoint works.
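One way to narrow down whether the problem is in the SDK or the service is to hit Wit's speech endpoint directly with a recorded clip. A Python sketch using only the standard library; the token and audio bytes are placeholders, and the endpoint and headers follow Wit.ai's HTTP API (POST the audio body with a Content-Type describing its encoding):

```python
# Sketch: building a raw request to Wit's /speech endpoint as a sanity
# check outside the SDK. "YOUR_SERVER_TOKEN" is a placeholder.
import urllib.request

def build_speech_request(token: str, wav_bytes: bytes) -> urllib.request.Request:
    return urllib.request.Request(
        "https://api.wit.ai/speech",
        data=wav_bytes,
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "audio/wav",
        },
        method="POST",
    )

req = build_speech_request("YOUR_SERVER_TOKEN", b"\x00" * 16)
print(req.full_url)                    # https://api.wit.ai/speech
print(req.get_header("Content-type"))  # audio/wav
```

Sending it with `urllib.request.urlopen(req)` (and a real WAV body) should return transcription JSON; if that works while the dictation endpoint does not, the issue is isolated to the dictation path.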

scr1pt-iwnl
Honored Guest

I implemented this SDK in Unity; however, for some reason it always returns a confidence of 0.00 no matter what I say. This only happens at runtime, because when testing with the Understanding Viewer the confidence is 0.98.

I'm having the same issue. Did you find a fix for it, by any chance?

Shivam_3901_
Honored Guest

I've been working on a project that involves integrating the Oculus Voice SDK into my Unity application. I followed the official documentation and tutorials provided by Oculus (https://developer.oculus.com/documentation/unity/voice-sdk-tutorials-1) to implement voice commands in my VR experience. The development process went smoothly, and I successfully built the application.

However, after deploying the build on an Oculus device, I encountered an issue with the microphone functionality. The application does not seem to be recognizing or using the microphone at all. As a result, users are unable to utilize voice commands within the VR experience.

Here are some additional details about my setup:

  1. Unity version: Unity 2021.3.15f1
  2. Oculus Integration package version: 2020.3.16
  3. Oculus device used for testing: Meta Quest 2

Steps I've taken to troubleshoot the problem:

  1. Checked the microphone permissions on the Oculus device to ensure that the application has access to the microphone.
  2. Verified that the Oculus Voice SDK is properly integrated into the Unity project and that the required components are set up correctly.
  3. Tried rebuilding and redeploying the application multiple times, but the issue persists.
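For step 1, it can help to verify the permission state from the command line rather than through the headset UI. A quick adb check; the package name is a placeholder for your app's application identifier:

```shell
# Placeholder package id; substitute your app's identifier.
PKG=com.example.vrapp

# Confirm the manifest declares the mic permission and whether it is granted.
adb shell dumpsys package "$PKG" | grep -i RECORD_AUDIO

# Grant it manually if it shows as "granted=false".
adb shell pm grant "$PKG" android.permission.RECORD_AUDIO
```

If the permission is missing entirely from the dumpsys output, the build is not declaring `android.permission.RECORD_AUDIO` in its merged manifest, which would explain the microphone never activating.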

I'm uncertain about what might be causing this problem, and I'm seeking guidance and suggestions from the community on how to resolve it. Has anyone else encountered a similar issue with the Oculus Voice SDK and microphone functionality? If so, how did you address it?

Any insights, troubleshooting tips, or possible solutions to this problem would be greatly appreciated. Thank you in advance for your help!

Best regards,
Shivam Nimbalkar