Speech Recognition and NodeWebkit

  • Hello,

    Is it possible to use the UserMedia speech recognition plugin with the NodeWebkit export so that FinalTranscript is written directly to a .txt file? I modified the speech recognition example to do this, but the file does not show the results.

    Any insight is greatly appreciated!

    Thanks!

  • I guess more to the point, is it possible to access the default microphone input, using UserMedia, through NodeWebKit exporter? It does not appear to be possible, but I might be doing it wrong.
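    One way to narrow this down is to feature-detect what the NW.js Chromium shell actually exposes before assuming the plugin is at fault. A small sketch; the `win` parameter is only there so the function can be exercised outside a real browser window:

```javascript
// Report which speech/microphone APIs a given window-like object exposes.
// In NW.js you would pass the real `window`; older Chromium builds shipped
// only the webkit-prefixed variants.
function speechSupport(win) {
  var nav = win.navigator || {};
  return {
    getUserMedia: !!(nav.getUserMedia || nav.webkitGetUserMedia),
    speechRecognition: !!(win.SpeechRecognition || win.webkitSpeechRecognition)
  };
}

// In NW.js: console.log(speechSupport(window));
```

    If `speechRecognition` comes back false, no amount of plugin configuration will help in that export.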

  • Did you ever get anything working? I am playing with annyang.js, which uses webkitSpeechRecognition; it returns true in NodeWebkit, but somewhere along the way the speech is not actually getting through to annyang.

    I also can't find much besides this post referring to speech recognition.
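    To see whether results arrive at all, it may help to bypass annyang and listen to webkitSpeechRecognition directly. The wiring below (in comments) is standard Web Speech API usage; the `collectFinalTranscript` helper is my own, written so the result-handling logic can be tested without a microphone:

```javascript
// Collect the final phrases from a SpeechRecognition `result` event.
// Wiring it up in the NW.js/browser context would look roughly like:
//   var rec = new webkitSpeechRecognition();
//   rec.continuous = true;
//   rec.onresult = function (e) { console.log(collectFinalTranscript(e)); };
//   rec.onerror  = function (e) { console.log('speech error:', e.error); };
//   rec.start();
// If onerror fires (e.g. 'not-allowed' or 'network'), that explains why
// annyang stays silent even though it initialises fine.
function collectFinalTranscript(event) {
  var finals = [];
  for (var i = event.resultIndex; i < event.results.length; i++) {
    if (event.results[i].isFinal) {
      finals.push(event.results[i][0].transcript);
    }
  }
  return finals.join(' ');
}
```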

  • I've had no luck on this. On a related topic, I was hoping to create a very simple "DJ" application that lets the user load .wav files into triggers (or buttons) and then play those sounds back. However, it seems this is not possible, as all sounds must be added before the build process, meaning there is no way to communicate between NodeWebkit and the Audio plugin in Construct 2.

    I put a few posts up about it and got no response, so I assume there is no answer to the problem unless Ashley et al. decide to support dynamic loading of sounds.
