https://cloud.google.com/speech-to-text/docs/streaming-recognize
I've been trying to run the Google Speech-to-Text sample under "Performing Streaming Speech Recognition on an Audio Stream" from the page above.
Here is the code I have been trying to execute:
'use strict';

const record = require('node-record-lpcm16');
const speech = require('@google-cloud/speech');
const exec = require('child_process').exec;

const client = new speech.SpeechClient();

const encoding = 'LINEAR16';
const sampleRateHertz = 16000;
const languageCode = 'en-US';

const request = {
  config: {
    encoding: encoding,
    sampleRateHertz: sampleRateHertz,
    languageCode: languageCode
  },
  interimResults: true // If you want interim results, set this to true
};

const recognizeStream = client.streamingRecognize(request)
  .on('error', console.error)
  .on('data', (data) =>
    process.stdout.write(
      (data.results[0] && data.results[0].alternatives[0])
        ? `Transcription: ${data.results[0].alternatives[0].transcript}\n`
        : `\n\nReached transcription time limit, press Ctrl+C\n`)
  );

record.start({
  sampleRateHertz: sampleRateHertz,
  threshold: 0.5,
  verbose: true,
  recordProgram: 'arecord', // Try also "rec" or "sox"
  silence: '10.0'
})
  .on('error', console.error)
  .pipe(recognizeStream);

console.log('Listening, press Ctrl+C to stop.');
The output in the terminal:
I realise the encoding of the output stream from arecord doesn't match the configuration specified in the program, but I'm not sure what to do to correct this.
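For what it's worth, one way to reason about the mismatch: the `LINEAR16` encoding in the request config corresponds to arecord's `S16_LE` format (16-bit signed little-endian raw PCM, mono). A helper like the hypothetical `arecordArgsFor` below (not part of the original sample) sketches how to derive the arecord flags directly from the request config, so the two can't drift apart:

```javascript
// Hypothetical helper: derive arecord command-line flags that match
// the streamingRecognize request config. Assumes arecord is the
// recording program, and that LINEAR16 maps to S16_LE raw PCM.
function arecordArgsFor(config) {
  const formats = { LINEAR16: 'S16_LE' }; // LINEAR16 = 16-bit signed little-endian PCM
  return [
    '-f', formats[config.encoding],
    '-r', String(config.sampleRateHertz), // sample rate must match the config
    '-c', '1',   // Speech-to-Text expects single-channel (mono) audio
    '-t', 'raw', // headerless PCM, which is what LINEAR16 describes
  ];
}

console.log(arecordArgsFor({ encoding: 'LINEAR16', sampleRateHertz: 16000 }).join(' '));
// → -f S16_LE -r 16000 -c 1 -t raw
```

If the stream arecord actually produces (for example the default `U8` at 8000 Hz) differs from these flags, the API will receive audio it can't decode as `LINEAR16` at 16000 Hz.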