The DevKitTranslator example is only a proof of concept that uses Azure IoT Hub, Azure Functions, and the Cognitive Services translator API to process audio sensor data. The example records audio and stores it temporarily in the device flash (1 MB), so this memory limitation makes longer recordings impractical.
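To see why the 1 MB cap limits recording time, the arithmetic below works through it; the capture format values are assumptions for illustration, and the example's actual sample rate and channel count may differ:

```cpp
#include <cstdio>

int main()
{
    // Assumed capture format; the example's actual rate/width may differ.
    const unsigned sampleRateHz   = 16000;    // 16 kHz
    const unsigned bytesPerSample = 2;        // 16-bit PCM
    const unsigned channels       = 1;        // mono
    const unsigned bufferBytes    = 1u << 20; // ~1 MB of flash for audio

    const unsigned bytesPerSecond = sampleRateHz * bytesPerSample * channels;
    std::printf("Max recording: ~%u seconds\n", bufferBytes / bytesPerSecond);
    return 0; // prints ~32 seconds; 16-bit stereo at 44.1 kHz would cut this to ~6
}
```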
If you really want to extend the recording time, this example would probably need to be re-architected:
- Change the device code to use WebSocket streaming to upload the audio to the cloud continuously; we have already provided a WebSocket client library for the DevKit (see the device-side sketch after this list).
- Build an Azure Web App that supports the WebSocket protocol to process the audio stream from the device, and then invoke the Cognitive Services translator API to do the translation (a server-side sketch also follows the list).
- Send the translation result back to the device via an IoT Hub C2D (cloud-to-device) message (see the C2D sketch below).
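To make the first step concrete, here is a minimal device-side sketch. The `WebSocketClient` class refers to the DevKit SDK's WebSocket client library mentioned above, but treat the header name, the exact signatures (including the `WS_Message_Binary` flag), the `ws://` endpoint, and the `fillNextAudioChunk` helper as assumptions to verify against the SDK:

```cpp
// Device-side sketch (Arduino code for the DevKit). Verify class and
// function signatures against WebSocketClient.h in the DevKit SDK.
#include "AZ3166WiFi.h"
#include "WebSocketClient.h"

static WebSocketClient *ws;
static char audioChunk[2048];  // one small chunk, reused for every send

// Hypothetical helper (not in the SDK): pull the next PCM chunk from the
// microphone driver's ring buffer; returns the number of bytes written.
static int fillNextAudioChunk(char *buf, int capacity)
{
    (void)buf;
    (void)capacity;
    return 0;  // wire this to the audio record callback in a real port
}

void setup()
{
    if (WiFi.begin() != WL_CONNECTED) return;  // uses credentials stored on the device
    // Hypothetical endpoint for the Web App described in the next step.
    ws = new WebSocketClient((char *)"ws://contoso-audio.azurewebsites.net/stream");
    ws->connect();
}

void loop()
{
    int len = fillNextAudioChunk(audioChunk, sizeof(audioChunk));
    if (len > 0)
    {
        // Send each chunk as a binary, non-final frame so the recording
        // length is no longer bounded by the 1 MB of on-device flash.
        ws->send(audioChunk, len, WS_Message_Binary, false);
    }
}
```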
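For the second step, the sketch below shows only the general shape of such a service: accept a WebSocket connection, accumulate binary audio frames, and hand the result to the translator when the stream ends. It uses Boost.Beast purely for illustration rather than whatever stack the real examples use, and the port and the translator call site are assumptions:

```cpp
#include <boost/asio/ip/tcp.hpp>
#include <boost/beast/core.hpp>
#include <boost/beast/websocket.hpp>
#include <cstdint>
#include <vector>

namespace beast = boost::beast;
namespace net = boost::asio;
using tcp = net::ip::tcp;

int main()
{
    net::io_context ioc;
    tcp::acceptor acceptor{ioc, {tcp::v4(), 8080}};  // assumed port
    for (;;)
    {
        tcp::socket socket{ioc};
        acceptor.accept(socket);
        beast::websocket::stream<tcp::socket> ws{std::move(socket)};
        ws.accept();                          // complete the WebSocket handshake
        std::vector<std::uint8_t> audio;      // accumulated PCM from the device
        beast::flat_buffer frame;
        try
        {
            for (;;)
            {
                ws.read(frame);               // one message (audio chunk) per read
                auto p = static_cast<const std::uint8_t *>(frame.data().data());
                audio.insert(audio.end(), p, p + frame.size());
                frame.consume(frame.size());
            }
        }
        catch (beast::system_error const &)
        {
            // Device closed the stream: `audio` now holds the full utterance.
            // Here you would invoke the Cognitive Services translator API and
            // send the result back to the device (see the C2D sketch below).
        }
    }
}
```

A production service would also need TLS (`wss://`) and authentication, since the device is streaming user audio over the network.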
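For the last step, the device can register a callback that fires when a C2D message arrives. The `DevKitMQTTClient_*` functions are from the DevKit SDK (other DevKit samples register message callbacks the same way), though you should verify the callback signature against `DevKitMQTTClient.h`; this sketch assumes Wi-Fi is already connected:

```cpp
// Device-side sketch: receive the translation result as a C2D message.
#include "Arduino.h"
#include "DevKitMQTTClient.h"

// Invoked when a C2D message arrives; here its payload would be the
// translated text sent back by the Web App.
static void onC2DMessage(const char *payload, int length)
{
    Serial.printf("Translation: %.*s\r\n", length, payload);
}

void setup()
{
    DevKitMQTTClient_Init();
    DevKitMQTTClient_SetMessageCallback(onC2DMessage);
}

void loop()
{
    DevKitMQTTClient_Check();  // pump the client so the callback can fire
    delay(10);
}
```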
If you really want to try the solution architecture above, you can refer to the DevKit Chat Bot example. It is a more sophisticated example that demonstrates the power of the IoT DevKit by integrating with more Azure AI services and transmitting continuous audio data via WebSocket streaming.