
My Android app features a text input box with a button to the right of the EditText that calls the voice-input feature.

I am porting the app with Codename One; at present the iOS port is the goal.

The button has a suitable icon. This is the code:

voiceInputButton.setOnClickListener(new View.OnClickListener() {
    @Override
    public void onClick(View v) {
        // Launch the system speech recognizer; the result comes back in onActivityResult
        Intent voiceIntent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
        voiceIntent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL, RecognizerIntent.LANGUAGE_MODEL_WEB_SEARCH);
        try {
            activity.startActivityForResult(voiceIntent, RESULT_SPEECH_REQUEST_CODE);
        } catch (ActivityNotFoundException ex) {
            // No activity can handle speech recognition on this device
        }
    }
});

It works very well: the voice-input screen is shown and the result is then passed back to the app as a string.

The string is what the user said (for example, a single word).
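For reference, the recognized string comes back in the Activity's onActivityResult; a minimal sketch (RESULT_SPEECH_REQUEST_CODE matches the request code above, and myEditText stands in for the actual EditText):

    @Override
    protected void onActivityResult(int requestCode, int resultCode, Intent data) {
        super.onActivityResult(requestCode, resultCode, data);
        if (requestCode == RESULT_SPEECH_REQUEST_CODE && resultCode == RESULT_OK && data != null) {
            // The recognizer returns a list of candidate transcriptions, best match first
            ArrayList<String> matches = data.getStringArrayListExtra(RecognizerIntent.EXTRA_RESULTS);
            if (matches != null && !matches.isEmpty()) {
                myEditText.setText(matches.get(0));
            }
        }
    }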

I need the same functionality in the Codename One app on iOS.

What is the equivalent? Is it necessary to call native iOS functions through the native interface?

P5music
  • If you use the constraint `ANY`, for example in code like `TextField myField = new TextField("", "myHint", 80, TextArea.ANY);`, the iOS VKB (virtual keyboard) shows an icon in the bottom right to dictate text. It's not what you asked, but it's available by default. Other constraints, like EMAIL, don't show the dictation option. I suppose you need a native interface to dictate text without opening the VKB, but I'm not sure. – Francesco Galgani Aug 21 '20 at 09:33
  • I looked a bit at doing it natively and couldn't find that for iOS. It isn't as common there. It would also be harder to add, since text fields are already pretty challenging to get right cross-platform. – Shai Almog Aug 22 '20 at 05:11
  • @Francesco Galgani I do not need real-time dictation, i.e. the EditText being populated while the user is speaking. No challenge here for TextFields. It is a separate button that calls an "intent". It's strange that the functionality is available through the on-screen keyboard (as on Android too) but not directly (as on Android). – P5music Aug 22 '20 at 07:59
  • Then I didn't get what you want to do. If you are not interested in real-time dictation, you can record the audio and send it to any REST service that offers transcription via MultipartUpload. The equivalent of `.setOnClickListener` is `.addActionListener`. – Francesco Galgani Aug 22 '20 at 09:24
  • @Francesco Galgani I mean I do not need the voice input to be transcribed directly into the EditText (words appearing while the user is talking), although it would be nice. I just want to avoid the keyboard appearing: the user should be able to call the voice-input feature by tapping the icon (then dictate, then tap to go back to the app from the voice-input screen). I do not want the exact Android user experience, but it seems that voice input is not available via an "intent" on iOS. – P5music Aug 22 '20 at 12:57

1 Answer


You can implement speech-to-text via the Speech framework, which performs speech recognition on live or prerecorded audio. More info: https://developer.apple.com/documentation/speech

In Codename One, you can create a native interface backed by Objective-C code.
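In practice that means declaring a Java interface that extends NativeInterface and implementing it in Objective-C for iOS. A minimal sketch, where the interface name, its methods and the callback class are all hypothetical (native interface methods only accept primitives, Strings and arrays of primitives, so the recognized text is usually handed back through a static Java method):

    // Hypothetical native interface: Codename One generates an Objective-C stub
    // for it, which you then fill in with Speech framework calls.
    public interface SpeechRecognition extends com.codename1.system.NativeInterface {
        void startListening(String language);
        void stopListening();
    }

    // Hypothetical callback the Objective-C implementation invokes with the result.
    public class SpeechRecognitionResult {
        public static void onResult(final String text) {
            // UI updates must happen on the EDT
            com.codename1.ui.Display.getInstance().callSerially(() -> {
                // e.g. myTextField.setText(text);
            });
        }
    }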

To use the Speech framework with Objective-C, see this answer: https://stackoverflow.com/a/43834120

That answer says: «[...] To get this running and test it you just need a very basic UI, just create an UIButton and assign the microPhoneTapped action to it, when pressed the app should start listening and logging everything that it hears through the microphone to the console (in the sample code NSLog is the only thing receiving the text). It should stop the recording when pressed again. [...]». This seems very close to what you asked.

Obviously, creating the native interface takes time. For further help you can ask more specific questions; I hope I have given you a useful pointer.
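On the Codename One side, wiring that hypothetical interface to a button is then close to your original Android code, with addActionListener taking the place of setOnClickListener:

    SpeechRecognition speech = com.codename1.system.NativeLookup.create(SpeechRecognition.class);

    Button voiceInputButton = new Button(voiceIcon); // voiceIcon: your microphone image
    voiceInputButton.addActionListener(evt -> {
        if (speech != null && speech.isSupported()) {
            // The recognized text arrives through SpeechRecognitionResult.onResult(...)
            speech.startListening("en-US");
        }
    });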


Lastly, there are also alternative solutions, again in Objective-C, such as: https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/quickstart/objectivec/ios/from-microphone

You can search on the web for: objective-c speech-to-text

Francesco Galgani