In a regular Android application, two buttons can each trigger different code. For example, with two methods named startCameraForRec() and startCameraForOCR(), I have this working in my layout:
<Button
    android:text="@string/recognize"
    android:onClick="startCameraForRec" />
<Button
    android:text="@string/ocr"
    android:onClick="startCameraForOCR" />
How can I implement the same behavior in a Google Glass application with multiple voice commands? For example, right now I have:
<intent-filter>
    <action android:name="com.google.android.glass.action.VOICE_TRIGGER" />
</intent-filter>
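For context, this intent filter sits on my launch activity together with a voice_trigger XML resource, roughly like the sketch below (the activity name and string resource are placeholders for my actual values):

```xml
<!-- AndroidManifest.xml: the activity launched by the voice command
     (activity name is illustrative) -->
<activity android:name=".MainActivity">
    <intent-filter>
        <action android:name="com.google.android.glass.action.VOICE_TRIGGER" />
    </intent-filter>
    <meta-data
        android:name="com.google.android.glass.VoiceTrigger"
        android:resource="@xml/voice_trigger" />
</activity>
```

```xml
<!-- res/xml/voice_trigger.xml: the spoken command that launches the activity -->
<trigger keyword="@string/voice_trigger_command" />
```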
When I say that specific command, my application launches. Is there any way to set up a second command that launches the application at a different entry point? With buttons this is trivial, since each button can call its own method. How can I do the same with multiple voice commands in my Google Glass application?