
I am using Xcode 7, Swift 2.0.

I am getting text-to-speech working in the Simulator, but not on a real iPhone 6 Plus device running iOS 9. I have properly imported AVFoundation and linked the framework.

I tried...

@IBAction func SpeakTheList(sender: AnyObject) {
    let mySpeechUtterance = AVSpeechUtterance(string: speakString)

    // let voice = AVSpeechSynthesisVoice(language: "en-US")
    // mySpeechUtterance.voice = voice

    // Pick the installed en-US voice from the device's voice list.
    let voices = AVSpeechSynthesisVoice.speechVoices()

    for voice in voices {
        if voice.language == "en-US" {
            mySpeechUtterance.voice = voice
            print(voice.language)
            break
        }
    }

    mySpeechSynthesizer.speakUtterance(mySpeechUtterance)
}

I get the following error: "Building MacinTalk voice for asset: (null)". Is there anything I need to change in Settings on my iPhone 6 Plus (iOS 9), or do I have to download something?

I have found a suggestion here: Why I'm getting "Building MacinTalk voice for asset: (null)" in iOS device test

saying that "since iOS 9, [this is] possibly a log event turned on during development that they forgot to turn off".

Prince Kumar

3 Answers


Just want to add to this (and by extension, the linked discussion in the original post):

I have two devices: an iPad2 and an iPad Air. They are running exactly the same version of iOS (9.2, 13C75). I have the following Objective-C++ function for generating speech from Qt, using Xcode 7.2 (7C68) on Yosemite:

void iOSTTSClient::speakSpeedGender(const QString &msg, const float speechRateModifier, const QString &gender, const bool cutOff) {
    QString noHTML(msg);
    noHTML.remove(QRegularExpression("<[^<]*?>")); // strip HTML tags before speaking
    AVSpeechUtterance *utterance = [[AVSpeechUtterance alloc] initWithString:noHTML.toNSString()];
    /* See https://forums.developer.apple.com/thread/18178 */
    const float baseSpeechRate = (m_iOSVersion < 9.0) ? 0.15 : AVSpeechUtteranceDefaultSpeechRate;
    utterance.rate = baseSpeechRate * speechRateModifier;
    NSString *locale;
    if (gender.compare("male", Qt::CaseInsensitive) == 0)
        locale = @"en-GB"; // "Daniel" by default
    else if (gender.compare("female", Qt::CaseInsensitive) == 0)
        locale = @"en-US"; // "Samantha" by default
    else
        locale = [AVSpeechSynthesisVoice currentLanguageCode];
    AVSpeechSynthesisVoice *voice = [AVSpeechSynthesisVoice voiceWithLanguage:locale];
    const QString errMsg = QString("Null pointer to AVSpeechSynthesisVoice (could not fetch voice for locale '%1')!").arg(QString::fromNSString(locale));
    Q_ASSERT_X(voice, "speakSpeedGender", errMsg.toLatin1().data());
    utterance.voice = voice;
    // Keep a single synthesizer alive across calls.
    static const AVSpeechSynthesizer *synthesizer = [[AVSpeechSynthesizer alloc] init];
    if (synthesizer.speaking && cutOff) {
        const bool stopped = [synthesizer stopSpeakingAtBoundary:AVSpeechBoundaryImmediate];
        Q_ASSERT_X(stopped, "speakSpeedGender", "Could not stop previous utterance!");
    }
    [synthesizer speakUtterance:utterance];
}

On the iPad Air, everything works beautifully:

Building MacinTalk voice for asset: file:///private/var/mobile/Library/Assets/com_apple_MobileAsset_MacinTalkVoiceAssets/db2bf75d6d3dbf8d4825a3ea16b1a879ac31466b.asset/AssetData/

But on the iPad2, I hear nothing and get the following:

Building MacinTalk voice for asset: (null)

Out of curiosity, I fired up the iPad2 simulator and ran my app there. I got yet another console message:

AXSpeechAssetDownloader|error| ASAssetQuery error fetching results (for com.apple.MobileAsset.MacinTalkVoiceAssets) Error Domain=ASError Code=21 "Unable to copy asset information" UserInfo={NSDescription=Unable to copy asset information}

However, I heard speech! And I realized I was wearing headphones. Sure enough, when I plugged ear buds into the iPad2, I heard speech there too. So now I'm searching for information about that. The following link is recent and has the usual assortment of this-worked-for-me voodoo (none of it helped me, but maybe will help others with this problem):

https://forums.developer.apple.com/thread/18444

In summary: TTS "works" but is not necessarily audible without headphones/ear buds. It appears to be a hardware settings issue with iOS 9.2. The console messages may or may not be relevant.

Final update: in the interests of full, if sheepish, disclosure, I figured I'd share how I finally solved the issue. The iPad2 in question had the "Use side switch to:" option set to "Mute". I left that alone but went ahead and toggled the switch itself. Wham! Everything worked without ear buds. So if you are unable to hear text-to-speech, try ear buds; if that works, check whether your device is set to mute!
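For completeness: whether the mute switch silences TTS depends on the app's audio session category. The default SoloAmbient category respects the Ring/Silent switch, while the Playback category does not. A minimal Swift 2-style sketch, assuming the app is actually meant to keep speaking with the switch set to Mute:

    import AVFoundation

    // Opt into the Playback category so synthesized speech is not silenced
    // by the Ring/Silent switch, then activate the session before speaking.
    do {
        try AVAudioSession.sharedInstance().setCategory(AVAudioSessionCategoryPlayback)
        try AVAudioSession.sharedInstance().setActive(true)
    } catch {
        print("Audio session setup failed: \(error)")
    }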

kuipersn
  • Bottom line: TTS was not audible at all ...but when I used headphones, voila!! there it was!!! Isn't it retarded to suppress Text to Speech just because there are no headphones plugged in?? sometimes apple feels like a dictatorship, where users/developers are given Zero choice!! – Josh Feb 02 '16 at 12:02
  • Text-To-Speech works **with and without headphones**. You may need to turn on the volume of the device and have the mute switch turned off, depending on your audio session configuration. – Roberto Sep 29 '16 at 15:32

Do not use pauseSpeakingAtBoundary(). Instead, use stopSpeakingAtBoundary() and continueSpeaking(). This works for me.
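A minimal Swift 2-style sketch of those calls, assuming a synthesizer property like the mySpeechSynthesizer in the question:

    // Interrupt speech immediately; this also clears any queued utterances.
    if mySpeechSynthesizer.speaking {
        mySpeechSynthesizer.stopSpeakingAtBoundary(.Immediate)
    }

    // Resume a synthesizer that is currently paused.
    if mySpeechSynthesizer.paused {
        mySpeechSynthesizer.continueSpeaking()
    }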

4jchc

I finally found that there was a bug in iOS 9. Soon after the Xcode 7.2 and iOS 9.2 updates were released, I tested the same code above and text-to-speech started working.

Prince Kumar
  • As someone mentioned, it works only with headphones :( – Cristi Băluță Apr 30 '16 at 15:34
  • Text-To-Speech works **with and without headphones**. You may need to turn on the volume of the device and have the mute switch turned off, depending on your audio session configuration. – Roberto Sep 29 '16 at 15:31