Over the past few years I have steadily developed a complete WebRTC-based browser phone using the SIP protocol. The main SIP toolbox is SIP.js (https://sipjs.com/), and it provides all the tools one needs to make and receive calls to a SIP-based PBX of your own.
The Browser Phone project (https://github.com/InnovateAsterisk/Browser-Phone/) gives SIP.js its full functionality and UI. You can simply navigate to the phone in a browser and start using it. Everything works perfectly.
On Mobile
Apple finally allows WebRTC (getUserMedia()) in WKWebView, so it wasn't long before people started to ask how it would work on mobile. And while the UI is well suited to cellphones and tablets, the UI alone isn't enough these days to make a full solution.
The main consideration is that a mobile app typically has a short lifespan: you can't, or don't, leave it running in the background the way you would with a browser on a PC. This presents a few challenges to making the Browser Phone truly mobile friendly. iOS is going to want to shut the app down as soon as it's not the frontmost app, and rightly so. There are tools for handling that, like CallKit & push notifications. These allow the app to be woken up so that it can notify the user and accept the call.
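To show what I mean by "woken up", here is roughly what that path looks like in Swift. This is only a sketch: it assumes the PBX side sends a VoIP push for each inbound call, and the class name and `"caller"` payload key are my own placeholders, not part of the Browser Phone project:

```swift
import PushKit
import CallKit

class VoIPPushHandler: NSObject, PKPushRegistryDelegate {
    let registry = PKPushRegistry(queue: .main)
    let provider = CXProvider(configuration: CXProviderConfiguration(localizedName: "Browser Phone"))

    func start() {
        registry.delegate = self
        registry.desiredPushTypes = [.voIP]
        // A CXProviderDelegate must also be set on `provider` to handle
        // answer/end/audio events (see the last sketch in this post).
    }

    func pushRegistry(_ registry: PKPushRegistry, didUpdate pushCredentials: PKPushCredentials, for type: PKPushType) {
        // Send pushCredentials.token to the push server so it can reach this device.
    }

    func pushRegistry(_ registry: PKPushRegistry,
                      didReceiveIncomingPushWith payload: PKPushPayload,
                      for type: PKPushType,
                      completion: @escaping () -> Void) {
        // Since iOS 13, every VoIP push MUST be reported to CallKit,
        // otherwise the app is terminated.
        let update = CXCallUpdate()
        update.remoteHandle = CXHandle(type: .generic,
                                       value: payload.dictionaryPayload["caller"] as? String ?? "Unknown")
        provider.reportNewIncomingCall(with: UUID(), update: update) { _ in completion() }
    }
}
```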
Just remember, this app is created by opening a UIViewController, adding a WKWebView, and navigating to the phone page. There is full communication between the app and the HTML & JavaScript, so events can be passed back and forth.
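To make that concrete, the wrapper is essentially just this (the `"phoneEvent"` handler name is a placeholder for whatever channel the page and the app agree on):

```swift
import UIKit
import WebKit

class PhoneViewController: UIViewController, WKScriptMessageHandler {
    var webView: WKWebView!

    override func viewDidLoad() {
        super.viewDidLoad()
        let config = WKWebViewConfiguration()
        config.allowsInlineMediaPlayback = true
        config.mediaTypesRequiringUserActionForPlayback = []         // let <audio> start without a tap
        config.userContentController.add(self, name: "phoneEvent")   // JS -> Swift channel
        webView = WKWebView(frame: view.bounds, configuration: config)
        view.addSubview(webView)
        webView.load(URLRequest(url: URL(string: "https://www.innovateasterisk.com/phone/")!))
    }

    // JS -> Swift: window.webkit.messageHandlers.phoneEvent.postMessage(...)
    func userContentController(_ userContentController: WKUserContentController, didReceive message: WKScriptMessage) {
        print("Event from page:", message.body)
    }

    // Swift -> JS
    func send(toPage js: String) {
        webView.evaluateJavaScript(js, completionHandler: nil)
    }
}
```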
WKWebView & AVAudioSession Issue:
After a LOT of reading unsolved forum posts, it's clear that AVAudioSession.sharedInstance() is simply not connected to the WKWebView, or the connection is undocumented.
The result is that if the call starts from the app and the app is sent to the background, the microphone is disabled. Clearly this isn't an option if you are on a call. I can manage this limitation a little by putting the call on hold when the app is sent to the background (sketched below), although this is confusing to the user and a poor user experience.
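The workaround looks roughly like this; `holdAllCalls()` / `unholdAllCalls()` are placeholder names for whatever hold functions the phone page actually exposes:

```swift
import UIKit

extension PhoneViewController {
    // Call this once, e.g. from viewDidLoad().
    func observeLifecycle() {
        NotificationCenter.default.addObserver(self,
            selector: #selector(appDidEnterBackground),
            name: UIApplication.didEnterBackgroundNotification, object: nil)
        NotificationCenter.default.addObserver(self,
            selector: #selector(appWillEnterForeground),
            name: UIApplication.willEnterForegroundNotification, object: nil)
    }

    @objc func appDidEnterBackground() {
        // Hold before the app is suspended, because the mic is about to die.
        webView.evaluateJavaScript("holdAllCalls()", completionHandler: nil)
    }

    @objc func appWillEnterForeground() {
        webView.evaluateJavaScript("unholdAllCalls()", completionHandler: nil)
    }
}
```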
However, the real issue is that if the app was woken by CallKit, the microphone isn't activated in the first place, because the app never comes to the foreground (the CallKit call screen is what's in front), and even if you then switch to the app, it still doesn't activate. This is simply an unacceptable user experience.
What I found interesting is that if you simply open Safari on iOS (15.x) and navigate to the phone page (https://www.innovateasterisk.com/phone/) without building an app in Xcode and loading the page into a WKWebView, the microphone continues to work when Safari is sent to the background. So how does Safari manage this? Of course this doesn't and can't solve the CallKit issue, but it is still interesting to see that Safari can use the microphone in the background, since Safari is built on WKWebView.
(I was reading about entitlements, and that this may have to be specially granted... I'm not sure how this works?)
The next problem with AVAudioSession is that since you cannot access the session for WKWebView, you cannot change the output of the <audio> element, so you cannot switch it from, say, speaker to earpiece, or route it to a Bluetooth device.
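For reference, this is how output routing is normally done in a native app; these are standard AVAudioSession calls, but they don't affect audio produced inside the WKWebView:

```swift
import AVFoundation

func routeAudio(toSpeaker: Bool) throws {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playAndRecord,
                            mode: .voiceChat,
                            options: [.allowBluetooth, .allowBluetoothA2DP])
    try session.setActive(true)
    // .speaker forces the loudspeaker; .none returns to the default route
    // (earpiece, or a Bluetooth device if one is connected).
    try session.overrideOutputAudioPort(toSpeaker ? .speaker : .none)
}
```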
It simply wouldn't be feasible to redevelop the entire application using an outdated WebRTC SDK (Google no longer maintains the WebRTC iOS SDK), then build my own Swift SIP stack like SIP.js, and end up with two sets of code to maintain... so my main questions are:
- How can I access the AVAudioSession of WKWebView so that I can set the output path/device?
- How can I have the microphone stay active when the app is sent to the background?
- How can I activate the microphone when CallKit activates the application (while the application is in the background)? (In a fully native app this would happen in the CXProviderDelegate callbacks; see the sketch below.)
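For context on that last question: in a fully native CallKit app, the system activates the audio session and hands it to the provider delegate; the missing piece is any documented way to pass that activation through to WKWebView's getUserMedia() capture:

```swift
import CallKit
import AVFoundation

class CallAudioDelegate: NSObject, CXProviderDelegate {
    func providerDidReset(_ provider: CXProvider) { }

    func provider(_ provider: CXProvider, didActivate audioSession: AVAudioSession) {
        // A native VoIP stack would start its audio I/O here; there is no
        // documented way to hand this session to WKWebView.
    }

    func provider(_ provider: CXProvider, didDeactivate audioSession: AVAudioSession) {
        // ...and tear its audio down here.
    }
}
```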