
I am writing because I could not find the answer in previous topics. I am using live555 to stream live video (H.264) and audio (G.723), which are recorded by a web camera. The video part is already done and works perfectly, but I have no clue how to approach the audio task.

From what I have read, I have to create a ServerMediaSession to which I should add two subsessions: one for the video and one for the audio. For the video part I created a subclass of OnDemandServerMediaSubsession, a subclass of FramedSource and an Encoder class, but for the audio part I do not know which classes I should base the implementation on. The overall structure I have in mind is sketched below.
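For reference, this is roughly how I plan to wire the two subsessions together (MyH264VideoSubsession is my existing video subsession and MyG723AudioSubsession is the audio one I still need to write; both names are just placeholders for my own OnDemandServerMediaSubsession subclasses):

#include <liveMedia.hh>
#include <BasicUsageEnvironment.hh>

int main() {
    TaskScheduler* scheduler = BasicTaskScheduler::createNew();
    UsageEnvironment* env = BasicUsageEnvironment::createNew(*scheduler);

    RTSPServer* rtspServer = RTSPServer::createNew(*env, 8554);
    ServerMediaSession* sms = ServerMediaSession::createNew(
        *env, "camera", "camera", "live camera stream");

    // One subsession per elementary stream coming from the camera.
    sms->addSubsession(MyH264VideoSubsession::createNew(*env)); // video: already working
    sms->addSubsession(MyG723AudioSubsession::createNew(*env)); // audio: the missing part

    rtspServer->addServerMediaSession(sms);
    env->taskScheduler().doEventLoop(); // does not return
    return 0;
}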

The web camera records and delivers the G.723 audio frames separately from the video. I would say the audio is raw, because when I try to play it in VLC it says it could not find any start code; so I suppose what the web cam records is the raw audio stream.

I was wondering if someone could give me a hint.


1 Answer


For an audio stream, your override of OnDemandServerMediaSubsession::createNewRTPSink should create a SimpleRTPSink.

Something like:

RTPSink* YourAudioMediaSubsession::createNewRTPSink(Groupsock* rtpGroupsock,
                                                    unsigned char rtpPayloadTypeIfDynamic,
                                                    FramedSource* inputSource)
{
    return SimpleRTPSink::createNew(envir(), rtpGroupsock,
                                    4,         // static RTP payload type for G.723
                                    frequency, // RTP timestamp frequency (8000 Hz for G.723.1)
                                    "audio",   // SDP media type
                                    "G723",    // RTP payload format name
                                    channels); // number of channels (1 for G.723.1)
}

The frequency and the number of channels should come from the inputSource.
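For example, the matching createNewStreamSource override could fill in those values. This is only a sketch: everything except the live555 names is a placeholder, frequency and channels are assumed to be members of your subsession class (as used in the snippet above), and G.723.1 is always 8000 Hz mono:

FramedSource* YourAudioMediaSubsession::createNewStreamSource(unsigned /*clientSessionId*/,
                                                              unsigned& estBitrate)
{
    estBitrate = 6;   // kbps; G.723.1 runs at 5.3 or 6.3 kbps
    frequency = 8000; // stored for createNewRTPSink
    channels  = 1;    // stored for createNewRTPSink
    // G723CameraSource is a placeholder for your own FramedSource subclass
    // that delivers the G.723 frames read from the camera.
    return G723CameraSource::createNew(envir());
}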
