Streaming Media Study Notes 3 (live555's source-sink)


Last time we touched on source-sink, that is, the process of assembling the SDP message: OnDemandServerMediaSubsession::sdpLines() creates a temporary FileSource and RTPSink. Let's first look at the two createNew factory methods:

FramedSource* H264VideoFileServerMediaSubsession::createNewStreamSource(unsigned /*clientSessionId*/, unsigned& estBitrate) {
  estBitrate = 500; // kbps, estimate

  // Create the video source:
  ByteStreamFileSource* fileSource = ByteStreamFileSource::createNew(envir(), fFileName);
  if (fileSource == NULL) return NULL;
  fFileSize = fileSource->fileSize();

  // Create a framer for the Video Elementary Stream:
  return H264VideoStreamFramer::createNew(envir(), fileSource);
}

RTPSink* H264VideoFileServerMediaSubsession::createNewRTPSink(Groupsock* rtpGroupsock,
    unsigned char rtpPayloadTypeIfDynamic,
    FramedSource* /*inputSource*/) {
  return H264VideoRTPSink::createNew(envir(), rtpGroupsock, rtpPayloadTypeIfDynamic);
}
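These two factory methods are invoked lazily by the framework. For orientation, a minimal server setup in the style of live555's testOnDemandRTSPServer demo (a sketch; the port number and file name are placeholders, error handling omitted) looks like this:

#include "liveMedia.hh"
#include "BasicUsageEnvironment.hh"

int main() {
  // Standard live555 boilerplate: task scheduler + usage environment:
  TaskScheduler* scheduler = BasicTaskScheduler::createNew();
  UsageEnvironment* env = BasicUsageEnvironment::createNew(*scheduler);

  RTSPServer* rtspServer = RTSPServer::createNew(*env, 8554);
  ServerMediaSession* sms = ServerMediaSession::createNew(*env, "testStream");

  // The subsession whose createNewStreamSource()/createNewRTPSink() we just
  // saw; they only run when a client requests the SDP or the stream:
  sms->addSubsession(H264VideoFileServerMediaSubsession::createNew(
      *env, "test.264", False/*reuseFirstSource*/));
  rtspServer->addServerMediaSession(sms);

  env->taskScheduler().doEventLoop(); // does not return
  return 0;
}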

Start with the source side. The class that actually represents the file resource is ByteStreamFileSource; its inheritance chain (derived class first) is:

ByteStreamFileSource::FramedFileSource::FramedSource::MediaSource

What createNewStreamSource() returns, however, is an H264VideoStreamFramer, whose inheritance chain is:

H264VideoStreamFramer::MPEGVideoStreamFramer::FramedFilter::FramedSource::MediaSource

ByteStreamFileSource is indeed where the file is first opened and a raw byte stream is obtained, but a byte source alone is not enough: a complete source for an H.264 file must also parse the format. So let's focus on the H.264 parsing. In H264VideoStreamFramer itself we only see basic get/set operations for the SPS (Sequence Parameter Set) and PPS (Picture Parameter Set); where does the actual parsing happen? The framer's constructor contains the line new H264VideoStreamParser(this, inputSource, includeStartCodeInOutput), and that Parser is the real analyzer. Its inheritance chain is:

H264VideoStreamParser::MPEGVideoStreamParser::StreamParser

Start with StreamParser: it is mostly low-level byte operations, and it holds a member FramedSource* fInputSource. Clearly fInputSource is handed down from H264VideoStreamFramer::createNew(envir(), fileSource), so here it is a ByteStreamFileSource instance. File data is read out of fInputSource through the get1Byte()/get2Bytes()/get4Bytes() and getBits() accessors.

Next, MPEGVideoStreamParser declares a pure virtual function parse(); this is clearly the interface through which H264VideoStreamParser does the final analysis. Look at the member function saveByte() (save4Bytes() is analogous):

void saveByte(u_int8_t byte) {
  if (fTo >= fLimit) { // there's no space left
    ++fNumTruncatedBytes;
    return;
  }

  *fTo++ = byte;
}

In H264VideoStreamParser::parse() you can see that bytes read via get1Byte()/get4Bytes()/getBits() are fed into saveByte()/save4Bytes(); in effect, data obtained from the byte source is saved into the MPEGVideoStreamParser member fTo. And MPEGVideoStreamFramer::doGetNextFrame() is what drives the whole parse() process:

void MPEGVideoStreamFramer::doGetNextFrame() {
  fParser->registerReadInterest(fTo, fMaxSize);
  continueReadProcessing();
}

void MPEGVideoStreamFramer::continueReadProcessing(void* clientData,
    unsigned char* /*ptr*/, unsigned /*size*/,
    struct timeval /*presentationTime*/) {
  MPEGVideoStreamFramer* framer = (MPEGVideoStreamFramer*)clientData;
  framer->continueReadProcessing();
}

void MPEGVideoStreamFramer::continueReadProcessing() {
  unsigned acquiredFrameSize = fParser->parse();
  if (acquiredFrameSize > 0) {
    // We were able to acquire a frame from the input.
    // It has already been copied to the reader's space.
    fFrameSize = acquiredFrameSize;
    fNumTruncatedBytes = fParser->numTruncatedBytes();

    // "fPresentationTime" should have already been computed.

    // Compute "fDurationInMicroseconds" now:
    fDurationInMicroseconds
      = (fFrameRate == 0.0 || ((int)fPictureCount) < 0) ? 0
      : (unsigned)((fPictureCount*1000000)/fFrameRate);
#ifdef DEBUG
    fprintf(stderr, "%d bytes @%u.%06d, fDurationInMicroseconds: %d ((%d*1000000)/%f)\n", acquiredFrameSize, fPresentationTime.tv_sec, fPresentationTime.tv_usec, fDurationInMicroseconds, fPictureCount, fFrameRate);
#endif
    fPictureCount = 0;

    // Call our own 'after getting' function.  Because we're not a 'leaf'
    // source, we can call this directly, without risking infinite recursion.
    afterGetting(this);
  } else {
    // We were unable to parse a complete frame from the input, because:
    // - we had to read more data from the source stream, or
    // - the source stream has ended.
  }
}
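To make the parse() step concrete, here is a toy, self-contained sketch (not live555 code) of the underlying idea: skip a 4-byte start code (0x00000001), then copy one NAL unit's bytes into the caller's buffer until the next start code. The output buffer plays the role of fTo/fLimit, and the overflow counter mirrors fNumTruncatedBytes:

#include <cstdint>
#include <cstddef>

static bool isStartCode(uint8_t const* p, size_t left) {
  return left >= 4 && p[0] == 0 && p[1] == 0 && p[2] == 0 && p[3] == 1;
}

// Returns the number of bytes saved into 'to'; 'numTruncatedBytes' counts
// bytes that arrived after the buffer filled up (cf. saveByte() above).
size_t extractOneNalUnit(uint8_t const* in, size_t inSize,
                         uint8_t* to, size_t maxSize,
                         size_t& numTruncatedBytes) {
  size_t saved = 0;
  numTruncatedBytes = 0;
  if (!isStartCode(in, inSize)) return 0; // expect a leading start code
  size_t pos = 4;
  while (pos < inSize && !isStartCode(in + pos, inSize - pos)) {
    if (saved < maxSize) to[saved++] = in[pos];
    else ++numTruncatedBytes; // no space left: count, don't save
    ++pos;
  }
  return saved;
}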

In other words: file data is read in by ByteStreamFileSource and processed by H264VideoStreamFramer; once the frame's duration has been computed, FramedSource::afterGetting() is called. afterGetting() invokes a function pointer that was registered through FramedSource::getNextFrame(), and getNextFrame() is called from the sink side, so afterGetting() ultimately lands back in the RTPSink's code.
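This contract is easiest to see in a minimal leaf source, written in the style of live555's DeviceSource template (a hypothetical sketch, not part of live555 itself): doGetNextFrame() fills fTo, then hands control back through afterGetting(), which fires whatever callback the downstream object registered via getNextFrame():

#include "FramedSource.hh"
#include <string.h>
#include <sys/time.h>

class MyDummySource: public FramedSource {
public:
  MyDummySource(UsageEnvironment& env): FramedSource(env) {}

protected:
  virtual void doGetNextFrame() {
    // A real source would read from a file or device here; we just emit a
    // start code to show the delivery steps:
    static unsigned char const kData[] = { 0x00, 0x00, 0x00, 0x01 };
    fFrameSize = sizeof kData;
    if (fFrameSize > fMaxSize) { // never overrun the reader's buffer
      fNumTruncatedBytes = fFrameSize - fMaxSize;
      fFrameSize = fMaxSize;
    }
    memcpy(fTo, kData, fFrameSize);
    gettimeofday(&fPresentationTime, NULL);

    // Invokes the afterGettingFunc that the downstream framer, fragmenter,
    // or sink passed to getNextFrame():
    afterGetting(this);
  }
};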

Let's pause the source-side analysis here and look at how the SubSession obtains the SDP information. Back in OnDemandServerMediaSubsession::sdpLines() we find setSDPLinesFromRTPSink(dummyRTPSink, inputSource, estBitrate). Reading that function shows that its core job is to obtain the aux SDP line from the sink and wrap it into the final SDP description. The key lines are:

char const* rangeLine = rangeSDPLine();
char const* auxSDPLine = getAuxSDPLine(rtpSink, inputSource);
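For an H.264 stream, the aux SDP line that eventually comes back is the fmtp attribute carrying the stream's config, i.e. profile-level-id and sprop-parameter-sets. It looks something like this (the base64 parameter values here are illustrative placeholders):

a=fmtp:96 packetization-mode=1;profile-level-id=42C01E;sprop-parameter-sets=Z0LAHtkDxWhAAAADAEAAAAwDxYuS,aMuMsg==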

H264VideoFileServerMediaSubsession overrides the virtual getAuxSDPLine(), so the call here resolves to the following:

char const* H264VideoFileServerMediaSubsession::getAuxSDPLine(RTPSink* rtpSink, FramedSource* inputSource) {
  if (fAuxSDPLine != NULL) return fAuxSDPLine; // it's already been set up (for a previous client)

  if (fDummyRTPSink == NULL) { // we're not already setting it up for another, concurrent stream
    // Note: For H264 video files, the 'config' information ("profile-level-id" and "sprop-parameter-sets") isn't known
    // until we start reading the file.  This means that "rtpSink"s "auxSDPLine()" will be NULL initially,
    // and we need to start reading data from our file until this changes.
    fDummyRTPSink = rtpSink;

    // Start reading the file:
    fDummyRTPSink->startPlaying(*inputSource, afterPlayingDummy, this);

    // Check whether the sink's 'auxSDPLine()' is ready:
    checkForAuxSDPLine(this);
  }

  envir().taskScheduler().doEventLoop(&fDoneFlag);

  return fAuxSDPLine;
}
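How does doEventLoop(&fDoneFlag) ever return? checkForAuxSDPLine() polls the dummy sink and sets the done flag once auxSDPLine() becomes non-NULL. Roughly, paraphrased from the live555 sources (member names follow the class above; details vary between versions):

void H264VideoFileServerMediaSubsession::checkForAuxSDPLine1() {
  char const* dasl;
  if (fAuxSDPLine != NULL) {
    setDoneFlag(); // we already have it: stop the event loop
  } else if (fDummyRTPSink != NULL
             && (dasl = fDummyRTPSink->auxSDPLine()) != NULL) {
    fAuxSDPLine = strDup(dasl); // SPS/PPS have been seen: save the line
    fDummyRTPSink = NULL;
    setDoneFlag();
  } else {
    // Not ready yet: poll again after a short delay (100 ms):
    envir().taskScheduler().scheduleDelayedTask(100000,
        (TaskFunc*)checkForAuxSDPLine, this);
  }
}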

The comment in getAuxSDPLine() says it all: for an H.264 file, the config information is not known up front, so the file has to be "played" for a while before the SDP can be produced. Following startPlaying() into continuePlaying() we arrive at:

Boolean H264VideoRTPSink::continuePlaying() {
  // First, check whether we have a 'fragmenter' class set up yet.
  // If not, create it now:
  if (fOurFragmenter == NULL) {
    fOurFragmenter = new H264FUAFragmenter(envir(), fSource, OutPacketBuffer::maxSize,
                                           ourMaxPacketSize() - 12/*RTP hdr size*/);
  } else {
    fOurFragmenter->reassignInputSource(fSource);
  }
  fSource = fOurFragmenter;

  // Then call the parent class's implementation:
  return MultiFramedRTPSink::continuePlaying();
}

Here fSource is wrapped one more time. A careful read of H264FUAFragmenter shows that it deals with NAL units. Continuing along continuePlaying() -> buildAndSendPacket() -> packFrame() -> fSource, we finally reach the part we most wanted to see:

fSource->getNextFrame(fOutBuf->curPtr(), fOutBuf->totalBytesAvailable(),
                      afterGettingFrame, this, ourHandleClosure, this);

This fetches one complete frame at a time for forwarding. getNextFrame() internally calls doGetNextFrame(), and H264FUAFragmenter's doGetNextFrame() works in two phases:

1. When it has no buffered data, it calls fInputSource->getNextFrame(). fInputSource is the H264VideoStreamFramer; its getNextFrame() ends up running H264VideoStreamParser's parse(), which in turn pulls data from ByteStreamFileSource and analyzes it. When parse() completes, the callback chain re-enters doGetNextFrame().
2. Now that data is available, the fragmenter processes it (see the sketch below).
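When a buffered NAL unit does not fit into one RTP payload, the fragmenter emits it as a sequence of FU-A fragments (RFC 6184). Here is a standalone sketch of that technique (illustrative only; the real logic lives in H264FUAFragmenter):

#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <vector>

// Split one NAL unit into RTP payloads of at most 'maxPayload' bytes.
std::vector<std::vector<uint8_t>> fragmentFUA(uint8_t const* nal, size_t nalSize,
                                              size_t maxPayload) {
  std::vector<std::vector<uint8_t>> packets;
  if (nalSize <= maxPayload) { // small enough: send the NAL unit as-is
    packets.emplace_back(nal, nal + nalSize);
    return packets;
  }

  uint8_t nalHdr = nal[0];
  uint8_t fuIndicator = (nalHdr & 0xE0) | 28; // keep F+NRI bits, type 28 = FU-A
  size_t pos = 1;                             // skip the original NAL header
  while (pos < nalSize) {
    size_t chunk = std::min(maxPayload - 2, nalSize - pos);
    uint8_t fuHeader = nalHdr & 0x1F;             // original NAL unit type
    if (pos == 1) fuHeader |= 0x80;               // S bit: first fragment
    if (pos + chunk == nalSize) fuHeader |= 0x40; // E bit: last fragment

    std::vector<uint8_t> pkt;
    pkt.push_back(fuIndicator);
    pkt.push_back(fuHeader);
    pkt.insert(pkt.end(), nal + pos, nal + pos + chunk);
    packets.push_back(std::move(pkt));
    pos += chunk;
  }
  return packets;
}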

To summarize the data path: file data is read in by ByteStreamFileSource, processed by H264VideoStreamFramer, and handed on to H264FUAFragmenter. ByteStreamFileSource returns a chunk of raw data to H264VideoStreamFramer; H264VideoStreamFramer returns one NAL unit to H264FUAFragmenter.

To summarize source-sink: the XXXFramer classes are invoked step by step to read and wrap the file data, the real source underneath is ByteStreamFileSource, and the data is finally handed over to MultiFramedRTPSink for RTP packaging.

A personal note: I'm still learning. Reading this source code made my head spin, and so did the blog posts about it. Take the ramblings above with a grain of salt.
