Understanding the Stagefright Playback Flow


Let's first look at how Stagefright plays a media file.
Stagefright exists in Android as a shared library (libstagefright.so). Its module AwesomePlayer can be used to play video/audio. AwesomePlayer provides many APIs that upper-layer applications (Java/JNI) can call; we will use a simple program to illustrate the video playback flow.
In Java, to play a media file, we would write:
MediaPlayer mp = new MediaPlayer();
mp.setDataSource(PATH_TO_FILE); ...... (1)
mp.prepare(); ........................ (2)、(3)
mp.start(); .......................... (4)

MediaPlayer::decode calls the remote MediaPlayerService::decode through Binder:
/*static*/ sp<IMemory> MediaPlayer::decode(
        int fd, int64_t offset, int64_t length,
        uint32_t *pSampleRate, int *pNumChannels, int *pFormat)
{
    LOGV("decode(%d, %lld, %lld)", fd, offset, length);
    sp<IMemory> p;
    const sp<IMediaPlayerService>& service = getMediaPlayerService();
    if (service != 0) {
        p = service->decode(fd, offset, length, pSampleRate, pNumChannels, pFormat);
    } else {
        LOGE("Unable to locate media service");
    }
    return p;
}
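getMediaPlayerService() is where the Binder hop happens: it looks the service up by name in the servicemanager. A minimal sketch of that lookup (simplified from the AOSP code of that era; the retry loop and death-notification plumbing are omitted):

// Sketch: resolve the "media.player" Binder service and cast it
// to the IMediaPlayerService interface.
const sp<IMediaPlayerService>& getMediaPlayerService()
{
    static sp<IMediaPlayerService> sService;
    if (sService == 0) {
        sp<IServiceManager> sm = defaultServiceManager();
        sp<IBinder> binder = sm->getService(String16("media.player"));
        sService = interface_cast<IMediaPlayerService>(binder);
    }
    return sService;
}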
The services are started in main_mediaserver.cpp:
AudioFlinger::instantiate();
MediaPlayerService::instantiate();
//CameraService::instantiate();
AudioPolicyService::instantiate();
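instantiate() is the standard pattern for publishing a native service: register an instance with the servicemanager under a well-known name. A sketch of what MediaPlayerService::instantiate() amounts to (following the AOSP pattern of that era):

// Sketch: register the service so clients can find it by name.
void MediaPlayerService::instantiate()
{
    defaultServiceManager()->addService(
            String16("media.player"), new MediaPlayerService());
}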
In sp<IMemory> MediaPlayerService::decode(const char* url, uint32_t *pSampleRate, int* pNumChannels, int* pFormat), createPlayer is called according to the url.
MediaPlayerService creates a different player for each player type:
static sp<MediaPlayerBase> createPlayer(player_type playerType, void* cookie,
                                        notify_callback_f notifyFunc)
{
    sp<MediaPlayerBase> p;
    switch (playerType) {
    case HI_PLAYER:
#ifdef SKYWORTH_STB_SUPPORT
        LOGV("Create StbPlayer");
        p = new StbPlayer();
#else
        LOGV("Create HiPlayer");
        p = new HiMediaPlayer();
#endif
        break;
#ifdef BUILD_DTV
    case HI_DTVPLAYER:
        p = new HiDtvPlayer();
        break;
#endif
    case SONIVOX_PLAYER:
        LOGV(" create MidiFile");
        p = new MidiFile();
        break;
    case STAGEFRIGHT_PLAYER:
        LOGV(" create StagefrightPlayer");
        p = new StagefrightPlayer;
        break;
    ...
StagefrightPlayer wraps AwesomePlayer and inherits from MediaPlayerInterface.
StagefrightPlayer decodes through OMXCodec.
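To make that wrapping concrete, here is a minimal sketch of the delegation (simplified; the real class forwards many more MediaPlayerInterface methods):

// Sketch: StagefrightPlayer is a thin MediaPlayerInterface shim that
// forwards every call to an owned AwesomePlayer.
StagefrightPlayer::StagefrightPlayer()
    : mPlayer(new AwesomePlayer) {
    mPlayer->setListener(this);      // route AwesomePlayer events upward
}

status_t StagefrightPlayer::setDataSource(const char *url, ...) {
    return mPlayer->setDataSource(url, ...);
}

status_t StagefrightPlayer::prepare() {
    return mPlayer->prepare();
}

status_t StagefrightPlayer::start() {
    return mPlayer->play();          // "start" maps to AwesomePlayer::play
}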

 

In Stagefright, the corresponding handling for each step looks like this:

(1) Assign the file's absolute path to mUri

status_t AwesomePlayer::setDataSource(const char *uri, ...)
{
    return setDataSource_l(uri, ...);
}

status_t AwesomePlayer::setDataSource_l(const char *uri, ...)
{
    mUri = uri;
}

(2) Start mQueue, which serves as the event handler

 

status_t AwesomePlayer::prepare()
{
    return prepare_l();
}

status_t AwesomePlayer::prepare_l()
{
    prepareAsync_l();
    while (mFlags & PREPARING) {
        mPreparedCondition.wait(mLock);
    }
}

status_t AwesomePlayer::prepareAsync_l()
{
    mQueue.start();
    mFlags |= PREPARING;
    mAsyncPrepareEvent = new AwesomeEvent(this, &AwesomePlayer::onPrepareAsyncEvent);
    mQueue.postEvent(mAsyncPrepareEvent);
}
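AwesomeEvent is a small adapter that lets mQueue (a TimedEventQueue) fire an AwesomePlayer member function. Roughly, it looks like this (a sketch following the actual AwesomePlayer.cpp of that era):

// Sketch: wrap a pointer-to-member so TimedEventQueue can invoke it.
struct AwesomeEvent : public TimedEventQueue::Event {
    AwesomeEvent(AwesomePlayer *player, void (AwesomePlayer::*method)())
        : mPlayer(player), mMethod(method) {}

protected:
    virtual void fire(TimedEventQueue *queue, int64_t /* now_us */) {
        (mPlayer->*mMethod)();   // e.g. calls onPrepareAsyncEvent()
    }

private:
    AwesomePlayer *mPlayer;
    void (AwesomePlayer::*mMethod)();
};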

(3) onPrepareAsyncEvent is triggered

void AwesomePlayer::onPrepareAsyncEvent()
{
    finishSetDataSource_l();
    initVideoDecoder(); ...... (3.3)
    initAudioDecoder();
}

status_t AwesomePlayer::finishSetDataSource_l()
{
    dataSource = DataSource::CreateFromURI(mUri.string(), ...);
    sp<MediaExtractor> extractor = MediaExtractor::Create(dataSource); ..... (3.1)
    return setDataSource_l(extractor); ......................... (3.2)
}

(3.1) Parse the file specified by mUri, and choose the corresponding extractor according to the file header

sp<MediaExtractor> MediaExtractor::Create(const sp<DataSource> &source, ...)
{
    source->sniff(&tmp, ...);
    mime = tmp.string();
    if (!strcasecmp(mime, MEDIA_MIMETYPE_CONTAINER_MPEG4)) {
        return new MPEG4Extractor(source);
    } else if (!strcasecmp(mime, MEDIA_MIMETYPE_AUDIO_MPEG)) {
        return new MP3Extractor(source);
    } else if (!strcasecmp(mime, MEDIA_MIMETYPE_AUDIO_AMR_NB)) {
        return new AMRExtractor(source);
    }
}
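For context, sniff polls a list of registered sniffer functions; each one inspects the header bytes and reports a MIME type with a confidence score. A hedged sketch of what one registered sniffer might look like (the signature follows the DataSource sniffer pattern of that era; the magic-number check is purely illustrative):

// Sketch of a container sniffer: check magic bytes, report MIME + confidence.
static bool SniffMyFormat(const sp<DataSource> &source,
                          String8 *mimeType, float *confidence) {
    uint8_t header[8];
    if (source->readAt(0, header, sizeof(header)) < (ssize_t)sizeof(header)) {
        return false;
    }
    if (memcmp(header + 4, "ftyp", 4)) {   // e.g. the MP4 'ftyp' box
        return false;
    }
    *mimeType = MEDIA_MIMETYPE_CONTAINER_MPEG4;
    *confidence = 0.1f;
    return true;
}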

(3.2) Use the extractor to split the file into its audio and video tracks (mVideoTrack/mAudioTrack)

status_t AwesomePlayer::setDataSource_l(const sp<MediaExtractor> &extractor)
{
    for (size_t i = 0; i < extractor->countTracks(); ++i) {
        sp<MetaData> meta = extractor->getTrackMetaData(i);
        CHECK(meta->findCString(kKeyMIMEType, &mime));
        if (!haveVideo && !strncasecmp(mime, "video/", 6)) {
            setVideoSource(extractor->getTrack(i));
            haveVideo = true;
        } else if (!haveAudio && !strncasecmp(mime, "audio/", 6)) {
            setAudioSource(extractor->getTrack(i));
            haveAudio = true;
        }
    }
}

void AwesomePlayer::setVideoSource(sp<MediaSource> source)
{
    mVideoTrack = source;
}

(3.3) Choose a video decoder (mVideoSource) according to the codec type recorded in mVideoTrack

status_t AwesomePlayer::initVideoDecoder()
{
    mVideoSource = OMXCodec::Create(mClient.interface(),
            mVideoTrack->getFormat(), false, mVideoTrack);
}

(4) Post mVideoEvent into mQueue; decoding and playback start, and mVideoRenderer draws the decoded frames

status_t AwesomePlayer::play()
{
    return play_l();
}

status_t AwesomePlayer::play_l()
{
    postVideoEvent_l();
}

void AwesomePlayer::postVideoEvent_l(int64_t delayUs)
{
    mQueue.postEventWithDelay(mVideoEvent, delayUs);
}

void AwesomePlayer::onVideoEvent()
{
    mVideoSource->read(&mVideoBuffer, &options);
    [Check Timestamp]
    mVideoRenderer->render(mVideoBuffer);
    postVideoEvent_l();
}

 

2. Choosing the Video Decoder

Let's look at how Stagefright chooses a suitable video decoder according to the type of the media file.

(1) The video decoder is decided inside initVideoDecoder, which is called from onPrepareAsyncEvent. OMXCodec::Create() returns the video decoder into mVideoSource.

status_t AwesomePlayer::initVideoDecoder()
{
    mVideoSource = OMXCodec::Create(mClient.interface(),
            mVideoTrack->getFormat(), false, mVideoTrack);
}

sp<MediaSource> OMXCodec::Create(&omx, &meta, createEncoder, &source, matchComponentName)
{
    meta->findCString(kKeyMIMEType, &mime);
    findMatchingCodecs(mime, ..., &matchingCodecs); ........ (2)
    for (size_t i = 0; i < matchingCodecs.size(); ++i) {
        componentName = matchingCodecs[i].string();
        softwareCodec = InstantiateSoftwareCodec(componentName, ...); ..... (3)
        if (softwareCodec != NULL) return softwareCodec;
        err = omx->allocateNode(componentName, ..., &node); ... (4)
        if (err == OK) {
            codec = new OMXCodec(..., componentName, ...); ...... (5)
            return codec;
        }
    }
}

(2) Pick the matching components out of kDecoderInfo according to mVideoTrack's MIME type

void OMXCodec::findMatchingCodecs(mime, ..., matchingCodecs)
{
    for (int index = 0;; ++index) {
        componentName = GetCodec(
                kDecoderInfo, sizeof(kDecoderInfo) / sizeof(kDecoderInfo[0]),
                mime, index);
        matchingCodecs->push(String8(componentName));
    }
}

static const CodecInfo kDecoderInfo[] =
{
    ...
    { MEDIA_MIMETYPE_VIDEO_MPEG4, "OMX.qcom.video.decoder.mpeg4" },
    { MEDIA_MIMETYPE_VIDEO_MPEG4, "OMX.TI.Video.Decoder" },
    { MEDIA_MIMETYPE_VIDEO_MPEG4, "M4vH263Decoder" },
    ...
};

GetCodec picks out, in order, every component name in kDecoderInfo that matches the MIME type, and the names are stored in matchingCodecs.
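A sketch of GetCodec, close to the actual helper in OMXCodec.cpp (simplified; the real function also serves the encoder table):

// Sketch: return the index-th entry in the table whose MIME matches.
static const char *GetCodec(const CodecInfo *info, size_t numInfos,
                            const char *mime, int index) {
    CHECK(index >= 0);
    for (size_t i = 0; i < numInfos; ++i) {
        if (!strcasecmp(mime, info[i].mime)) {
            if (index == 0) {
                return info[i].codec;    // e.g. "OMX.TI.Video.Decoder"
            }
            --index;
        }
    }
    return NULL;
}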

(3) Walking matchingCodecs in order, we first check whether each component is a software decoder

static sp<MediaSource> InstantiateSoftwareCodec(name, ...)
{
    FactoryInfo kFactoryInfo[] =
    {
        ...
        FACTORY_REF(M4vH263Decoder)
        ...
    };
    for (i = 0; i < sizeof(kFactoryInfo) / sizeof(kFactoryInfo[0]); ++i) {
        if (!strcmp(name, kFactoryInfo[i].name))
            return (*kFactoryInfo[i].CreateFunc)(source);
    }
}

All software decoders are listed in kFactoryInfo; the name that was passed in is used to map to the matching decoder factory.
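For reference, FACTORY_REF pairs a component name with a factory function. A sketch of the supporting macros and struct (following the AOSP pattern of that era):

// Sketch: FACTORY_CREATE defines a MakeXxx factory; FACTORY_REF emits
// a { "Xxx", MakeXxx } table entry for it.
#define FACTORY_CREATE(name) \
    static sp<MediaSource> Make##name(const sp<MediaSource> &source) { \
        return new name(source); \
    }

#define FACTORY_REF(name) { #name, Make##name },

FACTORY_CREATE(M4vH263Decoder)

struct FactoryInfo {
    const char *name;
    sp<MediaSource> (*CreateFunc)(const sp<MediaSource> &source);
};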


(4) If the component is not a software decoder, try to allocate the corresponding OMX component

status_t OMX::allocateNode(name, ..., node)
{
    mMaster->makeComponentInstance(name, &OMXNodeInstance::kCallbacks, instance, handle);
}

OMX_ERRORTYPE OMXMaster::makeComponentInstance(name, ...)
{
    plugin->makeComponentInstance(name, ...);
}

OMX_ERRORTYPE OMXPVCodecsPlugin::makeComponentInstance(name, ...)
{
    return OMX_MasterGetHandle(..., name, ...);
}

OMX_ERRORTYPE OMX_MasterGetHandle(...)
{
    return OMX_GetHandle(...);
}

(5) If the component is an OMX decoder, return it; otherwise continue checking the next component.

 

3. Video Buffer Transfer Flow

The following describes how Stagefright passes buffers to and from the OMX video decoder.

 

(1) At startup, OMXCodec uses the read function to send undecoded data to the decoder, and asks the decoder to send decoded data back

status_t OMXCodec::read(...)
{
    if (mInitialBufferSubmit) {
        mInitialBufferSubmit = false;
        drainInputBuffers();  // ----- OMX_EmptyThisBuffer (submit the input buffers)
        fillOutputBuffers();  // ----- OMX_FillThisBuffer (request the output buffers)
    }
    ...
}

void OMXCodec::drainInputBuffers()
{
    Vector<BufferInfo> *buffers = &mPortBuffers[kPortIndexInput];
    for (i = 0; i < buffers->size(); ++i) {
        drainInputBuffer(&buffers->editItemAt(i));
    }
}

void OMXCodec::drainInputBuffer(BufferInfo *info)
{
    mOMX->emptyBuffer(...);
}

void OMXCodec::fillOutputBuffers()
{
    Vector<BufferInfo> *buffers = &mPortBuffers[kPortIndexOutput];
    for (i = 0; i < buffers->size(); ++i) {
        fillOutputBuffer(&buffers->editItemAt(i));
    }
}

void OMXCodec::fillOutputBuffer(BufferInfo *info)
{
    mOMX->fillBuffer(...);
}

(2) After the decoder reads the data from its input port, it starts decoding and sends back EmptyBufferDone to notify OMXCodec

void OMXCodec::on_message(const omx_message &msg)
{
    switch (msg.type) {
    case omx_message::EMPTY_BUFFER_DONE:
    {
        IOMX::buffer_id buffer = msg.u.extended_buffer_data.buffer;
        // (find the input BufferInfo at index i matching 'buffer', then resubmit it)
        drainInputBuffer(&buffers->editItemAt(i));
    }
    }
}

After receiving EMPTY_BUFFER_DONE, OMXCodec sends the next piece of undecoded data to the decoder.
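The excerpt above elides the buffer bookkeeping. A hedged sketch of what happens around EMPTY_BUFFER_DONE (simplified; mOwnedByComponent follows the BufferInfo fields of that era, but treat this as illustrative):

// Sketch: find the BufferInfo whose id matches the message, mark it as
// owned by us again, then hand it the next chunk of input data.
Vector<BufferInfo> *buffers = &mPortBuffers[kPortIndexInput];
for (size_t i = 0; i < buffers->size(); ++i) {
    BufferInfo *info = &buffers->editItemAt(i);
    if (info->mBuffer == buffer) {
        info->mOwnedByComponent = false;
        drainInputBuffer(info);    // resubmit with fresh input data
        break;
    }
}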


(3) The decoder delivers the decoded data to its output port, and sends back FillBufferDone to notify OMXCodec

void OMXCodec::on_message(const omx_message &msg)
{
    switch (msg.type) {
    case omx_message::FILL_BUFFER_DONE:
    {
        IOMX::buffer_id buffer = msg.u.extended_buffer_data.buffer;
        // (locate the output BufferInfo 'info' at index i matching 'buffer')
        fillOutputBuffer(info);
        mFilledBuffers.push_back(i);
        mBufferFilled.signal();
    }
    }
}

After receiving FILL_BUFFER_DONE, OMXCodec puts the decoded data into mFilledBuffers, signals mBufferFilled, and asks the decoder to keep sending data.

(4) In its later part, the read function waits on the mBufferFilled condition. Once mFilledBuffers has been filled, read assigns the data to the buffer pointer and returns it to AwesomePlayer

status_t OMXCodec::read(MediaBuffer **buffer, ...)
{
    ...
    while (mFilledBuffers.empty()) {
        mBufferFilled.wait(mLock);
    }
    size_t index = *mFilledBuffers.begin();
    mFilledBuffers.erase(mFilledBuffers.begin());
    BufferInfo *info = &mPortBuffers[kPortIndexOutput].editItemAt(index);
    info->mMediaBuffer->add_ref();
    *buffer = info->mMediaBuffer;
}

 

4. Video Rendering

Besides obtaining decoded data through OMXCodec::read, AwesomePlayer::onVideoEvent must also pass that data (mVideoBuffer) to the video renderer so it gets drawn onto the screen.


(1) Before the data in mVideoBuffer can be drawn, mVideoRenderer must be created

void AwesomePlayer::onVideoEvent()
{
    ...
    if (mVideoRenderer == NULL) {
        initRenderer_l();
    }
    ...
}

void AwesomePlayer::initRenderer_l()
{
    if (!strncmp("OMX.", component, 4)) {
        mVideoRenderer = new AwesomeRemoteRenderer(
                mClient.interface()->createRenderer(mISurface, component, ...)); .......... (2)
    } else {
        mVideoRenderer = new AwesomeLocalRenderer(..., component, mISurface); ............. (3)
    }
}

(2) If the video decoder is an OMX component, create an AwesomeRemoteRenderer as mVideoRenderer.
As the code in (1) above shows, an AwesomeRemoteRenderer is essentially what OMX::createRenderer creates. createRenderer first tries to create a hardware renderer -- SharedVideoRenderer (libstagefrighthw.so); if that fails, it creates a software renderer -- SoftwareRenderer (surface) instead.
sp<IOMXRenderer> OMX::createRenderer(...)
{
    VideoRenderer *impl = NULL;
    libHandle = dlopen("libstagefrighthw.so", RTLD_NOW);
    if (libHandle) {
        CreateRendererFunc func = dlsym(libHandle, ...);
        impl = (*func)(...);               // <---- Hardware Renderer
    }
    if (!impl) {
        impl = new SoftwareRenderer(...);  // <---- Software Renderer
    }
}

(3) If the video decoder is a software component, create an AwesomeLocalRenderer as mVideoRenderer.
AwesomeLocalRenderer's constructor calls its own init function, which does exactly the same thing as OMX::createRenderer.

void AwesomeLocalRenderer::init(...)
{
    mLibHandle = dlopen("libstagefrighthw.so", RTLD_NOW);
    if (mLibHandle) {
        CreateRendererFunc func = dlsym(...);
        mTarget = (*func)(...);               // <---- Hardware Renderer
    }
    if (mTarget == NULL) {
        mTarget = new SoftwareRenderer(...);  // <---- Software Renderer
    }
}
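Whichever path creates it, the renderer's job is the same: take a decoded frame, convert it to a displayable format if needed, and post it to the surface. A rough sketch of the software path (member names follow the SoftwareRenderer of that era, but treat the body as illustrative):

// Sketch: color-convert the decoded frame, then post it to the surface.
void SoftwareRenderer::render(const void *data, size_t size, ...)
{
    // e.g. OMX_COLOR_FormatYUV420Planar -> RGB565 for the display
    mConverter.convert(mWidth, mHeight, data, 0, mMemoryHeap->base(), ...);
    mISurface->postBuffer(mOffset);   // hand the converted frame to SurfaceFlinger
}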

(4) Once mVideoRenderer has been created, the decoded data can be handed to it

void AwesomePlayer::onVideoEvent()
{
    if (!mVideoBuffer) {
        mVideoSource->read(&mVideoBuffer, ...);
    }
    [Check Timestamp]
    if (mVideoRenderer == NULL) {
        initRenderer_l();
    }
    mVideoRenderer->render(mVideoBuffer);  // <---- Render Data
}

5. Audio Playback Flow

Now we turn to the audio processing flow. The audio part of Stagefright is handled by AudioPlayer, which is created in AwesomePlayer::play_l.

 

(1) When the upper-layer application asks to play audio/video, an AudioPlayer is created and started at the same time

status_t AwesomePlayer::play_l()
{
    ...
    mAudioPlayer = new AudioPlayer(mAudioSink, ...);
    mAudioPlayer->start(...);
    ...
}

(2) While starting up, AudioPlayer first reads the first piece of decoded data, then opens the audio output

status_t AudioPlayer::start(...)
{
    mSource->read(&mFirstBuffer);
    if (mAudioSink.get() != NULL) {
        mAudioSink->open(..., &AudioPlayer::AudioSinkCallback, ...);
        mAudioSink->start();
    } else {
        mAudioTrack = new AudioTrack(..., &AudioPlayer::AudioCallback, ...);
        mAudioTrack->start();
    }
}
Judging from this excerpt, AudioPlayer does not seem to pass mFirstBuffer to the audio output here; in the full source, the first buffer is saved and consumed later by the first fillBuffer call.

(3) While opening the audio output, AudioPlayer registers a callback function with it. Afterwards, whenever the callback is invoked, AudioPlayer reads decoded data from the audio decoder

size_t AudioPlayer::AudioSinkCallback(audioSink, buffer, size, ...)
{
    return fillBuffer(buffer, size);
}

void AudioPlayer::AudioCallback(..., info)
{
    buffer = info;
    fillBuffer(buffer->raw, buffer->size);
}

size_t AudioPlayer::fillBuffer(data, size)
{
    mSource->read(&mInputBuffer, ...);
    memcpy(data, mInputBuffer->data(), ...);
}

The reading of decoded audio data is thus driven by the callback function; the excerpts above do not show how the audio output drives that callback (see the sketch below). Meanwhile, the snippet shows that once fillBuffer has copied the data (mInputBuffer) into data, the audio output consumes data.
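To fill that gap with a hedged sketch: for the AudioTrack path, the audio output thread invokes the registered callback with an EVENT_MORE_DATA request. The signature and constants below follow the AudioTrack callback API of that era; treat the body as a sketch rather than the verbatim source:

// Sketch: the AudioTrack thread asks for more PCM data; the callback
// forwards the request to fillBuffer() and reports how much it produced.
void AudioPlayer::AudioCallback(int event, void *user, void *info) {
    if (event != AudioTrack::EVENT_MORE_DATA) {
        return;
    }
    AudioTrack::Buffer *buffer = (AudioTrack::Buffer *)info;
    size_t numBytesWritten =
            ((AudioPlayer *)user)->fillBuffer(buffer->raw, buffer->size);
    buffer->size = numBytesWritten;   // tell AudioTrack how much we wrote
}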
As for the audio decoder, its working flow is the same as the video decoder's; see part 3 above, the video buffer transfer flow.

 

6. Audio/Video Synchronization

Having covered the audio and video flows, we now look at audio/video synchronization. OpenCORE's approach is to set up a master clock against which audio and video each pace their output. In Stagefright, by contrast, the audio output is driven by callbacks, and video synchronizes itself to the audio timestamps. In detail:

(1) When the callback drives AudioPlayer to read decoded data, AudioPlayer obtains two timestamps -- mPositionTimeMediaUs and mPositionTimeRealUs

size_t AudioPlayer::fillBuffer(data, size)
{
    ...
    mSource->read(&mInputBuffer, ...);
    mInputBuffer->meta_data()->findInt64(kKeyTime, &mPositionTimeMediaUs);
    mPositionTimeRealUs =
            ((mNumFramesPlayed + size_done / mFrameSize) * 1000000) / mSampleRate;
    ...
}

mPositionTimeMediaUs is the timestamp carried in the data itself; mPositionTimeRealUs is the actual time at which this data is played, derived from the frame count and the sample rate.
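As a concrete walkthrough of that formula (made-up numbers):

// Worked example: 44100 Hz PCM, 88200 frames already handed to the output.
int64_t mNumFramesPlayed = 88200;   // frames queued so far (size_done assumed 0)
int32_t mSampleRate      = 44100;
int64_t mPositionTimeRealUs =
        (mNumFramesPlayed * 1000000) / mSampleRate;   // = 2,000,000 us, i.e. 2 s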
(2) The video side of Stagefright then uses the difference between these two AudioPlayer timestamps as its playback reference

void AwesomePlayer::onVideoEvent()
{
    ...
    mVideoSource->read(&mVideoBuffer, ...);
    mVideoBuffer->meta_data()->findInt64(kKeyTime, &timeUs);
    mAudioPlayer->getMediaTimeMapping(&realTimeUs, &mediaTimeUs);
    mTimeSourceDeltaUs = realTimeUs - mediaTimeUs;
    nowUs = ts->getRealTimeUs() - mTimeSourceDeltaUs;
    latenessUs = nowUs - timeUs;
    ...
}

AwesomePlayer obtains realTimeUs (i.e. mPositionTimeRealUs) and mediaTimeUs (i.e. mPositionTimeMediaUs) from AudioPlayer and computes their difference, mTimeSourceDeltaUs.

(3) Finally, the video data is scheduled according to its lateness

void AwesomePlayer::onVideoEvent()
{
  ...
  if (latenessUs > 40000)
  {
    mVideoBuffer->release();
    mVideoBuffer = NULL;
    postVideoEvent_l();
    return;
  }
  if (latenessUs < -10000)
  {
    postVideoEvent_l(10000);
    return;
  }
  mVideoRenderer->render(mVideoBuffer);
  ...
}
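To make those thresholds concrete, here is a made-up walkthrough of the numbers: a frame more than 40 ms late is dropped, while a frame more than 10 ms early makes the player wait.

// Illustrative numbers only. Suppose the audio clock maps real time
// 5.000 s to media time 5.000 s, and the next frame is stamped 4.950 s.
int64_t realTimeUs  = 5000000, mediaTimeUs = 5000000;
int64_t mTimeSourceDeltaUs = realTimeUs - mediaTimeUs;            // 0
int64_t nowUs  = 5000000 /* ts->getRealTimeUs() */ - mTimeSourceDeltaUs;
int64_t timeUs = 4950000;                  // the frame's own timestamp
int64_t latenessUs = nowUs - timeUs;       // +50,000 us: 50 ms late
// 50000 > 40000, so this frame is released (dropped) and the next fetched;
// had latenessUs been below -10000, playback would instead wait ~10 ms.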