[Android][Multimedia] The StagefrightPlayer framework


Overview


From reading the stagefrightplayer code we can see that StagefrightPlayer is a wrapper around AwesomePlayer; the actual work is all done by AwesomePlayer.

A typical player framework consists of the following parts:

stream: the stream type, usually a local file, a network stream, etc.

demuxer: the demultiplexing module. It analyzes the data to be played to obtain basic information such as the audio and video parameters, and it also splits the audio and video data, handing complete, well-bounded packets to the modules below.

decoder: the decoding module. It takes the raw audio packets and video packets produced by the demuxer and decodes them into data that can actually be played or displayed, i.e. PCM and YUV (or RGB).

render: the output module. It plays the PCM on the sound card and displays the YUV (or RGB) on the canvas.

other parts: mainly player mechanisms such as A/V sync, audio-track switching, pause/resume, seek, and so on.
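The four modules above can be sketched as a minimal pipeline. This is an illustrative toy, not Stagefright code; all type names here (Packet, Frame, Demuxer, Decoder, Render) are invented for the sketch:

```cpp
#include <cassert>
#include <cstdint>
#include <queue>
#include <vector>

// Hypothetical packet/frame types, for illustration only.
struct Packet { bool isAudio; std::vector<uint8_t> data; };
struct Frame  { bool isAudio; int64_t ptsUs; };

// demuxer: splits the container into complete audio/video packets
struct Demuxer {
    std::queue<Packet> packets;
    bool readPacket(Packet *out) {
        if (packets.empty()) return false;   // end of stream
        *out = packets.front();
        packets.pop();
        return true;
    }
};

// decoder: packet in, raw PCM / YUV frame out
struct Decoder {
    Frame decode(const Packet &p) { return Frame{p.isAudio, 0}; }
};

// render: PCM to the sound card, YUV/RGB to the canvas
struct Render {
    int framesShown = 0;
    void output(const Frame &) { ++framesShown; }
};

// One pass of the pipeline: demux -> decode -> render.
int playAll(Demuxer &d, Decoder &dec, Render &r) {
    Packet p;
    while (d.readPacket(&p)) {
        r.output(dec.decode(p));
    }
    return r.framesShown;
}
```

In the real player, each stage additionally runs under the sync/seek/pause mechanisms listed under "other parts".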

As a media player, stagefrightplayer naturally contains all of the basic modules above. This article analyzes the structure of stagefrightplayer from an overall perspective.

This article covers the following topics:

1. The overall structure of AwesomePlayer

2. How AwesomePlayer works

3. AwesomePlayer code analysis

4. Analysis of the other events

The details follow below.

1. The overall structure of AwesomePlayer


The overall structure is shown in the figure below.


A few brief notes first; later articles will analyze each module in detail.

Data structures

mExtractor -- MediaExtractor

mAudioTrack&mVideoTrack -- MediaSource

Notes:

AwesomePlayer uses the mExtractor and mAudioTrack/mVideoTrack members to parse and read the data.

For every container format there is a subclass of MediaExtractor; for example, the extractor class for AVI files is AVIExtractor. Its constructor parses the data and extracts the main information: the number of streams in the media file; the sample rate, channel count and bit depth of the audio stream; the width, height and frame rate of the video stream; and so on.

Each stream corresponds to its own MediaSource; for AVI this is AVISource. MediaSource provides state interfaces (start, stop), a data-read interface (read) and a parameter-query interface (getFormat). Each call to read can be thought of as reading one packet of the stream.

In general, since MediaExtractor is responsible for the parsing, a MediaSource's read typically obtains the offset and size through the MediaExtractor's interfaces; the MediaSource itself only reads the data and takes no part in parsing.

Here mAudioTrack and mVideoTrack are the audio stream and the video stream selected from the media file. From the code, the selection rule is simply the first video stream and the first audio stream.

mAudioSource & mVideoSource -- MediaSource

Notes:

mAudioSource and mVideoSource can be thought of as the bridge between AwesomePlayer and the decoder: AwesomePlayer fetches data from them for playback, while the decoder decodes and fills them with data.

The communication between AwesomePlayer and the decoder goes through the OMXClient mClient member. The OMX* decoder runs as a service, and each AwesomePlayer object holds its own client to interact with it.

mAudioPlayer -- AudioPlayer

Notes:

mAudioPlayer is the module responsible for audio output. The main wrapping relationship is: mAudioPlayer -> mAudioSink -> mAudioTrack (note that this mAudioTrack is different from the one above; here it is an AudioTrack object).

The objects that actually perform audio playback are the mAudioTrack/audioflinger pair.

mVideoRenderer -- AwesomeRenderer

Notes:

mVideoRenderer is the module responsible for video display. The wrapping relationship is: mVideoRenderer -> mTarget (SoftwareRenderer) -> mNativeWindow. [Note] This part is not yet fully understood by the author.

In the end the display goes through surfaceflinger.

2. How AwesomePlayer works

The main members of AwesomePlayer were introduced above; this section describes the mechanism AwesomePlayer uses to drive those members to work together.

This brings us to an Android class called TimedEventQueue, a message-processing class introduced in an earlier article. [Reference: TimedEventQueue]

Each event message provides a fire function that performs the corresponding work, and the whole playback process is driven by recursively re-posting the mVideoEvent event to control playback of the media file.
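The recursive drive pattern can be illustrated with a toy event queue. EventQueue and ToyPlayer below are invented for illustration; the real TimedEventQueue also supports delayed posting and cancellation, which are omitted here:

```cpp
#include <cassert>
#include <functional>
#include <queue>

// Toy stand-in for TimedEventQueue: runs posted events in order.
struct EventQueue {
    std::queue<std::function<void()>> q;
    void post(std::function<void()> ev) { q.push(std::move(ev)); }
    void run() {
        while (!q.empty()) {
            auto ev = q.front();
            q.pop();
            ev();   // "fire" the event
        }
    }
};

// The playback drive pattern: each onVideoEvent renders one frame
// and re-posts itself until the stream is done.
struct ToyPlayer {
    EventQueue *queue;
    int framesLeft;
    int rendered = 0;

    void postVideoEvent() { queue->post([this] { onVideoEvent(); }); }
    void onVideoEvent() {
        if (framesLeft == 0) return;   // stream done, recursion stops
        --framesLeft;
        ++rendered;                    // "render one frame"
        postVideoEvent();              // schedule the next frame
    }
};
```

Because each event only schedules the next one instead of looping, the queue thread is never blocked by playback.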

Note: see the code analysis below for the details.

AwesomePlayer mainly uses the following events:

sp<TimedEventQueue::Event> mVideoEvent = new AwesomeEvent(this, &AwesomePlayer::onVideoEvent); -- displays one frame and handles A/V sync
sp<TimedEventQueue::Event> mStreamDoneEvent = new AwesomeEvent(this, &AwesomePlayer::onStreamDone); -- handles end of playback

sp<TimedEventQueue::Event> mBufferingEvent = new AwesomeEvent(this, &AwesomePlayer::onBufferingUpdate); -- caches data
sp<TimedEventQueue::Event> mCheckAudioStatusEvent = new AwesomeEvent(this, &AwesomePlayer::onCheckAudioStatus); -- monitors audio status: seek and end of playback
sp<TimedEventQueue::Event> mVideoLagEvent = new AwesomeEvent(this, &AwesomePlayer::onVideoLagUpdate); -- monitors video decoding performance

sp<TimedEventQueue::Event> mAsyncPrepareEvent = new AwesomeEvent(this, &AwesomePlayer::onPrepareAsyncEvent); -- performs the prepareAsync work

The concrete workflow will become clear in the code analysis below.

3. AwesomePlayer code analysis

Below we deepen our understanding of AwesomePlayer's structure and flow by analyzing the actual code.

We follow the actual call sequence for playing a media file. The calls occur in the following order:

StagefrightPlayer player = new StagefrightPlayer();
player->setDataSource(*)
player->prepareAsync()
player->start();


Each call is described in detail below. [Note] This is only to illustrate the flow and is not rigorous.

3.1 The constructor

StagefrightPlayer::StagefrightPlayer()
    : mPlayer(new AwesomePlayer) {
    ALOGV("StagefrightPlayer");
    mPlayer->setListener(this);
}

As shown, StagefrightPlayer is a wrapper around AwesomePlayer. Let's look at AwesomePlayer's constructor.

AwesomePlayer::AwesomePlayer()
    : mQueueStarted(false),
      mUIDValid(false),
      mTimeSource(NULL),
      mVideoRenderingStarted(false),
      mVideoRendererIsPreview(false),
      mAudioPlayer(NULL),
      mDisplayWidth(0),
      mDisplayHeight(0),
      mVideoScalingMode(NATIVE_WINDOW_SCALING_MODE_SCALE_TO_WINDOW),
      mFlags(0),
      mExtractorFlags(0),
      mVideoBuffer(NULL),
      mDecryptHandle(NULL),
      mLastVideoTimeUs(-1),
      mTextDriver(NULL) {
    CHECK_EQ(mClient.connect(), (status_t)OK);
    DataSource::RegisterDefaultSniffers();

    mVideoEvent = new AwesomeEvent(this, &AwesomePlayer::onVideoEvent);
    mVideoEventPending = false;
    mStreamDoneEvent = new AwesomeEvent(this, &AwesomePlayer::onStreamDone);
    mStreamDoneEventPending = false;
    mBufferingEvent = new AwesomeEvent(this, &AwesomePlayer::onBufferingUpdate);
    mBufferingEventPending = false;
    mVideoLagEvent = new AwesomeEvent(this, &AwesomePlayer::onVideoLagUpdate);
    mVideoEventPending = false;
    mCheckAudioStatusEvent = new AwesomeEvent(
            this, &AwesomePlayer::onCheckAudioStatus);
    mAudioStatusEventPending = false;

    reset();
}

The constructor mainly performs setup work: creating the event objects and supplying their fire functions, and initializing the corresponding state variables.

One more thing worth noting: mClient.connect() establishes the link between AwesomePlayer and the OMX decoder; this will be covered in detail when the decoder module is introduced.

3.2 setDataSource

Note: we use a local file as the example here; the parameter passed in is a file descriptor.

The code is as follows:

status_t StagefrightPlayer::setDataSource(int fd, int64_t offset, int64_t length) {
    ALOGV("setDataSource(%d, %lld, %lld)", fd, offset, length);
    return mPlayer->setDataSource(dup(fd), offset, length);
}

Digging further into AwesomePlayer:

status_t AwesomePlayer::setDataSource(
        int fd, int64_t offset, int64_t length) {
    Mutex::Autolock autoLock(mLock);

    reset_l();

    sp<DataSource> dataSource = new FileSource(fd, offset, length);

    status_t err = dataSource->initCheck();
    if (err != OK) {
        return err;
    }

    mFileSource = dataSource;

    {
        Mutex::Autolock autoLock(mStatsLock);
        mStats.mFd = fd;
        mStats.mURI = String8();
    }

    return setDataSource_l(dataSource);
}

First a DataSource is created for the file, here a FileSource, which provides read, seek, etc. on the actual file.

dataSource->initCheck() checks validity; here it checks whether the fd is valid.

The main work is done in setDataSource_l(dataSource); let's follow the code.

status_t AwesomePlayer::setDataSource_l(
        const sp<DataSource> &dataSource) {
    sp<MediaExtractor> extractor = MediaExtractor::Create(dataSource);

    if (extractor == NULL) {
        return UNKNOWN_ERROR;
    }

    if (extractor->getDrmFlag()) {
        checkDrmStatus(dataSource);
    }

    return setDataSource_l(extractor);
}

Here the matching MediaExtractor is created from the given dataSource. Note that once we have a dataSource we can read data from the file, parse the file header to determine the exact container format, and then create the corresponding MediaExtractor.

As mentioned earlier, the file is fully parsed when the MediaExtractor is created, including the number of streams and the details of each stream, so from here on that information can be used directly.

We won't go deeper here; MediaExtractor will be covered in its own article.

Now into setDataSource_l(extractor). There is quite a bit of code, so we'll look at it in pieces.

status_t AwesomePlayer::setDataSource_l(const sp<MediaExtractor> &extractor) {
    // Attempt to approximate overall stream bitrate by summing all
    // tracks' individual bitrates, if not all of them advertise bitrate,
    // we have to fail.

    int64_t totalBitRate = 0;

    mExtractor = extractor;
    for (size_t i = 0; i < extractor->countTracks(); ++i) {
        sp<MetaData> meta = extractor->getTrackMetaData(i);

        int32_t bitrate;
        if (!meta->findInt32(kKeyBitRate, &bitrate)) {
            const char *mime;
            CHECK(meta->findCString(kKeyMIMEType, &mime));
            ALOGV("track of type '%s' does not publish bitrate", mime);

            totalBitRate = -1;
            break;
        }

        totalBitRate += bitrate;
    }

    mBitrate = totalBitRate;


As noted above, once the MediaExtractor has been created we have access to the stream information.

The number of streams is obtained with extractor->countTracks(), and the information for each stream is stored in a MetaData obtained with extractor->getTrackMetaData(i).

The snippet above computes the overall bitrate of the file by summing the bitrate of every stream (if any stream does not advertise this parameter, totalBitRate is set to -1), and finally assigns the result to the mBitrate member.
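The summing rule can be captured in a small standalone function. This is a hypothetical helper mirroring the loop above, not AOSP code; a negative input stands for "track does not advertise a bitrate":

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Sum per-track bitrates; if any track does not advertise one
// (represented here by a negative value), the whole total is -1.
int64_t totalBitRate(const std::vector<int64_t> &trackBitrates) {
    int64_t total = 0;
    for (int64_t b : trackBitrates) {
        if (b < 0) return -1;   // one unknown track poisons the total
        total += b;
    }
    return total;
}
```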

Next comes the selection of the video stream and audio stream to play.

    bool haveAudio = false;
    bool haveVideo = false;
    for (size_t i = 0; i < extractor->countTracks(); ++i) {
        sp<MetaData> meta = extractor->getTrackMetaData(i);

        const char *_mime;
        CHECK(meta->findCString(kKeyMIMEType, &_mime));

        String8 mime = String8(_mime);

For each stream, the kKeyMIMEType parameter is fetched first; it indicates whether the stream is audio, video or subtitle. Let's look at each case.

        if (!haveVideo && !strncasecmp(mime.string(), "video/", 6)) {
            setVideoSource(extractor->getTrack(i));
            haveVideo = true;

            // Set the presentation/display size
            int32_t displayWidth, displayHeight;
            bool success = meta->findInt32(kKeyDisplayWidth, &displayWidth);
            if (success) {
                success = meta->findInt32(kKeyDisplayHeight, &displayHeight);
            }
            if (success) {
                mDisplayWidth = displayWidth;
                mDisplayHeight = displayHeight;
            }

            {
                Mutex::Autolock autoLock(mStatsLock);
                mStats.mVideoTrackIndex = mStats.mTracks.size();
                mStats.mTracks.push();
                TrackStat *stat =
                    &mStats.mTracks.editItemAt(mStats.mVideoTrackIndex);
                stat->mMIME = mime.string();
            }
        }

The code above handles the video case: if the stream is video, the haveVideo flag is set to true and the relevant parameters, mainly width and height, are fetched.

The stream information is then saved in mStats.mTracks.

        else if (!haveAudio && !strncasecmp(mime.string(), "audio/", 6)) {
            setAudioSource(extractor->getTrack(i));
            haveAudio = true;
            mActiveAudioTrackIndex = i;

            {
                Mutex::Autolock autoLock(mStatsLock);
                mStats.mAudioTrackIndex = mStats.mTracks.size();
                mStats.mTracks.push();
                TrackStat *stat =
                    &mStats.mTracks.editItemAt(mStats.mAudioTrackIndex);
                stat->mMIME = mime.string();
            }

            if (!strcasecmp(mime.string(), MEDIA_MIMETYPE_AUDIO_VORBIS)) {
                // Only do this for vorbis audio, none of the other audio
                // formats even support this ringtone specific hack and
                // retrieving the metadata on some extractors may turn out
                // to be very expensive.
                sp<MetaData> fileMeta = extractor->getMetaData();
                int32_t loop;
                if (fileMeta != NULL
                        && fileMeta->findInt32(kKeyAutoLoop, &loop) && loop != 0) {
                    modifyFlags(AUTO_LOOPING, SET);
                }
            }
        }

The audio case above is similar: the parameters are saved, the haveAudio flag is set, and the stream information is stored in mStats.mTracks.

The vorbis case gets special handling here; we won't dig into it.

        else if (!strcasecmp(mime.string(), MEDIA_MIMETYPE_TEXT_3GPP)) {
            addTextSource_l(i, extractor->getTrack(i));
        }

After that comes the subtitle handling. Subtitle support in Android is still incomplete here; it will be covered systematically once it is more mature.

    if (!haveAudio && !haveVideo) {
        if (mWVMExtractor != NULL) {
            return mWVMExtractor->getError();
        } else {
            return UNKNOWN_ERROR;
        }
    }

    mExtractorFlags = extractor->flags();

    return OK;
}

Finally the flags are updated and we're done.

To summarize: the main job of setDataSource is to parse the file header information and obtain the basic parameters of every stream in the media file.

3.3 prepareAsync

The earlier translated article [Introduction to the MediaPlayer class] explained the difference between prepare and prepareAsync: one is synchronous and the other asynchronous, but they do the same work.

Here is the entry code:

status_t StagefrightPlayer::prepareAsync() {
    return mPlayer->prepareAsync();
}

Continuing into AwesomePlayer:

status_t AwesomePlayer::prepareAsync() {
    ATRACE_CALL();
    Mutex::Autolock autoLock(mLock);

    if (mFlags & PREPARING) {
        return UNKNOWN_ERROR;  // async prepare already pending
    }

    mIsAsyncPrepare = true;
    return prepareAsync_l();
}

Continuing:

status_t AwesomePlayer::prepareAsync_l() {
    if (mFlags & PREPARING) {
        return UNKNOWN_ERROR;  // async prepare already pending
    }

    if (!mQueueStarted) {
        mQueue.start();
        mQueueStarted = true;
    }

    modifyFlags(PREPARING, SET);
    mAsyncPrepareEvent = new AwesomeEvent(
            this, &AwesomePlayer::onPrepareAsyncEvent);

    mQueue.postEvent(mAsyncPrepareEvent);

    return OK;
}

This part is important. First, if mQueue has not been started yet it is started; once started it can process events.

Then the mAsyncPrepareEvent event is constructed with the event handler AwesomePlayer::onPrepareAsyncEvent, and finally the message is posted via mQueue.postEvent(mAsyncPrepareEvent).

Posting the message triggers the handler; let's look at the implementation of onPrepareAsyncEvent.

void AwesomePlayer::onPrepareAsyncEvent() {
    Mutex::Autolock autoLock(mLock);

    if (mFlags & PREPARE_CANCELLED) {
        ALOGI("prepare was cancelled before doing anything");
        abortPrepare(UNKNOWN_ERROR);
        return;
    }

    if (mUri.size() > 0) {
        status_t err = finishSetDataSource_l();

        if (err != OK) {
            abortPrepare(err);
            return;
        }
    }

    if (mVideoTrack != NULL && mVideoSource == NULL) {
        status_t err = initVideoDecoder();

        if (err != OK) {
            abortPrepare(err);
            return;
        }
    }

    if (mAudioTrack != NULL && mAudioSource == NULL) {
        status_t err = initAudioDecoder();

        if (err != OK) {
            abortPrepare(err);
            return;
        }
    }

    modifyFlags(PREPARING_CONNECTED, SET);

    if (isStreamingHTTP()) {
        postBufferingEvent_l();
    } else {
        finishAsyncPrepare_l();
    }
}

This performs the following key steps: initVideoDecoder, initAudioDecoder and finishAsyncPrepare_l.

Note: there is also a method finishSetDataSource_l. Because our setDataSource call passed in an fd, mUri was never initialized, so finishSetDataSource_l is not called here. From its implementation you can see that its function is the same as setDataSource(int fd, int64_t offset, int64_t length).

We cover the three steps in separate subsections below.

Note: we ignore the DRM cases; since this only introduces the structure and flow, some code is omitted.

(1) initVideoDecoder

status_t AwesomePlayer::initVideoDecoder(uint32_t flags) {
    ATRACE_CALL();

    ALOGV("initVideoDecoder flags=0x%x", flags);
    mVideoSource = OMXCodec::Create(
            mClient.interface(), mVideoTrack->getFormat(),
            false, // createEncoder
            mVideoTrack,
            NULL, flags, USE_SURFACE_ALLOC ? mNativeWindow : NULL);

    if (mVideoSource != NULL) {
        int64_t durationUs;
        if (mVideoTrack->getFormat()->findInt64(kKeyDuration, &durationUs)) {
            Mutex::Autolock autoLock(mMiscStateLock);
            if (mDurationUs < 0 || durationUs > mDurationUs) {
                mDurationUs = durationUs;
            }
        }

        status_t err = mVideoSource->start();

        if (err != OK) {
            ALOGE("failed to start video source");
            mVideoSource.clear();
            return err;
        }
    }

Above is the body of the function. It mainly calls OMXCodec::Create to create the decoder, passing mClient in as the bridge and mVideoTrack as the decoder module's data source.

The returned MediaSource object is saved in mVideoSource, so that later AwesomePlayer can fetch data from mVideoSource for display.

After creation, the decoder is started to begin decoding.

    if (mVideoSource != NULL) {
        const char *componentName;
        CHECK(mVideoSource->getFormat()
                ->findCString(kKeyDecoderComponent, &componentName));

        {
            Mutex::Autolock autoLock(mStatsLock);
            TrackStat *stat = &mStats.mTracks.editItemAt(mStats.mVideoTrackIndex);
            stat->mDecoderName = componentName;
        }

        static const char *kPrefix = "OMX.Nvidia.";
        static const char *kSuffix = ".decode";
        static const size_t kSuffixLength = strlen(kSuffix);

        size_t componentNameLength = strlen(componentName);

        if (!strncmp(componentName, kPrefix, strlen(kPrefix))
                && componentNameLength >= kSuffixLength
                && !strcmp(&componentName[
                    componentNameLength - kSuffixLength], kSuffix)) {
            modifyFlags(SLOW_DECODER_HACK, SET);
        }
    }

    return mVideoSource != NULL ? OK : UNKNOWN_ERROR;
}

The remaining code just records the decoder information in the mStats member.

(2) initAudioDecoder

The audio part works the same way as the video part: it creates the decoder and saves the information. The code is as follows:

status_t AwesomePlayer::initAudioDecoder() {
    ATRACE_CALL();

    sp<MetaData> meta = mAudioTrack->getFormat();

    const char *mime;
    CHECK(meta->findCString(kKeyMIMEType, &mime));

    if (!strcasecmp(mime, MEDIA_MIMETYPE_AUDIO_RAW)) {
        mAudioSource = mAudioTrack;
    } else {
        mAudioSource = OMXCodec::Create(
                mClient.interface(), mAudioTrack->getFormat(),
                false, // createEncoder
                mAudioTrack);
    }

    if (mAudioSource != NULL) {
        int64_t durationUs;
        if (mAudioTrack->getFormat()->findInt64(kKeyDuration, &durationUs)) {
            Mutex::Autolock autoLock(mMiscStateLock);
            if (mDurationUs < 0 || durationUs > mDurationUs) {
                mDurationUs = durationUs;
            }
        }

        status_t err = mAudioSource->start();

        if (err != OK) {
            mAudioSource.clear();
            return err;
        }
    } else if (!strcasecmp(mime, MEDIA_MIMETYPE_AUDIO_QCELP)) {
        // For legacy reasons we're simply going to ignore the absence
        // of an audio decoder for QCELP instead of aborting playback
        // altogether.
        return OK;
    }

    if (mAudioSource != NULL) {
        Mutex::Autolock autoLock(mStatsLock);
        TrackStat *stat = &mStats.mTracks.editItemAt(mStats.mAudioTrackIndex);

        const char *component;
        if (!mAudioSource->getFormat()
                ->findCString(kKeyDecoderComponent, &component)) {
            component = "none";
        }

        stat->mDecoderName = component;
    }

    return mAudioSource != NULL ? OK : UNKNOWN_ERROR;
}

(3) finishAsyncPrepare_l

After the steps above, the decoders can decode normally, and the decoded data becomes available through mAudioSource and mVideoSource.

What remains is the wrap-up of prepare; the code is as follows:

void AwesomePlayer::finishAsyncPrepare_l() {
    if (mIsAsyncPrepare) {
        if (mVideoSource == NULL) {
            notifyListener_l(MEDIA_SET_VIDEO_SIZE, 0, 0);
        } else {
            notifyVideoSize_l();
        }

        notifyListener_l(MEDIA_PREPARED);
    }

    mPrepareResult = OK;
    modifyFlags((PREPARING | PREPARE_CANCELLED | PREPARING_CONNECTED), CLEAR);
    modifyFlags(PREPARED, SET);
    mAsyncPrepareEvent = NULL;
    mPreparedCondition.broadcast();
}

First, notifyVideoSize_l updates the width, height and rotation information, determining the final output layout.

The state and flags are also updated.

Finally, mPreparedCondition.broadcast() announces that prepare succeeded. [For Condition usage, see the article: condition & mutex]

Its main use is in the synchronous path: when prepare is called, the caller must block until prepareAsync has finished.

Hence the following code:

status_t AwesomePlayer::prepare_l() {
    if (mFlags & PREPARED) {
        return OK;
    }

    if (mFlags & PREPARING) {
        return UNKNOWN_ERROR;
    }

    mIsAsyncPrepare = false;
    status_t err = prepareAsync_l();

    if (err != OK) {
        return err;
    }

    while (mFlags & PREPARING) {
        mPreparedCondition.wait(mLock);
    }

    return mPrepareResult;
}

Here, broadcast makes mPreparedCondition.wait(mLock) return. Note also that the loop keeps checking AwesomePlayer's state flags, which finishAsyncPrepare_l above modifies via modifyFlags(PREPARED, SET).

At this point all of the preparation work is complete.
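The wait/broadcast pattern above can be reproduced with standard C++ primitives. This is a sketch: ToyPreparer is invented, and std::condition_variable plays the role of the Android Condition; the predicate re-check corresponds to the `while (mFlags & PREPARING)` loop:

```cpp
#include <cassert>
#include <condition_variable>
#include <mutex>
#include <thread>

// prepare() blocks until the async path clears the "preparing" flag,
// just as prepare_l() waits for finishAsyncPrepare_l().
struct ToyPreparer {
    std::mutex lock;
    std::condition_variable preparedCondition;
    bool preparing = true;

    void finishAsyncPrepare() {              // runs on the queue thread
        std::lock_guard<std::mutex> g(lock);
        preparing = false;
        preparedCondition.notify_all();      // ~ mPreparedCondition.broadcast()
    }

    void prepare() {                         // synchronous caller
        std::unique_lock<std::mutex> g(lock);
        // The predicate guards against spurious wakeups and against the
        // broadcast arriving before we start waiting.
        preparedCondition.wait(g, [this] { return !preparing; });
    }
};
```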

3.4 start

Entry point:

status_t StagefrightPlayer::start() {
    ALOGV("start");
    return mPlayer->play();
}

Continuing:

status_t AwesomePlayer::play() {
    ATRACE_CALL();
    Mutex::Autolock autoLock(mLock);
    modifyFlags(CACHE_UNDERRUN, CLEAR);
    return play_l();
}

In AwesomePlayer, play calls play_l; let's look at play_l.

status_t AwesomePlayer::play_l() {
    modifyFlags(SEEK_PREVIEW, CLEAR);

    if (mFlags & PLAYING) {
        return OK;
    }

    if (!(mFlags & PREPARED)) {
        status_t err = prepare_l();

        if (err != OK) {
            return err;
        }
    }

    modifyFlags(PLAYING, SET);
    modifyFlags(FIRST_FRAME, SET);

First it checks whether the state is PREPARED; if not, prepare_l is called again.

If it is, the state is changed to PLAYING.

    if (mAudioSource != NULL) {
        if (mAudioPlayer == NULL) {
            if (mAudioSink != NULL) {
                bool allowDeepBuffering;
                int64_t cachedDurationUs;
                bool eos;
                if (mVideoSource == NULL
                        && (mDurationUs > AUDIO_SINK_MIN_DEEP_BUFFER_DURATION_US ||
                        (getCachedDuration_l(&cachedDurationUs, &eos) &&
                        cachedDurationUs > AUDIO_SINK_MIN_DEEP_BUFFER_DURATION_US))) {
                    allowDeepBuffering = true;
                } else {
                    allowDeepBuffering = false;
                }

                mAudioPlayer = new AudioPlayer(mAudioSink, allowDeepBuffering, this);
                mAudioPlayer->setSource(mAudioSource);

                mTimeSource = mAudioPlayer;

                // If there was a seek request before we ever started,
                // honor the request now.
                // Make sure to do this before starting the audio player
                // to avoid a race condition.
                seekAudioIfNecessary_l();
            }
        }

        CHECK(!(mFlags & AUDIO_RUNNING));

        if (mVideoSource == NULL) {
            // We don't want to post an error notification at this point,
            // the error returned from MediaPlayer::start() will suffice.

            status_t err = startAudioPlayer_l(
                    false /* sendErrorNotification */);

            if (err != OK) {
                delete mAudioPlayer;
                mAudioPlayer = NULL;

                modifyFlags((PLAYING | FIRST_FRAME), CLEAR);

                if (mDecryptHandle != NULL) {
                    mDrmManagerClient->setPlaybackStatus(
                            mDecryptHandle, Playback::STOP, 0);
                }

                return err;
            }
        }
    }

The code above creates the audio output module, mAudioPlayer.

The code is fairly clear: an mAudioPlayer object is constructed with mAudioSink passed in. As introduced earlier, the actual playback chain is mAudioPlayer -> mAudioSink -> mAudioTrack (again, don't confuse this one: the class here is AudioTrack).

mAudioSource is also set as mAudioPlayer's data source.

Then, if there is only audio (i.e. mVideoSource == NULL), playback is started directly.

    if (mTimeSource == NULL && mAudioPlayer == NULL) {
        mTimeSource = &mSystemTimeSource;
    }

The code above sets the sync clock: if mAudioPlayer exists, playback is paced against the audio; otherwise the system clock is used.

    if (mVideoSource != NULL) {
        // Kick off video playback
        postVideoEvent_l();

        if (mAudioSource != NULL && mVideoSource != NULL) {
            postVideoLagEvent_l();
        }
    }

    if (mFlags & AT_EOS) {
        // Legacy behaviour, if a stream finishes playing and then
        // is started again, we play from the start...
        seekTo_l(0);
    }

    uint32_t params = IMediaPlayerService::kBatteryDataCodecStarted
            | IMediaPlayerService::kBatteryDataTrackDecoder;
    if ((mAudioSource != NULL) && (mAudioSource != mAudioTrack)) {
        params |= IMediaPlayerService::kBatteryDataTrackAudio;
    }
    if (mVideoSource != NULL) {
        params |= IMediaPlayerService::kBatteryDataTrackVideo;
    }
    addBatteryData(params);

    return OK;
}

Reaching this point means both audio and video exist: first the mVideoEvent message is posted, then mVideoLagEvent; and if the stream had previously finished (AT_EOS), it seeks back to position 0 before playback begins.

The next three subsections cover how mAudioSink is passed in, postVideoEvent_l and postVideoLagEvent_l.

(1) How mAudioSink is passed in

Back in MediaPlayerService, where everything starts, the setDataSource path contains the following code:

sp<MediaPlayerBase> MediaPlayerService::Client::setDataSource_pre(
        player_type playerType)
{
    ALOGV("player type = %d", playerType);

    // create the right type of player
    sp<MediaPlayerBase> p = createPlayer(playerType);
    if (p == NULL) {
        return p;
    }

    if (!p->hardwareOutput()) {
        mAudioOutput = new AudioOutput(mAudioSessionId);
        static_cast<MediaPlayerInterface *>(p.get())->setAudioSink(mAudioOutput);
    }

    return p;
}

Here an AudioOutput object is constructed and passed in as mAudioSink.

(2)postVideoEvent

The code is as follows:

void AwesomePlayer::postVideoEvent_l(int64_t delayUs) {
    ATRACE_CALL();

    if (mVideoEventPending) {
        return;
    }

    mVideoEventPending = true;
    mQueue.postEventWithDelay(mVideoEvent, delayUs < 0 ? 10000 : delayUs);
}

This just posts the mVideoEvent event, whose handler is AwesomePlayer::onVideoEvent.

void AwesomePlayer::onVideoEvent() {
    ATRACE_CALL();
    Mutex::Autolock autoLock(mLock);
    if (!mVideoEventPending) {
        // The event has been cancelled in reset_l() but had already
        // been scheduled for execution at that time.
        return;
    }
    mVideoEventPending = false;

    if (mSeeking != NO_SEEK) {
        if (mVideoBuffer) {
            mVideoBuffer->release();
            mVideoBuffer = NULL;
        }

        if (mSeeking == SEEK && isStreamingHTTP() && mAudioSource != NULL
                && !(mFlags & SEEK_PREVIEW)) {
            // We're going to seek the video source first, followed by
            // the audio source.
            // In order to avoid jumps in the DataSource offset caused by
            // the audio codec prefetching data from the old locations
            // while the video codec is already reading data from the new
            // locations, we'll "pause" the audio source, causing it to
            // stop reading input data until a subsequent seek.

            if (mAudioPlayer != NULL && (mFlags & AUDIO_RUNNING)) {
                mAudioPlayer->pause();

                modifyFlags(AUDIO_RUNNING, CLEAR);
            }
            mAudioSource->pause();
        }
    }

First it checks whether a seek is needed. If so, the audio is paused so that the video seek completes first; the audio is seeked afterwards.

    if (!mVideoBuffer) {
        MediaSource::ReadOptions options;
        if (mSeeking != NO_SEEK) {
            ALOGV("seeking to %lld us (%.2f secs)", mSeekTimeUs, mSeekTimeUs / 1E6);

            options.setSeekTo(
                    mSeekTimeUs,
                    mSeeking == SEEK_VIDEO_ONLY
                        ? MediaSource::ReadOptions::SEEK_NEXT_SYNC
                        : MediaSource::ReadOptions::SEEK_CLOSEST_SYNC);
        }
        for (;;) {
            status_t err = mVideoSource->read(&mVideoBuffer, &options);
            options.clearSeekTo();

            if (err != OK) {
                CHECK(mVideoBuffer == NULL);

                if (err == INFO_FORMAT_CHANGED) {
                    ALOGV("VideoSource signalled format change.");

                    notifyVideoSize_l();

                    if (mVideoRenderer != NULL) {
                        mVideoRendererIsPreview = false;
                        initRenderer_l();
                    }
                    continue;
                }

                // So video playback is complete, but we may still have
                // a seek request pending that needs to be applied
                // to the audio track.
                if (mSeeking != NO_SEEK) {
                    ALOGV("video stream ended while seeking!");
                }
                finishSeekIfNecessary(-1);

                if (mAudioPlayer != NULL
                        && !(mFlags & (AUDIO_RUNNING | SEEK_PREVIEW))) {
                    startAudioPlayer_l();
                }

                modifyFlags(VIDEO_AT_EOS, SET);
                postStreamDoneEvent_l(err);
                return;
            }

            if (mVideoBuffer->range_length() == 0) {
                // Some decoders, notably the PV AVC software decoder
                // return spurious empty buffers that we just want to ignore.

                mVideoBuffer->release();
                mVideoBuffer = NULL;
                continue;
            }

            break;
        }

        {
            Mutex::Autolock autoLock(mStatsLock);
            ++mStats.mNumVideoFramesDecoded;
        }
    }

The code above has two parts. The first checks whether a seek is needed and, if so, sets the read options.

The second part reads one frame from mVideoSource. The options are passed into the read, so if a seek is required the data read back is already the decoded data at the seek target. Nice.

There are a few details in between: if the read fails, it first checks whether the format (width/height) changed; otherwise video playback has finished, so the EOS flag is set and the mStreamDoneEvent message is posted.

    int64_t timeUs;
    CHECK(mVideoBuffer->meta_data()->findInt64(kKeyTime, &timeUs));

    mLastVideoTimeUs = timeUs;

    if (mSeeking == SEEK_VIDEO_ONLY) {
        if (mSeekTimeUs > timeUs) {
            ALOGI("XXX mSeekTimeUs = %lld us, timeUs = %lld us",
                    mSeekTimeUs, timeUs);
        }
    }

    {
        Mutex::Autolock autoLock(mMiscStateLock);
        mVideoTimeUs = timeUs;
    }

    SeekType wasSeeking = mSeeking;
    finishSeekIfNecessary(timeUs);

The code above runs once a frame has been read successfully; it extracts the frame's timestamp.

As said before, when there is a seek request, the audio is paused first and the seeked video data is read; once the first frame arrives, its timestamp is used as the reference to seek the audio. finishSeekIfNecessary does exactly that; it is simple enough for the reader to go through.

    if (mAudioPlayer != NULL && !(mFlags & (AUDIO_RUNNING | SEEK_PREVIEW))) {
        status_t err = startAudioPlayer_l();
        if (err != OK) {
            ALOGE("Starting the audio player failed w/ err %d", err);
            return;
        }
    }

    if ((mFlags & TEXTPLAYER_INITIALIZED)
            && !(mFlags & (TEXT_RUNNING | SEEK_PREVIEW))) {
        mTextDriver->start();
        modifyFlags(TEXT_RUNNING, SET);
    }

After that, audio playback is started if there is audio, and subtitle playback if there are subtitles.

    TimeSource *ts =
        ((mFlags & AUDIO_AT_EOS) || !(mFlags & AUDIOPLAYER_STARTED))
            ? &mSystemTimeSource : mTimeSource;

    if (mFlags & FIRST_FRAME) {
        modifyFlags(FIRST_FRAME, CLEAR);
        mSinceLastDropped = 0;
        mTimeSourceDeltaUs = ts->getRealTimeUs() - timeUs;
    }

    int64_t realTimeUs, mediaTimeUs;
    if (!(mFlags & AUDIO_AT_EOS) && mAudioPlayer != NULL
            && mAudioPlayer->getMediaTimeMapping(&realTimeUs, &mediaTimeUs)) {
        mTimeSourceDeltaUs = realTimeUs - mediaTimeUs;
    }

    if (wasSeeking == SEEK_VIDEO_ONLY) {
        int64_t nowUs = ts->getRealTimeUs() - mTimeSourceDeltaUs;

        int64_t latenessUs = nowUs - timeUs;

        ATRACE_INT("Video Lateness (ms)", latenessUs / 1E3);

        if (latenessUs > 0) {
            ALOGI("after SEEK_VIDEO_ONLY we're late by %.2f secs", latenessUs / 1E6);
        }
    }

The code above updates the timing information. First the time source is chosen: the system clock or the audio clock.

For the first frame: mTimeSourceDeltaUs = ts->getRealTimeUs() - timeUs;

For subsequent frames: mTimeSourceDeltaUs = realTimeUs - mediaTimeUs;

Let's first explain what these variables and calls mean:

ts->getRealTimeUs(): the actual elapsed time, computed from how many audio frames have been played.

timeUs: the timestamp of the next video frame.

realTimeUs = mPositionTimeRealUs: information obtained from mAudioPlayer (when audio exists); the time of the current playback position.

mPositionTimeMediaUs: the timestamp of the next audio packet.

From these, the current playback position can be computed. The reader may want to think through how this mechanism still works when the first audio or video packet does not start at 0.
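A worked example of the mapping, using hypothetical helper functions (mediaNowUs and latenessUs are invented names, not AwesomePlayer methods): suppose the audio mapping reports realTimeUs = 5,000,000 and mediaTimeUs = 2,000,000, so the delta is 3,000,000; when the reference clock reads 5,100,000 the media position is 2,100,000, and a frame stamped 2,060,000 is 40 ms late.

```cpp
#include <cassert>
#include <cstdint>

// The delta anchors the media timeline to the reference clock,
// so "now" in media time is (clock reading) - delta.
int64_t mediaNowUs(int64_t clockRealTimeUs, int64_t deltaUs) {
    return clockRealTimeUs - deltaUs;
}

// Positive lateness: the frame should already have been shown.
int64_t latenessUs(int64_t clockRealTimeUs, int64_t deltaUs,
                   int64_t frameTimeUs) {
    return mediaNowUs(clockRealTimeUs, deltaUs) - frameTimeUs;
}
```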

mAudioPlayer will be analyzed carefully in its own article later.

Ignore the wasSeeking == SEEK_VIDEO_ONLY case for now and continue.

    if (wasSeeking == NO_SEEK) {
        // Let's display the first frame after seeking right away.

        int64_t nowUs = ts->getRealTimeUs() - mTimeSourceDeltaUs;

        int64_t latenessUs = nowUs - timeUs;

        ATRACE_INT("Video Lateness (ms)", latenessUs / 1E3);

        if (latenessUs > 500000ll
                && mAudioPlayer != NULL
                && mAudioPlayer->getMediaTimeMapping(
                    &realTimeUs, &mediaTimeUs)) {
            if (mWVMExtractor == NULL) {
                ALOGI("we're much too late (%.2f secs), video skipping ahead",
                        latenessUs / 1E6);

                mVideoBuffer->release();
                mVideoBuffer = NULL;

                mSeeking = SEEK_VIDEO_ONLY;
                mSeekTimeUs = mediaTimeUs;

                postVideoEvent_l();
                return;
            } else {
                // The widevine extractor doesn't deal well with seeking
                // audio and video independently. We'll just have to wait
                // until the decoder catches up, which won't be long at all.
                ALOGI("we're very late (%.2f secs)", latenessUs / 1E6);
            }
        }

        if (latenessUs > 40000) {
            // We're more than 40ms late.
            ALOGV("we're late by %lld us (%.2f secs)",
                    latenessUs, latenessUs / 1E6);

            if (!(mFlags & SLOW_DECODER_HACK)
                    || mSinceLastDropped > FRAME_DROP_FREQ)
            {
                ALOGV("we're late by %lld us (%.2f secs) dropping "
                        "one after %d frames",
                        latenessUs, latenessUs / 1E6, mSinceLastDropped);

                mSinceLastDropped = 0;
                mVideoBuffer->release();
                mVideoBuffer = NULL;

                {
                    Mutex::Autolock autoLock(mStatsLock);
                    ++mStats.mNumVideoFramesDropped;
                }

                postVideoEvent_l();
                return;
            }
        }

        if (latenessUs < -10000) {
            // We're more than 10ms early.
            postVideoEvent_l(10000);
            return;
        }
    }

The code above must be read together with the earlier time handling. Once mTimeSourceDeltaUs has been computed, playback information can be derived, such as:

the current playback position: int64_t nowUs = ts->getRealTimeUs() - mTimeSourceDeltaUs;

the playback lateness: int64_t latenessUs = nowUs - timeUs; (here timeUs is the timestamp of the next frame)

Next comes the handling of excessive lateness. The comparison reference is the audio or the system clock, i.e. this is the A/V sync handling for when video drifts too far from the audio or from the system clock:

if video is more than 500000 us late, seek to the corresponding position; if more than 40000 us late, drop the frame; if more than 10 ms early relative to the reference clock, delay the next mVideoEvent via postVideoEvent_l(10000).
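These three thresholds can be condensed into one decision function. This is a simplified sketch (the enum and function are invented for illustration): it ignores the SLOW_DECODER_HACK frame-drop throttle and the widevine special case:

```cpp
#include <cassert>
#include <cstdint>

enum class VideoAction { SeekAhead, DropFrame, WaitAndRetry, Render };

// The three lateness thresholds from onVideoEvent, as one decision.
VideoAction decide(int64_t latenessUs) {
    if (latenessUs > 500000) return VideoAction::SeekAhead;    // > 0.5 s late
    if (latenessUs > 40000)  return VideoAction::DropFrame;    // > 40 ms late
    if (latenessUs < -10000) return VideoAction::WaitAndRetry; // > 10 ms early
    return VideoAction::Render;                                // on schedule
}
```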

    if ((mNativeWindow != NULL)
            && (mVideoRendererIsPreview || mVideoRenderer == NULL)) {
        mVideoRendererIsPreview = false;

        initRenderer_l();
    }

    if (mVideoRenderer != NULL) {
        mSinceLastDropped++;
        mVideoRenderer->render(mVideoBuffer);
        if (!mVideoRenderingStarted) {
            mVideoRenderingStarted = true;
            notifyListener_l(MEDIA_INFO, MEDIA_INFO_RENDERING_START);
        }
    }

    mVideoBuffer->release();
    mVideoBuffer = NULL;

    if (wasSeeking != NO_SEEK && (mFlags & SEEK_PREVIEW)) {
        modifyFlags(SEEK_PREVIEW, CLEAR);
        return;
    }

    postVideoEvent_l();
}

Finally the frame is displayed: when playback is on schedule, the frame is rendered and the next mVideoEvent is triggered via postVideoEvent_l().

By now it should be clear that AwesomePlayer's playback drive mechanism is the recursive re-posting of postVideoEvent_l().

And because postVideoEvent_l() posts the message with a delay, it does not block either.

(3) postVideoLagEvent

Let's look at this event's handler:

void AwesomePlayer::onVideoLagUpdate() {
    Mutex::Autolock autoLock(mLock);
    if (!mVideoLagEventPending) {
        return;
    }
    mVideoLagEventPending = false;

    int64_t audioTimeUs = mAudioPlayer->getMediaTimeUs();
    int64_t videoLateByUs = audioTimeUs - mVideoTimeUs;

    if (!(mFlags & VIDEO_AT_EOS) && videoLateByUs > 300000ll) {
        ALOGV("video late by %lld ms.", videoLateByUs / 1000ll);

        notifyListener_l(
                MEDIA_INFO,
                MEDIA_INFO_VIDEO_TRACK_LAGGING,
                videoLateByUs / 1000ll);
    }

    postVideoLagEvent_l();
}

This event exists mainly to report status. Looking at the definition of the constant:

// The video is too complex for the decoder: it can't decode frames fast
// enough. Possibly only the audio plays fine at this stage.
MEDIA_INFO_VIDEO_TRACK_LAGGING = 700,

we can see that when video decoding cannot keep up, the upper layer is notified that the decoder is falling behind.
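The condition can be isolated in a small illustrative check mirroring onVideoLagUpdate(): report MEDIA_INFO_VIDEO_TRACK_LAGGING (700) when video trails the audio clock by more than 300 ms and video has not reached end of stream.

```cpp
#include <cassert>
#include <cstdint>

// Value of MEDIA_INFO_VIDEO_TRACK_LAGGING, as quoted in the text above.
constexpr int kMediaInfoVideoTrackLagging = 700;

// Returns true when the lagging notification should be sent.
bool shouldReportLag(int64_t audioTimeUs, int64_t videoTimeUs, bool videoAtEos) {
    const int64_t videoLateByUs = audioTimeUs - videoTimeUs;
    return !videoAtEos && videoLateByUs > 300000;
}
```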

4. Analysis of other events

At this point the main playback flow has been covered, but a few events have not been discussed yet; this section outlines them.

We listed all the events earlier; here they are again:

sp<TimedEventQueue::Event> mVideoEvent = new AwesomeEvent(this, &AwesomePlayer::onVideoEvent);
sp<TimedEventQueue::Event> mStreamDoneEvent = new AwesomeEvent(this, &AwesomePlayer::onStreamDone);

sp<TimedEventQueue::Event> mBufferingEvent = new AwesomeEvent(this, &AwesomePlayer::onBufferingUpdate);
sp<TimedEventQueue::Event> mCheckAudioStatusEvent = new AwesomeEvent(this, &AwesomePlayer::onCheckAudioStatus);
sp<TimedEventQueue::Event> mVideoLagEvent = new AwesomeEvent(this, &AwesomePlayer::onVideoLagUpdate);

sp<TimedEventQueue::Event> mAsyncPrepareEvent = new AwesomeEvent(this, &AwesomePlayer::onPrepareAsyncEvent);

We have already analyzed mAsyncPrepareEvent, mVideoLagEvent and mVideoEvent; the remaining events are covered below.

(1) mStreamDoneEvent

This event is posted when video playback finishes, i.e. in onVideoEvent when reading frame data fails:

void AwesomePlayer::onStreamDone() {
    // Posted whenever any stream finishes playing.
    ATRACE_CALL();

    Mutex::Autolock autoLock(mLock);
    if (!mStreamDoneEventPending) {
        return;
    }
    mStreamDoneEventPending = false;

    if (mStreamDoneStatus != ERROR_END_OF_STREAM) {
        ALOGV("MEDIA_ERROR %d", mStreamDoneStatus);

        notifyListener_l(
                MEDIA_ERROR, MEDIA_ERROR_UNKNOWN, mStreamDoneStatus);

        pause_l(true /* at eos */);
        modifyFlags(AT_EOS, SET);
        return;
    }

    const bool allDone =
        (mVideoSource == NULL || (mFlags & VIDEO_AT_EOS))
            && (mAudioSource == NULL || (mFlags & AUDIO_AT_EOS));

    if (!allDone) {
        return;
    }

    if ((mFlags & LOOPING)
            || ((mFlags & AUTO_LOOPING)
                && (mAudioSink == NULL || mAudioSink->realtime()))) {
        // Don't AUTO_LOOP if we're being recorded, since that cannot be
        // turned off and recording would go on indefinitely.
        seekTo_l(0);

        if (mVideoSource != NULL) {
            postVideoEvent_l();
        }
    } else {
        ALOGV("MEDIA_PLAYBACK_COMPLETE");
        notifyListener_l(MEDIA_PLAYBACK_COMPLETE);

        pause_l(true /* at eos */);
        modifyFlags(AT_EOS, SET);
    }
}

Its main responsibilities are:

Determine whether playback has really finished (both audio and video at EOS).

If playback has finished and looping is requested, call seekTo_l(0) and continue playing.

Otherwise, notify the upper layer that playback has ended by sending MEDIA_PLAYBACK_COMPLETE to the caller.
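The decision can be condensed into a small model of onStreamDone(): only when both streams have hit EOS does the player either loop back to the start or report MEDIA_PLAYBACK_COMPLETE; otherwise it keeps waiting for the other stream.

```cpp
#include <cassert>

// Simplified outcome of onStreamDone(); loop conditions (LOOPING vs
// AUTO_LOOPING with a realtime sink) are collapsed into one flag here.
enum class EosAction { Wait, LoopFromStart, NotifyComplete };

EosAction decideAtEos(bool videoAtEos, bool audioAtEos, bool looping) {
    const bool allDone = videoAtEos && audioAtEos;
    if (!allDone) return EosAction::Wait;  // the other stream is still playing
    return looping ? EosAction::LoopFromStart : EosAction::NotifyComplete;
}
```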

(2) mBufferingEvent

AwesomePlayer triggers this event by calling postBufferingEvent_l.

Its purpose is to cache data.

It is called from:

void AwesomePlayer::onPrepareAsyncEvent() {
    *****************
    if (isStreamingHTTP()) {
        postBufferingEvent_l();
    } else {
        finishAsyncPrepare_l();
    }
}

When the source is a network stream, some data is buffered first. Here is the implementation:

void AwesomePlayer::postBufferingEvent_l() {
    if (mBufferingEventPending) {
        return;
    }
    mBufferingEventPending = true;
    mQueue.postEventWithDelay(mBufferingEvent, 1000000ll);
}

It first sets the mBufferingEventPending flag, then posts the event with a one-second delay.

Rather than quoting more code, here is the principle: when data needs to be cached, onPrepareAsyncEvent returns right after calling postBufferingEvent_l. Since the decoders have already started, i.e. the data path is established, the read-and-decode loop keeps running in the background. The onBufferingUpdate handler first pauses the output, waits until enough data has been cached, and then calls finishAsyncPrepare_l to complete the prepareAsync operation.
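A hypothetical watermark check in the spirit of onBufferingUpdate() is sketched below: while the cache is below a target, stay paused and re-post mBufferingEvent; once enough is cached, either finish the pending prepareAsync or resume playback. The threshold parameter is illustrative, not an AOSP constant.

```cpp
#include <cassert>
#include <cstdint>

// The three possible outcomes of one buffering check.
enum class BufferStep { KeepBuffering, FinishPrepare, Resume };

BufferStep bufferingStep(int64_t cachedDurationUs,
                         int64_t targetDurationUs,
                         bool prepareAlreadyFinished) {
    if (cachedDurationUs < targetDurationUs) {
        return BufferStep::KeepBuffering;  // re-post mBufferingEvent in 1 s
    }
    // Enough cached: finish prepareAsync on the first pass, resume afterwards.
    return prepareAlreadyFinished ? BufferStep::Resume : BufferStep::FinishPrepare;
}
```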

(3) mCheckAudioStatusEvent

This event is triggered along two call chains:

startAudioPlayer_l -> postAudioSeekComplete -> postCheckAudioStatusEvent

fillBuffer (in mAudioPlayer) -> postAudioEOS -> postCheckAudioStatusEvent

Its purpose is to query the audio status.

The code is as follows:

void AwesomePlayer::postCheckAudioStatusEvent(int64_t delayUs) {
    Mutex::Autolock autoLock(mAudioLock);
    if (mAudioStatusEventPending) {
        return;
    }
    mAudioStatusEventPending = true;
    // Do not honor delay when looping in order to limit audio gap
    if (mFlags & (LOOPING | AUTO_LOOPING)) {
        delayUs = 0;
    }
    mQueue.postEventWithDelay(mCheckAudioStatusEvent, delayUs);
}

And the handler itself:

void AwesomePlayer::onCheckAudioStatus() {
    {
        Mutex::Autolock autoLock(mAudioLock);
        if (!mAudioStatusEventPending) {
            // Event was dispatched and while we were blocking on the mutex,
            // has already been cancelled.
            return;
        }
        mAudioStatusEventPending = false;
    }

    Mutex::Autolock autoLock(mLock);

    if (mWatchForAudioSeekComplete && !mAudioPlayer->isSeeking()) {
        mWatchForAudioSeekComplete = false;

        if (!mSeekNotificationSent) {
            notifyListener_l(MEDIA_SEEK_COMPLETE);
            mSeekNotificationSent = true;
        }

        mSeeking = NO_SEEK;
    }

    status_t finalStatus;
    if (mWatchForAudioEOS && mAudioPlayer->reachedEOS(&finalStatus)) {
        mWatchForAudioEOS = false;
        modifyFlags(AUDIO_AT_EOS, SET);
        modifyFlags(FIRST_FRAME, SET);
        postStreamDoneEvent_l(finalStatus);
    }
}


This handler serves two purposes: it detects when an audio seek has completed, and it detects when audio playback has reached the end. As with video, reaching the end triggers postStreamDoneEvent_l.

[End]