【Android】【Multimedia】The StagefrightPlayer framework
Overview
Reading the StagefrightPlayer code shows that StagefrightPlayer is a wrapper around AwesomePlayer; the actual work is all done by AwesomePlayer.
A typical player framework consists of the following parts:
stream: the stream type, typically a local file, a network stream, etc.
demuxer: the demultiplexing module. It parses the data to be played to obtain basic information such as the audio and video parameters, and splits the audio and video data into packets with complete boundaries for the downstream modules.
decoder: the decoding module. It takes the raw audio/video packets from the demuxer and decodes them into playable PCM and displayable YUV (or RGB) data.
render: the output module, which plays PCM through the sound card and draws YUV (or RGB) frames onto the display surface.
other parts: mainly player mechanisms such as A/V synchronization, audio-track switching, pause/resume, seek, etc.
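The division of labour above can be sketched with a few toy C++ types (hypothetical names such as Demuxer, Decoder, and Renderer; a minimal pull-based sketch, not the Android classes):

```cpp
#include <cassert>
#include <cstddef>
#include <string>
#include <utility>
#include <vector>

// Toy stand-ins: a Packet is compressed data with complete boundaries,
// a Frame is decoded (PCM/YUV) data ready for output.
using Packet = std::string;
using Frame = std::string;

// demuxer: hands out one complete packet per read().
class Demuxer {
public:
    explicit Demuxer(std::vector<Packet> packets) : packets_(std::move(packets)) {}
    bool read(Packet* out) {
        if (next_ >= packets_.size()) return false;
        *out = packets_[next_++];
        return true;
    }
private:
    std::vector<Packet> packets_;
    std::size_t next_ = 0;
};

// decoder: a real one would emit PCM or YUV; here we only tag the packet.
class Decoder {
public:
    Frame decode(const Packet& p) { return "decoded:" + p; }
};

// render: consumes decoded frames (audio to the sound card, video to the screen).
class Renderer {
public:
    void render(const Frame& f) { rendered_.push_back(f); }
    std::size_t frameCount() const { return rendered_.size(); }
private:
    std::vector<Frame> rendered_;
};

// The player loop pulls data through the chain until the stream ends.
std::size_t playAll(Demuxer& demux, Decoder& dec, Renderer& out) {
    Packet p;
    while (demux.read(&p)) out.render(dec.decode(p));
    return out.frameCount();
}
```

The key point is the pull model: each downstream stage asks the upstream one for data, which is exactly how AwesomePlayer reads from its sources.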
As a media player, StagefrightPlayer naturally contains all of these basic modules; this article analyzes the structure of StagefrightPlayer at a high level.
It covers the following topics:
1. The overall structure of AwesomePlayer
2. How AwesomePlayer works
3. AwesomePlayer code analysis
4. Analysis of other events
The details follow.
1. The overall structure of AwesomePlayer
The overall structure is shown in the figure below.
A few brief notes first; later articles will analyze each module in detail.
Data structures
mExtractor -- MediaExtractor
mAudioTrack & mVideoTrack -- MediaSource
Notes:
AwesomePlayer uses the mExtractor, mAudioTrack, and mVideoTrack members to parse and read the media data.
For each container format there is a corresponding MediaExtractor subclass; for example, the extractor for AVI files is AVIExtractor. The constructor parses the file and extracts the key information: the number of streams, the audio stream's sample rate, channel count, and bit depth, and the video stream's width, height, and frame rate.
Each stream has its own MediaSource (AVISource in the AVI case). MediaSource provides state interfaces (start/stop), a data-reading interface (read), and a parameter-query interface (getFormat); each call to read can be thought of as returning one packet.
In general, since MediaExtractor does the parsing, a MediaSource's read implementation obtains the offset and size through the MediaExtractor's interfaces; the MediaSource itself only reads the data and takes no part in parsing.
mAudioTrack and mVideoTrack are the audio and video streams selected from the media file; the code shows that the rule is simply to pick the first video stream and the first audio stream.
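The extractor/source split can be illustrated with a toy sketch (hypothetical TinyExtractor/TinySource classes standing in for AVIExtractor/AVISource; parsing happens once in the extractor's constructor, and the source's read() only fetches one bounded packet):

```cpp
#include <cassert>
#include <cstddef>
#include <string>
#include <utility>
#include <vector>

// One entry of the index the extractor builds while parsing.
struct SampleInfo { std::size_t offset; std::size_t size; };

class TinyExtractor {
public:
    explicit TinyExtractor(std::string file) : file_(std::move(file)) {
        // "Parsing" in the constructor: pretend every 4 bytes is one sample.
        for (std::size_t off = 0; off + 4 <= file_.size(); off += 4)
            index_.push_back({off, 4});
    }
    std::size_t countSamples() const { return index_.size(); }
    SampleInfo sampleAt(std::size_t i) const { return index_[i]; }
    std::string readAt(std::size_t offset, std::size_t size) const {
        return file_.substr(offset, size);
    }
private:
    std::string file_;
    std::vector<SampleInfo> index_;
};

// The source does no parsing of its own: each read() asks the extractor
// where the next sample lives and returns exactly one bounded packet.
class TinySource {
public:
    explicit TinySource(const TinyExtractor* ex) : ex_(ex) {}
    bool read(std::string* packet) {
        if (next_ >= ex_->countSamples()) return false;
        SampleInfo s = ex_->sampleAt(next_++);
        *packet = ex_->readAt(s.offset, s.size);
        return true;
    }
private:
    const TinyExtractor* ex_;
    std::size_t next_ = 0;
};
```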
mAudioSource & mVideoSource -- MediaSource
Notes:
mAudioSource and mVideoSource can be seen as the bridge between AwesomePlayer and the decoder: AwesomePlayer pulls data from them for playback, while the decoder decodes and fills them with data.
AwesomePlayer communicates with the decoder through its OMXClient mClient member; the OMX decoder runs as a service, and each AwesomePlayer object holds its own client to talk to it.
mAudioPlayer -- AudioPlayer
Notes:
mAudioPlayer is the module responsible for audio output. The wrapping chain is mAudioPlayer -> mAudioSink -> mAudioTrack (note that this mAudioTrack differs from the one above: here it is an AudioTrack object).
The audio is actually played by the mAudioTrack/AudioFlinger pair.
mVideoRenderer -- AwesomeRenderer
Notes:
mVideoRenderer is the module responsible for video display. The wrapping chain is mVideoRenderer -> mTarget (SoftwareRenderer) -> mNativeWindow. [Note] This part is not fully understood yet.
The final display relies on SurfaceFlinger.
2. How AwesomePlayer works
The previous section introduced AwesomePlayer's main members; this section describes the mechanism AwesomePlayer uses to drive them to work together.
This brings us to Android's TimedEventQueue class, a message-processing class that was introduced in a previous article. [Reference: TimedEventQueue]
Each event provides a fire function that performs the corresponding operation, and the whole playback process is driven by recursively re-posting the mVideoEvent event to step through the media file.
Note: see the code analysis below for details.
AwesomePlayer mainly uses the following events:
sp<TimedEventQueue::Event> mVideoEvent = new AwesomeEvent(this, &AwesomePlayer::onVideoEvent); --- displays one frame and handles A/V synchronization
sp<TimedEventQueue::Event> mStreamDoneEvent = new AwesomeEvent(this, &AwesomePlayer::onStreamDone); -- handles the end of playback
sp<TimedEventQueue::Event> mBufferingEvent = new AwesomeEvent(this, &AwesomePlayer::onBufferingUpdate); -- caches data
sp<TimedEventQueue::Event> mCheckAudioStatusEvent = new AwesomeEvent(this, &AwesomePlayer::onCheckAudioStatus); -- monitors the audio state: seek and end of playback
sp<TimedEventQueue::Event> mVideoLagEvent = new AwesomeEvent(this, &AwesomePlayer::onVideoLagUpdate); -- monitors video decoding performance
sp<TimedEventQueue::Event> mAsyncPrepareEvent = new AwesomeEvent(this, &AwesomePlayer::onPrepareAsyncEvent); -- performs the prepareAsync work
The concrete workflow appears in the code analysis below.
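As a rough model of how these events drive playback, here is a minimal sketch of the TimedEventQueue idea (hypothetical TinyEventQueue; the real class runs events on its own thread, while this stand-in drains them synchronously in time order):

```cpp
#include <cassert>
#include <functional>
#include <queue>
#include <string>
#include <vector>

// An event carries its firing time and a fire() callback.
struct Event {
    long long fireTimeUs;
    std::function<void()> fire;
};

// Comparator so the priority queue pops the earliest deadline first.
struct Later {
    bool operator()(const Event& a, const Event& b) const {
        return a.fireTimeUs > b.fireTimeUs;
    }
};

class TinyEventQueue {
public:
    void postEventWithDelay(std::function<void()> fire, long long delayUs) {
        pending_.push(Event{nowUs_ + delayUs, std::move(fire)});
    }
    // Synchronous stand-in for the real event-loop thread; handles events
    // that re-post themselves, since pending_ is re-checked every loop.
    void runUntilEmpty() {
        while (!pending_.empty()) {
            Event e = pending_.top();
            pending_.pop();
            nowUs_ = e.fireTimeUs;  // pretend time has advanced
            e.fire();
        }
    }
private:
    long long nowUs_ = 0;
    std::priority_queue<Event, std::vector<Event>, Later> pending_;
};
```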
3. AwesomePlayer code analysis
The following walks through the actual code to deepen the understanding of AwesomePlayer's structure and flow.
We follow the actual call sequence used to play a media file:
StagefrightPlayer *player = new StagefrightPlayer();
player->setDataSource(...);
player->prepareAsync();
player->start();
Each call is described in detail below. [Note] The walkthrough is meant to illustrate the flow and is not rigorous.
3.1 Constructor
StagefrightPlayer::StagefrightPlayer()
    : mPlayer(new AwesomePlayer) {
    ALOGV("StagefrightPlayer");
    mPlayer->setListener(this);
}
StagefrightPlayer is a wrapper around AwesomePlayer, so let's look mainly at AwesomePlayer's constructor.
AwesomePlayer::AwesomePlayer()
    : mQueueStarted(false),
      mUIDValid(false),
      mTimeSource(NULL),
      mVideoRenderingStarted(false),
      mVideoRendererIsPreview(false),
      mAudioPlayer(NULL),
      mDisplayWidth(0),
      mDisplayHeight(0),
      mVideoScalingMode(NATIVE_WINDOW_SCALING_MODE_SCALE_TO_WINDOW),
      mFlags(0),
      mExtractorFlags(0),
      mVideoBuffer(NULL),
      mDecryptHandle(NULL),
      mLastVideoTimeUs(-1),
      mTextDriver(NULL) {
    CHECK_EQ(mClient.connect(), (status_t)OK);
    DataSource::RegisterDefaultSniffers();

    mVideoEvent = new AwesomeEvent(this, &AwesomePlayer::onVideoEvent);
    mVideoEventPending = false;
    mStreamDoneEvent = new AwesomeEvent(this, &AwesomePlayer::onStreamDone);
    mStreamDoneEventPending = false;
    mBufferingEvent = new AwesomeEvent(this, &AwesomePlayer::onBufferingUpdate);
    mBufferingEventPending = false;
    mVideoLagEvent = new AwesomeEvent(this, &AwesomePlayer::onVideoLagUpdate);
    mVideoLagEventPending = false;
    mCheckAudioStatusEvent = new AwesomeEvent(
            this, &AwesomePlayer::onCheckAudioStatus);
    mAudioStatusEventPending = false;

    reset();
}
The constructor mainly does preparatory work: creating the event objects, providing their fire functions, and initializing the corresponding state variables.
Also note that mClient.connect() establishes the connection between AwesomePlayer and the OMX decoder; this will be covered in detail when the decoder module is introduced.
3.2 setDataSource
Note: we use a local file as the example, so the argument passed in is a file descriptor.
The code is as follows:
status_t StagefrightPlayer::setDataSource(int fd, int64_t offset, int64_t length) {
    ALOGV("setDataSource(%d, %lld, %lld)", fd, offset, length);
    return mPlayer->setDataSource(dup(fd), offset, length);
}
Going one level deeper into AwesomePlayer:
status_t AwesomePlayer::setDataSource(
        int fd, int64_t offset, int64_t length) {
    Mutex::Autolock autoLock(mLock);

    reset_l();

    sp<DataSource> dataSource = new FileSource(fd, offset, length);

    status_t err = dataSource->initCheck();
    if (err != OK) {
        return err;
    }

    mFileSource = dataSource;

    {
        Mutex::Autolock autoLock(mStatsLock);
        mStats.mFd = fd;
        mStats.mURI = String8();
    }

    return setDataSource_l(dataSource);
}
First a DataSource is created for the file, here a FileSource, which provides read and seek access to the actual file.
dataSource->initCheck() validates the source; here it checks whether fd is valid.
The main work happens in setDataSource_l(dataSource); let's follow the code.
status_t AwesomePlayer::setDataSource_l(
        const sp<DataSource> &dataSource) {
    sp<MediaExtractor> extractor = MediaExtractor::Create(dataSource);

    if (extractor == NULL) {
        return UNKNOWN_ERROR;
    }

    if (extractor->getDrmFlag()) {
        checkDrmStatus(dataSource);
    }

    return setDataSource_l(extractor);
}
A MediaExtractor matching the given dataSource is created here. With a dataSource we can read from the file, sniff the file header to determine the concrete container format, and then create the corresponding MediaExtractor.
As mentioned earlier, the file is parsed when the MediaExtractor is constructed (the number of streams, the details of each stream, and so on), so that information can be used directly from here on.
We will not go deeper here; MediaExtractor will be covered in its own article.
Next comes setDataSource_l(extractor). The code is long, so we look at it in pieces.
status_t AwesomePlayer::setDataSource_l(const sp<MediaExtractor> &extractor) {
    // Attempt to approximate overall stream bitrate by summing all
    // tracks' individual bitrates, if not all of them advertise bitrate,
    // we have to fail.
    int64_t totalBitRate = 0;

    mExtractor = extractor;
    for (size_t i = 0; i < extractor->countTracks(); ++i) {
        sp<MetaData> meta = extractor->getTrackMetaData(i);

        int32_t bitrate;
        if (!meta->findInt32(kKeyBitRate, &bitrate)) {
            const char *mime;
            CHECK(meta->findCString(kKeyMIMEType, &mime));
            ALOGV("track of type '%s' does not publish bitrate", mime);

            totalBitRate = -1;
            break;
        }

        totalBitRate += bitrate;
    }

    mBitrate = totalBitRate;
As mentioned above, once the MediaExtractor is built we already have the stream information.
The number of streams comes from extractor->countTracks(), and each stream's information is stored in a MetaData obtained via extractor->getTrackMetaData(i).
The code above sums the bitrate of each stream to estimate the overall bitrate of the file (if any stream does not advertise a bitrate, totalBitRate is set to -1); the result is stored in the mBitrate member.
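The summing rule can be reduced to a small sketch (hypothetical helper; a bitrate of 0 stands in for a track whose MetaData lacks kKeyBitRate):

```cpp
#include <cassert>
#include <vector>

// Sum every track's bitrate, but give up (return -1) as soon as one
// track does not advertise one, mirroring the loop above.
long long totalBitRate(const std::vector<int>& trackBitrates) {
    long long total = 0;
    for (int b : trackBitrates) {
        if (b <= 0) return -1;  // one silent track spoils the estimate
        total += b;
    }
    return total;
}
```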
Next comes the selection of the video and audio streams to play.
bool haveAudio = false;
bool haveVideo = false;
for (size_t i = 0; i < extractor->countTracks(); ++i) {
    sp<MetaData> meta = extractor->getTrackMetaData(i);

    const char *_mime;
    CHECK(meta->findCString(kKeyMIMEType, &_mime));

    String8 mime = String8(_mime);
For each stream, the kKeyMIMEType parameter is read first; it identifies whether the stream is audio, video, or subtitle. Let's look at the concrete handling.
    if (!haveVideo && !strncasecmp(mime.string(), "video/", 6)) {
        setVideoSource(extractor->getTrack(i));
        haveVideo = true;

        // Set the presentation/display size
        int32_t displayWidth, displayHeight;
        bool success = meta->findInt32(kKeyDisplayWidth, &displayWidth);
        if (success) {
            success = meta->findInt32(kKeyDisplayHeight, &displayHeight);
        }
        if (success) {
            mDisplayWidth = displayWidth;
            mDisplayHeight = displayHeight;
        }

        {
            Mutex::Autolock autoLock(mStatsLock);
            mStats.mVideoTrackIndex = mStats.mTracks.size();
            mStats.mTracks.push();
            TrackStat *stat =
                &mStats.mTracks.editItemAt(mStats.mVideoTrackIndex);
            stat->mMIME = mime.string();
        }
    }
The code above handles the video case: haveVideo is set to true and the relevant parameters, mainly the display width and height, are read.
The track information is then saved into mStats.mTracks.
    else if (!haveAudio && !strncasecmp(mime.string(), "audio/", 6)) {
        setAudioSource(extractor->getTrack(i));
        haveAudio = true;
        mActiveAudioTrackIndex = i;

        {
            Mutex::Autolock autoLock(mStatsLock);
            mStats.mAudioTrackIndex = mStats.mTracks.size();
            mStats.mTracks.push();
            TrackStat *stat =
                &mStats.mTracks.editItemAt(mStats.mAudioTrackIndex);
            stat->mMIME = mime.string();
        }

        if (!strcasecmp(mime.string(), MEDIA_MIMETYPE_AUDIO_VORBIS)) {
            // Only do this for vorbis audio, none of the other audio
            // formats even support this ringtone specific hack and
            // retrieving the metadata on some extractors may turn out
            // to be very expensive.
            sp<MetaData> fileMeta = extractor->getMetaData();
            int32_t loop;
            if (fileMeta != NULL
                    && fileMeta->findInt32(kKeyAutoLoop, &loop) && loop != 0) {
                modifyFlags(AUTO_LOOPING, SET);
            }
        }
    }
The code above handles the audio case: the parameters are saved in the same way, haveAudio is set, and the track is recorded in mStats.mTracks.
Vorbis gets special treatment here (an auto-loop ringtone hack); we won't dig into it.
    else if (!strcasecmp(mime.string(), MEDIA_MIMETYPE_TEXT_3GPP)) {
        addTextSource_l(i, extractor->getTrack(i));
    }
After that comes the subtitle case. Subtitle support in Android is still incomplete; it will be covered systematically once it matures.
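The whole selection loop boils down to "first video/ track, first audio/ track". A minimal sketch of that rule (hypothetical selectTracks helper over a list of MIME strings):

```cpp
#include <cassert>
#include <string>
#include <vector>

// Indices of the selected tracks; -1 means "no such track found".
struct Selection { int videoIndex = -1; int audioIndex = -1; };

// Walk the tracks in order, keeping the FIRST one whose MIME type starts
// with "video/" and the FIRST one starting with "audio/", mirroring the
// strncasecmp(mime, "video/", 6) checks above.
Selection selectTracks(const std::vector<std::string>& mimes) {
    Selection sel;
    for (std::size_t i = 0; i < mimes.size(); ++i) {
        if (sel.videoIndex < 0 && mimes[i].compare(0, 6, "video/") == 0)
            sel.videoIndex = static_cast<int>(i);
        else if (sel.audioIndex < 0 && mimes[i].compare(0, 6, "audio/") == 0)
            sel.audioIndex = static_cast<int>(i);
    }
    return sel;
}
```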
    if (!haveAudio && !haveVideo) {
        if (mWVMExtractor != NULL) {
            return mWVMExtractor->getError();
        } else {
            return UNKNOWN_ERROR;
        }
    }

    mExtractorFlags = extractor->flags();

    return OK;
}
Finally the flags are updated and the function returns.
To summarize: setDataSource's main job is to parse the file header and obtain the basic parameters of each stream in the media file.
3.3 prepareAsync
The earlier translated article [Introduction to the MediaPlayer class] explained the difference between prepare and prepareAsync: one is synchronous, the other asynchronous, but they do the same work.
The entry point:
status_t StagefrightPlayer::prepareAsync() {
    return mPlayer->prepareAsync();
}
Into AwesomePlayer:
status_t AwesomePlayer::prepareAsync() {
    ATRACE_CALL();
    Mutex::Autolock autoLock(mLock);

    if (mFlags & PREPARING) {
        return UNKNOWN_ERROR;  // async prepare already pending
    }

    mIsAsyncPrepare = true;
    return prepareAsync_l();
}
Continuing:
status_t AwesomePlayer::prepareAsync_l() {
    if (mFlags & PREPARING) {
        return UNKNOWN_ERROR;  // async prepare already pending
    }

    if (!mQueueStarted) {
        mQueue.start();
        mQueueStarted = true;
    }

    modifyFlags(PREPARING, SET);
    mAsyncPrepareEvent = new AwesomeEvent(
            this, &AwesomePlayer::onPrepareAsyncEvent);

    mQueue.postEvent(mAsyncPrepareEvent);

    return OK;
}
This part is important: if mQueue has not been started yet, it is started, after which it can process events.
Then the mAsyncPrepareEvent event is constructed with AwesomePlayer::onPrepareAsyncEvent as its handler, and posted via mQueue.postEvent(mAsyncPrepareEvent).
Posting the event triggers its handler; let's look at the implementation of onPrepareAsyncEvent.
void AwesomePlayer::onPrepareAsyncEvent() {
    Mutex::Autolock autoLock(mLock);

    if (mFlags & PREPARE_CANCELLED) {
        ALOGI("prepare was cancelled before doing anything");
        abortPrepare(UNKNOWN_ERROR);
        return;
    }

    if (mUri.size() > 0) {
        status_t err = finishSetDataSource_l();
        if (err != OK) {
            abortPrepare(err);
            return;
        }
    }

    if (mVideoTrack != NULL && mVideoSource == NULL) {
        status_t err = initVideoDecoder();
        if (err != OK) {
            abortPrepare(err);
            return;
        }
    }

    if (mAudioTrack != NULL && mAudioSource == NULL) {
        status_t err = initAudioDecoder();
        if (err != OK) {
            abortPrepare(err);
            return;
        }
    }

    modifyFlags(PREPARING_CONNECTED, SET);

    if (isStreamingHTTP()) {
        postBufferingEvent_l();
    } else {
        finishAsyncPrepare_l();
    }
}
This performs the following important steps:
- initVideoDecoder
- initAudioDecoder
- finishAsyncPrepare_l
Note: there is also finishSetDataSource_l. Because setDataSource was called with an fd here, mUri is never initialized and that path is not taken; its implementation shows that it does the same work as setDataSource(int fd, int64_t offset, int64_t length).
We cover the three steps in separate sub-sections.
Note: we ignore the DRM cases; since only the structure and flow matter here, some code is omitted.
(1) initVideoDecoder
status_t AwesomePlayer::initVideoDecoder(uint32_t flags) {
    ATRACE_CALL();
    ALOGV("initVideoDecoder flags=0x%x", flags);

    mVideoSource = OMXCodec::Create(
            mClient.interface(), mVideoTrack->getFormat(),
            false,  // createEncoder
            mVideoTrack,
            NULL, flags, USE_SURFACE_ALLOC ? mNativeWindow : NULL);

    if (mVideoSource != NULL) {
        int64_t durationUs;
        if (mVideoTrack->getFormat()->findInt64(kKeyDuration, &durationUs)) {
            Mutex::Autolock autoLock(mMiscStateLock);
            if (mDurationUs < 0 || durationUs > mDurationUs) {
                mDurationUs = durationUs;
            }
        }

        status_t err = mVideoSource->start();
        if (err != OK) {
            ALOGE("failed to start video source");
            mVideoSource.clear();
            return err;
        }
    }
The code above is the body of the function. It calls OMXCodec::Create to create the decoder, passing mClient as the bridge and mVideoTrack to the decoder module as its data source.
The returned MediaSource is stored in mVideoSource, so AwesomePlayer can later pull data from mVideoSource for display.
After creation, the decoder is started via mVideoSource->start().
    if (mVideoSource != NULL) {
        const char *componentName;
        CHECK(mVideoSource->getFormat()
                ->findCString(kKeyDecoderComponent, &componentName));

        {
            Mutex::Autolock autoLock(mStatsLock);
            TrackStat *stat = &mStats.mTracks.editItemAt(mStats.mVideoTrackIndex);
            stat->mDecoderName = componentName;
        }

        static const char *kPrefix = "OMX.Nvidia.";
        static const char *kSuffix = ".decode";
        static const size_t kSuffixLength = strlen(kSuffix);

        size_t componentNameLength = strlen(componentName);

        if (!strncmp(componentName, kPrefix, strlen(kPrefix))
                && componentNameLength >= kSuffixLength
                && !strcmp(&componentName[
                    componentNameLength - kSuffixLength], kSuffix)) {
            modifyFlags(SLOW_DECODER_HACK, SET);
        }
    }

    return mVideoSource != NULL ? OK : UNKNOWN_ERROR;
}
The remaining code records the decoder information in the mStats member.
(2) initAudioDecoder
The audio side mirrors the video side: create the decoder and record the information. The code is as follows:
status_t AwesomePlayer::initAudioDecoder() {
    ATRACE_CALL();

    sp<MetaData> meta = mAudioTrack->getFormat();

    const char *mime;
    CHECK(meta->findCString(kKeyMIMEType, &mime));

    if (!strcasecmp(mime, MEDIA_MIMETYPE_AUDIO_RAW)) {
        mAudioSource = mAudioTrack;
    } else {
        mAudioSource = OMXCodec::Create(
                mClient.interface(), mAudioTrack->getFormat(),
                false,  // createEncoder
                mAudioTrack);
    }

    if (mAudioSource != NULL) {
        int64_t durationUs;
        if (mAudioTrack->getFormat()->findInt64(kKeyDuration, &durationUs)) {
            Mutex::Autolock autoLock(mMiscStateLock);
            if (mDurationUs < 0 || durationUs > mDurationUs) {
                mDurationUs = durationUs;
            }
        }

        status_t err = mAudioSource->start();
        if (err != OK) {
            mAudioSource.clear();
            return err;
        }
    } else if (!strcasecmp(mime, MEDIA_MIMETYPE_AUDIO_QCELP)) {
        // For legacy reasons we're simply going to ignore the absence
        // of an audio decoder for QCELP instead of aborting playback
        // altogether.
        return OK;
    }

    if (mAudioSource != NULL) {
        Mutex::Autolock autoLock(mStatsLock);
        TrackStat *stat = &mStats.mTracks.editItemAt(mStats.mAudioTrackIndex);

        const char *component;
        if (!mAudioSource->getFormat()
                ->findCString(kKeyDecoderComponent, &component)) {
            component = "none";
        }

        stat->mDecoderName = component;
    }

    return mAudioSource != NULL ? OK : UNKNOWN_ERROR;
}
(3) finishAsyncPrepare_l
After the steps above, the decoders can decode normally, placing the decoded data into mAudioSource and mVideoSource respectively.
What remains is the cleanup at the end of prepare:
void AwesomePlayer::finishAsyncPrepare_l() {
    if (mIsAsyncPrepare) {
        if (mVideoSource == NULL) {
            notifyListener_l(MEDIA_SET_VIDEO_SIZE, 0, 0);
        } else {
            notifyVideoSize_l();
        }

        notifyListener_l(MEDIA_PREPARED);
    }

    mPrepareResult = OK;
    modifyFlags((PREPARING | PREPARE_CANCELLED | PREPARING_CONNECTED), CLEAR);
    modifyFlags(PREPARED, SET);
    mAsyncPrepareEvent = NULL;
    mPreparedCondition.broadcast();
}
First, notifyVideoSize_l publishes the width, height, rotation, and related information that determines the final output layout.
The state and flags are updated as well.
Finally, mPreparedCondition.broadcast() announces that prepare succeeded. [For Condition usage see: condition & mutex]
This matters because when the synchronous prepare method is called, it must block until the asynchronous preparation finishes.
Hence the following code:
status_t AwesomePlayer::prepare_l() {
    if (mFlags & PREPARED) {
        return OK;
    }

    if (mFlags & PREPARING) {
        return UNKNOWN_ERROR;
    }

    mIsAsyncPrepare = false;
    status_t err = prepareAsync_l();
    if (err != OK) {
        return err;
    }

    while (mFlags & PREPARING) {
        mPreparedCondition.wait(mLock);
    }

    return mPrepareResult;
}
The broadcast causes mPreparedCondition.wait(mLock) to return. Note that the loop also re-checks AwesomePlayer's state, which finishAsyncPrepare_l changes via modifyFlags(PREPARED, SET).
At this point all the preparation work is done.
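The prepare/prepareAsync hand-off is a classic condition-variable pattern. A minimal sketch using std::condition_variable in place of Android's Mutex/Condition (hypothetical TinyPreparer class):

```cpp
#include <cassert>
#include <condition_variable>
#include <mutex>
#include <thread>

// The async worker sets the "prepared" flag under the lock and broadcasts;
// the blocking prepare() waits in a predicate loop that re-checks the flag,
// mirroring prepare_l()'s while (mFlags & PREPARING) wait loop.
class TinyPreparer {
public:
    void prepareAsync() {
        worker_ = std::thread([this] {
            std::lock_guard<std::mutex> lock(mLock_);
            prepared_ = true;                   // like modifyFlags(PREPARED, SET)
            mPreparedCondition_.notify_all();   // like broadcast()
        });
    }

    // Blocking variant: kicks off the async work, then sleeps until done.
    bool prepare() {
        prepareAsync();
        std::unique_lock<std::mutex> lock(mLock_);
        // The predicate guards against spurious wakeups, just like the
        // while-loop around mPreparedCondition.wait(mLock).
        mPreparedCondition_.wait(lock, [this] { return prepared_; });
        return prepared_;
    }

    ~TinyPreparer() {
        if (worker_.joinable()) worker_.join();
    }

private:
    std::mutex mLock_;
    std::condition_variable mPreparedCondition_;
    bool prepared_ = false;
    std::thread worker_;
};
```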
3.4 start
The entry point:
status_t StagefrightPlayer::start() {
    ALOGV("start");
    return mPlayer->play();
}
Continuing:
status_t AwesomePlayer::play() {
    ATRACE_CALL();
    Mutex::Autolock autoLock(mLock);

    modifyFlags(CACHE_UNDERRUN, CLEAR);

    return play_l();
}
In AwesomePlayer, play calls play_l; let's look at play_l.
status_t AwesomePlayer::play_l() {
    modifyFlags(SEEK_PREVIEW, CLEAR);

    if (mFlags & PLAYING) {
        return OK;
    }

    if (!(mFlags & PREPARED)) {
        status_t err = prepare_l();
        if (err != OK) {
            return err;
        }
    }

    modifyFlags(PLAYING, SET);
    modifyFlags(FIRST_FRAME, SET);
First the state is checked: if it is not PREPARED, prepare_l is called again.
If it is, the state is changed to PLAYING.
    if (mAudioSource != NULL) {
        if (mAudioPlayer == NULL) {
            if (mAudioSink != NULL) {
                bool allowDeepBuffering;
                int64_t cachedDurationUs;
                bool eos;
                if (mVideoSource == NULL
                        && (mDurationUs > AUDIO_SINK_MIN_DEEP_BUFFER_DURATION_US ||
                        (getCachedDuration_l(&cachedDurationUs, &eos) &&
                        cachedDurationUs > AUDIO_SINK_MIN_DEEP_BUFFER_DURATION_US))) {
                    allowDeepBuffering = true;
                } else {
                    allowDeepBuffering = false;
                }

                mAudioPlayer = new AudioPlayer(mAudioSink, allowDeepBuffering, this);
                mAudioPlayer->setSource(mAudioSource);

                mTimeSource = mAudioPlayer;

                // If there was a seek request before we ever started,
                // honor the request now.
                // Make sure to do this before starting the audio player
                // to avoid a race condition.
                seekAudioIfNecessary_l();
            }
        }

        CHECK(!(mFlags & AUDIO_RUNNING));

        if (mVideoSource == NULL) {
            // We don't want to post an error notification at this point,
            // the error returned from MediaPlayer::start() will suffice.
            status_t err = startAudioPlayer_l(
                    false /* sendErrorNotification */);

            if (err != OK) {
                delete mAudioPlayer;
                mAudioPlayer = NULL;

                modifyFlags((PLAYING | FIRST_FRAME), CLEAR);

                if (mDecryptHandle != NULL) {
                    mDrmManagerClient->setPlaybackStatus(
                            mDecryptHandle, Playback::STOP, 0);
                }

                return err;
            }
        }
    }
The code above creates the audio output module, mAudioPlayer.
The logic is clear: the mAudioPlayer object is constructed with mAudioSink passed in as a parameter. As explained earlier, the actual playback chain is mAudioPlayer -> mAudioSink -> mAudioTrack (again, do not confuse this with the track member; this is the AudioTrack class).
mAudioSource is then set as mAudioPlayer's data source.
If there is only audio (mVideoSource == NULL), playback is started directly.
    if (mTimeSource == NULL && mAudioPlayer == NULL) {
        mTimeSource = &mSystemTimeSource;
    }
The code above sets the sync clock: if mAudioPlayer exists, playback is paced against the audio; otherwise the system clock drives playback.
    if (mVideoSource != NULL) {
        // Kick off video playback
        postVideoEvent_l();

        if (mAudioSource != NULL && mVideoSource != NULL) {
            postVideoLagEvent_l();
        }
    }

    if (mFlags & AT_EOS) {
        // Legacy behaviour, if a stream finishes playing and then
        // is started again, we play from the start...
        seekTo_l(0);
    }

    uint32_t params = IMediaPlayerService::kBatteryDataCodecStarted
        | IMediaPlayerService::kBatteryDataTrackDecoder;
    if ((mAudioSource != NULL) && (mAudioSource != mAudioTrack)) {
        params |= IMediaPlayerService::kBatteryDataTrackAudio;
    }
    if (mVideoSource != NULL) {
        params |= IMediaPlayerService::kBatteryDataTrackVideo;
    }
    addBatteryData(params);

    return OK;
}
Reaching this point means both audio and video exist: the mVideoEvent event is posted first, then mVideoLagEvent; and, as legacy behaviour, if the stream had already finished (AT_EOS) it seeks back to position 0 before playing.
The next three sub-sections cover how mAudioSink is passed in, postVideoEvent_l, and postVideoLagEvent_l.
(1) How mAudioSink is passed in
Back in MediaPlayerService, the setDataSource path runs the following code:
sp<MediaPlayerBase> MediaPlayerService::Client::setDataSource_pre(
        player_type playerType)
{
    ALOGV("player type = %d", playerType);

    // create the right type of player
    sp<MediaPlayerBase> p = createPlayer(playerType);
    if (p == NULL) {
        return p;
    }

    if (!p->hardwareOutput()) {
        mAudioOutput = new AudioOutput(mAudioSessionId);
        static_cast<MediaPlayerInterface*>(p.get())->setAudioSink(mAudioOutput);
    }

    return p;
}
An AudioOutput object is constructed here and passed in as mAudioSink.
(2) postVideoEvent
The code is as follows:
void AwesomePlayer::postVideoEvent_l(int64_t delayUs) {
    ATRACE_CALL();

    if (mVideoEventPending) {
        return;
    }

    mVideoEventPending = true;
    mQueue.postEventWithDelay(mVideoEvent, delayUs < 0 ? 10000 : delayUs);
}
It simply posts the mVideoEvent event, whose handler is AwesomePlayer::onVideoEvent:
void AwesomePlayer::onVideoEvent() {
    ATRACE_CALL();
    Mutex::Autolock autoLock(mLock);
    if (!mVideoEventPending) {
        // The event has been cancelled in reset_l() but had already
        // been scheduled for execution at that time.
        return;
    }
    mVideoEventPending = false;

    if (mSeeking != NO_SEEK) {
        if (mVideoBuffer) {
            mVideoBuffer->release();
            mVideoBuffer = NULL;
        }

        if (mSeeking == SEEK && isStreamingHTTP() && mAudioSource != NULL
                && !(mFlags & SEEK_PREVIEW)) {
            // We're going to seek the video source first, followed by
            // the audio source.
            // In order to avoid jumps in the DataSource offset caused by
            // the audio codec prefetching data from the old locations
            // while the video codec is already reading data from the new
            // locations, we'll "pause" the audio source, causing it to
            // stop reading input data until a subsequent seek.

            if (mAudioPlayer != NULL && (mFlags & AUDIO_RUNNING)) {
                mAudioPlayer->pause();
                modifyFlags(AUDIO_RUNNING, CLEAR);
            }
            mAudioSource->pause();
        }
    }
First it checks whether a seek is pending. If so, audio is paused, the video seek is completed first, and audio is seeked afterwards.
    if (!mVideoBuffer) {
        MediaSource::ReadOptions options;
        if (mSeeking != NO_SEEK) {
            ALOGV("seeking to %lld us (%.2f secs)", mSeekTimeUs, mSeekTimeUs / 1E6);

            options.setSeekTo(
                    mSeekTimeUs,
                    mSeeking == SEEK_VIDEO_ONLY
                        ? MediaSource::ReadOptions::SEEK_NEXT_SYNC
                        : MediaSource::ReadOptions::SEEK_CLOSEST_SYNC);
        }
        for (;;) {
            status_t err = mVideoSource->read(&mVideoBuffer, &options);
            options.clearSeekTo();

            if (err != OK) {
                CHECK(mVideoBuffer == NULL);

                if (err == INFO_FORMAT_CHANGED) {
                    ALOGV("VideoSource signalled format change.");

                    notifyVideoSize_l();

                    if (mVideoRenderer != NULL) {
                        mVideoRendererIsPreview = false;
                        initRenderer_l();
                    }
                    continue;
                }

                // So video playback is complete, but we may still have
                // a seek request pending that needs to be applied
                // to the audio track.
                if (mSeeking != NO_SEEK) {
                    ALOGV("video stream ended while seeking!");
                }
                finishSeekIfNecessary(-1);

                if (mAudioPlayer != NULL
                        && !(mFlags & (AUDIO_RUNNING | SEEK_PREVIEW))) {
                    startAudioPlayer_l();
                }

                modifyFlags(VIDEO_AT_EOS, SET);
                postStreamDoneEvent_l(err);
                return;
            }

            if (mVideoBuffer->range_length() == 0) {
                // Some decoders, notably the PV AVC software decoder
                // return spurious empty buffers that we just want to ignore.

                mVideoBuffer->release();
                mVideoBuffer = NULL;
                continue;
            }

            break;
        }

        {
            Mutex::Autolock autoLock(mStatsLock);
            ++mStats.mNumVideoFramesDecoded;
        }
    }
The code above falls into two parts. The first part checks whether a seek is needed and, if so, fills in the read options.
The second part reads one video frame from mVideoSource. The options are passed into read(), so when a seek is pending, the data returned is already the decoded data at the seek target. Nice.
There are a few details in between: if the read fails with INFO_FORMAT_CHANGED, the video size is re-queried and the renderer re-initialized; on any other error, video playback is finished, the VIDEO_AT_EOS flag is set, and the mStreamDoneEvent message is posted.
    int64_t timeUs;
    CHECK(mVideoBuffer->meta_data()->findInt64(kKeyTime, &timeUs));

    mLastVideoTimeUs = timeUs;

    if (mSeeking == SEEK_VIDEO_ONLY) {
        if (mSeekTimeUs > timeUs) {
            ALOGI("XXX mSeekTimeUs = %lld us, timeUs = %lld us",
                 mSeekTimeUs, timeUs);
        }
    }

    {
        Mutex::Autolock autoLock(mMiscStateLock);
        mVideoTimeUs = timeUs;
    }

    SeekType wasSeeking = mSeeking;
    finishSeekIfNecessary(timeUs);
The code above runs once a frame has been read successfully: it extracts the frame's timestamp.
As mentioned earlier, when a seek is requested, audio is paused first and the seeked video data is read; once the first frame arrives, its timestamp becomes the reference used to seek audio. finishSeekIfNecessary does exactly that; it is straightforward and readers can follow it on their own.
    if (mAudioPlayer != NULL && !(mFlags & (AUDIO_RUNNING | SEEK_PREVIEW))) {
        status_t err = startAudioPlayer_l();
        if (err != OK) {
            ALOGE("Starting the audio player failed w/ err %d", err);
            return;
        }
    }

    if ((mFlags & TEXTPLAYER_INITIALIZED)
            && !(mFlags & (TEXT_RUNNING | SEEK_PREVIEW))) {
        mTextDriver->start();
        modifyFlags(TEXT_RUNNING, SET);
    }
After that, audio playback is started if there is audio, and subtitle playback if there is a subtitle track.
    TimeSource *ts =
        ((mFlags & AUDIO_AT_EOS) || !(mFlags & AUDIOPLAYER_STARTED))
            ? &mSystemTimeSource : mTimeSource;

    if (mFlags & FIRST_FRAME) {
        modifyFlags(FIRST_FRAME, CLEAR);
        mSinceLastDropped = 0;
        mTimeSourceDeltaUs = ts->getRealTimeUs() - timeUs;
    }

    int64_t realTimeUs, mediaTimeUs;
    if (!(mFlags & AUDIO_AT_EOS) && mAudioPlayer != NULL
        && mAudioPlayer->getMediaTimeMapping(&realTimeUs, &mediaTimeUs)) {
        mTimeSourceDeltaUs = realTimeUs - mediaTimeUs;
    }

    if (wasSeeking == SEEK_VIDEO_ONLY) {
        int64_t nowUs = ts->getRealTimeUs() - mTimeSourceDeltaUs;

        int64_t latenessUs = nowUs - timeUs;

        ATRACE_INT("Video Lateness (ms)", latenessUs / 1E3);

        if (latenessUs > 0) {
            ALOGI("after SEEK_VIDEO_ONLY we're late by %.2f secs", latenessUs / 1E6);
        }
    }
The code above updates the timing information. First a time source is chosen: the system clock, or the audio clock.
For the first frame: mTimeSourceDeltaUs = ts->getRealTimeUs() - timeUs;
For subsequent frames: mTimeSourceDeltaUs = realTimeUs - mediaTimeUs;
Here is what these variables and calls mean:
ts->getRealTimeUs(): the real elapsed time, computed from how many audio frames have been played
timeUs: the timestamp of the next video frame
realTimeUs = mPositionTimeRealUs: obtained from mAudioPlayer (when audio is present), the real time of the current playback position
mPositionTimeMediaUs: the timestamp of the next audio packet
From these, the current playback position can be computed. For streams whose first audio or video packet does not start at time 0, readers can verify for themselves that the mechanism above still works correctly.
mAudioPlayer will be analyzed in detail in a later article.
Ignore the wasSeeking == SEEK_VIDEO_ONLY branch for now and continue:
    if (wasSeeking == NO_SEEK) {
        // Let's display the first frame after seeking right away.

        int64_t nowUs = ts->getRealTimeUs() - mTimeSourceDeltaUs;

        int64_t latenessUs = nowUs - timeUs;

        ATRACE_INT("Video Lateness (ms)", latenessUs / 1E3);

        if (latenessUs > 500000ll
                && mAudioPlayer != NULL
                && mAudioPlayer->getMediaTimeMapping(
                    &realTimeUs, &mediaTimeUs)) {
            if (mWVMExtractor == NULL) {
                ALOGI("we're much too late (%.2f secs), video skipping ahead",
                     latenessUs / 1E6);

                mVideoBuffer->release();
                mVideoBuffer = NULL;

                mSeeking = SEEK_VIDEO_ONLY;
                mSeekTimeUs = mediaTimeUs;

                postVideoEvent_l();
                return;
            } else {
                // The widevine extractor doesn't deal well with seeking
                // audio and video independently. We'll just have to wait
                // until the decoder catches up, which won't be long at all.
                ALOGI("we're very late (%.2f secs)", latenessUs / 1E6);
            }
        }

        if (latenessUs > 40000) {
            // We're more than 40ms late.
            ALOGV("we're late by %lld us (%.2f secs)",
                 latenessUs, latenessUs / 1E6);

            if (!(mFlags & SLOW_DECODER_HACK)
                    || mSinceLastDropped > FRAME_DROP_FREQ)
            {
                ALOGV("we're late by %lld us (%.2f secs) dropping "
                     "one after %d frames",
                     latenessUs, latenessUs / 1E6, mSinceLastDropped);

                mSinceLastDropped = 0;
                mVideoBuffer->release();
                mVideoBuffer = NULL;

                {
                    Mutex::Autolock autoLock(mStatsLock);
                    ++mStats.mNumVideoFramesDropped;
                }

                postVideoEvent_l();
                return;
            }
        }

        if (latenessUs < -10000) {
            // We're more than 10ms early.
            postVideoEvent_l(10000);
            return;
        }
    }
The code above works together with the earlier timing logic. Once mTimeSourceDeltaUs is known, playback state can be derived:
current playback position: int64_t nowUs = ts->getRealTimeUs() - mTimeSourceDeltaUs;
playback latency: int64_t latenessUs = nowUs - timeUs; (timeUs being the next frame's timestamp)
What follows handles excessive lateness, i.e. A/V synchronization against the audio clock or the system clock, when video drifts too far from the reference:
more than 500000 us (0.5 s) late: seek video ahead to the corresponding position; more than 40000 us (40 ms) late: drop the frame; more than 10 ms ahead of the reference clock: delay the next mVideoEvent via postVideoEvent_l(10000).
    if ((mNativeWindow != NULL)
            && (mVideoRendererIsPreview || mVideoRenderer == NULL)) {
        mVideoRendererIsPreview = false;

        initRenderer_l();
    }

    if (mVideoRenderer != NULL) {
        mSinceLastDropped++;
        mVideoRenderer->render(mVideoBuffer);
        if (!mVideoRenderingStarted) {
            mVideoRenderingStarted = true;
            notifyListener_l(MEDIA_INFO, MEDIA_INFO_RENDERING_START);
        }
    }

    mVideoBuffer->release();
    mVideoBuffer = NULL;

    if (wasSeeking != NO_SEEK && (mFlags & SEEK_PREVIEW)) {
        modifyFlags(SEEK_PREVIEW, CLEAR);
        return;
    }

    postVideoEvent_l();
}
Finally, the frame is displayed. When playback is proceeding normally, the frame is rendered and the next mVideoEvent is triggered via postVideoEvent_l().
By now it should be clear that AwesomePlayer's playback is driven by recursively calling postVideoEvent_l().
And because postVideoEvent_l() posts the message with a delay, this does not block.
(3) postVideoLagEvent
Here is the handler for this event:
void AwesomePlayer::onVideoLagUpdate() {
    Mutex::Autolock autoLock(mLock);
    if (!mVideoLagEventPending) {
        return;
    }
    mVideoLagEventPending = false;

    int64_t audioTimeUs = mAudioPlayer->getMediaTimeUs();
    int64_t videoLateByUs = audioTimeUs - mVideoTimeUs;

    if (!(mFlags & VIDEO_AT_EOS) && videoLateByUs > 300000ll) {
        ALOGV("video late by %lld ms.", videoLateByUs / 1000ll);

        notifyListener_l(
                MEDIA_INFO,
                MEDIA_INFO_VIDEO_TRACK_LAGGING,
                videoLateByUs / 1000ll);
    }

    postVideoLagEvent_l();
}
This is mainly for status reporting; see the definition:
// The video is too complex for the decoder: it can't decode frames fast
// enough. Possibly only the audio plays fine at this stage.
MEDIA_INFO_VIDEO_TRACK_LAGGING = 700,
So when video decoding cannot keep up, the upper layer is notified that the decoder is falling behind.
4. Analysis of the other events
This covers the whole playback flow, but a few events have not been touched on yet; here is an outline of them as well.
We listed all the events earlier; here they are again:
sp<TimedEventQueue::Event> mVideoEvent = new AwesomeEvent(this, &AwesomePlayer::onVideoEvent);
sp<TimedEventQueue::Event> mStreamDoneEvent = new AwesomeEvent(this, &AwesomePlayer::onStreamDone);
sp<TimedEventQueue::Event> mBufferingEvent = new AwesomeEvent(this, &AwesomePlayer::onBufferingUpdate);
sp<TimedEventQueue::Event> mCheckAudioStatusEvent = new AwesomeEvent(this, &AwesomePlayer::onCheckAudioStatus);
sp<TimedEventQueue::Event> mVideoLagEvent = new AwesomeEvent(this, &AwesomePlayer::onVideoLagUpdate);
sp<TimedEventQueue::Event> mAsyncPrepareEvent = new AwesomeEvent(this, &AwesomePlayer::onPrepareAsyncEvent);
mAsyncPrepareEvent, mVideoLagEvent, and mVideoEvent were analyzed above; now for the remaining ones.
(1) mStreamDoneEvent
This is posted when video playback finishes, i.e. in onVideoEvent when reading frame data fails:
void AwesomePlayer::onStreamDone() {
    // Posted whenever any stream finishes playing.
    ATRACE_CALL();

    Mutex::Autolock autoLock(mLock);
    if (!mStreamDoneEventPending) {
        return;
    }
    mStreamDoneEventPending = false;

    if (mStreamDoneStatus != ERROR_END_OF_STREAM) {
        ALOGV("MEDIA_ERROR %d", mStreamDoneStatus);

        notifyListener_l(
                MEDIA_ERROR, MEDIA_ERROR_UNKNOWN, mStreamDoneStatus);

        pause_l(true /* at eos */);
        modifyFlags(AT_EOS, SET);
        return;
    }

    const bool allDone =
        (mVideoSource == NULL || (mFlags & VIDEO_AT_EOS))
            && (mAudioSource == NULL || (mFlags & AUDIO_AT_EOS));

    if (!allDone) {
        return;
    }

    if ((mFlags & LOOPING)
            || ((mFlags & AUTO_LOOPING)
                && (mAudioSink == NULL || mAudioSink->realtime()))) {
        // Don't AUTO_LOOP if we're being recorded, since that cannot be
        // turned off and recording would go on indefinitely.
        seekTo_l(0);

        if (mVideoSource != NULL) {
            postVideoEvent_l();
        }
    } else {
        ALOGV("MEDIA_PLAYBACK_COMPLETE");
        notifyListener_l(MEDIA_PLAYBACK_COMPLETE);

        pause_l(true /* at eos */);
        modifyFlags(AT_EOS, SET);
    }
}
Its main responsibilities are:
determine whether playback has really finished;
if it has, check whether looping is required; if so, call seekTo_l(0) and keep playing;
otherwise notify the caller that this playback is over by sending MEDIA_PLAYBACK_COMPLETE.
(2) mBufferingEvent
AwesomePlayer posts this event via postBufferingEvent_l.
Its purpose is to cache data.
It is invoked here:
void AwesomePlayer::onPrepareAsyncEvent() {
    *****************
    if (isStreamingHTTP()) {
        postBufferingEvent_l();
    } else {
        finishAsyncPrepare_l();
    }
}
For network streams, some data is buffered first. Here is the implementation:
void AwesomePlayer::postBufferingEvent_l() {
    if (mBufferingEventPending) {
        return;
    }
    mBufferingEventPending = true;
    mQueue.postEventWithDelay(mBufferingEvent, 1000000ll);
}
It first sets the mBufferingEventPending flag, then posts the message.
No more code here; the principle is: when data needs to be cached, onPrepareAsyncEvent calls postBufferingEvent_l and then returns. Since the decoders have already started by this point, i.e. the data path is established,
the read-and-decode cycle keeps running; meanwhile the onBufferingUpdate handler pauses the output first, and once enough data has been cached it calls finishAsyncPrepare_l to complete the prepareAsync operation.
(3) mCheckAudioStatusEvent
The call sequences that post this event are:
startAudioPlayer_l -> postAudioSeekComplete -> postCheckAudioStatusEvent
fillBuffer (mAudioPlayer) -> postAudioEOS -> postCheckAudioStatusEvent
Its purpose is to query the audio state.
The code is as follows:
void AwesomePlayer::postCheckAudioStatusEvent(int64_t delayUs) {
    Mutex::Autolock autoLock(mAudioLock);
    if (mAudioStatusEventPending) {
        return;
    }
    mAudioStatusEventPending = true;
    // Do not honor delay when looping in order to limit audio gap
    if (mFlags & (LOOPING | AUTO_LOOPING)) {
        delayUs = 0;
    }
    mQueue.postEventWithDelay(mCheckAudioStatusEvent, delayUs);
}
The handler code:
void AwesomePlayer::onCheckAudioStatus() {
    {
        Mutex::Autolock autoLock(mAudioLock);
        if (!mAudioStatusEventPending) {
            // Event was dispatched and while we were blocking on the mutex,
            // has already been cancelled.
            return;
        }
        mAudioStatusEventPending = false;
    }

    Mutex::Autolock autoLock(mLock);

    if (mWatchForAudioSeekComplete && !mAudioPlayer->isSeeking()) {
        mWatchForAudioSeekComplete = false;

        if (!mSeekNotificationSent) {
            notifyListener_l(MEDIA_SEEK_COMPLETE);
            mSeekNotificationSent = true;
        }

        mSeeking = NO_SEEK;
    }

    status_t finalStatus;
    if (mWatchForAudioEOS && mAudioPlayer->reachedEOS(&finalStatus)) {
        mWatchForAudioEOS = false;
        modifyFlags(AUDIO_AT_EOS, SET);
        modifyFlags(FIRST_FRAME, SET);
        postStreamDoneEvent_l(finalStatus);
    }
}
It serves two purposes: detecting whether a seek has finished, and detecting whether playback has reached the end. As with video, reaching the end also triggers postStreamDoneEvent_l.
[End]