FFmpeg Architecture Analysis


[http://blog.csdn.net/lsy5631932/article/details/8643351]


1. Introduction

FFmpeg is a complete open-source solution integrating recording, conversion, and audio/video encoding and decoding. FFmpeg was developed on Linux, but it can be compiled and used on most operating systems. It supports more than 40 encoders such as MPEG, DivX, MPEG-4, AC3, DV, and FLV, and more than 90 decoders such as AVI, MPEG, OGG, Matroska, and ASF. Open-source players such as TCPMP, VLC, and MPlayer all use FFmpeg.

The main FFmpeg directory contains subdirectories such as libavcodec, libavformat, and libavutil. libavcodec holds the individual encode/decode modules, libavformat holds the muxer/demuxer modules, and libavutil holds auxiliary modules such as memory operations.

Taking the Flash movie FLV file format as an example: the muxer/demuxer files flvenc.c and flvdec.c live under libavformat, while the encode/decode files mpegvideo.c and h263de.c live under libavcodec.

2. Definition and Initialization of muxer/demuxer and encoder/decoder

The implementations of muxer/demuxer and encoder/decoder in FFmpeg have much in common. The biggest difference is that muxers and demuxers use two distinct structures, AVOutputFormat and AVInputFormat, while encoders and decoders both use the AVCodec structure.

What muxer/demuxer and encoder/decoder have in common in FFmpeg:

Both are initialized inside the av_register_all() function called at the start of main().

Both are stored in global variables as linked lists:

muxers/demuxers are stored in the global variables AVOutputFormat *first_oformat and AVInputFormat *first_iformat, respectively.

encoders/decoders are all stored in the global variable AVCodec *first_avcodec.

Both expose their public interfaces as function pointers.

The interfaces exposed by a demuxer are:

int (*read_probe)(AVProbeData *);
int (*read_header)(struct AVFormatContext *, AVFormatParameters *ap);
int (*read_packet)(struct AVFormatContext *, AVPacket *pkt);
int (*read_close)(struct AVFormatContext *);
int (*read_seek)(struct AVFormatContext *, int stream_index, int64_t timestamp, int flags);

The interfaces exposed by a muxer are:

int (*write_header)(struct AVFormatContext *);
int (*write_packet)(struct AVFormatContext *, AVPacket *pkt);
int (*write_trailer)(struct AVFormatContext *);

The encoder/decoder interfaces are identical; an encoder just implements only the encode function and a decoder only the decode function:

int (*init)(AVCodecContext *);
int (*encode)(AVCodecContext *, uint8_t *buf, int buf_size, void *data);
int (*close)(AVCodecContext *);
int (*decode)(AVCodecContext *, void *outdata, int *outdata_size, uint8_t *buf, int buf_size);

Again taking the FLV file as the example, here is how the muxer/demuxer is initialized.

In the av_register_all(void) function in libavformat\allformats.c, executing

REGISTER_MUXDEMUX(FLV, flv);

registers the flv_muxer and flv_demuxer variables that support the FLV format at the tail of the global first_oformat and first_iformat linked lists, respectively.
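For reference, a sketch of what this macro family looked like in FFmpeg of that era; the exact spelling varies between versions, so treat this as an approximation rather than the verbatim source:

#define REGISTER_MUXER(X,x) \
    if (ENABLE_##X##_MUXER) av_register_output_format(&x##_muxer)
#define REGISTER_DEMUXER(X,x) \
    if (ENABLE_##X##_DEMUXER) av_register_input_format(&x##_demuxer)
#define REGISTER_MUXDEMUX(X,x) REGISTER_MUXER(X,x); REGISTER_DEMUXER(X,x)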

其中flv_muxerlibavformat\flvenc.c中定义如下:

AVOutputFormat flv_muxer = {
    "flv",
    "flv format",
    "video/x-flv",
    "flv",
    sizeof(FLVContext),
#ifdef CONFIG_LIBMP3LAME
    CODEC_ID_MP3,
#else // CONFIG_LIBMP3LAME
    CODEC_ID_NONE,
#endif // CONFIG_LIBMP3LAME
    CODEC_ID_FLV1,
    flv_write_header,
    flv_write_packet,
    flv_write_trailer,
    .codec_tag = (const AVCodecTag*[]){flv_video_codec_ids, flv_audio_codec_ids, 0},
};

The AVOutputFormat structure is defined as follows:

typedef struct AVOutputFormat {
    const char *name;
    const char *long_name;
    const char *mime_type;
    const char *extensions; /**< comma separated filename extensions */
    /** size of private data so that it can be allocated in the wrapper */
    int priv_data_size;
    /* output support */
    enum CodecID audio_codec; /**< default audio codec */
    enum CodecID video_codec; /**< default video codec */
    int (*write_header)(struct AVFormatContext *);
    int (*write_packet)(struct AVFormatContext *, AVPacket *pkt);
    int (*write_trailer)(struct AVFormatContext *);
    /** can use flags: AVFMT_NOFILE, AVFMT_NEEDNUMBER, AVFMT_GLOBALHEADER */
    int flags;
    /** currently only used to set pixel format if not YUV420P */
    int (*set_parameters)(struct AVFormatContext *, AVFormatParameters *);
    int (*interleave_packet)(struct AVFormatContext *, AVPacket *out, AVPacket *in, int flush);
    /**
     * list of supported codec_id-codec_tag pairs, ordered by "better choice first"
     * the arrays are all CODEC_ID_NONE terminated
     */
    const struct AVCodecTag **codec_tag;
    /* private fields */
    struct AVOutputFormat *next;
} AVOutputFormat;

AVOutputFormat结构的定义可知,flv_muxer变量初始化的第一、二个成员分别为该muxer

的名称与长名称,第三、第四个成员为所对应MIMEType和后缀名,第五个成员是所对应的

私有结构的大小,第六、第七个成员为所对应的音频编码和视频编码类型ID,接下来就是三

个重要的接口函数,muxer的功能也就是通过调用这三个接口实现的

flv_demuxerlibavformat\flvdec.c中定义如下,flv_muxer类似,在这儿主要也是设置

5个接口函数,其中flv_probe接口用途是测试传入的数据段是否是符合当前文件格式,这

个接口在匹配当前demuxer时会用到。

AVInputFormat flv_demuxer = {
    "flv",
    "flv format",
    0,
    flv_probe,
    flv_read_header,
    flv_read_packet,
    flv_read_close,
    flv_read_seek,
    .extensions = "flv",
    .value = CODEC_ID_FLV1,
};
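To illustrate what such a probe does, here is a minimal sketch in the spirit of flv_probe; the real body in flvdec.c differs in detail, so take the exact checks and score as assumptions:

static int flv_probe_sketch(AVProbeData *p)
{
    /* An FLV file starts with the bytes 'F' 'L' 'V'. */
    if (p->buf_size < 3)
        return 0;
    if (p->buf[0] == 'F' && p->buf[1] == 'L' && p->buf[2] == 'V')
        return AVPROBE_SCORE_MAX;   /* confident match */
    return 0;                       /* not an FLV file */
}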

Inside the same av_register_all(void) function, all encoders/decoders are initialized by executing the avcodec_register_all(void) function in libavcodec\allcodecs.c.

Because not every codec supports both encode and decode, there are three registration macros:

#define REGISTER_ENCODER(X,x) \
    if (ENABLE_##X##_ENCODER) register_avcodec(&x##_encoder)
#define REGISTER_DECODER(X,x) \
    if (ENABLE_##X##_DECODER) register_avcodec(&x##_decoder)
#define REGISTER_ENCDEC(X,x) REGISTER_ENCODER(X,x); REGISTER_DECODER(X,x)

如支持flvflv_encoderflv_decoder变量就分别是在libavcodec\mpegvideo.clibavcodec\h263de.c中创建的。

3.当前muxer/demuxer的匹配

FFmpeg的文件转换过程中,首先要做的就是根据传入文件和传出文件的后缀名[FIXME]匹配合适的demuxermuxer

匹配上的demuxermuxer都保存在如下所示,定义在ffmpeg.c里的

全局变量file_iformatfile_oformat中:

    static AVInputFormat *file_iformat;
    static AVOutputFormat *file_oformat;

3.1 demuxer matching

The function static AVInputFormat *av_probe_input_format2(AVProbeData *pd, int is_opened, int *score_max) in libavformat\utils.c takes the probe data passed in and calls each demuxer's read_probe interface in turn to decide whether that demuxer matches the content of the input file. The call chain is as follows:

void parse_options(int argc, char **argv, const OptionDef *options,
                   void (*parse_arg_function)(const char *));
static void opt_input_file(const char *filename)
int av_open_input_file(……)
AVInputFormat *av_probe_input_format(AVProbeData *pd, int is_opened)
static AVInputFormat *av_probe_input_format2(……)

The opt_input_file function is registered in the const OptionDef options[] array; it is called when parse_options(int argc, char **argv, const OptionDef *options) parses the "-i" argument, i.e., the input file name, from argv.
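A condensed sketch of the probing loop inside av_probe_input_format2(); the real function also honors is_opened and the AVFMT_NOFILE flag, which are omitted here:

static AVInputFormat *probe_input_format_sketch(AVProbeData *pd, int *score_max)
{
    AVInputFormat *fmt1, *fmt = NULL;
    int score;

    /* Walk the global demuxer list and keep the best-scoring match. */
    for (fmt1 = first_iformat; fmt1 != NULL; fmt1 = fmt1->next) {
        if (!fmt1->read_probe)
            continue;                   /* this demuxer cannot probe by content */
        score = fmt1->read_probe(pd);   /* ask the demuxer to score the data */
        if (score > *score_max) {
            *score_max = score;
            fmt = fmt1;
        }
    }
    return fmt;
}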

3.2 muxer matching

Unlike demuxer matching, muxer matching calls the guess_format function and works from the output file extension found in main()'s argv:

void parse_options(int argc, char **argv, const OptionDef *options,
                   void (*parse_arg_function)(const char *));
void parse_arg_file(const char *filename)
static void opt_output_file(const char *filename)
AVOutputFormat *guess_format(const char *short_name,
                             const char *filename,
                             const char *mime_type)
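A sketch of the extension-based part of guess_format(); the real function also weighs matches on short_name and mime_type, and match_ext() here stands in for libavformat's internal comparison of a filename against a comma-separated extension list:

static AVOutputFormat *guess_format_sketch(const char *filename)
{
    AVOutputFormat *fmt;

    /* Walk the global muxer list; the first extension hit wins. */
    for (fmt = first_oformat; fmt != NULL; fmt = fmt->next) {
        if (fmt->extensions && match_ext(filename, fmt->extensions))
            return fmt;
    }
    return NULL;
}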

3.3 Matching the current encoder/decoder

In main(), apart from parse_options(), which parses the arguments and initializes the demuxer and muxer, all remaining work is done in the av_encode() function.

libavcodec\utils.c contains the following two functions:

    AVCodec *avcodec_find_encoder(enum CodecID id)
    AVCodec *avcodec_find_decoder(enum CodecID id)

Their job is to find the matching encoder or decoder for a given CodecID.
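Their bodies amount to a linear search of the registered-codec list; here is a sketch close to the old utils.c (the decoder variant tests p->decode instead of p->encode):

AVCodec *avcodec_find_encoder(enum CodecID id)
{
    AVCodec *p = first_avcodec;
    while (p) {
        if (p->encode != NULL && p->id == id)
            return p;           /* found an encoder with the requested id */
        p = p->next;
    }
    return NULL;                /* no matching encoder registered */
}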

av_encode()函数的开头,首先初始化各个AVInputStreamAVOutputStream,然后分别调

用上述二个函数,并将匹配上的encoderdecoder分别保存在:

AVInputStream->AVStream*st->AVCodecContext *codec->struct AVCodec *codec

AVOutputStream->AVStream*st->AVCodecContext *codec->struct AVCodec *codec变量。

4. Other Main Data Structures

4.1 AVFormatContext

AVFormatContext is the main structure that implements input/output and stores the related data during FFmpeg's format conversion. Each input and output file has a corresponding entity in the following global pointer arrays:

    static AVFormatContext *output_files[MAX_FILES];
    static AVFormatContext *input_files[MAX_FILES];

Since input and output share the same structure, the iformat or oformat member, defined as follows, must be assigned accordingly:

    struct AVInputFormat *iformat;
    struct AVOutputFormat *oformat;

For a given AVFormatContext, these two members cannot both be set; that is, one AVFormatContext cannot contain both a demuxer and a muxer. After parse_options() at the start of main() has found the matching muxer and demuxer, the AVFormatContext structure for each input and output is initialized from the argv arguments and stored in the corresponding output_files or input_files pointer array. These arrays are then passed into av_encode() as parameters and are not used anywhere else.

4.2 AVCodecContext

This structure stores the AVCodec pointer and codec-related data, such as the video width/height and the audio sample rate. The codec_type and codec_id members of AVCodecContext are the most important ones for encoder/decoder matching:

    enum CodecType codec_type;    /* see CODEC_TYPE_xxx */
    enum CodecID codec_id;        /* see CODEC_ID_xxx */

As shown above, codec_type holds a media type such as CODEC_TYPE_VIDEO or CODEC_TYPE_AUDIO, while codec_id holds a codec such as CODEC_ID_FLV1 or CODEC_ID_VP6F.

Taking FLV as the example again: after av_open_input_file(……) has matched the right AVInputFormat demuxer, av_open_input_stream() calls the AVInputFormat's read_header interface, which executes the flv_read_header() function in flvdec.c. Inside flv_read_header(), the appropriate video or audio AVStream is created from the data in the file header, and the correct codec_type is set in the AVStream's AVCodecContext. The codec_id value is set later, during decoding, when flv_read_packet() runs and inspects the header of each packet.

4.3 AVStream

The AVStream structure stores stream-related information such as the codec and data segments. Its two most important members are:

    AVCodecContext *codec; /**< codec context */
    void *priv_data;

The codec pointer holds the encoder or decoder structure described in the previous section. The priv_data pointer holds data specific to the particular coded stream; as the following code shows, during ASF decoding priv_data holds an ASFStream structure:

    AVStream *st;
    ASFStream *asf_st;
    … …
    st->priv_data = asf_st;

4.4 AVInputStream / AVOutputStream

Depending on whether a stream is input or output, the AVStream structure described above is wrapped in an AVInputStream or AVOutputStream structure, which are used in av_encode(). AVInputStream additionally stores timing-related information; AVOutputStream additionally stores information related to audio/video synchronization.

4.5 AVPacket

The AVPacket structure, defined below, holds packet data that has been read:

typedef struct AVPacket {
    int64_t pts;            ///< presentation time stamp in time_base units
    int64_t dts;            ///< decompression time stamp in time_base units
    uint8_t *data;
    int size;
    int stream_index;
    int flags;
    int duration;           ///< presentation duration in time_base units (0 if not available)
    void (*destruct)(struct AVPacket *);
    void *priv;
    int64_t pos;            ///< byte position in stream, -1 if unknown
} AVPacket;

av_encode()函数中,调用AVInputFormat

(*read_packet)(structAVFormatContext *, AVPacket *pkt)接口,读取输入文件的一帧数

据保存在当前输入AVFormatContextAVPacket成员中。
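A minimal sketch of that pattern, where ic is the input AVFormatContext and process_packet() is a hypothetical consumer; in practice the dispatch to (*read_packet) happens through av_read_packet() rather than by calling the interface directly:

AVPacket pkt;
while (av_read_packet(ic, &pkt) >= 0) {
    /* av_read_packet() forwards to the demuxer's (*read_packet) interface */
    process_packet(&pkt);       /* hypothetical consumer */
    av_free_packet(&pkt);       /* release the packet's buffer */
}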

---------------------------------------------------------------------

FFMPEG is currently the most widely used codec software library. It supports many popular codecs, is implemented in C, is integrated into all kinds of PC software, and is frequently ported to embedded devices. Thinking about such a codec library in object-oriented terms, the first thing that comes to mind is building classes for the various codecs, then defining the data-flow rules on their abstract base class, and transforming input objects into output objects according to the algorithm.

In the actual code, these are split into three kinds of objects: encoder/decoder, muxer/demuxer, and device, corresponding to codecs, input/output formats, and devices. The start of the main function initializes these three kinds of objects. In avcodec_register_all, many codecs are registered, including the H.264 video decoder and the X264 encoder:

REGISTER_DECODER(H264, h264);

REGISTER_ENCODER(LIBX264, libx264);

The relevant macros are:

#define REGISTER_ENCODER(X,x) { \
    extern AVCodec x##_encoder; \
    if (CONFIG_##X##_ENCODER) avcodec_register(&x##_encoder); }
#define REGISTER_DECODER(X,x) { \
    extern AVCodec x##_decoder; \
    if (CONFIG_##X##_DECODER) avcodec_register(&x##_decoder); }

Thus libx264_encoder and h264_decoder are registered according to build options such as CONFIG_##X##_ENCODER. Registration happens in avcodec_register(AVCodec *codec), which simply appends the specific codec (libx264_encoder or h264_decoder) to the global linked list first_avcodec. The AVCodec parameter is a structure that can be understood as the codec base class: besides attributes such as the name and id, it contains the following function pointers for each concrete codec "subclass" to implement.

    int (*init)(AVCodecContext *);
    int (*encode)(AVCodecContext *, uint8_t *buf, int buf_size, void *data);
    int (*close)(AVCodecContext *);
    int (*decode)(AVCodecContext *, void *outdata, int *outdata_size,
                  const uint8_t *buf, int buf_size);
    void (*flush)(AVCodecContext *);
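The registration itself is just an append to that list; here is a sketch modeled on the old register function (details approximate):

void avcodec_register(AVCodec *codec)
{
    AVCodec **p = &first_avcodec;
    while (*p != NULL)
        p = &(*p)->next;        /* walk to the tail of the list */
    *p = codec;
    codec->next = NULL;
}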

Following libx264 further (the static X264 encoding library, pulled in as the H.264 encoder when FFMPEG is built), libx264.c contains the following code:

AVCodec libx264_encoder = {
    .name = "libx264",
    .type = CODEC_TYPE_VIDEO,
    .id = CODEC_ID_H264,
    .priv_data_size = sizeof(X264Context),
    .init = X264_init,
    .encode = X264_frame,
    .close = X264_close,
    .capabilities = CODEC_CAP_DELAY,
    .pix_fmts = (enum PixelFormat[]) { PIX_FMT_YUV420P, PIX_FMT_NONE },
    .long_name = NULL_IF_CONFIG_SMALL("libx264 H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10"),
};

Here the attributes and methods inherited from AVCodec are assigned concrete values. In particular,

    .init = X264_init,
    .encode = X264_frame,
    .close = X264_close,

point the function pointers at concrete functions. These three functions are implemented on top of the API provided by the static libx264 library, i.e., X264's main interface functions. pix_fmts defines the supported input formats; here it is 4:2:0 planar:

PIX_FMT_YUV420P,  ///< planar YUV 4:2:0, 12bpp, (1 Cr & Cb sample per 2x2 Y samples)

The X264Context seen above encapsulates the context-management data that X264 needs:

typedef struct X264Context {
    x264_param_t params;
    x264_t *enc;
    x264_picture_t pic;
    AVFrame out_pic;
} X264Context;

It hangs off the void *priv_data member of AVCodecContext and defines the context attributes private to each codec. AVCodecContext itself acts like a context base class: it also provides other context attributes, such as the picture resolution and quantization range, plus function pointers such as rtp_callback, for the codecs to use.

Back in main, once the codecs, input/output formats, and devices have been registered, the contexts are initialized and the codec parameters are read in; then av_encode() is called to do the actual encoding/decoding. Following the function's comments, its process is:

1. Initialize the input and output streams.

2. Determine the required codecs from the input/output streams and initialize them.

3. Write the various parts of the output file.

Let's focus on steps 2 and 3 and see how the codec base class analyzed earlier is used to achieve polymorphism. Skimming the relationships in this code, the codec composition in FFMPEG can roughly be drawn as a class diagram.

See [3] for the meaning of these structures (see the appendix). A series of functions from utils.c is called here; avcodec_open(), invoked whenever a codec is opened, runs the following code:

    avctx->codec = codec;
    avctx->codec_id = codec->id;
    avctx->frame_number = 0;
    if (avctx->codec->init) {
        ret = avctx->codec->init(avctx);

This initializes the specifically matched codec; avctx->codec->init(avctx) is a call through the function pointer defined in AVCodec to the concrete initialization function, such as X264_init.

avcodec_encode_video() and avcodec_encode_audio() are called by output_packet() to encode audio and video. They likewise go through the function pointer avctx->codec->encode() to invoke the matched encoder's encode function, such as X264_frame, which does the actual work.
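Condensed to its core, the wrapper just forwards through the base-class pointer; a sketch based on the old avcodec_encode_video(), with bookkeeping and checks omitted:

int avcodec_encode_video(AVCodecContext *avctx, uint8_t *buf, int buf_size,
                         const AVFrame *pict)
{
    /* At run time this lands in the matched encoder, e.g. X264_frame. */
    return avctx->codec->encode(avctx, buf, buf_size, (void *)pict);
}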

From the above analysis we can see how FFMPEG abstracts codec behavior in an object-oriented way, making each codec entity concrete through composition and inheritance. Suppose we wanted to add a new H265 decoder to FFMPEG; the steps would be as follows (a sketch is given after the list):

1. Add CONFIG_H265_DECODER to the config build options.

2. Register the H265 decoder with the macro.

3. Define an AVCodec h265_decoder variable and initialize its attributes and function pointers.

4. Implement h265_decoder's init and other function pointers on top of the decoder's API.

Once these steps are done, the new decoder is part of FFMPEG; the external matching and execution rules are handled by the base class's polymorphism.
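A hypothetical sketch of step 3; every name here (h265_decoder, H265Context, H265_init, H265_close, H265_decode, CODEC_ID_H265) is invented for illustration:

AVCodec h265_decoder = {
    .name           = "h265",
    .type           = CODEC_TYPE_VIDEO,
    .id             = CODEC_ID_H265,       /* would need a new CodecID entry */
    .priv_data_size = sizeof(H265Context), /* decoder-private context */
    .init           = H265_init,
    .close          = H265_close,
    .decode         = H265_decode,
};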

4. X264 Architecture Analysis

X264 is an open-source H.264 encoder started in 2004 by French university students. It is optimized down to the assembly level for PCs, and it drops features with a poor performance-to-cost ratio, such as slice groups and multiple reference frames, to improve encoding efficiency. It is the H.264 encoding library that FFMPEG pulls in, and it has also been ported to many DSP embedded platforms. Section 3 above already analyzed X264 inside FFMPEG by example; here we look at the X264 framework itself to deepen the picture.

Before reading the code, it is worth thinking about how to analyze a concrete encoder in object-oriented terms. The abstraction over the different entropy-coding algorithms, and over the various intra- and inter-frame estimation algorithms, could all be built as classes.

In X264, the public API and context variables we see are declared in x264.h. Among the API functions, the auxiliary ones are defined in common.c:

void x264_picture_alloc( x264_picture_t *pic, int i_csp, int i_width, int i_height );
void x264_picture_clean( x264_picture_t *pic );
int  x264_nal_encode( void *, int *, int b_annexeb, x264_nal_t *nal );

while the encoding functions are defined in encoder.c:

x264_t *x264_encoder_open( x264_param_t * );
int     x264_encoder_reconfig( x264_t *, x264_param_t * );
int     x264_encoder_headers( x264_t *, x264_nal_t **, int * );
int     x264_encoder_encode( x264_t *, x264_nal_t **, int *, x264_picture_t *, x264_picture_t * );
void    x264_encoder_close( x264_t * );

x264.c文件中,有程序的main函数,可以看作做API使用的例子,它也是通过调用X264.h中的API和上下文变量来实现实际功能。

X264最重要的记录上下文数据的结构体x264_t定义在common.h中,它包含了从线程控制变量到具体的SPSPPS、量化矩阵、cabac上下文等所有的H.264编码相关变量。其中包含如下的结构体

    x264_predict_t            predict_16x16[4+3];
    x264_predict_t            predict_8x8c[4+3];
    x264_predict8x8_t         predict_8x8[9+3];
    x264_predict_t            predict_4x4[9+3];
    x264_predict_8x8_filter_t predict_8x8_filter;
    x264_pixel_function_t     pixf;
    x264_mc_functions_t       mc;
    x264_dct_function_t       dctf;
    x264_zigzag_function_t    zigzagf;
    x264_quant_function_t     quantf;
    x264_deblock_function_t   loopf;

Tracing through, these are either a single function pointer or a structure made of function pointers, a usage much like an interface declaration in object-oriented programming. These function pointers are initialized in x264_encoder_open(). The initialization first selects different implementation code paths depending on the CPU, many of them likely in assembly, to improve runtime efficiency. Second, functions with similar roles are managed together: for example, the 4 intra16 and the 9 intra4 prediction functions are each organized in arrays of function pointers.
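A self-contained sketch of that pattern; all names are illustrative, not x264's actual symbols, and in x264 the optimized slot would be filled by an assembly routine:

#include <stdint.h>
#include <stdlib.h>

#define CPU_FLAG_MMX 0x1

typedef int (*sad_fn)(const uint8_t *a, int a_stride,
                      const uint8_t *b, int b_stride);

/* Portable reference implementation: sum of absolute differences. */
static int sad_16x16_c(const uint8_t *a, int a_stride,
                       const uint8_t *b, int b_stride)
{
    int sum = 0;
    for (int y = 0; y < 16; y++)
        for (int x = 0; x < 16; x++)
            sum += abs(a[y * a_stride + x] - b[y * b_stride + x]);
    return sum;
}

/* Stand-in for a CPU-specific (e.g. MMX assembly) implementation. */
static int sad_16x16_mmx(const uint8_t *a, int a_stride,
                         const uint8_t *b, int b_stride)
{
    return sad_16x16_c(a, a_stride, b, b_stride);
}

/* Fill the "interface" once at open time; the encoder then always
 * calls through the pointer, never a concrete function by name. */
static void init_pixel_functions(sad_fn *sad, unsigned cpu_flags)
{
    *sad = sad_16x16_c;                 /* portable fallback */
    if (cpu_flags & CPU_FLAG_MMX)
        *sad = sad_16x16_mmx;           /* CPU-specific override */
}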

x264_encoder_encode() is the main function responsible for encoding, and the x264_slice_write() it contains handles the concrete encoding below the slice layer, including intra and inter macroblock coding. Here, CABAC and CAVLC behavior is distinguished by h->param.b_cabac, running x264_macroblock_write_cabac() or x264_macroblock_write_cavlc() respectively to write the bitstream. In this part, the functions are grouped by file and essentially follow the encoding flowchart; it reads more like procedural code: once the concrete function pointers have been initialized, the program just follows the logic of the encoding process. Looking at the overall architecture, x264 uses this interface-like form to achieve loose coupling and reusability, and uses the ever-present x264_t context to achieve information encapsulation and polymorphism.

This article has sketched the code architecture of FFMPEG/X264, focusing on how object-oriented design can be implemented in C. While neither forces itself toward C++, each has its own implementation style and stays practical; both are worth borrowing from when planning a C software project.

 

[References]

[1] "Illustrating the difference between object-oriented and procedural with examples"

[2] liyuming1978, "liyuming1978's column"

[3] "Reading the FFMpeg framework code"

 

Using libavformat and libavcodec

Martin Böhme (boehme@inb.uni-luebeckREMOVETHIS.de)

February 18, 2004

Update (January 23 2009): By now, these articles are quite out of date... unfortunately, I haven't found the time to update them, but thankfully, others have jumped in. Stephen Dranger has a more recent tutorial, ryanfb of cryptosystem.org has an updated version of the code, and David Hoerl has a more recent update.

Update (July 22 2004): I discovered that the code I originally presented contained a memory leak (av_free_packet() wasn't being called). My apologies - I've updated the demo program and the code in the article to eliminate the leak.

Update (July 21 2004): There's a new prerelease of ffmpeg (0.4.9-pre1). I describe the changes to the libavformat / libavcodec API in this article.

The libavformat and libavcodec libraries that come with ffmpeg are a great way of accessing a large variety of video file formats. Unfortunately, there is no real documentation on using these libraries in your own programs (at least I couldn't find any), and the example programs aren't really very helpful either.

This situation meant that, when I used libavformat/libavcodec on a recent project, it took quite a lot of experimentation to find out how to use them. Here's what I learned - hopefully I'll be able to save others from having to go through the same trial-and-error process. There's also a small demo program that you can download. The code I'll present works with libavformat/libavcodec as included in version 0.4.8 of ffmpeg (the most recent version as I'm writing this). If you find that later versions break the code, please let me know.

In this document, I'll only cover how to read video streams from a file; audio streams work pretty much the same way, but I haven't actually used them, so I can't present any example code.

In case you're wondering why there are two libraries, libavformat and libavcodec: Many video file formats (AVI being a prime example) don't actually specify which codec(s) should be used to encode audio and video data; they merely define how an audio and a video stream (or, potentially, several audio/video streams) should be combined into a single file. This is why sometimes, when you open an AVI file, you get only sound, but no picture - because the right video codec isn't installed on your system. Thus, libavformat deals with parsing video files and separating the streams contained in them, and libavcodec deals with decoding raw audio and video streams.

Opening a Video File

First things first - let's look at how to open a video file and get at the streams contained in it. The first thing we need to do is to initialize libavformat/libavcodec:

av_register_all();

This registers all available file formats and codecs with the library so they will be used automatically when a file with the corresponding format/codec is opened. Note that you only need to call av_register_all() once, so it's probably best to do this somewhere in your startup code. If you like, it's possible to register only certain individual file formats and codecs, but there's usually no reason why you would have to do that.

Next off, opening the file:

AVFormatContext *pFormatCtx;

const char *filename="myvideo.mpg";

// Open video file

if(av_open_input_file(&pFormatCtx, filename, NULL, 0, NULL)!=0)
    handle_error(); // Couldn't open file

The last three parameters specify the file format, buffer size and format parameters; by simply specifying NULL or 0 we ask libavformat to auto-detect the format and use a default buffer size. Replace handle_error() with appropriate error handling code for your application.

Next, we need to retrieve information about the streams contained in the file:

// Retrieve stream information

if(av_find_stream_info(pFormatCtx)<0)
    handle_error(); // Couldn't find stream information

This fills the streams field of the AVFormatContext with valid information. As a debugging aid, we'll dump this information onto standard error, but of course you don't have to do this in a production application:

dump_format(pFormatCtx, 0, filename, false);

As mentioned in the introduction, we'll handle only video streams, not audio streams. To make things nice and easy, we simply use the first video stream we find:

int i, videoStream;
AVCodecContext *pCodecCtx;

// Find the first video stream
videoStream=-1;
for(i=0; i<pFormatCtx->nb_streams; i++)
    if(pFormatCtx->streams[i]->codec.codec_type==CODEC_TYPE_VIDEO)
    {
        videoStream=i;
        break;
    }
if(videoStream==-1)
    handle_error(); // Didn't find a video stream

// Get a pointer to the codec context for the video stream
pCodecCtx=&pFormatCtx->streams[videoStream]->codec;

OK, so now we've got a pointer to the so-called codec context for our video stream, but we still have to find the actual codec and open it:

AVCodec *pCodec;

// Find the decoder for the video stream
pCodec=avcodec_find_decoder(pCodecCtx->codec_id);
if(pCodec==NULL)
    handle_error(); // Codec not found

// Inform the codec that we can handle truncated bitstreams -- i.e.,
// bitstreams where frame boundaries can fall in the middle of packets
if(pCodec->capabilities & CODEC_CAP_TRUNCATED)
    pCodecCtx->flags|=CODEC_FLAG_TRUNCATED;

// Open codec
if(avcodec_open(pCodecCtx, pCodec)<0)
    handle_error(); // Could not open codec

(So what's up with those "truncated bitstreams"? Well, as we'll see in a moment, the data in a video stream is split up into packets. Since the amount of data per video frame can vary, the boundary between two video frames need not coincide with a packet boundary. Here, we're telling the codec that we can handle this situation.)

One important piece of information that is stored in the AVCodecContext structure is the frame rate of the video. To allow for non-integer frame rates (like NTSC's 29.97 fps), the rate is stored as a fraction, with the numerator in pCodecCtx->frame_rate and the denominator in pCodecCtx->frame_rate_base. While testing the library with different video files, I noticed that some codecs (notably ASF) seem to fill these fields incorrectly (frame_rate_base contains 1 instead of 1000). The following hack fixes this:

// Hack to correct wrong frame rates that seem to be generated by some
// codecs
if(pCodecCtx->frame_rate>1000 && pCodecCtx->frame_rate_base==1)
    pCodecCtx->frame_rate_base=1000;

Note that it shouldn't be a problem to leave this fix in place even if the bug is corrected some day - it's unlikely that a video would have a frame rate of more than 1000 fps.

One more thing left to do: Allocate a video frame to store the decoded images in:

AVFrame *pFrame;

pFrame=avcodec_alloc_frame();

That's it! Now let's start decoding some video.

Decoding Video Frames

As I've already mentioned, a video file can contain several audio and video streams, and each of those streams is split up into packets of a particular size. Our job is to read these packets one by one using libavformat, filter out all those that aren't part of the video stream we're interested in, and hand them on to libavcodec for decoding. In doing this, we'll have to take care of the fact that the boundary between two frames can occur in the middle of a packet.

Sound complicated? Luckily, we can encapsulate this whole process in a routine that simply returns the next video frame:

bool GetNextFrame(AVFormatContext *pFormatCtx, AVCodecContext *pCodecCtx,
                  int videoStream, AVFrame *pFrame)
{
    static AVPacket packet;
    static int bytesRemaining=0;
    static uint8_t *rawData;
    static bool fFirstTime=true;
    int bytesDecoded;
    int frameFinished;

    // First time we're called, set packet.data to NULL to indicate it
    // doesn't have to be freed
    if(fFirstTime)
    {
        fFirstTime=false;
        packet.data=NULL;
    }

    // Decode packets until we have decoded a complete frame
    while(true)
    {
        // Work on the current packet until we have decoded all of it
        while(bytesRemaining > 0)
        {
            // Decode the next chunk of data
            bytesDecoded=avcodec_decode_video(pCodecCtx, pFrame,
                &frameFinished, rawData, bytesRemaining);

            // Was there an error?
            if(bytesDecoded < 0)
            {
                fprintf(stderr, "Error while decoding frame\n");
                return false;
            }

            bytesRemaining-=bytesDecoded;
            rawData+=bytesDecoded;

            // Did we finish the current frame? Then we can return
            if(frameFinished)
                return true;
        }

        // Read the next packet, skipping all packets that aren't for this
        // stream
        do
        {
            // Free old packet
            if(packet.data!=NULL)
                av_free_packet(&packet);

            // Read new packet
            if(av_read_packet(pFormatCtx, &packet)<0)
                goto loop_exit;
        } while(packet.stream_index!=videoStream);

        bytesRemaining=packet.size;
        rawData=packet.data;
    }

loop_exit:
    // Decode the rest of the last frame
    bytesDecoded=avcodec_decode_video(pCodecCtx, pFrame, &frameFinished,
        rawData, bytesRemaining);

    // Free last packet
    if(packet.data!=NULL)
        av_free_packet(&packet);

    return frameFinished!=0;
}

Now, all we have to do is sit in a loop, calling GetNextFrame() until it returns false. Just one more thing to take care of: Most codecs return images in YUV 420 format (one luminance and two chrominance channels, with the chrominance channels sampled at half the spatial resolution of the luminance channel). Depending on what you want to do with the video data, you may want to convert this to RGB. (Note, though, that this is not necessary if all you want to do is display the video data; take a look at the X11 Xvideo extension, which does YUV-to-RGB and scaling in hardware.) Fortunately, libavcodec provides a conversion routine called img_convert, which does conversion between YUV and RGB as well as a variety of other image formats. The loop that decodes the video thus becomes:

while(GetNextFrame(pFormatCtx, pCodecCtx, videoStream, pFrame))
{
    img_convert((AVPicture *)pFrameRGB, PIX_FMT_RGB24, (AVPicture *)pFrame,
        pCodecCtx->pix_fmt, pCodecCtx->width, pCodecCtx->height);

    // Process the video frame (save to disk etc.)
    DoSomethingWithTheImage(pFrameRGB);
}

The RGB image pFrameRGB (of type AVFrame *) is allocated like this:

AVFrame *pFrameRGB;
int numBytes;
uint8_t *buffer;

// Allocate an AVFrame structure
pFrameRGB=avcodec_alloc_frame();
if(pFrameRGB==NULL)
    handle_error();

// Determine required buffer size and allocate buffer
numBytes=avpicture_get_size(PIX_FMT_RGB24, pCodecCtx->width,
    pCodecCtx->height);
buffer=new uint8_t[numBytes];

// Assign appropriate parts of buffer to image planes in pFrameRGB
avpicture_fill((AVPicture *)pFrameRGB, buffer, PIX_FMT_RGB24,
    pCodecCtx->width, pCodecCtx->height);

Cleaning up

OK, we've read and processed our video, now all that's left for us to do is clean up after ourselves:

// Free the RGB image

delete [] buffer;

av_free(pFrameRGB);

// Free the YUV frame

av_free(pFrame);

// Close the codec

avcodec_close(pCodecCtx);

// Close the video file

av_close_input_file(pFormatCtx);

Done!

Sample Code

A sample app that wraps all of this code up in compilable form is here. If you have any additional comments, please contact me at boehme@inb.uni-luebeckREMOVETHIS.de. Standard disclaimer: I assume no liability for the correct functioning of the code and techniques presented in this article.

