An ffmpeg and SDL Tutorial 01
An ffmpeg and SDL Tutorial
Tutorial 01: Making Screencaps
Code: tutorial01.c

Overview
Movie files have a few basic components. First, the file itself is called a container, and the type of container determines where the information in the file goes. Examples of containers are AVI and Quicktime. Next, you have a bunch of streams; for example, you usually have an audio stream and a video stream. (A "stream" is just a fancy word for "a succession of data elements made available over time".) The data elements in a stream are called frames. Each stream is encoded by a different kind of codec. The codec defines how the actual data is COded and DECoded - hence the name CODEC. Examples of codecs are DivX and MP3. Packets are then read from the stream. Packets are pieces of data that can contain bits of data that are decoded into raw frames that we can finally manipulate for our application. For our purposes, each packet contains complete frames, or multiple frames in the case of audio.
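In libav* terms, these concepts map directly onto the structs used throughout this tutorial: the opened container is described by an AVFormatContext, each stream by an AVStream inside it, compressed packets by AVPacket, decoded frames by AVFrame, and a codec and its parameters by AVCodec and AVCodecContext.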
At its very basic level, dealing with video and audio streams is very easy:
10 OPEN video_stream FROM video.avi
20 READ packet FROM video_stream INTO frame
30 IF frame NOT COMPLETE GOTO 20
40 DO SOMETHING WITH frame
50 GOTO 20

Handling multimedia with ffmpeg is pretty much as simple as this program, although some programs might have a very complex "DO SOMETHING" step. So in this tutorial, we're going to open a file, read from the video stream inside it, and our DO SOMETHING is going to be writing the frame to a PPM file.
Opening the File
First, let's see how we open a file in the first place. With ffmpeg, you have to first initialize the library.
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
#include <libswscale/swscale.h>
...
int main(int argc, char *argv[]) {
  av_register_all();

This registers all available file formats and codecs with the library so they will be used automatically when a file with the corresponding format/codec is opened. Note that you only need to call av_register_all() once, so we do it here in main(). If you like, it's possible to register only certain individual file formats and codecs, but there's usually no reason why you would have to do that.
Now we can actually open the file:
AVFormatContext *pFormatCtx = NULL;

// Open video file
if(avformat_open_input(&pFormatCtx, argv[1], NULL, NULL)!=0)
  return -1; // Couldn't open file

We get our filename from the first argument. This function reads the file header and stores information about the file format in the AVFormatContext structure we have given it. The last two arguments are used to specify the file format and format options, but by setting them to NULL, libavformat will auto-detect these.
This function only looks at the header, so next we need to check out the stream information in the file:
// Retrieve stream information
if(avformat_find_stream_info(pFormatCtx, NULL)<0)
  return -1; // Couldn't find stream information

This function populates pFormatCtx->streams with the proper information. We introduce a handy debugging function to show us what's inside:
// Dump information about file onto standard error
av_dump_format(pFormatCtx, 0, argv[1], 0);

Now pFormatCtx->streams is just an array of pointers, of size pFormatCtx->nb_streams, so let's walk through it until we find a video stream.
int i, videoStream;
AVCodecContext *pCodecCtxOrig = NULL;
AVCodecContext *pCodecCtx = NULL;

// Find the first video stream
videoStream=-1;
for(i=0; i<pFormatCtx->nb_streams; i++)
  if(pFormatCtx->streams[i]->codec->codec_type==AVMEDIA_TYPE_VIDEO) {
    videoStream=i;
    break;
  }
if(videoStream==-1)
  return -1; // Didn't find a video stream

// Get a pointer to the codec context for the video stream
pCodecCtxOrig=pFormatCtx->streams[videoStream]->codec;

The stream's information about the codec is in what we call the "codec context." This contains all the information about the codec that the stream is using, and now we have a pointer to it. But we still have to find the actual codec and open it:
AVCodec *pCodec = NULL;

// Find the decoder for the video stream
pCodec=avcodec_find_decoder(pCodecCtxOrig->codec_id);
if(pCodec==NULL) {
  fprintf(stderr, "Unsupported codec!\n");
  return -1; // Codec not found
}

// Copy context
pCodecCtx = avcodec_alloc_context3(pCodec);
if(avcodec_copy_context(pCodecCtx, pCodecCtxOrig) != 0) {
  fprintf(stderr, "Couldn't copy codec context");
  return -1; // Error copying codec context
}

// Open codec
if(avcodec_open2(pCodecCtx, pCodec, NULL)<0)
  return -1; // Could not open codec

Note that we must not use the AVCodecContext from the video stream directly! So we have to use avcodec_copy_context() to copy the context to a new location (after allocating memory for it, of course).
Storing the Data
Now we need a place to actually store the frame:
AVFrame *pFrame = NULL;
AVFrame *pFrameRGB = NULL;

// Allocate video frame
pFrame=av_frame_alloc();

Since we're planning to output PPM files, which are stored in 24-bit RGB, we're going to have to convert our frame from its native format to RGB. ffmpeg will do these conversions for us. For most projects (including ours) we're going to want to convert our initial frame to a specific format. Let's allocate a frame for the converted frame now.
// Allocate an AVFrame structure
pFrameRGB=av_frame_alloc();
if(pFrameRGB==NULL)
  return -1;

Even though we've allocated the frame, we still need a place to put the raw data when we convert it. We use avpicture_get_size to get the size we need, and allocate the space manually:
uint8_t *buffer = NULL;
int numBytes;

// Determine required buffer size and allocate buffer
numBytes=avpicture_get_size(PIX_FMT_RGB24, pCodecCtx->width, pCodecCtx->height);
buffer=(uint8_t *)av_malloc(numBytes*sizeof(uint8_t));

av_malloc is ffmpeg's malloc that is just a simple wrapper around malloc that makes sure the memory addresses are aligned and such. It will not protect you from memory leaks, double freeing, or other malloc problems.
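As a quick sanity check, for a hypothetical 640x480 video this works out to 640 * 480 * 3 = 921,600 bytes, since RGB24 stores three bytes (one each for red, green, and blue) per pixel.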
Now we use avpicture_fill to associate the frame with our newly allocated buffer. About the AVPicture cast: the AVPicture struct is a subset of the AVFrame struct - the beginning of the AVFrame struct is identical to the AVPicture struct.
// Assign appropriate parts of buffer to image planes in pFrameRGB
// Note that pFrameRGB is an AVFrame, but AVFrame is a superset
// of AVPicture
avpicture_fill((AVPicture *)pFrameRGB, buffer, PIX_FMT_RGB24,
               pCodecCtx->width, pCodecCtx->height);

Finally! Now we're ready to read from the stream!
Reading the Data
What we're going to do is read through the entire video stream by reading in the packet, decoding it into our frame, and once our frame is complete, we will convert and save it.
struct SwsContext *sws_ctx = NULL;
int frameFinished;
AVPacket packet;

// initialize SWS context for software scaling
sws_ctx = sws_getContext(pCodecCtx->width,
                         pCodecCtx->height,
                         pCodecCtx->pix_fmt,
                         pCodecCtx->width,
                         pCodecCtx->height,
                         PIX_FMT_RGB24,
                         SWS_BILINEAR,
                         NULL,
                         NULL,
                         NULL
                         );

i=0;
while(av_read_frame(pFormatCtx, &packet)>=0) {
  // Is this a packet from the video stream?
  if(packet.stream_index==videoStream) {
    // Decode video frame
    avcodec_decode_video2(pCodecCtx, pFrame, &frameFinished, &packet);

    // Did we get a video frame?
    if(frameFinished) {
      // Convert the image from its native format to RGB
      sws_scale(sws_ctx, (uint8_t const * const *)pFrame->data,
                pFrame->linesize, 0, pCodecCtx->height,
                pFrameRGB->data, pFrameRGB->linesize);

      // Save the frame to disk
      if(++i<=5)
        SaveFrame(pFrameRGB, pCodecCtx->width, pCodecCtx->height, i);
    }
  }

  // Free the packet that was allocated by av_read_frame
  av_free_packet(&packet);
}

A note on packets
Technically a packet can contain partial frames or other bits of data, but ffmpeg's parser ensures that the packets we get contain either complete or multiple frames.
The process, again, is simple: av_read_frame() reads in a packet and stores it in the AVPacket struct. Note that we've only allocated the packet structure - ffmpeg allocates the internal data for us, which is pointed to by packet.data. This is freed by av_free_packet() later. avcodec_decode_video2() converts the packet to a frame for us. However, we might not have all the information we need for a frame after decoding a packet, so avcodec_decode_video2() sets frameFinished for us when we have the next complete frame. Finally, we use sws_scale() to convert from the native format (pCodecCtx->pix_fmt) to RGB. Remember that you can cast an AVFrame pointer to an AVPicture pointer. Finally, we pass the frame and the height and width information to our SaveFrame function.

Now all we need to do is make the SaveFrame function to write the RGB information to a file in PPM format. We're going to be kind of sketchy on the PPM format itself; trust us, it works.
void SaveFrame(AVFrame *pFrame, int width, int height, int iFrame) {
  FILE *pFile;
  char szFilename[32];
  int  y;

  // Open file
  sprintf(szFilename, "frame%d.ppm", iFrame);
  pFile=fopen(szFilename, "wb");
  if(pFile==NULL)
    return;

  // Write header
  fprintf(pFile, "P6\n%d %d\n255\n", width, height);

  // Write pixel data
  for(y=0; y<height; y++)
    fwrite(pFrame->data[0]+y*pFrame->linesize[0], 1, width*3, pFile);

  // Close file
  fclose(pFile);
}

We do a bit of standard file opening, etc., and then write the RGB data. We write the file one line at a time. A PPM file is simply a file that has RGB information laid out in a long string. If you know HTML colors, it would be like laying out the color of each pixel end to end, so that #ff0000#ff0000.... would be a red screen. (It's stored in binary and without the separator, but you get the idea.) The header indicates how wide and tall the image is, and the maximum value of the RGB components.
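For instance, a hypothetical 640x480 frame would be saved as the short ASCII header below, followed immediately by the 640 * 480 * 3 = 921,600 bytes of binary pixel data (R, G, B for the top-left pixel, then the rest of the first row, and so on):

P6
640 480
255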
Now, going back to our main() function. Once we're done reading from the video stream, we just have to clean everything up:
// Free the RGB image
av_free(buffer);
av_free(pFrameRGB);

// Free the YUV frame
av_free(pFrame);

// Close the codecs
avcodec_close(pCodecCtx);
avcodec_close(pCodecCtxOrig);

// Close the video file
avformat_close_input(&pFormatCtx);

return 0;

You'll notice we use av_free for the memory we allocated with av_frame_alloc and av_malloc.
That's it for the code! Now, if you're on Linux or a similar platform, you'll run:
gcc -o tutorial01 tutorial01.c -lavutil -lavformat -lavcodec -lswscale -lz -lm

If you have an older version of ffmpeg, you may need to drop -lavutil:
gcc -o tutorial01 tutorial01.c -lavformat -lavcodec -lswscale -lz -lm

Most image programs should be able to open PPM files. Test it on some movie files.
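Once it compiles, a quick test run might look something like this (myvideo.mpg is a hypothetical input file; display is ImageMagick's viewer, but any PPM-capable image viewer will do):

./tutorial01 myvideo.mpg
display frame1.ppm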
>> Tutorial 2: Outputting to the Screen
Code examples are based off of FFplay, Copyright (c) 2003 Fabrice Bellard, and a tutorial by Martin Bohme.