Receiving H.264 with Live555 and Decoding to YUV420 with FFmpeg


Reposted from: http://www.tuicool.com/articles/rmQ732

Overview:

This article describes a common and mature multimedia decoding scheme: live555 serves as the streaming-media source, establishing an RTSP session to request an H.264 data stream; on the back end, FFmpeg decodes the H.264 stream and saves it in YUV420 format.

The scheme is mature and highly practical, but there is little material about it online, which makes it hard for beginners to get started. This article covers the key implementation steps and the places where mistakes are easy to make, in the hope of offering some pointers to developers working in this area. The development environment used here is Ubuntu 12.04. Please credit the source when reposting: CSDN -- 固本培元.

About Live555:

Live555 is a cross-platform, open-source C++ project that provides a streaming-media solution. It implements standard streaming protocols such as RTP/RTCP, RTSP, and SIP, and supports streaming, receiving, and processing audio/video data in many encoding formats, including MPEG, H.263+, DV, and JPEG video as well as several audio codecs.

About FFmpeg:

FFmpeg is a free, open-source, cross-platform audio/video streaming solution, licensed under the LGPL or GPL (depending on which components you choose). It provides a complete toolchain for recording, converting, and streaming audio and video, and includes the highly capable audio/video codec library libavcodec; to guarantee portability and codec quality, many of the codecs in libavcodec were developed from scratch.

About the approach:

Some may ask: FFmpeg also supports RTSP and RTP, so why use live555 to run the RTSP session and transport the data? The reasons are as follows.

FFmpeg's RTSP/RTP support is weaker than live555's. An obvious example is user authentication: live555 exposes a very simple interface for it, whereas with FFmpeg you would have to modify ffserver to handle the RTSP request. In addition, FFmpeg is written in C style with very long single files; modifying it is feasible but time-consuming. I don't recommend it: when a shortcut exists, take it and save the effort.

Getting started:

Without further ado, let's begin.

First, the src directory of the project is structured as follows (the original screenshot of the directory tree is not reproduced here):

The directories cover H.264 decoding, the live555 client, SDL, and the UI. This article covers the h264 and live555client parts.

The Live555 client

Building live555 produces a number of example programs, among them a client example that is easy to adapt. This article modifies the testRTSPClient.cpp example.

The live555 documentation describes it: http://www.live555.com/liveMedia/#testProgs

RTSP client: testRTSPClient is a command-line program that shows you how to open and receive media streams that are specified by a RTSP URL - i.e., a URL that begins with rtsp://. In this demonstration application, nothing is done with the received audio/video data. You could, however, use and adapt this code in your own application to (for example) decode and play the received data.

This example is dedicated to receiving RTSP session data streams and is very convenient to modify.

When the received stream is to be decoded by FFmpeg, the SPS and PPS NAL units must be supplied to the decoder first, and if you want to save the H.264 stream to a file you must also insert start codes. (The live555 FAQ explains:)

I have successfully used the "testRTSPClient" demo application to receive a RTSP/RTP stream. Using this application code as a model, how can I decode the received video (and/or audio) data?

The "testRTSPClient" demo application receives each (video and/or audio) frame into a memory buffer, but does not do anything with the frame data. You can, however, use this code as a model for a 'media player' application that decodes and renders these frames. Note, in particular, the "DummySink" class that the "testRTSPClient" demo application uses - and the (non-static) "DummySink::afterGettingFrame()" function. When this function is called, a complete 'frame' (for H.264, this will be a "NAL unit") will have already been delivered into "fReceiveBuffer". Note that our "DummySink" implementation doesn't actually do anything with this data; that's why it's called a 'dummy' sink.

If you want to decode (or otherwise process) these frames, you would replace "DummySink" with your own "MediaSink" subclass. Its "afterGettingFrame()" function would pass the data (at "fReceiveBuffer", of length "frameSize") to a decoder. (A decoder would also use the "presentationTime" timestamp to properly time the rendering of each frame, and to synchronize audio and video.)

If you are receiving H.264 video data, there is one more thing that you have to do before you start feeding frames to your decoder. H.264 streams have out-of-band configuration information ("SPS" and "PPS" NAL units) that you may need to feed to the decoder to initialize it. To get this information, call "MediaSubsession::fmtp_spropparametersets()" (on the video 'subsession' object). This will give you a (ASCII) character string. You can then pass this to "parseSPropParameterSets()" (defined in the file "include/H264VideoRTPSource.hh"), to generate binary NAL units for your decoder.

Saving an H.264 file from live555:

live555 strips the start codes when transporting an H.264 stream; if you want to store the H.264 bitstream and play it back with VLC, just re-insert the start codes.

Start code: 0x00 0x00 0x00 0x01

(Note: the 0x01 byte goes at the highest address.)
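Re-inserting the start code is only a couple of memcpy calls. The sketch below is a minimal illustration (the function and buffer names are my own, not from the original code): it copies the 4-byte Annex-B start code in front of a NAL unit so the result can be appended to a .h264 file.

```c
#include <string.h>

/* Annex-B start code; the 0x01 byte sits at the highest address. */
static const unsigned char start_code[4] = {0x00, 0x00, 0x00, 0x01};

/* Copy start code + NAL unit into `out`; returns the total length.
 * `out` must have room for nal_len + 4 bytes. */
size_t prepend_start_code(unsigned char *out,
                          const unsigned char *nal, size_t nal_len)
{
    memcpy(out, start_code, 4);
    memcpy(out + 4, nal, nal_len);
    return nal_len + 4;
}
```

Calling this on every NAL unit received in afterGettingFrame() and appending the result to a file yields a byte stream that VLC can play.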

Decoding the H.264 stream with FFmpeg:

The official documentation and most online material hint at the approach but do not explain it in enough detail.

As the live555 FAQ quoted above says, you call fmtp_spropparametersets() to obtain the base64-encoded SPS and PPS, then pass the resulting string to parseSPropParameterSets() to recover the binary NAL units. Concretely, the SPS and PPS needed to initialize FFmpeg's H.264 decoder are obtained as follows:

unsigned int Num = 0;
unsigned int &SPropRecords = Num;
// parseSPropParameterSets() allocates and returns an array of SPropRecord
// (typically [0] = SPS, [1] = PPS); there is no need to new one beforehand.
SPropRecord *p_record =
    parseSPropParameterSets(fSubsession.fmtp_spropparametersets(), SPropRecords);
            

Once the SPS and PPS are obtained, arrange them as follows:

start code, SPS, start code, PPS. Feed this sequence to the FFmpeg decoder before the first frame and it will decode normally.
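The layout above can be assembled into a single buffer. The sketch below (function name my own) builds "start code + SPS + start code + PPS" on the heap; the result is what gets handed to the decoder, e.g. via the AVCodecContext extradata fields or a helper like the Set_CONTEXT_EXTRA used later in this article.

```c
#include <string.h>
#include <stdlib.h>

/* Build "startcode SPS startcode PPS" in one heap buffer.
 * On success returns the buffer and stores its length in *out_len;
 * the caller owns (and must free) the returned memory. */
unsigned char *build_sps_pps_extradata(const unsigned char *sps, size_t sps_len,
                                       const unsigned char *pps, size_t pps_len,
                                       size_t *out_len)
{
    static const unsigned char sc[4] = {0x00, 0x00, 0x00, 0x01};
    size_t total = 4 + sps_len + 4 + pps_len;
    unsigned char *buf = malloc(total);
    if (!buf) return NULL;

    unsigned char *p = buf;
    memcpy(p, sc, 4);        p += 4;       /* start code before SPS */
    memcpy(p, sps, sps_len); p += sps_len; /* SPS NAL unit */
    memcpy(p, sc, 4);        p += 4;       /* start code before PPS */
    memcpy(p, pps, pps_len);               /* PPS NAL unit */

    *out_len = total;
    return buf;
}
```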

Contact: leoluopy@gmail.com

Sample code:

FFmpeg decoder initialization

void InitH264DecodeEnv()
{
    av_register_all();
    av_init_packet(&packet);

    /* find the H.264 video decoder */
    if (!(videoCodec = avcodec_find_decoder(CODEC_ID_H264))) {
        printf("codec not found!");
        return;
    }
    context = avcodec_alloc_context();
    picture = avcodec_alloc_frame(); /* allocate video frame */

    if (avcodec_open(context, videoCodec) < 0) {
        fprintf(stderr, "could not open codec\n");
        exit(1);
    }
}

Decoding the network H.264 stream to YUV420 (YV12) with FFmpeg

int h264_decode_from_rtsp_and_save_yuv_serial_file(char *inputbuf, int inputbuf_size,
        const char *outfile,
        const char *sps, const int sps_len,
        const char *pps, const int pps_len)
{
    char Name[100];
    memset(Name, 0, 100);

    Set_CONTEXT_EXTRA(context, sps, sps_len, pps, pps_len);

    packet.data = inputbuf;
    packet.size = inputbuf_size;

    while (packet.size > 0) {
        char outproperties[200];

        printf("##decode cost:%d\n",
               len = avcodec_decode_video2(context, picture, &got_picture, &packet));
        if (len < 0) {
            fprintf(stderr, "Error while decoding frame %d\n", frame);
            return 0;
        }
        if (got_picture) {
            printf("saving frame %3d\n", frame);
            // SaveOneYUVImage(got_picture, context, picture, outfile);
            pgm_save(picture->data[0], picture->linesize[0],
                     context->width, context->height, outfile);         /* Y */
            pgm_save(picture->data[1], picture->linesize[1],
                     context->width / 2, context->height / 2, outfile); /* U */
            pgm_save(picture->data[2], picture->linesize[2],
                     context->width / 2, context->height / 2, outfile); /* V */
            frame++;
            sprintf(outproperties, "Got width:%d height:%d frame:num:%d cost:%d pic_type:%s \n",
                    context->width, context->height, frame, len,
                    GetFrameTypeName(Name, picture->pict_type));
            Fprint_String(outproperties, "./tmp/outproperties.txt", "a+");
        } else {
            sprintf(outproperties, "NoneGot width:%d height:%d frame:num:%d cost:%d pic_type:%s \n",
                    context->width, context->height, frame, len,
                    GetFrameTypeName(Name, picture->pict_type));
            Fprint_String(outproperties, "./tmp/outproperties.txt", "a+");
            printf("this frame no picture gained \n");
        }
        packet.size -= len;
        packet.data += len;
    }
    // avcodec_close(context);
    // av_free(context);
    // av_free(picture);
    return -1;
}
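The pgm_save() helper called above is not shown in the original post; its signature is inferred from the call sites (plane pointer, linesize, width, height, filename), so treat the sketch below as a guess at what it might look like. The key detail is that linesize (the stride) may be larger than the visible width, so only width bytes of each row are written; appending the Y, U, and V planes in turn yields a planar YUV420 file.

```c
#include <stdio.h>

/* Append one image plane to `filename`, one row at a time.
 * `wrap` is the stride (linesize), which may exceed `xsize`,
 * so only `xsize` bytes of each row are written. */
void pgm_save(const unsigned char *buf, int wrap,
              int xsize, int ysize, const char *filename)
{
    FILE *f = fopen(filename, "ab"); /* append: Y, then U, then V */
    if (!f) return;
    for (int i = 0; i < ysize; i++)
        fwrite(buf + i * wrap, 1, xsize, f);
    fclose(f);
}
```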

References:

The live555 FAQ on decoding received streams (supplying SPS/PPS)

http://www.live555.com/liveMedia/faq.html#testRTSPClient-how-to-decode-data

NAL units in live555

http://blog.sina.com.cn/s/blog_82065d010100wd0t.html

The H.264 standard

http://wenku.baidu.com/view/fd07f50590c69ec3d5bb75bf.html

Source code on pudn

http://www.pudn.com/downloads290/sourcecode/windows/multimedia/detail1303846.html

Source code on CSDN

http://download.csdn.net/detail/lawishere/4382056

Data-frame reference

http://bbs.csdn.net/topics/330029802

LIVE555 playback

http://bbs.csdn.net/topics/330027295

The role and extraction of SPS/PPS in H.264

http://blog.csdn.net/sunnylgz/article/details/7680262

NAL technology in H.264

http://blog.csdn.net/ericbaner/article/details/3950810

An introduction to the SDP protocol

http://www.cnblogs.com/qingquan/archive/2011/08/02/2125585.html

RTP video-stream broadcasting

http://blog.csdn.net/Tinnal/article/details/2871734

PPS problem reference: "PPS 0 referenced" / decode_slice_header error (0001 sps, 0001 pps)

http://blog.csdn.net/cosmoslife/article/details/7557310

H.264 over RTP - Identify SPS and PPS Frames

http://stackoverflow.com/questions/9618369/h-264-over-rtp-identify-sps-and-pps-frames

Analyzing the FFmpeg source to solve FFmpeg encoding latency

http://blog.csdn.net/nogodoss/article/details/19112807

The H.264 bitstream structure

http://bbs.csdn.net/topics/330072707

Byte-stream format (Annex B) vs. RTP format

http://blog.csdn.net/qingkong8832/article/details/6669731

NALU explained

http://blog.csdn.net/d_l_u_f/article/details/7260772

Extracting H.264 NALUs from MP4 with FFmpeg

http://blog.csdn.net/gavinr/article/details/7183499
