Linux Study Notes: live555 Video Streaming

Source: Internet · Editor: 程序博客网 · Date: 2024/06/05 13:29
Notes on cross-compiling live555 for a development board

Goal: cross-compile the live555MediaServer streaming server (RTP/RTCP, RTSP, SIP).

    luther@gliethttp:~/live$ vi config.armlinux

Change the cross-compiler prefix to:

    CROSS_COMPILE=arm-linux-

For a static build, additionally add the -static option in these two places in config.armlinux:

    COMPILE_OPTS = $(INCLUDES) -I. -O2 -DSOCKLEN_T=socklen_t -DNO_SSTREAM=1 -D_LARGEFILE_SOURCE=1 -D_FILE_OFFSET_BITS=64 -static
    CONSOLE_LINK_OPTS = -static #$(LINK_OPTS)

Then generate the Makefiles and build:

    luther@gliethttp:~/live$ ./genMakefiles armlinux
    luther@gliethttp:~/live$ make -j4

The server binary ends up at mediaServer/live555MediaServer, and the testProgs directory now contains all the test programs for the server [luther.gliethttp], for example the openRTSP and playSIP protocol test programs (http://www.live555.com/openRTSP/ and http://www.live555.com/playSIP/):

    testProgs/openRTSP
    testProgs/playSIP
    testProgs/testMPEG1or2VideoStreamer
    testProgs/testMPEG2TransportStreamer
    testProgs/vobStreamer
    testProgs/testMP3Streamer
    testProgs/testMPEG4VideoToDarwin
    testProgs/testAMRAudioStreamer
    testProgs/testMPEG2TransportStreamTrickPlay
    testProgs/testMPEG1or2ProgramToTransportStream
    testProgs/testMP3Receiver
    testProgs/testMPEG4VideoStreamer
    testProgs/MPEG2TransportStreamIndexer
    testProgs/testRelay
    testProgs/testMPEG1or2Splitter
    testProgs/testDVVideoStreamer
    testProgs/testMPEG1or2VideoReceiver
    testProgs/testMPEG1or2AudioVideoToDarwin
    testProgs/testMPEG1or2AudioVideoStreamer
    testProgs/sapWatch
    testProgs/testWAVAudioStreamer
    testProgs/testOnDemandRTSPServer

Q1: On Linux, live555 cannot obtain an IP address. How to fix it?

Fix 1: Give the Linux host a fixed static IP. On Ubuntu, edit /etc/network/interfaces, for example:

    auto lo
    iface lo inet loopback
    # iface eth0 inet dhcp
    # hostname dc_00
    auto eth0
    iface eth0 inet static
    address         192.168.8.15
    netmask         255.255.255.0
    gateway         192.168.8.1
    broadcast       192.168.8.255
    dns-nameservers 202.96.128.86

Fix 2: Set a default gateway; in practice this turned out to be the most effective fix:

    route add default gw 192.168.8.1 eth0

Q2: Running ./live555MediaServer on the board as an RTSP server: VLC on a PC plays both the MP3 and the 2.264 video, but VLC on a phone only plays the MP3, not the 2.264 video. Is the phone's VLC build at fault? Also, the RTSP server sometimes dies with "Segmentation fault". The live555 server's workflow needs a closer look.
    # ./live555MediaServer
    LIVE555 Media Server
    version 0.88 (LIVE555 Streaming Media library version 2015.08.07).
    Play streams from this server using the URL
            rtsp://192.168.8.15/<filename>
    where <filename> is a file present in the current directory.
    Each file's type is inferred from its name suffix:
            ".264" => a H.264 Video Elementary Stream file
            ".265" => a H.265 Video Elementary Stream file
            ".aac" => an AAC Audio (ADTS format) file
            ".ac3" => an AC-3 Audio file
            ".amr" => an AMR Audio file
            ".dv" => a DV Video file
            ".m4e" => a MPEG-4 Video Elementary Stream file
            ".mkv" => a Matroska audio+video+(optional)subtitles file
            ".mp3" => a MPEG-1 or 2 Audio file
            ".mpg" => a MPEG-1 or 2 Program Stream (audio+video) file
            ".ogg" or ".ogv" or ".opus" => an Ogg audio and/or video file
            ".ts" => a MPEG Transport Stream file
                    (a ".tsx" index file - if present - provides server 'trick play' support)
            ".vob" => a VOB (MPEG-2 video with AC-3 audio) file
            ".wav" => a WAV Audio file
            ".webm" => a WebM audio(Vorbis)+video(VP8) file
    See http://www.live555.com/mediaServer/ for additional documentation.
    (We use port 80 for optional RTSP-over-HTTP tunneling, or for HTTP live
    streaming (for indexed Transport Stream files only).)
    Segmentation fault

Q3: To turn live555's on-demand playback into live streaming, two files were added under the liveMedia directory: WW_H264VideoServerMediaSubsession.cpp and WW_H264VideoSource.cpp. However, make does not compile them. How should the Makefile of that directory be modified?

    ./genMakefiles armlinux ---> top-level Makefile
                            ---> per-directory Makefiles (generated from Makefile.head and Makefile.tail)

Makefile.tail is fairly complex; how do you add new files to it?
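One way to get the new files compiled is to add their object files to the object list in liveMedia/Makefile.tail and regenerate the Makefiles. This is a sketch only: the WW_* names come from the question above, and the variable names below match typical live555 Makefile.tail layouts but may differ in your release, so match them against your copy.

```makefile
# Sketch: additions to liveMedia/Makefile.tail (check the variable
# names against your live555 release).

# Append the new objects to one of the lists that feeds
# LIVEMEDIA_LIB_OBJS, e.g. the MISC_OBJS list:
MISC_OBJS += WW_H264VideoSource.$(OBJ) \
             WW_H264VideoServerMediaSubsession.$(OBJ)

# No explicit compile rule should be needed: the generic suffix rule in
# the generated Makefile builds any .cpp placed in liveMedia/.
# After editing Makefile.tail, regenerate and rebuild:
#   ./genMakefiles armlinux && make
```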
For writing your own source class, study these as references:

    WAVAudioFileSource.cpp / WAVAudioFileSource.hh
    H264VideoFileServerMediaSubsession.cpp / H264VideoFileServerMediaSubsession.hh

The H.264 data path is:

    ByteStreamFileSource --> H264VideoStreamParser --> H264VideoStreamFramer --> H264FUAFragmenter --> H264VideoRTPSink

Building the live555_app test program:

    arm-linux-g++ RTSPStream.h RTSPStream.cpp main.cpp
    cp a.out /home/sean/work/nfs/cx92755/rishun_driver_bin/

Q4: Following the porting blog post, two threads run: one produces the H.264 video stream, the other is the live555 sender. VLC still shows no video. Why?

    ./write_fifo
    killall read_fifo
    killall live555MediaServer

Live video streaming based on live555

I had long wanted to do live streaming, and recently spent time reading the relevant live555 code. Two blog series are highly recommended:

    http://blog.csdn.net/nkmnkm   (Daozhang's articles; very good analysis)
    http://blog.csdn.net/gavinr   (these articles make the flow easy to follow)

Q5: The 2.264 file is 82102 bytes and fFrameSize is also 82102. Executing memcpy(fTo, m_pFrameBuffer, fFrameSize) causes a segmentation fault. Why?

    # ./rtspstream &
    # ./live555MediaServer
    using url "rtsp://192.168.8.15/live"
    ljh---before open(FIFO_NAME,O_RDONLY)
    [RTSPStream] open fifo result = [4]
    [MEDIA SERVER] open fifo result = [5]
    ljh---WW_H264VideoSource::WW_H264VideoSource(UsageEnvironment & env)
    ljh---OnDemandServerMediaSubsession::sdpLines()
    ljh---numBytesNeeded=4
    ljh---numBytesNeeded=102400
    ljh---maxInputFrameSize=102400
    ljh---WW_H264VideoSource::doGetNextFrame
    2.264---MPEGVideoStreamFramer::doGetNextFrame
    ljh--- m_hFifo=5
    ljh---PIPE_BUF=4096
    [RTSPStream] SendH264Data datalen[82102], sendsize = [82102]
    [MEDIA SERVER] GetFrameData len = [82102],fMaxSize = [150000]
    ljh---fFrameSize=82102
    ljh---WW_H264VideoSource::GetFrameData
    ljh---WW_H264VideoSource::getNextFrame
    2.264---MPEGVideoStreamFramer::doGetNextFrame
    2.264---MPEGVideoStreamFramer::doGetNextFrame
    2.264---MPEGVideoStreamFramer::doGetNextFrame
    [MEDIA SERVER] rtsp connection closed
    ljh---before open(FIFO_NAME,O_RDONLY)
    [MEDIA SERVER] open fifo result = [5]
    ljh---WW_H264VideoSource::WW_H264VideoSource(UsageEnvironment & env)
    Segmentation fault
    # ls -l
    total 2274948
    -rwxrwxr-x    1 default  default      82102 Sep 11  2015 2.264
    -rwxr-xr-x    1 default  default     919735 Sep 10  2015 App
    -rwxrwxr-x    1 default  default     430486 Sep 19  2015 VideoEncoder.elf
    -rwxrwxr-x    1 default  default       8813 Sep 20  2015 a.out

Q6: What is the difference between the output with no input stream and with an input stream? With an input stream, 2.264---MPEGVideoStreamFramer::doGetNextFrame runs and the segfault appears. The code needs to be traced to analyse this.

    // No input stream:
    [MEDIA SERVER] GetFrameData len = [0],fMaxSize = [150000]
    ljh---fFrameSize=0
    ljh---numBytesNeeded=4
    ljh---numBytesNeeded=102400
    ljh---maxInputFrameSize=102400
    ljh---WW_H264VideoSource::doGetNextFrame
    ljh---WW_H264VideoSource::GetFrameData
    ljh---WW_H264VideoSource::getNextFrame
    ljh--- m_hFifo=5
    ljh---PIPE_BUF=4096

    // With an input stream: the log is identical to the one shown in Q5
    // above, again ending in Segmentation fault.

Flow analysis:

    void MultiFramedRTPSink::buildAndSendPacket(Boolean isFirstPacket)
      ---> packFrame();
      ---> fSource->getNextFrame(fOutBuf->curPtr(), fOutBuf->totalBytesAvailable(),
                                 afterGettingFrame, this, ourHandleClosure, this);
      ---> doGetNextFrame();
      ---> WW_H264VideoSource::doGetNextFrame is executed

Why? See the article "live555 source analysis: RTP packing and sending", http://blog.csdn.net/gavinr/article/details/7035799. The call chain from startPlaying:

    Boolean MediaSink::startPlaying(MediaSource& source, ...)
      ---> continuePlaying();               // the important function
      ---> Boolean H264or5VideoRTPSink::continuePlaying()
      ---> MultiFramedRTPSink::continuePlaying();
      ---> buildAndSendPacket(True);        // send the first packet
      ---> packFrame();
      ---> fSource->getNextFrame(fOutBuf->curPtr(), fOutBuf->totalBytesAvailable(),
                                 afterGettingFrame, this, ourHandleClosure, this);
           // two callbacks: afterGettingFrame and ourHandleClosure
      ---> sink->afterGettingFrame1(numBytesRead, numTruncatedBytes, ...)
      ---> sendPacketIfNecessary();         // the packet is ready to be sent now
      ---> if (!fRTPInterface.sendPacket(fOutBuf->packet(), fOutBuf->curPacketSize())) { ... }
      ---> nextTask() = envir().taskScheduler().scheduleDelayedTask(uSecondsToGo, (TaskFunc*)sendNext, this);

// The live555 main program framework

Actually, building such a server yourself is not hard: the work is to parse the H.264 byte stream, split out the individual NALUs, then pack and send them via RTP. This can be done entirely without live555: use jrtplib for the RTP library and write the RTSP part yourself; fewer than 3,000 lines of code will do. live555 reads the H.264 byte stream from a file, whereas live surveillance captures the byte stream on the spot; that is the only difference.

A multi-stream media server framework based on live555

The overall framework for a streaming server serving multiple video streams, with everything created statically at startup:

1. Create an RTSPServer.
2. For each video stream, create a ServerMediaSession and store it in a hash_map keyed by a unique identifier.
3. Depending on the type of each video stream, create an OnDemandMediaSubsession for it.
4. Add each OnDemandMediaSubsession to the ServerMediaSession created in step 2.
5. Add each ServerMediaSession from step 2 to the RTSPServer.
6. Start doEventLoop().

Q8: How to build an RTSP streaming server using live555?
A8:
1. In testOnDemandRTSPServer.cpp, set Boolean reuseFirstSource = true; // reuse the first source
2. Cross-compile live555 (see the online documentation for details).
3. mkfifo test.264 — create a named pipe; the camera-capture/hardware-encode process communicates with the live555 process through it.
4. ./VideoEncoder.elf test.264 &   // capture camera data and hardware-encode it
5. ./testOnDemandRTSPServer       // run the RTSP test program

----------------------------------------------------------------------

Accessing a FIFO file

There are likewise two ways to access a FIFO file. From the command line, first read the FIFO just created with cat:

    cat < /tmp/my_fifo

cat will block until the terminal is closed or data is sent into the FIFO. Then try writing data into the FIFO (run this in another terminal):

    echo "FIFO test" > /tmp/my_fifo

At that point cat prints the content.
///////////////////////////////////////////////////////////////////////

Notes on porting the G.722 audio protocol to live555

Q1: How should snd_pcm_readi() and snd_pcm_writei() be understood?
A1: My understanding of ALSA recording and playback: once speech from the microphone has been collected into the sound card, reading the sound data from the card into memory is the recording process, i.e. snd_pcm_readi(); writing sound data from memory back to the sound card is the playback process, i.e. snd_pcm_writei().

    ##INFO_sound.c>init_palyback>226:playback sound buff_time is:256000
    ##INFO_sound.c>init_palyback>252:pb=>cs:256,bs:4096
    ##INFO_sound.c>init_palyback>278:PlayBack Params=>pb_chunk_size:256,pb_bit_per_frame:32
    ##INFO_sound.c>init_record>364:record sound buff_time is:256000
    ##INFO_sound.c>init_record>393:Record Params=>cap_chunk_size:512,cap_bit_per_frame:32
    ##INFO_sound.c>init_sound>147:init_sound init finish!
    ##INFO_main.c>main>155:Sound Init Success!

Q2: How does the member-initializer entry FileServerMediaSubsession(env, fileName, reuseFirstSource) perform initialization? Does it invoke that class's own constructor?

    MP3AudioFileServerMediaSubsession::MP3AudioFileServerMediaSubsession(
        UsageEnvironment& env,
        char const* fileName, Boolean reuseFirstSource,
        Boolean generateADUs,
        Interleaving* interleaving)
      : FileServerMediaSubsession(env, fileName, reuseFirstSource),
        fGenerateADUs(generateADUs), fInterleaving(interleaving), fFileDuration(0.0) {
    }

    class FileServerMediaSubsession: public OnDemandServerMediaSubsession {
    protected: // we're a virtual base class
      FileServerMediaSubsession(UsageEnvironment& env, char const* fileName,
                                Boolean reuseFirstSource);
      virtual ~FileServerMediaSubsession();

    protected:
      char const* fFileName;
      u_int64_t fFileSize; // if known
    };

Q3: After adding -g -DDEBUG in the Makefile, DEBUG still seems to have no effect in the live555 sources. Why? Is writing #define DEBUG 1 directly in the source the only way?

Q4: How should the code in the main program be analysed? How to trace the code?
    // A H.264 video elementary stream:
    {
      char const* streamName = "h264ESVideoTest";
      char const* inputFileName = "test.264";
      ServerMediaSession* sms
        = ServerMediaSession::createNew(*env, streamName, streamName,
                                        descriptionString);
      sms->addSubsession(H264VideoFileServerMediaSubsession
                         ::createNew(*env, inputFileName, reuseFirstSource));
      rtspServer->addServerMediaSession(sms);
      announceStream(rtspServer, sms, streamName, inputFileName);
    }

Q5: How do you inspect a static library (.a)? The same question applies to other library files, such as shared libraries (.so) and object files (.o).
A5: On Linux you frequently need to link against *.a files. To see which files, functions, and variables a *.a contains:

    1. List the member files:        ar -t *.a
    2. List functions and variables: nm *.a