iOS Live Streaming Tech (4): Video Rendering


Video rendering on iOS is simpler than encoding and decoding. The system provides GLKView, but we can also implement rendering ourselves. Here I'll share some experience from my own implementation.

The custom render frame:

#import <UIKit/UIKit.h>

@class AVCaptureVideoPreviewLayer;
class I420VideoFrame;

@interface VideoRenderFrame : UIView

@property (nonatomic, strong) AVCaptureVideoPreviewLayer *prevLayer;

- (id)initWithParent:(UIView*)parent Preview:(AVCaptureVideoPreviewLayer*)prev;
- (void)setScalingType:(int)scalingType;
- (void)pause;
- (void)resume;
- (void)rawFrameComes:(I420VideoFrame*)raw;

@end

Video rendering splits into local and remote cases. Let's start with local rendering.

iOS provides AVCaptureVideoPreviewLayer for local video preview; we only need to create it when setting up capture.

prelayer = [AVCaptureVideoPreviewLayer layerWithSession:_captureSession];
renderframe = [[VideoRenderFrame alloc] initWithParent:parentView Preview:prelayer];

Here parentView is the view the user passes in when calling playvideo; we then add the AVCaptureVideoPreviewLayer as a sublayer of VideoRenderFrame's layer.

- (id)initWithParent:(UIView*)view Preview:(AVCaptureVideoPreviewLayer*)prev
{
    self = [super initWithFrame:view.bounds];
    if (self) {
        self.backgroundColor = [UIColor blackColor];
        self.autoresizingMask = UIViewAutoresizingFlexibleWidth | UIViewAutoresizingFlexibleHeight;
        self.clipsToBounds = YES;
        [view addSubview:self];
        [view setAutoresizesSubviews:YES];

        _lock = [[NSLock alloc] init];

        if (prev)
        {
            _prevLayer = prev;
            _prevLayer.frame = [self bounds];
            [self.layer addSublayer:_prevLayer];
        }

        NSNotificationCenter *notificationCenter = [NSNotificationCenter defaultCenter];
        [notificationCenter addObserver:self
                               selector:@selector(willResignActive)
                                   name:UIApplicationWillResignActiveNotification
                                 object:nil];
        [notificationCenter addObserver:self
                               selector:@selector(didBecomeActive)
                                   name:UIApplicationDidBecomeActiveNotification
                                 object:nil];
    }
    return self;
}
When the view's size changes, the preview layer's frame must be reassigned:

- (void)layoutSubviews
{
    [super layoutSubviews];
    if (_prevLayer != nil)
    {
        _prevLayer.frame = [self bounds];
    }
}
To change the render mode, simply change prevLayer's videoGravity value; no manual computation is needed.

dispatch_async(dispatch_get_main_queue(), ^{
    _scalingType = scaling;
    _curVideoSize = CGSizeMake(0, 0);
    if (_prevLayer)
    {
        switch (_scalingType) {
            case 0:
                _prevLayer.videoGravity = AVLayerVideoGravityResizeAspect;
                break;
            case 1:
                _prevLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
                break;
            default:
                _prevLayer.videoGravity = AVLayerVideoGravityResize;
                break;
        }
    }
});

Next, remote video rendering. This is more involved than local preview: it requires OpenGL, and when the render mode changes we have to compute the target rect ourselves.
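That rect computation can be sketched as follows (a minimal C++ sketch; `Rect` and `ComputeRenderRect` are illustrative names, not part of the engine code below, and the scalingType values mirror the 0/1/default cases used for videoGravity above):

```cpp
#include <algorithm>

struct Rect { float x, y, w, h; };

// Compute the destination rect for a (frameW x frameH) frame inside a
// (viewW x viewH) view. scalingType 0 = aspect-fit (letterbox),
// 1 = aspect-fill (crop), anything else = stretch.
Rect ComputeRenderRect(float frameW, float frameH,
                       float viewW, float viewH, int scalingType) {
  if (scalingType != 0 && scalingType != 1) {
    return {0, 0, viewW, viewH};  // stretch: cover the whole view
  }
  float scaleX = viewW / frameW;
  float scaleY = viewH / frameH;
  float scale = (scalingType == 0) ? std::min(scaleX, scaleY)   // fit
                                   : std::max(scaleX, scaleY);  // fill
  float w = frameW * scale;
  float h = frameH * scale;
  // Center the scaled frame in the view; fill mode yields negative
  // offsets, i.e. the frame is cropped by the view bounds.
  return {(viewW - w) / 2, (viewH - h) / 2, w, h};
}
```

The resulting rect is what you feed into glViewport (or into the vertex coordinates) before drawing.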

When a frame arrives, first check whether its width or height has changed; if so, reinitialize the textures, then update them:

bool OpenGles20::Render(const I420VideoFrame& frame) {
  if (texture_width_ != (GLsizei)frame.width() ||
      texture_height_ != (GLsizei)frame.height()) {
    SetupTextures(frame);
  }
  UpdateTextures(frame);
  glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_BYTE, indices_);
  return true;
}

static void InitializeTexture(int name, int id, int width, int height) {
  glActiveTexture(name);
  glBindTexture(GL_TEXTURE_2D, id);
  glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
  glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
  glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
  glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
  glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, width, height, 0,
               GL_LUMINANCE, GL_UNSIGNED_BYTE, NULL);
}

void OpenGles20::SetupTextures(const I420VideoFrame& frame) {
  const GLsizei width = frame.width();
  const GLsizei height = frame.height();
  if (!texture_ids_[0]) {
    glGenTextures(3, texture_ids_);  // Generate the Y, U and V textures.
  }
  InitializeTexture(GL_TEXTURE0, texture_ids_[0], width, height);
  InitializeTexture(GL_TEXTURE1, texture_ids_[1], width / 2, height / 2);
  InitializeTexture(GL_TEXTURE2, texture_ids_[2], width / 2, height / 2);
  texture_width_ = width;
  texture_height_ = height;
}

static void GlTexSubImage2D(GLsizei width, GLsizei height, int stride,
                            const uint8_t* plane) {
  if (stride == width) {
    // The entire plane can be uploaded in a single GL call.
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
                    GL_LUMINANCE, GL_UNSIGNED_BYTE,
                    static_cast<const GLvoid*>(plane));
  } else {
    // Since GLES2 doesn't have GL_UNPACK_ROW_LENGTH and iOS doesn't
    // have GL_EXT_unpack_subimage, we have to upload a row at a time.
    for (int row = 0; row < height; ++row) {
      glTexSubImage2D(GL_TEXTURE_2D, 0, 0, row, width, 1,
                      GL_LUMINANCE, GL_UNSIGNED_BYTE,
                      static_cast<const GLvoid*>(plane + (row * stride)));
    }
  }
}

void OpenGles20::UpdateTextures(const I420VideoFrame& frame) {
  const GLsizei width = frame.width();
  const GLsizei height = frame.height();
  glActiveTexture(GL_TEXTURE0);
  glBindTexture(GL_TEXTURE_2D, texture_ids_[0]);
  GlTexSubImage2D(width, height, frame.stride(kYPlane), frame.buffer(kYPlane));
  glActiveTexture(GL_TEXTURE1);
  glBindTexture(GL_TEXTURE_2D, texture_ids_[1]);
  GlTexSubImage2D(width / 2, height / 2,
                  frame.stride(kUPlane), frame.buffer(kUPlane));
  glActiveTexture(GL_TEXTURE2);
  glBindTexture(GL_TEXTURE_2D, texture_ids_[2]);
  GlTexSubImage2D(width / 2, height / 2,
                  frame.stride(kVPlane), frame.buffer(kVPlane));
}
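The three luminance textures uploaded above are combined into RGB in the fragment shader. The per-pixel math a typical shader applies is the video-range BT.601 conversion, sketched here on the CPU for clarity (`RGB` and `Yuv2Rgb` are illustrative names, not from the engine above):

```cpp
#include <algorithm>
#include <cstdint>

struct RGB { uint8_t r, g, b; };

static uint8_t Clamp8(float v) {
  return (uint8_t)std::min(255.0f, std::max(0.0f, v));
}

// Video-range BT.601 YUV -> RGB: the same arithmetic the fragment
// shader performs on the sampled Y, U and V texel values.
RGB Yuv2Rgb(uint8_t y, uint8_t u, uint8_t v) {
  float yf = 1.164f * (y - 16);   // expand luma from [16,235] to [0,255]
  float uf = u - 128.0f;          // center chroma around zero
  float vf = v - 128.0f;
  return {Clamp8(yf + 1.596f * vf),
          Clamp8(yf - 0.391f * uf - 0.813f * vf),
          Clamp8(yf + 2.018f * uf)};
}
```

In the shader the same coefficients usually appear as a mat3 applied to `vec3(y, u, v)` after the offsets.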

A few points to note:
1. While rendering, watch the application's active state: pause OpenGL rendering when the app goes to the background and resume it when the app returns to the foreground. Issuing GL calls while in the background will crash the app.
2. If the render window changes size frequently, the rendered video becomes blurry. In that case, reset the glViewport dimensions whenever the window size changes:

- (void)resetViewport
{
    if (![EAGLContext setCurrentContext:_context]) {
        return;
    }
    CAEAGLLayer *eaglLayer = (CAEAGLLayer *)self.layer;
    if (_colorRenderBuffer) {
        glBindRenderbuffer(GL_RENDERBUFFER, _colorRenderBuffer);
    }
    if (_context && eaglLayer) {
        [_context renderbufferStorage:GL_RENDERBUFFER fromDrawable:eaglLayer];
    }
    GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
    if (status != GL_FRAMEBUFFER_COMPLETE) {
        AVLogInfo(@"fail to make complete frame buffer object %x", status);
    }
    glViewport(0, 0, self.drawableWidth, self.drawableHeight);
}
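For the first note, the willResignActive/didBecomeActive handlers registered in the init method can simply flip a flag that the render path checks before every draw. A minimal sketch of that gate (illustrative names, not from the engine above):

```cpp
#include <atomic>

// A pause flag checked before each draw, so no GL work is issued
// while the app is in the background.
class RenderGate {
 public:
  void Pause()  { paused_ = true;  }   // call from willResignActive
  void Resume() { paused_ = false; }   // call from didBecomeActive
  // Returns true if the next frame may be rendered.
  bool ShouldRender() const { return !paused_; }
 private:
  std::atomic<bool> paused_{false};    // atomic: set and read on different threads
};
```

The render thread drops frames while `ShouldRender()` is false instead of touching the GL context.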

Please credit the original source when reposting, thanks!
Source code: https://github.com/haowei8196/VideoEngineMgr




