Android Audio Capture, FLTP Resampling, and AAC Encoding for Streaming


Compared with video, audio encoding is much simpler: the captured PCM source data is encoded to AAC.
MediaPlus uses the libfdk-aac encoder through FFmpeg. One point deserves attention here: FFmpeg's built-in AAC encoder no longer accepts AV_SAMPLE_FMT_S16 PCM input, so if you use it, you must first resample the audio to AV_SAMPLE_FMT_FLTP; otherwise AAC encoding fails.
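You can verify this against your own FFmpeg build, since an encoder advertises the input formats it accepts. A minimal sketch (the AVCodec::sample_fmts query shown here applies to FFmpeg versions of this article's era):

```cpp
extern "C" {
#include <libavcodec/avcodec.h>
#include <libavutil/samplefmt.h>
}
#include <cstdio>

// Print the input sample formats the native AAC encoder accepts;
// stock FFmpeg builds list only fltp here.
void PrintAacInputFormats() {
    const AVCodec *aac = avcodec_find_encoder(AV_CODEC_ID_AAC);
    for (const enum AVSampleFormat *p = aac ? aac->sample_fmts : NULL;
         p && *p != AV_SAMPLE_FMT_NONE; ++p) {
        printf("supported input format: %s\n", av_get_sample_fmt_name(*p));
    }
}
```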

Next, let's look at how MediaPlus captures audio and encodes it to AAC.

The AudioThread in app.mobile.nativeapp.com.libmedia.core.streamer.RtmpPushStreamer reads the PCM data captured by AudioRecord and passes it down to the native layer:

```java
class AudioThread extends Thread {
    public volatile boolean m_bExit = false;

    @Override
    public void run() {
        super.run();
        int[] dataLength;
        byte[] audioBuffer;
        AudioCaptureInterface.GetAudioDataReturn ret;
        dataLength = new int[1];
        audioBuffer = new byte[m_aiBufferLength[0]];
        while (!m_bExit) {
            try {
                Thread.sleep(1, 10);
                if (m_bExit) {
                    break;
                }
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
            try {
                ret = mAudioCapture.GetAudioData(audioBuffer,
                        m_aiBufferLength[0], dataLength);
                if (ret == AudioCaptureInterface.GetAudioDataReturn.RET_SUCCESS) {
                    encodeAudio(audioBuffer, dataLength[0]);
                }
            } catch (Exception e) {
                e.printStackTrace();
                stopThread();
            }
        }
    }
}
```

The actual AudioRecord capture implementation lives in the avcapture.jar package. The code is straightforward, and plenty of demos covering Android audio/video capture initialization and the related APIs are available online, so it is not repeated here.

  • encodeAudio(audioBuffer, dataLength[0]); passes the audio data down to the native layer.
```java
/**
 * Captured PCM audio data
 *
 * @param audioBuffer
 * @param length
 */
public void encodeAudio(byte[] audioBuffer, int length) {
    try {
        LiveJniMediaManager.EncodeAAC(audioBuffer, length);
    } catch (Exception e) {
        e.printStackTrace();
    }
}
```
  • The JNI layer receives the PCM audio data and pushes it onto the AudioCapture synchronized queue:
```cpp
JNIEXPORT jint JNICALL
Java_app_mobile_nativeapp_com_libmedia_core_jni_LiveJniMediaManager_EncodeAAC(JNIEnv *env,
                                                                              jclass type,
                                                                              jbyteArray audioBuffer_,
                                                                              jint length) {
    if (audioCaptureInit && !isClose) {
        jbyte *audioSrc = env->GetByteArrayElements(audioBuffer_, 0);
        uint8_t *audioDstData = (uint8_t *) malloc(length);
        memcpy(audioDstData, audioSrc, length);
        OriginData *audioOriginData = new OriginData();
        audioOriginData->size = length;
        audioOriginData->data = audioDstData;
        audioCapture->PushAudioData(audioOriginData);
        env->ReleaseByteArrayElements(audioBuffer_, audioSrc, 0);
    }
    return 0;
}
```
    The code above covers what happens before streaming starts via app.mobile.nativeapp.com.libmedia.core.streamer.RtmpPushStreamer>>startPushStream(): audio capture is initialized and the captured data is handed down to the native layer.
  • Calling startPushStream resets AudioCapture::ExitCapture = false;
    only after that is data added to the audioCaputureframeQueue queue.
```java
/**
 * Start streaming
 *
 * @param pushUrl
 * @return
 */
private boolean startPushStream(String pushUrl) {
    if (nativeInt) {
        int ret = 0;
        ret = LiveJniMediaManager.StartPush(pushUrl);
        if (ret < 0) {
            Log.d("initNative", "native push failed!");
            return false;
        }
        return true;
    }
    return false;
}
```
    (The original post includes a figure here illustrating the native StartPush call path.)
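In place of that figure, here is a hedged sketch of the native StartPush entry point it illustrates. The file-scope objects follow the JNI snippet above, but the URL plumbing and exact MediaPlus code are assumptions:

```cpp
#include <jni.h>

// File-scope objects from the JNI layer shown earlier (declarations assumed).
extern AudioCapture *audioCapture;
extern RtmpStreamer *rtmpStreamer;

JNIEXPORT jint JNICALL
Java_app_mobile_nativeapp_com_libmedia_core_jni_LiveJniMediaManager_StartPush(JNIEnv *env,
                                                                              jclass type,
                                                                              jstring pushUrl_) {
    const char *pushUrl = env->GetStringUTFChars(pushUrl_, 0);
    // Re-enable queuing so AudioCapture::PushAudioData accepts frames again.
    audioCapture->ExitCapture = false;
    // Start the encode/push threads (URL handling omitted in this sketch).
    int ret = rtmpStreamer->StartPushStream();
    env->ReleaseStringUTFChars(pushUrl_, pushUrl);
    return ret;
}
```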

  • After the flag is reset, audioCaputureframeQueue.push adds the data to the queue.
```cpp
int AudioCapture::PushAudioData(OriginData *originData) {
    if (ExitCapture) {
        return 0;
    }
    originData->pts = av_gettime();
    LOG_D(DEBUG, "audio capture pts :%lld", originData->pts);
    audioCaputureframeQueue.push(originData);
    return 0;
}
```

The flow above is the same as on the video side. By the time app.mobile.nativeapp.com.libmedia.core.streamer.RtmpPushStreamer>>startPushStream() is called, data is already being added to the audio queue; the subsequent call to rtmpStreamer->StartPushStream() starts the two encoding threads (audio and video) as well as the streaming itself. The streaming code matches the video path.

```cpp
int RtmpStreamer::StartPushStream() {
    videoStreamIndex = AddStream(videoEncoder->videoCodecContext);
    audioStreamIndex = AddStream(audioEncoder->audioCodecContext);
    pthread_create(&t3, NULL, RtmpStreamer::WriteHead, this);
    pthread_join(t3, NULL);
    VideoCapture *pVideoCapture = videoEncoder->GetVideoCapture();
    AudioCapture *pAudioCapture = audioEncoder->GetAudioCapture();
    pVideoCapture->videoCaputureframeQueue.clear();
    pAudioCapture->audioCaputureframeQueue.clear();
    if (writeHeadFinish) {
        pthread_create(&t1, NULL, RtmpStreamer::PushAudioStreamTask, this);
        pthread_create(&t2, NULL, RtmpStreamer::PushVideoStreamTask, this);
    } else {
        return -1;
    }
    return 0;
}
```
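Note the t3 thread: the packet-pushing threads are only created after it sets writeHeadFinish. WriteHead itself is not shown in the post; a plausible sketch of its job, where the formatContext member name is an assumption:

```cpp
extern "C" {
#include <libavformat/avformat.h>
}

// Hypothetical sketch: write the container/stream header once, then flag
// success so StartPushStream can launch the packet threads.
// `formatContext` is an assumed member name for the muxer context.
void *RtmpStreamer::WriteHead(void *arg) {
    RtmpStreamer *streamer = (RtmpStreamer *) arg;
    int ret = avformat_write_header(streamer->formatContext, NULL);
    streamer->writeHeadFinish = (ret >= 0);
    return 0;
}
```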
  • PushAudioStreamTask pulls data from the queue, encodes it, and streams it; a sketch of this loop follows the list.
    rtmpStreamer->audioEncoder->EncodeAAC(&pAudioData); performs the AAC encoding.
    rtmpStreamer->SendFrame(pAudioData, rtmpStreamer->audioStreamIndex); sends the encoded packet (identical to the video path).
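Putting those two calls together, here is a minimal sketch of the task loop. The wait_and_pop queue API mirrors the resampling loop shown later in this article; the loop structure and exit condition are assumptions, not the exact source:

```cpp
// Sketch of the audio push task: pop captured PCM, encode to AAC, send.
void *RtmpStreamer::PushAudioStreamTask(void *arg) {
    RtmpStreamer *rtmpStreamer = (RtmpStreamer *) arg;
    AudioCapture *audioCapture = rtmpStreamer->audioEncoder->GetAudioCapture();
    while (!audioCapture->ExitCapture) {
        // Block until a captured PCM buffer is available.
        OriginData *pAudioData = *audioCapture->audioCaputureframeQueue.wait_and_pop().get();
        rtmpStreamer->audioEncoder->EncodeAAC(&pAudioData);                  // PCM -> AAC packet
        rtmpStreamer->SendFrame(pAudioData, rtmpStreamer->audioStreamIndex); // mux into RTMP
    }
    return 0;
}
```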

A note on how the encoder and its parameters are obtained and set before audio encoding:
libmedia/src/main/cpp/AudioEncoder.cpp is the core audio-encoding class, and int AudioEncoder::InitEncode() wraps the encoder initialization.

```cpp
int AudioEncoder::InitEncode() {
    std::lock_guard<std::mutex> lk(mut);
    avCodec = avcodec_find_encoder_by_name("libfdk_aac");
    int ret = 0;
    if (!avCodec) {
        LOG_D(DEBUG, "aac encoder not found!")
        return -1;
    }
    audioCodecContext = avcodec_alloc_context3(avCodec);
    if (!audioCodecContext) {
        LOG_D(DEBUG, "avcodec alloc context3 failed!");
        return -1;
    }
    audioCodecContext->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
    audioCodecContext->sample_fmt = AV_SAMPLE_FMT_S16;
    audioCodecContext->sample_rate = audioCapture->GetAudioEncodeArgs()->sampleRate;
    audioCodecContext->thread_count = 8;
    audioCodecContext->bit_rate = 50 * 1024 * 8;
    audioCodecContext->channels = audioCapture->GetAudioEncodeArgs()->channels;
    audioCodecContext->frame_size = audioCapture->GetAudioEncodeArgs()->nb_samples;
    audioCodecContext->time_base = {1, 1000000}; // audio and video must use the same time base
    audioCodecContext->channel_layout = av_get_default_channel_layout(audioCodecContext->channels);
    outputFrame = av_frame_alloc();
    outputFrame->channels = audioCodecContext->channels;
    outputFrame->channel_layout = av_get_default_channel_layout(outputFrame->channels);
    outputFrame->format = audioCodecContext->sample_fmt;
    outputFrame->nb_samples = 1024;
    ret = av_frame_get_buffer(outputFrame, 0);
    if (ret != 0) {
        LOG_D(DEBUG, "av_frame_get_buffer failed!");
        return -1;
    }
    LOG_D(DEBUG, "av_frame_get_buffer success!");
    ret = avcodec_open2(audioCodecContext, NULL, NULL);
    if (ret != 0) {
        char buf[1024] = {0};
        av_strerror(ret, buf, sizeof(buf));
        LOG_D(DEBUG, "avcodec open failed! info:%s", buf);
        return -1;
    }
    LOG_D(DEBUG, "open audio codec success!");
    LOG_D(DEBUG, "Complete init Audio Encode!")
    return 0;
}
```
  • Look up the libfdk_aac encoder by name:
    avCodec = avcodec_find_encoder_by_name("libfdk_aac");
  • Allocate and initialize the encoder context:
    audioCodecContext = avcodec_alloc_context3(avCodec);
  • Create the AVFrame and allocate the buffer that will hold the source PCM data:

```cpp
outputFrame = av_frame_alloc();
outputFrame->channels = audioCodecContext->channels; // channel count
outputFrame->channel_layout = av_get_default_channel_layout(outputFrame->channels);
outputFrame->format = audioCodecContext->sample_fmt;
outputFrame->nb_samples = 1024; // default value
ret = av_frame_get_buffer(outputFrame, 0);
if (ret != 0) {
    LOG_D(DEBUG, "av_frame_get_buffer failed!");
    return -1;
}
LOG_D(DEBUG, "av_frame_get_buffer success!");
```
  • Open the encoder:

    ret = avcodec_open2(audioCodecContext, NULL, NULL);

    All of this initialization must be completed before encoding can begin.

The int AudioEncoder::EncodeAAC method wraps the AAC encoding itself:

```cpp
int AudioEncoder::EncodeAAC(OriginData **originData) {
    int ret = 0;
    ret = avcodec_fill_audio_frame(outputFrame,
                                   audioCodecContext->channels,
                                   audioCodecContext->sample_fmt, (*originData)->data,
                                   8192, 0);
    outputFrame->pts = (*originData)->pts;
    ret = avcodec_send_frame(audioCodecContext, outputFrame);
    if (ret != 0) {
#ifdef SHOW_DEBUG_INFO
        LOG_D(DEBUG, "send frame failed!");
#endif
    }
    av_packet_unref(&audioPacket);
    ret = avcodec_receive_packet(audioCodecContext, &audioPacket);
    if (ret != 0) {
#ifdef SHOW_DEBUG_INFO
        LOG_D(DEBUG, "receive packet failed!");
#endif
    }
    (*originData)->Drop();
    (*originData)->avPacket = &audioPacket;
#ifdef SHOW_DEBUG_INFO
    LOG_D(DEBUG, "encode audio packet size:%d pts:%lld", (*originData)->avPacket->size,
          (*originData)->avPacket->pts);
    LOG_D(DEBUG, "Audio frame encode success!");
#endif
    return audioPacket.size;
}
```
  • avcodec_fill_audio_frame fills (*originData)->data into the AVFrame, then:

```cpp
ret = avcodec_send_frame(audioCodecContext, outputFrame);
ret = avcodec_receive_packet(audioCodecContext, &audioPacket);
```

    audioPacket now holds the encoded output: data is the encoded AAC and size its length. That completes the encoding.
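One caveat worth knowing: FFmpeg's send/receive API is asynchronous, and a single avcodec_send_frame call may yield zero or more packets. The code above does one receive per send; a more defensive pattern (a sketch, not MediaPlus code) drains the encoder in a loop:

```cpp
extern "C" {
#include <libavcodec/avcodec.h>
}

// Drain every packet the encoder currently has ready; returns 0 when the
// encoder simply needs more input (EAGAIN) or has been fully flushed (EOF).
int DrainEncoder(AVCodecContext *ctx, AVPacket *pkt) {
    int ret;
    while ((ret = avcodec_receive_packet(ctx, pkt)) == 0) {
        // ... mux/send pkt here ...
        av_packet_unref(pkt);
    }
    return (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF) ? 0 : ret;
}
```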

Note: in the int AudioEncoder::InitEncode() method,

avcodec_find_encoder_by_name("libfdk_aac");

the fdk-aac encoder is requested by name. This only works if libfdk-aac has been linked into your FFmpeg build (for example, by configuring FFmpeg with --enable-libfdk-aac, which in turn requires --enable-nonfree); otherwise the encoder will not be found. FFmpeg also ships its own AAC encoder, which can be obtained via:

 avcodec_find_encoder(AV_CODEC_ID_AAC);

If you use FFmpeg's own AAC encoder, you run into the issue mentioned at the start: since FFmpeg dropped AV_SAMPLE_FMT_S16 input for AAC encoding, the AV_SAMPLE_FMT_S16 PCM must be resampled to AV_SAMPLE_FMT_FLTP, which adds one resampling step before encoding.

The following code for resampling AV_SAMPLE_FMT_S16 PCM audio is for reference only:
  • Initialize the SwrContext with the input and output parameters

```cpp
swrContext = swr_alloc_set_opts(swrContext,
                                av_get_default_channel_layout(CHANNELS), // output channel layout
                                AV_SAMPLE_FMT_FLTP,                      // output sample format
                                48000,                                   // output sample rate
                                av_get_default_channel_layout(CHANNELS), // input channel layout
                                AV_SAMPLE_FMT_S16,                       // input sample format
                                48000,                                   // input sample rate
                                0, NULL);                                // log offset / log context
ret = swr_init(swrContext); // initialize the SwrContext
if (ret != 0) {
    LOG_D(DEBUG, "swr_init failed!");
    return -1;
}
```
  • Resample the source data before AAC encoding

```cpp
for (;;) {
    if (encodeAAC->exit) {
        break;
    }
    if (encodeAAC->frame_queue.empty()) {
        continue;
    }
    const uint8_t *indata[AV_NUM_DATA_POINTERS] = {0};
    uint8_t *buf = *encodeAAC->frame_queue.wait_and_pop().get(); // PCM, 16-bit
#ifdef FDK_CODEC
    // fdk-aac accepts S16 input directly, so no resampling is needed
    ret = avcodec_fill_audio_frame(encodeAAC->outputFrame, encodeAAC->avCodecContext->channels,
                                   encodeAAC->avCodecContext->sample_fmt, buf, BUFFER_SIZE, 0);
    if (ret < 0) {
        LOG_D(DEBUG, "fill frame failed!");
        continue;
    }
#else
    // resample S16 to AV_SAMPLE_FMT_FLTP
    indata[0] = buf;
    swr_convert(encodeAAC->swrContext, encodeAAC->outputFrame->data,
                encodeAAC->outputFrame->nb_samples, indata,
                encodeAAC->outputFrame->nb_samples);
#endif
    // ... outputFrame is then sent to the encoder as shown earlier ...
}
```
    The code above performs the audio resampling, after which FFmpeg's AAC encoder can complete the encoding.
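One note when generalizing this code: here the input and output rates are both 48000, so the output sample count equals the input's. When the rates differ, the resampler can buffer samples internally, and a safe output count should be computed, for example like this (a sketch; the function and parameter names are illustrative):

```cpp
extern "C" {
#include <libswresample/swresample.h>
#include <libavutil/mathematics.h>
}

// Compute a safe output sample count for swr_convert when rates differ,
// accounting for samples still buffered inside the resampler.
int SafeOutSamples(SwrContext *swr, int inRate, int outRate, int inSamples) {
    int64_t delay = swr_get_delay(swr, inRate); // buffered input-rate samples
    return (int) av_rescale_rnd(delay + inSamples, outRate, inRate, AV_ROUND_UP);
}
```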

This article has outlined Android PCM audio capture and AAC encoding, the initialization AAC encoding requires, and a resampling example for FFmpeg's AAC encoder. Together with the companion article "android camera采集、H264编码与Rtmp推流", it describes the audio/video capture and encoding process and how streaming is done. More related articles to follow...


Author: swordman
Link: https://juejin.im/post/5a1b6bdbf265da43040654a6
Source: Juejin (掘金)
Copyright belongs to the author. For commercial reproduction, please contact the author for authorization; for non-commercial reproduction, please credit the source.