Using RemoteIO audio unit


Reposted from: http://atastypixel.com/blog/using-remoteio-audio-unit/


I’ve had a nasty old time trying to get some audio stuff going on the iPhone, no thanks to Apple’s lack of documentation. If you’re an iPhone developer interested in getting RemoteIO/IO Remote/whatever it’s called working on the iPhone… Do I have good news for you. Read on.

Wanna skip the Core Audio learning curve and start writing code straight away? Check out my new project:

The Amazing Audio Engine: Core Audio, Cordially

Update: Thanks to Joel Reymont, we now have an explanation for the “CrashIfClientProvidedBogusAudioBufferList” iPhone simulator bug: The simulator doesn’t like mono audio. Thanks, Joel!

Update: Happily, Apple have now created some excellent documentation on Remote IO, with some good sample projects. I recommend using that as a resource, now that it’s there, as that will continue to be updated.

Update: Tom Zicarelli has created a very extensive sample app that demonstrates the use of AUGraph, with all sorts of goodies.

So, we need to obtain an instance of the RemoteIO audio unit, configure it, and hook it up to a recording callback, which notifies you that there is data ready to be grabbed and is where you pull the data from the audio unit.


Overview

  1. Identify the audio component (kAudioUnitType_Output / kAudioUnitSubType_RemoteIO / kAudioUnitManufacturer_Apple)
  2. Use AudioComponentFindNext(NULL, &descriptionOfAudioComponent) to obtain the AudioComponent, which is like the factory with which you obtain the audio unit
  3. Use AudioComponentInstanceNew(ourComponent, &audioUnit) to make an instance of the audio unit
  4. Enable IO for recording and possibly playback with AudioUnitSetProperty
  5. Describe the audio format in an AudioStreamBasicDescription structure, and apply the format using AudioUnitSetProperty
  6. Provide a callback for recording, and possibly playback, again using AudioUnitSetProperty
  7. Allocate some buffers
  8. Initialise the audio unit
  9. Start the audio unit
  10. Rejoice

Here’s my code: I’m using both recording and playback. Use what applies to you!

Initialisation

Initialisation looks like this. We have a member variable of type AudioComponentInstance which will contain our audio unit.

The audio format described below uses SInt16 for samples (i.e. signed, 16 bits per sample).
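One note before the code: it calls a checkStatus() function which I never define here. A minimal sketch, assuming all you want during development is to notice a failed Core Audio call (the name comes from my code; this particular body is just an illustration):

// Hypothetical helper: log the error code and trap in debug builds
// so a failed Core Audio call doesn't pass silently.
#include <stdio.h>
#include <assert.h>

static void checkStatus(OSStatus status) {
    if (status != noErr) {
        printf("Core Audio call failed: %d\n", (int)status);
        assert(status == noErr);
    }
}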

#define kOutputBus 0
#define kInputBus 1

// ...

OSStatus status;
AudioComponentInstance audioUnit;

// Describe audio component
AudioComponentDescription desc;
desc.componentType = kAudioUnitType_Output;
desc.componentSubType = kAudioUnitSubType_RemoteIO;
desc.componentFlags = 0;
desc.componentFlagsMask = 0;
desc.componentManufacturer = kAudioUnitManufacturer_Apple;

// Get component
AudioComponent inputComponent = AudioComponentFindNext(NULL, &desc);

// Get audio unit
status = AudioComponentInstanceNew(inputComponent, &audioUnit);
checkStatus(status);

// Enable IO for recording
UInt32 flag = 1;
status = AudioUnitSetProperty(audioUnit,
                              kAudioOutputUnitProperty_EnableIO,
                              kAudioUnitScope_Input,
                              kInputBus,
                              &flag,
                              sizeof(flag));
checkStatus(status);

// Enable IO for playback
status = AudioUnitSetProperty(audioUnit,
                              kAudioOutputUnitProperty_EnableIO,
                              kAudioUnitScope_Output,
                              kOutputBus,
                              &flag,
                              sizeof(flag));
checkStatus(status);

// Describe format (mono, 16-bit signed integer PCM at 44.1kHz)
AudioStreamBasicDescription audioFormat;
audioFormat.mSampleRate       = 44100.00;
audioFormat.mFormatID         = kAudioFormatLinearPCM;
audioFormat.mFormatFlags      = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
audioFormat.mFramesPerPacket  = 1;
audioFormat.mChannelsPerFrame = 1;
audioFormat.mBitsPerChannel   = 16;
audioFormat.mBytesPerPacket   = 2;
audioFormat.mBytesPerFrame    = 2;

// Apply format to the output scope of the input bus (the recorded audio)...
status = AudioUnitSetProperty(audioUnit,
                              kAudioUnitProperty_StreamFormat,
                              kAudioUnitScope_Output,
                              kInputBus,
                              &audioFormat,
                              sizeof(audioFormat));
checkStatus(status);

// ...and to the input scope of the output bus (the audio we supply for playback)
status = AudioUnitSetProperty(audioUnit,
                              kAudioUnitProperty_StreamFormat,
                              kAudioUnitScope_Input,
                              kOutputBus,
                              &audioFormat,
                              sizeof(audioFormat));
checkStatus(status);

// Set input callback
AURenderCallbackStruct callbackStruct;
callbackStruct.inputProc = recordingCallback;
callbackStruct.inputProcRefCon = self;
status = AudioUnitSetProperty(audioUnit,
                              kAudioOutputUnitProperty_SetInputCallback,
                              kAudioUnitScope_Global,
                              kInputBus,
                              &callbackStruct,
                              sizeof(callbackStruct));
checkStatus(status);

// Set output callback
callbackStruct.inputProc = playbackCallback;
callbackStruct.inputProcRefCon = self;
status = AudioUnitSetProperty(audioUnit,
                              kAudioUnitProperty_SetRenderCallback,
                              kAudioUnitScope_Global,
                              kOutputBus,
                              &callbackStruct,
                              sizeof(callbackStruct));
checkStatus(status);

// Disable buffer allocation for the recorder (optional - do this if we want to pass in our own)
flag = 0;
status = AudioUnitSetProperty(audioUnit,
                              kAudioUnitProperty_ShouldAllocateBuffer,
                              kAudioUnitScope_Output,
                              kInputBus,
                              &flag,
                              sizeof(flag));
checkStatus(status);

// TODO: Allocate our own buffers if we want (see the sketch below)

// Initialise
status = AudioUnitInitialize(audioUnit);
checkStatus(status);
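Since we’ve told the unit not to allocate buffers on the input side, that TODO matters: we need our own AudioBufferList for AudioUnitRender to fill. Here’s a minimal sketch matching the mono 16-bit format above; the 1024-frame capacity is an assumption of mine (a robust app would size it from kAudioUnitProperty_MaximumFramesPerSlice), and malloc comes from <stdlib.h>:

// Sketch: one interleaved mono buffer, sized for an assumed maximum
// of 1024 frames per render. Keep a reference for the recording callback.
UInt32 maxFrames = 1024;
AudioBufferList *bufferList = (AudioBufferList *)malloc(sizeof(AudioBufferList));
bufferList->mNumberBuffers = 1;
bufferList->mBuffers[0].mNumberChannels = 1;
bufferList->mBuffers[0].mDataByteSize = maxFrames * sizeof(SInt16);
bufferList->mBuffers[0].mData = malloc(maxFrames * sizeof(SInt16));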

Then, when you’re ready to start:

OSStatus status = AudioOutputUnitStart(audioUnit);
checkStatus(status);

And to stop:

OSStatus status = AudioOutputUnitStop(audioUnit);
checkStatus(status);

Then, when we’re finished:

AudioUnitUninitialize(audioUnit);
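And if you’re done with the unit for good, the counterpart to AudioComponentInstanceNew is to dispose of the instance as well (not shown in the original code, but it releases the unit’s resources):

AudioComponentInstanceDispose(audioUnit);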

And now for our callbacks.

RECORDING

static OSStatus recordingCallback(void *inRefCon,
                                  AudioUnitRenderActionFlags *ioActionFlags,
                                  const AudioTimeStamp *inTimeStamp,
                                  UInt32 inBusNumber,
                                  UInt32 inNumberFrames,
                                  AudioBufferList *ioData) {

    // TODO: Use inRefCon to access our interface object to do stuff
    // Then, use inNumberFrames to figure out how much data is available, and make
    // that much space available in buffers in an AudioBufferList.

    AudioBufferList *bufferList; // <- Fill this up with buffers (you will want to malloc it, as it's a dynamic-length list)

    // Then:
    // Obtain recorded samples

    OSStatus status;

    status = AudioUnitRender([audioInterface audioUnit],
                             ioActionFlags,
                             inTimeStamp,
                             inBusNumber,
                             inNumberFrames,
                             bufferList);
    checkStatus(status);

    // Now, we have the samples we just read sitting in buffers in bufferList
    DoStuffWithTheRecordedAudio(bufferList);
    return noErr;
}
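To make that skeleton concrete: assuming the AudioBufferList allocated after the initialisation code is reachable here (via inRefCon, say, or a member variable), the render step might look like the sketch below. Note that mDataByteSize should be reset before each call, since AudioUnitRender reads it to know how much room it has:

// Sketch only: assumes bufferList was pre-allocated with enough capacity
// for inNumberFrames frames, and audioUnit is reachable from this callback.
bufferList->mBuffers[0].mDataByteSize = inNumberFrames * sizeof(SInt16);
OSStatus status = AudioUnitRender(audioUnit,
                                  ioActionFlags,
                                  inTimeStamp,
                                  inBusNumber,
                                  inNumberFrames,
                                  bufferList);
checkStatus(status);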

PLAYBACK

static OSStatus playbackCallback(void *inRefCon,
                                 AudioUnitRenderActionFlags *ioActionFlags,
                                 const AudioTimeStamp *inTimeStamp,
                                 UInt32 inBusNumber,
                                 UInt32 inNumberFrames,
                                 AudioBufferList *ioData) {
    // Notes: ioData contains buffers (may be more than one!)
    // Fill them up as much as you can. Remember to set the size value in each buffer to match how
    // much data is in the buffer.
    return noErr;
}
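As an illustration, a filled-in version of that skeleton which just outputs silence in the mono 16-bit format above might look like this (memset comes from <string.h>; a real app would copy samples from its own source instead):

static OSStatus playbackCallback(void *inRefCon,
                                 AudioUnitRenderActionFlags *ioActionFlags,
                                 const AudioTimeStamp *inTimeStamp,
                                 UInt32 inBusNumber,
                                 UInt32 inNumberFrames,
                                 AudioBufferList *ioData) {
    for (UInt32 i = 0; i < ioData->mNumberBuffers; i++) {
        AudioBuffer *buffer = &ioData->mBuffers[i];
        // Mono 16-bit as configured above; clamp to the buffer's capacity.
        UInt32 bytes = inNumberFrames * sizeof(SInt16);
        if (bytes > buffer->mDataByteSize) bytes = buffer->mDataByteSize;
        buffer->mDataByteSize = bytes;   // how much data we actually provided
        memset(buffer->mData, 0, bytes); // silence; write real samples here
    }
    return noErr;
}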

Finally, rejoice with me in this discovery ;)

Resources that helped

  • http://pastie.org/pastes/219616
  • http://developer.apple.com/samplecode/CAPlayThrough/listing8.html
  • http://listas.apesol.org/pipermail/svn-libsdl.org/2008-July/000797.html

No thanks at all to Apple for their lack of accessible documentation on this topic – they really have a long way to go here! Also boo to them for their lack of a search engine, and refusal to open up their docs to Google. It’s a jungle out there!

Update: You can adjust the latency of RemoteIO (and, in fact, any other audio framework) by setting the kAudioSessionProperty_PreferredHardwareIOBufferDuration property:

float aBufferLength = 0.005; // In seconds
AudioSessionSetProperty(kAudioSessionProperty_PreferredHardwareIOBufferDuration,
                        sizeof(aBufferLength), &aBufferLength);

This adjusts the length of the buffers that are passed to you: if a buffer was originally, say, 1024 samples long, then halving the number of samples per buffer halves the time each buffer spans, and thus the latency the buffering adds.
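To put numbers on that: at 44100 Hz, a 1024-sample buffer spans 1024 / 44100 ≈ 23 ms, while 0.005 s works out to about 220 samples (the hardware typically rounds to a power of two, e.g. 256). The duration you set is only a preference, so it’s worth reading back what you actually got; a sketch, assuming an active audio session:

// Read back the granted IO buffer duration after activating the session.
Float32 actualDuration = 0;
UInt32 size = sizeof(actualDuration);
AudioSessionGetProperty(kAudioSessionProperty_CurrentHardwareIOBufferDuration,
                        &size, &actualDuration);
printf("Actual IO buffer duration: %f seconds\n", actualDuration);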
