An In-Depth Look at Android Audio: AudioTrack


Sound can be played with either MediaPlayer or AudioTrack; both expose Java APIs to application developers. Although both play audio, they differ substantially. The biggest difference is that MediaPlayer can play sound files in many formats, such as MP3, AAC, WAV, OGG and MIDI, creating the matching audio decoder in the framework layer. AudioTrack, by contrast, only plays PCM streams that are already decoded; for files, it effectively supports only the WAV format, since most WAV files contain raw PCM. AudioTrack creates no decoder, so it can only play WAV files that need no decoding. The two are still closely related: in the framework layer, MediaPlayer creates an AudioTrack and hands it the decoded PCM stream; the AudioTrack passes the data on to AudioFlinger for mixing, and only then does it reach the hardware for playback. In other words, MediaPlayer wraps an AudioTrack. An example of playing music with AudioTrack:

int sampleRate = 32000;
// the buffer must be at least getMinBufferSize() for these parameters
int bufferSize = AudioTrack.getMinBufferSize(sampleRate,
        AudioFormat.CHANNEL_OUT_STEREO, AudioFormat.ENCODING_PCM_16BIT);
AudioTrack audio = new AudioTrack(
        AudioManager.STREAM_MUSIC,      // stream type
        sampleRate,                     // sample rate: 32 kHz here; use 44100 for 44.1 kHz
        AudioFormat.CHANNEL_OUT_STEREO, // stereo output; CHANNEL_OUT_MONO is single-channel
        AudioFormat.ENCODING_PCM_16BIT, // 16-bit samples; nearly all audio is 16-bit nowadays
        bufferSize,                     // buffer size in bytes; must be >= getMinBufferSize()
        AudioTrack.MODE_STREAM);        // streaming mode; MODE_STATIC uploads all data before play
audio.play(); // start the audio device; actual playback of audio data can begin below
// Opening the MP3 file, reading and decoding the data are omitted ...
byte[] buffer = new byte[4096];
boolean endOfFile = false;
while (true) {
    // The key step: write the decoded data from the buffer into the AudioTrack
    audio.write(buffer, 0, 4096);
    if (endOfFile) break; // endOfFile would be set by the omitted decoding code
}
// stop playback and release the resources
audio.stop();
audio.release();


The AudioTrack construction process

Each audio stream corresponds to one instance of the AudioTrack class. Every AudioTrack registers itself with AudioFlinger at creation time; AudioFlinger mixes (Mixer) all active AudioTracks and sends the result to the AudioHardware for playback. Android currently supports at most 32 simultaneous audio streams, i.e. the Mixer processes at most 32 AudioTrack data streams at a time.


frameworks\base\media\java\android\media\AudioTrack.java

/**
 * streamType:        audio stream type
 * sampleRateInHz:    sample rate
 * channelConfig:     channel configuration
 * audioFormat:       audio format
 * bufferSizeInBytes: buffer size in bytes
 * mode:              data loading mode
 * sessionId:         audio session id
 */
public AudioTrack(int streamType, int sampleRateInHz, int channelConfig, int audioFormat,
        int bufferSizeInBytes, int mode, int sessionId)
throws IllegalArgumentException {
    // mState already == STATE_UNINITIALIZED

    // remember which looper is associated with the AudioTrack instantiation
    Looper looper;
    if ((looper = Looper.myLooper()) == null) {
        looper = Looper.getMainLooper();
    }
    mInitializationLooper = looper;
    /**
     * Parameter checks:
     * 1. verify that streamType is one of STREAM_ALARM, STREAM_MUSIC, STREAM_RING,
     *    STREAM_SYSTEM, STREAM_VOICE_CALL, STREAM_NOTIFICATION, STREAM_BLUETOOTH_SCO,
     *    and assign it to mStreamType
     * 2. verify that sampleRateInHz is between 4000 and 48000, and assign it to mSampleRate
     * 3. set mChannels:
     *      CHANNEL_OUT_DEFAULT, CHANNEL_OUT_MONO, CHANNEL_CONFIGURATION_MONO ---> CHANNEL_OUT_MONO
     *      CHANNEL_OUT_STEREO, CHANNEL_CONFIGURATION_STEREO                  ---> CHANNEL_OUT_STEREO
     * 4. set mAudioFormat:
     *      ENCODING_PCM_16BIT, ENCODING_DEFAULT ---> ENCODING_PCM_16BIT
     *      ENCODING_PCM_8BIT ---> ENCODING_PCM_8BIT
     * 5. set mDataLoadMode:
     *      MODE_STREAM
     *      MODE_STATIC
     */
    audioParamCheck(streamType, sampleRateInHz, channelConfig, audioFormat, mode);
    /**
     * Buffer size check: compute the frame size in bytes; for ENCODING_PCM_16BIT it is
     * mChannelCount * 2. mNativeBufferSizeInFrames is the frame count.
     */
    audioBuffSizeCheck(bufferSizeInBytes);
    if (sessionId < 0) {
        throw new IllegalArgumentException("Invalid audio session ID: " + sessionId);
    }
    // drop into the native layer for initialization
    int[] session = new int[1];
    session[0] = sessionId;
    // native initialization
    int initResult = native_setup(new WeakReference<AudioTrack>(this),
            mStreamType, mSampleRate, mChannels, mAudioFormat,
            mNativeBufferSizeInBytes, mDataLoadMode, session);
    if (initResult != SUCCESS) {
        loge("Error code " + initResult + " when initializing AudioTrack.");
        return; // with mState == STATE_UNINITIALIZED
    }
    mSessionId = session[0];
    if (mDataLoadMode == MODE_STATIC) {
        mState = STATE_NO_STATIC_DATA;
    } else {
        mState = STATE_INITIALIZED;
    }
}

Class constructor with audio session. Use this constructor when the AudioTrack must be attached to a particular audio session. The primary use of the audio session ID is to associate audio effects with a particular instance of AudioTrack: if an audio session ID is provided when creating an AudioEffect, this effect will be applied only to audio tracks and media players in the same session and not to the output mix. When an AudioTrack is created without specifying a session, it will create its own session which can be retrieved by calling the getAudioSessionId() method. If a non-zero session ID is provided, this AudioTrack will share effects attached to this session with all other media players or audio tracks in the same session, otherwise a new session will be created for this track if none is supplied.

streamType

the type of the audio stream. See STREAM_VOICE_CALL, STREAM_SYSTEM, STREAM_RING, STREAM_MUSIC, STREAM_ALARM, and STREAM_NOTIFICATION.

sampleRateInHz

the sample rate expressed in Hertz.

channelConfig

describes the configuration of the audio channels. See CHANNEL_OUT_MONO and CHANNEL_OUT_STEREO.

audioFormat

the format in which the audio data is represented. See ENCODING_PCM_16BIT and ENCODING_PCM_8BIT.

bufferSizeInBytes

the total size (in bytes) of the buffer where audio data is read from for playback. If using the AudioTrack in streaming mode, you can write data into this buffer in smaller chunks than this size. If using the AudioTrack in static mode, this is the maximum size of the sound that will be played for this instance. See getMinBufferSize(int, int, int) to determine the minimum required buffer size for the successful creation of an AudioTrack instance in streaming mode. Using values smaller than getMinBufferSize() will result in an initialization failure.

mode

streaming or static buffer. See MODE_STATIC and MODE_STREAM.

sessionId

Id of audio session the AudioTrack must be attached to

AudioTrack has two data loading modes:

MODE_STREAM

In this mode the application continuously writes the audio data stream into the AudioTrack; each write blocks until the data has been transferred from the Java layer to the native layer and added to the playback queue. This mode suits large audio payloads, but it also introduces some latency;

MODE_STATIC

All data is written into the AudioTrack's internal buffer in one go before playback starts. This suits small, latency-sensitive sounds.
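
A minimal MODE_STATIC sketch (the pcmData array and its loading are illustrative, not part of the original example; in static mode the whole clip is written once before play()):

// pcmData is assumed to hold a complete, already-decoded 16-bit mono PCM clip
byte[] pcmData = loadClip(); // hypothetical helper; loading the clip is out of scope here
AudioTrack clip = new AudioTrack(
        AudioManager.STREAM_MUSIC,
        44100,
        AudioFormat.CHANNEL_OUT_MONO,
        AudioFormat.ENCODING_PCM_16BIT,
        pcmData.length,             // static mode: the buffer must hold the entire clip
        AudioTrack.MODE_STATIC);
clip.write(pcmData, 0, pcmData.length); // one-time upload into the track's buffer
clip.play();                            // starts promptly; reloadStaticData() allows replay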

frameworks\base\core\jni\android_media_AudioTrack.cpp

static int android_media_AudioTrack_native_setup(JNIEnv *env, jobject thiz, jobject weak_this, jint streamType, jint sampleRateInHertz, jint javaChannelMask,
        jint audioFormat, jint buffSizeInBytes, jint memoryMode, jintArray jSession)
{
    ALOGV("sampleRate=%d, audioFormat(from Java)=%d, channel mask=%x, buffSize=%d",
        sampleRateInHertz, audioFormat, javaChannelMask, buffSizeInBytes);
    int afSampleRate;  // sample rate
    int afFrameCount;  // frame count
    // ask AudioPolicyService (via AudioSystem) for the frame count of this stream type's output
    if (AudioSystem::getOutputFrameCount(&afFrameCount, (audio_stream_type_t) streamType) != NO_ERROR) {
        ALOGE("Error creating AudioTrack: Could not get AudioSystem frame count.");
        return AUDIOTRACK_ERROR_SETUP_AUDIOSYSTEM;
    }
    // ask AudioPolicyService (via AudioSystem) for the sampling rate of this stream type's output
    if (AudioSystem::getOutputSamplingRate(&afSampleRate, (audio_stream_type_t) streamType) != NO_ERROR) {
        ALOGE("Error creating AudioTrack: Could not get AudioSystem sampling rate.");
        return AUDIOTRACK_ERROR_SETUP_AUDIOSYSTEM;
    }
    // Java channel masks don't map directly to the native definition, but it's a simple shift
    // to skip the two deprecated channel configurations "default" and "mono".
    uint32_t nativeChannelMask = ((uint32_t)javaChannelMask) >> 2;
    // verify that this is an output channel mask
    if (!audio_is_output_channel(nativeChannelMask)) {
        ALOGE("Error creating AudioTrack: invalid channel mask.");
        return AUDIOTRACK_ERROR_SETUP_INVALIDCHANNELMASK;
    }
    // channel count; popcount() counts how many bits of an integer are 1
    int nbChannels = popcount(nativeChannelMask);
    // check the stream type
    audio_stream_type_t atStreamType;
    switch (streamType) {
    case AUDIO_STREAM_VOICE_CALL:
    case AUDIO_STREAM_SYSTEM:
    case AUDIO_STREAM_RING:
    case AUDIO_STREAM_MUSIC:
    case AUDIO_STREAM_ALARM:
    case AUDIO_STREAM_NOTIFICATION:
    case AUDIO_STREAM_BLUETOOTH_SCO:
    case AUDIO_STREAM_DTMF:
        atStreamType = (audio_stream_type_t) streamType;
        break;
    default:
        ALOGE("Error creating AudioTrack: unknown stream type.");
        return AUDIOTRACK_ERROR_SETUP_INVALIDSTREAMTYPE;
    }
    // This function was called from Java, so we compare the format against the Java constants
    if ((audioFormat != javaAudioTrackFields.PCM16) && (audioFormat != javaAudioTrackFields.PCM8)) {
        ALOGE("Error creating AudioTrack: unsupported audio format.");
        return AUDIOTRACK_ERROR_SETUP_INVALIDFORMAT;
    }
    // for the moment 8bitPCM in MODE_STATIC is not supported natively in the AudioTrack C++ class
    // so we declare everything as 16bitPCM, the 8->16bit conversion for MODE_STATIC will be handled
    // in android_media_AudioTrack_native_write_byte()
    if ((audioFormat == javaAudioTrackFields.PCM8)
        && (memoryMode == javaAudioTrackFields.MODE_STATIC)) {
        ALOGV("android_media_AudioTrack_native_setup(): requesting MODE_STATIC for 8bit \
            buff size of %d bytes, switching to 16bit, buff size of %d bytes",
            buffSizeInBytes, 2*buffSizeInBytes);
        audioFormat = javaAudioTrackFields.PCM16;
        // we will need twice the memory to store the data
        buffSizeInBytes *= 2;
    }
    // bytes per sample point, derived from the sample format
    int bytesPerSample = audioFormat == javaAudioTrackFields.PCM16 ? 2 : 1;
    audio_format_t format = audioFormat == javaAudioTrackFields.PCM16 ?
            AUDIO_FORMAT_PCM_16_BIT : AUDIO_FORMAT_PCM_8_BIT;
    // derive the frame count from the buffer size; frame size = bytes per sample * channel count
    int frameCount = buffSizeInBytes / (nbChannels * bytesPerSample);
    // validate the remaining parameters
    jclass clazz = env->GetObjectClass(thiz);
    if (clazz == NULL) {
        ALOGE("Can't find %s when setting up callback.", kClassPathName);
        return AUDIOTRACK_ERROR_SETUP_NATIVEINITFAILED;
    }
    if (jSession == NULL) {
        ALOGE("Error creating AudioTrack: invalid session ID pointer");
        return AUDIOTRACK_ERROR;
    }
    jint* nSession = (jint *) env->GetPrimitiveArrayCritical(jSession, NULL);
    if (nSession == NULL) {
        ALOGE("Error creating AudioTrack: Error retrieving session id pointer");
        return AUDIOTRACK_ERROR;
    }
    int sessionId = nSession[0];
    env->ReleasePrimitiveArrayCritical(jSession, nSession, 0);
    nSession = NULL;
    // create the native AudioTrack object
    sp<AudioTrack> lpTrack = new AudioTrack();
    if (lpTrack == NULL) {
        ALOGE("Error creating uninitialized AudioTrack");
        return AUDIOTRACK_ERROR_SETUP_NATIVEINITFAILED;
    }
    // create the container that will hold the audio data
    AudioTrackJniStorage* lpJniStorage = new AudioTrackJniStorage();
    lpJniStorage->mStreamType = atStreamType;
    // save a reference to the Java-level AudioTrack in the AudioTrackJniStorage
    lpJniStorage->mCallbackData.audioTrack_class = (jclass)env->NewGlobalRef(clazz);
    // we use a weak reference so the AudioTrack object can be garbage collected.
    lpJniStorage->mCallbackData.audioTrack_ref = env->NewGlobalRef(weak_this);
    lpJniStorage->mCallbackData.busy = false;
    // initialize the native AudioTrack object according to the memory mode
    if (memoryMode == javaAudioTrackFields.MODE_STREAM) {  // stream mode
        lpTrack->set(
            atStreamType, // stream type
            sampleRateInHertz,
            format,       // word length, PCM
            nativeChannelMask,
            frameCount,
            AUDIO_OUTPUT_FLAG_NONE,
            audioCallback,
            &(lpJniStorage->mCallbackData), // callback, callback data (user)
            0,    // notificationFrames == 0 since not using EVENT_MORE_DATA to feed the AudioTrack
            0,    // in stream mode the shared memory is created inside AudioFlinger
            true, // thread can call Java
            sessionId); // audio session ID
    } else if (memoryMode == javaAudioTrackFields.MODE_STATIC) {  // static mode
        // allocate the shared memory region for the AudioTrack
        if (!lpJniStorage->allocSharedMem(buffSizeInBytes)) {
            ALOGE("Error creating AudioTrack in static mode: error creating mem heap base");
            goto native_init_failure;
        }
        lpTrack->set(
            atStreamType, // stream type
            sampleRateInHertz,
            format,       // word length, PCM
            nativeChannelMask,
            frameCount,
            AUDIO_OUTPUT_FLAG_NONE,
            audioCallback, &(lpJniStorage->mCallbackData), // callback, callback data (user)
            0,    // notificationFrames == 0 since not using EVENT_MORE_DATA to feed the AudioTrack
            lpJniStorage->mMemBase, // shared mem
            true, // thread can call Java
            sessionId); // audio session ID
    }
    if (lpTrack->initCheck() != NO_ERROR) {
        ALOGE("Error initializing AudioTrack");
        goto native_init_failure;
    }
    nSession = (jint *) env->GetPrimitiveArrayCritical(jSession, NULL);
    if (nSession == NULL) {
        ALOGE("Error creating AudioTrack: Error retrieving session id pointer");
        goto native_init_failure;
    }
    // read the audio session ID back from AudioTrack in case we create a new session
    nSession[0] = lpTrack->getSessionId();
    env->ReleasePrimitiveArrayCritical(jSession, nSession, 0);
    nSession = NULL;
    {   // scope for the lock
        Mutex::Autolock l(sLock);
        sAudioTrackCallBackCookies.add(&lpJniStorage->mCallbackData);
    }
    // save our newly created C++ AudioTrack in the "nativeTrackInJavaObj" field
    // of the Java object (in mNativeTrackInJavaObj)
    setAudioTrack(env, thiz, lpTrack);
    // save the JNI resources so we can free them later
    //ALOGV("storing lpJniStorage: %x\n", (int)lpJniStorage);
    env->SetIntField(thiz, javaAudioTrackFields.jniData, (int)lpJniStorage);
    return AUDIOTRACK_SUCCESS;
    // failures:
native_init_failure:
    if (nSession != NULL) {
        env->ReleasePrimitiveArrayCritical(jSession, nSession, 0);
    }
    env->DeleteGlobalRef(lpJniStorage->mCallbackData.audioTrack_class);
    env->DeleteGlobalRef(lpJniStorage->mCallbackData.audioTrack_ref);
    delete lpJniStorage;
    env->SetIntField(thiz, javaAudioTrackFields.jniData, 0);
    return AUDIOTRACK_ERROR_SETUP_NATIVEINITFAILED;
}

The native_setup() function above does four things:

1. Check the audio parameters;

2. Create a native AudioTrack object;

3. Create an AudioTrackJniStorage object;

4. Call set() to initialize the AudioTrack.

bufferSize = frameCount × bytes per frame = frameCount × (channel count × bytes per sample)
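
Plugging in the Java example above: 4096 bytes ÷ (2 channels × 2 bytes per 16-bit sample) = 1024 frames, which is exactly the frameCount that native_setup() derives from the buffer size.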

Constructing the native AudioTrack

frameworks\av\media\libmedia\AudioTrack.cpp

AudioTrack::AudioTrack()
    : mStatus(NO_INIT),
      mIsTimed(false),
      mPreviousPriority(ANDROID_PRIORITY_NORMAL),
      mPreviousSchedulingGroup(SP_DEFAULT),
      mCblk(NULL)
{
}

Constructing AudioTrackJniStorage

AudioTrackJniStorage is the container that holds the audio data; it is a wrapper around anonymous shared memory (ashmem).

struct audiotrack_callback_cookie {
    jclass      audioTrack_class;
    jobject     audioTrack_ref; // reference to the Java-level AudioTrack object
    bool        busy;           // busy flag
    Condition   cond;           // condition variable
};

class AudioTrackJniStorage {
    public:
        sp<MemoryHeapBase>         mMemHeap;
        sp<MemoryBase>             mMemBase;
        audiotrack_callback_cookie mCallbackData;
        audio_stream_type_t        mStreamType;

    AudioTrackJniStorage() {
        mCallbackData.audioTrack_class = 0;
        mCallbackData.audioTrack_ref = 0;
        mStreamType = AUDIO_STREAM_DEFAULT;
    }

    ~AudioTrackJniStorage() {
        mMemBase.clear();
        mMemHeap.clear();
    }
    /**
     * Allocate an anonymous shared memory region of the given size.
     * @param sizeInBytes: size of the anonymous shared memory
     * @return
     */
    bool allocSharedMem(int sizeInBytes) {
        // create an anonymous shared memory region
        mMemHeap = new MemoryHeapBase(sizeInBytes, 0, "AudioTrack Heap Base");
        if (mMemHeap->getHeapID() < 0) {
            return false;
        }
        mMemBase = new MemoryBase(mMemHeap, 0, sizeInBytes);
        return true;
    }
};

/**
 * Create an anonymous shared memory region.
 * @param size: size of the region
 * @param flags: creation flags
 * @param name: name of the region
 */
MemoryHeapBase::MemoryHeapBase(size_t size, uint32_t flags, char const* name)
    : mFD(-1), mSize(0), mBase(MAP_FAILED), mFlags(flags),
      mDevice(0), mNeedUnmap(false), mOffset(0)
{
    // page size of this system
    const size_t pagesize = getpagesize();
    // round the size up to a whole number of pages
    size = ((size + pagesize-1) & ~(pagesize-1));
    /* create the shared memory: open /dev/ashmem and obtain a file descriptor */
    int fd = ashmem_create_region(name == NULL ? "MemoryHeapBase" : name, size);
    ALOGE_IF(fd < 0, "error creating ashmem region: %s", strerror(errno));
    if (fd >= 0) {
        // mmap the anonymous shared memory into this process's address space
        if (mapfd(fd, size) == NO_ERROR) {
            if (flags & READ_ONLY) {
                ashmem_set_prot_region(fd, PROT_READ);
            }
        }
    }
}

Initializing the AudioTrack

set() stores the audio parameters into the AudioTrack. Android 4.4 added a transfer_type parameter that specifies how audio data is transferred to the track; it defines four explicit transfer types plus a default that resolves to one of them:

enum transfer_type {
    TRANSFER_DEFAULT,   // not specified explicitly; determine from the other parameters
    TRANSFER_CALLBACK,  // callback EVENT_MORE_DATA
    TRANSFER_OBTAIN,    // FIXME deprecated: call obtainBuffer() and releaseBuffer()
    TRANSFER_SYNC,      // synchronous write()
    TRANSFER_SHARED,    // shared memory
};
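
The JNI code shown earlier never passes a transfer type, so TRANSFER_DEFAULT applies and the switch at the top of set() below resolves it: static mode supplies a sharedBuffer and becomes TRANSFER_SHARED, while stream mode (sharedBuffer == 0, threadCanCallJava == true) becomes TRANSFER_SYNC, matching the blocking write() behaviour described above.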

/**
 * Initialize the AudioTrack.
 * @param streamType  audio stream type
 * @param sampleRate  sample rate
 * @param format      audio format
 * @param channelMask output channel mask
 * @param frameCount  frame count
 * @param flags       output flags
 * @param cbf   Callback function. If not null, this function is called periodically
 *   to provide new data and inform of marker, position updates, etc.
 * @param user  Context for use by the callback receiver.
 * @param notificationFrames  The callback function is called each time notificationFrames
 *   PCM frames have been consumed from track input buffer.
 * @param sharedBuffer shared memory
 * @param threadCanCallJava
 * @param sessionId
 * @return
 */
status_t AudioTrack::set(
        audio_stream_type_t streamType,
        uint32_t sampleRate,
        audio_format_t format,
        audio_channel_mask_t channelMask,
        int frameCountInt,
        audio_output_flags_t flags,
        callback_t cbf,
        void* user,
        int notificationFrames,
        const sp<IMemory>& sharedBuffer,
        bool threadCanCallJava,
        int sessionId,
        transfer_type transferType,
        const audio_offload_info_t *offloadInfo,
        int uid)
{
    // work out the data transfer type
    switch (transferType) {
    case TRANSFER_DEFAULT:
        if (sharedBuffer != 0) {
            transferType = TRANSFER_SHARED;
        } else if (cbf == NULL || threadCanCallJava) {
            transferType = TRANSFER_SYNC;
        } else {
            transferType = TRANSFER_CALLBACK;
        }
        break;
    case TRANSFER_CALLBACK:
        if (cbf == NULL || sharedBuffer != 0) {
            ALOGE("Transfer type TRANSFER_CALLBACK but cbf == NULL || sharedBuffer != 0");
            return BAD_VALUE;
        }
        break;
    case TRANSFER_OBTAIN:
    case TRANSFER_SYNC:
        if (sharedBuffer != 0) {
            ALOGE("Transfer type TRANSFER_OBTAIN but sharedBuffer != 0");
            return BAD_VALUE;
        }
        break;
    case TRANSFER_SHARED:
        if (sharedBuffer == 0) {
            ALOGE("Transfer type TRANSFER_SHARED but sharedBuffer == 0");
            return BAD_VALUE;
        }
        break;
    default:
        ALOGE("Invalid transfer type %d", transferType);
        return BAD_VALUE;
    }
    mTransfer = transferType;
    // FIXME "int" here is legacy and will be replaced by size_t later
    if (frameCountInt < 0) {
        ALOGE("Invalid frame count %d", frameCountInt);
        return BAD_VALUE;
    }
    size_t frameCount = frameCountInt;
    ALOGV_IF(sharedBuffer != 0, "sharedBuffer: %p, size: %d", sharedBuffer->pointer(),
            sharedBuffer->size());
    ALOGV("set() streamType %d frameCount %u flags %04x", streamType, frameCount, flags);
    AutoMutex lock(mLock);
    // invariant that mAudioTrack != 0 is true only after set() returns successfully
    if (mAudioTrack != 0) {
        ALOGE("Track already in use");
        return INVALID_OPERATION;
    }
    mOutput = 0;
    // stream type
    if (streamType == AUDIO_STREAM_DEFAULT) {
        streamType = AUDIO_STREAM_MUSIC;
    }
    // if no sample rate was given, get the one matching this stream type from AudioPolicyService
    if (sampleRate == 0) {
        uint32_t afSampleRate;
        if (AudioSystem::getOutputSamplingRate(&afSampleRate, streamType) != NO_ERROR) {
            return NO_INIT;
        }
        sampleRate = afSampleRate;
    }
    mSampleRate = sampleRate;
    // audio format
    if (format == AUDIO_FORMAT_DEFAULT) {
        format = AUDIO_FORMAT_PCM_16_BIT;
    }
    // if no channel mask was given, default to stereo output
    if (channelMask == 0) {
        channelMask = AUDIO_CHANNEL_OUT_STEREO;
    }
    // validate parameters
    if (!audio_is_valid_format(format)) {
        ALOGE("Invalid format %d", format);
        return BAD_VALUE;
    }
    // AudioFlinger does not currently support 8-bit data in shared memory
    if (format == AUDIO_FORMAT_PCM_8_BIT && sharedBuffer != 0) {
        ALOGE("8-bit data in shared memory is not supported");
        return BAD_VALUE;
    }
    // force direct flag if format is not linear PCM
    // or offload was requested
    if ((flags & AUDIO_OUTPUT_FLAG_COMPRESS_OFFLOAD)
            || !audio_is_linear_pcm(format)) {
        ALOGV( (flags & AUDIO_OUTPUT_FLAG_COMPRESS_OFFLOAD)
                    ? "Offload request, forcing to Direct Output"
                    : "Not linear PCM, forcing to Direct Output");
        flags = (audio_output_flags_t)
                // FIXME why can't we allow direct AND fast?
                ((flags | AUDIO_OUTPUT_FLAG_DIRECT) & ~AUDIO_OUTPUT_FLAG_FAST);
    }
    // only allow deep buffering for music stream type
    if (streamType != AUDIO_STREAM_MUSIC) {
        flags = (audio_output_flags_t)(flags & ~AUDIO_OUTPUT_FLAG_DEEP_BUFFER);
    }
    // validate the output channel mask
    if (!audio_is_output_channel(channelMask)) {
        ALOGE("Invalid channel mask %#x", channelMask);
        return BAD_VALUE;
    }
    mChannelMask = channelMask;
    // channel count
    uint32_t channelCount = popcount(channelMask);
    mChannelCount = channelCount;
    if (audio_is_linear_pcm(format)) {
        mFrameSize = channelCount * audio_bytes_per_sample(format);
        mFrameSizeAF = channelCount * sizeof(int16_t);
    } else {
        mFrameSize = sizeof(uint8_t);
        mFrameSizeAF = sizeof(uint8_t);
    }
    /**
     * audio_io_handle_t is an integer identifying a playback thread. Using the audio
     * parameters, look up (in AudioFlinger) the playback thread that should play this
     * audio, and return that thread's id.
     */
    audio_io_handle_t output = AudioSystem::getOutput(
                                    streamType,
                                    sampleRate, format, channelMask,
                                    flags,
                                    offloadInfo);
    if (output == 0) {
        ALOGE("Could not get audio output for stream type %d", streamType);
        return BAD_VALUE;
    }
    // AudioTrack initialization
    mVolume[LEFT] = 1.0f;
    mVolume[RIGHT] = 1.0f;
    mSendLevel = 0.0f;
    mFrameCount = frameCount;
    mReqFrameCount = frameCount;
    mNotificationFramesReq = notificationFrames;
    mNotificationFramesAct = 0;
    mSessionId = sessionId;
    if (uid == -1 || (IPCThreadState::self()->getCallingPid() != getpid())) {
        mClientUid = IPCThreadState::self()->getCallingUid();
    } else {
        mClientUid = uid;
    }
    mAuxEffectId = 0;
    mFlags = flags;
    mCbf = cbf;
    // if a data-supplying callback was installed, start an AudioTrackThread to drive it
    if (cbf != NULL) {
        mAudioTrackThread = new AudioTrackThread(*this, threadCanCallJava);
        mAudioTrackThread->run("AudioTrack", ANDROID_PRIORITY_AUDIO, 0 /*stack*/);
    }
    // create the IAudioTrack
    status_t status = createTrack_l(streamType,
                                  sampleRate,
                                  format,
                                  frameCount,
                                  flags,
                                  sharedBuffer,
                                  output,
                                  0 /*epoch*/);
    if (status != NO_ERROR) {
        if (mAudioTrackThread != 0) {
            mAudioTrackThread->requestExit();  // see comment in AudioTrack.h
            mAudioTrackThread->requestExitAndWait();
            mAudioTrackThread.clear();
        }
        // Use of direct and offloaded output streams is ref counted by audio policy manager.
        // As getOutput was called above and resulted in an output stream to be opened,
        // we need to release it.
        AudioSystem::releaseOutput(output);
        return status;
    }
    mStatus = NO_ERROR;
    mStreamType = streamType;
    mFormat = format;
    mSharedBuffer = sharedBuffer;
    mState = STATE_STOPPED;
    mUserData = user;
    mLoopPeriod = 0;
    mMarkerPosition = 0;
    mMarkerReached = false;
    mNewPosition = 0;
    mUpdatePeriod = 0;
    AudioSystem::acquireAudioSessionId(mSessionId);
    mSequence = 1;
    mObservedSequence = mSequence;
    mInUnderrun = false;
    mOutput = output;
    return NO_ERROR;
}

Recall that when AudioPolicyService starts, it loads all audio interfaces the system supports and opens the default audio output. Opening an audio output calls AudioFlinger::openOutput(), which creates a PlaybackThread for the newly opened output interface and assigns it a globally unique audio_io_handle_t, stored as a key/value pair in AudioFlinger's member mPlaybackThreads. Here, AudioSystem::getOutput() is first called with the audio parameters to obtain the id of the PlaybackThread for the current audio output, and that id is then passed to createTrack to create the Track. Inside AudioFlinger, an AudioTrack is managed as a Track. Because the two sides live in different processes, a "bridge" is needed to connect them, and that communication channel is IAudioTrack. Besides asking AudioFlinger to allocate a Track for the AudioTrack, createTrack_l also establishes this IAudioTrack bridge between the two.

Getting the audio output

Getting the audio output means searching the list of already opened output descriptors for an AudioOutputDescriptor that matches the audio parameters (sample rate, channels, format, and so on) and returning the id of the playback thread that AudioFlinger created for that output. If no opened output matches the current parameters, AudioFlinger is asked to open a new output channel and create a corresponding playback thread, whose id is then returned. For details, see the "opening the output" section of the article on the Android AudioPolicyService startup process.

Creating the AudioTrackThread

During AudioTrack initialization, an AudioTrackThread is created whenever a non-null audio callback is supplied (see the cbf != NULL check in set() above).

AudioTrack supports two ways of feeding data:

1) Push: the user actively calls write() to push data; MediaPlayerService usually works this way;

2) Pull: the AudioTrackThread actively pulls data from the user through the audioCallback callback; ToneGenerator works this way (sketched below);
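
A rough sketch of a pull-model data callback (the handler body is illustrative; the EVENT_MORE_DATA / AudioTrack::Buffer pattern is the one processAudioBuffer() drives):

// Hypothetical pull-model callback, invoked from the AudioTrackThread
// whenever the track needs more PCM data.
static void audioCallback(int event, void* user, void* info) {
    if (event != AudioTrack::EVENT_MORE_DATA) return;
    AudioTrack::Buffer* buffer = static_cast<AudioTrack::Buffer*>(info);
    // Fill buffer->raw with up to buffer->size bytes of PCM; a real generator
    // would synthesize samples here -- this placeholder just writes silence.
    memset(buffer->raw, 0, buffer->size);
    // buffer->size may be reduced to report that less data was provided.
}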

bool AudioTrack::AudioTrackThread::threadLoop()
{
    {
        AutoMutex _l(mMyLock);
        if (mPaused) {
            mMyCond.wait(mMyLock);
            // caller will check for exitPending()
            return true;
        }
    }
    // call processAudioBuffer() on the AudioTrack that created this AudioTrackThread
    if (!mReceiver.processAudioBuffer(this)) {
        pause();
    }
    return true;
}

Requesting a Track

Audio playback needs the AudioTrack to write the audio data and AudioFlinger to mix it, so a data channel must be established between them. Since AudioTrack and AudioFlinger live in different process spaces, Android uses Binder IPC to build the bridge between them.
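
That bridge is the IAudioTrack Binder interface. An abbreviated sketch of its shape (a simplified subset of IAudioTrack.h, not the full interface):

// The client side holds a BpAudioTrack proxy; AudioFlinger serves a
// BnAudioTrack implementation (TrackHandle) that forwards to the real Track.
class IAudioTrack : public IInterface {
public:
    virtual sp<IMemory> getCblk() const = 0; // shared control block (+ data buffer in stream mode)
    virtual status_t    start() = 0;
    virtual void        stop()  = 0;
    virtual void        flush() = 0;
    virtual void        pause() = 0;
};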

status_t AudioTrack::createTrack_l(
        audio_stream_type_t streamType,
        uint32_t sampleRate,
        audio_format_t format,
        size_t frameCount,
        audio_output_flags_t flags,
        const sp<IMemory>& sharedBuffer,
        audio_io_handle_t output,
        size_t epoch)
{
    status_t status;
    // get the proxy object for AudioFlinger
    const sp<IAudioFlinger>& audioFlinger = AudioSystem::get_audio_flinger();
    if (audioFlinger == 0) {
        ALOGE("Could not get audioflinger");
        return NO_INIT;
    }
    // output latency
    uint32_t afLatency;
    status = AudioSystem::getLatency(output, streamType, &afLatency);
    if (status != NO_ERROR) {
        ALOGE("getLatency(%d) failed status %d", output, status);
        return NO_INIT;
    }
    // frame count
    size_t afFrameCount;
    status = AudioSystem::getFrameCount(output, streamType, &afFrameCount);
    if (status != NO_ERROR) {
        ALOGE("getFrameCount(output=%d, streamType=%d) status %d", output, streamType, status);
        return NO_INIT;
    }
    // sample rate
    uint32_t afSampleRate;
    status = AudioSystem::getSamplingRate(output, streamType, &afSampleRate);
    if (status != NO_ERROR) {
        ALOGE("getSamplingRate(output=%d, streamType=%d) status %d", output, streamType, status);
        return NO_INIT;
    }
    // Client decides whether the track is TIMED (see below), but can only express a preference
    // for FAST.  Server will perform additional tests.
    if ((flags & AUDIO_OUTPUT_FLAG_FAST) && !(
            // either of these use cases:
            // use case 1: shared buffer
            (sharedBuffer != 0) ||
            // use case 2: callback handler
            (mCbf != NULL))) {
        ALOGW("AUDIO_OUTPUT_FLAG_FAST denied by client");
        // once denied, do not request again if IAudioTrack is re-created
        flags = (audio_output_flags_t) (flags & ~AUDIO_OUTPUT_FLAG_FAST);
        mFlags = flags;
    }
    ALOGV("createTrack_l() output %d afLatency %d", output, afLatency);
    // The client's AudioTrack buffer is divided into n parts for purpose of wakeup by server, where
    //  n = 1   fast track; nBuffering is ignored
    //  n = 2   normal track, no sample rate conversion
    //  n = 3   normal track, with sample rate conversion
    //          (pessimistic; some non-1:1 conversion ratios don't actually need triple-buffering)
    //  n > 3   very high latency or very small notification interval; nBuffering is ignored
    const uint32_t nBuffering = (sampleRate == afSampleRate) ? 2 : 3;
    mNotificationFramesAct = mNotificationFramesReq;
    if (!audio_is_linear_pcm(format)) {
        if (sharedBuffer != 0) {  // static mode
            // Same comment as below about ignoring frameCount parameter for set()
            frameCount = sharedBuffer->size();
        } else if (frameCount == 0) {
            frameCount = afFrameCount;
        }
        if (mNotificationFramesAct != frameCount) {
            mNotificationFramesAct = frameCount;
        }
    } else if (sharedBuffer != 0) {  // static mode
        // Ensure that buffer alignment matches channel count
        // 8-bit data in shared memory is not currently supported by AudioFlinger
        size_t alignment = /* format == AUDIO_FORMAT_PCM_8_BIT ? 1 : */ 2;
        if (mChannelCount > 1) {
            // More than 2 channels does not require stronger alignment than stereo
            alignment <<= 1;
        }
        if (((size_t)sharedBuffer->pointer() & (alignment - 1)) != 0) {
            ALOGE("Invalid buffer alignment: address %p, channel count %u",
                    sharedBuffer->pointer(), mChannelCount);
            return BAD_VALUE;
        }
        // When initializing a shared buffer AudioTrack via constructors,
        // there's no frameCount parameter.
        // But when initializing a shared buffer AudioTrack via set(),
        // there _is_ a frameCount parameter.  We silently ignore it.
        frameCount = sharedBuffer->size()/mChannelCount/sizeof(int16_t);
    } else if (!(flags & AUDIO_OUTPUT_FLAG_FAST)) {
        // FIXME move these calculations and associated checks to server
        // Ensure that buffer depth covers at least audio hardware latency
        uint32_t minBufCount = afLatency / ((1000 * afFrameCount)/afSampleRate);
        ALOGV("afFrameCount=%d, minBufCount=%d, afSampleRate=%u, afLatency=%d",
                afFrameCount, minBufCount, afSampleRate, afLatency);
        if (minBufCount <= nBuffering) {
            minBufCount = nBuffering;
        }
        size_t minFrameCount = (afFrameCount*sampleRate*minBufCount)/afSampleRate;
        ALOGV("minFrameCount: %u, afFrameCount=%d, minBufCount=%d, sampleRate=%u, afSampleRate=%u"
                ", afLatency=%d", minFrameCount, afFrameCount, minBufCount, sampleRate, afSampleRate, afLatency);
        if (frameCount == 0) {
            frameCount = minFrameCount;
        } else if (frameCount < minFrameCount) {
            // not ALOGW because it happens all the time when playing key clicks over A2DP
            ALOGV("Minimum buffer size corrected from %d to %d",
                     frameCount, minFrameCount);
            frameCount = minFrameCount;
        }
        // Make sure that application is notified with sufficient margin before underrun
        if (mNotificationFramesAct == 0 || mNotificationFramesAct > frameCount/nBuffering) {
            mNotificationFramesAct = frameCount/nBuffering;
        }
    } else {
        // For fast tracks, the frame count calculations and checks are done by server
    }
    IAudioFlinger::track_flags_t trackFlags = IAudioFlinger::TRACK_DEFAULT;
    if (mIsTimed) {
        trackFlags |= IAudioFlinger::TRACK_TIMED;
    }
    pid_t tid = -1;
    if (flags & AUDIO_OUTPUT_FLAG_FAST) {
        trackFlags |= IAudioFlinger::TRACK_FAST;
        if (mAudioTrackThread != 0) {
            tid = mAudioTrackThread->getTid();
        }
    }
    if (flags & AUDIO_OUTPUT_FLAG_COMPRESS_OFFLOAD) {
        trackFlags |= IAudioFlinger::TRACK_OFFLOAD;
    }
    // send the createTrack request to AudioFlinger; in stream mode sharedBuffer is empty
    // and output is the id of the playback thread inside AudioFlinger
    sp<IAudioTrack> track = audioFlinger->createTrack(streamType,
                                                      sampleRate,
                                                      // AudioFlinger only sees 16-bit PCM
                                                      format == AUDIO_FORMAT_PCM_8_BIT ?
                                                              AUDIO_FORMAT_PCM_16_BIT : format,
                                                      mChannelMask,
                                                      frameCount,
                                                      &trackFlags,
                                                      sharedBuffer,
                                                      output,
                                                      tid,
                                                      &mSessionId,
                                                      mName,
                                                      mClientUid,
                                                      &status);
    if (track == 0) {
        ALOGE("AudioFlinger could not create track, status: %d", status);
        return status;
    }
    // AudioFlinger allocates a block of shared memory when it creates the Track object;
    // here we get the proxy (BpMemory) for that shared memory
    sp<IMemory> iMem = track->getCblk();
    if (iMem == 0) {
        ALOGE("Could not get control block");
        return NO_INIT;
    }
    // invariant that mAudioTrack != 0 is true only after set() returns successfully
    if (mAudioTrack != 0) {
        mAudioTrack->asBinder()->unlinkToDeath(mDeathNotifier, this);
        mDeathNotifier.clear();
    }
    // keep the Track proxy and the shared-memory proxy in AudioTrack's member variables
    mAudioTrack = track;
    mCblkMemory = iMem;
    // keep the base address of the shared memory; an audio_track_cblk_t object
    // sits at the head of the shared memory
    audio_track_cblk_t* cblk = static_cast<audio_track_cblk_t*>(iMem->pointer());
    mCblk = cblk;
    size_t temp = cblk->frameCount_;
    if (temp < frameCount || (frameCount == 0 && temp == 0)) {
        // In current design, AudioTrack client checks and ensures frame count validity before
        // passing it to AudioFlinger so AudioFlinger should not return a different value except
        // for fast track as it uses a special method of assigning frame count.
        ALOGW("Requested frameCount %u but received frameCount %u", frameCount, temp);
    }
    frameCount = temp;
    mAwaitBoost = false;
    if (flags & AUDIO_OUTPUT_FLAG_FAST) {
        if (trackFlags & IAudioFlinger::TRACK_FAST) {
            ALOGV("AUDIO_OUTPUT_FLAG_FAST successful; frameCount %u", frameCount);
            mAwaitBoost = true;
            if (sharedBuffer == 0) {
                // double-buffering is not required for fast tracks, due to tighter scheduling
                if (mNotificationFramesAct == 0 || mNotificationFramesAct > frameCount) {
                    mNotificationFramesAct = frameCount;
                }
            }
        } else {
            ALOGV("AUDIO_OUTPUT_FLAG_FAST denied by server; frameCount %u", frameCount);
            // once denied, do not request again if IAudioTrack is re-created
            flags = (audio_output_flags_t) (flags & ~AUDIO_OUTPUT_FLAG_FAST);
            mFlags = flags;
            if (sharedBuffer == 0) {  // stream mode
                if (mNotificationFramesAct == 0 || mNotificationFramesAct > frameCount/nBuffering) {
                    mNotificationFramesAct = frameCount/nBuffering;
                }
            }
        }
    }
    if (flags & AUDIO_OUTPUT_FLAG_COMPRESS_OFFLOAD) {
        if (trackFlags & IAudioFlinger::TRACK_OFFLOAD) {
            ALOGV("AUDIO_OUTPUT_FLAG_OFFLOAD successful");
        } else {
            ALOGW("AUDIO_OUTPUT_FLAG_OFFLOAD denied by server");
            flags = (audio_output_flags_t) (flags & ~AUDIO_OUTPUT_FLAG_COMPRESS_OFFLOAD);
            mFlags = flags;
            return NO_INIT;
        }
    }
    mRefreshRemaining = true;
    // Starting address of buffers in shared memory.  If there is a shared buffer, buffers
    // is the value of pointer() for the shared buffer, otherwise buffers points
    // immediately after the control block.  This address is for the mapping within client
    // address space.  AudioFlinger::TrackBase::mBuffer is for the server address space.
    void* buffers;
    if (sharedBuffer == 0) {  // stream mode
        buffers = (char*)cblk + sizeof(audio_track_cblk_t);
    } else {
        buffers = sharedBuffer->pointer();
    }
    mAudioTrack->attachAuxEffect(mAuxEffectId);
    // FIXME don't believe this lie
    mLatency = afLatency + (1000*frameCount) / sampleRate;
    mFrameCount = frameCount;
    // If IAudioTrack is re-created, don't let the requested frameCount
    // decrease.  This can confuse clients that cache frameCount().
    if (frameCount > mReqFrameCount) {
        mReqFrameCount = frameCount;
    }
    // update proxy
    if (sharedBuffer == 0) {
        mStaticProxy.clear();
        mProxy = new AudioTrackClientProxy(cblk, buffers, frameCount, mFrameSizeAF);
    } else {
        mStaticProxy = new StaticAudioTrackClientProxy(cblk, buffers, frameCount, mFrameSizeAF);
        mProxy = mStaticProxy;
    }
    mProxy->setVolumeLR((uint32_t(uint16_t(mVolume[RIGHT] * 0x1000)) << 16) |
            uint16_t(mVolume[LEFT] * 0x1000));
    mProxy->setSendLevel(mSendLevel);
    mProxy->setSampleRate(mSampleRate);
    mProxy->setEpoch(epoch);
    mProxy->setMinimum(mNotificationFramesAct);
    mDeathNotifier = new DeathNotifier(this);
    mAudioTrack->asBinder()->linkToDeath(mDeathNotifier, this);
    return NO_ERROR;
}

IAudioTrack establishes the relationship between AudioTrack and AudioFlinger. In static mode, the anonymous shared memory that holds the audio data is created on the AudioTrack side; in stream mode, it is created on the AudioFlinger side. The two kinds of shared memory differ: in stream mode, an audio_track_cblk_t object is constructed at the head of the shared memory and coordinates the pace of the producer (AudioTrack) and the consumer (AudioFlinger). createTrack then creates a Track object inside AudioFlinger.
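
Conceptually, the stream-mode shared memory looks like the sketch below (field names simplified; the real audio_track_cblk_t in AudioTrackShared.h has many more members):

// Not the real definition: a minimal sketch of the producer/consumer control
// block that sits at the head of the stream-mode shared memory.
struct cblk_sketch {
    volatile int32_t rear;        // write index, advanced by AudioTrack (producer)
    volatile int32_t front;       // read index, advanced by AudioFlinger (consumer)
    uint32_t         frameCount_; // ring buffer capacity, in frames
    // ... plus futex words and flags used to block and wake the two sides ...
};
// layout: [ audio_track_cblk_t | ring buffer of frameCount_ * frameSize bytes ]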

frameworks\av\services\audioflinger\AudioFlinger.cpp

sp<IAudioTrack> AudioFlinger::createTrack(
        audio_stream_type_t streamType,
        uint32_t sampleRate,
        audio_format_t format,
        audio_channel_mask_t channelMask,
        size_t frameCount,
        IAudioFlinger::track_flags_t *flags,
        const sp<IMemory>& sharedBuffer,
        audio_io_handle_t output,
        pid_t tid,
        int *sessionId,
        String8& name,
        int clientUid,
        status_t *status)
{
    sp<PlaybackThread::Track> track;
    sp<TrackHandle> trackHandle;
    sp<Client> client;
    status_t lStatus;
    int lSessionId;
    // client AudioTrack::set already implements AUDIO_STREAM_DEFAULT => AUDIO_STREAM_MUSIC,
    // but if someone uses binder directly they could bypass that and cause us to crash
    if (uint32_t(streamType) >= AUDIO_STREAM_CNT) {
        ALOGE("createTrack() invalid stream type %d", streamType);
        lStatus = BAD_VALUE;
        goto Exit;
    }
    // client is responsible for conversion of 8-bit PCM to 16-bit PCM,
    // and we don't yet support 8.24 or 32-bit PCM
    if (audio_is_linear_pcm(format) && format != AUDIO_FORMAT_PCM_16_BIT) {
        ALOGE("createTrack() invalid format %d", format);
        lStatus = BAD_VALUE;
        goto Exit;
    }
    {
        Mutex::Autolock _l(mLock);
        // look up the PlaybackThread by the playback thread id; openOutput saved the playback
        // threads as key/value pairs in AudioFlinger's mPlaybackThreads
        PlaybackThread *thread = checkPlaybackThread_l(output);
        PlaybackThread *effectThread = NULL;
        if (thread == NULL) {
            ALOGE("no playback thread found for output handle %d", output);
            lStatus = BAD_VALUE;
            goto Exit;
        }
        pid_t pid = IPCThreadState::self()->getCallingPid();
        // check whether a Client object already exists for the calling process (by pid);
        // if not, create one
        client = registerPid_l(pid);
        ALOGV("createTrack() sessionId: %d", (sessionId == NULL) ? -2 : *sessionId);
        if (sessionId != NULL && *sessionId != AUDIO_SESSION_OUTPUT_MIX) {
            // check if an effect chain with the same session ID is present on another
            // output thread and move it here.
            // Walk the other playback threads; if one has a Track with the same sessionId,
            // take that thread as this Track's effectThread.
            for (size_t i = 0; i < mPlaybackThreads.size(); i++) {
                sp<PlaybackThread> t = mPlaybackThreads.valueAt(i);
                if (mPlaybackThreads.keyAt(i) != output) {
                    uint32_t sessions = t->hasAudioSession(*sessionId);
                    if (sessions & PlaybackThread::EFFECT_SESSION) {
                        effectThread = t.get();
                        break;
                    }
                }
            }
            lSessionId = *sessionId;
        } else {
            // if no audio session id is provided, create one here
            lSessionId = nextUniqueId();
            if (sessionId != NULL) {
                *sessionId = lSessionId;
            }
        }
        ALOGV("createTrack() lSessionId: %d", lSessionId);
        // create the Track inside the PlaybackThread we found
        track = thread->createTrack_l(client, streamType, sampleRate, format,
                channelMask, frameCount, sharedBuffer, lSessionId, flags, tid, clientUid, &lStatus);
        // move effect chain to this output thread if an effect on same session was waiting
        // for a track to be created
        if (lStatus == NO_ERROR && effectThread != NULL) {
            Mutex::Autolock _dl(thread->mLock);
            Mutex::Autolock _sl(effectThread->mLock);
            moveEffectChain_l(lSessionId, effectThread, thread, true);
        }
        // Look for sync events awaiting for a session to be used.
        for (int i = 0; i < (int)mPendingSyncEvents.size(); i++) {
            if (mPendingSyncEvents[i]->triggerSession() == lSessionId) {
                if (thread->isValidSyncEvent(mPendingSyncEvents[i])) {
                    if (lStatus == NO_ERROR) {
                        (void) track->setSyncEvent(mPendingSyncEvents[i]);
                    } else {
                        mPendingSyncEvents[i]->cancel();
                    }
                    mPendingSyncEvents.removeAt(i);
                    i--;
                }
            }
        }
    }
    // the Track was created successfully; now create its proxy object, a TrackHandle
    if (lStatus == NO_ERROR) {
        // s for server's pid, n for normal mixer name, f for fast index
        name = String8::format("s:%d;n:%d;f:%d", getpid_cached,
                track->name() - AudioMixer::TRACK0, track->fastIndex());
        trackHandle = new TrackHandle(track);
    } else {
        // remove local strong reference to Client before deleting the Track so that the Client
        // destructor is called by the TrackBase destructor with mLock held
        client.clear();
        track.clear();
    }
Exit:
    if (status != NULL) {
        *status = lStatus;
    }
    /**
     * Return the IAudioTrack proxy object to the client process, which can then access
     * the created Track across processes:
     * BpAudioTrack --> BnAudioTrack --> TrackHandle --> Track
     */
    return trackHandle;
}

This function first creates (one per application process) a Client object that talks directly to that client process. It then looks up the PlaybackThread by the playback thread id and hands the Track creation over to it. Once the PlaybackThread has created the Track, a TrackHandle object is created to act as its communication proxy, since the Track itself has no Binder capability.


Constructing the Client object

A Client object, keyed by process pid, is created for the client requesting audio playback.

sp<AudioFlinger::Client> AudioFlinger::registerPid_l(pid_t pid)
{
    // If pid is already in the mClients wp<> map, then use that entry
    // (for which promote() is always != 0), otherwise create a new entry and Client.
    sp<Client> client = mClients.valueFor(pid).promote();
    if (client == 0) {
        client = new Client(this, pid);
        mClients.add(pid, client);
    }
    return client;
}

AudioFlinger's member variable mClients stores pid/Client pairs. The Client object for the pid is looked up first; if it is null, a new Client object is created for the client process.

AudioFlinger::Client::Client(const sp<AudioFlinger>& audioFlinger, pid_t pid)
    :   RefBase(),
        mAudioFlinger(audioFlinger),
        // FIXME should be a "k" constant not hard-coded, in .h or ro. property, see 4 lines below
        mMemoryDealer(new MemoryDealer(2*1024*1024, "AudioFlinger::Client")),
        mPid(pid),
        mTimedTrackCount(0)
{
    // 1 MB of address space is good for 32 tracks, 8 buffers each, 4 KB/buffer
}

When the Client object is constructed, it creates a MemoryDealer object, which is used to allocate shared memory.

frameworks\native\libs\binder\MemoryDealer.cpp

MemoryDealer::MemoryDealer(size_t size, const char* name)
    : mHeap(new MemoryHeapBase(size, 0, name)),    // create shared memory of the given size
      mAllocator(new SimpleBestFitAllocator(size)) // create the allocator that carves it up
{
}

MemoryDealer is a utility class for allocating shared memory. Every Client owns one MemoryDealer, which means each client process allocates from its own private pool: at construction, MemoryDealer creates an anonymous shared memory region of 2*1024*1024 bytes, and every Track that this client process's AudioTracks create inside AudioFlinger allocates its buffer from that region.
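
The in-code comment above gives the sizing rationale: 32 tracks × 8 buffers × 4 KB per buffer = 1 MB of address space, so the 2 MB heap leaves a factor-of-two margin.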

SimpleBestFitAllocator::SimpleBestFitAllocator(size_t size)
{
    size_t pagesize = getpagesize();
    mHeapSize = ((size + pagesize-1) & ~(pagesize-1)); // round up to a whole page
    chunk_t* node = new chunk_t(0, mHeapSize / kMemoryAlign);
    mList.insertHead(node);
}

So when an AudioTrack in an application process asks AudioFlinger to create a Track in some PlaybackThread, AudioFlinger first creates a Client object for that process along with a 2 MB block of shared memory; when the Track is created, it allocates its playback buffer inside that 2 MB block.


Creating the Track object
sp<AudioFlinger::PlaybackThread::Track> AudioFlinger::PlaybackThread::createTrack_l(
        const sp<AudioFlinger::Client>& client,
        audio_stream_type_t streamType,
        uint32_t sampleRate,
        audio_format_t format,
        audio_channel_mask_t channelMask,
        size_t frameCount,
        const sp<IMemory>& sharedBuffer,
        int sessionId,
        IAudioFlinger::track_flags_t *flags,
        pid_t tid,
        int uid,
        status_t *status)
{
    sp<Track> track;
    status_t lStatus;
    bool isTimed = (*flags & IAudioFlinger::TRACK_TIMED) != 0;
    // client expresses a preference for FAST, but we get the final say
    if (*flags & IAudioFlinger::TRACK_FAST) {
      if (
            // not timed
            (!isTimed) &&
            // either of these use cases:
            (
              // use case 1: shared buffer with any frame count
              (
                (sharedBuffer != 0)
              ) ||
              // use case 2: callback handler and frame count is default or at least as large as HAL
              (
                (tid != -1) &&
                ((frameCount == 0) ||
                (frameCount >= (mFrameCount * kFastTrackMultiplier)))
              )
            ) &&
            // PCM data
            audio_is_linear_pcm(format) &&
            // mono or stereo
            ( (channelMask == AUDIO_CHANNEL_OUT_MONO) ||
              (channelMask == AUDIO_CHANNEL_OUT_STEREO) ) &&
#ifndef FAST_TRACKS_AT_NON_NATIVE_SAMPLE_RATE
            // hardware sample rate
            (sampleRate == mSampleRate) &&
#endif
            // normal mixer has an associated fast mixer
            hasFastMixer() &&
            // there are sufficient fast track slots available
            (mFastTrackAvailMask != 0)
            // FIXME test that MixerThread for this fast track has a capable output HAL
            // FIXME add a permission test also?
        ) {
        // if frameCount not specified, then it defaults to fast mixer (HAL) frame count
        if (frameCount == 0) {
            frameCount = mFrameCount * kFastTrackMultiplier;
        }
        ALOGV("AUDIO_OUTPUT_FLAG_FAST accepted: frameCount=%d mFrameCount=%d",
                frameCount, mFrameCount);
      } else {
        ALOGV("AUDIO_OUTPUT_FLAG_FAST denied: isTimed=%d sharedBuffer=%p frameCount=%d "
                "mFrameCount=%d format=%d isLinear=%d channelMask=%#x sampleRate=%u mSampleRate=%u "
                "hasFastMixer=%d tid=%d fastTrackAvailMask=%#x",
                isTimed, sharedBuffer.get(), frameCount, mFrameCount, format,
                audio_is_linear_pcm(format),
                channelMask, sampleRate, mSampleRate, hasFastMixer(), tid, mFastTrackAvailMask);
        *flags &= ~IAudioFlinger::TRACK_FAST;
        // For compatibility with AudioTrack calculation, buffer depth is forced
        // to be at least 2 x the normal mixer frame count and cover audio hardware latency.
        // This is probably too conservative, but legacy application code may depend on it.
        // If you change this calculation, also review the start threshold which is related.
        uint32_t latencyMs = mOutput->stream->get_latency(mOutput->stream);
        uint32_t minBufCount = latencyMs / ((1000 * mNormalFrameCount) / mSampleRate);
        if (minBufCount < 2) {
            minBufCount = 2;
        }
        size_t minFrameCount = mNormalFrameCount * minBufCount;
        if (frameCount < minFrameCount) {
            frameCount = minFrameCount;
        }
      }
    }
    if (mType == DIRECT) {
        if ((format & AUDIO_FORMAT_MAIN_MASK) == AUDIO_FORMAT_PCM) {
            if (sampleRate != mSampleRate || format != mFormat || channelMask != mChannelMask) {
                ALOGE("createTrack_l() Bad parameter: sampleRate %u format %d, channelMask 0x%08x "
                        "for output %p with format %d", sampleRate, format, channelMask, mOutput, mFormat);
                lStatus = BAD_VALUE;
                goto Exit;
            }
        }
    } else if (mType == OFFLOAD) {
        if (sampleRate != mSampleRate || format != mFormat || channelMask != mChannelMask) {
            ALOGE("createTrack_l() Bad parameter: sampleRate %d format %d, channelMask 0x%08x \""
                    "for output %p with format %d", sampleRate, format, channelMask, mOutput, mFormat);
            lStatus = BAD_VALUE;
            goto Exit;
        }
    } else {
        if ((format & AUDIO_FORMAT_MAIN_MASK) != AUDIO_FORMAT_PCM) {
                ALOGE("createTrack_l() Bad parameter: format %d \""
                        "for output %p with format %d", format, mOutput, mFormat);
                lStatus = BAD_VALUE;
                goto Exit;
        }
        // Resampler implementation limits input sampling rate to 2 x output sampling rate.
        if (sampleRate > mSampleRate*2) {
            ALOGE("Sample rate out of range: %u mSampleRate %u", sampleRate, mSampleRate);
            lStatus = BAD_VALUE;
            goto Exit;
        }
    }
    lStatus = initCheck();
    if (lStatus != NO_ERROR) {
        ALOGE("Audio driver not initialized.");
        goto Exit;
    }
    {   // scope for mLock
        Mutex::Autolock _l(mLock);
        ALOGD("ceateTrack_l() got lock"); // SPRD: Add some log
        // all tracks in same audio session must share the same routing strategy otherwise
        // conflicts will happen when tracks are moved from one output to another by audio policy
        // manager
        uint32_t strategy = AudioSystem::getStrategyForStream(streamType);
        for (size_t i = 0; i < mTracks.size(); ++i) {
            sp<Track> t = mTracks[i];
            if (t != 0 && !t->isOutputTrack()) {
                uint32_t actual = AudioSystem::getStrategyForStream(t->streamType());
                if (sessionId == t->sessionId() && strategy != actual) {
                    ALOGE("createTrack_l() mismatched strategy; expected %u but found %u",
                            strategy, actual);
                    lStatus = BAD_VALUE;
                    goto Exit;
                }
            }
        }
        if (!isTimed) {
            track = new Track(this, client, streamType, sampleRate, format,
                    channelMask, frameCount, sharedBuffer, sessionId, uid, *flags);
        } else {
            track = TimedTrack::create(this, client, streamType, sampleRate, format,
                    channelMask, frameCount, sharedBuffer, sessionId, uid);
        }
        if (track == 0 || track->getCblk() == NULL || track->name() < 0) {
            lStatus = NO_MEMORY;
            goto Exit;
        }
        mTracks.add(track);
        sp<EffectChain> chain = getEffectChain_l(sessionId);
        if (chain != 0) {
            ALOGV("createTrack_l() setting main buffer %p", chain->inBuffer());
            track->setMainBuffer(chain->inBuffer());
            chain->setStrategy(AudioSystem::getStrategyForStream(track->streamType()));
            chain->incTrackCnt();
        }
        if ((*flags & IAudioFlinger::TRACK_FAST) && (tid != -1)) {
            pid_t callingPid = IPCThreadState::self()->getCallingPid();
            // we don't have CAP_SYS_NICE, nor do we want to have it as it's too powerful,
            // so ask activity manager to do this on our behalf
            sendPrioConfigEvent_l(callingPid, tid, kPriorityAudioApp);
        }
    }
    lStatus = NO_ERROR;
Exit:
    if (status) {
        *status = lStatus;
    }
    return track;
}

This creates a Track object for the AudioTrack. Track derives from TrackBase, so constructing a Track first runs the TrackBase constructor.


AudioFlinger::ThreadBase::TrackBase::TrackBase(
            ThreadBase *thread,               // the playback thread this track belongs to
            const sp<Client>& client,         // the owning Client
            uint32_t sampleRate,              // sample rate
            audio_format_t format,            // audio format
            audio_channel_mask_t channelMask, // channel mask
            size_t frameCount,                // number of audio frames
            const sp<IMemory>& sharedBuffer,  // shared memory
            int sessionId,
            int clientUid,
            bool isOut)
    :   RefBase(),
        mThread(thread),
        mClient(client),
        mCblk(NULL),
        // mBuffer
        mState(IDLE),
        mSampleRate(sampleRate),
        mFormat(format),
        mChannelMask(channelMask),
        mChannelCount(popcount(channelMask)),
        mFrameSize(audio_is_linear_pcm(format) ?
                mChannelCount * audio_bytes_per_sample(format) : sizeof(int8_t)),
        mFrameCount(frameCount),
        mSessionId(sessionId),
        mIsOut(isOut),
        mServerProxy(NULL),
        mId(android_atomic_inc(&nextTrackId)),
        mTerminated(false)
{
    // if the caller is us, trust the specified uid
    if (IPCThreadState::self()->getCallingPid() != getpid_cached || clientUid == -1) {
        int newclientUid = IPCThreadState::self()->getCallingUid();
        if (clientUid != -1 && clientUid != newclientUid) {
            ALOGW("uid %d tried to pass itself off as %d", newclientUid, clientUid);
        }
        clientUid = newclientUid;
    }
    // clientUid contains the uid of the app that is responsible for this track, so we can blame
    // it; record the uid of the application process
    mUid = clientUid;
    // client == 0 implies sharedBuffer == 0
    ALOG_ASSERT(!(client == 0 && sharedBuffer != 0));
    ALOGV_IF(sharedBuffer != 0, "sharedBuffer: %p, size: %d", sharedBuffer->pointer(),
            sharedBuffer->size());
    // size of the audio_track_cblk_t control block
    size_t size = sizeof(audio_track_cblk_t);
    // size of the audio data buffer = frameCount * mFrameSize (frameCount rounded up in stream mode)
    size_t bufferSize = (sharedBuffer == 0 ? roundup(frameCount) : frameCount) * mFrameSize;
    /**
     * In stream mode an audio_track_cblk_t is needed to coordinate the producer and the
     * consumer, so the shared memory holds both:
     *  ---------------------------------------------
     * | audio_track_cblk_t |         buffer         |
     *  ---------------------------------------------
     */
    if (sharedBuffer == 0) { // stream mode
        size += bufferSize;
    }
    // if a Client exists, allocate the buffer through the Client
    if (client != 0) {
        // ask the MemoryDealer owned by the Client to allocate the buffer
        mCblkMemory = client->heap()->allocate(size);
        if (mCblkMemory != 0) { // allocation succeeded
            // interpret the start of the shared memory as an audio_track_cblk_t
            mCblk = static_cast<audio_track_cblk_t *>(mCblkMemory->pointer());
            // can't assume mCblk != NULL
        } else {
            ALOGE("not enough memory for AudioTrack size=%u", size);
            client->heap()->dump("AudioTrack");
            return;
        }
    } else { // no Client: allocate the memory as a plain array
        // this syntax avoids calling the audio_track_cblk_t constructor twice
        mCblk = (audio_track_cblk_t *) new uint8_t[size];
        // assume mCblk != NULL
    }
    /**
     * If a Client object was created for the application process, the audio buffer is
     * allocated through the Client; otherwise it is allocated as a plain array. In stream
     * mode the audio_track_cblk_t object is constructed at the head of the allocated
     * buffer, while in static mode a standalone audio_track_cblk_t object is created.
     */
    if (mCblk != NULL) {
        // construct the shared structure in-place
        new(mCblk) audio_track_cblk_t();
        // clear all buffers
        mCblk->frameCount_ = frameCount;
        if (sharedBuffer == 0) { // stream mode
            // point mBuffer at the start of the data area, right after the control block
            mBuffer = (char*)mCblk + sizeof(audio_track_cblk_t);
            // zero the data buffer
            memset(mBuffer, 0, bufferSize);
        } else { // static mode
            mBuffer = sharedBuffer->pointer();
#if 0
            mCblk->mFlags = CBLK_FORCEREADY;   // FIXME hack, need to fix the track ready logic
#endif
        }
#ifdef TEE_SINK

#endif
        ALOGD("TrackBase constructed"); // SPRD: add some log
    }
}
 

The TrackBase constructor mainly allocates the shared memory used for playback. In static mode the shared memory is allocated by the application process itself, while in stream mode it is allocated by AudioFlinger. In both modes an audio_track_cblk_t object is created; the only difference is that in stream mode the audio_track_cblk_t is placed at the head of the shared memory.

Static mode: (figure omitted: the data buffer is the application-supplied shared memory; the audio_track_cblk_t is allocated separately)

Stream mode: (figure omitted: one shared-memory block holding the audio_track_cblk_t followed by the data buffer)
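The stream-mode placement can be condensed into a few lines. The following is a minimal, self-contained sketch of the placement-new arithmetic; it is plain C++ rather than the AOSP code, and cblk_like and makeStreamBlock are illustrative names:

#include <cstdint>
#include <cstring>
#include <new>

// stand-in for the real control block (illustrative only)
struct cblk_like { uint32_t frameCount_; };

// stream mode: one allocation holds the control block followed by the PCM buffer
char* makeStreamBlock(size_t frameCount, size_t frameSize, char** outBuffer) {
    size_t bufferSize = frameCount * frameSize;
    char* raw = new char[sizeof(cblk_like) + bufferSize];
    cblk_like* cblk = new (raw) cblk_like();   // placement-new at the head of the block
    cblk->frameCount_ = (uint32_t)frameCount;
    *outBuffer = raw + sizeof(cblk_like);      // data area starts right after the cblk
    memset(*outBuffer, 0, bufferSize);         // zero the PCM buffer
    return raw;
}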

Next, let's continue with the Track constructor:

AudioFlinger::PlaybackThread::Track::Track(
            PlaybackThread *thread,           // the playback thread this track belongs to
            const sp<Client>& client,         // the owning Client
            audio_stream_type_t streamType,   // audio stream type
            uint32_t sampleRate,              // sample rate
            audio_format_t format,            // audio format
            audio_channel_mask_t channelMask, // channel mask
            size_t frameCount,                // number of audio frames
            const sp<IMemory>& sharedBuffer,  // shared memory
            int sessionId,
            int uid,
            IAudioFlinger::track_flags_t flags)
    :   TrackBase(thread, client, sampleRate, format, channelMask, frameCount, sharedBuffer,
                sessionId, uid, true /*isOut*/),
    mFillingUpStatus(FS_INVALID),
    // mRetryCount initialized later when needed
    mSharedBuffer(sharedBuffer),
    mStreamType(streamType),
    mName(-1),  // see note below
    mMainBuffer(thread->mixBuffer()),
    mAuxBuffer(NULL),
    mAuxEffectId(0), mHasVolumeController(false),
    mPresentationCompleteFrames(0),
    mFlags(flags),
    mFastIndex(-1),
    mCachedVolume(1.0),
    mIsInvalid(false),
    mAudioTrackServerProxy(NULL),
    mResumeToStopping(false)
{
    if (mCblk != NULL) { // the audio_track_cblk_t object exists
        if (sharedBuffer == 0) { // stream mode
            mAudioTrackServerProxy = new AudioTrackServerProxy(mCblk, mBuffer, frameCount,
                    mFrameSize);
        } else { // static mode
            mAudioTrackServerProxy = new StaticAudioTrackServerProxy(mCblk, mBuffer, frameCount,
                    mFrameSize);
        }
        mServerProxy = mAudioTrackServerProxy;
        // to avoid leaking a track name, do not allocate one unless there is an mCblk
        mName = thread->getTrackName_l(channelMask, sessionId);
        if (mName < 0) {
            ALOGE("no more track names available");
            return;
        }
        // only allocate a fast track index if we were able to allocate a normal track name
        if (flags & IAudioFlinger::TRACK_FAST) {
            mAudioTrackServerProxy->framesReadyIsCalledByMultipleThreads();
            ALOG_ASSERT(thread->mFastTrackAvailMask != 0);
            int i = __builtin_ctz(thread->mFastTrackAvailMask);
            ALOG_ASSERT(0 < i && i < (int)FastMixerState::kMaxFastTracks);
            // FIXME This is too eager.  We allocate a fast track index before the
            //       fast track becomes active.  Since fast tracks are a scarce resource,
            //       this means we are potentially denying other more important fast tracks
            //       from being created.  It would be better to allocate the index dynamically.
            mFastIndex = i;
            // Read the initial underruns because this field is never cleared by the fast mixer
            mObservedUnderruns = thread->getFastTrackUnderruns(i);
            thread->mFastTrackAvailMask &= ~(1 << i);
        }
    }
    ALOGV("Track constructor name %d, calling pid %d", mName,
            IPCThreadState::self()->getCallingPid());
}

During TrackBase construction, the memory for the audio_track_cblk_t object is allocated in one of two ways, depending on whether a Client object exists, and the audio_track_cblk_t object is then constructed in it. The Track constructor then creates a different server-side proxy object for each playback mode:

in stream mode an AudioTrackServerProxy is created, and in static mode a StaticAudioTrackServerProxy. Both wrap the shared audio_track_cblk_t; a sketch of how the playback thread consumes frames through this proxy follows.
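Roughly, the consumer side looks like this (a hedged sketch based on ServerProxy in AudioTrackShared.h; the exact obtainBuffer()/releaseBuffer() signatures vary across Android versions, and mixInto() is an illustrative name):

// hedged sketch of the playback thread consuming frames via the server proxy
ServerProxy::Buffer buf;
buf.mFrameCount = framesWanted;                 // ask for up to this many readable frames
status_t err = mServerProxy->obtainBuffer(&buf); // map a readable region of the FIFO
if (err == NO_ERROR && buf.mFrameCount > 0) {
    mixInto(mainBuffer, buf.mRaw, buf.mFrameCount); // consume the PCM frames
    mServerProxy->releaseBuffer(&buf);          // advance the FIFO's read position
}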


In stream mode, a data buffer of the requested size is allocated at the same time; its layout is the control block followed by the data area, as shown in the diagram embedded in the TrackBase constructor above.


Recall that when the Client object was constructed, a MemoryDealer allocator was created along with a 2 MB block of anonymous shared memory. The buffer here is carved out of that anonymous shared memory by the MemoryDealer.

frameworks\native\libs\binder\MemoryDealer.cpp

sp<IMemory> MemoryDealer::allocate(size_t size)
{
    sp<IMemory> memory;
    // allocate 'size' bytes from the shared heap; returns the offset of the block
    const ssize_t offset = allocator()->allocate(size);
    if (offset >= 0) {
        // wrap the allocated region in an Allocation (IMemory) object
        memory = new Allocation(this, heap(), offset, size);
    }
    return memory;
}
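For reference, this is roughly how such a heap is created and sub-allocated (a hedged sketch; the heap size and name string here are assumptions for illustration):

// hedged sketch: create a per-client shared heap, then carve blocks out of it
sp<MemoryDealer> dealer = new MemoryDealer(2 * 1024 * 1024, "AudioFlingerClient");
sp<IMemory> block = dealer->allocate(size);  // returns 0 on failure
if (block != 0) {
    void*  base = block->pointer();          // address within the shared heap
    size_t sz   = block->size();
}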
size_t SimpleBestFitAllocator::allocate(size_t size, uint32_t flags)
{
    Mutex::Autolock _l(mLock);
    ssize_t offset = alloc(size, flags);
    return offset;
}

ssize_t SimpleBestFitAllocator::alloc(size_t size, uint32_t flags)
{
    if (size == 0) {
        return 0;
    }
    // round the request up to allocation units of kMemoryAlign bytes
    size = (size + kMemoryAlign - 1) / kMemoryAlign;
    chunk_t* free_chunk = 0;
    chunk_t* cur = mList.head();
    size_t pagesize = getpagesize();
    while (cur) {
        int extra = 0;
        if (flags & PAGE_ALIGNED)
            extra = ( -cur->start & ((pagesize/kMemoryAlign)-1) );
        // best fit
        if (cur->free && (cur->size >= (size+extra))) {
            if ((!free_chunk) || (cur->size < free_chunk->size)) {
                free_chunk = cur;
            }
            if (cur->size == size) {
                break;
            }
        }
        cur = cur->next;
    }
    if (free_chunk) {
        const size_t free_size = free_chunk->size;
        free_chunk->free = 0;
        free_chunk->size = size;
        if (free_size > size) {
            int extra = 0;
            if (flags & PAGE_ALIGNED)
                extra = ( -free_chunk->start & ((pagesize/kMemoryAlign)-1) );
            if (extra) {
                chunk_t* split = new chunk_t(free_chunk->start, extra);
                free_chunk->start += extra;
                mList.insertBefore(free_chunk, split);
            }
            ALOGE_IF((flags&PAGE_ALIGNED) &&
                    ((free_chunk->start*kMemoryAlign)&(pagesize-1)),
                    "PAGE_ALIGNED requested, but page is not aligned!!!");
            const ssize_t tail_free = free_size - (size+extra);
            if (tail_free > 0) {
                chunk_t* split = new chunk_t(
                        free_chunk->start + free_chunk->size, tail_free);
                mList.insertAfter(free_chunk, split);
            }
        }
        // chunk starts/sizes are stored in kMemoryAlign units; convert back to bytes
        return (free_chunk->start)*kMemoryAlign;
    }
    return NO_MEMORY;
}
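To make the arithmetic concrete, here is a hypothetical trace (the concrete kMemoryAlign value is an assumption, not taken from the source; it is a small power of two):

// hypothetical trace, assuming kMemoryAlign == 8
//   request         : size = 1060 bytes
//   unit conversion : size = (1060 + 8 - 1) / 8 = 133 allocation units
//   best-fit search : the smallest free chunk with cur->size >= 133 wins
//   split           : if the winner is larger, its tail becomes a new free chunk_t
//   returned offset : free_chunk->start * 8, a byte offset into the shared heap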

The audio_track_cblk_t object coordinates the pace of the producer, AudioTrack, and the consumer, AudioFlinger.


During createTrack(), AudioFlinger allocates this memory and returns it to AudioTrack through the IMemory interface, so AudioTrack and AudioFlinger end up managing the same audio_track_cblk_t. On top of it they implement a ring FIFO: AudioTrack writes audio data into the FIFO, and AudioFlinger reads it out, mixes it, and hands it to AudioHardware for playback. A minimal sketch of this handshake follows the list.

1) AudioTrack is the producer of the FIFO's data;

2) AudioFlinger is the consumer of the FIFO's data;
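The sketch below shows the ring-FIFO idea only. It is NOT the AOSP structure: the real audio_track_cblk_t uses atomics, futexes, and wrap-around server/client counters, while this toy version (SimpleAudioFifo, writeFrames, readFrames are all hypothetical names) is single-threaded:

#include <algorithm>
#include <cstdint>
#include <cstring>
#include <vector>

class SimpleAudioFifo {
public:
    SimpleAudioFifo(size_t frames, size_t frameSize)
        : mBuf(frames * frameSize), mFrames(frames), mFrameSize(frameSize) {}

    // producer side (cf. AudioTrack::write): copy as many frames as currently fit
    size_t writeFrames(const void* src, size_t count) {
        size_t avail = mFrames - (mWritePos - mReadPos);   // free space, in frames
        count = std::min(count, avail);
        for (size_t i = 0; i < count; ++i) {
            size_t idx = ((mWritePos + i) % mFrames) * mFrameSize;
            memcpy(&mBuf[idx], (const uint8_t*)src + i * mFrameSize, mFrameSize);
        }
        mWritePos += count;                                // publish the new frames
        return count;
    }

    // consumer side (cf. the mixer thread): drain up to 'count' frames
    size_t readFrames(void* dst, size_t count) {
        size_t filled = mWritePos - mReadPos;              // readable frames
        count = std::min(count, filled);
        for (size_t i = 0; i < count; ++i) {
            size_t idx = ((mReadPos + i) % mFrames) * mFrameSize;
            memcpy((uint8_t*)dst + i * mFrameSize, &mBuf[idx], mFrameSize);
        }
        mReadPos += count;                                 // free the space
        return count;
    }

private:
    std::vector<uint8_t> mBuf;
    size_t mFrames, mFrameSize;
    size_t mWritePos = 0, mReadPos = 0;  // monotonically increasing positions
};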

Constructing the TrackHandle

A Track object only handles audio business logic; it does not expose a cross-process Binder interface of its own. The communication duties are therefore delegated to another object, and that is the purpose of TrackHandle: it proxies the Track's communication and serves as the cross-process channel between Track and AudioTrack.

AudioFlinger::TrackHandle::TrackHandle(const sp<AudioFlinger::PlaybackThread::Track>& track)
    : BnAudioTrack(), mTrack(track)
{
}
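The handle then simply forwards each Binder call to the wrapped Track. A hedged sketch of the typical forwarding methods (the real AOSP versions carry a few extra checks and vary by release):

sp<IMemory> AudioFlinger::TrackHandle::getCblk() const {
    return mTrack->getCblk();   // hand the shared control block back to the client
}

status_t AudioFlinger::TrackHandle::start() {
    return mTrack->start();     // delegate to the Track owned by the playback thread
}

void AudioFlinger::TrackHandle::stop() {
    mTrack->stop();
}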


AudioFlinger owns several worker threads, and each thread owns several Tracks. A playback thread is in practice an instance of MixerThread. In MixerThread::threadLoop(), the thread mixes its Tracks, resampling them when necessary to a common sample rate (44.1 kHz), and then pushes the mixed audio out through the AudioHardware layer.
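In outline, the cycle looks like this (heavily condensed pseudocode, not the verbatim threadLoop(); variable names follow the AOSP sources of this era but argument lists are elided):

// hedged outline of one MixerThread::threadLoop() iteration
while (!exitPending()) {
    processConfigEvents();                        // handle priority/parameter requests
    mixer_state state = prepareTracks_l(&tracksToRemove); // pick the ready tracks
    if (state == MIXER_TRACKS_READY) {
        mAudioMixer->process(pts);                // mix (and resample) into mMixBuffer
    } else {
        memset(mMixBuffer, 0, mixBufferSize);     // no ready tracks: output silence
    }
    mOutput->stream->write(mOutput->stream,
            mMixBuffer, mixBufferSize);           // push mixed PCM to the audio HAL
}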


- The framework or Java layer creates an AudioTrack object through JNI;

- Based on the stream type and other parameters, an already-open audio output device is looked up; if no matching output exists, AudioFlinger is asked to open a new one;

- AudioFlinger creates a mixer thread (MixerThread) for that output device and returns the thread's id to AudioTrack as the return value of getOutput();

- AudioTrack calls AudioFlinger's createTrack() over Binder to create a Track; a TrackHandle Binder object is created on the server side, and an IAudioTrack proxy object is returned;

- AudioFlinger registers the Track with the MixerThread;

- Through the IAudioTrack interface, AudioTrack obtains the FIFO (audio_track_cblk_t) created inside AudioFlinger.
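Condensed into native calls, the client side of this flow is approximately the following (a sketch based on AudioTrack::set()/createTrack_l(); parameter lists differ slightly between Android releases):

// hedged sketch of the native client-side creation sequence
audio_io_handle_t output = AudioSystem::getOutput(streamType, sampleRate,
                                                  format, channelMask);
const sp<IAudioFlinger>& af = AudioSystem::get_audio_flinger();
sp<IAudioTrack> iTrack = af->createTrack(streamType, sampleRate, format, channelMask,
                                         frameCount, &trackFlags, sharedBuffer,
                                         output, tid,
                                         &mSessionId /* further args vary by version */);
sp<IMemory> iMem = iTrack->getCblk();   // the shared audio_track_cblk_t + FIFO
audio_track_cblk_t* cblk = static_cast<audio_track_cblk_t*>(iMem->pointer());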



AudioTrack start process


AudioTrack data-write process


AudioTrack stop process
