Android Audio Code Analysis 6 - AudioEffect
While reading the AudioSessionId-related code, we saw that an AudioTrack and a MediaPlayer sharing the same AudioSessionId also share an AudioEffect.
Today, let's find out what AudioEffect actually is.
Since the goal of reading this class is mainly to understand what AudioEffect is, the focus will be on the class documentation and the constructor.
*****************************************Source*************************************************
public class AudioEffect {
    static {
        System.loadLibrary("audioeffect_jni");
        native_init();
    }
    ...
    public AudioEffect(UUID type, UUID uuid, int priority, int audioSession)
            throws IllegalArgumentException, UnsupportedOperationException,
            RuntimeException {
        int[] id = new int[1];
        Descriptor[] desc = new Descriptor[1];
        // native initialization
        int initResult = native_setup(new WeakReference<AudioEffect>(this),
                type.toString(), uuid.toString(), priority, audioSession, id,
                desc);
        if (initResult != SUCCESS && initResult != ALREADY_EXISTS) {
            Log.e(TAG, "Error code " + initResult
                    + " when initializing AudioEffect.");
            switch (initResult) {
            case ERROR_BAD_VALUE:
                throw (new IllegalArgumentException("Effect type: " + type
                        + " not supported."));
            case ERROR_INVALID_OPERATION:
                throw (new UnsupportedOperationException(
                        "Effect library not loaded"));
            default:
                throw (new RuntimeException(
                        "Cannot initialize effect engine for type: " + type
                                + "Error: " + initResult));
            }
        }
        mId = id[0];
        mDescriptor = desc[0];
        synchronized (mStateLock) {
            mState = STATE_INITIALIZED;
        }
    }
    ...
}
**********************************************************************************************
Source path:
frameworks\base\media\java\android\media\audiofx\AudioEffect.java
###########################################Explanation################################################
First, look at the comment on the class as a whole:
/**
 * AudioEffect is the base class for controlling audio effects provided by the android audio
 * framework.
 * <p>Applications should not use the AudioEffect class directly but one of its derived classes to
 * control specific effects:
 * <ul>
 * <li> {@link android.media.audiofx.Equalizer}</li>
 * <li> {@link android.media.audiofx.Virtualizer}</li>
 * <li> {@link android.media.audiofx.BassBoost}</li>
 * <li> {@link android.media.audiofx.PresetReverb}</li>
 * <li> {@link android.media.audiofx.EnvironmentalReverb}</li>
 * </ul>
 * <p>If the audio effect is to be applied to a specific AudioTrack or MediaPlayer instance,
 * the application must specify the audio session ID of that instance when creating the AudioEffect.
 * (see {@link android.media.MediaPlayer#getAudioSessionId()} for details on audio sessions).
 * To apply an effect to the global audio output mix, session 0 must be specified when creating the
 * AudioEffect.
 * <p>Creating an effect on the output mix (audio session 0) requires permission
 * {@link android.Manifest.permission#MODIFY_AUDIO_SETTINGS}
 * <p>Creating an AudioEffect object will create the corresponding effect engine in the audio
 * framework if no instance of the same effect type exists in the specified audio session.
 * If one exists, this instance will be used.
 * <p>The application creating the AudioEffect object (or a derived class) will either receive
 * control of the effect engine or not depending on the priority parameter. If priority is higher
 * than the priority used by the current effect engine owner, the control will be transfered to the
 * new object. Otherwise control will remain with the previous object. In this case, the new
 * application will be notified of changes in effect engine state or control ownership by the
 * appropiate listener.
 */
Roughly, it says the following:
AudioEffect is the base class, provided by the android audio framework, for controlling audio effects.
Applications should not instantiate AudioEffect directly; instead, they should instantiate one of its derived classes, which control specific effects.
The derived classes that can be instantiated are:
- android.media.audiofx.Equalizer
- android.media.audiofx.Virtualizer
- android.media.audiofx.BassBoost
- android.media.audiofx.PresetReverb
- android.media.audiofx.EnvironmentalReverb
If an audio effect is to be applied to a specific AudioTrack or MediaPlayer instance, the audio session ID of that instance must be specified when the AudioEffect is created.
To create an audio effect that applies to the global audio output mix, the audio session ID must be set to 0.
Creating an effect on the global output mix requires a specific permission:
android.Manifest.permission#MODIFY_AUDIO_SETTINGS
When an AudioEffect object is created, if no effect of that type exists yet in the specified audio session, a new effect engine is created in the audio framework.
Otherwise, the existing instance is used.
After an application creates an AudioEffect object, whether it gets control of the effect engine depends on the priority.
If its priority is higher than that of the application currently controlling the engine, control is transferred to it.
Otherwise, control remains with the previous owner.
Either way, the new application is notified through the appropriate listener whenever the effect engine's state or control ownership changes.
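The ownership rule described above can be sketched in a few lines of Java. This is a toy model with hypothetical names, not the framework's actual bookkeeping; it only captures the rule that a new client wins control when its priority is strictly higher than the current owner's.

```java
// Sketch of the control-ownership rule: the engine is shared, and a new
// client takes control only if its priority is strictly higher than the
// current owner's. Class and method names here are hypothetical.
public class EffectControlSketch {
    private int ownerPriority = Integer.MIN_VALUE; // no owner yet

    /** Returns true if the requesting client receives control. */
    public boolean requestControl(int priority) {
        if (priority > ownerPriority) {
            ownerPriority = priority; // control is transferred
            return true;
        }
        return false; // the previous owner keeps control
    }

    public static void main(String[] args) {
        EffectControlSketch engine = new EffectControlSketch();
        System.out.println(engine.requestControl(0));  // first client wins
        System.out.println(engine.requestControl(0));  // equal priority loses
        System.out.println(engine.requestControl(5));  // higher priority wins
    }
}
```

Note that an equal priority does not take control; per the class comment, only a strictly higher priority transfers ownership.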
Now look at the constructor and its comment:
// Constructor, Finalize
// --------------------
/**
 * Class constructor.
 *
 * @param type type of effect engine created. See {@link #EFFECT_TYPE_ENV_REVERB},
 *            {@link #EFFECT_TYPE_EQUALIZER} ... Types corresponding to
 *            built-in effects are defined by AudioEffect class. Other types
 *            can be specified provided they correspond an existing OpenSL
 *            ES interface ID and the corresponsing effect is available on
 *            the platform. If an unspecified effect type is requested, the
 *            constructor with throw the IllegalArgumentException. This
 *            parameter can be set to {@link #EFFECT_TYPE_NULL} in which
 *            case only the uuid will be used to select the effect.
 * @param uuid unique identifier of a particular effect implementation.
 *            Must be specified if the caller wants to use a particular
 *            implementation of an effect type. This parameter can be set to
 *            {@link #EFFECT_TYPE_NULL} in which case only the type will
 *            be used to select the effect.
 * @param priority the priority level requested by the application for
 *            controlling the effect engine. As the same effect engine can
 *            be shared by several applications, this parameter indicates
 *            how much the requesting application needs control of effect
 *            parameters. The normal priority is 0, above normal is a
 *            positive number, below normal a negative number.
 * @param audioSession system wide unique audio session identifier. If audioSession
 *            is not 0, the effect will be attached to the MediaPlayer or
 *            AudioTrack in the same audio session. Otherwise, the effect
 *            will apply to the output mix.
 *
 * @throws java.lang.IllegalArgumentException
 * @throws java.lang.UnsupportedOperationException
 * @throws java.lang.RuntimeException
 * @hide
 */
The comment describes each parameter in detail.
The type parameter is the type of effect to create. The standard types are already defined in the AudioEffect class and can be used directly.
To use any other type, an OpenSL ES interface ID must be provided, and the corresponding effect must be available on the platform.
If the requested type does not exist, an IllegalArgumentException is thrown.
If type is EFFECT_TYPE_NULL, the effect is selected by uuid alone.
The standard types are:
//++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
    /**
     * The following UUIDs define effect types corresponding to standard audio
     * effects whose implementation and interface conform to the OpenSL ES
     * specification. The definitions match the corresponding interface IDs in
     * OpenSLES_IID.h
     */
    /**
     * UUID for environmental reverb effect
     * @hide
     */
    public static final UUID EFFECT_TYPE_ENV_REVERB = UUID
            .fromString("c2e5d5f0-94bd-4763-9cac-4e234d06839e");
    /**
     * UUID for preset reverb effect
     * @hide
     */
    public static final UUID EFFECT_TYPE_PRESET_REVERB = UUID
            .fromString("47382d60-ddd8-11db-bf3a-0002a5d5c51b");
    /**
     * UUID for equalizer effect
     * @hide
     */
    public static final UUID EFFECT_TYPE_EQUALIZER = UUID
            .fromString("0bed4300-ddd6-11db-8f34-0002a5d5c51b");
    /**
     * UUID for bass boost effect
     * @hide
     */
    public static final UUID EFFECT_TYPE_BASS_BOOST = UUID
            .fromString("0634f220-ddd4-11db-a0fc-0002a5d5c51b");
    /**
     * UUID for virtualizer effect
     * @hide
     */
    public static final UUID EFFECT_TYPE_VIRTUALIZER = UUID
            .fromString("37cc2c00-dddd-11db-8577-0002a5d5c51b");
    /**
     * Null effect UUID. Used when the UUID for effect type of
     * @hide
     */
    public static final UUID EFFECT_TYPE_NULL = UUID
            .fromString("ec7178ec-e5e1-4432-a3f4-4657e6795210");
//----------------------------------------------------------------
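These constants are plain java.util.UUID values built with UUID.fromString(), so comparing a request against EFFECT_TYPE_NULL is just a value-equality check. A quick self-contained demonstration (the class name here is ours, the UUID strings are the ones from the source above):

```java
import java.util.UUID;

// UUID.fromString() parses the canonical 8-4-4-4-12 text form, and UUID
// equality is by value, so EFFECT_TYPE_NULL can be compared with equals().
public class UuidDemo {
    public static final UUID EFFECT_TYPE_NULL =
            UUID.fromString("ec7178ec-e5e1-4432-a3f4-4657e6795210");
    public static final UUID EFFECT_TYPE_EQUALIZER =
            UUID.fromString("0bed4300-ddd6-11db-8f34-0002a5d5c51b");

    public static void main(String[] args) {
        // A real effect type is never equal to the null sentinel.
        System.out.println(EFFECT_TYPE_EQUALIZER.equals(EFFECT_TYPE_NULL));
        // toString() round-trips to the same canonical text.
        System.out.println(EFFECT_TYPE_NULL.toString());
    }
}
```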
The uuid parameter selects one particular implementation of an effect.
If uuid is EFFECT_TYPE_NULL, the effect is selected by type alone.
As for how uuid is used in practice: searching the code shows that where type is not EFFECT_TYPE_NULL, uuid is usually EFFECT_TYPE_NULL,
and where uuid is not EFFECT_TYPE_NULL, type is usually EFFECT_TYPE_NULL.
So an effect is essentially selected by one of the two. Why have both parameters?
Perhaps type is meant for the standard effects, and uuid for everything else?
For example, the following code exists:
//++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
        // creating a volume controller on output mix ensures that ro.audio.silent mutes
        // audio after the effects and not before
        vc = new AudioEffect(AudioEffect.EFFECT_TYPE_NULL,
                UUID.fromString("119341a0-8469-11df-81f9-0002a5d5c51b"),
                0,
                0);
//----------------------------------------------------------------
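The selection rule inferred above — a null type matches by uuid only, a null uuid matches by type only — can be sketched as a small matcher. This is our own illustrative simplification (the Desc class and select() method are hypothetical), not the framework's actual lookup code:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.UUID;

// Sketch of the inferred selection rule: each of type/uuid is either a real
// UUID or the EFFECT_TYPE_NULL sentinel; a null field acts as a wildcard.
public class EffectSelectSketch {
    static final UUID NULL_UUID =
            UUID.fromString("ec7178ec-e5e1-4432-a3f4-4657e6795210");

    // Minimal stand-in for an effect descriptor (hypothetical).
    static class Desc {
        final UUID type, uuid;
        Desc(UUID type, UUID uuid) { this.type = type; this.uuid = uuid; }
    }

    /** Returns the first descriptor matching the (type, uuid) request. */
    static Desc select(List<Desc> available, UUID type, UUID uuid) {
        for (Desc d : available) {
            boolean typeOk = type.equals(NULL_UUID) || d.type.equals(type);
            boolean uuidOk = uuid.equals(NULL_UUID) || d.uuid.equals(uuid);
            if (typeOk && uuidOk) return d;
        }
        return null;
    }

    public static void main(String[] args) {
        UUID eq = UUID.fromString("0bed4300-ddd6-11db-8f34-0002a5d5c51b");
        UUID impl = UUID.fromString("119341a0-8469-11df-81f9-0002a5d5c51b");
        List<Desc> avail = new ArrayList<>();
        avail.add(new Desc(eq, impl));
        System.out.println(select(avail, eq, NULL_UUID) != null);   // by type
        System.out.println(select(avail, NULL_UUID, impl) != null); // by uuid
    }
}
```

This matches the volume-controller snippet above: type is EFFECT_TYPE_NULL, so the lookup is driven entirely by the implementation uuid.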
priority is the priority level; as described above, it decides who gets control of the effect engine.
audioSession has come up many times already:
AudioTrack and MediaPlayer instances with the same audio session ID share audio effects.
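The sharing behavior — one engine per (audio session, effect type), reused on subsequent requests, cf. the ALREADY_EXISTS result in the constructor — can be sketched with a plain map. Names are ours; this is an illustration of the reuse rule, not the framework's code:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the reuse rule: one engine per (session, effect type); a second
// request with the same key gets the already-existing engine.
public class SessionEffectSketch {
    private final Map<String, Object> engines = new HashMap<>();

    /** Returns the engine for this session/type, creating it on first use. */
    public Object engineFor(int audioSession, String effectType) {
        return engines.computeIfAbsent(audioSession + ":" + effectType,
                k -> new Object()); // stand-in for a real effect engine
    }

    public static void main(String[] args) {
        SessionEffectSketch s = new SessionEffectSketch();
        Object a = s.engineFor(7, "equalizer");
        System.out.println(a == s.engineFor(7, "equalizer")); // same engine
        System.out.println(a == s.engineFor(8, "equalizer")); // new session
    }
}
```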
public AudioEffect(UUID type, UUID uuid, int priority, int audioSession)
        throws IllegalArgumentException, UnsupportedOperationException,
        RuntimeException {
    int[] id = new int[1];

// ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
// The comment on the Descriptor class:
    /**
     * The effect descriptor contains information on a particular effect implemented in the
     * audio framework:<br>
     * <ul>
     * <li>type: UUID corresponding to the OpenSL ES interface implemented by this effect</li>
     * <li>uuid: UUID for this particular implementation</li>
     * <li>connectMode: {@link #EFFECT_INSERT} or {@link #EFFECT_AUXILIARY}</li>
     * <li>name: human readable effect name</li>
     * <li>implementor: human readable effect implementor name</li>
     * </ul>
     * The method {@link #queryEffects()} returns an array of Descriptors to facilitate effects
     * enumeration.
     */
// ----------------------------------------------------------------
    Descriptor[] desc = new Descriptor[1];
    // native initialization
    // native_setup was discussed before.
    // It is bound to land in some XXX_native_setup function on the native side,
    // which eventually creates a native-side AudioEffect object and saves it
    // to the java side via SetIntField.

// ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
// Sure enough, the JNI method table is found in
// frameworks\base\media\audioeffect\android_media_AudioEffect.cpp:
// Dalvik VM type signatures
static JNINativeMethod gMethods[] = {
    {"native_init",          "()V",       (void *)android_media_AudioEffect_native_init},
    {"native_setup",         "(Ljava/lang/Object;Ljava/lang/String;Ljava/lang/String;II[I[Ljava/lang/Object;)I",
                                          (void *)android_media_AudioEffect_native_setup},
    {"native_finalize",      "()V",       (void *)android_media_AudioEffect_native_finalize},
    {"native_release",       "()V",       (void *)android_media_AudioEffect_native_release},
    {"native_setEnabled",    "(Z)I",      (void *)android_media_AudioEffect_native_setEnabled},
    {"native_getEnabled",    "()Z",       (void *)android_media_AudioEffect_native_getEnabled},
    {"native_hasControl",    "()Z",       (void *)android_media_AudioEffect_native_hasControl},
    {"native_setParameter",  "(I[BI[B)I", (void *)android_media_AudioEffect_native_setParameter},
    {"native_getParameter",  "(I[B[I[B)I",(void *)android_media_AudioEffect_native_getParameter},
    {"native_command",       "(II[B[I[B)I",(void *)android_media_AudioEffect_native_command},
    {"native_query_effects", "()[Ljava/lang/Object;",
                                          (void *)android_media_AudioEffect_native_queryEffects},
};
// ----------------------------------------------------------------

// ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
// android_media_AudioEffect_native_setup is implemented in the same file,
// frameworks\base\media\audioeffect\android_media_AudioEffect.cpp.
static jint
android_media_AudioEffect_native_setup(JNIEnv *env, jobject thiz, jobject weak_this,
        jstring type, jstring uuid, jint priority, jint sessionId, jintArray jId,
        jobjectArray javadesc)
{
    LOGV("android_media_AudioEffect_native_setup");
    AudioEffectJniStorage* lpJniStorage = NULL;
    int lStatus = AUDIOEFFECT_ERROR_NO_MEMORY;
    AudioEffect* lpAudioEffect = NULL;
    jint* nId = NULL;
    const char *typeStr = NULL;
    const char *uuidStr = NULL;
    effect_descriptor_t desc;
    jobject jdesc;
    char str[EFFECT_STRING_LEN_MAX];
    jstring jdescType;
    jstring jdescUuid;
    jstring jdescConnect;
    jstring jdescName;
    jstring jdescImplementor;

    // The next few steps fetch and validate the arguments
    if (type != NULL) {
        typeStr = env->GetStringUTFChars(type, NULL);
        if (typeStr == NULL) {
            // Out of memory
            jniThrowException(env, "java/lang/RuntimeException", "Out of memory");
            goto setup_failure;
        }
    }

    if (uuid != NULL) {
        uuidStr = env->GetStringUTFChars(uuid, NULL);
        if (uuidStr == NULL) {
            // Out of memory
            jniThrowException(env, "java/lang/RuntimeException", "Out of memory");
            goto setup_failure;
        }
    }

    if (typeStr == NULL && uuidStr == NULL) {
        lStatus = AUDIOEFFECT_ERROR_BAD_VALUE;
        goto setup_failure;
    }

    // An AudioEffectJniStorage object is created here.
    // A similar class came up while reading the AudioTrack code;
    // it is used to share memory between Android processes.
    lpJniStorage = new AudioEffectJniStorage();
    if (lpJniStorage == NULL) {
        LOGE("setup: Error creating JNI Storage");
        goto setup_failure;
    }

    lpJniStorage->mCallbackData.audioEffect_class = (jclass)env->NewGlobalRef(fields.clazzEffect);
    // we use a weak reference so the AudioEffect object can be garbage collected.
    lpJniStorage->mCallbackData.audioEffect_ref = env->NewGlobalRef(weak_this);

    LOGV("setup: lpJniStorage: %p audioEffect_ref %p audioEffect_class %p, &mCallbackData %p",
            lpJniStorage,
            lpJniStorage->mCallbackData.audioEffect_ref,
            lpJniStorage->mCallbackData.audioEffect_class,
            &lpJniStorage->mCallbackData);

    if (jId == NULL) {
        LOGE("setup: NULL java array for id pointer");
        lStatus = AUDIOEFFECT_ERROR_BAD_VALUE;
        goto setup_failure;
    }

    // As expected, a native-side AudioEffect object
    // (class AudioEffect : public RefBase) is created here.
// ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
// Documentation of the native AudioEffect class. It says the same thing as the
// java-side class comment:
    /* Constructor.
     * AudioEffect is the base class for creating and controlling an effect engine from
     * the application process. Creating an AudioEffect object will create the effect engine
     * in the AudioFlinger if no engine of the specified type exists. If one exists, this engine
     * will be used. The application creating the AudioEffect object (or a derived class like
     * Reverb for instance) will either receive control of the effect engine or not, depending
     * on the priority parameter. If priority is higher than the priority used by the current
     * effect engine owner, the control will be transfered to the new application. Otherwise
     * control will remain to the previous application. In this case, the new application will be
     * notified of changes in effect engine state or control ownership by the effect callback.
     * After creating the AudioEffect, the application must call the initCheck() method and
     * check the creation status before trying to control the effect engine (see initCheck()).
     * If the effect is to be applied to an AudioTrack or MediaPlayer only the application
     * must specify the audio session ID corresponding to this player.
     */
// ----------------------------------------------------------------
// ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
// Documentation of the native constructor parameters:
    /* Constructor.
     *
     * Parameters:
     *
     * type: type of effect created: can be null if uuid is specified. This corresponds to
     *       the OpenSL ES interface implemented by this effect.
     * uuid: Uuid of effect created: can be null if type is specified. This uuid corresponds to
     *       a particular implementation of an effect type.
     * priority: requested priority for effect control: the priority level corresponds to the
     *       value of priority parameter: negative values indicate lower priorities, positive values
     *       higher priorities, 0 being the normal priority.
     * cbf: optional callback function (see effect_callback_t)
     * user: pointer to context for use by the callback receiver.
     * sessionID: audio session this effect is associated to. If 0, the effect will be global to
     *       the output mix. If not 0, the effect will be applied to all players
     *       (AudioTrack or MediaPLayer) within the same audio session.
     * output: HAL audio output stream to which this effect must be attached. Leave at 0 for
     *       automatic output selection by AudioFlinger.
     */
// All of these parameters have come up before.
// ----------------------------------------------------------------
// ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
// The native AudioEffect constructor.
// (Note that it does not do much work;
// the real work should be done in the set() function.)
AudioEffect::AudioEffect(const char *typeStr,
                const char *uuidStr,
                int32_t priority,
                effect_callback_t cbf,
                void* user,
                int sessionId,
                audio_io_handle_t output
                )
    : mStatus(NO_INIT)
{
    effect_uuid_t type;
    effect_uuid_t *pType = NULL;
    effect_uuid_t uuid;
    effect_uuid_t *pUuid = NULL;

    LOGV("Constructor string\n - type: %s\n - uuid: %s", typeStr, uuidStr);

    if (typeStr != NULL) {
        if (stringToGuid(typeStr, &type) == NO_ERROR) {
            pType = &type;
        }
    }

    if (uuidStr != NULL) {
        if (stringToGuid(uuidStr, &uuid) == NO_ERROR) {
            pUuid = &uuid;
        }
    }

// ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
// The comment on the set() function:
    /* Initialize an uninitialized AudioEffect.
     * Returned status (from utils/Errors.h) can be:
     *  - NO_ERROR or ALREADY_EXISTS: successful initialization
     *  - INVALID_OPERATION: AudioEffect is already initialized
     *  - BAD_VALUE: invalid parameter
     *  - NO_INIT: audio flinger or audio hardware not initialized
     */
// The code:
status_t AudioEffect::set(const effect_uuid_t *type,
                const effect_uuid_t *uuid,
                int32_t priority,
                effect_callback_t cbf,
                void* user,
                int sessionId,
                audio_io_handle_t output)
{
    sp<IEffect> iEffect;
    sp<IMemory> cblk;
    int enabled;

    LOGV("set %p mUserData: %p", this, user);

    if (mIEffect != 0) {
        LOGW("Effect already in use");
        return INVALID_OPERATION;
    }

    const sp<IAudioFlinger>& audioFlinger = AudioSystem::get_audio_flinger();
    if (audioFlinger == 0) {
        LOGE("set(): Could not get audioflinger");
        return NO_INIT;
    }

    if (type == NULL && uuid == NULL) {
        LOGW("Must specify at least type or uuid");
        return BAD_VALUE;
    }

    mPriority = priority;
    mCbf = cbf;
    mUserData = user;
    mSessionId = sessionId;

    memset(&mDescriptor, 0, sizeof(effect_descriptor_t));
    memcpy(&mDescriptor.type, EFFECT_UUID_NULL, sizeof(effect_uuid_t));
    memcpy(&mDescriptor.uuid, EFFECT_UUID_NULL, sizeof(effect_uuid_t));

    if (type != NULL) {
        memcpy(&mDescriptor.type, type, sizeof(effect_uuid_t));
    }
    if (uuid != NULL) {
        memcpy(&mDescriptor.uuid, uuid, sizeof(effect_uuid_t));
    }

    // This side is only the client; the server side lives in audioflinger.
    // Implements the IEffectClient interface
    // class EffectClient : public android::BnEffectClient, public android::IBinder::DeathRecipient
    mIEffectClient = new EffectClient(this);

// ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
sp<IEffect> AudioFlinger::createEffect(pid_t pid,
        effect_descriptor_t *pDesc,
        const sp<IEffectClient>& effectClient,
        int32_t priority,
        int output,
        int sessionId,
        status_t *status,
        int *id,
        int *enabled)
{
    status_t lStatus = NO_ERROR;
    sp<EffectHandle> handle;
    effect_interface_t itfe;
    effect_descriptor_t desc;
    sp<Client> client;
    wp<Client> wclient;

    LOGV("createEffect pid %d, client %p, priority %d, sessionId %d, output %d",
            pid, effectClient.get(), priority, sessionId, output);

    // Parameter check
    if (pDesc == NULL) {
        lStatus = BAD_VALUE;
        goto Exit;
    }

    // AudioSystem::SESSION_OUTPUT_MIX is in fact 0.
    // When introducing AudioSessionId we saw that an audio session id of 0 means
    // the AudioEffect applies to the whole output mix.
    // That requires a special permission, checked by the function settingsAllowed().
// ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
// Implementation of settingsAllowed():
static bool settingsAllowed() {
#ifndef HAVE_ANDROID_OS
    return true;
#endif
#if AUDIOFLINGER_SECURITY_ENABLED
    if (getpid() == IPCThreadState::self()->getCallingPid()) return true;
    bool ok = checkCallingPermission(String16("android.permission.MODIFY_AUDIO_SETTINGS"));
    if (!ok) LOGE("Request requires android.permission.MODIFY_AUDIO_SETTINGS");
    return ok;
#else
    if (!checkCallingPermission(String16("android.permission.MODIFY_AUDIO_SETTINGS")))
        LOGW("WARNING: Need to add android.permission.MODIFY_AUDIO_SETTINGS to manifest");
    return true;
#endif
}
// ----------------------------------------------------------------
    // check audio settings permission for global effects
    if (sessionId == AudioSystem::SESSION_OUTPUT_MIX && !settingsAllowed()) {
        lStatus = PERMISSION_DENIED;
        goto Exit;
    }

// ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
    SESSION_OUTPUT_STAGE = -1, // session for effects attached to a particular output stream
                               // (value must be less than 0)
// ----------------------------------------------------------------
    // Session AudioSystem::SESSION_OUTPUT_STAGE is reserved for output stage effects
    // that can only be created by audio policy manager (running in same process)
    if (sessionId == AudioSystem::SESSION_OUTPUT_STAGE && getpid() != pid) {
        lStatus = PERMISSION_DENIED;
        goto Exit;
    }

    // check recording permission for visualizer
    if ((memcmp(&pDesc->type, SL_IID_VISUALIZATION, sizeof(effect_uuid_t)) == 0 ||
         memcmp(&pDesc->uuid, &VISUALIZATION_UUID_, sizeof(effect_uuid_t)) == 0) &&
        !recordingAllowed()) {
        lStatus = PERMISSION_DENIED;
        goto Exit;
    }

    if (output == 0) {
        if (sessionId == AudioSystem::SESSION_OUTPUT_STAGE) {
            // In this case the output should be specified by the AudioPolicyManager:
            // output must be specified by AudioPolicyManager when using session
            // AudioSystem::SESSION_OUTPUT_STAGE
            lStatus = BAD_VALUE;
            goto Exit;
        } else if (sessionId == AudioSystem::SESSION_OUTPUT_MIX) {
            // if the output returned by getOutputForEffect() is removed before we lock the
            // mutex below, the call to checkPlaybackThread_l(output) below will detect it
            // and we will exit safely

// ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
audio_io_handle_t AudioSystem::getOutputForEffect(effect_descriptor_t *desc)
{
    const sp<IAudioPolicyService>& aps = AudioSystem::get_audio_policy_service();
    if (aps == 0) return PERMISSION_DENIED;
// ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
    virtual audio_io_handle_t getOutputForEffect(effect_descriptor_t *desc)
    {
        Parcel data, reply;
        data.writeInterfaceToken(IAudioPolicyService::getInterfaceDescriptor());
        data.write(desc, sizeof(effect_descriptor_t));
// ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
status_t BnAudioPolicyService::onTransact(
    uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
...
        case GET_OUTPUT_FOR_EFFECT: {
            CHECK_INTERFACE(IAudioPolicyService, data, reply);
            effect_descriptor_t desc;
            data.read(&desc, sizeof(effect_descriptor_t));
// ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
audio_io_handle_t AudioPolicyService::getOutputForEffect(effect_descriptor_t *desc)
{
    if (mpPolicyManager == NULL) {
        return NO_INIT;
    }
    Mutex::Autolock _l(mLock);
// ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
audio_io_handle_t AudioPolicyManagerBase::getOutputForEffect(effect_descriptor_t *desc)
{
    LOGV("getOutputForEffect()");
    // getOutput() came up before; its code was left out then for reasons of
    // length, so let's list it today.
// ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
audio_io_handle_t AudioPolicyManagerBase::getOutput(AudioSystem::stream_type stream,
                                    uint32_t samplingRate,
                                    uint32_t format,
                                    uint32_t channels,
                                    AudioSystem::output_flags flags)
{
    audio_io_handle_t output = 0;
    uint32_t latency = 0;
    // Get the routing strategy
// ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
AudioPolicyManagerBase::routing_strategy AudioPolicyManagerBase::getStrategy(
        AudioSystem::stream_type stream) {
    // stream to strategy mapping
    switch (stream) {
    case AudioSystem::VOICE_CALL:
    case AudioSystem::BLUETOOTH_SCO:
        return STRATEGY_PHONE;
    case AudioSystem::RING:
    case AudioSystem::NOTIFICATION:
    case AudioSystem::ALARM:
    case AudioSystem::ENFORCED_AUDIBLE:
        return STRATEGY_SONIFICATION;
    case AudioSystem::DTMF:
        return STRATEGY_DTMF;
    default:
        LOGE("unknown stream type");
    case AudioSystem::SYSTEM:
        // NOTE: SYSTEM stream uses MEDIA strategy because muting music and switching outputs
        // while key clicks are played produces a poor result
    case AudioSystem::TTS:
    case AudioSystem::MUSIC:
        return STRATEGY_MEDIA;
    }
}
// ----------------------------------------------------------------
    routing_strategy strategy = getStrategy((AudioSystem::stream_type)stream);
    // Get the device
// ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
uint32_t AudioPolicyManagerBase::getDeviceForStrategy(routing_strategy strategy, bool fromCache)
{
    uint32_t device = 0;

    if (fromCache) {
        LOGV("getDeviceForStrategy() from cache strategy %d, device %x",
                strategy, mDeviceForStrategy[strategy]);
        // mDeviceForStrategy is assigned in the following function:
// ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
void AudioPolicyManagerBase::updateDeviceForStrategy()
{
    for (int i = 0; i < NUM_STRATEGIES; i++) {
        mDeviceForStrategy[i] = getDeviceForStrategy((routing_strategy)i, false);
    }
}
// ----------------------------------------------------------------
        return mDeviceForStrategy[strategy];
    }

    switch (strategy) {
    case STRATEGY_DTMF:
        if (!isInCall()) {
            // when off call, DTMF strategy follows the same rules as MEDIA strategy
            device = getDeviceForStrategy(STRATEGY_MEDIA, false);
            break;
        }
        // when in call, DTMF and PHONE strategies follow the same rules
        // FALL THROUGH

    case STRATEGY_PHONE:
        // for phone strategy, we first consider the forced use and then the available devices by order
        // of priority
        switch (mForceUse[AudioSystem::FOR_COMMUNICATION]) {
        case AudioSystem::FORCE_BT_SCO:
            if (!isInCall() || strategy != STRATEGY_DTMF) {
                // The device types:
// ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
    // output devices
    public static final int DEVICE_OUT_EARPIECE = 0x1;
    public static final int DEVICE_OUT_SPEAKER = 0x2;
    public static final int DEVICE_OUT_WIRED_HEADSET = 0x4;
    public static final int DEVICE_OUT_WIRED_HEADPHONE = 0x8;
    public static final int DEVICE_OUT_BLUETOOTH_SCO = 0x10;
    public static final int DEVICE_OUT_BLUETOOTH_SCO_HEADSET = 0x20;
    public static final int DEVICE_OUT_BLUETOOTH_SCO_CARKIT = 0x40;
    public static final int DEVICE_OUT_BLUETOOTH_A2DP = 0x80;
    public static final int DEVICE_OUT_BLUETOOTH_A2DP_HEADPHONES = 0x100;
    public static final int DEVICE_OUT_BLUETOOTH_A2DP_SPEAKER = 0x200;
    public static final int DEVICE_OUT_AUX_DIGITAL = 0x400;
    public static final int DEVICE_OUT_WIRED_HDMI = 0x4000;
    public static final int DEVICE_OUT_DEFAULT = 0x8000;
    // input devices
    public static final int DEVICE_IN_COMMUNICATION = 0x10000;
    public static final int DEVICE_IN_AMBIENT = 0x20000;
    public static final int DEVICE_IN_BUILTIN_MIC1 = 0x40000;
    public static final int DEVICE_IN_BUILTIN_MIC2 = 0x80000;
    public static final int DEVICE_IN_MIC_ARRAY = 0x100000;
    public static final int DEVICE_IN_BLUETOOTH_SCO_HEADSET = 0x200000;
    public static final int DEVICE_IN_WIRED_HEADSET = 0x400000;
    public static final int DEVICE_IN_AUX_DIGITAL = 0x800000;
    public static final int DEVICE_IN_DEFAULT = 0x80000000;
// ----------------------------------------------------------------
                device = mAvailableOutputDevices & AudioSystem::DEVICE_OUT_BLUETOOTH_SCO_CARKIT;
                if (device) break;
            }
            device = mAvailableOutputDevices & AudioSystem::DEVICE_OUT_BLUETOOTH_SCO_HEADSET;
            if (device) break;
            device = mAvailableOutputDevices & AudioSystem::DEVICE_OUT_BLUETOOTH_SCO;
            if (device) break;
            // if SCO device is requested but no SCO device is available, fall back to default case
            // FALL THROUGH

        default:    // FORCE_NONE
            device = mAvailableOutputDevices & AudioSystem::DEVICE_OUT_WIRED_HEADPHONE;
            if (device) break;
            device = mAvailableOutputDevices & AudioSystem::DEVICE_OUT_WIRED_HEADSET;
            if (device) break;
#ifdef WITH_A2DP
            // when not in a phone call, phone strategy should route STREAM_VOICE_CALL to A2DP
            if (!isInCall()) {
                device = mAvailableOutputDevices & AudioSystem::DEVICE_OUT_BLUETOOTH_A2DP;
                if (device) break;
                device = mAvailableOutputDevices & AudioSystem::DEVICE_OUT_BLUETOOTH_A2DP_HEADPHONES;
                if (device) break;
            }
#endif
            device = mAvailableOutputDevices & AudioSystem::DEVICE_OUT_EARPIECE;
            if (device == 0) {
                LOGE("getDeviceForStrategy() earpiece device not found");
            }
            break;

        case AudioSystem::FORCE_SPEAKER:
            if (!isInCall() || strategy != STRATEGY_DTMF) {
                device = mAvailableOutputDevices & AudioSystem::DEVICE_OUT_BLUETOOTH_SCO_CARKIT;
                if (device) break;
            }
#ifdef WITH_A2DP
            // when not in a phone call, phone strategy should route STREAM_VOICE_CALL to
            // A2DP speaker when forcing to speaker output
            if (!isInCall()) {
                device = mAvailableOutputDevices & AudioSystem::DEVICE_OUT_BLUETOOTH_A2DP_SPEAKER;
                if (device) break;
            }
#endif
            device = mAvailableOutputDevices & AudioSystem::DEVICE_OUT_SPEAKER;
            if (device == 0) {
                LOGE("getDeviceForStrategy() speaker device not found");
            }
            break;
        }
        break;

    case STRATEGY_SONIFICATION:
        // If incall, just select the STRATEGY_PHONE device: The rest of the behavior is handled by
        // handleIncallSonification().
        if (isInCall()) {
            device = getDeviceForStrategy(STRATEGY_PHONE, false);
            break;
        }
        device = mAvailableOutputDevices & AudioSystem::DEVICE_OUT_SPEAKER;
        if (device == 0) {
            LOGE("getDeviceForStrategy() speaker device not found");
        }
        // The second device used for sonification is the same as the device used by media strategy
        // FALL THROUGH

    case STRATEGY_MEDIA: {
        uint32_t device2 = mAvailableOutputDevices & AudioSystem::DEVICE_OUT_AUX_DIGITAL;
        if (device2 == 0) {
            device2 = mAvailableOutputDevices & AudioSystem::DEVICE_OUT_WIRED_HDMI;
        }
        if (device2 == 0) {
            device2 = mAvailableOutputDevices & AudioSystem::DEVICE_OUT_WIRED_HEADPHONE;
        }
        if (device2 == 0) {
            device2 = mAvailableOutputDevices & AudioSystem::DEVICE_OUT_WIRED_HEADSET;
        }
#ifdef WITH_A2DP
        if (mA2dpOutput != 0) {
            if (strategy == STRATEGY_SONIFICATION && !a2dpUsedForSonification()) {
                break;
            }
            if (device2 == 0) {
                device2 = mAvailableOutputDevices & AudioSystem::DEVICE_OUT_BLUETOOTH_A2DP;
            }
            if (device2 == 0) {
                device2 = mAvailableOutputDevices & AudioSystem::DEVICE_OUT_BLUETOOTH_A2DP_HEADPHONES;
            }
            if (device2 == 0) {
                device2 = mAvailableOutputDevices & AudioSystem::DEVICE_OUT_BLUETOOTH_A2DP_SPEAKER;
            }
        }
#endif
        if (device2 == 0) {
            device2 = mAvailableOutputDevices & AudioSystem::DEVICE_OUT_SPEAKER;
        }

        // device is DEVICE_OUT_SPEAKER if we come from case STRATEGY_SONIFICATION, 0 otherwise
        device |= device2;
        if (device == 0) {
            LOGE("getDeviceForStrategy() speaker device not found");
        }
        } break;

    default:
        LOGW("getDeviceForStrategy() unknown strategy: %d", strategy);
        break;
    }

    LOGV("getDeviceForStrategy() strategy %d, device %x", strategy, device);
    return device;
}
// ----------------------------------------------------------------
    uint32_t device = getDeviceForStrategy(strategy);
    LOGV("getOutput() stream %d, samplingRate %d, format %d, channels %x, flags %x",
            stream, samplingRate, format, channels, flags);

#ifdef AUDIO_POLICY_TEST
    if (mCurOutput != 0) {
        LOGV("getOutput() test output mCurOutput %d, samplingRate %d, format %d, channels %x, mDirectOutput %d",
                mCurOutput, mTestSamplingRate, mTestFormat, mTestChannels, mDirectOutput);

        if (mTestOutputs[mCurOutput] == 0) {
            LOGV("getOutput() opening test output");
            AudioOutputDescriptor *outputDesc = new AudioOutputDescriptor();
            outputDesc->mDevice = mTestDevice;
            outputDesc->mSamplingRate = mTestSamplingRate;
            outputDesc->mFormat = mTestFormat;
            outputDesc->mChannels = mTestChannels;
            outputDesc->mLatency = mTestLatencyMs;
            outputDesc->mFlags = (AudioSystem::output_flags)
                    (mDirectOutput ? AudioSystem::OUTPUT_FLAG_DIRECT : 0);
            outputDesc->mRefCount[stream] = 0;
            // mpClientInterface is assigned in the constructor.
// ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
AudioPolicyManagerBase::AudioPolicyManagerBase(AudioPolicyClientInterface *clientInterface)
    :
#ifdef AUDIO_POLICY_TEST
    Thread(false),
#endif //AUDIO_POLICY_TEST
    mPhoneState(AudioSystem::MODE_NORMAL), mRingerMode(0), mMusicStopTime(0),
    mLimitRingtoneVolume(false), mLastVoiceVolume(-1.0f),
    mTotalEffectsCpuLoad(0), mTotalEffectsMemory(0),
    mA2dpSuspended(false)
{
    mpClientInterface = clientInterface;

    for (int i = 0; i < AudioSystem::NUM_FORCE_USE; i++) {
        mForceUse[i] = AudioSystem::FORCE_NONE;
    }

    // devices available by default are speaker, ear piece and microphone
    mAvailableOutputDevices = AudioSystem::DEVICE_OUT_EARPIECE |
                              AudioSystem::DEVICE_OUT_SPEAKER;
    mAvailableInputDevices = AudioSystem::DEVICE_IN_BUILTIN_MIC;

#ifdef WITH_A2DP
    mA2dpOutput = 0;
    mDuplicatedOutput = 0;
    mA2dpDeviceAddress = String8("");
#endif
    mScoDeviceAddress = String8("");

    // open hardware output
    AudioOutputDescriptor *outputDesc = new AudioOutputDescriptor();
    outputDesc->mDevice = (uint32_t)AudioSystem::DEVICE_OUT_SPEAKER;
    mHardwareOutput = mpClientInterface->openOutput(&outputDesc->mDevice,
                                    &outputDesc->mSamplingRate,
                                    &outputDesc->mFormat,
                                    &outputDesc->mChannels,
                                    &outputDesc->mLatency,
                                    outputDesc->mFlags);

    if (mHardwareOutput == 0) {
        LOGE("Failed to initialize hardware output stream, samplingRate: %d, format %d, channels %d",
                outputDesc->mSamplingRate, outputDesc->mFormat, outputDesc->mChannels);
    } else {
        addOutput(mHardwareOutput, outputDesc);
        setOutputDevice(mHardwareOutput, (uint32_t)AudioSystem::DEVICE_OUT_SPEAKER, true);
        //TODO: configure audio effect output stage here
    }

    updateDeviceForStrategy();
#ifdef AUDIO_POLICY_TEST
    AudioParameter outputCmd = AudioParameter();
    outputCmd.addInt(String8("set_id"), 0);
    mpClientInterface->setParameters(mHardwareOutput, outputCmd.toString());

    mTestDevice = AudioSystem::DEVICE_OUT_SPEAKER;
    mTestSamplingRate = 44100;
    mTestFormat = AudioSystem::PCM_16_BIT;
    mTestChannels = AudioSystem::CHANNEL_OUT_STEREO;
    mTestLatencyMs = 0;
    mCurOutput = 0;
    mDirectOutput = false;
    for (int i = 0; i < NUM_TEST_OUTPUTS; i++) {
        mTestOutputs[i] = 0;
    }

    const size_t SIZE = 256;
    char buffer[SIZE];
    snprintf(buffer, SIZE, "AudioPolicyManagerTest");
    run(buffer, ANDROID_PRIORITY_AUDIO);
#endif //AUDIO_POLICY_TEST
}
// ----------------------------------------------------------------
// The AudioPolicyManagerBase object is created in the constructor of AudioPolicyService:
// ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
AudioPolicyService::AudioPolicyService()
    : BnAudioPolicyService() , mpPolicyManager(NULL)
{
    char value[PROPERTY_VALUE_MAX];

    // start tone playback thread
    mTonePlaybackThread = new AudioCommandThread(String8(""));
    // start audio commands thread
    mAudioCommandThread = new AudioCommandThread(String8("ApmCommandThread"));

#if (defined GENERIC_AUDIO) || (defined AUDIO_POLICY_TEST)
    mpPolicyManager = new AudioPolicyManagerBase(this);
    LOGV("build for GENERIC_AUDIO - using generic audio policy");
#else
    // if running in emulation - use the emulator driver
    if (property_get("ro.kernel.qemu", value, 0)) {
        LOGV("Running in emulation - using generic audio policy");
        mpPolicyManager = new AudioPolicyManagerBase(this);
    } else {
        LOGV("Using hardware specific audio policy");
        // This is certainly the branch we hit.
        // The implementations of createAudioPolicyManager all live under hardware;
        // the one we use is the alsa one.
// ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
extern "C" AudioPolicyInterface* createAudioPolicyManager(AudioPolicyClientInterface *clientInterface)
{
// The AudioPolicyManagerALSA constructor does nothing special.
// ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
// Nothing currently different between the Base implementation.
// The comment above says it does not differ from the base class at all.
// So the openOutput interface called from getOutput() should be implemented in AudioPolicyService.
AudioPolicyManagerALSA::AudioPolicyManagerALSA(AudioPolicyClientInterface *clientInterface)
    : AudioPolicyManagerBase(clientInterface)
{
}
// ----------------------------------------------------------------
    return new AudioPolicyManagerALSA(clientInterface);
}
// ----------------------------------------------------------------
        mpPolicyManager = createAudioPolicyManager(this);
    }
#endif

    // load properties
    property_get("ro.camera.sound.forced", value, "0");
    mpPolicyManager->setSystemProperty("ro.camera.sound.forced", value);
}
// ----------------------------------------------------------------
// ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
audio_io_handle_t AudioPolicyService::openOutput(uint32_t *pDevices,
                                uint32_t *pSamplingRate,
                                uint32_t *pFormat,
                                uint32_t *pChannels,
                                uint32_t *pLatencyMs,
                                AudioSystem::output_flags flags)
{
    sp<IAudioFlinger> af = AudioSystem::get_audio_flinger();
    if (af == 0) {
        LOGW("openOutput() could not get AudioFlinger");
        return 0;
    }
    // This again calls into audioflinger:
// ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
int AudioFlinger::openOutput(uint32_t *pDevices,
                                uint32_t *pSamplingRate,
                                uint32_t *pFormat,
                                uint32_t *pChannels,
                                uint32_t *pLatencyMs,
                                uint32_t flags)
{
    status_t status;
    PlaybackThread *thread = NULL;
    mHardwareStatus = AUDIO_HW_OUTPUT_OPEN;
    uint32_t samplingRate = pSamplingRate ? *pSamplingRate : 0;
    uint32_t format = pFormat ? *pFormat : 0;
    uint32_t channels = pChannels ? *pChannels : 0;
    uint32_t latency = pLatencyMs ? *pLatencyMs : 0;

    LOGV("openOutput(), Device %x, SamplingRate %d, Format %d, Channels %x, flags %x",
            pDevices ? *pDevices : 0, samplingRate, format, channels, flags);

    if (pDevices == NULL || *pDevices == 0) {
        return 0;
    }
    Mutex::Autolock _l(mLock);

    // This in turn calls down into the HAL-layer openOutputStream function.
    AudioStreamOut *output = mAudioHardware->openOutputStream(*pDevices,
                                                             (int *)&format,
                                                             &channels,
                                                             &samplingRate,
                                                             &status);
    LOGV("openOutput() openOutputStream returned output %p, SamplingRate %d, Format %d, Channels %x, status %d",
            output, samplingRate, format, channels, status);

    mHardwareStatus = AUDIO_HW_IDLE;
    if (output != 0) {
        int id = nextUniqueId();
        if ((flags & AudioSystem::OUTPUT_FLAG_DIRECT) ||
            (format != AudioSystem::PCM_16_BIT) ||
            (channels != AudioSystem::CHANNEL_OUT_STEREO)) {
            thread = new DirectOutputThread(this, output, id, *pDevices);
            LOGV("openOutput() created direct output: ID %d thread %p", id, thread);
        } else {
            thread = new MixerThread(this, output, id, *pDevices);
            LOGV("openOutput() created mixer output: ID %d thread %p", id, thread);
#ifdef LVMX
            unsigned bitsPerSample =
                (format == AudioSystem::PCM_16_BIT) ? 16 :
                    ((format == AudioSystem::PCM_8_BIT) ? 8 : 0);
            unsigned channelCount = (channels == AudioSystem::CHANNEL_OUT_STEREO) ?
2 : 1; int audioOutputType = LifeVibes::threadIdToAudioOutputType(thread->id()); LifeVibes::init_aot(audioOutputType, samplingRate, bitsPerSample, channelCount); LifeVibes::setDevice(audioOutputType, *pDevices);#endif } mPlaybackThreads.add(id, thread); if (pSamplingRate) *pSamplingRate = samplingRate; if (pFormat) *pFormat = format; if (pChannels) *pChannels = channels; if (pLatencyMs) *pLatencyMs = thread->latency(); // notify client processes of the new output creation thread->audioConfigChanged_l(AudioSystem::OUTPUT_OPENED); return id; } return 0;}// ---------------------------------------------------------------- return af->openOutput(pDevices, pSamplingRate, (uint32_t *)pFormat, pChannels, pLatencyMs, flags);}// ---------------------------------------------------------------- mTestOutputs[mCurOutput] = mpClientInterface->openOutput(&outputDesc->mDevice, &outputDesc->mSamplingRate, &outputDesc->mFormat, &outputDesc->mChannels, &outputDesc->mLatency, outputDesc->mFlags); if (mTestOutputs[mCurOutput]) { AudioParameter outputCmd = AudioParameter(); outputCmd.addInt(String8("set_id"),mCurOutput); mpClientInterface->setParameters(mTestOutputs[mCurOutput],outputCmd.toString()); addOutput(mTestOutputs[mCurOutput], outputDesc); } } return mTestOutputs[mCurOutput]; }#endif //AUDIO_POLICY_TEST // open a direct output if required by specified parameters if (needsDirectOuput(stream, samplingRate, format, channels, flags, device)) { LOGV("getOutput() opening direct output device %x", device); AudioOutputDescriptor *outputDesc = new AudioOutputDescriptor(); outputDesc->mDevice = device; outputDesc->mSamplingRate = samplingRate; outputDesc->mFormat = format; outputDesc->mChannels = channels; outputDesc->mLatency = 0; outputDesc->mFlags = (AudioSystem::output_flags)(flags | AudioSystem::OUTPUT_FLAG_DIRECT); outputDesc->mRefCount[stream] = 0; output = mpClientInterface->openOutput(&outputDesc->mDevice, &outputDesc->mSamplingRate, &outputDesc->mFormat, &outputDesc->mChannels, 
&outputDesc->mLatency, outputDesc->mFlags); // only accept an output with the requeted parameters if (output == 0 || (samplingRate != 0 && samplingRate != outputDesc->mSamplingRate) || (format != 0 && format != outputDesc->mFormat) || (channels != 0 && channels != outputDesc->mChannels)) { LOGV("getOutput() failed opening direct output: samplingRate %d, format %d, channels %d", samplingRate, format, channels); if (output != 0) { mpClientInterface->closeOutput(output); } delete outputDesc; return 0; } addOutput(output, outputDesc); return output; } if (channels != 0 && channels != AudioSystem::CHANNEL_OUT_MONO && channels != AudioSystem::CHANNEL_OUT_STEREO) { return 0; } // open a non direct output // get which output is suitable for the specified stream. The actual routing change will happen // when startOutput() will be called uint32_t a2dpDevice = device & AudioSystem::DEVICE_OUT_ALL_A2DP; if (AudioSystem::popCount((AudioSystem::audio_devices)device) == 2) {#ifdef WITH_A2DP if (a2dpUsedForSonification() && a2dpDevice != 0) { // if playing on 2 devices among which one is A2DP, use duplicated output LOGV("getOutput() using duplicated output"); LOGW_IF((mA2dpOutput == 0), "getOutput() A2DP device in multiple %x selected but A2DP output not opened", device); output = mDuplicatedOutput; } else#endif { // if playing on 2 devices among which none is A2DP, use hardware output output = mHardwareOutput; } LOGV("getOutput() using output %d for 2 devices %x", output, device); } else {#ifdef WITH_A2DP if (a2dpDevice != 0) { // if playing on A2DP device, use a2dp output LOGW_IF((mA2dpOutput == 0), "getOutput() A2DP device %x selected but A2DP output not opened", device); output = mA2dpOutput; } else#endif { // if playing on not A2DP device, use hardware output output = mHardwareOutput; } } LOGW_IF((output ==0), "getOutput() could not find output for stream %d, samplingRate %d, format %d, channels %x, flags %x", stream, samplingRate, format, channels, flags); return output;}// 
---------------------------------------------------------------- // apply simple rule where global effects are attached to the same output as MUSIC streams return getOutput(AudioSystem::MUSIC);}原来到最后只是获取了MUSIC的output。// ---------------------------------------------------------------- return mpPolicyManager->getOutputForEffect(desc);}// ---------------------------------------------------------------- audio_io_handle_t output = getOutputForEffect(&desc); reply->writeInt32(static_cast <int>(output)); return NO_ERROR; } break;...}// ---------------------------------------------------------------- remote()->transact(GET_OUTPUT_FOR_EFFECT, data, &reply); return static_cast <audio_io_handle_t> (reply.readInt32()); }// ---------------------------------------------------------------- return aps->getOutputForEffect(desc);}// ---------------------------------------------------------------- output = AudioSystem::getOutputForEffect(&desc); } } { Mutex::Autolock _l(mLock); if (!EffectIsNullUuid(&pDesc->uuid)) { // if uuid is specified, request effect descriptor lStatus = EffectGetDescriptor(&pDesc->uuid, &desc); if (lStatus < 0) { LOGW("createEffect() error %d from EffectGetDescriptor", lStatus); goto Exit; } } else { // if uuid is not specified, look for an available implementation // of the required type in effect factory if (EffectIsNullUuid(&pDesc->type)) { LOGW("createEffect() no effect type"); lStatus = BAD_VALUE; goto Exit; } uint32_t numEffects = 0; effect_descriptor_t d; bool found = false; // 得到Effect的个数 lStatus = EffectQueryNumberEffects(&numEffects); if (lStatus < 0) { LOGW("createEffect() error %d from EffectQueryNumberEffects", lStatus); goto Exit; } // 寻找匹配的effect for (uint32_t i = 0; i < numEffects; i++) { lStatus = EffectQueryEffect(i, &desc); if (lStatus < 0) { LOGW("createEffect() error %d from EffectQueryEffect", lStatus); continue; } if (memcmp(&desc.type, &pDesc->type, sizeof(effect_uuid_t)) == 0) { // If matching type found save effect descriptor. 
If the session is // 0 and the effect is not auxiliary, continue enumeration in case // an auxiliary version of this effect type is available found = true; memcpy(&d, &desc, sizeof(effect_descriptor_t)); if (sessionId != AudioSystem::SESSION_OUTPUT_MIX || (desc.flags & EFFECT_FLAG_TYPE_MASK) == EFFECT_FLAG_TYPE_AUXILIARY) { break; } } } if (!found) { lStatus = BAD_VALUE; LOGW("createEffect() effect not found"); goto Exit; } // For same effect type, chose auxiliary version over insert version if // connect to output mix (Compliance to OpenSL ES) if (sessionId == AudioSystem::SESSION_OUTPUT_MIX && (d.flags & EFFECT_FLAG_TYPE_MASK) != EFFECT_FLAG_TYPE_AUXILIARY) { memcpy(&desc, &d, sizeof(effect_descriptor_t)); } } // Do not allow auxiliary effects on a session different from 0 (output mix) if (sessionId != AudioSystem::SESSION_OUTPUT_MIX && (desc.flags & EFFECT_FLAG_TYPE_MASK) == EFFECT_FLAG_TYPE_AUXILIARY) { lStatus = INVALID_OPERATION; goto Exit; } // return effect descriptor memcpy(pDesc, &desc, sizeof(effect_descriptor_t)); // If output is not specified try to find a matching audio session ID in one of the // output threads. // If output is 0 here, sessionId is neither SESSION_OUTPUT_STAGE nor SESSION_OUTPUT_MIX // because of code checking output when entering the function. if (output == 0) { // look for the thread where the specified audio session is present for (size_t i = 0; i < mPlaybackThreads.size(); i++) { if (mPlaybackThreads.valueAt(i)->hasAudioSession(sessionId) != 0) { output = mPlaybackThreads.keyAt(i); break; } } // If no output thread contains the requested session ID, default to // first output. 
The effect chain will be moved to the correct output // thread when a track with the same session ID is created if (output == 0 && mPlaybackThreads.size()) { output = mPlaybackThreads.keyAt(0); } } LOGV("createEffect() got output %d for effect %s", output, desc.name); PlaybackThread *thread = checkPlaybackThread_l(output); if (thread == NULL) { LOGE("createEffect() unknown output thread"); lStatus = BAD_VALUE; goto Exit; } // TODO: allow attachment of effect to inputs wclient = mClients.valueFor(pid); if (wclient != NULL) { client = wclient.promote(); } else { client = new Client(this, pid); mClients.add(pid, client); } // 在函数AudioEffect::Set函数中创建的EffectClient对象(effectClient),在此处并未作太多处理,而是直接传给了函数thread->createEffect_l// ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++// PlaybackThread::createEffect_l() must be called with AudioFlinger::mLock heldsp<AudioFlinger::EffectHandle> AudioFlinger::PlaybackThread::createEffect_l( const sp<AudioFlinger::Client>& client, const sp<IEffectClient>& effectClient, int32_t priority, int sessionId, effect_descriptor_t *desc, int *enabled, status_t *status ){ sp<EffectModule> effect; sp<EffectHandle> handle; status_t lStatus; sp<Track> track; sp<EffectChain> chain; bool chainCreated = false; bool effectCreated = false; bool effectRegistered = false; if (mOutput == 0) { LOGW("createEffect_l() Audio driver not initialized."); lStatus = NO_INIT; goto Exit; } // Do not allow auxiliary effect on session other than 0 if ((desc->flags & EFFECT_FLAG_TYPE_MASK) == EFFECT_FLAG_TYPE_AUXILIARY && sessionId != AudioSystem::SESSION_OUTPUT_MIX) { LOGW("createEffect_l() Cannot add auxiliary effect %s to session %d", desc->name, sessionId); lStatus = BAD_VALUE; goto Exit; } // Do not allow effects with session ID 0 on direct output or duplicating threads // TODO: add rule for hw accelerated effects on direct outputs with non PCM format if (sessionId == AudioSystem::SESSION_OUTPUT_MIX && mType != MIXER) { LOGW("createEffect_l() Cannot 
add auxiliary effect %s to session %d", desc->name, sessionId); lStatus = BAD_VALUE; goto Exit; } LOGV("createEffect_l() thread %p effect %s on session %d", this, desc->name, sessionId); { // scope for mLock Mutex::Autolock _l(mLock); // 先看是否存在指定session的effect链,如果不存在,创建一个 // check for existing effect chain with the requested audio session chain = getEffectChain_l(sessionId); if (chain == 0) { // create a new chain for this session LOGV("createEffect_l() new effect chain for session %d", sessionId); chain = new EffectChain(this, sessionId); addEffectChain_l(chain); chain->setStrategy(getStrategyForSession_l(sessionId)); chainCreated = true; } else { effect = chain->getEffectFromDesc_l(desc); } LOGV("createEffect_l() got effect %p on chain %p", effect == 0 ? 0 : effect.get(), chain.get()); // 是否存在相同描述的effect,不存在的话,创建一个 if (effect == 0) { int id = mAudioFlinger->nextUniqueId(); // Check CPU and memory usage lStatus = AudioSystem::registerEffect(desc, mId, chain->strategy(), sessionId, id); if (lStatus != NO_ERROR) { goto Exit; } effectRegistered = true; // create a new effect module if none present in the chain effect = new EffectModule(this, chain, desc, id, sessionId); lStatus = effect->status(); if (lStatus != NO_ERROR) { goto Exit; } lStatus = chain->addEffect_l(effect); if (lStatus != NO_ERROR) { goto Exit; } effectCreated = true; effect->setDevice(mDevice); effect->setMode(mAudioFlinger->getMode()); } // 看看下面EffectHandle的注释,便于理解// ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ // The EffectHandle class implements the IEffect interface. It provides resources // to receive parameter updates, keeps track of effect control // ownership and state and has a pointer to the EffectModule object it is controlling. // There is one EffectHandle object for each application controlling (or using) // an effect module. 
// The EffectHandle is obtained by calling AudioFlinger::createEffect().// ---------------------------------------------------------------- // create effect handle and connect it to effect module handle = new EffectHandle(effect, client, effectClient, priority); lStatus = effect->addHandle(handle); if (enabled) { *enabled = (int)effect->isEnabled(); } }Exit: if (lStatus != NO_ERROR && lStatus != ALREADY_EXISTS) { Mutex::Autolock _l(mLock); if (effectCreated) { chain->removeEffect_l(effect); } if (effectRegistered) { AudioSystem::unregisterEffect(effect->id()); } if (chainCreated) { removeEffectChain_l(chain); } handle.clear(); } if(status) { *status = lStatus; } return handle;}// ---------------------------------------------------------------- // create effect on selected output trhead handle = thread->createEffect_l(client, effectClient, priority, sessionId, &desc, enabled, &lStatus); if (handle != 0 && id != NULL) { *id = handle->id(); } }Exit: if(status) { *status = lStatus; } return handle;}// ---------------------------------------------------------------- iEffect = audioFlinger->createEffect(getpid(), (effect_descriptor_t *)&mDescriptor, mIEffectClient, priority, output, mSessionId, &mStatus, &mId, &enabled); if (iEffect == 0 || (mStatus != NO_ERROR && mStatus != ALREADY_EXISTS)) { LOGE("set(): AudioFlinger could not create effect, status: %d", mStatus); return mStatus; } mEnabled = (volatile int32_t)enabled; mIEffect = iEffect; cblk = iEffect->getCblk(); if (cblk == 0) { mStatus = NO_INIT; LOGE("Could not get control block"); return mStatus; } mIEffect = iEffect; mCblkMemory = cblk; mCblk = static_cast<effect_param_cblk_t*>(cblk->pointer()); int bufOffset = ((sizeof(effect_param_cblk_t) - 1) / sizeof(int) + 1) * sizeof(int); mCblk->buffer = (uint8_t *)mCblk + bufOffset; iEffect->asBinder()->linkToDeath(mIEffectClient); LOGV("set() %p OK effect: %s id: %d status %d enabled %d, ", this, mDescriptor.name, mId, mStatus, mEnabled); return mStatus;}// 
---------------------------------------------------------------- mStatus = set(pType, pUuid, priority, cbf, user, sessionId, output);}// ---------------------------------------------------------------- // create the native AudioEffect object lpAudioEffect = new AudioEffect(typeStr, uuidStr, priority, effectCallback, &lpJniStorage->mCallbackData, sessionId, 0); if (lpAudioEffect == NULL) { LOGE("Error creating AudioEffect"); goto setup_failure; } lStatus = translateError(lpAudioEffect->initCheck()); if (lStatus != AUDIOEFFECT_SUCCESS && lStatus != AUDIOEFFECT_ERROR_ALREADY_EXISTS) { LOGE("AudioEffect initCheck failed %d", lStatus); goto setup_failure; } nId = (jint *) env->GetPrimitiveArrayCritical(jId, NULL); if (nId == NULL) { LOGE("setup: Error retrieving id pointer"); lStatus = AUDIOEFFECT_ERROR_BAD_VALUE; goto setup_failure; } nId[0] = lpAudioEffect->id(); env->ReleasePrimitiveArrayCritical(jId, nId, 0); nId = NULL; if (typeStr) { env->ReleaseStringUTFChars(type, typeStr); typeStr = NULL; } if (uuidStr) { env->ReleaseStringUTFChars(uuid, uuidStr); uuidStr = NULL; } // get the effect descriptor desc = lpAudioEffect->descriptor(); AudioEffect::guidToString(&desc.type, str, EFFECT_STRING_LEN_MAX); jdescType = env->NewStringUTF(str); AudioEffect::guidToString(&desc.uuid, str, EFFECT_STRING_LEN_MAX); jdescUuid = env->NewStringUTF(str); if ((desc.flags & EFFECT_FLAG_TYPE_MASK) == EFFECT_FLAG_TYPE_AUXILIARY) { jdescConnect = env->NewStringUTF("Auxiliary"); } else { jdescConnect = env->NewStringUTF("Insert"); } jdescName = env->NewStringUTF(desc.name); jdescImplementor = env->NewStringUTF(desc.implementor); jdesc = env->NewObject(fields.clazzDesc, fields.midDescCstor, jdescType, jdescUuid, jdescConnect, jdescName, jdescImplementor); env->DeleteLocalRef(jdescType); env->DeleteLocalRef(jdescUuid); env->DeleteLocalRef(jdescConnect); env->DeleteLocalRef(jdescName); env->DeleteLocalRef(jdescImplementor); if (jdesc == NULL) { LOGE("env->NewObject(fields.clazzDesc, 
fields.midDescCstor)"); goto setup_failure; } env->SetObjectArrayElement(javadesc, 0, jdesc); env->SetIntField(thiz, fields.fidNativeAudioEffect, (int)lpAudioEffect); env->SetIntField(thiz, fields.fidJniData, (int)lpJniStorage); return AUDIOEFFECT_SUCCESS; // failures:setup_failure: if (nId != NULL) { env->ReleasePrimitiveArrayCritical(jId, nId, 0); } if (lpAudioEffect) { delete lpAudioEffect; } env->SetIntField(thiz, fields.fidNativeAudioEffect, 0); if (lpJniStorage) { delete lpJniStorage; } env->SetIntField(thiz, fields.fidJniData, 0); if (uuidStr != NULL) { env->ReleaseStringUTFChars(uuid, uuidStr); } if (typeStr != NULL) { env->ReleaseStringUTFChars(type, typeStr); } return lStatus;}// ---------------------------------------------------------------- int initResult = native_setup(new WeakReference<AudioEffect>(this), type.toString(), uuid.toString(), priority, audioSession, id, desc); if (initResult != SUCCESS && initResult != ALREADY_EXISTS) { Log.e(TAG, "Error code " + initResult + " when initializing AudioEffect."); switch (initResult) { case ERROR_BAD_VALUE: throw (new IllegalArgumentException("Effect type: " + type + " not supported.")); case ERROR_INVALID_OPERATION: throw (new UnsupportedOperationException( "Effect library not loaded")); default: throw (new RuntimeException( "Cannot initialize effect engine for type: " + type + "Error: " + initResult)); } } mId = id[0]; mDescriptor = desc[0]; synchronized (mStateLock) { mState = STATE_INITIALIZED; } }
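The descriptor search in createEffect() above encodes one small policy worth calling out: when the effect is created on the output mix (session 0), an auxiliary version of the requested effect type is preferred over an insert version, for OpenSL ES compliance. A minimal sketch of that selection rule in plain Java (the class and field names here are mine, not the Android sources):

```java
import java.util.List;

// Hypothetical model of the descriptor selection loop in AudioFlinger::createEffect():
// scan all known effects of the requested type; on session 0 (output mix) keep
// scanning in the hope of finding an auxiliary version, otherwise take the first hit.
class EffectPicker {
    static final int SESSION_OUTPUT_MIX = 0;

    static final class Desc {
        final String type;
        final boolean auxiliary;
        Desc(String type, boolean auxiliary) { this.type = type; this.auxiliary = auxiliary; }
    }

    static Desc pick(List<Desc> effects, String type, int sessionId) {
        Desc saved = null;
        for (Desc d : effects) {
            if (!d.type.equals(type)) continue;
            // any session other than the output mix, or an auxiliary match, ends the search
            if (sessionId != SESSION_OUTPUT_MIX || d.auxiliary) return d;
            saved = d; // remember the insert version, keep looking for an auxiliary one
        }
        return saved; // null if no effect of this type exists at all
    }
}
```

This mirrors the C++ loop's early `break` condition; the real code additionally rejects auxiliary effects outright on sessions other than 0 after the search.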
###############################################################################################
&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&总结&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&
Each audio session ID corresponds to one effect chain.
When an effect is created, the code first checks whether an effect chain already exists for the specified session ID; if not, it creates one.
It then checks whether an effect with the same descriptor already exists in that chain; if not, it creates one.
However, each application gets its own EffectHandle.
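The bookkeeping summarized above can be sketched in a few lines of plain Java (a simplified model with hypothetical names, not the actual AudioFlinger classes):

```java
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical model of PlaybackThread::createEffect_l() bookkeeping:
// one effect chain per session ID, one effect module per descriptor within a
// chain, and a fresh handle for every client request.
class EffectRegistry {
    static final class Module {
        final String descriptor;
        Module(String descriptor) { this.descriptor = descriptor; }
    }

    static final class Handle {
        final Module module;
        Handle(Module module) { this.module = module; }
    }

    // sessionId -> (descriptor -> shared effect module)
    private final Map<Integer, Map<String, Module>> chains = new HashMap<>();

    Handle createEffect(int sessionId, String descriptor) {
        // create the chain for this session lazily
        Map<String, Module> chain = chains.computeIfAbsent(sessionId, s -> new LinkedHashMap<>());
        // create the module lazily; later requests with the same descriptor reuse it
        Module module = chain.computeIfAbsent(descriptor, Module::new);
        // but every caller gets its own handle
        return new Handle(module);
    }
}
```

Two applications asking for the same effect on the same session thus share one engine instance while each keeps a private handle, which is exactly why the priority parameter is needed to arbitrate control.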
&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&