Android Camera System Architecture Source Code Analysis (4): The Camera's Data Source and Camera Management

The Camera's Data Source and Camera Management
Continuing from Part 3, we return to step (4) of Cam1DeviceBase::startPreview(), mpCamAdapter->startPreview(). Before covering (4), let's first look at (1) onStartPreview().
onStartPreview() is implemented in DefaultCam1Device.cpp:
DefaultCam1Device::onStartPreview()
{
// Initialize Camera Adapter.
// The function below is implemented in Cam1DeviceBase.cpp
initCameraAdapter();
}
 
Cam1DeviceBase::initCameraAdapter()
{
// (1) Check to see if CamAdapter has existed or not.
//...
// (2) Create & init a new CamAdapter.
// createInstance is implemented in BaseCamAdapter.Instance.cpp.
// Since our s8AppMode is "default", MtkDefaultCamAdapter is used here.
// createInstance only initializes variables, so we will not dig into it.
mpCamAdapter = ICamAdapter::createInstance(mDevName, mi4OpenId, mpParamsMgr);
if ( mpCamAdapter != 0 && mpCamAdapter->init() )
{
// (.1) init. This calls MtkDefaultCamAdapter's init()
mpCamAdapter->setCallbacks(mpCamMsgCbInfo);
// Enable the message types; any msgType that is not enabled will be ignored
mpCamAdapter->enableMsgType(mpCamMsgCbInfo->mMsgEnabled);
 
// (.2) Invoke its setParameters (apparently not implemented)
mpCamAdapter->setParameters();
 
// (.3) Send to-do commands: perform the operations queued in mTodoCmdMap, e.g. turning on the shutter sound or enabling auto-focus for recording
for (size_t i = 0; i < mTodoCmdMap.size(); i++)
{
CommandInfo const& rCmdInfo = mTodoCmdMap.valueAt(i);
MY_LOGD("send queued cmd(%#d),args(%d,%d)", rCmdInfo.cmd, rCmdInfo.arg1, rCmdInfo.arg2);
mpCamAdapter->sendCommand(rCmdInfo.cmd, rCmdInfo.arg1, rCmdInfo.arg2);
}
mTodoCmdMap.clear();
 
//Both clients get mpCamAdapter set as their ProviderClient; this was analyzed earlier, so we won't repeat it
// (.4) [DisplayClient] set Image Buffer Provider Client if needed.
mpDisplayClient->setImgBufProviderClient(mpCamAdapter);
// (.5) [CamClient] set Image Buffer Provider Client if needed.
mpCamClient->setImgBufProviderClient(mpCamAdapter);
}
}
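The to-do command pattern in step (.3) above is worth noting: commands issued before the adapter exists are remembered in mTodoCmdMap and replayed once the adapter is created. A minimal sketch of that idea, using hypothetical stand-in types (Cam1DeviceSketch and AdapterStub are illustrative names, not the real MTK classes):

```cpp
#include <cassert>
#include <map>
#include <vector>

// Sketch of the mTodoCmdMap pattern: commands issued before the CamAdapter
// exists are queued, then replayed once and cleared when it initializes.
struct CommandInfo { int cmd, arg1, arg2; };

class AdapterStub {
public:
    std::vector<CommandInfo> received;           // commands actually delivered
    void sendCommand(int c, int a1, int a2) { received.push_back({c, a1, a2}); }
};

class Cam1DeviceSketch {
    std::map<int, CommandInfo> mTodoCmdMap;      // keyed by cmd: a repeated cmd overwrites
    AdapterStub* mpCamAdapter = nullptr;
public:
    void sendCommand(int c, int a1, int a2) {
        if (mpCamAdapter) mpCamAdapter->sendCommand(c, a1, a2);
        else mTodoCmdMap[c] = {c, a1, a2};       // adapter not ready: remember it
    }
    void initCameraAdapter(AdapterStub* adapter) {
        mpCamAdapter = adapter;
        for (auto const& kv : mTodoCmdMap)       // flush queued commands
            mpCamAdapter->sendCommand(kv.second.cmd, kv.second.arg1, kv.second.arg2);
        mTodoCmdMap.clear();
    }
};
```

Keying the map by command id means a command sent twice before init is only delivered once, with the latest arguments, which matches the valueAt()-and-clear loop in the real code.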
 
//MtkDefaultCamAdapter.cpp. Note the objects created below; they are all important
CamAdapter::init()
{
//PreviewBufMgr
mpPreviewBufMgr = IPreviewBufMgr::createInstance(mpImgBufProvidersMgr);
//PreviewCmdQueThread; note that the first argument is mpPreviewBufMgr
mpPreviewCmdQueThread = IPreviewCmdQueThread::createInstance(mpPreviewBufMgr, getOpenId(), mpParamsMgr);
mpPreviewCmdQueThread->run();
 
//CaptureCmdQueThread
mpCaptureCmdQueThread = ICaptureCmdQueThread::createInstance(this);
mpCaptureCmdQueThread->run();
 
//The camera's three A's: auto exposure, auto focus, and auto white balance
init3A();
 
mpVideoSnapshotScenario = IVideoSnapshotScenario::createInstance();
mpVideoSnapshotScenario->setCallback(this);
 
mpResourceLock = ResourceLock::CreateInstance();
mpResourceLock->Init();
}
That completes initialization. At this point we can make a bold guess: if CamClient and DisplayClient are the operators of two features, then CamAdapter is the manager of the whole camera. It communicates with the lower layers, reads buffers, hands buffers out to the feature operators, and manages the camera's attributes and algorithms, such as 3A and whether auto-focus is on. Let's see whether that is the case.
We now continue with step (4) of Cam1DeviceBase::startPreview(), mpCamAdapter->startPreview():
CamAdapter::startPreview()
{
//Note that 'this' is passed in
//This calls onStartPreview() in state.cpp
//StateManager manages the state of the whole camera; we skip it here
return mpStateManager->getCurrentState()->onStartPreview(this);
}
 
//state.cpp
StateIdle::onStartPreview(IStateHandler* pHandler)
{
//Calls the handler, i.e. MtkDefaultCamAdapter's onHandleStartPreview()
status = pHandler->onHandleStartPreview();
}
 
CamAdapter::onHandleStartPreview()
{
//Posts three commands to the PreviewCmdQueThread
mpPreviewCmdQueThread->postCommand(PrvCmdCookie::eStart, PrvCmdCookie::eSemAfter);
mpPreviewCmdQueThread->postCommand(PrvCmdCookie::eDelay, PrvCmdCookie::eSemAfter);
mpPreviewCmdQueThread->postCommand(PrvCmdCookie::eUpdate, PrvCmdCookie::eSemBefore);
}
 
//PreviewCmdQueThread.cpp: receives the commands posted above
PreviewCmdQueThread::threadLoop()
{
sp<PrvCmdCookie> pCmdCookie;
//
if (getCommand(pCmdCookie))
{
switch (pCmdCookie->getCmd())
{
case PrvCmdCookie::eStart:
isvalid = start();
break;
case PrvCmdCookie::eDelay:
isvalid = delay(EQueryType_Init);
break;
case PrvCmdCookie::eUpdate:
isvalid = update();
break;
case PrvCmdCookie::ePrecap:
isvalid = precap();
break;
case PrvCmdCookie::eStop:
isvalid = stop();
break;
case PrvCmdCookie::eExit:
default:
break;
}
}
}

Next we look at the three functions corresponding to the three commands PreviewCmdQueThread received: start(), delay(), and update(). All of them live in PreviewCmdQueThread.cpp.
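Before diving into the three handlers, note that the post/get handshake between onHandleStartPreview() and threadLoop() is a classic single-consumer command queue. A minimal sketch of the pattern, assuming a plain FIFO; all names here are illustrative, and the real PrvCmdCookie additionally carries a semaphore (eSemBefore/eSemAfter) so the poster can wait on the command, which is not modeled here:

```cpp
#include <cassert>
#include <condition_variable>
#include <deque>
#include <mutex>
#include <thread>
#include <vector>

// Minimal sketch of the PreviewCmdQueThread pattern: one thread posts
// commands, the preview thread blocks in getCommand() and dispatches them.
enum class PrvCmd { eStart, eDelay, eUpdate, eStop, eExit };

class CmdQueue {
    std::mutex mtx;
    std::condition_variable cond;
    std::deque<PrvCmd> que;
public:
    void postCommand(PrvCmd c) {
        { std::lock_guard<std::mutex> lk(mtx); que.push_back(c); }
        cond.notify_one();
    }
    PrvCmd getCommand() {                        // blocks until a command arrives
        std::unique_lock<std::mutex> lk(mtx);
        cond.wait(lk, [this]{ return !que.empty(); });
        PrvCmd c = que.front(); que.pop_front();
        return c;
    }
};

// Drains commands until eExit, recording what was dispatched.
std::vector<PrvCmd> runLoop(CmdQueue& q) {
    std::vector<PrvCmd> handled;
    for (;;) {
        PrvCmd c = q.getCommand();
        if (c == PrvCmd::eExit) break;
        handled.push_back(c);                    // stand-in for start()/delay()/update()
    }
    return handled;
}
```

The predicate form of wait() protects against spurious wakeups, and the lock is dropped before notify_one() so the woken thread can take it immediately.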
start()
PreviewCmdQueThread::start()
{
vector<IhwScenario::PortImgInfo> vimgInfo;
vector<IhwScenario::PortBufInfo> vBufPass1Out;
ImgBufQueNode Pass1Node;
IhwScenario::PortBufInfo BufInfo;
//Choose the ISP profile for the preview mode
EIspProfile_T eIspProfile = ( mspParamsMgr->getRecordingHint() ) ? EIspProfile_VideoPreview : EIspProfile_NormalPreview;
ECmd_T eCmd = ( mspParamsMgr->getRecordingHint() )? ECmd_CamcorderPreviewStart : ECmd_CameraPreviewStart;
//Recording hint
mbRecordingHint = ( mspParamsMgr->getRecordingHint() )? true : false;
 
//(0) scenario ID is decided by recording hint
int32_t eScenarioId = ( mspParamsMgr->getRecordingHint() ) ? ACDK_SCENARIO_ID_VIDEO_PREVIEW : ACDK_SCENARIO_ID_CAMERA_PREVIEW;
 
//(1) sensor (singleton): configure the sensor
ret = mSensorInfo.init((ACDK_SCENARIO_ID_ENUM)eScenarioId);
 
//(2) Hw scenario
mpHwScenario = IhwScenario::createInstance(eHW_VSS, mSensorInfo.getSensorType(),
mSensorInfo.meSensorDev, mSensorInfo.mSensorBitOrder);
//This calls VSSScenario.cpp, which mainly initializes ICamIOPipe and IPostProcPipe
mpHwScenario->init();
 
// (2.1) hw config: get the image sensor's info
getCfg(eID_Pass1In|eID_Pass1Out, vimgInfo);
//Apply the config to mpHwScenario
getHw()->setConfig(&vimgInfo);
 
// (2.2) enque pass 1 buffer
// must do this earlier than hw start; calls PreviewBufMgr.cpp to create PASS1BUFCNT+PASS1BUFCNT_VSS buffers
mspPreviewBufHandler->allocBuffer(mSensorInfo.getImgWidth(), mSensorInfo.getImgHeight(),
mSensorInfo.getImgFormat(), PASS1BUFCNT+PASS1BUFCNT_VSS);
for (int32_t i = 0; i < PASS1BUFCNT; i++)
{
//Deque one of the buffers just created
mspPreviewBufHandler->dequeBuffer(eID_Pass1Out, Pass1Node);
//Get its bufinfo
mapNode2BufInfo(eID_Pass1Out, Pass1Node, BufInfo);
//Collect the bufinfo
vBufPass1Out.push_back(BufInfo);
}
//Enque the buffers into mpHwScenario's queue
getHw()->enque(NULL, &vBufPass1Out);
//Deque the VSS buffer
#if VSS_ENABLE
mspPreviewBufHandler->dequeBuffer(eID_Pass1Out, Pass1Node);
mapNode2BufInfo(eID_Pass1Out, Pass1Node, BufInfo);
mvBufPass1OutVss.clear();
mvBufPass1OutVss.push_back(BufInfo);
#endif
 
//(3) 3A
//!! must be set after hw->enque; otherwise, over-exposure.
mp3AHal = Hal3ABase::createInstance(DevMetaInfo::queryHalSensorDev(gInfo.openId));
mp3AHal->setZoom(100, 0, 0, mSensorInfo.getImgWidth(), mSensorInfo.getImgHeight());
mp3AHal->setIspProfile(eIspProfile);
mp3AHal->sendCommand(eCmd);
 
// (4) EIS
mpEisHal = EisHalBase::createInstance("mtkdefaultAdapter");
if(mpEisHal != NULL)
{
eisHal_config_t eisHalConfig;
eisHalConfig.imageWidth = mSensorInfo.getImgWidth();
eisHalConfig.imageHeight = mSensorInfo.getImgHeight();
mpEisHal->configEIS(eHW_VSS, eisHalConfig);
}
//
#if VSS_ENABLE
mpVideoSnapshotScenario = IVideoSnapshotScenario::createInstance();
#endif
// (5) hw start
//!!enable pass1 SHOULD BE last step!! Enables the ISP
getHw()->start();
}
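Step (2.2) above primes the pass-1 path: allocate a fixed pool of buffers, deque each node from the buffer manager, map it to a BufInfo, and enque the batch to the hardware before getHw()->start(). The loop can be sketched as follows (PreviewBufMgrSketch and primePass1 are hypothetical stand-ins, not the real interfaces):

```cpp
#include <cassert>
#include <deque>
#include <vector>

// Sketch of the pass-1 buffer priming in start(): allocate PASS1BUFCNT
// buffers, deque each, map it to a hardware BufInfo, then hand the whole
// batch to the ISP before starting it. Types are illustrative stand-ins.
struct BufInfo { int id; };

class PreviewBufMgrSketch {
    std::deque<int> freeIds;
public:
    void allocBuffer(int count) {
        for (int i = 0; i < count; ++i) freeIds.push_back(i);
    }
    bool dequeBuffer(int& idOut) {               // false when the pool is empty
        if (freeIds.empty()) return false;
        idOut = freeIds.front(); freeIds.pop_front();
        return true;
    }
};

// Collects BufInfo for `count` buffers, ready for a getHw()->enque()-style call.
std::vector<BufInfo> primePass1(PreviewBufMgrSketch& mgr, int count) {
    mgr.allocBuffer(count);
    std::vector<BufInfo> vBufPass1Out;
    for (int i = 0; i < count; ++i) {
        int id = -1;
        if (mgr.dequeBuffer(id)) vBufPass1Out.push_back({id});  // map node -> BufInfo
    }
    return vBufPass1Out;
}
```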
 
//PreviewCmdQueThread.cpp
sensorInfo::init(ACDK_SCENARIO_ID_ENUM scenarioId)
{
//(1) init
mpSensor = SensorHal::createInstance();
 
//(2) main or sub; retrieve the sensor info that was registered in enumDeviceLocked
meSensorDev = (halSensorDev_e)DevMetaInfo::queryHalSensorDev(gInfo.openId);
 
//Set the sensor device info
mpSensor->sendCommand(meSensorDev, SENSOR_CMD_SET_SENSOR_DEV);
mpSensor->init();
 
//(3) Read the sensor type: raw, yuv, rgb565...
mpSensor->sendCommand(meSensorDev, SENSOR_CMD_GET_SENSOR_TYPE, (int32_t)&meSensorType);
 
//(4) tg/mem size
uint32_t u4TgInW = 0;
uint32_t u4TgInH = 0;
switch (scenarioId)
{
case ACDK_SCENARIO_ID_CAMERA_PREVIEW:
{
mpSensor->sendCommand(meSensorDev, SENSOR_CMD_GET_SENSOR_PRV_RANGE, (int32_t)&u4TgInW, (uint32_t)&u4TgInH);
break;
}
case ACDK_SCENARIO_ID_VIDEO_PREVIEW:
//Similar to the case above; both just fetch the sensor's resolution
}
//
mu4TgOutW = ROUND_TO_2X(u4TgInW); // in case sensor returns odd width
mu4TgOutH = ROUND_TO_2X(u4TgInH); // in case sensor returns odd height
mu4MemOutW = mu4TgOutW;
mu4MemOutH = mu4TgOutH;
// 'Scenario' roughly means a use case; it is not entirely clear what it is for at this point
IhwScenario* pHwScenario = IhwScenario::createInstance(eHW_VSS, meSensorType,
meSensorDev, mSensorBitOrder);
pHwScenario->getHwValidSize(mu4MemOutW,mu4MemOutH);
pHwScenario->destroyInstance();
pHwScenario = NULL;
//Configure the sensor
halSensorIFParam_t sensorCfg[2];
int idx = meSensorDev == SENSOR_DEV_MAIN ? 0 : 1;
sensorCfg[idx].u4SrcW = u4TgInW;
sensorCfg[idx].u4SrcH = u4TgInH;
sensorCfg[idx].u4CropW = mu4TgOutW;
sensorCfg[idx].u4CropH = mu4TgOutH;
sensorCfg[idx].u4IsContinous = 1;
sensorCfg[idx].u4IsBypassSensorScenario = 0;
sensorCfg[idx].u4IsBypassSensorDelay = 1;
sensorCfg[idx].scenarioId = scenarioId;
mpSensor->setConf(sensorCfg);
//
//(5) format
halSensorRawImageInfo_t sensorFormatInfo;
memset(&sensorFormatInfo, 0, sizeof(halSensorRawImageInfo_t));
mpSensor->sendCommand(meSensorDev, SENSOR_CMD_GET_RAW_INFO,
(MINT32)&sensorFormatInfo, 1, 0);
mSensorBitOrder = (ERawPxlID)sensorFormatInfo.u1Order;
if(meSensorType == SENSOR_TYPE_RAW) // RAW
{
switch(sensorFormatInfo.u4BitDepth)
{
case 8 :
mFormat = MtkCameraParameters::PIXEL_FORMAT_BAYER8;
break;
case 10 :
default :
mFormat = MtkCameraParameters::PIXEL_FORMAT_BAYER10;
break;
}
}
else if (meSensorType == SENSOR_TYPE_YUV){
switch(sensorFormatInfo.u1Order)
{
case SENSOR_OUTPUT_FORMAT_UYVY :
case SENSOR_OUTPUT_FORMAT_CbYCrY :
mFormat = MtkCameraParameters::PIXEL_FORMAT_YUV422I_UYVY;
break;
case SENSOR_OUTPUT_FORMAT_VYUY :
case SENSOR_OUTPUT_FORMAT_CrYCbY :
mFormat = MtkCameraParameters::PIXEL_FORMAT_YUV422I_VYUY;
break;
case SENSOR_OUTPUT_FORMAT_YVYU :
case SENSOR_OUTPUT_FORMAT_YCrYCb :
mFormat = MtkCameraParameters::PIXEL_FORMAT_YUV422I_YVYU;
break;
case SENSOR_OUTPUT_FORMAT_YUYV :
case SENSOR_OUTPUT_FORMAT_YCbYCr :
default :
mFormat = CameraParameters::PIXEL_FORMAT_YUV422I;
break;
}
}
}
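The ROUND_TO_2X used for the TG output size above is not shown in this excerpt. A plausible definition, assuming it rounds an odd dimension down to the nearest even value (the real macro in the MTK tree may round up instead; this is only a sketch):

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical ROUND_TO_2X: clear the low bit so an odd sensor dimension
// becomes even, since YUV buffer layouts generally require even sizes.
static inline uint32_t ROUND_TO_2X(uint32_t x) { return x & ~1u; }
```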
The above initialized several objects: IhwScenario, vBufPass1Out, and meSensorDev. delay() does nothing of note. update(), which comes next, is our focus: it covers how the sensor's data is read and how that data is fed into the ISP for processing. After this processing, PreviewCmdQueThread hands the data to our DisplayClient, which takes care of display.
PreviewCmdQueThread::update()
{
// Loop: check if the next command is coming
// Next command can be {stop, precap}
// Do at least 1 frame (in case of going to precapture directly)
// --> this works when AE updates in each frame (instead of in 3 frames)
do{
//(1) Read, process, and distribute buffers
updateOne();
mFrameCnt++;
 
//(2) handle zoom callback
handleCallback();
 
//(3) update check
updateCheck1();
updateCheck2();
} while( ! isNextCommand() );
}
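The shape of update() is a do/while that guarantees at least one frame is processed before yielding to a pending stop or precap command. A minimal sketch of that control flow, with illustrative stand-in names (in the real code isNextCommand() peeks the command queue):

```cpp
#include <cassert>
#include <deque>
#include <functional>

// Sketch of update()'s control flow: process at least one frame, then keep
// looping until another command (stop/precap) is pending in the queue.
class UpdateLoopSketch {
    std::deque<int> pendingCmds;                 // stand-in for the command queue
public:
    int framesDone = 0;
    void post(int cmd) { pendingCmds.push_back(cmd); }
    bool isNextCommand() const { return !pendingCmds.empty(); }
    void update(std::function<void()> updateOne) {
        do {
            updateOne();                         // pass 1 + pass 2 for one frame
            ++framesDone;
        } while (!isNextCommand());              // exit only once a command waits
    }
};
```

The do/while (rather than while) is what makes "at least one frame" hold even when a precap command is already queued on entry.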
 
PreviewCmdQueThread::updateOne()
{
bool ret = true;
int32_t pass1LatestBufIdx = -1;
int64_t pass1LatestTimeStamp = 0;
nsecs_t passTime = 0,pass1Time = 0,pass2Time = 0,vssTime = 0;
 
#if VSS_ENABLE
vector<IhwScenario::PortBufInfo> vDeBufPass1OutVss;
#endif
vector<IhwScenario::PortQTBufInfo> vDeBufPass1Out;
vector<IhwScenario::PortQTBufInfo> vDeBufPass2Out;
vector<IhwScenario::PortBufInfo> vEnBufPass2In;
vector<IhwScenario::PortBufInfo> vEnBufPass2Out;
vector<IhwScenario::PortImgInfo> vPass2Cfg;
 
passTime = systemTime();
//*************************************************************
// (1) [PASS 1] sensor ---> ISP --> DRAM(IMGO)
//*************************************************************
pass1Time = passTime;
//Get the buffer info from ISP_DMA_IMGO
getHw()->deque(eID_Pass1Out, &vDeBufPass1Out);
pass1Time = systemTime()-pass1Time;
//
mapQT2BufInfo(eID_Pass2In, vDeBufPass1Out, vEnBufPass2In);
 
mp3AHal->sendCommand(ECmd_Update);
 
mpEisHal->doEIS();
 
//*************************************************************
//(2) [PASS 2] DRAM(IMGI) --> ISP --> CDP --> DRAM (DISPO, VIDO)
// if no buffer is available, return immediately.
//*************************************************************
int32_t flag = 0;
 
//(.1) PASS2-IN
getCfg(eID_Pass2In , vPass2Cfg);
//(.2) PASS2-OUT
ImgBufQueNode dispNode;
ImgBufQueNode vidoNode;
mspPreviewBufHandler->dequeBuffer(eID_Pass2DISPO, dispNode);
mspPreviewBufHandler->dequeBuffer(eID_Pass2VIDO, vidoNode);
 
if ( dispNode.getImgBuf() != 0)
{
flag |= eID_Pass2DISPO;
IhwScenario::PortBufInfo BufInfo;
IhwScenario::PortImgInfo ImgInfo;
mapNode2BufInfo(eID_Pass2DISPO, dispNode, BufInfo);
mapNode2ImgInfo(eID_Pass2DISPO, dispNode, ImgInfo);
vEnBufPass2Out.push_back(BufInfo);
vPass2Cfg.push_back(ImgInfo);
}
 
if ( vidoNode.getImgBuf() != 0)
{
if( vidoNode.getCookieDE() == IPreviewBufMgr::eBuf_Rec && !mbRecording)
{
MY_LOGW("VR has been stopped, do not enque buf to pass 2");
}
else
{
flag = flag | eID_Pass2VIDO;
IhwScenario::PortBufInfo BufInfo;
IhwScenario::PortImgInfo ImgInfo;
mapNode2BufInfo(eID_Pass2VIDO, vidoNode, BufInfo);
mapNode2ImgInfo(eID_Pass2VIDO, vidoNode, ImgInfo);
vEnBufPass2Out.push_back(BufInfo);
vPass2Cfg.push_back(ImgInfo);
}
}
 
//(.3) no buffer ==> return immediately.
//...
//(.4) has buffer ==> do pass2 en/deque
// Note: config must be set earlier than en/de-que
//
updateZoom(vPass2Cfg);
getHw()->setConfig(&vPass2Cfg);
getHw()->enque(&vEnBufPass2In, &vEnBufPass2Out);
 
pass2Time = systemTime();
getHw()->deque((EHwBufIdx)flag, &vDeBufPass2Out);
pass2Time = systemTime()-pass2Time;
 
//*************************************************************
// (3) return buffer
//*************************************************************
if( vDeBufPass1Out.size() > 0 && vDeBufPass1Out[0].bufInfo.vBufInfo.size() > 0)
{
pass1LatestBufIdx = vDeBufPass1Out[0].bufInfo.vBufInfo.size()-1;
pass1LatestTimeStamp = vDeBufPass1Out[0].bufInfo.vBufInfo[pass1LatestBufIdx].getTimeStamp_ns();
//MY_LOGD("pass1LatestBufIdx(%d),pass1LatestTimeStamp(%lld)",pass1LatestBufIdx,pass1LatestTimeStamp);
}
//
if (ret)
{
if(checkDumpPass1())
{
dumpBuffer(vDeBufPass1Out, "pass1", "raw", mFrameCnt);
}
if (flag & eID_Pass2DISPO)
{
if(checkDumpPass2Dispo())
{
dumpImg((MUINT8*)(dispNode.getImgBuf()->getVirAddr()),
dispNode.getImgBuf()->getBufSize(), "pass2_dispo", "yuv", mFrameCnt);
}
}
if (flag & eID_Pass2VIDO)
{
if(checkDumpPass2Vido())
{
dumpImg((MUINT8*)(vidoNode.getImgBuf()->getVirAddr()),
vidoNode.getImgBuf()->getBufSize(), "pass2_vido", "yuv", mFrameCnt);
}
}
}
// (.1) return PASS1
#if VSS_ENABLE
if( mpVideoSnapshotScenario->getStatus() == IVideoSnapshotScenario::Status_WaitImage &&
pass1LatestBufIdx >= 0)
{
vector<IhwScenario::PortImgInfo> vPass1OutCfg;
IVideoSnapshotScenario::ImageInfo vssImage;
//
getCfg(eID_Pass1Out, vPass1OutCfg);
//
vssImage.size.width = vPass1OutCfg[0].u4Width;
vssImage.size.height = vPass1OutCfg[0].u4Height;
vssImage.size.stride = vPass1OutCfg[0].u4Stride[0];
vssImage.mem.id = vDeBufPass1Out[0].bufInfo.vBufInfo[pass1LatestBufIdx].memID;
vssImage.mem.vir = vDeBufPass1Out[0].bufInfo.vBufInfo[pass1LatestBufIdx].u4BufVA;
vssImage.mem.phy = vDeBufPass1Out[0].bufInfo.vBufInfo[pass1LatestBufIdx].u4BufPA;
vssImage.mem.size = vDeBufPass1Out[0].bufInfo.vBufInfo[pass1LatestBufIdx].u4BufSize;
vssImage.crop.x = vPass2Cfg[0].crop.x;
vssImage.crop.y = vPass2Cfg[0].crop.y;
vssImage.crop.w = vPass2Cfg[0].crop.w;
vssImage.crop.h = vPass2Cfg[0].crop.h;
mpVideoSnapshotScenario->setImage(vssImage);
//
mapQT2BufInfo(eID_Pass1Out, vDeBufPass1Out, vDeBufPass1OutVss);
getHw()->replaceQue(&vDeBufPass1OutVss, &mvBufPass1OutVss);
//Save the current pass 1 buffer for next VSS.
mvBufPass1OutVss.clear();
mvBufPass1OutVss.push_back(vDeBufPass1OutVss[0]);
}
else
{
#endif
getHw()->enque(vDeBufPass1Out);
#if VSS_ENABLE
}
#endif
//
#if VSS_ENABLE
vssTime = systemTime();
mpVideoSnapshotScenario->process();
vssTime = systemTime()-vssTime;
#endif
// (.2) return PASS2
if (flag & eID_Pass2DISPO)
{
dispNode.getImgBuf()->setTimestamp(pass1LatestTimeStamp);
//Notify the client's receive queue so it can pick up and process the buffer
mspPreviewBufHandler->enqueBuffer(dispNode);
}
//
if (flag & eID_Pass2VIDO)
{
vidoNode.getImgBuf()->setTimestamp(pass1LatestTimeStamp);
mspPreviewBufHandler->enqueBuffer(vidoNode);
}
//[T.B.D]
//'0': "SUPPOSE" DISPO and VIDO gets the same timeStamp
if( vDeBufPass2Out.size() > 1 )
{
MY_LOGW_IF(vDeBufPass2Out.at(0).bufInfo.getTimeStamp_ns() != vDeBufPass2Out.at(1).bufInfo.getTimeStamp_ns(),
"DISP(%f),VIDO(%f)", vDeBufPass2Out.at(0).bufInfo.getTimeStamp_ns(), vDeBufPass2Out.at(1).bufInfo.getTimeStamp_ns());
}
}
Good: we've found where buffers are fetched and how the receiver is notified. Combined with the earlier analysis of how DisplayClient receives buffers, we now roughly know the overall data flow. As for how the sensor and the ISP are wired together, how data is pulled from each of them, what Pass1 and Pass2 actually are, and why both have an "in" and an "out"... there seems to be a lot going on there; when there's time we'll open a dedicated topic for a deeper analysis. Let's continue with mspPreviewBufHandler->enqueBuffer(dispNode); we're close to finishing this article's task.
mspPreviewBufHandler is the PreviewBufMgr, so this is equivalent to calling PreviewBufMgr's enqueBuffer():
PreviewBufMgr::enqueBuffer(ImgBufQueNode const& node)
{
// (1) set DONE tag into package
const_cast<ImgBufQueNode*>(&node)->setStatus(ImgBufQueNode::eSTATUS_DONE);
 
// (2) choose the correct "client"
switch (node.getCookieDE())
{
case eBuf_Pass1:
//...
case eBuf_Disp:
{
//From the providers manager, find the provider whose id is eID_DISPLAY; where it is initialized was analyzed earlier
sp<IImgBufProvider> bufProvider = mspImgBufProvidersMgr->getDisplayPvdr();
/**
In DisplayClient::waitAndHandleReturnBuffers(), dequeProcessor() in ImgBufQueue.cpp is called.
Inside dequeProcessor(), mDoneImgBufQueCond.wait() blocks until the function below is called,
which passes the buffer in and calls mDoneImgBufQueCond.broadcast() to notify the waiter
that a buffer is ready to be received.
**/
bufProvider->enqueProvider(node);
}
break;
//
case eBuf_AP:
//...
case eBuf_FD:
//...
case eBuf_Rec:
//...
}
}
This confirms our earlier guess. CamAdapter manages the initialization and scheduling of hardware such as the image sensor and the ISP, the 3A algorithms, and the distribution, flow, and processing of data. Although CamAdapter does not itself execute the Preview, Capture, or Record operations, it governs where their data comes from and where it goes, and it also contains the buffer managers and the camera's overall state.

For example, CamAdapter's management of preview involves PreviewCmdQueThread and PreviewBufMgr. Once PreviewCmdQueThread's loop receives a notification, it reads data from the sensor, has the ISP and 3A process it, and then passes the buffer to PreviewBufMgr to be distributed to the appropriate operation. If a buffer PreviewBufMgr receives needs to be displayed, it finds DisplayClient's ImgBufQueue via the ImgBufProvidersManager, inserts the buffer into that queue, and ImgBufQueue issues the notification to display it.
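The enqueProvider()/dequeProcessor() handshake just described is a standard condition-variable producer/consumer. A minimal sketch of it, with integer buffer ids standing in for ImgBufQueNode (ImgBufQueueSketch is an illustrative name, not the real class):

```cpp
#include <cassert>
#include <condition_variable>
#include <mutex>
#include <thread>
#include <vector>

// Sketch of the ImgBufQueue handshake: enqueProvider() puts a done buffer
// into the queue and broadcasts; dequeProcessor() (run on the DisplayClient
// thread) waits on the condition until buffers arrive or exit is requested.
class ImgBufQueueSketch {
    std::mutex mtx;
    std::condition_variable mDoneImgBufQueCond;
    std::vector<int> mDoneImgBufQue;             // buffer ids stand in for nodes
    bool exiting = false;
public:
    void enqueProvider(int bufId) {
        std::lock_guard<std::mutex> lk(mtx);
        mDoneImgBufQue.push_back(bufId);
        mDoneImgBufQueCond.notify_all();         // wake the display thread
    }
    void requestExit() {
        std::lock_guard<std::mutex> lk(mtx);
        exiting = true;
        mDoneImgBufQueCond.notify_all();
    }
    // Returns all done buffers, blocking until at least one arrives or exit.
    std::vector<int> dequeProcessor() {
        std::unique_lock<std::mutex> lk(mtx);
        mDoneImgBufQueCond.wait(lk, [this]{ return !mDoneImgBufQue.empty() || exiting; });
        std::vector<int> out;
        out.swap(mDoneImgBufQue);
        return out;
    }
};
```

Draining the whole queue in one wake-up matches the "handle return buffers" style of the DisplayClient loop, where several frames may have completed between wakes.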