Vehicle Identification Via Sparse Representation


Abstract—

In this paper, we propose a system using video cameras to perform vehicle identification. We tackle this problem by reconstructing an input using multiple linear regression models and compressed sensing, which provide new ways to deal with three crucial issues in vehicle identification, namely, feature extraction, online vehicle identification database buildup, and robustness to occlusions and misalignment. The results show the capability of the proposed approach.

Index Terms—Sparse representation, vehicle identification.


I. INTRODUCTION

States conduct traffic monitoring for many reasons, including highway planning and design or motor vehicle enforcement. Traffic monitoring can be classified into two different types, namely, flow monitoring and route monitoring. Flow monitoring observes the amount of traffic flowing through a checkpoint of interest, whereas route monitoring identifies the route of a vehicle of interest. Unlike flow monitoring, route monitoring generally needs to know the identity of the observed vehicle and is generally more difficult. This route monitoring capability can provide valuable information for freight logistics analysis, forecast modeling, and future transportation infrastructure planning.


For vehicle detection using a video or image sequence, the most obvious approach has been to first compute the stationary background (BG) image and then identify the moving vehicles as those pixels in the image that significantly differ from the BG, which is named BG subtraction [1]. However, traffic shadows cause serious problems when doing subtraction, and slow-moving or stationary traffic is difficult to detect. This led to the emergence of the adaptive BG methods [1], [2]. After BG subtraction, connected regions in the foreground (FG) image, namely, blobs, are associated with different vehicles and tracked over time using different algorithms, such as cross correlation [3], mean shift [4], etc. Moreover, learning-based systems and a hidden Markov model are proposed for on-road vehicle detection and tracking in [5]–[7], respectively.


The ability of vehicle detection and tracking with video enables us to further classify or identify vehicles of interest. For video-based vehicle classification, many techniques concentrate on this task, such as support vector machines (SVMs) [8], principal component analysis (PCA) with neural networks (NNs) [9], a weighted k-nearest neighbor [10], and backpropagation NN [11]. Unlike classification problems that classify different vehicles into different categories, the video-based vehicle identification problem is to maintain the identity of a vehicle as it travels through multiple video camera sites. In [12], Zeng and Crisman proposed a color-based vehicle matching system with the highest reported true positive rate of 16.42%. However, their experimental setup was too ideal to reflect real traffic conditions. The proposed system needs to know the average time for vehicles to travel from site 1 to site 2 to reduce the number of candidate vehicles for matching. It is very likely that one cannot find a corresponding vehicle in the candidate set, since the size of the candidate set for their system is typically eight vehicles.


Moreover, Kogut and Trivedi [13] combined color features and the spatial organization of vehicles within platoons to improve the identification accuracy (IA). A maximum positive match rate of 45% was reported in their work. Nevertheless, their results were based on only 22 samples, which was too small to cover different traffic conditions. In addition, given a platoon of vehicles at site 2, it is very difficult to find the corresponding platoon from site 1, since the platoons of vehicles may significantly change when the two sites are far from each other. In this case, this algorithm will fail, since its performance depends on the spatial organization of the vehicles within their platoons. Another video-based vehicle identification system achieved impressive performance by using multiple individual vehicle features, such as color, external dimensions, points of optical demarcation, etc. [14], [15]. However, this system needs specially designed hardware for top-down camera views, where each camera also needs to be manually calibrated before performing identification. Moreover, all their results were obtained by using highly overlapped vehicle databases, where the overlap rate is about 85%. Thus, the performance of their system on low-overlap data is still unknown. Nevertheless, the results obtained from previous vehicle identification research [12]–[15] are all under some given restraints, which makes it unclear how a video-based identification system would perform without the aforementioned restraints. Recently, Wright et al. have proposed a face recognition algorithm [16] using sparse representation, which offers very competitive performance for face recognition. Moreover, sparse representation is also employed for scene, object, and pattern classification in [17]–[19]. Based on the idea of sparse representation for object classification and identification, we propose a video-based vehicle identification framework in this paper. The constructed system was designed and tested under a realistic setup, in contrast with the aforementioned limitations in previous research. Here are the main contributions and accomplishments of our proposed system.


1) We use video cameras to capture the critical information of vehicles for the purpose of vehicle tracking when they enter the state, and we use additional video cameras to track their routes. Unlike [14] and [15], our system does not need specially designed hardware or calibrated cameras. Moreover, the cameras can be placed at the side of the highway, which makes our system easier to deploy.

2) We treat the problem of vehicle identification from different video sources as a signal reconstruction out of multiple linear regression models and use rising theories from compressive sensing, an emerging signal processing area, to solve this problem.


By employing Bayesian formalism to compute the $\ell_1$-minimization of the sparse weights, the proposed framework provides new ways to deal with three crucial issues in vehicle identification, namely, feature extraction, online vehicle identification database building, and robustness to occlusion and misalignment. For feature extraction, we use simple downsampled features that offer good identification performance as long as the feature space is sparse enough (a minimal sketch of this downsampling step is given after this list). The theory also provides a validation scheme to decide whether a newly entering vehicle has already been included in the database. Moreover, by taking advantage of downsample-based features, one can easily introduce features of newly entering vehicles into the vehicle identification database without using training algorithms, e.g., PCA [9]. Finally, Bayesian formalism provides a measure of confidence for each sparse weight.

3) Different from previous research [12]–[15], where only about 100 vehicles were used for testing and the testing databases were highly overlapped, we conduct extensive experiments on different types of vehicles on interstate highways to verify the efficiency and accuracy of our proposed system. In our experiments, more than 1200 vehicles were used for testing, and the overlap rate of the testing databases is less than 48%. The results show that the proposed framework works well on all kinds of vehicles.
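As a minimal illustration of the downsample-based features mentioned in item 2), the sketch below resizes a cropped vehicle blob to a small fixed resolution and flattens it into a unit-norm column vector. The 16 x 16 target size and the helper name are assumptions for illustration only, not values taken from the paper.

```python
import numpy as np
import cv2

def downsample_feature(blob_img, size=(16, 16)):
    """Resize a cropped vehicle blob and flatten it into a unit-norm column vector.

    Hypothetical helper; the 16x16 size is an illustrative choice, not from the paper.
    """
    gray = cv2.cvtColor(blob_img, cv2.COLOR_BGR2GRAY)         # assume BGR input frame crop
    small = cv2.resize(gray, size, interpolation=cv2.INTER_AREA)
    v = small.astype(np.float64).reshape(-1, 1)                # c x 1 column, c = w*h
    return v / (np.linalg.norm(v) + 1e-12)                     # normalize for stability
```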


II. SYSTEM ARCHITECTURE

Our vehicle identification system is able to detect, track, and identify each vehicle and transmit vehicle information to a service center for further route tracking and other traffic monitoring tasks. The system includes three main components, namely, video cameras, a service center, and clients. Video cameras are used to gather traffic information, including environment conditions, illumination conditions, and vehicle information. In addition, there are several parallel video cameras that are set up along the side of the highway. These video cameras should be reliable, network accessible, of high resolution, and high speed. We propose to use the Axis 223M network cameras. The service center component, which is the most critical part, collects the images from video cameras and employs our vehicle identification algorithm to achieve the identification results. The clients are terminals that query the identification results from the service center and produce reports of desired statistics and routing information.


B. Process Flow

The process flow for the video camera feeds used for vehicle identification is shown in Fig. 1. Each video feed is sent to the service center for further processing, e.g., the ith video camera VC(i) in Fig. 1. At the service center, the images from the video camera are processed by the video processor module, which performs FG/BG detecting, blob detecting, blob tracking, and moving direction and speed detecting to extract features contributing to a unique vehicle ID. Then, these vehicle IDs from different video cameras are saved into a database with corresponding indices. When a vehicle ID, e.g., the vehicle ID from VC(j), is requested by the client, the given vehicle information of VC(j) is compared with the other VC databases VC(1), ..., VC(m) except VC(j), where m denotes the total number of VCs. If a corresponding ID is found in VC(k), k ≠ j, the system reports that this vehicle was captured by the kth VC; otherwise, it reports −1, which means that this vehicle has not been captured by any VC before.
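The lookup described above can be sketched as follows. The identify() routine, the database structure, and the function name are illustrative assumptions rather than details from the paper; only the overall flow and the −1 return for an unseen vehicle follow the description above.

```python
def find_previous_sighting(query_feature, vc_databases, j, identify):
    """Search every VC database except VC(j) for the queried vehicle.

    vc_databases: dict mapping camera index k -> that camera's vehicle database
    identify:     callable (feature, database) -> matched vehicle ID or None
    Returns (k, vehicle_id) for a camera VC(k), k != j, that has already seen
    the vehicle, or -1 if no camera has captured it before.
    """
    for k, database in vc_databases.items():
        if k == j:
            continue                      # skip the querying camera itself
        vehicle_id = identify(query_feature, database)
        if vehicle_id is not None:
            return k, vehicle_id          # vehicle previously captured by VC(k)
    return -1                             # not captured by any VC before
```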


III. VIDEO-BASED VEHICLE IDENTIFICATION

A. Vehicle Detecting and Tracking

Vehicle detecting and tracking is the first stage for any further identification processing. The four main components of our vehicle detecting and tracking scheme are shown as the video processing section in Fig. 1. 1) FG/BG detecting: We adopt the approach in [2], which provides an adaptive BG mixture model for real-time tracking by modeling the values of any pixel as a mixture of Gaussians. This method is robust to lighting changes, tracking through cluttered regions, slow-moving objects, and so on. 2) Blob detecting: Our blob detector is implemented based on [20] to detect any newly entering object in each frame using the output from the FG/BG estimation module. 3) Blob tracking: The blob tracking module provides a way to track blobs from the current frame to the next frame [4]. 4) Moving direction and speed detecting: This is accomplished by using optical flow estimation [21], which calculates the motion between two video frames at times $t$ and $t + \tau$. In our scheme, we use the blobs with the same index in different video frames to calculate the optical flow.
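A minimal sketch of this pipeline is given below, using OpenCV's MOG2 background subtractor, contour-based blob detection, and Farneback optical flow as off-the-shelf stand-ins for the algorithms cited in [2], [20], and [21]; all parameter values are illustrative assumptions, and the frame-to-frame blob association of [4] is omitted for brevity.

```python
import cv2

# 1) FG/BG detecting: adaptive Gaussian-mixture background model (stand-in for [2]).
bg_model = cv2.createBackgroundSubtractorMOG2(history=500, detectShadows=True)

def process_frame(frame, prev_gray):
    """Return candidate blob bounding boxes and a dense motion field for one frame."""
    fg_mask = bg_model.apply(frame)                       # FG/BG segmentation
    fg_mask = cv2.medianBlur(fg_mask, 5)                  # suppress speckle noise
    _, fg_mask = cv2.threshold(fg_mask, 200, 255, cv2.THRESH_BINARY)  # drop shadow pixels

    # 2) Blob detecting: connected FG regions become candidate vehicle blobs.
    contours, _ = cv2.findContours(fg_mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    blobs = [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 400]

    # 4) Moving direction and speed: dense optical flow between consecutive frames;
    #    blob-to-blob tracking (component 3) is not shown in this sketch.
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    return blobs, flow, gray
```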

The aforementioned algorithms offer high sensitivity for blob detecting and tracking; however, the false-positive rate (FPR) could be high due to clutter from the motion of leaves and grass. Moreover, we may only be interested in one direction of traffic flow. To tackle these issues, we utilize three filters to exclude these unwanted blobs.



1) The blob histogram (BH) filter excludes blobs where the number of observations from different video frames for each given blob ID is less than $\tau_{BH}$ times, where $\tau_{BH}$ is a predetermined threshold.

2) The motion distance (MDs) filter excludes blobs whose moving distance is less than a given threshold $\tau_{MDs}$ (in pixels).

3) The motion direction (MDr) filter excludes blobs whose motion direction is not the same as the preassigned direction $\tau_{MDr}$ (right, left, up, down, etc.).
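A minimal sketch of these three filters follows, assuming each tracked blob is summarized by the list of its per-frame centroids; the threshold values and the direction encoding are illustrative assumptions.

```python
import math

def passes_filters(centroids, tau_bh=5, tau_mds=30.0, tau_mdr="right"):
    """Keep a blob track only if it satisfies the BH, MDs, and MDr filters.

    centroids: list of (x, y) blob centers, one per frame in which the blob appeared.
    """
    # 1) Blob histogram (BH): require at least tau_bh observations of the blob ID.
    if len(centroids) < tau_bh:
        return False

    # 2) Motion distance (MDs): total displacement must reach tau_mds pixels.
    (x0, y0), (x1, y1) = centroids[0], centroids[-1]
    dx, dy = x1 - x0, y1 - y0
    if math.hypot(dx, dy) < tau_mds:
        return False

    # 3) Motion direction (MDr): dominant direction must match the preassigned one.
    direction = ("right" if abs(dx) >= abs(dy) and dx > 0 else
                 "left" if abs(dx) >= abs(dy) else
                 "down" if dy > 0 else "up")
    return direction == tau_mdr
```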


B. Vehicle Identification Via Sparse Representation and Bayesian Formalism

A basic problem in vehicle identification is to determine if a newly entering vehicle has already been registered in a database or not and to find a corresponding vehicle ID if such a record exists. The core idea of the proposed vehicle identification algorithm is based on sparse representation, where a similar idea was used in [16] for face recognition.

1) Sparse Representation of a Vehicle: Before generating the sparse representation for a vehicle and finding its corresponding vehicle ID, we first arrange the database into matrices, which are built using labeled training samples from $M$ different vehicles. Here, we assume that $k_i$ denotes the number of training images for the $i$th vehicle ID, where $i = 1, \ldots, M$, and $k = k_1 + k_2 + \cdots + k_M$ denotes the number of images in the database. Then, we reshape each $w \times h$ image into a column vector $\nu \in \mathbb{R}^{c}$, where $c = wh$; the $k_i$ training images from the $i$th vehicle ID constitute the columns of a matrix $\Phi_i = [\nu_{i,1}, \nu_{i,2}, \ldots, \nu_{i,k_i}] \in \mathbb{R}^{c \times k_i}$; and all $k$ images from the database are combined to form a new matrix $\Phi = [\Phi_1, \Phi_2, \ldots, \Phi_M] = [\nu_{1,1}, \nu_{1,2}, \ldots, \nu_{M,k_M}] \in \mathbb{R}^{c \times k}$.

For a newly entering vehicle $y \in \mathbb{R}^{c}$, if sufficient training samples in the database share the same feature as the incoming vehicle (e.g., this happens when the incoming vehicle was previously captured, e.g., with vehicle ID $i$), then the vehicle can be approximately represented as the linear combination of the training samples in $\Phi_i$, i.e.,

$$ y = \Phi_i \theta = \theta_{i,1}\nu_{i,1} + \theta_{i,2}\nu_{i,2} + \cdots + \theta_{i,k_i}\nu_{i,k_i} \tag{1} $$

where $\theta = [\theta_{i,1}, \theta_{i,2}, \ldots, \theta_{i,k_i}]^{T}$ and $\theta_{i,j} \in \mathbb{R}$, $j = 1, 2, \ldots, k_i$. However, we do not know the identity of the incoming vehicle at the beginning. Fortunately, we can instead represent the incoming vehicle $y \in \mathbb{R}^{c}$ using the entire set of images in the database with a relatively small increase in computational complexity. The linear combination of all the training samples is written as

$$ y = \Phi x_s = [\Phi_1, \Phi_2, \ldots, \Phi_M] x_s \tag{2} $$

where, with high probability, $x_s = [0, \ldots, 0, \theta_{i,1}, \theta_{i,2}, \ldots, \theta_{i,k_i}, 0, \ldots, 0]^{T} \in \mathbb{R}^{k}$ is a coefficient vector that only has nonzero entries for those associated with the $i$th vehicle ID.
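As an illustration of how $\Phi$ in (2) can be assembled, the following sketch (with hypothetical variable names, assuming the training images are already cropped to a common $w \times h$ size) reshapes each image into a column vector and stacks the per-vehicle blocks $\Phi_i$ side by side.

```python
import numpy as np

def build_dictionary(training_sets):
    """Stack training images into Phi = [Phi_1, ..., Phi_M] in R^{c x k}.

    training_sets: list of length M; entry i is the list of k_i images
                   (each an (h, w) array) for vehicle ID i.
    Returns Phi and a label vector mapping each column to its vehicle ID.
    """
    columns, labels = [], []
    for vid, images in enumerate(training_sets):
        for img in images:
            v = np.asarray(img, dtype=np.float64).reshape(-1)   # nu in R^c, c = w*h
            columns.append(v / (np.linalg.norm(v) + 1e-12))     # column-normalize
        labels.extend([vid] * len(images))
    Phi = np.column_stack(columns)                              # c x k dictionary
    return Phi, np.array(labels)
```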


To find $x_s$ that can accurately determine the identity of the incoming vehicle, we need to solve the linear equation $y = \Phi x$. In general, measurement data may be noisy; thus, $y$ may not be represented exactly as a sparse combination of the training samples. Thus, (2) is rewritten as

$$ y = \Phi x_s + \Upsilon_z \tag{3} $$

where $\Upsilon_z \in \mathbb{R}^{c}$ is noise with bounded energy $\|\Upsilon_z\|_2 < \varepsilon$. Nevertheless, this is an underdetermined equation, and it does not have a unique solution $x_s$. To find the sparse solution $x_s$ without an NP-hard search, the problem turns into an $\ell_1$-norm minimization, i.e.,

$$ \hat{x} = \arg\min \|x\|_1 \quad \text{subject to} \quad \|\Phi x - y\|_2 \leq \varepsilon. \tag{4} $$
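For illustration, the sketch below approximates (4) through its common unconstrained LASSO relaxation, $\min_x \tfrac{1}{2}\|\Phi x - y\|_2^2 + \lambda\|x\|_1$, solved with plain iterative soft thresholding (ISTA). This is a generic solver sketch, not the Bayesian scheme the paper adopts in the next subsection, and the regularization weight and iteration count are arbitrary assumptions.

```python
import numpy as np

def ista_l1(Phi, y, lam=0.01, n_iter=500):
    """Iterative soft thresholding for min_x 0.5*||Phi x - y||_2^2 + lam*||x||_1,
    a common unconstrained relaxation of the constrained problem (4)."""
    x = np.zeros(Phi.shape[1])
    step = 1.0 / np.linalg.norm(Phi, 2) ** 2      # 1 / Lipschitz constant of the gradient
    for _ in range(n_iter):
        grad = Phi.T @ (Phi @ x - y)              # gradient of the smooth data-fit term
        z = x - step * grad
        x = np.sign(z) * np.maximum(np.abs(z) - lam * step, 0.0)   # soft threshold
    return x
```

Once a sparse $\hat{x}$ is recovered, its entries can be grouped by vehicle ID to decide the identity of the incoming vehicle, in the spirit of the sparse-representation classification of [16].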

 


2) Sparse Solution Via Bayesian Formalism: To find the sparse solution of the $\ell_1$-norm minimization problem, numerous methods have been proposed, such as orthogonal matching pursuit [22], least absolute shrinkage and selection operator (LASSO) [23], interior-point methods [24], sparsity adaptive matching pursuit [25], and the gradient method [26]. However, the aforementioned methods only provide approximate sparse solutions and do not tell how likely the given solutions are to be optimal. Therefore, we use Bayesian formalism instead, which returns both a sparse solution $x$ and probability information indicating the uncertainty of that solution relative to the actual sparse $x$. Our approach is based on [27], which extends Tipping's relevance vector machine (RVM) theory [28].
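As a rough illustration of this Bayesian route, the sketch below uses scikit-learn's ARDRegression (an automatic-relevance-determination model closely related to Tipping's RVM) as a stand-in for the Bayesian compressive sensing of [27], [28]; it returns sparse coefficients together with posterior uncertainty information, but it is not the algorithm used in the paper.

```python
from sklearn.linear_model import ARDRegression

def bayesian_sparse_solution(Phi, y):
    """Stand-in for the RVM-style Bayesian formalism described above.

    ARD places a zero-mean Gaussian prior with an individual precision on each
    coefficient; precisions learned from the data drive most coefficients to
    zero, yielding a sparse solution plus uncertainty information.
    """
    model = ARDRegression(fit_intercept=False)
    model.fit(Phi, y)
    x_hat = model.coef_             # sparse weight estimates
    x_cov = model.sigma_            # posterior covariance of the retained weights
    noise_var = 1.0 / model.alpha_  # estimated noise variance (sigma^2 in (6))
    return x_hat, x_cov, noise_var
```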

First, we assume that $x$ is the sum of two parts $x_b$ and $x_e$ (thus, $x = x_b + x_e$), where $x_b \in \mathbb{R}^{k}$ is the vector composed of nonzero entries only at the $L$ largest coefficients of $x$, and $x_e \in \mathbb{R}^{k}$ is the vector composed of nonzero entries only at the rest of the coefficients. Moreover, since we assume that measurements can be noisy as in (3), the vector corresponding to a vehicle $y$ is rewritten as

$$ y = \Phi x + \Upsilon_z = \Phi x_b + \Phi x_e + \Upsilon_z = \Phi x_b + \Upsilon_e + \Upsilon_z = \Phi x_b + \Upsilon \tag{5} $$

where $\Upsilon_e = \Phi x_e$. Using the central limit theorem [29], we assume that both $\Upsilon_e$ and $\Upsilon_z$ are zero mean and approximately Gaussian distributed; then, $\Upsilon = \Upsilon_e + \Upsilon_z$ can be approximated as Gaussian noise with zero mean and unknown variance $\sigma^2$. Then, the Gaussian likelihood is given by

$$ p(y \mid x_b, \sigma^2) = (2\pi\sigma^2)^{-c/2} \exp\!\left( -\frac{1}{2\sigma^2} \|y - \Phi x_b\|^2 \right). \tag{6} $$

Given $\Phi$ and $y$, the problem now is to estimate the sparse vector $x_b$ and the noise variance $\sigma^2$. By Bayes' rule, we have

$$ p(x_b, \sigma^2 \mid y) = \frac{p(y \mid x_b, \sigma^2)\, p(x_b, \sigma^2)}{p(y)}. \tag{7} $$

Note that $x_b$ is sparse and can be modeled by a Laplace distribution [30]. However, the Laplace prior is not conjugate to the Gaussian likelihood, and thus, the inference problem cannot be written in closed form [30]. Thus, instead of the Laplace prior, we employ a hierarchical sparseness prior [28] that has similar properties to the Laplace prior and thus allows convenient conjugate exponential analysis on $x_b$. Then, based on the priors defined according to [28], the posterior can be decomposed as

$$ p(x_b, \alpha, \sigma^2 \mid y) = p(x_b \mid y, \alpha, \sigma^2)\, p(\alpha, \sigma^2 \mid y) \tag{8} $$
