INEQUALITY PROOF


Ethan

In Loving Memory of Mamba Day, 4.13, 2016

I. Common Inequalities

(a). Lagrangian Formula for Summation Estimation

Suppose f(x) is a continuous function on x >= 1 and F(x) is its primitive function. If f(x) is monotonically decreasing, then

\[ \sum_{k=2}^{n} f(k) \;\le\; F(n) - F(1) \;\le\; \sum_{k=1}^{n-1} f(k). \]

The inequality is reversed if f(x) is monotonically increasing.

Proof:

Suppose f(x) is monotonically increasing and F'(x) = f(x). By applying the Lagrange mean value theorem to F(x) on [k, k+1], we have

\[ F(k+1) - F(k) = f(\xi_k) \quad \text{for some } \xi_k \in (k, k+1). \]

Note that, since f(x) is increasing,

\[ f(k) \;\le\; f(\xi_k) \;\le\; f(k+1). \]

By summing up these inequalities above over k = 1, ..., n-1, we obtain

\[ \sum_{k=1}^{n-1} f(k) \;\le\; \sum_{k=1}^{n-1} \bigl[ F(k+1) - F(k) \bigr] \;\le\; \sum_{k=2}^{n} f(k). \]

Therefore,

\[ \sum_{k=1}^{n-1} f(k) \;\le\; F(n) - F(1) \;\le\; \sum_{k=2}^{n} f(k). \]
Similarly, we can prove the other case in which f(x) is monotonically decreasing.
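As a quick numerical sanity check (a minimal Python sketch, not part of the original argument), the decreasing case can be verified with f(x) = 1/x, whose primitive is F(x) = ln(x):

```python
import math

# Check sum_{k=2}^{n} f(k) <= F(n) - F(1) <= sum_{k=1}^{n-1} f(k)
# for the monotonically decreasing f(x) = 1/x, with primitive F(x) = ln(x).
def check(n, f=lambda x: 1.0 / x, F=math.log):
    lower = sum(f(k) for k in range(2, n + 1))    # sum_{k=2}^{n} f(k)
    middle = F(n) - F(1)                          # F(n) - F(1)
    upper = sum(f(k) for k in range(1, n))        # sum_{k=1}^{n-1} f(k)
    return lower <= middle <= upper

print(all(check(n) for n in range(2, 200)))       # expected: True
```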


(b). The r-order Inequality

The r-order mean M_r(a, q) is defined as

\[ M_r(a, q) = \Bigl( \sum_{i=1}^{n} q_i a_i^r \Bigr)^{1/r} \quad (r \ne 0), \qquad M_0(a, q) = \lim_{r \to 0} M_r(a, q) = \prod_{i=1}^{n} a_i^{q_i}, \]

where a = (a_1, ..., a_n) > 0, q = (q_1, ..., q_n) >= 0 and q_1 + ... + q_n = 1.

 

Theorem: M_r(a, q) is a monotonically increasing function on R with respect to r.

Proof:

For r != 0, take the logarithm and differentiate with respect to r:

\[ \ln M_r(a, q) = \frac{1}{r} \ln \Bigl( \sum_{i=1}^{n} q_i a_i^r \Bigr), \qquad \frac{d}{dr} \ln M_r(a, q) = \frac{1}{r^2} \left[ \frac{\sum_i q_i a_i^r \ln a_i^r}{\sum_i q_i a_i^r} - \ln \Bigl( \sum_i q_i a_i^r \Bigr) \right]. \]

Let b_i = a_i^r, i = 1, ..., n. Then

\[ \frac{d}{dr} \ln M_r(a, q) = \frac{1}{r^2 \, E[B]} \Bigl( E[B \ln B] - E[B] \ln E[B] \Bigr), \]

where B is a random variable with a generalized Bernoulli distribution,

\[ \Pr(B = b_i) = q_i, \quad i = 1, \ldots, n. \]

Note that f(x) = x ln x is a convex function. Thus, the expectation of the function value is no less than the function value of the expectation (see Appendix I), i.e.,

\[ E[B \ln B] \;\ge\; E[B] \ln E[B], \]

so d(ln M_r)/dr >= 0 for r != 0, and continuity at r = 0 extends the monotonicity to all of R. This completes the proof.#
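The monotonicity can also be checked numerically. Below is a minimal Python sketch (the values and weights are arbitrary examples, assuming NumPy is available) that evaluates M_r on a grid of r values and confirms the sequence is nondecreasing:

```python
import numpy as np

def power_mean(a, q, r):
    """Weighted r-order mean M_r(a, q); r = 0 falls back to the weighted geometric mean."""
    a = np.asarray(a, dtype=float)
    q = np.asarray(q, dtype=float)
    if r == 0:
        return float(np.prod(a ** q))
    return float(np.sum(q * a ** r) ** (1.0 / r))

a = [1.0, 2.0, 5.0, 9.0]    # arbitrary positive values
q = [0.1, 0.2, 0.3, 0.4]    # weights summing to 1
rs = [i / 10.0 for i in range(-50, 51)]
ms = [power_mean(a, q, r) for r in rs]
print(all(x <= y + 1e-12 for x, y in zip(ms, ms[1:])))   # expected: True
```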


Corollary I:

When q_i = 1/n, we have:

Arithmetic mean (r = 1),

\[ A(a) = M_1(a) = \frac{1}{n} \sum_{i=1}^{n} a_i. \]

Geometric mean (r = 0),

\[ G(a) = M_0(a) = \Bigl( \prod_{i=1}^{n} a_i \Bigr)^{1/n}. \]

Harmonic mean (r = -1),

\[ H(a) = M_{-1}(a) = \frac{n}{\sum_{i=1}^{n} 1 / a_i}. \]

By the theorem, H(a) <= G(a) <= A(a).
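A quick numerical check of H <= G <= A on sample data (the values are illustrative, a minimal sketch):

```python
import numpy as np

a = np.array([2.0, 3.0, 7.0, 11.0])       # arbitrary positive sample
n = len(a)
A = float(a.mean())                        # arithmetic mean, M_1
G = float(np.prod(a) ** (1.0 / n))         # geometric mean, M_0
H = n / float(np.sum(1.0 / a))             # harmonic mean, M_{-1}
print(H <= G <= A)                         # expected: True
```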

 

Corollary II (Holder Inequality)

Suppose we have m vectors, each with n components,

\[ a_i = (a_{i1}, a_{i2}, \ldots, a_{in}) > 0, \quad i = 1, \ldots, m, \]

and w_1 + w_2 + ... + w_m = 1, w_i > 0. Then

\[ \sum_{j=1}^{n} \prod_{i=1}^{m} a_{ij}^{w_i} \;\le\; \prod_{i=1}^{m} \Bigl( \sum_{j=1}^{n} a_{ij} \Bigr)^{w_i}. \]
A relaxation on its left-hand side and some changes of variables give more insight into the Holder inequality. Restated for positive entries r_{ij} > 0,

\[ \sum_{j=1}^{n} \prod_{i=1}^{m} r_{ij}^{w_i} \;\le\; \prod_{i=1}^{m} \Bigl( \sum_{j=1}^{n} r_{ij} \Bigr)^{w_i}. \]

Letting r_{ij}^{w_i} = b_{ij} and p_i = 1/w_i,

\[ \sum_{j=1}^{n} \prod_{i=1}^{m} b_{ij} \;\le\; \prod_{i=1}^{m} \Bigl( \sum_{j=1}^{n} b_{ij}^{p_i} \Bigr)^{1/p_i}. \]
By defining a generalized "inner product" among several vectors,

\[ \langle b_1, b_2, \ldots, b_m \rangle = \sum_{j=1}^{n} b_{1j} b_{2j} \cdots b_{mj}, \]

and noting that |<b_1, ..., b_m>| <= \sum_j \prod_i |b_{ij}| (the relaxation of the left-hand side), we obtain a relaxed Holder inequality,

\[ \bigl| \langle b_1, b_2, \ldots, b_m \rangle \bigr| \;\le\; \prod_{i=1}^{m} \Bigl( \sum_{j=1}^{n} |b_{ij}|^{p_i} \Bigr)^{1/p_i}, \]

where the p_i's are called conjugate coefficients satisfying

\[ \sum_{i=1}^{m} \frac{1}{p_i} = 1, \qquad p_i > 1. \]

Proof:

By the weighted AM-GM inequality (M_0 <= M_1 with weights w, from the theorem above), for each component index j,

\[ \prod_{i=1}^{m} \Bigl( \frac{a_{ij}}{\sum_{k=1}^{n} a_{ik}} \Bigr)^{w_i} \;\le\; \sum_{i=1}^{m} w_i \, \frac{a_{ij}}{\sum_{k=1}^{n} a_{ik}}. \]

Summing over j = 1, ..., n, the right-hand side sums to \sum_i w_i = 1, which yields

\[ \sum_{j=1}^{n} \prod_{i=1}^{m} a_{ij}^{w_i} \;\le\; \prod_{i=1}^{m} \Bigl( \sum_{j=1}^{n} a_{ij} \Bigr)^{w_i}. \]

This completes the proof.#
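As a numerical sanity check of the relaxed form (a minimal sketch with randomly generated vectors and one arbitrary choice of conjugate coefficients):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 3, 5
p = np.array([2.0, 3.0, 6.0])        # conjugate coefficients: 1/2 + 1/3 + 1/6 = 1
b = rng.standard_normal((m, n))      # arbitrary vectors, possibly with negative entries

lhs = abs(np.sum(np.prod(b, axis=0)))                                    # |<b_1, ..., b_m>|
rhs = np.prod([np.sum(np.abs(b[i]) ** p[i]) ** (1.0 / p[i]) for i in range(m)])
print(lhs <= rhs)                                                        # expected: True
```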


II. Integral Inequalities

(a). The r-order inequality

In this section, we consider various means of a positive-valued continuous function f(x) on [a, b]. The interval [a, b] can be evenly divided into n sub-intervals with a = x_0 < x_1 < ... < x_n = b, where x_k = a + k(b-a)/n, k = 0, 1, ..., n.

Arithmetic mean of f(x) on [a, b],

\[ A(f) = \lim_{n \to \infty} \frac{1}{n} \sum_{k=1}^{n} f(x_k) = \frac{1}{b-a} \int_a^b f(x) \, dx. \]

Geometric mean of f(x) on [a, b],

\[ G(f) = \lim_{n \to \infty} \Bigl( \prod_{k=1}^{n} f(x_k) \Bigr)^{1/n} = \exp \Bigl( \frac{1}{b-a} \int_a^b \ln f(x) \, dx \Bigr). \]

Harmonic mean of f(x) on [a, b],

\[ H(f) = \lim_{n \to \infty} \frac{n}{\sum_{k=1}^{n} 1 / f(x_k)} = \frac{b-a}{\int_a^b \frac{dx}{f(x)}}. \]

In general, the r-order mean of f(x) on [a, b] is given by

\[ M_r(f) = \Bigl( \frac{1}{b-a} \int_a^b f(x)^r \, dx \Bigr)^{1/r} \quad (r \ne 0), \qquad M_0(f) = G(f). \]

Note that A(f) = M_1(f), G(f) = M_0(f) and H(f) = M_{-1}(f). One can also show that M_r(f) is a monotonically increasing function with respect to r. In particular, H(f) <= G(f) <= A(f).
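A minimal numerical sketch of this monotonicity, using the arbitrary test function f(x) = 1 + x^2 on [0, 1] and a fine uniform grid to approximate the integrals:

```python
import numpy as np

# r-order mean of a positive function on [a, b], approximated by sampling on a uniform grid:
# (1/(b-a)) * integral of f(x)^r dx is approximated by the sample mean of f(x_k)^r.
def integral_power_mean(f, a, b, r, num=100001):
    y = f(np.linspace(a, b, num))
    if r == 0:
        return float(np.exp(np.mean(np.log(y))))
    return float(np.mean(y ** r) ** (1.0 / r))

f = lambda x: 1.0 + x ** 2                       # arbitrary positive test function on [0, 1]
rs = [i / 10.0 for i in range(-40, 41)]
ms = [integral_power_mean(f, 0.0, 1.0, r) for r in rs]
print(all(x <= y + 1e-9 for x, y in zip(ms, ms[1:])))   # expected: True
```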


(b) Common Methods to Prove Integral Inequalities

(b1) Holder Inequality

Given m positive-valued continuous functions f_1(x), f_2(x), ..., f_m(x) on [a, b] and positive scalars w_1, ..., w_m satisfying w_1 + ... + w_m = 1, we have

\[ \int_a^b \prod_{i=1}^{m} f_i(x)^{w_i} \, dx \;\le\; \prod_{i=1}^{m} \Bigl( \int_a^b f_i(x) \, dx \Bigr)^{w_i}. \]

Similarly, we have a relaxed Holder inequality

\[ \bigl| \langle f_1, f_2, \ldots, f_m \rangle \bigr| \;\le\; \prod_{i=1}^{m} \Bigl( \int_a^b |f_i(x)|^{p_i} \, dx \Bigr)^{1/p_i}, \]

where the generalized inner product among functions is given by

\[ \langle f_1, f_2, \ldots, f_m \rangle = \int_a^b f_1(x) f_2(x) \cdots f_m(x) \, dx, \]

and the p_i's are called conjugate coefficients satisfying

\[ \sum_{i=1}^{m} \frac{1}{p_i} = 1, \qquad p_i > 1. \]

In particular, for the "two functions and w_1 = w_2 = 1/2" case,

\[ \int_a^b f_1(x) f_2(x) \, dx \;\le\; \Bigl( \int_a^b f_1(x)^2 \, dx \Bigr)^{1/2} \Bigl( \int_a^b f_2(x)^2 \, dx \Bigr)^{1/2}, \]

or equivalently,

\[ \Bigl( \int_a^b f_1(x) f_2(x) \, dx \Bigr)^2 \;\le\; \int_a^b f_1(x)^2 \, dx \cdot \int_a^b f_2(x)^2 \, dx. \]
We obtain the Cauchy-Schwarz Inequality (See Appendix II).
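A quick numerical illustration of the two-function case (a sketch with arbitrary functions on [0, 1] and a simple Riemann approximation of the integrals):

```python
import numpy as np

a, b = 0.0, 1.0
x = np.linspace(a, b, 100001)
f1 = np.exp(x)                          # arbitrary continuous functions on [a, b]
f2 = 1.0 + np.sin(3.0 * x)

def integral(y):
    return float(np.mean(y) * (b - a))  # Riemann approximation on the uniform grid

lhs = integral(f1 * f2) ** 2
rhs = integral(f1 ** 2) * integral(f2 ** 2)
print(lhs <= rhs)                       # Cauchy-Schwarz; expected: True
```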

(b2) Function Convexity

Definition (Convex function): f(x) is a convex function if and only if

\[ f \Bigl( \sum_{i=1}^{n} \alpha_i x_i \Bigr) \;\le\; \sum_{i=1}^{n} \alpha_i f(x_i) \]

for any x_1, ..., x_n in its domain and any

\[ \alpha_i \ge 0, \quad \sum_{i=1}^{n} \alpha_i = 1. \]

In particular, when alpha_i = 1/n, we obtain the Jensen Inequality.

One can show that:

If f(x) is differentiable, f(x) is convex <=> f'(x) is monotonically nondecreasing;

If f(x) is twice differentiable, f(x) is convex <=> f''(x) >= 0.

Corollary: Suppose f(x) is an integrable function on [a, b] with m <= f(x) <= M, and g(x) is a continuous convex function on [m, M]. Then

\[ g \Bigl( \frac{1}{b-a} \int_a^b f(x) \, dx \Bigr) \;\le\; \frac{1}{b-a} \int_a^b g(f(x)) \, dx. \]

Hint: This can be proved by taking a limit in the Jensen Inequality.
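A minimal numerical sketch of the corollary, with the arbitrary choices f(x) = sin(x) on [0, 2] and the convex function g(y) = y^2:

```python
import numpy as np

a, b = 0.0, 2.0
x = np.linspace(a, b, 100001)
f = np.sin(x)                      # integrable, with -1 <= f(x) <= 1
g = lambda y: y ** 2               # continuous convex function on [-1, 1]

mean_f = float(np.mean(f))                 # approximates (1/(b-a)) * integral of f
mean_g_of_f = float(np.mean(g(f)))         # approximates (1/(b-a)) * integral of g(f)
print(g(mean_f) <= mean_g_of_f)            # expected: True
```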

 

APPENDIX

I. A Property of Convex Function

Prove: for a convex, twice-differentiable, real-valued function f(x) defined on [a, b], the expectation of f(X) is equal to or larger than the function value of the expectation, i.e., E[f(X)] >= f(E[X]) for any random variable X taking values in [a, b].

Actually, this property follows directly from the definition of a convex function by regarding the alpha's as probabilities or a probability density. Here is another proof for the case in which the convex function is twice differentiable.

Proof:

Since the second derivative of f is nonnegative, its first derivative must be nondecreasing. Using the fundamental theorem of calculus, we obtain, for any c in [a, b],

\[ f(x) = f(c) + \int_c^x f'(t) \, dt \;\ge\; f(c) + f'(c)(x - c). \]

Now suppose a random variable X takes values in [a, b]. Choosing c = E[X] and taking expectations on both sides, we have

\[ E[f(X)] \;\ge\; f(E[X]) + f'(E[X]) \, E[X - E[X]] = f(E[X]). \]
This completes the proof.#
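A short numerical illustration (the convex function and the distribution are arbitrary choices for the sketch):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(-1.0, 2.0, size=1_000_000)   # random variable supported on [a, b] = [-1, 2]
f = lambda t: np.exp(t)                      # convex, twice differentiable

print(float(np.mean(f(X))) >= float(f(np.mean(X))))   # E[f(X)] >= f(E[X]); expected: True
```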

 

II. Cauchy-Schwarz Inequality

(1) Generic Cauchy-Schwarz Inequality

Recall that, in functional analysis,

In an inner-product space (H, K, <., .>), for any x and y in H, the generic Cauchy-Schwarz inequality holds as

\[ |\langle x, y \rangle|^2 \;\le\; \langle x, x \rangle \, \langle y, y \rangle, \]

with equality iff x and y are linearly dependent.

(2) Variants of Cauchy-Schwarz Inequality

There are myriad variants of the Cauchy-Schwarz inequality.

(a) The inner-product space of m-by-n matrices

Suppose H = {m-by-n complex matrices}, K = C, and the inner product is defined as <A, B> = trace(A*B), where A* denotes the conjugate transpose (one can check the validity of this inner product in (H, K, <., .>) by examining the definition of the general inner product). Then the Cauchy-Schwarz inequality holds as

\[ |\mathrm{trace}(A^* B)|^2 \;\le\; \mathrm{trace}(A^* A) \, \mathrm{trace}(B^* B). \]
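A numerical sanity check of the matrix case (a sketch with random complex matrices of arbitrary size):

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 4, 3
A = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))
B = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))

inner = lambda X, Y: np.trace(X.conj().T @ Y)      # <X, Y> = trace(X* Y)
lhs = abs(inner(A, B)) ** 2
rhs = inner(A, A).real * inner(B, B).real          # <A, A> and <B, B> are real and nonnegative
print(lhs <= rhs)                                  # expected: True
```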


(b) The inner-product space of random variables

Suppose H = {random variables with finite variance}, K = R, and the inner product is defined as <X, Y> = Cov(X, Y) (one can check the validity of this inner product in (H, K, <., .>) by examining the definition of the general inner product, after identifying random variables that differ by a constant). Then the Cauchy-Schwarz inequality holds as

\[ \mathrm{Cov}(X, Y)^2 \;\le\; \mathrm{Var}(X) \, \mathrm{Var}(Y). \]
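A quick empirical check (a sketch; the sample covariance matrix of any data set satisfies the same inequality exactly):

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.standard_normal(1_000_000)
Y = 0.5 * X + rng.standard_normal(1_000_000)     # Y is correlated with X

C = np.cov(X, Y)                                  # 2x2 sample covariance matrix
print(C[0, 1] ** 2 <= C[0, 0] * C[1, 1])          # Cov(X, Y)^2 <= Var(X) Var(Y); expected: True
```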


(c) The inner-product space of random matrices

Suppose A and B are p-by-n and q-by-n random matrices such that E||A||^2 < infinity, E||B||^2 < infinity, and E(AA^T) is nonsingular. Then, (1)

\[ E(B A^T) \, E^{-1}(A A^T) \, E(A B^T) \;\preceq\; E(B B^T), \]

where \preceq denotes the positive semidefinite ordering. In particular, when n = 1, A := X - EX and B := Y - EY for random vectors X and Y, we have (2)

\[ \mathrm{Cov}(Y, X) \, \mathrm{Cov}(X)^{-1} \, \mathrm{Cov}(X, Y) \;\preceq\; \mathrm{Cov}(Y). \]
Proof:

Let \Lambda = E(B A^T) \, E^{-1}(A A^T). Stylistically, \Lambda is a linear operator that sequentially performs a whitening transform (removing the linear dependence among A's components) and a projection onto the space of B. Then,

\[ 0 \;\preceq\; E\bigl[(B - \Lambda A)(B - \Lambda A)^T\bigr] = E(B B^T) - \Lambda E(A B^T) - E(B A^T)\Lambda^T + \Lambda E(A A^T)\Lambda^T = E(B B^T) - E(B A^T) \, E^{-1}(A A^T) \, E(A B^T). \]

This completes the proof.#


Pictorially, Figure 1 shows the transformation of (2).

Philosophically, Formulas (1) and (2) are both high-dimensional Cauchy-Schwarz inequalities, concerning a transformation between linear spaces and one between random vectors, respectively.
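A numerical sketch of (1): draw correlated random matrices, estimate the expectations by sample averages, and check that the gap matrix is positive semidefinite. The dimensions, the mixing matrix M and the variable names here are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)
p, q, n, samples = 3, 2, 4, 50_000

# Monte Carlo estimates of E(AA^T), E(AB^T), E(BB^T), with B linearly coupled to A.
EAA = np.zeros((p, p))
EAB = np.zeros((p, q))
EBB = np.zeros((q, q))
M = rng.standard_normal((q, p))                  # fixed mixing matrix coupling B to A
for _ in range(samples):
    A = rng.standard_normal((p, n))
    B = M @ A + 0.1 * rng.standard_normal((q, n))
    EAA += A @ A.T / samples
    EAB += A @ B.T / samples
    EBB += B @ B.T / samples
EBA = EAB.T                                      # E(BA^T) = E(AB^T)^T

gap = EBB - EBA @ np.linalg.inv(EAA) @ EAB       # E(BB^T) - E(BA^T) E^{-1}(AA^T) E(AB^T)
print(bool(np.all(np.linalg.eigvalsh(gap) >= -1e-9)))   # PSD up to rounding; expected: True
```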

 

 
