Linear Programming Learning Notes (5) Duality Theory
All the resources come from Linear Programming: Foundations and Extensions by Professor Robert J. Vanderbei.
Explore the link below for further information:
LP Book Resources
Part 1
Basic Theory: The Simplex Method and Duality
Chapter 5 Duality Theory
Associated with every linear program is another linear program called its dual. The dual of this dual program is the original linear program, so linear programs come in primal/dual pairs. It turns out that every feasible solution for one of these two linear programs gives a bound on the optimal objective function value for the other. These ideas form a subject called duality theory.
Motivation: Finding Upper Bounds
Let's begin with a problem in standard form:

maximize c^T x
s.t. Ax <= b, x >= 0

Every feasible solution provides a lower bound on the optimal objective function value z*. To bound z* from above instead, multiply the i-th constraint by a nonnegative multiplier y_i and add the constraints up. If we stipulate that each of the coefficients of the combined constraint dominates the corresponding coefficient of the objective (that is, A^T y >= c componentwise), then for every feasible x,

c^T x <= (A^T y)^T x = y^T (Ax) <= y^T b,

so that y^T b is an upper bound on z*.
Now we have an upper bound, which we should minimize in our effort to obtain the best possible upper bound. Therefore, we are naturally led to the following optimization problem:

minimize b^T y
s.t. A^T y >= c, y >= 0

This problem is called the dual linear programming problem associated with the given one.
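The bounding argument above is easy to check numerically. The sketch below uses a small hypothetical LP (not one from the book): any nonnegative multipliers y whose combined constraint dominates the objective yield an upper bound b . y on every feasible objective value.

```python
# Hypothetical example LP: maximize 5*x1 + 4*x2
# subject to  2*x1 + 3*x2 <= 12,  x1 + x2 <= 5,  x >= 0.
A = [[2, 3], [1, 1]]
b = [12, 5]
c = [5, 4]

def upper_bound(y):
    """Combine the constraints with multipliers y >= 0; if the combined
    coefficients dominate c (i.e. A^T y >= c), then b . y bounds the optimum."""
    assert all(yi >= 0 for yi in y)
    combined = [sum(A[i][j] * y[i] for i in range(len(A))) for j in range(len(c))]
    assert all(combined[j] >= c[j] for j in range(len(c))), "y is not dual feasible"
    return sum(b[i] * y[i] for i in range(len(b)))

print(upper_bound([1, 3]))   # 27: combined coefficients (5, 6) dominate (5, 4)
print(upper_bound([0, 5]))   # 25: a tighter bound (in fact the optimum here)
```

Minimizing this bound over all dual-feasible y is exactly the dual problem stated above.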
The Dual Problem
Given a linear programming problem in standard form,

maximize c^T x
s.t. Ax <= b, x >= 0,

the associated dual linear program is given by

minimize b^T y
s.t. A^T y >= c, y >= 0,

and, rewriting the dual as a maximization with <= constraints, the standard form of the dual is

-maximize -b^T y
s.t. -A^T y <= -c, y >= 0.

Taking the dual of this standard-form problem recovers the original primal.
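The involution "dual of the dual = primal" can be demonstrated mechanically. This is a minimal sketch (function name and example data are my own): the standard-form data of the dual is (-b, -A^T, -c), and applying the same map twice returns the original data.

```python
def dual_of_standard_form(c, A, b):
    """Standard-form primal: max c.x s.t. A x <= b, x >= 0.
    Its dual, min b.y s.t. A^T y >= c, y >= 0, rewritten in standard form
    (negate to get a max, flip >= to <=), has data (-b, -A^T, -c)."""
    At = [[A[i][j] for i in range(len(A))] for j in range(len(A[0]))]
    neg = lambda v: [-x for x in v]
    return neg(b), [neg(row) for row in At], neg(c)

c, A, b = [5, 4], [[2, 3], [1, 1]], [12, 5]      # hypothetical primal data
c2, A2, b2 = dual_of_standard_form(c, A, b)      # data of the dual
c3, A3, b3 = dual_of_standard_form(c2, A2, b2)   # dual of the dual
print((c3, A3, b3) == (c, A, b))                 # True
```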
The Weak Duality Theorem
If x is feasible for the primal and y is feasible for the dual, then c^T x <= b^T y.
The Strong Duality Theorem
If the primal problem has an optimal solution x*, then the dual also has an optimal solution y*, such that c^T x* = b^T y*.
The main idea is that, as the simplex method solves the primal problem, it also implicitly solves the dual problem, and it does so in such a way that the theorem holds.
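Both theorems can be checked on a small instance. The sketch below (hypothetical data, my own function name) computes the duality gap b.y - c.x for a feasible pair: weak duality says it is never negative, and strong duality says it closes at a jointly optimal pair.

```python
def weak_duality_gap(x, y, c, A, b):
    """For primal-feasible x and dual-feasible y, c.x <= b.y (weak duality);
    the gap b.y - c.x is >= 0, and zero exactly at a jointly optimal pair."""
    m, n = len(A), len(A[0])
    assert all(sum(A[i][j] * x[j] for j in range(n)) <= b[i] for i in range(m))
    assert all(xj >= 0 for xj in x) and all(yi >= 0 for yi in y)
    assert all(sum(A[i][j] * y[i] for i in range(m)) >= c[j] for j in range(n))
    return sum(b[i] * y[i] for i in range(m)) - sum(c[j] * x[j] for j in range(n))

# Hypothetical LP: max 5*x1 + 4*x2, 2*x1 + 3*x2 <= 12, x1 + x2 <= 5, x >= 0
c, A, b = [5, 4], [[2, 3], [1, 1]], [12, 5]
print(weak_duality_gap([3, 2], [0, 5], c, A, b))  # 2: feasible pair, positive gap
print(weak_duality_gap([5, 0], [0, 5], c, A, b))  # 0: both optimal, gap closed
```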
We should notice here that the dual dictionary is the negative transpose of the primal dictionary. Each primal dictionary generated by the simplex method implicitly defines a corresponding dual dictionary as follows: first write down the negative transpose, and then replace each primal variable label with its dual counterpart (the primal variable x_j pairs with the dual slack z_j, and the primal slack w_i pairs with the dual variable y_i).
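The negative-transpose rule can be sketched as a one-line matrix operation. Below, a dictionary is stored as a tableau whose row 0 is the objective (constant first) and whose remaining rows give the basic variables; the example numbers are hypothetical.

```python
def dual_dictionary(M):
    """The dual dictionary is the negative transpose of the primal one:
    entry (j, i) of the dual tableau is -(entry (i, j)) of the primal.
    Variable labels swap roles: x_j <-> z_j, w_i <-> y_i."""
    rows, cols = len(M), len(M[0])
    return [[-M[i][j] for i in range(rows)] for j in range(cols)]

# Hypothetical primal tableau (row 0 = objective, column 0 = constants):
P = [[0, 4, 1],
     [1, -1, -4],
     [3, -3, 1]]
D = dual_dictionary(P)
print(D)                           # [[0, -1, -3], [-4, 1, 3], [-1, 4, -1]]
print(dual_dictionary(D) == P)     # True: applying the rule twice returns the primal
```

Applying the rule twice returning the original tableau mirrors the fact that the dual of the dual is the primal.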
Complementary Slackness
Sometimes it is necessary to recover an optimal dual solution when only an optimal primal solution is known. The following theorem, known as the Complementary Slackness Theorem, can help in this regard.
THEOREM (Complementary Slackness) Suppose that x is primal feasible and y is dual feasible, and let w denote the primal slack variables and z the dual slack variables. Then x and y are both optimal if and only if x_j z_j = 0 for j = 1, ..., n and w_i y_i = 0 for i = 1, ..., m.
And below is how we recover the dual solution. Suppose that we have a nondegenerate primal basic optimal solution x* and we wish to find a corresponding optimal solution for the dual. Let w* denote the corresponding slack variables, which can be obtained as w* = b - Ax*. The dual constraints are A^T y - z = c, where z denotes the vector of dual slack variables. These constraints form n equations in m + n unknowns; complementary slackness supplies the missing equations, since z_j = 0 wherever x*_j > 0 and y_i = 0 wherever w*_i > 0, leaving a square system that determines the dual solution uniquely.
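The recovery procedure can be sketched end to end for a toy instance. The helper names and the example LP below are my own; the linear solve uses exact rational arithmetic so the tiny system is solved without rounding.

```python
from fractions import Fraction

def solve(M, rhs):
    """Tiny Gauss-Jordan elimination with partial pivoting over Fractions."""
    n = len(M)
    M = [[Fraction(v) for v in row] + [Fraction(r)] for row, r in zip(M, rhs)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col and M[r][col]:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * b for a, b in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

def recover_dual(c, A, b, x):
    """Complementary slackness: x_j > 0 forces z_j = 0, and a positive primal
    slack w_i forces y_i = 0.  With the n dual constraints A^T y - z = c this
    gives a square linear system in the unknowns (y, z)."""
    m, n = len(A), len(A[0])
    w = [b[i] - sum(A[i][j] * x[j] for j in range(n)) for i in range(m)]
    rows, rhs = [], []
    for j in range(n):                          # dual constraints A^T y - z = c
        row = [A[i][j] for i in range(m)] + [0] * n
        row[m + j] = -1
        rows.append(row); rhs.append(c[j])
    for j in range(n):                          # x_j > 0  =>  z_j = 0
        if x[j] > 0:
            row = [0] * (m + n); row[m + j] = 1
            rows.append(row); rhs.append(0)
    for i in range(m):                          # w_i > 0  =>  y_i = 0
        if w[i] > 0:
            row = [0] * (m + n); row[i] = 1
            rows.append(row); rhs.append(0)
    sol = [int(v) if v.denominator == 1 else v for v in solve(rows, rhs)]
    return sol[:m], sol[m:]                     # (y, z)

# Hypothetical LP: max 5*x1 + 4*x2, 2*x1 + 3*x2 <= 12, x1 + x2 <= 5; optimum x* = (5, 0)
y, z = recover_dual([5, 4], [[2, 3], [1, 1]], [12, 5], [5, 0])
print(y, z)   # [0, 5] [0, 1]; dual objective 12*0 + 5*5 = 25 matches the primal
```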
The Dual Simplex Method
We use this method when the primal dictionary is infeasible while the dual is feasible (that is, all objective-row coefficients are nonpositive). Working directly on the primal dictionary, we choose the leaving and entering variables by the following rules:
1. Select as the leaving variable the basic variable whose constant term is the most negative.
2. Pick the entering variable by scanning the row of this basic variable, looking for the largest (least negative) negated ratio of objective coefficient to row coefficient.
For example, starting from such a dictionary, we first select the basic variable with the most negative constant to leave the basis, pivot, and repeat. When every constant term has become nonnegative, the dictionary is both primal and dual feasible, hence optimal.
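The loop above can be sketched as code. This is a minimal dictionary-based dual simplex (the data layout, function name, and example LP are my own): each dictionary row reads basic[i] = const[i] + sum_j coef[i][j] * nonbasic[j], and exact rational arithmetic keeps the pivots clean.

```python
from fractions import Fraction as F

def dual_simplex(obj, z0, coef, const, basic, nonbasic):
    """Dual simplex on a dictionary.  Requires obj[j] <= 0 (dual feasible).
    Repeat: leaving row r = most negative constant; entering column t = the
    largest ratio obj[j]/coef[r][j] over coef[r][j] > 0; then pivot."""
    while min(const) < 0:
        r = const.index(min(const))
        cand = [j for j in range(len(obj)) if coef[r][j] > 0]
        if not cand:
            raise ValueError("primal infeasible (dual unbounded)")
        t = max(cand, key=lambda j: obj[j] / coef[r][j])
        p = coef[r][t]
        row = [-coef[r][j] / p for j in range(len(obj))]
        row[t] = 1 / p                    # leaving variable takes the old slot
        cr = -const[r] / p
        for i in range(len(const)):       # substitute into the other rows
            if i == r:
                continue
            f = coef[i][t]; coef[i][t] = 0
            const[i] += f * cr
            coef[i] = [a + f * b for a, b in zip(coef[i], row)]
        f = obj[t]; obj[t] = 0            # ... and into the objective row
        z0 += f * cr
        obj[:] = [a + f * b for a, b in zip(obj, row)]
        const[r], coef[r] = cr, row
        basic[r], nonbasic[t] = nonbasic[t], basic[r]
    return z0

# Hypothetical LP: max -2*x1 - x2  s.t.  x1 + x2 >= 2,  x1 + 2*x2 >= 3,  x >= 0.
# As <= rows the slacks give  w1 = -2 + x1 + x2,  w2 = -3 + x1 + 2*x2:
# primal infeasible (negative constants), dual feasible (nonpositive obj row).
z = dual_simplex([F(-2), F(-1)], F(0),
                 [[F(1), F(1)], [F(1), F(2)]], [F(-2), F(-3)],
                 ["w1", "w2"], ["x1", "x2"])
print(z)   # -2, attained at x1 = 0, x2 = 2
```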
A Dual-Based Phase I Algorithm
This method is more elegant than the Phase I procedure described earlier. Suppose neither the primal nor the dual dictionary is feasible. Let us temporarily replace the primal objective function with one whose coefficients are all nonpositive. Then the corresponding initial dual dictionary is feasible, and we can run the dual simplex method to optimality, arriving at a primal-feasible dictionary. Now we simply reinstate the intended objective function, express it in terms of the current nonbasic variables, and continue with Phase II of the ordinary simplex method, starting from this primal-feasible dictionary.
Rules of Forming the Dual
For a maximization primal paired with a minimization dual, the general correspondence rules are:
- a <= constraint gives a dual variable y_i >= 0; an = constraint gives a free y_i; a >= constraint gives y_i <= 0;
- a variable x_j >= 0 gives a dual constraint with sense >=; a free x_j gives an equality; x_j <= 0 gives sense <=.
Lagrangian Duality
Duality can also be derived from the Lagrangian of the linear program: the max-min of the Lagrangian is the primal, and the min-max is the dual.
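For the standard-form primal this can be made explicit. A short derivation sketch:

```latex
L(x, y) = c^{\mathsf T} x + y^{\mathsf T}(b - Ax).
```

For fixed $x \ge 0$, $\min_{y \ge 0} L(x,y)$ equals $c^{\mathsf T} x$ when $Ax \le b$ and $-\infty$ otherwise, so $\max_{x \ge 0} \min_{y \ge 0} L(x,y)$ is the primal. Symmetrically, for fixed $y \ge 0$, $\max_{x \ge 0} L(x,y)$ equals $b^{\mathsf T} y$ when $A^{\mathsf T} y \ge c$ and $+\infty$ otherwise, so $\min_{y \ge 0} \max_{x \ge 0} L(x,y)$ is the dual; strong duality says the two values coincide.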