Chapter 10 Algorithm Design Techniques


10.1 Greedy Algorithms

There are three greedy algorithms in chapter 9: Dijkstra's, Prim's, and Kruskal's algorithms.

In greedy algorithms, in each phase, a decision is made that appears to be good, without regard for future consequences.

When the algorithm terminates, we hope the local optimum is equal to the global optimum. If this is the case, then the algorithm is correct; otherwise, the algorithm has produced a suboptimal solution.

If the absolute best answer is not required, then simple greedy algorithms are sometimes used to generate approximate answers, rather than the more complicated algorithms generally required to generate an exact answer.

Examples:

1. coin-changing problem: the greedy strategy works for the American monetary system (see the sketch after this list)

2. traffic problems: the greedy strategy does not always work
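
As a minimal sketch of the first example, assuming US denominations in cents (the amount 63 is a hypothetical input):

#include <stdio.h>

int main( void )
{
    int Denom[] = { 25, 10, 5, 1 };   /* quarters, dimes, nickels, pennies */
    int Amount = 63;                  /* hypothetical amount in cents */
    int i;

    /* Greedy step: take as many of the largest remaining coin as possible.
       This is optimal for the US coin system, but not for arbitrary ones. */
    for( i = 0; i < 4; ++i )
    {
        printf( "%d x %d cents\n", Amount / Denom[i], Denom[i] );
        Amount %= Denom[i];
    }
    return 0;
}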

10.1.1 A Simple Scheduling Problem

We are given jobs j1, j2, ..., jN, all with known running times t1, t2, ..., tN, respectively. What is the best way to schedule these jobs in order to minimize the average completion time? (We assume nonpreemptive scheduling: once a job starts, it must run to completion.) (p. 349)

Answer: We always process the shortest job first.

This result explains why operating system schedulers generally give precedence to shorter jobs.
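
A minimal sketch of the rule, with hypothetical running times (the job times {15, 8, 3, 10} are made up for illustration):

#include <stdio.h>
#include <stdlib.h>

/* Shortest-job-first: sort the running times ascending, then sum the
   completion times. Any other order can only increase the total. */
static int CompareInt( const void *a, const void *b )
{
    return *(const int *)a - *(const int *)b;
}

int main( void )
{
    int t[] = { 15, 8, 3, 10 };   /* hypothetical running times of j1..j4 */
    int N = 4, i, Finish = 0, Total = 0;

    qsort( t, N, sizeof(int), CompareInt );
    for( i = 0; i < N; ++i )
    {
        Finish += t[i];           /* completion time of the i-th scheduled job */
        Total += Finish;
    }
    printf( "average completion time = %.2f\n", (double)Total / N );
    return 0;
}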

The Multiprocessor Case

We are given jobs j1, j2, ..., jN, all with known running times t1, t2, ..., tN, respectively, and a number P of processors. What is the best way to schedule these jobs in order to minimize the average completion time?

Answer: A simple, often-used heuristic is the LPT (Longest Processing Time) algorithm, which sorts the jobs by processing time, longest first, and then assigns each job to the machine with the earliest end time so far. Note that LPT targets the final completion time (makespan) rather than the average: its makespan is at most (4/3 - 1/(3P)) times optimal. A sketch follows the reference below.

Wiki: http://en.wikipedia.org/wiki/Multiprocessor_scheduling
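
A minimal sketch of LPT, with hypothetical job times and P = 3 processors:

#include <stdio.h>
#include <stdlib.h>

#define P 3   /* number of processors (hypothetical) */

/* Sort jobs longest first. */
static int CompareDesc( const void *a, const void *b )
{
    return *(const int *)b - *(const int *)a;
}

int main( void )
{
    int t[] = { 3, 5, 6, 4, 7, 2 };   /* hypothetical job running times */
    int Load[P] = { 0 };              /* finishing time of each processor */
    int N = 6, i, j, Min;

    qsort( t, N, sizeof(int), CompareDesc );
    for( i = 0; i < N; ++i )
    {
        Min = 0;                      /* find the least-loaded processor */
        for( j = 1; j < P; ++j )
            if( Load[j] < Load[Min] )
                Min = j;
        Load[Min] += t[i];            /* assign the job to it */
    }
    for( j = 0; j < P; ++j )
        printf( "processor %d finishes at %d\n", j, Load[j] );
    return 0;
}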

Minimizing the Final Completion Time (Left)

10.1.2 Huffman Codes (Left)

In file compression, if the size of the character set is C, then ⌈log C⌉ bits are needed in a standard encoding.

To save space, the general strategy is to allow the code length to vary from character to character and to ensure that the frequently occurring characters have short codes.
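
As a rough illustration of the saving, with hypothetical frequencies and code lengths (the lengths {1, 2, 3, 3} form a valid prefix code such as 0, 10, 110, 111):

#include <stdio.h>

int main( void )
{
    /* C = 4 characters, so a standard encoding needs ceil(log 4) = 2 bits each. */
    int Freq[]    = { 45, 30, 15, 10 };   /* hypothetical character frequencies */
    int CodeLen[] = {  1,  2,  3,  3 };   /* frequent characters get short codes */
    int i, Fixed = 0, Variable = 0;

    for( i = 0; i < 4; ++i )
    {
        Fixed    += Freq[i] * 2;          /* fixed-length cost */
        Variable += Freq[i] * CodeLen[i]; /* variable-length cost */
    }
    printf( "fixed: %d bits, variable: %d bits\n", Fixed, Variable );
    return 0;
}

Here the variable-length code uses 180 bits against 200 for the fixed-length encoding.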

10.1.3 Approximate Bin Packing (Left)

10.2 Divide and Conquer

Divide and conquer algorithms consist of two parts:

Divide: Smaller problems are solved recursively (except base cases).

Conquer: The solution to the original problem is then formed from the solutions to the subproblems.

Traditionally, routines in which the text contains at least two recursive calls are called divide and conquer algorithms, while routines whose text contains only one recursive call are not.

We generally insist that the subproblems be disjoint (that is, essentially nonoverlapping).

10.2.1 Running Time of Divide and Conquer Algorithms

For example, merge sort operates on two subproblems, each of which is half the size of the original, and then uses O(N) additional work to merge the two sorted halves.

T(N) = 2T(N/2) + O(N)

The solution to this recurrence is O(N log N).
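
One way to see this, assuming N is a power of 2 and treating the O(N) term as exactly N (with T(1) = 1): divide both sides by N to get

T(N)/N = T(N/2)/(N/2) + 1

Telescoping this equation through N/2, N/4, ..., 2 and adding the resulting log N equations gives

T(N)/N = T(1)/1 + log N

so T(N) = N log N + N = O(N log N).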

10.2.2 Closest-Points Problem

10.2.3 The Selection Problem

10.3 Dynamic Programming

Any recursive mathematical formula can be translated directly into a recursive algorithm, but the underlying reality is that often the compiler will not do justice to the recursive algorithm, and an inefficient program results.

We can rewrite the recursive algorithm as a nonrecursive algorithm that systematically records the answers to the subproblems in a table. One technique that makes use of this approach is known as dynamic programming.

10.3.1 Using a Table Instead of Recursion

How do we evaluate the recurrence C(N) = (2/N) ∑_{i=0}^{N-1} C(i) + N, with C(0) = 1?

Recursive program:

double Eval( int N )
{
    int i;
    double Sum;

    if( N == 0 )
        return 1.0;
    else
    {
        Sum = 0.0;
        for( i = 0; i < N; ++i )     /* recomputes every subproblem: exponential time */
            Sum += Eval( i );
        return 2.0 * Sum / N + N;
    }
}

Evaluate with a table:

#include <stdlib.h>    /* for malloc and free */

double Eval( int N )
{
    int i, j;
    double Sum, Answer;
    double *C;

    C = malloc( sizeof(double) * (N + 1) );  /* should check for NULL */
    C[0] = 1.0;
    for( i = 1; i <= N; ++i )
    {
        Sum = 0.0;
        for( j = 0; j < i; ++j )   /* C[j] is already available when computing C[i] */
            Sum += C[j];
        C[i] = 2.0 * Sum / i + i;
    }
    Answer = C[N];
    free( C );
    return Answer;
}
Time complexity: O(N^2), because of the inner summation loop. Can it be improved to O(N)? Yes; see the sketch below.
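
A sketch of the O(N) improvement: keep a running total of C(0) + ... + C(i-1) instead of re-adding the whole prefix for every i. This also removes the need for the table:

double Eval( int N )
{
    int i;
    double Sum = 1.0;   /* running total C(0) + ... + C(i-1); starts at C(0) = 1 */
    double Ci  = 1.0;   /* most recently computed C(i); C(0) = 1 */

    for( i = 1; i <= N; ++i )
    {
        Ci = 2.0 * Sum / i + i;
        Sum += Ci;      /* extend the running total in O(1) */
    }
    return Ci;
}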

10.3.2 Ordering Matrix Multiplications

10.3.3 Optimal Binary Search Tree

10.3.4 All-pairs Shortest Path

10.4 Randomized Algorithms

10.5 Backtracking Algorithms

A backtracking algorithm amounts to a clever implementation of exhaustive search, so its worst-case performance is generally unfavorable. Pruning impossible branches normally does not improve the worst-case time complexity, but it can eliminate a large amount of work in practice.
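
As a minimal sketch of the idea (the N-queens puzzle is a standard illustration, not taken from this chapter's text): the search tries every column for each row, but prunes any branch whose partial placement already contains a conflict.

#include <stdio.h>

#define N 8

static int Col[N];   /* Col[r] = column of the queen placed in row r */

/* Can a queen go at row r, column c, given rows 0..r-1 are placed? */
static int IsSafe( int r, int c )
{
    int i;
    for( i = 0; i < r; ++i )
        if( Col[i] == c || Col[i] - i == c - r || Col[i] + i == c + r )
            return 0;   /* same column or same diagonal: prune */
    return 1;
}

/* Count the solutions reachable from a partial placement of rows 0..r-1. */
static int Solve( int r )
{
    int c, Count = 0;
    if( r == N )
        return 1;                 /* all rows placed: one complete solution */
    for( c = 0; c < N; ++c )
        if( IsSafe( r, c ) )      /* only extend consistent placements */
        {
            Col[r] = c;
            Count += Solve( r + 1 );
        }
    return Count;
}

int main( void )
{
    printf( "%d solutions for %d queens\n", Solve( 0 ), N );
    return 0;
}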
