Introduction to Algorithms, Second Edition (Part One)

 

[Original title] Introduction to Algorithms (Second Edition)
[Original publisher] The MIT Press
[Authors] Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, Clifford Stein (USA)
[Chinese publisher] Higher Education Press (高等教育出版社)
                        http://www.china-pub.com/computers/common/info.asp?id=6434
[Reference materials]
e-book:
http://www.cfcs.com.cn/fjas/ebook.htm
http://219.139.240.53/Soft/Soft_12024.htm
http://online.ysu.edu.cn/personal/yyf/weitao/taocp/clrs.htm

Solutions for this book
(solutions to the exercises in the book: "Introduction to Algorithms" by Cormen, Leiserson and Rivest.)
Course lecture videos
http://18.89.1.101/sma/5503fall2001/index5503fall2001.html 

Download links for the .rm video recordings of the lectures corresponding to this book:
http://acm.ustc.edu.cn/~algorithm/video/Introduction_To_Algorithm/

The MIT OpenCourseWare page for the corresponding MIT course (6.046J / 18.410J, Fall 2001: Introduction to Algorithms), from which the videos come:
http://www.cocw.net/mit/Electrical-Engineering-and-Computer-Science/6-046JIntroduction-to-AlgorithmsFall2001/CourseHome/
From that page you can download complete PDF versions of the lecture notes, exercises, exercise solutions, assignments, assignment solutions, exams, and exam solutions, as well as the syllabus, course calendar, related readings, and other materials.

Errata
http://www.cs.dartmouth.edu/~thc/clrs-2e-bugs/bugs.php

Partial solutions to the exercises (download):
http://ftp.cdaan.com/sy/light/clrs_study.pdf

MIT OpenCourseWare
http://ocw.mit.edu/OcwWeb/index.htm

http://www.itu.dk/people/beetle
Solutions for the second edition:
http://www.it-c.dk/people/beetle/solution.pdf
http://www.it-c.dk/people/beetle/teaching/solution.pdf

[Comments]
The book's common English abbreviation or nickname is not ITA but CLR (first edition) or CLRS (second edition), which is simply the authors' initials put together.

Organized like an encyclopedia, the work of a team at MIT, a classic among classics; one of its authors won last year's Turing Award, and it is the standard textbook for algorithms courses at most universities abroad.

Most of the book's material is covered at the undergraduate level in American universities.

It is a classic, ranking second in CiteSeer's list of most-cited articles.
http://citeseer.nj.nec.com/articles.html

It ranked second in Programmer (《程序员》) magazine's ranking of algorithms books, behind only the celebrated The Art of Computer Programming.

One of the authors, Ronald Rivest, is a designer of RSA and received the Turing Award in 2002.

================================================================================================

Reposted from http://outmyth.blogdriver.com/outmyth/

Reference book for Algorithms Design

 

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Introduction to Algorithms, Second Edition
by Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest and Clifford Stein  ISBN:0262032937
The MIT Press © 2001 (1180 pages)

A course in computer algorithms, suitable for use as a field reference for working software developers.

 

Table of Contents
Introduction to Algorithms, Second Edition
Preface
Part I - Foundations
Chapter 1-The Role of Algorithms in Computing
Chapter 2-Getting Started
Chapter 3-Growth of Functions
Chapter 4-Recurrences
Chapter 5-Probabilistic Analysis and Randomized Algorithms
Part II - Sorting and Order Statistics
Chapter 6-Heapsort
Chapter 7-Quicksort
Chapter 8-Sorting in Linear Time
Chapter 9-Medians and Order Statistics
Part III - Data Structures
Chapter 10-Elementary Data Structures
Chapter 11-Hash Tables
Chapter 12-Binary Search Trees
Chapter 13-Red-Black Trees
Chapter 14-Augmenting Data Structures
Part IV - Advanced Design and Analysis Techniques
Chapter 15-Dynamic Programming
Chapter 16-Greedy Algorithms
Chapter 17-Amortized Analysis
Part V - Advanced Data Structures
Chapter 18-B-Trees
Chapter 19-Binomial Heaps
Chapter 20-Fibonacci Heaps
Chapter 21-Data Structures for Disjoint Sets
Part VI - Graph Algorithms
Chapter 22-Elementary Graph Algorithms
Chapter 23-Minimum Spanning Trees
Chapter 24-Single-Source Shortest Paths
Chapter 25-All-Pairs Shortest Paths
Chapter 26-Maximum Flow
Part VII - Selected Topics
Chapter 27-Sorting Networks
Chapter 28-Matrix Operations
Chapter 29-Linear Programming
Chapter 30-Polynomials and the FFT
Chapter 31-Number-Theoretic Algorithms
Chapter 32-String Matching
Chapter 33-Computational Geometry
Chapter 34-NP-Completeness
Chapter 35-Approximation Algorithms
Part VIII - Appendix: Mathematical Background
Appendix A-Summations
Appendix B-Sets, Etc.
Appendix C-Counting and Probability
Bibliography
Index
List of Figures
List of Corollaries
List of Problems
List of Exercises

 

Back Cover

There are books on algorithms that are rigorous but incomplete and others that cover masses of material but lack rigor. Introduction to Algorithms combines rigor and comprehensiveness.

The book covers a broad range of algorithms in depth, yet makes their design and analysis accessible to all levels of readers. Each chapter is relatively self-contained and can be used as a unit of study. The algorithms are described in English and in a pseudocode designed to be readable by anyone who has done a little programming. The explanations have been kept elementary without sacrificing depth of coverage or mathematical rigor.

The first edition became the standard reference for professionals and a widely used text in universities worldwide. The second edition features new chapters on the role of algorithms, probabilistic analysis and randomized algorithms, and linear programming, as well as extensive revisions to virtually every section of the book. In a subtle but important change, loop invariants are introduced early and used throughout the text to prove algorithm correctness. Without changing the mathematical and analytic focus, the authors have moved much of the mathematical foundations material from Part I to an appendix and have included additional motivational material at the beginning.

About the Authors

Thomas H. Cormen is Associate Professor of Computer Science at Dartmouth College.

Charles E. Leiserson is Professor of Computer Science and Electrical Engineering at the Massachusetts Institute of Technology.

Ronald L. Rivest is Andrew and Erna Viterbi Professor of Computer Science at the Massachusetts Institute of Technology.

Clifford Stein is Associate Professor of Industrial Engineering and Operations Research at Columbia University.

 

Introduction to Algorithms, Second Edition

Thomas H. Cormen
Charles E. Leiserson
Ronald L. Rivest
Clifford Stein
The MIT Press
Cambridge, Massachusetts   London, England
McGraw-Hill Book Company
Boston  Burr Ridge, IL  Dubuque, IA  Madison, WI  New York  San Francisco  St. Louis  Montréal  Toronto

This book is one of a series of texts written by faculty of the Electrical Engineering and Computer Science Department at the Massachusetts Institute of Technology. It was edited and produced by The MIT Press under a joint production-distribution agreement with the McGraw-Hill Book Company.

Ordering Information:

North America

Text orders should be addressed to the McGraw-Hill Book Company. All other orders should be addressed to The MIT Press.

Outside North America

All orders should be addressed to The MIT Press or its local distributor.

First edition 1990

All rights reserved. No part of this book may be reproduced in any form or by any electronic or mechanical means (including photocopying, recording, or information storage and retrieval) without permission in writing from the publisher.

This book was printed and bound in the United States of America.

Library of Congress Cataloging-in-Publication Data

Introduction to algorithms / Thomas H. Cormen ... [et al.]. - 2nd ed.
    p. cm.
Includes bibliographical references and index.

ISBN 0-262-03293-7 (hc.: alk. paper, MIT Press).-ISBN 0-07-013151-1 (McGraw-Hill)

1. Computer programming. 2. Computer algorithms. I. Title: Algorithms. II. Cormen, Thomas H.

QA76.6 I5858 2001

005.1-dc21

2001031277

 

Preface

This book provides a comprehensive introduction to the modern study of computer algorithms. It presents many algorithms and covers them in considerable depth, yet makes their design and analysis accessible to all levels of readers. We have tried to keep explanations elementary without sacrificing depth of coverage or mathematical rigor.

Each chapter presents an algorithm, a design technique, an application area, or a related topic. Algorithms are described in English and in a "pseudocode" designed to be readable by anyone who has done a little programming. The book contains over 230 figures illustrating how the algorithms work. Since we emphasize efficiency as a design criterion, we include careful analyses of the running times of all our algorithms.

The text is intended primarily for use in undergraduate or graduate courses in algorithms or data structures. Because it discusses engineering issues in algorithm design, as well as mathematical aspects, it is equally well suited for self-study by technical professionals.

In this, the second edition, we have updated the entire book. The changes range from the addition of new chapters to the rewriting of individual sentences.

To the teacher

This book is designed to be both versatile and complete. You will find it useful for a variety of courses, from an undergraduate course in data structures up through a graduate course in algorithms. Because we have provided considerably more material than can fit in a typical one-term course, you should think of the book as a "buffet" or "smorgasbord" from which you can pick and choose the material that best supports the course you wish to teach.

You should find it easy to organize your course around just the chapters you need. We have made chapters relatively self-contained, so that you need not worry about an unexpected and unnecessary dependence of one chapter on another. Each chapter presents the easier material first and the more difficult material later, with section boundaries marking natural stopping points. In an undergraduate course, you might use only the earlier sections from a chapter; in a graduate course, you might cover the entire chapter.

We have included over 920 exercises and over 140 problems. Each section ends with exercises, and each chapter ends with problems. The exercises are generally short questions that test basic mastery of the material. Some are simple self-check thought exercises, whereas others are more substantial and are suitable as assigned homework. The problems are more elaborate case studies that often introduce new material; they typically consist of several questions that lead the student through the steps required to arrive at a solution.

We have starred (*) the sections and exercises that are more suitable for graduate students than for undergraduates. A starred section is not necessarily more difficult than an unstarred one, but it may require an understanding of more advanced mathematics. Likewise, starred exercises may require an advanced background or more than average creativity.

 

To the student

We hope that this textbook provides you with an enjoyable introduction to the field of algorithms. We have attempted to make every algorithm accessible and interesting. To help you when you encounter unfamiliar or difficult algorithms, we describe each one in a step-by-step manner. We also provide careful explanations of the mathematics needed to understand the analysis of the algorithms. If you already have some familiarity with a topic, you will find the chapters organized so that you can skim introductory sections and proceed quickly to the more advanced material.

This is a large book, and your class will probably cover only a portion of its material. We have tried, however, to make this a book that will be useful to you now as a course textbook and also later in your career as a mathematical desk reference or an engineering handbook.

What are the prerequisites for reading this book?

  • You should have some programming experience. In particular, you should understand recursive procedures and simple data structures such as arrays and linked lists.

  • You should have some facility with proofs by mathematical induction. A few portions of the book rely on some knowledge of elementary calculus. Beyond that, Parts I and VIII of this book teach you all the mathematical techniques you will need.

To the professional

The wide range of topics in this book makes it an excellent handbook on algorithms. Because each chapter is relatively self-contained, you can focus in on the topics that most interest you.

Most of the algorithms we discuss have great practical utility. We therefore address implementation concerns and other engineering issues. We often provide practical alternatives to the few algorithms that are primarily of theoretical interest.

If you wish to implement any of the algorithms, you will find the translation of our pseudocode into your favorite programming language a fairly straightforward task. The pseudocode is designed to present each algorithm clearly and succinctly. Consequently, we do not address error-handling and other software-engineering issues that require specific assumptions about your programming environment. We attempt to present each algorithm simply and directly without allowing the idiosyncrasies of a particular programming language to obscure its essence.

 

To our colleagues

We have supplied an extensive bibliography and pointers to the current literature. Each chapter ends with a set of "chapter notes" that give historical details and references. The chapter notes do not provide a complete reference to the whole field of algorithms, however. Though it may be hard to believe for a book of this size, many interesting algorithms could not be included due to lack of space.

Despite myriad requests from students for solutions to problems and exercises, we have chosen as a matter of policy not to supply references for problems and exercises, to remove the temptation for students to look up a solution rather than to find it themselves.

 

Changes for the second edition

What has changed between the first and second editions of this book? Depending on how you look at it, either not much or quite a bit.

A quick look at the table of contents shows that most of the first-edition chapters and sections appear in the second edition. We removed two chapters and a handful of sections, but we have added three new chapters and four new sections apart from these new chapters. If you were to judge the scope of the changes by the table of contents, you would likely conclude that the changes were modest.

The changes go far beyond what shows up in the table of contents, however. In no particular order, here is a summary of the most significant changes for the second edition:

  • Cliff Stein was added as a coauthor.

  • Errors have been corrected. How many errors? Let's just say several.

  • There are three new chapters:

    • Chapter 1 discusses the role of algorithms in computing.

    • Chapter 5 covers probabilistic analysis and randomized algorithms. As in the first edition, these topics appear throughout the book.

    • Chapter 29 is devoted to linear programming.

  • Within chapters that were carried over from the first edition, there are new sections on several topics.

  • To allow more algorithms to appear earlier in the book, three of the chapters on mathematical background have been moved from Part I to the Appendix, which is Part VIII.

  • There are over 40 new problems and over 185 new exercises.

  • We have made explicit the use of loop invariants for proving correctness. Our first loop invariant appears in Chapter 2, and we use them a couple of dozen times throughout the book.

  • Many of the probabilistic analyses have been rewritten. In particular, we use in a dozen places the technique of "indicator random variables," which simplify probabilistic analyses, especially when random variables are dependent.

  • We have expanded and updated the chapter notes and bibliography. The bibliography has grown by over 50%, and we have mentioned many new algorithmic results that have appeared subsequent to the printing of the first edition.

We have also made the following changes:

  • The chapter on solving recurrences no longer contains the iteration method. Instead, in Section 4.2, we have "promoted" recursion trees to constitute a method in their own right. We have found that drawing out recursion trees is less error-prone than iterating recurrences. We do point out, however, that recursion trees are best used as a way to generate guesses that are then verified via the substitution method.

  • The partitioning method used for quicksort (Section 7.1) and the expected linear-time order-statistic algorithm (Section 9.2) is different. We now use the method developed by Lomuto, which, along with indicator random variables, allows for a somewhat simpler analysis. The method from the first edition, due to Hoare, appears as a problem in Chapter 7.

  • We have modified the discussion of universal hashing in Section 11.3.3 so that it integrates into the presentation of perfect hashing.

  • There is a much simpler analysis of the height of a randomly built binary search tree in Section 12.4.

  • The discussions on the elements of dynamic programming (Section 15.3) and the elements of greedy algorithms (Section 16.2) are significantly expanded. The exploration of the activity-selection problem, which starts off the greedy-algorithms chapter, helps to clarify the relationship between dynamic programming and greedy algorithms.

  • We have replaced the proof of the running time of the disjoint-set-union data structure in Section 21.4 with a proof that uses the potential method to derive a tight bound.

  • The proof of correctness of the algorithm for strongly connected components in Section 22.5 is simpler, clearer, and more direct.

  • Chapter 24, on single-source shortest paths, has been reorganized to move proofs of the essential properties to their own section. The new organization allows us to focus earlier on algorithms.

  • Section 34.5 contains an expanded overview of NP-completeness as well as new NP-completeness proofs for the hamiltonian-cycle and subset-sum problems.

Finally, virtually every section has been edited to correct, simplify, and clarify explanations and proofs.

 

Web site

Another change from the first edition is that this book now has its own web site: http://mitpress.mit.edu/algorithms/. You can use the web site to report errors, obtain a list of known errors, or make suggestions; we would like to hear from you. We particularly welcome ideas for new exercises and problems, but please include solutions.

We regret that we cannot personally respond to all comments.

 

Acknowledgments for the first edition

Many friends and colleagues have contributed greatly to the quality of this book. We thank all of you for your help and constructive criticisms.

MIT's Laboratory for Computer Science has provided an ideal working environment. Our colleagues in the laboratory's Theory of Computation Group have been particularly supportive and tolerant of our incessant requests for critical appraisal of chapters. We specifically thank Baruch Awerbuch, Shafi Goldwasser, Leo Guibas, Tom Leighton, Albert Meyer, David Shmoys, and Éva Tardos. Thanks to William Ang, Sally Bemus, Ray Hirschfeld, and Mark Reinhold for keeping our machines (DEC Microvaxes, Apple Macintoshes, and Sun Sparcstations) running and for recompiling TeX whenever we exceeded a compile-time limit. Thinking Machines Corporation provided partial support for Charles Leiserson to work on this book during a leave of absence from MIT.

Many colleagues have used drafts of this text in courses at other schools. They have suggested numerous corrections and revisions. We particularly wish to thank Richard Beigel, Andrew Goldberg, Joan Lucas, Mark Overmars, Alan Sherman, and Diane Souvaine.

Many teaching assistants in our courses have made significant contributions to the development of this material. We especially thank Alan Baratz, Bonnie Berger, Aditi Dhagat, Burt Kaliski, Arthur Lent, Andrew Moulton, Marios Papaefthymiou, Cindy Phillips, Mark Reinhold, Phil Rogaway, Flavio Rose, Arie Rudich, Alan Sherman, Cliff Stein, Susmita Sur, Gregory Troxel, and Margaret Tuttle.

Additional valuable technical assistance was provided by many individuals. Denise Sergent spent many hours in the MIT libraries researching bibliographic references. Maria Sensale, the librarian of our reading room, was always cheerful and helpful. Access to Albert Meyer's personal library saved many hours of library time in preparing the chapter notes. Shlomo Kipnis, Bill Niehaus, and David Wilson proofread old exercises, developed new ones, and wrote notes on their solutions. Marios Papaefthymiou and Gregory Troxel contributed to the indexing. Over the years, our secretaries Inna Radzihovsky, Denise Sergent, Gayle Sherman, and especially Be Blackburn provided endless support in this project, for which we thank them.

Many errors in the early drafts were reported by students. We particularly thank Bobby Blumofe, Bonnie Eisenberg, Raymond Johnson, John Keen, Richard Lethin, Mark Lillibridge, John Pezaris, Steve Ponzio, and Margaret Tuttle for their careful readings.

Colleagues have also provided critical reviews of specific chapters, or information on specific algorithms, for which we are grateful. We especially thank Bill Aiello, Alok Aggarwal, Eric Bach, Vašek Chvátal, Richard Cole, Johan Hastad, Alex Ishii, David Johnson, Joe Kilian, Dina Kravets, Bruce Maggs, Jim Orlin, James Park, Thane Plambeck, Hershel Safer, Jeff Shallit, Cliff Stein, Gil Strang, Bob Tarjan, and Paul Wang. Several of our colleagues also graciously supplied us with problems; we particularly thank Andrew Goldberg, Danny Sleator, and Umesh Vazirani.

It has been a pleasure working with The MIT Press and McGraw-Hill in the development of this text. We especially thank Frank Satlow, Terry Ehling, Larry Cohen, and Lorrie Lejeune of The MIT Press and David Shapiro of McGraw-Hill for their encouragement, support, and patience. We are particularly grateful to Larry Cohen for his outstanding copyediting.

 

Acknowledgments for the second edition

When we asked Julie Sussman, P.P.A., to serve as a technical copyeditor for the second edition, we did not know what a good deal we were getting. In addition to copyediting the technical content, Julie enthusiastically edited our prose. It is humbling to think of how many errors Julie found in our earlier drafts, though considering how many errors she found in the first edition (after it was printed, unfortunately), it is not surprising. Moreover, Julie sacrificed her own schedule to accommodate ours-she even brought chapters with her on a trip to the Virgin Islands! Julie, we cannot thank you enough for the amazing job you did.

The work for the second edition was done while the authors were members of the Department of Computer Science at Dartmouth College and the Laboratory for Computer Science at MIT. Both were stimulating environments in which to work, and we thank our colleagues for their support.

Friends and colleagues all over the world have provided suggestions and opinions that guided our writing. Many thanks to Sanjeev Arora, Javed Aslam, Guy Blelloch, Avrim Blum, Scot Drysdale, Hany Farid, Hal Gabow, Andrew Goldberg, David Johnson, Yanlin Liu, Nicolas Schabanel, Alexander Schrijver, Sasha Shen, David Shmoys, Dan Spielman, Gerald Jay Sussman, Bob Tarjan, Mikkel Thorup, and Vijay Vazirani.

Many teachers and colleagues have taught us a great deal about algorithms. We particularly acknowledge our teachers Jon L. Bentley, Bob Floyd, Don Knuth, Harold Kuhn, H. T. Kung, Richard Lipton, Arnold Ross, Larry Snyder, Michael I. Shamos, David Shmoys, Ken Steiglitz, Tom Szymanski, Éva Tardos, Bob Tarjan, and Jeffrey Ullman.

We acknowledge the work of the many teaching assistants for the algorithms courses at MIT and Dartmouth, including Joseph Adler, Craig Barrack, Bobby Blumofe, Roberto De Prisco, Matteo Frigo, Igal Galperin, David Gupta, Raj D. Iyer, Nabil Kahale, Sarfraz Khurshid, Stavros Kolliopoulos, Alain Leblanc, Yuan Ma, Maria Minkoff, Dimitris Mitsouras, Alin Popescu, Harald Prokop, Sudipta Sengupta, Donna Slonim, Joshua A. Tauber, Sivan Toledo, Elisheva Werner-Reiss, Lea Wittie, Qiang Wu, and Michael Zhang.

Computer support was provided by William Ang, Scott Blomquist, and Greg Shomo at MIT and by Wayne Cripps, John Konkle, and Tim Tregubov at Dartmouth. Thanks also to Be Blackburn, Don Dailey, Leigh Deacon, Irene Sebeda, and Cheryl Patton Wu at MIT and to Phyllis Bellmore, Kelly Clark, Delia Mauceli, Sammie Travis, Deb Whiting, and Beth Young at Dartmouth for administrative support. Michael Fromberger, Brian Campbell, Amanda Eubanks, Sung Hoon Kim, and Neha Narula also provided timely support at Dartmouth.

Many people were kind enough to report errors in the first edition. We thank the following people, each of whom was the first to report an error from the first edition: Len Adleman, Selim Akl, Richard Anderson, Juan Andrade-Cetto, Gregory Bachelis, David Barrington, Paul Beame, Richard Beigel, Margrit Betke, Alex Blakemore, Bobby Blumofe, Alexander Brown, Xavier Cazin, Jack Chan, Richard Chang, Chienhua Chen, Ien Cheng, Hoon Choi, Drue Coles, Christian Collberg, George Collins, Eric Conrad, Peter Csaszar, Paul Dietz, Martin Dietzfelbinger, Scot Drysdale, Patricia Ealy, Yaakov Eisenberg, Michael Ernst, Michael Formann, Nedim Fresko, Hal Gabow, Marek Galecki, Igal Galperin, Luisa Gargano, John Gately, Rosario Genario, Mihaly Gereb, Ronald Greenberg, Jerry Grossman, Stephen Guattery, Alexander Hartemik, Anthony Hill, Thomas Hofmeister, Mathew Hostetter, Yih-Chun Hu, Dick Johnsonbaugh, Marcin Jurdzinki, Nabil Kahale, Fumiaki Kamiya, Anand Kanagala, Mark Kantrowitz, Scott Karlin, Dean Kelley, Sanjay Khanna, Haluk Konuk, Dina Kravets, Jon Kroger, Bradley Kuszmaul, Tim Lambert, Hang Lau, Thomas Lengauer, George Madrid, Bruce Maggs, Victor Miller, Joseph Muskat, Tung Nguyen, Michael Orlov, James Park, Seongbin Park, Ioannis Paschalidis, Boaz Patt-Shamir, Leonid Peshkin, Patricio Poblete, Ira Pohl, Stephen Ponzio, Kjell Post, Todd Poynor, Colin Prepscius, Sholom Rosen, Dale Russell, Hershel Safer, Karen Seidel, Joel Seiferas, Erik Seligman, Stanley Selkow, Jeffrey Shallit, Greg Shannon, Micha Sharir, Sasha Shen, Norman Shulman, Andrew Singer, Daniel Sleator, Bob Sloan, Michael Sofka, Volker Strumpen, Lon Sunshine, Julie Sussman, Asterio Tanaka, Clark Thomborson, Nils Thommesen, Homer Tilton, Martin Tompa, Andrei Toom, Felzer Torsten, Hirendu Vaishnav, M. Veldhorst, Luca Venuti, Jian Wang, Michael Wellman, Gerry Wiener, Ronald Williams, David Wolfe, Jeff Wong, Richard Woundy, Neal Young, Huaiyuan Yu, Tian Yuxing, Joe Zachary, Steve Zhang, Florian Zschoke, and Uri Zwick.

Many of our colleagues provided thoughtful reviews or filled out a long survey. We thank reviewers Nancy Amato, Jim Aspnes, Kevin Compton, William Evans, Peter Gacs, Michael Goldwasser, Andrzej Proskurowski, Vijaya Ramachandran, and John Reif. We also thank the following people for sending back the survey: James Abello, Josh Benaloh, Bryan Beresford-Smith, Kenneth Blaha, Hans Bodlaender, Richard Borie, Ted Brown, Domenico Cantone, M. Chen, Robert Cimikowski, William Clocksin, Paul Cull, Rick Decker, Matthew Dickerson, Robert Douglas, Margaret Fleck, Michael Goodrich, Susanne Hambrusch, Dean Hendrix, Richard Johnsonbaugh, Kyriakos Kalorkoti, Srinivas Kankanahalli, Hikyoo Koh, Steven Lindell, Errol Lloyd, Andy Lopez, Dian Rae Lopez, George Lucker, David Maier, Charles Martel, Xiannong Meng, David Mount, Alberto Policriti, Andrzej Proskurowski, Kirk Pruhs, Yves Robert, Guna Seetharaman, Stanley Selkow, Robert Sloan, Charles Steele, Gerard Tel, Murali Varanasi, Bernd Walter, and Alden Wright. We wish we could have carried out all your suggestions. The only problem is that if we had, the second edition would have been about 3000 pages long!

The second edition was produced in LaTeX 2e. Michael Downes converted the macros from "classic" LaTeX to LaTeX 2e, and he converted the text files to use these new macros. David Jones also provided LaTeX support. Figures for the second edition were produced by the authors using MacDraw Pro. As in the first edition, the index was compiled using Windex, a C program written by the authors, and the bibliography was prepared using BibTeX. Ayorkor Mills-Tettey and Rob Leathern helped convert the figures to MacDraw Pro, and Ayorkor also checked our bibliography.

As it was in the first edition, working with The MIT Press and McGraw-Hill has been a delight. Our editors, Bob Prior of The MIT Press and Betsy Jones of McGraw-Hill, put up with our antics and kept us going with carrots and sticks.

Finally, we thank our wives (Nicole Cormen, Gail Rivest, and Rebecca Ivry), our children (Ricky, William, and Debby Leiserson; Alex and Christopher Rivest; and Molly, Noah, and Benjamin Stein), and our parents (Renee and Perry Cormen, Jean and Mark Leiserson, Shirley and Lloyd Rivest, and Irene and Ira Stein) for their love and support during the writing of this book. The patience and encouragement of our families made this project possible. We affectionately dedicate this book to them.

THOMAS H. CORMEN
Hanover, New Hampshire

CHARLES E. LEISERSON
Cambridge, Massachusetts

RONALD L. RIVEST
Cambridge, Massachusetts

CLIFFORD STEIN
Hanover, New Hampshire

May 2001

 

Part I: Foundations

Chapter List

Chapter 1: The Role of Algorithms in Computing
Chapter 2: Getting Started
Chapter 3: Growth of Functions
Chapter 4: Recurrences
Chapter 5: Probabilistic Analysis and Randomized Algorithms

Introduction

This part will get you started in thinking about designing and analyzing algorithms. It is intended to be a gentle introduction to how we specify algorithms, some of the design strategies we will use throughout this book, and many of the fundamental ideas used in algorithm analysis. Later parts of this book will build upon this base.

Chapter 1 is an overview of algorithms and their place in modern computing systems. This chapter defines what an algorithm is and lists some examples. It also makes a case that algorithms are a technology, just as are fast hardware, graphical user interfaces, object-oriented systems, and networks.

In Chapter 2, we see our first algorithms, which solve the problem of sorting a sequence of n numbers. They are written in a pseudocode which, although not directly translatable to any conventional programming language, conveys the structure of the algorithm clearly enough that a competent programmer can implement it in the language of his choice. The sorting algorithms we examine are insertion sort, which uses an incremental approach, and merge sort, which uses a recursive technique known as "divide and conquer." Although the time each requires increases with the value of n, the rate of increase differs between the two algorithms. We determine these running times in Chapter 2, and we develop a useful notation to express them.

Chapter 3 precisely defines this notation, which we call asymptotic notation. It starts by defining several asymptotic notations, which we use for bounding algorithm running times from above and/or below. The rest of Chapter 3 is primarily a presentation of mathematical notation. Its purpose is more to ensure that your use of notation matches that in this book than to teach you new mathematical concepts.

Chapter 4 delves further into the divide-and-conquer method introduced in Chapter 2. In particular, Chapter 4 contains methods for solving recurrences, which are useful for describing the running times of recursive algorithms. One powerful technique is the "master method," which can be used to solve recurrences that arise from divide-and-conquer algorithms. Much of Chapter 4 is devoted to proving the correctness of the master method, though this proof may be skipped without harm.

Chapter 5 introduces probabilistic analysis and randomized algorithms. We typically use probabilistic analysis to determine the running time of an algorithm in cases in which, due to the presence of an inherent probability distribution, the running time may differ on different inputs of the same size. In some cases, we assume that the inputs conform to a known probability distribution, so that we are averaging the running time over all possible inputs. In other cases, the probability distribution comes not from the inputs but from random choices made during the course of the algorithm. An algorithm whose behavior is determined not only by its input but by the values produced by a random-number generator is a randomized algorithm. We can use randomized algorithms to enforce a probability distribution on the inputs-thereby ensuring that no particular input always causes poor performance-or even to bound the error rate of algorithms that are allowed to produce incorrect results on a limited basis.
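
As a small illustration of the second case described above (randomness supplied by the algorithm rather than assumed of the input), the following Python sketch (our addition, not from the book) randomly permutes its input before handing it to a deterministic procedure, so that no particular input ordering can consistently cause worst-case behavior:

    import random

    def permute_then_process(items, deterministic_algorithm):
        """Randomly permute the input, then run a deterministic algorithm on it."""
        shuffled = list(items)
        random.shuffle(shuffled)                 # the algorithm's own random choices
        return deterministic_algorithm(shuffled)

    # Example use; sorted() stands in for any order-sensitive deterministic algorithm.
    print(permute_then_process([31, 41, 59, 26, 41, 58], sorted))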

Appendices A-C contain other mathematical material that you will find helpful as you read this book. You are likely to have seen much of the material in the appendix chapters before having read this book (although the specific notational conventions we use may differ in some cases from what you have seen in the past), and so you should think of the Appendices as reference material. On the other hand, you probably have not already seen most of the material in Part I. All the chapters in Part I and the Appendices are written with a tutorial flavor.

 

Chapter 1: The Role of Algorithms in Computing

What are algorithms? Why is the study of algorithms worthwhile? What is the role of algorithms relative to other technologies used in computers? In this chapter, we will answer these questions.

1.1 Algorithms

Informally, an algorithm is any well-defined computational procedure that takes some value, or set of values, as input and produces some value, or set of values, as output. An algorithm is thus a sequence of computational steps that transform the input into the output.

We can also view an algorithm as a tool for solving a well-specified computational problem. The statement of the problem specifies in general terms the desired input/output relationship. The algorithm describes a specific computational procedure for achieving that input/output relationship.

For example, one might need to sort a sequence of numbers into nondecreasing order. This problem arises frequently in practice and provides fertile ground for introducing many standard design techniques and analysis tools. Here is how we formally define the sorting problem:

  • Input: A sequence of n numbers ⟨a1, a2, ..., an⟩.

  • Output: A permutation (reordering) ⟨a'1, a'2, ..., a'n⟩ of the input sequence such that a'1 ≤ a'2 ≤ ... ≤ a'n.

For example, given the input sequence ⟨31, 41, 59, 26, 41, 58⟩, a sorting algorithm returns as output the sequence ⟨26, 31, 41, 41, 58, 59⟩. Such an input sequence is called an instance of the sorting problem. In general, an instance of a problem consists of the input (satisfying whatever constraints are imposed in the problem statement) needed to compute a solution to the problem.

Sorting is a fundamental operation in computer science (many programs use it as an intermediate step), and as a result a large number of good sorting algorithms have been developed. Which algorithm is best for a given application depends on, among other factors, the number of items to be sorted, the extent to which the items are already somewhat sorted, possible restrictions on the item values, and the kind of storage device to be used: main memory, disks, or tapes.

An algorithm is said to be correct if, for every input instance, it halts with the correct output. We say that a correct algorithm solves the given computational problem. An incorrect algorithm might not halt at all on some input instances, or it might halt with an answer other than the desired one. Contrary to what one might expect, incorrect algorithms can sometimes be useful, if their error rate can be controlled. We shall see an example of this in Chapter 31 when we study algorithms for finding large prime numbers. Ordinarily, however, we shall be concerned only with correct algorithms.

An algorithm can be specified in English, as a computer program, or even as a hardware design. The only requirement is that the specification must provide a precise description of the computational procedure to be followed.

What kinds of problems are solved by algorithms?

Sorting is by no means the only computational problem for which algorithms have been developed. (You probably suspected as much when you saw the size of this book.) Practical applications of algorithms are ubiquitous and include the following examples:

  • The Human Genome Project has the goals of identifying all the 100,000 genes in human DNA, determining the sequences of the 3 billion chemical base pairs that make up human DNA, storing this information in databases, and developing tools for data analysis. Each of these steps requires sophisticated algorithms. While the solutions to the various problems involved are beyond the scope of this book, ideas from many of the chapters in this book are used in the solution of these biological problems, thereby enabling scientists to accomplish tasks while using resources efficiently. The savings are in time, both human and machine, and in money, as more information can be extracted from laboratory techniques.

  • The Internet enables people all around the world to quickly access and retrieve large amounts of information. In order to do so, clever algorithms are employed to manage and manipulate this large volume of data. Examples of problems which must be solved include finding good routes on which the data will travel (techniques for solving such problems appear in Chapter 24), and using a search engine to quickly find pages on which particular information resides (related techniques are in Chapters 11 and 32).

  • Electronic commerce enables goods and services to be negotiated and exchanged electronically. The ability to keep information such as credit card numbers, passwords, and bank statements private is essential if electronic commerce is to be used widely. Public-key cryptography and digital signatures (covered in Chapter 31) are among the core technologies used and are based on numerical algorithms and number theory.

  • In manufacturing and other commercial settings, it is often important to allocate scarce resources in the most beneficial way. An oil company may wish to know where to place its wells in order to maximize its expected profit. A candidate for the presidency of the United States may want to determine where to spend money buying campaign advertising in order to maximize the chances of winning an election. An airline may wish to assign crews to flights in the least expensive way possible, making sure that each flight is covered and that government regulations regarding crew scheduling are met. An Internet service provider may wish to determine where to place additional resources in order to serve its customers more effectively. All of these are examples of problems that can be solved using linear programming, which we shall study in Chapter 29.

While some of the details of these examples are beyond the scope of this book, we do give underlying techniques that apply to these problems and problem areas. We also show how to solve many concrete problems in this book, including the following:

  • We are given a road map on which the distance between each pair of adjacent intersections is marked, and our goal is to determine the shortest route from one intersection to another. The number of possible routes can be huge, even if we disallow routes that cross over themselves. How do we choose which of all possible routes is the shortest? Here, we model the road map (which is itself a model of the actual roads) as a graph (which we will meet in Chapter 10 and Appendix B), and we wish to find the shortest path from one vertex to another in the graph. We shall see how to solve this problem efficiently in Chapter 24.

  • We are given a sequence A1, A2, ..., An of n matrices, and we wish to determine their product A1 A2 ⋯ An. Because matrix multiplication is associative, there are several legal multiplication orders. For example, if n = 4, we could perform the matrix multiplications as if the product were parenthesized in any of the following orders: (A1(A2(A3A4))), (A1((A2A3)A4)), ((A1A2)(A3A4)), ((A1(A2A3))A4), or (((A1A2)A3)A4). If these matrices are all square (and hence the same size), the multiplication order will not affect how long the matrix multiplications take. If, however, these matrices are of differing sizes (yet their sizes are compatible for matrix multiplication), then the multiplication order can make a very big difference. The number of possible multiplication orders is exponential in n, and so trying all possible orders may take a very long time. We shall see in Chapter 15 how to use a general technique known as dynamic programming to solve this problem much more efficiently. (A small cost comparison appears in the sketch following this list.)

  • We are given an equation ax ≡ b (mod n), where a, b, and n are integers, and we wish to find all the integers x, modulo n, that satisfy the equation. There may be zero, one, or more than one such solution. We can simply try x = 0, 1, ..., n - 1 in order, but Chapter 31 shows a more efficient method.

  • We are given n points in the plane, and we wish to find the convex hull of these points. The convex hull is the smallest convex polygon containing the points. Intuitively, we can think of each point as being represented by a nail sticking out from a board. The convex hull would be represented by a tight rubber band that surrounds all the nails. Each nail around which the rubber band makes a turn is a vertex of the convex hull. (See Figure 33.6 on page 948 for an example.) Any of the 2^n subsets of the points might be the vertices of the convex hull. Knowing which points are vertices of the convex hull is not quite enough, either, since we also need to know the order in which they appear. There are many choices, therefore, for the vertices of the convex hull. Chapter 33 gives two good methods for finding the convex hull.
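
To make the matrix-chain example above concrete, the following Python sketch (our addition, not from the book; the dimensions are hypothetical and chosen only for illustration) counts the scalar multiplications performed by two different parenthesizations of A1 A2 A3:

    def multiply_cost(p, q, r):
        """Scalar multiplications needed to multiply a p x q matrix by a q x r matrix."""
        return p * q * r

    # Hypothetical dimensions: A1 is 10 x 100, A2 is 100 x 5, A3 is 5 x 50.
    # ((A1 A2) A3): form the 10 x 5 product A1 A2 first, then multiply it by A3.
    cost_left = multiply_cost(10, 100, 5) + multiply_cost(10, 5, 50)
    # (A1 (A2 A3)): form the 100 x 50 product A2 A3 first, then multiply A1 by it.
    cost_right = multiply_cost(100, 5, 50) + multiply_cost(10, 100, 50)

    print(cost_left, cost_right)   # 7500 versus 75000 scalar multiplications

Even with only three matrices, the better order is ten times cheaper here; the dynamic-programming algorithm of Chapter 15 finds the best order without trying all of the exponentially many parenthesizations.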

These lists are far from exhaustive (as you again have probably surmised from this book's heft), but exhibit two characteristics that are common to many interesting algorithms.

  1. There are many candidate solutions, most of which are not what we want. Finding one that we do want can present quite a challenge.

  2. There are practical applications. Of the problems in the above list, shortest paths provides the easiest examples. A transportation firm, such as a trucking or railroad company, has a financial interest in finding shortest paths through a road or rail network because taking shorter paths results in lower labor and fuel costs. Or a routing node on the Internet may need to find the shortest path through the network in order to route a message quickly.

Data structures

This book also contains several data structures. A data structure is a way to store and organize data in order to facilitate access and modifications. No single data structure works well for all purposes, and so it is important to know the strengths and limitations of several of them.

Technique

Although you can use this book as a "cookbook" for algorithms, you may someday encounter a problem for which you cannot readily find a published algorithm (many of the exercises and problems in this book, for example!). This book will teach you techniques of algorithm design and analysis so that you can develop algorithms on your own, show that they give the correct answer, and understand their efficiency.

Hard problems

Most of this book is about efficient algorithms. Our usual measure of efficiency is speed, i.e., how long an algorithm takes to produce its result. There are some problems, however, for which no efficient solution is known. Chapter 34 studies an interesting subset of these problems, which are known as NP-complete.

Why are NP-complete problems interesting? First, although no efficient algorithm for an NP-complete problem has ever been found, nobody has ever proven that an efficient algorithm for one cannot exist. In other words, it is unknown whether or not efficient algorithms exist for NP-complete problems. Second, the set of NP-complete problems has the remarkable property that if an efficient algorithm exists for any one of them, then efficient algorithms exist for all of them. This relationship among the NP-complete problems makes the lack of efficient solutions all the more tantalizing. Third, several NP-complete problems are similar, but not identical, to problems for which we do know of efficient algorithms. A small change to the problem statement can cause a big change to the efficiency of the best known algorithm.

It is valuable to know about NP-complete problems because some of them arise surprisingly often in real applications. If you are called upon to produce an efficient algorithm for an NP-complete problem, you are likely to spend a lot of time in a fruitless search. If you can show that the problem is NP-complete, you can instead spend your time developing an efficient algorithm that gives a good, but not the best possible, solution.

As a concrete example, consider a trucking company with a central warehouse. Each day, it loads up the truck at the warehouse and sends it around to several locations to make deliveries. At the end of the day, the truck must end up back at the warehouse so that it is ready to be loaded for the next day. To reduce costs, the company wants to select an order of delivery stops that yields the lowest overall distance traveled by the truck. This problem is the well-known "traveling-salesman problem," and it is NP-complete. It has no known efficient algorithm. Under certain assumptions, however, there are efficient algorithms that give an overall distance that is not too far above the smallest possible. Chapter 35 discusses such "approximation algorithms."
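
To see why brute force is hopeless for the traveling-salesman problem, here is a toy Python sketch (our addition, with a made-up distance matrix) that tries every delivery order for a warehouse and four stops; with n stops it examines n! orders:

    from itertools import permutations

    def tour_length(order, dist):
        """Total distance of visiting the stops in the given order, returning to the start."""
        stops = (0,) + order + (0,)                      # 0 is the warehouse
        return sum(dist[a][b] for a, b in zip(stops, stops[1:]))

    # Hypothetical symmetric distance matrix for a warehouse (0) and 4 stops.
    dist = [
        [0, 2, 9, 10, 7],
        [2, 0, 6, 4, 3],
        [9, 6, 0, 8, 5],
        [10, 4, 8, 0, 6],
        [7, 3, 5, 6, 0],
    ]

    best = min(permutations(range(1, 5)), key=lambda order: tour_length(order, dist))
    print(best, tour_length(best, dist))
    # Trying all orders works for 4 stops but collapses beyond a few dozen stops.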

Exercises 1.1-1

Give a real-world example in which one of the following computational problems appears: sorting, determining the best order for multiplying matrices, or finding the convex hull.

Exercises 1.1-2

Other than speed, what other measures of efficiency might one use in a real-world setting?

Exercises 1.1-3

Select a data structure that you have seen previously, and discuss its strengths and limitations.

Exercises 1.1-4

How are the shortest-path and traveling-salesman problems given above similar? How are they different?

Exercises 1.1-5

Come up with a real-world problem in which only the best solution will do. Then come up with one in which a solution that is "approximately" the best is good enough.

1.2 Algorithms as a technology

Suppose computers were infinitely fast and computer memory was free. Would you have any reason to study algorithms? The answer is yes, if for no other reason than that you would still like to demonstrate that your solution method terminates and does so with the correct answer.

If computers were infinitely fast, any correct method for solving a problem would do. You would probably want your implementation to be within the bounds of good software engineering practice (i.e., well designed and documented), but you would most often use whichever method was the easiest to implement.

Of course, computers may be fast, but they are not infinitely fast. And memory may be cheap, but it is not free. Computing time is therefore a bounded resource, and so is space in memory. These resources should be used wisely, and algorithms that are efficient in terms of time or space will help you do so.

Efficiency

Algorithms devised to solve the same problem often differ dramatically in their efficiency. These differences can be much more significant than differences due to hardware and software.

As an example, in Chapter 2, we will see two algorithms for sorting. The first, known as insertion sort, takes time roughly equal to c1n^2 to sort n items, where c1 is a constant that does not depend on n. That is, it takes time roughly proportional to n^2. The second, merge sort, takes time roughly equal to c2n lg n, where lg n stands for log2 n and c2 is another constant that also does not depend on n. Insertion sort usually has a smaller constant factor than merge sort, so that c1 < c2. We shall see that the constant factors can be far less significant in the running time than the dependence on the input size n. Where merge sort has a factor of lg n in its running time, insertion sort has a factor of n, which is much larger. Although insertion sort is usually faster than merge sort for small input sizes, once the input size n becomes large enough, merge sort's advantage of lg n vs. n will more than compensate for the difference in constant factors. No matter how much smaller c1 is than c2, there will always be a crossover point beyond which merge sort is faster.

For a concrete example, let us pit a faster computer (computer A) running insertion sort against a slower computer (computer B) running merge sort. They each must sort an array of one million numbers. Suppose that computer A executes one billion instructions per second and computer B executes only ten million instructions per second, so that computer A is 100 times faster than computer B in raw computing power. To make the difference even more dramatic, suppose that the world's craftiest programmer codes insertion sort in machine language for computer A, and the resulting code requires 2n^2 instructions to sort n numbers. (Here, c1 = 2.) Merge sort, on the other hand, is programmed for computer B by an average programmer using a high-level language with an inefficient compiler, with the resulting code taking 50n lg n instructions (so that c2 = 50). To sort one million numbers, computer A takes

    (2 · (10^6)^2 instructions) / (10^9 instructions/second) = 2000 seconds,

while computer B takes

    (50 · 10^6 · lg 10^6 instructions) / (10^7 instructions/second) ≈ 100 seconds.

By using an algorithm whose running time grows more slowly, even with a poor compiler, computer B runs 20 times faster than computer A! The advantage of merge sort is even more pronounced when we sort ten million numbers: where insertion sort takes approximately 2.3 days, merge sort takes under 20 minutes. In general, as the problem size increases, so does the relative advantage of merge sort.
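
The figures above follow directly from the stated instruction counts and machine speeds; the following Python sketch (our addition, not from the book) reproduces them and also searches for the crossover input size at which the slower machine running merge sort overtakes the faster machine running insertion sort:

    import math

    def insertion_time(n):                     # computer A: 10^9 instructions/second
        return 2 * n**2 / 1e9                  # 2 n^2 instructions (c1 = 2)

    def merge_time(n):                         # computer B: 10^7 instructions/second
        return 50 * n * math.log2(n) / 1e7     # 50 n lg n instructions (c2 = 50)

    print(insertion_time(10**6), merge_time(10**6))   # about 2000 s versus about 100 s
    print(insertion_time(10**7) / 86400)              # about 2.3 days
    print(merge_time(10**7) / 60)                     # under 20 minutes

    # Crossover point (roughly 38,000 items under these assumptions):
    n = 2
    while insertion_time(n) <= merge_time(n):
        n += 1
    print(n)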

Algorithms and other technologies

The example above shows that algorithms, like computer hardware, are a technology. Total system performance depends on choosing efficient algorithms as much as on choosing fast hardware. Just as rapid advances are being made in other computer technologies, they are being made in algorithms as well.

You might wonder whether algorithms are truly that important on contemporary computers in light of other advanced technologies, such as

  • hardware with high clock rates, pipelining, and superscalar architectures,

  • easy-to-use, intuitive graphical user interfaces (GUIs),

  • object-oriented systems, and

  • local-area and wide-area networking.

The answer is yes. Although there are some applications that do not explicitly require algorithmic content at the application level (e.g., some simple web-based applications), most also require a degree of algorithmic content on their own. For example, consider a web-based service that determines how to travel from one location to another. (Several such services existed at the time of this writing.) Its implementation would rely on fast hardware, a graphical user interface, wide-area networking, and also possibly on object orientation. However, it would also require algorithms for certain operations, such as finding routes (probably using a shortest-path algorithm), rendering maps, and interpolating addresses.

Moreover, even an application that does not require algorithmic content at the application level relies heavily upon algorithms. Does the application rely on fast hardware? The hardware design used algorithms. Does the application rely on graphical user interfaces? The design of any GUI relies on algorithms. Does the application rely on networking? Routing in networks relies heavily on algorithms. Was the application written in a language other than machine code? Then it was processed by a compiler, interpreter, or assembler, all of which make extensive use of algorithms. Algorithms are at the core of most technologies used in contemporary computers.

Furthermore, with the ever-increasing capacities of computers, we use them to solve larger problems than ever before. As we saw in the above comparison between insertion sort and merge sort, it is at larger problem sizes that the differences in efficiencies between algorithms become particularly prominent.

Having a solid base of algorithmic knowledge and technique is one characteristic that separates the truly skilled programmers from the novices. With modern computing technology, you can accomplish some tasks without knowing much about algorithms, but with a good background in algorithms, you can do much, much more.

Exercises 1.2-1

Give an example of an application that requires algorithmic content at the application level, and discuss the function of the algorithms involved.

Exercises 1.2-2

Suppose we are comparing implementations of insertion sort and merge sort on the same machine. For inputs of size n, insertion sort runs in 8n^2 steps, while merge sort runs in 64n lg n steps. For which values of n does insertion sort beat merge sort?

Exercises 1.2-3

What is the smallest value of n such that an algorithm whose running time is 100n^2 runs faster than an algorithm whose running time is 2^n on the same machine?

Problems 1-1: Comparison of running times

For each function f(n) and time t in the following table, determine the largest size n of a problem that can be solved in time t, assuming that the algorithm to solve the problem takes f(n) microseconds.

 

  f(n)    | 1 second | 1 minute | 1 hour | 1 day | 1 month | 1 year | 1 century
  --------+----------+----------+--------+-------+---------+--------+----------
  lg n    |          |          |        |       |         |        |
  √n      |          |          |        |       |         |        |
  n       |          |          |        |       |         |        |
  n lg n  |          |          |        |       |         |        |
  n^2     |          |          |        |       |         |        |
  n^3     |          |          |        |       |         |        |
  2^n     |          |          |        |       |         |        |
  n!      |          |          |        |       |         |        |

(All entries are blank, to be filled in as part of the problem.)
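
One way to fill in entries of such a table is numerically, by searching for the largest n with f(n) ≤ t. The Python sketch below is our own addition (not the book's method) and assumes a 30-day month and a 365-day year:

    import math

    # Durations expressed in microseconds, since f(n) is given in microseconds.
    SECOND = 10**6
    MINUTE = 60 * SECOND
    HOUR = 60 * MINUTE
    DAY = 24 * HOUR
    MONTH = 30 * DAY          # assumption: 30-day month
    YEAR = 365 * DAY          # assumption: 365-day year
    CENTURY = 100 * YEAR

    def largest_n(f, t):
        """Largest integer n with f(n) <= t, for nondecreasing f, by doubling
        followed by binary search."""
        if f(1) > t:
            return 0
        hi = 1
        while f(2 * hi) <= t:
            hi *= 2
        lo, hi = hi, 2 * hi                # now f(lo) <= t < f(hi)
        while lo + 1 < hi:
            mid = (lo + hi) // 2
            if f(mid) <= t:
                lo = mid
            else:
                hi = mid
        return lo

    # Examples: the "n lg n", "2^n", and "n!" rows of the one-second column.
    print(largest_n(lambda n: n * math.log2(n), SECOND))
    print(largest_n(lambda n: 2**n, SECOND))
    print(largest_n(lambda n: math.factorial(n), SECOND))

The lg n row is best handled symbolically (its answer is 2^t, far too large to reach by doubling).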

Chapter notes

There are many excellent texts on the general topic of algorithms, including those by Aho, Hopcroft, and Ullman [5, 6], Baase and Van Gelder [26], Brassard and Bratley [46, 47], Goodrich and Tamassia [128], Horowitz, Sahni, and Rajasekaran [158], Kingston [179], Knuth [182, 183, 185], Kozen [193], Manber [210], Mehlhorn [217, 218, 219], Purdom and Brown [252], Reingold, Nievergelt, and Deo [257], Sedgewick [269], Skiena [280], and Wilf [315]. Some of the more practical aspects of algorithm design are discussed by Bentley [39, 40] and Gonnet [126]. Surveys of the field of algorithms can also be found in the Handbook of Theoretical Computer Science, Volume A [302] and the CRC Handbook on Algorithms and Theory of Computation [24]. Overviews of the algorithms used in computational biology can be found in textbooks by Gusfield [136], Pevzner [240], Setubal and Medinas [272], and Waterman [309].

Chapter 2: Getting Started

This chapter will familiarize you with the framework we shall use throughout the book to think about the design and analysis of algorithms. It is self-contained, but it does include several references to material that will be introduced in Chapters 3 and 4. (It also contains several summations, which Appendix A shows how to solve.)

We begin by examining the insertion sort algorithm to solve the sorting problem introduced in Chapter 1. We define a "pseudocode" that should be familiar to readers who have done computer programming and use it to show how we shall specify our algorithms. Having specified the algorithm, we then argue that it correctly sorts and we analyze its running time. The analysis introduces a notation that focuses on how that time increases with the number of items to be sorted. Following our discussion of insertion sort, we introduce the divide-and-conquer approach to the design of algorithms and use it to develop an algorithm called merge sort. We end with an analysis of merge sort's running time.

2.1 Insertion sort

Our first algorithm, insertion sort, solves the sorting problem introduced in Chapter 1:

  • Input: A sequence of n numbers ⟨a1, a2, ..., an⟩.

  • Output: A permutation (reordering) ⟨a'1, a'2, ..., a'n⟩ of the input sequence such that a'1 ≤ a'2 ≤ ... ≤ a'n.

The numbers that we wish to sort are also known as the keys.

In this book, we shall typically describe algorithms as programs written in a pseudocode that is similar in many respects to C, Pascal, or Java. If you have been introduced to any of these languages, you should have little trouble reading our algorithms. What separates pseudocode from "real" code is that in pseudocode, we employ whatever expressive method is most clear and concise to specify a given algorithm. Sometimes, the clearest method is English, so do not be surprised if you come across an English phrase or sentence embedded within a section of "real" code. Another difference between pseudocode and real code is that pseudocode is not typically concerned with issues of software engineering. Issues of data abstraction, modularity, and error handling are often ignored in order to convey the essence of the algorithm more concisely.

We start with insertion sort, which is an efficient algorithm for sorting a small number of elements. Insertion sort works the way many people sort a hand of playing cards. We start with an empty left hand and the cards face down on the table. We then remove one card at a time from the table and insert it into the correct position in the left hand. To find the correct position for a card, we compare it with each of the cards already in the hand, from right to left, as illustrated in Figure 2.1. At all times, the cards held in the left hand are sorted, and these cards were originally the top cards of the pile on the table.

Figure 2.1: Sorting a hand of cards using insertion sort.

Our pseudocode for insertion sort is presented as a procedure called INSERTION-SORT, which takes as a parameter an array A[1 .. n] containing a sequence of length n that is to be sorted. (In the code, the number n of elements in A is denoted by length[A].) The input numbers are sorted in place: the numbers are rearranged within the array A, with at most a constant number of them stored outside the array at any time. The input array A contains the sorted output sequence when INSERTION-SORT is finished.

INSERTION-SORT(A)
1  for j ← 2 to length[A]
2       do key ← A[j]
3          ▹ Insert A[j] into the sorted sequence A[1 .. j - 1].
4          i ← j - 1
5          while i > 0 and A[i] > key
6              do A[i + 1] ← A[i]
7                 i ← i - 1
8          A[i + 1] ← key
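For readers who would like to run the algorithm, here is a direct Python rendering of the pseudocode above; only the indexing convention differs (0-based instead of the book's 1-based arrays).

def insertion_sort(a):
    """Sort the list a in place into nondecreasing order."""
    for j in range(1, len(a)):       # a[0 .. j-1] is the sorted "hand"
        key = a[j]
        i = j - 1
        while i >= 0 and a[i] > key:
            a[i + 1] = a[i]          # shift larger elements one position to the right
            i -= 1
        a[i + 1] = key

For example, insertion_sort applied to [5, 2, 4, 6, 1, 3] leaves the list as [1, 2, 3, 4, 5, 6], matching Figure 2.2.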

Loop invariants and the correctness of insertion sort

Figure 2.2 shows how this algorithm works for A = ⟨5, 2, 4, 6, 1, 3⟩. The index j indicates the "current card" being inserted into the hand. At the beginning of each iteration of the "outer" for loop, which is indexed by j, the subarray consisting of elements A[1 .. j - 1] constitutes the currently sorted hand, and elements A[j + 1 .. n] correspond to the pile of cards still on the table. In fact, elements A[1 .. j - 1] are the elements originally in positions 1 through j - 1, but now in sorted order. We state these properties of A[1 .. j - 1] formally as a loop invariant:

  • At the start of each iteration of the for loop of lines 1-8, the subarray A[1 .. j - 1] consists of the elements originally in A[1 .. j - 1] but in sorted order.

Figure 2.2: The operation of INSERTION-SORT on the array A = ⟨5, 2, 4, 6, 1, 3⟩. Array indices appear above the rectangles, and values stored in the array positions appear within the rectangles. (a)-(e) The iterations of the for loop of lines 1-8. In each iteration, the black rectangle holds the key taken from A[j], which is compared with the values in shaded rectangles to its left in the test of line 5. Shaded arrows show array values moved one position to the right in line 6, and black arrows indicate where the key is moved to in line 8. (f) The final sorted array.

We use loop invariants to help us understand why an algorithm is correct. We must show three things about a loop invariant:

  • Initialization: It is true prior to the first iteration of the loop.

  • Maintenance: If it is true before an iteration of the loop, it remains true before the next iteration.

  • Termination: When the loop terminates, the invariant gives us a useful property that helps show that the algorithm is correct.

When the first two properties hold, the loop invariant is true prior to every iteration of the loop. Note the similarity to mathematical induction, where to prove that a property holds, you prove a base case and an inductive step. Here, showing that the invariant holds before the first iteration is like the base case, and showing that the invariant holds from iteration to iteration is like the inductive step.

The third property is perhaps the most important one, since we are using the loop invariant to show correctness. It also differs from the usual use of mathematical induction, in which the inductive step is used infinitely; here, we stop the "induction" when the loop terminates.

Let us see how these properties hold for insertion sort.

  • Initialization: We start by showing that the loop invariant holds before the first loop iteration, when j = 2.[1] The subarray A[1 .. j - 1], therefore, consists of just the single element A[1], which is in fact the original element in A[1]. Moreover, this subarray is sorted (trivially, of course), which shows that the loop invariant holds prior to the first iteration of the loop.

  • Maintenance: Next, we tackle the second property: showing that each iteration maintains the loop invariant. Informally, the body of the outer for loop works by moving A[ j - 1], A[ j - 2], A[ j - 3], and so on by one position to the right until the proper position for A[ j] is found (lines 4-7), at which point the value of A[j] is inserted (line 8). A more formal treatment of the second property would require us to state and show a loop invariant for the "inner" while loop. At this point, however, we prefer not to get bogged down in such formalism, and so we rely on our informal analysis to show that the second property holds for the outer loop.

  • Termination: Finally, we examine what happens when the loop terminates. For insertion sort, the outer for loop ends when j exceeds n, i.e., when j = n + 1. Substituting n + 1 for j in the wording of the loop invariant, we have that the subarray A[1 .. n] consists of the elements originally in A[1 .. n], but in sorted order. But the subarray A[1 .. n] is the entire array! Hence, the entire array is sorted, which means that the algorithm is correct.

We shall use this method of loop invariants to show correctness later in this chapter and in other chapters as well.

Pseudocode conventions

We use the following conventions in our pseudocode.

  1. Indentation indicates block structure. For example, the body of the for loop that begins on line 1 consists of lines 2-8, and the body of the while loop that begins on line 5 contains lines 6-7 but not line 8. Our indentation style applies to if-then-else statements as well. Using indentation instead of conventional indicators of block structure, such as begin and end statements, greatly reduces clutter while preserving, or even enhancing, clarity.[2]

  2. The looping constructs while, for, and repeat and the conditional constructs if, then, and else have interpretations similar to those in Pascal.[3] There is one subtle difference with respect to for loops, however: in Pascal, the value of the loop-counter variable is undefined upon exiting the loop, but in this book, the loop counter retains its value after exiting the loop. Thus, immediately after a for loop, the loop counter's value is the value that first exceeded the for loop bound. We used this property in our correctness argument for insertion sort. The for loop header in line 1 is for j ← 2 to length[A], and so when this loop terminates, j = length[A] + 1 (or, equivalently, j = n + 1, since n = length[A]).

  3. The symbol "▹" indicates that the remainder of the line is a comment.

  4. A multiple assignment of the form i ← j ← e assigns to both variables i and j the value of expression e; it should be treated as equivalent to the assignment j ← e followed by the assignment i ← j.

  5. Variables (such as i, j, and key) are local to the given procedure. We shall not use global variables without explicit indication.

  6. Array elements are accessed by specifying the array name followed by the index in square brackets. For example, A[i] indicates the ith element of the array A. The notation ".." is used to indicate a range of values within an array. Thus, A[1 .. j] indicates the subarray of A consisting of the j elements A[1], A[2], . . . , A[j].

  7. Compound data are typically organized into objects, which are composed of attributes or fields. A particular field is accessed using the field name followed by the name of its object in square brackets. For example, we treat an array as an object with the attribute length indicating how many elements it contains. To specify the number of elements in an array A, we write length[A]. Although we use square brackets for both array indexing and object attributes, it will usually be clear from the context which interpretation is intended.

    A variable representing an array or object is treated as a pointer to the data representing the array or object. For all fields f of an object x, setting y ← x causes f[y] = f[x]. Moreover, if we now set f[x] ← 3, then afterward not only is f[x] = 3, but f[y] = 3 as well. In other words, x and y point to ("are") the same object after the assignment y ← x.

    Sometimes, a pointer will refer to no object at all. In this case, we give it the special value NIL.

  8. Parameters are passed to a procedure by value: the called procedure receives its own copy of the parameters, and if it assigns a value to a parameter, the change is not seen by the calling procedure. When objects are passed, the pointer to the data representing the object is copied, but the object's fields are not. For example, if x is a parameter of a called procedure, the assignment x ← y within the called procedure is not visible to the calling procedure. The assignment f[x] ← 3, however, is visible.

  9. The boolean operators "and" and "or" are short circuiting. That is, when we evaluate the expression "x and y" we first evaluate x. If x evaluates to FALSE, then the entire expression cannot evaluate to TRUE, and so we do not evaluate y. If, on the other hand, x evaluates to TRUE, we must evaluate y to determine the value of the entire expression. Similarly, in the expression "x or y" we evaluate the expression y only if x evaluates to FALSE. Short-circuiting operators allow us to write boolean expressions such as "x ≠ NIL and f[x] = y" without worrying about what happens when we try to evaluate f[x] when x is NIL.
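Most real languages behave the same way. For instance, Python's and/or operators short-circuit, so a test written in the spirit of the convention above is safe; in this small illustration of ours, x and the attribute name f are purely hypothetical:

x = None
if x is not None and x.f == 3:   # x.f is never evaluated when x is None
    print("found it")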

Exercises 2.1-1
Start example

Using Figure 2.2 as a model, illustrate the operation of INSERTION-SORT on the array A = ⟨31, 41, 59, 26, 41, 58⟩.

End example
Exercises 2.1-2
Start example

Rewrite the INSERTION-SORT procedure to sort into nonincreasing instead of nondecreasing order.

End example
Exercises 2.1-3
Start example

Consider the searching problem:

  • Input: A sequence of n numbers A = ⟨a1, a2, . . . , an⟩ and a value v.

  • Output: An index i such that v = A[i] or the special value NIL if v does not appear in A.

Write pseudocode for linear search, which scans through the sequence, looking for v. Using a loop invariant, prove that your algorithm is correct. Make sure that your loop invariant fulfills the three necessary properties.

End example
Exercises 2.1-4
Start example

Consider the problem of adding two n-bit binary integers, stored in two n-element arrays A and B. The sum of the two integers should be stored in binary form in an (n + 1)-element array C. State the problem formally and write pseudocode for adding the two integers.

End example

[1]When the loop is a for loop, the moment at which we check the loop invariant just prior to the first iteration is immediately after the initial assignment to the loop-counter variable and just before the first test in the loop header. In the case of INSERTION-SORT, this time is after assigning 2 to the variable j but before the first test of whether j ≤ length[A].

[2]In real programming languages, it is generally not advisable to use indentation alone to indicate block structure, since levels of indentation are hard to determine when code is split across pages.

[3]Most block-structured languages have equivalent constructs, though the exact syntax may differ from that of Pascal.

2.2 Analyzing algorithms

Analyzing an algorithm has come to mean predicting the resources that the algorithm requires. Occasionally, resources such as memory, communication bandwidth, or computer hardware are of primary concern, but most often it is computational time that we want to measure. Generally, by analyzing several candidate algorithms for a problem, a most efficient one can be easily identified. Such analysis may indicate more than one viable candidate, but several inferior algorithms are usually discarded in the process.

Before we can analyze an algorithm, we must have a model of the implementation technology that will be used, including a model for the resources of that technology and their costs. For most of this book, we shall assume a generic one-processor, random-access machine (RAM) model of computation as our implementation technology and understand that our algorithms will be implemented as computer programs. In the RAM model, instructions are executed one after another, with no concurrent operations. In later chapters, however, we shall have occasion to investigate models for digital hardware.

Strictly speaking, one should precisely define the instructions of the RAM model and their costs. To do so, however, would be tedious and would yield little insight into algorithm design and analysis. Yet we must be careful not to abuse the RAM model. For example, what if a RAM had an instruction that sorts? Then we could sort in just one instruction. Such a RAM would be unrealistic, since real computers do not have such instructions. Our guide, therefore, is how real computers are designed. The RAM model contains instructions commonly found in real computers: arithmetic (add, subtract, multiply, divide, remainder, floor, ceiling), data movement (load, store, copy), and control (conditional and unconditional branch, subroutine call and return). Each such instruction takes a constant amount of time.

The data types in the RAM model are integer and floating point. Although we typically do not concern ourselves with precision in this book, in some applications precision is crucial. We also assume a limit on the size of each word of data. For example, when working with inputs of size n, we typically assume that integers are represented by c lg n bits for some constant c ≥ 1. We require c ≥ 1 so that each word can hold the value of n, enabling us to index the individual input elements, and we restrict c to be a constant so that the word size does not grow arbitrarily. (If the word size could grow arbitrarily, we could store huge amounts of data in one word and operate on it all in constant time-clearly an unrealistic scenario.)

Real computers contain instructions not listed above, and such instructions represent a gray area in the RAM model. For example, is exponentiation a constant-time instruction? In the general case, no; it takes several instructions to compute x^y when x and y are real numbers. In restricted situations, however, exponentiation is a constant-time operation. Many computers have a "shift left" instruction, which in constant time shifts the bits of an integer by k positions to the left. In most computers, shifting the bits of an integer by one position to the left is equivalent to multiplication by 2. Shifting the bits by k positions to the left is equivalent to multiplication by 2^k. Therefore, such computers can compute 2^k in one constant-time instruction by shifting the integer 1 by k positions to the left, as long as k is no more than the number of bits in a computer word. We will endeavor to avoid such gray areas in the RAM model, but we will treat computation of 2^k as a constant-time operation when k is a small enough positive integer.
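As a small illustration of ours (not from the book), this is how the shift shows up in ordinary code; Python's << operator is the "shift left" instruction in question, though Python integers are unbounded, so the constant-time claim only applies while k stays within a machine word:

k = 10
print(1 << k)   # 1024, i.e., 2^k computed with a single left shift
print(5 << 1)   # 10: shifting left by one position multiplies by 2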

In the RAM model, we do not attempt to model the memory hierarchy that is common in contemporary computers. That is, we do not model caches or virtual memory (which is most often implemented with demand paging). Several computational models attempt to account for memory-hierarchy effects, which are sometimes significant in real programs on real machines. A handful of problems in this book examine memory-hierarchy effects, but for the most part, the analyses in this book will not consider them. Models that include the memory hierarchy are quite a bit more complex than the RAM model, so that they can be difficult to work with. Moreover, RAM-model analyses are usually excellent predictors of performance on actual machines.

Analyzing even a simple algorithm in the RAM model can be a challenge. The mathematical tools required may include combinatorics, probability theory, algebraic dexterity, and the ability to identify the most significant terms in a formula. Because the behavior of an algorithm may be different for each possible input, we need a means for summarizing that behavior in simple, easily understood formulas.

Even though we typically select only one machine model to analyze a given algorithm, we still face many choices in deciding how to express our analysis. We would like a way that is simple to write and manipulate, shows the important characteristics of an algorithm's resource requirements, and suppresses tedious details.

Analysis of insertion sort

The time taken by the INSERTION-SORT procedure depends on the input: sorting a thousand numbers takes longer than sorting three numbers. Moreover, INSERTION-SORT can take different amounts of time to sort two input sequences of the same size depending on how nearly sorted they already are. In general, the time taken by an algorithm grows with the size of the input, so it is traditional to describe the running time of a program as a function of the size of its input. To do so, we need to define the terms "running time" and "size of input" more carefully.

The best notion for input size depends on the problem being studied. For many problems, such as sorting or computing discrete Fourier transforms, the most natural measure is the number of items in the input-for example, the array size n for sorting. For many other problems, such as multiplying two integers, the best measure of input size is the total number of bits needed to represent the input in ordinary binary notation. Sometimes, it is more appropriate to describe the size of the input with two numbers rather than one. For instance, if the input to an algorithm is a graph, the input size can be described by the numbers of vertices and edges in the graph. We shall indicate which input size measure is being used with each problem we study.

The running time of an algorithm on a particular input is the number of primitive operations or "steps" executed. It is convenient to define the notion of step so that it is as machine-independent as possible. For the moment, let us adopt the following view. A constant amount of time is required to execute each line of our pseudocode. One line may take a different amount of time than another line, but we shall assume that each execution of the ith line takes time ci , where ci is a constant. This viewpoint is in keeping with the RAM model, and it also reflects how the pseudocode would be implemented on most actual computers.[4]

In the following discussion, our expression for the running time of INSERTION-SORT will evolve from a messy formula that uses all the statement costs ci to a much simpler notation that is more concise and more easily manipulated. This simpler notation will also make it easy to determine whether one algorithm is more efficient than another.

We start by presenting the INSERTION-SORT procedure with the time "cost" of each statement and the number of times each statement is executed. For each j = 2, 3, . . . , n, where n = length[A], we let tj be the number of times the while loop test in line 5 is executed for that value of j. When a for or while loop exits in the usual way (i.e., due to the test in the loop header), the test is executed one time more than the loop body. We assume that comments are not executable statements, and so they take no time.

INSERTION-SORT(A)                                    cost    times
1  for j ← 2 to length[A]                            c1      n
2       do key ← A[j]                                c2      n - 1
3          ▹ Insert A[j] into the sorted
             sequence A[1 .. j - 1].                 0       n - 1
4          i ← j - 1                                 c4      n - 1
5          while i > 0 and A[i] > key                c5      Σ_{j=2}^{n} t_j
6              do A[i + 1] ← A[i]                    c6      Σ_{j=2}^{n} (t_j - 1)
7                 i ← i - 1                          c7      Σ_{j=2}^{n} (t_j - 1)
8          A[i + 1] ← key                            c8      n - 1

The running time of the algorithm is the sum of running times for each statement executed; a statement that takes ci steps to execute and is executed n times will contribute cin to the total running time.[5] To compute T(n), the running time of INSERTION-SORT, we sum the products of the cost and times columns, obtaining

T(n) = c1·n + c2(n - 1) + c4(n - 1) + c5 Σ_{j=2}^{n} t_j + c6 Σ_{j=2}^{n} (t_j - 1) + c7 Σ_{j=2}^{n} (t_j - 1) + c8(n - 1).

Even for inputs of a given size, an algorithm's running time may depend on which input of that size is given. For example, in INSERTION-SORT, the best case occurs if the array is already sorted. For each j = 2, 3, . . . , n, we then find that A[i] ≤ key in line 5 when i has its initial value of j - 1. Thus tj = 1 for j = 2, 3, . . . , n, and the best-case running time is

T(n) = c1n + c2(n - 1) + c4(n - 1) + c5(n - 1) + c8(n - 1)
     = (c1 + c2 + c4 + c5 + c8)n - (c2 + c4 + c5 + c8).

This running time can be expressed as an + b for constants a and b that depend on the statement costs ci ; it is thus a linear function of n.

If the array is in reverse sorted order-that is, in decreasing order-the worst case results. We must compare each element A[j] with each element in the entire sorted subarray A[1 .. j - 1], and so tj = j for j = 2, 3, . . . , n. Noting that

Σ_{j=2}^{n} j = n(n + 1)/2 - 1

and

Σ_{j=2}^{n} (j - 1) = n(n - 1)/2

(see Appendix A for a review of how to solve these summations), we find that in the worst case, the running time of INSERTION-SORT is

T(n) = c1n + c2(n - 1) + c4(n - 1) + c5(n(n + 1)/2 - 1) + c6(n(n - 1)/2) + c7(n(n - 1)/2) + c8(n - 1)
     = (c5/2 + c6/2 + c7/2)n^2 + (c1 + c2 + c4 + c5/2 - c6/2 - c7/2 + c8)n - (c2 + c4 + c5 + c8).

This worst-case running time can be expressed as an2 + bn + c for constants a, b, and c that again depend on the statement costs ci ; it is thus a quadratic function of n.

Typically, as in insertion sort, the running time of an algorithm is fixed for a given input, although in later chapters we shall see some interesting "randomized" algorithms whose behavior can vary even for a fixed input.
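The best-case/worst-case contrast is easy to observe empirically. The following Python sketch is ours, not the book's; it counts the while-loop tests (the quantities t_j of the analysis) summed over all j:

def insertion_sort_tests(a):
    """Return the total number of while-loop tests performed while sorting a copy of a."""
    a = list(a)
    tests = 0
    for j in range(1, len(a)):
        key = a[j]
        i = j - 1
        tests += 1                   # the test that finally fails is counted too
        while i >= 0 and a[i] > key:
            a[i + 1] = a[i]
            i -= 1
            tests += 1
        a[i + 1] = key
    return tests

n = 1000
print(insertion_sort_tests(range(1, n + 1)))    # sorted input: n - 1 = 999 tests (linear)
print(insertion_sort_tests(range(n, 0, -1)))    # reverse-sorted input: 500499 tests, about n^2/2 (quadratic)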

Worst-case and average-case analysis

In our analysis of insertion sort, we looked at both the best case, in which the input array was already sorted, and the worst case, in which the input array was reverse sorted. For the remainder of this book, though, we shall usually concentrate on finding only the worst-case running time, that is, the longest running time for any input of size n. We give three reasons for this orientation.

  • The worst-case running time of an algorithm is an upper bound on the running time for any input. Knowing it gives us a guarantee that the algorithm will never take any longer. We need not make some educated guess about the running time and hope that it never gets much worse.

  • For some algorithms, the worst case occurs fairly often. For example, in searching a database for a particular piece of information, the searching algorithm's worst case will often occur when the information is not present in the database. In some searching applications, searches for absent information may be frequent.

  • The "average case" is often roughly as bad as the worst case. Suppose that we randomly choose n numbers and apply insertion sort. How long does it take to determine where in subarray A[1 j - 1] to insert element A[j]? On average, half the elements in A[1 j - 1] are less than A[j], and half the elements are greater. On average, therefore, we check half of the subarray A[1 j - 1], so tj = j/2. If we work out the resulting average-case running time, it turns out to be a quadratic function of the input size, just like the worst-case running time.

In some particular cases, we shall be interested in the average-case or expected running time of an algorithm; in Chapter 5, we shall see the technique of probabilistic analysis, by which we determine expected running times. One problem with performing an average-case analysis, however, is that it may not be apparent what constitutes an "average" input for a particular problem. Often, we shall assume that all inputs of a given size are equally likely. In practice, this assumption may be violated, but we can sometimes use a randomized algorithm, which makes random choices, to allow a probabilistic analysis.

Order of growth

We used some simplifying abstractions to ease our analysis of the INSERTION-SORT procedure. First, we ignored the actual cost of each statement, using the constants ci to represent these costs. Then, we observed that even these constants give us more detail than we really need: the worst-case running time is an2 + bn + c for some constants a, b, and c that depend on the statement costs ci. We thus ignored not only the actual statement costs, but also the abstract costs ci.

We shall now make one more simplifying abstraction. It is the rate of growth, or order of growth, of the running time that really interests us. We therefore consider only the leading term of a formula (e.g., an2), since the lower-order terms are relatively insignificant for large n. We also ignore the leading term's constant coefficient, since constant factors are less significant than the rate of growth in determining computational efficiency for large inputs. Thus, we write that insertion sort, for example, has a worst-case running time of Θ(n2) (pronounced "theta of n-squared"). We shall use Θ-notation informally in this chapter; it will be defined precisely in Chapter 3.

We usually consider one algorithm to be more efficient than another if its worst-case running time has a lower order of growth. Due to constant factors and lower-order terms, this evaluation may be in error for small inputs. But for large enough inputs, a Θ(n2) algorithm, for example, will run more quickly in the worst case than a Θ(n3) algorithm.

Exercises 2.2-1
Start example

Express the function n^3/1000 - 100n^2 - 100n + 3 in terms of Θ-notation.

End example
Exercises 2.2-2
Start example

Consider sorting n numbers stored in array A by first finding the smallest element of A and exchanging it with the element in A[1]. Then find the second smallest element of A, and exchange it with A[2]. Continue in this manner for the first n - 1 elements of A. Write pseudocode for this algorithm, which is known as selection sort. What loop invariant does this algorithm maintain? Why does it need to run for only the first n - 1 elements, rather than for all n elements? Give the best-case and worst-case running times of selection sort in Θ-notation.

End example
Exercises 2.2-3
Start example

Consider linear search again (see Exercise 2.1-3). How many elements of the input sequence need to be checked on the average, assuming that the element being searched for is equally likely to be any element in the array? How about in the worst case? What are the average-case and worst-case running times of linear search in Θ-notation? Justify your answers.

End example
Exercises 2.2-4
Start example

How can we modify almost any algorithm to have a good best-case running time?

End example

[4]There are some subtleties here. Computational steps that we specify in English are often variants of a procedure that requires more than just a constant amount of time. For example, later in this book we might say "sort the points by x-coordinate," which, as we shall see, takes more than a constant amount of time. Also, note that a statement that calls a subroutine takes constant time, though the subroutine, once invoked, may take more. That is, we separate the process of calling the subroutine-passing parameters to it, etc.-from the process of executing the subroutine.

[5]This characteristic does not necessarily hold for a resource such as memory. A statement that references m words of memory and is executed n times does not necessarily consume mn words of memory in total.

2.3 Designing algorithms

There are many ways to design algorithms. Insertion sort uses an incremental approach: having sorted the subarray A[1 .. j - 1], we insert the single element A[j] into its proper place, yielding the sorted subarray A[1 .. j].

In this section, we examine an alternative design approach, known as "divide-and-conquer." We shall use divide-and-conquer to design a sorting algorithm whose worst-case running time is much less than that of insertion sort. One advantage of divide-and-conquer algorithms is that their running times are often easily determined using techniques that will be introduced in Chapter 4.

2.3.1 The divide-and-conquer approach

Many useful algorithms are recursive in structure: to solve a given problem, they call themselves recursively one or more times to deal with closely related subproblems. These algorithms typically follow a divide-and-conquer approach: they break the problem into several subproblems that are similar to the original problem but smaller in size, solve the subproblems recursively, and then combine these solutions to create a solution to the original problem.

The divide-and-conquer paradigm involves three steps at each level of the recursion:

  • Divide the problem into a number of subproblems.

  • Conquer the subproblems by solving them recursively. If the subproblem sizes are small enough, however, just solve the subproblems in a straightforward manner.

  • Combine the solutions to the subproblems into the solution for the original problem.

The merge sort algorithm closely follows the divide-and-conquer paradigm. Intuitively, it operates as follows.

  • Divide: Divide the n-element sequence to be sorted into two subsequences of n/2 elements each.

  • Conquer: Sort the two subsequences recursively using merge sort.

  • Combine: Merge the two sorted subsequences to produce the sorted answer.

The recursion "bottoms out" when the sequence to be sorted has length 1, in which case there is no work to be done, since every sequence of length 1 is already in sorted order.

The key operation of the merge sort algorithm is the merging of two sorted sequences in the "combine" step. To perform the merging, we use an auxiliary procedure MERGE(A, p, q, r), where A is an array and p, q, and r are indices numbering elements of the array such that p ≤ q < r. The procedure assumes that the subarrays A[p .. q] and A[q + 1 .. r] are in sorted order. It merges them to form a single sorted subarray that replaces the current subarray A[p .. r].

Our MERGE procedure takes time Θ(n), where n = r - p + 1 is the number of elements being merged, and it works as follows. Returning to our card-playing motif, suppose we have two piles of cards face up on a table. Each pile is sorted, with the smallest cards on top. We wish to merge the two piles into a single sorted output pile, which is to be face down on the table. Our basic step consists of choosing the smaller of the two cards on top of the face-up piles, removing it from its pile (which exposes a new top card), and placing this card face down onto the output pile. We repeat this step until one input pile is empty, at which time we just take the remaining input pile and place it face down onto the output pile. Computationally, each basic step takes constant time, since we are checking just two top cards. Since we perform at most n basic steps, merging takes Θ(n) time.

The following pseudocode implements the above idea, but with an additional twist that avoids having to check whether either pile is empty in each basic step. The idea is to put on the bottom of each pile a sentinel card, which contains a special value that we use to simplify our code. Here, we use ∞ as the sentinel value, so that whenever a card with ∞ is exposed, it cannot be the smaller card unless both piles have their sentinel cards exposed. But once that happens, all the nonsentinel cards have already been placed onto the output pile. Since we know in advance that exactly r - p + 1 cards will be placed onto the output pile, we can stop once we have performed that many basic steps.

MERGE(A, p, q, r)
 1  n1 ← q - p + 1
 2  n2 ← r - q
 3  create arrays L[1 .. n1 + 1] and R[1 .. n2 + 1]
 4  for i ← 1 to n1
 5       do L[i] ← A[p + i - 1]
 6  for j ← 1 to n2
 7       do R[j] ← A[q + j]
 8  L[n1 + 1] ← ∞
 9  R[n2 + 1] ← ∞
10  i ← 1
11  j ← 1
12  for k ← p to r
13       do if L[i] ≤ R[j]
14             then A[k] ← L[i]
15                  i ← i + 1
16             else A[k] ← R[j]
17                  j ← j + 1

In detail, the MERGE procedure works as follows. Line 1 computes the length n1 of the subarray A[p .. q], and line 2 computes the length n2 of the subarray A[q + 1 .. r]. We create arrays L and R ("left" and "right"), of lengths n1 + 1 and n2 + 1, respectively, in line 3. The for loop of lines 4-5 copies the subarray A[p .. q] into L[1 .. n1], and the for loop of lines 6-7 copies the subarray A[q + 1 .. r] into R[1 .. n2]. Lines 8-9 put the sentinels at the ends of the arrays L and R. Lines 10-17, illustrated in Figure 2.3, perform the r - p + 1 basic steps by maintaining the following loop invariant:

  • At the start of each iteration of the for loop of lines 12-17, the subarray A[p .. k - 1] contains the k - p smallest elements of L[1 .. n1 + 1] and R[1 .. n2 + 1], in sorted order. Moreover, L[i] and R[j] are the smallest elements of their arrays that have not been copied back into A.

Figure 2.3: The operation of lines 10-17 in the call MERGE(A, 9, 12, 16), when the subarray A[9 .. 16] contains the sequence ⟨2, 4, 5, 7, 1, 2, 3, 6⟩. After copying and inserting sentinels, the array L contains ⟨2, 4, 5, 7, ∞⟩, and the array R contains ⟨1, 2, 3, 6, ∞⟩. Lightly shaded positions in A contain their final values, and lightly shaded positions in L and R contain values that have yet to be copied back into A. Taken together, the lightly shaded positions always comprise the values originally in A[9 .. 16], along with the two sentinels. Heavily shaded positions in A contain values that will be copied over, and heavily shaded positions in L and R contain values that have already been copied back into A. (a)-(h) The arrays A, L, and R, and their respective indices k, i, and j prior to each iteration of the loop of lines 12-17. (i) The arrays and indices at termination. At this point, the subarray in A[9 .. 16] is sorted, and the two sentinels in L and R are the only two elements in these arrays that have not been copied into A.

We must show that this loop invariant holds prior to the first iteration of the for loop of lines 12-17, that each iteration of the loop maintains the invariant, and that the invariant provides a useful property to show correctness when the loop terminates.

  • Initialization: Prior to the first iteration of the loop, we have k = p, so that the subarray A[p .. k - 1] is empty. This empty subarray contains the k - p = 0 smallest elements of L and R, and since i = j = 1, both L[i] and R[j] are the smallest elements of their arrays that have not been copied back into A.

  • Maintenance: To see that each iteration maintains the loop invariant, let us first suppose that L[i] ≤ R[j]. Then L[i] is the smallest element not yet copied back into A. Because A[p .. k - 1] contains the k - p smallest elements, after line 14 copies L[i] into A[k], the subarray A[p .. k] will contain the k - p + 1 smallest elements. Incrementing k (in the for loop update) and i (in line 15) reestablishes the loop invariant for the next iteration. If instead L[i] > R[j], then lines 16-17 perform the appropriate action to maintain the loop invariant.

  • Termination: At termination, k = r + 1. By the loop invariant, the subarray A[p .. k - 1], which is A[p .. r], contains the k - p = r - p + 1 smallest elements of L[1 .. n1 + 1] and R[1 .. n2 + 1], in sorted order. The arrays L and R together contain n1 + n2 + 2 = r - p + 3 elements. All but the two largest have been copied back into A, and these two largest elements are the sentinels.

To see that the MERGE procedure runs in Θ(n) time, where n = r - p + 1, observe that each of lines 1-3 and 8-11 takes constant time, the for loops of lines 4-7 take Θ(n1 + n2) = Θ(n) time,[6] and there are n iterations of the for loop of lines 12-17, each of which takes constant time.
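A direct Python rendering of MERGE (0-based, with float('inf') playing the role of the sentinel ∞) may make the index bookkeeping easier to follow; this is our sketch, not the book's code:

def merge(a, p, q, r):
    """Merge the sorted subarrays a[p..q] and a[q+1..r] (inclusive indices) in place."""
    left = a[p:q + 1] + [float("inf")]        # copy of the left pile, sentinel on the bottom
    right = a[q + 1:r + 1] + [float("inf")]   # copy of the right pile, sentinel on the bottom
    i = j = 0
    for k in range(p, r + 1):                 # r - p + 1 basic steps, as in the analysis
        if left[i] <= right[j]:
            a[k] = left[i]
            i += 1
        else:
            a[k] = right[j]
            j += 1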

We can now use the MERGE procedure as a subroutine in the merge sort algorithm. The procedure MERGE-SORT(A, p, r) sorts the elements in the subarray A[p .. r]. If p ≥ r, the subarray has at most one element and is therefore already sorted. Otherwise, the divide step simply computes an index q that partitions A[p .. r] into two subarrays: A[p .. q], containing ⌈n/2⌉ elements, and A[q + 1 .. r], containing ⌊n/2⌋ elements.[7]

MERGE-SORT(A, p, r)
1  if p < r
2     then q ← ⌊(p + r)/2⌋
3          MERGE-SORT(A, p, q)
4          MERGE-SORT(A, q + 1, r)
5          MERGE(A, p, q, r)
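Continuing the Python sketch begun with merge above, MERGE-SORT translates line for line; calling merge_sort(a) with the default arguments sorts the whole list:

def merge_sort(a, p=0, r=None):
    """Sort a[p..r] in place; with the defaults, sort all of a."""
    if r is None:
        r = len(a) - 1
    if p < r:
        q = (p + r) // 2           # floor of (p + r)/2
        merge_sort(a, p, q)
        merge_sort(a, q + 1, r)
        merge(a, p, q, r)          # the merge sketch given after the MERGE analysis

data = [5, 2, 4, 7, 1, 3, 2, 6]
merge_sort(data)
print(data)                        # [1, 2, 2, 3, 4, 5, 6, 7], as in Figure 2.4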

To sort the entire sequence A = ⟨A[1], A[2], . . . , A[n]⟩, we make the initial call MERGE-SORT(A, 1, length[A]), where once again length[A] = n. Figure 2.4 illustrates the operation of the procedure bottom-up when n is a power of 2. The algorithm consists of merging pairs of 1-item sequences to form sorted sequences of length 2, merging pairs of sequences of length 2 to form sorted sequences of length 4, and so on, until two sequences of length n/2 are merged to form the final sorted sequence of length n.

Figure 2.4: The operation of merge sort on the array A = ⟨5, 2, 4, 7, 1, 3, 2, 6⟩. The lengths of the sorted sequences being merged increase as the algorithm progresses from bottom to top.

2.3.2 Analyzing divide-and-conquer algorithms

When an algorithm contains a recursive call to itself, its running time can often be described by a recurrence equation or recurrence, which describes the overall running time on a problem of size n in terms of the running time on smaller inputs. We can then use mathematical tools to solve the recurrence and provide bounds on the performance of the algorithm.

A recurrence for the running time of a divide-and-conquer algorithm is based on the three steps of the basic paradigm. As before, we let T(n) be the running time on a problem of size n. If the problem size is small enough, say n ≤ c for some constant c, the straightforward solution takes constant time, which we write as Θ(1). Suppose that our division of the problem yields a subproblems, each of which is 1/b the size of the original. (For merge sort, both a and b are 2, but we shall see many divide-and-conquer algorithms in which a ≠ b.) If we take D(n) time to divide the problem into subproblems and C(n) time to combine the solutions to the subproblems into the solution to the original problem, we get the recurrence

T(n) = Θ(1)                          if n ≤ c,
T(n) = aT(n/b) + D(n) + C(n)         otherwise.

In Chapter 4, we shall see how to solve common recurrences of this form.

Analysis of merge sort

Although the pseudocode for MERGE-SORT works correctly when the number of elements is not even, our recurrence-based analysis is simplified if we assume that the original problem size is a power of 2. Each divide step then yields two subsequences of size exactly n/2. In Chapter 4, we shall see that this assumption does not affect the order of growth of the solution to the recurrence.

We reason as follows to set up the recurrence for T (n), the worst-case running time of merge sort on n numbers. Merge sort on just one element takes constant time. When we have n > 1 elements, we break down the running time as follows.

  • Divide: The divide step just computes the middle of the subarray, which takes constant time. Thus, D(n) = Θ(1).

  • Conquer: We recursively solve two subproblems, each of size n/2, which contributes 2T (n/2) to the running time.

  • Combine: We have already noted that the MERGE procedure on an n-element subarray takes time Θ(n), so C(n) = Θ(n).

When we add the functions D(n) and C(n) for the merge sort analysis, we are adding a function that is Θ(n) and a function that is Θ(1). This sum is a linear function of n, that is, Θ(n). Adding it to the 2T (n/2) term from the "conquer" step gives the recurrence for the worst-case running time T (n) of merge sort:

(2.1)  T(n) = Θ(1)               if n = 1,
       T(n) = 2T(n/2) + Θ(n)     if n > 1.

In Chapter 4, we shall see the "master theorem," which we can use to show that T (n) is Θ(n lg n), where lg n stands for log2 n. Because the logarithm function grows more slowly than any linear function, for large enough inputs, merge sort, with its Θ(n lg n) running time, outperforms insertion sort, whose running time is Θ(n2), in the worst case.

We do not need the master theorem to intuitively understand why the solution to the recurrence (2.1) is T (n) = Θ(n lg n). Let us rewrite recurrence (2.1) as

(2.2)  T(n) = c                  if n = 1,
       T(n) = 2T(n/2) + cn       if n > 1,

where the constant c represents the time required to solve problems of size 1 as well as the time per array element of the divide and combine steps.[8]

Figure 2.5 shows how we can solve the recurrence (2.2). For convenience, we assume that n is an exact power of 2. Part (a) of the figure shows T (n), which in part (b) has been expanded into an equivalent tree representing the recurrence. The cn term is the root (the cost at the top level of recursion), and the two subtrees of the root are the two smaller recurrences T (n/2). Part (c) shows this process carried one step further by expanding T (n/2). The cost for each of the two subnodes at the second level of recursion is cn/2. We continue expanding each node in the tree by breaking it into its constituent parts as determined by the recurrence, until the problem sizes get down to 1, each with a cost of c. Part (d) shows the resulting tree.

Figure 2.5: The construction of a recursion tree for the recurrence T(n) = 2T(n/2) + cn. Part (a) shows T(n), which is progressively expanded in (b)-(d) to form the recursion tree. The fully expanded tree in part (d) has lg n + 1 levels (i.e., it has height lg n, as indicated), and each level contributes a total cost of cn. The total cost, therefore, is cn lg n + cn, which is Θ(n lg n).

Next, we add the costs across each level of the tree. The top level has total cost cn, the next level down has total cost c(n/2) + c(n/2) = cn, the level after that has total cost c(n/4) + c(n/4) + c(n/4) + c(n/4) = cn, and so on. In general, the level i below the top has 2^i nodes, each contributing a cost of c(n/2^i), so that the ith level below the top has total cost 2^i · c(n/2^i) = cn. At the bottom level, there are n nodes, each contributing a cost of c, for a total cost of cn.

The total number of levels of the "recursion tree" in Figure 2.5 is lg n + 1. This fact is easily seen by an informal inductive argument. The base case occurs when n = 1, in which case there is only one level. Since lg 1 = 0, we have that lg n + 1 gives the correct number of levels. Now assume as an inductive hypothesis that the number of levels of a recursion tree for 2^i leaves is lg 2^i + 1 = i + 1 (since for any value of i, we have that lg 2^i = i). Because we are assuming that the original input size is a power of 2, the next input size to consider is 2^(i+1). A tree with 2^(i+1) leaves has one more level than a tree of 2^i leaves, and so the total number of levels is (i + 1) + 1 = lg 2^(i+1) + 1.

To compute the total cost represented by the recurrence (2.2), we simply add up the costs of all the levels. There are lg n + 1 levels, each costing cn, for a total cost of cn(lg n + 1) = cn lg n + cn. Ignoring the low-order term and the constant c gives the desired result of Θ(n lg n).
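The recursion-tree argument is easy to sanity-check numerically. The small Python sketch below (ours, not the book's) evaluates recurrence (2.2) with c = 1 and compares it against n lg n + n for a few powers of 2:

from math import log2

def T(n, c=1):
    """Cost defined by recurrence (2.2): T(1) = c and T(n) = 2T(n/2) + cn for n a power of 2."""
    if n == 1:
        return c
    return 2 * T(n // 2, c) + c * n

for n in (2, 8, 64, 1024):
    print(n, T(n), n * log2(n) + n)    # the last two columns agree: cn lg n + cn with c = 1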

Exercises 2.3-1
Start example

Using Figure 2.4 as a model, illustrate the operation of merge sort on the array A = ⟨3, 41, 52, 26, 38, 57, 9, 49⟩.

End example
Exercises 2.3-2
Start example

Rewrite the MERGE procedure so that it does not use sentinels, instead stopping once either array L or R has had all its elements copied back to A and then copying the remainder of the other array back into A.

End example
Exercises 2.3-3
Start example

Use mathematical induction to show that when n is an exact power of 2, the solution of the recurrence

T(n) = 2                 if n = 2,
T(n) = 2T(n/2) + n       if n = 2^k, for k > 1,

is T(n) = n lg n.

End example
Exercises 2.3-4
Start example

Insertion sort can be expressed as a recursive procedure as follows. In order to sort A[1 .. n], we recursively sort A[1 .. n - 1] and then insert A[n] into the sorted array A[1 .. n - 1]. Write a recurrence for the running time of this recursive version of insertion sort.

End example
Exercises 2.3-5
Start example

Referring back to the searching problem (see Exercise 2.1-3), observe that if the sequence A is sorted, we can check the midpoint of the sequence against v and eliminate half of the sequence from further consideration. Binary search is an algorithm that repeats this procedure, halving the size of the remaining portion of the sequence each time. Write pseudocode, either iterative or recursive, for binary search. Argue that the worst-case running time of binary search is Θ(lg n).

End example
Exercises 2.3-6
Start example

Observe that the while loop of lines 5-7 of the INSERTION-SORT procedure in Section 2.1 uses a linear search to scan (backward) through the sorted subarray A[1 .. j - 1]. Can we use a binary search (see Exercise 2.3-5) instead to improve the overall worst-case running time of insertion sort to Θ(n lg n)?

End example
Exercises 2.3-7:
Start example

Describe a Θ(n lg n)-time algorithm that, given a set S of n integers and another integer x, determines whether or not there exist two elements in S whose sum is exactly x.

End example
Problems 2-1: Insertion sort on small arrays in merge sort
Start example

Although merge sort runs in Θ(n lg n) worst-case time and insertion sort runs in Θ(n2) worst-case time, the constant factors in insertion sort make it faster for small n. Thus, it makes sense to use insertion sort within merge sort when subproblems become sufficiently small. Consider a modification to merge sort in which n/k sublists of length k are sorted using insertion sort and then merged using the standard merging mechanism, where k is a value to be determined.

  1. Show that the n/k sublists, each of length k, can be sorted by insertion sort in Θ(nk) worst-case time.

  2. Show that the sublists can be merged in Θ(n lg(n/k)) worst-case time.

  3. Given that the modified algorithm runs in Θ(nk + n lg(n/k)) worst-case time, what is the largest asymptotic (Θ-notation) value of k as a function of n for which the modified algorithm has the same asymptotic running time as standard merge sort?

  4. How should k be chosen in practice?

End example
Problems 2-2: Correctness of bubblesort
Start example

Bubblesort is a popular sorting algorithm. It works by repeatedly swapping adjacent elements that are out of order.

BUBBLESORT(A)
1  for i ← 1 to length[A]
2      do for j ← length[A] downto i + 1
3             do if A[j] < A[j - 1]
4                   then exchange A[j] ↔ A[j - 1]
  a. Let A′ denote the output of BUBBLESORT(A). To prove that BUBBLESORT is correct, we need to prove that it terminates and that

    (2.3)  A′[1] ≤ A′[2] ≤ · · · ≤ A′[n],

    where n = length[A]. What else must be proved to show that BUBBLESORT actually sorts?

The next two parts will prove inequality (2.3).

  b. State precisely a loop invariant for the for loop in lines 2-4, and prove that this loop invariant holds. Your proof should use the structure of the loop invariant proof presented in this chapter.

  c. Using the termination condition of the loop invariant proved in part (b), state a loop invariant for the for loop in lines 1-4 that will allow you to prove inequality (2.3). Your proof should use the structure of the loop invariant proof presented in this chapter.

  d. What is the worst-case running time of bubblesort? How does it compare to the running time of insertion sort?

End example
Problems 2-3: Correctness of Horner's rule
Start example

The following code fragment implements Horner's rule for evaluating a polynomial

P(x) = Σ_{k=0}^{n} a_k x^k
     = a_0 + x(a_1 + x(a_2 + · · · + x(a_{n-1} + x·a_n) · · ·)),

given the coefficients a_0, a_1, . . . , a_n and a value for x:

1  y ← 0
2  i ← n
3  while i ≥ 0
4      do y ← a_i + x · y
5         i ← i - 1
  1. What is the asymptotic running time of this code fragment for Horner's rule?

  2. Write pseudocode to implement the naive polynomial-evaluation algorithm that computes each term of the polynomial from scratch. What is the running time of this algorithm? How does it compare to Horner's rule?

  3. Prove that the following is a loop invariant for the while loop in lines 3-5.

    At the start of each iteration of the while loop of lines 3-5,

    y = Σ_{k=0}^{n-(i+1)} a_{k+i+1} x^k.

    Interpret a summation with no terms as equaling 0. Your proof should follow the structure of the loop invariant proof presented in this chapter and should show that, at termination, y = Σ_{k=0}^{n} a_k x^k.

  4. Conclude by arguing that the given code fragment correctly evaluates a polynomial characterized by the coefficients a0, a1, . . . , an.

End example
Problems 2-4: Inversions
Start example

Let A[1 .. n] be an array of n distinct numbers. If i < j and A[i] > A[j], then the pair (i, j) is called an inversion of A.

  1. List the five inversions of the array ⟨2, 3, 8, 6, 1⟩.

  2. What array with elements from the set {1, 2, . . . , n} has the most inversions? How many does it have?

  3. What is the relationship between the running time of insertion sort and the number of inversions in the input array? Justify your answer.

  4. Give an algorithm that determines the number of inversions in any permutation on n elements in Θ(n lg n) worst-case time. (Hint: Modify merge sort.)

End example

[6]We shall see in Chapter 3 how to formally interpret equations containing Θ-notation.

[7]The expression ⌈x⌉ denotes the least integer greater than or equal to x, and ⌊x⌋ denotes the greatest integer less than or equal to x. These notations are defined in Chapter 3. The easiest way to verify that setting q to ⌊(p + r)/2⌋ yields subarrays A[p .. q] and A[q + 1 .. r] of sizes ⌈n/2⌉ and ⌊n/2⌋, respectively, is to examine the four cases that arise depending on whether each of p and r is odd or even.

[8]It is unlikely that the same constant exactly represents both the time to solve problems of size 1 and the time per array element of the divide and combine steps. We can get around this problem by letting c be the larger of these times and understanding that our recurrence gives an upper bound on the running time, or by letting c be the lesser of these times and understanding that our recurrence gives a lower bound on the running time. Both bounds will be on the order of n lg n and, taken together, give a Θ(n lg n) running time.

Chapter notes

In 1968, Knuth published the first of three volumes with the general title The Art of Computer Programming [182, 183, 185]. The first volume ushered in the modern study of computer algorithms with a focus on the analysis of running time, and the full series remains an engaging and worthwhile reference for many of the topics presented here. According to Knuth, the word "algorithm" is derived from the name "al-Khowârizmî," a ninth-century Persian mathematician.

Aho, Hopcroft, and Ullman [5] advocated the asymptotic analysis of algorithms as a means of comparing relative performance. They also popularized the use of recurrence relations to describe the running times of recursive algorithms.

Knuth [185] provides an encyclopedic treatment of many sorting algorithms. His comparison of sorting algorithms (page 381) includes exact step-counting analyses, like the one we performed here for insertion sort. Knuth's discussion of insertion sort encompasses several variations of the algorithm. The most important of these is Shell's sort, introduced by D. L. Shell, which uses insertion sort on periodic subsequences of the input to produce a faster sorting algorithm.

Merge sort is also described by Knuth. He mentions that a mechanical collator capable of merging two decks of punched cards in a single pass was invented in 1938. J. von Neumann, one of the pioneers of computer science, apparently wrote a program for merge sort on the EDVAC computer in 1945.

The early history of proving programs correct is described by Gries [133], who credits P. Naur with the first article in this field. Gries attributes loop invariants to R. W. Floyd. The textbook by Mitchell [222] describes more recent progress in proving programs correct.

 

Chapter 3: Growth of Functions

Overview

The order of growth of the running time of an algorithm, defined in Chapter 2, gives a simple characterization of the algorithm's efficiency and also allows us to compare the relative performance of alternative algorithms. Once the input size n becomes large enough, merge sort, with its Θ(n lg n) worst-case running time, beats insertion sort, whose worst-case running time is Θ(n2). Although we can sometimes determine the exact running time of an algorithm, as we did for insertion sort in Chapter 2, the extra precision is not usually worth the effort of computing it. For large enough inputs, the multiplicative constants and lower-order terms of an exact running time are dominated by the effects of the input size itself.

When we look at input sizes large enough to make only the order of growth of the running time relevant, we are studying the asymptotic efficiency of algorithms. That is, we are concerned with how the running time of an algorithm increases with the size of the input in the limit, as the size of the input increases without bound. Usually, an algorithm that is asymptotically more efficient will be the best choice for all but very small inputs.

This chapter gives several standard methods for simplifying the asymptotic analysis of algorithms. The next section begins by defining several types of "asymptotic notation," of which we have already seen an example in Θ-notation. Several notational conventions used throughout this book are then presented, and finally we review the behavior of functions that commonly arise in the analysis of algorithms.

3.1 Asymptotic notation

The notations we use to describe the asymptotic running time of an algorithm are defined in terms of functions whose domains are the set of natural numbers N = {0, 1, 2, ...}. Such notations are convenient for describing the worst-case running-time function T (n), which is usually defined only on integer input sizes. It is sometimes convenient, however, to abuse asymptotic notation in a variety of ways. For example, the notation is easily extended to the domain of real numbers or, alternatively, restricted to a subset of the natural numbers. It is important, however, to understand the precise meaning of the notation so that when it is abused, it is not misused. This section defines the basic asymptotic notations and also introduces some common abuses.

Θ-notation

In Chapter 2, we found that the worst-case running time of insertion sort is T (n) = Θ(n2). Let us define what this notation means. For a given function g(n), we denote by Θ(g(n)) the set of functions

Θ(g(n)) = {f(n) : there exist positive constants c1, c2, and n0 such that 0 ≤ c1g(n) ≤ f(n) ≤ c2g(n) for all n ≥ n0}.[1]

A function f(n) belongs to the set Θ(g(n)) if there exist positive constants c1 and c2 such that it can be "sandwiched" between c1g(n) and c2g(n), for sufficiently large n. Because Θ(g(n)) is a set, we could write "f(n) ∈ Θ(g(n))" to indicate that f(n) is a member of Θ(g(n)). Instead, we will usually write "f(n) = Θ(g(n))" to express the same notion. This abuse of equality to denote set membership may at first appear confusing, but we shall see later in this section that it has advantages.

Figure 3.1(a) gives an intuitive picture of functions f(n) and g(n), where we have that f(n) = Θ(g(n)). For all values of n to the right of n0, the value of f(n) lies at or above c1g(n) and at or below c2g(n). In other words, for all n ≥ n0, the function f(n) is equal to g(n) to within a constant factor. We say that g(n) is an asymptotically tight bound for f(n).

Figure 3.1: Graphic examples of the Θ, O, and Ω notations. In each part, the value of n0 shown is the minimum possible value; any greater value would also work. (a) Θ-notation bounds a function to within constant factors. We write f(n) = Θ(g(n)) if there exist positive constants n0, c1, and c2 such that to the right of n0, the value of f(n) always lies between c1g(n) and c2g(n) inclusive. (b) O-notation gives an upper bound for a function to within a constant factor. We write f(n) = O(g(n)) if there are positive constants n0 and c such that to the right of n0, the value of f(n) always lies on or below cg(n). (c) Ω-notation gives a lower bound for a function to within a constant factor. We write f(n) = Ω(g(n)) if there are positive constants n0 and c such that to the right of n0, the value of f(n) always lies on or above cg(n).

The definition of Θ(g(n)) requires that every member f(n) ∈ Θ(g(n)) be asymptotically nonnegative, that is, that f(n) be nonnegative whenever n is sufficiently large. (An asymptotically positive function is one that is positive for all sufficiently large n.) Consequently, the function g(n) itself must be asymptotically nonnegative, or else the set Θ(g(n)) is empty. We shall therefore assume that every function used within Θ-notation is asymptotically nonnegative. This assumption holds for the other asymptotic notations defined in this chapter as well.

In Chapter 2, we introduced an informal notion of Θ-notation that amounted to throwing away lower-order terms and ignoring the leading coefficient of the highest-order term. Let us briefly justify this intuition by using the formal definition to show that (1/2)n^2 - 3n = Θ(n^2). To do so, we must determine positive constants c1, c2, and n0 such that

c1 n^2 ≤ (1/2)n^2 - 3n ≤ c2 n^2

for all n ≥ n0. Dividing by n^2 yields

c1 ≤ 1/2 - 3/n ≤ c2.

The right-hand inequality can be made to hold for any value of n ≥ 1 by choosing c2 ≥ 1/2. Likewise, the left-hand inequality can be made to hold for any value of n ≥ 7 by choosing c1 ≤ 1/14. Thus, by choosing c1 = 1/14, c2 = 1/2, and n0 = 7, we can verify that (1/2)n^2 - 3n = Θ(n^2). Certainly, other choices for the constants exist, but the important thing is that some choice exists. Note that these constants depend on the function (1/2)n^2 - 3n; a different function belonging to Θ(n^2) would usually require different constants.
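
As a quick numerical sanity check (a minimal Python sketch, using the constants c1 = 1/14, c2 = 1/2, and n0 = 7 chosen above), one can verify the sandwich inequalities directly:

# Check that c1*n^2 <= n^2/2 - 3n <= c2*n^2 for all n >= n0,
# with the constants chosen in the text: c1 = 1/14, c2 = 1/2, n0 = 7.
c1, c2, n0 = 1/14, 1/2, 7

def f(n):
    return n * n / 2 - 3 * n      # the function being sandwiched

for n in range(n0, 10000):
    assert c1 * n * n <= f(n) <= c2 * n * n, n

print("c1*n^2 <= n^2/2 - 3n <= c2*n^2 holds for 7 <= n < 10000")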

We can also use the formal definition to verify that 6n^3 ≠ Θ(n^2). Suppose for the purpose of contradiction that c2 and n0 exist such that 6n^3 ≤ c2 n^2 for all n ≥ n0. But then n ≤ c2/6, which cannot possibly hold for arbitrarily large n, since c2 is constant.

Intuitively, the lower-order terms of an asymptotically positive function can be ignored in determining asymptotically tight bounds because they are insignificant for large n. A tiny fraction of the highest-order term is enough to dominate the lower-order terms. Thus, setting c1 to a value that is slightly smaller than the coefficient of the highest-order term and setting c2 to a value that is slightly larger permits the inequalities in the definition of Θ-notation to be satisfied. The coefficient of the highest-order term can likewise be ignored, since it only changes c1 and c2 by a constant factor equal to the coefficient.

As an example, consider any quadratic function f(n) = an^2 + bn + c, where a, b, and c are constants and a > 0. Throwing away the lower-order terms and ignoring the constant yields f(n) = Θ(n^2). Formally, to show the same thing, we take the constants c1 = a/4, c2 = 7a/4, and n0 = 2 max(|b|/a, √(|c|/a)). The reader may verify that 0 ≤ c1 n^2 ≤ an^2 + bn + c ≤ c2 n^2 for all n ≥ n0. In general, for any polynomial p(n) = Σ (i = 0 to d) a_i n^i, where the a_i are constants and a_d > 0, we have p(n) = Θ(n^d) (see Problem 3-1).

Since any constant is a degree-0 polynomial, we can express any constant function as Θ(n^0), or Θ(1). This latter notation is a minor abuse, however, because it is not clear what variable is tending to infinity.[2] We shall often use the notation Θ(1) to mean either a constant or a constant function with respect to some variable.

O-notation

The Θ-notation asymptotically bounds a function from above and below. When we have only an asymptotic upper bound, we use O-notation. For a given function g(n), we denote by O(g(n)) (pronounced "big-oh of g of n" or sometimes just "oh of g of n") the set of functions

O(g(n)) = {f(n): there exist positive constants c and n0 such that 0 ≤ f(n) ≤ cg(n) for all n ≥ n0}.

We use O-notation to give an upper bound on a function, to within a constant factor. Figure 3.1(b) shows the intuition behind O-notation. For all values n to the right of n0, the value of the function f(n) is on or below cg(n).

We write f(n) = O(g(n)) to indicate that a function f(n) is a member of the set O(g(n)). Note that f(n) = Θ(g(n)) implies f(n) = O(g(n)), since Θ-notation is a stronger notion than O-notation. Written set-theoretically, we have Θ(g(n)) ⊆ O(g(n)). Thus, our proof that any quadratic function an^2 + bn + c, where a > 0, is in Θ(n^2) also shows that any quadratic function is in O(n^2). What may be more surprising is that any linear function an + b is in O(n^2), which is easily verified by taking c = a + |b| and n0 = 1.

Some readers who have seen O-notation before may find it strange that we should write, for example, n = O(n2). In the literature, O-notation is sometimes used informally to describe asymptotically tight bounds, that is, what we have defined using Θ-notation. In this book, however, when we write f(n) = O(g(n)), we are merely claiming that some constant multiple of g(n) is an asymptotic upper bound on f(n), with no claim about how tight an upper bound it is. Distinguishing asymptotic upper bounds from asymptotically tight bounds has now become standard in the algorithms literature.

Using O-notation, we can often describe the running time of an algorithm merely by inspecting the algorithm's overall structure. For example, the doubly nested loop structure of the insertion sort algorithm from Chapter 2 immediately yields an O(n2) upper bound on the worst-case running time: the cost of each iteration of the inner loop is bounded from above by O(1) (constant), the indices i and j are both at most n, and the inner loop is executed at most once for each of the n2 pairs of values for i and j.
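
To make the counting argument concrete, the following Python sketch (an instrumented toy version of insertion sort, not the book's pseudocode) counts the inner-loop iterations and compares the count against n^2; even on a reverse-sorted worst-case input the count stays below n^2:

def insertion_sort_count(a):
    """Sort a in place and return the number of inner-loop iterations."""
    iterations = 0
    for j in range(1, len(a)):
        key = a[j]
        i = j - 1
        while i >= 0 and a[i] > key:
            a[i + 1] = a[i]
            i -= 1
            iterations += 1
        a[i + 1] = key
    return iterations

for n in (100, 200, 400, 800):
    worst = list(range(n, 0, -1))          # reverse-sorted input
    print(n, insertion_sort_count(worst), n * n)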

Since O-notation describes an upper bound, when we use it to bound the worst-case running time of an algorithm, we have a bound on the running time of the algorithm on every input. Thus, the O(n2) bound on worst-case running time of insertion sort also applies to its running time on every input. The Θ(n2) bound on the worst-case running time of insertion sort, however, does not imply a Θ(n2) bound on the running time of insertion sort on every input. For example, we saw in Chapter 2 that when the input is already sorted, insertion sort runs in Θ(n) time.

Technically, it is an abuse to say that the running time of insertion sort is O(n2), since for a given n, the actual running time varies, depending on the particular input of size n. When we say "the running time is O(n2)," we mean that there is a function f(n) that is O(n2) such that for any value of n, no matter what particular input of size n is chosen, the running time on that input is bounded from above by the value f(n). Equivalently, we mean that the worst-case running time is O(n2).

Ω-notation

Just as O-notation provides an asymptotic upper bound on a function, Ω-notation provides an asymptotic lower bound. For a given function g(n), we denote by Ω(g(n)) (pronounced "big-omega of g of n" or sometimes just "omega of g of n") the set of functions

Ω(g(n)) = {f(n): there exist positive constants c and n0 such that 0 ≤ cg(n) ≤ f(n) for all n ≥ n0}.

The intuition behind Ω-notation is shown in Figure 3.1(c). For all values n to the right of n0, the value of f(n) is on or above cg(n).

From the definitions of the asymptotic notations we have seen thus far, it is easy to prove the following important theorem (see Exercise 3.1-5).

Theorem 3.1
Start example

For any two functions f(n) and g(n), we have f(n) = Θ(g(n)) if and only if f(n) = O(g(n)) and f(n) = Ω(g(n)).

End example

As an example of the application of this theorem, our proof that an^2 + bn + c = Θ(n^2) for any constants a, b, and c, where a > 0, immediately implies that an^2 + bn + c = Ω(n^2) and an^2 + bn + c = O(n^2). In practice, rather than using Theorem 3.1 to obtain asymptotic upper and lower bounds from asymptotically tight bounds, as we did for this example, we usually use it to prove asymptotically tight bounds from asymptotic upper and lower bounds.

Since Ω-notation describes a lower bound, when we use it to bound the best-case running time of an algorithm, by implication we also bound the running time of the algorithm on arbitrary inputs as well. For example, the best-case running time of insertion sort is Ω(n), which implies that the running time of insertion sort is Ω(n).

The running time of insertion sort therefore falls between Ω(n) and O(n^2), since it falls anywhere between a linear function of n and a quadratic function of n. Moreover, these bounds are asymptotically as tight as possible: for instance, the running time of insertion sort is not Ω(n^2), since there exists an input for which insertion sort runs in Θ(n) time (e.g., when the input is already sorted). It is not contradictory, however, to say that the worst-case running time of insertion sort is Ω(n^2), since there exists an input that causes the algorithm to take Ω(n^2) time. When we say that the running time (no modifier) of an algorithm is Ω(g(n)), we mean that no matter what particular input of size n is chosen for each value of n, the running time on that input is at least a constant times g(n), for sufficiently large n.

Asymptotic notation in equations and inequalities

We have already seen how asymptotic notation can be used within mathematical formulas. For example, in introducing O-notation, we wrote "n = O(n2)." We might also write 2n2 + 3n + 1 = 2n2 + Θ(n). How do we interpret such formulas?

When the asymptotic notation stands alone on the right-hand side of an equation (or inequality), as in n = O(n^2), we have already defined the equal sign to mean set membership: n ∈ O(n^2). In general, however, when asymptotic notation appears in a formula, we interpret it as standing for some anonymous function that we do not care to name. For example, the formula 2n^2 + 3n + 1 = 2n^2 + Θ(n) means that 2n^2 + 3n + 1 = 2n^2 + f(n), where f(n) is some function in the set Θ(n). In this case, f(n) = 3n + 1, which indeed is in Θ(n).

Using asymptotic notation in this manner can help eliminate inessential detail and clutter in an equation. For example, in Chapter 2 we expressed the worst-case running time of merge sort as the recurrence

T(n) = 2T (n/2) + Θ(n).

If we are interested only in the asymptotic behavior of T(n), there is no point in specifying all the lower-order terms exactly; they are all understood to be included in the anonymous function denoted by the term Θ(n).

The number of anonymous functions in an expression is understood to be equal to the number of times the asymptotic notation appears. For example, in the expression

Σ (i = 1 to n) O(i)

there is only a single anonymous function (a function of i). This expression is thus not the same as O(1) + O(2) + ··· + O(n), which doesn't really have a clean interpretation.

In some cases, asymptotic notation appears on the left-hand side of an equation, as in

2n2 + Θ(n) = Θ(n2).

We interpret such equations using the following rule: No matter how the anonymous functions are chosen on the left of the equal sign, there is a way to choose the anonymous functions on the right of the equal sign to make the equation valid. Thus, the meaning of our example is that for any function f(n) ∈ Θ(n), there is some function g(n) ∈ Θ(n^2) such that 2n^2 + f(n) = g(n) for all n. In other words, the right-hand side of an equation provides a coarser level of detail than the left-hand side.

A number of such relationships can be chained together, as in

2n^2 + 3n + 1 = 2n^2 + Θ(n)
             = Θ(n^2).

We can interpret each equation separately by the rule above. The first equation says that there is some function f(n) ∈ Θ(n) such that 2n^2 + 3n + 1 = 2n^2 + f(n) for all n. The second equation says that for any function g(n) ∈ Θ(n) (such as the f(n) just mentioned), there is some function h(n) ∈ Θ(n^2) such that 2n^2 + g(n) = h(n) for all n. Note that this interpretation implies that 2n^2 + 3n + 1 = Θ(n^2), which is what the chaining of equations intuitively gives us.

o-notation

The asymptotic upper bound provided by O-notation may or may not be asymptotically tight. The bound 2n2 = O(n2) is asymptotically tight, but the bound 2n = O(n2) is not. We use o-notation to denote an upper bound that is not asymptotically tight. We formally define o(g(n)) ("little-oh of g of n") as the set

o(g(n)) = {f(n) : for any positive constant c > 0, there exists a constant n0 > 0 such that 0 ≤ f(n) < cg(n) for all n ≥ n0}.

For example, 2n = o(n^2), but 2n^2 ∉ o(n^2).

The definitions of O-notation and o-notation are similar. The main difference is that in f(n) = O(g(n)), the bound 0 ≤ f(n) ≤ cg(n) holds for some constant c > 0, but in f(n) = o(g(n)), the bound 0 ≤ f(n) < cg(n) holds for all constants c > 0. Intuitively, in the o-notation, the function f(n) becomes insignificant relative to g(n) as n approaches infinity; that is,

(3.1)  lim (n→∞) f(n)/g(n) = 0.

Some authors use this limit as a definition of the o-notation; the definition in this book also restricts the anonymous functions to be asymptotically nonnegative.

ω-notation

By analogy, ω-notation is to Ω-notation as o-notation is to O-notation. We use ω-notation to denote a lower bound that is not asymptotically tight. One way to define it is by

f(n) ∈ ω(g(n)) if and only if g(n) ∈ o(f(n)).

Formally, however, we define ω(g(n)) ("little-omega of g of n") as the set

ω(g(n)) = {f(n): for any positive constant c > 0, there exists a constant n0 > 0 such that 0 ≤ cg(n) < f(n) for all n ≥ n0}.

For example, n^2/2 = ω(n), but n^2/2 ∉ ω(n^2). The relation f(n) = ω(g(n)) implies that

lim (n→∞) f(n)/g(n) = ∞,

if the limit exists. That is, f(n) becomes arbitrarily large relative to g(n) as n approaches infinity.

Comparison of functions

Many of the relational properties of real numbers apply to asymptotic comparisons as well. For the following, assume that f(n) and g(n) are asymptotically positive.

Transitivity:

  •  f(n) = Θ(g(n)) and g(n) = Θ(h(n)) imply f(n) = Θ(h(n)),
     f(n) = O(g(n)) and g(n) = O(h(n)) imply f(n) = O(h(n)),
     f(n) = Ω(g(n)) and g(n) = Ω(h(n)) imply f(n) = Ω(h(n)),
     f(n) = o(g(n)) and g(n) = o(h(n)) imply f(n) = o(h(n)),
     f(n) = ω(g(n)) and g(n) = ω(h(n)) imply f(n) = ω(h(n)).

Reflexivity:

  •  f(n) = Θ(f(n)),
     f(n) = O(f(n)),
     f(n) = Ω(f(n)).

Symmetry:

f(n) = Θ(g(n)) if and only if g(n) = Θ(f(n)).

Transpose symmetry:

  •  f(n) = O(g(n)) if and only if g(n) = Ω(f(n)),
     f(n) = o(g(n)) if and only if g(n) = ω(f(n)).

Because these properties hold for asymptotic notations, one can draw an analogy between the asymptotic comparison of two functions f and g and the comparison of two real numbers a and b:

f(n) = O(g(n)) corresponds to a ≤ b,
f(n) = Ω(g(n)) corresponds to a ≥ b,
f(n) = Θ(g(n)) corresponds to a = b,
f(n) = o(g(n)) corresponds to a < b,
f(n) = ω(g(n)) corresponds to a > b.

We say that f(n) is asymptotically smaller than g(n) if f(n) = o(g(n)), and f(n) is asymptotically larger than g(n) if f(n) = ω(g(n)).

One property of real numbers, however, does not carry over to asymptotic notation:

  • Trichotomy: For any two real numbers a and b, exactly one of the following must hold: a < b, a = b, or a > b.

Although any two real numbers can be compared, not all functions are asymptotically comparable. That is, for two functions f(n) and g(n), it may be the case that neither f(n) = O(g(n)) nor f(n) = Ω(g(n)) holds. For example, the functions n and n^(1+sin n) cannot be compared using asymptotic notation, since the value of the exponent in n^(1+sin n) oscillates between 0 and 2, taking on all values in between.
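
A quick numeric illustration of this incomparability (a sketch, not a proof): sampling the exponent 1 + sin n shows that it keeps dipping near 0 and climbing near 2, so n^(1+sin n) is infinitely often far below n and infinitely often far above n, ruling out both an O and an Ω relationship.

import math

below, above = [], []
for n in range(1, 100000):
    e = 1 + math.sin(n)        # the oscillating exponent of n^(1+sin n)
    if e < 0.1:
        below.append(n)        # here n**(1+sin n) is much smaller than n
    elif e > 1.9:
        above.append(n)        # here n**(1+sin n) is much larger than n
print("exponent < 0.1 at n =", below[:5], "and > 1.9 at n =", above[:5])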

Exercises 3.1-1
Start example

Let f(n) and g(n) be asymptotically nonnegative functions. Using the basic definition of Θ-notation, prove that max(f(n), g(n)) = Θ(f(n) + g(n)).

End example
Exercises 3.1-2
Start example

Show that for any real constants a and b, where b > 0,

(3.2)  (n + a)^b = Θ(n^b).

End example
Exercises 3.1-3
Start example

Explain why the statement, "The running time of algorithm A is at least O(n2)," is meaningless.

End example
Exercises 3.1-4
Start example

Is 2^(n+1) = O(2^n)? Is 2^(2n) = O(2^n)?

End example
Exercises 3.1-5
Start example

Prove Theorem 3.1.

End example
Exercises 3.1-6
Start example

Prove that the running time of an algorithm is Θ(g(n)) if and only if its worst-case running time is O(g(n)) and its best-case running time is Ω(g(n)).

End example
Exercises 3.1-7
Start example

Prove that o(g(n)) ∩ ω(g(n)) is the empty set.

End example
Exercises 3.1-8
Start example

We can extend our notation to the case of two parameters n and m that can go to infinity independently at different rates. For a given function g(n, m), we denote by O(g(n, m)) the set of functions

O(g(n, m)) = {f(n, m): there exist positive constants c, n0, and m0 such that 0 ≤ f(n, m) ≤ cg(n, m) for all n ≥ n0 and m ≥ m0}.

Give corresponding definitions for Ω(g(n, m)) and Θ(g(n, m)).

End example

[1]Within set notation, a colon should be read as "such that."

[2]The real problem is that our ordinary notation for functions does not distinguish functions from values. In λ-calculus, the parameters to a function are clearly specified: the function n2 could be written as λn.n2, or even λr.r2. Adopting a more rigorous notation, however, would complicate algebraic manipulations, and so we choose to tolerate the abuse.

 

3.2 Standard notations and common functions

This section reviews some standard mathematical functions and notations and explores the relationships among them. It also illustrates the use of the asymptotic notations.

Monotonicity

A function f(n) is monotonically increasing if m ≤ n implies f(m) ≤ f(n). Similarly, it is monotonically decreasing if m ≤ n implies f(m) ≥ f(n). A function f(n) is strictly increasing if m < n implies f(m) < f(n) and strictly decreasing if m < n implies f(m) > f(n).

Floors and ceilings

For any real number x, we denote the greatest integer less than or equal to x by ⌊x⌋ (read "the floor of x") and the least integer greater than or equal to x by ⌈x⌉ (read "the ceiling of x"). For all real x,

(3.3)  x - 1 < ⌊x⌋ ≤ x ≤ ⌈x⌉ < x + 1.

For any integer n,

⌈n/2⌉ + ⌊n/2⌋ = n,

and for any real number n ≥ 0 and integers a, b > 0,

(3.4)  ⌈⌈n/a⌉ / b⌉ = ⌈n/(ab)⌉,
(3.5)  ⌊⌊n/a⌋ / b⌋ = ⌊n/(ab)⌋,
(3.6)  ⌈a/b⌉ ≤ (a + (b - 1)) / b,
(3.7)  ⌊a/b⌋ ≥ (a - (b - 1)) / b.

The floor function f(x) = ⌊x⌋ is monotonically increasing, as is the ceiling function f(x) = ⌈x⌉.

Modular arithmetic

For any integer a and any positive integer n, the value a mod n is the remainder (or residue) of the quotient a/n:

(3.8)  a mod n = a - ⌊a/n⌋ n.

Given a well-defined notion of the remainder of one integer when divided by another, it is convenient to provide special notation to indicate equality of remainders. If (a mod n) = (b mod n), we write a ≡ b (mod n) and say that a is equivalent to b, modulo n. In other words, a ≡ b (mod n) if a and b have the same remainder when divided by n. Equivalently, a ≡ b (mod n) if and only if n is a divisor of b - a. We write a ≢ b (mod n) if a is not equivalent to b, modulo n.
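
Both characterizations of equivalence modulo n (equal remainders, and n dividing b - a) can be checked mechanically; a minimal Python sketch:

def equivalent_mod(a, b, n):
    # a ≡ b (mod n): a and b leave the same remainder when divided by n.
    return a % n == b % n

# Alternative characterization: a ≡ b (mod n) if and only if n divides b - a.
for a in range(-50, 50):
    for b in range(-50, 50):
        for n in range(1, 20):
            assert equivalent_mod(a, b, n) == ((b - a) % n == 0)
print("the two characterizations agree on the tested range")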

Polynomials

Given a nonnegative integer d, a polynomial in n of degree d is a function p(n) of the form

p(n) = Σ (i = 0 to d) a_i n^i,

where the constants a0, a1, ..., ad are the coefficients of the polynomial and ad ≠ 0. A polynomial is asymptotically positive if and only if ad > 0. For an asymptotically positive polynomial p(n) of degree d, we have p(n) = Θ(n^d). For any real constant a ≥ 0, the function n^a is monotonically increasing, and for any real constant a ≤ 0, the function n^a is monotonically decreasing. We say that a function f(n) is polynomially bounded if f(n) = O(n^k) for some constant k.

Exponentials

For all real a > 0, m, and n, we have the following identities:

a^0 = 1,
a^1 = a,
a^(-1) = 1/a,
(a^m)^n = a^(mn),
(a^m)^n = (a^n)^m,
a^m a^n = a^(m+n).

For all n and a ≥ 1, the function a^n is monotonically increasing in n. When convenient, we shall assume 0^0 = 1.

The rates of growth of polynomials and exponentials can be related by the following fact. For all real constants a and b such that a > 1,

(3.9)  lim (n→∞) n^b / a^n = 0,

from which we can conclude that

n^b = o(a^n).

Thus, any exponential function with a base strictly greater than 1 grows faster than any polynomial function.
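
A small numeric illustration of equation (3.9) (a sketch only, with the arbitrary choices b = 10 and a = 1.1): even an exponential whose base is barely above 1 eventually dwarfs a tenth-degree polynomial.

# The ratio n^10 / 1.1^n tends to 0, illustrating n^b = o(a^n) for a > 1.
for n in (10, 100, 500, 1000, 2000):
    print(f"n = {n:5d}   n^10 / 1.1^n = {n**10 / 1.1**n:.3e}")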

Using e to denote 2.71828..., the base of the natural logarithm function, we have for all real x,

(3.10)  e^x = 1 + x + x^2/2! + x^3/3! + ··· = Σ (i = 0 to ∞) x^i / i!,

where "!" denotes the factorial function defined later in this section. For all real x, we have the inequality

(3.11) 

where equality holds only when x = 0. When |x| ≤ 1, we have the approximation

(3.12)  1 + x ≤ e^x ≤ 1 + x + x^2.

When x → 0, the approximation of e^x by 1 + x is quite good:

e^x = 1 + x + Θ(x^2).

(In this equation, the asymptotic notation is used to describe the limiting behavior as x → 0 rather than as x → ∞.) We have for all x,

(3.13)  lim (n→∞) (1 + x/n)^n = e^x.

Logarithms

We shall use the following notations:

lg n = log_2 n  (binary logarithm),
ln n = log_e n  (natural logarithm),
lg^k n = (lg n)^k  (exponentiation),
lg lg n = lg(lg n)  (composition).

An important notational convention we shall adopt is that logarithm functions will apply only to the next term in the formula, so that lg n + k will mean (lg n) + k and not lg(n + k). If we hold b > 1 constant, then for n > 0, the function logb n is strictly increasing.

For all real a > 0, b > 0, c > 0, and n,

(3.14)  a = b^(log_b a),  log_c(ab) = log_c a + log_c b,  log_b a^n = n log_b a,  log_b a = log_c a / log_c b,
(3.15)  log_b (1/a) = -log_b a,  log_b a = 1 / (log_a b),  a^(log_b c) = c^(log_b a),

where, in each equation above, logarithm bases are not 1.

By equation (3.14), changing the base of a logarithm from one constant to another only changes the value of the logarithm by a constant factor, and so we shall often use the notation "lg n" when we don't care about constant factors, such as in O-notation. Computer scientists find 2 to be the most natural base for logarithms because so many algorithms and data structures involve splitting a problem into two parts.
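
A small sketch of that constant-factor relationship: logarithms to bases 2 and 10 differ only by the fixed multiplier 1/log10(2) ≈ 3.3219, which is why the base is irrelevant inside O-notation.

import math

factor = 1 / math.log10(2)      # the constant relating log10 to log2
for n in (10, 1000, 10**6, 10**9):
    print(n, round(math.log2(n), 4), round(math.log10(n) * factor, 4))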

There is a simple series expansion for ln(1 + x) when |x| < 1:

ln(1 + x) = x - x^2/2 + x^3/3 - x^4/4 + ···.

We also have the following inequalities for x > -1:

(3.16)  x/(1 + x) ≤ ln(1 + x) ≤ x,

where equality holds only for x = 0.

We say that a function f(n) is polylogarithmically bounded if f(n) = O(lg^k n) for some constant k. We can relate the growth of polynomials and polylogarithms by substituting lg n for n and 2^a for a in equation (3.9), yielding

lim (n→∞) lg^b n / (2^a)^(lg n) = lim (n→∞) lg^b n / n^a = 0.

From this limit, we can conclude that

lg^b n = o(n^a)

for any constant a > 0. Thus, any positive polynomial function grows faster than any polylogarithmic function.

Factorials

The notation n! (read "n factorial") is defined for integers n ≥ 0 as

n! = 1 if n = 0,
n! = n · (n - 1)! if n > 0.

Thus, n! = 1 · 2 · 3 ··· n.

A weak upper bound on the factorial function is n! ≤ n^n, since each of the n terms in the factorial product is at most n. Stirling's approximation,

(3.17)  n! = √(2πn) (n/e)^n (1 + Θ(1/n)),

where e is the base of the natural logarithm, gives us a tighter upper bound, and a lower bound as well. One can prove (see Exercise 3.2-3)

(3.18)  lg(n!) = Θ(n lg n),

where Stirling's approximation is helpful in proving equation (3.18). The following equation also holds for all n ≥ 1:

(3.19)  n! = √(2πn) (n/e)^n e^(α_n),

where

(3.20)  1/(12n + 1) < α_n < 1/(12n).
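
A quick numeric look at Stirling's approximation (a sketch only): the ratio of n! to the estimate √(2πn)(n/e)^n stays slightly above 1 and approaches 1 as n grows.

import math

for n in (1, 5, 10, 50, 100):
    stirling = math.sqrt(2 * math.pi * n) * (n / math.e) ** n
    print(n, math.factorial(n) / stirling)   # ratio tends to 1 from above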

Functional iteration

We use the notation f^(i)(n) to denote the function f(n) iteratively applied i times to an initial value of n. Formally, let f(n) be a function over the reals. For nonnegative integers i, we recursively define

f^(i)(n) = n if i = 0,
f^(i)(n) = f(f^(i-1)(n)) if i > 0.

For example, if f(n) = 2n, then f^(i)(n) = 2^i n.
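
This definition translates directly into code; a minimal Python sketch (the helper name iterate is ours, not the book's):

def iterate(f, i, n):
    """Return f^(i)(n): f applied i times to the initial value n."""
    for _ in range(i):
        n = f(n)
    return n

# Example from the text: with f(n) = 2n, f^(i)(n) = 2^i * n.
assert iterate(lambda n: 2 * n, 5, 3) == 2**5 * 3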

The iterated logarithm function

We use the notation lg* n (read "log star of n") to denote the iterated logarithm, which is defined as follows. Let lg^(i) n be as defined above, with f(n) = lg n. Because the logarithm of a nonpositive number is undefined, lg^(i) n is defined only if lg^(i-1) n > 0. Be sure to distinguish lg^(i) n (the logarithm function applied i times in succession, starting with argument n) from lg^i n (the logarithm of n raised to the ith power). The iterated logarithm function is defined as

lg* n = min {i ≥ 0 : lg^(i) n ≤ 1}.

The iterated logarithm is a very slowly growing function:

lg* 2 = 1,
lg* 4 = 2,
lg* 16 = 3,
lg* 65536 = 4,
lg* (2^65536) = 5.

Since the number of atoms in the observable universe is estimated to be about 10^80, which is much less than 2^65536, we rarely encounter an input size n such that lg* n > 5.
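
A direct implementation of the definition (a sketch; floating-point lg is accurate enough for the tiny values of lg* that ever arise in practice):

import math

def lg_star(n):
    """Iterated logarithm: how many times lg must be applied before the value drops to at most 1."""
    i = 0
    while n > 1:
        n = math.log2(n)
        i += 1
    return i

for x in (2, 4, 16, 65536, 2.0**64):
    print(x, lg_star(x))        # 1, 2, 3, 4, 5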

Fibonacci numbers

The Fibonacci numbers are defined by the following recurrence:

(3.21)  F_0 = 0,  F_1 = 1,  F_i = F_(i-1) + F_(i-2) for i ≥ 2.

Thus, each Fibonacci number is the sum of the two previous ones, yielding the sequence

0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, ... .

Fibonacci numbers are related to the golden ratio φ and to its conjugate φ̂, which are given by the following formulas:

(3.22)  φ = (1 + √5)/2 ≈ 1.61803,  φ̂ = (1 - √5)/2 ≈ -0.61803.

Specifically, we have

(3.23)  F_i = (φ^i - φ̂^i) / √5,

which can be proved by induction (Exercise 3.2-6). Since |φ̂| < 1, we have |φ̂^i| / √5 < 1/√5 < 1/2, so that the ith Fibonacci number F_i is equal to φ^i / √5 rounded to the nearest integer. Thus, Fibonacci numbers grow exponentially.
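
A quick check (a sketch) that F_i is indeed φ^i/√5 rounded to the nearest integer:

import math

phi = (1 + math.sqrt(5)) / 2          # the golden ratio

def fib(i):
    a, b = 0, 1
    for _ in range(i):
        a, b = b, a + b
    return a

for i in range(20):
    assert fib(i) == round(phi**i / math.sqrt(5)), i
print([fib(i) for i in range(11)])    # 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55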

Exercises 3.2-1
Start example

Show that if f(n) and g(n) are monotonically increasing functions, then so are the functions f(n) + g(n) and f (g(n)), and if f(n) and g(n) are in addition nonnegative, then f(n) · g(n) is monotonically increasing.

End example
Exercises 3.2-2
Exercises 3.2-3
Start example

Prove equation (3.18). Also prove that n! = ω(2^n) and n! = o(n^n).

End example
Exercises 3.2-4:
Start example

Is the function ⌈lg n⌉! polynomially bounded? Is the function ⌈lg lg n⌉! polynomially bounded?

End example
Exercises 3.2-5:
Start example

Which is asymptotically larger: lg(lg* n) or lg*(lg n)?

End example
Exercises 3.2-6
Start example

Prove by induction that the ith Fibonacci number satisfies the equality

F_i = (φ^i - φ̂^i) / √5,

where φ is the golden ratio and φ̂ is its conjugate.

End example
Exercises 3.2-7
Start example

Prove that for i ≥ 0, the (i + 2)nd Fibonacci number satisfies F_(i+2) ≥ φ^i.

End example
Problems 3-1: Asymptotic behavior of polynomials
Start example

Let

p(n) = Σ (i = 0 to d) a_i n^i,

where a_d > 0, be a degree-d polynomial in n, and let k be a constant. Use the definitions of the asymptotic notations to prove the following properties.

  1. If k ≥ d, then p(n) = O(n^k).

  2. If k ≤ d, then p(n) = Ω(n^k).

  3. If k = d, then p(n) = Θ(n^k).

  4. If k > d, then p(n) = o(n^k).

  5. If k < d, then p(n) = ω(n^k).

End example
Problems 3-2: Relative asymptotic growths
Start example

Indicate, for each pair of expressions (A, B) in the table below, whether A is O, o, Ω, ω, or Θ of B. Assume that k ≥ 1, ε > 0, and c > 1 are constants. Your answer should be in the form of the table with "yes" or "no" written in each box.

 

        A            B            O     o     Ω     ω     Θ
  a.    lg^k n       n^ε
  b.    n^k          c^n
  c.    √n           n^(sin n)
  d.    2^n          2^(n/2)
  e.    n^(lg c)     c^(lg n)
  f.    lg(n!)       lg(n^n)
End example
Problems 3-3: Ordering by asymptotic growth rates
Start example
  1. Rank the following functions by order of growth; that is, find an arrangement g1, g2, ..., g30 of the functions satisfying g1 = Ω(g2), g2 = Ω(g3), ..., g29 = Ω(g30). Partition your list into equivalence classes such that f(n) and g(n) are in the same class if and only if f(n) = Θ(g(n)).

  2. Give an example of a single nonnegative function f(n) such that for all functions gi(n) in part (a), f(n) is neither O(gi(n)) nor Ω(gi(n)).

End example
Problems 3-4: Asymptotic notation properties
Start example

Let f(n) and g(n) be asymptotically positive functions. Prove or disprove each of the following conjectures.

  1. f(n) = O(g(n)) implies g(n) = O(f(n)).

  2. f(n) + g(n) = Θ(min(f(n), g(n))).

  3. f(n) = O(g(n)) implies lg(f(n)) = O(lg(g(n))), where lg(g(n)) ≥ 1 and f(n) ≥ 1 for all sufficiently large n.

  4. f(n) = O(g(n)) implies 2f(n) = O (2g(n)).

  5. f(n) = O((f(n))2).

  6. f(n) = O(g(n)) implies g(n) = Ω(f(n)).

  7. f(n) = Θ(f(n/2)).

  8. f(n) + o( f(n)) = Θ(f(n)).

End example
Problems 3-5: Variations on O and Ω
Start example

Some authors define Ω in a slightly different way than we do; let's use Ω^∞ (read "omega infinity") for this alternative definition. We say that f(n) = Ω^∞(g(n)) if there exists a positive constant c such that f(n) ≥ cg(n) ≥ 0 for infinitely many integers n.

  1. Show that for any two functions f(n) and g(n) that are asymptotically nonnegative, either f(n) = O(g(n)) or f(n) = Ω^∞(g(n)) or both, whereas this is not true if we use Ω in place of Ω^∞.

  2. Describe the potential advantages and disadvantages of using Ω^∞ instead of Ω to characterize the running times of programs.

Some authors also define O in a slightly different manner; let's use O' for the alternative definition. We say that f(n) = O'(g(n)) if and only if |f(n)| = O(g(n)).

  1. What happens to each direction of the "if and only if" in Theorem 3.1 if we substitute O' for O but still use Ω?

Some authors define Õ (read "soft-oh") to mean O with logarithmic factors ignored:

Õ(g(n)) = {f(n): there exist positive constants c, k, and n0 such that 0 ≤ f(n) ≤ cg(n) lg^k(n) for all n ≥ n0}.

  1. Define Ω̃ (soft-omega) and Θ̃ (soft-theta) in a similar manner. Prove the corresponding analog to Theorem 3.1.

End example
Problems 3-6: Iterated functions
Start example

The iteration operator * used in the lg* function can be applied to any monotonically increasing function f(n) over the reals. For a given constant c ∈ R, we define the iterated function f*_c by

f*_c(n) = min {i ≥ 0 : f^(i)(n) ≤ c},

which need not be well-defined in all cases. In other words, the quantity f*_c(n) is the number of iterated applications of the function f required to reduce its argument down to c or less.

For each of the following functions f(n) and constants c, give as tight a bound as possible on f*_c(n).

 

        f(n)         c        f*_c(n)
  a.    n - 1        0
  b.    lg n         1
  c.    n/2          1
  d.    n/2          2
  e.    √n           2
  f.    √n           1
  g.    n^(1/3)      2
  h.    n/lg n       2

 
End example

Chapter notes

Knuth [182] traces the origin of the O-notation to a number-theory text by P. Bachmann in 1892. The o-notation was invented by E. Landau in 1909 for his discussion of the distribution of prime numbers. The Ω and Θ notations were advocated by Knuth [186] to correct the popular, but technically sloppy, practice in the literature of using O-notation for both upper and lower bounds. Many people continue to use the O-notation where the Θ-notation is more technically precise. Further discussion of the history and development of asymptotic notations can be found in Knuth [182, 186] and Brassard and Bratley [46].

Not all authors define the asymptotic notations in the same way, although the various definitions agree in most common situations. Some of the alternative definitions encompass functions that are not asymptotically nonnegative, as long as their absolute values are appropriately bounded.

Equation (3.19) is due to Robbins [260]. Other properties of elementary mathematical functions can be found in any good mathematical reference, such as Abramowitz and Stegun [1] or Zwillinger [320], or in a calculus book, such as Apostol [18] or Thomas and Finney [296]. Knuth [182] and Graham, Knuth, and Patashnik [132] contain a wealth of material on discrete mathematics as used in computer science.

 

 

 

 

Chapter 4: Recurrences

Overview

As noted in Section 2.3.2, when an algorithm contains a recursive call to itself, its running time can often be described by a recurrence. A recurrence is an equation or inequality that describes a function in terms of its value on smaller inputs. For example, we saw in Section 2.3.2 that the worst-case running time T (n) of the MERGE-SORT procedure could be described by the recurrence

(4.1)  T(n) = Θ(1) if n = 1,
       T(n) = 2T(n/2) + Θ(n) if n > 1,

whose solution was claimed to be T (n) = Θ(n lg n).

This chapter offers three methods for solving recurrences-that is, for obtaining asymptotic "Θ" or "O" bounds on the solution. In the substitution method, we guess a bound and then use mathematical induction to prove our guess correct. The recursion-tree method converts the recurrence into a tree whose nodes represent the costs incurred at various levels of the recursion; we use techniques for bounding summations to solve the recurrence. The master method provides bounds for recurrences of the form

T (n) = aT (n/b) + f (n),

where a ≥ 1, b > 1, and f (n) is a given function; it requires memorization of three cases, but once you do that, determining asymptotic bounds for many simple recurrences is easy.

Technicalities

In practice, we neglect certain technical details when we state and solve recurrences. A good example of a detail that is often glossed over is the assumption of integer arguments to functions. Normally, the running time T (n) of an algorithm is only defined when n is an integer, since for most algorithms, the size of the input is always an integer. For example, the recurrence describing the worst-case running time of MERGE-SORT is really

(4.2)  T(n) = Θ(1) if n = 1,
       T(n) = T(⌈n/2⌉) + T(⌊n/2⌋) + Θ(n) if n > 1.

Boundary conditions represent another class of details that we typically ignore. Since the running time of an algorithm on a constant-sized input is a constant, the recurrences that arise from the running times of algorithms generally have T(n) = Θ(1) for sufficiently small n. Consequently, for convenience, we shall generally omit statements of the boundary conditions of recurrences and assume that T (n) is constant for small n. For example, we normally state recurrence (4.1) as

(4.3)  T(n) = 2T(n/2) + Θ(n),

without explicitly giving values for small n. The reason is that although changing the value of T (1) changes the solution to the recurrence, the solution typically doesn't change by more than a constant factor, so the order of growth is unchanged.

When we state and solve recurrences, we often omit floors, ceilings, and boundary conditions. We forge ahead without these details and later determine whether or not they matter. They usually don't, but it is important to know when they do. Experience helps, and so do some theorems stating that these details don't affect the asymptotic bounds of many recurrences encountered in the analysis of algorithms (see Theorem 4.1). In this chapter, however, we shall address some of these details to show the fine points of recurrence solution methods.

 

4.1 The substitution method

The substitution method for solving recurrences entails two steps:

  1. Guess the form of the solution.

  2. Use mathematical induction to find the constants and show that the solution works.

The name comes from the substitution of the guessed answer for the function when the inductive hypothesis is applied to smaller values. This method is powerful, but it obviously can be applied only in cases when it is easy to guess the form of the answer.

The substitution method can be used to establish either upper or lower bounds on a recurrence. As an example, let us determine an upper bound on the recurrence

(4.4)  T(n) = 2T(⌊n/2⌋) + n,

which is similar to recurrences (4.2) and (4.3). We guess that the solution is T(n) = O(n lg n). Our method is to prove that T(n) ≤ cn lg n for an appropriate choice of the constant c > 0. We start by assuming that this bound holds for ⌊n/2⌋, that is, that T(⌊n/2⌋) ≤ c⌊n/2⌋ lg(⌊n/2⌋). Substituting into the recurrence yields

T(n) ≤ 2(c⌊n/2⌋ lg(⌊n/2⌋)) + n
     ≤ cn lg(n/2) + n
     = cn lg n - cn lg 2 + n
     = cn lg n - cn + n
     ≤ cn lg n,

where the last step holds as long as c ≥ 1.

Mathematical induction now requires us to show that our solution holds for the boundary conditions. Typically, we do so by showing that the boundary conditions are suitable as base cases for the inductive proof. For the recurrence (4.4), we must show that we can choose the constant c large enough so that the bound T(n) ≤ cn lg n works for the boundary conditions as well. This requirement can sometimes lead to problems. Let us assume, for the sake of argument, that T(1) = 1 is the sole boundary condition of the recurrence. Then for n = 1, the bound T(n) ≤ cn lg n yields T(1) ≤ c·1·lg 1 = 0, which is at odds with T(1) = 1. Consequently, the base case of our inductive proof fails to hold.

This difficulty in proving an inductive hypothesis for a specific boundary condition can be easily overcome. For example, in the recurrence (4.4), we take advantage of asymptotic notation only requiring us to prove T(n) ≤ cn lg n for n ≥ n0, where n0 is a constant of our choosing. The idea is to remove the difficult boundary condition T(1) = 1 from consideration in the inductive proof. Observe that for n > 3, the recurrence does not depend directly on T(1). Thus, we can replace T(1) by T(2) and T(3) as the base cases in the inductive proof, letting n0 = 2. Note that we make a distinction between the base case of the recurrence (n = 1) and the base cases of the inductive proof (n = 2 and n = 3). We derive from the recurrence that T(2) = 4 and T(3) = 5. The inductive proof that T(n) ≤ cn lg n for some constant c ≥ 1 can now be completed by choosing c large enough so that T(2) ≤ c·2 lg 2 and T(3) ≤ c·3 lg 3. As it turns out, any choice of c ≥ 2 suffices for the base cases of n = 2 and n = 3 to hold. For most of the recurrences we shall examine, it is straightforward to extend boundary conditions to make the inductive assumption work for small n.
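
The base-case arithmetic can also be checked numerically; a minimal Python sketch, assuming (as above) that T(1) = 1 is the sole boundary condition of recurrence (4.4), confirming that c = 2 works from n = 2 onward:

import math
from functools import lru_cache

@lru_cache(maxsize=None)
def T(n):
    # Recurrence (4.4) with boundary condition T(1) = 1.
    return 1 if n == 1 else 2 * T(n // 2) + n

c = 2
for n in range(2, 5000):
    assert T(n) <= c * n * math.log2(n), n
print("T(2) =", T(2), " T(3) =", T(3), " and T(n) <= 2 n lg n for 2 <= n < 5000")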

Making a good guess

Unfortunately, there is no general way to guess the correct solutions to recurrences. Guessing a solution takes experience and, occasionally, creativity. Fortunately, though, there are some heuristics that can help you become a good guesser. You can also use recursion trees, which we shall see in Section 4.2, to generate good guesses.

If a recurrence is similar to one you have seen before, then guessing a similar solution is reasonable. As an example, consider the recurrence

T(n) = 2T(⌊n/2⌋ + 17) + n,

which looks difficult because of the added "17" in the argument to T on the right-hand side. Intuitively, however, this additional term cannot substantially affect the solution to the recurrence. When n is large, the difference between T(⌊n/2⌋) and T(⌊n/2⌋ + 17) is not that large: both cut n nearly evenly in half. Consequently, we make the guess that T(n) = O(n lg n), which you can verify as correct by using the substitution method (see Exercise 4.1-5).

Another way to make a good guess is to prove loose upper and lower bounds on the recurrence and then reduce the range of uncertainty. For example, we might start with a lower bound of T(n) = Ω(n) for the recurrence (4.4), since we have the term n in the recurrence, and we can prove an initial upper bound of T(n) = O(n^2). Then, we can gradually lower the upper bound and raise the lower bound until we converge on the correct, asymptotically tight solution of T(n) = Θ(n lg n).

Subtleties

There are times when you can correctly guess at an asymptotic bound on the solution of a recurrence, but somehow the math doesn't seem to work out in the induction. Usually, the problem is that the inductive assumption isn't strong enough to prove the detailed bound. When you hit such a snag, revising the guess by subtracting a lower-order term often permits the math to go through.

Consider the recurrence

T(n) = T(⌈n/2⌉) + T(⌊n/2⌋) + 1.

We guess that the solution is O(n), and we try to show that T(n) ≤ cn for an appropriate choice of the constant c. Substituting our guess in the recurrence, we obtain

T(n) ≤ c⌈n/2⌉ + c⌊n/2⌋ + 1
     = cn + 1,

which does not imply T(n) ≤ cn for any choice of c. It's tempting to try a larger guess, say T(n) = O(n^2), which can be made to work, but in fact, our guess that the solution is T(n) = O(n) is correct. In order to show this, however, we must make a stronger inductive hypothesis.

Intuitively, our guess is nearly right: we're only off by the constant 1, a lower-order term. Nevertheless, mathematical induction doesn't work unless we prove the exact form of the inductive hypothesis. We overcome our difficulty by subtracting a lower-order term from our previous guess. Our new guess is T(n) ≤ cn - b, where b ≥ 0 is constant. We now have

T(n) ≤ (c⌈n/2⌉ - b) + (c⌊n/2⌋ - b) + 1
     = cn - 2b + 1
     ≤ cn - b,

as long as b ≥ 1. As before, the constant c must be chosen large enough to handle the boundary conditions.

Most people find the idea of subtracting a lower-order term counterintuitive. After all, if the math doesn't work out, shouldn't we be increasing our guess? The key to understanding this step is to remember that we are using mathematical induction: we can prove something stronger for a given value by assuming something stronger for smaller values.

Avoiding pitfalls

It is easy to err in the use of asymptotic notation. For example, in the recurrence (4.4) we can falsely "prove" T(n) = O(n) by guessing T(n) ≤ cn and then arguing

T(n) ≤ 2(c⌊n/2⌋) + n
     ≤ cn + n
     = O(n),     wrong!!

since c is a constant. The error is that we haven't proved the exact form of the inductive hypothesis, that is, that T(n) ≤ cn.

Changing variables

Sometimes, a little algebraic manipulation can make an unknown recurrence similar to one you have seen before. As an example, consider the recurrence

T(n) = 2T(⌊√n⌋) + lg n,

which looks difficult. We can simplify this recurrence, though, with a change of variables. For convenience, we shall not worry about rounding off values, such as √n, to be integers. Renaming m = lg n yields

T(2^m) = 2T(2^(m/2)) + m.

We can now rename S(m) = T(2^m) to produce the new recurrence

S(m) = 2S(m/2) + m,

which is very much like recurrence (4.4). Indeed, this new recurrence has the same solution: S(m) = O(m lg m). Changing back from S(m) to T(n), we obtain T(n) = T(2^m) = S(m) = O(m lg m) = O(lg n lg lg n).

Exercises 4.1-1
Start example

Show that the solution of T(n) = T(⌈n/2⌉) + 1 is O(lg n).

End example
Exercises 4.1-2
Start example

We saw that the solution of T(n) = 2T(⌊n/2⌋) + n is O(n lg n). Show that the solution of this recurrence is also Ω(n lg n). Conclude that the solution is Θ(n lg n).

End example
Exercises 4.1-3
Start example

Show that by making a different inductive hypothesis, we can overcome the difficulty with the boundary condition T (1) = 1 for the recurrence (4.4) without adjusting the boundary conditions for the inductive proof.

End example
Exercises 4.1-4
Start example

Show that Θ(n lg n) is the solution to the "exact" recurrence (4.2) for merge sort.

End example
Exercises 4.1-5
Start example

Show that the solution to T(n) = 2T(⌊n/2⌋ + 17) + n is O(n lg n).

End example
Exercises 4.1-6
Start example

Solve the recurrence T(n) = 2T(√n) + 1 by making a change of variables. Your solution should be asymptotically tight. Do not worry about whether values are integral.

End example

4.2 The recursion-tree method

Although the substitution method can provide a succinct proof that a solution to a recurrence is correct, it is sometimes difficult to come up with a good guess. Drawing out a recursion tree, as we did in our analysis of the merge sort recurrence in Section 2.3.2, is a straightforward way to devise a good guess. In a recursion tree, each node represents the cost of a single subproblem somewhere in the set of recursive function invocations. We sum the costs within each level of the tree to obtain a set of per-level costs, and then we sum all the per-level costs to determine the total cost of all levels of the recursion. Recursion trees are particularly useful when the recurrence describes the running time of a divide-and-conquer algorithm.

A recursion tree is best used to generate a good guess, which is then verified by the substitution method. When using a recursion tree to generate a good guess, you can often tolerate a small amount of "sloppiness," since you will be verifying your guess later on. If you are very careful when drawing out a recursion tree and summing the costs, however, you can use a recursion tree as a direct proof of a solution to a recurrence. In this section, we will use recursion trees to generate good guesses, and in Section 4.4, we will use recursion trees directly to prove the theorem that forms the basis of the master method.

For example, let us see how a recursion tree would provide a good guess for the recurrence T (n) = 3T (n/4) + Θ(n2). We start by focusing on finding an upper bound for the solution. Because we know that floors and ceilings are usually insubstantial in solving recurrences (here's an example of sloppiness that we can tolerate), we create a recursion tree for the recurrence T (n) = 3T(n/4) + cn2, having written out the implied constant coefficient c > 0.

Figure 4.1 shows the derivation of the recursion tree for T (n) = 3T (n/4) + cn2. For convenience, we assume that n is an exact power of 4 (another example of tolerable sloppiness). Part (a) of the figure shows T (n), which is expanded in part (b) into an equivalent tree representing the recurrence. The cn2 term at the root represents the cost at the top level of recursion, and the three subtrees of the root represent the costs incurred by the subproblems of size n/4. Part (c) shows this process carried one step further by expanding each node with cost T (n/4) from part (b). The cost for each of the three children of the root is c(n/4)2. We continue expanding each node in the tree by breaking it into its constituent parts as determined by the recurrence.

Figure 4.1: The construction of a recursion tree for the recurrence T(n) = 3T(n/4) + cn2. Part (a) shows T(n), which is progressively expanded in (b)-(d) to form the recursion tree. The fully expanded tree in part (d) has height log4 n (it has log4 n + 1 levels).

Because subproblem sizes decrease as we get further from the root, we eventually must reach a boundary condition. How far from the root do we reach one? The subproblem size for a node at depth i is n/4^i. Thus, the subproblem size hits n = 1 when n/4^i = 1 or, equivalently, when i = log_4 n. Thus, the tree has log_4 n + 1 levels (0, 1, 2, ..., log_4 n).

Next we determine the cost at each level of the tree. Each level has three times more nodes than the level above, and so the number of nodes at depth i is 3^i. Because subproblem sizes reduce by a factor of 4 for each level we go down from the root, each node at depth i, for i = 0, 1, 2, ..., log_4 n - 1, has a cost of c(n/4^i)^2. Multiplying, we see that the total cost over all nodes at depth i, for i = 0, 1, 2, ..., log_4 n - 1, is 3^i c(n/4^i)^2 = (3/16)^i cn^2. The last level, at depth log_4 n, has 3^(log_4 n) = n^(log_4 3) nodes, each contributing cost T(1), for a total cost of n^(log_4 3) T(1), which is Θ(n^(log_4 3)).

Now we add up the costs over all levels to determine the cost for the entire tree:

T(n) = cn^2 + (3/16) cn^2 + (3/16)^2 cn^2 + ··· + (3/16)^(log_4 n - 1) cn^2 + Θ(n^(log_4 3))
     = Σ (i = 0 to log_4 n - 1) (3/16)^i cn^2 + Θ(n^(log_4 3)).

This last formula looks somewhat messy until we realize that we can again take advantage of small amounts of sloppiness and use an infinite decreasing geometric series as an upper bound. Backing up one step and applying equation (A.6), we have

T(n) < Σ (i = 0 to ∞) (3/16)^i cn^2 + Θ(n^(log_4 3))
     = (1 / (1 - 3/16)) cn^2 + Θ(n^(log_4 3))
     = (16/13) cn^2 + Θ(n^(log_4 3))
     = O(n^2).

Thus, we have derived a guess of T(n) = O(n^2) for our original recurrence T(n) = 3T(n/4) + Θ(n^2). In this example, the coefficients of cn^2 form a decreasing geometric series and, by equation (A.6), the sum of these coefficients is bounded from above by the constant 16/13. Since the root's contribution to the total cost is cn^2, the root contributes a constant fraction of the total cost. In other words, the total cost of the tree is dominated by the cost of the root.
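
The per-level costs can be tabulated directly (a Python sketch with c = 1 and n an exact power of 4, as assumed in the figure); the level costs form the decreasing geometric series above, and the internal-node total stays below (16/13)cn^2:

n = 4**8                 # an exact power of 4
c = 1.0
levels = []
i, size = 0, n
while size > 1:          # internal-node levels i = 0, 1, ..., log4(n) - 1
    levels.append(3**i * c * size**2)     # 3^i nodes, each costing c*(n/4^i)^2
    i, size = i + 1, size // 4
print("first level costs:", [round(x) for x in levels[:4]])
print("internal total   :", sum(levels))
print("(16/13) * c * n^2:", 16 / 13 * c * n * n)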

In fact, if O(n^2) is indeed an upper bound for the recurrence (as we shall verify in a moment), then it must be a tight bound. Why? The first recursive call contributes a cost of Θ(n^2), and so Ω(n^2) must be a lower bound for the recurrence.

Now we can use the substitution method to verify that our guess was correct, that is, T(n) = O(n^2) is an upper bound for the recurrence T(n) = 3T(⌊n/4⌋) + Θ(n^2). We want to show that T(n) ≤ dn^2 for some constant d > 0. Using the same constant c > 0 as before, we have

T(n) ≤ 3T(⌊n/4⌋) + cn^2
     ≤ 3d⌊n/4⌋^2 + cn^2
     ≤ 3d(n/4)^2 + cn^2
     = (3/16) dn^2 + cn^2
     ≤ dn^2,

where the last step holds as long as d ≥ (16/13)c.

As another, more intricate example, Figure 4.2 shows the recursion tree for T (n) = T(n/3) + T(2n/3) + O(n).


Figure 4.2: A recursion tree for the recurrence T(n) = T (n/3) + T (2n/3) + cn.

(Again, we omit floor and ceiling functions for simplicity.) As before, we let c represent the constant factor in the O(n) term. When we add the values across the levels of the recursion tree, we get a value of cn for every level. The longest path from the root to a leaf is n → (2/3)n → (2/3)^2 n → ··· → 1. Since (2/3)^k n = 1 when k = log_(3/2) n, the height of the tree is log_(3/2) n.

Intuitively, we expect the solution to the recurrence to be at most the number of levels times the cost of each level, or O(cn log_(3/2) n) = O(n lg n). The total cost is evenly distributed throughout the levels of the recursion tree. There is a complication here: we have yet to consider the cost of the leaves. If this recursion tree were a complete binary tree of height log_(3/2) n, there would be 2^(log_(3/2) n) = n^(log_(3/2) 2) leaves. Since the cost of each leaf is a constant, the total cost of all leaves would then be Θ(n^(log_(3/2) 2)), which is ω(n lg n). This recursion tree is not a complete binary tree, however, and so it has fewer than n^(log_(3/2) 2) leaves. Moreover, as we go down from the root, more and more internal nodes are absent. Consequently, not all levels contribute a cost of exactly cn; levels toward the bottom contribute less. We could work out an accurate accounting of all costs, but remember that we are just trying to come up with a guess to use in the substitution method. Let us tolerate the sloppiness and attempt to show that a guess of O(n lg n) for the upper bound is correct.

Indeed, we can use the substitution method to verify that O(n lg n) is an upper bound for the solution to the recurrence. We show that T(n) ≤ dn lg n, where d is a suitable positive constant. We have

T(n) ≤ T(n/3) + T(2n/3) + cn
     ≤ d(n/3) lg(n/3) + d(2n/3) lg(2n/3) + cn
     = (d(n/3) lg n - d(n/3) lg 3) + (d(2n/3) lg n - d(2n/3) lg(3/2)) + cn
     = dn lg n - d((n/3) lg 3 + (2n/3) lg(3/2)) + cn
     = dn lg n - d((n/3) lg 3 + (2n/3) lg 3 - (2n/3) lg 2) + cn
     = dn lg n - dn(lg 3 - 2/3) + cn
     ≤ dn lg n,

as long as d ≥ c/(lg 3 - (2/3)). Thus, we did not have to perform a more accurate accounting of costs in the recursion tree.

Exercises 4.2-1
Start example

Use a recursion tree to determine a good asymptotic upper bound on the recurrence T(n) = 3T(n/2) + n. Use the substitution method to verify your answer.

End example
Exercises 4.2-2
Start example

Argue that the solution to the recurrence T(n) = T(n/3) + T(2n/3) + cn, where c is a constant, is Ω(n lg n) by appealing to a recursion tree.

End example
Exercises 4.2-3
Start example

Draw the recursion tree for T (n) = 4T (n/2)+cn, where c is a constant, and provide a tight asymptotic bound on its solution. Verify your bound by the substitution method.

End example
Exercises 4.2-4
Start example

Use a recursion tree to give an asymptotically tight solution to the recurrence T(n) = T(n - a) + T(a) + cn, where a ≥ 1 and c > 0 are constants.

End example
Exercises 4.2-5
Start example

Use a recursion tree to give an asymptotically tight solution to the recurrence T(n) = T(αn) + T((1 - α)n) + cn, where α is a constant in the range 0 <α < 1 and c > 0 is also a constant.

End example

4.3 The master method

The master method provides a "cookbook" method for solving recurrences of the form

(4.5)  T(n) = aT(n/b) + f(n),

where a ≥ 1 and b > 1 are constants and f (n) is an asymptotically positive function. The master method requires memorization of three cases, but then the solution of many recurrences can be determined quite easily, often without pencil and paper.

The recurrence (4.5) describes the running time of an algorithm that divides a problem of size n into a subproblems, each of size n/b, where a and b are positive constants. The a subproblems are solved recursively, each in time T (n/b). The cost of dividing the problem and combining the results of the subproblems is described by the function f (n). (That is, using the notation from Section 2.3.2, f(n) = D(n)+C(n).) For example, the recurrence arising from the MERGE-SORT procedure has a = 2, b = 2, and f (n) = Θ(n).

As a matter of technical correctness, the recurrence isn't actually well defined because n/b might not be an integer. Replacing each of the a terms T(n/b) with either T(⌊n/b⌋) or T(⌈n/b⌉) doesn't affect the asymptotic behavior of the recurrence, however. (We'll prove this in the next section.) We normally find it convenient, therefore, to omit the floor and ceiling functions when writing divide-and-conquer recurrences of this form.

The master theorem

The master method depends on the following theorem.

Theorem 4.1: (Master theorem)
Start example

Let a ≥ 1 and b > 1 be constants, let f(n) be a function, and let T(n) be defined on the nonnegative integers by the recurrence

T(n) = aT(n/b) + f(n),

where we interpret n/b to mean either ⌊n/b⌋ or ⌈n/b⌉. Then T(n) can be bounded asymptotically as follows.

  1. If f(n) = O(n^(log_b a - ε)) for some constant ε > 0, then T(n) = Θ(n^(log_b a)).

  2. If f(n) = Θ(n^(log_b a)), then T(n) = Θ(n^(log_b a) lg n).

  3. If f(n) = Ω(n^(log_b a + ε)) for some constant ε > 0, and if af(n/b) ≤ cf(n) for some constant c < 1 and all sufficiently large n, then T(n) = Θ(f(n)).

End example

Before applying the master theorem to some examples, let's spend a moment trying to understand what it says. In each of the three cases, we are comparing the function f(n) with the function n^(log_b a). Intuitively, the solution to the recurrence is determined by the larger of the two functions. If, as in case 1, the function n^(log_b a) is the larger, then the solution is T(n) = Θ(n^(log_b a)). If, as in case 3, the function f(n) is the larger, then the solution is T(n) = Θ(f(n)). If, as in case 2, the two functions are the same size, we multiply by a logarithmic factor, and the solution is T(n) = Θ(n^(log_b a) lg n).

Beyond this intuition, there are some technicalities that must be understood. In the first case, not only must f(n) be smaller than n^(log_b a), it must be polynomially smaller. That is, f(n) must be asymptotically smaller than n^(log_b a) by a factor of n^ε for some constant ε > 0. In the third case, not only must f(n) be larger than n^(log_b a), it must be polynomially larger and in addition satisfy the "regularity" condition that af(n/b) ≤ cf(n). This condition is satisfied by most of the polynomially bounded functions that we shall encounter.

It is important to realize that the three cases do not cover all the possibilities for f(n). There is a gap between cases 1 and 2 when f(n) is smaller than n^(log_b a) but not polynomially smaller. Similarly, there is a gap between cases 2 and 3 when f(n) is larger than n^(log_b a) but not polynomially larger. If the function f(n) falls into one of these gaps, or if the regularity condition in case 3 fails to hold, the master method cannot be used to solve the recurrence.
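
The case analysis can be mechanized for the special case of a polynomial driving function; a minimal Python sketch (the function name and its poly_degree parameter are ours, and it deliberately handles only f(n) = n^d, for which the case-3 regularity condition holds automatically since a/b^d < 1 whenever d > log_b a):

import math

def master_case(a, b, poly_degree):
    """Classify T(n) = a T(n/b) + n^poly_degree by the master theorem."""
    crit = math.log(a, b)                    # the critical exponent log_b a
    if math.isclose(poly_degree, crit):
        return f"case 2: T(n) = Theta(n^{crit:.3f} lg n)"
    if poly_degree < crit:
        return f"case 1: T(n) = Theta(n^{crit:.3f})"
    return f"case 3: T(n) = Theta(n^{poly_degree})"

print(master_case(9, 3, 1))      # T(n) = 9T(n/3) + n     -> Theta(n^2)
print(master_case(1, 1.5, 0))    # T(n) = T(2n/3) + 1     -> Theta(lg n)
print(master_case(4, 2, 2))      # T(n) = 4T(n/2) + n^2   -> Theta(n^2 lg n)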

Using the master method

To use the master method, we simply determine which case (if any) of the master theorem applies and write down the answer.

As a first example, consider

T (n) = 9T(n/3) + n.

For this recurrence, we have a = 9, b = 3, f(n) = n, and thus we have that n^(log_b a) = n^(log_3 9) = Θ(n^2). Since f(n) = O(n^(log_3 9 - ε)), where ε = 1, we can apply case 1 of the master theorem and conclude that the solution is T(n) = Θ(n^2).

Now consider

T (n) = T (2n/3) + 1,

in which a = 1, b = 3/2, f(n) = 1, and n^(log_b a) = n^(log_(3/2) 1) = n^0 = 1. Case 2 applies, since f(n) = Θ(n^(log_b a)) = Θ(1), and thus the solution to the recurrence is T(n) = Θ(lg n).

For the recurrence

T(n) = 3T(n/4) + n lg n,

we have a = 3, b = 4, f(n) = n lg n, and n^(log_b a) = n^(log_4 3) = O(n^0.793). Since f(n) = Ω(n^(log_4 3 + ε)), where ε ≈ 0.2, case 3 applies if we can show that the regularity condition holds for f(n). For sufficiently large n, af(n/b) = 3(n/4) lg(n/4) ≤ (3/4)n lg n = cf(n) for c = 3/4. Consequently, by case 3, the solution to the recurrence is T(n) = Θ(n lg n).

The master method does not apply to the recurrence

T(n) = 2T(n/2) + n lg n,

even though it has the proper form: a = 2, b = 2, f(n) = n lg n, and n^(log_b a) = n. It might seem that case 3 should apply, since f(n) = n lg n is asymptotically larger than n^(log_b a) = n. The problem is that it is not polynomially larger. The ratio f(n)/n^(log_b a) = (n lg n)/n = lg n is asymptotically less than n^ε for any positive constant ε. Consequently, the recurrence falls into the gap between case 2 and case 3. (See Exercise 4.4-2 for a solution.)

Exercises 4.3-1
Start example

Use the master method to give tight asymptotic bounds for the following recurrences.

  1. T(n) = 4T(n/2) + n.

  2. T(n) = 4T(n/2) + n^2.

  3. T(n) = 4T(n/2) + n^3.

End example
Exercises 4.3-2
Start example

The recurrence T(n) = 7T(n/2) + n^2 describes the running time of an algorithm A. A competing algorithm A′ has a running time of T′(n) = aT′(n/4) + n^2. What is the largest integer value for a such that A′ is asymptotically faster than A?

End example
Exercises 4.3-3
Start example

Use the master method to show that the solution to the binary-search recurrence T(n) = T (n/2) + Θ(1) is T(n) = Θ(lg n). (See Exercise 2.3-5 for a description of binary search.)

End example
Exercises 4.3-4
Start example

Can the master method be applied to the recurrence T(n) = 4T(n/2) + n^2 lg n? Why or why not? Give an asymptotic upper bound for this recurrence.

End example
Exercises 4.3-5:
Start example

Consider the regularity condition af(n/b) ≤ cf(n) for some constant c < 1, which is part of case 3 of the master theorem. Give an example of constants a ≥ 1 and b > 1 and a function f(n) that satisfies all the conditions in case 3 of the master theorem except the regularity condition.

End example
 

★ 4.4: Proof of the master theorem

This section contains a proof of the master theorem (Theorem 4.1). The proof need not be understood in order to apply the theorem.

The proof is in two parts. The first part analyzes the "master" recurrence (4.5), under the simplifying assumption that T(n) is defined only on exact powers of b > 1, that is, for n = 1, b, b2, .. This part gives all the intuition needed to understand why the master theorem is true. The second part shows how the analysis can be extended to all positive integers n and is merely mathematical technique applied to the problem of handling floors and ceilings.

In this section, we shall sometimes abuse our asymptotic notation slightly by using it to describe the behavior of functions that are defined only over exact powers of b. Recall that the definitions of asymptotic notations require that bounds be proved for all sufficiently large numbers, not just those that are powers of b. Since we could make new asymptotic notations that apply to the set {bi : i = 0, 1,...}, instead of the nonnegative integers, this abuse is minor.

Nevertheless, we must always be on guard when we are using asymptotic notation over a limited domain so that we do not draw improper conclusions. For example, proving that T(n) = O(n) when n is an exact power of 2 does not guarantee that T(n) = O(n). The function T(n) could be defined as

T(n) = n if n is an exact power of 2,
T(n) = n^2 otherwise,

in which case the best upper bound that can be proved is T(n) = O(n^2). Because of this sort of drastic consequence, we shall never use asymptotic notation over a limited domain without making it absolutely clear from the context that we are doing so.

4.4.1 The proof for exact powers

The first part of the proof of the master theorem analyzes the recurrence (4.5)

T (n) = aT (n/b) + f (n) ,

for the master method, under the assumption that n is an exact power of b > 1, where b need not be an integer. The analysis is broken into three lemmas. The first reduces the problem of solving the master recurrence to the problem of evaluating an expression that contains a summation. The second determines bounds on this summation. The third lemma puts the first two together to prove a version of the master theorem for the case in which n is an exact power of b.

Lemma 4.2
Start example

Let a ≥ 1 and b > 1 be constants, and let f(n) be a nonnegative function defined on exact powers of b. Define T(n) on exact powers of b by the recurrence

where i is a positive integer. Then

(4.6) 

Proof We use the recursion tree in Figure 4.3. The root of the tree has cost f(n), and it has a children, each with cost f(n/b). (It is convenient to think of a as being an integer, especially when visualizing the recursion tree, but the mathematics does not require it.) Each of these children has a children with cost f(n/b^2), and thus there are a^2 nodes that are distance 2 from the root. In general, there are a^j nodes that are distance j from the root, and each has cost f(n/b^j). The cost of each leaf is T(1) = Θ(1), and each leaf is at depth log_b n, since n/b^(log_b n) = 1. There are a^(log_b n) = n^(log_b a) leaves in the tree.

Figure 4.3: The recursion tree generated by T(n) = aT(n/b) + f(n). The tree is a complete a-ary tree with n^(log_b a) leaves and height log_b n. The cost of each level is shown at the right, and their sum is given in equation (4.6).

We can obtain equation (4.6) by summing the costs of each level of the tree, as shown in the figure. The cost of level j of internal nodes is a^j f(n/b^j), and so the total cost of all internal-node levels is

In the underlying divide-and-conquer algorithm, this sum represents the costs of dividing problems into subproblems and then recombining the subproblems. The cost of all the leaves, which is the cost of doing all subproblems of size 1, is Θ(n^(log_b a)).

End example
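
As a sanity check on equation (4.6), the recursion-tree sum can be evaluated directly for exact powers of b and compared with the recurrence itself. A small Python sketch (an illustration only, taking T(1) = 1 and the example parameters a = 2, b = 2, f(n) = n):

import math

def T(n, a, b, f):
    # the master recurrence, restricted to exact powers of b
    return 1 if n == 1 else a * T(n // b, a, b, f) + f(n)

def recursion_tree_sum(n, a, b, f):
    # equation (4.6): the leaves contribute n^(log_b a) * T(1), the internal-node levels the summation
    k = round(math.log(n, b))                        # n = b^k exactly
    leaves = a ** k                                  # a^(log_b n) = n^(log_b a) leaves, each of cost T(1) = 1
    return leaves + sum(a ** j * f(n // b ** j) for j in range(k))

a, b, f = 2, 2, (lambda n: n)                        # e.g., T(n) = 2T(n/2) + n
for k in range(1, 11):
    n = b ** k
    assert T(n, a, b, f) == recursion_tree_sum(n, a, b, f)
print("equation (4.6) matches the recurrence for n = 2, 4, ..., 1024")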

In terms of the recursion tree, the three cases of the master theorem correspond to cases in which the total cost of the tree is (1) dominated by the costs in the leaves, (2) evenly distributed across the levels of the tree, or (3) dominated by the cost of the root.

The summation in equation (4.6) describes the cost of the dividing and combining steps in the underlying divide-and-conquer algorithm. The next lemma provides asymptotic bounds on the summation's growth.

Lemma 4.3
Start example

Let a ≥ 1 and b > 1 be constants, and let f(n) be a nonnegative function defined on exact powers of b. A function g(n) defined over exact powers of b by

(4.7) 

can then be bounded asymptotically for exact powers of b as follows.

  1. If f(n) = O(n^(log_b a - ε)) for some constant ε > 0, then g(n) = O(n^(log_b a)).

  2. If f(n) = Θ(n^(log_b a)), then g(n) = Θ(n^(log_b a) lg n).

  3. If af(n/b) ≤ cf(n) for some constant c < 1 and for all n ≥ b, then g(n) = Θ(f(n)).

Proof For case 1, we have f(n) = O(n^(log_b a - ε)), which implies that f(n/b^j) = O((n/b^j)^(log_b a - ε)). Substituting into equation (4.7) yields

(4.8) 

We bound the summation within the O-notation by factoring out terms and simplifying, which leaves an increasing geometric series:

Since b and ε are constants, we can rewrite the last expression as n^(log_b a - ε) O(n^ε) = O(n^(log_b a)). Substituting this expression for the summation in equation (4.8) yields

and case 1 is proved.

Under the assumption that f(n) = Θ(n^(log_b a)) for case 2, we have that f(n/b^j) = Θ((n/b^j)^(log_b a)). Substituting into equation (4.7) yields

(4.9) 

We bound the summation within the Θ as in case 1, but this time we do not obtain a geometric series. Instead, we discover that every term of the summation is the same:

Substituting this expression for the summation in equation (4.9) yields

g(n) = Θ(n^(log_b a) log_b n)
     = Θ(n^(log_b a) lg n),

and case 2 is proved.

Case 3 is proved similarly. Since f(n) appears in the definition (4.7) of g(n) and all terms of g(n) are nonnegative, we can conclude that g(n) = Ω(f(n)) for exact powers of b. Under our assumption that af(n/b) ≤ cf(n) for some constant c < 1 and all n ≥ b, we have f(n/b) ≤ (c/a)f(n). Iterating j times, we have f(n/b^j) ≤ (c/a)^j f(n) or, equivalently, a^j f(n/b^j) ≤ c^j f(n). Substituting into equation (4.7) and simplifying yields a geometric series, but unlike the series in case 1, this one has decreasing terms:

since c is constant. Thus, we can conclude that g(n) = Θ(f(n)) for exact powers of b. Case 3 is proved, which completes the proof of the lemma.

End example

We can now prove a version of the master theorem for the case in which n is an exact power of b.

Lemma 4.4
Start example

Let a ≥ 1 and b > 1 be constants, and let f(n) be a nonnegative function defined on exact powers of b. Define T(n) on exact powers of b by the recurrence

where i is a positive integer. Then T(n) can be bounded asymptotically for exact powers of b as follows.

  1. If f(n) = O(n^(log_b a - ε)) for some constant ε > 0, then T(n) = Θ(n^(log_b a)).

  2. If f(n) = Θ(n^(log_b a)), then T(n) = Θ(n^(log_b a) lg n).

  3. If f(n) = Ω(n^(log_b a + ε)) for some constant ε > 0, and if af(n/b) ≤ cf(n) for some constant c < 1 and all sufficiently large n, then T(n) = Θ(f(n)).

Proof We use the bounds in Lemma 4.3 to evaluate the summation (4.6) from Lemma 4.2. For case 1, we have

T(n) = Θ(n^(log_b a)) + O(n^(log_b a))
     = Θ(n^(log_b a)),

and for case 2,

T(n) = Θ(n^(log_b a)) + Θ(n^(log_b a) lg n)
     = Θ(n^(log_b a) lg n).

For case 3,

T(n) = Θ(n^(log_b a)) + Θ(f(n))
     = Θ(f(n)),

because f(n) = Ω(n^(log_b a + ε)).

End example

4.4.2 Floors and ceilings

To complete the proof of the master theorem, we must now extend our analysis to the situation in which floors and ceilings are used in the master recurrence, so that the recurrence is defined for all integers, not just exact powers of b. Obtaining a lower bound on

(4.10)  T(n) = aT(⌈n/b⌉) + f(n)

and an upper bound on

(4.11)  T(n) = aT(⌊n/b⌋) + f(n)

is routine, since the bound ⌈n/b⌉ ≥ n/b can be pushed through in the first case to yield the desired result, and the bound ⌊n/b⌋ ≤ n/b can be pushed through in the second case. Lower bounding the recurrence (4.11) requires much the same technique as upper bounding the recurrence (4.10), so we shall present only this latter bound.

We modify the recursion tree of Figure 4.3 to produce the recursion tree in Figure 4.4. As we go down in the recursion tree, we obtain a sequence of recursive invocations on the arguments

Figure 4.4: The recursion tree generated by T(n) = aT(⌈n/b⌉) + f(n). The recursive argument n_j is given by equation (4.12).
n, ⌈n/b⌉, ⌈⌈n/b⌉/b⌉, ⌈⌈⌈n/b⌉/b⌉/b⌉, ....

Let us denote the jth element in the sequence by n_j, where

(4.12)  n_j = n if j = 0, and n_j = ⌈n_(j-1)/b⌉ if j > 0.

Our first goal is to determine the depth k such that n_k is a constant. Using the inequality ⌈x⌉ ≤ x + 1, we obtain

In general,

Letting j = ⌊log_b n⌋, we obtain

and thus we see that at depth ⌊log_b n⌋, the problem size is at most a constant.
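
The bound n_j ≤ n/b^j + b/(b - 1) that underlies this estimate is easy to confirm numerically. A short Python sketch (an illustration only, with b taken to be an integer):

import math

def check_ceiling_bound(n, b):
    # iterate n_0 = n, n_j = ceil(n_(j-1)/b) and verify n_j <= n/b^j + b/(b - 1)
    nj = n
    for j in range(int(math.log(n, b)) + 2):
        assert nj <= n / b ** j + b / (b - 1)
        nj = math.ceil(nj / b)

for n in range(2, 2000):
    check_ceiling_bound(n, 3)
print("bound holds for b = 3 and all n from 2 to 1999")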

From Figure 4.4, we see that

(4.13) 

which is much the same as equation (4.6), except that n is an arbitrary integer and not restricted to be an exact power of b.

We can now evaluate the summation

(4.14) 

from (4.13) in a manner analogous to the proof of Lemma 4.3. Beginning with case 3, if af(⌈n/b⌉) ≤ cf(n) for n > b + b/(b - 1), where c < 1 is a constant, then it follows that a^j f(n_j) ≤ c^j f(n). Therefore, the sum in equation (4.14) can be evaluated just as in Lemma 4.3. For case 2, we have f(n) = Θ(n^(log_b a)). If we can show that f(n_j) = O(n^(log_b a)/a^j), then the proof for case 2 of Lemma 4.3 will go through. Observe that j ≤ ⌊log_b n⌋ implies b^j/n ≤ 1. The bound f(n) = O(n^(log_b a)) implies that there exists a constant c > 0 such that for all sufficiently large n_j,

since c(1 + b/(b - 1))^(log_b a) is a constant. Thus, case 2 is proved. The proof of case 1 is almost identical. The key is to prove the bound f(n_j) = O(n^(log_b a - ε)/a^j), which is similar to the corresponding proof of case 2, though the algebra is more intricate.

We have now proved the upper bounds in the master theorem for all integers n. The proof of the lower bounds is similar.

Exercises 4.4-1:
Start example

Give a simple and exact expression for nj in equation (4.12) for the case in which b is a positive integer instead of an arbitrary real number.

End example
Exercises 4.4-2:
Start example

Show that if f(n) = Θ(n^(log_b a) lg^k n), where k ≥ 0, then the master recurrence has solution T(n) = Θ(n^(log_b a) lg^(k+1) n). For simplicity, confine your analysis to exact powers of b.

End example
Exercises 4.4-3:
Start example

Show that case 3 of the master theorem is overstated, in the sense that the regularity condition af(n/b) ≤ cf(n) for some constant c < 1 implies that there exists a constant ε > 0 such that f(n) = Ω(n^(log_b a + ε)).

End example
Problems 4-1: Recurrence examples
Start example

Give asymptotic upper and lower bounds for T(n) in each of the following recurrences. Assume that T(n) is constant for n ≤ 2. Make your bounds as tight as possible, and justify your answers.

  1. T(n) = 2T(n/2) + n^3.

  2. T(n) = T(9n/10) + n.

  3. T(n) = 16T(n/4) + n^2.

  4. T(n) = 7T(n/3) + n^2.

  5. T(n) = 7T(n/2) + n^2.

  6. .

  7. T(n) = T(n - 1) + n.

  8. .

End example
Problems 4-2: Finding the missing integer
Start example

An array A[1 .. n] contains all the integers from 0 to n except one. It would be easy to determine the missing integer in O(n) time by using an auxiliary array B[0 .. n] to record which numbers appear in A. In this problem, however, we cannot access an entire integer in A with a single operation. The elements of A are represented in binary, and the only operation we can use to access them is "fetch the jth bit of A[i]," which takes constant time.

Show that if we use only this operation, we can still determine the missing integer in O(n) time.

End example
Problems 4-3: Parameter-passing costs
Start example

Throughout this book, we assume that parameter passing during procedure calls takes constant time, even if an N-element array is being passed. This assumption is valid in most systems because a pointer to the array is passed, not the array itself. This problem examines the implications of three parameter-passing strategies:

  1. An array is passed by pointer. Time = Θ(1).

  2. An array is passed by copying. Time = Θ(N), where N is the size of the array.

  3. An array is passed by copying only the subrange that might be accessed by the called procedure. Time = Θ(q - p + 1) if the subarray A[p .. q] is passed.

  1. Consider the recursive binary search algorithm for finding a number in a sorted array (see Exercise 2.3-5). Give recurrences for the worst-case running times of binary search when arrays are passed using each of the three methods above, and give good upper bounds on the solutions of the recurrences. Let N be the size of the original problem and n be the size of a subproblem.

  2. Redo part (a) for the MERGE-SORT algorithm from Section 2.3.1.

End example
Problems 4-4: More recurrence examples
Start example

Give asymptotic upper and lower bounds for T(n) in each of the following recurrences. Assume that T(n) is constant for sufficiently small n. Make your bounds as tight as possible, and justify your answers.

  1. T(n) = 3T(n/2) + n lg n.

  2. T(n) = 5T(n/5) + n/ lg n.

  3. T(n) = 3T(n/3 + 5) + n/2.

  4. T(n) = 2T(n/2) + n/ lg n.

  5. T(n) = T(n/2) + T(n/4) + T(n/8) + n.

  6. T(n) = T(n - 1) + 1/n.

  7. T(n) = T(n - 1) + lg n.

  8. T(n) = T(n - 2) + 2 lg n.

  9. .

End example
Problems 4-5: Fibonacci numbers
Start example

This problem develops properties of the Fibonacci numbers, which are defined by recurrence (3.21). We shall use the technique of generating functions to solve the Fibonacci recurrence. Define the generating function (or formal power series) F as

F(z) = Σ (i = 0 to ∞) Fi z^i = 0 + z + z^2 + 2z^3 + 3z^4 + 5z^5 + 8z^6 + ··· ,

where Fi is the ith Fibonacci number.

  1. Show that F (z) = z + z F (z) + z2F(z).

  2. Show that

    where

    and

  3. Show that

  4. Prove that for i > 0, rounded to the nearest integer. (Hint: Observe .)

  5. Prove that F_(i+2) ≥ φ^i for i ≥ 0.

End example
Problems 4-6: VLSI chip testing
Start example

Professor Diogenes has n supposedly identical VLSI[1] chips that in principle are capable of testing each other. The professor's test jig accommodates two chips at a time. When the jig is loaded, each chip tests the other and reports whether it is good or bad. A good chip always reports accurately whether the other chip is good or bad, but the answer of a bad chip cannot be trusted. Thus, the four possible outcomes of a test are as follows:

Chip A says      Chip B says      Conclusion

B is good        A is good        both are good, or both are bad
B is good        A is bad         at least one is bad
B is bad         A is good        at least one is bad
B is bad         A is bad         at least one is bad

  1. Show that if more than n/2 chips are bad, the professor cannot necessarily determine which chips are good using any strategy based on this kind of pairwise test. Assume that the bad chips can conspire to fool the professor.

  2. Consider the problem of finding a single good chip from among n chips, assuming that more than n/2 of the chips are good. Show that n/2 pairwise tests are sufficient to reduce the problem to one of nearly half the size.

  3. Show that the good chips can be identified with Θ(n) pairwise tests, assuming that more than n/2 of the chips are good. Give and solve the recurrence that describes the number of tests.

End example
Problems 4-7: Monge arrays
Start example

An m × n array A of real numbers is a Monge array if for all i, j, k, and l such that 1 ≤ i < k ≤ m and 1 ≤ j < l ≤ n, we have

A[i, j] + A[k, l] ≤ A[i, l] + A[k, j].

In other words, whenever we pick two rows and two columns of a Monge array and consider the four elements at the intersections of the rows and the columns, the sum of the upper-left and lower-right elements is less than or equal to the sum of the lower-left and upper-right elements. For example, the following array is Monge:

10   17   13   28   23
17   22   16   29   23
24   28   22   34   24
11   13    6   17    7
45   44   32   37   23
36   33   19   21    6
75   66   51   53   34

  1. Prove that an array is Monge if and only if for all i = 1, 2, ..., m - 1 and j = 1, 2, ..., n - 1, we have

    A[i, j] + A[i + 1, j + 1] ≤ A[i, j + 1] + A[i + 1, j].

    Note 

    (For the "only if" part, use induction separately on rows and columns.)

  2. The following array is not Monge. Change one element in order to make it Monge. (Hint: Use part (a).)

    37   23   22   32
    21    6    7   10
    53   34   30   31
    32   13    9    6
    43   21   15    8

  3. Let f(i) be the index of the column containing the leftmost minimum element of row i. Prove that f(1) ≤ f(2) ≤ ··· ≤ f(m) for any m × n Monge array.

  4. Here is a description of a divide-and-conquer algorithm that computes the left-most minimum element in each row of an m × n Monge array A:

    • Construct a submatrix A′ of A consisting of the even-numbered rows of A. Recursively determine the leftmost minimum for each row of A′. Then compute the leftmost minimum in the odd-numbered rows of A.

    Explain how to compute the leftmost minimum in the odd-numbered rows of A (given that the leftmost minimum of the even-numbered rows is known) in O(m + n) time.

  5. Write the recurrence describing the running time of the algorithm described in part (d). Show that its solution is O(m + n log m).

End example

[1]VLSI stands for "very large scale integration," which is the integrated-circuit chip technology used to fabricate most microprocessors today.


Chapter notes

Recurrences were studied as early as 1202 by L. Fibonacci, for whom the Fibonacci numbers are named. A. De Moivre introduced the method of generating functions (see Problem 4-5) for solving recurrences. The master method is adapted from Bentley, Haken, and Saxe [41], which provides the extended method justified by Exercise 4.4-2. Knuth [182] and Liu [205] show how to solve linear recurrences using the method of generating functions. Purdom and Brown [252] and Graham, Knuth, and Patashnik [132] contain extended discussions of recurrence solving.

Several researchers, including Akra and Bazzi [13], Roura [262], and Verma [306], have given methods for solving more general divide-and-conquer recurrences than are solved by the master method. We describe the result of Akra and Bazzi here, which works for recurrences of the form

(4.15) 

where k ≥ 1; all coefficients ai are positive and sum to at least 1; all bi are at least 2; f(n) is bounded, positive, and nondecreasing; and for all constants c > 1, there exist constants n0, d > 0 such that f(n/c) ≥ df(n) for all n ≥ n0. This method would work on a recurrence such as T(n) = T(n/3) + T(2n/3) + O(n), for which the master method does not apply. To solve the recurrence (4.15), we first find the value of p such that a1/b1^p + a2/b2^p + ··· + ak/bk^p = 1. (Such a p always exists, and it is unique and positive.) The solution to the recurrence is then

for n a sufficiently large constant. The Akra-Bazzi method can be somewhat difficult to use, but it serves in solving recurrences that model division of the problem into substantially unequally sized subproblems. The master method is simpler to use, but it applies only when subproblem sizes are equal.
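
The first step of the Akra-Bazzi method, finding the exponent p with a1/b1^p + ··· + ak/bk^p = 1, is easy to carry out numerically. A Python sketch (an illustration only; the search interval [-10, 10] is an arbitrary choice that happens to bracket p for this example):

def akra_bazzi_p(a, b, lo=-10.0, hi=10.0, iters=200):
    # g(p) = sum_i a_i / b_i^p - 1 is strictly decreasing in p, so bisection finds its root
    g = lambda p: sum(ai / bi ** p for ai, bi in zip(a, b)) - 1
    for _ in range(iters):
        mid = (lo + hi) / 2
        if g(mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# T(n) = T(n/3) + T(2n/3) + O(n): a = (1, 1), b = (3, 3/2), and p comes out as 1
print(akra_bazzi_p([1, 1], [3, 1.5]))   # prints a value very close to 1.0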

 
 

Chapter 5: Probabilistic Analysis and Randomized Algorithms

This chapter introduces probabilistic analysis and randomized algorithms. If you are unfamiliar with the basics of probability theory, you should read Appendix C, which reviews this material. Probabilistic analysis and randomized algorithms will be revisited several times throughout this book.

5.1 The hiring problem

Suppose that you need to hire a new office assistant. Your previous attempts at hiring have been unsuccessful, and you decide to use an employment agency. The employment agency will send you one candidate each day. You will interview that person and then decide to either hire that person or not. You must pay the employment agency a small fee to interview an applicant. To actually hire an applicant is more costly, however, since you must fire your current office assistant and pay a large hiring fee to the employment agency. You are committed to having, at all times, the best possible person for the job. Therefore, you decide that, after interviewing each applicant, if that applicant is better qualified than the current office assistant, you will fire the current office assistant and hire the new applicant. You are willing to pay the resulting price of this strategy, but you wish to estimate what that price will be.

The procedure HIRE-ASSISTANT, given below, expresses this strategy for hiring in pseudocode. It assumes that the candidates for the office assistant job are numbered 1 through n. The procedure assumes that you are able to, after interviewing candidate i, determine if candidate i is the best candidate you have seen so far. To initialize, the procedure creates a dummy candidate, numbered 0, who is less qualified than each of the other candidates.

HIRE-ASSISTANT(n)
1  best ← 0          ▹ candidate 0 is a least-qualified dummy candidate
2  for i ← 1 to n
3       do interview candidate i
4          if candidate i is better than candidate best
5             then best ← i
6                  hire candidate i
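
The same strategy in a minimal Python sketch (an illustration only, assuming each candidate is represented by a comparable quality score):

def hire_assistant(candidates):
    best = None                  # plays the role of the dummy candidate 0
    hires = 0
    for quality in candidates:   # interview candidates in the order given
        if best is None or quality > best:
            best = quality       # fire the current assistant and hire this one
            hires += 1
    return hires

print(hire_assistant([3, 1, 4, 1, 5, 9, 2, 6]))   # 4 hires: the candidates with quality 3, 4, 5, 9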

The cost model for this problem differs from the model described in Chapter 2. We are not concerned with the running time of HIRE-ASSISTANT, but instead with the cost incurred by interviewing and hiring. On the surface, analyzing the cost of this algorithm may seem very different from analyzing the running time of, say, merge sort. The analytical techniques used, however, are identical whether we are analyzing cost or running time. In either case, we are counting the number of times certain basic operations are executed.

Interviewing has a low cost, say ci, whereas hiring is expensive, costing ch. Let m be the number of people hired. Then the total cost associated with this algorithm is O(nci + mch). No matter how many people we hire, we always interview n candidates and thus always incur the cost nci associated with interviewing. We therefore concentrate on analyzing mch, the hiring cost. This quantity varies with each run of the algorithm.

This scenario serves as a model for a common computational paradigm. It is often the case that we need to find the maximum or minimum value in a sequence by examining each element of the sequence and maintaining a current "winner." The hiring problem models how often we update our notion of which element is currently winning.

Worst-case analysis

In the worst case, we actually hire every candidate that we interview. This situation occurs if the candidates come in increasing order of quality, in which case we hire n times, for a total hiring cost of O(nch).

It might be reasonable to expect, however, that the candidates do not always come in increasing order of quality. In fact, we have no idea about the order in which they arrive, nor do we have any control over this order. Therefore, it is natural to ask what we expect to happen in a typical or average case.

Probabilistic analysis

Probabilistic analysis is the use of probability in the analysis of problems. Most commonly, we use probabilistic analysis to analyze the running time of an algorithm. Sometimes, we use it to analyze other quantities, such as the hiring cost in procedure HIRE-ASSISTANT. In order to perform a probabilistic analysis, we must use knowledge of, or make assumptions about, the distribution of the inputs. Then we analyze our algorithm, computing an expected running time. The expectation is taken over the distribution of the possible inputs. Thus we are, in effect, averaging the running time over all possible inputs.

We must be very careful in deciding on the distribution of inputs. For some problems, it is reasonable to assume something about the set of all possible inputs, and we can use probabilistic analysis as a technique for designing an efficient algorithm and as a means for gaining insight into a problem. For other problems, we cannot describe a reasonable input distribution, and in these cases we cannot use probabilistic analysis.

For the hiring problem, we can assume that the applicants come in a random order. What does that mean for this problem? We assume that we can compare any two candidates and decide which one is better qualified; that is, there is a total order on the candidates. (See Appendix B for the definition of a total order.) We can therefore rank each candidate with a unique number from 1 through n, using rank(i) to denote the rank of applicant i, and adopt the convention that a higher rank corresponds to a better qualified applicant. The ordered list <rank(1), rank(2), ..., rank(n)> is a permutation of the list <1, 2, ..., n>. Saying that the applicants come in a random order is equivalent to saying that this list of ranks is equally likely to be any one of the n! permutations of the numbers 1 through n. Alternatively, we say that the ranks form a uniform random permutation; that is, each of the possible n! permutations appears with equal probability.

Section 5.2 contains a probabilistic analysis of the hiring problem.

Randomized algorithms

In order to use probabilistic analysis, we need to know something about the distribution on the inputs. In many cases, we know very little about the input distribution. Even if we do know something about the distribution, we may not be able to model this knowledge computationally. Yet we often can use probability and randomness as a tool for algorithm design and analysis, by making the behavior of part of the algorithm random.

In the hiring problem, it may seem as if the candidates are being presented to us in a random order, but we have no way of knowing whether or not they really are. Thus, in order to develop a randomized algorithm for the hiring problem, we must have greater control over the order in which we interview the candidates. We will, therefore, change the model slightly. We will say that the employment agency has n candidates, and they send us a list of the candidates in advance. On each day, we choose, randomly, which candidate to interview. Although we know nothing about the candidates (besides their names), we have made a significant change. Instead of relying on a guess that the candidates will come to us in a random order, we have instead gained control of the process and enforced a random order.

More generally, we call an algorithm randomized if its behavior is determined not only by its input but also by values produced by a random-number generator. We shall assume that we have at our disposal a random-number generator RANDOM. A call to RANDOM(a, b) returns an integer between a and b, inclusive, with each such integer being equally likely. For example, RANDOM(0, 1) produces 0 with probability 1/2, and it produces 1 with probability 1/2. A call to RANDOM(3, 7) returns either 3, 4, 5, 6, or 7, each with probability 1/5. Each integer returned by RANDOM is independent of the integers returned on previous calls. You may imagine RANDOM as rolling a (b - a + 1)-sided die to obtain its output. (In practice, most programming environments offer a pseudorandom-number generator: a deterministic algorithm returning numbers that "look" statistically random.)

Exercises 5.1-1
Start example

Show that the assumption that we are always able to determine which candidate is best in line 4 of procedure HIRE-ASSISTANT implies that we know a total order on the ranks of the candidates.

End example
Exercises 5.1-2:
Start example

Describe an implementation of the procedure RANDOM(a, b) that only makes calls to RANDOM(0, 1). What is the expected running time of your procedure, as a function of a and b?

End example
Exercises 5.1-3:
Start example

Suppose that you want to output 0 with probability 1/2 and 1 with probability 1/2. At your disposal is a procedure BIASED-RANDOM, that outputs either 0 or 1. It outputs 1 with some probability p and 0 with probability 1 - p, where 0 < p < 1, but you do not know what p is. Give an algorithm that uses BIASED-RANDOM as a subroutine, and returns an unbiased answer, returning 0 with probability 1/2 and 1 with probability 1/2. What is the expected running time of your algorithm as a function of p?

 

5.2 Indicator random variables

In order to analyze many algorithms, including the hiring problem, we will use indicator random variables. Indicator random variables provide a convenient method for converting between probabilities and expectations. Suppose we are given a sample space S and an event A. Then the indicator random variable I {A} associated with event A is defined as

(5.1) 

As a simple example, let us determine the expected number of heads that we obtain when flipping a fair coin. Our sample space is S = {H, T}, and we define a random variable Y which takes on the values H and T, each with probability 1/2. We can then define an indicator random variable XH, associated with the coin coming up heads, which we can express as the event Y = H. This variable counts the number of heads obtained in this flip, and it is 1 if the coin comes up heads and 0 otherwise. We write

The expected number of heads obtained in one flip of the coin is simply the expected value of our indicator variable XH:

E[XH] = E[I{Y = H}]
      = 1 · Pr{Y = H} + 0 · Pr{Y = T}
      = 1 · (1/2) + 0 · (1/2)
      = 1/2.

Thus the expected number of heads obtained by one flip of a fair coin is 1/2. As the following lemma shows, the expected value of an indicator random variable associated with an event A is equal to the probability that A occurs.

Lemma 5.1
Start example

Given a sample space S and an event A in the sample space S, let XA = I{A}. Then E[XA] = Pr{A}.

Proof By the definition of an indicator random variable from equation (5.1) and the definition of expected value, we have

E[XA] = E[I{A}]
      = 1 · Pr{A} + 0 · Pr{Ā}
      = Pr{A},

where Ā denotes S - A, the complement of A.

End example

Although indicator random variables may seem cumbersome for an application such as counting the expected number of heads on a flip of a single coin, they are useful for analyzing situations in which we perform repeated random trials. For example, indicator random variables give us a simple way to arrive at the result of equation (C.36). In this equation, we compute the number of heads in n coin flips by considering separately the probability of obtaining 0 heads, 1 heads, 2 heads, etc. However, the simpler method proposed in equation (C.37) actually implicitly uses indicator random variables. Making this argument more explicit, we can let Xi be the indicator random variable associated with the event in which the ith flip comes up heads. Letting Yi be the random variable denoting the outcome of the ith flip, we have that Xi = I{Yi = H}. Let X be the random variable denoting the total number of heads in the n coin flips, so that

We wish to compute the expected number of heads, so we take the expectation of both sides of the above equation to obtain

The left side of the above equation is the expectation of the sum of n random variables. By Lemma 5.1, we can easily compute the expectation of each of the random variables. By equation (C.20)-linearity of expectation-it is easy to compute the expectation of the sum: it equals the sum of the expectations of the n random variables. Linearity of expectation makes the use of indicator random variables a powerful analytical technique; it applies even when there is dependence among the random variables. We now can easily compute the expected number of heads:

Thus, compared to the method used in equation (C.36), indicator random variables greatly simplify the calculation. We shall use indicator random variables throughout this book.
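
The coin-flip calculation is small enough to verify directly. A Python sketch (an illustration only): by linearity, the exact expectation is the sum of the n values E[Xi] = 1/2, and a simulation gives roughly the same number.

import random

def expected_heads(n):
    return sum(0.5 for _ in range(n))            # sum of E[X_i] = Pr{ith flip is heads} = 1/2

def simulated_heads(n, trials=10000):
    total = sum(sum(random.randint(0, 1) for _ in range(n)) for _ in range(trials))
    return total / trials

n = 20
print(expected_heads(n))     # 10.0
print(simulated_heads(n))    # close to 10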

Analysis of the hiring problem using indicator random variables

Returning to the hiring problem, we now wish to compute the expected number of times that we hire a new office assistant. In order to use a probabilistic analysis, we assume that the candidates arrive in a random order, as discussed in the previous section. (We shall see in Section 5.3 how to remove this assumption.) Let X be the random variable whose value equals the number of times we hire a new office assistant. We could then apply the definition of expected value from equation (C.19) to obtain

but this calculation would be cumbersome. We shall instead use indicator random variables to greatly simplify the calculation.

To use indicator random variables, instead of computing E[X] by defining one variable associated with the number of times we hire a new office assistant, we define n variables related to whether or not each particular candidate is hired. In particular, we let Xi be the indicator random variable associated with the event in which the ith candidate is hired. Thus,

(5.2) 

and

(5.3) 

By Lemma 5.1, we have that

E[Xi] = Pr {candidate i is hired},

and we must therefore compute the probability that lines 5-6 of HIRE-ASSISTANT are executed.

Candidate i is hired, in line 5, exactly when candidate i is better than each of candidates 1 through i - 1. Because we have assumed that the candidates arrive in a random order, the first i candidates have appeared in a random order. Any one of these first i candidates is equally likely to be the best-qualified so far. Candidate i has a probability of 1/i of being better qualified than candidates 1 through i - 1 and thus a probability of 1/i of being hired. By Lemma 5.1, we conclude that

(5.4) 

Now we can compute E[X]:

(5.5) 
(5.6) 

Even though we interview n people, we only actually hire approximately ln n of them, on average. We summarize this result in the following lemma.

Lemma 5.2
Start example

Assuming that the candidates are presented in a random order, algorithm HIRE-ASSISTANT has a total hiring cost of O(ch ln n).

Proof The bound follows immediately from our definition of the hiring cost and equation (5.6).

End example

This expected hiring cost of O(ch ln n) is a significant improvement over the worst-case hiring cost of O(nch).
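
A quick simulation (a sketch, not part of the analysis) confirms that the average number of hires over uniformly random arrival orders tracks the harmonic number H_n = 1 + 1/2 + ··· + 1/n ≈ ln n:

import random

def count_hires(order):
    best, hires = None, 0
    for quality in order:
        if best is None or quality > best:
            best, hires = quality, hires + 1
    return hires

def average_hires(n, trials=20000):
    total = 0
    for _ in range(trials):
        order = list(range(n))
        random.shuffle(order)          # uniform random arrival order
        total += count_hires(order)
    return total / trials

n = 50
H_n = sum(1.0 / i for i in range(1, n + 1))
print(round(average_hires(n), 2), round(H_n, 2))   # both are about 4.5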

Exercises 5.2-1
Start example

In HIRE-ASSISTANT, assuming that the candidates are presented in a random order, what is the probability that you will hire exactly one time? What is the probability that you will hire exactly n times?

End example
Exercises 5.2-2
Start example

In HIRE-ASSISTANT, assuming that the candidates are presented in a random order, what is the probability that you will hire exactly twice?

End example
Exercises 5.2-3
Start example

Use indicator random variables to compute the expected value of the sum of n dice.

End example
Exercises 5.2-4
Start example

Use indicator random variables to solve the following problem, which is known as the hat-check problem. Each of n customers gives a hat to a hat-check person at a restaurant. The hat-check person gives the hats back to the customers in a random order. What is the expected number of customers that get back their own hat?

End example
Exercises 5.2-5
Start example

Let A[1 .. n] be an array of n distinct numbers. If i < j and A[i] > A[j], then the pair (i, j) is called an inversion of A. (See Problem 2-4 for more on inversions.) Suppose that each element of A is chosen randomly, independently, and uniformly from the range 1 through n. Use indicator random variables to compute the expected number of inversions.

End example

 

5.3 Randomized algorithms

In the previous section, we showed how knowing a distribution on the inputs can help us to analyze the average-case behavior of an algorithm. Many times, we do not have such knowledge and no average-case analysis is possible. As mentioned in Section 5.1, we may be able to use a randomized algorithm.

For a problem such as the hiring problem, in which it is helpful to assume that all permutations of the input are equally likely, a probabilistic analysis will guide the development of a randomized algorithm. Instead of assuming a distribution of inputs, we impose a distribution. In particular, before running the algorithm, we randomly permute the candidates in order to enforce the property that every permutation is equally likely. This modification does not change our expectation of hiring a new office assistant roughly ln n times. It means, however, that for any input we expect this to be the case, rather than for inputs drawn from a particular distribution.

We now explore the distinction between probabilistic analysis and randomized algorithms further. In Section 5.2, we claimed that, assuming that the candidates are presented in a random order, the expected number of times we hire a new office assistant is about ln n. Note that the algorithm here is deterministic; for any particular input, the number of times a new office assistant is hired will always be the same. Furthermore, the number of times we hire a new office assistant differs for different inputs, and it depends on the ranks of the various candidates. Since this number depends only on the ranks of the candidates, we can represent a particular input by listing, in order, the ranks of the candidates, i.e., <rank(1), rank(2), ..., rank(n)>. Given the rank list A1 = <1, 2, 3, 4, 5, 6, 7, 8, 9, 10>, a new office assistant will always be hired 10 times, since each successive candidate is better than the previous one, and lines 5-6 will be executed in each iteration of the algorithm. Given the list of ranks A2 = <10, 9, 8, 7, 6, 5, 4, 3, 2, 1>, a new office assistant will be hired only once, in the first iteration. Given a list of ranks A3= <5, 2, 1, 8, 4, 7, 10, 9, 3, 6>, a new office assistant will be hired three times, upon interviewing the candidates with ranks 5, 8, and 10. Recalling that the cost of our algorithm is dependent on how many times we hire a new office assistant, we see that there are expensive inputs, such as A1, inexpensive inputs, such as A2, and moderately expensive inputs, such as A3.

Consider, on the other hand, the randomized algorithm that first permutes the candidates and then determines the best candidate. In this case, the randomization is in the algorithm, not in the input distribution. Given a particular input, say A3 above, we cannot say how many times the maximum will be updated, because this quantity differs with each run of the algorithm. The first time we run the algorithm on A3, it may produce the permutation A1 and perform 10 updates, while the second time we run the algorithm, we may produce the permutation A2 and perform only one update. The third time we run it, we may perform some other number of updates. Each time we run the algorithm, the execution depends on the random choices made and is likely to differ from the previous execution of the algorithm. For this algorithm and many other randomized algorithms, no particular input elicits its worst-case behavior. Even your worst enemy cannot produce a bad input array, since the random permutation makes the input order irrelevant. The randomized algorithm performs badly only if the random-number generator produces an "unlucky" permutation.

For the hiring problem, the only change needed in the code is to randomly permute the array.

RANDOMIZED-HIRE-ASSISTANT(n)
1  randomly permute the list of candidates
2  best ← 0          ▹ candidate 0 is a least-qualified dummy candidate
3  for i ← 1 to n
4       do interview candidate i
5          if candidate i is better than candidate best
6             then best ← i
7                  hire candidate i

With this simple change, we have created a randomized algorithm whose performance matches that obtained by assuming that the candidates were presented in a random order.

Lemma 5.3
Start example

The expected hiring cost of the procedure RANDOMIZED-HIRE-ASSISTANT is O(ch ln n).

Proof After permuting the input array, we have achieved a situation identical to that of the probabilistic analysis of HIRE-ASSISTANT.

End example

The comparison between Lemmas 5.2 and 5.3 captures the difference between probabilistic analysis and randomized algorithms. In Lemma 5.2, we make an assumption about the input. In Lemma 5.3, we make no such assumption, although randomizing the input takes some additional time. In the remainder of this section, we discuss some issues involved in randomly permuting inputs.

Randomly permuting arrays

Many randomized algorithms randomize the input by permuting the given input array. (There are other ways to use randomization.) Here, we shall discuss two methods for doing so. We assume that we are given an array A which, without loss of generality, contains the elements 1 through n. Our goal is to produce a random permutation of the array.

One common method is to assign each element A[i] of the array a random priority P[i], and then sort the elements of A according to these priorities. For example, if our initial array is A = <1, 2, 3, 4> and we choose random priorities P = <36, 3, 97, 19>, we would produce an array B = <2, 4, 1, 3>, since the second priority is the smallest, followed by the fourth, then the first, and finally the third. We call this procedure PERMUTE-BY-SORTING:

PERMUTE-BY-SORTING(A)
1  n ← length[A]
2  for i ← 1 to n
3       do P[i] ← RANDOM(1, n^3)
4  sort A, using P as sort keys
5  return A

Line 3 chooses a random number between 1 and n^3. We use a range of 1 to n^3 to make it likely that all the priorities in P are unique. (Exercise 5.3-5 asks you to prove that the probability that all entries are unique is at least 1 - 1/n, and Exercise 5.3-6 asks how to implement the algorithm even if two or more priorities are identical.) Let us assume that all the priorities are unique.
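
A Python sketch of this method (an illustration only; it simply redraws the priorities in the unlikely event of a collision, which is one crude way around the issue raised in Exercise 5.3-6):

import random

def permute_by_sorting(A):
    n = len(A)
    while True:
        P = [random.randint(1, n ** 3) for _ in range(n)]   # random priorities in 1 .. n^3
        if len(set(P)) == n:                                # all priorities distinct
            break
    return [a for _, a in sorted(zip(P, A))]                # sort A using P as the keys

print(permute_by_sorting([1, 2, 3, 4]))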

The time-consuming step in this procedure is the sorting in line 4. As we shall see in Chapter 8, if we use a comparison sort, sorting takes Ω(n lg n) time. We can achieve this lower bound, since we have seen that merge sort takes Θ(n lg n) time. (We shall see other comparison sorts that take Θ(n lg n) time in Part II.) After sorting, if P[i] is the jth smallest priority, then A[i] will be in position j of the output. In this manner we obtain a permutation. It remains to prove that the procedure produces a uniform random permutation, that is, that every permutation of the numbers 1 through n is equally likely to be produced.

Lemma 5.4
Start example

Procedure PERMUTE-BY-SORTING produces a uniform random permutation of the input, assuming that all priorities are distinct.

Proof We start by considering the particular permutation in which each element A[i] receives the ith smallest priority. We shall show that this permutation occurs with probability exactly 1/n!. For i = 1, 2, ..., n, let Xi be the event that element A[i] receives the ith smallest priority. Then we wish to compute the probability that for all i, event Xi occurs, which is

Pr {X1 ∩ X2 ∩ X3 ∩ ··· ∩ Xn-1 ∩ Xn}.

Using Exercise C.2-6, this probability is equal to

Pr {X1} · Pr{X2 | X1} · Pr{X3 | X2 ∩ X1} · Pr{X4 | X3 ∩ X2 ∩ X1} ··· Pr{Xi | Xi-1 ∩ Xi-2 ∩ ··· ∩ X1} ··· Pr{Xn | Xn-1 ∩ ··· ∩ X1}.

We have that Pr {X1} = 1/n because it is the probability that one priority chosen randomly out of a set of n is the smallest. Next, we observe that Pr {X2 | X1} = 1/(n - 1) because given that element A[1] has the smallest priority, each of the remaining n - 1 elements has an equal chance of having the second smallest priority. In general, for i = 2, 3, ..., n, we have that Pr {Xi | Xi-1 ∩ Xi-2 ∩ ··· ∩ X1} = 1/(n - i + 1), since, given that elements A[1] through A[i - 1] have the i - 1 smallest priorities (in order), each of the remaining n - (i - 1) elements has an equal chance of having the ith smallest priority. Thus, we have

Pr {X1 ∩ X2 ∩ ··· ∩ Xn} = (1/n) · (1/(n - 1)) · (1/(n - 2)) ··· (1/2) · (1/1)
                        = 1/n!,

and we have shown that the probability of obtaining the identity permutation is 1/n!.

We can extend this proof to work for any permutation of priorities. Consider any fixed permutation σ = <σ(1), σ(2), ..., σ(n)> of the set {1, 2, ..., n}. Let us denote by ri the rank of the priority assigned to element A[i], where the element with the jth smallest priority has rank j. If we define Xi as the event in which element A[i] receives the σ(i)th smallest priority, or ri = σ(i), the same proof still applies. Therefore, if we calculate the probability of obtaining any particular permutation, the calculation is identical to the one above, so that the probability of obtaining this permutation is also 1/n!.

End example

One might think that to prove that a permutation is a uniform random permutation it suffices to show that, for each element A[i], the probability that it winds up in position j is 1/n. Exercise 5.3-4 shows that this weaker condition is, in fact, insufficient.

A better method for generating a random permutation is to permute the given array in place. The procedure RANDOMIZE-IN-PLACE does so in O(n) time. In iteration i, the element A[i] is chosen randomly from among elements A[i] through A[n]. Subsequent to iteration i, A[i] is never altered.

RANDOMIZE-IN-PLACE(A)
1  n ← length[A]
2  for i ← 1 to n
3       do swap A[i] ↔ A[RANDOM(i, n)]
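
In Python (a sketch with 0-based indices, not the book's 1-based pseudocode), the same procedure looks like this:

import random

def randomize_in_place(A):
    n = len(A)
    for i in range(n):
        j = random.randint(i, n - 1)   # corresponds to RANDOM(i, n) in 1-based notation
        A[i], A[j] = A[j], A[i]        # swap A[i] with a uniform choice from A[i .. n-1]
    return A

print(randomize_in_place(list(range(1, 11))))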

We will use a loop invariant to show that procedure RANDOMIZE-IN-PLACE produces a uniform random permutation. Given a set of n elements, a k-permutation is a sequence containing k of the n elements. (See Appendix B.) There are n!/(n - k)! such possible k-permutations.

Lemma 5.5
Start example

Procedure RANDOMIZE-IN-PLACE computes a uniform random permutation.

Proof We use the following loop invariant:

  • Just prior to the ith iteration of the for loop of lines 2-3, for each possible (i - 1)-permutation, the subarray A[1 .. i - 1] contains this (i - 1)-permutation with probability (n - i + 1)!/n!.

We need to show that this invariant is true prior to the first loop iteration, that each iteration of the loop maintains the invariant, and that the invariant provides a useful property to show correctness when the loop terminates.

  • Initialization: Consider the situation just before the first loop iteration, so that i = 1. The loop invariant says that for each possible 0-permutation, the sub-array A[1 .. 0] contains this 0-permutation with probability (n - i + 1)!/n! = n!/n! = 1. The subarray A[1 .. 0] is an empty subarray, and a 0-permutation has no elements. Thus, A[1 .. 0] contains any 0-permutation with probability 1, and the loop invariant holds prior to the first iteration.

  • Maintenance: We assume that just before the (i - 1)st iteration, each possible (i - 1)-permutation appears in the subarray A[1 .. i - 1] with probability (n - i + 1)!/n!, and we will show that after the ith iteration, each possible i-permutation appears in the subarray A[1 .. i] with probability (n - i)!/n!. Incrementing i for the next iteration will then maintain the loop invariant.

    Let us examine the ith iteration. Consider a particular i-permutation, and denote the elements in it by <x1, x2, ..., xi>. This permutation consists of an (i - 1)-permutation <x1, ..., xi-1> followed by the value xi that the algorithm places in A[i]. Let E1 denote the event in which the first i - 1 iterations have created the particular (i - 1)-permutation <x1,..., xi-1> in A[1 .. i - 1]. By the loop invariant, Pr {E1} = (n - i + 1)!/n!. Let E2 be the event that the ith iteration puts xi in position A[i]. The i-permutation <x1, ..., xi> is formed in A[1 .. i] precisely when both E1 and E2 occur, and so we wish to compute Pr {E2 ∩ E1}. Using equation (C.14), we have

    Pr {E2 ∩ E1} = Pr{E2 | E1} Pr{E1}.

    The probability Pr {E2 | E1} equals 1/(n-i + 1) because in line 3 the algorithm chooses xi randomly from the n - i + 1 values in positions A[i .. n]. Thus, we have

  • Termination: At termination, i = n + 1, and we have that the subarray A[1 .. n] is a given n-permutation with probability (n - n)!/n! = 1/n!.

    Thus, RANDOMIZE-IN-PLACE produces a uniform random permutation.

End example

A randomized algorithm is often the simplest and most efficient way to solve a problem. We shall use randomized algorithms occasionally throughout this book.

Exercises 5.3-1
Start example

Professor Marceau objects to the loop invariant used in the proof of Lemma 5.5. He questions whether it is true prior to the first iteration. His reasoning is that one could just as easily declare that an empty subarray contains no 0-permutations. Therefore, the probability that an empty subarray contains a 0-permutation should be 0, thus invalidating the loop invariant prior to the first iteration. Rewrite the procedure RANDOMIZE-IN-PLACE so that its associated loop invariant applies to a nonempty subarray prior to the first iteration, and modify the proof of Lemma 5.5 for your procedure.

End example
Exercises 5.3-2
Start example

Professor Kelp decides to write a procedure that will produce at random any permutation besides the identity permutation. He proposes the following procedure:

PERMUTE-WITHOUT-IDENTITY(A)
1  n ← length[A]
2  for i ← 1 to n - 1
3       do swap A[i] ↔ A[RANDOM(i + 1, n)]

Does this code do what Professor Kelp intends?

End example
Exercises 5.3-3
Start example

Suppose that instead of swapping element A[i] with a random element from the subarray A[i .. n], we swapped it with a random element from anywhere in the array:

PERMUTE-WITH-ALL(A)
1  n ← length[A]
2  for i ← 1 to n
3       do swap A[i] ↔ A[RANDOM(1, n)]

Does this code produce a uniform random permutation? Why or why not?

End example
Exercises 5.3-4
Start example

Professor Armstrong suggests the following procedure for generating a uniform random permutation:

PERMUTE-BY-CYCLIC(A)
1  n ← length[A]
2  offset ← RANDOM(1, n)
3  for i ← 1 to n
4       do dest ← i + offset
5          if dest > n
6             then dest ← dest - n
7          B[dest] ← A[i]
8  return B

Show that each element A[i] has a 1/n probability of winding up in any particular position in B. Then show that Professor Armstrong is mistaken by showing that the resulting permutation is not uniformly random.

End example
Exercises 5.3-5:
Start example

Prove that in the array P in procedure PERMUTE-BY-SORTING, the probability that all elements are unique is at least 1 - 1/n.

End example
Exercises 5.3-6
Start example

Explain how to implement the algorithm PERMUTE-BY-SORTING to handle the case in which two or more priorities are identical. That is, your algorithm should produce a uniform random permutation, even if two or more priorities are identical.

 

5.4 Probabilistic analysis and further uses of indicator random variables

This advanced section further illustrates probabilistic analysis by way of four examples. The first determines the probability that in a room of k people, some pair shares the same birthday. The second example examines the random tossing of balls into bins. The third investigates "streaks" of consecutive heads in coin flipping. The final example analyzes a variant of the hiring problem in which you have to make decisions without actually interviewing all the candidates.

5.4.1 The birthday paradox

Our first example is the birthday paradox. How many people must there be in a room before there is a 50% chance that two of them were born on the same day of the year? The answer is surprisingly few. The paradox is that it is in fact far fewer than the number of days in a year, or even half the number of days in a year, as we shall see.

To answer this question, we index the people in the room with the integers 1, 2, ..., k, where k is the number of people in the room. We ignore the issue of leap years and assume that all years have n = 365 days. For i = 1, 2, ..., k, let bi be the day of the year on which person i's birthday falls, where 1 ≤ bi ≤ n. We also assume that birthdays are uniformly distributed across the n days of the year, so that Pr {bi = r} = 1/n for i = 1, 2, ..., k and r = 1, 2, ..., n.

The probability that two given people, say i and j, have matching birthdays depends on whether the random selection of birthdays is independent. We assume from now on that birthdays are independent, so that the probability that i's birthday and j's birthday both fall on day r is

Pr {bi = r and bj = r} = Pr{bi = r} Pr{bj = r}
                       = 1/n^2.

Thus, the probability that they both fall on the same day is

(5.7) 

More intuitively, once bi is chosen, the probability that bj is chosen to be the same day is 1/n. Thus, the probability that i and j have the same birthday is the same as the probability that the birthday of one of them falls on a given day. Notice, however, that this coincidence depends on the assumption that the birthdays are independent.

We can analyze the probability of at least 2 out of k people having matching birthdays by looking at the complementary event. The probability that at least two of the birthdays match is 1 minus the probability that all the birthdays are different. The event that k people have distinct birthdays is

where Ai is the event that person i's birthday is different from person j's for all j < i. Since we can write Bk = Ak ∩ Bk-1, we obtain from equation (C.16) the recurrence

(5.8) 

where we take Pr{B1} = Pr{A1} = 1 as an initial condition. In other words, the probability that b1, b2, ..., bk are distinct birthdays is the probability that b1, b2, ..., bk-1 are distinct birthdays times the probability that bk ≠ bi for i = 1, 2, ..., k - 1, given that b1, b2, ..., bk-1 are distinct.

If b1, b2, ..., bk-1 are distinct, the conditional probability that bk ≠ bi for i = 1, 2, ..., k - 1 is Pr {Ak | Bk-1} = (n - k + 1)/n, since out of the n days, there are n - (k - 1) that are not taken. We iteratively apply the recurrence (5.8) to obtain

Inequality (3.11), 1 + x ≤ e^x, gives us

when -k(k - 1)/2n ≤ ln(1/2). The probability that all k birthdays are distinct is at most 1/2 when k(k - 1) ≥ 2n ln 2 or, solving the quadratic equation, when k ≥ (1 + √(1 + 8n ln 2))/2. For n = 365, we must have k ≥ 23. Thus, if at least 23 people are in a room, the probability is at least 1/2 that at least two people have the same birthday. On Mars, a year is 669 Martian days long; it therefore takes 31 Martians to get the same effect.
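
The recurrence (5.8) can be evaluated exactly with a few lines of Python (a sketch, reporting the smallest k for which the probability of a matching pair reaches 1/2):

def smallest_k_with_match(n):
    p_distinct, k = 1.0, 1
    while 1.0 - p_distinct < 0.5:
        k += 1
        p_distinct *= (n - k + 1) / n     # Pr{A_k | B_(k-1)} = (n - k + 1)/n
    return k

print(smallest_k_with_match(365))   # 23
print(smallest_k_with_match(669))   # 31, the Martian figure quoted above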

An analysis using indicator random variables

We can use indicator random variables to provide a simpler but approximate analysis of the birthday paradox. For each pair (i, j) of the k people in the room, we define the indicator random variable Xij, for 1 ≤ i < j ≤ k, by

By equation (5.7), the probability that two people have matching birthdays is 1/n, and thus by Lemma 5.1, we have

E [Xij] = Pr{person i and person j have the same birthday}
        = 1/n.

Letting X be the random variable that counts the number of pairs of individuals having the same birthday, we have

Taking expectations of both sides and applying linearity of expectation, we obtain

When k(k - 1) ≥ 2n, therefore, the expected number of pairs of people with the same birthday is at least 1. Thus, if we have at least √(2n) + 1 individuals in a room, we can expect at least two to have the same birthday. For n = 365, if k = 28, the expected number of pairs with the same birthday is (28 · 27)/(2 · 365) ≈ 1.0356.

Thus, with at least 28 people, we expect to find at least one matching pair of birth-days. On Mars, where a year is 669 Martian days long, we need at least 38 Martians.

The first analysis, which used only probabilities, determined the number of people required for the probability to exceed 1/2 that a matching pair of birthdays exists, and the second analysis, which used indicator random variables, determined the number such that the expected number of matching birthdays is 1. Although the exact numbers of people differ for the two situations, they are the same asymptotically: Θ(√n).

5.4.2 Balls and bins

Consider the process of randomly tossing identical balls into b bins, numbered 1, 2,..., b. The tosses are independent, and on each toss the ball is equally likely to end up in any bin. The probability that a tossed ball lands in any given bin is 1/b. Thus, the ball-tossing process is a sequence of Bernoulli trials (see Appendix C.4) with a probability 1/b of success, where success means that the ball falls in the given bin. This model is particularly useful for analyzing hashing (see Chapter 11), and we can answer a variety of interesting questions about the ball-tossing process. (Problem C-1 asks additional questions about balls and bins.)

How many balls fall in a given bin? The number of balls that fall in a given bin follows the binomial distribution b(k; n, 1/b). If n balls are tossed, equation (C.36) tells us that the expected number of balls that fall in the given bin is n/b.

How many balls must one toss, on the average, until a given bin contains a ball? The number of tosses until the given bin receives a ball follows the geometric distribution with probability 1/b and, by equation (C.31), the expected number of tosses until success is 1/(1/b) = b.

How many balls must one toss until every bin contains at least one ball? Let us call a toss in which a ball falls into an empty bin a "hit." We want to know the expected number n of tosses required to get b hits.

The hits can be used to partition the n tosses into stages. The ith stage consists of the tosses after the (i - 1)st hit until the ith hit. The first stage consists of the first toss, since we are guaranteed to have a hit when all bins are empty. For each toss during the ith stage, there are i - 1 bins that contain balls and b - i + 1 empty bins. Thus, for each toss in the ith stage, the probability of obtaining a hit is (b-i +1)/b.

Let ni denote the number of tosses in the ith stage. Thus, the number of tosses required to get b hits is n = n1 + n2 + ··· + nb. Each random variable ni has a geometric distribution with probability of success (b - i + 1)/b and, by equation (C.31),

By linearity of expectation,

The last line follows from the bound (A.7) on the harmonic series. It therefore takes approximately b ln b tosses before we can expect that every bin has a ball. This problem is also known as the coupon collector's problem, and says that a person trying to collect each of b different coupons must acquire approximately b ln b randomly obtained coupons in order to succeed.
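
A small simulation (a sketch only) agrees with the b ln b estimate; the exact expectation is b(1 + 1/2 + ··· + 1/b):

import random

def tosses_until_all_bins_hit(b):
    hit, tosses = set(), 0
    while len(hit) < b:
        hit.add(random.randrange(b))   # each toss lands in a uniformly random bin
        tosses += 1
    return tosses

b, trials = 100, 2000
average = sum(tosses_until_all_bins_hit(b) for _ in range(trials)) / trials
exact = b * sum(1.0 / i for i in range(1, b + 1))
print(round(average, 1), round(exact, 1))   # both are about 519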

5.4.3 Streaks

Suppose you flip a fair coin n times. What is the longest streak of consecutive heads that you expect to see? The answer is Θ(lg n), as the following analysis shows.

We first prove that the expected length of the longest streak of heads is O(lg n). The probability that each coin flip is a head is 1/2. Let Aik be the event that a streak of heads of length at least k begins with the ith coin flip or, more precisely, the event that the k consecutive coin flips i, i + 1, ..., i + k - 1 yield only heads, where 1 ≤ k ≤ n and 1 ≤ i ≤ n - k + 1. Since coin flips are mutually independent, for any given event Aik, the probability that all k flips are heads is

(5.9) 

and thus the probability that a streak of heads of length at least 2⌈lg n⌉ begins in position i is quite small. There are at most n - 2⌈lg n⌉ + 1 positions where such a streak can begin. The probability that a streak of heads of length at least 2⌈lg n⌉ begins anywhere is therefore

(5.10) 

since by Boole's inequality (C.18), the probability of a union of events is at most the sum of the probabilities of the individual events. (Note that Boole's inequality holds even for events such as these that are not independent.)

We now use inequality (5.10) to bound the length of the longest streak. For j = 0, 1, 2,..., n, let Lj be the event that the longest streak of heads has length exactly j, and let L be the length of the longest streak. By the definition of expected value,

(5.11) 

We could try to evaluate this sum using upper bounds on each Pr {Lj} similar to those computed in inequality (5.10). Unfortunately, this method would yield weak bounds. We can use some intuition gained by the above analysis to obtain a good bound, however. Informally, we observe that for no individual term in the summation in equation (5.11) are both the factors j and Pr {Lj} large. Why? When j ≥ 2⌈lg n⌉, then Pr {Lj} is very small, and when j < 2⌈lg n⌉, then j is fairly small. More formally, we note that the events Lj for j = 0, 1,..., n are disjoint, and so the probability that a streak of heads of length at least 2⌈lg n⌉ begins anywhere is . By inequality (5.10), we have . Also, noting that , we have that . Thus, we obtain

The chances that a streak of heads exceeds r⌈lg n⌉ flips diminish quickly with r. For r ≥ 1, the probability that a streak of r⌈lg n⌉ heads starts in position i is

Pr {Ai,r⌈lg n⌉} = 1/2^(r⌈lg n⌉) ≤ 1/n^r.

Thus, the probability is at most n/n^r = 1/n^(r-1) that the longest streak is at least r⌈lg n⌉, or equivalently, the probability is at least 1 - 1/n^(r-1) that the longest streak has length less than r⌈lg n⌉.

As an example, for n = 1000 coin flips, the probability of having a streak of at least 2⌈lg n⌉ = 20 heads is at most 1/n = 1/1000. The chances of having a streak longer than 3⌈lg n⌉ = 30 heads are at most 1/n^2 = 1/1,000,000.
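
The n = 1000 figure invites a quick experiment. The following Python sketch (a rough Monte Carlo; the trial count is an arbitrary choice made here) estimates the probability that 1000 fair flips contain a run of at least 20 heads. The estimate should typically come in under the 1/1000 union bound, which is not tight.

import random

def longest_heads_run(n):
    """Longest run of heads (1-bits) among n fair coin flips."""
    bits = format(random.getrandbits(n), "b").zfill(n)   # n random bits as a string
    return max(len(run) for run in bits.split("0"))

n = 1000
trials = 100000
hits = sum(1 for _ in range(trials) if longest_heads_run(n) >= 20)
print(hits / trials)   # the union bound from the text is 1/n = 0.001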

We now prove a complementary lower bound: the expected length of the longest streak of heads in n coin flips is Ω(lg n). To prove this bound, we look for streaks of length s by partitioning the n flips into approximately n/s groups of s flips each. If we choose s = ⌊(lg n)/2⌋, we can show that it is likely that at least one of these groups comes up all heads, and hence it is likely that the longest streak has length at least s = Ω(lg n). We will then show that the longest streak has expected length Ω(lg n).

We partition the n coin flips into at least ⌊n/⌊(lg n)/2⌋⌋ groups of ⌊(lg n)/2⌋ consecutive flips, and we bound the probability that no group comes up all heads. By equation (5.9), the probability that the group starting in position i comes up all heads is

Pr {Ai,⌊(lg n)/2⌋} = 1/2^⌊(lg n)/2⌋ ≥ 1/√n.

The probability that a streak of heads of length at least ⌊(lg n)/2⌋ does not begin in position i is therefore at most 1 - 1/√n. Since the ⌊n/⌊(lg n)/2⌋⌋ groups are formed from mutually exclusive, independent coin flips, the probability that every one of these groups fails to be a streak of length ⌊(lg n)/2⌋ is at most

(1 - 1/√n)^⌊n/⌊(lg n)/2⌋⌋ ≤ (1 - 1/√n)^(n/⌊(lg n)/2⌋ - 1)
                          ≤ e^(-(n/⌊(lg n)/2⌋ - 1)/√n)
                          = O(e^(-lg n))
                          = O(1/n).

For this argument, we used inequality (3.11), 1 + x ≤ e^x, and the fact, which you might want to verify, that (n/⌊(lg n)/2⌋ - 1)/√n ≥ lg n for sufficiently large n.

Thus, the probability that the longest streak is at least ⌊(lg n)/2⌋ is

(5.12)  Σ_{j=⌊(lg n)/2⌋}^{n} Pr {Lj} ≥ 1 - O(1/n).

We can now calculate a lower bound on the expected length of the longest streak, beginning with equation (5.11) and proceeding in a manner similar to our analysis of the upper bound:

E[L] = Σ_{j=0}^{n} j Pr {Lj}
     = Σ_{j<⌊(lg n)/2⌋} j Pr {Lj} + Σ_{j≥⌊(lg n)/2⌋} j Pr {Lj}
     ≥ Σ_{j<⌊(lg n)/2⌋} 0 · Pr {Lj} + Σ_{j≥⌊(lg n)/2⌋} ⌊(lg n)/2⌋ Pr {Lj}
     = ⌊(lg n)/2⌋ Σ_{j≥⌊(lg n)/2⌋} Pr {Lj}
     ≥ ⌊(lg n)/2⌋ (1 - O(1/n))        (by inequality (5.12))
     = Ω(lg n).

As with the birthday paradox, we can obtain a simpler but approximate analysis using indicator random variables. We let Xik = I{Aik} be the indicator random variable associated with a streak of heads of length at least k beginning with the ith coin flip. To count the total number of such streaks, we define

X = Σ_{i=1}^{n-k+1} Xik.

Taking expectations and using linearity of expectation, we have

E[X] = E[Σ_{i=1}^{n-k+1} Xik]
     = Σ_{i=1}^{n-k+1} E[Xik]
     = Σ_{i=1}^{n-k+1} Pr {Aik}
     = Σ_{i=1}^{n-k+1} 1/2^k
     = (n - k + 1)/2^k.

By plugging in various values for k, we can calculate the expected number of streaks of length k. If this number is large (much greater than 1), then many streaks of length k are expected to occur and the probability that one occurs is high. If this number is small (much less than 1), then very few streaks of length k are expected to occur and the probability that one occurs is low. If k = c lg n, for some positive constant c, we obtain

E[X] = (n - c lg n + 1)/2^(c lg n)
     = (n - c lg n + 1)/n^c
     = 1/n^(c-1) - (c lg n - 1)/n^c
     = Θ(1/n^(c-1)).

If c is large, the expected number of streaks of length c lg n is very small, and we conclude that they are unlikely to occur. On the other hand, if c < 1/2, then we obtain E[X] = Θ(1/n^(1/2-1)) = Θ(n^(1/2)), and we expect that there will be a large number of streaks of length (1/2) lg n. Therefore, one streak of such a length is very likely to occur. From these rough estimates alone, we can conclude that the length of the longest streak is Θ(lg n).
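
To see these rough estimates in numbers, the short sketch below (illustrative only; the value n = 4096 is an arbitrary choice made here) evaluates E[X] = (n - k + 1)/2^k for k = c lg n with several constants c, showing the sharp transition in the expected streak count as c grows.

import math

# Expected number of length-k streaks of heads in n fair coin flips:
# E[X] = (n - k + 1) / 2^k, from the indicator-variable analysis above.
n = 4096
lg_n = math.log2(n)                      # 12 for this choice of n
for c in (0.25, 0.5, 1.0, 2.0):
    k = round(c * lg_n)
    expected = (n - k + 1) / 2 ** k
    print(f"c = {c}: expected number of streaks of length {k} is {expected:.4f}")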

5.4.4 The on-line hiring problem

As a final example, we consider a variant of the hiring problem. Suppose now that we do not wish to interview all the candidates in order to find the best one. We also do not wish to hire and fire as we find better and better applicants. Instead, we are willing to settle for a candidate who is close to the best, in exchange for hiring exactly once. We must obey one company requirement: after each interview we must either immediately offer the position to the applicant or must tell them that they will not receive the job. What is the trade-off between minimizing the amount of interviewing and maximizing the quality of the candidate hired?

We can model this problem in the following way. After meeting an applicant, we are able to give each one a score; let score(i) denote the score given to the ith applicant, and assume that no two applicants receive the same score. After we have seen j applicants, we know which of the j has the highest score, but we do not know if any of the remaining n - j applicants will have a higher score. We decide to adopt the strategy of selecting a positive integer k < n, interviewing and then rejecting the first k applicants, and hiring the first applicant thereafter who has a higher score than all preceding applicants. If it turns out that the best-qualified applicant was among the first k interviewed, then we will hire the nth applicant. This strategy is formalized in the procedure ON-LINE-MAXIMUM(k, n), which appears below. Procedure ON-LINE-MAXIMUM returns the index of the candidate we wish to hire.

ON-LINE-MAXIMUM(k, n)
1  bestscore ← -∞
2  for i ← 1 to k
3       do if score(i) > bestscore
4             then bestscore ← score(i)
5  for i ← k + 1 to n
6       do if score(i) > bestscore
7             then return i
8  return n
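
For readers who prefer running code, here is a Python transcription of the pseudocode (a sketch only; representing the scores as a callable and using 1-based applicant indices are choices made here, not part of the text).

import math

def on_line_maximum(k, n, score):
    """Reject the first k applicants, then hire the first one who beats them all.

    `score` maps a 1-based applicant index to a distinct score.
    Returns the 1-based index of the hired applicant.
    """
    bestscore = -math.inf
    for i in range(1, k + 1):
        if score(i) > bestscore:
            bestscore = score(i)
    for i in range(k + 1, n + 1):
        if score(i) > bestscore:
            return i
    return n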

We wish to determine, for each possible value of k, the probability that we hire the most qualified applicant. We will then choose the best possible k, and implement the strategy with that value. For the moment, assume that k is fixed. Let M(j) = max {score(i) : 1 ≤ i ≤ j} denote the maximum score among applicants 1 through j. Let S be the event that we succeed in choosing the best-qualified applicant, and let Si be the event that we succeed when the best-qualified applicant is the ith one interviewed. Since the various Si are disjoint, we have that Pr {S} = Σ_{i=1}^{n} Pr {Si}. Noting that we never succeed when the best-qualified applicant is one of the first k, we have that Pr {Si} = 0 for i = 1, 2,..., k. Thus, we obtain

(5.13)  Pr {S} = Σ_{i=k+1}^{n} Pr {Si}.

We now compute Pr {Si}. In order to succeed when the best-qualified applicant is the ith one, two things must happen. First, the best-qualified applicant must be in position i, an event which we denote by Bi. Second, the algorithm must not select any of the applicants in positions k + 1 through i - 1, which happens only if, for each j such that k + 1 ≤ j ≤ i - 1, we find that score(j) < bestscore in line 6. (Because scores are unique, we can ignore the possibility of score(j) = bestscore.) In other words, it must be the case that all of the values score(k + 1) through score(i - 1) are less than M(k); if any are greater than M(k), we will instead return the index of the first one that is greater. We use Oi to denote the event that none of the applicants in positions k + 1 through i - 1 are chosen. Fortunately, the two events Bi and Oi are independent. The event Oi depends only on the relative ordering of the values in positions 1 through i - 1, whereas Bi depends only on whether the value in position i is greater than all the values in positions 1 through i - 1. The ordering of positions 1 through i - 1 does not affect whether the value in position i is greater than all of them, and the value in position i does not affect the ordering of positions 1 through i - 1. Thus we can apply equation (C.15) to obtain

Pr {Si} = Pr {Bi ∩ Oi} = Pr {Bi} Pr {Oi}.

The probability Pr {Bi} is clearly 1/n, since the maximum is equally likely to be in any one of the n positions. For event Oi to occur, the maximum value in positions 1 through i - 1 must be in one of the first k positions, and it is equally likely to be in any of these i - 1 positions. Consequently, Pr {Oi} = k/(i - 1) and Pr {Si} = k/(n(i - 1)). Using equation (5.13), we have

Pr {S} = Σ_{i=k+1}^{n} k/(n(i - 1))
       = (k/n) Σ_{i=k+1}^{n} 1/(i - 1)
       = (k/n) Σ_{i=k}^{n-1} 1/i.

We approximate by integrals to bound this summation from above and below. By the inequalities (A.12), we have

∫_k^n (1/x) dx ≤ Σ_{i=k}^{n-1} 1/i ≤ ∫_{k-1}^{n-1} (1/x) dx.

Evaluating these definite integrals gives us the bounds

(k/n)(ln n - ln k) ≤ Pr {S} ≤ (k/n)(ln(n - 1) - ln(k - 1)),

which provide a rather tight bound for Pr {S}. Because we wish to maximize our probability of success, let us focus on choosing the value of k that maximizes the lower bound on Pr {S}. (Besides, the lower-bound expression is easier to maximize than the upper-bound expression.) Differentiating the expression (k/n)(ln n - ln k) with respect to k, we obtain

(1/n)(ln n - ln k - 1).

Setting this derivative equal to 0, we see that the lower bound on the probability is maximized when ln k = ln n - 1 = ln(n/e) or, equivalently, when k = n/e. Thus, if we implement our strategy with k = n/e, we will succeed in hiring our best-qualified applicant with probability at least 1/e.
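
The 1/e bound can be checked empirically. The sketch below (a rough simulation under assumptions made here: distinct scores given by a random permutation, and k = ⌊n/e⌋) estimates how often the reject-the-first-k strategy hires the single best applicant; the observed success rate should land slightly above 1/e ≈ 0.368.

import math
import random

def hire(n, k):
    """Run the reject-first-k strategy on one random applicant ordering.

    Returns True if the hired applicant is the best of all n.
    """
    scores = list(range(1, n + 1))
    random.shuffle(scores)            # random interview order, distinct scores
    best_of_first_k = max(scores[:k]) if k > 0 else -math.inf
    for i in range(k, n):
        if scores[i] > best_of_first_k:
            return scores[i] == n     # hired applicant i; the best has score n
    return scores[n - 1] == n         # forced to hire the last applicant

n = 100
k = int(n / math.e)                   # k = 36 for n = 100
trials = 20000
successes = sum(hire(n, k) for _ in range(trials))
print(successes / trials, "vs 1/e =", 1 / math.e)

For n = 100 and k = 36, the exact success probability (k/n) Σ_{i=k}^{n-1} 1/i works out to roughly 0.37, consistent with the lower bound derived above.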

Exercises 5.4-1

How many people must there be in a room before the probability that someone has the same birthday as you do is at least 1/2? How many people must there be before the probability that at least two people have a birthday on July 4 is greater than 1/2?

Exercises 5.4-2

Suppose that balls are tossed into b bins. Each toss is independent, and each ball is equally likely to end up in any bin. What is the expected number of ball tosses before at least one of the bins contains two balls?

Exercises 5.4-3

For the analysis of the birthday paradox, is it important that the birthdays be mutually independent, or is pairwise independence sufficient? Justify your answer.

Exercises 5.4-4

How many people should be invited to a party in order to make it likely that there are three people with the same birthday?

Exercises 5.4-5

What is the probability that a k-string over a set of size n is actually a k-permutation? How does this question relate to the birthday paradox?

Exercises 5.4-6

Suppose that n balls are tossed into n bins, where each toss is independent and the ball is equally likely to end up in any bin. What is the expected number of empty bins? What is the expected number of bins with exactly one ball?

Exercises 5.4-7

Sharpen the lower bound on streak length by showing that in n flips of a fair coin, the probability is less than 1/n that no streak longer than lg n - 2 lg lg n consecutive heads occurs.

Problems 5-1: Probabilistic counting

With a b-bit counter, we can ordinarily only count up to 2^b - 1. With R. Morris's probabilistic counting, we can count up to a much larger value at the expense of some loss of precision.

We let a counter value of i represent a count of ni for i = 0, 1,..., 2^b - 1, where the ni form an increasing sequence of nonnegative values. We assume that the initial value of the counter is 0, representing a count of n0 = 0. The INCREMENT operation works on a counter containing the value i in a probabilistic manner. If i = 2^b - 1, then an overflow error is reported. Otherwise, the counter is increased by 1 with probability 1/(ni+1 - ni), and it remains unchanged with probability 1 - 1/(ni+1 - ni).

If we select ni = i for all i ≥ 0, then the counter is an ordinary one. More interesting situations arise if we select, say, ni = 2^(i-1) for i > 0 or ni = Fi (the ith Fibonacci number; see Section 3.2).

For this problem, assume that the count represented by the maximum counter value 2^b - 1 is large enough that the probability of an overflow error is negligible.
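
To make the INCREMENT rule concrete, here is a small Python sketch (an illustration only; the class and method names are ours, and the sequence ni = 100i from part b below is chosen as an example).

import random

class ProbabilisticCounter:
    """b-bit counter whose stored value i represents a count of n(i)."""

    def __init__(self, b, n):
        self.b = b
        self.n = n          # n maps counter value i to the represented count
        self.i = 0          # counter starts at 0, representing n(0) = 0

    def increment(self):
        if self.i == 2 ** self.b - 1:
            raise OverflowError("counter is full")
        # Increase the stored value with probability 1/(n(i+1) - n(i)).
        gap = self.n(self.i + 1) - self.n(self.i)
        if random.random() < 1.0 / gap:
            self.i += 1

    def value(self):
        return self.n(self.i)

# Example: n(i) = 100*i, as in part b of this problem.
c = ProbabilisticCounter(b=8, n=lambda i: 100 * i)
for _ in range(10000):
    c.increment()
print(c.value())   # should be close to 10000 on average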

  a. Show that the expected value represented by the counter after n INCREMENT operations have been performed is exactly n.

  b. The analysis of the variance of the count represented by the counter depends on the sequence of the ni. Let us consider a simple case: ni = 100i for all i ≥ 0. Estimate the variance in the value represented by the register after n INCREMENT operations have been performed.

Problems 5-2: Searching an unsorted array

This problem examines three algorithms for searching for a value x in an unsorted array A consisting of n elements.

Consider the following randomized strategy: pick a random index i into A. If A[i] = x, then we terminate; otherwise, we continue the search by picking a new random index into A. We continue picking random indices into A until we find an index j such that A[j] = x or until we have checked every element of A. Note that we pick from the whole set of indices each time, so that we may examine a given element more than once.

  a. Write pseudocode for a procedure RANDOM-SEARCH to implement the strategy above. Be sure that your algorithm terminates when all indices into A have been picked.

  b. Suppose that there is exactly one index i such that A[i] = x. What is the expected number of indices into A that must be picked before x is found and RANDOM-SEARCH terminates?

  c. Generalizing your solution to part (b), suppose that there are k ≥ 1 indices i such that A[i] = x. What is the expected number of indices into A that must be picked before x is found and RANDOM-SEARCH terminates? Your answer should be a function of n and k.

  d. Suppose that there are no indices i such that A[i] = x. What is the expected number of indices into A that must be picked before all elements of A have been checked and RANDOM-SEARCH terminates?

Now consider a deterministic linear search algorithm, which we refer to as DETERMINISTIC-SEARCH. Specifically, the algorithm searches A for x in order, considering A[1], A[2], A[3],..., A[n] until either A[i] = x is found or the end of the array is reached. Assume that all possible permutations of the input array are equally likely.

  e. Suppose that there is exactly one index i such that A[i] = x. What is the expected running time of DETERMINISTIC-SEARCH? What is the worst-case running time of DETERMINISTIC-SEARCH?

  f. Generalizing your solution to part (e), suppose that there are k ≥ 1 indices i such that A[i] = x. What is the expected running time of DETERMINISTIC-SEARCH? What is the worst-case running time of DETERMINISTIC-SEARCH? Your answer should be a function of n and k.

  g. Suppose that there are no indices i such that A[i] = x. What is the expected running time of DETERMINISTIC-SEARCH? What is the worst-case running time of DETERMINISTIC-SEARCH?

Finally, consider a randomized algorithm SCRAMBLE-SEARCH that works by first randomly permuting the input array and then running the deterministic linear search given above on the resulting permuted array.

  h. Letting k be the number of indices i such that A[i] = x, give the worst-case and expected running times of SCRAMBLE-SEARCH for the cases in which k = 0 and k = 1. Generalize your solution to handle the case in which k ≥ 1.

  i. Which of the three searching algorithms would you use? Explain your answer.

Chapter notes

Bollobás [44], Hofri [151], and Spencer [283] contain a wealth of advanced probabilistic techniques. The advantages of randomized algorithms are discussed and surveyed by Karp [174] and Rabin [253]. The textbook by Motwani and Raghavan [228] gives an extensive treatment of randomized algorithms.

Several variants of the hiring problem have been widely studied. These problems are more commonly referred to as "secretary problems." An example of work in this area is the paper by Ajtai, Megiddo, and Waarts [12].