CSCI 503B Analysis of Algorithms.  

Instructor: Funda Ergun


Announcements

Welcome to CSCI 503!

I will use this page to post various announcements and documents; any information you need about this class will be in one of these documents. Please check this page regularly for updates.

Regarding homework submissions, please look below to where the assignments are for more info.

There are some resources on recurrences at the bottom; check them out!

News: the midterm will be on Monday, Oct 27. The preceding lecture will be dedicated to review. You may bring an index card to the exam, with formulas written on one side only.

OK, the worst is over :-) The new homework is posted, do you recognize it?

I have posted a new set of sample questions, take a look below. Final exam coming up on Monday Dec 15 at 8 am -- wake up with the chickens and come to class as the farmers go out to milk their cows...


Basic Information

Location: GY 436 MW 9:30 -- 10:45.

Instructor: Funda Ergun. Office: LH 323. Email: fergun@indiana.edu.

Instructor office hours: M 11:15 -- 12:15 or by appointment.

AI: Jiecao Chen. Office hours Wed 1:00--2:00pm. Email: jiecchen@umail.iu.edu

Textbook: Introduction to Algorithms. T. Cormen, C. Leiserson, R. Rivest, C. Stein.


Grading Policies:

Grade distribution: one midterm (25%), around 6 homeworks (25% total), one project (15%), and a final exam (35%). No collaboration is allowed unless otherwise noted.

Late penalties: 10 points per late day. Talk to me if you have a valid excuse for submitting your work late; the penalty might be waived.

The project can be reading and comparing a couple of papers, some research of your own on a topic, or implementing and evaluating an interesting algorithm, etc. This is typically a no-programming course, but if you're really interested in doing some programming, there is room for that in the project, so talk to me.


Tentative List of Topics, sort-of-blog:

What is an algorithm, analyzing algorithms.

Growth of functions, asymptotic complexity. As of September 8, we are looking into asymptotic complexity: analyzing functions and comparing them. Big-Oh, Theta, Omega, little-oh, and little-omega covered, along with how to write a proof for these and some of their properties.
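As a worked example of the kind of Big-Oh proof mentioned above (the particular function is my own illustrative choice, not one from lecture):

```latex
\textbf{Claim:} $3n^2 + 5n = O(n^2)$.

\textbf{Proof:} For all $n \ge 1$ we have $5n \le 5n^2$, so
\[
  3n^2 + 5n \;\le\; 3n^2 + 5n^2 \;=\; 8n^2 .
\]
Thus the constants $c = 8$ and $n_0 = 1$ witness the definition
$f(n) \le c\,g(n)$ for all $n \ge n_0$. \qed
```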

Sorting/searching. On September 10, we looked at Insertion Sort, Mergesort, and deterministic Quicksort and discussed their best and worst cases. On September 15, we discussed the average case of Quicksort. We then looked at the n log n lower bound for comparison-based sorting using a decision tree. We also reviewed logarithms, exponents, etc. On Sep 18, we continued our discussion of recurrence relations, which began with our sorting algorithms. We saw three techniques for solving them and did some examples. We also solved the egg puzzle!
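As a refresher on one of the algorithms above, here is a minimal Mergesort sketch; the name `merge_sort` and the list-returning interface are my own choices, not code from lecture.

```python
def merge_sort(a):
    """Sort a list by splitting it in half, sorting each half recursively,
    and merging the two sorted halves."""
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    # Merge: repeatedly take the smaller front element of the two halves.
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    out.extend(left[i:])
    out.extend(right[j:])
    return out
```

The two recursive calls plus the linear merge give the recurrence T(n) = 2T(n/2) + O(n), which solves to O(n log n).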

Greedy algorithms. On Sep 22 and 24, we looked into the Master Theorem for solving recurrences some more, then went into Greedy algorithms. We are doing this by considering two problems: one involving scheduling and the other Huffman Coding for data compression. In the meantime, we mentioned the TARDIS!!! As of October 8, we are finishing our discussion of Greedy algorithms and moving on to Dynamic Programming. The proof of the optimality of Huffman Codes was something we thought about a lot! We also proved that everybody has brown eyes, to illustrate the pitfalls of choosing your base case carelessly in inductive proofs. Since some of us used inductive proofs in the last homework, this was an important point to make. New hw assigned on Oct 8.
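A sketch of the greedy idea behind Huffman Coding (not the exact construction from lecture): repeatedly merge the two least-frequent subtrees using a min-heap. The function name `huffman_codes` and the frequency-dictionary interface are my own choices for illustration.

```python
import heapq

def huffman_codes(freq):
    """Greedy Huffman construction. freq maps symbols to frequencies;
    returns a dict mapping each symbol to its bit string."""
    # Heap entries: [total frequency, tie-breaker, [symbol, code], ...]
    heap = [[f, i, [sym, ""]] for i, (sym, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    tiebreak = len(heap)
    while len(heap) > 1:
        lo = heapq.heappop(heap)   # least frequent subtree
        hi = heapq.heappop(heap)   # second least frequent subtree
        for pair in lo[2:]:
            pair[1] = "0" + pair[1]   # descend the left branch
        for pair in hi[2:]:
            pair[1] = "1" + pair[1]   # descend the right branch
        heapq.heappush(heap, [lo[0] + hi[0], tiebreak] + lo[2:] + hi[2:])
        tiebreak += 1
    return dict(heap[0][2:])
```

The greedy choice, always merging the two cheapest subtrees, is exactly the step whose optimality we proved in class.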

Dynamic programming. We discussed how to solve subproblems optimally and combine their solutions into an overall answer efficiently. This is recursion/divide and conquer done in a smart way. We saw three problems: rod cutting, matrix-chain parenthesization, and LCS, with both top-down and bottom-up solutions. The midterm, on Oct 27, comes at the end of dynamic programming.
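As one small example of the bottom-up style mentioned above, a sketch of the rod-cutting DP, assuming the textbook setup where prices[i] is the price of a piece of length i (the function name `rod_cut` is illustrative, not from lecture):

```python
def rod_cut(prices, n):
    """Bottom-up DP: best revenue obtainable from a rod of length n.
    prices[i] is the price of a piece of length i (prices[0] is unused)."""
    best = [0] * (n + 1)
    for length in range(1, n + 1):
        # Try every length for the first piece; the remainder is a
        # smaller subproblem whose answer is already in the table.
        best[length] = max(prices[cut] + best[length - cut]
                           for cut in range(1, length + 1))
    return best[n]
```

Each of the n table entries takes O(n) work to fill, so the whole table costs O(n^2), versus the exponential time of naive recursion.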

NP-Completeness. Big one. Are some problems intrinsically hard, or do we just not know how to solve them? How can we tell? We did a lot of examples here.

Graph algorithms. We did MST and discussed the importance of the choice of data structure. We also covered the basics of network flows.
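One way to make the data-structure point concrete: a sketch of Kruskal's MST algorithm, whose efficiency hinges on the union-find structure. The names and the edge representation are my own choices, not lecture code.

```python
def kruskal(n, edges):
    """Kruskal's MST on vertices 0..n-1; edges are (weight, u, v) tuples.
    Greedy: scan edges by increasing weight, keeping an edge only if its
    endpoints lie in different components (tracked by union-find)."""
    parent = list(range(n))

    def find(x):
        # Find the component representative, with path halving.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    total, chosen = 0, []
    for w, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru != rv:             # different components: edge is safe to add
            parent[ru] = rv      # union the two components
            total += w
            chosen.append((u, v, w))
    return total, chosen
```

With a naive component structure the union/find steps dominate; with union-find the running time is essentially the O(E log E) cost of sorting the edges.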


Homeworks, etc.

All homeworks are due in class. You can write by hand, or type using a tool such as LaTeX (or Word, if you're brave :-) ). You are allowed to consult the textbook and/or your notes, but please don't work in groups, look in another book, or consult the Internet. If you have any questions, please email me or ask in class.

Here is Homework 1. Due September 17. NOTE for QUESTIONS 1, 2: The function has to be positive for positive values of n.

Solutions for Hw 1. This is also on Oncourse.

Homework 2. Due Mon Oct 6. Note: the last question has been changed.

Solutions for Hw 2.

Homework 3. Due Wed Oct 15.

Solutions for Hw 3.

Homework 4. Due Wed Oct 22.

Solutions for Hw 4.

Homework 5. Due Wed Nov 5.

Homework 6. Due Wed Dec 3.

Sample exam

Sample exam 2


Other Resources

Some more on recurrences and examples of solving recurrences using the Master Theorem can be found here and here and here.
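For quick reference, here is the statement of the Master Theorem in the CLRS form, for recurrences $T(n) = a\,T(n/b) + f(n)$ with $a \ge 1$ and $b > 1$:

```latex
\[
T(n) = \begin{cases}
  \Theta\!\left(n^{\log_b a}\right)
    & \text{if } f(n) = O\!\left(n^{\log_b a - \epsilon}\right)
      \text{ for some } \epsilon > 0,\\[2pt]
  \Theta\!\left(n^{\log_b a} \log n\right)
    & \text{if } f(n) = \Theta\!\left(n^{\log_b a}\right),\\[2pt]
  \Theta\!\left(f(n)\right)
    & \text{if } f(n) = \Omega\!\left(n^{\log_b a + \epsilon}\right)
      \text{ and } a\,f(n/b) \le c\,f(n) \text{ for some } c < 1.
\end{cases}
\]
```

For example, Mergesort's recurrence $T(n) = 2T(n/2) + \Theta(n)$ has $n^{\log_2 2} = n$, matching the second case, so $T(n) = \Theta(n \log n)$.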

Some videos that form a nice sequence (the examples start at minute 6:30 in the first one, but the explanation part is also helpful) are here and here and here.

If you go to the above videos, as part of the same sequence you will see some examples of when the Master Theorem does not apply. Those are nice as well.

Greedy algorithms:

You can just Google Huffman coding and get a million useful videos and other resources. Here's one.

Some nice NP-completeness videos, a few of many: here, and here, and here, and here. Thanks to the original authors.


Here are some thoughts on how to write your project reports. This is just a sample; the actual report would depend on your particular project.

End of Serious Stuff  

Fun stuff begins here


I will post mathematical/algorithmic puzzles/notes here. Send me your solutions/puzzles/comments.

I have two pieces of rope, each of which burns completely in an hour (though not necessarily at a uniform rate along its length). How can I measure 45 minutes? You guys solved this very easily.

The five pirates puzzle. Hmmmm.

I have two biased coins with unknown bias, close to a half (say between .4 and .6). How can I simulate a fair coin? Analyze.

I have a building of infinite height and two eggs. I would like to figure out from what floor I can drop an egg without breaking it. Call that height h -- the egg will survive a drop from floor h but not from h+1. Find an algorithm that determines h using the two eggs with the smallest number of drops. You could clearly do this with one egg and h+1 drops: try floors 1, 2, 3, etc. until your egg breaks; the last floor you successfully tried before your egg broke is, by definition, h. The efficiency turns out to be much better with just two eggs. Don't forget that your building is infinitely tall; you can't assume that you know the highest floor number (yes, that would help a lot). This is a good example of how a little more resource (in this case one more egg) can significantly improve your algorithm.

Answer: O(sqrt(h)) drops, where h is the highest floor the egg survives. You try floors 1, 1+2, 1+2+3, 1+2+3+4, 1+2+3+4+5, etc., so the sequence is 1, 3, 6, 10, 15, 21, 28, ... Once your first egg breaks, you go back to just above the last floor it survived and go up one floor at a time. For instance, if it breaks on floor 28, you try 22, 23, ... until it breaks again at h+1. We showed in class that you make O(sqrt h) drops with the first egg, and similarly with the second egg.
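The strategy in the answer can be mechanized to count drops; `drops_to_find` is my own illustrative name, and the code simply simulates the triangular-jump-then-scan strategy described above.

```python
def drops_to_find(h):
    """Count the drops made by the 1, 3, 6, 10, ... strategy to pin down h,
    where the egg survives a drop from floor h but breaks from h+1."""
    drops, step, floor = 0, 1, 0
    # First egg: jump by triangular numbers until it breaks.
    while True:
        floor += step
        drops += 1
        if floor > h:            # first egg breaks at this floor
            break
        step += 1
    low = floor - step + 1       # first floor not yet ruled safe
    high = floor - 1             # last floor that could still be h
    # Second egg: scan low, low+1, ..., stopping when it breaks at h+1
    # (or once every remaining candidate floor is confirmed safe).
    scan_to = min(h + 1, high)
    drops += max(0, scan_to - low + 1)
    return drops
```

Both phases make O(sqrt h) drops: the first egg takes about sqrt(2h) jumps to pass h, and the second scans at most one full gap of the same size.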

Questions asked in class: Is there a lower bound? What happens if we try the Fibonacci series? What if we have more eggs? Why can't we do binary search? (We can't do binary search because the building is infinitely tall; even if it were not, you would still need a number of eggs logarithmic in the height of the building.) The other questions are for you to think about.


If you'd like to see something that will blow your mind, look at this article, or better yet, watch the video. In a nutshell, it shows why 1+2+3+... = -1/12. Yes, a sum of ever-increasing positive integers, when added ad infinitum, gives you a negative fraction. At first glance, the proof seems kosher, except for a little convergence issue. Proof that dealing with infinity is tricky!