# Introduction to Algorithms: Investigations on the Quadratic Sorting Algorithms

### 5 February 2001

You have probably heard of a sorting algorithm called "bubble sort" - it is included in this course simply to show how stupid it is.
We shall concentrate on algorithms based on these two ideas:
• insertion sort: you have some sorted stuff already, and insert the next value in the appropriate place;
• selection sort: you have some of the lowest values already sorted, and search the rest of the data for the least remaining value.
A fourth algorithm, permutation sort, is only considered very briefly. The idea of this one is to find out the final position of each value before we move anything at all.
It is very common to be given data that are almost sorted - just a few (or a small percentage of) items are out of place. Insertion sort performs very well in this situation: it basically verifies that the values are in order, and does additional work proportional to the extent of the divergence from sortedness. Bubble sort sometimes appears to do something similar, but it can go very wrong; there is an exam question about this.
We study these algorithms from three points of view:
• The "model answers" built into the TESTER allow you to watch the algorithms in action, and find out how long they take to run, without having to program them correctly yourself first.
• Then you will program them yourself.
• We shall also prove correctness using loop invariant diagrams.
There is also a handwritten handout that describes the four quadratic sorting algorithms. This needs to be incorporated.

## 1  Animations

Use Netscape to watch the animations of insertion sort, selection sort and bubble sort that Jon Rowson and Daniel Dor-Chay wrote. (Daniel was a first year Algorithms student when he did this.)
These run correctly with Netscape (with, of course, Java enabled) under Linux, but for some reason they don't work properly with Netscape on NT.
However, you should close Netscape down altogether before you start doing timing experiments, as it will muck things up for you!

## 2  The model answers inside the TESTER

The TESTER can also animate the algorithms, albeit in a less colourful way.
It has model answers for most of the exercises in the course built into it. These can be tested in much the same way as you would test your own code, except that the first argument to run, trace or check is now the name of the algorithm without .java on the end.
(a) Try the following, adding suitable arguments to consider almost sorted data:
```
trace insertion-sort
trace selection-sort
trace bubble-sort
```
As we shall see, bubble sort is the slowest, but the tracing output is a bit misleading, as it doesn't show the search operations that insertion sort and selection sort have to perform first.
(b) Now do the following to find out how long the various sorting algorithms take:
```
check insertion-sort
check insertion-sort sorted
check insertion-sort reverse sorted
check insertion-sort displaced=5
check insertion-sort displacedpc=1
check selection-sort
check bubble-sort
check permutation-sort
check merge-sort
check quick-sort
check heap-sort
check shell-sort
```
together with the obvious variations.
(c) Rank the algorithms in order of speed.
(d) At what size do the n log(n) algorithms overtake insertion sort on unsorted data?
(e) How does the coefficient of n² depend on the amount of unsorted data?
(f) Work out (from the formulae) how long these algorithms would take to sort a million numbers.
(g) Conversely, what is the biggest array that they could sort in an hour?
Textbooks indulge themselves in tabulating the values of functions such as these (i.e. the time taken by certain algorithms) for various given values of n. These values sometimes need to be written using towers of exponents. The practical question is, however, the inverse of this: visitors to your web site are willing to wait for a certain amount of time (one second, maybe), so these functions place a bound on the maximum size of your dataset.
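As a sanity check on questions (f) and (g), here is a back-of-envelope calculation in Java. The cost of 10 nanoseconds per elementary step is an assumption made purely for illustration; the real coefficients must come from your own timing formulae:

```java
public class Growth {
    public static void main(String[] args) {
        double step = 10e-9;   // ASSUMED cost of one elementary step: 10 ns
        double n = 1e6;        // a million numbers

        // Roughly n^2 steps for a quadratic sort, n log2 n for the fast ones.
        double quadratic = n * n * step;
        double nlogn = n * (Math.log(n) / Math.log(2)) * step;

        System.out.printf("n^2     : about %.0f seconds (%.1f hours)%n",
                quadratic, quadratic / 3600);
        System.out.printf("n log n : about %.2f seconds%n", nlogn);
    }
}
```

On these made-up figures the quadratic sorts take hours over a million numbers while the n log(n) ones take a fraction of a second, which is the point of the exercise.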

## 3  Almost sorted data

The TESTER can supply various kinds of random data to your method, both in the single runs and for the timings. Try the following arguments:
• sorted
• reverse sorted
• strictly sorted
• displaced=10
• displacedpc=.5
where the last two provide sorted data and then scatter either 10 or 0.5% respectively of random values in random positions.
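If you want to reproduce this kind of data outside the TESTER, something like the following will do; the class and method names are my own invention, and this is only a guess at how the TESTER generates its displaced data:

```java
import java.util.Random;

public class AlmostSorted {
    // A sorted array of length n in which (at most) k entries have been
    // overwritten with random values at random positions (cf. displaced=k).
    static int[] displaced(int n, int k, long seed) {
        Random rng = new Random(seed);
        int[] a = new int[n];
        for (int i = 0; i < n; i++) a[i] = i;     // sorted to start with
        for (int j = 0; j < k; j++)
            a[rng.nextInt(n)] = rng.nextInt(n);   // scatter a random value
        return a;
    }

    public static void main(String[] args) {
        for (int x : displaced(20, 3, 42L)) System.out.print(x + " ");
        System.out.println();
    }
}
```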
The meanings of the following should be obvious:
• size
• range
• range_upper
• range_lower
For quicksort we shall use
• sawtooth
• sawtoothpc
to provide a "triangular" (decreasing, then increasing) distribution.
When you use check to do timings, the TESTER now creates a gnuplot command file to draw a graph. Then you can look at the graph on the screen by doing
```
gnuplot insertion-sort.plot
```
This file is plain text: you can look at it with emacs. Inside you will find out how to print the graph (on a printer).

## 4  Programming insertion and selection sort

There are templates Insertion.java, Selection.java and Bubble.java in your coursework directories.
These do not assume that you have already written Shift.java etc. and got them working - they use methods out of the TESTER. (If you examine the "object-oriented hocus-pocus" at the end of Insertion.java you can see how it chooses whether to use your code or mine.)
Selection sort requires a method called where_is_min. You can write your own using this template.
Bubble.java shows all that you need to connect your code for a sorting algorithm to the TESTER: the class line has to be
```
public class MySorter implements Sorting
```
where Sorting.java is the following interface, i.e. a list of the methods that you have to supply:
```
public interface Sorting {
    int[] sort (int[] array);
    void choice (int c);
    void cutoff (int n); // used for hybrid mergesort and quicksort
}
```
(cutoff will be used for mergesort and quicksort; the choice feature of the TESTER will be used in the binary search exercise.)
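To see just the wiring, here is a minimal class that satisfies the interface. It cheats by calling the library sort (so it is not one of our algorithms, and the empty choice and cutoff bodies are just placeholders), but it shows where your own code will go; the interface is copied in so that the example is self-contained:

```java
import java.util.Arrays;

interface Sorting {                  // as supplied in Sorting.java
    int[] sort(int[] array);
    void choice(int c);
    void cutoff(int n);
}

public class MySorter implements Sorting {
    public int[] sort(int[] array) {
        int[] result = Arrays.copyOf(array, array.length);
        Arrays.sort(result);         // placeholder: your sorting code goes here
        return result;
    }
    public void choice(int c) { }    // needed later, for binary search
    public void cutoff(int n) { }    // needed later, for mergesort and quicksort
}
```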

## 5  Other things to think about

(a) Given a reverse-sorted array (i.e. in descending order), for each element that insertion sort considers, how many times round the loop in the shift (and linear_search) methods do you go? How many times do you go round this loop altogether, as insertion sort passes over the entire array, as a function of n = array.length? (This is the worst case.)
(b) Suppose instead you have a random array. Where, on average, would you expect each new element to be inserted in the already-sorted part? Repeat the previous calculation in this, the average, case.
(c) How much work does insertion sort do in the best case, of already-sorted data?
Warning: it is an accident that the average case in this situation is mid-way between the best and worst!
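Here is the arithmetic behind this warning, up to lower-order terms: the element considered at position j travels past about j/2 of the already-sorted elements on average, so

```
\text{worst} \;=\; \sum_{j=1}^{n-1} j \;=\; \frac{n(n-1)}{2},
\qquad
\text{best} \;=\; n-1,
\qquad
\text{average} \;=\; \sum_{j=1}^{n-1} \frac{j}{2} \;=\; \frac{n(n-1)}{4}
\;\approx\; \frac{\text{best} + \text{worst}}{2}.
```

The coincidence is that n(n-1)/4 is both half of the worst case and, to leading order, midway between n-1 and n(n-1)/2.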
(d) When the data are mostly already sorted in ascending order, but with some number of misplaced values, how does the execution time depend on this number?
(e) When there is instead some percentage of misplaced values, how does the execution time depend on the percentage?
(f) In implementing the search, why should we look for the last existing occurrence (if any) of the new value?
(g) For the search part of the algorithm, you could have used binary search (instead of linear search). Which was quicker, and by how much? Does it change the overall time-complexity of insertion sort from being quadratic? Even though binary search is inherently quicker than linear search, what still costs more time now?
(h) For the best programmers to think about but not try to program: How could this cost be eliminated? If you did eliminate this cost, would it still be possible to use binary search?

## 6  Further thoughts on bubble sort

(a) What happens to the greatest value on the first pass of bubble sort?
(b) What about the second greatest value? For definiteness, consider an array of length 18 in which the greatest value occurs at position 12 and the second greatest at position 6.
(c) Why does it take at most n passes to sort the array?
Try to formulate a proof, using a loop invariant diagram.
(d) Describe an array that does take all n passes.
(e) Compare the way in which bubble sort works with selection sort (reversing the order). Make precise the statement that selection sort does the same thing, but with fewer assignments.
(f) You have seen that bubble sort has a directional bias to it, which could be removed by passing alternately up and down the array. Implement this and find out how it compares with insertion sort on almost sorted data.
(g) Can you devise almost sorted arrays that force the two-way bubble sort to take O(n²) steps?

## 7  Using a nested loop instead

We implemented insertion sort using separate search and shift methods in order to understand how these components fit together. If you look up insertion sort in any of the algorithms textbooks (do not do this until you have done the exercises in the way that I have told you), you will find instead that the code is written with nested loops.
Copy all three methods (where_to_insert(), shift() and sort()) into Insertion2.java and delete the searcher. and shifter. object-oriented hocus-pocus. Do the timings again. As you will see, it's a lot faster without the OO stuff.
Copy Insertion2.java to a third file, Insertion3.java. Strip the where_to_insert() and shift() methods down to the code that is actually used (discarding the comments). Do the timings again.
Now in-line them - i.e. replace the method calls with the code itself, giving something like this:
```
while (  ) {
    while (   ) {
    }
    while (   ) {
    }
}
```
Do the timings again.
What do you notice about the pointer-values through which the two inner loops run? Combine the two inner loops into one. Try to make the text and the execution of the code as short as possible. Do the timings again.
Now (and only now) look up the code for Insertion Sort in a book.
Do you understand how the textbook works?
Would you have understood the textbook code if you had been given it straight away, or would you have just treated it as a piece of hocus-pocus like the object-oriented stuff above?
Did any of the steps in trimming this code down make any difference to the form of the dependency of the time on the size of the array and on how far it was from being sorted? What changed?
Of these various steps, which one made the most difference to the time?

## 8  Permutation sort

The idea is to find out, for the value originally in array[i], exactly what its final position is in the sorted array, without moving anything.
In Discrete Structures you will learn the formal input-output definition of a function. Sorting must induce a permutation, which is a bijective function. Permutation sort determines this function explicitly.
(a) Assume at first that there are no repetitions in the array. If the value array[i] ends up at array[j], the number j counts something - what? Write a subroutine to do this.
(b) Now for the case with repetitions. (Consider first the situation where all of the values are equal.) Suppose that the various occurrences of each particular value have to remain in the same order in the sorted array ("if you can't decide on a new order, use the old one"). What does j count now? In order for the function to be a permutation, different values of i have to give different values of j.