# Introduction to Algorithms: Insertion Sort

### 23 March 1999

This is the proof of correctness for insertion sort, using a loop invariant diagram. It is the model answer for a test that was set in week 9 in 1998-9.
You should write accounts similar to this for the other algorithms in the course that process arrays from left to right:
• array shift (already done in the model answer to the first test),
• linear search,
• binary search,
• selection sort,
• insertion sort (already done on this sheet),
• merge (the loop, not recursive merge-sort),
• split (as used in recursive quick-sort).
The pattern for merge-sort and quick-sort is different, as they involve recursion instead of loops. See the end of this sheet for a brief discussion. You will not be expected to prove the correctness of the recursive algorithms in the exam.

You are given an array
```
int[] A = new int[n];
```
that has somehow been filled with ints, and are required to use the insertion sort algorithm to sort it.
(a) Draw a loop invariant diagram to show all of the pointers (indices) into the array that you would use to code the algorithm. It should indicate typical positions of these pointers during the execution of the loop, and what is known about the values in the segments into which the pointers divide the array.
There should be four parts to the diagram: initial, typical, next and final, and the relationships between these should be clearly indicated.
(b) Write the JAVA code to initialise the pointers.
int j=0; (or 1)
(c) How do you know that the loop invariant (a) is justified by the initialisation (b)?
In the case j=0 (or j=1), the "typical" version of the loop invariant says that the "sorted" part is empty (or has exactly one element), so it is trivially sorted, whilst the "untouched" part is the whole (or the rest) of the array, i.e. the array is in its original form.
(d) What is the condition that goes inside the while() statement to decide whether to execute the body of the loop (again) or to exit from it?
while (j < n)
(e) When the insertion sort algorithm was presented to you in lectures, two subroutines (methods) were used inside the main loop of the algorithm. Write the JAVA code that goes inside the body of the loop, making use of these two subroutines.
```
int x = A[j];            // the next "unsorted" entry
int k = search (A, j, x);
shift (A, k, k+1, j-k);
A[k] = x;                // re-insert the new entry
j++;
```
The pieces of code search and shift that are called within the body of the loop above are properly called subroutines. The term method refers to the way in which encapsulated objects are accessed from outside, in the object-oriented programming paradigm, which is not the subject of this course.
(f) In order to prove that insertion sort is correct, what is the specification of the first of these subroutines? That is, what condition, expressed in logical notation, must the inputs, output and possible side-effects satisfy? Be careful to allow for the boundary cases. You are not asked to say anything about the implementation of the subroutine (or even name the algorithm that it might use), or how long it takes.
k = search (A, j, x) finds the position at which x belongs in the sorted segment A[0..j−1]:
(k = 0 ∨ A[k−1] ≤ x) ∧ (0 ≤ k ≤ j) ∧ (x < A[k] ∨ k = j)
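The question does not ask for an implementation, but as an illustration, a linear scan from the right satisfies this specification, including both boundary cases. The class name here is ours; the method signature follows the call in part (e).

```java
class SearchDemo {
    // Find the position k (0 <= k <= j) at which x should be inserted
    // into the sorted segment A[0..j-1], scanning from the right, so that
    // (k == 0 || A[k-1] <= x) && (k == j || x < A[k]) holds on return.
    static int search(int[] A, int j, int x) {
        int k = j;
        while (k > 0 && A[k - 1] > x) {
            k--;
        }
        return k;
    }

    public static void main(String[] args) {
        int[] A = {2, 4, 6, 8, 0};            // A[0..3] is the sorted segment
        System.out.println(search(A, 4, 5));  // prints 2: between 4 and 6
        System.out.println(search(A, 4, 1));  // prints 0: boundary case k = 0
        System.out.println(search(A, 4, 9));  // prints 4: boundary case k = j
    }
}
```

Note that scanning from the right automatically returns the rightmost admissible position when x is equal to an existing entry, which matters for the best-case analysis in part (n).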
(g) What is the specification of the second subroutine? Describe it in English, not logical notation.
shift (A, k, k+1, j-k) shifts the segment of length j−k starting at position k so that it starts at position k+1 instead, leaving the rest of the array (positions < k and > j) as it was.
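Again as an illustration only (the class name is ours), a sketch of a shift subroutine matching this call. Since the source and target segments overlap in the call shift(A, k, k+1, j-k), a rightward shift must copy from the right-hand end first, or it would overwrite entries before reading them.

```java
class ShiftDemo {
    // Move the segment of length len starting at position from so that it
    // starts at position to instead.  Copying from the right-hand end makes
    // this correct for the overlapping rightward shifts (to > from) that
    // insertion sort needs.
    static void shift(int[] A, int from, int to, int len) {
        for (int i = len - 1; i >= 0; i--) {
            A[to + i] = A[from + i];
        }
    }

    public static void main(String[] args) {
        int[] A = {10, 20, 30, 40, 0};
        shift(A, 1, 2, 3);               // move A[1..3] to A[2..4]
        System.out.println(java.util.Arrays.toString(A));
        // prints [10, 20, 20, 30, 40]: position 1 is now free for re-insertion
    }
}
```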
(h) How do you know that the loop invariant (a) is valid after the code in part (e) has been executed? Explain the relevance to this of the assumption that the loop invariant was valid beforehand, the specifications of the two subroutines in (f) and (g) and the actual code in part (e).
The shift and re-insertion effect a permutation on the array; in fact they perform a (j−k+1)-cycle on the affected entries and the identity on the rest. The loop invariant, as assumed at the top of the loop, says that the segment to the left of j was sorted. When j has been incremented, it delimits another such segment, containing an extra entry with value x inserted at position k; this is sorted because the specification of the search subroutine says that the appropriate inequalities hold.
If you chose instead to use the in-line version of the code in part (p), you would have had to give a second loop invariant and proof for the inner loop. This would involve reproducing the whole of the answer to the first test!
(i) How do you know that the loop terminates?
The quantity n−j is a positive integer whenever the loop body is entered, and it strictly decreases on each iteration, so the loop must terminate.
(j) What can you deduce from the loop condition (d) and the loop invariant (a) when the loop has terminated?
When the loop condition fails, j=n. The loop invariant in this case says that the "sorted" part of the array is the whole of it, and the "untouched" part none. It also says that the array as it currently stands is a permutation of the original state. In other words, the array has been sorted as required.
(k) What is the total number of operations that are performed by the second subroutine throughout the execution of the insertion sort algorithm, in the worst case? What "operations" are meant here? Express your answer as a function of n, the size of the array.
In the worst case, where k=0, shift performs j comparisons and assignments on the jth call, totalling ½n(n−1).
(l) What kind of data in the array give rise to the worst-case behaviour?
This happens when ∀j. ∀i < j. A[j] < A[i], i.e. when the array is reverse-sorted (without repetitions).
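The worst-case count can be checked empirically. The sketch below (class and method names are ours) instruments the in-line form of the algorithm from part (p) to count the shifting assignments, and runs it on a reverse-sorted array.

```java
class WorstCaseCount {
    // In-line insertion sort, instrumented to return the number of
    // shifting assignments A[k] = A[k-1] that it performs.
    static int sortCountingShifts(int[] A) {
        int count = 0;
        for (int j = 1; j < A.length; j++) {
            int x = A[j];
            int k = j;
            while (k > 0 && A[k - 1] > x) {
                A[k] = A[k - 1];
                k--;
                count++;
            }
            A[k] = x;
        }
        return count;
    }

    public static void main(String[] args) {
        int n = 10;
        int[] A = new int[n];
        for (int i = 0; i < n; i++) A[i] = n - i;   // reverse-sorted: 10, 9, ..., 1
        System.out.println(sortCountingShifts(A));  // prints 45, i.e. ½n(n−1)
    }
}
```

On the reverse-sorted input the jth insertion does j shifts, and 1 + 2 + ... + (n−1) = ½n(n−1).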
(m) How many operations are performed in the average case? Make clear the relationship between this answer and that to part (k).
In the average case, A[j] is approximately the median of the sorted segment, so shift has to do about j/2 assignments on the jth call, totalling approximately n²/4.
(n) How many operations are performed in the best case, and how must the subroutines be implemented in order to achieve this? You need only describe the essential feature in words: there is no need to write the JAVA code. What kind of data in the array give rise to the best-case behaviour?
In the best case, k=j, so the loop in shift is not executed. This happens when ∀j. ∀i < j. A[j] ≥ A[i], i.e. the array was already sorted (possibly with repetitions). To take advantage of this situation, search must return the rightmost occurrence of x, otherwise any repetitions of x will get shifted unnecessarily. Linear search from the right will do this, but binary search will take n log n steps altogether.
(o) In "O()" notation, what is the complexity of insertion sort in this best case?
O(n), because there are other steps in the main loop (and in the initialisation of search and shift) that are performed once for each iteration of the main loop, i.e. n times.
(p) It is, of course, more efficient to do the job of the two subroutines "in-line" i.e. by writing a nested loop instead of calling them as separate methods. Write the whole of the insertion sort program in this way.
```
int j=1;
while (j<n) {
    int x = A[j];
    int k = j;
    while (k > 0 && A[k-1] > x) {
        A[k] = A[k-1];
        k--;
    }
    A[k] = x;
    j++;
}
```
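As a check, this in-line program can be wrapped in a method and run on sample data (the class and method names here are ours, not part of the model answer):

```java
class InsertionSort {
    // In-line insertion sort: a single pointer j delimits the sorted
    // segment A[0..j-1]; the inner loop searches and shifts at once.
    static void sort(int[] A) {
        int n = A.length;
        int j = 1;
        while (j < n) {
            int x = A[j];
            int k = j;
            while (k > 0 && A[k - 1] > x) {
                A[k] = A[k - 1];
                k--;
            }
            A[k] = x;
            j++;
        }
    }

    public static void main(String[] args) {
        int[] A = {5, 2, 4, 6, 1, 3};
        sort(A);
        System.out.println(java.util.Arrays.toString(A));
        // prints [1, 2, 3, 4, 5, 6]
    }
}
```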

This is www.PaulTaylor.EU/algorithms/insertion_sort.html and it was derived from algorithms/insertion_sort.tex which was last modified on 21 July 2007.