Data Structures, Algorithms

Intro Sort (Introspective Sort) Document

디버그정 2009. 11. 14. 05:35

A guide to Introsort

My university, BA Stuttgart, has every IT student do a student research project spanning the 5th and 6th semesters. I applied for a topic close to fundamental computer science. The topic I chose, and was luckily assigned, is the implementation and analysis of Introsort, a state-of-the-art sorting algorithm. Below you can find my original paper linked as a PDF, along with a translation that lacks some of the details but gives a complete description of Introsort and an implementation in Java.

Abstract

The paper at hand deals with the introspective sorting algorithm Introsort, which is based on the popular algorithm Quicksort. Over the course of these pages the steps and improvements Quicksort went through, from the form in which it was created to how it is employed today, are presented. The gap from Quicksort to Introsort is then closed and Introsort's inner workings are discussed in detail. The considerations regarding the efficiency of the algorithms are confirmed in a benchmark. At the very end a short summary is given that also acts as a guideline for selecting one of the discussed algorithms in a specific scenario. All code samples are written in Java.


1 Introduction

Sorting algorithms are a central topic in the essentials of computer science, since sorting is a recurring task. Over the years a whole group of algorithms has been created, each of which follows a different approach. Being both simple and efficient, Quicksort is a popular sorting algorithm today. Since its invention many people have tried to enhance it, which is why the currently used forms of Quicksort perform very well and have been thoroughly examined. One of the more recent suggestions for improvement even renames the algorithm to Introsort. The choice of this name is based on the use of introspection, or reflection. Reflection as such means:

Reflection in programming means that an algorithm can gain information about its own structure.

Therefore the step from Quicksort to Introsort also brings a conceptual difference, as Quicksort does not use any reflection whatsoever. Another reason is that Introsort actually consists of a couple of sorting algorithms, so naming it after a single one of them would not be fitting. This paper explains the indicated concept completely by describing Quicksort in its most simple and classic form and the various improvements that were applied over time until it reached its current shape. At the same time, considerations about the efficiency of the discussed algorithms are made. The gained information, both about the sorting algorithms as such and about their efficiency, is put through a series of tests in the benchmarking chapter.

2 Quicksort

Quicksort is one of the simplest and fastest sorting procedures. It was first introduced by C. A. R. Hoare in [Hoare 1961]. This chapter describes its basic functionality and inner workings and discusses its efficiency in best case and worst case scenarios.

2.1 Basic principle

Quicksort implements the divide-and-conquer strategy, which solves a problem in three steps: first the problem is divided into subproblems, which are then handled individually, and finally the outcomes of those subproblems are combined into a global solution. Accordingly, Quicksort divides the array of data to be sorted into two parts and sorts those parts independently of each other. Combining the two parts is trivial given the right implementation. The partitioning of the array depends on the data itself and is realised by choosing a pivot element. According to [Sedgewick 1988] a pivot element at position i, dividing the data into two parts from the left border l to i-1 and from i+1 to the right border r, needs to meet three requirements:

  1. The chosen pivot element is at its final position within the array of data
  2. All elements within the l to i-1 part are smaller than or equal to the pivot element
  3. All elements within the i+1 to r part are greater than or equal to the pivot element

Quicksort has a method to ensure that these requirements are met, and it can be realised in a fairly simple way. First an arbitrary pivot element is chosen. Since there is no guarantee that the first requirement is fulfilled, the pivot needs to be moved to its final position. This can be achieved by making sure that the second and third requirements are satisfied, since logically 2 AND 3 imply 1. The approach is as follows: a pointer is set to each of the left and right borders of the array. The left pointer scans the data from the left until an element is found that is greater than or equal to the pivot element. The right pointer does the same from the right side, only looking for elements that are smaller than or equal to the pivot element. Once elements are found on both sides they trade positions within the array. This is repeated until all listed requirements are met, which happens when the two pointers meet. The very same procedure is then applied recursively to the two parts of the data to the left and right of the pivot element, until the length of the subarrays is minimal and the whole array is sorted.
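The procedure described above can be sketched as follows. This is a minimal sketch on plain int arrays for brevity, not the paper's own implementation (which works on Sortable objects, see the appendix); the class name and the choice of the last element as pivot are illustrative.

```java
// Minimal sketch of classic Quicksort: two pointers scan towards each
// other, out-of-place pairs are swapped, then both parts are recursed on.
public class ClassicQuicksort {
    public static void sort(int[] a) { sort(a, 0, a.length - 1); }

    private static void sort(int[] a, int l, int r) {
        if (l >= r) return;                        // length <= 1 is sorted
        int pivot = a[r];                          // arbitrary choice: last element
        int i = l, j = r - 1;
        while (true) {
            while (i < r && a[i] < pivot) i++;     // scan from the left
            while (j > l && a[j] > pivot) j--;     // scan from the right
            if (i >= j) break;                     // pointers have met
            int t = a[i]; a[i] = a[j]; a[j] = t;   // swap out-of-place pair
            i++; j--;
        }
        int t = a[i]; a[i] = a[r]; a[r] = t;       // pivot to its final position
        sort(a, l, i - 1);                         // recurse on the left part
        sort(a, i + 1, r);                         // recurse on the right part
    }
}
```

After the swap of a[i] and a[r], all three requirements from the list above hold for position i, so both halves can be sorted independently.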

2.2 Efficiency

This section examines the efficiency of Quicksort regarding its runtime behaviour. The variable of interest is the number of elements in the array of data to be sorted; this is also the case for every efficiency consideration further into the document.

Best-case behaviour: When Quicksort divides each array of data into two parts of equal length for every subarray in the recursive process, the best case performance is reached.

[Figure: Quicksort best case behaviour]

The algorithm is finished when every subarray has a length of 1. The recursion depth is easy to calculate when the problem is viewed bottom-up. The question to ask is: how often does 1 need to be doubled to reach the original array length?

2^x = n <-> x = log(n)/log(2) = lb(n)

So there are lb(n) levels on each of which n elements need to be processed. Therefore the complexity in a best case scenario is n*lb(n).

Worst-case behaviour: In the worst case Quicksort divides the data into two subarrays of length 1 and n-1. Then the recursion degenerates as depicted in the graphic:

[Figure: Quicksort worst case behaviour]

In this case the recursion depth is n-1 and the number of elements that need to be processed decreases by 1 on each level. According to the Gauss sum formula:

n + n-1 + … + 1 = n*(n+1)/2

Therefore Quicksort has quadratic worst case behaviour. This worst case is largely independent of the choice of the pivot element and exists for all variations of Quicksort.

3 Special aspects of Quicksort

This chapter discusses various enhancements to the basic principle of Quicksort mentioned in the previous chapter and lays the foundation for further considerations that will be made when Introsort itself is discussed.

3.1 Picking a pivot element

Quicksort's behaviour is significantly determined by the choice of the pivot element.

The best possible pivot element: The very best pivot element is the one which divides every subarray into further subarrays of equal length. As mentioned before, this causes the best case behaviour. The median element of the current array would meet this requirement. However, finding this element costs additional linear work on each partitioning step; even though exact selection is possible, the added work and its constant factors are so large that exact median selection is not worth implementing.

Using a random pivot element: Picking a random pivot element for partitioning makes worst case behaviour very unlikely. However, a non-deterministic factor is added to the algorithm: sorting the very same array of data twice could result in radically different computation times, which might be quite objectionable depending on the task and context. In addition, a generator for random numbers needs to be available. Even though the quality of the random numbers does not have to be great, this could be a limitation on very basic hardware platforms.
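Picking a random pivot could look like the following sketch; the helper class is hypothetical and only illustrates the idea of drawing a uniform index from the current subarray before partitioning.

```java
import java.util.Random;

// Hypothetical helper: selects a random pivot index within the
// half-open subarray [lo, hi) before partitioning.
public class RandomPivot {
    private static final Random RND = new Random();

    public static int pick(int lo, int hi) {
        return lo + RND.nextInt(hi - lo);   // uniform index in [lo, hi)
    }
}
```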

The median-of-3 pivot element: Another alternative is to pick the median of the first, middle and last element of the array of data. To come to a result, each candidate needs to be compared to the other candidates. The decision table below explains the flow; when implemented with if...else structures, the result can be computed with 2 or 3 comparisons, so the cost of finding the pivot element is very small. In the table the three elements a, b and c are compared:

a>b|         true        |       false
---+---------------------+----------------------
b>c| true |     false    | true         | false
---+---------------------+----------------------
a>c|  X   | true | false | true | false |  X
---+---------------------+----------------------
   |  b   |  c   |  a    |  a   |  c    |  b
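The decision table above translates directly into nested if...else statements. This is a sketch on plain int values with illustrative names; the paper's own medianof3 in the appendix works on Sortable array elements instead.

```java
// Median of three values with 2-3 comparisons, following the
// decision table above column by column.
public class MedianOf3 {
    public static int medianOf3(int a, int b, int c) {
        if (a > b) {
            if (b > c) return b;                // a > b > c: b is the median
            return (a > c) ? c : a;             // b is the smallest of the three
        } else {
            if (b > c) return (a > c) ? a : c;  // b is the largest of the three
            return b;                           // a <= b <= c: b is the median
        }
    }
}
```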

This way of picking the pivot element improves Quicksort quite a lot compared to a trivial choice like always using the first, middle or last element. A worst case scenario becomes highly unlikely. The absolute worst case is even excluded, as long as no value appears in the array twice, since the minimum or maximum element cannot be picked. In addition, Quicksort reaches its best case behaviour when used on an (almost) presorted array of data, which is a common task. This is the strategy that is most often implemented in practice.

3.2 Creating a worst case scenario

As mentioned before, the worst case is nearly independent of the choice of the pivot element; while the degenerate inputs might differ a little, they still mean quadratic complexity. Today's common implementations of Quicksort use a random or median-of-3 pivot element. For a random pivot it is not possible to construct a fixed set of data that triggers the worst case behaviour, therefore the worst case will be generated here for a median-of-3 Quicksort. The special permutation that causes this algorithm to degenerate into quadratic behaviour was called the median-of-3 killer sequence by its inventor David Musser [Musser 1997]. His definition:

Let k be even and positive; then the following permutation of 1, 2, …, 2k is a median-of-3 killer sequence: 1, k+1, 3, k+3, 5, …, 2k-3, k-1, 2k-1, 2, 4, 6, …, 2k-2, 2k

However, there is no hint in the literature as to how to create those sequences algorithmically, therefore the following verbalisation might be more suitable:

Let k be even and positive, and a[] an array of data from a[1] to a[2k]. For each odd i from 1 to k let a[i] = i and a[i+1] = k+i. For each i from 1 to k let a[k+i] = 2i.

The actual implementation of this algorithm can be found in the appendix.

3.3 Limiting the minimum section length

Another enhancement to Quicksort is to use a better-suited algorithm for very small subarrays of data. At those sizes Quicksort is no longer efficient, since the algorithm is fairly complex for sorting arrays with only a few elements. This can be implemented by checking not whether the left and right borders of the subarray are equal, but whether they are within a certain distance of each other. Then the algorithm has two options: it can either run a different sorting algorithm on that subarray right away, or stop and later run that algorithm over the whole array in one pass. Both methods work equally well. [Sedgewick 1988] suggests using Insertionsort. When limiting the minimum section length to 5 to 25 elements, about 20% of the time for the whole process can be saved.
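The second variant described above, stopping the recursion early and finishing with one Insertionsort pass, can be sketched like this. It is a minimal sketch on plain int arrays; the cutoff value of 16 and all names are illustrative, not the paper's implementation.

```java
// Sketch of the cutoff idea: Quicksort leaves subarrays below CUTOFF
// untouched, then one Insertionsort pass finishes the nearly-sorted array.
public class CutoffQuicksort {
    private static final int CUTOFF = 16;   // [Sedgewick 1988] suggests 5 to 25

    public static void sort(int[] a) {
        quicksort(a, 0, a.length - 1);
        insertionsort(a);                   // cheap on a nearly-sorted array
    }

    private static void quicksort(int[] a, int l, int r) {
        if (r - l < CUTOFF) return;         // leave small subarrays alone
        int pivot = a[r];
        int i = l, j = r - 1;
        while (true) {
            while (i < r && a[i] < pivot) i++;
            while (j > l && a[j] > pivot) j--;
            if (i >= j) break;
            int t = a[i]; a[i] = a[j]; a[j] = t;
            i++; j--;
        }
        int t = a[i]; a[i] = a[r]; a[r] = t;
        quicksort(a, l, i - 1);
        quicksort(a, i + 1, r);
    }

    private static void insertionsort(int[] a) {
        for (int i = 1; i < a.length; i++) {
            int t = a[i], j = i;
            while (j > 0 && a[j - 1] > t) { a[j] = a[j - 1]; j--; }
            a[j] = t;
        }
    }
}
```

Insertionsort is a good fit here because after the truncated Quicksort every element is at most CUTOFF-1 positions away from its final place.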

4 Introsort

The following chapter presents the conceptual and implementation-specific details of Introsort. Subsequently its efficiency is examined.

4.1 Basic principle

In addition to the enhancements to Quicksort discussed in the previous chapter, Introsort introduces another change - the one that brings the introspective element into play. It monitors the recursion depth the algorithm reaches as a function of the length of the array of data. Since the recursion depths for the best and worst case runtimes are known, a reasonable threshold in between can be calculated dynamically. Once this threshold is exceeded, Introsort detects that the Quicksort it uses is degenerating towards quadratic behaviour. The reaction is to change the sorting algorithm for the current subarray of data. In [Musser 1997] it is suggested to use Heapsort. While Heapsort is in fact a very good choice, its actual inner workings are not important here. For reasons of completeness, and so that all code used in the implementation later on is documented, there is a section in the appendix explaining Heapsort in detail.

4.2 Implementation

To keep the exemplary implementation simple, ordinary arrays will be sorted, and instead of iterator classes, as used in Musser's STL implementation, simple integer index variables will be used. The following source code displays Introsort's inner loop. The arguments lo and hi contain the limits of the subarray within the whole array. The variable depth_limit is a counter that watches and limits the recursion depth as discussed earlier; before each recursive call of the inner loop it is decremented, and once its value reaches 0 the criterion to switch to Heapsort is met. The partition() method is identical to its Quicksort equivalent (see the Quicksort chapter and the appendix containing the whole source code).

private static void introsort_loop(int lo, int hi, int depth_limit)
{
    while (hi - lo > size_threshold)
    {
        if (depth_limit == 0)
        {
            heapsort(lo, hi);
            return;
        }
        depth_limit--;
        int p = partition(lo, hi, medianof3(lo, lo + ((hi - lo) / 2) + 1, hi - 1));
        introsort_loop(p, hi, depth_limit);
        hi = p;
    }
}

Also in this implementation, subarrays smaller than size_threshold are ignored. Musser suggests setting this variable to 16, but depending on what is known about the data, varying this parameter can optimise the sorting process. As suggested earlier, after Introsort is done, Insertionsort puts the finishing touches on the whole array; a complete description of Insertionsort can also be found in the appendix. It is also notable that size_threshold is checked with a higher priority than depth_limit, which therefore does not limit the number of recursion levels until the data is sorted, but until the minimum subarray length is reached. So there is a correlation between those parameters: when size_threshold is enlarged, depth_limit could be reduced, and vice versa.

4.3 Efficiency

Since best and worst case examinations have already been made for Quicksort, it is not necessary to repeat all of them for Introsort. In a best case scenario Introsort behaves the same way Quicksort does, except that it takes a fixed number of extra operations to track the current recursion level. The overall complexity therefore does not differ; only a small linear overhead is added. The interesting part, however, is the worst case behaviour.

Until the limitation of the recursion depth comes into play, Introsort behaves like a worst case Quicksort. For the rest of the data Heapsort is applied, and Heapsort has the same complexity of O(n * lb(n)) regardless of the permutation of the data. Let r be the maximum recursion depth and n the length of the array that is to be sorted. Then Introsort's worst case complexity is:

f(n) = n + (n-1) + … + (n-r+1) + (n-r)*lb(n-r) = n*r - r*(r-1)/2 + (n-r)*lb(n-r)

The interpretation of this formula is complicated by the fact that r is not a constant but a function of n; Musser suggests in his original paper that r = 2 * lb(n). It is not trivial to simplify the equation further, but it can be analysed by looking at the summands separately. The summands n * r = 2 * n * lb(n) and (n-r) * lb(n-r) each have a complexity of O(n * lb(n)); adding them does not change that, it just introduces a constant factor (which is simply left out in big-O notation). The remaining summand behaves like O(lb(n)^2); since its sign is negative it even has a positive influence, and in any case it does not change the overall complexity. The picture below shows the discussed equation precisely and compares it to other complexities: g(x) depicts the equation above, h(x) = x * log(x) and i(x) = x^2. The bottom line is: Introsort's worst case behaviour is far from quadratic, but there is a very noticeable difference to a pure n * log(n) behaviour.

[Figure: Introsort worst case behaviour compared to h(x) = x * log(x) and i(x) = x^2]

5 Benchmark

The following chapter presents the results of the practical analysis of Introsort, which seek to validate the theoretical examinations made previously. On top of that, Introsort is compared to other sorting algorithms in different sorting scenarios. The most obvious method is to measure the time needed to sort an array of data in each scenario using all sorting algorithms under test. The downside of measuring time is that the resource allocation of the underlying operating system distorts the results in a way that is hard to assess [Wagenknecht 2003]. Also, as computing speeds get ever faster, the minimal array length that takes a measurable amount of time, i.e. in milliseconds, grows accordingly; yet looking at fairly small arrays is interesting too.

An alternative approach decouples the measurements from the underlying hardware and technology by looking solely at the elementary computing steps necessary to execute the algorithm. These can be counted and compared across algorithms. For sorting algorithms the interesting basic operations are comparisons and assignments; hardly any sorting algorithm needs complex arithmetic computations. The assumption is also made that basic structures like loops and function calls are needed equally by all algorithms and do not take a huge amount of time. For convenience, the discussion of the results talks about the speed of an algorithm even though basic operations are counted and time is only indirectly involved. The following two sections present both kinds of tests.
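Counting basic operations can be sketched with a wrapper element type like the one below. This is an illustration, not the counter mechanism used in the paper (which counts inside its own Sortable implementations); the class name and the global counter are hypothetical, and assignments would be tallied analogously inside the sorting routines.

```java
// Sketch of operation counting: a wrapper element that tallies every
// comparison made through it, so sorts can be compared hardware-independently.
public class CountingInt implements Comparable<CountingInt> {
    public static long comparisons = 0;     // global comparison counter
    private final int value;

    public CountingInt(int value) { this.value = value; }

    @Override
    public int compareTo(CountingInt other) {
        comparisons++;                      // every comparison is tallied here
        return Integer.compare(this.value, other.value);
    }
}
```

To benchmark, the counter is reset, an array of wrapped values is sorted, and the counter is read afterwards.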

5.1 Basic operation benchmark

For the basic operation benchmark four algorithms are considered. The first is Introsort in the presented implementation; as a short summary of its features: median-of-3 pivot element, minimal subarray length of 16, limitation of the recursion depth, and Heapsort as the stopper algorithm. The second participant is an optimized (opt.) Quicksort, which implements the median-of-3 pivot element and a minimal subarray length of 16. The third algorithm is a basic Quicksort that only uses the median-of-3 pivot element. The fourth algorithm is Heapsort, to check whether one would generally be better off using it in the first place. This group of sorting algorithms is used to run tests with arrays of varying lengths containing random numbers, presorted numbers and median-of-3 killer sequences.

In the following table containing the results for the randomly ordered arrays, some observations can be made: Introsort and Quicksort (opt.) are on average 10%-15% faster than the unoptimized version. The overhead of Introsort compared to the optimized Quicksort is only a very small percentage. Heapsort is the slowest algorithm for random arrays and takes about twice as long as Quicksort and Introsort. Basic operations for random arrays:

Length (in 1000) | Algorithm        | Assignments | Comparisons | Total
-----------------+------------------+-------------+-------------+----------
1                | Introsort        |        21.6 |        17.7 |     39.3
                 | Quicksort (opt.) |        21.5 |        17.7 |     39.2
                 | Quicksort        |        22.4 |        23.2 |     45.5
                 | Heapsort         |        36.1 |        36.3 |     72.4
4                | Introsort        |       102.7 |        83.1 |    185.8
                 | Quicksort (opt.) |       102.2 |        83.1 |    185.4
                 | Quicksort        |       105.4 |       106.2 |    211.6
                 | Heapsort         |       172.5 |       177.4 |    349.9
16               | Introsort        |       470.9 |       377.8 |    848.7
                 | Quicksort (opt.) |       469.2 |       377.8 |    847.0
                 | Quicksort        |       495.7 |       493.8 |    989.5
                 | Heapsort         |       803.3 |       838.3 |   1641.6
64               | Introsort        |      2141.2 |      1709.3 |   3850.5
                 | Quicksort (opt.) |      2134.6 |      1709.3 |   3843.9
                 | Quicksort        |      2198.8 |      2155.5 |   4354.3
                 | Heapsort         |      3658.6 |      3863.1 |   7521.7
256              | Introsort        |      9629.7 |      7666.5 |  17296.3
                 | Quicksort (opt.) |      9603.1 |      7666.5 |  17269.6
                 | Quicksort        |      9790.2 |      9499.1 |  19289.4
                 | Heapsort         |     16423.7 |     17498.0 |  33921.7
1024             | Introsort        |     42482.9 |     33676.1 |  76159.0
                 | Quicksort (opt.) |     42376.6 |     33676.1 |  76052.7
                 | Quicksort        |     43347.4 |     41711.0 |  85058.4
                 | Heapsort         |     72873.9 |     78192.9 | 151066.7

The next test, with presorted arrays of data, shows nothing new about the relation between Introsort and Quicksort. Interesting, however, is the direct comparison with the results for the random arrays: since the median-of-3 strategy steers the Quicksort family to best case behaviour, these algorithms gain an astonishing 40% of speed. Heapsort, however, needs even more operations. Basic operations for presorted arrays:

Length (in 1000) | Algorithm        | Assignments | Comparisons | Total
-----------------+------------------+-------------+-------------+----------
1                | Introsort        |        10.6 |         9.8 |     20.4
                 | Quicksort (opt.) |        10.5 |         9.8 |     20.3
                 | Quicksort        |        13.5 |        14.1 |     27.6
                 | Heapsort         |        38.2 |        38.1 |     76.3
4                | Introsort        |        50.4 |        47.1 |     97.5
                 | Quicksort (opt.) |        50.1 |        47.1 |     97.2
                 | Quicksort        |        61.9 |        64.6 |    126.5
                 | Heapsort         |       181.4 |       185.0 |    366.5
16               | Introsort        |       233.8 |       220.3 |    454.1
                 | Quicksort (opt.) |       232.6 |       220.3 |    452.8
                 | Quicksort        |       279.6 |       290.2 |    569.8
                 | Heapsort         |       837.8 |       868.8 |   1706.5
64               | Introsort        |      1063.2 |      1009.1 |   2072.4
                 | Quicksort (opt.) |      1058.3 |      1009.1 |   2067.4
                 | Quicksort        |      1246.2 |      1288.9 |   2535.1
                 | Heapsort         |      3797.4 |      3986.2 |   7783.7
256              | Introsort        |      4765.0 |      4548.6 |   9313.6
                 | Quicksort (opt.) |      4745.2 |      4548.6 |   9293.8
                 | Quicksort        |      5496.8 |      5667.5 |  11164.4
                 | Heapsort         |     17003.1 |     18005.9 |  35009.0
1024             | Introsort        |     21108.0 |     20242.4 |  41350.5
                 | Quicksort (opt.) |     21028.9 |     20242.4 |  41271.3
                 | Quicksort        |     24035.3 |     24718.0 |  48753.4
                 | Heapsort         |     75269.3 |     80325.5 | 155594.8

The last and most important test uses median-of-3 killer sequences. Here it becomes apparent how Introsort improves Quicksort's worst case behaviour. Even for small arrays with a length of only 1000 elements, Introsort is twice as fast as Quicksort, and as the arrays get bigger this factor keeps growing: at 4000 elements Introsort is 5.8 times faster, at 16000 elements 19 times, at 64000 elements 65.7 times, et cetera. Another interesting fact, comparing the two Quicksort variants, is that the limitation of the minimum subarray length is at its most effective in the worst case scenario: here the optimized version is a full 50% faster and can sort longer arrays without crashing. In this case Heapsort is of course the fastest sorting algorithm, since Introsort runs with worst case behaviour until switching to Heapsort; Heapsort needs only about 65% of Introsort's basic operations to sort a median-of-3 killer array. Basic operations for median-of-3 killer sequences:

Length (in 1000) | Algorithm        | Assignments | Comparisons | Total
-----------------+------------------+-------------+-------------+--------------
1                | Introsort        |        54.6 |        53.6 |        108.1
                 | Quicksort (opt.) |       110.5 |       108.3 |        218.8
                 | Quicksort        |       206.5 |       208.8 |        415.3
                 | Heapsort         |        35.9 |        35.8 |         71.7
4                | Introsort        |       270.6 |       270.3 |        540.9
                 | Quicksort (opt.) |      1585.3 |      1574.2 |       3159.5
                 | Quicksort        |      3095.5 |      3103.6 |       6199.0
                 | Heapsort         |       171.1 |       174.8 |        346.0
16               | Introsort        |      1270.7 |      1285.8 |       2556.5
                 | Quicksort (opt.) |     24419.0 |     24366.9 |      48785.9
                 | Quicksort        |     48473.5 |     48502.3 |      96975.8
                 | Heapsort         |       796.8 |       827.9 |       1624.7
64               | Introsort        |      5806.9 |      5931.1 |      11737.9
                 | Quicksort (opt.) |    386043.4 |    385804.8 |     771848.2
                 | Quicksort        |    770320.7 |    770420.7 |    1540741.4
                 | Heapsort         |      3636.1 |      3824.5 |       7460.6
256              | Introsort        |     26052.9 |     26799.7 |      52852.6
                 | Quicksort (opt.) |   6153857.4 |   6152777.5 | ca. 1.23*10^7
                 | Quicksort        |           - |           - |            -
                 | Heapsort         |     16322.3 |     17329.7 |      33652.0

5.2 Time benchmark

The time benchmark only considers randomly ordered arrays of different lengths, but a bigger group of algorithms is benchmarked: different variants of Quicksort, Qsort as used in the FreeBSD libs, Psort of Plan9, Dsort according to Dijkstra, Heapsort and Introsort. The machine used is an AMD Athlon 64 3200+ with 2GB RAM, Ubuntu Linux 6.10 (kernel version 2.6.17-11-generic) and Java 1.5. To level out the resource allocation problems discussed earlier, each algorithm sorts the same random permutation of data 100 times, and the result is the median time of these sorting processes, which can be found in the table below. The only Quicksort variant that clearly indicates an overhead for running Introsort's extra intelligence is Qsort as used in the FreeBSD libraries [QSO 1993], which is 5%-8% faster. This increase in speed, however, is bought by accepting a quadratic worst case behaviour. A test against this behaviour was not carried out, since the permutations that would be needed are quite different from the median-of-3 killer sequences, because the choice of the pivot element is much more complex in this algorithm. Aside from that, the results would not show any new insights compared to the basic operations benchmark. All times are in milliseconds:

Length (in 1000)                      |    1 |    4 |    16 |    64 |    256 |     1024
--------------------------------------+------+------+-------+-------+--------+---------
Quicksort (textbook)                  | 0.83 | 3.41 |  7.73 | 43.02 | 228.39 |  1155.47
QuicksortOhneRekursionsProbleme       | 0.98 | 1.73 |  7.77 | 42.54 | 232.5  |  1153.4
QuicksortOhneRekursionsProblemeHelbig | 0.95 | 1.73 |  7.81 | 43.6  | 226.59 |  1187.25
Qsort (FreeBSD)                       | 0.93 | 1.57 |  6.54 | 34.98 | 179.52 |   919.5
Psort (Plan9)                         | 1.05 | 1.88 |  8.69 | 44.43 | 243.55 |  1213.03
Dsort (Dijkstra)                      | 1.17 | 2.18 |  9.07 | 48.97 | 240.04 |  1191.03
QuicksortSchwinn                      | 1.17 | 2.13 | 11.53 | 86.73 | 905.84 | 14226.31
Introsort                             | 0.95 | 1.67 |  7.46 | 40.16 | 210.49 |  1055.4
Heapsort                              | 1.12 | 2.35 | 12.23 | 79.68 | 470.25 |  2328.95
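The median-of-100-runs measurement scheme described above can be sketched as follows. The helper class and method names are hypothetical, and System.nanoTime stands in for whatever clock the original benchmark used; taking the median of repeated runs damps scheduler and allocation noise.

```java
import java.util.Arrays;

// Sketch of the timing scheme: run the same task repeatedly and
// report the median wall-clock duration in milliseconds.
public class MedianTimer {
    public static double medianMillis(Runnable task, int runs) {
        long[] nanos = new long[runs];
        for (int i = 0; i < runs; i++) {
            long start = System.nanoTime();
            task.run();                       // e.g. sort a copy of the array
            nanos[i] = System.nanoTime() - start;
        }
        Arrays.sort(nanos);                   // median damps outlier runs
        return nanos[runs / 2] / 1e6;
    }
}
```

Note that each run should sort a fresh copy of the same permutation, otherwise from the second run on a presorted array is measured.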

6 Conclusion

Putting this paper's conclusion into one statement: Introsort reaches the goals that were set. As has been shown, the worst case behaviour of Quicksort is drastically improved, making Introsort a sorting algorithm that can be used in nearly every situation. However, when deciding on an algorithm for a specific task, the conditions of that task need to be considered. If no information about the permutation of the data is available and worst case behaviour needs to be ruled out, Introsort is a premium choice. If some information is available, it is usually not too hard to find a way of choosing a pivot element for a regular Quicksort implementation that makes Introsort's overhead unnecessary. For strongly predetermined permutations of data, even more specialised sorting algorithms can be considered.

Appendix A: Sorting algorithms

Descriptions of the other sorting algorithms mentioned in the paper. To be typed up.

A1 Insertion sort
A2 Heapsort

Appendix B: Source Code

The three sections below contain the source code that has been used to write the paper above.

B1 Algorithm to create median-of-3 killer sequences
private static Sortable[] createMedianOf3KillerArray(int length)
{
    if ((length % 2) == 1)
        return null;
    int k = length / 2;
    Sortable[] a = new Sortable[length];
    for (int i = 1; i <= k; i++)
    {
        if ((i % 2) == 1)
        {
            a[i-1] = new SortableInteger(i);
            a[i] = new SortableInteger(k+i);
        }
        a[k+i-1] = new SortableInteger(2*i);
    }
    return a;
}
B2 The class Introsort

The Introsort Java class.

public class Introsort implements Sorter
{
    /*
     * Class variables
     */
    private static Sortable[] a;
    private static int size_threshold = 16;

    /*
     * Public interface
     */
    public void sort(Sortable[] a0)
    {
        a = a0;
        introsort_loop(0, a.length, 2*floor_lg(a.length));
        insertionsort(0, a.length);
    }

    public void sort(Sortable[] a0, int begin, int end)
    {
        if (begin < end)
        {
            a = a0;
            introsort_loop(begin, end, 2*floor_lg(end-begin));
            insertionsort(begin, end);
        }
    }

    /*
     * Quicksort algorithm modified for Introsort
     */
    private static void introsort_loop(int lo, int hi, int depth_limit)
    {
        while (hi-lo > size_threshold)
        {
            if (depth_limit == 0)
            {
                heapsort(lo, hi);
                return;
            }
            depth_limit = depth_limit-1;
            int p = partition(lo, hi, medianof3(lo, lo+((hi-lo)/2)+1, hi-1));
            introsort_loop(p, hi, depth_limit);
            hi = p;
        }
    }

    private static int partition(int lo, int hi, Sortable x)
    {
        int i = lo, j = hi;
        while (true)
        {
            while (a[i].smaller(x)) i++;
            j = j-1;
            while (x.smaller(a[j])) j = j-1;
            if (!(i < j))
                return i;
            exchange(i, j);
            i++;
        }
    }

    private static Sortable medianof3(int lo, int mid, int hi)
    {
        if (a[mid].smaller(a[lo]))
        {
            if (a[hi].smaller(a[mid]))
                return a[mid];
            else
            {
                if (a[hi].smaller(a[lo]))
                    return a[hi];
                else
                    return a[lo];
            }
        }
        else
        {
            if (a[hi].smaller(a[mid]))
            {
                if (a[hi].smaller(a[lo]))
                    return a[lo];
                else
                    return a[hi];
            }
            else
                return a[mid];
        }
    }

    /*
     * Heapsort algorithm
     */
    private static void heapsort(int lo, int hi)
    {
        int n = hi-lo;
        for (int i = n/2; i >= 1; i = i-1)
        {
            downheap(i, n, lo);
        }
        for (int i = n; i > 1; i = i-1)
        {
            exchange(lo, lo+i-1);
            downheap(1, i-1, lo);
        }
    }

    private static void downheap(int i, int n, int lo)
    {
        Sortable d = a[lo+i-1];
        int child;
        while (i <= n/2)
        {
            child = 2*i;
            if (child < n && a[lo+child-1].smaller(a[lo+child]))
            {
                child++;
            }
            if (!d.smaller(a[lo+child-1])) break;
            a[lo+i-1] = a[lo+child-1];
            i = child;
        }
        a[lo+i-1] = d;
    }

    /*
     * Insertion sort algorithm
     */
    private static void insertionsort(int lo, int hi)
    {
        int i, j;
        Sortable t;
        for (i = lo; i < hi; i++)
        {
            j = i;
            t = a[i];
            while (j != lo && t.smaller(a[j-1]))
            {
                a[j] = a[j-1];
                j--;
            }
            a[j] = t;
        }
    }

    /*
     * Common methods for all algorithms
     */
    private static void exchange(int i, int j)
    {
        Sortable t = a[i];
        a[i] = a[j];
        a[j] = t;
    }

    private static int floor_lg(int a)
    {
        return (int)(Math.floor(Math.log(a)/Math.log(2)));
    }
}
B3 Implemented interfaces

Introsort implements this interface:

public interface Sorter
{
    /**
     * Ascendingly sorts the array from index from to index to,
     * where from is included and to is not.
     **/
    void sort(Sortable[] f, int from, int to);

    void sort(Sortable[] f);
}

All elements of the arrays to be sorted need to implement this interface:

public interface Sortable
{
    boolean smaller(Sortable ob);
}

References

[QSO 1993] (1993). Qsort(3) FreeBSD manpage. http://www.freebsd.org/, as viewed on 19/02/2007.
[Bentley 1993] Bentley, Jon L. (1993). Engineering a Sort Function. Software - Practice and Experience, Volume 23, Issue 11, pp. 1249-1265.
[Floyd 1964] Floyd, Robert W. (1964). Algorithm 245: Treesort 3. Communications of the ACM, Volume 7, Issue 12, p. 701.
[Hoare 1961] Hoare, C. A. R. (1961). Algorithm 64: Quicksort. Communications of the ACM, Volume 4, Issue 7, p. 321.
[Musser 1997] Musser, David R. (1997). Introspective Sorting and Selection Algorithms. Rensselaer Polytechnic Institute, Troy, NY 12180.
[Papula 1990] Papula, Lothar (1990). Mathematische Formelsammlung für Ingenieure und Naturwissenschaftler. Vieweg & Sohn, 3rd edition, p. 9.
[Sedgewick 1988] Sedgewick, Robert (1988). Algorithmen. Addison-Wesley, 2nd edition.
[Wagenknecht 2003] Wagenknecht, Christian (2003). Algorithmen und Komplexität. Fachbuchverlag Leipzig.
[Williams 1964] Williams, J. W. J. (1964). Algorithm 232: Heapsort. Communications of the ACM, Volume 7, Issue 6, pp. 347-348.