Heapsort, Quicksort, and Entropy
Numerous web pages compare heapsort and quicksort.
Most of them say something along the lines of `both take an average time scaling as N log N, but a good implementation of QUICKSORT usually beats HEAPSORT in practice.'
Some take this folklore a bit further, giving quantitative details: `On average, HEAPSORT does about twice as many comparisons as QUICKSORT, but HEAPSORT avoids the slight possibility of a catastrophic degradation of performance.'
But few seem to ask the question `why should heapsort use twice as many comparisons?' People spend a lot of effort trying to `get the best of both worlds', making hybrid sorting algorithms such as `introspective sort' (introsort), which applies quicksort recursively and switches to heapsort if the recursion depth gets too big.
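The shape of such a hybrid is easy to sketch. Here is a minimal illustrative version in Python; the function names, the random-pivot Lomuto partition, and the 2 log2 N depth budget are my choices for the sketch, and production introsorts (such as the one behind C++'s std::sort) also switch to insertion sort on small ranges:

```python
import heapq
from math import log2
from random import randrange

def introsort(a, lo=0, hi=None, depth=None):
    # Quicksort until the recursion-depth budget runs out,
    # then heapsort the offending slice.
    if hi is None:
        hi = len(a)
    if depth is None:
        depth = 2 * max(1, int(log2(max(hi - lo, 2))))  # typical budget
    while hi - lo > 1:
        if depth == 0:
            a[lo:hi] = heapsort(a[lo:hi])   # fall back to heapsort here
            return
        depth -= 1
        p = partition(a, lo, hi)
        introsort(a, p + 1, hi, depth)      # recurse on one side...
        hi = p                              # ...and loop on the other

def partition(a, lo, hi):
    # Lomuto partition around a random pivot; returns the pivot's index.
    r = randrange(lo, hi)
    a[r], a[hi - 1] = a[hi - 1], a[r]
    pivot, store = a[hi - 1], lo
    for j in range(lo, hi - 1):
        if a[j] < pivot:
            a[j], a[store] = a[store], a[j]
            store += 1
    a[store], a[hi - 1] = a[hi - 1], a[store]
    return store

def heapsort(xs):
    # Heapsort via the standard-library heap, for brevity.
    heapq.heapify(xs)
    return [heapq.heappop(xs) for _ in range(len(xs))]
```

Calling introsort(xs) sorts xs in place; the depth budget is what caps the worst case at O(N log N), since any slice that exhausts it is finished off by heapsort.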
Quicksort and heapsort have been thoroughly compared by Paul Hsieh. He says `I suspected that heapsort should do better than its poor reputation and I think these results bear that out.' In his tests, the best compiler (for either heapsort or quicksort) produced a heapsort that was about 20% faster than quicksort, in total CPU time.
The total CPU time tells a different story from the raw comparison counts: heapsort used an average of 61,000 comparisons, and quicksort 22,000, to sort lists of about 3000 objects. See his article for the explanation of the contrast between the comparison-count result and the CPU-time result.
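Comparison counts of this kind are easy to reproduce by instrumenting the comparison operator. Below is a minimal harness of my own (not Hsieh's; his inputs and implementations differ, so expect the same qualitative gap rather than his exact figures):

```python
import random

class Counted:
    """Wraps a key and counts every comparison made through it."""
    count = 0
    def __init__(self, v):
        self.v = v
    def __lt__(self, other):
        Counted.count += 1
        return self.v < other.v

def heapsort(a):
    # Textbook in-place heapsort: each sift-down level costs up to two
    # comparisons (pick the larger child, then test it against the root).
    def sift_down(start, end):
        root = start
        while 2 * root + 1 < end:
            child = 2 * root + 1
            if child + 1 < end and a[child] < a[child + 1]:
                child += 1
            if a[root] < a[child]:
                a[root], a[child] = a[child], a[root]
                root = child
            else:
                break
    n = len(a)
    for start in range(n // 2 - 1, -1, -1):   # build the max-heap
        sift_down(start, n)
    for end in range(n - 1, 0, -1):           # extract maxima one by one
        a[0], a[end] = a[end], a[0]
        sift_down(0, end)

def quicksort(a, lo=0, hi=None):
    # In-place quicksort with a random pivot: one comparison per element
    # per partitioning pass.
    if hi is None:
        hi = len(a)
    if hi - lo <= 1:
        return
    r = random.randrange(lo, hi)
    a[r], a[hi - 1] = a[hi - 1], a[r]
    pivot, store = a[hi - 1], lo
    for j in range(lo, hi - 1):
        if a[j] < pivot:
            a[j], a[store] = a[store], a[j]
            store += 1
    a[store], a[hi - 1] = a[hi - 1], a[store]
    quicksort(a, lo, store)
    quicksort(a, store + 1, hi)

def average_comparisons(sort, n, trials=10):
    total = 0
    for _ in range(trials):
        data = [Counted(random.random()) for _ in range(n)]
        Counted.count = 0
        sort(data)
        total += Counted.count
    return total / trials

for name, sort in [("heapsort", heapsort), ("quicksort", quicksort)]:
    print(name, average_comparisons(sort, 3000))
```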
The question I'd like to address, however, is why heapsort uses more comparisons than quicksort. Paul Hsieh says `what struck me is that I could not see really why heapsort is slower than quicksort. And I've not heard or read a credible explanation for this either.'
I think there is a simple explanation, based on the idea of expected information content. To make this readable, let's ramble our way via a classic puzzle.
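Before the puzzle, it helps to state the yardstick the explanation rests on (my paraphrase of the standard information-theoretic bound, not a quote from the article): sorting N distinct items means picking out one of N! possible input orderings, so any comparison sort must gather log2 N! bits of information, and a yes/no comparison yields at most one bit, with the full bit delivered only when both outcomes are equally likely. A quick way to put a number on the bound:

```python
from math import lgamma, log

def log2_factorial(n):
    # log2(n!) via the log-gamma function, avoiding huge integers.
    return lgamma(n + 1) / log(2)

# Any comparison sort of n distinct items needs at least this many
# comparisons on average: each comparison reveals at most one bit of
# the log2(n!) bits needed to identify the input's ordering.
print(log2_factorial(3000))   # about 30,000 bits for n = 3000
```

The rough idea of the explanation to come is that heapsort's comparisons are often lopsided, so each one delivers well under a bit of information.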