Quicksort performs at its worst, i.e. O(n^2), when the pivot chosen at every step is either the largest or the smallest element of the current sub-array. Consider this example.
1 2 3 4 5
Say the pivot chosen is 1: you end up with 4 elements on the right side of the pivot and no elements on the left. Applying the same logic recursively, the pivots chosen are 2, 3, 4 and 5 respectively, so every partition is as unbalanced as possible and the sort runs in its worst possible time.
It is a standard recommendation, backed by analysis, that Quicksort performs well, in expected O(n log n) time, if the input is shuffled well first.
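Here is a minimal sketch (the function and variable names are illustrative, not from any library) that counts pivot comparisons in a naive quicksort which always takes the first element as the pivot. On already-sorted input every partition is maximally unbalanced, giving about n^2/2 comparisons; shuffling the same data first brings the count down to the expected order of n log n.

```python
import random

def quicksort_first_pivot(arr, counter):
    """Return a sorted copy of arr, counting pivot comparisons in counter[0]."""
    if len(arr) <= 1:
        return arr
    pivot, rest = arr[0], arr[1:]
    counter[0] += len(rest)                   # count one pivot comparison per remaining element
    left = [x for x in rest if x < pivot]     # empty when the input is already ascending
    right = [x for x in rest if x >= pivot]   # everything lands here in the sorted case
    return (quicksort_first_pivot(left, counter)
            + [pivot]
            + quicksort_first_pivot(right, counter))

sorted_input = list(range(1, 101))
worst = [0]
quicksort_first_pivot(sorted_input, worst)
print(worst[0])       # 4950 = 100*99/2, the quadratic worst case

shuffled_input = sorted_input[:]
random.shuffle(shuffled_input)                # destroy the adversarial ordering up front
typical = [0]
quicksort_first_pivot(shuffled_input, typical)
print(typical[0])     # typically well under 1000, i.e. on the order of n log n
```

The same effect is usually achieved in practice by picking the pivot at random (or via median-of-three) instead of physically shuffling the input.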
Moreover, the choice of sorting algorithm usually depends on clear knowledge of the input domain. For example, if the input is huge, there are external sorts, which use external memory. If the input size is very small we may go for merge sort, but not for medium or huge input sets, since it uses extra memory. The main advantage of Quicksort is that it is in-place: no extra memory is used beyond the input data. Its worst-case time on paper is O(n^2), yet it is still widely preferred and used. My point is that the sorting algorithm can be changed based on what is known about the input set, and it is partly a matter of preference.
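To make the "in-place" point concrete, here is a small sketch (my own, with illustrative names) of a quicksort that partitions by swapping elements inside the original array, so it needs no auxiliary buffer, only the O(log n) recursion stack on average, unlike merge sort's O(n) scratch array.

```python
def partition_in_place(arr, lo, hi):
    """Lomuto partition: rearrange arr[lo..hi] around arr[hi] using swaps only."""
    pivot = arr[hi]
    i = lo
    for j in range(lo, hi):
        if arr[j] < pivot:
            arr[i], arr[j] = arr[j], arr[i]   # move smaller element into the left zone
            i += 1
    arr[i], arr[hi] = arr[hi], arr[i]         # put the pivot in its final position
    return i

def quicksort_in_place(arr, lo=0, hi=None):
    """Sort arr in place; no second array is ever allocated."""
    if hi is None:
        hi = len(arr) - 1
    if lo < hi:
        p = partition_in_place(arr, lo, hi)
        quicksort_in_place(arr, lo, p - 1)
        quicksort_in_place(arr, p + 1, hi)

data = [5, 2, 4, 1, 3]
quicksort_in_place(data)
print(data)   # [1, 2, 3, 4, 5]
```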