201

My co-workers took me back in time to my University days with a discussion of sorting algorithms this morning. We reminisced about our favorites like StupidSort, and one of us was sure we had seen a sort algorithm that was O(n!). That got me started looking around for the "worst" sorting algorithms I could find.

We postulated that a completely random sort would be pretty bad (i.e. randomize the elements - is it in order? no? randomize again), and I looked around and found out that it's apparently called BogoSort, or Monkey Sort, or sometimes just Random Sort.

Monkey Sort appears to have a worst case performance of O(∞), a best case performance of O(n), and an average performance of O(n·n!).
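The shuffle-and-check loop we postulated can be sketched in a few lines of Python (the function name is ours):

```python
import random

def bogosort(items):
    """Randomize the elements; is it in order? No? Randomize again."""
    items = list(items)
    while any(a > b for a, b in zip(items, items[1:])):
        random.shuffle(items)
    return items
```

For n distinct elements each shuffle succeeds with probability 1/n!, and each check costs O(n), which is where the O(n·n!) average comes from.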

What is the currently accepted sorting algorithm with the worst average sorting performance (and therefore worse than O(n·n!))?

Scorix
  • 487
  • 6
  • 20
womp
  • 115,835
  • 26
  • 236
  • 269
  • 11
    How many bogomips per bogosort? Inquiring minds want to know. – zombat Apr 09 '10 at 18:30
  • 13
    To clarify, are you excluding the trivial case where best case performance is O(∞)? – tloflin Apr 09 '10 at 18:42
  • @tloflin - generally yeah, I assume any named algorithm would have to have a remote chance of success. I'll clarify the question a bit - are there any algorithms that are worse than n*n! on average? – womp Apr 09 '10 at 18:51
  • 3
    http://c2.com/cgi/wiki?MultiplyAndSurrender – Josh Lee Apr 09 '10 at 18:58
  • My buttheadsort, detailed below, would be worse than n*(n!^n)! – Patrick Karcher Apr 09 '10 at 18:59
  • 7
    I heard that the monkey sort is also known as "drunk man sort", a name that I find much more evocative. – Matteo Italia Apr 09 '10 at 19:44
  • You can adapt your BogoSort to use a very slow (and even better: erroneous) random generator. – Debilski Apr 09 '10 at 20:01
  • O(foo) is defined as the worst case time complexity, not a generic performance metric. Best case time complexity is Ω(foo). There is no notation for average time complexity, though Θ(foo) comes close. Not really important to the question, though. – ember arlynx Nov 26 '12 at 23:04
  • @Debilski: That will make it a constant factor slower, which is ignored in big-O notation. For a slower _notation_ I don't think you'll find much. – Mooing Duck Dec 14 '12 at 17:35
  • 6
    @Matteo Italia - or it could be called "Toddler Sort" as anyone with 2 year old can attest. – Martin Capodici Jun 03 '14 at 22:40
  • UserSort: prompt the user to sort the array manually. – HerpDerpington Aug 31 '15 at 16:39
  • As of 2013, we have https://xkcd.com/1185/ – ai8p588t Jan 11 '17 at 10:01
  • As of 2019, worstsort is a sorting algorithm that has a finite end but can be made as inefficient as needed by choosing functions that grow fast on given input numbers. https://en.wikipedia.org/wiki/Bogosort#Related_algorithms – Scorix Dec 27 '19 at 23:04

26 Answers

478

From David Morgan-Mar's Esoteric Algorithms page: Intelligent Design Sort

Introduction

Intelligent design sort is a sorting algorithm based on the theory of intelligent design.

Algorithm Description

The probability of the original input list being in the exact order it's in is 1/(n!). There is such a small likelihood of this that it's clearly absurd to say that this happened by chance, so it must have been consciously put in that order by an intelligent Sorter. Therefore it's safe to assume that it's already optimally Sorted in some way that transcends our naïve mortal understanding of "ascending order". Any attempt to change that order to conform to our own preconceptions would actually make it less sorted.

Analysis

This algorithm is constant in time, and sorts the list in-place, requiring no additional memory at all. In fact, it doesn't even require any of that suspicious technological computer stuff. Praise the Sorter!
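As a minimal sketch (our own Python rendering, not part of the original page), the entire algorithm is a single statement:

```python
def intelligent_design_sort(items):
    # The input was consciously put in this order by an intelligent
    # Sorter, so it is already optimally Sorted. O(1), in place.
    return items
```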

Feedback

Gary Rogers writes:

Making the sort constant in time denies the power of The Sorter. The Sorter exists outside of time, thus the sort is timeless. To require time to validate the sort diminishes the role of the Sorter. Thus... this particular sort is flawed, and can not be attributed to 'The Sorter'.

Heresy!

pfabri
  • 885
  • 1
  • 9
  • 25
BioGeek
  • 21,897
  • 23
  • 83
  • 145
  • 109
    Also known as "Assumption Sort": Assume the list is sorted, return! – BioGeek Apr 09 '10 at 19:42
  • 50
    +100 - this answer is made out of 100% pure win. – womp Apr 09 '10 at 19:48
  • 12
    Hey! Don't forget "Indecisive Sort" (Also know as "Schrodinger's Sort" or "Quantum Sort"), where the list may or may not be sorted, however checking it will reveal whether or not it is. Here is my sample implementation: `void quantum_sort (void *b, size_t n, size_t s, int (*c)(const void *, const void*)) { if (rand () % 2) qsort (b, n, s, c); }`. – Joe D Aug 29 '10 at 18:50
  • 6
    We should dub this **Candide Sort**: `"This is the best of all possible worlds because it is the world that is, and so in the best possible world the array would already be sorted!"` – echochamber Jun 12 '14 at 17:17
  • 4
    I, for one, welcome our new sorting overlord. All hail the sorter! – Bryson Oct 10 '14 at 23:00
  • Turning `doesntNeedToSort;` into `monkeySort;` – Don Larynx Apr 24 '15 at 06:29
  • bool intelligent_design_sort( list ){ return false; } – Erik Bergstedt Feb 29 '16 at 11:48
  • 1
    This answer has a faint whiff of Douglas Adams about it – Darren H Dec 09 '16 at 20:33
340

Many years ago, I invented (but never actually implemented) MiracleSort.

Start with an array in memory.
loop:
    Check to see whether it's sorted.
    Yes? We're done.
    No? Wait a while and check again.
end loop

Eventually, alpha particles flipping bits in the memory chips should result in a successful sort.

For greater reliability, copy the array to a shielded location, and check potentially sorted arrays against the original.

So how do you check the potentially sorted array against the original? You just sort each array and check whether they match. MiracleSort is the obvious algorithm to use for this step.
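A runnable Python sketch of the loop above; the `max_polls` cap is our own addition so the sketch can give up rather than wait forever:

```python
import time

def miracle_sort(items, poll_seconds=1.0, max_polls=None):
    """Wait for alpha particles to flip the right bits in memory."""
    polls = 0
    while any(a > b for a, b in zip(items, items[1:])):
        if max_polls is not None and polls >= max_polls:
            raise RuntimeError("no miracle occurred")
        time.sleep(poll_seconds)  # wait a while and check again
        polls += 1
    return items
```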

EDIT: Strictly speaking, this is not an algorithm, since it's not guaranteed to terminate. Does "not an algorithm" qualify as "a worse algorithm"?

Keith Thompson
  • 254,901
  • 44
  • 429
  • 631
  • 45
    I assume one can use cosmic rays to prove correctness of this algorithm. – ghord Nov 25 '12 at 09:57
  • 1
    What's the big O of this? `O(2^N)`? – Mooing Duck Dec 14 '12 at 17:50
  • 15
    @MooingDuck: I don't think it actually has a big O. – Keith Thompson Feb 21 '13 at 18:17
  • 1
    There are many algorithms which are not guaranteed to terminate (Bogosort for one). I think this algorithm's _average case_ is `O(2^N)`, since each bit in the dataset halves the odds of success each round (but I'm not betting money on being right). You're right that technically it's big-O-infinity since it may not terminate. – Mooing Duck Feb 21 '13 at 19:15
  • 7
    @MooingDuck: Strictly speaking, if it doesn't terminate it's not an algorithm, according to both what they taught me in college and the [Wikipedia article](http://en.wikipedia.org/wiki/Algorithm). – Keith Thompson Feb 21 '13 at 19:39
  • @KeithThompson: That's interesting, we might have to ask on TheoreticalCompSci or something. I agree with what you said, but clarify that both this and the bogosort do _eventually_ produce output and terminate at a final state, there simply happens to be no upper bound on the time before that happens. I think it still qualifies as an algorithm, but I'm less certain. – Mooing Duck Feb 21 '13 at 21:21
  • 1
    The Wikipedia article isn't conclusive. It says "While there is no generally accepted formal definition of 'algorithm,' an informal definition could be 'a set of rules that precisely defines a sequence of operations.'" Given the halting problem, it seems pretty strange to have terminology that requires us to solve it just in order to know what to call something. Also, what would you call the things that don't always terminate or the things that we're not sure if they terminate? That definition creates quite a mess. – Olathe Aug 02 '13 at 12:03
  • 7
    @Olathe: The Halting Problem says we can't determine *for all programs* whether they halt, but there are plenty of programs for which we can make that determination. We *know* Quicksort and Bubblesort halt, and we know they're algorithms. – Keith Thompson Aug 02 '13 at 15:08
  • @Olathe Hold on a sec while I clarify the wiki for you.... :P – sisharp Aug 30 '13 at 18:52
  • There must be some severe misunderstanding from that duck to think that an "algorithm" that consists of waiting for an array to sort itself without doing anything to it has O(2^N). – Jim Balter Jun 05 '14 at 19:40
  • "Given the halting problem, it seems pretty strange to have terminology that requires us to solve it just in order to know what to call something." -- Severe abstraction failure. Programs that do or don't halt are programs that do or don't halt regardless of any ability to determine that. "Also, what would you call the things that don't always terminate" -- Severe basic logic failure. Since an algorithm is a procedure that always halts, something that doesn't always halt is a procedure that isn't an algorithm. – Jim Balter Jun 05 '14 at 19:42
  • 1
    "The Halting Problem says we can't determine for all programs whether they halt" -- it's remarkable how many people don't understand this. Virtually every program ever written by a human being can be shown to halt or not halt. It's just that a program that purportedly determines that for all programs can't determine that for itself, so we know it's not generally possible. This theoretical limit is constantly misapplied. – Jim Balter Jun 05 '14 at 19:50
154

Quantum Bogosort

A sorting algorithm that assumes that the many-worlds interpretation of quantum mechanics is correct:

  1. Check that the list is sorted. If not, destroy the universe.

At the conclusion of the algorithm, the list will be sorted in the only universe left standing. This algorithm takes worst-case Θ(N) and average-case Θ(1) time. In fact, the average number of comparisons performed is 2: there's a 50% chance that the universe will be destroyed on the second element, a 25% chance that it'll be destroyed on the third, and so on.
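Universe destruction being hard to procure, a Python sketch can model it as abandoning the current branch (the initial true-random shuffle, without which every universe would see the same list, is made explicit here):

```python
import random

class UniverseDestroyed(Exception):
    """Stands in for actual destruction of the universe."""

def quantum_bogosort(items):
    items = list(items)
    random.shuffle(items)            # step 0: shuffle using (true) randomness
    for a, b in zip(items, items[1:]):
        if a > b:                    # step 1: not sorted in this universe...
            raise UniverseDestroyed  # ...so this branch ceases to exist
    return items                     # only the sorted universe survives
```

Simulating many worlds on classical hardware amounts to retrying until a branch survives.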

Prashant
  • 394
  • 1
  • 6
  • 18
BioGeek
  • 21,897
  • 23
  • 83
  • 145
  • 49
    But time ceases to exist in the universe you just destroyed. So an observer in a universe that you have not yet checked will not be able to tell how much of the algorithm has been executed. Thus, this algorithm always takes O(1) time, since previous universe-destructions don't exist anymore. – Barry Brown Apr 09 '10 at 20:09
  • 1
    To quote the author of the above algorithm: On average, the universe is destroyed. It takes only O(1) time to decide to destroy the universe, and (assumption) O(1) time to destroy the universe. (destroyUniverse has all the time in the world, which should be a constant as the universe has no inputs, or else something is really screwed up with our understanding of the universe.) Therefore the average case is O(1). http://www.reddit.com/r/programming/comments/thtx/my_new_favorite_sorting_algorithm/ctl3h?context=2 – BioGeek Apr 09 '10 at 20:12
  • 2
    Isn't it always O(n), since you have to check every element? – Brendan Long Apr 09 '10 at 20:32
  • 13
    Yes, in the only universe that observes the list sorted, it took O(n) time to execute - how long it took in other universes is irrelevant. – Nick Johnson Apr 10 '10 at 11:26
  • 21
    This algorithm has a much bigger problem, however. Assume that one in 10 billion times you will mistakenly conclude a list is sorted when it's not. There are 20! ways to sort a 20 element list. After the sort, the remaining universes will be the one in which the list was sorted correctly, and the roughly 240 million universes in which the algorithm mistakenly concluded the list was sorted correctly. So what you have here is an algorithm for massively magnifying the error rate of a piece of machinery. – Nick Johnson Oct 11 '11 at 23:19
  • 13
    This is obviously the best sorting algorithm, not the worst. – Boann Nov 25 '12 at 12:54
  • 1
    For an easier-to-implement version, have it use a quantum postdecision device. That way, if, after we shuffle it, we determine that it is not correct, we can have the sorting process terminated _before_ we check if it is sorted. This ensures that all we need to do is simply check the list after the shuffle, because if our check concludes it isn't sorted, we wouldn't have checked (and we already know for sure that we did check it). Much easier than destroying the universe (we don't have capable hardware yet). – AJMansfield Mar 13 '13 at 21:44
  • 9
    You have missed out a crucial first step: 0. Shuffle the list using true randomness. – Beetle Mar 22 '13 at 18:27
  • 13
    Failure to heed Beetle's advice may result in all universes being destroyed. – CrashCodes Jun 05 '14 at 20:19
  • In "average complexity", the average is taken over all possible inputs, not over all possible universes (unless stated otherwise). So step 1 is O(N) plus the cost of universe destruction, as Brendan Long stated. – Pablo H Jul 20 '21 at 22:26
  • 1
    Analyzing the complexity of destroyUniverse() gets tricky: yeah, size of universe is constant (?), but the size of (the representation of) the input list is less than size of universe, because it is contained therein... – Pablo H Jul 20 '21 at 22:29
69

Jingle Sort, as described here.

You give each value in your list to a different child on Christmas. Children, being awful human beings, will compare the value of their gifts and sort themselves accordingly.

Cube Drone
  • 5,790
  • 1
  • 16
  • 14
68

I'm surprised no one has mentioned sleepsort yet... Or haven't I noticed it? Anyway:

#!/bin/bash
function f() {
    sleep "$1"
    echo "$1"
}
while [ -n "$1" ]
do
    f "$1" &
    shift
done
wait

example usage:

./sleepsort.sh 5 3 6 3 6 3 1 4 7
./sleepsort.sh 8864569 7

In terms of performance it is terrible (especially the second example). Waiting almost 3.5 months to sort 2 numbers is kinda bad.

Kw4s
  • 45
  • 3
  • 8
57

I had a lecturer who once suggested generating a random array, checking if it was sorted, and then checking if the data was the same as the array to be sorted.

Best case: O(N) (first time, baby!). Worst case: O(Never)

Daniel
  • 1,994
  • 15
  • 36
48

There is a sort that's called bogobogosort. First, it checks the first 2 elements, and bogosorts them. Next it checks the first 3, bogosorts them, and so on.

Should the list be out of order at any time, it restarts by bogosorting the first 2 again. Regular bogosort has an average complexity of O(N!); this algorithm has an average complexity of O(1!·2!·3!·⋯·N!)
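A Python sketch of the incremental scheme described above (the restart-from-the-top refinement is omitted for brevity):

```python
import random

def bogobogosort(items):
    """Bogosort the first 2 elements, then the first 3, and so on."""
    items = list(items)
    for i in range(2, len(items) + 1):
        prefix = items[:i]
        while any(a > b for a, b in zip(prefix, prefix[1:])):
            random.shuffle(prefix)
        items[:i] = prefix
    return items
```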

Edit: To give you an idea of how large this number is: for 20 elements, this algorithm takes an average of 3.930093×10^158 years, well above the proposed heat death of the universe (if it happens) at 10^100 years,

whereas merge sort takes around 0.0000004 seconds, bubble sort 0.0000016 seconds, and bogosort takes 308 years, 139 days, 19 hours, 35 minutes, 22.306 seconds, assuming a year is 365.242 days and a computer does 250,000,000 32-bit integer operations per second.

Edit 2: This algorithm is not as slow as the "algorithm" miracle sort, which, like this sort, will probably get the computer sucked into a black hole before it successfully sorts 20 elements. But if it did finish, I would estimate an average complexity of 2^(32·N) · (a number ≤ 10^40) years, where 32 is the number of bits in a 32-bit integer and N is the number of elements,

since gravity speeds up the alpha-particle bit flips in the chips, and there are 2^(32·N) states, which is 2^640 · 10^40, or about 5.783×10^216.762162762 years. Though if the list started out sorted, its complexity would only be O(N), faster than merge sort, which is N log N even in the best case.

Edit 3: This algorithm is actually slower than miracle sort as the size gets very big, say 1000, since my algorithm would have a run time of 2.83×10^1175546 years, while the miracle sort algorithm would have a run time of 1.156×10^9657 years.

Community
  • 1
  • 1
bacca2002
  • 1
  • 2
  • 3
  • 2
    great worked answer. sad it doesnt have visibility – swyx Jul 23 '17 at 07:06
  • The time assumes that alpha particles won't flip any values; if that were not the case, taking into account the probability of flipping a bit would make this sorting algorithm never-ending, and a bit flip occurs about 96% of the time every 3 days in a computer with 4 GB of memory ([link](https://www.macobserver.com/columns-opinions/devils-advocate/computer-makers-negligently-ignore-bit-flip-errors) source). – wafL Nov 03 '21 at 22:45
  • Oh My God you are a genius! – hamidb80 Feb 24 '22 at 06:54
30

If you keep the algorithm meaningful in any way, O(n!) is the worst upper bound you can achieve.

Since checking every permutation of a set for sortedness will take n! steps, you can't get any worse than that.

If you're doing more steps than that then the algorithm has no real useful purpose. Not to mention the following simple sorting algorithm with O(infinity):

list = someList
while (list not sorted):
    doNothing
Yuval Adam
  • 161,610
  • 92
  • 305
  • 395
  • 15
    But it takes O(n) to check whether it's sorted, so you can get O(n*n!) – erikkallen Apr 09 '10 at 19:29
  • 4
    @erikkallen: Certainly we can come up with an algorithm to verify sortedness that's worse than O(n). For example, for each element in the array, verify that it's greater than all previous ones, much like insertion sort works. That's an O(n^2) algorithm, and I'm sure I could come up with worse given a little thought. – David Thornley Apr 09 '10 at 20:21
  • 9
    @David Thornley: the following checking algorithm would perhaps show the same spirit as the bogosort: pick two random elements, check that the one with the smaller index is smaller or equal to the one with the larger index, then repeat. Keep a square bit matrix to see which combinations have already been checked. Of course, checking this matrix could also be done in a random walk... – Svante Apr 09 '10 at 20:47
22

Bogobogosort. Yes, it's a thing. To Bogobogosort, you Bogosort the first element. Check to see if that one element is sorted. Being one element, it will be. Then you add the second element, and Bogosort those two until they're sorted. Then you add one more element, then Bogosort. Continue adding elements and Bogosorting until you have finally done every element. This was designed never to succeed with any sizable list before the heat death of the universe.

21

You should do some research into the exciting field of Pessimal Algorithms and Simplexity Analysis. These authors work on the problem of developing a sort with a pessimal best case (your bogosort's best case is Ω(n), while slowsort (see paper) has a non-polynomial best-case time complexity).

Derrick Turk
  • 4,246
  • 1
  • 27
  • 27
17

Here are 2 sorts my roommate and I came up with in college:

1) Check the order
2) Maybe a miracle happened; go to 1

and

1) Check if it is in order; if not,
2) Put each element into a packet and bounce it off a distant server back to yourself. Some of those packets will return in a different order, so go to 1

Seth
  • 707
  • 1
  • 9
  • 20
14

There's always the Bogobogosort (Bogoception!). It performs Bogosort on increasingly large subsets of the list, and then starts all over again if the list is ever not sorted.

for (int n=1; n<sizeof(list); ++n) {
  while (!isInOrder(list, 0, n)) {
    shuffle(list, 0, n);
  }
  if (!isInOrder(list, 0, n+1)) { n=0; }
}
IceMetalPunk
  • 5,476
  • 3
  • 19
  • 26
  • 6
    I like the idea that this algorithm is designed to never finish "before the heat death of the universe for any sizeable list" – A.Grandt Feb 22 '14 at 09:36
12

1. Put your items to be sorted on index cards.
2. ~~Throw them into the air on a windy day, a mile from your house.~~ Throw them into a bonfire and confirm they are completely destroyed.
3. Check your kitchen floor for the correct ordering.
4. Repeat if it's not the correct order.

Best case scenario is O(∞)

Edit above based on astute observation by KennyTM.

Patrick Karcher
  • 22,995
  • 5
  • 52
  • 66
  • 9
    No, this is worse because there's no chance of it succeeding. How would the index cards get into your kitchen? They're blowing around outside. It's called, uh, buttheadsort. – Patrick Karcher Apr 09 '10 at 18:31
  • I think he means throw the cards up in the air *outside*, and then check your floor *inside*, where there are guaranteed to be no cards. Although not a "named" algorithm... it is certainly worse! – womp Apr 09 '10 at 18:32
  • 10
    @Patrick Quantum tunneling. – kennytm Apr 09 '10 at 18:39
  • 8
    @KennyTM. That had actually occurred to me. *There is an extremely small but non-zero chance that any object might disappear and reappear at any other point in the universe.* I guess it *could* happen to a thousand index cards . . . Oi. Dangit, my algorithm is **flawed**. I'll fix it . . . – Patrick Karcher Apr 09 '10 at 18:49
  • 3
    It's kind of like having tea and no tea at the same time. Or space travel using an infinite improbability drive. – Barry Brown Apr 09 '10 at 19:33
  • Simpler variant: 1. Add 0 at the end of the list. 2. Check if this resulted in sorting it. 3. If not, repeat (or retry, doesn't matter). – The Vee Feb 09 '23 at 10:18
11

The "what would you like it to be?" sort

  1. Note the system time.
  2. Sort using Quicksort (or anything else reasonably sensible), omitting the very last swap.
  3. Note the system time.
  4. Calculate the required time. Extended precision arithmetic is a requirement.
  5. Wait the required time.
  6. Perform the last swap.

Not only can it implement any conceivable O(x) value short of infinity, the time taken is provably correct (if you can wait that long).
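A Python sketch of the steps above, simplified in two labeled ways: we hold back the finished result instead of literally omitting the last swap, and `target_seconds` stands in for the time computed in step 4:

```python
import time

def whatever_you_like_sort(items, target_seconds):
    """Sort sensibly, then stall until the advertised running time."""
    start = time.monotonic()            # step 1: note the system time
    result = sorted(items)              # step 2: anything reasonably sensible
    elapsed = time.monotonic() - start  # steps 3-4
    if target_seconds > elapsed:
        time.sleep(target_seconds - elapsed)  # step 5: wait the required time
    return result                       # step 6: hand over the result
```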

david.pfx
  • 10,520
  • 3
  • 30
  • 63
8

Nothing can be worse than infinity.

Joseph Salisbury
  • 2,047
  • 1
  • 15
  • 21
  • 45
    Infinity + 1. Jinx, no returns. – zombat Apr 09 '10 at 18:31
  • 26
    Not for extremely large values of 1 ;) – zombat Apr 09 '10 at 18:53
  • Infinity + Infinity = Infinity. – kennytm Apr 09 '10 at 19:01
  • 8
    What really blows my mind about the concept of infinity, is that you can have different "sizes" of infinity. For example, consider the set of all integers - it is infinite in size. Now consider the set of all *even* integers - it is also infinite in size, but it is also clearly half the size of the first set. Both infinite, but different sizes. So awesome. The concept of "size" simply fails to work in the context of infinity. – zombat Apr 09 '10 at 19:54
  • 4
    @zombat: You're talking about cardinality, not infinity as a symbol indicating a trend on the real line / complex plane. – kennytm Apr 09 '10 at 20:15
  • 19
    @zombat. The size of the set of even integers is the same as the size of the set of the integers, as shown by the fact that you can place them in one-to-one correspondence. Now, there are more real numbers than integers, as first shown by Cantor. – David Thornley Apr 09 '10 at 20:19
  • And if you *really* want to blow your mind about infinity: http://blog.xkcd.com/2010/02/09/math-puzzle – BlueRaja - Danny Pflughoeft Apr 09 '10 at 23:21
  • Whether \omega + 1 = \omega depends on whether you're doing cardinal or ordinal arithmetic. – uckelman Apr 12 '10 at 09:24
6

Segments of π

Assume π contains all possible finite number combinations. See this math.stackexchange question.

  1. Determine the number of digits needed from the size of the array.
  2. Use segments of π places as indexes to determine how to re-order the array. If a segment exceeds the size boundaries for this array, adjust the π decimal offset and start over.
  3. Check if the re-ordered array is sorted. If it is, woot; else adjust the offset and start over.
Community
  • 1
  • 1
CrashCodes
  • 3,237
  • 12
  • 38
  • 42
5

Bozo sort is a related algorithm that checks if the list is sorted and, if not, swaps two items at random. It has the same best and worst case performances, but I would intuitively expect the average case to be longer than Bogosort. It's hard to find (or produce) any data on performance of this algorithm.
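A minimal Python sketch of Bozo sort as described:

```python
import random

def bozosort(items):
    """Check if sorted; if not, swap two items chosen at random."""
    items = list(items)
    n = len(items)
    while any(a > b for a, b in zip(items, items[1:])):
        i, j = random.randrange(n), random.randrange(n)
        items[i], items[j] = items[j], items[i]
    return items
```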

tloflin
  • 4,050
  • 1
  • 25
  • 33
4
Recursive Bogosort (probably still O(n!)):

recursive_bogosort(list) {
    if (list not sorted) {
        list1 = first half of list
        list2 = second half of list
        recursive_bogosort(list1)
        recursive_bogosort(list2)
        list = list1 + list2
        while (list not sorted)
            shuffle(list)
    }
}
Simon Kuang
  • 3,870
  • 4
  • 27
  • 53
4

A worst case performance of O(∞) might not even make it an algorithm according to some.

An algorithm is just a series of steps, and you can always do worse by tweaking it a little bit to get the desired output in more steps than it was previously taking. One could purposely put knowledge of the number of steps taken into the algorithm and make it terminate and produce the correct output only after X number of steps have been done. That X could very well be of the order of O(n²) or O(n^(n!)) or whatever the algorithm desired to do. That would effectively increase its best-case as well as average-case bounds.

But your worst-case scenario cannot be topped :)

Anurag
  • 140,337
  • 36
  • 221
  • 257
4

My favorite slow sorting algorithm is the stooge sort:

void stooges(long *begin, long *end) {
   if( (end-begin) <= 1 ) return;
   if( begin[0] > end[-1] ) swap(begin, end-1);
   if( (end-begin) > 1 ) {
      int one_third = (end-begin)/3;
      stooges(begin, end-one_third);
      stooges(begin+one_third, end);
      stooges(begin, end-one_third);
   }
}

The worst case complexity is O(n^(log(3) / log(1.5))) = O(n^2.7095...).

Another slow sorting algorithm is actually named slowsort!

void slow(long *start, long *end) {
   if( (end-start) <= 1 ) return;
   long *middle = start + (end-start)/2;
   slow(start, middle);
   slow(middle, end);
   if( middle[-1] > end[-1] ) swap(middle-1, end-1);
   slow(start, end-1);
}

This one takes O(n ^ (log n)) in the best case... even slower than stoogesort.

Jason Plank
  • 2,336
  • 5
  • 31
  • 40
3

Double bogosort

Bogosort twice and compare results (just to be sure it is sorted) if not do it again

Viktor Mellgren
  • 4,318
  • 3
  • 42
  • 75
2

This page is an interesting read on the topic: http://home.tiac.net/~cri_d/cri/2001/badsort.html

My personal favorite is Tom Duff's sillysort:

/*
 * The time complexity of this thing is O(n^(a log n))
 * for some constant a. This is a multiply and surrender
 * algorithm: one that continues multiplying subproblems
 * as long as possible until their solution can no longer
 * be postponed.
 */
void sillysort(int a[], int i, int j){
        int t, m;
        for(;i!=j;--j){
                m=(i+j)/2;
                sillysort(a, i, m);
                sillysort(a, m+1, j);
                if(a[m]>a[j]){ t=a[m]; a[m]=a[j]; a[j]=t; }
        }
}
fsanches
  • 417
  • 4
  • 9
1

One I was just working on involves picking two random points, and if they are in the wrong order, reversing the entire subrange between them. I found the algorithm on http://richardhartersworld.com/cri_d/cri/2001/badsort.html, which says that the average case is probably somewhere around O(n^3) or O(n^2 log n) (he's not really sure).
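The algorithm as described can be sketched in Python (a plain list is used here, so each reversal costs O(n) rather than the O(1) speculated below):

```python
import random

def random_reversal_sort(items):
    """Pick two random positions; if their values are out of order,
    reverse the entire subrange between them (inclusive)."""
    items = list(items)
    n = len(items)
    while any(a > b for a, b in zip(items, items[1:])):
        i, j = sorted(random.sample(range(n), 2))
        if items[i] > items[j]:
            items[i:j + 1] = items[i:j + 1][::-1]
    return items
```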

I think it might be possible to do it more efficiently, because I think it might be possible to do the reversal operation in O(1) time.

Actually, I just realized that doing that would invalidate what I just said, because the data structure I had in mind would put accessing the random elements at O(log n) and determining if it needs reversing at O(n).

AJMansfield
  • 4,039
  • 3
  • 29
  • 50
1

Randomsubsetsort.

Given an array of n elements, choose each element with probability 1/n, randomize these elements, and check if the array is sorted. Repeat until sorted.

Expected time is left as an exercise for the reader.

1

You could make any sort algorithm slower by running your "is it sorted" step randomly. Something like:

  1. Create an array of booleans the same size as the array you're sorting. Set them all to false.
  2. Run an iteration of bogosort
  3. Pick two random elements.
  4. If the two elements are sorted in relation to each other (i < j && array[i] < array[j]), mark the indexes of both on the boolean array as true. Otherwise, start over.
  5. Check if all of the booleans in the array are true. If not, go back to 3.
  6. Done.
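A Python sketch of the six steps, using bogosort as the underlying sort; note that it keeps the original's quirk that confirming every index via in-order pairs does not strictly prove total order:

```python
import random

def slowed_bogosort(items):
    items = list(items)
    n = len(items)
    if n < 2:
        return items
    while True:
        random.shuffle(items)              # step 2: one iteration of bogosort
        confirmed = [False] * n            # step 1: booleans, all false
        while not all(confirmed):          # step 5: are all booleans true?
            i, j = sorted(random.sample(range(n), 2))  # step 3: random pair
            if items[i] > items[j]:
                break                      # step 4: out of order, start over
            confirmed[i] = confirmed[j] = True
        else:
            return items                   # step 6: done
```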
Brendan Long
  • 53,280
  • 21
  • 146
  • 188
1

Yes, SimpleSort: in theory it runs in O(-1); however, this is equivalent to O(...9999), which is in turn equivalent to O(∞ - 1), which, as it happens, is also equivalent to O(∞). Here is my sample implementation:

/* element sizes are unneeded; they are assumed */
void
simplesort (const void* begin, const void* end)
{
  for (;;);
}
Joe D
  • 2,855
  • 2
  • 31
  • 25