In-place modification algorithms can become very hard to handle.
Consider a couple of examples:
- In-place out-shuffle in linear time. Uses number theory.
- In-place merge sort. This was an open problem for a few years; an algorithm eventually appeared, but it uses very complicated bookkeeping and is too complicated to be practical.
Sorry if this sounds discouraging, but there is no magic elixir that will solve all in-place algorithm problems for you. You need to work with the problem, figure out its properties, and try to exploit them (as is the case with most algorithms).
That said, for array modifications which result in a permutation of the original array, you can try the method of following the cycles of the permutation. Basically, any permutation can be written as a disjoint set of cycles (see John's answer too). For instance, the permutation:
1 4 2 5 3 6
of 1 2 3 4 5 6
can be written as
1 -> 1
2 -> 3 -> 5 -> 4 -> 2
6 -> 6.
You can read the arrow as 'goes to'.
So to permute the array 1 2 3 4 5 6
you follow the three cycles:
1 goes to 1.
6 goes to 6.
2 goes to 3, 3 goes to 5, 5 goes to 4 and 4 goes to 2.
To follow this long cycle, you can use just one temp variable. Store 3 in it. Put 2 where 3 was. Now put 3 where 5 was, store 5 in the temp, and so on. Since you only use constant extra temp space to follow a particular cycle, you are doing an in-place modification of the array for that cycle.
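Here is a small sketch of what following a single cycle with one temp variable can look like (the function name, the 1-based positions, and the dest(i) callable are just illustrative, not part of any standard API):

```python
def follow_cycle(a, start, dest):
    """Follow one cycle of a permutation in place, using a single temp variable.
    a is a 0-indexed array, start is a 1-based cycle leader, and dest(i) returns
    the 1-based position that the element at position i goes to."""
    i = start
    temp = a[start - 1]                    # the element that must move out of 'start'
    while True:
        j = dest(i)                        # where the element held in temp belongs
        temp, a[j - 1] = a[j - 1], temp    # drop it there, pick up the displaced element
        i = j
        if i == start:                     # the cycle has closed
            break

# The long cycle 2 -> 3 -> 5 -> 4 -> 2 from the example above:
a = [1, 2, 3, 4, 5, 6]
follow_cycle(a, 2, lambda i: {2: 3, 3: 5, 5: 4, 4: 2}[i])
print(a)   # [1, 4, 2, 5, 3, 6]
```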
Now, if I gave you a formula for computing where each element goes, all you need is the set of starting elements of each cycle.
A judicious choice of the starting points of the cycles can make the algorithm easy. If you come up with the starting points in O(1) space, you now have a complete in-place algorithm. This is where you might actually have to get familiar with the problem and exploit its properties.
Even if you didn't know how to compute the starting points of the cycles, but had a formula to compute the next element, you could use this method to get an O(n) time in-place algorithm in some special cases.
For instance, suppose you knew that the array of signed integers held only positive integers.
You can then follow the cycles, negating the numbers in them as an indicator of 'visited' elements. You walk the array, pick the first positive number you come across, follow its cycle (making the elements of the cycle negative), and continue looking for untouched elements. In the end, you just make all the elements positive again to get the resulting permutation.
You get an O(n) time and O(1) space algorithm! Of course, we kind of 'cheated' by using the sign bits of the array integers as our personal 'visited' bitmap.
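As a rough sketch of that trick, assuming 0-based indices, strictly positive entries, and an illustrative dest(i) callable standing in for the 'where does the element at index i go' formula:

```python
def permute_in_place(a, dest):
    """Apply the permutation 'element at index i goes to index dest(i)' in place,
    using the sign bit of the strictly positive entries as a 'visited' marker."""
    n = len(a)
    for start in range(n):
        if a[start] < 0:                   # already moved as part of an earlier cycle
            continue
        i = start
        temp = a[start]
        while True:
            j = dest(i)
            temp, a[j] = a[j], -temp       # place temp at j, negated to mark it visited
            i = j
            if i == start:                 # cycle closed
                break
    for i in range(n):
        a[i] = -a[i]                       # make everything positive again

# Example: a cyclic shift by one, where the element at index i goes to (i + 1) % n.
a = [10, 20, 30, 40]
permute_in_place(a, lambda i: (i + 1) % len(a))
print(a)   # [40, 10, 20, 30]
```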
Even if the array elements are not necessarily integers, this method (of following the cycles, not the hack of sign bits :-)) can actually be used to tackle the two problems you state:
The in-shuffle (or out-shuffle) problem: When 2n+1 is a power of 3, it can be shown (using number theory) that 1, 3, 3^2, etc. are in different cycles and that all cycles are covered by using those as starting points. Combine this with the fact that the in-shuffle is susceptible to divide and conquer, and you get an O(n) time, O(1) space algorithm (the formula is i -> 2*i modulo 2n+1). Refer to the above paper for more details.
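For the special case where 2n+1 is itself a power of 3 (so the divide-and-conquer step is not needed), a cycle-leader sketch could look like this; the function name is illustrative and the general case is handled in the paper:

```python
def in_shuffle_pow3(a):
    """In-shuffle a = [x1..xn, y1..yn] into [y1, x1, y2, x2, ...] in place,
    assuming len(a) + 1 is a power of 3, so the cycle leaders are 1, 3, 9, ..."""
    m = len(a) + 1                         # 2n + 1, assumed to be a power of 3
    leader = 1
    while leader < m:
        i = leader
        temp = a[i - 1]
        while True:
            j = (2 * i) % m                # element at position i goes to 2*i modulo 2n+1
            temp, a[j - 1] = a[j - 1], temp
            i = j
            if i == leader:
                break
        leader *= 3                        # next cycle leader

a = ['x1', 'x2', 'x3', 'x4', 'y1', 'y2', 'y3', 'y4']   # 2n = 8, 2n + 1 = 9 = 3^2
in_shuffle_pow3(a)
print(a)   # ['y1', 'x1', 'y2', 'x2', 'y3', 'x3', 'y4', 'x4']
```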
The cyclic shift an array problem: Cyclically shifting an array of size n by k also gives a permutation of the original array (given by the formula i goes to i+k modulo n), and it can also be solved in linear time and in place using the cycle-following method. In fact, in terms of the number of element exchanges, the cycle-following method is better than the 3-reversals algorithm. Of course, cycle following can kill the cache because of its access patterns, and in practice the 3-reversals algorithm might actually fare better.
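A sketch of that cycle-following rotation (often called the 'juggling' algorithm; the function name is illustrative), using 0-based indices and the mapping i -> (i + k) mod n, which splits into gcd(n, k) disjoint cycles starting at indices 0 .. gcd(n, k) - 1:

```python
from math import gcd

def rotate_in_place(a, k):
    """Cyclically shift a by k positions in place (element at index i moves to
    (i + k) % n) by following the gcd(n, k) disjoint cycles of the permutation."""
    n = len(a)
    if n == 0:
        return
    k %= n
    if k == 0:
        return
    for start in range(gcd(n, k)):         # one leader per cycle
        i = start
        temp = a[start]
        while True:
            j = (i + k) % n                # where the element held in temp belongs
            temp, a[j] = a[j], temp
            i = j
            if i == start:
                break

a = [1, 2, 3, 4, 5]
rotate_in_place(a, 2)
print(a)   # [4, 5, 1, 2, 3]
```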