it helps me with the crosswords :p
def ramnaga(letters, word=""):
    if len(letters) == 0:
        if is_word(word):
            return [word]
        else:
            return []
    words = []
    for i in range(len(letters)):
        newletters = letters[0:i] + letters[i+1:]
        l = letters[i]
        words += ramnaga(newletters, word + l)
    return words
this would help of course if wordpress didn’t mess up the indentation [edit: ahh, sourcecode mode] [edit2: untested modification, watch out!].
The implementation of is_word() is left as an exercise for the reader. You obviously need a wordlist (*nix users can look in /usr/share/dict/words; Windows users can scour the internet, because their OS is rubbish and doesn’t come with anything useful). I recommend using a ‘set’ object to store it, because it automatically removes duplicates and provides O(1) membership testing, which is all you care about. If you remove the is_word condition it’s just a standard permutation generator, but it’ll use a tonne of memory (easily 350 MiB on a 10-character word) storing gibberish before you can filter it.
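For completeness, here’s one way is_word() could look — a sketch of my own, assuming a one-word-per-line file at the usual *nix path (which may differ on your system):

```python
# Load the wordlist into a set once; after that every lookup is O(1).
WORDS = set()

def load_words(path="/usr/share/dict/words"):
    # One word per line; lowercase everything so lookups are
    # case-insensitive.
    with open(path) as f:
        WORDS.update(line.strip().lower() for line in f)

def is_word(word):
    return word.lower() in WORDS
```

Call load_words() once at startup; the set comprehension handles the duplicate removal mentioned above for free.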
On my CPU it takes about 25 seconds on a 10 letter word and about 250 seconds on 11 letters. It’s impractical above that. If you reimplemented it in C you might squeeze a 12 letter word out, but there’s a limit to how fast you can get an inherently O(n!) algorithm. I’d be interested to know if you could use some kind of coding theory to figure out permutations based on Hamming distances, or something. It seems then that your complexity would be in terms of the size of the dictionary?
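On that last thought: a sorted-letters index gets you exactly that — the cost is proportional to the size of the dictionary (to build the index once), and each query is just one sort of the input. A sketch, names mine:

```python
from collections import defaultdict

def build_index(words):
    # Key every dictionary word by its sorted letters; all anagrams
    # of a word share the same key.
    index = defaultdict(list)
    for w in words:
        index["".join(sorted(w))].append(w)
    return index

def anagrams(letters, index):
    # One O(n log n) sort plus one O(1) dict lookup -- no n! blowup.
    return index.get("".join(sorted(letters)), [])
```

Building the index over /usr/share/dict/words takes a second or two, and then even 12+ letter queries are instant.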
call as follows:
>>> # list(set()) to uniqueify elements and convert back to a list
>>> words = list(set(ramnaga("ramnaga")))
written in about 4 minutes due to the extremely rare good luck of getting a recursive problem right first time.