
I have been trying to find an implementation of Korf's Rubik's cube solving algorithm in Python, as I am working on a project comparing algorithm efficiency. Unfortunately, I haven't been able to find any implementation so far. Does anyone have any examples of Korf's algorithm in Python?

  • I implemented Korf's in C++. Without trying to be controversial, Python is just too slow, though Cython is an option. There are billions upon billions of twists and equally many cube state comparisons required to solve the cube using Korf's. For some rough numbers, my program applies about 402 million twists and comparisons per minute. I helped another person with their Python version, and they were getting about 15.5 million. – benbotto Jul 07 '21 at 21:35
  • You may find my article on Medium helpful: https://medium.com/@benjamin.botto/implementing-an-optimal-rubiks-cube-solver-using-korf-s-algorithm-bf750b332cf9?sk=bf4d6a245e07e37dc94d84e77489ffc6 – benbotto Jul 07 '21 at 21:35
  • For a fast Python solver which solves random cubes on average in a few minutes optimally see https://github.com/hkociemba/RubiksCube-OptimalSolver – Herbert Kociemba Dec 22 '21 at 01:19

1 Answer


Korf's algorithm is not the best method for an optimal Rubik's cube solver. It is far better to implement an algorithm which exploits the symmetries of the cube, which leads to smaller pruning tables. Michael Reid's optimal solver algorithm is better suited. It uses phase 1 of the two-phase algorithm (which can exploit 16 cube symmetries) in three different directions to exploit all 48 cube symmetries. The average pruning depth for IDA* is considerably higher than with Korf's method.
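For reference, the search frame that both Korf's method and the phase-1-based approach share is plain IDA* with a pruning-table lower bound. Below is a minimal sketch in Python; the cube representation and the callables `apply_move`, `prune_depth` and `is_solved` are hypothetical placeholders, not Korf's or Reid's actual data structures:

```python
# Minimal IDA* frame with a pruning-table lower bound (illustrative sketch).
# apply_move, prune_depth and is_solved are hypothetical callables supplied
# by the caller; a real solver would also skip redundant consecutive turns
# of the same face and use coordinate/symmetry encodings for speed.

MOVES = [face + suffix for face in "UDLRFB" for suffix in ("", "'", "2")]  # 18 face turns

def ida_star(start, apply_move, prune_depth, is_solved, max_depth=20):
    """Deepen the allowed search depth until a shortest solution is found."""
    for depth in range(max_depth + 1):
        path = []
        if _search(start, depth, apply_move, prune_depth, is_solved, path):
            return path            # list of moves, e.g. ["R", "U'", "F2", ...]
    return None

def _search(state, remaining, apply_move, prune_depth, is_solved, path):
    if is_solved(state):
        return True
    # prune_depth(state) is a lower bound on the moves still needed;
    # if it exceeds the remaining budget, this branch cannot succeed.
    if remaining == 0 or prune_depth(state) > remaining:
        return False
    for move in MOVES:
        path.append(move)
        if _search(apply_move(state, move), remaining - 1,
                   apply_move, prune_depth, is_solved, path):
            return True
        path.pop()
    return False
```

The quality of `prune_depth` is what separates the approaches: a table built on symmetry-reduced coordinates gives higher lower bounds, so IDA* cuts off branches much earlier.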

I am sceptical whether Python is fast enough, though. I may give it a try, since I implemented my two-phase algorithm in Python and it should not be too difficult to adapt this code.

  • Python's slow speed is somewhat offset by today's faster computers with more memory (Korf's paper being 24 years old), and "fast enough" is of course relative to one's patience :-) – Stefan Pochmann Nov 29 '21 at 19:04
  • Sadly it seems that it is not possible to speed up the computation in Python by using the many cores of modern CPUs with multiple threads, because of the Global Interpreter Lock (GIL). – Herbert Kociemba Nov 30 '21 at 11:42
  • You can still use different *processes*, no? – Stefan Pochmann Dec 03 '21 at 13:55
  • Yes of course. But since the processes share no data, the pruning tables will have to be loaded multiple times. Korf's tables altogether had a size of 80 MB, so it would be no problem to run multiple instances if you have 8 GB of RAM. I now managed to implement the phase-1-based approach suggested by Michael Reid, which uses a 35 MB pruning table, in Python. For random cubes a complete depth 16 search takes roughly 20 min on average. The Python program generates only about 160,000 nodes/s, which is incredibly slow (Cube Explorer generates nodes more than 50 times faster). – Herbert Kociemba Dec 03 '21 at 19:10
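To illustrate the workaround discussed in the last comments, here is a minimal sketch of running several solver instances in separate processes with Python's multiprocessing module, which sidesteps the GIL. The functions `load_pruning_table` and `solve_optimally` are hypothetical stubs standing in for a real solver; note that each worker process loads its own copy of the table, as the comment above points out:

```python
# Sketch: run independent solver instances in separate processes to avoid the GIL.
from multiprocessing import Pool

def load_pruning_table():
    # Placeholder: a real solver would read the ~35 MB pruning table from disk here.
    return {}

def solve_optimally(cube_string, table):
    # Placeholder: a real solver would run its IDA* search here and return a maneuver.
    return "R U R' U'"

_table = None

def init_worker():
    # Runs once per worker process; the table is NOT shared between processes.
    global _table
    _table = load_pruning_table()

def solve(cube_string):
    return solve_optimally(cube_string, _table)

if __name__ == "__main__":
    scrambles = ["scramble 1", "scramble 2", "scramble 3", "scramble 4"]
    with Pool(processes=4, initializer=init_worker) as pool:
        for maneuver in pool.map(solve, scrambles):
            print(maneuver)
```

With a 35 MB table per process, running a handful of workers stays well within the 8 GB of RAM mentioned above.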