I need to find the optimal (fastest) path connecting two points in the plane. I'm given a function that determines the maximal advance velocity, which depends on both the location and the time.
My solution is based on Dijkstra's algorithm. First I cover the plane with a 2D lattice, considering only the discrete points. Each point is connected to its neighbors up to a specified order, to get sufficient direction resolution. Then I find the best path with a (sort of) Dijkstra's algorithm. Next I improve the resolution/quality of the found path: I increase the lattice density and the neighbor connectivity order, while restricting the search to the points close enough to the already-found path. This may be repeated until the needed resolution is achieved.
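For illustration, a neighborhood of order `k` can be enumerated like this (a simplified C++17 sketch; the coprime filter is just one way to avoid listing the same direction twice, not necessarily exactly what my code does):

```cpp
#include <cstdlib>
#include <numeric>  // std::gcd (C++17)
#include <vector>

struct Offset { int dx, dy; };

// Enumerate neighbor offsets up to connectivity order k. Keeping only
// coprime (|dx|, |dy|) pairs ensures each direction appears exactly once
// (e.g. (2, 2) would duplicate the direction of (1, 1)).
std::vector<Offset> neighborOffsets(int k) {
    std::vector<Offset> offsets;
    for (int dx = -k; dx <= k; ++dx)
        for (int dy = -k; dy <= k; ++dy) {
            if (dx == 0 && dy == 0) continue;
            if (std::gcd(std::abs(dx), std::abs(dy)) == 1)
                offsets.push_back({dx, dy});
        }
    return offsets;
}
```

Higher `k` gives finer angular resolution, at the cost of more edges per point.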
This generally works well, but nevertheless I'd like to improve the overall algorithm performance. I've implemented several tricks, such as varying the lattice density and the neighbor connectivity order based on the "smoothness" of the price function. However, I believe there's potential to improve the Dijkstra algorithm itself (for my specific kind of graph), which I haven't managed to realize yet.
First, let's agree on the terminology. I split all the lattice points into 3 categories:
- cold - points that have not been reached by the algorithm yet.
- warm - points that have been reached, but not fully processed yet (i.e. their price may still improve).
- stable - points that are fully processed (their price is final).
At each step Dijkstra's algorithm picks the cheapest warm point and tries to improve the price of its neighbors. Because of the nature of my graph, I get a kind of cloud of stable points surrounded by a thin layer of warm points. At each step a warm point on the cloud perimeter is processed; it's then added to the stable cloud, and the warm perimeter is (potentially) expanded.
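To make the discussion concrete, here is a simplified C++17 sketch of that loop (the names are mine; in my real problem the edge price depends on the arrival time, and the "stale entry" check stands in for a decrease-key operation):

```cpp
#include <limits>
#include <queue>
#include <utility>
#include <vector>

enum class State { Cold, Warm, Stable };

struct Entry {
    double price;  // price known when this entry was pushed
    int    node;
    bool operator>(const Entry& o) const { return price > o.price; }
};

// adj[u] lists (neighbor, edge price) pairs.
std::vector<double> dijkstra(
    int source,
    const std::vector<std::vector<std::pair<int, double>>>& adj) {
    const int n = static_cast<int>(adj.size());
    std::vector<State>  state(n, State::Cold);
    std::vector<double> price(n, std::numeric_limits<double>::infinity());
    std::priority_queue<Entry, std::vector<Entry>, std::greater<Entry>> warm;

    price[source] = 0.0;
    state[source] = State::Warm;
    warm.push({0.0, source});

    while (!warm.empty()) {
        auto [p, u] = warm.top();  // the cheapest warm point...
        warm.pop();
        if (state[u] == State::Stable) continue;  // stale queue entry
        state[u] = State::Stable;                 // ...becomes stable
        for (auto [v, w] : adj[u]) {              // try to improve neighbors
            if (state[v] != State::Stable && p + w < price[v]) {
                price[v] = p + w;
                state[v] = State::Warm;  // cold -> warm, or a cheaper warm
                warm.push({price[v], v});
            }
        }
    }
    return price;
}
```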
The problem is that the warm points processed consecutively by the algorithm are usually spatially (and hence topologically) unrelated. A typical warm perimeter consists of hundreds of thousands of points. At each step the next warm point to be processed is at a pseudo-random spatial location, so there's virtually no chance that two related points are processed one after another.
This creates a real problem with CPU cache utilization: at each step the CPU touches a pseudo-random memory location. Since there's a large number of warm points, the relevant data doesn't fit in the CPU cache (it's on the order of tens to hundreds of MB).
Well, this is indeed an inherent implication of Dijkstra's algorithm: the whole idea is to explicitly pick the cheapest warm point, regardless of any other property.
However, intuitively it's obvious that points on one side of a big cloud perimeter have no influence on points on the other side (in our specific case), so there's no harm in swapping their processing order.
Hence I thought about ways of adjusting the warm-point processing order without compromising the algorithm's correctness. I considered several ideas, such as dividing the plane into blocks and solving them partially independently, until some criterion indicates that their solutions may interfere. Or, alternatively, ignoring the interference and allowing "re-solving" (i.e. a transition from stable back to warm).
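To make the second idea more concrete, here is an unvalidated sketch (C++17; `eps`, the block size, and all names are mine, and the slack criterion is just one possibility): settle near-cheapest warm points in spatially grouped batches, and accept that a stable point may occasionally need re-opening:

```cpp
#include <algorithm>
#include <queue>
#include <vector>

struct Entry {
    double price;
    int    node;
    bool operator>(const Entry& o) const { return price > o.price; }
};

using WarmQueue =
    std::priority_queue<Entry, std::vector<Entry>, std::greater<Entry>>;

// Coarse spatial key: which blockSize x blockSize tile a node falls into.
int blockOf(int node, int latticeWidth, int blockSize) {
    const int x = node % latticeWidth;
    const int y = node / latticeWidth;
    return (y / blockSize) * (latticeWidth / blockSize) + (x / blockSize);
}

// Pop a whole batch of warm points whose prices lie within eps of the
// current minimum; their relative processing order is then free to choose.
std::vector<Entry> popBatch(WarmQueue& warm, double eps) {
    std::vector<Entry> batch;
    if (warm.empty()) return batch;
    const double pivot = warm.top().price;
    while (!warm.empty() && warm.top().price <= pivot + eps) {
        batch.push_back(warm.top());
        warm.pop();
    }
    return batch;
}

// Usage idea: sort the batch so that points of the same block are relaxed
// back-to-back, then run the usual neighbor relaxation on each point:
//   auto batch = popBatch(warm, eps);
//   std::sort(batch.begin(), batch.end(), [&](const Entry& a, const Entry& b) {
//       return blockOf(a.node, width, 64) < blockOf(b.node, width, 64);
//   });
```

The `eps` slack controls the trade-off: larger batches improve memory locality, but increase the chance that a batch-settled point gets undercut later and has to be re-solved.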
However, so far I could not find a rigorous method.
Are there any ideas on how to do this? Perhaps it's a known problem, with existing research and (hopefully) solutions?
Thanks in advance. And sorry for the long question.