
Consider a shortest-path problem in which the shortest path from the green square to the red square must be found. For this I would like to use the Hill Climbing approach, with the Manhattan distance as my heuristic. I have already calculated some of those distances, as you can see. Moreover, there are walls that green cannot pass through.
[figure: grid with the green start square, the red goal square, walls, and some Manhattan distances marked]

In this scenario, the green agent would move to the square with a Manhattan distance of 3, and after that the algorithm would already terminate. We did not arrive at the global optimum, i.e. the best possible solution. Now I am looking for a scenario where the Hill Climbing approach, given the Manhattan distance as the heuristic and the path-finding problem as described, actually finds a path that is NOT globally optimal. I could not come up with any example, which I suspect cannot be right.
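To make the failure mode concrete, here is a minimal sketch of Hill Climbing on a grid, always moving to the neighbour with the smallest Manhattan distance to the goal (the heuristic ignores walls). The grid layout, the coordinates and the `hill_climb` helper are a made-up illustration, not the layout from the image; they are only chosen so that the numbers (heuristic value 3 at the dead end, 4 at the start) match the description above.

```python
# Sketch of Hill Climbing guided by the Manhattan distance, which ignores walls.
# The layout below is hypothetical; '#' marks a wall, '.' a free square.

def manhattan(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def hill_climb(grid, start, goal):
    """Greedily move to the neighbour with the smallest Manhattan distance.
    Stops as soon as no neighbour improves the heuristic (local optimum)."""
    rows, cols = len(grid), len(grid[0])
    current, path = start, [start]
    while current != goal:
        neighbours = [
            (current[0] + dr, current[1] + dc)
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
        ]
        # Keep only cells inside the grid that are not walls.
        neighbours = [
            (r, c) for r, c in neighbours
            if 0 <= r < rows and 0 <= c < cols and grid[r][c] != '#'
        ]
        if not neighbours:
            return path, False
        best = min(neighbours, key=lambda cell: manhattan(cell, goal))
        if manhattan(best, goal) >= manhattan(current, goal):
            return path, False   # stuck: no neighbour is better than here
        current = best
        path.append(current)
    return path, True

grid = [
    ".....",
    ".....",
    "..#..",
    ".#.#.",
    ".....",
]
path, reached = hill_climb(grid, start=(4, 2), goal=(0, 2))
print(path, "reached goal:", reached)
```

Running it, the agent steps from the start (heuristic 4) to the dead-end square (heuristic 3) and stops there, because the only remaining move would increase the heuristic again.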

kklaw
  • I don't understand why the algorithm would end after moving to the square with distance 3. It hasn't found a path. I don't think I really understand what you're asking. – Layne Bernardo May 14 '22 at 05:26

1 Answer


Maybe I did not understand your problem, but my impression is:
Using the Manhattan distance as if there were no walls is wrong. The box below the green box is not actually 3 steps away from the red box, as the Manhattan distance suggests, but 9, and the boxes left and right of the green box are not 5 steps away, but 7.
The first step, thus, is to compute the distances correctly. Afterwards, you could represent your path-finding problem as a weighted graph and use a standard shortest-path algorithm.
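As a hedged sketch of that suggestion, using the same grid-as-list-of-strings representation as in the example above: since every move between adjacent free squares costs 1, the weighted graph degenerates into an unweighted one, so a breadth-first search from the goal already yields the exact distances (with general edge weights, Dijkstra's algorithm would take its place). The `true_distances` helper is an assumption for illustration, not code from the answer.

```python
from collections import deque

def true_distances(grid, goal):
    """Shortest-path distance from every reachable free cell to `goal`,
    with walls ('#') respected. BFS suffices because every step costs 1."""
    rows, cols = len(grid), len(grid[0])
    dist = {goal: 0}
    queue = deque([goal])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] != '#' and (nr, nc) not in dist):
                dist[(nr, nc)] = dist[(r, c)] + 1
                queue.append((nr, nc))
    return dist
```

In that made-up layout, `true_distances(grid, (0, 2))[(3, 2)]` comes out as 9 while the Manhattan distance claims 3, which is exactly the kind of mismatch described here.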

EDIT:

Your problem is the combination of a heuristic that ignores obstacles and thus favours a dead end with an optimization strategy that only finds the next local optimum: the Manhattan distance (ignoring the obstacles) claims the minimum distance is 3, so Hill Climbing goes in that direction and is unable to turn back.
So either you use another heuristic that makes realistic claims, or you use an optimization strategy that temporarily accepts worse solutions in order to escape from a local optimum.
In the first case, you would have to find possible paths, which requires traversing your plane with some random component, and then select an acceptably good path.
In the second case, you would have to try worse solutions as new starting points for the optimization until an acceptably good local optimum is found, as in the sketch below.
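As an illustration of the second option, here is a purely hypothetical simulated-annealing-style sketch on the same grid representation: unlike plain Hill Climbing, it sometimes accepts a step to a square with a worse Manhattan distance, which is what lets it back out of the dead end. The `annealing_walk` helper and its parameters are assumptions for illustration only.

```python
import math
import random

def annealing_walk(grid, start, goal, steps=500, t0=2.0):
    """Random walk guided by the Manhattan distance that occasionally
    accepts worse moves; the acceptance probability decays as a
    temperature parameter falls over time."""
    def md(cell):
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    rows, cols = len(grid), len(grid[0])
    current, path = start, [start]
    for k in range(steps):
        if current == goal:
            return path, True
        temperature = t0 * (1 - k / steps) + 1e-9
        candidates = [
            (current[0] + dr, current[1] + dc)
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
        ]
        candidates = [
            (r, c) for r, c in candidates
            if 0 <= r < rows and 0 <= c < cols and grid[r][c] != '#'
        ]
        if not candidates:
            break
        nxt = random.choice(candidates)
        delta = md(nxt) - md(current)
        # Always accept improvements; accept deteriorations with a
        # probability that shrinks as the temperature drops.
        if delta <= 0 or random.random() < math.exp(-delta / temperature):
            current = nxt
            path.append(current)
    return path, current == goal
```

The walk it returns is typically far from a shortest path, which matches the point above: such a strategy escapes the local optimum, but an acceptably good solution still has to be selected afterwards.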

EDIT 2:

I am sorry, I think only now do I understand your question.
You want to use the Manhattan distance, which ignores the obstacles, to direct Hill Climbing towards the locally best direction, and you want an example where this algorithm ends up in a local optimum.
Well, your example is exactly what you are looking for: starting at the green square, the heuristic suggests going down, but there the algorithm realizes that it is stuck at a square whose actual path to the red square, considering the obstacles, has length 9.
This is the local optimum: the only possible move (considering the obstacles) is to go from the square with distance 3 back to the green square with distance 4, i.e. downhill in terms of the heuristic. But Hill Climbing is not able to do so, and thus the algorithm is stuck.

Reinhard Männer
  • I get your point, but using the Manhattan distance as you describe it would no longer be a heuristic, right? It would just be the path cost, and in that case Hill Climbing would find an optimal solution, I guess – kklaw May 13 '22 at 18:47
  • That's right. If the path costs are right, standard optimization algorithms can find the optimum (if they are not stuck in a local optimum). Is this not what you want? In which way could any kind of heuristic be better? – Reinhard Männer May 13 '22 at 18:53
  • Hmm, I am not really looking for a good heuristic to solve this problem. Rather, I am interested in the question: is it possible that the above heuristic, i.e. ignoring obstacles, leads to some solution (a path is found, without landing in a dead end as in the picture) that is not optimal? So there actually exists another, shorter path, which was simply not considered by the Hill Climbing approach. That obviously must be associated with the placement of the walls, but I could not find such a placement. – kklaw May 13 '22 at 18:57
  • In that case you would have to use an algorithm that is able to escape from dead ends, e.g. Simulated Annealing, Genetic Algorithms, or others. – Reinhard Männer May 13 '22 at 19:02
  • So in other words: Either we arrive at a solution that is optimal, or we arrive at a dead end? – kklaw May 13 '22 at 19:11
  • No. Please see my edit above. – Reinhard Männer May 14 '22 at 04:54
  • So, do you have an idea for a heuristic and a certain environment (i.e. placement of obstacles) that makes the algorithm find a local maximum, which is not optimal? That is exactly what I am looking for – kklaw May 14 '22 at 17:51
  • Sorry, I only just saw your 2nd edit. Well, I always associated a local optimum, in the sense of path finding, with an actual path that leads from some start to the goal. Clearly, in my example there is no such path, so I thought that in my example there is no local optimum, there is simply no solution. So I was looking for some heuristic, or other measure, that actually finds some path, which is however costlier than the optimal path. I hope that makes sense – kklaw May 17 '22 at 13:54