
What would be the best algorithm in terms of speed for locating an object in a field?

The field consists of 18 by 18 squares with side length 30.48 cm. The robot is placed in the square (0,0) and its job is to reach the light source while avoiding obstacles along the way. To locate the light source, the robot does a 360 degree turn to find the angle with the highest light reading and then travels towards the source. It can reliably detect a light source from 100 cm.

The way I'm implementing this at present is by storing the information about each tile in a 2D array (18 by 18). The possible values for a tile are unexplored (the default), blocked (there's an obstacle), and empty (there's nothing there). I'm thinking of using the DFS algorithm, where the children of tile (i, j) are at (i+3, j) and (i, j+3). However, considering that I will be doing a rotation at each child to find the angle with the highest light reading, I think there may be an algorithm that can locate the light source faster than DFS. Also, I will only be travelling in the x and y directions, since the robot uses the grid lines on the floor to correct its x and y positions.
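Here is a rough sketch of what I have in mind, written in Java just for illustration (my actual robot code may look different); `moveTo` and `scanForLight` are placeholders for the real movement and sensing code, and the 3-tile step is because 3 × 30.48 cm ≈ 91 cm, which is inside the 100 cm detection range:

```java
import java.util.ArrayDeque;
import java.util.Deque;

enum TileState { UNEXPLORED, BLOCKED, EMPTY }

class GridSearch {
    static final int SIZE = 18;   // 18 x 18 tiles, each 30.48 cm
    static final int STEP = 3;    // 3 tiles ~= 91 cm, inside the 100 cm detection range

    TileState[][] grid = new TileState[SIZE][SIZE];   // filled with UNEXPLORED at start

    // Visits scan points depth-first; returns true once scanForLight() sees the source.
    boolean dfs(int startX, int startY) {
        boolean[][] visited = new boolean[SIZE][SIZE];
        Deque<int[]> stack = new ArrayDeque<>();
        stack.push(new int[] { startX, startY });
        while (!stack.isEmpty()) {
            int[] cell = stack.pop();
            int x = cell[0], y = cell[1];
            if (x >= SIZE || y >= SIZE) continue;                 // off the field
            if (visited[x][y] || grid[x][y] == TileState.BLOCKED) continue;
            visited[x][y] = true;
            moveTo(x, y);                                         // drive along grid lines
            if (scanForLight()) return true;                      // 360 degree scan
            stack.push(new int[] { x + STEP, y });                // children as described above
            stack.push(new int[] { x, y + STEP });
        }
        return false;
    }

    void moveTo(int x, int y) { /* placeholder for real movement code */ }
    boolean scanForLight() { return false; /* placeholder for real sensing code */ }
}
```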

I would appreciate it if a fast and reliable algorithm could be suggested to accomplish this task.

  • Is there only 1 light source? If so you could avoid the 360 degree turn and just use feedback to point the head in the direction where the intensity of light starts decreasing, (so always move toward increasing intensity), I did a similar thing at school. – ajon Nov 09 '12 at 22:26
  • What are you trying to minimize? Computational speed, or the total travel time of the robot? And how does this work anyway if it can't detect the light source from more than 4 squares away? Is the robot supposed to search around until it sees something? – eh9 Nov 09 '12 at 22:41
  • I'm trying to minimize the total travel time of the robot. The robot is supposed to search around until it reliably locates the light source (i.e. within 100 cm). If it is not able to locate the light source from the current tile, it moves to the next tile as determined by the DFS algorithm. Now, i'm not sure whether DFS is the best choice to determine which tile to travel to next in the event that the light source is not located from the present tile. – lbj-ub Nov 09 '12 at 22:51
  • @JimGarrison - I disagree, this is a programming problem ... albeit a high-level one. – Stephen C Nov 09 '12 at 23:56

2 Answers


This is a really broad question, and I'm not an expert so my answer is based on "first principles" thinking rather than experience in the field.

(I'm assuming that your robot has generally unobstructed line of sight and movement; i.e. it is an open area with scattered obstacles, not in a maze.)

The problem is interpreting the information that you get back from a 360 degree scan.

  • If the robot sees the light source, then traversing a route to the light source is either trivial, or a "simple" maze walking task.

  • The difficulty is when you don't see the source. It might mean that the source is not within the circle of visibility. But it could also mean that the light is behind an obstacle. And unfortunately, a simple sensor like you are describing cannot distinguish these two cases.

If your sensor system allowed you to see the obstacles, you could plot the locations of the "shadow" regions (regions behind obstacles), and use that to keep track of the places that are left to search. So your strategy would be to visit a small number of locations and do a scan at each, then methodically "tidy up" a small number of areas that were in shadow.

But since you cannot easily tell where the shadow areas are, you need an algorithm that (ultimately) searches everywhere. DFS is a general strategy that searches everywhere, but it does so by (in effect) looking in the nooks and crannies first. A better strategy is a breadth-first search, only visiting the nooks and crannies if the wide-scale scans didn't find the light source.
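To illustrate what I mean, here is a rough Java sketch of that breadth-first order over the scan points. The grid size, step and `blocked` map are assumptions taken from your question, and the drive-and-scan step is only a comment; it is meant as a sketch, not a drop-in implementation:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;

class BreadthFirstSweep {
    static final int SIZE = 18;   // tiles per side
    static final int STEP = 3;    // one scan point every 3 tiles

    // Returns the order in which scan points would be visited, nearest first.
    static List<int[]> scanOrder(boolean[][] blocked, int startX, int startY) {
        boolean[][] visited = new boolean[SIZE][SIZE];
        Queue<int[]> queue = new ArrayDeque<>();
        List<int[]> order = new ArrayList<>();
        queue.add(new int[] { startX, startY });
        visited[startX][startY] = true;
        int[][] moves = { { STEP, 0 }, { 0, STEP }, { -STEP, 0 }, { 0, -STEP } };
        while (!queue.isEmpty()) {
            int[] cell = queue.remove();
            order.add(cell);                       // drive there and do the 360 degree scan
            for (int[] m : moves) {
                int nx = cell[0] + m[0], ny = cell[1] + m[1];
                if (nx < 0 || ny < 0 || nx >= SIZE || ny >= SIZE) continue;
                if (visited[nx][ny] || blocked[nx][ny]) continue;
                visited[nx][ny] = true;
                queue.add(new int[] { nx, ny });
            }
        }
        return order;
    }
}
```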


I would appreciate it if a fast and reliable algorithm could be suggested to accomplish this task.

I think you are going to need to develop one yourself. (Isn't this the point of the problem / task / competition?)


Although it may not look like it, this is more like a maze-following problem than anything else. I suppose this is some kind of challenge or contest situation where there's always a path from start to target, but suppose for a moment that there isn't. One of the acceptable outcomes for a robot whose beacon is fully surrounded by obstacles would be a report describing a closed path of obstacles surrounding the signal. If there's no such closed path, then you can find a hole somewhere; this is why it looks like maze following.

So the basic algorithm I'd choose is to start with a spiralling-inward traversal, sweeping out a path narrow enough that you're sure to see the beacon if one is present. If there are no obstacles (a degenerate case), this finds the target in minimal time. (Hint: each turn reduces the number of cells your sensor can locate per step.)
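To make the sweep concrete, here is a rough Java sketch that only generates a counter-clockwise inward spiral over the scan points (one every 3 tiles, so a 6-by-6 lattice); obstacle handling and the actual driving are deliberately left out, and the names are illustrative:

```java
import java.util.ArrayList;
import java.util.List;

class SpiralSweep {
    // n is the number of scan points per side (6 if you scan every 3 tiles of an 18-tile field).
    static List<int[]> spiralOrder(int n) {
        boolean[][] seen = new boolean[n][n];
        // Counter-clockwise order: +x (east), +y (north), -x (west), -y (south).
        int[][] dirs = { { 1, 0 }, { 0, 1 }, { -1, 0 }, { 0, -1 } };
        List<int[]> order = new ArrayList<>();
        int x = 0, y = 0, d = 0;
        for (int i = 0; i < n * n; i++) {
            order.add(new int[] { x, y });
            seen[x][y] = true;
            int nx = x + dirs[d][0], ny = y + dirs[d][1];
            if (nx < 0 || ny < 0 || nx >= n || ny >= n || seen[nx][ny]) {
                d = (d + 1) % 4;                   // turn left and spiral inward
                nx = x + dirs[d][0];
                ny = y + dirs[d][1];
            }
            x = nx;
            y = ny;
        }
        return order;
    }
}
```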

Take the spiral traversal to be counter-clockwise. What you have then is related to the rule for solving mazes by keeping your right hand on the wall and following the generated path. In this case you have the complication that, while the start of the maze is on the boundary, the end may not be. It's possible for the right-hand-touching path to fail in such a situation. Detecting this situation requires looking for "cavities" in the region swept out by adjacency to the wall.
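As a rough illustration of the rule itself (not of the cavity detection), here is one step of the right-hand rule on a grid in Java; `isBlocked` is a stand-in for whatever obstacle map the robot keeps, and it should also report out-of-bounds cells as blocked:

```java
class WallFollower {
    // Headings in clockwise order: 0 = north, 1 = east, 2 = south, 3 = west.
    static final int[][] DELTA = { { 0, 1 }, { 1, 0 }, { 0, -1 }, { -1, 0 } };

    interface ObstacleMap { boolean isBlocked(int x, int y); }

    // Returns {newX, newY, newHeading} for one right-hand-rule step:
    // prefer turning right, then straight, then left, then reversing.
    static int[] step(int x, int y, int heading, ObstacleMap map) {
        int[] relativeTurns = { 1, 0, 3, 2 };      // right, straight, left, back
        for (int turn : relativeTurns) {
            int h = (heading + turn) % 4;
            int nx = x + DELTA[h][0], ny = y + DELTA[h][1];
            if (!map.isBlocked(nx, ny)) {
                return new int[] { nx, ny, h };
            }
        }
        return new int[] { x, y, heading };        // boxed in on all four sides
    }
}
```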
