I'm working on a research project involving a microscope (with a camera connected to the view port; the video feed is streamed to an application we're developing) and a manipulator arm. The microscope and manipulator arm are both controlled by a Luigs & Neumann control box (very obsolete; the computer interfaces with it over a serial cable, and its response time is slowwww). The microscope can be moved in 3 dimensions: X, Y, and Z, whose axes are at right angles to one another. When the box is queried, it returns decimal values for the position of each axis of each device. Each device can be sent a command to move to a specific position, with sub-micrometer precision.

The manipulator arm, however, is adjustable in all 3 dimensions, and thus there is no guarantee that any of its axes are aligned at right angles. We need to be able to look at the video stream from the camera, and then click on a point on the screen where we want the tip of the manipulator arm to move to. Thus, the two coordinate systems have to be calibrated.

Right now, we achieve calibration as follows: we move the microscope/camera to the tip of the manipulator arm and set that as the synchronization point between the two coordinate systems, move the manipulator arm +250um along its X axis, move the microscope to the tip at its new position, and then use the difference between these positions to define a 3D vector describing the distance and direction the tip moves in the microscope coordinate system, per unit of travel on that manipulator axis. This is repeated for each axis of the manipulator arm.
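For concreteness, the bookkeeping for this procedure might look like the following hypothetical NumPy sketch (all position values are made up for illustration; the 250 um step is the one described above). Each measured microscope-space displacement, divided by the step size, becomes one column of a 3x3 calibration matrix:

```python
import numpy as np

STEP_UM = 250.0  # manipulator travel used for each calibration move, in um

# Microscope positions measured at the tip before and after each axis move
# (these numbers are invented for the example).
origin = np.array([1000.0, 2000.0, 500.0])   # sync point, in um
after_x = np.array([1248.0, 2010.0, 495.0])  # after +250 um on manipulator X
after_y = np.array([1002.0, 2249.0, 503.0])  # after +250 um on manipulator Y
after_z = np.array([995.0, 2005.0, 748.0])   # after +250 um on manipulator Z

# Column i: microscope-space displacement produced by one um of travel
# on manipulator axis i.
A = np.column_stack([
    (after_x - origin) / STEP_UM,
    (after_y - origin) / STEP_UM,
    (after_z - origin) / STEP_UM,
])
print(A)
```

With the matrix in this form, a commanded per-axis move `m` predicts a microscope-space displacement `A @ m`, which is what the solving step below inverts.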

Once this data is obtained, the program can move the manipulator arm to a specific location in the microscope coordinate system by solving a system of equations that determines how far to move the manipulator along each of its axes to bring the tip to the center point of the screen. This works pretty reliably so far.
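That system of equations is just A·m = t, where the columns of A are the calibration vectors and t is the desired displacement in microscope coordinates. A minimal hypothetical sketch (matrix and target values are made up):

```python
import numpy as np

# Calibration matrix: column i is the microscope-space displacement per um
# of travel on manipulator axis i (invented values, roughly axis-aligned).
A = np.array([
    [0.992, 0.008, -0.020],
    [0.040, 0.996,  0.020],
    [-0.020, 0.012, 0.992],
])

# Desired tip displacement in microscope coordinates, in um
# (e.g., from the current tip position to the clicked point).
target = np.array([120.0, -45.0, 10.0])

# Solve A @ moves = target for the per-axis manipulator moves.
moves = np.linalg.solve(A, target)
print(moves)
```

Note that A need not be orthogonal for this to work; `np.linalg.solve` only requires the three axis vectors to be linearly independent, which is why the skewed manipulator axes are not a problem once they have been measured.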

The issue we're running into here is that due to the slow response time of the equipment, it can take 5-10 minutes to complete the calibration process, which is complicated by the fact that the tip of the manipulator arm must be changed occasionally during an experiment, requiring the calibration process to be repeated. Our research is rather time sensitive and this creates a major bottleneck in the process.

My linear algebra is a little patchy, but it seems like if we measure the distance traveled by the tip of the manipulator arm per unit in the microscope coordinate system and hard-code this into the program (for now), it might be possible to move all 3 axes of the manipulator by a specific amount at once, and then derive the vectors for each axis from that single measurement. I'm not really sure how to go about doing this (or whether it's even possible), and any advice would be greatly appreciated. If there's any additional information you need, or if you need clarification on anything, please let me know.
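One caution on this idea: a single combined move yields only three measured numbers (the tip's microscope-space displacement), while three unknown axis directions carry more degrees of freedom than that, even when the per-axis travel scales are already known. A tiny made-up NumPy check shows two different axis geometries producing the exact same combined displacement, so one observation cannot distinguish them:

```python
import numpy as np

move = np.array([100.0, 100.0, 0.0])  # commanded per-axis travel, in um

# Two different (invented) axis geometries: columns are unit direction
# vectors for the manipulator's X, Y, Z axes.
axes_a = np.eye(3)
axes_b = axes_a[:, [1, 0, 2]]  # same axes, but X and Y directions swapped

# Both geometries predict the identical tip displacement for this move...
print(axes_a @ move)
print(axes_b @ move)
# ...so this single measurement cannot tell the two geometries apart.
```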

DivideByZer0
  • Are you sure that the manipulator arm is controlled in x, y and z? Usually robot arms are controlled by either absolute or relative angle of each of the joints. Or is the control box already converting (x,y,z) to joint angles? – Roland Smith May 02 '12 at 21:25
  • Well, the motors operate via a screw of sorts that moves them in a straight line along an axis. They're pretty simple - just 3 axes bolted onto one another. – DivideByZer0 May 02 '12 at 22:21
  • I see. Unless there is too much play in the bearings and structure, one would expect the alignment deviations between the axes of the manipulator and those of the microscope to be fairly constant. If that is the case, you should be able to use coordinate transformations to correct for the deviations. E.g. in your sketch, the x-axis of the arm isn't perpendicular to the y-axis. So a move in x will change both the x- and y-value in a cartesian coordinate system. So you have to multiply the displacement vector (x,0,0) with a 3x3 matrix to get the cartesian displacement (x,y,z). – Roland Smith May 03 '12 at 18:27

1 Answer


You really need four data points to characterize three independent axes of movement.

Can you add some other constraints? E.g., are the manipulator axes orthogonal to each other, even if not fixed relative to the stage's axes? Do you know the manipulator's alignment roughly, even if not exactly?

What takes the most time - moving the stage to re-center? Can you move the manipulator and stage at the same time? How wide is the microscope's field of view? How much distance-distortion is there near the edges of the view - does it actually have to be re-centered each time to be accurate? Maybe we could come up with a reverse-screen-distortion mapping instead?

Hugh Bothwell
  • Thanks for your response, let's see... Of the manipulator axes, Y and Z are potentially orthogonal to each other. X is mounted on Z, and is at an angle. Its projection onto the XY plane is most likely orthogonal to Y. Here's a quick sketch, if that helps: [link](http://imgur.com/b7dtZ) I can't move the manipulator and stage at the same time, as they are being controlled by the same box and issuing a new command interrupts the previous one in progress. It does its best to move the microscope afterwards to roughly where the manipulator tip is going to be, to make re-centering easier. – DivideByZer0 May 02 '12 at 22:15
  • I'm not sure about reverse-screen-distortion mapping, nor am I sure about what the field of view is. I'll have to research this for the particular lens we are using. Right now you can click on a position on the screen, and the screen will center there. This is pretty accurate and makes calibration somewhat easier, but the main lag is in moving the manipulator, updating the coordinates and making sure it's no longer moving, moving the microscope and doing the same, and then moving & asking for coordinates multiple times for each move you need to make to center the screen on the manipulator. – DivideByZer0 May 02 '12 at 22:18
  • Also, you said that it needs four data points for calibration - what exactly did you mean by this? Does the method that we're currently using correspond to 4 data points? (the points that are being used are start point, X+250um, Y+250um, and Z+350um). However, we've measured the precise distance traveled per unit in the screen coordinates for each of the manipulators, if that helps. I was actually a bit hesitant to say that the Y and Z axes were orthogonal, but after inspecting it, it appears they should be, at least roughly speaking. – DivideByZer0 May 02 '12 at 22:25
  • Also, the lens is 4x magnification. Here's a link to it: [lens info](http://objectives.nikoninstruments.com/compare.php?c[]=27) – DivideByZer0 May 02 '12 at 23:12