
In my project I display some spheres in a 3D coordinate system. See the picture below.

[Image: spheres displayed in a 3D coordinate system]

Each sphere displays a Lab color value. To create a sphere I use the Mesh function in DirectX:

// Radius of the sphere
private const float radius = 4f;
// The number of slices of the sphere
private const int slices = 8;
// The number of stacks of the sphere
private const int stacks = 8;

// The mesh used to render the sphere
private Mesh mesh = null;
private Vector3 vec;
public Vector3 min;
public Vector3 max;

public void createMesh(Device device, Color color, params float[] labValues)
{
    // Creates the sphere on the given device
    mesh = Mesh.Sphere(device, radius, slices, stacks);
    // Clones the mesh so a vertex array with the custom FVF can be created
    Mesh tempMesh = mesh.Clone(mesh.Options.Value, Vertex.FVF_Flags, device);
    // Locks the vertex buffer and retrieves the vertex array
    Vertex[] vertData = (Vertex[])tempMesh.VertexBuffer.Lock(0, typeof(Vertex), LockFlags.None, tempMesh.NumberVertices);

    // Assigns the color and the position to every vertex
    for (int i = 0; i < vertData.Length; ++i)
    {
        vertData[i].color = color.ToArgb();
        vertData[i].x += labValues[1];
        vertData[i].y += labValues[0] - 50f;
        vertData[i].z += labValues[2];
    }
    min = new Vector3(labValues[1], labValues[0] + 100f, labValues[2]);
    max = new Vector3(labValues[1], labValues[0] - 100f, labValues[2]);

    // Unlocks the vertex buffer of the copy
    tempMesh.VertexBuffer.Unlock();
    // Releases the original mesh
    mesh.Dispose();
    // Stores the copy in the mesh field
    mesh = tempMesh;

    Vector3 vTemp = new Vector3(labValues[1], labValues[0], labValues[2]);
    vec = vTemp;
}

struct Vertex
{
    public float x, y, z; // Position of vertex in 3D space
    public int color;     // Diffuse color of vertex

    /// <summary>
    /// Constructor of the vertex
    /// </summary>
    /// <param name="_x">X(A) - position</param>
    /// <param name="_y">Y(L) - position</param>
    /// <param name="_z">Z(B) - position</param>
    /// <param name="_color">The color</param>
    public Vertex(float _x, float _y, float _z, int _color)
    {
        x = _x; y = _y; z = _z;
        color = _color;
    }

    // The vertex format
    public static readonly VertexFormats FVF_Flags = VertexFormats.Position | VertexFormats.Diffuse;
}

For the rotation, I pick up the mouse-move coordinates and use them during rendering. So I don't change the camera position; I only rotate the device's world transform:

Matrix MX = Matrix.RotationX(impValue.ObjektRotationY);
impValue.ObjektRotationY = 0;
Matrix MY = Matrix.RotationY(impValue.ObjektRotationX);
impValue.ObjektRotationX = 0;

Matrix Rotation = device.Transform.World;
Rotation *= MY;
Rotation *= MX;

device.Transform.World = Rotation;

Now I add a function for clicking on a sphere (based on a picking tutorial) to show its Lab values:

public Sphere getSphereByCoordinates(Device device, List<Sphere> meshList, Vector3 cameraVec, float x, float y)
{
    // Temporary list for the hit spheres
    List<Sphere> tempSphereList = new List<Sphere>();
    Sphere closestSphere = null;

    // Near and far points of the pick ray in screen space
    Vector3 v3Near = new Vector3(x, y, 0);
    Vector3 v3Far = new Vector3(x, y, 1);

    // Unprojects the 2D screen coordinates into 3D space
    v3Near.Unproject(device.Viewport, device.Transform.Projection, device.Transform.View, device.Transform.World);
    v3Far.Unproject(device.Viewport, device.Transform.Projection, device.Transform.View, device.Transform.World);
    // Subtracts the two vectors to get the ray direction
    v3Far.Subtract(v3Near);

    // Checks every single sphere
    foreach (Sphere tempSphere in meshList)
    {
        // Tests whether the ray intersects the mesh and, if so, adds it to the list
        if (tempSphere.labMesh.Intersect(v3Near, v3Far))
            tempSphereList.Add(tempSphere);
    }

    // The closest distance found so far
    double closestDistance = -1.0;
    // Goes through all hit spheres and picks the one closest to the camera
    foreach (Sphere tempSphere in tempSphereList)
    {
        double theDistance = Distance(cameraVec, tempSphere.labVector);

        if (theDistance < closestDistance || closestDistance == -1d)
        {
            closestDistance = theDistance;
            closestSphere = tempSphere;
        }
    }

    return closestSphere;
}

private double Distance(Vector3 v1, Vector3 v2)
{
    // Creates a difference vector
    Vector3 difference = new Vector3(v1.X - v2.X,
                                     v1.Y - v2.Y,
                                     v1.Z - v2.Z);
    // Returns the calculated distance
    return Math.Sqrt(Math.Pow(difference.X, 2f) + Math.Pow(difference.Y, 2f) + Math.Pow(difference.Z, 2f));
}

As you can see, I put every mesh that intersects the mouse's world coordinates into a temporary list. This part works correctly. My problem is getting the correct distance, because I calculate the distance between the camera vector and the static Lab value vector. How can I transform the static Lab value into the device's world space when the device was rotated before clicking on a sphere? Do you know a solution? Thank you for your help!

Maxim
  • Not related to your problem, but please `PascalCase` your public functions... (http://msdn.microsoft.com/en-us/library/vstudio/ms229043%28v=vs.100%29.aspx) – antonijn Mar 27 '13 at 16:48

1 Answer


I doubt that your intersection code works, because it does not take the device transformation into consideration. You should perform an analytic calculation of the intersection anyway: Mesh.Intersect can become very slow, because it has to check every triangle. Since you have the sphere's position and radius, you can check the intersection based on those values (see the distance of a point from a ray).
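
A minimal sketch of such an analytic test, assuming the ray origin and direction are already in the same space as the sphere centre (the helper name RayIntersectsSphere and its parameters are mine, not from your code):

// Analytic ray/sphere test: project the vector from the ray origin to the
// sphere centre onto the ray direction and compare the closest approach
// against the radius. No mesh triangles are involved.
private static bool RayIntersectsSphere(Vector3 rayOrigin, Vector3 rayDir, Vector3 center, float radius)
{
    rayDir.Normalize();
    // Vector from the ray origin to the sphere centre
    float ox = center.X - rayOrigin.X;
    float oy = center.Y - rayOrigin.Y;
    float oz = center.Z - rayOrigin.Z;
    // Projection of that vector onto the normalized ray direction
    float t = ox * rayDir.X + oy * rayDir.Y + oz * rayDir.Z;
    if (t < 0f)
        return false; // the sphere centre lies behind the ray origin
    // Squared distance from the sphere centre to the closest point on the ray
    float distSq = ox * ox + oy * oy + oz * oz - t * t;
    return distSq <= radius * radius;
}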

If you want to use Mesh.Intersect, you have to transform the ray, because the method considers only the untransformed mesh. So you need a transformation that reverts the device's rotation, which is exactly the inverse of the World matrix. See TransformCoordinate, TransformNormal and Invert. You can then perform the intersection with the transformed ray.
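
A sketch of that with the Managed DirectX helpers, assuming v3Near is the unprojected ray origin and v3Far already holds the ray direction (as it does after the Subtract call in your picking code):

// Bring the pick ray from world space into the mesh's local space,
// because Mesh.Intersect only tests the untransformed geometry.
Matrix invWorld = Matrix.Invert(device.Transform.World);

// Positions are transformed with the full matrix (including translation)...
Vector3 localOrigin = Vector3.TransformCoordinate(v3Near, invWorld);
// ...while directions ignore the translation part.
Vector3 localDir = Vector3.TransformNormal(v3Far, invWorld);

// The intersection test now runs in the same space as the mesh.
if (tempSphere.labMesh.Intersect(localOrigin, localDir))
    tempSphereList.Add(tempSphere);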

As for the distance calculation, you have two options: you can either transform the camera position with the inverse World matrix, or transform the sphere position with the (non-inverted) World matrix.
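
The second option could look roughly like this inside your distance loop (a sketch; labVector is the untransformed sphere position from your Sphere class):

// Move the sphere's static Lab position into world space before measuring
// its distance to the (world-space) camera position.
Vector3 worldSpherePos = Vector3.TransformCoordinate(tempSphere.labVector, device.Transform.World);
double theDistance = Distance(cameraVec, worldSpherePos);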

Nico Schertler