
I'm using WIC and Direct2D (via SharpDX) to composite photos into video frames. For each frame I have the exact coordinates where each corner will be found. While the photos themselves are a standard aspect ratio (e.g. 4:3 or 16:9), the insertion area is not -- it may be rotated, scaled, and skewed.

[Image: a photo mapped onto a skewed quadrilateral in the frame, with corners labeled A, B, C, D]

Now I know in Direct2D I can apply matrix transformations to accomplish this... but I'm not exactly sure how. The examples I've seen are more about applying specific transformations (e.g. rotate 30 degrees) than trying to match an exact destination.

Given that I know the exact coordinates (A, B, C, D) above, is there an easy way to map the source image onto the target? Alternatively, how would I generate the matrix given the source and destination coordinates?

roufamatic
  • My first instinct is to ask - why Direct2D and not Direct3D? The solution would be as trivial as rendering two triangles with vertices (A, B, D) and (A, D, C). It seems Direct2D cannot render primitives this way. – Ani Jul 05 '12 at 18:18
  • I'm new to all of this; since I'm only working in two dimensions I naively assumed 2D. If you have an answer with Direct3D I'd be extremely happy. Though is there a chance of render artifacts along the shared line AD? – roufamatic Jul 05 '12 at 18:36
  • There should be no artifacts if you share the vertices and render it as an indexed mesh. See my answer below. – Ani Jul 05 '12 at 18:39

2 Answers


If Direct3D is an option, all you need to do is render the quadrilateral as two triangles (with the frog texture mapped onto it).

To make sure there are no artifacts, render the quad as an indexed mesh, like in the example here (note that it shares vertex 0 and vertex 2 across both triangles). Of course, you can replace the actual vertex coordinates with A, B, C and D.
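
As a rough sketch, the vertex and index data could look like the following in C# with SharpDX (this assumes, from the diagram, that A = top-left, B = top-right, C = bottom-left and D = bottom-right of the destination quad; TexturedVertex and QuadBuilder are illustrative names, not SharpDX types):

    using SharpDX;

    // Illustrative vertex layout: destination position plus the source (photo) texture coordinate.
    struct TexturedVertex
    {
        public Vector3 Position;
        public Vector2 TexCoord;
    }

    static class QuadBuilder
    {
        // One vertex per corner; the texture coordinates map the full photo onto the quad.
        public static TexturedVertex[] BuildVertices(Vector3 a, Vector3 b, Vector3 c, Vector3 d)
        {
            return new[]
            {
                new TexturedVertex { Position = a, TexCoord = new Vector2(0, 0) }, // 0: A, photo top-left
                new TexturedVertex { Position = b, TexCoord = new Vector2(1, 0) }, // 1: B, photo top-right
                new TexturedVertex { Position = d, TexCoord = new Vector2(1, 1) }, // 2: D, photo bottom-right
                new TexturedVertex { Position = c, TexCoord = new Vector2(0, 1) }, // 3: C, photo bottom-left
            };
        }

        // Triangles (A, B, D) and (A, D, C) share vertices 0 and 2, so there is no seam along the diagonal.
        public static readonly short[] Indices = { 0, 1, 2, 0, 2, 3 };
    }

Upload these into a vertex buffer and an index buffer and draw the quad with a single DrawIndexed(6, 0, 0) call.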

To begin, you can check out these tutorials for SlimDX, an excellent set of .NET bindings to DirectX.

Ani

It is not really possible to achieve this with Direct2D. It could be possible with Direct2D 1.1 (from Win8 Metro) with a custom vertex shader, but in the end, as ananthonline suggests, it will be much easier to do with Direct3D 11.

Also, you can use triangle-strip primitives, which are easier to set up (you don't need to create an index buffer). For the coordinates, you can send them directly to a vertex shader without any transform (the vertex shader will copy the input SV_POSITION straight to the pixel shader). You just have to map your coordinates into x ∈ [-1, 1] and y ∈ [-1, 1]. I suggest you start with the SharpDX MiniCubeTexture sample and change the matrix to perform an orthographic projection (instead of the sample's perspective).
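
A small sketch of that mapping in C# with SharpDX (frameWidth and frameHeight are the render-target size in pixels; the helper names are illustrative assumptions):

    using SharpDX;

    static class ClipSpaceHelper
    {
        // Maps a destination corner given in pixels to the x, y in [-1, 1] range that a
        // pass-through vertex shader expects (y is flipped because clip space points up).
        public static Vector2 ToClipSpace(Vector2 pixel, float frameWidth, float frameHeight)
        {
            return new Vector2(
                pixel.X / frameWidth * 2f - 1f,
                1f - pixel.Y / frameHeight * 2f);
        }

        // Alternative: keep pixel coordinates in the vertex buffer and replace the sample's
        // perspective matrix with an orthographic projection over the whole frame.
        public static Matrix PixelOrtho(float frameWidth, float frameHeight)
        {
            return Matrix.OrthoOffCenterLH(0, frameWidth, frameHeight, 0, 0f, 1f);
        }
    }

Either way, the four destination corners go straight into the vertex buffer; there is no per-photo matrix to derive.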

xoofx
  • Thanks for the help. I've been using Direct2D against a WICRenderTarget to composite photos onto a background image and save as PNG. With this technique, how do I get everything composited and saved? Is there a way to convert the transformed photo into a Direct2D bitmap? – roufamatic Jul 06 '12 at 16:56
  • 1
    You will have to create a Direct2D.Bitmap using a DXGI.Surface (A surface can be queried from Direct3D texture with texture.QueryInterface();. You just need to call once this method when you create the Direct3D Texture) and draw this Direct2D bitmap on your WICRenderTarget. But you could also readback the texture from the GPU directly with Direct3D, and save it with WIC, without having to use Direct2D. – xoofx Jul 06 '12 at 22:24
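
For reference, a minimal sketch of that second option (reading the rendered texture back from the GPU so WIC can encode it), assuming renderTexture is the Direct3D 11 texture the quad was rendered into; FrameReadback and ReadBack are illustrative names:

    using SharpDX.Direct3D11;

    static class FrameReadback
    {
        // Copies the GPU render texture into a CPU-readable staging texture and maps it.
        public static void ReadBack(Device device, DeviceContext context, Texture2D renderTexture)
        {
            // Describe a staging copy of the render texture that the CPU can read.
            var desc = renderTexture.Description;
            desc.Usage = ResourceUsage.Staging;
            desc.BindFlags = BindFlags.None;
            desc.CpuAccessFlags = CpuAccessFlags.Read;
            desc.OptionFlags = ResourceOptionFlags.None;

            using (var staging = new Texture2D(device, desc))
            {
                context.CopyResource(renderTexture, staging);   // GPU -> staging copy
                var box = context.MapSubresource(staging, 0, MapMode.Read, MapFlags.None);
                try
                {
                    // box.DataPointer and box.RowPitch now expose the frame's pixels; copy them
                    // row by row into a WIC bitmap (or any CPU-side buffer) and encode it as PNG.
                }
                finally
                {
                    context.UnmapSubresource(staging, 0);
                }
            }
        }
    }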