
I have looked on Google, but the only thing I could find was a tutorial on how to create one using Photoshop. That's of no interest to me! I need the logic behind it. (And I don't need the logic of how to 'use' a bump map, I want to know how to 'make' one!)

I am writing my own HLSL shader and have come as far as realizing that there is some kind of gradient between two pixels which gives the normal, so that, together with the position of the light, the surface can be lit accordingly.

I want to do this in real time, so that when the texture changes, the bump map does too.

Thanks

– Y S
  • I am not sure if it is possible. What you want is actually to create a 3D model (normal map) of a 2D texture. – Ondra May 18 '12 at 12:41

4 Answers

I realize that I'm way, WAY late to this party, but I too ran into this situation recently while attempting to write my own normal map generator for 3ds Max. There are bulky and unnecessary libraries out there for C#, but nothing in the way of a simple, math-based solution.

So I ran with the math behind the conversion: the Sobel operator. That's what you're looking to employ in your shader script.
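
For reference, these are the two standard 3×3 Sobel kernels (a sketch for orientation only; note that the class below actually uses a simpler central difference, one neighbour on each side, rather than the full kernels):

static class Sobel
{
    // Gx responds to horizontal slope, Gy to vertical slope; convolving the
    // heightmap's brightness with these gives the gradient at each pixel.
    public static readonly float[,] Gx =
    {
        { -1, 0, 1 },
        { -2, 0, 2 },
        { -1, 0, 1 },
    };
    public static readonly float[,] Gy =
    {
        { -1, -2, -1 },
        {  0,  0,  0 },
        {  1,  2,  1 },
    };
}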

The following class is about the simplest implementation I've seen for C#. It does exactly what it's supposed to and achieves exactly what is desired: a normal map based on a heightmap, a texture, or even a procedurally generated image that you provide.

As you can see in the code, I've implemented if/else checks so that pixels on the image's edges re-sample themselves rather than reading outside the bitmap's bounds, which would otherwise throw exceptions.

What it does: it samples the brightness of each pixel's four neighbours (left/right and up/down), converts the horizontal and vertical brightness differences into slopes, and encodes those slopes into the red and green channels of the output pixel for the SetPixel operation. For example, a left-neighbour brightness of 0.8 against a right-neighbour brightness of 0.2 yields a red value of ((0.8 - 0.2) + 1) * 0.5 * 255 = 204, above the 'flat' midpoint of about 128.

As an aside: you could add an input control that scales the slope values before they are encoded, to control how strongly the output normal map affects your geometry / lighting.
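
A minimal sketch of that idea (the Encode helper is hypothetical, not part of the class below; Math.Clamp assumes .NET Core 2.0 or later):

using System;

static class NormalEncoding
{
    // Scale the raw brightness difference by an intensity factor before
    // remapping it from [-1, 1] to [0, 255]; values > 1 exaggerate the
    // bumps, values < 1 flatten them. Clamping keeps the result in range.
    public static int Encode(float nearSample, float farSample, float intensity) =>
        (int)Math.Clamp(((nearSample - farSample) * intensity + 1f) * 0.5f * 255f, 0f, 255f);
}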

And that's it. No more having to deal with that deprecated, tiny-windowed Photoshop plugin. Sky's the limit.

(Screenshot of the C# WinForms implementation, showing source and output, omitted.)

C# class to achieve a Sobel-based normal map from a source image:

using System.Drawing;
using System.Windows.Forms;

namespace heightmap.Class
{
    class Normal
    {
        public void calculate(Bitmap image, PictureBox pic_normal)
        {
            int w = image.Width - 1;
            int h = image.Height - 1;
            float sample_l;
            float sample_r;
            float sample_u;
            float sample_d;
            float x_vector;
            float y_vector;
            Bitmap normal = new Bitmap(image.Width, image.Height);
            for (int y = 0; y < h + 1; y++)
            {
                for (int x = 0; x < w + 1; x++)
                {
                    // sample the brightness of the four neighbours,
                    // re-sampling the pixel itself at the image edges
                    if (x > 0) { sample_l = image.GetPixel(x - 1, y).GetBrightness(); }
                    else { sample_l = image.GetPixel(x, y).GetBrightness(); }
                    if (x < w) { sample_r = image.GetPixel(x + 1, y).GetBrightness(); }
                    else { sample_r = image.GetPixel(x, y).GetBrightness(); }
                    if (y > 0) { sample_u = image.GetPixel(x, y - 1).GetBrightness(); }
                    else { sample_u = image.GetPixel(x, y).GetBrightness(); }
                    if (y < h) { sample_d = image.GetPixel(x, y + 1).GetBrightness(); }
                    else { sample_d = image.GetPixel(x, y).GetBrightness(); }
                    // remap the horizontal and vertical slopes from [-1, 1] to [0, 255]
                    x_vector = (((sample_l - sample_r) + 1) * .5f) * 255;
                    y_vector = (((sample_u - sample_d) + 1) * .5f) * 255;
                    // encode the slopes in the red and green channels;
                    // blue is constant ("straight up")
                    Color col = Color.FromArgb(255, (int)x_vector, (int)y_vector, 255);
                    normal.SetPixel(x, y, col);
                }
            }
            pic_normal.Image = normal; // set as PictureBox image
        }
    }
}
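
For completeness, a minimal usage sketch (hypothetical; assumes a WinForms form with a PictureBox named pic_normal, called for example from the form's Load handler):

var source = (Bitmap)Image.FromFile(@"yourpath/yourimage.jpg");
new Normal().calculate(source, pic_normal);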
– BJS3D

A sampler to read your height or depth map:

/// same data as HeightMap, but in a format that the pixel shader can read
/// the pixel shader dynamically generates the surface normals from this.
extern Texture2D HeightMap;
sampler2D HeightSampler = sampler_state
{
    Texture=(HeightMap);
    AddressU=CLAMP;
    AddressV=CLAMP;
    Filter=LINEAR;
};

Note that my input map is a 512x512 single-component grayscale texture. Calculating the normals from that is pretty simple:

#define TEXEL_SIZE (1/512.0)
/// half-texel offset so samples land on texel centres (DX9-style texel alignment)
#define HALF2 ((float2)(0.5*TEXEL_SIZE))
#define GET_HEIGHT(heightSampler,texCoord) (tex2D(heightSampler,(texCoord)+HALF2).r)

/// calculate a normal for the given location from the height map.
/// basically, this calculates the X- and Z- surface derivatives and returns their
/// cross product. Note that this assumes the heightmap is a 512 pixel square for no
/// particular reason other than that my test map is 512x512.
float3 GetNormal(sampler2D heightSampler, float2 texCoord)
{
    /// normalized size of one texel. this would be 1/1024.0 if using a 1024x1024 bitmap.
    float texelSize = TEXEL_SIZE;

    float n = GET_HEIGHT(heightSampler, texCoord + float2(0, -texelSize));
    float s = GET_HEIGHT(heightSampler, texCoord + float2(0, texelSize));
    float e = GET_HEIGHT(heightSampler, texCoord + float2(-texelSize, 0));
    float w = GET_HEIGHT(heightSampler, texCoord + float2(texelSize, 0));

    /// tangent along X (height rises from e to w) and tangent along Z (from n to s)
    float3 ew = normalize(float3(2 * texelSize, w - e, 0));
    float3 ns = normalize(float3(0, s - n, 2 * texelSize));
    /// cross the Z tangent with the X tangent so the resulting normal points up (+Y)
    float3 result = normalize(cross(ns, ew));

    return result;
}

and a pixel shader to call it:

#define LIGHT_POSITION (float3(0,2,0))

float4 SolidPS(float3 worldPosition : NORMAL0, float2 texCoord : TEXCOORD0) : COLOR0
{
    /// calculate a normal from the height map
    float3 normal = GetNormal(HeightSampler, texCoord);
    /// return it as a color. (Since the normal components can range from -1 to +1,
    /// this will probably return a lot of "black" pixels if rendered as-is to screen;
    /// remap with normal * 0.5 + 0.5 to visualize it.)
    return float4(normal, 1);
}

LIGHT_POSITION could (and probably should) be input from your host code, though I've cheated and used a constant here.

Note that this method requires four texture lookups per normal, not counting the one to get the color. That may not be an issue for you, depending on whatever else you're doing. If it becomes too much of a performance hit, you can instead run it only when the texture changes: render the normals to a render target and capture the result as a normal map.

An alternative would be to draw a screen-aligned quad textured with the heightmap to a render target and use the ddx/ddy HLSL intrinsics to generate the normals without having to resample the source texture. Obviously you'd do this in a pre-pass step, read the resulting normal map back, and then use it as an input to your later stages.

In any case, this has proved fast enough for me.

– 3Dave

The short answer is: there's no way to do this reliably with good results, because there's no way to tell the difference between a diffuse texture whose changes in colour/brightness are due to bumpiness, and a diffuse texture whose changes in colour/brightness exist because the surface is actually a different colour/brightness at that point.

Longer answer:

If you were to assume that the surface were actually a constant colour, then any changes in colour or brightness must be due to shading effects due to bumpiness. Calculate how much brighter/darker each pixel is from the actual surface colour; brighter values indicate parts of the surface that face 'towards' the light source, and darker values indicate parts of the surface that face 'away' from the light source. If you also specify the direction the light is coming from, you can calculate a surface normal at each point on the texture such that it would result in the shading value you calculated.
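
To make that concrete, here is a small sketch of the Lambertian shading model this reasoning assumes (the helper and names are illustrative, not from the answer):

using System;
using System.Numerics;

static class ShadingModel
{
    // Lambertian shading: observed brightness I = albedo * max(0, dot(n, l)),
    // with unit normal n and unit light direction l. Given I, a known l and an
    // assumed constant albedo, each pixel constrains n to a cone around l, so
    // neighbouring pixels (or smoothness assumptions) are needed to pin it down.
    public static float Lambert(Vector3 n, Vector3 l, float albedo) =>
        albedo * MathF.Max(0f, Vector3.Dot(n, l));
}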

That's the basic theory. Of course, in reality, the surface is almost never a constant colour, which is why this approach of using purely the diffuse texture as input tends not to work very well. I'm not sure how tools like CrazyBump do it, but I think they do things like averaging the colour over local parts of the image rather than the whole texture.

Ordinarily, normal maps are created from actual 3D models of the surface that are 'projected' onto lower-resolution geometry. Normal maps are just a technique for faking that high-resolution geometry, after all.

– Superpig

Quick answer: it's not possible. A simple generic (diffuse) texture simply does not contain this information. I haven't looked at exactly how Photoshop does it (I've seen an artist use it once), but I think it simply does something like depth = r + g + b + a, which basically treats overall brightness as a heightmap/gradient, and then converts that heightmap to a normal map using a simple edge-detection effect to obtain a tangent-space normal map.
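
A tiny sketch of that guess (hypothetical helper; the equal channel weighting is this answer's speculation, not Photoshop's documented behaviour):

using System.Drawing;

static class HeightGuess
{
    // "depth = r + g + b + a", normalized to [0, 1]; running an edge-detection
    // filter (e.g. a Sobel operator) over this heightmap yields a tangent-space
    // normal map.
    public static float Height(Color c) => (c.R + c.G + c.B + c.A) / (4f * 255f);
}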

Just keep in mind that in most cases a normal map is used to simulate a high-res 3D geometry mesh, as it fills in the detail that vertex normals leave behind. If your scene relies heavily on lighting, this approach is a no-go, but if it's a simple directional light, it 'might' work. Of course, this is just my experience; you might just as well be working on a completely different type of project.

– Marking