
I'm a beginner pixel shader writer and I'm running into some trouble. I want to take a 256x256, 16-bit input (DXGI_FORMAT_R16_UINT) image, and pass it through a 256x256 look-up texture (DXGI_FORMAT_R8_UNORM) to convert it to a 256x256 8-bit output.

Unfortunately, I'm running into a lot of trouble: the output always clamps to black or white.

I'm also not sure which DXGI formats I should be using, or which HLSL data type corresponds to each format.

// Global Variables
Texture2D<uint> imgTexture : register( t0 );
Texture2D lutTexture : register( t1 );
SamplerState SampleType : register( s0 );

// Structures 
struct PS_INPUT
{
  float4 Pos : SV_POSITION;
  float2 Tex : TEXCOORD0;
};

// Pixel Shader
float4 PS( PS_INPUT input) : SV_Target
{
  uint pixelValue = imgTexture[input.Tex];
  uint2 index = { pixelValue / 256, pixelValue % 256 };
  // uint row = pixelValue / 256;
  // uint col = pixelValue % 256;

  float4 output = lutTexture[index];
  output.g = output.r;
  output.b = output.r;
  output.a = 1.0f;

  return output;    
}

Should I be normalizing the pixelValue before trying to turn it into a 2D index?

Should I be normalizing the index before using it?

Should I be sampling instead?

Am I even on the right path here?

I would appreciate ANY help, thanks!

SJoshi
  • Just curious, why would you ever want to do that? Obviously you want to do this in realtime (i.e. offline conversion is not an option), but why? – Valmond Oct 13 '11 at 20:24
  • Well, I'm displaying medical image data, which comes in as something between 8-bit to 12-bit (greyscale) data. This data also comes with LUTs which specify the luminance transforms that need to be applied to the images before putting them to the screen. So, ACTUALLY, I would want to only use up to 12-bits of the 16-bit input and then apply the LUT to that... Confused? I know I am... – SJoshi Oct 14 '11 at 03:24
  • One thing that might be the problem with the black/white only; Usually the values are from 0 to 1, not 0 to 255 in a texture. – Valmond Oct 16 '11 at 11:18
  • Yeah, might be running into an issue like that at the moment. My "0" and "255" values might be undergoing rounding/truncation errors... – SJoshi Oct 17 '11 at 20:17

2 Answers


You are using:

uint pixelValue = imgTexture[input.Tex];

The bracket operator expects integer texel coordinates, but you are providing input.Tex (which I guess is normalized from 0 to 1), so it truncates to zero and you load the same pixel every time.

You can use:

uint pixelValue = imgTexture[input.Pos.xy];

or:

uint pixelValue = imgTexture.Load(int3(input.Pos.xy, 0));

instead.

And yes, your formats are correct for your use case (no need to normalize or denormalize).
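For reference, here is the asker's shader with that coordinate fix applied (a sketch only; it keeps the question's bindings and the row/column split from the commented-out lines, which assumes the LUT's first axis corresponds to pixelValue / 256):

```hlsl
Texture2D<uint> imgTexture : register( t0 );
Texture2D lutTexture : register( t1 );

struct PS_INPUT
{
  float4 Pos : SV_POSITION;
  float2 Tex : TEXCOORD0;
};

float4 PS( PS_INPUT input ) : SV_Target
{
  // SV_POSITION carries the pixel's screen coordinates (0..255 for a
  // 256x256 target), so it can address the input texture directly.
  uint pixelValue = imgTexture.Load( int3( input.Pos.xy, 0 ) );

  // Same split as the question's commented-out lines; which component is
  // x and which is y depends on how the LUT was filled.
  uint2 index = uint2( pixelValue / 256, pixelValue % 256 );

  float4 output = lutTexture[ index ];
  output.g = output.r;
  output.b = output.r;
  output.a = 1.0f;
  return output;
}
```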

mrvux

You are definitely on the right track. But, like Valmond mentioned, the value of pixelValue will be in the range [0..1].

How exactly is the LUT set up? I'm guessing that the first axis is the value to be transformed, but what is the second? Once I know that, I can give a solution with code.

  • @Valmond I have been working on a few other things in the meantime, but I sort of hack-jobbed this together by multiplying out by 65535 and then doing my division and modulus math. However, I would like to know the "correct" way. Mike, I'm not sure I entirely understand your question, though. If the LUT were 1D, it would run from indices 0-65535, and each index would hold the new grayscale mapping (with values between 0-255). However, since the texture won't support that, I wrap it up into 256x256 instead, and then do my funky math to map a 16-bit 'intensity' into a 256x256 lookup. – SJoshi Dec 04 '11 at 00:20
  • Since imgTexture is a Texture2D<uint> with DXGI_FORMAT_R16_UINT, the result is not normalized; it's the raw value (0 -> MAX_USHORT) in that case. – mrvux Aug 24 '21 at 12:59
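Given that last comment (the R16_UINT read comes back as a raw 0..65535 value, not 0..1), the 256x256 index can be taken straight from the value's two bytes, with no multiply by 65535. A sketch; the byte-to-axis assignment below assumes the LUT stores 256 consecutive entries per row, so swap the components if yours is laid out the other way:

```hlsl
// Raw 16-bit value from a Texture2D<uint> / R16_UINT read -- already 0..65535.
uint pixelValue = imgTexture.Load( int3( input.Pos.xy, 0 ) );

// Low byte selects the column (x), high byte the row (y).
uint2 index = uint2( pixelValue & 0xFF, pixelValue >> 8 );

// The R8_UNORM LUT texel reads back as a normalized 0..1 float.
float gray = lutTexture[ index ].r;
```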