
I use vkGetPhysicalDeviceSurfaceFormatsKHR to get supported image formats for the swapchain, and (on Linux+Nvidia, using SDL) I get VK_FORMAT_B8G8R8A8_UNORM as the first option and I go ahead and create the swapchain with that format:

VkSwapchainCreateInfoKHR swapchain_info = {
    ...
    .imageFormat = format, /* taken from vkGetPhysicalDeviceSurfaceFormatsKHR */
    ...
};

So far, it all makes sense. The image format used to draw on the screen is the usual 8-bits-per-channel BGRA.

As part of my learning process, I have so far arrived at setting up a lot of stuff but not yet the graphics pipeline¹. So I am trying the only command I can use that doesn't need a pipeline: vkCmdClearColorImage².

The VkClearColorValue used to define the clear color can take the color as float, uint32_t or int32_t, depending on the format of the image. Based on the image format given to the swapchain, I would have expected to give it uint32_t values, but that doesn't seem to be correct: when I did, the screen color didn't change. When I give it floats instead, it works.

My question is, why does the clear color need to be specified in floats when the image format is VK_FORMAT_B8G8R8A8_UNORM?


¹ Actually I have, but I thought I would try out the simpler case of no pipeline first. I'm adopting Vulkan incrementally (given its verbosity), particularly because I'm also writing tutorials on it as I learn.

² Actually, it technically doesn't need a render pass, but I figured: I'm not using any pipeline stuff here, so let's try it without a pipeline. It worked.


My rendering loop is essentially the following:

  • acquire image from swapchain
  • create a command buffer with the following:
    • transition from VK_IMAGE_LAYOUT_UNDEFINED to VK_IMAGE_LAYOUT_GENERAL (because I'm clearing the image outside a render pass)
    • clear the image
    • transition from VK_IMAGE_LAYOUT_GENERAL to VK_IMAGE_LAYOUT_PRESENT_SRC_KHR
  • submit command buffer to queue (taking care of synchronization with swapchain with semaphores)
  • submit for presentation
Shahbaz

1 Answer


My question is, why does the clear color need to be specified in floats when the image format is VK_FORMAT_B8G8R8A8_UNORM?

Because the normalized, scaled, or sRGB image formats are really just various forms of floating-point compression. A normalized integer is a way of storing floating-point values on the range [0, 1] or [-1, 1], but using a much smaller amount of data than even a 16-bit float. A scaled integer is a way of storing floating point values on the range [0, MAX] or [-MIN, MAX]. And sRGB is just a compressed way of storing linear color values on the range [0, 1], but in a gamma-corrected color space that puts precision in different places than the linear color values would suggest.

You see the same things with inputs to the vertex shader. A vec4 input type can be fed by normalized formats just as well as by floating-point formats.

Nicol Bolas
  • Say a color component is an 8-bit unsigned integer, so it has a range of [0, 255]. As you said, that's a representative of the range [0, 1]. That's perfectly fine. However, when you provide a value in that format, you would specify it for example as (127, 63, 127) rather than (0.5, 0.25, 0.5), although they really represent the same thing. I thought that B8G8R8A8 means that the colors are stored as 4 8-bit integers, but you are saying that's not the case? What's the point of calling it B8G8R8A8 then? – Shahbaz May 06 '16 at 04:07
  • Actually I just noticed that there are also formats like `B8G8R8A8_UINT`. I guess the `UNORM` is then telling you that the format is in `float`. – Shahbaz May 06 '16 at 04:08
  • I found the [explanation on normalized integers](https://www.opengl.org/wiki/Normalized_Integer). Feel free to ignore the previous comments. – Shahbaz May 06 '16 at 04:10
  • @Shahbaz: FYI: you can delete your comments with the X button that appears beside them when you hover your mouse over them. – Nicol Bolas May 06 '16 at 04:14
  • I know. I left them there in case you had something interesting to say about them. Either way it's strange to me that there is actually a distinction between UNORM and UINT, given that they are the exact same bit pattern. – Shahbaz May 06 '16 at 04:20
unorm a = 1.0 (255); unorm b = a*a (255); uint c = 255; uint d = c * c / 256 (254). Basically that is trying to show that the multiplication implementation has to be a bit different for unorm than for simple fixed-point integer multiplication. – Pauli Nieminen Oct 22 '16 at 12:00
@PauliNieminen, two years later and a graphics programmer, so I definitely know the difference now. However, your example is really not useful, as in `uint d = c * c / ...` you should really divide by `255`; that is, the result of the two operations is the same (although the `unorm` version has the division implicit, which is a valid point) – Shahbaz Mar 06 '18 at 04:09