
I am using the following preprocessor hack:

// shader_descriptor.hpp

#ifdef __cplusplus
    #include "./shader_source.hpp"
    #include "engine/utils/vec_t.hpp"
    #include <tuple>
    #include <span>
    #include <cstddef> // std::nullptr_t
    #include <cstdint> // uint32_t

    #define BEGIN_SHADER_DESCRIPTOR(namespace_name, name)                                     \
        namespace namespace_name                                                                   \
        {                                                                                          \
            struct name                                                                            \
            {                                                                                      \
                using port_types = std::tuple <

    #define SHADER_INPUT(type, name) ::type##f_t,

    #define END_SHADER_DESCRIPTOR()                                                           \
        std::nullptr_t>;                                                                           \
        static ::shaders::vertex_shader_source<std::span<uint32_t const>> vertex_shader();   \
        static ::shaders::fragment_shader_source<std::span<uint32_t const>>                  \
        fragment_shader();                                                                         \
                                                                                                   \
        static constexpr auto num_inputs = std::tuple_size_v<port_types> - 1;                      \
        }                                                                                          \
        ;                                                                                          \
        }

#else
const int port_counter_base = __COUNTER__;
    #define BEGIN_SHADER_DESCRIPTOR(namespace_name, name)
    #define SHADER_INPUT(type, name)                                                          \
        layout(location = __COUNTER__ - port_counter_base - 1) in type name;
    #define END_SHADER_DESCRIPTOR()
#endif

so I can extract the correct inputs from the source code. The inputs are defined like this:

// testprog.hpp

#include "./shader_descriptor.hpp"

BEGIN_SHADER_DESCRIPTOR(idis::shaders, testprog)
SHADER_INPUT(vec2, loc)
SHADER_INPUT(vec4, vert_color)
END_SHADER_DESCRIPTOR()

And then a shader module (here the vertex shader):

#include "./testprog_def.hpp"

layout(location = 0) out vec4 frag_color;

void main()
{
    gl_Position = vec4(loc, 0.0, 1.0);
    frag_color = vert_color;
}
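After running `cpp -P` on the shader module (with `__cplusplus` undefined, so the GLSL branch of the macros is taken), each `SHADER_INPUT` becomes a plain input declaration with a location computed from `__COUNTER__`, and the source handed to glslangValidator looks roughly like:

```glsl
#version 450
const int port_counter_base = 0;

layout(location = 0) in vec2 loc;
layout(location = 1) in vec4 vert_color;

layout(location = 0) out vec4 frag_color;

void main()
{
    gl_Position = vec4(loc, 0.0, 1.0);
    frag_color = vert_color;
}
```

(The exact value of `port_counter_base` depends on how often `__COUNTER__` was already expanded; the computed locations are what matter.)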

The shader is then compiled in two steps:

import os
import subprocess
import tempfile

targets = args['targets']
source_file = args['source_file']
with tempfile.TemporaryDirectory(prefix='maike_' + args['build_info']['build_id']) as tmpdir:
    new_source_file = os.path.join(tmpdir, os.path.basename(source_file))
    with open(new_source_file, 'wb') as tmpfile:
        tmpfile.write('#version 450\n'.encode())
        # Run the C preprocessor without line markers; __cplusplus is not
        # defined here, so the GLSL branch of shader_descriptor.hpp is taken
        cpp = subprocess.run(['cpp', '-P', source_file], stdout=subprocess.PIPE, check=True)
        tmpfile.write(cpp.stdout)

    result = subprocess.run(['glslangValidator', '-V', '-Os', '-o', targets[0], new_source_file])

While this approach works, it has some limitations:

  1. It is not possible to use a single vertex buffer with all inputs (there is no way to generate the correct struct to use from the C++ side)

  2. It will probably not work well with uniform buffers, for reasons similar to (1).

Is there any other option to guarantee correct bindings at compile time (that is: a binding should have the correct type, and all inputs must be connected)? Currently, I use the following function to bind buffers:

        template<class ShaderDescriptor, class... Buffers>
        render_pass_section& bind(VkCommandBuffer cmdbuff,
                                  std::reference_wrapper<pipeline<ShaderDescriptor> const> pipeline,
                                  std::reference_wrapper<Buffers const>... buffers)
        {
            static_assert(((Buffers::buffer_usage & VK_BUFFER_USAGE_VERTEX_BUFFER_BIT) && ...));
            static_assert(sizeof...(Buffers) == ShaderDescriptor::num_inputs);
            static_assert(std::is_same_v<std::tuple<typename Buffers::value_type..., std::nullptr_t>,
                                         typename ShaderDescriptor::port_types>);
            std::array<VkBuffer, sizeof...(Buffers)> handles{buffers.get().handle()...};
            std::array<VkDeviceSize, sizeof...(Buffers)> offsets{};
            vkCmdBindVertexBuffers(
                cmdbuff, 0, std::size(handles), std::data(handles), std::data(offsets));
            vkCmdBindPipeline(cmdbuff, VK_PIPELINE_BIND_POINT_GRAPHICS, pipeline.get().handle());
            return *this;
        }

ShaderDescriptor is being generated by the preprocessor.

user877329
  • Do you use the preprocessor for performance? – Sebastian Feb 21 '22 at 05:36
  • @Sebastian Not as a primary reason. If I change the shader associated with the pipeline and forget to update the bindings, I want to detect the error at compile time. That is also why I assert that it is indeed a vertex buffer. – user877329 Feb 21 '22 at 16:21
  • Perhaps you could combine the preprocessor with `constexpr` functions on the host side, which do checks and prepare the binding configuration? – Sebastian Feb 21 '22 at 16:31
  • @Sebastian. I have a solution for the trivial case with no uniforms and 1:1 between bindings and buffers. The problem is that it doesn't scale. I cannot really generate structs, and it is not possible to know in advance the number of inputs, thus the tuple with nullptr_t at the end. – user877329 Feb 21 '22 at 16:57
  • "no way to generate the correct struct to use from the c++ side" - what is the thing preventing this, aren't you already generating a struct? – Andrea Feb 21 '22 at 21:36
