
My MTKView is in BGRA and I set up my pipelineDescriptor in BGRA like below:

pipelineDescriptor.colorAttachments.objectAtIndexedSubscript(0).setPixelFormat(MTLPixelFormatBGRA8Unorm);

Now the problem is that if I send data encoded in RGBA (I mean the color is 16 bytes: R (4 bytes) + G (4 bytes) + B (4 bytes) + A (4 bytes), i.e. four 32-bit floats) to the shader below, then it works correctly! (A rough CPU-side sketch of this layout follows the shader code below.)

RenderCommandEncoder.setVertexBuffer(LvertexBuffer{buffer}, 0{offset}, 0{atIndex})

and

#include <metal_stdlib>
using namespace metal;

#include <simd/simd.h>

struct VertexIn {
    vector_float4 color;
    vector_float2 pos;
};

struct VertexOut {
    float4 color;
    float4 pos [[position]];
};

vertex VertexOut vertexShader(
                              const device VertexIn *vertexArray [[buffer(0)]],
                              unsigned int vid [[vertex_id]],
                              constant vector_uint2 *viewportSizePointer [[buffer(1)]]
                             )
{
    // Get the data for the current vertex.
    VertexIn in = vertexArray[vid];
    VertexOut out;

    ...
    out.color = in.color;
    ....
    return out;
}

fragment float4 fragmentShader(
                                VertexOut interpolated [[stage_in]],
                                texture2d<half> colorTexture [[ texture(0) ]]
                              )
{
    return interpolated.color;
}
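
For reference, the CPU-side data that matches the shader's VertexIn looks roughly like this (a minimal sketch in Swift; the field names just mirror the shader struct and the position values are only illustrative):

import Metal
import simd

// Mirrors the shader-side VertexIn: a float4 color followed by a float2 position.
// SIMD4<Float> is 16-byte aligned, so the stride is 32 bytes per vertex, which
// matches the layout the Metal compiler gives the struct on the shader side.
struct VertexIn {
    var color: SIMD4<Float>   // R, G, B, A as four 32-bit floats, in that order
    var pos: SIMD2<Float>
}

let device = MTLCreateSystemDefaultDevice()!

// Three vertices of a red triangle. The color components are written in RGBA
// order even though the render target is BGRA; the positions are placeholders.
let vertices: [VertexIn] = [
    VertexIn(color: SIMD4(1, 0, 0, 1), pos: SIMD2(-250, -250)),
    VertexIn(color: SIMD4(1, 0, 0, 1), pos: SIMD2( 250, -250)),
    VertexIn(color: SIMD4(1, 0, 0, 1), pos: SIMD2(   0,  250)),
]

let vertexBuffer = device.makeBuffer(
    bytes: vertices,
    length: MemoryLayout<VertexIn>.stride * vertices.count,
    options: .storageModeShared
)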

How is this possible? Is it about little vs. big endian?

zeus
  • If you define your shaders to work in terms of BGRA using bytes, then you submit 32-bit values defined as XRGB where bits 0-7 define the B values. These 8-bit values are then converted to floating point, but they do not magically grow in precision, so you can use half precision and avoid float. I would suggest that you start with an already working solution and build on top of that. – MoDJ Oct 17 '19 at 18:44

1 Answer


The color(s) you return from a fragment function are assumed to be in RGBA order, regardless of the pixel format of your render target. They get swizzled and/or converted as necessary to match the format of the destination. The same thing happens when sampling/writing other textures: colors always arrive in RGBA order.
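
To make this concrete, here is a minimal sketch in Swift (assuming a standard MTKView setup and a library containing the vertexShader/fragmentShader above; the helper name is illustrative). The only place BGRA shows up is the pixel format of the view and of the pipeline's color attachment; the fragment function keeps returning RGBA floats, and Metal converts/swizzles them when writing to the attachment.

import MetalKit

func makePipeline(device: MTLDevice, view: MTKView, library: MTLLibrary) throws -> MTLRenderPipelineState {
    // The view's drawable is BGRA; the pipeline's color attachment must match it.
    view.colorPixelFormat = .bgra8Unorm

    let descriptor = MTLRenderPipelineDescriptor()
    descriptor.vertexFunction   = library.makeFunction(name: "vertexShader")
    descriptor.fragmentFunction = library.makeFunction(name: "fragmentShader")
    // Declaring the destination format here is what tells Metal how to swizzle
    // and convert the RGBA float4 returned by fragmentShader into BGRA8 bytes.
    descriptor.colorAttachments[0].pixelFormat = view.colorPixelFormat

    return try device.makeRenderPipelineState(descriptor: descriptor)
}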

warrenm
  • So you mean that in the vertexShader & fragmentShader colors are always in RGBA order? In that case, what is the purpose of defining a pixel format via pipelineDescriptor.colorAttachments? – zeus Oct 17 '19 at 20:48
  • The render pipeline state has to know the destination pixel format so it knows _how_ to swizzle and convert the values returned by the fragment function. – warrenm Oct 17 '19 at 21:35
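
And to see concretely what .bgra8Unorm means in memory (and that it is about the pixel format's fixed byte order, not CPU endianness), here is a small self-contained sketch: it clears an offscreen BGRA texture to pure red, specified in RGBA terms, and reads the bytes back. The storage-mode choice is an assumption (.shared keeps the readback simple on iOS and Apple silicon; an Intel Mac would need a managed texture plus a blit synchronize).

import Metal

let device = MTLCreateSystemDefaultDevice()!
let queue = device.makeCommandQueue()!

let texDesc = MTLTextureDescriptor.texture2DDescriptor(
    pixelFormat: .bgra8Unorm, width: 1, height: 1, mipmapped: false)
texDesc.usage = [.renderTarget]
texDesc.storageMode = .shared          // assumption: keeps the getBytes readback simple
let target = device.makeTexture(descriptor: texDesc)!

// The clear color is given in RGBA terms (red = 1), just like a fragment return value.
let passDesc = MTLRenderPassDescriptor()
passDesc.colorAttachments[0].texture = target
passDesc.colorAttachments[0].loadAction = .clear
passDesc.colorAttachments[0].storeAction = .store
passDesc.colorAttachments[0].clearColor = MTLClearColor(red: 1, green: 0, blue: 0, alpha: 1)

let cmd = queue.makeCommandBuffer()!
cmd.makeRenderCommandEncoder(descriptor: passDesc)!.endEncoding()
cmd.commit()
cmd.waitUntilCompleted()

// The stored bytes come back in B, G, R, A order: [0, 0, 255, 255].
// That order is part of the pixel format itself, independent of host endianness.
var pixel = [UInt8](repeating: 0, count: 4)
target.getBytes(&pixel, bytesPerRow: 4,
                from: MTLRegionMake2D(0, 0, 1, 1), mipmapLevel: 0)
print(pixel)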