After a thorough read of the Vulkan spec's language on synchronization, I'm trying to confirm that a specific scenario does not introduce a data race. Consider the snippet below, where work in a second queue submit reads the results of work from the first submit, and the host waits on the first submit's fence in between:
VkFence first_work_fence = ... (unsignaled);
VkSubmitInfo first_work_submit_info = ... (no semaphore wait / signal);
vkQueueSubmit(chosen_queue, 1, &first_work_submit_info, first_work_fence); // (1)
...
vkWaitForFences(device, 1, &first_work_fence, VK_TRUE, UINT64_MAX); // (2)
...
VkSubmitInfo reads_first_work_submit_info = ... (no semaphore wait / signal);
vkQueueSubmit(chosen_queue, 1, &reads_first_work_submit_info, ...); // (3)
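To make the scenario concrete, here is a rough fill-in of the elided setup (error handling omitted). The command buffer handles first_work_cmd and reads_first_work_cmd, and whatever work they record, are placeholders of my own and not part of the question; device, chosen_queue, and first_work_fence are the same objects as above.

// Hypothetical fill-in of the "..." pieces above; assumes pre-recorded
// command buffers first_work_cmd / reads_first_work_cmd.
VkFenceCreateInfo fence_info = {
    .sType = VK_STRUCTURE_TYPE_FENCE_CREATE_INFO,
    // No VK_FENCE_CREATE_SIGNALED_BIT, so the fence starts unsignaled.
};
VkFence first_work_fence;
vkCreateFence(device, &fence_info, NULL, &first_work_fence);

// First batch: one command buffer, no semaphore waits or signals.
VkSubmitInfo first_work_submit_info = {
    .sType = VK_STRUCTURE_TYPE_SUBMIT_INFO,
    .commandBufferCount = 1,
    .pCommandBuffers = &first_work_cmd,
};
vkQueueSubmit(chosen_queue, 1, &first_work_submit_info, first_work_fence); // (1)

// Host blocks until the fence signal operation has executed.
vkWaitForFences(device, 1, &first_work_fence, VK_TRUE, UINT64_MAX);        // (2)

// Second batch reads what the first batch wrote; again no semaphores,
// and no fence is passed because nothing waits on this submit here.
VkSubmitInfo reads_first_work_submit_info = {
    .sType = VK_STRUCTURE_TYPE_SUBMIT_INFO,
    .commandBufferCount = 1,
    .pCommandBuffers = &reads_first_work_cmd,
};
vkQueueSubmit(chosen_queue, 1, &reads_first_work_submit_info, VK_NULL_HANDLE); // (3)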
According to my reading of the spec, the following are true for the three steps above:
- Submitting with a fence (1) defines a fence signal operation, which in turn defines a memory dependency whose first synchronization scope includes every batch in that submission command and whose first access scope includes all memory accesses performed by the device (spec 7.3); as an operation that includes a memory dependency, it also performs an availability operation that makes the writes in that first access scope available to the device domain (spec appendix B)
- Waiting on the fence (2) guarantees that the fence signal operation has executed, and therefore, per the above, that the writes from the first submit have been made available to the device domain, though not necessarily visible
- "
vkQueueSubmit
performs... a visibility operation with source scope of the device domain and destination scope of all agents and references on the device." (spec appendix B)
Taken together, this appears to mean that the work submitted in (3) can safely read all writes performed by the work submitted in (1), without further synchronization or explicit memory visibility operations (e.g., barriers; see the sketch below). Is this correct?
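For reference, the kind of explicit visibility operation I have in mind is a global memory barrier recorded into the second submission's command buffer, relying on submission order for the execution dependency. The stage and access masks below are purely illustrative compute-to-compute choices, not taken from any actual workload:

// Hypothetical barrier at the start of reads_first_work_cmd; masks are
// illustrative only (compute writes in (1), compute reads in (3)).
VkMemoryBarrier make_writes_visible = {
    .sType         = VK_STRUCTURE_TYPE_MEMORY_BARRIER,
    .srcAccessMask = VK_ACCESS_SHADER_WRITE_BIT,
    .dstAccessMask = VK_ACCESS_SHADER_READ_BIT,
};
vkCmdPipelineBarrier(
    reads_first_work_cmd,
    VK_PIPELINE_STAGE_COMPUTE_SHADER_BIT,   // stages that wrote in (1)
    VK_PIPELINE_STAGE_COMPUTE_SHADER_BIT,   // stages that read in (3)
    0,
    1, &make_writes_visible,
    0, NULL,
    0, NULL);

If my reading above is right, a barrier like this would be redundant for the fence-wait scenario; it is shown only to pin down what "explicit memory visibility operations" refers to.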