The GPU is free to execute work however it wants, as long as it follows the constraints set by the Vulkan spec. The only real ordering constraints on Vulkan queues are synchronization primitives: as long as everything ends up in the right order according to the semaphores, the work between semaphores can happen in any order. This reordering can happen within a command buffer, within a queue, within a queue family, within a device, or across devices (a device being the logical context represented by a VkDevice, not a physical device).
Going by NVIDIA's explanation of the rendering process on their GPUs, each Graphics Processing Cluster contains a single rasterizer plus many cores and dispatch units to run shaders. Most of their GPUs have multiple GPCs, so each one can presumably be working on rendering a different triangle. In practice, things are wildly more complex than what I've described.
So can you render things in parallel? Sure, why not. Will you notice? Assuming you set up your synchronization primitives correctly, probably not.
Pragmatically speaking, this is something you would ask the support engineers at the various GPU manufacturers you work with; they would be able to go over how best to optimize your renderer.