This question is about how the fio (flexible I/O tester) utility manages I/O queues for NVMe storage (SSDs in particular) when using the libaio engine.
For testing I am using Ubuntu 14.04 and a commercial NVMe SSD.
I run the fio experiment with the following arguments:
direct=1
numjobs=256
iodepth=1
ioengine=libaio
group_reporting
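For reference, a command line matching these arguments might look like the one below; the target device, access pattern, block size and runtime are not stated in the question, so those values are only assumptions for illustration.

# only direct, numjobs, iodepth, ioengine and group_reporting come from the question;
# the device path, rw pattern, block size and runtime are assumed for illustration
fio --name=qd_test --filename=/dev/nvme0n1 --rw=randread --bs=4k \
    --direct=1 --ioengine=libaio --iodepth=1 --numjobs=256 \
    --runtime=60 --time_based --group_reporting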
For this example, assume the NVMe device advertises/supports the creation of 10 I/O queues with a maximum queue depth of 64, and that all 10 queue creations succeed during initialization.
Based on the above parameters and constraints, the question is:
"How would fio use the queues for I/O commands?"
Or, to narrow the scope: "Does the iodepth argument in fio directly equate to nvme_queue_depth for this test?"
I expect something like one of the scenarios below to be going on under the hood, but I do not have the right information.
Example scenario 1: does fio generate 256 jobs/threads which submit I/O to the nvme_queues and try to keep at least 1 I/O command in each of the 10 nvme_queues at any point? If a job sees that a queue is full (i.e. one I/O command is already present in an nvme_queue), does it just try to submit to the other 9 nvme_queues, or perhaps round-robin until it finds an empty queue?

Example scenario 2: do the 256 fio threads/jobs not really respect iodepth == nvme_queue_depth for this test and submit multiple I/Os anyway? For example, each thread just submits 1 I/O command to the nvme_queues without any check on the depth of commands already in the 10 nvme_queues. In other words, the 256 threads end up maintaining roughly 25 or 26 I/Os pending/in-flight in each of the 10 nvme_queues.
Link to the definition of iodepth in the fio documentation.
Is either scenario true? And is there a way to ascertain this with an experiment?
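One experiment I can think of (assuming standard Linux tooling and that the device shows up as nvme0n1, which is an assumption) is to watch the block layer while the fio run is in progress:

# average number of requests in flight at the block layer (avgqu-sz column)
iostat -x nvme0n1 1

# on kernels where the NVMe driver uses blk-mq, each hardware submission queue
# appears as a directory here, so you can count what the kernel actually created
ls /sys/block/nvme0n1/mq/

# trace issue (D) and completion (C) events to reconstruct how many commands
# are outstanding at the driver at any moment
blktrace -d /dev/nvme0n1 -w 10 -o nvme_trace && blkparse -i nvme_trace

fio itself can also be run with --debug=io to print when each job submits and reaps I/Os, which would show whether the jobs ever hold more than one I/O in flight.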
Going through the NVMe specification and the fio documentation, neither clearly states how this scenario is handled, or the wording is vague.
Update: the two scenarios are shown in image form at https://i.stack.imgur.com/4tGcm.jpg (unable to embed); the top one is scenario 1 and the bottom is scenario 2.
Update 2: sorry if the question is on the vague side, but I will attempt to improve it by expanding a bit further.
My understanding is that several layers sit between fio and the device, so the question spans many layers of code and/or protocol that are at play here. To list them:
1. fio (application) and libaio
2. Linux kernel/OS
3. NVMe driver
4. The storage device's SSD controller and its code
The two scenarios explained above are a rough, high-level attempt at answering my own question, as I am far from an expert on the layers mentioned above.
According to an answer below, it seems scenario 1 is loosely relevant. I would like to know a bit more about the general policy and predictability through all the layers. Partial explanations are OK, hopefully combining into a complete one.
So a third, naive rephrasing of the question would be: "How does fio issue traffic, and how do the I/Os really end up in the storage nvme_queues?"