That would be the optimal configuration, but it also depends on the applications that use the InfiniBand connections: you would have to make sure that those applications and their processes are bound to the same CPU and memory region (NUMA node) that the InfiniBand adapter is attached to, which can be somewhat tricky. A sketch of one way to do that follows below.
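For illustration, here is a minimal Python sketch (Linux only, and an assumption on my part rather than anything your application stack necessarily does): it reads the NUMA node of an HCA from sysfs and pins the current process to that node's CPUs. The device name `mlx4_0` is just a placeholder.

```python
#!/usr/bin/env python3
# Minimal sketch (Linux only): bind the current process to the CPUs of the
# NUMA node that an InfiniBand HCA is attached to. The device name "mlx4_0"
# is a placeholder -- use whatever `ibstat` lists on your system.
import os

HCA = "mlx4_0"  # assumption: adjust to your adapter

def hca_numa_node(hca):
    """NUMA node of the HCA's PCIe slot (-1 if the kernel does not know)."""
    with open(f"/sys/class/infiniband/{hca}/device/numa_node") as f:
        return int(f.read().strip())

def node_cpus(node):
    """Expand the kernel's cpulist (e.g. '0-7,16-23') into a set of CPU ids."""
    cpus = set()
    with open(f"/sys/devices/system/node/node{node}/cpulist") as f:
        for part in f.read().strip().split(","):
            lo, _, hi = part.partition("-")
            cpus.update(range(int(lo), int(hi or lo) + 1))
    return cpus

node = hca_numa_node(HCA)
if node >= 0:
    cpus = node_cpus(node)
    os.sched_setaffinity(0, cpus)  # pin this process to the HCA's NUMA node
    print(f"pinned to NUMA node {node}, CPUs {sorted(cpus)}")
else:
    print("kernel reports no NUMA node for this HCA; nothing to do")
```

Note that this only pins the CPU side; for memory locality you would typically start the application under numactl --cpunodebind=&lt;node&gt; --membind=&lt;node&gt; instead of (or in addition to) setting CPU affinity from inside the process.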
When running e.g. MPI applications that make use of all CPUs and cores of one server, you will very likely have QPI communication anyway, and I doubt that a second InfiniBand card will give a real speed-up in such a scenario. I am also unsure whether the MPI stack can load-balance over both InfiniBand adapters in that case.
Where I think such a configuration could give a real benefit is, for example, a storage server, where data comes in over InfiniBand and needs to be sent to a RAID or SAS adapter. In that case I could imagine that performance is more consistent when the data traffic does not have to cross CPU and memory boundaries.