11

I've come across several situations where the claim is made that doing a dot product in GLSL will end up being run in one cycle. For example:

Vertex and fragment processors operate on four-vectors, performing four-component instructions such as additions, multiplications, multiply-accumulates, or dot products in a single cycle.

http://http.developer.nvidia.com/GPUGems2/gpugems2_chapter35.html

I've also seen a claim in comments somewhere that:

    dot(value, vec4(.25))

would be a more efficient way to average four values, compared to:

    (x + y + z + w) / 4.0

Again, the claim was that dot(vec4, vec4) would run in one cycle.

I see that ARB says that dot product (DP3 and DP4) and cross product (XPD) are single instructions, but does that mean that those are just as computationally expensive as doing a vec4 add? Is there basically some hardware implementation, along the lines of multiply-accumulate on steroids, in play here? I can see how something like that is useful in computer graphics, but doing in one cycle what could be quite a few instructions on their own sounds like a lot.

Nicol Bolas
ultramiraculous

3 Answers

12

The question cannot be answered in any definitive way as a whole. How long any operation takes in hardware is not just hardware-specific, but also code-specific. That is, the surrounding code can completely mask the cost of an operation, or it can make it take longer.

In general, you should not assume that a dot product is single-cycle.

However, there are certain aspects that can certainly be answered:

I've also seen a claim in comments somewhere that:

    dot(value, vec4(.25))

would be a more efficient way to average four values, compared to:

    (x + y + z + w) / 4.0
I would expect this to be kinda true, so long as x, y, z, and w are in fact different float values rather than members of the same vec4 (that is, they're not value.x, value.y, etc). If they are elements of the same vector, I would say that any decent optimizing compiler should compile both of these to the same set of instructions. A good peephole optimizer should catch patterns like this.

I say that it is "kinda true", because it depends on the hardware. The dot-product version should at the very least not be slower. And again, if they are elements of the same vector, the optimizer should handle it.

I see that ARB says that dot product (DP3 and DP4) and cross product (XPD) are single instructions, but does that mean that those are just as computationally expensive as doing a vec4 add?

You should not assume that ARB assembly has any relation to the actual hardware machine instruction code.

Is there basically some hardware implementation, along the lines of multiply-accumulate on steroids, in play here?

If you want to talk about hardware, it's very hardware-specific. Once upon a time, there was specialized dot-product hardware. This was in the days of so-called "DOT3 bumpmapping" and the early DX8-era of shaders.

However, in order to speed up general operations, they had to take that sort of thing out. So now, for most modern hardware (aka: anything Radeon HD-class or NVIDIA 8xxx or better. So-called DX10 or 11 hardware), dot-products do pretty much what they say they do. Each multiply/add takes up a cycle.

However, this hardware also allows for a lot of parallelism, so you could have 4 separate vec4 dot products happening simultaneously. Each one would take 4 cycles. But, as long as the results of these operations are not used in the others, they can all execute in parallel. And therefore, the four of them total would take 4 cycles.

So again, it's very complicated. And hardware-dependent.

Your best bet is to start with something that is reasonable. Then learn about the hardware you're trying to code towards, and work from there.

Nicol Bolas
  • Ok, thanks. "You should not assume that ARB assembly has any relation to the actual hardware machine instruction code." is basically the concise answer I was hoping for. It just seems like ARB is kinda niche and hard to find a lot of Google-able material on. This was one of those "tribal knowledge" type things that I couldn't seem to verify, and the fact that it was true for a period of time makes sense. Cool stuff. – ultramiraculous May 26 '12 at 02:25
5

Nicol Bolas handled the practical answer, from the perspective of "ARB assembly" or looking at IR dumps. I'll address the question "How can 4 multiplies and 3 adds be one cycle in hardware?! That sounds impossible.".

With heavy pipelining, any instruction can be made to have a one cycle throughput, no matter how complex.

Do not confuse this with one cycle of latency!

With fully pipelined execution, an instruction can be spread out into several stages of the pipeline. All stages of the pipeline operate simultaneously.

Each cycle, the first stage accepts a new instruction, and its outputs go into the next stage. Each cycle, a result comes out the end of the pipeline.

Let's examine a 4d dot product, for a hypothetical core, with a multiply latency of 3 cycles, and an add latency of 5 cycles.

If this pipeline were laid out the worst way, with no vector parallelism, it would be 4 multiplies and 3 adds in sequence: 12 + 15 cycles, for a total latency of 27 cycles.

Does this mean that a dot product takes 27 cycles? Absolutely not, because the pipeline can start a new one every cycle and deliver its answer 27 cycles later.

If you needed to do one dot product and had to wait for the answer, then you would have to wait the full 27-cycle latency for the result. If, however, you had 1000 separate dot products to compute, then it would take 1026 cycles: for the first 26 cycles there are no results, on the 27th cycle the first result comes out the end, and after the 1000th input is issued it takes another 26 cycles for the last result to come out the end. This is what makes the dot product "take one cycle".

Real processors have the work distributed across the stages in various ways, giving more or less pipeline stages, so they may have completely different numbers than what I describe above, but the idea remains the same. Generally, the less work you do per stage, the shorter the clock cycle can become.

doug65536
0

The key is that a vec4 can be operated on in a single instruction (see the work Intel did on 16-byte register operations, i.e. SSE, much of the basis for the iOS Accelerate framework).

If you start splitting and swizzling the vector apart, there will no longer be a single memory address for the vector to perform the op on.

Joe