x86 indicates that the algorithm is optimized for 32-bit platforms. This means it operates on 32-bit unsigned integers.
x64 is then optimized for 64-bit platforms, operating on 64-bit unsigned integers.
Also, the results of the two are not compatible: the hash values for the same input will differ depending on whether they were computed with MurmurHash3_x86_128 or MurmurHash3_x64_128, for example.
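As a minimal sketch of that incompatibility, assuming the reference MurmurHash3.h / MurmurHash3.cpp from Austin Appleby's smhasher repository is available to compile against:

```cpp
// Compile together with the reference implementation, e.g.:
//   g++ demo.cpp MurmurHash3.cpp -o demo
#include <cstdint>
#include <cstdio>
#include <cstring>
#include "MurmurHash3.h"

int main() {
    const char *key = "hello world";
    const uint32_t seed = 42;
    uint64_t out_x86[2], out_x64[2];

    // Same input, same seed, same output width (128 bits),
    // but the two variants use different internal mixing,
    // so the results will not match.
    MurmurHash3_x86_128(key, (int)strlen(key), seed, out_x86);
    MurmurHash3_x64_128(key, (int)strlen(key), seed, out_x64);

    printf("x86_128: %016llx%016llx\n",
           (unsigned long long)out_x86[0], (unsigned long long)out_x86[1]);
    printf("x64_128: %016llx%016llx\n",
           (unsigned long long)out_x64[0], (unsigned long long)out_x64[1]);
}
```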
Does this mean I have to use the x64 functions and provide an x64 executable to be able to use this hash library on x64 systems? Or can I simply use the x86 version and just accept poorer performance?
64-bit hash functions can be compiled for 32-bit systems, but they will be quite slow because the compiler has to emulate each 64-bit operation with multiple 32-bit instructions. If 32-bit support is important, you should use an x86-optimized function, not an x64-optimized one. On x64 systems, 32-bit code runs fine, although I would consider that an under-utilization: x64-optimized algorithms are much more efficient on 64-bit CPUs.
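To see where the 32-bit slowdown comes from, compare the two finalization mixers, adapted here from the reference MurmurHash3.cpp: every operation in fmix32 maps to a single instruction on a 32-bit CPU, while each 64-bit multiply and shift in fmix64 must be synthesized from several 32-bit instructions on a 32-bit target.

```cpp
#include <cstdint>
#include <cstdio>

// 32-bit finalization mix: cheap on both 32- and 64-bit CPUs.
static inline uint32_t fmix32(uint32_t h) {
    h ^= h >> 16;
    h *= 0x85ebca6b;
    h ^= h >> 13;
    h *= 0xc2b2ae35;
    h ^= h >> 16;
    return h;
}

// 64-bit finalization mix: on a 32-bit target, each 64-bit
// multiply and shift is emulated with multiple instructions,
// which is where the slowdown comes from.
static inline uint64_t fmix64(uint64_t k) {
    k ^= k >> 33;
    k *= 0xff51afd7ed558ccdULL;
    k ^= k >> 33;
    k *= 0xc4ceb9fe1a85ec53ULL;
    k ^= k >> 33;
    return k;
}

int main() {
    printf("fmix32(1) = %08x\n", fmix32(1u));
    printf("fmix64(1) = %016llx\n", (unsigned long long)fmix64(1ull));
}
```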
Am I correct in thinking that the _32, _64, and _128-bit versions differ only in that the wider versions offer better distribution?
I suppose the answer is yes, if by distribution you mean "less likely to cause collisions". Each additional bit in a hash doubles the number of possible outputs: a 4-bit hash has 16 possible values, a 64-bit hash has about 18 quintillion (2^64 ≈ 1.8 × 10^19), and a 128-bit hash about 340 undecillion (2^128 ≈ 3.4 × 10^38). At 256 bits the output space is so large that it is generally considered sufficient for cryptographic purposes.
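For intuition on how output width translates into collision risk, here is an illustrative sketch (my numbers, not from the answer above) using the standard birthday approximation p ≈ 1 − e^(−n(n−1)/2^(b+1)) for n items hashed into b bits:

```cpp
#include <cmath>
#include <cstdio>

// Approximate probability that at least two of n items collide
// in a b-bit hash, via the birthday approximation.
// -expm1(x) computes 1 - exp(x) accurately for tiny probabilities.
static double collision_probability(double n, int bits) {
    return -std::expm1(-n * (n - 1.0) / std::pow(2.0, bits + 1));
}

int main() {
    const double n = 1e9;  // one billion hashed items
    const int widths[] = {32, 64, 128};
    for (int bits : widths) {
        printf("%3d-bit hash, 1e9 items: p(collision) ~= %g\n",
               bits, collision_probability(n, bits));
    }
    // Prints roughly: 1 for 32 bits, ~0.027 for 64 bits,
    // and ~1.5e-21 for 128 bits.
}
```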
Something else to be aware of: modern hash functions take advantage of newer CPU instruction sets, such as the SSE4.2 CRC32 instruction, AES-NI, and SIMD extensions like SSE2, to achieve better performance on hardware that supports them. This can greatly speed up hashing on such CPUs.
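As one example of such hardware support: x86 compilers expose the SSE4.2 CRC32 instruction through the _mm_crc32_u* intrinsics. A minimal sketch of a hash loop built on it (illustrative only, not part of MurmurHash3; requires a CPU with SSE4.2 and e.g. g++ -msse4.2):

```cpp
#include <cstdint>
#include <cstddef>
#include <cstdio>
#include <nmmintrin.h>  // SSE4.2 intrinsics, including _mm_crc32_u8

// Hash a buffer with the hardware CRC32 instruction, one byte at
// a time for simplicity (wider _mm_crc32_u32/u64 variants exist).
static uint32_t crc32_hash(const void *data, size_t len, uint32_t seed) {
    const uint8_t *p = static_cast<const uint8_t *>(data);
    uint32_t crc = seed;
    for (size_t i = 0; i < len; ++i)
        crc = _mm_crc32_u8(crc, p[i]);
    return crc;
}

int main() {
    const char msg[] = "hello world";
    printf("crc32 hash: %08x\n", crc32_hash(msg, sizeof(msg) - 1, 0));
}
```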