
I'm considering the feasibility of programming a multi-user RTS game (partly) in C++. What I quickly discovered is that one hard requirement is that the game simulation must be fully deterministic, to the very last bit, across the server and all clients, so that network communication can be limited to the user input rather than the game state itself. Since everyone has a different computer, this seems like a hard problem.

So, is there some "magic" way of getting the C++ compiler to create an executable that will be fully deterministic across Linux (the server), Windows and Mac? I think the two main OSS C++ compilers are GCC and Clang, so I was wondering if one performs better than the other in this regard.

I would also be interested in any test-suite that could be used to validate C++ determinism.

[EDIT] By deterministic, I mean that the compiled program, given the same initial state and the same inputs in the same order, will always produce the same output, on any platform where it runs; so, also across the network. "Consistent" sounds like an appropriate word for this behavior to me, but I'm not a native speaker, so I might be misinterpreting its exact meaning.

[EDIT#2] While discussions about whether determinism/consistency matters, whether I should aim for it in a game engine, and how big a problem it generally is in C++ are quite interesting, they do not actually answer the question. So far, no one has offered any facts telling me whether Clang or GCC gives the most reliable/deterministic/consistent results.

[EDIT#3] It just occurred to me that there IS a way to get exactly the same result in C++ as in Java. One has to take an open source implementation of the JVM, and extract the code that implements the operators and mathematical functions. Then you turn it into a stand-alone library and call inlineable functions in it, instead of using operators directly. It would be a pain to do by hand, but if the code is generated, then it's a perfect solution. Maybe that could even be done with classes and operator overloading, so it looks natural as well.
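To illustrate the idea, here is a minimal sketch of what such a wrapper type could look like (the names are my own invention, not taken from any actual JVM): a thin class around `int32_t` that routes each operator through an explicitly defined function, pinning down cases that C++ leaves to the implementation, such as wrap-around on addition and right-shifting negative values.

```cpp
#include <cstdint>

// Hypothetical wrapper: every operation goes through an explicitly
// defined function, so no implementation-defined behaviour leaks in.
struct DetInt {
    int32_t v;

    friend DetInt operator+(DetInt a, DetInt b) {
        // Wrap on overflow, like Java: do the math in uint32_t, where
        // overflow is well-defined, then convert back. (The conversion
        // back is itself implementation-defined before C++20, but all
        // mainstream compilers wrap; C++20 guarantees it.)
        return DetInt{static_cast<int32_t>(
            static_cast<uint32_t>(a.v) + static_cast<uint32_t>(b.v))};
    }

    friend DetInt operator>>(DetInt a, int n) {
        // Arithmetic shift, defined for negative values too (Java's >>
        // semantics), instead of relying on the compiler's choice.
        if (a.v >= 0) return DetInt{a.v >> n};
        return DetInt{static_cast<int32_t>(
            ~(~static_cast<uint32_t>(a.v) >> n))};
    }
};
```

A full library would of course need the remaining operators, but the pattern is the same: express everything in terms of unsigned arithmetic, where the standard specifies the result exactly.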

Sebastien Diot
  • Define "deterministic". So long as you don't rely on outside sources (user input, memory allocator addresses, etc.) to determine your control flow, nor any undefined, unspecified or implementation-defined behaviour, then a C++ program should always run the same. – Oliver Charlesworth Jul 10 '11 at 11:32
  • How do you mean "deterministic"? As in, for example, "consistent in function parameter evaluation order"? Or what? – Armen Tsirunyan Jul 10 '11 at 11:33
  • @Oli: Not if it calls something in the C standard library, particularly so rand() (and most games do have some element of randomness). – David Hammen Jul 10 '11 at 11:38
  • @David: That's a good point. Are there any other cases like that? – Oliver Charlesworth Jul 10 '11 at 11:39
  • I think the OP is more interested in *consistency* along various platforms. E.g. knowing that data types have the same size or that integer operations overflow in the same manner, or even that structs have the same in-memory layout. – thkala Jul 10 '11 at 11:41
  • @thkala: No, he's talking about lockstep-networking, where every client runs the game management code, so that you only have to pass controller data from machine to machine. For that to work, every machine must be performing the exact same computations as the others. Hence the game loop for each program must be deterministic: given the same inputs, it provides binary-identical outputs. – Nicol Bolas Jul 10 '11 at 11:43
  • @Oli: I asked a question recently about the difference between C++ and Java operators, and the answers were that C++ does not fully define many situations, which are left as a choice to the compiler implementer, and I'm not even talking about floating point, but things like the bit-shift operator on a negative integer value. – Sebastien Diot Jul 10 '11 at 12:20
  • @Sebastien: Indeed. But as a general rule, you shouldn't be relying on any specific behaviour for this sort of thing when you write your code. In particular, if you rely on *undefined behaviour* (e.g. shifting by a negative amount), then you may not even get identical behaviour on the same machine with the same compiler. If you rely on *implementation-defined behaviour* (e.g right-shifting a negative number), then it is up to you to ensure that the compilers and platforms that you use all do the same thing (they declare their behaviour in their documentation). – Oliver Charlesworth Jul 10 '11 at 12:26

5 Answers

0

Since everyone has a different computer, this seems like a hard problem.

It's not. Really, this kind of networking is quite simple, so long as you don't do anything that is undefined by the specification. IEEE-754 is very clear on exactly how floating-point math is to be done, how rounding is to be done, etc, and it is implemented identically across platforms.

The biggest thing you need to not do is rely on SIMD CPU instructions in code that needs to be deterministic (note: this is physics, AI, and such: game state. Not graphics, which is where you need SIMD). These kinds of instructions play fast-and-loose with the floating-point rules. So no SIMD in your game code; only in the "client" code (graphics, sound, etc).

Also, you need to make sure that your game state doesn't depend on things like the time; each game state clock tick should be a fixed time interval, not based on the PC's clock or anything of that nature.
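A fixed-timestep pump is the standard way to do this. A minimal sketch (the 50 ms tick length is an arbitrary choice, and `Game`/`pump` are made-up names): real time only decides *how many* ticks to run; the simulation itself never reads the clock.

```cpp
#include <cstdint>

constexpr int64_t TICK_US = 50'000;  // 50 ms per tick (arbitrary choice)

struct Game {
    uint64_t tick = 0;
    void advance() { ++tick; /* deterministic state update goes here */ }
};

// Called once per frame with the elapsed real time in microseconds;
// drains whole ticks and carries the remainder in the accumulator.
inline void pump(Game& g, int64_t& accumulator_us, int64_t frame_us) {
    accumulator_us += frame_us;
    while (accumulator_us >= TICK_US) {
        accumulator_us -= TICK_US;
        g.advance();
    }
}
```

Rendering can then interpolate between ticks at any frame rate, while every machine advances the simulation in exactly the same discrete steps.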

Obviously, you should avoid any random function whose source you don't control. But again, that only applies to your main gameplay loop; the graphics stuff can be client-specific, since it's just visuals and doesn't matter.
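For the randomness itself, a tiny PRNG you own is enough. For instance, xorshift32 (George Marsaglia's public-domain algorithm), sketched here: the same seed yields the same sequence on every platform, unlike the unspecified `rand()` from the C library.

```cpp
#include <cstdint>

// Self-contained xorshift32 PRNG: fully deterministic across
// compilers and platforms, because it only uses well-defined
// unsigned 32-bit operations.
struct Xorshift32 {
    uint32_t state;
    explicit Xorshift32(uint32_t seed) : state(seed ? seed : 1) {}
    uint32_t next() {
        state ^= state << 13;
        state ^= state >> 17;
        state ^= state << 5;
        return state;
    }
};
```

Seed it once from data agreed upon at game start (e.g. sent by the server), and every peer draws the same sequence.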

That is pretty much it, as far as keeping the two game states in sync. The compiler you use isn't going to be a big issue for you.

Note that StarCraft and StarCraft II use this as the basis of their networking model. They both run on Macs and PCs, and both can play against each other. So it's very possible, and doesn't require Clang.

Though if you like Clang, you should use it. But that should be because you like it, not for networking.

Nicol Bolas
  • What about, for instance, trigonometric/transcendental functions such as `sin` and `log`? – Oliver Charlesworth Jul 10 '11 at 11:47
  • Assuming he's sticking to a single compiler and a single standard library implementation (across multiple platforms), that shouldn't be a problem. But for added security, he could implement a common version of these functions. – Nicol Bolas Jul 10 '11 at 11:52
  • Wrong: things will break as soon as the compiler uses registers with more internal precision than IEEE-754 requires. You will get results that are all within spec but are no longer bit-exact. This 'broke' several networked games about 10 years ago, where optimized code paths for Intel and AMD had slightly different orders of evaluation. It was close, but not bit-exact. That stuff is still a huge problem – Nils Pipenbrinck Jul 10 '11 at 11:59
  • @Oli: Exactly. rand() is just the worst offender. sin() and log() are not consistent down to the last bit, either. For that matter, even something innocuous like `a*(b+c)` or `a+b+c` can be an offender. The results may well be different on the same computer, same compiler, but different optimization levels. Floating point arithmetic is neither associative nor distributive. – David Hammen Jul 10 '11 at 13:24
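The point about `a+b+c` is easy to demonstrate. This sketch shows the two groupings of 0.1 + 0.2 + 0.3 producing different doubles, which is exactly why an optimizer that reassociates (e.g. under fast-math flags) can silently change results:

```cpp
// Double addition is not associative: the two groupings of
// 0.1 + 0.2 + 0.3 round differently under IEEE-754.
inline bool grouping_matters() {
    double left  = (0.1 + 0.2) + 0.3;  // 0.6000000000000001
    double right = 0.1 + (0.2 + 0.3);  // 0.6
    return left != right;
}
```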
0

Don't rely on undefined or unspecified behaviour, (particularly, don't use floating-point), and it doesn't matter what compiler you use.

If a is 1, and b is 2, then a + b is 3. This is guaranteed by the language standard.

C++ isn't a land in which in some compilers some things are 'deterministic' and in other compilers they aren't. C++ states some facts (like 1 + 2 == 3) and leaves some things up to the compiler (like order of evaluation of function arguments). If the output of your program only depends on the former (and the user's input), and you use a standards-compliant compiler, then your program will always produce the same output given the same user input.

If the output of your program also depends on (say) the user's operating system, then the program's still deterministic, it's just that the output is now determined by both the user's input and their operating system. If you want the output only dependent on the user's input, it's up to you to ensure that the user's operating system is not a contributing factor to the output of your program. One way to do this is to only rely on behaviour guaranteed by the language standard, and to use a compiler that conforms to that standard.

In summary, all code is deterministic, based on its input. You just have to make sure that the input is made up only of the things you want it to be.
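One practical way to follow the "don't use floating-point" advice while still doing fractional math is fixed-point arithmetic on integers, whose results the standard fully specifies. A minimal 16.16 sketch (the names are made up, and this is an illustration, not a complete library):

```cpp
#include <cstdint>

// 16.16 fixed-point: 16 integer bits, 16 fractional bits.
// Integer math is exactly specified, so results match everywhere.
struct Fixed {
    int32_t raw;  // stored value is (real value * 65536)

    static Fixed fromInt(int32_t i) { return Fixed{i * 65536}; }

    friend Fixed operator+(Fixed a, Fixed b) { return Fixed{a.raw + b.raw}; }

    friend Fixed operator*(Fixed a, Fixed b) {
        // Widen to 64 bits so the intermediate product cannot overflow.
        // (Right-shifting a negative value is implementation-defined
        // before C++20; mainstream compilers shift arithmetically.)
        return Fixed{static_cast<int32_t>(
            (static_cast<int64_t>(a.raw) * b.raw) >> 16)};
    }
};
```

Many older games used exactly this kind of type for positions and physics, for speed as much as for reproducibility.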

Karu
-1

I think which compiler you are using does not matter that much.

Such a fully deterministic approach was used in Doom, for example. Instead of a random-number generator, it used a fixed array of "random" numbers. Game time was measured in in-game ticks (about 1/30 of a second, if I remember correctly).
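That table-based scheme takes only a few lines to reproduce (the bytes below are arbitrary placeholders, not Doom's actual table): every peer walks the same table with the same index, so every peer sees the same "random" values.

```cpp
#include <cstdint>

// Fixed "random" table in the style of Doom's m_random.
// (These bytes are arbitrary placeholders, not Doom's real table.)
static const uint8_t rndtable[8] = { 0, 8, 109, 220, 222, 241, 149, 107 };
static int rndindex = 0;

// Advance the shared index and return the next table entry.
inline uint8_t game_random() {
    rndindex = (rndindex + 1) % 8;
    return rndtable[rndindex];
}
```

Resetting `rndindex` at the start of each match (or each replay) is what makes the whole sequence reproducible.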

If you measure everything by in-game mechanics, rather than offloading your work to standard libraries, which may come in various versions, I believe you should achieve good portability across different machines; provided, of course, that those machines are fast enough to run your code!

However, the network communication can create troubles of its own: latencies, drops, etc. Your game should be able to handle delayed messages and, if necessary, resynchronise itself. You might want, for example, to send a full (or at least more verbose) game state from time to time, rather than relying only on the user input.
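To know *when* to resynchronise, a common trick is to hash the serialized game state each tick and exchange only the hashes over the network, resending real state only on a mismatch. A sketch using the FNV-1a hash (the function name is made up):

```cpp
#include <cstddef>
#include <cstdint>

// FNV-1a over the serialized game state: peers exchange this 8-byte
// checksum every N ticks; a mismatch means a desync has occurred.
inline uint64_t state_hash(const uint8_t* data, size_t len) {
    uint64_t h = 1469598103934665603ULL;   // FNV offset basis
    for (size_t i = 0; i < len; ++i) {
        h ^= data[i];
        h *= 1099511628211ULL;             // FNV prime
    }
    return h;
}
```

An 8-byte checksum per tick is cheap enough to send constantly, which is exactly what lockstep engines do to detect divergence early.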

Think also about possible exploits:

  • Client: I am throwing a grenade
  • Server: You have no grenades
  • Client: I don't care. I am throwing a grenade nevertheless
CygnusX1
  • The purpose of this networking model is to essentially ensure maximum performance. Sending the whole gamestate at any point other than the detection of full desynchronization (thus breaking the model) would be antithetical to this goal. Also, you can't "throw a grenade" via exploit. The clients only send controller data, not the meaning behind that data. So each client sees that you pressed the "K" key, and each interprets this based on the current state. If you hack your game to allow "K" to always throw a grenade, the other client won't have that. Thus desync will quickly occur. – Nicol Bolas Jul 10 '11 at 11:49
-1

This is somewhat of a fool's errand. Your program will not be "fully deterministic" (whatever that means) "to the very last bit" on a big-endian machine versus a little-endian machine, nor on a 64-bit machine versus a 32-bit machine versus some other random machine.

Speaking of random, many games have an element of randomness. If you are achieving this by calling the C standard library function rand(), all bets are off.

David Hammen
  • Some minimal research shows that many RTS games can only work *because* they are fully deterministic. As soon as the server detects that a client diverges, the client gets thrown out. Therefore this is not only an achievable goal, but one that stands in most, if not all, RTS design documents. And most serious game development relies on a homegrown rand(), precisely because of this. – Sebastien Diot Jul 10 '11 at 12:13
  • There is no true randomness in CPUs, they *are* deterministic. The PRNG with the same seed yields the same sequence of values. Of course, since the standard doesn't require any particular PRNG algorithm, you either need to implement your own PRNG or make sure you're using the same version of the same stdlib implementation on every platform. –  Jul 10 '11 at 13:04
-1

If you start using floating point, all your bets are off. You will run into hard-to-find problems where you get different values even on the same platform, just by choosing an Intel or an AMD CPU.

Many runtime libraries have optimized code paths for different chips. These are all within the spec, but some are slightly more precise than the spec requires. This results in subtle round-off differences that sooner or later accumulate into a divergence that may break things.

Your goal should be to get away without 100% determinism. After all, does it matter to the player if an opponent is a pixel further to the left than it should be? It does not. What is important is that the little differences between clients and server don't ruin the gameplay.

What the player sees on his screen should look deterministic, so he doesn't feel cheated, but bit-exactness is in no way required.

The games I've worked on achieved this by constantly resynchronising the game state of all entities between all clients. However, we almost never sent the entire game state; instead we sent the state of a few objects each frame, spreading the job over several seconds.

Just give the objects where it matters most a higher priority than the others and it will be fine. In a car racing game, for example, you don't need the exact position of an opponent's car when it is far away from you, and it's fine to update it only every 20 seconds or so.

In between these updates, just trust that the little round-off errors don't accumulate so much that you run into trouble.
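That prioritisation can be as simple as ranking entities by staleness weighted by relevance and spending a fixed per-frame budget on the most urgent ones. A sketch (all names and the weighting scheme here are my own assumptions):

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Hypothetical per-entity bookkeeping: how long since the last sync,
// scaled by how much the entity matters to this client
// (e.g. inverse distance to the local player).
struct Entity {
    int id;
    uint32_t ticks_since_sync;
    float relevance;  // higher = more important
};

// Pick the `budget` most urgent entities to send this frame.
inline std::vector<int> pick_entities(std::vector<Entity> all, size_t budget) {
    std::sort(all.begin(), all.end(), [](const Entity& a, const Entity& b) {
        return a.ticks_since_sync * a.relevance >
               b.ticks_since_sync * b.relevance;
    });
    if (all.size() > budget) all.resize(budget);
    std::vector<int> ids;
    for (const auto& e : all) ids.push_back(e.id);
    return ids;
}
```

Entities that get skipped simply grow staler, so they bubble up the ranking and are guaranteed to be synced eventually.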

Nils Pipenbrinck
  • This cannot be applied to all games. What I am considering is a Minecraft-like game. We are talking about a number of "blocks" that cannot even be expressed by a 64-bit long. That kind of state cannot be synced at a regular interval because it is just too big. It has to be recomputed locally, and recomputed exactly the same way on every client. – Sebastien Diot Jul 10 '11 at 12:37
  • Determinism could be a good goal for stable software: repeatability. Ensuring that different platforms give the same floating-point results may be difficult, of course; e.g. floating-point SIMD instructions may use different internal rounding. But there is nothing wrong with the goal that an engine should be able to run its core state in a precisely deterministic manner, IMO; it might even be essential for controller feel in some cases. Float values could be wrapped to ensure the desired method is always used. A software float implementation could be used to test against? – centaurian_slug Nov 27 '11 at 05:20