
A cache-efficient way of storing components in an ECS is to divide the components up by type into large arrays, and then have each system iterate over the relevant arrays. However, let's say I also want to avoid false sharing between the rendering thread and the physics thread when they access the coordinates of an entity at the same time.
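Concretely, the layout I mean is something like this (a rough sketch with illustrative names):

```cpp
#include <cstddef>
#include <vector>

// Each component type lives in its own contiguous array, one entry per entity,
// and a system walks an array linearly, which is cache friendly.
struct Position { float x, y, z; };
struct Velocity { float x, y, z; };

std::vector<Position> positions;
std::vector<Velocity> velocities;

// The "physics system" iterates the arrays in order.
void integrate(float dt) {
    for (std::size_t i = 0; i < positions.size(); ++i) {
        positions[i].x += velocities[i].x * dt;
        positions[i].y += velocities[i].y * dt;
        positions[i].z += velocities[i].z * dt;
    }
}
```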

Let's assume a cache line is 64 bytes. Say I have a 'Positions' array that is 1 GiB. I can divide it into 64-byte pages, and I only need one boolean value per page to record whether the page is busy (being written) or not. Using std::vector<bool>, which packs each bool into a single bit, that bookkeeping would take up 2 MiB of memory.
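The bookkeeping I have in mind would look roughly like this (a sketch only; the constants just restate the numbers above):

```cpp
#include <cstddef>
#include <vector>

// 1 GiB of position data split into 64-byte pages, one busy flag per page.
constexpr std::size_t kCacheLine  = 64;                        // assumed cache-line size
constexpr std::size_t kArrayBytes = std::size_t(1) << 30;      // 1 GiB Positions array
constexpr std::size_t kPageCount  = kArrayBytes / kCacheLine;  // 16,777,216 pages

// With std::vector<bool> each flag packs into one bit, so roughly 2 MiB total.
std::vector<bool> page_busy(kPageCount, false);

// Map a byte offset within the array to its page index.
constexpr std::size_t page_of(std::size_t byte_offset) { return byte_offset / kCacheLine; }
```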

So far it sounds doable. However, I still don't have a way to deal efficiently with the situation where a worker thread finds that a page is busy.

Should I busy wait? Is there a common pattern to solve this problem?
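For reference, the naive busy wait I'm imagining would look roughly like this (a sketch only; it uses one std::atomic_flag per page rather than std::vector<bool>, since the packed bits can't be read or written atomically):

```cpp
#include <atomic>
#include <cstddef>
#include <memory>

// One std::atomic_flag per 64-byte page. This costs a byte per flag instead of
// the single bit a packed std::vector<bool> would use.
struct PageLocks {
    std::unique_ptr<std::atomic_flag[]> flags;

    explicit PageLocks(std::size_t pages) : flags(new std::atomic_flag[pages]) {
        for (std::size_t i = 0; i < pages; ++i)
            flags[i].clear();  // start every page in the "not busy" state
    }

    void lock(std::size_t page) {
        // Busy wait: spin until whoever holds the page clears its flag.
        while (flags[page].test_and_set(std::memory_order_acquire)) { /* spin */ }
    }

    void unlock(std::size_t page) {
        flags[page].clear(std::memory_order_release);
    }
};
```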

Or more importantly, is this pointless overengineering? I'm just trying to make my "homework" framework extension-proof, as a learning exercise. Never having built a large engine, I don't know whether false sharing is actually a noteworthy performance concern in this situation.

Alex
  • Is your rendering engine reading the data at the same time the physics engine writes it? That sounds dubious. Normally the physics engine should compute and write state `N+1` while rendering engine reads state `N`. – spectras Aug 29 '17 at 00:50
  • Then should I have the engines copy the data to their own buffers? Sounds like that's how I make the physics/logic/any other engine compute state `N+1` while `N` is being rendered, but it also sounds like a lot of memory transfer – Alex Aug 29 '17 at 11:15
  • The point is: if your rendering engine reads your data at the same time your physics engine modifies it, your program just won't work. That's why they are made to work in a pipelined kind of way: physics engine works on step N+1 while rendering engine works on step N (and while the GPU works on step N-1 basically, and while the screen shows N-2 if you use double-buffering) – spectras Aug 29 '17 at 11:57
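For illustration, the pipelined scheme described in the comments amounts to double-buffering the state, roughly along these lines (a sketch with made-up names):

```cpp
#include <vector>

struct Position { float x, y, z; };

// The simulation state that gets double-buffered.
struct WorldState {
    std::vector<Position> positions;
};

WorldState states[2];
int read_index = 0;  // the renderer reads states[read_index] (step N)

WorldState&       write_state() { return states[1 - read_index]; }  // physics writes step N+1
const WorldState& read_state()  { return states[read_index]; }      // renderer reads step N

// Once physics has finished writing N+1 and the renderer has finished
// reading N, swap the roles for the next frame.
void flip() { read_index = 1 - read_index; }
```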

0 Answers