This seems a slightly odd architecture to me - you will be forced to do a lot of work to keep two separate scene graphs synchronised, and you'll probably find it impossible to keep them completely decoupled (the situation you describe in the question is one example, but there will be many more).
I'd encourage you to think about a single game object graph. You can still have physics strategies and render strategies for each object, but I'd suggest seeing them more as "plug-ins" to the game object rather than separate object graphs. This way the game object can have a position / rotation vectors that are accessed by both physics and render components.
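To make the idea concrete, here is a minimal sketch in Python of what a single game object with plug-in components could look like. All class and method names here are hypothetical - the point is just that both components read and write the *same* position owned by the game object, so there is nothing to synchronise:

```python
from dataclasses import dataclass

@dataclass
class Vec3:
    x: float = 0.0
    y: float = 0.0
    z: float = 0.0

class PhysicsComponent:
    """Plug-in that integrates the owning object's shared transform."""
    def __init__(self, owner, velocity):
        self.owner = owner
        self.velocity = velocity

    def step(self, dt):
        # Writes directly into the shared position on the game object.
        self.owner.position.x += self.velocity.x * dt
        self.owner.position.y += self.velocity.y * dt
        self.owner.position.z += self.velocity.z * dt

class RenderComponent:
    """Plug-in that reads the same shared transform when drawing."""
    def __init__(self, owner):
        self.owner = owner

    def draw(self):
        p = self.owner.position
        return f"draw at ({p.x:.1f}, {p.y:.1f}, {p.z:.1f})"

class GameObject:
    """Owns the transform; physics and render are plug-ins, not peers."""
    def __init__(self):
        self.position = Vec3()
        self.rotation = Vec3()
        self.physics = None
        self.render = None

# Usage: attach both components to one object and watch them agree.
obj = GameObject()
obj.physics = PhysicsComponent(obj, Vec3(1.0, 0.0, 0.0))
obj.render = RenderComponent(obj)
obj.physics.step(0.5)
print(obj.render.draw())  # the render component sees the physics update
```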
An alternative, if you don't fancy re-architecting to a single game object graph, would be to separate out the position / rotation information into a separate structure, e.g. a large array of vectors. Both Physics and Render objects can share access to this structure.
This would imply:
- Both Physics and Render objects would need to know the index of their own position in the array (either by storing the index directly, or by some form of hashed lookup)
- Both Physics and Render objects would have to be happy with the same position / rotation formats
- You'd have to do some extra bookkeeping as objects are created / destroyed
- You'd have to be a little careful about concurrency, e.g. what happens if new Physics objects are added while rendering is taking place?
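A rough sketch of that shared structure, assuming a simple free-list for the create / destroy bookkeeping (all names hypothetical; concurrency is deliberately ignored here):

```python
class TransformStore:
    """A shared array of position / rotation slots, indexed by both sides."""
    def __init__(self):
        self.positions = []  # slot i holds object i's position tuple
        self.rotations = []
        self.free = []       # recycled slots from destroyed objects

    def allocate(self):
        """Hand out a slot index; reuse a freed one if available."""
        if self.free:
            idx = self.free.pop()
            self.positions[idx] = (0.0, 0.0, 0.0)
            self.rotations[idx] = (0.0, 0.0, 0.0)
            return idx
        self.positions.append((0.0, 0.0, 0.0))
        self.rotations.append((0.0, 0.0, 0.0))
        return len(self.positions) - 1

    def release(self, idx):
        """Bookkeeping on destroy: mark the slot reusable."""
        self.free.append(idx)

class PhysicsObject:
    """Stores its index into the shared array, as described above."""
    def __init__(self, store, idx):
        self.store, self.idx = store, idx

    def integrate(self, velocity, dt):
        x, y, z = self.store.positions[self.idx]
        vx, vy, vz = velocity
        self.store.positions[self.idx] = (x + vx * dt, y + vy * dt, z + vz * dt)

class RenderObject:
    """Reads the same slot; both sides must agree on the format."""
    def __init__(self, store, idx):
        self.store, self.idx = store, idx

    def position(self):
        return self.store.positions[self.idx]

# Usage: one slot, two views of it.
store = TransformStore()
slot = store.allocate()
phys = PhysicsObject(store, slot)
rend = RenderObject(store, slot)
phys.integrate((2.0, 0.0, 0.0), 0.5)
print(rend.position())  # render sees the physics update via the shared slot
```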
Overall I'm not sure that this gains you much... but it might make sense if you have some other constraint, such as the design of a 3rd-party physics library.