I'll try my best to explain the situation, but please ask questions if anything is unclear.
My situation: I have two computers, A and B. On computer A, I have a program that views a database and displays it to the user. On computer B, I have a program where various actions by the user cause data to ultimately end up in said database (a very indirect process through many third-party applications).
My 'goal': I need to figure out how long it takes, from the time I perform an action on B, until I am able to access that data from the program on A. Ideally, I need to narrow this down to a 50 ms margin of error.
Tools:
- Computers A and B share a local network connection
- They can communicate via TCP, and I've used C# for this communication before. I know it isn't strictly accurate to say, but we can effectively treat the communication times B->A and A->B as constant (though each direction has its own constant). See the echo sketch just after this list.
- They have access to a shared NTFS-formatted network drive.
- I can tell on B, with negligible delay (effectively zero for this purpose), when the action is performed.
- I can tell, with equally negligible delay, when a value is accessible from the program/database on A.
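For reference, the A-side TCP piece I have in mind is nothing more exotic than an echo responder like the one below. This is only a minimal sketch, and the port number and class name are arbitrary placeholders; the point is that it writes back whatever byte it receives, so B can time a full round trip B->A->B using only its own clock.

```csharp
// Minimal echo responder running on A (port 5555 is an arbitrary choice).
// Whatever byte arrives is written straight back, so B can time a round
// trip B -> A -> B entirely on its own clock.
using System.Net;
using System.Net.Sockets;

class EchoResponder
{
    static void Main()
    {
        var listener = new TcpListener(IPAddress.Any, 5555);
        listener.Start();
        using var client = listener.AcceptTcpClient();
        using var stream = client.GetStream();
        var buffer = new byte[1];
        int read;
        while ((read = stream.Read(buffer, 0, buffer.Length)) > 0)
        {
            stream.Write(buffer, 0, read); // echo the byte straight back
        }
    }
}
```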
Constraints/Roadblocks:
- Computer B does not have internet access
- The clocks cannot be trusted to be in sync, even if they currently appear to be. I have seen the clocks on the two computers differ from each other by minutes. I do not know how timestamps on the network drive are determined.
- While the TCP communication time is effectively constant for my purposes, I do not know what those constants (A->B and B->A) are, or how to measure them.
- The actions and retrievals happen in third-party programs, the database is a reflection of another database, and I cannot access the server these values are on.
- (Edit/added) Computer B does not know when data lands in the database, only when the action to place it there fires off.
I think this ultimately turns into some sort of math problem where I would use differences between timestamps taken within each computer, rather than comparing timestamps between the computers; that is, measuring how long the TCP communication takes instead of timing the overall process directly. If I had that, I could send a message from B to A when the event happens and a message from A to B when the data is available, and then subtract out the communication time. If this is my best option, I don't know how I would set it up, practically or in theory, since I can't trust either machine's timestamps.
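For concreteness, below is roughly what I'm picturing on the B side. Everything in it is my own assumption of how it could work, not something I've built: the host name and ports are placeholders, and it assumes A runs the echo responder above plus a hypothetical "notifier" service that writes one byte back the instant the value becomes visible in the database. Both timestamps come from a single Stopwatch on B, so no wall clock on either machine is ever compared, and the unknown A->B constant is at least bounded above by the measured round trip.

```csharp
// Sketch of the measurement on B. Assumes A runs the echo responder above
// on port 5555 and a hypothetical "notifier" on port 5556 that writes one
// byte back the moment the value becomes visible in the database on A.
using System;
using System.Diagnostics;
using System.Net.Sockets;

class LatencyProbe
{
    const string HostA = "computer-a"; // placeholder for however B reaches A

    // Round trip B -> A -> B against the echo responder. This is the sum of
    // the two one-way constants (d_BA + d_AB); I can't split it without
    // synchronized clocks, but it bounds the A -> B leg from above.
    static TimeSpan MeasureRoundTrip()
    {
        using var client = new TcpClient(HostA, 5555);
        using var stream = client.GetStream();
        var payload = new byte[] { 1 };
        var sw = Stopwatch.StartNew();
        stream.Write(payload, 0, 1);
        stream.Read(payload, 0, 1); // blocks until the echo comes back
        sw.Stop();
        return sw.Elapsed;
    }

    static void Main()
    {
        TimeSpan rtt = MeasureRoundTrip();

        // Connect to the hypothetical notifier before firing the action,
        // so connection setup is not part of the measurement.
        using var client = new TcpClient(HostA, 5556);
        using var stream = client.GetStream();

        // ... fire the action on B here (in the third-party program) ...
        var sw = Stopwatch.StartNew();
        stream.Write(new byte[] { 1 }, 0, 1); // "the action just fired" marker to A
        stream.Read(new byte[1], 0, 1);       // blocks until A says "data visible"
        sw.Stop();

        // sw.Elapsed = (action -> data visible on A) + d_AB, with both
        // timestamps taken from B's own Stopwatch. d_AB is unknown, but
        // 0 < d_AB < rtt, so the true latency is bracketed:
        Console.WriteLine($"Latency is between {sw.Elapsed - rtt} and {sw.Elapsed} (round trip: {rtt})");
    }
}
```

On a local connection the round trip should only be a few milliseconds, so if that reasoning holds, the uncertainty from the unknown one-way constant would sit well inside my 50 ms target. I just don't know whether this approach is actually sound, or whether there is a more standard way to do it.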
Also, while I would greatly appreciate responses/answers/ideas, to be honest part of my reason for writing this is my own benefit: to clearly describe and think through the issue. Additionally, I couldn't find an answer to this problem, so hopefully this will pop up if someone else has a similar problem down the road.