We have a list of 50 tables in "SQL Server-A" on "PC-A" and the same 50 tables in "SQL Server-B" on "PC-B"; both machines are connected over a network.
"SQL server-A" acts as staging since it is directly connected with sensors. A python Program tries to transfers all the data from 50 tables in "SQL Server-A" to "SQL Server-B" every hour ("every hour, last 2 hours data will be transfered in incremental way"). While transfering, how to avoid the row insertion in "SQL Server-B" tables if the row already exist in the tables?
Earlier we used PostgreSQL in place of "Server-B". We bulk-transferred the 2-hour window into a PostgreSQL temp table and then inserted from the temp table into the actual table with ON CONFLICT DO NOTHING, roughly as shown below. What is the equivalent in SQL Server?
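This is roughly what we did on PostgreSQL (the table and column names here follow the example below and are only illustrative):

-- Bulk-load the 2-hour window into a temp table, then keep only new IDs
CREATE TEMP TABLE temp_table1 (LIKE table1);

-- ... bulk COPY of the 2-hour window into temp_table1 happens here ...

INSERT INTO table1 (id, name, value, "timestamp")
SELECT id, name, value, "timestamp"
FROM temp_table1
ON CONFLICT (id) DO NOTHING;   -- rows whose id already exists are skipped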
For example, the table in "Server-A", DB: DB-1, Table: Table-1 (here the "ID" column is the primary key):
ID Name Value Timestamp
1 X 12 2022-02-14 09:46:24.840
2 Y 15 2022-02-14 09:47:24.840
3 A 35 2022-02-14 09:48:24.840
4 B 56 2022-02-14 09:49:24.840
5 C 86 2022-02-14 09:50:24.840
Table in "Server-B" and DB : DB-1 & Table: Table-1
ID Name Value Timestamp
1 X 12 2022-02-14 09:46:24.840
2 Y 15 2022-02-14 09:47:24.840
3 A 35 2022-02-14 09:48:24.840
Need to insert only the ID 4 & 5 in the Table-1 of DB-1 in server-B and skip ID 1,2 & 3. What is the best way to transfer this in SQL, since tables in Server-B hold millions of rows?