So I have just started a job, and all of our data lives in a terrible Access database with no input validation and no protection against accidentally changing data. I'm not that familiar with database management yet, though I'm working on it, and I need to polish this turd, if you will. There are multiple tables with the same data copied from one table to another, and no relationships where they would make sense. One table in particular has no primary key designated, and somehow every record has been duplicated at least once, sometimes up to four times. It is a modem database: Field1 is a modem name, Field2 is a software edition, Field3 is a waveform, and the rest of the fields are data attributed to that waveform.
So as you can see, there are many records with "modemA" in Field1, "software 1.2.3" in Field2, "waveformX" in Field3, and so on. I need to delete the duplicate records, but I don't understand how to make a query compare all of the fields as one whole record rather than as individual fields. For example, modemA and modemB might both have a software edition of 1.0 and both use waveformX and waveformY, but "modemA" + "1.0" + "waveformX" is a different record from "modemA" + "1.0" + "waveformY", which is in turn different from "modemB" + "1.0" + "waveformX", etc.
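In case it helps anyone answering: here is a minimal sketch of what I mean by "the whole record as the key", using Python's sqlite3 in place of Access (the table name `modems` and column names `modem_name`, `software_edition`, `waveform` are made up for the example). `SELECT DISTINCT` compares every listed column at once, so rows that differ in any one field are kept, and only exact duplicates collapse. In Access the equivalent would be a make-table query with Unique Values set to Yes.

```python
import sqlite3

# Hypothetical schema standing in for the real Access table.
con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute(
    "CREATE TABLE modems (modem_name TEXT, software_edition TEXT, waveform TEXT)"
)
rows = [
    ("modemA", "1.0", "waveformX"),
    ("modemA", "1.0", "waveformX"),  # exact duplicate -- should collapse
    ("modemA", "1.0", "waveformY"),  # differs only in waveform -- keep
    ("modemB", "1.0", "waveformX"),  # differs only in modem -- keep
]
cur.executemany("INSERT INTO modems VALUES (?, ?, ?)", rows)

# DISTINCT treats the full set of selected columns as one composite key.
cur.execute("CREATE TABLE modems_clean AS SELECT DISTINCT * FROM modems")
n = cur.execute("SELECT COUNT(*) FROM modems_clean").fetchone()[0]
print(n)  # 3 distinct records survive out of 4
```

After building the clean table, the original could be dropped and the clean one renamed, and a composite primary key over those three fields would stop the duplicates from coming back.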
I wonder if it would be best to just break it all out into separate tables with appropriate one-to-many relationships. I can't go through 7k records and delete every other one by hand, and I can't figure out how to combine all the fields into a single key to filter out the excess.
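For "filtering out the excess" in place, the pattern I've been trying to describe can be sketched like this, again with sqlite3 standing in for Access and the same made-up names: group on every field so the whole record acts as the key, keep one row per group (the lowest internal row id), and delete the rest. In Access the hidden `rowid` doesn't exist, which is one reason the make-table approach above may be simpler there, but the grouping idea is the same.

```python
import sqlite3

# Hypothetical schema standing in for the real Access table.
con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute(
    "CREATE TABLE modems (modem_name TEXT, software_edition TEXT, waveform TEXT)"
)
cur.executemany("INSERT INTO modems VALUES (?, ?, ?)", [
    ("modemA", "1.0", "waveformX"),
    ("modemA", "1.0", "waveformX"),  # duplicate
    ("modemA", "1.0", "waveformX"),  # duplicate
    ("modemB", "2.1", "waveformY"),
])

# GROUP BY over all fields = one composite key per record;
# MIN(rowid) picks a single survivor from each group of duplicates.
cur.execute("""
    DELETE FROM modems
    WHERE rowid NOT IN (
        SELECT MIN(rowid)
        FROM modems
        GROUP BY modem_name, software_edition, waveform
    )
""")
remaining = cur.execute("SELECT COUNT(*) FROM modems").fetchone()[0]
print(remaining)  # 2 unique records remain
```

The real table has more fields than three, so every one of them would need to appear in the GROUP BY list for two records to count as true duplicates.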