
I am trying to wrap my head around how to test the key values in a CSV file against their parent database.

I have a table with address values, each record with a numeric key. This table has many duplicates, each with a unique key.

I have a program that has taken duplicates and pointed them to a 'parent' key to hand off any relationships that may be tied to that key.

example:

Key  StreetNo   StreetName   Unit
---  --------   ----------   ----
2     123        5th          
8     123        5th          A
1     123        5th

This would be in my CSV file: 2, 8, 1

Note that I am only looking at StreetNo and StreetName. Also, there are instances with many duplicates; the next CSV row may be something like 4, 5, 6, 9, 11.

I need to test this file for accuracy, so that an address doesn't get pointed to a parent with the wrong address. I only want to compare StreetNo and StreetName.

I know I can read in a CSV using SQL; however, my CSV rows have a varying number of columns. The approach I have in mind is to take the first key in each CSV row, concatenate its StreetNo and StreetName, store that value, and then compare it against the concatenated values for the rest of the keys in the row. Repeat for every row of the CSV.

Example:

CSV 2, 8, 1

2 = "123 5th"; check against "123 5th", "123 5th"

If any key's value did not match, the CSV row would be printed.

I'm at a standstill. Has anyone approached this type of dynamic CSV testing? The logic is simple, but I'm stuck on the implementation.
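For what it's worth, the row-by-row comparison described above could be sketched in Python with the standard `csv` and `sqlite3` modules. All names here are assumptions, since the question doesn't give the real schema: a table called `addresses` with columns `Key`, `StreetNo`, and `StreetName`, and a CSV whose rows are variable-length lists of keys.

```python
import csv
import sqlite3

# Assumed schema: addresses("Key", StreetNo, StreetName).
# Returns every CSV row whose keys do not all resolve to the
# same "StreetNo StreetName" value.
def find_mismatched_rows(db_path, csv_path):
    conn = sqlite3.connect(db_path)
    cur = conn.cursor()
    bad_rows = []
    with open(csv_path, newline="") as f:
        for row in csv.reader(f):
            keys = [k.strip() for k in row if k.strip()]
            if len(keys) < 2:
                continue
            # Look up "StreetNo StreetName" for each key in the row
            addrs = []
            for k in keys:
                cur.execute(
                    'SELECT StreetNo || \' \' || StreetName '
                    'FROM addresses WHERE "Key" = ?',
                    (k,),
                )
                hit = cur.fetchone()
                addrs.append(hit[0] if hit else None)
            # Compare the first key's address against the rest;
            # any difference flags the whole row
            if any(a != addrs[0] for a in addrs[1:]):
                bad_rows.append(row)
    conn.close()
    return bad_rows
```

Handling the variable column count in application code, rather than SQL, sidesteps the "my CSV columns vary" problem: each row is just a Python list, however long it is.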

Joey Davis
  • Okay, so I imported the CSV of keys and then updated it with the address values by key. This gives me a clunky start, as I can select all rows that match. I basically have a big table with rows of just address numbers and names, and if any row has a field different from the rest, I have my answer. – Joey Davis Dec 07 '12 at 03:46
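Once the keys are imported into a table as the comment describes, the check can be pushed entirely into SQL: group by CSV row and flag any row whose keys resolve to more than one distinct address. This is a sketch under assumed names (a staging table `csv_keys(RowId, Key)` holding each CSV row's keys, plus the `addresses` table), not the asker's actual schema:

```python
import sqlite3

# Hypothetical staging table csv_keys(RowId, "Key"): one record per key,
# RowId identifying which CSV row the key came from.
QUERY = """
SELECT c.RowId
FROM csv_keys AS c
JOIN addresses AS a ON a."Key" = c."Key"
GROUP BY c.RowId
HAVING COUNT(DISTINCT a.StreetNo || ' ' || a.StreetName) > 1
"""

def mismatched_row_ids(db_path):
    # Return the RowIds of CSV rows whose keys point at differing addresses
    conn = sqlite3.connect(db_path)
    try:
        return [row_id for (row_id,) in conn.execute(QUERY)]
    finally:
        conn.close()
```

The `HAVING COUNT(DISTINCT ...) > 1` clause is doing the same work as the comment's "if any row has a field different from the rest": a clean row collapses to one distinct address, a bad row to two or more.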
