I have records that aren't unique; however, I can't delete any of the duplicates because the data has a valid reason for being there. Normally I spend time manually editing every single line, adding 1, 2, 3, 4 in sequence for however many duplicates occur. I'm looking for help automating this process.
Updating the fields is fairly simple, and concatenating them with anything else is easily done. However, I'd like to be able to run a script that updates the table by finding all the duplicates, ranking them, and then appending that rank to each duplicate to make it unique. It's this ranking bit where I'm slightly out of my depth.
For context, I've run scripts that update the keys by combining them with row numbers, but the dataset changes slightly each month, so the row numbers get thrown out; example below.
BEGIN
    -- Number each row within its group of duplicates.
    ;WITH [CTE] AS
    (
        SELECT
            ROW_NUMBER() OVER (PARTITION BY [COLUMN_NAME] ORDER BY [COLUMN_NAME]) AS [ROWNUMBER],
            [COLUMN_NAME]
        FROM [TABLE_NAME]
    )
    -- Append the rank (minus one) to every duplicate beyond the first.
    UPDATE [CTE]
    SET [COLUMN_NAME] = CONCAT([COLUMN_NAME], [ROWNUMBER] - 1)
    WHERE [ROWNUMBER] > 1;

    SELECT [COLUMN_NAME] FROM [TABLE_NAME] ORDER BY [COLUMN_NAME];
END
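One likely cause of the ranks shifting between months is that the `ROW_NUMBER` above orders by the same column it partitions by, so ties are broken arbitrarily. A sketch of one fix, assuming the table has a stable key such as an identity column (`id` here is a placeholder, not the real schema): order the ranking by that stable column so duplicates get the same rank on every run. This is illustrated with SQLite through Python purely so the example is self-contained and runnable; the same window-function pattern applies in SQL Server.

```python
import sqlite3

# In-memory stand-in for the real table; key_col plays the role of the
# duplicated key and id is an assumed stable identity column.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, key_col TEXT)")
conn.executemany("INSERT INTO t (key_col) VALUES (?)",
                 [("A",), ("A",), ("B",), ("A",), ("B",)])

# Rank rows within each duplicate group; ordering by the stable id
# keeps the ranking repeatable when the dataset changes month to month.
dupes = conn.execute("""
    SELECT id, rn FROM (
        SELECT id,
               ROW_NUMBER() OVER (PARTITION BY key_col ORDER BY id) AS rn
        FROM t
    )
    WHERE rn > 1
""").fetchall()

# Append rank - 1 to every duplicate beyond the first, making keys unique.
conn.executemany("UPDATE t SET key_col = key_col || ? WHERE id = ?",
                 [(rn - 1, row_id) for row_id, rn in dupes])

print([r[0] for r in conn.execute("SELECT key_col FROM t ORDER BY id")])
# → ['A', 'A1', 'B', 'A2', 'B1']
```

In T-SQL terms, the equivalent change is simply ordering the window by the stable key, e.g. `ROW_NUMBER() OVER (PARTITION BY [COLUMN_NAME] ORDER BY [ID_COLUMN])`, where `[ID_COLUMN]` is whatever stable identifier the table actually has.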
If this can be edited in any way to work as described above, that would be brilliant; if not, and anyone more learned than myself can point me in another direction, I would be most grateful.