The primary key of the table has to be unique. If your dataset grows over time, it is not sufficient to check whether the columns are unique right now; they will have to stay unique forever. So in this case I would avoid a natural PK and choose a surrogate key.
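For illustration, a minimal sketch of such a surrogate key; the table and column names are made up, and the identity syntax shown is the SQL-standard form (PostgreSQL, Oracle), while MySQL uses AUTO_INCREMENT and SQL Server uses IDENTITY:

CREATE TABLE measurements (
    id          BIGINT GENERATED ALWAYS AS IDENTITY PRIMARY KEY, -- surrogate key
    sensor_id   INT       NOT NULL,
    measured_at TIMESTAMP NOT NULL,
    value       NUMERIC   NOT NULL
);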
If you are 100% sure that there must be a natural key but you just don't know which columns form it, start by determining the distinct cardinality of the columns:
SELECT COUNT(*)                AS total_rows,
       COUNT(DISTINCT column1) AS distinct_column1,
       COUNT(DISTINCT column3) AS distinct_column3,
       ...
FROM table
If for any single column the number of distinct values equals the total number of rows, you have found the key. If the key spans multiple columns, you need a trial-and-error approach. You know that the product of the distinct value counts of your key candidates has to be greater than or equal to the total number of rows. For example, with 1,000 rows, two columns with 30 and 40 distinct values can form at most 1,200 combinations and remain candidates, while columns with 10 and 50 distinct values (at most 500 combinations) can be ruled out immediately:
COUNT(DISTINCT key_column1) * COUNT(DISTINCT key_column2) * COUNT(DISTINCT key_column3) >= COUNT(*)
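This necessary condition can be checked in one pass; key_column1 through key_column3 stand for whichever candidate columns you are testing:

SELECT COUNT(*) AS total_rows,
       COUNT(DISTINCT key_column1)
     * COUNT(DISTINCT key_column2)
     * COUNT(DISTINCT key_column3) AS max_possible_combinations
FROM table

If max_possible_combinations is smaller than total_rows, the combination cannot be a key and there is no need to test it further.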
Knowing this, you can test plausible combinations like so. The separator in the concatenation prevents different values from colliding into the same string, e.g. 'ab' || 'c' versus 'a' || 'bc':
SELECT COUNT(DISTINCT key_column1 || '|' || key_column2 || '|' || key_column3) AS distinct_combinations,
       COUNT(*) AS total_rows
FROM table
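If the two counts match, the combination is unique over the current data. One caveat: standard string concatenation yields NULL as soon as any of the columns is NULL, so NULL-able candidates can be miscounted. A GROUP BY check, sketched here with the same placeholder column names, avoids that and additionally shows the offending combinations:

SELECT key_column1, key_column2, key_column3, COUNT(*) AS occurrences
FROM table
GROUP BY key_column1, key_column2, key_column3
HAVING COUNT(*) > 1

An empty result means the three columns are unique together.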
If the table has only a couple of thousand rows, performance should not be an issue.
I guess you could make this fully automatic by using the data dictionary, but I have no solution at hand.
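As a rough sketch of the idea, assuming a DBMS that exposes information_schema and supports STRING_AGG (e.g. PostgreSQL), you could generate the cardinality query from the catalog; 'your_table' is a placeholder:

SELECT 'SELECT COUNT(*) AS total_rows, '
       || STRING_AGG('COUNT(DISTINCT ' || column_name || ') AS d_' || column_name, ', ')
       || ' FROM your_table'
FROM information_schema.columns
WHERE table_name = 'your_table';

Running the generated statement then returns the cardinality of every column in one go.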