Working from Jean-François Corbett's answer, which reads the column contents into arrays for efficiency, but adapting it to check for all duplicate rows progressively from the bottom up, you get something like this:
Public Sub FillDuplicates()
    Dim lastRow As Long
    Dim cKeyRow As Long, cSearchRow As Long
    Dim dColumn As Variant, eColumn As Variant, fColumn As Variant
    Dim rowAltered() As Boolean

    'Find the last row in column D with content
    lastRow = Cells(Rows.Count, "D").End(xlUp).Row

    'Acquire data from columns D, E and F into arrays
    dColumn = Range("D1").Resize(lastRow, 1).Value
    eColumn = Range("E1").Resize(lastRow, 1).Value
    fColumn = Range("F1").Resize(lastRow, 1).Value
    ReDim rowAltered(1 To lastRow)

    'Loop through all rows from bottom to top, using each D/E column pair as a key
    For cKeyRow = lastRow To 1 Step -1
        'Ignore rows that have already been replaced
        If Not rowAltered(cKeyRow) Then
            'Loop through all rows above the current key row looking for matches
            For cSearchRow = cKeyRow - 1 To 1 Step -1
                'If the row is a match and has not previously been changed, alter it
                If Not rowAltered(cSearchRow) And _
                   dColumn(cKeyRow, 1) = dColumn(cSearchRow, 1) And _
                   eColumn(cKeyRow, 1) = eColumn(cSearchRow, 1) Then
                    fColumn(cSearchRow, 1) = fColumn(cKeyRow, 1)
                    rowAltered(cSearchRow) = True
                End If
            Next cSearchRow
        End If
    Next cKeyRow

    'Store the amended F column back in the spreadsheet
    Range("F1").Resize(lastRow, 1).Value = fColumn
End Sub
Note that all the work with rowAltered to track which rows have already been processed is purely a time saver. It is not strictly necessary: because the pass runs from bottom to top, rows above each key row are overwritten with the value from the lowest duplicate before they are ever used as keys themselves, so the final result would be the same without it. Without the flag, though, the same replacements would be repeated for every further duplicate up the page; the rowAltered check prevents that.
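To illustrate the point, the same bottom-up pass without the bookkeeping would just be the bare loops below. It produces the same output, it simply rewrites each duplicate row once per matching key row instead of once in total:

'For illustration only: same result as above, but matching rows
'get rewritten repeatedly instead of being skipped once altered
For cKeyRow = lastRow To 1 Step -1
    For cSearchRow = cKeyRow - 1 To 1 Step -1
        If dColumn(cKeyRow, 1) = dColumn(cSearchRow, 1) And _
           eColumn(cKeyRow, 1) = eColumn(cSearchRow, 1) Then
            fColumn(cSearchRow, 1) = fColumn(cKeyRow, 1)
        End If
    Next cSearchRow
Next cKeyRow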
If you left the data in the spreadsheet, you could perhaps use the .Find() method on the columns to locate duplicates instead of the inner loop, but I doubt it would be more efficient.
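For what it's worth, a rough, untested sketch of that idea for a single key row might look like the following, assuming it replaces the inner cSearchRow loop so cKeyRow and lastRow are still in scope. Since .Find() only matches on one column, column E still has to be checked separately:

'Untested sketch: search column D for the key value, then confirm column E
'also matches before copying the key row's F value up the sheet
Dim found As Range, firstAddress As String
With Range("D1:D" & lastRow)
    Set found = .Find(What:=Cells(cKeyRow, "D").Value, LookIn:=xlValues, LookAt:=xlWhole)
    If Not found Is Nothing Then
        firstAddress = found.Address
        Do
            If found.Row < cKeyRow And _
               Cells(found.Row, "E").Value = Cells(cKeyRow, "E").Value Then
                Cells(found.Row, "F").Value = Cells(cKeyRow, "F").Value
            End If
            Set found = .FindNext(found)
        Loop While found.Address <> firstAddress
    End If
End With

Because this reads and writes individual cells on the sheet rather than working on the in-memory arrays, it is likely to be slower in practice, which is why the array version above sticks with the plain inner loop.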