If there are a lot of these files scattered across a variety of directories, `find` might be better.
```
find -name abc\* -printf "%T@ %f\n" | sort -nr | sed 's/^.* //; q;'
```
Breaking that out -
```
find -name 'abc*' -printf "%T@ %f\n" |
```
`find` has a ton of options. This is the simplest case, assuming the current directory as the root of the search. You can add a lot of refinements, or just give `/` to search the whole system.
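For example, to search only one particular tree instead of the current directory (a sketch; `~/projects` is just a hypothetical path):

```
# Search a specific tree rather than the current directory.
# ~/projects is a hypothetical example path.
find ~/projects -name 'abc*' -printf "%T@ %f\n"
```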
`-name 'abc*'` picks just the filenames you want. Quote the pattern to protect any globs from shell expansion, but you can use normal globbing rules. `-iname` makes the search case-insensitive.
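A quick sketch of the case-insensitive variant:

```
# -iname matches regardless of case, so this would also pick up
# files named ABC123, Abc456, and so on.
find -iname 'abc*' -printf "%T@ %f\n"
```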
`-printf` defines the output. `%f` prints the filename, but you want the ordering based on the date, so print the date first; that way the filename itself doesn't affect the sort order. `%T` accepts another character to define the date format: `@` is the Unix epoch, seconds since 00:00:00 01/01/1970, so it is easy to sort numerically. On my Git Bash emulation it returns fractional seconds as well, which gives great granularity.
```
$: find -name abc\* -printf "%T@ %f\n"
1594219755.7741618000 abc123
1594219775.5162510000 abc321
1594219734.0162554000 abc456
```
`find` may not return them in the order you want, though, so -
```
sort -nr |
```
`-n` makes it a numeric sort. `-r` sorts in reverse order, so that the latest file pops out first and you can ignore everything after it.
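With the sample output above, the sorted stream would look like this:

```
$: find -name abc\* -printf "%T@ %f\n" | sort -nr
1594219775.5162510000 abc321
1594219755.7741618000 abc123
1594219734.0162554000 abc456
```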
```
sed 's/^.* //; q;'
```
Since the first record is the one we want, `sed` can just use `s/^.* //;` to strip off everything up to the space, which we know will be the timestamp, since we controlled the output explicitly. That leaves only the filename. `q` explicitly quits after the `s///` scrubs the record, so `sed` spits out the filename and stops without reading the rest, which avoids the need for another process (`head -1`) in the pipeline.
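With the same sample files, the full pipeline would print just the newest name:

```
$: find -name abc\* -printf "%T@ %f\n" | sort -nr | sed 's/^.* //; q;'
abc321
```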