Note that for general CSV handling, a proper CSV library should be used. If the data are very simple (i.e. no embedded commas, newlines, etc.), simpler tools will do.
You have a good awk solution from steve, so I'll add an answer based on coreutils and grep:
# find columns to remove
pattern=current
cols=$(head -n1 a.csv | tr ',' '\n' | grep -n "$pattern" | cut -d: -f1 | paste -s -d,)
# remove all columns that matched
cut --complement -d, -f$cols a.csv
Output:
voltage, power, voltage, power
2 , 6 , 12 , 144
3 , 15 , 10 , 100
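As a self-contained check, here is the same pipeline run against a hypothetical input file (the a.csv values below are illustrative only, chosen to be consistent with the output above):

```shell
# hypothetical sample input, consistent with the output shown above
cat > a.csv <<'EOF'
voltage, current, power, voltage, current, power
2 , 3 , 6 , 12 , 12 , 144
3 , 5 , 15 , 10 , 10 , 100
EOF

pattern=current
# header positions of matching columns, joined with commas (here: 2,5)
cols=$(head -n1 a.csv | tr ',' '\n' | grep -n "$pattern" | cut -d: -f1 | paste -s -d, -)
# drop those columns (GNU cut)
cut --complement -d, -f"$cols" a.csv
```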
Note that the --complement
option is a GNU cut extension. To generate a complementary $cols
list for other cut implementations, something like this should do (tested in zsh on FreeBSD):
# number of columns
file=a.csv
pattern=current
n=$(head -n1 "$file" | tr ',' '\n' | wc -l)
# generate the complementary list of column numbers
cols=$(jot "$n" \
  | grep -xvFf <(head -n1 "$file" | tr ',' '\n' | grep -n "$pattern" | cut -d: -f1) \
  | paste -s -d, -)
# remove columns
cut -d, -f"$cols" "$file"
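jot is a BSD utility; on GNU systems, seq generates the same 1..N list. A sketch of the same approach using seq, with a hypothetical a.csv (sample values are illustrative only):

```shell
# hypothetical sample input (illustrative values)
cat > a.csv <<'EOF'
voltage, current, power, voltage, current, power
2 , 3 , 6 , 12 , 12 , 144
3 , 5 , 15 , 10 , 10 , 100
EOF

file=a.csv
pattern=current
# number of columns in the header
n=$(head -n1 "$file" | tr ',' '\n' | wc -l)
# list 1..n, then drop the numbers of the columns whose header matches
cols=$(seq "$n" \
  | grep -xvFf <(head -n1 "$file" | tr ',' '\n' | grep -n "$pattern" | cut -d: -f1) \
  | paste -s -d, -)
# keep only the listed columns; -f with a keep-list works with any POSIX cut
cut -d, -f"$cols" "$file"
```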