Not sure if this answers your question, but I'd focus my attention elsewhere: namely on why the word "Dècor", if I understood correctly, comes out as "DÃ¨cor" once loaded into your BigQuery table.
Let's say you have a CSV file with the following content:
Dècor|Dècor|Dècor
Dècor|Dècor|Dècor
If you load it in BigQuery with encoding "ISO-8859-1" it gets corrupted.
bq load --autodetect --source_format=CSV --field_delimiter="|" --encoding='ISO-8859-1' mydataset.test_french gs://my-bucket/broken_french.csv
And here's what the table looks like in BigQuery:
Row string_field_0 string_field_1 string_field_2
1 DÃ¨cor DÃ¨cor DÃ¨cor
2 DÃ¨cor DÃ¨cor DÃ¨cor
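The corruption is classic mojibake: the file's UTF-8 bytes for "è" (0xC3 0xA8) get reinterpreted as two separate ISO-8859-1 characters. A minimal Python sketch of what happens to each value:

```python
# "Dècor" stored as UTF-8 bytes on disk
raw = "Dècor".encode("utf-8")        # b'D\xc3\xa8cor'

# Decoding those bytes as ISO-8859-1 maps each byte to one Latin-1 char,
# so the two-byte sequence for "è" becomes "Ã" + "¨"
misread = raw.decode("iso-8859-1")
print(misread)                        # DÃ¨cor

# Decoding with the right charset recovers the original string
print(raw.decode("utf-8"))            # Dècor
```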
On the other hand, if you use 'UTF-8' encoding, like so:
bq load --autodetect --source_format=CSV --field_delimiter="|" --encoding='UTF-8' mydataset.test_french2 gs://my-bucket/broken_french.csv
the result in BigQuery looks as it should:
Row string_field_0 string_field_1 string_field_2
1 Dècor Dècor Dècor
2 Dècor Dècor Dècor
So, if you loaded your data with the wrong encoding, I'd simply reload it using the correct one.
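If you're not sure which encoding a file is in before reloading, one quick sanity check is to see whether its bytes decode cleanly as UTF-8 (accented Latin-1 bytes like 0xE8 for "è" are invalid UTF-8 on their own). A small sketch — the function name is mine, and you'd run it on a local copy of the GCS object:

```python
def is_valid_utf8(path):
    """Return True if the file's bytes decode cleanly as UTF-8."""
    with open(path, "rb") as f:
        data = f.read()
    try:
        data.decode("utf-8")
        return True
    except UnicodeDecodeError:
        return False
```

If this returns False, ISO-8859-1 (or another single-byte charset) is a likely candidate, since every byte sequence decodes under it.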