I've got an Azure VM with a number of files on it. Some of these files are pretty messed up: for example, they contain a UTF-8 BOM and non-UTF-8 characters, in particular smart quotes like so:
<option ef="“Late”" />
In order to fix this, I have a small C# utility that opens a `StreamReader` on each file:

StreamReader sr = new StreamReader(filename, Encoding.ASCII, true);

calls `.ReadToEnd()`, and then checks `CurrentEncoding`. If I run this process in a PowerShell window, it returns `System.Text.ASCIIEncoding`, as expected given the smart quotes, and that's also what it returns everywhere else I've run it. But if I run it inside a Chocolatey package or through Octopus Deploy, `CurrentEncoding` comes back as `System.Text.UTF8Encoding`.
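To make the check reproducible, here's a minimal self-contained sketch of the utility. The file names and byte contents are made up for illustration; the point is that, as I understand it, `detectEncodingFromByteOrderMarks: true` only switches `CurrentEncoding` away from the supplied encoding when a BOM is present at the start of the stream:

```csharp
using System;
using System.IO;
using System.Text;

class EncodingProbe
{
    static void Main()
    {
        // Hypothetical sample files, created here just to illustrate the check.
        // One starts with a UTF-8 BOM followed by plain ASCII text:
        File.WriteAllBytes("with-bom.txt",
            new byte[] { 0xEF, 0xBB, 0xBF, (byte)'h', (byte)'i' });
        // The other has no BOM and contains a Windows-1252 smart quote (0x93):
        File.WriteAllBytes("no-bom.txt",
            new byte[] { (byte)'h', 0x93, (byte)'i' });

        Report("with-bom.txt");
        Report("no-bom.txt");
    }

    static void Report(string filename)
    {
        // detectEncodingFromByteOrderMarks: true — StreamReader inspects the
        // start of the stream for a BOM and only then overrides Encoding.ASCII.
        using (var sr = new StreamReader(filename, Encoding.ASCII, true))
        {
            sr.ReadToEnd(); // detection only takes effect after the first read
            Console.WriteLine($"{filename}: {sr.CurrentEncoding}");
        }
    }
}
```

When I run this locally, the BOM file reports `System.Text.UTF8Encoding` and the BOM-less file stays at `System.Text.ASCIIEncoding`, which matches the PowerShell-window behaviour described above.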
I'm calling `.ReadToEnd()` because MSDN says the encoding is only detected after the first read on the stream. What is different about Chocolatey and Octopus that makes `StreamReader` guess the wrong encoding?