Back in the ADO days, we used GetRows() to pull back an array and loop through it, because it was faster than walking the recordset with rs.MoveNext. I'm writing an application that pulls back half a million rows and writes them out to a file. Pulling the data from SQL takes about 3 minutes, but writing it to a CSV takes another 12 minutes. As far as I can tell, that's because I'm looping through a SqlDataReader. What is a faster alternative?
Keep in mind that I do not know what the SQL structure will look like, since the application reads a reporting table that tells it which query to run. I looked at using LINQ to return an array, but that would require knowing the structure, so it will not work.
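To make the "unknown structure" part concrete: at runtime all I have is reader.FieldCount and each column's runtime type, so any row handling has to stay generic, roughly like this (illustration only, not part of my actual code):

object[] cells = new object[reader.FieldCount]; // sized from whatever query the reporting table returned
reader.GetValues(cells);                        // each element could be Int16, DateTime, DBNull.Value, ...
// there is no class to project into, which is why a typed LINQ array is out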
Here is my current code. The switch statement has many cases, but to cut down on space I removed all but one.
StringBuilder rowValue = new StringBuilder();
SqlDataReader reader = queryData.Execute(System.Data.CommandType.Text, sql, null);

// this is to handle multiple record sets
while (reader.HasRows)
{
    rowValue = new StringBuilder(); // reset so the previous record set's last row doesn't leak into the header

    // header row: quoted column names
    for (int i = 0; i < reader.FieldCount; i++)
    {
        if (rowValue.Length > 0)
            rowValue.Append("\",\"");
        else
            rowValue.Append("\"");
        rowValue.Append(reader.GetName(i).Replace("\"", "'").Trim());
    }
    rowValue.Append("\"" + Environment.NewLine);
    File.AppendAllText(soureFile, rowValue.ToString());

    // data rows
    while (reader.Read())
    {
        rowValue = new StringBuilder();
        for (int i = 0; i < reader.FieldCount; i++)
        {
            String value = "";
            switch (reader.GetFieldType(i).Name.ToLower())
            {
                case "int16":
                    value = reader.IsDBNull(i) ? "NULL" : reader.GetInt16(i).ToString();
                    break;
            }
            if (rowValue.Length > 0)
                rowValue.Append("\",=\""); // separate items
            else
                rowValue.Append("\"");     // first item of the row
            rowValue.Append(value.Replace("\"", "'").Trim());
        }
        rowValue.Append("\"" + Environment.NewLine); // close the row
        File.AppendAllText(soureFile, rowValue.ToString());
    }

    // next record set
    reader.NextResult();
    if (reader.HasRows)
        File.AppendAllText(soureFile, Environment.NewLine);
}
reader.Close();
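For what it's worth, one direction I've been considering (untested sketch below, so treat the details as guesses) is to keep a single buffered StreamWriter open for the whole export instead of calling File.AppendAllText once per row, since AppendAllText reopens and closes the file on every call. The type-specific switch from above would stay the same; I've collapsed it to GetValue() here just to keep the sketch short. Is that the right direction, or is there a faster way to get the data out of the reader itself?

// Untested sketch: same loop shape as above, but one writer held open for the whole file.
// soureFile and reader are the same variables as in my code above; 65536 is a guessed buffer size.
using (StreamWriter writer = new StreamWriter(soureFile, false, Encoding.UTF8, 65536))
{
    while (reader.HasRows)
    {
        StringBuilder row = new StringBuilder();
        for (int i = 0; i < reader.FieldCount; i++)        // header row
        {
            row.Append(i == 0 ? "\"" : "\",\"");
            row.Append(reader.GetName(i).Replace("\"", "'").Trim());
        }
        writer.WriteLine(row.Append("\"").ToString());

        while (reader.Read())                              // data rows
        {
            row = new StringBuilder();
            for (int i = 0; i < reader.FieldCount; i++)
            {
                string value = reader.IsDBNull(i) ? "NULL" : reader.GetValue(i).ToString();
                row.Append(i == 0 ? "\"" : "\",=\"");
                row.Append(value.Replace("\"", "'").Trim());
            }
            writer.WriteLine(row.Append("\"").ToString());
        }

        reader.NextResult();
        if (reader.HasRows)
            writer.WriteLine();                            // blank line between record sets
    }
}
reader.Close();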