I'm working in a codebase with a mixture of `CString`, `const char*` and `std::string` (non-Unicode), where all new code uses `std::string` exclusively. I've now had to do the following:
{
    CString tempstring;
    load_cstring_legacy_method(tempstring);
    stdstring = tempstring;
}
and worry about performance. The strings are DNA sequences, so we can easily have 100+ of them, each of them ~3M characters. Note that adjusting `load_cstring_legacy_method` is not an option. I did a quick test:
const int stringsize = 3000000; // ~3M characters, the size of one sequence
const int repeat = 1000;

std::chrono::steady_clock::time_point startTime = std::chrono::steady_clock::now();
for (int i = 0; i < repeat; ++i) {
    CString cstring('A', stringsize);   // CString(TCHAR ch, int nRepeat): 3M 'A's
    std::string stdstring(cstring);     // the conversion under test; comment out for the baseline
    cstring.Empty();
}
std::cout << std::chrono::duration_cast<std::chrono::milliseconds>(
                 std::chrono::steady_clock::now() - startTime).count()
          << " ms" << std::endl;
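My (unverified) understanding of what the `std::string` line costs: `CString` converts through `operator PCXSTR`, and the `std::string` constructor then has to scan for the terminating NUL before it can allocate and copy. As a sketch (assuming an ANSI/MBCS build, where `PCXSTR` is `const char*`):

const char* p = cstring;     // CString::operator PCXSTR, no copy
std::string stdstring(p);    // strlen scan over 3M chars, then allocate + copy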
Commenting out the `std::string` construction gives 850 ms; with the assignment it's 3600 ms. The magnitude of the difference is surprising, so I guess the benchmark might not be doing what I expect. Assuming there is a penalty, is there a way I can avoid it?
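For instance, if part of the cost is the extra scan, would constructing with an explicit length help? A sketch using `GetString()` and `GetLength()` (which I believe are O(1) on `CString`):

// Pass the length explicitly so std::string doesn't have to scan
// the 3M characters for the terminator first (still one copy).
std::string stdstring(cstring.GetString(), cstring.GetLength());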