I want to run multiple benchmarks with the Google Benchmark library after loading a large file. I use the following code for this purpose. The function read_collection() loads the contents of the file, and the Build benchmarks process the contents of coll.
#include <benchmark/benchmark.h>
#include <iostream>
#include <memory>

#define COLLECTION 'w'

class BuildFixture : public ::benchmark::Fixture {
public:
    std::unique_ptr<Collection> coll;

    BuildFixture() {
        std::cout << "Constructor\n";
        coll = std::make_unique<Collection>(COLLECTION);
        coll->read_collection();
    }

    ~BuildFixture() {
        std::cout << "Destroy collection\n";
        coll.reset();
    }
};
BENCHMARK_DEFINE_F(BuildFixture, Build1)(benchmark::State& state) {
    const size_t nrows = static_cast<size_t>(state.range(0));
    for (auto _ : state) {
        // Do something with coll and nrows
    }
}
BENCHMARK_DEFINE_F(BuildFixture, Build2)(benchmark::State& state) {
    const size_t nrows = static_cast<size_t>(state.range(0));
    for (auto _ : state) {
        // Something else with coll and nrows
    }
}
BENCHMARK_REGISTER_F(BuildFixture, Build1)->Arg(10);
BENCHMARK_REGISTER_F(BuildFixture, Build2)->Arg(20);
BENCHMARK_MAIN();
When I run this code, the constructor is executed once for each registered benchmark (twice in total), the benchmarks run, and then the destructors are called. So the output looks like:
Constructor
Constructor
.. (benchmarking outputs)..
Destroy collection
Destroy collection
This ends up reading the (same) file multiple times, which takes too much time, and it also uses additional memory to hold the same data for several benchmarks. I am also worried that the results may be affected by page faults. Therefore, I have two questions:
- Is there a way to avoid reading the file twice? It would save some execution time (although this time is not counted in the benchmark).
- (If not) How can I restructure multiple-benchmark code so that each benchmark calls the constructor, performs benchmarking, destructs, and then moves on to the next benchmark? (Without having to use multiple main functions, of course.)
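One direction I have considered for the first question is loading the data once behind a function-local static, so every fixture instance shares the same object. A minimal self-contained sketch (the Collection below is a hypothetical stand-in for my real class, which is not shown here):

```cpp
#include <iostream>

// Hypothetical stand-in for the real Collection class.
struct Collection {
    explicit Collection(char id) : id(id) {}
    void read_collection() { std::cout << "Reading file\n"; }  // expensive in reality
    char id;
};

// The function-local static is constructed, and the file read, only on the
// first call; later calls from other fixture instances reuse the same object.
Collection& shared_collection() {
    static Collection coll = [] {
        Collection c('w');
        c.read_collection();
        return c;
    }();
    return coll;
}
```

Each fixture would then grab a reference via shared_collection() instead of constructing its own coll. I am unsure whether this plays well with Google Benchmark's fixture lifecycle, hence the question.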
Update 1
The benchmarks I need to register are different; I am not looking to pass different args to the same benchmark. I have updated the question accordingly with Build1 and Build2.