I am investigating whether an application at work could benefit from an S3-based storage system rather than traditional NFS. So I installed Minio on another computer on my local LAN and wrote a quick S3 PutObject test with the AWS C++ SDK.
I grabbed the contents of /etc/passwd as test data (about 5 KB):
// ba is a QByteArray holding the /etc/passwd contents (~5 KB of test data)
for (int i = 0; i < 100; i++)
{
    // Derive a pseudo-random object key by hashing the current time
    struct timeval tv;
    gettimeofday(&tv, nullptr);
    QByteArray tvData = QByteArray::fromRawData(reinterpret_cast<char *>(&tv), sizeof(struct timeval));
    QByteArray filename = QCryptographicHash::hash(tvData, QCryptographicHash::Sha1).toHex();

    // Insert '/' separators so the keys fan out across prefixes
    filename[1] = '/';
    filename[4] = '/';

    Aws::S3::Model::PutObjectRequest req;
    req.SetBucket(primaryBucket);
    req.SetKey(filename.constData());

    const std::shared_ptr<Aws::IOStream> input_data =
        Aws::MakeShared<Aws::StringStream>("SampleAllocationTag", ba.constData());
    req.SetBody(input_data);

    // Synchronous call: each iteration blocks until the object is stored
    auto outcome = myClient->PutObject(req);
    if (!outcome.IsSuccess())
    {
        std::cout << "PutObject error: "
                  << outcome.GetError().GetExceptionName() << " - "
                  << outcome.GetError().GetMessage() << std::endl;
        return false;
    }
}
Writing these 100 small objects takes about 8 seconds, which seems ridiculously slow to me. Does anyone have any ideas, or am I missing something huge here? Again, Minio is running on a machine on the local network (actually on the same switch), with the default setup pointing at a directory. The AWS S3 SDK is built from source, and both machines run Fedora 31. I'm looking for something that can handle hundreds of files per second for writes, reads, and deletes, with occasional bursts into the thousands; out of the box this is orders of magnitude too slow.
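In case serializing the requests is the whole problem, I've also been looking at overlapping them instead of blocking on each PutObject in turn. Below is a rough, untested sketch using the SDK's PutObjectCallable (the std::future variant). Note that primaryBucket and payload are placeholders for my bucket name and the /etc/passwd data, and the key scheme is simplified for the test. Is something like this the expected way to get higher throughput, or should the plain synchronous loop above already run much faster than 8 seconds?

#include <aws/core/Aws.h>
#include <aws/core/utils/memory/stl/AWSStringStream.h>
#include <aws/s3/S3Client.h>
#include <aws/s3/model/PutObjectRequest.h>
#include <future>
#include <iostream>
#include <string>
#include <vector>

// Untested sketch: start all 100 puts via PutObjectCallable and collect the
// futures afterwards, instead of waiting for each request before sending the next.
// primaryBucket and payload stand in for the real bucket and the /etc/passwd data.
bool putConcurrently(Aws::S3::S3Client &client,
                     const Aws::String &primaryBucket,
                     const Aws::String &payload)
{
    std::vector<Aws::S3::Model::PutObjectOutcomeCallable> pending;
    pending.reserve(100);

    for (int i = 0; i < 100; ++i)
    {
        Aws::S3::Model::PutObjectRequest req;
        req.SetBucket(primaryBucket);
        req.SetKey(("test/" + std::to_string(i)).c_str());  // simple distinct keys for the test

        auto body = Aws::MakeShared<Aws::StringStream>("SampleAllocationTag", payload);
        req.SetBody(body);

        // Returns immediately with a future; the request runs on the client's executor.
        // (With the default executor each call gets its own thread; a
        // PooledThreadExecutor in the ClientConfiguration would bound that.)
        pending.push_back(client.PutObjectCallable(req));
    }

    // Wait for all outstanding requests; any failure aborts the test.
    for (auto &fut : pending)
    {
        auto outcome = fut.get();
        if (!outcome.IsSuccess())
        {
            std::cout << "PutObject error: "
                      << outcome.GetError().GetExceptionName() << " - "
                      << outcome.GetError().GetMessage() << std::endl;
            return false;
        }
    }
    return true;
}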