
I rented 4 server instances from my friend's data center for an extremely cheap price. As advertised, each instance should have an Intel CPU with 4 cores at 2.6 GHz, 8 GB of RAM, and 50 GB of SATA storage.

He provided access to these instances, which were spawned from 4 larger dedicated servers using Docker.

Knowing that it is possible to fake these specifications all the way down to mucking around with the Linux kernel, what is the most reliable (not necessarily most accurate) way to ensure that the server instances I have access to hold some form of these specifications?

I was wondering whether there is a test I could run periodically, without being intrusive to tasks running on those instances, that would let me check the core clock frequency and RAM, or whether there is any way to cryptographically prove/ensure that computations implied by those specifications are possible.

Anyone have any thoughts?

  • I'm not understanding the crypto connection. Are you just wondering if crypto would be a good way to benchmark the machines? – Neil Smithline Mar 29 '18 at 19:54
  • While researching, I found that several cryptographic proofs of work using [memory-hard/CPU-bound functions](https://en.wikipedia.org/wiki/Proof-of-work_system) could be an answer, if we look at the rate at which they produce verifiable results. Not too sure if they're practically feasible for my case though. –  Mar 29 '18 at 19:56

1 Answer


Yes, you can confirm that the server matches the specifications you were given, assuming you do not need extremely high accuracy, and assuming it is not maliciously designed to fool your tests (e.g. granting you full resources only while being tested) but merely reports untrue specifications. The simplest test is to allocate as much memory as you can. You can do the same with your storage, to ensure your quota is not smaller than you believe. There are also numerous benchmarks for the processor that will tell you the overall performance of the system. If you suspect more advanced trickery is at play, you can run a more exhaustive set of tests:

Memory

Testing the amount of memory is simple: try to allocate as much non-compressible, non-deduplicatable memory as you can. To do this accurately, you will also want to disable overcommit, which otherwise allows allocating more memory than is actually available. Then allocate memory from the kernel and write to it until allocation fails (mmap(2) fails with ENOMEM). This gives you a good lower bound on the amount of memory you have. Note that some virtualization solutions provide a burst memory limit, where a request for more memory than you are allowed is permitted for a brief period. I know OpenVZ has the ability to do this and it is common among low-cost VPSes, but I do not know about Docker.

Testing memory throughput is also possible with benchmarking tools. A proper memory benchmark ensures that the CPU cache does not taint the results. This will give you a rough estimate of the overall throughput of the memory you are given. Note that real-world speeds are often higher, as the CPU cache keeps recently-used memory contents in much faster memory.

Storage

Testing the amount of storage you have available is similar to, but simpler than, testing the amount of usable memory you have. All you have to do is write as much non-compressible data to the drive as you can. Once you are unable to continue writing due to being out of free space, you will know how much storage you have been given. You may want to check that everything you have written is still there, as some storage devices (especially cheap Chinese flash drives) misreport the amount of storage they have, resulting in writes to high addresses wrapping and overwriting your earlier writes.

Benchmarking your storage can be done with popular tools for the job. This will give you an indication of both the throughput and the IOPS of the medium. Be aware that benchmarking storage devices is far more sensitive to other simultaneous workloads than many other benchmarks.

Processor

Testing the number of cores your processor has can be done by seeing at which point increasing the concurrency of a parallelizable task stops increasing performance. Spin up a workload on a single thread to max out one core. Then spin up another, and another. Keep doing this until the performance improvements begin to drop off. At the point when adding a new thread does not increase performance, or actually decreases it, you will know you have exceeded the number of hardware threads available to run your software threads. Do note, however, that many modern processors use hyperthreading or another form of simultaneous multithreading (SMT). Hyperthreading improves performance for heterogeneous workloads, but two logical cores on the same physical core still share finite resources. Don't be surprised if it appears that virtual cores do not count.

The performance of an individual processor is often subjective, as some tasks are more efficient than others (dividing two floating point numbers takes far more cycles than adding the same numbers). However, if your benchmarking shows that the actual performance is far less than the reported performance, you know you are being throttled. You can estimate the clock rate with the RDTSC instruction, which on modern CPUs counts cycles at a fixed nominal rate since the CPU last started up, regardless of dynamic frequency scaling. A simple counter loop run at high priority (to maximize the timeslice it is given) can also give you an idea of the processor's current clock speed. You will need to understand the specific CPU's performance characteristics to interpret this, though, as the latency of different instructions and the depth of the processor's pipeline all matter.

forest