A bit late, but perhaps still relevant info: t2d seems to give you an entire EPYC core per vCPU, while n2d/c2d seem to give you a hardware thread per vCPU (so 2 vCPUs per core). You can verify that by testing single-threaded vs. multi-threaded performance on a scalable workload: if your vCPUs are whole cores, you get near-optimal scaling (2x vCPUs ≈ 2x performance for the right load).
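That test can be sketched in a few lines of Python. This is a rough illustration, not the benchmark I used: the workload, worker counts, and the 90% threshold below are all arbitrary assumptions for the example.

```python
# Rough core-vs-SMT probe: run the same CPU-bound task on 1 worker, then on
# as many workers as there are vCPUs, and compare wall times. The workload
# size and the 90% efficiency cutoff are illustrative, not definitive.
import os
import time
from concurrent.futures import ProcessPoolExecutor


def burn(n: int = 2_000_000) -> int:
    # Pure-Python CPU-bound loop; every worker does the same amount of work.
    s = 0
    for i in range(n):
        s += i * i
    return s


def wall_time(workers: int, n: int = 2_000_000) -> float:
    # Time how long it takes `workers` processes to each finish one burn().
    start = time.perf_counter()
    with ProcessPoolExecutor(max_workers=workers) as ex:
        list(ex.map(burn, [n] * workers))
    return time.perf_counter() - start


if __name__ == "__main__":
    vcpus = os.cpu_count() or 1
    t1 = wall_time(1)
    tn = wall_time(vcpus)
    # Whole cores: tn stays close to t1 (each worker runs undisturbed).
    # SMT siblings: tn is noticeably larger than t1.
    efficiency = t1 / tn
    print(f"{vcpus} vCPUs, scaling efficiency: {efficiency:.0%}")
    print("full cores likely" if efficiency >= 0.9 else "probably SMT threads")
```

On a 2-vCPU instance with whole cores you'd expect efficiency near 100%; on SMT siblings it drops well below that, since the two workers share one core's execution resources.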
You were curious about other cloud solutions. I recently did some extensive testing on multiple providers, including Linode and DO (but not Vultr, as they are not as reputable). Have a look at the "Multi-threaded performance & CPU scalability" section: in that graph, anything scaling at 90%+ is a full core per vCPU; otherwise it's a hardware thread.
The short story is: most "dedicated" CPU instances across providers give you threads, the notable exceptions being the aforementioned t2d and the ARM-powered VMs (e.g. Ampere Altra, AWS Graviton2), which give you a full core per vCPU. On the other hand, while many "shared" instances give you less consistent single-threaded performance, they behave more like full cores per vCPU - probably because workloads get moved to free cores and the nodes aren't usually excessively busy. Both Linode's and DigitalOcean's lowest-cost shared instances are like that, making them great deals at the price. So, with those, your single-threaded performance will vary a bit with how busy the node is, but your 2-vCPU instance will actually have about 2x the performance of a 1-vCPU one if you can run things in parallel.