
I am running a GETH node on a Google Cloud Compute Engine instance and started with an HDD. The data has grown to 1.5 TB now, but the disk is very slow, so I want to move from HDD to SSD. How can I do that? The solution I found is:

  • Make a snapshot of the existing disk (HDD).
  • Edit the instance and attach a new SSD created from that snapshot.
  • Disconnect the old disk afterwards.
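In gcloud terms, this is roughly what I am planning to run, if the approach works at all (the instance name geth-node, the disk names geth-hdd and geth-ssd, and the zone are placeholders for my actual resources):

    # Snapshot the existing HDD.
    gcloud compute disks snapshot geth-hdd --zone=us-central1-a --snapshot-names=geth-snap

    # Create an SSD persistent disk from the snapshot. The new disk must be
    # at least as large as the source disk.
    gcloud compute disks create geth-ssd --type=pd-ssd --size=2TB \
        --source-snapshot=geth-snap --zone=us-central1-a

    # Stop the instance, swap the disks, and start it again.
    # (If the HDD is the boot disk, attach-disk also needs --boot.)
    gcloud compute instances stop geth-node --zone=us-central1-a
    gcloud compute instances detach-disk geth-node --disk=geth-hdd --zone=us-central1-a
    gcloud compute instances attach-disk geth-node --disk=geth-ssd --zone=us-central1-a
    gcloud compute instances start geth-node --zone=us-central1-a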

One problem I saw: for example, if my HDD is 500 GB, GCP will not let me create an SSD smaller than 500 GB from its snapshot. My data is in the TBs now, so the SSD will be very expensive.

But I want to understand whether this actually works, because this is a node I want to use in production. I have already been waiting too long and cannot afford to wait any longer.


  • Tip: Do NOT put a large amount of data on the OS boot disk. Create a new disk and add it to your instance. For some OSes the boot drive cannot be larger than 2 TB. If you run out of space, you will need to know what you are doing to fix it. – John Hanley Jul 25 '19 at 15:30

2 Answers


You should try zonal SSD persistent disks.

As stated in the documentation:

Each persistent disk can be up to 64 TB in size, so there is no need to manage arrays of disks to create large logical volumes.
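For instance, a single multi-terabyte SSD disk can be created with one command (the disk name, size, and zone below are only illustrative):

    gcloud compute disks create geth-ssd --type=pd-ssd --size=2TB --zone=us-central1-a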

– Jakub Bujny

The description of the issue is confusing, so I will try to help based on my current understanding of the problem. First, you can use a snapshot of the boot disk to create a new boot disk that meets your requirements, see here. The size limit for a boot persistent disk is 2 TB, so I don’t understand your comment about the 500 GB minimum size: if your disk is 1.5 TB, it meets that restriction.

Anyway, I don’t recommend using such a big disk as a boot disk. A better approach is to use a smaller boot disk and expand the total capacity by attaching additional disks as needed, see this link.
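As a rough sketch of that approach, assuming an instance named geth-node in us-central1-a, a new data disk named geth-data, and that the disk appears on the instance as /dev/sdb (all of these are placeholders):

    # Create an SSD data disk and attach it to the running instance.
    gcloud compute disks create geth-data --type=pd-ssd --size=2TB --zone=us-central1-a
    gcloud compute instances attach-disk geth-node --disk=geth-data --zone=us-central1-a

    # On the instance: format the new disk and mount it where geth can use it.
    sudo mkfs.ext4 -m 0 -E lazy_itable_init=0,lazy_journal_init=0,discard /dev/sdb
    sudo mkdir -p /mnt/disks/geth-data
    sudo mount -o discard,defaults /dev/sdb /mnt/disks/geth-data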

– Alex6Zam