31

I get an error when trying to SSH into my instance, and it tells me to check the console serial output. From what I understand, the output says my disk has no space left. What do I do? I need SSH to clear space, but I can't SSH because there is no space!

This is what I see:

Starting OpenBSD Secure Shell server: sshdopen: No space left on device
Oct 14 13:18:13 instance-1 sshd[2771]: Server listening on 0.0.0.0 port 22.
Oct 14 13:18:13 instance-1 sshd[2771]: Server listening on :: port 22.
[ ok ].
udhcpd: Disabled. Edit /etc/default/udhcpd to enable it.
mktemp: failed to create file via template `/tmp/tmp.XXXXXXXXXX': No space left on device
mktemp: failed to create file via template `/tmp/tmp.XXXXXXXXXX': No space left on device
mktemp: failed to create file via template `/tmp/tmp.XXXXXXXXXX': No space left on device
mktemp: failed to create file via template `/tmp/tmp.XXXXXXXXXX': No space left on device
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed

  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100  2384  100  2384    0     0  1126k      0 --:--:-- --:--:-- --:--:-- 2328k
Oct 14 13:18:13 instance-1 google: {"attributes":{"sshKeys":"ishener:ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAFbCZDZDvuIxUbH5AHYeUU/WUWaOBYI1S7Yl9k3oVFwrenn6XsMdDHKiSH2VtpenQ7mHu3YcLDFe0pO1AwJjnSO39JR/3tTVLeVbuHDTEhOhDHt0NE84S1rqHX6r591IDwLhoGnFdNibGs0Sc0uyR/kRxl5hjAWdskOm4wzald+uRctBd+hbdBmt6az7iF2UzHEV362LxUtIzaYWoo1hnhld07+eimi6t+bUHsgqDkVGaEUUDaRFWTaNlFI9UW/AMYOcu9C24molfpPKwe2R5C5HLI+8nNI7qvoGtrUZww7K5bxNQPe+bMvVitArjYItYNDU6OXvQVA/u6gnqnbt1MM= google-ssh {\"userName\":\"ishener@gmail.com\",\"expireOn\":\"2015-10-14T13:13:05+0000\"}\nishener:ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCKT2j4VbRM6DXSjLb5UlOdzfaB4K2TvZHWGV3JD7T++EbWX87JLOKg6AdbDPWTlgKRan02TIT/Xshy28r7fCCc= google-ssh {\"userName\":\"ishener@gmail.com\",\"expireOn\":\"2015-10-14T13:12:58+0000\"}\nrsa-key-20150806:ssh-rsa AAAAB3NzaC1yc2EAAAABJQAAAQEAxtMUn200CaOuRa8bHFuqrjDiyUDrLECUf9V/ZpxT24lrqEbS1bDT7oWQwcuxQZEcrTnfeCEDeIwQpbNoOGp8NufrZUUG8jpVnVQqCHQZ3T+0Gs6et6JYxldhb0xT3KJVwQM+qnZOfwsk7co/+XOhE63u62NvUlqpGDQQUvuFY1wV3B7Rfjhg2JYEHCMswLRnSfnvyxp6+uQJ
Oct 14 13:18:13 instance-1 google: 4THd/FlcGQJyJHUvpVQAqBIii8yc59+Rb32Xlyii6YU4+G60dfP6ON1iX2qkxJT5/mIkPfd3yPizbGsYhJbaqNQHPUE9hdqTlfk3gyA8S6SySNwViQtUqOH+sbo+suiJHHwr67V/qw== rsa-key-20150806\n"},"cpuPlatform":"Intel Ivy Bridge","description":"","disks":[{"deviceName":"instance-1","index":0,"mode":"READ_WRITE","type":"PERSISTENT"}],"hostname":"instance-1.c.united-wavelet-102819.internal","id":1871676137734806120,"image":"","machineType":"projects/273410245967/machineTypes/g1-small","maintenanceEvent":"NONE","networkInterfaces":[{"accessConfigs":[{"externalIp":"104.197.52.39","type":"ONE_TO_ONE_NAT"}],"forwardedIps":[],"ip":"10.240.238.207","network":"projects/273410245967/networks/default"}],"scheduling":{"automaticRestart":"TRUE","onHostMaintenance":"MIGRATE"},"serviceAccounts":{"273410245967-compute@developer.gserviceaccount.com":{"aliases":["default"],"email":"273410245967-compute@developer.gserviceaccount.com","scopes":["https://www.googleapis.com/auth/devstorage.read_only","https://www.googleapis.com/auth/logging.write"]},"default":{"al
Oct 14 13:18:13 instance-1 google: iases":["default"],"email":"273410245967-compute@developer.gserviceaccount.com","scopes":["https://www.googleapis.com/auth/devstorage.read_only","https://www.googleapis.com/auth/logging.write"]}},"tags":["http-server","https-server"],"virtualClock":{"driftToken":"12943060760861539723"},"zone":"projects/273410245967/zones/us-central1-f"}
Oct 14 13:18:13 instance-1 google: No startup script found in metadata.
[....] startpar: service(s) returned failure: tomcat7 ... [FAIL] failed!
Oct 14 13:18:40 instance-1 accounts-from-metadata: WARNING Could not update /home/rsa-key-20150806/.ssh/authorized_keys due to [Errno 2] No usable temporary directory found in ['/tmp', '/var/tmp', '/usr/tmp', '/']
Oct 14 13:18:40 instance-1 accounts-from-metadata: WARNING Could not update /home/ishener/.ssh/authorized_keys due to [Errno 2] No usable temporary directory found in ['/tmp', '/var/tmp', '/usr/tmp', '/']
Oct 14 13:18:40 instance-1 accounts-from-metadata: WARNING Could not update /home/ishener_zaph/.ssh/authorized_keys due to [Errno 2] No usable temporary directory found in ['/tmp', '/var/tmp', '/usr/tmp', '/']
Oct 14 13:18:45 instance-1 sshd[2884]: Connection closed by 173.194.92.49 [preauth]
Oct 14 13:18:46 instance-1 sshd[2886]: Connection closed by 173.194.92.49 [preauth]
Oct 14 13:18:47 instance-1 accounts-from-metadata: WARNING Could not update /home/rsa-key-20150806/.ssh/authorized_keys due to [Errno 2] No usable temporary directory found in ['/tmp', '/var/tmp', '/usr/tmp', '/']
Oct 14 13:18:47 instance-1 accounts-from-metadata: WARNING Could not update /home/ishener/.ssh/authorized_keys due to [Errno 2] No usable temporary directory found in ['/tmp', '/var/tmp', '/usr/tmp', '/']
Oct 14 13:18:47 instance-1 accounts-from-metadata: WARNING Could not update /home/ishener_zaph/.ssh/authorized_keys due to [Errno 2] No usable temporary directory found in ['/tmp', '/var/tmp', '/usr/tmp', '/']
Oct 14 13:18:49 instance-1 sshd[2903]: Connection closed by 173.194.92.52 [preauth]
Oct 14 13:18:51 instance-1 sshd[2905]: Connection closed by 173.194.92.52 [preauth]
Oct 14 13:18:55 instance-1 sshd[2907]: Connection closed by 173.194.92.52 [preauth]
Oct 14 13:19:02 instance-1 sshd[2909]: Connection closed by 173.194.92.48 [preauth]
Oct 14 13:19:17 instance-1 sshd[2912]: Connection closed by 173.194.92.48 [preauth]
Oct 14 13:19:58 instance-1 accounts-from-metadata: WARNING Could not update /home/ishener_zaph/.ssh/authorized_keys due to [Errno 2] No usable temporary directory found in ['/tmp', '/var/tmp', '/usr/tmp', '/']
Oct 14 13:19:58 instance-1 accounts-from-metadata: WARNING Could not update /home/rsa-key-20150806/.ssh/authorized_keys due to [Errno 2] No usable temporary directory found in ['/tmp', '/var/tmp', '/usr/tmp', '/']
Oct 14 13:19:58 instance-1 accounts-from-metadata: WARNING Could not update /home/ishener/.ssh/authorized_keys due to [Errno 2] No usable temporary directory found in ['/tmp', '/var/tmp', '/usr/tmp', '/']
Oct 14 13:20:01 instance-1 accounts-from-metadata: WARNING Could not update /home/ishener_zaph/.ssh/authorized_keys due to [Errno 2] No usable temporary directory found in ['/tmp', '/var/tmp', '/usr/tmp', '/']
Oct 14 13:20:01 instance-1 accounts-from-metadata: WARNING Could not update /home/rsa-key-20150806/.ssh/authorized_keys due to [Errno 2] No usable temporary directory found in ['/tmp', '/var/tmp', '/usr/tmp', '/']
Oct 14 13:20:01 instance-1 accounts-from-metadata: WARNING Could not update /home/ishener/.ssh/authorized_keys due to [Errno 2] No usable temporary directory found in ['/tmp', '/var/tmp', '/usr/tmp', '/']
Oct 14 13:20:02 instance-1 sshd[2929]: Connection closed by 173.194.92.51 [preauth]
Oct 14 13:20:03 instance-1 sshd[2946]: Connection closed by 173.194.92.50 [preauth]
Oct 14 13:20:06 instance-1 sshd[2948]: Connection closed by 173.194.92.49 [preauth]
Oct 14 13:20:09 instance-1 sshd[2950]: Connection closed by 173.194.92.51 [preauth]
Oct 14 13:20:13 instance-1 sshd[2952]: Connection closed by 173.194.92.50 [preauth]
Oct 14 13:20:25 instance-1 sshd[2955]: Connection closed by 173.194.92.49 [preauth]
Oct 14 13:20:36 instance-1 sshd[2957]: Connection closed by 173.194.92.52 [preauth]
Oct 14 13:20:55 instance-1 sshd[2959]: Connection closed by 173.194.92.52 [preauth]
Oct 14 13:21:01 instance-1 accounts-from-metadata: WARNING Could not update /home/ishener_zaph/.ssh/authorized_keys due to [Errno 2] No usable temporary directory found in ['/tmp', '/var/tmp', '/usr/tmp', '/']
Oct 14 13:21:01 instance-1 accounts-from-metadata: WARNING Could not update /home/rsa-key-20150806/.ssh/authorized_keys due to [Errno 2] No usable temporary directory found in ['/tmp', '/var/tmp', '/usr/tmp', '/']
Oct 14 13:21:01 instance-1 accounts-from-metadata: WARNING Could not update /home/ishener/.ssh/authorized_keys due to [Errno 2] No usable temporary directory found in ['/tmp', '/var/tmp', '/usr/tmp', '/']
Oct 14 13:21:34 instance-1 sshd[2977]: Connection closed by 173.194.92.51 [preauth]
Oct 14 13:23:01 instance-1 accounts-from-metadata: WARNING Could not update /home/rsa-key-20150806/.ssh/authorized_keys due to [Errno 2] No usable temporary directory found in ['/tmp', '/var/tmp', '/usr/tmp', '/']
Oct 14 13:23:01 instance-1 accounts-from-metadata: WARNING Could not update /home/ishener_zaph/.ssh/authorized_keys due to [Errno 2] No usable temporary directory found in ['/tmp', '/var/tmp', '/usr/tmp', '/']
Oct 14 13:23:01 instance-1 accounts-from-metadata: WARNING Could not update /home/ishener/.ssh/authorized_keys due to [Errno 2] No usable temporary directory found in ['/tmp', '/var/tmp', '/usr/tmp', '/']
Moshe Shaham
  • time to hook up the monitor.. – ergonaut Oct 14 '15 at 13:32
  • @ergonaut I'm sorry, but what does that mean? – Moshe Shaham Oct 14 '15 at 13:33
  • Can you do roughly the following: stop the instance, mount the disk on another server, clean it up, then unmount it there and remount it here, and start the server? That should be possible in case it's a network drive and Compute Engine lets you do that. – zapl Oct 14 '15 at 13:35
  • I did stop & start the server. I don't have another server to mount it on. And why should that work? If the disk is full, what does it matter which server it's mounted on? – Moshe Shaham Oct 14 '15 at 13:43
  • Analyze each directory's disk usage. In my case, I saw some Google header files being downloaded every day without the older versions in the same directory being deleted. They piled up and caused 100% disk usage, crashing everything. I attached this disk to a new instance as an additional disk (not as a boot disk), cleaned that directory, and attached it back to the original instance as a boot disk. All good now. – Ruby9191 Dec 02 '21 at 07:26

2 Answers

34

You have several options to fix this issue (rough gcloud sketches for each option follow the list):

  1. Check whether your operating system supports automatic resizing. If it does, you can edit the VM's root disk in the Cloud Console and increase its size; after you restart the instance, the OS can automatically resize the partition to recognize the additional space.

  2. Use the Interactive Serial Console feature to log in to your VM and clean up its disk, or copy files to other storage if you will need them later.

  3. If you know which data you want to delete, you can configure a startup script that removes the files (e.g. rm /tmp/*) and reboot your VM to run the script.

  4. You can detach the persistent disk and attach it to another machine as an additional disk. On the temporary machine, mount it and clean up your data, or copy files to other storage if you will need them later. Finally, recreate the original instance with the same boot disk. You can follow the same steps described in this video to add your disk to another Linux VM, but attach your existing boot disk instead of creating a new disk.

  5. Check whether your operating system supports automatic resizing. If it does, create a snapshot of your persistent disk, create a new, larger persistent disk from the snapshot, and finally recreate the original instance with this larger boot disk.
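
A rough sketch of options 1 and 5 with the gcloud CLI. The disk name and zone below are taken from the serial log above; the snapshot name and target size are placeholders, so substitute your own:

    # Option 1: grow the root disk in place. The OS must support automatic
    # partition resizing for the extra space to show up after a restart.
    gcloud compute disks resize instance-1 --size=20GB --zone=us-central1-f

    # Option 5: snapshot the full disk and restore it onto a larger disk,
    # then recreate the instance with instance-1-bigger as its boot disk.
    gcloud compute disks snapshot instance-1 --snapshot-names=instance-1-snap --zone=us-central1-f
    gcloud compute disks create instance-1-bigger --source-snapshot=instance-1-snap --size=20GB --zone=us-central1-f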
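
For option 2, the serial console has to be enabled before you can log in. A minimal sketch, assuming a local user with a password or key is already configured on the VM:

    # Enable the interactive serial console, then log in on serial port 1
    # and delete files (e.g. rm -f /tmp/*) until SSH works again.
    gcloud compute instances add-metadata instance-1 --zone=us-central1-f \
        --metadata serial-port-enable=TRUE
    gcloud compute connect-to-serial-port instance-1 --zone=us-central1-f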
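
For option 3, one way to set the script is through instance metadata. A hedged sketch; the files removed here are only an example, and note that this overwrites any startup script the instance already has:

    # Store a cleanup script in the startup-script metadata key, then
    # reset the VM so the script runs on the next boot.
    gcloud compute instances add-metadata instance-1 --zone=us-central1-f \
        --metadata startup-script='#! /bin/bash
    rm -f /tmp/*'
    gcloud compute instances reset instance-1 --zone=us-central1-f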
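
For option 4, a sketch of the rescue-VM round trip ("rescue-vm" is a placeholder for any temporary instance in the same zone, and the device/partition names depend on your setup):

    # Stop the broken VM and move its boot disk to the rescue VM.
    gcloud compute instances stop instance-1 --zone=us-central1-f
    gcloud compute instances detach-disk instance-1 --disk=instance-1 --zone=us-central1-f
    gcloud compute instances attach-disk rescue-vm --disk=instance-1 --zone=us-central1-f

    # On the rescue VM: find, mount, and clean the full filesystem.
    lsblk                                   # the extra disk usually shows up as /dev/sdb
    sudo mkdir -p /mnt/rescue
    sudo mount /dev/sdb1 /mnt/rescue        # partition name may differ
    sudo du -xh --max-depth=2 /mnt/rescue | sort -h | tail -n 20
    sudo rm -rf /mnt/rescue/tmp/*           # example cleanup only
    sudo umount /mnt/rescue

    # Move the disk back as a boot disk and start the original instance.
    gcloud compute instances detach-disk rescue-vm --disk=instance-1 --zone=us-central1-f
    gcloud compute instances attach-disk instance-1 --disk=instance-1 --boot --zone=us-central1-f
    gcloud compute instances start instance-1 --zone=us-central1-f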

Kamran
  • Automatic resizing is not working; I've tried it. Point 4 solved my problem. – Terry Lin Jan 16 '18 at 01:28
  • I tried 1 and 3 and skipped 2. Neither worked for my stupidly overfilled Ubuntu 16.04 VM. Number 4 all the way to success: I created a clone of the original image, attached it as a secondary drive to a second VM, used that VM to expand the filesystem that had been filled, and once that was done, detached it and used it as the boot drive for a third VM. – Finlay Feb 17 '18 at 23:28
  • Step 4 worked to retrieve the data and disk from a machine that was not reachable due to SSH issues. **STEP 1** I first created a snapshot of the broken machine's disk. **STEP 2** Then I created a fresh machine (same zone) and added a new disk to it, based on the snapshot I had just created (all machine disks and snapshots must reside in the same area in order to be visible when you create the new disk). **STEP 3** Once the new disk is connected, you don't format it; you must mount it in the destination machine in order to make the old files visible. – m.piunti Jun 06 '18 at 16:43
  • Hey everyone, sorry to resurrect an old topic, but I have the same issue. I have mounted my new disk (snapshot), but all the files I am interested in are not showing up. I know it's the correct path and correct drive because the parent directory is there, /opt/code_base/productName, but nothing under it is there. I thought maybe it was a permissions issue. How do I bypass the permissions on this mount and access all the files? – GrafixMastaMD Jun 13 '18 at 05:33
4

For anyone else who runs into this problem!

This is the simplest solution; I just had to deal with this myself (going from a completely full, SSH-inaccessible 10 GB boot SSD on a CentOS 7 instance to a 20 GB boot SSD).

Using the Cloud Console (a rough gcloud equivalent is sketched after these steps):

  1. Create a snapshot of the boot disk: https://console.cloud.google.com/compute/snapshots
  2. Create a new boot disk from the snapshot you just created, but set the new disk size to 20 GB (or whatever you feel you may need): https://console.cloud.google.com/compute/disks
  3. Next, stop the instance: https://console.cloud.google.com/compute/instances
  4. Once it has stopped, edit the instance and click the 'X' to the right of the boot disk; the "ADD" button will become available. Click it, select your newly created boot disk, and save. This detaches the overloaded disk you cannot access and attaches your newly created boot disk with a complete partition at the new size. There is no need to extend a 10 GB partition to 20 GB; it is auto-magic.
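
A hedged gcloud sketch of the same steps; all disk, snapshot, instance, and zone names below are placeholders:

    # Steps 1-2: snapshot the full boot disk and restore it to a 20 GB disk.
    gcloud compute disks snapshot old-boot-disk --snapshot-names=boot-snap --zone=us-central1-f
    gcloud compute disks create new-boot-disk --source-snapshot=boot-snap --size=20GB --zone=us-central1-f

    # Steps 3-4: stop the instance, swap the boot disk, and start it again.
    gcloud compute instances stop my-instance --zone=us-central1-f
    gcloud compute instances detach-disk my-instance --disk=old-boot-disk --zone=us-central1-f
    gcloud compute instances attach-disk my-instance --disk=new-boot-disk --boot --zone=us-central1-f
    gcloud compute instances start my-instance --zone=us-central1-f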

This seems to solve multiple issues related to a boot disk with no remaining space.

If, like me, you saw the SSH login notice saying you should switch to OS Login because the web SSH client couldn't connect, and then set it up properly, only to find that even though you can access the instance over SSH, the session is extremely limited: it cannot add the user to the sudo group policy and cannot create a home directory, so you get errors trying to do anything. That includes extending the boot disk, which I also tried (but I didn't have permission to reach the home directory I needed in order to delete files, so I couldn't connect without errors to extend the boot drive or add the necessary tools through yum). This procedure will definitely fix all of that as well.

Once completed, if you're on a budget, I recommend deleting the snapshot and the old boot disk that is no longer attached to anything.

Supporting links: How to Create a Snapshot: https://cloud.google.com/compute/docs/disks/create-snapshots#creating_snapshots

How to create a boot persistent disk from a snapshot: https://cloud.google.com/compute/docs/disks/create-root-persistent-disks#applying_snapshots

How to Update a boot disk for an instance: https://cloud.google.com/compute/docs/disks/detach-reattach-boot-disk#updating_a_boot_disk_for_an_instance

sidgrafix
  • I have the slight feeling that this was the result of two things: there was very little space that was freed up once the machine was started with the new disk, and your operating system actually resized the partition automatically after the reboot. In my case, this did not work, and I'm still seeing the "no space left on device" errors. It's an Ubuntu 20.04 image, and for some reason it never gets resized even after this procedure. I will try the solution of mounting it in another machine. – LaloLoop Feb 16 '22 at 22:32
  • Update: I cannot update my comment after 5 minutes. I detached the disk, mounted it as a secondary disk in another machine, and proceeded to resize as you mentioned, which worked just fine. It's mostly what [Kamran's answer](https://stackoverflow.com/a/33130874/3211335) mentions. – LaloLoop Feb 16 '22 at 23:03