The ability to stop (rather than terminate) spot instances applies only when AWS itself preempts an instance; a user cannot request a stop, only a termination. Every launch therefore builds a fresh root volume from the AMI's S3-backed snapshot, and since blocks are lazily loaded, boot latency varies on every boot.
To get EBS volumes without any S3 penalty, the volumes must pre-exist and be mounted to the spot instance when launched.
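As a sketch of that provisioning step with the AWS CLI (the snapshot, volume, and instance IDs, the availability zone, and the device name below are all placeholders), the volume is created from the AMI's snapshot once, then attached at each spot launch:

```shell
# One-time: create a standalone EBS volume from the AMI's snapshot.
# snap-/vol-/i- IDs and us-east-1a are placeholder values.
aws ec2 create-volume \
    --snapshot-id snap-0123456789abcdef0 \
    --availability-zone us-east-1a \
    --volume-type gp2

# Per launch: attach the pre-warmed volume to the new spot instance.
aws ec2 attach-volume \
    --volume-id vol-0123456789abcdef0 \
    --instance-id i-0123456789abcdef0 \
    --device /dev/sdf
```

The volume must be created in the same availability zone the spot instance will launch in, or the attach call will fail.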
One way to approximate a "warm" root volume is to chroot into the attached volume.
Every AMI is backed by a snapshot ID, and that snapshot can be provisioned as one or more standalone EBS volumes. Such a volume behaves like the root volume of a stopped on-demand instance. If the intent is speed rather than any higher level of isolation or security, mount the volume and then bind the host's system paths into the chroot location. Something similar to the following will work in most cases:
mount /dev/xvdf1 /mnt/myMount   # mount the attached volume first; device name varies
mount -o bind /proc /mnt/myMount/proc
mount -o bind /sys /mnt/myMount/sys
mount -o bind /dev /mnt/myMount/dev
mount -o bind /dev/pts /mnt/myMount/dev/pts
mount -o bind /tmp /mnt/myMount/tmp
mount -o bind /run /mnt/myMount/run
mount -o bind /run/lock /mnt/myMount/run/lock
mount -o bind /dev/shm /mnt/myMount/dev/shm
Last, configure sshd to chroot the user into the mounted path (note that OpenSSH requires the ChrootDirectory, and every component of its path, to be owned by root and not writable by any other user or group):
Match User ubuntu
ChrootDirectory /mnt/myMount
Now, when the spot instance launches and the volume is attached, the user lands on the pre-existing EBS volume, whose blocks are fetched from S3 only once (as with on-demand) and persist between instance associations. The spot instance can be terminated, and a new instance can remount the same "warm" storage.
You will need a system in place to match existing EBS volumes with new spot requests, plus UserData or API calls to attach the volumes and set up the chroot.
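A minimal UserData sketch of that glue, assuming the AWS CLI and instance-profile credentials are available on the instance, and that the volume ID and the /mnt/myMount layout are placeholders supplied by your own matching system:

```shell
#!/bin/bash
# Illustrative UserData: attach the warm volume, mount it, bind system
# paths, and point sshd's chroot at it. All IDs are placeholders.
VOLUME_ID="vol-0123456789abcdef0"
INSTANCE_ID="$(curl -s http://169.254.169.254/latest/meta-data/instance-id)"

aws ec2 attach-volume --volume-id "$VOLUME_ID" \
    --instance-id "$INSTANCE_ID" --device /dev/sdf
aws ec2 wait volume-in-use --volume-ids "$VOLUME_ID"

mkdir -p /mnt/myMount
mount /dev/xvdf1 /mnt/myMount   # device name depends on instance type

for path in /proc /sys /dev /dev/pts /tmp /run /run/lock /dev/shm; do
    mount -o bind "$path" "/mnt/myMount$path"
done

cat >> /etc/ssh/sshd_config <<'EOF'
Match User ubuntu
    ChrootDirectory /mnt/myMount
EOF
systemctl reload ssh
```

On Nitro-based instance types the attached device typically appears as an NVMe device (e.g. /dev/nvme1n1) rather than /dev/xvdf, so the mount line would need adjusting accordingly.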
At Spotinst, we thought this was an exciting use case and wrote a blog to go further into detail here: https://blog.spotinst.com/2018/10/09/imagenet-ec2-spot/