I achieved this using distributed caching, which I found straightforward to set up. First of all you need an S3 bucket or S3-compatible storage such as MinIO. You can set up MinIO locally on the machine where the GitLab Runner exists with the following commands:
docker run -it --restart always -p 9005:9000 \
-v /.minio:/root/.minio -v /export:/export \
--name minio \
minio/minio:latest server /export
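As a quick sanity check, you can confirm the container is up and running (using the container name chosen above):
docker ps --filter name=minio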
Check the IP address of the server:
hostname --ip-address
Your cache server will be available at MY_CACHE_IP:9005, where MY_CACHE_IP is the IP address returned by the command above.
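You can verify the server responds before going further; recent MinIO builds expose a liveness endpoint (older images may not serve this path, in which case simply opening the address in a browser works too):
curl -I http://MY_CACHE_IP:9005/minio/health/live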
Create a bucket that will be used by the Runner:
sudo mkdir /export/runner
Here, runner is the name of the bucket; if you choose a different bucket name, the directory name will differ accordingly. All caches will be stored in the /export directory.
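If you want to double-check that the bucket directory was created:
ls -ld /export/runner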
Read MinIO's Access and Secret Keys and use them to configure the Runner:
sudo cat /export/.minio.sys/config/config.json | grep Key
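If that config file is not present (its location has changed between MinIO versions), the generated keys are usually also printed in the container's startup logs; assuming the container name used above:
sudo docker logs minio 2>&1 | grep -i key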
The next step is to configure your Runner to use the cache. The following is a sample config.toml:
[[runners]]
  limit = 10
  executor = "docker+machine"
  [runners.cache]
    Type = "s3"
    Path = "path/to/prefix"
    Shared = false
    [runners.cache.s3]
      ServerAddress = "s3.example.com"  # for the MinIO setup above, this would be MY_CACHE_IP:9005
      AccessKey = "access-key"
      SecretKey = "secret-key"
      BucketName = "runner"
      Insecure = false                  # set to true if MinIO is served over plain HTTP, as in the setup above
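After saving config.toml, restart the Runner so it picks up the new cache settings (assuming it is installed as a system service):
sudo gitlab-runner restart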
I hope this answer helps you.
References:
https://docs.gitlab.com/runner/install/registry_and_cache_servers.html
https://docs.gitlab.com/runner/configuration/autoscale.html#distributed-runners-caching