There are two AWS services that are relevant here:
- Amazon Glacier, now called Amazon S3 Glacier, is made for long-term corporate archiving. You shouldn't use it unless you fully understand the service and its concepts of vaults, archives, etc.
- Amazon S3 is a much more flexible, object-based service. It has a number of storage classes, including Standard, Infrequent Access (IA), Glacier, and Glacier Deep Archive. This is the service you should use.
Using S3 you can simply upload your backup file. You can use a different file name based on the date, or you can overwrite the old file and rely on bucket versioning.
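The date-based naming approach can be sketched like this (the bucket name and paths are placeholders - the actual dump and upload commands are shown commented out, since they depend on your environment):

```shell
# Sketch: build a date-stamped dump file name, then upload it.
# "my-backup-bucket" and the paths below are illustrative, not real names.
TODAY=$(date +%F)                          # e.g. 2024-05-01
DUMP="/var/backups/database/db-$TODAY.sql.gz"

# On a real host you would then run something like:
# mysqldump --skip-dump-date -h localhost db_name | gzip > "$DUMP"
# aws s3 cp "$DUMP" "s3://my-backup-bucket/db-$TODAY.sql.gz" --storage-class DEEP_ARCHIVE

echo "$DUMP"
```

Because each day gets a unique key, old backups are never overwritten; the trade-off is that you must clean them up yourself (or with a lifecycle rule).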
You could use an incremental backup tool such as Restic, which reduces your storage requirements and can age out old backups (e.g. a grandfather-father-son scheme). If you use Restic, be sure not to put files such as the index into a Glacier class, because of the recall time - keep them in the IA class or similar. You can technically move the data files into a Glacier-type class, but the minimum storage duration can negate the savings. Simply storing the files in the S3 Deep Archive class may be simpler and cheaper.
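For the aging-out part, Restic's `forget` command can implement a rough grandfather-father-son policy. This is a sketch - the repo URL and retention counts are illustrative, and it needs Restic plus AWS credentials configured:

```shell
# Sketch: keep 7 daily, 5 weekly, and 12 monthly snapshots, delete the rest.
# Repo URL and counts are placeholders; requires restic and S3 credentials.
restic --repo s3:s3.amazonaws.com/bucketname/foldername forget \
  --keep-daily 7 --keep-weekly 5 --keep-monthly 12 --prune
```

Note that `--prune` rewrites repository data, which is another reason to keep the repo's metadata out of Glacier-class storage.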
Either way, this is simple to achieve: a cron job does the database export and then uses the S3 API to upload the file. Here's how I do it for MySQL with Restic:
mysqldump --skip-dump-date -h localhost db_name > /var/backups/database/database-name.sql
restic -q --repo s3:s3.amazonaws.com/bucketname/foldername backup /var/backups/database --exclude="*.tmp" --exclude="thumbnails" --cleanup-cache
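To tie the two commands above together, a crontab entry like the following would run them nightly (the script path, schedule, and log location are assumptions - put the two commands in a small script of your own):

```shell
# Illustrative crontab entry (e.g. in /etc/crontab): run the dump-and-backup
# script at 02:30 every night, appending output to a log file.
30 2 * * * root /usr/local/bin/db-backup.sh >> /var/log/db-backup.log 2>&1
```
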
Alternatively, if you want to do a basic S3 upload, this is the command line I use on Windows - Linux is similar:
aws s3 sync c:\backupfolder s3://bucket-name/ --profile AWS-cli-profile-name --storage-class DEEP_ARCHIVE --exclude "*.txt"
In both cases I turn on bucket versioning, but it's especially important if you just upload the files without an incremental tool like Restic. In that case you will probably want to expire old versions, otherwise costs can grow significantly over time.
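Expiring old versions can be automated with an S3 lifecycle rule rather than deleting them by hand. This is a sketch of such a rule - the bucket name and the 90-day window are assumptions, so adjust before using:

```shell
# Sketch only: permanently remove noncurrent object versions 90 days after
# they are superseded. Bucket name and retention window are placeholders.
aws s3api put-bucket-lifecycle-configuration \
  --bucket my-backup-bucket \
  --lifecycle-configuration '{
    "Rules": [{
      "ID": "expire-old-backup-versions",
      "Status": "Enabled",
      "Filter": {},
      "NoncurrentVersionExpiration": { "NoncurrentDays": 90 }
    }]
  }'
```

With a rule like this in place, overwriting the same object key each night gives you a rolling window of recoverable versions at a bounded cost.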