If storage is 100% full, compression won't work: gzip needs free space to write the compressed copy before it removes the original.
Copy logs to other storage. scp -r /var/log/ otherhost:
Review and delete old log files. find /var/log -type f -mtime +7 lists files not modified in more than 7 days; review the list before deleting anything.
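As a sketch of that review-then-delete step, in a scratch directory so nothing real is touched (touch -d and find -delete are GNU extensions; check your platform's find before relying on them):

```shell
set -eu
demo=$(mktemp -d)

touch "$demo/fresh.log"                    # modified just now
touch -d '10 days ago' "$demo/stale.log"   # pretend this one is 10 days old

# Dry run first: find only prints matches, it does not delete anything.
find "$demo" -type f -mtime +7

# Once the list looks right, delete the same set:
find "$demo" -type f -mtime +7 -delete
```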
Expand file system if necessary.
Compress some large files. Reload services to open a new log file. gzip /var/log/httpd/access_log ; systemctl reload httpd.service
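To see why the caveat about free space applies here too: gzip writes the whole compressed copy first and unlinks the original only afterwards. A scratch-directory sketch:

```shell
set -eu
demo=$(mktemp -d)
seq 1 1000 > "$demo/access_log"   # stand-in for a large log file

gzip "$demo/access_log"           # writes access_log.gz, then removes the original
```

On a live log, don't skip the reload afterwards: the service still holds the now-unlinked original open, so the space is not freed (and new writes go nowhere useful) until it reopens the file.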
Implement logrotate or equivalent script to manage these automatically. The usual pattern is to move the current file to a new name, and reopen a new log file.
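The rename-and-reopen pattern can be demonstrated with plain shell redirection standing in for a service (a sketch; fd 3 plays the service's open log handle):

```shell
set -eu
demo=$(mktemp -d)

exec 3>>"$demo/app.log"               # fd 3 = the "service's" open log handle
echo "before rotate" >&3

mv "$demo/app.log" "$demo/app.log.1"  # rotate: rename the current file

echo "after rotate" >&3               # still lands in app.log.1 -- the open fd
                                      # follows the inode, not the name

exec 3>>"$demo/app.log"               # "reload": reopen by name -> fresh file
echo "new file" >&3
exec 3>&-
```

This is exactly why the reload/signal in the previous step matters: without the reopen, the service keeps writing to the renamed (or deleted) file forever.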
Consider implementing a remote log server and shipping logs off the host instead.
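If you go the remote route, one common option is rsyslog's forwarding syntax. A minimal sketch; the file path and hostname below are placeholders for your own setup:

```
# /etc/rsyslog.d/90-forward.conf  (hypothetical file name)
# Forward all facilities/priorities to a central log host.
# '@@' = TCP, a single '@' = UDP.
*.* @@loghost.example.com:514
```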
Whether sending a signal to a service or otherwise reloading it is acceptable is up to you. Of course, you can try it on a test system if this makes you nervous.
If you don't tell the service to open a new file, there is another option: truncate in place with cp /dev/null file.log (or the logrotate option copytruncate). However, beware the warning that this is not atomic, from the logrotate man page:

    Note that there is a very small time slice between copying the file
    and truncating it, so some logging data might be lost.
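What truncating in place actually does can be sketched the same way, with a shell fd standing in for the service (a demo, not your real log):

```shell
set -eu
demo=$(mktemp -d)

exec 3>>"$demo/app.log"        # fd 3 = the "service's" open log handle
echo "old data" >&3

cp /dev/null "$demo/app.log"   # truncate in place: same file, now empty
# (equivalents: truncate -s 0 app.log   or   : > app.log)

echo "new data" >&3            # the service's fd keeps working
exec 3>&-
```

One caveat: this works cleanly here because fd 3 was opened in append mode. A service that opens its log without O_APPEND keeps its old write offset after the truncation, leaving a NUL-filled gap at the start of the file.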