Here's a deep dive that might be helpful:
https://aws.amazon.com/blogs/database/planning-i-o-in-amazon-aurora/
Read operations in Aurora MySQL operate on 16 KB pages, so you can do a rough calculation of how many pages a full-table scan will need to read.
Whether each page is entirely full or contains some empty space depends on the fill factor, so that rough calculation should be adjusted to take the fill factor into account.
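As a rough sketch of the arithmetic (the table size and fill factor below are made-up numbers, not anything from the blog; plug in your own, e.g. DATA_LENGTH from information_schema.tables):

```python
import math

# Assumed inputs -- substitute your own numbers.
table_data_bytes = 50 * 1024**3   # hypothetical 50 GiB of table data
page_size_bytes = 16 * 1024       # Aurora MySQL (InnoDB) page size
fill_factor = 0.85                # assumed average page fullness

# Pages the table would occupy if every page were completely full.
pages_if_full = math.ceil(table_data_bytes / page_size_bytes)

# Adjust for empty space inside pages: fewer rows per page means more pages to read.
pages_estimated = math.ceil(table_data_bytes / (page_size_bytes * fill_factor))

print(f"Pages (100% full): {pages_if_full:,}")
print(f"Pages (at {fill_factor:.0%} fill factor): {pages_estimated:,}")
```

Each page that has to be fetched from storage (rather than found in the buffer pool) is roughly one read I/O, so the page count gives you an upper bound for a cold scan.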
The first full-table scan will take some number of I/Os, resulting in some corresponding cost. But if you then do another table scan, the whole table may already be in the buffer pool, so no I/Os are required (i.e. nothing has to be fetched from storage). For a busy cluster, it might be more economical to use a bigger instance class so that all table and index data stays in the buffer pool all the time. MySQL also has optimizations like the "midpoint insertion strategy" to prevent occasional big queries from pushing frequently accessed data out of the buffer pool.
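One way to check whether your working set actually fits in memory is the buffer pool hit ratio. Here's a minimal sketch, assuming Python with pymysql installed and credentials that can run SHOW GLOBAL STATUS (the connection details are placeholders):

```python
import pymysql

# Placeholder connection details -- point these at your Aurora MySQL endpoint.
conn = pymysql.connect(host="my-cluster.cluster-xxxx.us-east-1.rds.amazonaws.com",
                       user="admin", password="...", database="mysql")

with conn.cursor() as cur:
    cur.execute("SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read%'")
    status = {name: int(value) for name, value in cur.fetchall()}

# Logical reads vs. reads that had to go to storage.
logical_reads = status["Innodb_buffer_pool_read_requests"]
disk_reads = status["Innodb_buffer_pool_reads"]
hit_ratio = 1 - disk_reads / logical_reads

print(f"Buffer pool hit ratio: {hit_ratio:.4%}")
```

A ratio that stays near 100% over a sustained window suggests the working set fits in the buffer pool; if it drops noticeably during big scans, those scans are the ones generating billed read I/Os.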
For a realistic I/O cost estimate, you'd want to measure your actual workload over time and extrapolate. The blog suggests monitoring the '[Billed] Volume Read IOPS (Count)' metric in CloudWatch.
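If you want to pull that metric programmatically, something like this boto3 sketch should work. The cluster identifier and region are placeholders, and my assumption is that the console label '[Billed] Volume Read IOPS (Count)' corresponds to the CloudWatch metric VolumeReadIOPs in the AWS/RDS namespace:

```python
from datetime import datetime, timedelta, timezone
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

end = datetime.now(timezone.utc)
start = end - timedelta(days=7)   # look back a week and extrapolate from there

resp = cloudwatch.get_metric_statistics(
    Namespace="AWS/RDS",
    MetricName="VolumeReadIOPs",   # assumed to be the metric behind '[Billed] Volume Read IOPS (Count)'
    Dimensions=[{"Name": "DbClusterIdentifier", "Value": "my-aurora-cluster"}],  # placeholder cluster id
    StartTime=start,
    EndTime=end,
    Period=3600,                   # hourly data points
    Statistics=["Sum"],
)

total_read_ios = sum(dp["Sum"] for dp in resp["Datapoints"])
print(f"Billed read I/Os over the window: {total_read_ios:,.0f}")
```

Multiply the measured I/O count by your region's per-I/O price (or just watch the billed metric over a representative week) and scale up to get a monthly estimate.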