AWS CloudTrail provides bulk logging of management API calls, but the raw logs are unwieldy: they can only be viewed in the console or downloaded. You can set up single- or multi-region, single- or cross-account "Trails", which can log a) management API calls (read-only, write-only (create/change), or both); b) object-level API calls for selected S3 buckets; and c) API calls for selected Lambda functions. CloudWatch Logs, by contrast, exposes component faults, state-change events, etc. So the former is about security/auditing and the latter about system health, with metrics, graphs and dashboards (including custom ones) on top, plus fairly comfortable log processing and analysis.

Trail logs can be copied to CloudWatch Logs and fetched via API (that was my plan), but only *copied*, because a Trail always stores its logs in an S3 bucket: in ~5-minute chunks, gzipped into files with counter-intuitive names under a hilarious directory (prefix) structure, and nobody (hey, AWS, maybe you? :D) can change that. On top of that, this storage pattern costs two to three times more than hosting a small website on a similar S3 bucket, due to the heavy I/O and encryption. Most importantly, these logs are very hard to process for my purpose, which is catching resource-creating API calls; so unless I find a lower-level approach, I have to import them into CloudWatch Logs, filter there (at additional cost) and export to my tags processor. That is too much machinery for tagging automation that should be part of a low-cost, low-price app. So:
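To illustrate what "hard to process" means in practice: each object in the Trail bucket is a gzipped JSON file with a top-level `Records` array, out of which I would have to pick the creation events myself. A minimal sketch of just that filtering step (plain Python, no AWS calls; the sample record and the `Create*`/`Run*` name heuristic are my assumptions, not a complete list of taggable-resource events):

```python
import gzip
import json

# Heuristic: resource-creating management calls usually start with one of
# these prefixes (an assumption, not an exhaustive list).
CREATE_PREFIXES = ("Create", "Run", "Allocate")

def extract_create_events(gzipped_body: bytes) -> list:
    """Gunzip one CloudTrail log file and keep only creation events."""
    log = json.loads(gzip.decompress(gzipped_body))
    return [
        rec for rec in log.get("Records", [])
        if rec.get("eventName", "").startswith(CREATE_PREFIXES)
    ]

# Fake a tiny log file standing in for an object fetched from the bucket.
sample = gzip.compress(json.dumps({
    "Records": [
        {"eventName": "RunInstances", "eventSource": "ec2.amazonaws.com"},
        {"eventName": "DescribeInstances", "eventSource": "ec2.amazonaws.com"},
    ]
}).encode())

print([r["eventName"] for r in extract_create_events(sample)])  # ['RunInstances']
```

Doing this at scale means listing and fetching thousands of such objects per day, which is exactly the I/O cost I complained about above.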
- How can I skip S3 entirely? Could a deny-access bucket policy help?
- Is it possible to filter out unnecessary events *before* they are fed into CloudWatch Logs?
To be precise: I'd like to log only the creation of taggable resources, in order to implement sophisticated auto-tagging. Maybe this can be solved without CloudWatch at all?
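For reference, if I do end up filtering inside CloudWatch Logs, the kind of filter I have in mind is a subscription/metric filter pattern along these lines (a sketch; the exact set of event names covering all taggable resources is the part I would still have to enumerate):

```
{ ($.eventName = Create*) || ($.eventName = Run*) }
```

But this only filters *after* ingestion, so it does not remove the S3 and ingestion costs, which is why I am asking about lower-level options.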
Thank you in advance; I'm not really deep into AWS logging and I'm short of time.