False alerts were very common in my experience. There is an ongoing process to tweak filters and monitors to avoid them, but something was always slipping through (and usually more than one something). The frequency of false positives lengthened the response time on legitimate alerts (i.e., a lot of people would ignore anything until it had occurred three times). Your implementation would also need to simulate a wide range of legitimate alerts from each specific system; otherwise I foresee admins quickly learning which ones are the app-generated false positives ("ignore the 'root account modified' alert", "ignore the alert at 13:18", etc.), with possible negative repercussions (oh no! the root account really was modified).
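To avoid that fingerprinting, the injected alerts would need to vary both in content and in timing. Here's a minimal sketch of what I mean; the template strings, host/user names, and the idea of drawing from a per-system pool are all hypothetical illustrations, not anything from a real product:

```python
import random
import datetime

# Hypothetical pool of templates modeled on the real alerts each system produces.
TEMPLATES = [
    "root account modified on {host}",
    "multiple failed logins for {user} on {host}",
    "outbound connection to known-bad IP from {host}",
    "unexpected service restart: sshd on {host}",
]

def pick_synthetic_alert(host: str, user: str) -> tuple[str, datetime.datetime]:
    """Pick a random template and a random time-of-day so the synthetic
    alerts can't be recognized by type or by schedule."""
    text = random.choice(TEMPLATES).format(host=host, user=user)
    fire_at = datetime.datetime.now().replace(
        hour=random.randrange(24), minute=random.randrange(60), second=0, microsecond=0)
    return text, fire_at

# Example: schedule one varied drill alert for a given system.
print(pick_synthetic_alert("web01", "jsmith"))
```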
But to address the utility of intentionally inserting false alerts: it seems like something of value from an HR/management perspective. It's not as though I had analysts losing focus because nothing was getting reported, and as above, my experience has been that false positives diminish response rather than making people realize they've been unfocused. However, as a manager, tracking how these alerts are handled (response time, resolution) could provide a useful metric for evaluations and for identifying under-performing employees.
From a technical perspective, there are a few times I've intentionally inserted alerting items into system log files -- not for individuals to review, but as something I would programmatically close out, alerting only if the inserted alarm wasn't received. The purpose was to ensure the end-to-end process (logging, log ingestion, data analysis, alert creation) was functioning properly. It's also nice to be able to generate alerts on a particular system to test a new component or workflow.
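As a rough sketch of that canary approach: write a uniquely tagged event into the logs, wait for the pipeline to run, then check whether the alert actually came out the other end. The alert-store path, the wait time, and the "root account modified" rule are assumptions you'd replace with whatever your SIEM or ticketing system exposes:

```python
#!/usr/bin/env python3
"""Canary-alert check: inject a marker event, then confirm the pipeline raised it."""

import time
import uuid
import syslog

def inject_canary() -> str:
    """Write a uniquely tagged event that an existing alerting rule is known to match."""
    marker = f"CANARY-{uuid.uuid4()}"
    # Assumes a rule already fires on 'root account modified' messages.
    syslog.syslog(syslog.LOG_AUTH | syslog.LOG_WARNING,
                  f"{marker} root account modified (synthetic end-to-end test)")
    return marker

def alert_was_raised(marker: str) -> bool:
    """Hypothetical check against wherever fired alerts are stored;
    swap in a query to your SIEM or ticketing API."""
    try:
        with open("/var/log/siem/alerts.log") as fh:  # assumed path
            return any(marker in line for line in fh)
    except FileNotFoundError:
        return False

if __name__ == "__main__":
    marker = inject_canary()
    time.sleep(300)  # give logging, ingestion, and analysis time to run
    if alert_was_raised(marker):
        print(f"{marker}: pipeline OK -- auto-closing the synthetic alert")
    else:
        # The canary never arrived: something in the end-to-end chain is broken.
        print(f"{marker}: NO ALERT RECEIVED -- escalate to the monitoring team")
```

Run on a schedule, the only time a human ever sees this is when the synthetic alert fails to arrive, which is exactly the case you care about.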