A very scalable solution (see arch diagram) to what you are describing is similar to what you have, but if I were building this I would use Kafka (or managed Kafka) with a connector to your database, e.g. the Debezium connector for MySQL. The connector performs change data capture (CDC) and streams those change events into Kafka topics.
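To make that concrete, here is a minimal sketch of what registering a Debezium MySQL connector looks like. The hostnames, credentials, and table names are placeholders, and exact property names vary between Debezium versions, so treat this as illustrative rather than copy-paste ready:

```json
{
  "name": "mysql-cdc-connector",
  "config": {
    "connector.class": "io.debezium.connector.mysql.MySqlConnector",
    "database.hostname": "mysql.internal",
    "database.port": "3306",
    "database.user": "cdc_user",
    "database.password": "********",
    "database.server.id": "5400",
    "topic.prefix": "appdb",
    "table.include.list": "appdb.orders,appdb.customers",
    "schema.history.internal.kafka.bootstrap.servers": "kafka:9092",
    "schema.history.internal.kafka.topic": "schema-changes.appdb"
  }
}
```

You'd POST that to Kafka Connect, and from then on every committed row change in the listed tables shows up as an event on a topic like `appdb.appdb.orders`.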
The reason Kafka matters here vs. Amazon MQ or another queue solution like RabbitMQ is ordering. Kafka guarantees that events within a partition are delivered in order, and because events with the same key always land on the same partition, per-key ordering holds even as you scale out consumers in a consumer group. RabbitMQ and managed MQ can make similar guarantees for a single consumer, but that breaks down once you introduce many competing consumers of the event stream.
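Here's a small sketch of why keyed partitioning preserves per-row ordering. The hash below is illustrative (Kafka's default partitioner uses murmur2, not MD5), but the mechanism is the same: every event for a given key lands on one partition, and each partition is read by exactly one consumer in a group, so changes to a single row are never reordered no matter how many consumers you add:

```python
import hashlib

NUM_PARTITIONS = 3

def partition_for(key: str) -> int:
    # Deterministic hash: the same key always maps to the same partition
    digest = hashlib.md5(key.encode()).digest()
    return int.from_bytes(digest[:4], "big") % NUM_PARTITIONS

# CDC events for two database rows, in commit order
events = [
    ("order:42", "INSERT"),
    ("order:42", "UPDATE"),
    ("order:7",  "INSERT"),
    ("order:42", "DELETE"),
]

# Route each event to its partition, preserving arrival order per partition
partitions = {p: [] for p in range(NUM_PARTITIONS)}
for key, op in events:
    partitions[partition_for(key)].append((key, op))

# All changes to order:42 sit on one partition, still in commit order
p42 = partition_for("order:42")
print([op for k, op in partitions[p42] if k == "order:42"])
# → ['INSERT', 'UPDATE', 'DELETE']
```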
As for the messaging component of your solution, it would be more pragmatic and scalable to have a webhook service (or similar) consume your Kafka topics and record the events to its own database. That way the event stream is something downstream systems subscribe to, rather than something a service has to push while knowing exactly who its consumers are.
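A minimal sketch of that "record first, then fan out" consumer, using SQLite for brevity; the `handle_event` name and the event shape are assumptions, and the actual webhook delivery (HTTP POST per registered subscriber, with retries) is left as a comment:

```python
import json
import sqlite3

# The consumer's own event store; subscribers replay from here,
# so the producer never needs to know who they are.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE events (seq INTEGER PRIMARY KEY, topic TEXT, payload TEXT)"
)

def handle_event(topic: str, payload: dict) -> None:
    # 1. Durably record the event before any delivery attempt
    conn.execute(
        "INSERT INTO events (topic, payload) VALUES (?, ?)",
        (topic, json.dumps(payload)),
    )
    conn.commit()
    # 2. Fan out to registered webhooks here (HTTP POST, retry with
    #    backoff on failure) -- omitted in this sketch.

# Pretend these arrived from the Kafka consumer loop
handle_event("mysql.orders", {"op": "c", "id": 42})
handle_event("mysql.orders", {"op": "u", "id": 42})

rows = conn.execute("SELECT seq, payload FROM events ORDER BY seq").fetchall()
print(rows)
```

Because every event is persisted with a monotonically increasing sequence number before delivery, a new subscriber can catch up from any point in history instead of only seeing events sent after it registered.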
So your gut is correct: the current design is not especially scalable, although it is not a "bad" solution by any means. I think the two recommendations above would move you forward on scalability.
I hope this helped.
Cheers.
