It depends :)
Transactions put a limit on the number of concurrent operations your system can handle. Whether that limit is actually a problem depends on your use cases and on the implementation details of your database.
On the other hand, transactions make things much easier.
Reading a comment on another answer, I saw:
"eventual consistency cannot be used because then the Customer would be able to use one discount for multiple orders"
In a distributed system (modelled using DDD), the only way to guarantee this is to have the Discount and the Order under the same aggregate, because the aggregate defines the consistency boundary: you can check invariants against the same data that will be stored, atomically.
Using a transaction, you are (in a way) expanding the boundary of your aggregate to include both the Order and the Discount, since no concurrent operation can be executed on the two entities (because of the transaction locks).
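To make the "aggregate as consistency boundary" idea concrete, here is a minimal sketch. The names (CustomerDiscount, place_order, DiscountAlreadyUsed) are illustrative assumptions, not from any specific framework: the point is only that the invariant is checked against the same state that will be stored, in one atomic step.

```python
class DiscountAlreadyUsed(Exception):
    pass


class CustomerDiscount:
    """Hypothetical aggregate holding both the discount state and the
    orders that used it, so the invariant can be checked atomically."""

    def __init__(self, discount_code: str):
        self.discount_code = discount_code
        self.order_ids: list[str] = []

    def place_order(self, order_id: str) -> None:
        # Invariant checked on the same data that will be stored:
        # the discount may back at most one order.
        if self.order_ids:
            raise DiscountAlreadyUsed(self.discount_code)
        self.order_ids.append(order_id)


# Usage: a second order inside the same aggregate is rejected.
agg = CustomerDiscount("SUMMER10")
agg.place_order("order-1")
try:
    agg.place_order("order-2")
    rejected = False
except DiscountAlreadyUsed:
    rejected = True
```

Because both entities live inside one aggregate, no event dispatching or compensation is needed for this invariant; the trade-off is a bigger aggregate and, with it, more contention.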
Opening up to eventual consistency usually means treating the inconsistencies as business domain rules.
One way to do it is to have explicit rules for what happens when a Discount is used twice.
This can be done in the process manager handling the event: when it tries to "Deactivate" the Discount, the command is rejected with "AlreadyDisabled".
Knowing that a rejection with AlreadyDisabled is possible, the ProcessManager can at that point cancel the Order, change it in some way, notify some other system, or whatever the best strategy is (from the business perspective). In that case, the order-creation process explicitly takes into account the fact that a discount can be used a second time.
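The compensation described above can be sketched as follows. Everything here is a hypothetical assumption for illustration (DiscountService, OrderService, the event and command names); the one real idea is the shape of the process manager: try the command, and on AlreadyDisabled apply the business rule instead of failing.

```python
class AlreadyDisabled(Exception):
    pass


class DiscountService:
    """Stand-in for the component owning Discounts."""

    def __init__(self):
        self.disabled: set[str] = set()

    def deactivate(self, code: str) -> None:
        if code in self.disabled:
            raise AlreadyDisabled(code)  # command rejected
        self.disabled.add(code)


class OrderService:
    """Stand-in for the component owning Orders."""

    def __init__(self):
        self.cancelled: list[str] = []

    def cancel(self, order_id: str, reason: str) -> None:
        self.cancelled.append(order_id)


class OrderDiscountProcessManager:
    """Handles an 'order placed' event: tries to deactivate the discount
    and, on AlreadyDisabled, compensates by cancelling the order."""

    def __init__(self, discounts: DiscountService, orders: OrderService):
        self.discounts = discounts
        self.orders = orders

    def on_order_placed(self, order_id: str, discount_code: str) -> None:
        try:
            self.discounts.deactivate(discount_code)
        except AlreadyDisabled:
            # Business rule: a discount used a second time cancels the order.
            self.orders.cancel(order_id, reason="discount already used")


pm = OrderDiscountProcessManager(DiscountService(), OrderService())
pm.on_order_placed("order-1", "SUMMER10")  # succeeds, discount deactivated
pm.on_order_placed("order-2", "SUMMER10")  # rejected, order compensated
```

Cancelling is only one possible policy; the same hook could notify support, re-price the order, or anything else the business decides.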
Obviously the technical implementation of event dispatching should minimize the chance of that happening, but it will still be possible (we are talking about handling 100% of the cases).
Transactions make handling these cases easier, but put a limit on the scale the system can reach.
Solutions that allow the system to scale further need to manage a lot of details and require more effort to implement.
As a last thing, domain events can be modelled and used in such a way that, when an aggregate is stored, its events get published and you have a single transaction spanning both the aggregate change and all the operations done by the event listeners (process managers).
The good thing about this is that you decouple the Order and the Discount: the parts of the system managing them don't have to know about each other, it can be simpler to add other processing, and you can test processes in isolation (you can publish an event to a process manager manually, without having to deal with the Order at all).
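A minimal in-memory sketch of that last idea, under loud assumptions: the Store class, its API, and the dict-based events are all made up for illustration, and a real implementation would rely on the database's transaction to roll back the listeners' writes too. What it shows is the shape: saving the aggregate and running every subscribed listener happen as one all-or-nothing step.

```python
from typing import Callable

Event = dict


class Store:
    """Hypothetical store that saves an aggregate and dispatches its
    events to listeners inside one simulated 'transaction'."""

    def __init__(self):
        self.aggregates: dict[str, dict] = {}
        self.listeners: list[Callable[[Event], None]] = []

    def subscribe(self, listener: Callable[[Event], None]) -> None:
        self.listeners.append(listener)

    def save(self, agg_id: str, state: dict, events: list[Event]) -> None:
        snapshot = dict(self.aggregates)  # kept for rollback
        self.aggregates[agg_id] = state
        try:
            for event in events:
                for listener in self.listeners:
                    listener(event)  # runs in the same "transaction"
        except Exception:
            # Any listener failure undoes the aggregate change as well.
            self.aggregates = snapshot
            raise


seen: list[str] = []
store = Store()
store.subscribe(lambda e: seen.append(e["type"]))

# Happy path: the save and the listener run together.
store.save("order-1", {"status": "placed"}, [{"type": "OrderPlaced"}])


# Failing listener: the whole save is rolled back.
def failing(event: Event) -> None:
    raise RuntimeError("listener failed")


store.subscribe(failing)
try:
    store.save("order-2", {"status": "placed"}, [{"type": "OrderPlaced"}])
except RuntimeError:
    pass  # order-2 was never stored
```

This is also what makes the isolated testing mentioned above cheap: a process manager subscribed to the store can be exercised by publishing an event by hand, with no Order aggregate involved.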
What's the best solution? It's a matter of trade-offs for your use case.