If you're asking which isolation level will make the sample code work as it stands, rather than what the best way is to solve the problem the sample code addresses, you would need the guarantees of at least REPEATABLE READ.
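To make that concrete, here's a minimal sketch of what that transaction might look like, assuming PostgreSQL syntax and a hypothetical `showing` table with `showing_id`, `sold`, and `capacity` columns standing in for whatever the sample code actually uses:

```
BEGIN;
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;

-- The snapshot is taken at the first query and held for the whole transaction.
SELECT sold, capacity FROM showing WHERE showing_id = 1;

-- If a concurrent transaction has committed a change to this row in the
-- meantime, PostgreSQL raises a serialization failure (SQLSTATE 40001) here
-- instead of silently over-selling; the application must be ready to retry.
UPDATE showing SET sold = sold + 2 WHERE showing_id = 1;

COMMIT;
```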
Databases which use strict two-phase locking (S2PL) for concurrency allow READ COMMITTED transactions to drop shared locks at the completion of each statement, or even earlier. So between the time transaction A checks availability and the time it claims the seats, transaction B could come through and read the same availability, without causing either transaction to fail. Transaction A might block transaction B briefly, but both would update, and you could be over-sold.
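Here is that problematic interleaving written out against the same hypothetical table (the session labels are just comments; the two sessions run concurrently):

```
-- session A
BEGIN;
SELECT sold, capacity FROM showing WHERE showing_id = 1;  -- sees 98 of 100 sold
-- under S2PL at READ COMMITTED, A's shared lock is released here

-- session B
BEGIN;
SELECT sold, capacity FROM showing WHERE showing_id = 1;  -- also sees 98 of 100

-- session A
UPDATE showing SET sold = sold + 2 WHERE showing_id = 1;  -- claims the last two

-- session B
UPDATE showing SET sold = sold + 2 WHERE showing_id = 1;  -- blocks on A's lock

-- session A
COMMIT;  -- releases the lock; B's update proceeds and sold becomes 102

-- session B
COMMIT;  -- neither transaction failed, and you are over-sold
```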
In databases using multi-version concurrency control (MVCC), reads don't block writes and writes don't block reads. At READ COMMITTED, each statement uses a fresh snapshot of the database based on what has committed, and in at least some implementations (I know this is true of PostgreSQL), concurrent writes are resolved without error. So even if transaction A were in the process of updating the sold count, or had done so without yet committing, transaction B would see the old count and proceed to its own update. B's update could block waiting for A's, but once A committed, B would find the new version of the row, check whether it still meets the statement's selection criteria, update it if so and ignore it if not, and proceed to commit without error. So, again, you are over-sold.
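The same interleaving under MVCC at READ COMMITTED, again in PostgreSQL terms, looks like this:

```
-- session A
BEGIN;
UPDATE showing SET sold = sold + 2 WHERE showing_id = 1;  -- not yet committed

-- session B: its statement snapshot still shows the old, committed count
BEGIN;
SELECT sold, capacity FROM showing WHERE showing_id = 1;  -- sees 98 of 100
UPDATE showing SET sold = sold + 2 WHERE showing_id = 1;  -- blocks on A's row lock

-- session A
COMMIT;

-- session B wakes up, finds the new row version, re-checks its WHERE clause
-- (showing_id = 1 still matches), applies its update on top of A's, and commits
COMMIT;  -- no error, sold is 102: over-sold again
```

Note that the re-check applies only to the UPDATE's own WHERE clause; the availability check lived in a separate SELECT, so it is never re-evaluated.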
I guess that answers Q2, if you choose to use transaction isolation. The problem can be solved at a lower isolation level by modifying the example code to take explicit locks, but that will usually cause more blocking than using an isolation level which is strict enough to handle it automatically.
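For completeness, here's a sketch of the explicit-lock approach at READ COMMITTED, using the same hypothetical table; SELECT ... FOR UPDATE is supported by PostgreSQL and several other databases, but check yours:

```
BEGIN;  -- READ COMMITTED is sufficient once the row is explicitly locked

-- FOR UPDATE takes a row lock that is held until commit, so no concurrent
-- transaction can run its own check-and-claim against this row in between.
SELECT sold, capacity FROM showing WHERE showing_id = 1 FOR UPDATE;

-- The application verifies sold + 2 <= capacity on the values just read,
-- and only then claims the seats.
UPDATE showing SET sold = sold + 2 WHERE showing_id = 1;

COMMIT;
```

The cost is that every transaction touching that row queues behind the lock for the entire check-and-claim, which is the extra blocking mentioned above.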