No database product, no matter how far up the chain you move, will let you share connections and transactions between threads concurrently. You will always have to serialize access to a transaction/connection. What you are asking for makes no sense: is one thread going to commit what the other just rolled back? There are many products that support thousands upon thousands of clients reading and writing the database at the same time. None of them has ever 'shared' connections between threads.
To achieve high scalability you should do the same thing everyone else does:
- use connection pooling to amortize the cost of the login handshake
- associate a connection with a request for the duration of servicing that request (see the sketch after this list)
- use short transactions
- optimize access paths to avoid reading unnecessary data (eliminate all scans)
- use proper locking granularity
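
Here is a minimal sketch of the pool-per-request pattern in plain JDBC, written against the standard `javax.sql.DataSource` interface; the `OrderService` class, the `orders` table and its columns are illustrative names, and the concrete pool implementation (HikariCP, an app-server pool, etc.) is whatever your stack already provides:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import javax.sql.DataSource;

public class OrderService {
    // The pool sits behind a standard DataSource; the concrete
    // implementation is an assumption and not specified here.
    private final DataSource pool;

    public OrderService(DataSource pool) {
        this.pool = pool;
    }

    // One request = one connection checked out from the pool,
    // one short transaction, then the connection goes straight back.
    public void placeOrder(long customerId, long productId, int qty) throws SQLException {
        try (Connection con = pool.getConnection()) {   // borrowed for this request only
            con.setAutoCommit(false);                   // explicit, short transaction
            try (PreparedStatement ps = con.prepareStatement(
                    "INSERT INTO orders(customer_id, product_id, qty) VALUES (?, ?, ?)")) {
                ps.setLong(1, customerId);
                ps.setLong(2, productId);
                ps.setInt(3, qty);
                ps.executeUpdate();
                con.commit();                           // commit as soon as the work is done
            } catch (SQLException e) {
                con.rollback();                         // never leave locks dangling
                throw e;
            }
        }   // try-with-resources returns the connection to the pool here
    }
}
```

The connection is never stored in a field shared between threads; each request borrows one, does its short unit of work, and returns it immediately.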
If the load is read-intensive, consider a versioned isolation model, like read committed snapshot in SQL Server. If the load is update-intensive, you must follow OLTP principles: keep updates in short transactions that take granular locks, and avoid having different transactions work on correlated items, which reduces the probability of deadlocks.
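
For the update-intensive case, here is a minimal sketch of a short, granular update with a bounded retry when SQL Server chooses the statement as a deadlock victim (error 1205); the `accounts` table, the `credit` method, and the retry limit of 3 are illustrative assumptions:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import javax.sql.DataSource;

public class BalanceUpdater {
    private static final int SQLSERVER_DEADLOCK = 1205;   // SQL Server deadlock-victim error
    private final DataSource pool;

    public BalanceUpdater(DataSource pool) {
        this.pool = pool;
    }

    // Short, granular update: touch one row, commit immediately,
    // and retry a bounded number of times if chosen as deadlock victim.
    public void credit(long accountId, long amount) throws SQLException {
        for (int attempt = 1; ; attempt++) {
            try (Connection con = pool.getConnection()) {
                con.setAutoCommit(false);
                try (PreparedStatement ps = con.prepareStatement(
                        "UPDATE accounts SET balance = balance + ? WHERE account_id = ?")) {
                    ps.setLong(1, amount);
                    ps.setLong(2, accountId);
                    ps.executeUpdate();
                    con.commit();                          // release the row lock right away
                    return;
                } catch (SQLException e) {
                    con.rollback();
                    if (e.getErrorCode() == SQLSERVER_DEADLOCK && attempt < 3) {
                        continue;                          // retry the whole short transaction
                    }
                    throw e;
                }
            }
        }
    }
}
```

Because each attempt is its own short transaction on a single row, the lock footprint stays small and a deadlock, if it happens at all, is cheap to retry.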