If you are using MDBs as defined in the Java EE spec with the @MessageDriven
annotation, then it is up to the server container to manage the actual instantiation and scaling of these beans. I am not that familiar with WebSphere, but most servers have a notion of EJB pooling, which roughly translates to a thread pool, giving you parallel execution out of the box. This way, the server has a set of instances ready to process the messages in your queue. Each bean instance is only active for the time required to execute its onMessage
method; after that it is cleaned up and returned to the pool. So let's say you have a pool of MDBs of size 20. If there are more than 20 messages waiting in the queue, the server will use up all of the available instances and process 20 messages simultaneously.
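For reference, a minimal MDB sketch is shown below. It only runs inside a container, and the queue name jms/OrderQueue and the class name OrderProcessor are made-up examples:

```java
import javax.ejb.ActivationConfigProperty;
import javax.ejb.MessageDriven;
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.TextMessage;

// A minimal message-driven bean; the container instantiates and pools it.
// "jms/OrderQueue" is a hypothetical destination for illustration.
@MessageDriven(activationConfig = {
    @ActivationConfigProperty(propertyName = "destinationType",
                              propertyValue = "javax.jms.Queue"),
    @ActivationConfigProperty(propertyName = "destinationLookup",
                              propertyValue = "jms/OrderQueue")
})
public class OrderProcessor implements MessageListener {

    // Called by the container on a pooled instance; several instances
    // may run onMessage concurrently, one per pooled bean.
    @Override
    public void onMessage(Message message) {
        try {
            if (message instanceof TextMessage) {
                String body = ((TextMessage) message).getText();
                // process the payload here
            }
        } catch (JMSException e) {
            throw new RuntimeException(e);
        }
    }
}
```

Note that the bean class itself says nothing about concurrency; the pool size alone determines how many of these run in parallel.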
In WildFly/JBoss, for example, you manage your EJB pools using the ejb3 subsystem and its corresponding pool settings:
<subsystem xmlns="urn:jboss:domain:ejb3:4.0">
    <!-- omitted for brevity... -->
    <mdb>
        <resource-adapter-ref resource-adapter-name="${ejb.resource-adapter-name:activemq-ra.rar}"/>
        <bean-instance-pool-ref pool-name="mdb-strict-max-pool"/>
    </mdb>
    <pools>
        <bean-instance-pools>
            <strict-max-pool name="mdb-strict-max-pool" derive-size="from-cpu-count" instance-acquisition-timeout="5" instance-acquisition-timeout-unit="MINUTES"/>
        </bean-instance-pools>
    </pools>
    <!-- omitted for brevity... -->
</subsystem>
Here we specify that message-driven beans should use a pool named mdb-strict-max-pool
that derives its size from the number of CPUs on the system. You can also specify an absolute value, e.g. max-pool-size="20".
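A fixed-size variant of the same pool could look like this (a sketch; the size of 20 is an arbitrary choice):

```xml
<pools>
    <bean-instance-pools>
        <!-- at most 20 MDB instances, i.e. 20 messages processed concurrently -->
        <strict-max-pool name="mdb-strict-max-pool" max-pool-size="20"
                         instance-acquisition-timeout="5"
                         instance-acquisition-timeout-unit="MINUTES"/>
    </bean-instance-pools>
</pools>
```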
All this is only relevant if you are running the queue on a single server instance. If you are building a really message-intensive application, chances are you will need distributed messaging, with a dedicated message broker and multiple processing instances. While many servers support such scenarios (e.g. a WildFly ActiveMQ cluster), that is really a topic for another discussion.
For more info, have a look at the MDB tutorial and your server's documentation.
Happy hacking.