Application servers were cooked up as an idea to help ration a scarce resource: compute power in the middle tier. The idea came from mainframe land, where CPU was scarce and expensive, and therefore a great deal of time, effort, and money was spent on dicing up the mainframe CPU among various users, throttling workload, keeping load off the database, "passivating" transactions until load diminished after hours, and so on. In those days people spent millions of dollars on software that kept track of transactions on the mainframe, so as to be able to perform "chargeback" - internal cost accounting for the use of the expen$ive mainframe. Yes, people spent boatloads of money just so they could bill internal departments for use of the mainframe.
The thing is, Intel (and later AMD), Cisco (et al), EMC, Microsoft, and Linux rendered the entire idea moot. Computing became cheap. Really cheap. There is really, really, really no need to ration mid-tier compute resources. What does a dual-CPU server go for these days? How many of those could you deploy for the annual salary of ONE IT guy? This is an inversion of the old economics of mainframe computing, where compute was the expensive, scarce resource and people were relatively (!!) cheap. Now people are the expensive part, and compute is cheap.
App servers, and all the bells and whistles they offer for restricting access to compute, parceling it out, throttling it, or even "monitoring" transactions for chargeback purposes... none of that is necessary when you have racks of cheap AMD servers.
Another thing app servers did was shield the database from workload - essentially, offload work from the db server. Time was, developers put the business logic in a stored procedure and, boom, there was your app. Back then, that approach had scalability issues. Now, though, stored procs are fast and efficient, and database servers can scale up cheaply. Chances are, your workload volume is not one of the top 100 in the world, and anything smaller than that can be borne on Intel hardware, with stored proc logic.
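As a rough sketch of what "the stored proc is the app" looks like from the mid-tier side, here is a minimal JDBC caller. The procedure name (place_order), its parameters, the connection URL, and the credentials are all hypothetical, not taken from any particular system; the point is simply that the business logic lives in the procedure and the caller just invokes it.

    import java.sql.CallableStatement;
    import java.sql.Connection;
    import java.sql.DriverManager;

    public class OrderClient {
        public static void main(String[] args) throws Exception {
            // Connection URL, credentials, and the proc itself are hypothetical.
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:sqlserver://dbhost;databaseName=orders", "app", "secret")) {

                // All the business logic (pricing, inventory check, insert)
                // lives inside the stored procedure; the mid-tier just calls it.
                try (CallableStatement call = conn.prepareCall("{call place_order(?, ?, ?)}")) {
                    call.setInt(1, 42);            // customer id
                    call.setString(2, "SKU-1001"); // product
                    call.setInt(3, 3);             // quantity
                    call.execute();
                }
            }
        }
    }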
One major problem with the stored proc approach, though, is that it is still fairly hard to author stored procs in mainstream languages (Java, C#, VB, etc.) and to manage them. Yes, I know about SQL CLR and Java VMs managed by the DB. But those aren't mainstream approaches. Also, the DB admin doesn't like code jockeys messing up his utilization graphs. For these reasons there is still a desire to write business logic in a separate language dedicated to the purpose, and to run and manage that logic on dedicated compute resources. But... a traditional "app server"? No. It doesn't make sense.
Put all your logic on an Intel server and let 'er rip! If you need more scale, clone the box. All the business logic is stateless (right?), so you can scale OUT. Use 3 "app" server machines, or 4 or 5, or however many you need, all running exactly the same code - clones of each other. But regardless of how many business-logic machines you have, do not physically distribute a single piece of work across them. For max scalability, strive to keep each transaction on a single box. That is a recipe for efficiency and optimal use of resources. A sketch of what that kind of clone-friendly logic looks like follows.
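To make the "stateless (right?)" point concrete, here is a hedged sketch of clone-friendly business logic: no instance fields, no per-user session state held in memory, every call carries everything it needs. The class and method names are illustrative, not from any particular framework.

    // A stateless business-logic class: no fields, no session state in memory.
    // Any clone of this box can service any request, so adding capacity
    // means adding identical machines behind a load balancer.
    public class PricingService {

        // Everything the calculation needs arrives as arguments;
        // anything durable goes to the database, not to memory on this box.
        public double quote(String sku, int quantity, double listPrice, double discountRate) {
            double gross = listPrice * quantity;
            double discount = quantity >= 100 ? gross * discountRate : 0.0;
            return gross - discount;
        }
    }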
It is a best practice to use logical tiers in your application architecture. This makes it easier to develop and maintain. But do not suppose that logical separation must imply, or even recommend, physical separation. It does not.
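As an illustration of that last point (all names hypothetical), logical layering can be nothing more than separate classes or packages calling each other in-process. The tiers live in the code structure, not on separate machines, so the calls between them are plain local method calls rather than remote hops.

    // Three logical tiers - presentation, business logic, data access -
    // living in one process on one box.

    interface OrderRepository {                  // data-access tier
        void save(String customerId, String sku, int qty);
    }

    class OrderService {                         // business-logic tier
        private final OrderRepository repo;
        OrderService(OrderRepository repo) { this.repo = repo; }

        void placeOrder(String customerId, String sku, int qty) {
            if (qty <= 0) throw new IllegalArgumentException("quantity must be positive");
            repo.save(customerId, sku, qty);
        }
    }

    class OrderEndpoint {                        // presentation tier
        private final OrderService service;
        OrderEndpoint(OrderService service) { this.service = service; }

        void handleRequest(String customerId, String sku, int qty) {
            service.placeOrder(customerId, sku, qty);   // local call, same box
        }
    }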