As TomTom and ThatGraemeGuy have pointed out, your design assumption (that you can achieve what you want with DRBD) is flawed. DRBD is useful for synchronizing block devices (so says the name: Distributed Replicated Block Device). You could theoretically use it in your current scenario as TomTom describes (async mode, for things where it's OK to lose some data), but you're not describing anything where that situation would exist.
It also seems that you're making this far more complicated than it needs to be: It sounds like you just need a simple "Primary/Secondary" environment.
For Web Stuff
Web sites are changed periodically. It's easy enough to make a back-up of the web site that can be restored to a remote server (or store everything in a version control system, or use a configuration management system to "push" the website to multiple servers).
For Databases
Web sites are often database-backed these days (which is usually the only part that changes "continuously") - but every database engine worth using has some kind of replication capability (you say you're already using this).
Configured properly, DB replication is far better than DRBD for replicating databases, because the remote DB engine guarantees the same ACID properties as the master server.
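If you happen to be on MySQL, for example, classic master/slave replication is just a few lines of configuration. Everything below (server IDs, hostnames, user, log coordinates) is illustrative, not taken from your setup:

```
# my.cnf on the master
[mysqld]
server-id = 1
log_bin   = mysql-bin

# my.cnf on the slave
[mysqld]
server-id = 2
```

Then on the slave, point it at the master and start replication:

```
CHANGE MASTER TO
  MASTER_HOST='db1',
  MASTER_USER='repl',
  MASTER_PASSWORD='...',
  MASTER_LOG_FILE='mysql-bin.000001',
  MASTER_LOG_POS=4;
START SLAVE;
```

Other engines (PostgreSQL, etc.) have equivalent built-in mechanisms.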
For Email
As TomTom said in his answer, there's no point in replicating your outgoing mail spool.
If you lose the master server with one or two emails in the queue, your users can re-send them - and it's a corner case anyway, because unless the recipient's server is down the email is off your system within a few seconds. Not worth worrying about.
People's mailboxes are another story: here you're going to want backups (or a mail system that supports replication). This may mean there's an hour or a day where people don't have access to their old email when you fail over to the secondary server (while you restore the old messages), but that's usually OK because they're still getting their current emails. If the restore time is not acceptable you can continuously restore the backups to the secondary server (or use something like rsync to keep the mailboxes in sync every few hours).
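The "rsync every few hours" option is just a cron job. A crontab sketch - the `mail1` hostname and the paths are placeholders, and this assumes Maildir-style storage (one file per message), which rsync handles well:

```
# On the secondary mail server: pull mailboxes from the primary
# every 4 hours. Host and paths are illustrative.
0 */4 * * *  rsync -a --delete mail1:/var/vmail/ /var/vmail/
```

On failover, users see everything up to the last sync plus all newly arriving mail.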
There are a couple of edge cases in what I described above that you should be aware of.
One: if your servers are "very busy" you may need to load balance properly (using something like HAProxy to distribute web requests between "front-end" servers, and moving mail and the DB onto their own servers). That's how you scale out properly.
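For reference, the HAProxy piece of that picture is only a few lines of haproxy.cfg. This is a minimal sketch - the server names and addresses are placeholders:

```
# haproxy.cfg fragment - illustrative only
frontend http-in
    bind *:80
    default_backend webfarm

backend webfarm
    balance roundrobin
    option httpchk GET /
    server web1 10.0.0.11:80 check
    server web2 10.0.0.12:80 check
```

The `check` keyword makes HAProxy health-check each back-end, so a dead front-end server is pulled out of rotation automatically.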
Techy computer-sciency explanation: with the DRBD hackery, the bandwidth requirements are close to O(N^2), where N = the number of nodes; the solution I've outlined is roughly O(N), where N = the number of DR sites (and the number of DR sites is not likely to exceed 2).
Two: if your web servers write data to the local file system you will need to re-architect that part of the solution - store the files in the database, in a NoSQL store like MongoDB, or on a central storage server (over NFS or something similar, possibly replicating THIS with async DRBD to your off-site location for near-real-time disaster recovery). Basically, you need some solution that ensures the local-file writes are made available to all the other "front end" servers.
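That last parenthetical - async DRBD on the central storage server only - is the one place DRBD does fit here. A sketch of the resource definition (hostnames, devices, and addresses are all placeholders):

```
# /etc/drbd.d/shared.res - illustrative only
resource shared {
    protocol A;              # A = asynchronous: writes complete locally,
                             # replication to the peer lags slightly
    on store1 {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address   10.0.0.21:7788;
        meta-disk internal;
    }
    on store2 {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address   10.0.0.22:7788;
        meta-disk internal;
    }
}
```

Note this replicates one storage server to one DR peer - a single block device, which is exactly the job DRBD was built for - rather than trying to keep a whole farm of servers in sync with it.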