Sure, this is what we do. We have two web servers, A and B, with haproxy sitting in front of them, dutifully balancing requests between them. Like you, we don't store session data in a way that ties a request to its originating server.
When we want to do an upgrade, we take Server A out of the pool, perform the upgrade, and test the site internally. Then we put Server A back in the pool and, at the same moment, take Server B out. We then upgrade Server B and put it back in.
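To make that concrete, here's a rough sketch of how you could script the pool switching through haproxy's admin stats socket. The socket path, the backend name "web", the server names "a"/"b", and the deploy/smoke-test scripts are all placeholders for whatever your setup actually uses:

    #!/usr/bin/env python3
    """Rough sketch of the rolling upgrade described above.

    Assumes haproxy exposes an admin-level stats socket at
    /var/run/haproxy.sock, the backend is named "web", and the servers
    are named "a" and "b". deploy.sh and smoke_test.sh stand in for
    your own automation.
    """
    import socket
    import subprocess

    HAPROXY_SOCKET = "/var/run/haproxy.sock"   # needs "stats socket ... level admin"
    BACKEND = "web"                            # hypothetical backend name


    def haproxy_cmd(command):
        """Send one command to haproxy's stats socket and return its reply."""
        with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
            s.connect(HAPROXY_SOCKET)
            s.sendall((command + "\n").encode())
            return s.recv(4096).decode()


    def deploy_to(server):
        """Placeholder: run your real deployment and internal tests on one server."""
        subprocess.run(["./deploy.sh", server], check=True)      # hypothetical script
        subprocess.run(["./smoke_test.sh", server], check=True)  # hypothetical script


    for server in ("a", "b"):
        # Pull the server out of rotation; haproxy keeps sending traffic to the other one.
        haproxy_cmd(f"disable server {BACKEND}/{server}")
        deploy_to(server)
        # Put it back before moving on, so an upgraded server is always live.
        haproxy_cmd(f"enable server {BACKEND}/{server}")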
Upside: Site upgrades with 0 downtime.
Downside: During the upgrade you have no load balancing or redundancy. To solve this you have to bring more servers into the mix.
As to whether or not it's worth it, only you can answer that. Our software is incredibly complex, a deployment can take upwards of 5 minutes (and that's with it fully automated), and it's in use pretty much 24/7, so it was a no-brainer for us.
If your site only gets traffic sporadically throughout the day, or mostly during, say, working hours, and you're not pushing the limits of what a single web server can handle, then it might not be worth the hassle. Throwing up a maintenance banner for 10 seconds while svn export runs might be a more appropriate solution.
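If you do go that route, the whole thing can be a few lines. This sketch assumes your front end shows a maintenance page whenever a flag file exists (e.g. via a rewrite rule); the flag path, repo URL, and docroot below are stand-ins for your own:

    #!/usr/bin/env python3
    """Minimal sketch of the "maintenance banner" alternative.

    Assumes the web server serves a maintenance page while
    MAINTENANCE_FLAG exists, and the site lives in DOCROOT.
    All paths and the repo URL are hypothetical.
    """
    import subprocess
    from pathlib import Path

    MAINTENANCE_FLAG = Path("/var/www/maintenance.flag")
    REPO_URL = "https://svn.example.com/site/trunk"
    DOCROOT = "/var/www/site"

    MAINTENANCE_FLAG.touch()                       # banner goes up
    try:
        # --force lets svn export write into the existing docroot.
        subprocess.run(["svn", "export", "--force", REPO_URL, DOCROOT], check=True)
    finally:
        MAINTENANCE_FLAG.unlink(missing_ok=True)   # banner comes down, even on failure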