First and foremost, the logic behind why DRS moves something is very complicated, so trying to figure out why it made a particular decision is usually the path to madness.
That being said, lowering the aggressiveness (the migration threshold) is the usual fix when DRS is a bit too trigger-happy, unless there's some other obvious underlying issue, like a VM sized close to the maximum configuration of a host (VMware isn't a very happy camper if you assign 90% of a host's resources to a single VM). Lowering it isn't risky either: DRS will still kick in regardless if any host becomes too congested, it'll just be less aggressive about rebalancing. As I said above, because DRS weighs so many factors, the threshold isn't really comparable between environments; 3 is usually a good starting point, but some environments need it dropped down a notch or two.
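To make that "less aggressive, but still kicks in" behaviour concrete, here's a toy sketch in plain Python. This is not the real DRS algorithm or any vSphere API; the function name, the tolerance mapping, and the congestion cutoff are all made up purely to illustrate the idea that a lower aggressiveness tolerates more imbalance, while a badly congested host triggers a move at any setting.

```python
# Toy model of a DRS-style migration threshold. NOT the real DRS
# algorithm (which is proprietary and weighs many more factors);
# it only shows how lower aggressiveness tolerates more imbalance
# before recommending a move.

def should_recommend_move(host_loads, aggressiveness):
    """host_loads: CPU utilisation per host (0.0-1.0).
    aggressiveness: 1 (conservative) .. 5 (aggressive)."""
    imbalance = max(host_loads) - min(host_loads)
    # Higher aggressiveness -> smaller imbalance tolerated (made-up mapping).
    tolerance = 0.6 - 0.1 * aggressiveness
    # A congested host triggers a move at any setting.
    congested = max(host_loads) > 0.9
    return congested or imbalance > tolerance

# A mild imbalance only triggers a move at high aggressiveness...
print(should_recommend_move([0.50, 0.30], aggressiveness=1))  # False
print(should_recommend_move([0.50, 0.30], aggressiveness=5))  # True
# ...but a congested host triggers one even at the lowest setting.
print(should_recommend_move([0.95, 0.20], aggressiveness=1))  # True
```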
Exclusions are a bit of a different beast; they're best reserved for VMs that don't take kindly to being moved. An example is hot-standby software that checks whether its peer is online very frequently: I've seen applications that start to fail over if the hot peer is unresponsive for more than a millisecond. Another use for exclusions is VMs that you want to stay put, a good example being a stretched cluster across multiple datacenters. There it makes sense to exclude your domain controllers from DRS and manually place them on specific hosts in specific datacenters, so that DRS doesn't get too clever and put them all in the same datacenter.
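The domain-controller case boils down to a placement constraint: never let all the DCs land in the same site. Here's a tiny sketch of the check you're effectively enforcing by hand once the VMs are excluded from DRS; the host and VM names are hypothetical and this is plain Python, not any vSphere API.

```python
# Toy placement check for a stretched cluster: verify that a set of
# VMs (e.g. domain controllers) is spread across more than one site.
# Hypothetical host/site names for illustration only.

HOST_SITE = {"esx-a1": "site-a", "esx-a2": "site-a",
             "esx-b1": "site-b", "esx-b2": "site-b"}

def spans_multiple_sites(vm_placement):
    """vm_placement: dict of VM name -> host name."""
    sites = {HOST_SITE[host] for host in vm_placement.values()}
    return len(sites) > 1

# Manual placement keeping one DC per site:
good = {"dc01": "esx-a1", "dc02": "esx-b1"}
# What an over-eager DRS might do during a rebalance:
bad = {"dc01": "esx-a1", "dc02": "esx-a2"}

print(spans_multiple_sites(good))  # True
print(spans_multiple_sites(bad))   # False
```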