I've been in your shoes. There is no single answer that will do it all.
You can spend a lot of money on a discovery product and hope that it knows all about your applications. Such products do exist, and some are supposed to be quite good.
Of course, even a good one might not be as thorough as you need or want. And no automated tool will find the "dependency" introduced by a daily or weekly scheduled script on a seemingly unrelated server that takes an extract from a work-order system and FTPs it into a payroll system. Or the physical fax line on the Linux box that somehow escaped your inventory ...
I like Joe's answer a lot - starting from each department and working with their "power users" (who may well not be computer "power users") is a vital part of a comprehensive discovery project. That's the bottom-up approach. This is also where you'll find people running business-critical apps on their own machines, possibly shared out to their own workgroup.
The other part of the approach is to get onto every machine and run something that shows, or better yet logs, TCP and UDP connections for $period_of_time, and see if you can catch traffic (ports and endpoints; you probably don't want a full packet capture) related to unknown applications. Likewise with inventorying scheduled tasks, service accounts, etc. That's the BFMI (brute force and massive ignorance) top-down approach. A rough sketch of the connection-logging part is below.
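If rolling your own logger is acceptable, here's a minimal sketch of that idea, assuming Python 3 with the third-party psutil library is available on the box (on Windows you could pull similar data from netstat or Get-NetTCPConnection, and on some OSes you'll need root to see other processes' PIDs). The file name, interval, and duration are placeholders to tune:

```python
import csv
import socket
import time

import psutil  # third-party: pip install psutil

INTERVAL = 60              # seconds between samples - tune to taste
DURATION = 7 * 24 * 3600   # sample for a week

seen = set()
with open("connections.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["first_seen", "proto", "laddr", "raddr", "process"])
    end = time.time() + DURATION
    while time.time() < end:
        for conn in psutil.net_connections(kind="inet"):
            if not conn.raddr:   # skip sockets with no remote peer
                continue
            try:
                name = psutil.Process(conn.pid).name() if conn.pid else "?"
            except (psutil.NoSuchProcess, psutil.AccessDenied):
                name = "?"
            proto = "tcp" if conn.type == socket.SOCK_STREAM else "udp"
            # Only record each (proto, port, peer, process) combination once
            key = (proto, conn.laddr.port, conn.raddr.ip, conn.raddr.port, name)
            if key in seen:
                continue
            seen.add(key)
            writer.writerow([
                time.strftime("%Y-%m-%d %H:%M:%S"), proto,
                f"{conn.laddr.ip}:{conn.laddr.port}",
                f"{conn.raddr.ip}:{conn.raddr.port}", name,
            ])
        f.flush()
        time.sleep(INTERVAL)
```

Note that sampling like this will miss very short-lived connections that open and close between samples, which is exactly the kind of gap that makes the manual legwork necessary.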
Because of the possibility of non-connection-oriented asynchronous processes, and things that don't run on servers (or client apps that simply run from fileshares), I don't think there can be a single automated approach for this. It's going to be manpower-intensive to do it right. Of course, you can simply aim for 80% and then start migrating or decomming, with enough communication to catch the things that make users scream when they break.