
Imagine this situation:

CompanyA is acquiring a subdivision of CompanyB. CompanyB has laid off most of the subdivision's staff and is unhelpful when asked for documentation. Now, you must use ADMT to migrate users, groups, workstations, and servers from CompanyB's child domain into CompanyA's Active Directory.

How can you determine which servers hosting custom LOB or pre-packaged applications have dependencies on other servers in the environment so that a migration strategy can be planned accordingly? Something like VMware vCenter Infrastructure Navigator can do some of this, but are there other ways besides spending a large amount of time just poking around?

Answers can assume an all Windows environment running on vSphere 5.1, though answers for other situations are OK as well.

MDMarra
  • One approach (among others that you'll need to pursue) is to start with the end users and work your way out from there. Identify a key user in each functional area (accounting, shipping, etc.), preferably a department head or someone with a keen grasp of the LOBs in use, and identify the LOBs they're using. Use that information to "build" a dependency tree/view of the systems that support those LOBs. Things like AD/DNS, email, etc. should be fairly easy to flesh out. Others, like CRM, ERP, custom shipping applications, etc., will probably take more involved digging. – joeqwerty Aug 12 '13 at 18:03
  • If you interview users for this, which should certainly form at least part of your approach, make sure you specifically dig for special occasions such as accountancy year end or any similar periodic things that they take for granted but won't think of because of how rarely they occur. – Rob Moir Aug 13 '13 at 07:13

2 Answers


I've been in your shoes. There is no single answer that will do it all.

You can spend a lot of money on a discovery product and hope that it knows all about your applications. Such products do exist, and some are supposed to be quite good.

Of course, it might not be as good as you need or want. And nothing will find the "dependency" that is introduced by a daily or weekly scheduled script on a seemingly-unrelated server that is responsible for taking an extract from a work-order system and FTP-ing it into a payroll system. Or the physical fax line on the Linux box that you somehow didn't notice during your inventory ...

I like Joe's answer a lot - starting from each department and working with their "power users" (who definitely might not be computer "power users") is a vital part of a comprehensive discovery project. That's the bottom-up approach. This is also where you'll find people running business-critical apps on their own machines, possibly shared out to their own workgroup.

Another part of the approach is to get onto every machine and run something that shows, or better yet logs, TCP and UDP connections for $period_of_time, and see if you can catch traffic (ports and endpoints; you probably don't want a full network capture) related to unknown applications. Likewise with inventorying scheduled tasks, service accounts, etc. That's the BFMI ("brute force and massive ignorance") top-down approach.
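As a minimal sketch of the log-analysis half of that approach: suppose you've periodically dumped connection snapshots from each server (e.g. via `netstat -ano` or PowerShell's `Get-NetTCPConnection`) into a CSV of server, protocol, remote address, and remote port. The servers, addresses, and the `summarize_endpoints` helper below are all hypothetical; the point is that aggregating counts per remote endpoint makes recurring dependencies stand out.

```python
import csv
import io
from collections import Counter

# Hypothetical sample data; in practice this would be days or weeks
# of snapshots collected from every machine in the environment.
SAMPLE_LOG = """\
server,proto,raddr,rport
app01,tcp,10.0.0.15,1433
app01,tcp,10.0.0.15,1433
app01,tcp,10.0.0.22,21
web01,tcp,10.0.0.15,1433
"""

def summarize_endpoints(log_text, min_hits=1):
    """Count how often each server talks to each remote endpoint,
    so that recurring (even if infrequent) dependencies stand out."""
    counts = Counter()
    for row in csv.DictReader(io.StringIO(log_text)):
        counts[(row["server"], row["raddr"], int(row["rport"]))] += 1
    return {k: v for k, v in counts.items() if v >= min_hits}

summary = summarize_endpoints(SAMPLE_LOG)
# app01 hitting 10.0.0.15:1433 repeatedly suggests a SQL Server dependency;
# the single hit on 10.0.0.22:21 might be that scheduled FTP job.
```

Raising `min_hits` filters one-off noise, but note it would also hide exactly the kind of rare periodic job the comments above warn about, so review the low-count endpoints by hand rather than discarding them.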

Because of the possibility of non-connection-oriented asynchronous processes, and things that don't run on servers (or client apps that simply run from fileshares), I don't think there can be a single automated approach for this. It's going to be manpower-intensive to do it right. Of course, you can simply aim for 80% and then start migrating or decomming, with enough communication to catch the things that make users scream when they break.

mfinni
  • Users screaming when things don't work is a good method to flush out the "smaller" stuff that's overlooked during migration :) although it's probably not the best approach. – Nathan C Aug 12 '13 at 19:29
  • There's always an element of risk when migrating or decomming undocumented business systems. That's what they get for technical debt, thanks for playing IT with us this week. – mfinni Aug 12 '13 at 19:42

The single most effective way is to turn off each server, one-by-one, and note what breaks and what complains (but still functions).

It's especially helpful if you have good monitoring, alerting, log analysis (logstash, splunk, etc), and metrics in place beforehand.

This is not a best practice.

Or...

Another approach is to take note of all services and processes running on each server, excluding the ones common to every machine (base Windows processes/services, antivirus, etc.).
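A minimal sketch of that baseline-subtraction idea, assuming you've exported per-server service inventories (e.g. with `sc query` or PowerShell's `Get-Service`): treat the services present on *every* machine as the common baseline, and whatever remains per server is the application-specific footprint worth investigating. The hostnames and service names here are invented for illustration.

```python
# Hypothetical per-server service inventories.
services = {
    "app01": {"WinRM", "Dnscache", "MSSQLSERVER", "CustomLOBSvc"},
    "web01": {"WinRM", "Dnscache", "W3SVC"},
    "file01": {"WinRM", "Dnscache", "LanmanServer"},
}

def distinctive_services(inventory):
    """Subtract the baseline (services present on every machine) so
    only the per-server, application-specific services remain."""
    baseline = set.intersection(*inventory.values())
    return {host: sorted(svcs - baseline) for host, svcs in inventory.items()}

interesting = distinctive_services(services)
# e.g. app01's leftovers point at SQL Server plus a custom LOB service,
# while the common WinRM/Dnscache noise drops out entirely.
```

In practice the baseline is better built from a known-clean reference image than from a strict intersection, since a service missing from one broken machine would otherwise look "distinctive" everywhere else.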

gWaldo