Using a Windows Server machine to provide NAT services seems like overkill, and it creates a maintenance burden (Windows updates) that will invariably cause service outages for the FTP site, since you'll have to reboot the NAT "gateway" machine regularly. I think you'd be better off using an embedded device that supports either a layer 3 solution (like NAT) or a layer 4 / layer 7 solution (like a TCP or FTP-aware proxy).
Perhaps you're simply taking FTP uploads and don't care about remote users' ability to download files. In that case, you can probably get away with something like what you're talking about, so long as you have a way to merge any files received during a failover back into the production file corpus. (That's probably just an XCOPY or some such and not a big deal, but not knowing your back-end systems it's hard to say.)
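As a rough sketch of that merge step (the server names, share names, and paths here are just placeholders for whatever your environment actually uses), a one-way ROBOCOPY from the failover server back to production, skipping anything production already has a newer copy of, would look something like:

    :: Merge files uploaded to the failover server during the outage back into
    :: the production FTP root. /E copies subdirectories (including empty ones),
    :: /XO skips any file that's older on the failover box than on production,
    :: and nothing ever gets deleted because /MIR isn't used.
    robocopy \\FAILOVER-FTP\ftproot \\PROD-FTP\ftproot /E /XO /Z /R:2 /W:5 /LOG+:C:\logs\ftp-merge.log

Run it once after you fail back, eyeball the log, and the corpus is merged.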
If you're expecting downloads from remote users during a failover, then a bigger issue than network-level or application-protocol-level access is going to be continuity of access to the files hosted by the FTP server. Unless you've got a way to mitigate the single point of failure of the back-end file storage, you can do anything you want at the network or application layer and still be dead in the water.
You may be able to use something simple like DFS Replication (or even just replication scripts built around tools like ROBOCOPY or rsync) to keep the production and failover FTP servers in sync. Your SLA windows are going to dictate how close to real time your replication consistency needs to be.
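If you go the scripted route rather than DFS-R, something along these lines (again, the paths and server name are only examples), run every few minutes from Task Scheduler, would keep the failover box reasonably current:

    :: Mirror the production FTP root to the failover server. /MIR keeps the
    :: destination an exact copy of the source (including deletions), /FFT
    :: tolerates 2-second timestamp granularity if either end isn't NTFS, and
    :: the retry/wait values keep a flaky WAN link from hanging the job.
    robocopy D:\ftproot \\FAILOVER-FTP\ftproot /MIR /Z /FFT /R:3 /W:10 /LOG+:C:\logs\ftp-sync.log

The tighter your SLA, the shorter that interval needs to be (or the more attractive DFS-R looks, since it replicates on change rather than on a schedule).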