Coming up with a security model for this needs a lot more effort and information than you've provided here.
(and BTW SELinux is the very last thing you should be looking at for a secure, supportable, affordable solution)
What would the appropriate home folder be set to for these non-airflow servers?
It should follow your convention for service accounts, based on the required I/O characteristics and how much isolation you need from other users of the system. In most cases simply creating them under /home will suffice.
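As a rough sketch (the account name, home location and shell here are assumptions - substitute your own convention):

# Create a dedicated system account with its own private home directory.
useradd --system --create-home --home-dir /home/airflow --shell /bin/bash airflow
# Lock the password so nobody can log in interactively with it.
passwd -l airflow
# Keep the home directory readable only by the service account itself.
chmod 700 /home/airflow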
one cannot simply log into the box as the airflow user
I don't understand what you are trying to tell us here. If you want to automate the running of a script on a remote system, then ssh with keypairs is by far the simplest solution. Using both keypairs and password logins (if required) on the same OpenSSH server is a little more tricky than using only one authentication method - but still doable. If you don't want real users using the account, then don't tell them the password, and deny access to the ssh private key with file permissions (which openssh does by default).
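As a sketch (assuming OpenSSH and an account named "airflow"), you can keep password logins for real users but make this one account key-only:

# /etc/ssh/sshd_config fragment on the target host
# Real users may still log in with passwords if you allow it globally...
PasswordAuthentication yes

# ...but the airflow account is key-only, so knowing its password is useless.
Match User airflow
    PasswordAuthentication no
    PubkeyAuthentication yes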
If you go down the route of tunnelling the invocation across ssh, then you should have a look at the options for restricted shells (rbash, rssh).
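For example (a sketch with hypothetical paths - run_job.sh stands in for your script), a forced command in authorized_keys limits what the key can do regardless of the shell, and a restricted shell tightens interactive use further:

# ~airflow/.ssh/authorized_keys on the target host (all on one line per key)
command="/home/airflow/bin/run_job.sh",no-port-forwarding,no-agent-forwarding,no-X11-forwarding,no-pty ssh-ed25519 AAAA... airflow@scheduler

# Optionally give the account a restricted shell as well
chsh -s /bin/rbash airflow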
nor should they be using it for anything outside of a couple of scripts
How seriously does this need to be enforced? Setting the permissions on all of the executable files would be difficult to maintain. You could chroot the login shell to prevent the user from accessing most commands, and use an overlay filesystem to selectively expose the ones required by the script. For preference, ensure that the user has no permission to read or write anywhere, and define the access to the scripts via sudo.
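A sketch of the sudo approach (the script names, paths and the scriptowner account are hypothetical):

# /etc/sudoers.d/airflow (edit with visudo -f /etc/sudoers.d/airflow)
# The airflow account may run exactly these scripts as the script owner, and nothing else.
airflow ALL=(scriptowner) NOPASSWD: /opt/jobs/nightly_extract.sh, /opt/jobs/rotate_reports.sh

The scripts themselves can then be owned by scriptowner with mode 750, so the airflow account cannot read, modify or run them except through sudo.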
Rather than using an ssh connection you could communicate with a daemon running on the target to carry out the task.
Sadly an awful lot of "security" guides recommend disabling [x]inetd, even though privilege separation is a key tool for building secure systems. (I've yet to start drinking the systemd kool-aid, but it has its own capability for on-demand services.) This approach would, however, require you to build your own authentication layer. A simple elaboration that solves the authentication issue is to configure an stunnel instance to accept only the client certificates issued to the airflow server and to exec your script - or a means of multiplexing the scripts:
#!/bin/bash
# Read a single command name from the client, strip any path components
# so the caller cannot escape ${HOME}/bin, and run it only if it exists
# there and is executable.
read -r USERCMD
USERCMD=$(basename "${USERCMD}")
if [ -x "${HOME}/bin/${USERCMD}" ]; then
    "${HOME}/bin/${USERCMD}"
fi
exit
...and use a client stunnel on the airflow server or openssl s_client to connect.
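A sketch of the stunnel side on the target (certificate paths, port, account name and the dispatch.sh multiplexer above are all assumptions):

# /etc/stunnel/scripts.conf on the target host
# Drop privileges to the service account before handling connections.
setuid   = airflow
setgid   = airflow

[scripts]
accept   = 8443
cert     = /etc/stunnel/target.pem
key      = /etc/stunnel/target.key
# Only accept client certificates issued by our own CA (verify level 2).
CAfile   = /etc/stunnel/airflow-ca.pem
verify   = 2
# Hand the decrypted stream to the multiplexer script shown above.
exec     = /home/airflow/bin/dispatch.sh
execArgs = dispatch.sh

From the airflow server, a quick test with openssl s_client (again with assumed paths) would look something like:

echo nightly_extract.sh | openssl s_client -quiet -connect target.example.com:8443 -cert /etc/stunnel/airflow.pem -key /etc/stunnel/airflow.key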