AWS Lambda
AWS Lambda has no GPU support and is poorly suited for distributed training of neural networks. Its maximum run time is 15 minutes, and a function doesn't have enough memory to hold a dataset (maybe a small part of it at best).
You may want AWS Lambda for lightweight inference jobs after your neural network/ML model has been trained.
Since AWS Lambda autoscales, it is well suited for tasks like classifying a single image and returning the result immediately, for many users at once.
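To make the inference use case concrete, here is a minimal sketch of a Lambda handler for single-image classification. The model loading and `predict` logic are placeholders (not a real library API); the point is the shape of the handler and loading the model once per container, outside the handler, so warm invocations reuse it:

```python
import base64
import json


def load_model():
    # Placeholder: in a real function you would load weights bundled in
    # the deployment package (or fetched from S3) here, once, so that
    # warm invocations reuse the loaded model.
    return lambda image_bytes: {"label": "cat", "score": 0.98}


MODEL = load_model()  # loaded once per container, shared across invocations


def handler(event, context):
    # Expect a base64-encoded image in the request body, as API Gateway
    # would deliver it for a binary payload.
    image_bytes = base64.b64decode(event["body"])
    prediction = MODEL(image_bytes)
    return {"statusCode": 200, "body": json.dumps(prediction)}
```

Each concurrent request gets its own Lambda execution environment as needed, which is exactly the autoscaling behavior described above.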
Ray
What you should be after for parallel and distributed training are AWS EC2 instances. For deep learning, p3 instances might be a good choice due to their Tesla V100 GPUs. For more CPU-heavy loads, c5 instances might be a good fit.
When it comes to Ray, it indeed doesn't support Windows, but it does support Docker (see the installation guide). After mounting/copying your source code into the container, you can log into a container with Ray preconfigured using this command:
docker run -t -i ray-project/deploy
and run it from there. For Docker installation on Windows, see here.
If that doesn't work, use some other Docker image such as ubuntu, set up everything you need (Ray and other libraries), and run your code from within the container (or better yet, make the container executable so it outputs straight to your console, as you wanted). It should be doable this way.
If not, you can manually log into a small AWS EC2 instance, set up your environment there, and run your code that way as well.
You may wish to check this friendly introduction to the settings and the Ray documentation for information on how to configure your exact use case.
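For a rough idea of what that configuration looks like, below is a sketch of a Ray cluster config for AWS, in the YAML format the autoscaler consumes. All values here (cluster name, region, instance types, worker counts) are illustrative assumptions; check the Ray documentation linked above for the exact fields supported by your Ray version:

```yaml
# Hypothetical Ray autoscaler config sketch; field names and values
# should be verified against the Ray docs for your version.
cluster_name: training-cluster

min_workers: 0
max_workers: 4

provider:
  type: aws
  region: us-west-2

head_node:
  InstanceType: c5.2xlarge   # CPU-heavy head node

worker_nodes:
  InstanceType: p3.2xlarge   # Tesla V100 GPU workers for training
```

With a config like this you would bring the cluster up with `ray up <config>.yaml` and let the autoscaler add GPU workers as your training jobs demand them.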