
I've been playing with Azure AI: I've done the Pong game, CartPole, etc. I was curious how this same technology could be leveraged for quality assurance.

I've successfully built environments and run training on a ray-on-aml cluster. What I don't understand is how to open a headless browser, navigate to a site, and grab the raw HTML and a screenshot to dump to the logs. Eventually I'd like the agent to scrape the page for all links, buttons, onclick events, etc., take an action, get a reward, and continue.

Here is what is working so far:

  • main.py gets the workspace information and uses a Dockerfile to install Ubuntu and the Python deps and spin up the Ray cluster.

Files:

  • main.py
  • my_training.py

Do I need to adapt 'my_training.py' to open a Chrome driver and navigate to the website?

my_training.py looks something like this:

import os
import ray
from ray.rllib import train
from ray import tune
from utils import callbacks

if __name__ == "__main__":

    # Parse arguments and add callbacks to config
    train_parser = train.create_parser()

    args = train_parser.parse_args()
    args.config["callbacks"] = {"on_train_result": callbacks.on_train_result}

    # Trace if video capturing is on
    if 'monitor' in args.config and args.config['monitor']:
        print("Video capturing is ON!")

    # Start ray head (single node)
    os.system('ray start --head')
    ray.init(address='auto')

    # Run training task using tune.run
    tune.run(
        run_or_experiment=args.run,
        config=dict(args.config, env=args.env),
        stop=args.stop,
        checkpoint_freq=args.checkpoint_freq,
        checkpoint_at_end=args.checkpoint_at_end,
        local_dir=args.local_dir
    )

I'm not sure how to fit these pieces together. I assume I would adapt this script and send the output to a callback, etc.
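My rough plan is a custom environment that owns the browser, so my_training.py stays mostly as-is and just registers the env by name. Something like this (untested; `WebQAEnv`, the CSS selector, and the placeholder reward are all my own invention, and the real class would subclass gym.Env and declare proper observation/action spaces):

```python
class WebQAEnv:
    """Browser-backed env sketch exposing the gym-style reset/step interface."""

    def __init__(self, config):
        self.start_url = config.get('start_url', 'http://localhost:8000')
        self.driver = None  # created lazily, once the env lands on a worker

    def _ensure_driver(self):
        if self.driver is None:
            from selenium import webdriver  # assumed present in the Docker image
            opts = webdriver.ChromeOptions()
            opts.add_argument('--headless')
            self.driver = webdriver.Chrome(options=opts)

    def _clickables(self):
        # Everything the agent could act on: links, buttons, onclick handlers
        return self.driver.find_elements('css selector', 'a, button, [onclick]')

    def reset(self):
        self._ensure_driver()
        self.driver.get(self.start_url)
        return {'num_actions': len(self._clickables())}

    def step(self, action):
        elements = self._clickables()
        reward = 0.0
        if 0 <= action < len(elements):
            elements[action].click()
            reward = 1.0  # placeholder -- e.g. reward for reaching a new page
        obs = {'num_actions': len(self._clickables())}
        return obs, reward, False, {}
```

Then in my_training.py I'd register it before tune.run, e.g. `from ray.tune.registry import register_env` and `register_env('web_qa', lambda cfg: WebQAEnv(cfg))`, and pass `env='web_qa'` in the config. Is that the right shape?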

Any help would be greatly appreciated.

