I have a basic custom model that is essentially a copy-paste of the default RLlib fully connected model (https://github.com/ray-project/ray/blob/master/rllib/models/tf/fcnet.py), and I'm passing custom model parameters to it through the "custom_model_config": {} dictionary of a config file. The config file looks like this:
# Custom RLLib model
custom_model: test_model
# Custom options
custom_model_config:
  ## Default fully connected network settings
  # Nonlinearity for fully connected net (tanh, relu)
  "fcnet_activation": "tanh"
  # Number of hidden layers for fully connected net
  "fcnet_hiddens": [256, 256]
  # For DiagGaussian action distributions, make the second half of the model
  # outputs floating bias variables instead of state-dependent. This only
  # has an effect if using the default fully connected net.
  "free_log_std": False
  # Whether to skip the final linear layer used to resize the hidden layer
  # outputs to size `num_outputs`. If True, then the last hidden layer
  # should already match num_outputs.
  "no_final_linear": False
  # Whether layers should be shared for the value function.
  "vf_share_layers": True
  ## Additional settings
  # L2 regularization value for fully connected layers
  "l2_reg_value": 0.1
When I start the training process with this setup, RLlib gives me the following warning:
Custom ModelV2 should accept all custom options as **kwargs, instead of expecting them in config['custom_model_config']!
I understand what **kwargs does in general, but I'm not sure how to implement it in a custom RLlib model to get rid of this warning. Any ideas?
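My best guess from the warning text is that the constructor is supposed to accept the custom options as keyword arguments (i.e. that RLlib passes everything under custom_model_config to the constructor that way), so something roughly like the untested sketch below, but I haven't been able to confirm that this is the intended pattern:

```python
from ray.rllib.models.tf.tf_modelv2 import TFModelV2


class TestModel(TFModelV2):
    def __init__(self, obs_space, action_space, num_outputs, model_config,
                 name, l2_reg_value=0.0, **kwargs):
        # If I read the warning correctly, every key under custom_model_config
        # would arrive here as a keyword argument instead of having to be
        # looked up in model_config["custom_model_config"].
        super().__init__(obs_space, action_space, num_outputs, model_config, name)
        self.l2_reg_value = l2_reg_value

        # ... rest of the copied fully connected layer construction ...
```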