Yes, you are kind of right. Frameworks like Keras, TF (which also uses Keras, btw) and PyTorch are general Deep Learning frameworks. For most artificial neural network use cases these frameworks work just fine, and your typical pipeline is going to look something like:
- Preprocess your dataset
- Select an appropriate model for this problem setting
- model.fit(dataset)
- Analyze results
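To make that contrast concrete, here is the pipeline above sketched with a toy stand-in. ToyModel is hypothetical (a one-parameter linear model, not a real Keras/PyTorch class); the point is just the shape of the workflow, where fit() does all the heavy lifting:

```python
# Illustrative sketch of the typical supervised-ML pipeline.
# ToyModel is a made-up stand-in for a Keras/PyTorch model.

class ToyModel:
    def __init__(self):
        self.weight = 0.0

    def fit(self, xs, ys, epochs=100, lr=0.01):
        # Plain gradient descent on y = w * x; in a real framework
        # this loop is where the efficient C/C++/CUDA code runs.
        for _ in range(epochs):
            grad = sum(2 * (self.weight * x - y) * x
                       for x, y in zip(xs, ys)) / len(xs)
            self.weight -= lr * grad
        return self

    def predict(self, xs):
        return [self.weight * x for x in xs]

# 1) Preprocess your dataset (here: just split into inputs/targets)
raw = [(1, 2), (2, 4), (3, 6)]
xs = [x for x, _ in raw]
ys = [y for _, y in raw]

# 2) Select a model, 3) model.fit(dataset), 4) analyze results
model = ToyModel().fit(xs, ys)
preds = model.predict(xs)
```

Everything interesting happens inside that single fit() call, which is exactly why the traditional pipeline maps so cleanly onto these frameworks.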
Reinforcement Learning, though, is substantially different from most other Data Science / ML applications. To start with, in RL you actually generate your own dataset by having your model (the Agent) interact with an environment; this complicates the situation substantially, particularly from a computational standpoint. In the traditional ML scenario, most of the computational heavy lifting is done by that model.fit() call, and the good thing about the aforementioned frameworks is that from that call onward your code runs in very efficient C/C++ code (usually also using CUDA libraries to leverage the GPU).
In RL the big problem is the environment that the agent interacts with. I separate this problem in two parts:
a) The environment cannot come pre-implemented in these frameworks because it changes with every problem you work on. As such, you have to code the environment yourself, and chances are it's not going to be very efficient.
b) The environment is a key component of the code: it interacts with your Agent many times over, and there are multiple ways in which that interaction can be mediated.
These two factors lead to the need to standardize the environment and the agent-environment interaction. This standardization allows for highly reusable code whose exact operation is easier for others to interpret. It also makes it possible to, for example, easily run parallel environments (TF-Agents allows this) even though your environment object was never written to manage that itself.
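Here is a minimal sketch of what that standardized interface looks like, in the Gym/Gymnasium reset()/step() style. The environment itself is a made-up toy (a corridor you walk down), but the agent-environment loop is the part RL frameworks standardize:

```python
# Toy environment exposing the standard reset()/step() interface.
# CorridorEnv is hypothetical; the interface shape is the point.

class CorridorEnv:
    """Walk right from position 0 until you reach position 5."""
    def __init__(self):
        self.pos = 0

    def reset(self):
        self.pos = 0
        return self.pos  # initial observation

    def step(self, action):
        # action: 0 = step left, 1 = step right
        self.pos = max(0, self.pos + (1 if action == 1 else -1))
        done = self.pos >= 5
        reward = 1.0 if done else 0.0
        return self.pos, reward, done, {}  # obs, reward, done, info

def run_episode(env, policy):
    obs, total_reward, done = env.reset(), 0.0, False
    while not done:
        obs, reward, done, _ = env.step(policy(obs))
        total_reward += reward
    return total_reward

# Because every env exposes the same interface, this loop works
# unchanged for any env/policy pair -- that is the standardization.
total = run_episode(CorridorEnv(), policy=lambda obs: 1)  # always go right
```

Since run_episode only ever touches reset() and step(), a framework can swap in a vectorized batch of environments behind the same interface without your code noticing.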
RL frameworks thus provide this standardization and the features that come with it. Their relation to Deep Learning frameworks is that RL libraries often ship with a lot of pre-implemented, flexible agent architectures that have been among the most relevant in the literature. These agents are usually nothing more than a fancy ANN architecture wrapped in a class that standardizes its operation within the given RL framework. As a backend for these ANN models, RL frameworks therefore use DL frameworks to run the computations efficiently.
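A hedged sketch of what "an ANN wrapped in a class" means here: TinyNet below stands in for a DL-framework network (just a linear scorer, not real TF/PyTorch), and AgentWrapper is a hypothetical name for the kind of class an RL framework puts around it:

```python
class TinyNet:
    """Stand-in for a DL-framework network: scores each action."""
    def __init__(self, n_actions):
        self.weights = [0.1 * (i + 1) for i in range(n_actions)]

    def forward(self, obs):
        # A real agent would call into TF/PyTorch here, so the
        # actual computation runs in efficient C++/CUDA code.
        return [w * obs for w in self.weights]

class AgentWrapper:
    """Standardizes how the RL framework talks to the network."""
    def __init__(self, net):
        self.net = net

    def act(self, obs):
        # Greedy action selection over the network's scores.
        scores = self.net.forward(obs)
        return max(range(len(scores)), key=scores.__getitem__)

agent = AgentWrapper(TinyNet(n_actions=3))
action = agent.act(obs=1.0)  # picks the highest-scoring action
```

The wrapper is what the RL framework sees (act(), plus typically a learning step); the network inside is where the DL framework does its job.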