I want to use the open-source Vicuna model to train on my dataset. My computer has no GPU, so I wanted to use FastChat's RESTful API server instead. I ran the commands below in Windows PowerShell, following the instructions at https://github.com/lm-sys/FastChat/blob/main/docs/openai_api.md.
First, I launched the controller:
> python3 -m fastchat.serve.controller
This started a server on localhost. When I opened that address in my browser, it displayed the following message:
> {"detail":"Not Found"}.
Next, I opened a new PowerShell window and ran their second command:
> python3 -m fastchat.serve.model_worker --model-path lmsys/vicuna-7b-v1.3
However, I encountered the following error:
AssertionError: Torch not compiled with CUDA enabled
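If I understand the message correctly, it means my PyTorch build has no CUDA support at all, which I believe can be checked with something like:
> python3 -c "import torch; print(torch.cuda.is_available(), torch.version.cuda)"
(I assume this prints `False None` for a CPU-only build, but I may be misreading the error.)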
Does this error occur because I do not have a GPU in my computer?
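And if so, would passing the worker's `--device cpu` option (which the FastChat docs seem to mention) be the right way to run it without a GPU? For example:
> python3 -m fastchat.serve.model_worker --model-path lmsys/vicuna-7b-v1.3 --device cpu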