TL;DR: I'm trying to run h2ogpt locally, specifically with the h2oai/h2ogpt-gm-oasst1-en-2048-falcon-7b-v3 model in a conda environment, but for some reason my GPU can't be found.

Long version:

I've activated the env and installed requirements.txt plus the extra torch requirements, but when I try to run this command, PowerShell 7 says "no GPU detected":

python generate.py -base_model=h2oai/h2ogpt-gm-oasst1-en-2048-falcon-7b-v3 -score_model=None -prompt_type=human_bot -cli=True -load_8bit=True

It does continue to download and load the weights, but the computer can't cope: screen output stutters, and so on. I would try load_4bit, but it definitely says "no GPU".

I have run nvidia-smi:

| NVIDIA-SMI 536.67                 Driver Version: 536.67       CUDA Version: 12.2     |
|-----------------------------------------+----------------------+----------------------+
| GPU  Name                     TCC/WDDM  | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |         Memory-Usage | GPU-Util  Compute M. |
|                                         |                      |               MIG M. |
|=========================================+======================+======================|
|   0  NVIDIA GeForce GTX 1070      WDDM  | 00000000:08:00.0  On |                  N/A |
|  0%   37C    P0              31W / 151W |   1900MiB /  8192MiB |      0%      Default |
|                                         |                      |                  N/A |

I've just tried reinstalling the CUDA Toolkit to make sure I wasn't being silly; it was already installed, and all drivers are up to date too.

I assume PowerShell is fine and can "see" my GPU, because it can run nvidia-smi and return info. Double-checking the GitHub docs, I meet the hardware requirements.

Looking through the GitHub README for h2ogpt, I found a setting, CUDA_VISIBLE_DEVICES, under 'What ENVs can I pass to control h2oGPT?'. I believe this is only for when there are multiple GPUs, but I may be mistaken.
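For what it's worth, even with a single GPU you can set that variable explicitly rather than leave it unset. A minimal sketch in Python (the variable name comes from the CUDA docs; the value "0" is an assumption meaning "first GPU", and it must be set before torch first initializes CUDA):

```python
import os

# CUDA_VISIBLE_DEVICES tells CUDA-based libraries which GPUs they may use.
# "0" selects the first (and here, only) GPU explicitly; an empty string
# would hide all GPUs and itself cause "no GPU" symptoms.
os.environ["CUDA_VISIBLE_DEVICES"] = "0"

print(os.environ["CUDA_VISIBLE_DEVICES"])
```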

Thanks for reading.

sw016428
    What makes you think it's PowerShell that's telling you "no gpu detected" when you're running python...? – Mathias R. Jessen Jul 27 '23 at 23:21
  • More detailed logs would help. Sounds like it's bitsandbytes that prints out that warning. Did you follow instructions from here to install bitsandbytes for Windows? https://github.com/h2oai/h2ogpt/blob/main/docs/README_WINDOWS.md which installs https://github.com/jllllll/bitsandbytes-windows-webui, which has more instructions you may find helpful. – Arno Candel Jul 28 '23 at 01:44

1 Answer


Try this in a Python session inside your env:

import torch
torch.cuda.is_available()

If it shows False, then your torch probably has the wrong version (a CPU-only build), and that is the reason for "No GPUs detected". Just check torch after installing requirements.txt.

In my case, torch came from the pip cache as version 2.1.0, so for unknown reasons 2.1.0 was installed instead of 2.0.1. I cleared the cache, reinstalled Python, and followed the GPU manual step by step, and it finally works. Hope it helps you, too.
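To pin down which case applies, here is a small diagnostic sketch. It only uses standard torch attributes (torch.__version__, torch.version.cuda, torch.cuda.is_available) and degrades gracefully if torch isn't installed at all; the function name cuda_status is just something I made up for this example:

```python
import importlib.util

def cuda_status():
    """Return a one-line summary of whether PyTorch can see a CUDA GPU."""
    if importlib.util.find_spec("torch") is None:
        return "torch is not installed in this environment"
    import torch
    if not torch.cuda.is_available():
        # torch.version.cuda is None for a CPU-only wheel, which is the
        # wrong-version situation described above
        return f"no CUDA: torch {torch.__version__}, CUDA build: {torch.version.cuda}"
    return f"CUDA OK: {torch.cuda.get_device_name(0)}"

print(cuda_status())
```

If it prints a "no CUDA" line with "CUDA build: None", you have a CPU-only wheel and need to reinstall torch with a CUDA build.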

Sergey S