
I'm at the end of my rope. I don't know anything about coding, I don't know how to fix this, and I'm not even trying to code a program; I'm getting this error while trying to create a token using Automatic1111's textual inversion feature. Not training it, mind you, just creating the token. The full error looks like this:

File "C:\Users\ME\Desktop\AutomaticWebUI\stable-diffusion-webui\venv\lib\site-packages\gradio\routes.py", line 275, in run_predictoutput = await app.blocks.process_api(File "C:\Users\ME\Desktop\AutomaticWebUI\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 787, in process_apiresult = await self.call_function(fn_index, inputs, iterator)File

"C:\Users\ME\Desktop\AutomaticWebUI\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 694, in call_functionprediction = await anyio.to_thread.run_sync(File "C:\Users\ME\Desktop\AutomaticWebUI\stable-diffusion-webui\venv\lib\site-packages\anyio\to_thread.py", line 31, in run_syncreturn await get_asynclib().run_sync_in_worker_thread(File

"C:\Users\ME\Desktop\AutomaticWebUI\stable-diffusion-webui\venv\lib\site-packages\anyio_backends_asyncio.py", line 937, in run_sync_in_worker_threadreturn await futureFile

"C:\Users\ME\Desktop\AutomaticWebUI\stable-diffusion-webui\venv\lib\site-packages\anyio_backends_asyncio.py", line 867, in runresult = context.run(func, *args)File "C:\Users\ME\Desktop\AutomaticWebUI\stable-diffusion-webui\modules\textual_inversion\ui.py", line 11, in create_embeddingfilename = modules.textual_inversion.textual_inversion.create_embedding(name, nvpt, init_text=initialization_text)File "C:\Users\ME\Desktop\AutomaticWebUI\stable-diffusion-webui\modules\textual_inversion\textual_inversion.py", line 161, in create_embeddingembedded = embedding_layer.token_embedding.wrapped(ids.to(devices.device)).squeeze(0)File

"C:\Users\ME\Desktop\AutomaticWebUI\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_implreturn forward_call(*input, **kwargs)File "C:\Users\ME\Desktop\AutomaticWebUI\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\sparse.py", line 158, in forwardreturn F.embedding(File "C:\Users\ME\Desktop\AutomaticWebUI\stable-diffusion-webui\venv\lib\site-packages\torch\nn\functional.py", line 2199, in embeddingreturn torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)

RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument index in method wrapper__index_select)
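(For reference, that last message is apparently not about having two CPUs: PyTorch raises it whenever one tensor sits in regular RAM ("cpu") while another sits on the graphics card ("cuda:0"). A minimal sketch in plain PyTorch, completely separate from the webui's own code, that produces the same kind of error:)

    import torch
    import torch.nn as nn

    # Illustration only, not the webui's code: an embedding table left in regular
    # RAM being looked up with indices that were moved to the GPU.
    embedding = nn.Embedding(num_embeddings=10, embedding_dim=4)  # weights stay on the CPU
    ids = torch.tensor([1, 2, 3])

    if torch.cuda.is_available():
        ids = ids.to("cuda:0")  # the indices now live on the GPU
        embedding(ids)          # RuntimeError: Expected all tensors to be on the same device ...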

I don't know what to do. Here's everything I've tried so far:

  1. entirely reinstalled the Automatic1111 repo from scratch
  2. deleted everything mentioning 'cuda' on my C drive
  3. reinstalled my graphics drivers
  4. uninstalled and reinstalled Python and Git
  5. uninstalled the CUDA package I originally installed, as well as everything else it installed

I got rid of TensorFlow as well, and there is no cudatoolkit installed anymore, yet somehow it still thinks I have 2 CPUs or whatever this means, and I seriously don't want to have to nuke my hard drive and go back to factory settings over this. All of this started because I was trying to get DeepDanbooru's interrogator to work on my GPU, and that required me to use this: https://www.tensorflow.org/install/pip

Somehow, in the process of doing that, I've basically bricked it. I'm just overwhelmed and in over my head, and I have no idea what to do beyond this, because I don't know how I convinced it that I have something called cuda:0 on my PC when there is no file, directory, or anything installed with that name.

I only have 1 CPU and 1 GPU. I don't know how to convince it that whatever this cuda:0 is doesn't exist as far as I know, and I don't know how to remove it.

If anyone could tell me how to do that so I don't have to entirely wipe my hard drive, that would be great. And please explain in the simplest terms, because I'm autistic and I know nothing about python coding.

Please help.

  • This is a problem in the code; none of the things you tried would fix the code. If you don't know any programming, you are not the person to fix this. I suggest that you submit an issue to the repo so the authors can fix it. – Dr. Snoopy Oct 17 '22 at 08:31
  • And note that cuda:0 is just the PyTorch way to refer to your GPU; it is not a device you installed or anything like that. – Dr. Snoopy Oct 17 '22 at 08:32 (a short check illustrating this is sketched after these comments)
  • If you can, I'd suggest you install the project inside an Anaconda environment: https://www.anaconda.com/ and install PyTorch as suggested here: https://pytorch.org/ into that environment. That avoids issues between dependencies of different projects. – cherrywoods Oct 17 '22 at 10:28
  • @Dr.Snoopy Okay, but this didn't happen until I installed the NVIDIA CUDA package, which was a mistake, and it worked before. Is it possible to undo whatever I did? I'm pretty sure I messed up at step 4 of the link I posted, but I can't figure out how to undo that. – Armads Oct 17 '22 at 14:40
  • @cherrywoods I tried that originally and had more issues where it wouldn't download the right dependencies; is there more of a benefit to doing that over just using git bash to download it? – Armads Oct 17 '22 at 14:41
  • Yes, it provides you with a virtual environment. It keeps projects separate from each other and avoids collisions between dependencies of different projects. – cherrywoods Oct 17 '22 at 14:46
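(As a quick illustration of the cuda:0 comment above: the few lines below, run in any Python prompt where torch is installed, assume nothing beyond a standard PyTorch install and show that cuda:0 is simply PyTorch's label for the first GPU it can see, not something installed separately:)

    import torch

    # cuda:0 is PyTorch's name for the first GPU it can see; it is not a file,
    # folder, or extra piece of hardware on the machine.
    print(torch.cuda.is_available())          # True if PyTorch can use the GPU at all
    print(torch.cuda.device_count())          # 1 on a single-GPU machine
    if torch.cuda.is_available():
        print(torch.cuda.get_device_name(0))  # the graphics card behind "cuda:0"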

1 Answer


I have found the solution! It isn't actually a flaw in their code; it's a mix of user error and my own lack of knowledge about how the code works.

Here's the solution, in case anyone else is like me and screws this up.

When I reinstalled this via git bash, I edited the .bat file to add --medvram, as the guide recommended.

HOWEVER. Apparently, adding this option is what confuses it. Removing the --medvram command from the .bat file fixes the problem.
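(In case it helps to understand why: from what I can tell, a low-VRAM option like --medvram keeps some model weights in regular RAM and only moves them to the GPU when needed, so the token ids can end up on a different device than the embedding table. Below is a rough sketch, assuming nothing about the webui's internals, of that mismatch and of the generic way it gets resolved in plain PyTorch:)

    import torch
    import torch.nn as nn

    embedding = nn.Embedding(10, 4)  # weights left on the CPU, as a low-VRAM mode might do
    ids = torch.tensor([1, 2, 3])
    if torch.cuda.is_available():
        ids = ids.to("cuda:0")       # ids on the GPU, weights still on the CPU -> mismatch

    # The generic cure is to put both sides on the same device before the lookup:
    ids = ids.to(embedding.weight.device)  # or move the embedding to the GPU instead
    print(embedding(ids).shape)            # torch.Size([3, 4]) -- no device error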

  • This literally means there are flaws in the code :) – Dr. Snoopy Oct 17 '22 at 15:45
  • @Dr.Snoopy Okay, yes, that is technically true. I meant that the fault wasn't theirs, it was mine; their code works fine. I just didn't realize that this edit, which the guide said would make it run better, suddenly convinced it that I had some other GPU. – Armads Oct 17 '22 at 19:09