
I am currently playing around with some generative models, such as Stable Diffusion, and I was wondering whether it is technically possible and actually sensible to fine-tune the model on a GeForce RTX 3070 with 8 GB of VRAM. It's just to play around a bit, so the dataset is small and I don't expect good results, but to my understanding, if I turn the batch size down far enough and use lower-resolution images, it should be technically possible. Or am I missing something? On their repository they say that you need a GPU with at least 24 GB.

I have not gotten to coding yet because I wanted to first check whether it's even possible before I set everything up and then find out it doesn't work.
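Roughly, the kind of low-memory training run I have in mind would look something like the sketch below (based on the Hugging Face diffusers text-to-image example script rather than the official repo scripts; I haven't actually tried it, and the paths, model ID, and step count are placeholders):

    # Sketch: fine-tuning Stable Diffusion with the diffusers text-to-image
    # example script, turning on every memory-saving option I'm aware of.
    # Paths and dataset are placeholders; not verified on an 8 GB card.
    accelerate launch train_text_to_image.py \
      --pretrained_model_name_or_path="runwayml/stable-diffusion-v1-5" \
      --train_data_dir="./my_small_dataset" \
      --resolution=256 \
      --train_batch_size=1 \
      --gradient_accumulation_steps=4 \
      --gradient_checkpointing \
      --mixed_precision="fp16" \
      --use_8bit_adam \
      --max_train_steps=1000 \
      --output_dir="./sd-finetuned"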

  • Which repository are you trying to use? Without knowing the model size it's relatively difficult to say yes or no. – Steven Feb 17 '23 at 00:19

1 Answer


I've read about someone who was able to train with 12 GB of VRAM by following the instructions in this video:

https://www.youtube.com/watch?v=7bVZDeGPv6I&ab_channel=NerdyRodent

It sounds a bit painful, though. You would definitely want to try the --xformers and --lowvram command line arguments when you start up SD. I would love to hear how this turns out and whether you get it working.
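For example, assuming the AUTOMATIC1111 stable-diffusion-webui (the exact entry point depends on how you installed it), the startup could look roughly like this:

    # Sketch: starting the AUTOMATIC1111 webui with memory-saving options.
    # On Windows these flags usually go into COMMANDLINE_ARGS in webui-user.bat.
    python launch.py --xformers --lowvram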

Good luck!
  • Unfortunately I wasn't able to run it locally, but I didn't try for long and instead switched to fine-tuning my model with a Colab notebook, which worked quite well. Thanks for the response, though. – Fabian Riedlsperger Apr 10 '23 at 19:46