Using an A30 GPU on a cloud server. Stable Diffusion webui.sh is started with the --share --listen arguments. When the page is opened through the official Gradio share proxy, the GPU runs at full power during inference with no slowdown, around 17-19 it/s. When the local ip:port interface opened by --listen is used instead, the GPU runs at full power only for the first inference task; from the second run onward the speed drops noticeably to about 6-8 it/s and GPU utilization falls to roughly half. Once this happens, switching back to the share page and running the task again is also slow, and the speed does not recover.
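For reference, the launch looked roughly like this (a sketch based on the flags mentioned above; other arguments in my setup may differ):

```bash
# Start the WebUI with both the Gradio share proxy and a local listen interface.
# --share  -> generates a public *.gradio.live URL through the official proxy
# --listen -> binds the local server to 0.0.0.0 so it is reachable via ip:port
./webui.sh --share --listen
```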
I deleted all extensions, but the problem still occurs.