
I used the Diffusers macOS app created by Hugging Face to generate an image, and I also used the diffusers Python library to generate an image with the same parameters, but the two results are different.

I kept the following parameters the same in both methods:

  • model
  • prompt
  • seed
  • guidanceScale
  • step

I want to know which other parameters, besides these, affect the generated image in Stable Diffusion.

Or, what should I do if I want to generate the same image with both methods?
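
(If it helps, the scheduler configuration used on the Python side can be dumped like this and compared field by field with whatever the app shows. This is just a sketch; the exact config fields depend on the diffusers version.)

from diffusers import DPMSolverMultistepScheduler

# print the scheduler settings stored in the model repo (algorithm type, solver order, etc.)
scheduler = DPMSolverMultistepScheduler.from_pretrained(
    "runwayml/stable-diffusion-v1-5", subfolder="scheduler"
)
print(scheduler.config)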

Here are the Diffusers macOS app screenshot and the Python code I am using:

[screenshot: Diffusers macOS app settings]

from diffusers import StableDiffusionPipeline
from diffusers import DPMSolverMultistepScheduler

import torch

prompt = "A realistic beautiful natural landscape, 4k resolution, hyper detailed"
negativePrompt = ""
seed = 248
guidanceScale = 10.5
stepCount = 10
repoId = "runwayml/stable-diffusion-v1-5"

# seed the default (CPU) RNG; torch.manual_seed returns that generator object
generator = torch.manual_seed(seed)

dpm = DPMSolverMultistepScheduler.from_pretrained(repoId, subfolder="scheduler")

pipe = StableDiffusionPipeline.from_pretrained(
    repoId,
    safety_checker=None,
    scheduler=dpm,
)
pipe = pipe.to("mps")

# reduce peak memory usage during the attention computation
pipe.enable_attention_slicing()

# one-step warmup pass, recommended when running on the MPS backend
_ = pipe(prompt, num_inference_steps=1)

image = pipe(
    prompt,
    negative_prompt=negativePrompt,
    generator=generator,
    guidance_scale=guidanceScale,
    num_inference_steps=stepCount
).images[0]
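
To compare the two outputs directly, the Python result can be saved to disk (the filename below is just an example):

# save the generated image so it can be compared with the app's output
image.save("python_output.png")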
