
Mac users: please report whether these instructions work for you, whether anything is unclear, and whether you are still having install problems that are not mentioned here.

Important notes

Currently most functionality in the web UI works correctly on macOS, with the most notable exceptions being the CLIP interrogator and training. Training does appear to work, but it is incredibly slow and consumes an excessive amount of memory. The CLIP interrogator can be used, but it doesn't work correctly with the GPU acceleration macOS uses, so the default configuration runs it entirely on the CPU (which is slow).

Most samplers are known to work, the only exception being the PLMS sampler when using a Stable Diffusion 2.0 model. Images generated with GPU acceleration on macOS should usually match, or almost match, images generated on CPU with the same settings and seed.

Automatic installation

New install:

  • If Homebrew is not installed, follow the instructions at https://brew.sh to install it. Keep the terminal window open and follow the instructions under "Next steps" to add Homebrew to your PATH.
  • Open a new terminal window and run brew install cmake protobuf rust python@3.10 git wget
  • Clone the web UI repository by running git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui
  • Place Stable Diffusion models/checkpoints you want to use into stable-diffusion-webui/models/Stable-diffusion . If you don't have any, see Downloading Stable Diffusion Models below.
  • cd stable-diffusion-webui and then ./webui.sh to run the web UI. A Python virtual environment will be created and activated using venv and any remaining missing dependencies will be automatically downloaded and installed.
  • To relaunch the web UI process later, run ./webui.sh again. Note that it doesn't auto update the web UI; to update, run git pull before running ./webui.sh .
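The steps above can be condensed into a single terminal session (a sketch; the package list and repository URL are the ones given above, and it assumes Homebrew is already on your PATH):

```shell
# New install sketch (run in a fresh terminal window).
brew install cmake protobuf rust python@3.10 git wget

git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui
cd stable-diffusion-webui

# Place your .ckpt/.safetensors files in models/Stable-diffusion, then:
./webui.sh

# Later, to update and relaunch:
git pull
./webui.sh
```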
Existing install:

    If you have an existing install of web UI that was created with setup_mac.sh , delete the run_webui_mac.sh file and repositories folder from your stable-diffusion-webui folder. Then run git pull to update web UI and then ./webui.sh to run it.

    Downloading Stable Diffusion Models

    If you don't have any models to use, Stable Diffusion models can be downloaded from Hugging Face . To download, click on a model and then click on the Files and versions header. Look for files listed with the ".ckpt" or ".safetensors" extensions, and then click the down arrow to the right of the file size to download them.

    Some popular official Stable Diffusion models are:

  • Stable Diffusion 1.4 ( sd-v1-4.ckpt )
  • Stable Diffusion 1.5 ( v1-5-pruned-emaonly.ckpt )
  • Stable Diffusion 1.5 Inpainting ( sd-v1-5-inpainting.ckpt )
  • Stable Diffusion 2.0 and 2.1 require both a model and a configuration file, and image width & height will need to be set to 768 or higher when generating images:

  • Stable Diffusion 2.0 ( 768-v-ema.ckpt )
  • Stable Diffusion 2.1 ( v2-1_768-ema-pruned.ckpt )
  • For the configuration file, hold down the option key on the keyboard and click here to download v2-inference-v.yaml (it may download as v2-inference-v.yaml.yml ). In Finder select that file then go to the menu and select File > Get Info . In the window that appears select the filename and change it to the filename of the model, except with the file extension .yaml instead of .ckpt , press return on the keyboard (confirm changing the file extension if prompted), and place it in the same folder as the model (e.g. if you downloaded the 768-v-ema.ckpt model, rename it to 768-v-ema.yaml and put it in stable-diffusion-webui/models/Stable-diffusion along with the model).
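That rename step can also be done in the terminal. A sketch, where the MODEL name and the Downloads location are example values to substitute with your own:

```shell
# Hypothetical example: adjust MODEL and TARGET to your setup.
MODEL=768-v-ema
TARGET="$HOME/stable-diffusion-webui/models/Stable-diffusion"

# The config may have been saved as v2-inference-v.yaml or v2-inference-v.yaml.yml;
# pick up whichever exists and rename it to sit next to the checkpoint.
CONFIG=$(ls "$HOME/Downloads"/v2-inference-v.yaml* | head -n 1)
mv "$CONFIG" "$TARGET/$MODEL.yaml"
```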

    Also available is a Stable Diffusion 2.0 depth model ( 512-depth-ema.ckpt ). Download the v2-midas-inference.yaml configuration file by holding down option on the keyboard and clicking here , then rename it with the .yaml extension in the same way as mentioned above and put it in stable-diffusion-webui/models/Stable-diffusion along with the model. Note that this model works at image dimensions of 512 width/height or higher instead of 768.

    Troubleshooting

    Web UI Won't Start:

    If you encounter errors when trying to start the Web UI with ./webui.sh , try deleting the repositories and venv folders from your stable-diffusion-webui folder and then update web UI with git pull before running ./webui.sh again.
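In terminal form, that recovery procedure is just the following (both folders are recreated automatically on the next launch):

```shell
cd stable-diffusion-webui
rm -rf repositories venv   # both are recreated on the next launch
git pull
./webui.sh
```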

    Poor Performance:

    Currently GPU acceleration on macOS uses a lot of memory. If performance is poor (it takes more than a minute to generate a 512x512 image with 20 steps with any sampler), first try launching with the --opt-split-attention-v1 command line option (i.e. ./webui.sh --opt-split-attention-v1 ) and see if that helps. If that doesn't make much difference, open the Activity Monitor application located in /Applications/Utilities and check the memory pressure graph under the Memory tab. If memory pressure is shown in red while an image is generated, close the web UI process and add the --medvram command line option (i.e. ./webui.sh --opt-split-attention-v1 --medvram ). If performance is still poor and memory pressure is still red with that option, instead try --lowvram (i.e. ./webui.sh --opt-split-attention-v1 --lowvram ). If it still takes more than a few minutes to generate a 512x512 image with 20 steps with any sampler, you may need to turn off GPU acceleration. Open webui-user.sh in Xcode and change #export COMMANDLINE_ARGS="" to export COMMANDLINE_ARGS="--skip-torch-cuda-test --no-half --use-cpu all" .
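The escalation described above, as a sketch:

```shell
# Try these in order, stopping at the first one that gives acceptable
# performance and keeps memory pressure out of the red in Activity Monitor.
./webui.sh --opt-split-attention-v1
./webui.sh --opt-split-attention-v1 --medvram
./webui.sh --opt-split-attention-v1 --lowvram

# Last resort: CPU only. In webui-user.sh, change the commented-out line to:
# export COMMANDLINE_ARGS="--skip-torch-cuda-test --no-half --use-cpu all"
```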

    Hiyo, thanks so much for this! I'm happy to be a tester for this.

    I'm trying to get this set up on an M1 Max laptop; I removed a previous version that I'd installed with the "old" instructions (which didn't actually work; I had to do some file editing per this thread, which finally yielded a functional UI session). I removed all of that entirely and re-fetched the repo fresh following the above instructions. When I try to launch the UI now, I get:

    Traceback (most recent call last):
      File "/Users/xxxxx/dev/github/stable-diffusion-webui/launch.py", line 295, in <module>
        start()
      File "/Users/xxxxx/dev/github/stable-diffusion-webui/launch.py", line 286, in start
        import webui
      File "/Users/xxxxx/dev/github/stable-diffusion-webui/webui.py", line 11, in <module>
        from modules.call_queue import wrap_queued_call, queue_lock, wrap_gradio_gpu_call
      File "/Users/xxxxx/dev/github/stable-diffusion-webui/modules/call_queue.py", line 7, in <module>
        from modules import shared
      File "/Users/xxxxx/dev/github/stable-diffusion-webui/modules/shared.py", line 12, in <module>
        import modules.interrogate
      File "/Users/xxxxx/dev/github/stable-diffusion-webui/modules/interrogate.py", line 9, in <module>
        from torchvision import transforms
      File "/Users/xxxxx/dev/github/stable-diffusion-webui/venv/lib/python3.10/site-packages/torchvision/__init__.py", line 5, in <module>
        from torchvision import datasets
      File "/Users/xxxxx/dev/github/stable-diffusion-webui/venv/lib/python3.10/site-packages/torchvision/datasets/__init__.py", line 1, in <module>
        from ._optical_flow import KittiFlow, Sintel, FlyingChairs, FlyingThings3D, HD1K
      File "/Users/xxxxx/dev/github/stable-diffusion-webui/venv/lib/python3.10/site-packages/torchvision/datasets/_optical_flow.py", line 13, in <module>
        from .utils import verify_str_arg
      File "/Users/xxxxx/dev/github/stable-diffusion-webui/venv/lib/python3.10/site-packages/torchvision/datasets/utils.py", line 6, in <module>
        import lzma
      File "/Users/xxxxx/.pyenv/versions/3.10.0/lib/python3.10/lzma.py", line 27, in <module>
        from _lzma import *
    ModuleNotFoundError: No module named '_lzma'
    

    I'm not sure this is relevant, but: I use Python via pyenv, and currently have it set up to use 3.10.0 (not 3.10.6, as the 'main' instructions suggest, nor 3.10.8, which is what I have working on a separate Windows machine). Would that impact this?

    FWIW, prior to this, I also ran into two other issues trying to create TI embeddings, which I mention here.

    I was getting this error: when I use Hires. fix on an image, Python quits for no apparent reason and the console shows the warning "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/multiprocessing/resource_tracker.py:216: UserWarning: resource_tracker: There appear to be 1 leaked semaphore objects to clean up at shutdown
    warnings.warn('resource_tracker: There appear to be %d '". I don't know what to do.

    @pxp291032:

    I have no notes about the warning you're getting, but I've found that in order for Hires. fix to work properly, I need to change the COMMANDLINE_ARGS.

    Refer to this comment to see my settings. Hope this helps!

    ModuleNotFoundError: No module named '_lzma' - FIXED

    Mac M1 Pro, Mac OS 14.1.2, Python (pyenv) 3.10.6

    Couldn't get it to work following
    https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Installation-on-Apple-Silicon

    On first launch, ./webui.sh kept running into
    ModuleNotFoundError: No module named '_lzma'
    and none of the workarounds/solutions proposed here in this discussion worked.

    The one solution that made the application run was:
    https://gist.github.com/iandanforth/f3ac42b0963bcbfdf56bb446e9f40a33

    brew reinstall xz
    pyenv uninstall 3.10.6 # Replace with your version number
    pyenv install 3.10.6 # Reinstall (now with the lzma lib available)
    pyenv local 3.10.6 # Set this version to always run in this directory
    pip install pandas
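After the reinstall, you can confirm the module is present before relaunching the web UI:

```shell
# Should print "lzma OK"; an ImportError means the pyenv build
# still couldn't find the xz headers.
python3 -c "import lzma; print('lzma OK')"
```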

    Error = Error Domain=com.apple.appleneuralengine Code=6 "createProgramInstanceForModel:modelToken:qos:isPreCompiled:enablePowerSaving:skipPreparePhase:statsMask:memoryPoolID:enableLateLatch:modelIdentityStr:owningPid:cacheUrlIdentifier:aotCacheUrlIdentifier:error:: Program load failure (0x5)" UserInfo={NSLocalizedDescription=createProgramInstanceForModel:modelToken:qos:isPreCompiled:enablePowerSaving:skipPreparePhase:statsMask:memoryPoolID:enableLateLatch:modelIdentityStr:owningPid:cacheUrlIdentifier:aotCacheUrlIdentifier:error:: Program load failure (0x5)}
    2024-05-14 14:47:46.347 Python[51352:8930884] ANE load failed!
    2024-05-14 14:47:46.347 Python[51352:8930884] Issues were found in compilation, falling back to GPU

    What is the situation? After clicking to generate the image, there was no response...

    Got some useful benchmarks about making one-off images too. For people running Stable Diffusion WebUI on battery, CoreML support is a must (600 -> 130 joules/image). Or even for the climate conscious, reducing emissions from 170 to 40 mg CO2 per image.

    apple/ml-stable-diffusion#54 (comment)

    Guernika & Mochi Diffusion are good examples!
    https://github.com/godly-devotion/MochiDiffusion
    https://huggingface.co/Guernika/CoreMLStableDiffusion

    I believe it is easier to use already existing methods

    Make no mistake, GPU rendering is faster than the neural engine. But it takes up more resources to do so.

    I've benchmarked both.

    Cloning into 'stable-diffusion-webui'...
    remote: Enumerating objects: 31542, done.
    remote: Counting objects: 100% (50/50), done.
    remote: Compressing objects: 100% (28/28), done.
    error: RPC failed; curl 92 HTTP/2 stream 5 was not closed cleanly: CANCEL (err 8)
    error: 6921 bytes of body are still expected
    fetch-pack: unexpected disconnect while reading sideband packet
    fatal: early EOF
    fatal: fetch-pack: invalid index-pack output

    How do I fix this?

    Much better instructions!
    I'd slightly modify point 4, or insert an extra point between 4 and 5:

    If you already downloaded one or more Stable Diffusion models, and you are using them across multiple user interfaces (A1111, InvokeAI, etc), you might NOT want to copy the .ckpt files in stable-diffusion-webui/models/Stable-diffusion to avoid duplications and unnecessary consumption of storage.

    Instead, you can keep all your models in a separate folder and launch A1111 by adding the --ckpt-dir flag.

    For example, if you keep your models in /Users/YourUserName/Desktop/Models, then you can launch A1111 by writing in each of the next steps:
    ./webui.sh --ckpt-dir "/Users/YourUserName/Desktop/Models/"

    Notice that your folder Models can have as many subfolders as you want to keep models organized. A1111 will parse all of them and list all .ckpt in the drop-down menu.
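If you'd rather not type the flag on every launch, one option (a sketch, using the COMMANDLINE_ARGS variable in webui-user.sh that's discussed elsewhere in this thread; the Models path is an example) is:

```shell
# In webui-user.sh: append --ckpt-dir to whatever args are already set,
# so the flag persists across launches of ./webui.sh.
export COMMANDLINE_ARGS="$COMMANDLINE_ARGS --ckpt-dir /Users/YourUserName/Desktop/Models/"
```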

    or something like that.

    I'm mentioning it only because I've read dozens of comments about people complaining about the space consumption issue.

    Copy or type in here the full path to your Auto1111 /models/stable-diffusion folder, and then also the full path to the folder you want to keep your models in on the external drive. I'll check back here later and give you a command to use that you can just copy and paste into a terminal window on your end to make the symlink mentioned above.

    @jrittvo I would really like to know how to do this also because my space on the main HD is getting very full! Can it be done in the startup ARGS?

    Here are my 2 links for the models:

  • Local on iMac: /Users/briancurtis/stable-diffusion-webui/models/Stable-diffusion
  • Link to connected Drive: /Volumes/BEEJ2TB/Sept_Moved_SD_Models/Stable-diffusion
    Thanks my friend :)

    First move or rename the /Users/briancurtis/stable-diffusion-webui/models/Stable-diffusion folder because it will be replaced by the symlink.

    Then, in Terminal:

    cd /Users/briancurtis/stable-diffusion-webui/models

    ln -s /Volumes/BEEJ2TB/Sept_Moved_SD_Models/Stable-diffusion
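For anyone else following along, the same approach generalizes; the paths below are placeholders to substitute with your own:

```shell
# Placeholders: substitute your own webui and external-drive paths.
WEBUI_MODELS="$HOME/stable-diffusion-webui/models"
EXTERNAL="/Volumes/MyDrive/SD-Models/Stable-diffusion"

# Move the original folder out of the way first (the symlink takes its place),
# then link the external folder in at the path A1111 expects.
mv "$WEBUI_MODELS/Stable-diffusion" "$WEBUI_MODELS/Stable-diffusion.bak"
ln -s "$EXTERNAL" "$WEBUI_MODELS/Stable-diffusion"
```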

    @jrittvo Thank you so much! :) This worked perfectly and now I can have ALL of them accessible together and free up . . . 200GB (cripes! I didn't realise I had that many til I put them all together! lol) 😮🤪

    Been looking for this answer for weeks now. 👍🏻⭐️⭐️⭐️

    Tried doing the "existing installation" instructions and got the following error after running webui.sh:

    Fetching updates for K-diffusion...
    Checking out commit for K-diffusion with hash: 51c9778f269cedb55a4d88c79c0246d35bdadb71...
    Traceback (most recent call last):
      File "/Users/richpizor/stable-diffusion-webui/launch.py", line 294, in <module>
        prepare_enviroment()
      File "/Users/richpizor/stable-diffusion-webui/launch.py", line 239, in prepare_enviroment
        git_clone(k_diffusion_repo, repo_dir('k-diffusion'), "K-diffusion", k_diffusion_commit_hash)
      File "/Users/richpizor/stable-diffusion-webui/launch.py", line 97, in git_clone
        run(f'"{git}" -C {dir} checkout {commithash}', f"Checking out commit for {name} with hash: {commithash}...", f"Couldn't checkout commit {commithash} for {name}")
      File "/Users/richpizor/stable-diffusion-webui/launch.py", line 49, in run
        raise RuntimeError(message)
    RuntimeError: Couldn't checkout commit 51c9778f269cedb55a4d88c79c0246d35bdadb71 for K-diffusion.
    Command: "git" -C repositories/k-diffusion checkout 51c9778f269cedb55a4d88c79c0246d35bdadb71
    Error code: 128
    stdout: <empty>
    stderr: fatal: reference is not a tree: 51c9778f269cedb55a4d88c79c0246d35bdadb71
    

    Tried deleting the repositories and venv directories as instructed. The response from git pull was "already up to date" and it died in the same place with the same error (not that I expected success at that point).

    Completely trashed everything and tried a fresh install. That worked, though I'm still getting the No module 'xformers'. Proceeding without it message.

    I cloned the repo and followed the instructions; however, No module 'xformers'. Proceeding without it still shows, even after I installed them using python -m pip install -U torch and python -m pip install -U xformers (the torch installation was needed because torch could not be found in my case).

    Were any of you able to resolve this issue?

    I don't know if it helps, but I run a batch count of 100 every night with the following settings:

  • Model: 768-v-ema.ckpt
  • Sampler: DDIM
  • Steps: 100
  • CFG Scale: 9
  • Restore face: no
  • Seed: -1
  • Size: 768x768px
  • Batch size: 1
    On my MBP M1 Pro with 32GB RAM, these settings in A1111 (last commit) generate a new image approx every 4min. I leave a bunch of other apps open during the generation: browser, mail client, etc.

    Every morning, I check the terminal and every image is within the average. Except for one night, 2 days ago.
    Inexplicably, and just that night, a few image generations within the same batch took random time: 17min, 30min, 52min.
    I assumed it was some background process subtracting resources from the system, but maybe not?

    Other than that, my generations are perfectly regular at approx 4min.

    I do notice that this appears to be launching into Python 3.9.12, rather than the 3.10.x preferred by Invoke-AI. This is confirmed by observing memory use in Activity Monitor. This is especially odd given that I have confirmed via the brew info command that the Homebrew version of Python installed is 3.10 as well. So questions:

  • Is it likely that the Python version is causing this?
  • Whether or not it's likely, how do I install in such a way that it uses Python 3.10?

    so THIS is odd.

    When I run the following command everything launches fine: python_cmd="python3.10" ./webui.sh --opt-split-attention-v1 --medvram

    HOWEVER

    When I adjust webui-user.sh to introduce the exact same arguments in that command, it errors out with the following:

    Traceback (most recent call last):
      File "/Users/richpizor/stable-diffusion-webui/launch.py", line 294, in <module>
        prepare_enviroment()
      File "/Users/richpizor/stable-diffusion-webui/launch.py", line 209, in prepare_enviroment
        run_python("import torch; assert torch.cuda.is_available(), 'Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check'")
      File "/Users/richpizor/stable-diffusion-webui/launch.py", line 73, in run_python
        return run(f'"{python}" -c "{code}"', desc, errdesc)
      File "/Users/richpizor/stable-diffusion-webui/launch.py", line 49, in run
        raise RuntimeError(message)
    RuntimeError: Error running command.
    Command: "/opt/homebrew/opt/python@3.10/bin/python3.10" -c "import torch; assert torch.cuda.is_available(), 'Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check'"
    Error code: 1
    stdout: <empty>
    stderr: Traceback (most recent call last):
      File "<string>", line 1, in <module>
    AssertionError: Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check
            

    You'll want to change that line to export COMMANDLINE_ARGS="$COMMANDLINE_ARGS --opt-split-attention-v1" . The reason completely overriding it isn't working for you is that you also need --skip-torch-cuda-test .

    Also if you’ve updated web UI recently then it automatically sets python_cmd to python3.10 if it’s available, so you don’t need to change it (you can if you want though, doesn’t matter).
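In other words, the working webui-user.sh line looks like this (a sketch; --medvram is included here only because it was in the command being ported over):

```shell
# webui-user.sh: append to the existing args rather than replacing them,
# so flags set automatically for macOS (e.g. --skip-torch-cuda-test) survive.
export COMMANDLINE_ARGS="$COMMANDLINE_ARGS --opt-split-attention-v1 --medvram"
```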

    Does anyone have any insight into what would cause the errors in the terminal below after launching './webui.sh'?

    I'm running a 2021 MacBook Pro 16" (10c/24c w/ 64GB of RAM) on Ventura 13.0.1.

    Had one issue before this where I was unable to access "https://github.com/TencentARC/GFPGAN.git/", so I did it manually and then it worked the next time.

    Have tried clearing 'repositories' and 'venv' but still getting the same message.

    Python 3.10.8 (main, Oct 21 2022, 22:22:30) [Clang 14.0.0 (clang-1400.0.29.202)]
    Commit hash: 44c46f0ed395967cd3830dd481a2db759fda5b3b
    Installing torch and torchvision
    Installing gfpgan
    Installing clip
    Installing open_clip
    Cloning Stable Diffusion into repositories/stable-diffusion-stability-ai...
    Cloning Taming Transformers into repositories/taming-transformers...
    Cloning K-diffusion into repositories/k-diffusion...
    Cloning CodeFormer into repositories/CodeFormer...
    Cloning BLIP into repositories/BLIP...
    Installing requirements for CodeFormer
    Installing requirements for Web UI
    Launching Web UI with arguments: --no-half --use-cpu interrogate
    Warning: caught exception 'Torch not compiled with CUDA enabled', memory monitor disabled
    No module 'xformers'. Proceeding without it.
    LatentDiffusion: Running in eps-prediction mode
    DiffusionWrapper has 859.52 M params.
    Loading weights [2c02b20a] from /Users/**USER**/stable-diffusion-webui/models/Stable-diffusion/768-v-ema.ckpt
    Traceback (most recent call last):
      File "/Users/**USER**/stable-diffusion-webui/launch.py", line 295, in <module>
        start()
      File "/Users/**USER**/stable-diffusion-webui/launch.py", line 290, in start
        webui.webui()
      File "/Users/**USER**/stable-diffusion-webui/webui.py", line 132, in webui
        initialize()
      File "/Users/**USER**/stable-diffusion-webui/webui.py", line 62, in initialize
        modules.sd_models.load_model()
      File "/Users/**USER**/stable-diffusion-webui/modules/sd_models.py", line 261, in load_model
        load_model_weights(sd_model, checkpoint_info)
      File "/Users/**USER**/stable-diffusion-webui/modules/sd_models.py", line 192, in load_model_weights
        model.load_state_dict(sd, strict=False)
      File "/Users/**USER**/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1604, in load_state_dict
        raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
    RuntimeError: Error(s) in loading state_dict for LatentDiffusion:
    	size mismatch for model.diffusion_model.input_blocks.1.1.proj_in.weight: copying a param with shape torch.Size([320, 320]) from checkpoint, the shape in current model is torch.Size([320, 320, 1, 1]).
    	size mismatch for model.diffusion_model.input_blocks.1.1.transformer_blocks.0.attn2.to_k.weight: copying a param with shape torch.Size([320, 1024]) from checkpoint, the shape in current model is torch.Size([320, 768]).
    	size mismatch for model.diffusion_model.input_blocks.1.1.transformer_blocks.0.attn2.to_v.weight: copying a param with shape torch.Size([320, 1024]) from checkpoint, the shape in current model is torch.Size([320, 768]).
    	size mismatch for model.diffusion_model.input_blocks.1.1.proj_out.weight: copying a param with shape torch.Size([320, 320]) from checkpoint, the shape in current model is torch.Size([320, 320, 1, 1]).
    	size mismatch for model.diffusion_model.input_blocks.2.1.proj_in.weight: copying a param with shape torch.Size([320, 320]) from checkpoint, the shape in current model is torch.Size([320, 320, 1, 1]).
    	size mismatch for model.diffusion_model.input_blocks.2.1.transformer_blocks.0.attn2.to_k.weight: copying a param with shape torch.Size([320, 1024]) from checkpoint, the shape in current model is torch.Size([320, 768]).
    	size mismatch for model.diffusion_model.input_blocks.2.1.transformer_blocks.0.attn2.to_v.weight: copying a param with shape torch.Size([320, 1024]) from checkpoint, the shape in current model is torch.Size([320, 768]).
    	size mismatch for model.diffusion_model.input_blocks.2.1.proj_out.weight: copying a param with shape torch.Size([320, 320]) from checkpoint, the shape in current model is torch.Size([320, 320, 1, 1]).
    	size mismatch for model.diffusion_model.input_blocks.4.1.proj_in.weight: copying a param with shape torch.Size([640, 640]) from checkpoint, the shape in current model is torch.Size([640, 640, 1, 1]).
    	size mismatch for model.diffusion_model.input_blocks.4.1.transformer_blocks.0.attn2.to_k.weight: copying a param with shape torch.Size([640, 1024]) from checkpoint, the shape in current model is torch.Size([640, 768]).
    	size mismatch for model.diffusion_model.input_blocks.4.1.transformer_blocks.0.attn2.to_v.weight: copying a param with shape torch.Size([640, 1024]) from checkpoint, the shape in current model is torch.Size([640, 768]).
    	size mismatch for model.diffusion_model.input_blocks.4.1.proj_out.weight: copying a param with shape torch.Size([640, 640]) from checkpoint, the shape in current model is torch.Size([640, 640, 1, 1]).
    	size mismatch for model.diffusion_model.input_blocks.5.1.proj_in.weight: copying a param with shape torch.Size([640, 640]) from checkpoint, the shape in current model is torch.Size([640, 640, 1, 1]).
    	size mismatch for model.diffusion_model.input_blocks.5.1.transformer_blocks.0.attn2.to_k.weight: copying a param with shape torch.Size([640, 1024]) from checkpoint, the shape in current model is torch.Size([640, 768]).
    	size mismatch for model.diffusion_model.input_blocks.5.1.transformer_blocks.0.attn2.to_v.weight: copying a param with shape torch.Size([640, 1024]) from checkpoint, the shape in current model is torch.Size([640, 768]).
    	size mismatch for model.diffusion_model.input_blocks.5.1.proj_out.weight: copying a param with shape torch.Size([640, 640]) from checkpoint, the shape in current model is torch.Size([640, 640, 1, 1]).
    	size mismatch for model.diffusion_model.input_blocks.7.1.proj_in.weight: copying a param with shape torch.Size([1280, 1280]) from checkpoint, the shape in current model is torch.Size([1280, 1280, 1, 1]).
    	size mismatch for model.diffusion_model.input_blocks.7.1.transformer_blocks.0.attn2.to_k.weight: copying a param with shape torch.Size([1280, 1024]) from checkpoint, the shape in current model is torch.Size([1280, 768]).
    	size mismatch for model.diffusion_model.input_blocks.7.1.transformer_blocks.0.attn2.to_v.weight: copying a param with shape torch.Size([1280, 1024]) from checkpoint, the shape in current model is torch.Size([1280, 768]).
    	size mismatch for model.diffusion_model.input_blocks.7.1.proj_out.weight: copying a param with shape torch.Size([1280, 1280]) from checkpoint, the shape in current model is torch.Size([1280, 1280, 1, 1]).
    	size mismatch for model.diffusion_model.input_blocks.8.1.proj_in.weight: copying a param with shape torch.Size([1280, 1280]) from checkpoint, the shape in current model is torch.Size([1280, 1280, 1, 1]).
    	size mismatch for model.diffusion_model.input_blocks.8.1.transformer_blocks.0.attn2.to_k.weight: copying a param with shape torch.Size([1280, 1024]) from checkpoint, the shape in current model is torch.Size([1280, 768]).
    	size mismatch for model.diffusion_model.input_blocks.8.1.transformer_blocks.0.attn2.to_v.weight: copying a param with shape torch.Size([1280, 1024]) from checkpoint, the shape in current model is torch.Size([1280, 768]).
    	size mismatch for model.diffusion_model.input_blocks.8.1.proj_out.weight: copying a param with shape torch.Size([1280, 1280]) from checkpoint, the shape in current model is torch.Size([1280, 1280, 1, 1]).
    	size mismatch for model.diffusion_model.middle_block.1.proj_in.weight: copying a param with shape torch.Size([1280, 1280]) from checkpoint, the shape in current model is torch.Size([1280, 1280, 1, 1]).
    	size mismatch for model.diffusion_model.middle_block.1.transformer_blocks.0.attn2.to_k.weight: copying a param with shape torch.Size([1280, 1024]) from checkpoint, the shape in current model is torch.Size([1280, 768]).
    	size mismatch for model.diffusion_model.middle_block.1.transformer_blocks.0.attn2.to_v.weight: copying a param with shape torch.Size([1280, 1024]) from checkpoint, the shape in current model is torch.Size([1280, 768]).
    	size mismatch for model.diffusion_model.middle_block.1.proj_out.weight: copying a param with shape torch.Size([1280, 1280]) from checkpoint, the shape in current model is torch.Size([1280, 1280, 1, 1]).
    	size mismatch for model.diffusion_model.output_blocks.3.1.proj_in.weight: copying a param with shape torch.Size([1280, 1280]) from checkpoint, the shape in current model is torch.Size([1280, 1280, 1, 1]).
    	size mismatch for model.diffusion_model.output_blocks.3.1.transformer_blocks.0.attn2.to_k.weight: copying a param with shape torch.Size([1280, 1024]) from checkpoint, the shape in current model is torch.Size([1280, 768]).
    	size mismatch for model.diffusion_model.output_blocks.3.1.transformer_blocks.0.attn2.to_v.weight: copying a param with shape torch.Size([1280, 1024]) from checkpoint, the shape in current model is torch.Size([1280, 768]).
    	size mismatch for model.diffusion_model.output_blocks.3.1.proj_out.weight: copying a param with shape torch.Size([1280, 1280]) from checkpoint, the shape in current model is torch.Size([1280, 1280, 1, 1]).
    	size mismatch for model.diffusion_model.output_blocks.4.1.proj_in.weight: copying a param with shape torch.Size([1280, 1280]) from checkpoint, the shape in current model is torch.Size([1280, 1280, 1, 1]).
    	size mismatch for model.diffusion_model.output_blocks.4.1.transformer_blocks.0.attn2.to_k.weight: copying a param with shape torch.Size([1280, 1024]) from checkpoint, the shape in current model is torch.Size([1280, 768]).
    	size mismatch for model.diffusion_model.output_blocks.4.1.transformer_blocks.0.attn2.to_v.weight: copying a param with shape torch.Size([1280, 1024]) from checkpoint, the shape in current model is torch.Size([1280, 768]).
    	size mismatch for model.diffusion_model.output_blocks.4.1.proj_out.weight: copying a param with shape torch.Size([1280, 1280]) from checkpoint, the shape in current model is torch.Size([1280, 1280, 1, 1]).
    	size mismatch for model.diffusion_model.output_blocks.5.1.proj_in.weight: copying a param with shape torch.Size([1280, 1280]) from checkpoint, the shape in current model is torch.Size([1280, 1280, 1, 1]).
    	size mismatch for model.diffusion_model.output_blocks.5.1.transformer_blocks.0.attn2.to_k.weight: copying a param with shape torch.Size([1280, 1024]) from checkpoint, the shape in current model is torch.Size([1280, 768]).
    	size mismatch for model.diffusion_model.output_blocks.5.1.transformer_blocks.0.attn2.to_v.weight: copying a param with shape torch.Size([1280, 1024]) from checkpoint, the shape in current model is torch.Size([1280, 768]).
    	size mismatch for model.diffusion_model.output_blocks.5.1.proj_out.weight: copying a param with shape torch.Size([1280, 1280]) from checkpoint, the shape in current model is torch.Size([1280, 1280, 1, 1]).
    	size mismatch for model.diffusion_model.output_blocks.6.1.proj_in.weight: copying a param with shape torch.Size([640, 640]) from checkpoint, the shape in current model is torch.Size([640, 640, 1, 1]).
    	size mismatch for model.diffusion_model.output_blocks.6.1.transformer_blocks.0.attn2.to_k.weight: copying a param with shape torch.Size([640, 1024]) from checkpoint, the shape in current model is torch.Size([640, 768]).
    	size mismatch for model.diffusion_model.output_blocks.6.1.transformer_blocks.0.attn2.to_v.weight: copying a param with shape torch.Size([640, 1024]) from checkpoint, the shape in current model is torch.Size([640, 768]).
    	size mismatch for model.diffusion_model.output_blocks.6.1.proj_out.weight: copying a param with shape torch.Size([640, 640]) from checkpoint, the shape in current model is torch.Size([640, 640, 1, 1]).
    	size mismatch for model.diffusion_model.output_blocks.7.1.proj_in.weight: copying a param with shape torch.Size([640, 640]) from checkpoint, the shape in current model is torch.Size([640, 640, 1, 1]).
    	size mismatch for model.diffusion_model.output_blocks.7.1.transformer_blocks.0.attn2.to_k.weight: copying a param with shape torch.Size([640, 1024]) from checkpoint, the shape in current model is torch.Size([640, 768]).
    	size mismatch for model.diffusion_model.output_blocks.7.1.transformer_blocks.0.attn2.to_v.weight: copying a param with shape torch.Size([640, 1024]) from checkpoint, the shape in current model is torch.Size([640, 768]).
    	size mismatch for model.diffusion_model.output_blocks.7.1.proj_out.weight: copying a param with shape torch.Size([640, 640]) from checkpoint, the shape in current model is torch.Size([640, 640, 1, 1]).
    	size mismatch for model.diffusion_model.output_blocks.8.1.proj_in.weight: copying a param with shape torch.Size([640, 640]) from checkpoint, the shape in current model is torch.Size([640, 640, 1, 1]).
    	size mismatch for model.diffusion_model.output_blocks.8.1.transformer_blocks.0.attn2.to_k.weight: copying a param with shape torch.Size([640, 1024]) from checkpoint, the shape in current model is torch.Size([640, 768]).
    	size mismatch for model.diffusion_model.output_blocks.8.1.transformer_blocks.0.attn2.to_v.weight: copying a param with shape torch.Size([640, 1024]) from checkpoint, the shape in current model is torch.Size([640, 768]).
    	size mismatch for model.diffusion_model.output_blocks.8.1.proj_out.weight: copying a param with shape torch.Size([640, 640]) from checkpoint, the shape in current model is torch.Size([640, 640, 1, 1]).
    	size mismatch for model.diffusion_model.output_blocks.9.1.proj_in.weight: copying a param with shape torch.Size([320, 320]) from checkpoint, the shape in current model is torch.Size([320, 320, 1, 1]).
    	size mismatch for model.diffusion_model.output_blocks.9.1.transformer_blocks.0.attn2.to_k.weight: copying a param with shape torch.Size([320, 1024]) from checkpoint, the shape in current model is torch.Size([320, 768]).
    	size mismatch for model.diffusion_model.output_blocks.9.1.transformer_blocks.0.attn2.to_v.weight: copying a param with shape torch.Size([320, 1024]) from checkpoint, the shape in current model is torch.Size([320, 768]).
    	size mismatch for model.diffusion_model.output_blocks.9.1.proj_out.weight: copying a param with shape torch.Size([320, 320]) from checkpoint, the shape in current model is torch.Size([320, 320, 1, 1]).
    	size mismatch for model.diffusion_model.output_blocks.10.1.proj_in.weight: copying a param with shape torch.Size([320, 320]) from checkpoint, the shape in current model is torch.Size([320, 320, 1, 1]).
    	size mismatch for model.diffusion_model.output_blocks.10.1.transformer_blocks.0.attn2.to_k.weight: copying a param with shape torch.Size([320, 1024]) from checkpoint, the shape in current model is torch.Size([320, 768]).
    	size mismatch for model.diffusion_model.output_blocks.10.1.transformer_blocks.0.attn2.to_v.weight: copying a param with shape torch.Size([320, 1024]) from checkpoint, the shape in current model is torch.Size([320, 768]).
    	size mismatch for model.diffusion_model.output_blocks.10.1.proj_out.weight: copying a param with shape torch.Size([320, 320]) from checkpoint, the shape in current model is torch.Size([320, 320, 1, 1]).
    	size mismatch for model.diffusion_model.output_blocks.11.1.proj_in.weight: copying a param with shape torch.Size([320, 320]) from checkpoint, the shape in current model is torch.Size([320, 320, 1, 1]).
    	size mismatch for model.diffusion_model.output_blocks.11.1.transformer_blocks.0.attn2.to_k.weight: copying a param with shape torch.Size([320, 1024]) from checkpoint, the shape in current model is torch.Size([320, 768]).
    	size mismatch for model.diffusion_model.output_blocks.11.1.transformer_blocks.0.attn2.to_v.weight: copying a param with shape torch.Size([320, 1024]) from checkpoint, the shape in current model is torch.Size([320, 768]).
    	size mismatch for model.diffusion_model.output_blocks.11.1.proj_out.weight: copying a param with shape torch.Size([320, 320]) from checkpoint, the shape in current model is torch.Size([320, 320, 1, 1]).
    


    I found the problem: if you don't provide the "--config" option, it will use the old ./v1-inference.yaml in the project root, which is not compatible with the 2.0 models.

    Actually no, --config is not required.

    Putting the config file in the same folder as the model did not work for me (reading through the code I don't see how it could work for anybody else either...)

    What does it mean to just put it in your .zshrc file? Do we just copy-paste this into the terminal?
    eval "$(/opt/homebrew/bin/brew shellenv)"
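    Concretely, "put it in your .zshrc" means appending that eval line to the ~/.zshrc file in your home directory, so it runs in every new terminal session rather than being pasted once. One way to do that (a sketch; the Homebrew path shown is the Apple silicon default):

```shell
# Append the Homebrew shellenv setup to ~/.zshrc so every new zsh session
# has brew on its PATH (the path is the Apple silicon default install location).
echo 'eval "$(/opt/homebrew/bin/brew shellenv)"' >> ~/.zshrc
```

    Open a new terminal window (or run source ~/.zshrc) afterwards so the change takes effect.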

    I tried training an embedding and it fails instantly.

    I get this error in the terminal:

    Traceback (most recent call last):
      File "/Users/username/Stable Diffusion/stable-diffusion-webui/modules/textual_inversion/textual_inversion.py", line 315, in train_embedding
        scheduler.apply(optimizer, embedding.step)
      File "/Users/username/Stable Diffusion/stable-diffusion-webui/modules/textual_inversion/learn_schedule.py", line 62, in apply
        if step_number < self.end_step:
    TypeError: '<' not supported between instances of 'NoneType' and 'int'

    From my reply below:

    I've put in a PR (#5810) to address the issues with training; if that gets merged, give it another try. Just be warned that you likely need more than 32 GB of RAM for training to work correctly. If you have 64 GB of RAM or more, it will probably work fine. Performance doesn't actually seem too bad right now (2 to 3 sec/it), and I'm not entirely sure why, because it was horrible when I tried it about a month ago with the same version of PyTorch (~25 sec/it IIRC).

    Training is indeed broken at this time, but the PR should address that.

    So I removed my old install and started afresh.
    The first message I saw was

    Can't run without a checkpoint. Find and place a .ckpt file into any of those locations. The program will exit.
    

    So I went over to Hugging Face and tried to find one...
    I downloaded this one (https://huggingface.co/stabilityai/stable-diffusion-2/tree/main) and put it in the Stable-diffusion folder.

    now I get

    RuntimeError: Error(s) in loading state_dict for LatentDiffusion:
    	size mismatch for model.diffusion_model.input_blocks.1.1.proj_in.weight: copying a param with shape torch.Size([320, 320]) from checkpoint, the shape in current model is torch.Size([320, 320, 1, 1]).
    	size mismatch for model.diffusion_model.input_blocks.1.1.transformer_blocks.0.attn2.to_k.weight: copying a param with shape torch.Size([320, 1024]) from checkpoint, the shape in current model is torch.Size([320, 768]).
    	size mismatch for model.diffusion_model.input_blocks.1.1.transformer_blocks.0.attn2.to_v.weight: copying a param with shape torch.Size([320, 1024]) from checkpoint, the shape in current model is torch.Size([320, 768]).
    

    I got similar issues with the 2.1 model.
    It would be great if you could update the document to explain where to download the models from, and any extra configuration needed.

    Failed on step 5, after running ./webui.sh.

    Log as follows:

    Using legacy 'setup.py install' for gfpgan, since package 'wheel' is not installed.
    Using legacy 'setup.py install' for basicsr, since package 'wheel' is not installed.
    Using legacy 'setup.py install' for lmdb, since package 'wheel' is not installed.
    Using legacy 'setup.py install' for filterpy, since package 'wheel' is not installed.
    Using legacy 'setup.py install' for future, since package 'wheel' is not installed.
    Installing collected packages: yapf, tensorboard-plugin-wit, pyasn1, lmdb, addict, wheel, tqdm, tifffile, tensorboard-data-server, six, scipy, rsa, pyyaml, PyWavelets, pyparsing, pyasn1-modules, protobuf, packaging, opencv-python, oauthlib, networkx, MarkupSafe, markdown, llvmlite, kiwisolver, imageio, future, fonttools, cycler, contourpy, cachetools, absl-py, werkzeug, scikit-image, requests-oauthlib, python-dateutil, numba, grpcio, google-auth, matplotlib, google-auth-oauthlib, tb-nightly, filterpy, facexlib, basicsr, gfpgan
    Running setup.py install for lmdb: started
    Running setup.py install for lmdb: finished with status 'error'

    stderr: Running command git clone --filter=blob:none --quiet https://github.com/TencentARC/GFPGAN.git /private/var/folders/d_/nz3s4q253752_l7h14mcc6cc0000gn/T/pip-req-build-8moh8ytq
    Running command git rev-parse -q --verify 'sha^8d2447a2d918f8eba5a4a01463fd48e45126a379'
    Running command git fetch -q https://github.com/TencentARC/GFPGAN.git 8d2447a2d918f8eba5a4a01463fd48e45126a379
    Running command git checkout -q 8d2447a2d918f8eba5a4a01463fd48e45126a379
    error: subprocess-exited-with-error

    × Running setup.py install for lmdb did not run successfully.
    │ exit code: 1
    ╰─> [25 lines of output]
    py-lmdb: Using bundled liblmdb with py-lmdb patches; override with LMDB_FORCE_SYSTEM=1 or LMDB_PURE=1.
    patching file lmdb.h
    patching file mdb.c
    py-lmdb: Using CPython extension; override with LMDB_FORCE_CFFI=1.
    running install
    /Users/tian/stable-diffusion-webui/venv/lib/python3.10/site-packages/setuptools/command/install.py:34: SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools.
    warnings.warn(
    running build
    running build_py
    creating build/lib.macosx-13-arm64-cpython-310
    creating build/lib.macosx-13-arm64-cpython-310/lmdb
    copying lmdb/cffi.py -> build/lib.macosx-13-arm64-cpython-310/lmdb
    copying lmdb/__init__.py -> build/lib.macosx-13-arm64-cpython-310/lmdb
    copying lmdb/_config.py -> build/lib.macosx-13-arm64-cpython-310/lmdb
    copying lmdb/tool.py -> build/lib.macosx-13-arm64-cpython-310/lmdb
    copying lmdb/main.py -> build/lib.macosx-13-arm64-cpython-310/lmdb
    running build_ext
    building 'cpython' extension
    creating build/temp.macosx-13-arm64-cpython-310
    creating build/temp.macosx-13-arm64-cpython-310/build
    creating build/temp.macosx-13-arm64-cpython-310/build/lib
    creating build/temp.macosx-13-arm64-cpython-310/lmdb
    clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -Ilib/py-lmdb -Ibuild/lib -I/Users/tian/stable-diffusion-webui/venv/include -I/opt/homebrew/opt/python@3.10/Frameworks/Python.framework/Versions/3.10/include/python3.10 -c build/lib/mdb.c -o build/temp.macosx-13-arm64-cpython-310/build/lib/mdb.o -DHAVE_PATCHED_LMDB=1 -UNDEBUG -w
    xcrun: error: unable to load libxcrun (dlopen(/Library/Developer/CommandLineTools/usr/lib/libxcrun.dylib, 0x0005): tried: '/Library/Developer/CommandLineTools/usr/lib/libxcrun.dylib' (mach-o file, but is an incompatible architecture (have 'x86_64', need '')), '/System/Volumes/Preboot/Cryptexes/OS/Library/Developer/CommandLineTools/usr/lib/libxcrun.dylib' (no such file), '/Library/Developer/CommandLineTools/usr/lib/libxcrun.dylib' (mach-o file, but is an incompatible architecture (have 'x86_64', need ''))).
    error: command '/usr/bin/clang' failed with exit code 1
    [end of output]

    note: This error originates from a subprocess, and is likely not a problem with pip.
    error: legacy-install-failure

    × Encountered error while trying to install package.
    ╰─> lmdb

    note: This is an issue with the package mentioned above, not pip.
    hint: See above for output from the failure.

    I am experiencing the same issue and have not yet figured out how to resolve it. When you mention that the problem may be related to Xcode, what exactly do you mean? I did not use Xcode to install the web UI and am not familiar with it. What should I do to solve the lmdb package error? Could someone please help me? (I am a newbie, so please be polite.)

    I am a completely non-technical person (a product manager) and I want to try out the tool to generate various images. I am wondering if anyone can provide instructions in layman's language with all (ALL, really) steps involved. I'd be more than happy to test, then create a document with screenshots and add various guides to the product.

    I'm using macOS 12.6.1.
    I managed to get to the ./webui.sh step, then the automatic dependency download becomes a disaster.
    It always seems to get stuck for a few minutes when installing gfpgan, then displays the following message.

    Python 3.10.8 (main, Oct 13 2022, 09:48:40) [Clang 14.0.0 (clang-1400.0.29.102)]
    Commit hash: 685f9631b56ff8bd43bce24ff5ce0f9a0e9af490
    Installing gfpgan
    Traceback (most recent call last):
      File "/Users/***/stable-diffusion-webui/launch.py", line 294, in <module>
        prepare_environment()
      File "/Users/***/stable-diffusion-webui/launch.py", line 212, in prepare_environment
        run_pip(f"install {gfpgan_package}", "gfpgan")
      File "/Users/***/stable-diffusion-webui/launch.py", line 78, in run_pip
        return run(f'"{python}" -m pip {args} --prefer-binary{index_url_line}', desc=f"Installing {desc}", errdesc=f"Couldn't install {desc}")
      File "/Users/***/stable-diffusion-webui/launch.py", line 49, in run
        raise RuntimeError(message)
    RuntimeError: Couldn't install gfpgan.
    Command: "/Users/***/stable-diffusion-webui/venv/bin/python3.10" -m pip install git+https://github.com/TencentARC/GFPGAN.git@8d2447a2d918f8eba5a4a01463fd48e45126a379 --prefer-binary
    Error code: 1
    stdout: Collecting git+https://github.com/TencentARC/GFPGAN.git@8d2447a2d918f8eba5a4a01463fd48e45126a379
      Cloning https://github.com/TencentARC/GFPGAN.git (to revision 8d2447a2d918f8eba5a4a01463fd48e45126a379) to /private/var/folders/90/t9l7qzm964n_mk0p_ltvg35h0000gn/T/pip-req-build-tic45oxo
    stderr:   Running command git clone --filter=blob:none --quiet https://github.com/TencentARC/GFPGAN.git /private/var/folders/90/t9l7qzm964n_mk0p_ltvg35h0000gn/T/pip-req-build-tic45oxo
      fatal: unable to access 'https://github.com/TencentARC/GFPGAN.git/': Empty reply from server
      error: subprocess-exited-with-error
      × git clone --filter=blob:none --quiet https://github.com/TencentARC/GFPGAN.git /private/var/folders/90/t9l7qzm964n_mk0p_ltvg35h0000gn/T/pip-req-build-tic45oxo did not run successfully.
      │ exit code: 128
      ╰─> See above for output.
      note: This error originates from a subprocess, and is likely not a problem with pip.
    error: subprocess-exited-with-error
    × git clone --filter=blob:none --quiet https://github.com/TencentARC/GFPGAN.git /private/var/folders/90/t9l7qzm964n_mk0p_ltvg35h0000gn/T/pip-req-build-tic45oxo did not run successfully.
    │ exit code: 128
    ╰─> See above for output.
    note: This error originates from a subprocess, and is likely not a problem with pip.
    [notice] A new release of pip available: 22.2.2 -> 22.3.1
    [notice] To update, run: pip install --upgrade pip
    

    Even after gfpgan installs successfully, it gets stuck on clip, then on open_clip (when clip fortunately installs, maybe 1 out of 5 times).
    I live in China, so I suspect this has to do with the firewall interfering with my connection to GitHub... I tried editing hosts to no avail.
    At this point I'm close to giving up. Is anyone having similar issues? Is there a way for me to manually install these dependencies?
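    One workaround others in this situation have tried is routing the terminal session through a local proxy before launching the installer. A minimal sketch, assuming a proxy client such as Clash listening on 127.0.0.1:7890 (the port is an assumption; use your client's actual port):

```shell
# Route pip/git traffic in this terminal session through a local HTTP proxy.
# 127.0.0.1:7890 is an assumed Clash-style default; adjust to your client.
export http_proxy=http://127.0.0.1:7890
export https_proxy=http://127.0.0.1:7890
```

    Run these in the same terminal window before ./webui.sh; they only affect that session.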

    Does this UI only work if you keep a VPN on the whole time? It seems to error out as soon as I disconnect my VPN.

    Yes, in China you need a VPN, or ClashX Pro with enhanced mode.

    Got it, thanks a lot!

    Do the firewall-bypass commands go directly into the same terminal where I'm installing SD? I tried and it still failed. I hope someone can explain the exact steps, thanks 🙏

    The http.proxy method from the comment above didn't work for me at first either. After resetting the proxy and scoping it to GitHub specifically, it worked:
    git config --global http.https://github.com.proxy socks5://127.0.0.1:XXXX
    Replace XXXX with your SOCKS port and give it a try.


    Works! Thanks a lot!

    I get the following issue on my M1: for some reason venv installs x86 packages even though Python and arch report arm64. Any pointers on cleaning up or making the environment work on M1 for arm64? I somehow need to install the missing arm64 version of the crypto lib.

    /opt/homebrew/bin/python3 is architecture: arm64

    Launching Web UI with arguments: --no-half --use-cpu interrogate
    Traceback (most recent call last):
      File "/Users/miksvenson/repos/stable-diffusion-webui/launch.py", line 295, in <module>
        start()
      File "/Users/miksvenson/repos/stable-diffusion-webui/launch.py", line 286, in start
        import webui
      File "/Users/miksvenson/repos/stable-diffusion-webui/webui.py", line 11, in <module>
        from modules.call_queue import wrap_queued_call, queue_lock, wrap_gradio_gpu_call
      File "/Users/miksvenson/repos/stable-diffusion-webui/modules/call_queue.py", line 7, in <module>
        from modules import shared
      File "/Users/miksvenson/repos/stable-diffusion-webui/modules/shared.py", line 8, in <module>
        import gradio as gr
      File "/Users/miksvenson/repos/stable-diffusion-webui/venv/lib/python3.10/site-packages/gradio/__init__.py", line 3, in <module>
        import gradio.components as components
      File "/Users/miksvenson/repos/stable-diffusion-webui/venv/lib/python3.10/site-packages/gradio/components.py", line 31, in <module>
        from gradio import media_data, processing_utils, utils
      File "/Users/miksvenson/repos/stable-diffusion-webui/venv/lib/python3.10/site-packages/gradio/processing_utils.py", line 20, in <module>
        from gradio import encryptor, utils
      File "/Users/miksvenson/repos/stable-diffusion-webui/venv/lib/python3.10/site-packages/gradio/encryptor.py", line 2, in <module>
        from Crypto.Cipher import AES
      File "/Users/miksvenson/repos/stable-diffusion-webui/venv/lib/python3.10/site-packages/Crypto/Cipher/__init__.py", line 27, in <module>
        from Crypto.Cipher._mode_ecb import _create_ecb_cipher
      File "/Users/miksvenson/repos/stable-diffusion-webui/venv/lib/python3.10/site-packages/Crypto/Cipher/_mode_ecb.py", line 35, in <module>
        raw_ecb_lib = load_pycryptodome_raw_lib("Crypto.Cipher._raw_ecb", """
      File "/Users/miksvenson/repos/stable-diffusion-webui/venv/lib/python3.10/site-packages/Crypto/Util/_raw_api.py", line 309, in load_pycryptodome_raw_lib
        raise OSError("Cannot load native module '%s': %s" % (name, ", ".join(attempts)))
    OSError: Cannot load native module 'Crypto.Cipher._raw_ecb': Not found '_raw_ecb.cpython-310-darwin.so', Cannot load '_raw_ecb.abi3.so': cannot load library '/Users/miksvenson/repos/stable-diffusion-webui/venv/lib/python3.10/site-packages/Crypto/Util/../Cipher/_raw_ecb.abi3.so': dlopen(/Users/miksvenson/repos/stable-diffusion-webui/venv/lib/python3.10/site-packages/Crypto/Util/../Cipher/_raw_ecb.abi3.so, 0x0002): tried: '/Users/miksvenson/repos/stable-diffusion-webui/venv/lib/python3.10/site-packages/Crypto/Util/../Cipher/_raw_ecb.abi3.so' (mach-o file, but is an incompatible architecture (have 'x86_64', need 'arm64')), '/System/Volumes/Preboot/Cryptexes/OS/Users/miksvenson/repos/stable-diffusion-webui/venv/lib/python3.10/site-packages/Crypto/Util/../Cipher/_raw_ecb.abi3.so' (no such file), '/Users/miksvenson/repos/stable-diffusion-webui/venv/lib/python3.10/site-packages/Crypto/Util/../Cipher/_raw_ecb.abi3.so' (mach-o file, but is an incompatible architecture (have 'x86_64', need 'arm64')), '/Users/miksvenson/repos/stable-diffusion-webui/venv/lib/python3.10/site-packages/Crypto/Cipher/_raw_ecb.abi3.so' (mach-o file, but is an incompatible architecture (have 'x86_64', need 'arm64')), '/System/Volumes/Preboot/Cryptexes/OS/Users/miksvenson/repos/stable-diffusion-webui/venv/lib/python3.10/site-packages/Crypto/Cipher/_raw_ecb.abi3.so' (no such file), '/Users/miksvenson/repos/stable-diffusion-webui/venv/lib/python3.10/site-packages/Crypto/Cipher/_raw_ecb.abi3.so' (mach-o file, but is an incompatible architecture (have 'x86_64', need 'arm64')).  Additionally, ctypes.util.find_library() did not manage to locate a library called '/Users/miksvenson/repos/stable-diffusion-webui/venv/lib/python3.10/site-packages/Crypto/Util/../Cipher/_raw_ecb.abi3.so', Not found '_raw_ecb.so'

    Got the same error, but this didn't solve it for me. Anybody?
    Here is my terminal output:
    File "/Users/home/stable-diffusion-webui/launch.py", line 317, in <module>
      start()
    File "/Users/home/stable-diffusion-webui/launch.py", line 308, in start
      import webui
    File "/Users/home/stable-diffusion-webui/webui.py", line 13, in <module>
      from modules.call_queue import wrap_queued_call, queue_lock, wrap_gradio_gpu_call
    File "/Users/home/stable-diffusion-webui/modules/call_queue.py", line 7, in <module>
      from modules import shared, progress
    File "/Users/home/stable-diffusion-webui/modules/shared.py", line 9, in <module>
      import gradio as gr
    File "/Users/home/stable-diffusion-webui/venv/lib/python3.10/site-packages/gradio/__init__.py", line 3, in <module>
      import gradio.components as components
    File "/Users/home/stable-diffusion-webui/venv/lib/python3.10/site-packages/gradio/components.py", line 34, in <module>
      from gradio import media_data, processing_utils, utils
    File "/Users/home/stable-diffusion-webui/venv/lib/python3.10/site-packages/gradio/processing_utils.py", line 23, in <module>
      from gradio import encryptor, utils
    File "/Users/home/stable-diffusion-webui/venv/lib/python3.10/site-packages/gradio/encryptor.py", line 2, in <module>
      from Crypto.Cipher import AES
    File "/Users/home/stable-diffusion-webui/venv/lib/python3.10/site-packages/Crypto/Cipher/__init__.py", line 27, in <module>
      from Crypto.Cipher._mode_ecb import _create_ecb_cipher
    File "/Users/home/stable-diffusion-webui/venv/lib/python3.10/site-packages/Crypto/Cipher/_mode_ecb.py", line 35, in <module>
      raw_ecb_lib = load_pycryptodome_raw_lib("Crypto.Cipher._raw_ecb", """
    File "/Users/home/stable-diffusion-webui/venv/lib/python3.10/site-packages/Crypto/Util/_raw_api.py", line 309, in load_pycryptodome_raw_lib
      raise OSError("Cannot load native module '%s': %s" % (name, ", ".join(attempts)))
    OSError: Cannot load native module 'Crypto.Cipher._raw_ecb': Not found '_raw_ecb.cpython-310-darwin.so', Cannot load '_raw_ecb.abi3.so': dlopen(/Users/home/stable-diffusion-webui/venv/lib/python3.10/site-packages/Crypto/Util/../Cipher/_raw_ecb.abi3.so, 0x0006): tried: '/Users/home/stable-diffusion-webui/venv/lib/python3.10/site-packages/Crypto/Util/../Cipher/_raw_ecb.abi3.so' (mach-o file, but is an incompatible architecture (have (x86_64), need (arm64e))), '/Users/home/stable-diffusion-webui/venv/lib/python3.10/site-packages/Crypto/Cipher/_raw_ecb.abi3.so' (mach-o file, but is an incompatible architecture (have (x86_64), need (arm64e))), Not found '_raw_ecb.so'

    Same issue. As suggested, clearing the cache and reinstalling didn't work for me, either.

    It seems like a gradio dependency (pycryptodome) is either:

  • not compatible, OR
  • incorrectly installed as the x86 version rather than arm64 (related: Gradio and M1 MacBook, gradio-app/gradio#1940)

    As suggested in the issue, I configured Conda and ran:

    pip uninstall pycryptodome
    conda install pycryptodome

    But no luck. Thank you, world, ahead of time for any help on this one!

    To those with this issue: do you have Xcode installed? I suspect there is a problem with the build environment that is preventing pip from building the required wheel, but it is a bit hard to tell why exactly it isn't simply giving an error message when first trying to install pycryptodome instead of installing the wrong wheel. If it really is just pycryptodome that is the problem, then this wheel can be used:
    pycryptodome-3.16.0-cp35-abi3-macosx_13_0_arm64.whl.zip
    Unzipping that file onto the desktop and running sh -c 'source venv/bin/activate; pip uninstall pycryptodome; pip install ~/Desktop/pycryptodome-3.16.0-cp35-abi3-macosx_13_0_arm64.whl' from the stable-diffusion-webui directory should install a working pycryptodome package.

    Thanks a lot @brkirch, replacing the wheel manually as you proposed fixed the issue. The UI starts just fine now (MacBook Air M1, macOS 13.1).

    do you have Xcode installed?

    Yes, Xcode is installed.

    I don't know why there are always such errors. I couldn't clone it before, so I manually downloaded the zip file from GitHub, but after running ./webui.sh it still couldn't connect. I do have a VPN running, and I can access GitHub directly in a browser. Is there something else that needs to be set in the terminal? I'm so confused; can anyone help me with this? I've failed so many times.

    chain@12 ~ % cd /Users/chain/Documents/stable-diffusion-webui
    chain@12 stable-diffusion-webui % ./webui.sh

    ################################################################
    Install script for stable-diffusion + Web UI
    Tested on Debian 11 (Bullseye), Fedora 34+ and openSUSE Leap 15.4 or newer.
    ################################################################

    ################################################################
    Running on chain user
    ################################################################

    ################################################################
    Clone stable-diffusion-webui
    ################################################################
    Cloning into 'stable-diffusion-webui'...
    fatal: unable to access 'https://github.com/AUTOMATIC1111/stable-diffusion-webui.git/': Failed to connect to 127.0.0.1 port 7890 after 0 ms: Couldn't connect to server
    ./webui.sh: line 202: cd: stable-diffusion-webui/: No such file or directory
    ERROR: Can't cd to /Users/chain/stable-diffusion-webui/, aborting...% chain@12 stable-diffusion-webui %
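    The "Failed to connect to 127.0.0.1 port 7890" line suggests git itself is configured to use a local proxy (7890 is a common Clash port) that was not running at the time. A sketch of how to inspect and clear that setting; the || true just keeps the commands from aborting when nothing is set:

```shell
# Show any proxy git is configured to use globally (no output means none).
git config --global --get http.proxy || true
git config --global --get https.proxy || true
# If a proxy is listed but your proxy client is not running, either start
# the client on that port or remove the settings:
git config --global --unset http.proxy || true
git config --global --unset https.proxy || true
```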

    When I tried to run webui.sh for the first time, following the Apple silicon installation instructions, I got this error message in my terminal:

    ERROR: Could not find a version that satisfies the requirement torch==2.1.0 (from versions: 2.2.0, 2.2.1, 2.2.2, 2.3.0)
    ERROR: No matching distribution found for torch==2.1.0
    

    I had to comment out line 14, export TORCH_COMMAND="pip install torch==2.1.0 torchvision==0.16.0, in webui-macos-env.sh to get webui.sh to proceed. Has anyone else encountered this problem?

    Before the web UI, I had installed ComfyUI and its dependencies. I have a version of PyTorch installed using pip3 install --pre torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/nightly/cpu

    Super, now it works! How did you figure out it was the Intel version of Homebrew? (I also suspect I know why I had the Intel version.)

    Because you got an error saying that 2.3.1 does not exist, which you should get only if you try to install that version on Intel. The paths where your Homebrew and Python were installed confirmed my suspicion.
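    A quick way to check for a mismatched (Intel/Rosetta) toolchain is to compare the architectures the shell, Homebrew, and Python report. A diagnostic sketch; the brew paths in the comments are the Homebrew defaults:

```shell
# Compare what the system and Python report; they should agree (arm64 on
# Apple silicon). /opt/homebrew/bin/brew is the native install location,
# /usr/local/bin/brew the Intel one.
uname -m
command -v brew || true
python3 -c 'import platform; print(platform.machine())'
```

    If Python prints x86_64 while uname prints arm64, your Python (or the Homebrew it came from) is running under Rosetta.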

    I believe I had the same problem initially, and it was because I migrated from an Intel Mac mini to an M3, so Intel Homebrew came along for the ride.

    Thanks for pointing that out. I forgot about that since I am using my custom script to (re)install everything, and it detects the CPU and installs the correct version of brew. So even people who didn't mess with Rosetta or intentionally install the Intel version of Homebrew might have that problem.

    @marco-porru I will add additional comments to my guide, and I will add this check to my script. The script will probably just show an error message and stop, because brew uninstall might be a little tricky. I do not want to unintentionally mess up someone's system. Installation is not a problem, since it does not delete anything.

    Hi everyone, I get this after a fresh installation. I tried reinstalling several times and rebooted my MacBook Pro (2021, Apple M1 Max)
    running Monterey 12.1, but no solution here:

    Can you please help me? Thanks in advance.

    Install script for stable-diffusion + Web UI
    Tested on Debian 11 (Bullseye), Fedora 34+ and openSUSE Leap 15.4 or newer.

    Running on leonard user

    Repo already cloned, using it as install directory

    Create and activate python venv

    Launching launch.py. . .

    Python 3.10.14 (main, Mar 20 2024, 03:57:45) [Clang 14.0.0 (clang-1400.0.29.202)]
    Version: v1.9.3
    Commit hash: 1c0a0c4
    Installing torch and torchvision
    Collecting torch==2.1.0
    Using cached torch-2.1.0-cp310-none-macosx_11_0_arm64.whl.metadata (24 kB)
    Collecting torchvision==0.16.0
    Using cached torchvision-0.16.0-cp310-cp310-macosx_11_0_arm64.whl.metadata (6.6 kB)
    Collecting filelock (from torch==2.1.0)
    Using cached filelock-3.14.0-py3-none-any.whl.metadata (2.8 kB)
    Collecting typing-extensions (from torch==2.1.0)
    Using cached typing_extensions-4.12.0-py3-none-any.whl.metadata (3.0 kB)
    Collecting sympy (from torch==2.1.0)
    Using cached sympy-1.12-py3-none-any.whl.metadata (12 kB)
    Collecting networkx (from torch==2.1.0)
    Using cached networkx-3.3-py3-none-any.whl.metadata (5.1 kB)
    Collecting jinja2 (from torch==2.1.0)
    Using cached jinja2-3.1.4-py3-none-any.whl.metadata (2.6 kB)
    Collecting fsspec (from torch==2.1.0)
    Using cached fsspec-2024.5.0-py3-none-any.whl.metadata (11 kB)
    Collecting numpy (from torchvision==0.16.0)
    Using cached numpy-1.26.4-cp310-cp310-macosx_11_0_arm64.whl.metadata (61 kB)
    Collecting requests (from torchvision==0.16.0)
    Using cached requests-2.32.2-py3-none-any.whl.metadata (4.6 kB)
    Collecting pillow!=8.3.*,>=5.3.0 (from torchvision==0.16.0)
    Using cached pillow-10.3.0-cp310-cp310-macosx_11_0_arm64.whl.metadata (9.2 kB)
    Collecting MarkupSafe>=2.0 (from jinja2->torch==2.1.0)
    Using cached MarkupSafe-2.1.5-cp310-cp310-macosx_10_9_universal2.whl.metadata (3.0 kB)
    Collecting charset-normalizer<4,>=2 (from requests->torchvision==0.16.0)
    Using cached charset_normalizer-3.3.2-cp310-cp310-macosx_11_0_arm64.whl.metadata (33 kB)
    Collecting idna<4,>=2.5 (from requests->torchvision==0.16.0)
    Using cached idna-3.7-py3-none-any.whl.metadata (9.9 kB)
    Collecting urllib3<3,>=1.21.1 (from requests->torchvision==0.16.0)
    Using cached urllib3-2.2.1-py3-none-any.whl.metadata (6.4 kB)
    Collecting certifi>=2017.4.17 (from requests->torchvision==0.16.0)
    Using cached certifi-2024.2.2-py3-none-any.whl.metadata (2.2 kB)
    Collecting mpmath>=0.19 (from sympy->torch==2.1.0)
    Using cached mpmath-1.3.0-py3-none-any.whl.metadata (8.6 kB)
    Using cached torch-2.1.0-cp310-none-macosx_11_0_arm64.whl (59.5 MB)
    Using cached torchvision-0.16.0-cp310-cp310-macosx_11_0_arm64.whl (1.6 MB)
    Using cached pillow-10.3.0-cp310-cp310-macosx_11_0_arm64.whl (3.4 MB)
    Using cached filelock-3.14.0-py3-none-any.whl (12 kB)
    Using cached fsspec-2024.5.0-py3-none-any.whl (316 kB)
    Using cached jinja2-3.1.4-py3-none-any.whl (133 kB)
    Using cached networkx-3.3-py3-none-any.whl (1.7 MB)
    Using cached numpy-1.26.4-cp310-cp310-macosx_11_0_arm64.whl (14.0 MB)
    Using cached requests-2.32.2-py3-none-any.whl (63 kB)
    Using cached sympy-1.12-py3-none-any.whl (5.7 MB)
    Using cached typing_extensions-4.12.0-py3-none-any.whl (37 kB)
    Using cached certifi-2024.2.2-py3-none-any.whl (163 kB)
    Using cached charset_normalizer-3.3.2-cp310-cp310-macosx_11_0_arm64.whl (120 kB)
    Using cached idna-3.7-py3-none-any.whl (66 kB)
    Using cached MarkupSafe-2.1.5-cp310-cp310-macosx_10_9_universal2.whl (18 kB)
    Using cached mpmath-1.3.0-py3-none-any.whl (536 kB)
    Using cached urllib3-2.2.1-py3-none-any.whl (121 kB)
    Installing collected packages: mpmath, urllib3, typing-extensions, sympy, pillow, numpy, networkx, MarkupSafe, idna, fsspec, filelock, charset-normalizer, certifi, requests, jinja2, torch, torchvision
    Successfully installed MarkupSafe-2.1.5 certifi-2024.2.2 charset-normalizer-3.3.2 filelock-3.14.0 fsspec-2024.5.0 idna-3.7 jinja2-3.1.4 mpmath-1.3.0 networkx-3.3 numpy-1.26.4 pillow-10.3.0 requests-2.32.2 sympy-1.12 torch-2.1.0 torchvision-0.16.0 typing-extensions-4.12.0 urllib3-2.2.1
    Installing clip
    Installing open_clip
    Cloning assets into /Users/leonard/stable-diffusion-webui/repositories/stable-diffusion-webui-assets...
    Clonage dans '/Users/leonard/stable-diffusion-webui/repositories/stable-diffusion-webui-assets'...
    remote: Enumerating objects: 20, done.
    remote: Counting objects: 100% (20/20), done.
    remote: Compressing objects: 100% (18/18), done.
    remote: Total 20 (delta 0), reused 20 (delta 0), pack-reused 0
    Receiving objects: 100% (20/20), 132.70 KiB | 3.02 MiB/s, done.
    Cloning Stable Diffusion into /Users/leonard/stable-diffusion-webui/repositories/stable-diffusion-stability-ai...
    Cloning into '/Users/leonard/stable-diffusion-webui/repositories/stable-diffusion-stability-ai'...
    remote: Enumerating objects: 580, done.
    remote: Counting objects: 100% (571/571), done.
    remote: Compressing objects: 100% (304/304), done.
    remote: Total 580 (delta 278), reused 448 (delta 249), pack-reused 9
    Receiving objects: 100% (580/580), 73.44 MiB | 2.73 MiB/s, done.
    Resolving deltas: 100% (278/278), done.
    Cloning Stable Diffusion XL into /Users/leonard/stable-diffusion-webui/repositories/generative-models...
    Cloning into '/Users/leonard/stable-diffusion-webui/repositories/generative-models'...
    remote: Enumerating objects: 941, done.
    remote: Total 941 (delta 0), reused 0 (delta 0), pack-reused 941
    Receiving objects: 100% (941/941), 43.85 MiB | 2.32 MiB/s, done.
    Resolving deltas: 100% (489/489), done.
    Cloning K-diffusion into /Users/leonard/stable-diffusion-webui/repositories/k-diffusion...
    Cloning into '/Users/leonard/stable-diffusion-webui/repositories/k-diffusion'...
    remote: Enumerating objects: 1345, done.
    remote: Counting objects: 100% (743/743), done.
    remote: Compressing objects: 100% (94/94), done.
    remote: Total 1345 (delta 697), reused 655 (delta 649), pack-reused 602
    Receiving objects: 100% (1345/1345), 236.07 KiB | 2.71 MiB/s, done.
    Resolving deltas: 100% (945/945), done.
    Cloning BLIP into /Users/leonard/stable-diffusion-webui/repositories/BLIP...
    Cloning into '/Users/leonard/stable-diffusion-webui/repositories/BLIP'...
    remote: Enumerating objects: 277, done.
    remote: Counting objects: 100% (165/165), done.
    remote: Compressing objects: 100% (30/30), done.
    remote: Total 277 (delta 137), reused 136 (delta 135), pack-reused 112
    Receiving objects: 100% (277/277), 7.03 MiB | 3.39 MiB/s, done.
    Resolving deltas: 100% (152/152), done.
    Installing requirements
    Launching Web UI with arguments: --skip-torch-cuda-test --upcast-sampling --no-half-vae --use-cpu interrogate
    no module 'xformers'. Processing without...
    no module 'xformers'. Processing without...
    No module 'xformers'. Proceeding without it.
    Warning: caught exception 'Torch not compiled with CUDA enabled', memory monitor disabled

    You are running torch 2.1.0.
    The program is tested to work with torch 2.1.2.
    To reinstall the desired version, run with commandline flag --reinstall-torch.
    Beware that this will cause a lot of large files to be downloaded, as well as
    there are reports of issues with training tab on the latest version.

    Use --skip-version-check commandline argument to disable this check.

    Calculating sha256 for /Users/leonard/stable-diffusion-webui/models/Stable-diffusion/v1-5-pruned-emaonly.ckpt: Running on local URL: http://127.0.0.1:7860

    To create a public link, set share=True in launch().
    Startup time: 155.2s (prepare environment: 87.9s, import torch: 10.5s, import gradio: 15.6s, setup paths: 20.7s, initialize shared: 0.1s, other imports: 19.5s, load scripts: 0.4s, create ui: 0.2s, gradio launch: 0.2s).
    cc6cb27103417325ff94f52b7a5d2dde45a7515b25c255d8e396c90014281516
    Loading weights [cc6cb27103] from /Users/leonard/stable-diffusion-webui/models/Stable-diffusion/v1-5-pruned-emaonly.ckpt
    Creating model from config: /Users/leonard/stable-diffusion-webui/configs/v1-inference.yaml
    /Users/leonard/stable-diffusion-webui/venv/lib/python3.10/site-packages/huggingface_hub/file_download.py:1132: FutureWarning: resume_download is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use force_download=True.
    warnings.warn(
    changing setting sd_model_checkpoint to v1-5-pruned-emaonly.ckpt: AttributeError
    Traceback (most recent call last):
    File "/Users/leonard/stable-diffusion-webui/modules/options.py", line 165, in set
    option.onchange()
    File "/Users/leonard/stable-diffusion-webui/modules/call_queue.py", line 13, in f
    res = func(*args, **kwargs)
    File "/Users/leonard/stable-diffusion-webui/modules/initialize_util.py", line 181, in
    shared.opts.onchange("sd_model_checkpoint", wrap_queued_call(lambda: sd_models.reload_model_weights()), call=False)
    File "/Users/leonard/stable-diffusion-webui/modules/sd_models.py", line 860, in reload_model_weights
    sd_model = reuse_model_from_already_loaded(sd_model, checkpoint_info, timer)
    File "/Users/leonard/stable-diffusion-webui/modules/sd_models.py", line 793, in reuse_model_from_already_loaded
    send_model_to_cpu(sd_model)
    File "/Users/leonard/stable-diffusion-webui/modules/sd_models.py", line 662, in send_model_to_cpu
    if m.lowvram:
    AttributeError: 'NoneType' object has no attribute 'lowvram'

    Applying attention optimization: InvokeAI... done.
    loading stable diffusion model: AssertionError
    Traceback (most recent call last):
    File "/opt/homebrew/Cellar/python@3.10/3.10.14/Frameworks/Python.framework/Versions/3.10/lib/python3.10/threading.py", line 973, in _bootstrap
    self._bootstrap_inner()
    File "/opt/homebrew/Cellar/python@3.10/3.10.14/Frameworks/Python.framework/Versions/3.10/lib/python3.10/threading.py", line 1016, in _bootstrap_inner
    self.run()
    File "/opt/homebrew/Cellar/python@3.10/3.10.14/Frameworks/Python.framework/Versions/3.10/lib/python3.10/threading.py", line 953, in run
    self._target(*self._args, **self._kwargs)
    File "/Users/leonard/stable-diffusion-webui/modules/initialize.py", line 149, in load_model
    shared.sd_model # noqa: B018
    File "/Users/leonard/stable-diffusion-webui/modules/shared_items.py", line 175, in sd_model
    return modules.sd_models.model_data.get_sd_model()
    File "/Users/leonard/stable-diffusion-webui/modules/sd_models.py", line 620, in get_sd_model
    load_model()
    File "/Users/leonard/stable-diffusion-webui/modules/sd_models.py", line 770, in load_model
    with devices.autocast(), torch.no_grad():
    File "/Users/leonard/stable-diffusion-webui/modules/devices.py", line 218, in autocast
    if has_xpu() or has_mps() or cuda_no_autocast():
    File "/Users/leonard/stable-diffusion-webui/modules/devices.py", line 28, in cuda_no_autocast
    device_id = get_cuda_device_id()
    File "/Users/leonard/stable-diffusion-webui/modules/devices.py", line 40, in get_cuda_device_id
    ) or torch.cuda.current_device()
    File "/Users/leonard/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/cuda/__init__.py", line 769, in current_device
    _lazy_init()
    File "/Users/leonard/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/cuda/__init__.py", line 289, in _lazy_init
    raise AssertionError("Torch not compiled with CUDA enabled")
    AssertionError: Torch not compiled with CUDA enabled

    Stable diffusion model failed to load
    Exception in thread Thread-2 (load_model):
    Traceback (most recent call last):
    File "/opt/homebrew/Cellar/python@3.10/3.10.14/Frameworks/Python.framework/Versions/3.10/lib/python3.10/threading.py", line 1016, in _bootstrap_inner
    self.run()
    File "/opt/homebrew/Cellar/python@3.10/3.10.14/Frameworks/Python.framework/Versions/3.10/lib/python3.10/threading.py", line 953, in run
    self._target(*self._args, **self._kwargs)
    File "/Users/leonard/stable-diffusion-webui/modules/initialize.py", line 154, in load_model
    devices.first_time_calculation()
    File "/Users/leonard/stable-diffusion-webui/modules/devices.py", line 267, in first_time_calculation
    linear(x)
    File "/Users/leonard/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
    File "/Users/leonard/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
    File "/Users/leonard/stable-diffusion-webui/extensions-builtin/Lora/networks.py", line 503, in network_Linear_forward
    return originals.Linear_forward(self, input)
    File "/Users/leonard/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/linear.py", line 114, in forward
    return F.linear(input, self.weight, self.bias)
    RuntimeError: "addmm_impl_cpu_" not implemented for 'Half'
    *** Error completing request
    *** Arguments: ('task(4p89spgl0hy0zha)', <gradio.routes.Request object at 0x35195d7e0>, 'a bird', '', [], 1, 1, 7, 512, 512, False, 0.7, 2, 'Latent', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', 'Use same scheduler', '', '', [], 0, 20, 'DPM++ 2M', 'Automatic', False, '', 0.8, -1, False, -1, 0, 0, 0, False, False, 'positive', 'comma', 0, False, False, 'start', '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, False, False, False, 0, False) {}
    Traceback (most recent call last):
    File "/Users/leonard/stable-diffusion-webui/modules/call_queue.py", line 57, in f
    res = list(func(*args, **kwargs))
    File "/Users/leonard/stable-diffusion-webui/modules/call_queue.py", line 36, in f
    res = func(*args, **kwargs)
    File "/Users/leonard/stable-diffusion-webui/modules/txt2img.py", line 109, in txt2img
    processed = processing.process_images(p)
    File "/Users/leonard/stable-diffusion-webui/modules/processing.py", line 832, in process_images
    sd_models.reload_model_weights()
    File "/Users/leonard/stable-diffusion-webui/modules/sd_models.py", line 860, in reload_model_weights
    sd_model = reuse_model_from_already_loaded(sd_model, checkpoint_info, timer)
    File "/Users/leonard/stable-diffusion-webui/modules/sd_models.py", line 793, in reuse_model_from_already_loaded
    send_model_to_cpu(sd_model)
    File "/Users/leonard/stable-diffusion-webui/modules/sd_models.py", line 662, in send_model_to_cpu
    if m.lowvram:
    AttributeError: 'NoneType' object has no attribute 'lowvram'

    @viking1304 I have exactly the same issue with macOS 13.4.1

    changing setting sd_model_checkpoint to v1-5-pruned-emaonly.safetensors: AttributeError
    Traceback (most recent call last):
    File "/Users/gusenits/development/stable_diff/stable-diffusion-webui/modules/options.py", line 165, in set
    option.onchange()
    File "/Users/gusenits/development/stable_diff/stable-diffusion-webui/modules/call_queue.py", line 13, in f
    res = func(*args, **kwargs)
    File "/Users/gusenits/development/stable_diff/stable-diffusion-webui/modules/initialize_util.py", line 181, in
    shared.opts.onchange("sd_model_checkpoint", wrap_queued_call(lambda: sd_models.reload_model_weights()), call=False)
    File "/Users/gusenits/development/stable_diff/stable-diffusion-webui/modules/sd_models.py", line 860, in reload_model_weights
    sd_model = reuse_model_from_already_loaded(sd_model, checkpoint_info, timer)
    File "/Users/gusenits/development/stable_diff/stable-diffusion-webui/modules/sd_models.py", line 793, in reuse_model_from_already_loaded
    send_model_to_cpu(sd_model)
    File "/Users/gusenits/development/stable_diff/stable-diffusion-webui/modules/sd_models.py", line 662, in send_model_to_cpu
    if m.lowvram:
    AttributeError: 'NoneType' object has no attribute 'lowvram'

    Please find this part in the console and show me what you got:

    ################################################################
    Launching launch.py...
    ################################################################
    Python 3.10.13 (main, Nov  1 2023, 17:29:04) [Clang 14.0.3 (clang-1403.0.22.14.1)]
    Version: v1.7.0
    Commit hash: cf2772fab0af5573da775e7437e6acdca424f26e
    Launching Web UI with arguments: --skip-torch-cuda-test --upcast-sampling --no-half-vae --use-cpu interrogate
    

    I have the same issue: "AttributeError: 'NoneType' object has no attribute 'lowvram'"

    @viking1304

    It's the Python version. You need 3.10. Because Python is worse at "DLL hell" than Windows ever was, apparently, and no one follows semver.

    @menta-alt What is your specific model and macOS version? Are you on an Intel Mac with macOS Monterey? Recently, I found an old iMac at work and did some tests, and I noticed this error:

    stable-diffusion-webui/modules/mac_specific.py:83: UserWarning: cumsum_out_mps supported by MPS on MacOS 13+, please upgrade (Triggered internally at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/native/mps/operations/UnaryOps.mm:406.)
    

    So, it looks like Macs that cannot be updated to macOS 13+ can only work in CPU-only mode. 😱

    @richardtallent it should be better after the next A1111 release. Since my PR #15851 has been merged, it will be easier to update torch for ARM Macs without breaking things for Intel Macs (torch builds for Intel are deprecated). I also posted PR #16059, which will use 2.3.1 on ARM Macs, after testing that version.

    My latest PR #16092 will try to prioritize Python 3.10 (or any version specified in webui-user.sh). If that fails, it will fall back to python3. This ensures that A1111 will first try 3.10 when the user has both 3.10 and 3.12 installed.
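For reference, pinning the interpreter from webui-user.sh can be sketched like this (`python_cmd` is the launcher variable the comment above refers to; the Homebrew path is an assumption for an Apple Silicon install):

```shell
# webui-user.sh -- pin the interpreter the launcher should use.
# python_cmd is read by webui.sh before it creates the venv;
# /opt/homebrew/bin/python3.10 assumes Homebrew on Apple Silicon.
python_cmd="/opt/homebrew/bin/python3.10"
```

If `python_cmd` is left unset, the launcher falls back to whatever `python3` resolves to on the PATH.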

    A1111 should work fine with 3.11 (I haven't noticed any issues), but 3.12 support has to wait, since torch itself is not fully functional on 3.12, as we discussed here

    I'm stuck at step 1, where it says "Keep the terminal window open and follow the instructions under "Next steps" to add Homebrew to your PATH."

    There were no instructions under "Next steps". I've installed it on an Apple Silicon Mac.

    Thanks, I didn't get the instructions. I don't know why, because I haven't installed Homebrew on this machine before.

    Anyway, I've managed to get to the point where I've installed everything and am running the webUI, but when I try a prompt I get this error:

    URLError: <urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:997)>
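Not from the thread, but a quick standard-library check can show which CA bundle the failing interpreter is actually using, which is usually the root cause of this error:

```python
import ssl

# Print where this interpreter expects its root certificates; a missing
# or stale bundle here is the usual cause of CERTIFICATE_VERIFY_FAILED.
paths = ssl.get_default_verify_paths()
print(paths.cafile or paths.capath or "no default CA bundle configured")
```

Run it with the same interpreter the venv uses; if it reports no bundle, reinstalling Python (and recreating the venv, as suggested below) typically fixes it.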

    Run this command and post the output:

    system_profiler SPHardwareDataType | head -n 12 && echo "" && system_profiler SPDisplaysDataType && sw_vers
    

    Double-check if your serial is displayed in the output and REMOVE IT before posting!

    I am not 100% sure if other macOS versions have the same display format as the last one, but the serial should not be in the first 12 lines of SPHardwareDataType output.

    Also, post the result of this command:

    echo -e "\nInstalled python versions:";for v in {10..12}; do if command -v python3."${v}" &> /dev/null;then python_version="$(python3."${v}" --version)";python_path="$(which python3."${v}")";echo -e "  \033[34m${python_version#*[[:space:]]}\033[0m: ${python_path}";fi;done;if command -v python3 &> /dev/null;then p3="$(python3 --version)";echo -e "\nDefault Python version: \033[34m${p3#*[[:space:]]}\033[0m";fi
          Model Number: Z176000C4B/A
          Chip: Apple M2 Max
          Total Number of Cores: 12 (8 performance and 4 efficiency)
          Memory: 96 GB
          System Firmware Version: 10151.121.1
          OS Loader Version: 10151.121.1
    Graphics/Displays:
        Apple M2 Max:
          Chipset Model: Apple M2 Max
          Type: GPU
          Bus: Built-In
          Total Number of Cores: 38
          Vendor: Apple (0x106b)
          Metal Support: Metal 3
          Displays:
            Color LCD:
              Display Type: Built-in Liquid Retina XDR Display
              Resolution: 3456 x 2234 Retina
              Main Display: Yes
              Mirror: Off
              Online: Yes
              Automatically Adjust Brightness: Yes
              Connection Type: Internal
    ProductName:		macOS
    ProductVersion:		14.5
    BuildVersion:		23F79
    
    Installed python versions:
    3.10.6: /usr/local/bin/python3.10
    3.12.3: /opt/homebrew/bin/python3.12
    Default Python version: 3.10.6
    
    ❯ which brew
    

    If it shows /usr/local/bin/brew and not /opt/homebrew/bin/brew, you probably ran the terminal under Rosetta 2 before installing brew, which is completely wrong. In that case, you need to remove the Intel version of brew first.

    I just realized that /usr/local/bin/python3.10 is the location where brew for Intel installs python 3.10, but it is possible that you used some

    Try to uninstall the Python 3.10.6 located at /usr/local/bin/python3.10 first, then install 3.10.14 using brew install python@3.10

    Remove the venv folder located in the stable-diffusion-webui folder and run ./webui.sh. That will create a new venv for you and hopefully solve the problem with the certs.

    Or, you can just uninstall Python 3.10.6 and run my script, which will do all the rest for you automatically.
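The steps above amount to something like the following command sketch. It is destructive (it deletes the existing venv), and the paths assume the default install location discussed in this thread, so double-check them before running:

```shell
# Replace the stray Intel python with the Homebrew one, then rebuild the venv.
brew install python@3.10        # install 3.10.x from Homebrew (ARM prefix)
cd ~/stable-diffusion-webui
rm -rf venv                     # force webui.sh to recreate the venv
./webui.sh                      # recreates the venv with the new interpreter
```

Recreating the venv also reinstalls all pip packages against the new interpreter, which is what resolves the certificate problem here.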

    Hi everyone, the web UI consistently fails to start on a new install. I did everything in the tutorial, twice.
    Here is the log:

    Installing requirements
    Launching Web UI with arguments: --skip-torch-cuda-test --upcast-sampling --no-half-vae --use-cpu interrogate
    no module 'xformers'. Processing without...
    no module 'xformers'. Processing without...
    Traceback (most recent call last):
    File "/Users/borisvian/stablediffusion/stable-diffusion-webui/launch.py", line 48, in
    main()
    File "/Users/borisvian/stablediffusion/stable-diffusion-webui/launch.py", line 44, in main
    start()
    File "/Users/borisvian/stablediffusion/stable-diffusion-webui/modules/launch_utils.py", line 465, in start
    import webui
    File "/Users/borisvian/stablediffusion/stable-diffusion-webui/webui.py", line 13, in
    initialize.imports()
    File "/Users/borisvian/stablediffusion/stable-diffusion-webui/modules/initialize.py", line 26, in imports
    from modules import paths, timer, import_hook, errors # noqa: F401
    File "/Users/borisvian/stablediffusion/stable-diffusion-webui/modules/paths.py", line 60, in
    import sgm # noqa: F401
    File "/Users/borisvian/stablediffusion/stable-diffusion-webui/repositories/generative-models/sgm/__init__.py", line 1, in
    from .models import AutoencodingEngine, DiffusionEngine
    File "/Users/borisvian/stablediffusion/stable-diffusion-webui/repositories/generative-models/sgm/models/__init__.py", line 1, in
    from .autoencoder import AutoencodingEngine
    File "/Users/borisvian/stablediffusion/stable-diffusion-webui/repositories/generative-models/sgm/models/autoencoder.py", line 12, in
    from ..modules.diffusionmodules.model import Decoder, Encoder
    File "/Users/borisvian/stablediffusion/stable-diffusion-webui/repositories/generative-models/sgm/modules/__init__.py", line 1, in
    from .encoders.modules import GeneralConditioner
    File "/Users/borisvian/stablediffusion/stable-diffusion-webui/repositories/generative-models/sgm/modules/encoders/modules.py", line 22, in
    from ...modules.diffusionmodules.model import Encoder
    File "/Users/borisvian/stablediffusion/stable-diffusion-webui/repositories/generative-models/sgm/modules/diffusionmodules/__init__.py", line 6, in
    from .sampling import BaseDiffusionSampler
    File "/Users/borisvian/stablediffusion/stable-diffusion-webui/repositories/generative-models/sgm/modules/diffusionmodules/sampling.py", line 12, in
    from ...modules.diffusionmodules.sampling_utils import (
    File "/Users/borisvian/stablediffusion/stable-diffusion-webui/repositories/generative-models/sgm/modules/diffusionmodules/sampling_utils.py", line 2, in
    from scipy import integrate
    File "/Users/borisvian/stablediffusion/stable-diffusion-webui/venv/lib/python3.10/site-packages/scipy/__init__.py", line 134, in __getattr__
    return _importlib.import_module(f'scipy.{name}')
    File "/opt/homebrew/Cellar/python@3.10/3.10.14/Frameworks/Python.framework/Versions/3.10/lib/python3.10/importlib/__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
    File "/Users/borisvian/stablediffusion/stable-diffusion-webui/venv/lib/python3.10/site-packages/scipy/integrate/__init__.py", line 94, in
    from ._quadrature import *
    File "/Users/borisvian/stablediffusion/stable-diffusion-webui/venv/lib/python3.10/site-packages/scipy/integrate/_quadrature.py", line 9, in
    from scipy.special import roots_legendre
    File "/Users/borisvian/stablediffusion/stable-diffusion-webui/venv/lib/python3.10/site-packages/scipy/special/__init__.py", line 777, in
    from . import _ufuncs
    ImportError: dlopen(/Users/borisvian/stablediffusion/stable-diffusion-webui/venv/lib/python3.10/site-packages/scipy/special/_ufuncs.cpython-310-darwin.so, 2): Symbol not found: _cephes_Gamma
    Referenced from: /Users/borisvian/stablediffusion/stable-diffusion-webui/venv/lib/python3.10/site-packages/scipy/special/_ufuncs.cpython-310-darwin.so
    Expected in: flat namespace
    in /Users/borisvian/stablediffusion/stable-diffusion-webui/venv/lib/python3.10/site-packages/scipy/special/_ufuncs.cpython-310-darwin.so
            

    Sorry about that. I thought you asked me to double-check that it was in there to make sure I sent it. What can be done with my Mac's serial?

    Hardware:
        Hardware Overview:
          Model Name: MacBook Air
          Model Identifier: MacBookAir10,1
          Chip: Apple M1
          Total Number of Cores: 8 (4 performance and 4 efficiency)
          Memory: 8 GB
          System Firmware Version: 6723.140.2
          OS Loader Version: 6723.140.2
    Graphics/Displays:
        Apple M1:
          Chipset Model: Apple M1
          Type: GPU
          Bus: Built-In
          Total Number of Cores: 7
          Vendor: Apple (0x106b)
          Metal Family: Supported, Metal GPUFamily Apple 7
          Displays:
            Color LCD:
              Resolution: 2880 x 1800
              UI Looks like: 1440 x 900 @ 60.00Hz
              Main Display: Yes
              Mirror: Off
              Online: Yes
              Automatically Adjust Brightness: No
              Connection Type: Internal
    
    Installed python versions:
      3.10.14: /opt/homebrew/bin/python3.10
      3.12.3: /opt/homebrew/bin/python3.12
    Default Python version: 3.12.3
            

    I am very sorry I was not clear enough. I updated my original post. It is usually not a big deal. Theoretically, Apple can block iMessage on your Mac if someone uses your serial with a Hackintosh, but the chances of that are almost zero. I would avoid posting that info publicly anyway, and I do not want people to think I intentionally made them send their serials.

    You have brew and python3.10 properly installed, so this error is strange.

    I do not see your macOS version (even though it should be displayed in the output of the command I sent), so please post this too:

    sw_vers
    

    You must update your macOS if it is lower than 12.3.

    If your macOS is newer, you can try my step-by-step guide or simply run my script, which will do it all for you automatically.

    Hi, I am trying to install this repository on my MacBook Air M2. I cloned it, downloaded the model from here (https://huggingface.co/stabilityai/stable-diffusion-3-medium), and copied the model into the Stable-diffusion folder for checkpoints. While running ./webui.sh I always get this error.

    Python 3.12.3 (v3.12.3:f6650f9ad7, Apr  9 2024, 08:18:47) [Clang 13.0.0 (clang-1300.0.29.30)]
    Version: v1.9.4
    Commit hash: feee37d75f1b168768014e4634dcb156ee649c05
    Installing torch and torchvision
    ERROR: Could not find a version that satisfies the requirement torch==2.1.0 (from versions: 2.2.0, 2.2.1, 2.2.2, 2.3.0, 2.3.1)
    ERROR: No matching distribution found for torch==2.1.0

    I know this repository was built for Python 3.10, but I tried to use 3.12. However, it shows errors while installing PyTorch, and I am not sure where to change this PyTorch configuration. Kindly help me :)
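For reference, the pip command the launcher uses for torch can be overridden from webui-user.sh via the `TORCH_COMMAND` variable. The version numbers below are an illustration of a torch/torchvision pair that ships ARM wheels, not a recommendation:

```shell
# webui-user.sh -- override the pip command the launcher runs for torch.
# torch 2.3.0 pairs with torchvision 0.18.0; shown only as an example of
# versions with ARM wheels -- use whatever the maintainers recommend.
export TORCH_COMMAND="pip install torch==2.3.0 torchvision==0.18.0"
```

This only changes which torch the launcher installs; as noted in the replies, the recommended fix is still to run the web UI under Python 3.10.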

    Please use 3.10, since that is the recommended Python version for A1111 on Mac. My PR has been merged, so it will be included in the next release. It will install torch 2.3.0 on ARM Macs. SD3 support is still in testing, but I have no information on whether it will be part of the next release or not.

    I just checked the changelog for 2.3.1, so I will try to submit a new PR soon.

    Fix data corruption when copying large (>4GiB) tensors (pytorch/pytorch#124635)
    Fix Tensor.abs() for complex (pytorch/pytorch#125662)

    I need to test 2.3.1 properly before I submit that PR.

    I'm using macOS 14.5.
    I managed to get to the ./webui.sh step, but then the automatic dependency download becomes a disaster.
    It always seems to get stuck for a few minutes while installing gfpgan, then displays the following message.
      warning: variable does not need to be mutable
         --> tokenizers-lib/src/models/unigram/model.rs:265:21
      265 |                 let mut target_node = &mut best_path_ends_at[key_pos];
          |                     ----^^^^^^^^^^^
          |                     |
          |                     help: remove this `mut`
          = note: `#[warn(unused_mut)]` on by default

      warning: variable does not need to be mutable
         --> tokenizers-lib/src/models/unigram/model.rs:282:21
      282 |                 let mut target_node = &mut best_path_ends_at[starts_at + mblen];
          |                     ----^^^^^^^^^^^
          |                     |
          |                     help: remove this `mut`
      warning: variable does not need to be mutable
         --> tokenizers-lib/src/pre_tokenizers/byte_level.rs:200:59
      200 |     encoding.process_tokens_with_offsets_mut(|(i, (token, mut offsets))| {
          |                                                           ----^^^^^^^
          |                                                           |
          |                                                           help: remove this `mut`
      error: casting `&T` to `&mut T` is undefined behavior, even if the reference is unused, consider instead using an `UnsafeCell`
         --> tokenizers-lib/src/models/bpe/trainer.rs:526:47
      522 |                     let w = &words[*i] as *const _ as *mut _;
          |                             -------------------------------- casting happend here
      526 |                         let word: &mut Word = &mut (*w);
          |                                               ^^^^^^^^^
          = note: for more information, visit <https://doc.rust-lang.org/book/ch15-05-interior-mutability.html>
          = note: `#[deny(invalid_reference_casting)]` on by default
      warning: `tokenizers` (lib) generated 3 warnings
      error: could not compile `tokenizers` (lib) due to 1 previous error; 3 warnings emitted
      Caused by:
        process didn't exit successfully: `/Users/bayhe/.rustup/toolchains/stable-x86_64-apple-darwin/bin/rustc --crate-name tokenizers --edition=2018 tokenizers-lib/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts,future-incompat --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -C embed-bitcode=no --cfg 'feature="cached-path"' --cfg 'feature="clap"' --cfg 'feature="cli"' --cfg 'feature="default"' --cfg 'feature="dirs"' --cfg 'feature="esaxx_fast"' --cfg 'feature="http"' --cfg 'feature="indicatif"' --cfg 'feature="onig"' --cfg 'feature="progressbar"' --cfg 'feature="reqwest"' -C metadata=3b56116cb7d28e4e -C extra-filename=-3b56116cb7d28e4e --out-dir /private/var/folders/8_/hzpg5ddj4b73p4qkq03xv08w0000gn/T/pip-install-lm8lky09/tokenizers_78bb9c12755f4e21876e42a11d1161bd/target/release/deps -C strip=debuginfo -L dependency=/private/var/folders/8_/hzpg5ddj4b73p4qkq03xv08w0000gn/T/pip-install-lm8lky09/tokenizers_78bb9c12755f4e21876e42a11d1161bd/target/release/deps --extern aho_corasick=/private/var/folders/8_/hzpg5ddj4b73p4qkq03xv08w0000gn/T/pip-install-lm8lky09/tokenizers_78bb9c12755f4e21876e42a11d1161bd/target/release/deps/libaho_corasick-f2121eaad07e1da9.rmeta --extern cached_path=/private/var/folders/8_/hzpg5ddj4b73p4qkq03xv08w0000gn/T/pip-install-lm8lky09/tokenizers_78bb9c12755f4e21876e42a11d1161bd/target/release/deps/libcached_path-686ebc4aed57628f.rmeta --extern clap=/private/var/folders/8_/hzpg5ddj4b73p4qkq03xv08w0000gn/T/pip-install-lm8lky09/tokenizers_78bb9c12755f4e21876e42a11d1161bd/target/release/deps/libclap-0482e0e2ec940f6a.rmeta --extern derive_builder=/private/var/folders/8_/hzpg5ddj4b73p4qkq03xv08w0000gn/T/pip-install-lm8lky09/tokenizers_78bb9c12755f4e21876e42a11d1161bd/target/release/deps/libderive_builder-9c483c90207feb85.rmeta --extern dirs=/private/var/folders/8_/hzpg5ddj4b73p4qkq03xv08w0000gn/T/pip-install-lm8lky09/tokenizers_78bb9c12755f4e21876e42a11d1161bd/target/release/deps/libdirs-eb720cc9f373a459.rmeta 
      [… dozens of --extern and -L native= flag arguments omitted …]` (exit status: 1)
      error: `cargo rustc --lib --message-format=json-render-diagnostics --manifest-path Cargo.toml --release -v --features pyo3/extension-module --crate-type cdylib -- -C 'link-args=-undefined dynamic_lookup -Wl,-install_name,@rpath/tokenizers.cpython-312-darwin.so'` failed with code 101
      [end of output]
    

    note: This error originates from a subprocess, and is likely not a problem with pip.
    ERROR: Failed building wheel for tokenizers
    ERROR: ERROR: Failed to build installable wheels for some pyproject.toml based projects (Pillow, tokenizers)

    When I tried to use the command ./webui.sh, the following error message appeared:

    ERROR: Could not install packages due to an OSError: HTTPSConnectionPool(host='files.pythonhosted.org', port=443): Max retries exceeded with url: /packages/ab/6a/0debe1ec3c63b1fd7487ec7dd8fb1adf19898bef5a8dc151265d79ffd915/torch-2.1.0-cp310-none-macosx_11_0_arm64.whl (Caused by ProxyError('Cannot connect to proxy.', RemoteDisconnected('Remote end closed connection without response')))

    Command: "/Users/huaiwanju/workfile/stable-diffusion-webui/venv/bin/python3.10" -m pip install torch==2.1.0 torchvision==0.16.0

    Here is a part of the error message.

    I tried using a mirror, upgrading pip, and changing the DNS, but all of these attempts failed.

    I don't think this is caused by a network issue.

    Has anyone encountered such an issue?

    Based on the error, you cannot reach https://files.pythonhosted.org/
    When pip installs packages, it needs access to that host to download the package files.
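Also, the traceback mentions `ProxyError`, so before assuming the host itself is blocked, it is worth checking whether a stale proxy is configured in your shell environment. A minimal diagnostic sketch (the variable names below cover the common lower- and upper-case conventions):

```shell
# List any proxy variables currently set in this shell session.
env | grep -i proxy || echo "no proxy variables set"

# Temporarily clear them for this session, then retry the install.
unset http_proxy https_proxy all_proxy HTTP_PROXY HTTPS_PROXY ALL_PROXY
```

If the install succeeds after clearing these, the problem was a dead or misconfigured proxy rather than the site itself.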

    I heard that people from some regions cannot access it directly and that they need to use a proxy or VPN. Can you open that link in your browser?
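    If the host really is unreachable from your region, one workaround is to route pip through a mirror index. You mentioned already trying a mirror; writing it into pip's user config file makes sure the setting actually takes effect for every invocation, including the one `webui.sh` runs inside the venv. A hedged sketch (the mirror URL is only an example; substitute any index reachable from your network):

```shell
# Write a user-level pip config that routes all installs through a mirror.
# ~/.config/pip/pip.conf is honored on macOS as well as Linux.
mkdir -p ~/.config/pip
cat > ~/.config/pip/pip.conf <<'EOF'
[global]
index-url = https://pypi.tuna.tsinghua.edu.cn/simple
EOF

# Then re-run the failing command, e.g.:
# ./venv/bin/python3.10 -m pip install torch==2.1.0 torchvision==0.16.0
```

    A one-off alternative is passing `--index-url` (or `-i`) directly to the `pip install` command instead of editing the config file.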