With the landscape quickly changing, this article is fast becoming outdated!
If you face issues with the tutorial below, I recommend you check out the latest advice here.
Stable Diffusion is a latent text-to-image diffusion model that was recently made open source.
For Linux users with dedicated NVIDIA GPUs, the setup and usage instructions are relatively straightforward. However, macOS users can't use the project "out of the box". Not to worry! With a few extra steps you can get it working nevertheless!
Environment Setup
To begin you need Python, Conda, and a few other libraries set up:
# Install Python, Cmake, Git, and Protobuf
brew install python \
cmake \
git \
protobuf
# Install Rust
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
# Install Conda:
## Either use this for older "pre-M1" Macs:
wget https://repo.anaconda.com/miniconda/Miniconda3-latest-MacOSX-x86_64.sh
bash Miniconda3-latest-MacOSX-x86_64.sh
## Or use this for newer Apple Silicon (M1) Macs:
wget https://repo.anaconda.com/miniconda/Miniconda3-latest-MacOSX-arm64.sh
bash Miniconda3-latest-MacOSX-arm64.sh
You may need to restart your terminal at this point for the new libraries to be picked up.
Clone this fork of the project and check out the Apple Silicon patch branch:
git clone https://github.com/magnusviri/stable-diffusion
cd stable-diffusion
git checkout apple-silicon-mps-support
At this point you will need to make sure you're using Python 3; check out this article for different ways to make Python 3 the default version on your Mac.
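A quick way to confirm which interpreter is active (just a sanity-check sketch, assuming the python on your PATH is the one Conda will use):
# Confirm the active interpreter is Python 3
import sys
print(sys.version)
assert sys.version_info.major == 3, "Python 3 is required for this tutorial"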
Set up the Conda environment:
conda env create -f environment-mac.yaml
conda activate ldm
And finally set the following environment variable:
export PYTORCH_ENABLE_MPS_FALLBACK=1
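This variable lets PyTorch fall back to the CPU for any operation that isn't implemented on Apple's MPS backend yet. If you want to check what PyTorch can actually see on your machine, a small sketch like this should work on recent builds (torch.backends.mps only exists from PyTorch 1.12 onwards):
# Report which accelerators this PyTorch build can use
import torch
if torch.backends.mps.is_available():
    print("MPS (Apple GPU) is available")
elif torch.cuda.is_available():
    print("CUDA is available")
else:
    print("CPU only")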
Code Changes
Our environment is now set up, but we need a few code tweaks to allow the code to gracefully fall back to the CPU (if required!).
Append .contiguous() at ldm/models/diffusion/plms.py#L27, resulting in:
- attr = attr.to(torch.float32).to(torch.device(self.device_available))
+ attr = attr.to(torch.float32).to(torch.device(self.device_available)).contiguous()
Similarly, append a new line x = x.contiguous() after ldm/modules/attention.py#L211 so it looks something like:
def _forward(self, x, context=None):
+ x = x.contiguous()
x = self.attn1(self.norm1(x)) + x
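For context, .contiguous() copies a tensor into a single contiguous block of memory, which some of the CPU/MPS kernels used by the fallback expect. A tiny illustration (not part of the patch itself):
# .contiguous() returns a tensor laid out in one contiguous memory block
import torch
t = torch.arange(6).reshape(2, 3).t()  # transposing makes the tensor non-contiguous
print(t.is_contiguous())               # False
print(t.contiguous().is_contiguous())  # True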
Download Stable Diffusion Weights
Let's download the Stable Diffusion weights:
curl "https://www.googleapis.com/storage/v1/b/aai-blog-files/o/sd-v1-4.ckpt?alt=media" > sd-v1-4.ckpt
Create Images 🚀
You should now be ready to generate images on your macOS device using Stable Diffusion! 🎉 🎉
python scripts/txt2img.py --prompt "a drawing of web developers" --plms --ckpt sd-v1-4.ckpt --skip_grid --n_samples 1
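By default the CompVis script saves results under outputs/txt2img-samples (individual images go in a samples subfolder). A small helper to open the most recent one, assuming that default output directory:
# Open the most recently generated sample image
from pathlib import Path
from PIL import Image

samples = sorted(Path("outputs/txt2img-samples/samples").glob("*.png"),
                 key=lambda p: p.stat().st_mtime)
if samples:
    Image.open(samples[-1]).show()
else:
    print("No samples found yet")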
Tricks and hacks gleaned from https://github.com/CompVis/stable-diffusion/issues/25 - credit to all the folks in that thread for figuring out how to get things working!
Top comments (41)
Hey all, I'm magnusviri. The instructions here need to be modified like this. Clone lstein's repo instead:
git clone https://github.com/lstein/stable-diffusion
Don't check out any branch. After
conda activate ldm
run this:
python scripts/preload_models.py
The latest instructions are here. I haven't updated them with python scripts/preload_models.py yet. Anyone can get a GitHub account, fork lstein's repo, update the readme with the latest info, and do a pull request to get it updated for everyone. This is the beauty of open source.
This stuff is moving extremely fast and is extremely complex. It's moving so fast that since it moved from magnusviri/stable-diffusion to lstein/stable-diffusion I haven't even had time to double check that the readme is accurate or update it with the latest list of errors plaguing people. While people are trying to make it as easy as possible, this is nowhere near ready for the masses and non-power users.
Nice one @magnusviri! I've put a notice at the top as my personal notes from having a play are fast becoming unfit for the enthusiasm of the community!
Thanks for the detailed instructions! Unfortunately, I'm getting stuck at environment creation where pip is failing with the message below. Any thoughts on how to get over this hurdle? Thanks again!
(Note: It appears to be an issue with onnx. I tried a
pip install onnx
and it went through every version unsuccessfully.)
I'm having the same issue, any clue?
I'll give my own instructions another go and see if I can reproduce…
What version of python/pip are you using? IIRC you need to use python 3
Update: you need to install cmake; I'm adding some env setup steps for this now:
brew install cmake
I'll check again after installing cmake.
Question: why are you using /Miniconda3-latest-MacOSX-x86_64.sh instead of /Miniconda3-latest-MacOSX-arm64.sh ?
Miniconda3-latest-MacOSX-x86_64.sh is for "pre-M1" Macs, e.g. I use a MacBook Air 2017 which has an Intel chip. Miniconda3-latest-MacOSX-arm64.sh is for Apple M1 😄
I was confused because you use the apple-silicon-mps-support branch, so I thought you had an M1 CPU. Anyway, now it's working for me, thanks!
Thanks for the detailed instructions!
After setting up the environment, I ran the last step:
python scripts/orig_scripts/txt2img.py --prompt "An Armenia girl with curly hair goes to Senior school with her mum in Shanghai, and she carries a dark red shoulder bag" --plms --ckpt sd-v1-4.ckpt --skip_grid --n_samples 1
the error:
Fixed by removing local_files_only=True.
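(For context: that flag stops the Hugging Face transformers library from downloading the CLIP tokenizer/text encoder. If you'd rather keep the flag, an alternative is to pre-populate the cache once while online; this sketch assumes the fork uses the standard openai/clip-vit-large-patch14 text encoder.)
# Pre-download the CLIP tokenizer/encoder so local_files_only=True can find them in the cache
from transformers import CLIPTokenizer, CLIPTextModel
CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")
CLIPTextModel.from_pretrained("openai/clip-vit-large-patch14")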
I had the same problem and your method solved it, but the next error crept up:
Traceback (most recent call last):
File "/Users/SWB/stable-diffusion/scripts/orig_scripts/txt2img.py", line 327, in <module>
main()
File "/Users/SWB/stable-diffusion/scripts/orig_scripts/txt2img.py", line 277, in main
samples_ddim, _ = sampler.sample(S=opt.ddim_steps,
File "/Users/SWB/miniconda3/envs/ldm/lib/python3.10/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "/Users/SWB/stable-diffusion/ldm/models/diffusion/plms.py", line 156, in sample
self.make_schedule(ddim_num_steps=S, ddim_eta=eta, verbose=verbose)
File "/Users/SWB/stable-diffusion/ldm/models/diffusion/plms.py", line 108, in make_schedule
(1 - self.alphas_cumprod_prev)
TypeError: unsupported operand type(s) for -: 'int' and 'builtin_function_or_method'
(ldm) SWB@SWB-iMac stable-diffusion %
For me it is hard to find the reason for this error. Please help!
Hi, thanks for the tutorial, haven't managed to get it to work yet. I got this error:
Edit:
I don't know what I did, but now the error is
Edit:
I found the file somewhere else on my Mac, put it in the correct folder, and renamed it model.ckpt, but it resulted in the following error:
Not really sure how I can proceed from here to try and make it work.
Some other issues I came across along the way:
github.com/lstein/stable-diffusion...
Thanks for the instructions, but I found some problems while trying to make this work.
Many modules seem to be missing on my macOS (M1) machine; I think I managed to install them, but now I'm stuck.
The error I get is:
Any idea how to solve it? I tried installing libpng but nothing changed.
Thanks again :)
Hi Craig,
I'm stuck at the point of installing Conda. I get the message: 'zsh: command not found: wget'. It seems wget is a Linux command, but I'm on an iMac (Monterey). Any help is much appreciated!
Sjoerd
You can install wget with brew, or switch to using curl to fetch the file instead:
curl -O URLHERE
Thanks! ATM I'm not able to try it, but also further investigation revealed that probably the path to the zsh shell is not valid. So I will check that also asap.
It turned out I indeed had to install 'wget' with brew. It does now recognise the command. So no problems with the shell path. Thanks so much!
Are other people having issues with grpcio? I have never delved into the Python world much, but I tried installing an earlier version into the conda environment and cleared the caches. It still downloads the latest version, which fails on my M1 Mac.
It shows the installed version as 1.42.0, but I still get this error.
I am getting a weird error when I try to run the Python script.
I have an Apple MacBook Pro M1 with 64GB of RAM.
Thanks for this tutorial, but I got an error on my MacBook Pro (Intel):
The only time I got a segfault was when I set KMP_DUPLICATE_LIB_OK ... I posted a couple of other solutions above (I recommend the "delete the conflicting library" one over "rebuild with nomkl", but go with whichever you feel more comfortable with ^_^).
FWIW [2022-08-29], on an Intel Mac:
1) ldm/modules/attention.py#L211 already had the patch.
2) Ran into issues with:
I wound up redoing the environment to include nomkl and it seems to have worked, if ... so ... SO ... slow [4 minutes!]. But I'll take functional over not (thanks!) ... it helps me prep for doing this on a real machine, and I can kick things off overnight. ;)
Alternately, with the info that libiomp5.dylib and libomp.dylib were likely conflicting:
that seems to have worked, ignoring the nomkl path.
The KMP_DUPLICATE_LIB_OK solution only led to segfaults.
The LD_PRELOAD approach did nothing, possibly because I was targeting the wrong library, or possibly because the names were actually different?
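For anyone hitting the same OMP error, here's a rough sketch for listing which copies of the OpenMP runtime live inside the active conda environment (the conflict is typically libiomp5.dylib alongside libomp.dylib); treat it as a starting point rather than a fix:
# List OpenMP runtime copies inside the active conda environment
import os
from pathlib import Path

env_root = Path(os.environ.get("CONDA_PREFIX", "."))
for lib in sorted(env_root.rglob("lib*omp*.dylib")):
    print(lib)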