Pythonic generation of Stable Diffusion images. "Just works" on Linux and macOS (M1) (and maybe Windows?).

```bash
# on macOS, make sure rust is installed first
# be sure to use Python 3.10; Python 3.11 is not supported at the moment
> imagine "a scenic landscape" "a photo of a dog" "photo of a fruit bowl" "portrait photo of a freckled woman" "a bluejay"
# Make an animation showing the generation process
```

## Run API server and StableStudio web interface (alpha)

Generate images via API or web interface. Much smaller featureset compared to the command-line tool.

## Image Structure Control (by ControlNet)

Generate images guided by body poses, depth maps, canny edges, HED boundaries, or normal maps.

### Canny Edge Control

`imagine --control-image assets/lena.png --control-mode canny "photo of a woman with a hat looking at the camera"`

### Openpose Control

`imagine --control-image assets/indiana.jpg --control-mode openpose --caption-text openpose "photo of a polar bear"`

### Depth Map Control

`imagine --control-image fancy-living.jpg --control-mode depth "a modern living room"`

### HED Boundary Control

`imagine --control-image dog.jpg --control-mode hed "photo of a dalmation"`

### Normal Map Control

`imagine --control-image bird.jpg --control-mode normal "a bird"`

### Shuffle Control

Generates the image based on elements of the control image. Similar to InstructPix2Pix (below), but works with any SD 1.5-based model.

`imagine --control-image pearl-girl.jpg --control-mode shuffle "a clown"`

The middle image is the "shuffled" input image.
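The CLI invocations above can also be assembled programmatically, which is handy for batch jobs. A minimal sketch, assuming only the `--control-image` and `--control-mode` flags shown in the examples; the `build_imagine_cmd` helper is hypothetical, not part of any library:

```python
import shlex

def build_imagine_cmd(prompt, control_image=None, control_mode=None):
    """Build an `imagine` CLI invocation as an argument list.

    Hypothetical helper: flag names mirror the examples above, and the
    `imagine` tool itself must be installed for the command to run.
    """
    cmd = ["imagine"]
    if control_image:
        cmd += ["--control-image", control_image]
    if control_mode:
        cmd += ["--control-mode", control_mode]
    cmd.append(prompt)  # positional prompt goes last, as in the examples
    return cmd

print(shlex.join(build_imagine_cmd(
    "a modern living room",
    control_image="fancy-living.jpg",
    control_mode="depth",
)))
# → imagine --control-image fancy-living.jpg --control-mode depth 'a modern living room'
```

The resulting list can be passed directly to `subprocess.run(cmd)`, which avoids shell-quoting pitfalls with prompts that contain spaces or quotes.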