CLIPTextEncode (NSP): Parse noodle soups from the NSP pantry, or parse wildcards from a directory containing A1111-style wildcards. Wildcards use the style _filename_, which also includes subdirectories like _appearance/haircolour_ (if your noodle_key is set to _). You can set a custom wildcards path in the was_suite_config.json file with the key "wildcards_path", for example: "wildcards_path": "E:\\python\\automatic\\webui3\\stable-diffusion-webui\\extensions\\sd-dynamic-prompts\\wildcards". If no path is set, the wildcards dir is located at the root of WAS Node Suite as /wildcards.
CLIP Input Switch: Switch between two CLIP inputs based on a boolean switch.
CLIP Vision Input Switch: Switch between two CLIP Vision inputs based on a boolean switch.
Conditioning Input Switch: Switch between two conditioning inputs.
Control Net Model Input Switch: Switch between two Control Net Model inputs based on a boolean switch.
Create Grid Image: Create an image grid from images at a destination with a customizable glob pattern.
Create Grid Image from Batch: Create a grid image from a batch tensor of images.
Create Morph Image: Create a GIF/APNG animation from two images, fading between them.
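The crossfade behind a node like Create Morph Image can be sketched in a few lines of Pillow. This is an illustrative sketch, not the node's actual code; the function names `morph_frames` and `save_morph_gif` are my own:

```python
from PIL import Image

def morph_frames(img_a, img_b, steps=12):
    """Crossfade: a list of frames blending img_a into img_b."""
    img_a = img_a.convert("RGB")
    img_b = img_b.convert("RGB").resize(img_a.size)
    # alpha runs 0.0 -> 1.0 across the frame sequence
    return [Image.blend(img_a, img_b, i / (steps - 1)) for i in range(steps)]

def save_morph_gif(img_a, img_b, fp, steps=12, duration=80):
    """Write the crossfade as a looping animated GIF to a path or file object."""
    frames = morph_frames(img_a, img_b, steps)
    frames[0].save(fp, format="GIF", save_all=True,
                   append_images=frames[1:], duration=duration, loop=0)
```

The real node also supports APNG output and extra options (frame counts, loop settings); the blend-per-frame idea is the same.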
A node suite for ComfyUI with many new nodes, such as image processing, text processing, and more.
WAS-NS is not under active development. I do not have the time and have other obligations. Feel free to fork and continue the project; I will approve appropriate and beneficial PRs. Consider donating to the project to help its continued development.
Workflows are preferably shared as embedded PNGs, but JSON is OK too. You can use this tool to add a workflow to a PNG file easily.
BLIP is now a shipped module of WAS-NS and no longer requires the BLIP repo.
The new preferred method of text node output is STRING. This is a change from ASCII so that it is more clear what data is being passed. The was_suite_config.json will automatically set use_legacy_ascii_text to false.
Video Nodes: There are two new video nodes, Write to Video and Create Video from Path. These are experimental nodes.
Current Nodes:
BLIP Model Loader: Load a BLIP model to input into the BLIP Analyze node.
BLIP Analyze Image: Get a text caption from an image, or interrogate the image with a question. The model will download automatically from the default URL, but you can point the download to another location/caption model in was_suite_config. Models will be stored in ComfyUI/models/blip/checkpoints/.
SAM Model Loader: Load a SAM Segmentation model.
SAM Parameters: Define your SAM parameters for segmentation of an image.
SAM Parameters Combine: Combine SAM parameters.
Inset Image Bounds: Inset an image bounds.
Bounded Image Blend: Blend a bounds image.
Bounded Image Blend with Mask: Blend a bounds image by mask.
Bounded Image Crop: Crop a bounds image.
Bounded Image Crop with Mask: Crop a bounds image by mask.
Bus Node: Condense the 5 common connectors into one, keep your workspace tidy (Model, Clip, VAE, Positive Conditioning, Negative Conditioning).
Cache Node: Cache Latent, Tensor Batches (Image), and Conditioning to disk to use later.
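The idea behind Bounded Image Crop with Mask can be sketched with NumPy, assuming images as H×W(×C) arrays and masks as H×W arrays. This is a sketch of the technique, not the suite's implementation; the helper names and the `padding` parameter are my own:

```python
import numpy as np

def mask_bounds(mask):
    """Bounding box (top, left, bottom, right) of the nonzero mask region."""
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        raise ValueError("mask is empty")
    return ys.min(), xs.min(), ys.max() + 1, xs.max() + 1

def bounded_image_crop_with_mask(image, mask, padding=0):
    """Crop an image to the mask's bounding box, optionally padded, clamped to the image."""
    t, l, b, r = mask_bounds(mask)
    h, w = mask.shape
    t, l = max(t - padding, 0), max(l - padding, 0)
    b, r = min(b + padding, h), min(r + padding, w)
    return image[t:b, l:r]
```

The companion Bounded Image Blend with Mask node works on the same bounds, compositing the cropped region back instead of extracting it.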