ComfyUI workflow JSON examples (collected from Reddit)


A ComfyUI workflow can be shared in two ways: as a JSON file, or embedded in the metadata of a generated PNG. Many of the workflow examples below can be copied either visually or by downloading a shared file containing the workflow. This page walks through a simple example of using ComfyUI, introduces some concepts, and gradually moves on to more complicated workflows; a good companion resource is Think Diffusion's "Stable Diffusion ComfyUI Top 10 Cool Workflows". ComfyUI itself is a nodes/graph/flowchart interface for experimenting with image generation. (I've also edited the post to include a link to the workflow. That's awesome! ComfyUI had been one of the two repos I keep installed, the SD-UX fork of Automatic1111 and this.)

Breakdown of workflow content: we have four main sections - Masks, IPAdapters, Prompts, and Outputs - and you can turn each process on or off for each run. KSampler is where the image generation actually takes place, and it outputs a latent image. Do you want to save the image? Add a Save Image node and you'll find the outputs in the output folders, or you can right-click the preview and save it that way. The official examples include both a latent-upscale workflow and a pixel-space ESRGAN workflow, plus the Img2Img examples.

Assorted notes and questions from the thread:

- The same graph works with SD 1.5 but with 1024x1024 latent noise; I just find it weird that in the official example the nodes are not the same as when you add them yourself.
- If you have the SDXL 0.9 workflow (the one from Olivio Sarikas's video works just fine), just replace the models with 1.5 ones. I played with hi-diffusion in ComfyUI with SD 1.5 models, and it easily generated 2K images without any distortion, which is better than Kohya deep shrink.
- For Flux, put the flux1-dev.sft file in your ComfyUI/models/unet/ folder.
- I downloaded the workflow picture and dragged it into ComfyUI, but it doesn't load anything; it looks like the metadata is not complete. I tried to open SuperBeasts-POM-SmoothBatchCreative-V1 with the same result. Once a workflow does load, go into the ComfyUI Manager and click Install Missing Custom Nodes.
- Can ComfyUI-serverless be adapted to work if the ComfyUI workflow is hosted on RunPod, Kaggle, Google Colab, or some other site? Any help would be appreciated.
- While I have you: where is the best place to insert the base LoRA in your workflow?
- I created a ComfyUI workflow for portraits. (Translated from the Italian:) In the downloadable file you'll find a JSON to import into ComfyUI containing two ready-to-use workflows: one with Portrait Master, dedicated to portraits, and one for entering the positive and negative prompts manually.
- The updated IP Adapter workflow example is essentially the same example workflow that exists (along with many others) on Kosinkadink's AnimateDiff Evolved GitHub.
- I hope that having the comparison was useful nevertheless. It's not meant to overwhelm anyone with complex, cutting-edge tech, but rather to show the power of building modules/groups as blocks and merging them into a workflow through muting.
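For reference, here is roughly what the flat API-format file (produced by ComfyUI's "Save (API Format)" option, discussed further below) looks like - a minimal sketch with illustrative node ids and widget values, not a workflow from this thread:

```python
# Minimal sketch of ComfyUI's API-format workflow JSON (illustrative values).
# Each key is a node id; link values are [source_node_id, output_slot] pairs.
workflow = {
    "4": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "v1-5-pruned-emaonly.ckpt"}},
    "5": {"class_type": "EmptyLatentImage",
          "inputs": {"width": 1024, "height": 1024, "batch_size": 1}},
    "6": {"class_type": "CLIPTextEncode",
          "inputs": {"text": "a photo of a cat", "clip": ["4", 1]}},
    "7": {"class_type": "CLIPTextEncode",
          "inputs": {"text": "blurry, low quality", "clip": ["4", 1]}},
    "3": {"class_type": "KSampler",   # KSampler does the actual generation
          "inputs": {"model": ["4", 0], "positive": ["6", 0],
                     "negative": ["7", 0], "latent_image": ["5", 0],
                     "seed": 42, "steps": 20, "cfg": 7.0,
                     "sampler_name": "euler", "scheduler": "normal",
                     "denoise": 1.0}},
    "8": {"class_type": "VAEDecode",
          "inputs": {"samples": ["3", 0], "vae": ["4", 2]}},
    "9": {"class_type": "SaveImage",
          "inputs": {"images": ["8", 0], "filename_prefix": "ComfyUI"}},
}
```

The editor's own "Save" menu writes a different, richer format (node positions, links, groups); the flat form above is the one meant for scripting.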
With ComfyUI Workflow Manager, can I easily change or modify where my JSON workflows are stored and saved? Yes, this feature was recently enabled - see the manager's settings. Saving workflows makes it really easy to regenerate an image with a small tweak, or just to check how you generated something. It might seem daunting at first, but you don't actually need to fully learn how everything is connected.

Tip: add a node to your workflow quickly by double-clicking the canvas and typing part of its name - for "FaceDetailer", just type "Face".

The second workflow is called "advanced"; it uses an experimental way to combine prompts for the sampler. Even with 4 regions and a global condition, the combine nodes just merge them two at a time. Adding LoRAs in my next iteration.

For Stable Cascade, Stage C goes to \models\unet\SD Cascade. Do you have ComfyUI Manager? It's the best way to install ControlNet - when I tried doing it manually, it failed. (And thank you very much for contributing the JSON file.)

All the images in this repo contain metadata, which means they can be loaded into ComfyUI with the Load button (or dragged onto the window) to get the full workflow that was used to create each image. One user notes: "when I download it, it downloads as webp without the workflow" - some hosts convert or strip files, so prefer the original PNG or JSON. To drive ComfyUI from scripts, save your workflow using the API format, which is different from the normal JSON workflows: enable dev mode in the settings, and that will give you a Save (API Format) option on the main menu.

Pick an image that you want to inpaint. Resolutions of 512x512, 600x400, and 800x400 are the limit of what I have tested; I don't know how it will work at higher resolutions. SDXL most definitely doesn't work with the old ControlNet models, and each ControlNet/T2I adapter needs the image passed to it in a specific format - depth maps, canny maps, and so on - depending on the specific model, if you want good results.

Upscaling ComfyUI workflow: it's simple and straight to the point. Flux Schnell is a distilled 4-step model, and this is a simple workflow for Flux AI on ComfyUI. It's perfect for animating hair while keeping the rest of the face still, as you can see in the examples. There are a couple of abandoned suites that say they can do that, e.g. ComfyUI-Image-Selector: you create the workflow as you do in ComfyUI and then switch to that interface. This is an interesting implementation of the idea, with a lot of potential, but true region editing would require many specific image-manipulation nodes to cut an image region, pass it through the model, and paste it back.

Like many XL users out there, I'm also new to ComfyUI and very much a beginner. These are the kind of thing that's a bit fiddly to use, so someone else's workflow might be of limited use to you. See also "Animation using ComfyUI Workflow" by Future Thinker. It would be really nice if there was a workflow folder under Comfy as a default save/load spot. (Tried another browser - both Firefox and Chrome - with the same result.) The best workflow examples are on the GitHub examples pages.
One reader reports that ComfyUI won't load their workflow JSON (see the notes above on the load menu versus drag and drop, and on Reddit stripping metadata).

I improved on my previous expressions workflow for ComfyUI by replacing the attention-couple nodes with area-composition ones. It is much more coherent and relies heavily on the IPAdapter source image, as you can see in the gallery. If you would like to try out the workflow, check the comments - I couldn't put it in the description as my account awaits verification. It's a bit messy, but if you want to use it as a reference, it might help you. Otherwise, please change the flair to "Workflow not included". (Edit: I didn't see a sample JSON file attached.)

This guide is about how to set up ComfyUI on your Windows computer to run Flux.1; you can find the Flux Dev diffusion model weights there, along with an example of what the workflow can make. For Stable Cascade, Stage B goes to \models\unet\SD Cascade (stage_b_bf16.safetensors, 3.13 GB). For ease, you can download these models from here. You can also improve SD 1.5 output by mixing in XL in Comfy. One year passes very quickly, and progress is never linear or promised.

I've been using ComfyUI for a few weeks now and really like the flexibility it offers. Related projects mentioned in this thread: ComfyUI-Custom-Scripts, the ComfyUI Fooocus Inpaint with Segmentation workflow, and the "Merge 2 images together" workflow. I provide one example JSON to demonstrate how it works; it's quite straightforward, but maybe it could be simpler. Making a bit of progress this week in ComfyUI: I've uploaded the JSON files that Krita and Comfy used for this. I would also like to ask two questions - for example, can we currently use the Stable Diffusion Turbo class of models to make generation faster?

If you download custom nodes, their bundled workflows (.json) come with them; for some workflow examples, and to see what ComfyUI can do, check out the ComfyUI Examples repo. There is also the ability to change default values of UI settings (loaded from the settings.json file - use settings-example.json as a template). [Load VAE] and [Load LoRA] are not plugged in this config for DreamShaper; you may plug them in to use 1.5 base models, and modify the latent image dimensions and upscale values to your liking.

If you want to automate any of this, there are Python packages that can read the needed information out of a file - for example, pulling the embedded workflow JSON back out of a generated image, as sketched below.
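A concrete sketch of that kind of automation: ComfyUI writes the workflow into PNG text chunks - under keys named "prompt" and "workflow" in current builds; treat the key names as an assumption if you're on an older version - and Pillow can read them back:

```python
# Sketch: recover the embedded workflow from a ComfyUI-generated PNG.
# Assumes the file still has its metadata (Reddit and many hosts strip it).
import json
from PIL import Image  # pip install pillow

img = Image.open("ComfyUI_00001_.png")  # filename is illustrative

# ComfyUI stores two text chunks: "workflow" (full editor graph) and
# "prompt" (flat API-format graph). Older versions may differ.
raw = img.info.get("workflow") or img.info.get("prompt")
if raw is None:
    raise SystemExit("No workflow metadata found - was it stripped?")

graph = json.loads(raw)

# Save it back out as a shareable .json you can load from ComfyUI's menu.
with open("recovered_workflow.json", "w") as f:
    json.dump(graph, f, indent=2)
print("workflow recovered")
```

The same trick is what the ComfyUI frontend itself does when you drop a PNG onto the canvas.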
I did Install Missing Custom Nodes, Update All, etc., but there are many issues every time I load the workflows, and it looks complicated to solve. For what it's worth: no errors in the shell on drag and drop, and nothing on the page updates at all; I tried multiple PNG and JSON files, including known-good ones, pulled the latest from GitHub, and removed all custom nodes. In my case, drag and drop doesn't work for .json files at all; the load menu does. (I tried to load the SuperBeasts JSON as well, but am having problems with a couple of nodes.)

The ComfyUI workflow is just a bit easier to drag and drop and get going right away. Its default workflow works out of the box, and I definitely appreciate all the examples for different workflows. ComfyUI is a completely different conceptual approach to generative art. I have a tutorial here for those who want to learn it instead of starting from a ready-made ComfyUI workflow. (Thanks for the tips on Comfy! I'm enjoying it a lot so far.)

For the video workflow: step 1 - put in where the original frames are and the dimensions of the output that you wish to have, then run the step-1 workflow ONCE; step 2 - upload an image. The experiments directory contains more advanced examples. It is a simple way to compare these methods; it's a bit messy, as I have no artistic cell in my body.

A few months ago, I suggested the possibility of creating a frictionless mechanism to turn ComfyUI workflows (no matter how complex) into simple, customizable front-ends for end users. I just tried a few things, and it looks like the only way I can make this work is to use the "Save (API Format)" button in Comfy and then upload the resulting JSON. This tool also lets you export your workflows in a "launcher.json" file format, which lets anyone using the ComfyUI Launcher import your workflow with 100% reproducibility.

Last but not least, I have the JSON template for the SDXL Turbo examples. This workflow requires quite a few custom nodes and models to run, including PhotonLCM_v10. For inpainting, I've tried using an empty positive prompt (as suggested in demos) and describing the content to be replaced, without success.
The Ultimate SD Upscale is one of the nicest things in Auto1111: it first upscales your image using a GAN or any other old-school upscaler, then cuts it into tiles small enough to be digestible by SD - typically 512x512 - with the pieces overlapping each other. Hello, fellow ComfyUI users: this is my workflow for testing different methods to improve image resolution. Then there's a full render of the image with a prompt that describes the whole thing. What it's great for: once you've achieved the artwork you're looking for, it's time to delve deeper and use inpainting. Grab the ComfyUI workflow JSON here - here are a screenshot and the .json of the file I just used. As always, I'd like to remind you that this is a workflow designed for learning how to build a pipeline and how SDXL works. For each of the sequences, I generated about ten and chose the one I liked. Plus, you want to upscale in latent space if possible. (Not a specialist, just a knowledgeable beginner.)

xpost from r/comfyui: new IPAdapter workflow. Just download it, drag it inside ComfyUI, and you'll have the same workflow you see above. It lets you change the aspect ratio, resolution, steps, and everything else without having to edit the nodes. The ComfyUI workflow is here; if anyone sees any flaws in it, please let me know. Other than that, there were a few mistakes in version 3.1 that are now corrected (I also fixed the JSON with a better sampler layout). Click New Fixed Random in the Seed node in Group A.

Q: I'm new to ComfyUI - does the sample image work as a "workflow save", as if it were a JSON with all the nodes? A: Yes - images generated in ComfyUI embed the full node graph. I couldn't decipher the posted one either, but I think I found something that works. A video snapshot is a variant on this theme: such images can create the impression of watching an animation when presented as an animated GIF or another video format.

ComfyUI provides a powerful yet intuitive way to harness Stable Diffusion through a flowchart interface: users can drag and drop nodes to design advanced AI art pipelines, and can take advantage of libraries of existing workflows. Go to the GitHub repos for the example workflows. The drawback of ComfyUI is that it cannot change the topology of the workflow once the run has already started. Note that without the reference_only ControlNet this works poorly; the only references I've been able to find mention this inpainting model being used from raw Python or Auto1111. For prompting, we can take a simple prompt, create a list, verify it against the guideline, improve it, and then send it to `TaraPrompter` to actually generate the final prompt we send. It would also be great to have a set of nodes that can further process image metadata - for example, extract the seed and prompt to re-use in the workflow.

That being said, even for making apps, I believe using ComfyScript is better than directly modifying JSON, especially if the workflow is complex. There are a couple of abandoned suites that say they can render workflows - e.g. Endless Nodes - but I couldn't find anything that can still be installed and works. Displaying arbitrary shared workflows is genuinely hard: either you maintain a ComfyUI install with every custom node on the planet installed (don't do this), or you steal some code that consumes the JSON and draws the graph. (See also the ComfyUI Tattoo Workflow on OpenArt.)

About the Junction node: by default you "pluck" values starting from the first element, so if the junction carries [11, 22, 33], the first output pin with type INT outputs 11. When _offset is set to something like INT,1, the first pin of type INT outputs 22 instead: the _offset field is a way to quickly skip ahead within data of the same type.
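To make that pluck/_offset behavior concrete, here is a tiny plain-Python model of the semantics described above - this mimics the lookup rule, it is not the Junction node's actual implementation:

```python
# Toy model of "plucking" typed values with an _offset, as described above.
from typing import Any

def pluck(values: list[tuple[str, Any]], want_type: str, offset: int = 0) -> Any:
    """Return the (offset+1)-th value whose type tag matches want_type."""
    matches = [v for t, v in values if t == want_type]
    return matches[offset]

data = [("INT", 11), ("INT", 22), ("INT", 33)]
print(pluck(data, "INT"))            # -> 11 (default: first INT)
print(pluck(data, "INT", offset=1))  # -> 22 (i.e. _offset = INT,1)
```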
I really, really love how lightweight and flexible ComfyUI is. Support for SD 1.x, 2.x, SDXL, LoRA, and upscaling makes it adaptable to almost anything. I'm here to share my current workflow for switching between prompts. People run bots that generate art all the time and post it automatically to Discord and other places, so a scriptable workflow format matters.

I tried to find a good inpaint workflow and just found a bunch of wild workflows that wanted a million nodes and had a bunch of different functions. The ComfyUI manual needs updating, in my opinion; for now, you learn from the examples or through searching Reddit. ComfyUI itself was generating normal images just fine.

This repo is divided into macro categories; in the root of each directory you'll find the basic JSON files, plus an experiments directory. "Merging 2 Images" is part of a collection of simple but powerful ComfyUI workflows for Stable Diffusion with curated default settings. Load the .json file and you're set - please let me know if you have any questions (my Discord: jojo studio). If you use SDFX, make sure to also rename the bundled config example file to its .json name, and verify / edit the paths to your model folders. You can also animate still images with the AutoCinemagraph ComfyUI workflow.

If you post an image here, Reddit will strip the workflow from it - so OP, please upload the PNG to civitai.com and post a link back here if you are willing to share it. That way, the Comfy Workflow tab in Swarm will be your version of ComfyUI, with your custom nodes. For other types of detailer, just type "Detailer" in the node search.
This workflow needs a bunch of custom nodes and models that are a pain to track down (the full list is in the next section). On the artifact question: it's solvable - I've been working on a workflow for this for like two weeks, trying to perfect it for ComfyUI, but no matter what you do there is usually some kind of artifacting; it's a challenging problem to solve.

You can use folders in your model directories too, e.g. cascade/clip_model.safetensors vs sd1.5/clip_model_somemodel.safetensors - it makes things easier to remember. Stage A of Stable Cascade goes to \models\vae\SD Cascade (stage_a.safetensors). Check out ComfyUI here: https://github.com/comfyanonymous/ComfyUI. If you haven't already, install ComfyUI and Comfy Manager; you can find instructions on their pages. The UI feels professional and directed.

The workflow is basically an image loader combined with a whole bunch of little modules for doing various tasks: build a prompt with an image, generate a color gradient, batch-load images. Mute the two Save Image nodes in Group E, then click Queue Prompt to generate a batch of 4 image previews in Group B. You can save the workflow as a JSON file and load it again from that file. Is there a way to load the workflow from an image within ComfyUI itself? A search of the subreddit didn't turn up any answers. You can pull PNGs from Automatic1111 to create some Comfy workflows, but as far as I can tell it doesn't work with ControlNet or ADetailer images, sadly. I was confused when I saw, in several YouTube videos by Sebastian Kamph and Olivio Sarikas, that they simply drop PNGs into an empty ComfyUI - that works because of the embedded metadata.

With ComfyScript, you can write workflows in code instead of separate files, use control flow directly, call Python libraries, and cache results across different workflows.

The easy way to run Flux: just download this one and run it like another checkpoint: https://civitai.com/models/628682/flux-1-checkpoint. Search the sub for what you need and download the .json file. Simply download the .json file, change your input images and your prompts, and you are good to go - see, for example, the ControlNet Depth and Img2Img ComfyUI workflows. Save the sample image, then load it or drag it onto ComfyUI to get the workflow. When the workflow opens, download the dependent nodes by pressing "Install Missing Custom Nodes" in Comfy Manager. This is why I used Rem as an example: to show you can "transplant" the kick to a different character using a character LoRA.

From the ComfyUI_examples, there are two different 2-pass ("Hires fix") methods - one is latent scaling, one is non-latent scaling - and now there's also a `PatchModelAddDownscale` node. I know it's simple for now.

Now I've enabled Developer mode in Comfy and have managed to save the workflow in JSON API format, but I need help setting up the API; a minimal sketch follows.
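Using only the standard library, queueing that saved API-format file against a running server looks roughly like this - ComfyUI listens on port 8188 by default and accepts the graph under a "prompt" key; the filename and node id below are illustrative:

```python
# Sketch: queue a saved API-format workflow on a running ComfyUI server.
# Assumes ComfyUI is listening on localhost:8188 (the default).
import json
import urllib.request

with open("workflow_api.json") as f:       # file from "Save (API Format)"
    workflow = json.load(f)

# Optionally tweak inputs before queueing, e.g. randomize the seed:
# workflow["3"]["inputs"]["seed"] = 123456  # node id "3" is illustrative

payload = json.dumps({"prompt": workflow}).encode("utf-8")
req = urllib.request.Request(
    "http://127.0.0.1:8188/prompt",
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read()))  # response includes a prompt_id
```

Finished images land in ComfyUI's output/ directory just as if you had clicked Queue Prompt in the browser.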
here is a example: "Help me create a ComfyUI workflow that takes an input image, uses SAM to identify and inpaint watermarks for removal, then applies various methods to upscale the watermark-free image. OP probably thinks that comfyUI has the workflow included with the PNG, and it does. Look for the example that uses controlnet lineart. 3. Since I used ComfyUI, I downloaded tons of workflows, but only around 10% of them work. Put the flux1-dev. You can find the Flux Schnell diffusion model weights here this file should go in your: ComfyUI/models/unet/ folder. This workflow needs a bunch of custom nodes and models that are a pain to If necessary, updates of the workflow will be made available on Github. The workflow is saved as a json file. This is the link to the workflow. Tidying up ComfyUI workflow for SDXL to fit it on 16:9 Monitor Toggle for "workflow loading" when dropping in image in ComfyUI. The graphic style This is the workflow I use in ComfyUi to render 4k pictures with Dream shaper XL model. This repo contains examples of what is achievable with ComfyUI. it is VERY memory efficient and has a great deal of flexibility especially where a user has need of a complex set of instructions I'm looking for a workflow (or tutorial) that enables removal of an object or region (generative fill) in an image. . Where can one get such things? It would be nice to An All-in-One FluxDev workflow in ComfyUI that combines various techniques for generating images with the FluxDev model, including img-to-img and text-to-img. I looked into the code and when you save your workflow you are actually "downloading" the json file so it goes to your default browser download folder. I'm not going to spend two and a half grand on high-end computer equipment, then cheap out by paying £50 on some crappy SATA SSD that maxes out at 560MB/s. ) to integrate it with comfyUI for a "$0 budget sprite game". Ending Workflow. Ability to save full metadata for generated images (as JSON or embedded in PNG, disabled by default). This workflow needs a bunch of custom nodes and models that are a pain to track down: ComfyUI Path Helper MarasIT Nodes KJNodes Mikey Nodes AnimateDiff AnimateDiff Evolved IPAdapter plus If you drag in a png made with comfyui, you'll see the workflow in comfyui with the nodes etc. But for a base to start at it'll work. safetensors sd15_t2v_beta. json you had used, helpful. I would also love to see some repo of actual JSON or images (since Comfy does build the workflow from the image if everything necessary is installed). This json file can then be processed automatically across multiple repos to construct an overall map of everything. You can just use someone elses workflow of 0. A group that allows the user to perform a multitude of blends between image sources as well as add custom effects to images Flux Dev. I am personally using it as a layer between telegram bot and a ComfyUI to run different workflows and get the results using user's text and image input. So if you ever wanted to use the same effect as the OP, all you have to do is load his image and everything is already there for you. Because there are an infinite number of things that can happen in front of a virtual camera there are then an infinite number of variables and scenarios that generative models will face. You switched accounts on another tab or window. Andy Lau is ready for inpainting. 
A seed-reuse trick: what I started doing tonight was to disconnect my upscale section but put a Load Image box at the start of the upscale. Generate a batch of images with a fixed seed; if I like one of them, I load it at the start of the upscale section and regenerate - because the seed hasn't changed, the generation stage is skipped. The video in the post shows a rather simple layout that proves out the building blocks of a mute-based, context-building workflow. (EDIT: for example, this workflow shows the use of the other prompt windows.)

This is just a slightly modified ComfyUI workflow from an example provided in the examples repo; two workflows are included. Mainly, it's a workflow designed to make or change an initial image (loaded from a folder) to send to the sampler. The example pictures do load a workflow, but they don't have a label or text indicating whether it's version 3.1 or not. For SDXL Turbo, the proper way to use it is with the new SDTurboScheduler node, but it might also work with the regular schedulers.

Expression code (translated from the Chinese note): adapted from ComfyUI-AdvancedLivePortrait; for the face-crop model, see comfyui-ultralytics-yolo - download face_yolov8m.pt or face_yolov8n.pt into models/ultralytics/bbox/. Related node packs mentioned here: ComfyUI-Impact-Pack, rgthree-comfy.

With some nervous trepidation, I release my first node for ComfyUI: an implementation of the DemoFusion iterative mixing sampling process. Sytan's SDXL official ComfyUI 1.0 workflow - with Mixed Diffusion and a reliable, high-quality HiRes Fix - is now officially released; below are some examples. I was just using Sytan's workflow with a few changes to the settings, and I replaced the last part of it with a 2-step upscale using the refiner model via Ultimate SD Upscale, as you mentioned. Much appreciated if you can post the JSON workflow, or a picture generated from it, so it's easier to set up.

The video pipeline is designed to be as fast as possible, to get the best clips and upscale them later (for 12 GB of VRAM, the max is about 720p resolution). To extract frames from a video, for example: `ffmpeg -i my-cool-video.mp4 -vf fps=10/1 frame%03d.png`. If for some reason you want to run something less than 16 frames long, all you need is this part of the workflow. On animation control, from the author's notes: if you reduce the linear_key_frame_influence_value of the Batch Creative Interpolation node - to, say, 0.50 - the graph shows the lines more "spaced out", meaning the frames are more distributed.

Is there a way to copy normal webUI parameters (the usual PNG info) into ComfyUI directly with a simple Ctrl-C/Ctrl-V? Dragging and dropping 1111 PNGs into ComfyUI works most of the time. You can achieve the same thing in A1111; Comfy is just awesome because you can save the workflow 100% and share it with others. In the original post there is a YouTube link where everything is explained while zooming in on the workflow in ComfyUI; for more details on using the workflow, check out the full guide. Does anyone else here use this Photoshop plugin?

Krita's JSON settings: first, I generated a series of images in a 9:16 aspect ratio, some in ComfyUI with SDXL and others in Midjourney. I notice the names of the settings in the Krita JSON don't match what's in Comfy's JSON at all, so I can't simply copy them across. This ComfyUI workflow lets you remove backgrounds or replace backgrounds - a must for anyone wanting to enhance product shots. Learned from the video "Stable Cascade in ComfyUI Made Simple" (6m56s, posted Feb 19, 2024, by the How Do? channel on YouTube). See also the SDXL default ComfyUI workflow.

I use a Google Colab VM to run ComfyUI. Just wanted to share that I have updated the comfy_api_simplified package: it can now be used to send images, run workflows, and receive images from a running ComfyUI server. I am personally using it as a layer between a Telegram bot and ComfyUI, to run different workflows and get results using the user's text and image input. That's how I made and shared this.
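For those who'd rather not add a dependency, the same send-image-then-run flow can be sketched against ComfyUI's stock HTTP endpoints. The /upload/image endpoint and its "image" form field are taken from stock ComfyUI's server; verify them against your version, and the node id below is illustrative:

```python
# Sketch: send an input image to a running ComfyUI server, then queue a
# workflow that references it.
import json
import requests  # pip install requests

SERVER = "http://127.0.0.1:8188"  # adjust for Colab/RunPod tunnels

# 1) Upload the image; it lands in ComfyUI's input/ directory.
with open("photo.png", "rb") as f:
    r = requests.post(f"{SERVER}/upload/image",
                      files={"image": ("photo.png", f, "image/png")})
r.raise_for_status()
uploaded_name = r.json()["name"]

# 2) Point a LoadImage node at it and queue the workflow (API format).
with open("workflow_api.json") as f:
    wf = json.load(f)
wf["10"]["inputs"]["image"] = uploaded_name  # node id "10" is illustrative

r = requests.post(f"{SERVER}/prompt", json={"prompt": wf})
print(r.json())
```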
I managed to set up the sdxl_turbo_txt2img_api JSON file that is described in the documentation. Instructions and a listing of the necessary resources are in the Note files. There are plenty of ready-made workflows you can find: download the JSON file, then drag and drop it into ComfyUI. The node interface encapsulates the difficulties and idiosyncrasies of Python programming by breaking the problem down into steps. Each workflow runs in its own isolated environment, which prevents your workflows from suddenly breaking when you update a workflow's custom nodes, ComfyUI, etc.

Update (resource, JSON inside): I downloaded the example IPAdapter workflow from GitHub and rearranged it a little to make it easier to look at, so I can see what the heck is going on. In this workflow, each method runs on your input image, and you can select the one that produces the best results. The workflow can use LoRAs and ControlNets, and it enables negative prompting with KSampler, dynamic thresholding, inpainting, and more. The denoise value controls the amount of noise added to the image; you can use more steps to increase the quality. (My use case: I have an image that I want to do a simple zoom-out on.) Currently the extension still needs some improvement - for example, you can only use resolutions divisible by 256 - and I'm just wondering what other folks use it for.

AP Workflow 7.0 for ComfyUI now ships with support for SD 1.5 and HiRes Fix, IPAdapter, a Prompt Enricher via local LLMs (and OpenAI), a new Object Swapper + Face Swapper, FreeU v2, XY Plot, ControlNet and ControlLoRAs, SDXL Base + Refiner, Hand Detailer, Face Detailer, Upscalers, ReVision, etc. (An earlier release, AP Workflow 6.0, added support for Stable Video Diffusion, a better Upscaler, a new Caption Generator, and a new Inpainter with inpainting/outpainting.) While I was kicking around in LtDrData's documentation today, I noticed the ComfyUI Workflow Component. Hello everyone - I have some exciting updates to share for One Button Prompt: it now officially supports ComfyUI, and there is a new Prompt Variant mode. Some very cool stuff! And there's the ComfyUI Ultimate Starter Workflow + Tutorial: heya, I've been working on this workflow for like a month, and it's finally ready, so I also made a tutorial on how to use it.

But all of the other API workflows listed in the Custom ComfyUI Workflow dropdown in the Photoshop plugin window are non-functional, giving variations of "ComfyUI Node type is not found" errors. Honestly, the real way this needs to work is for every custom node author to publish a JSON file that describes the functionality of each node's inputs and outputs and the general functionality of the node(s). That JSON file could then be processed automatically across multiple repos to construct an overall map of everything, as sketched below.
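As a sketch of that aggregation idea - the "node_info.json" filename and its schema are hypothetical, since no such convention exists yet:

```python
# Sketch: build an overall map of custom nodes by merging per-repo
# descriptor files. "node_info.json" and its schema are hypothetical;
# this only shows the aggregation step the text describes.
import json
from pathlib import Path

def build_node_map(repos_dir: str) -> dict:
    node_map: dict[str, dict] = {}
    for descriptor in Path(repos_dir).glob("*/node_info.json"):
        repo = descriptor.parent.name
        for name, info in json.loads(descriptor.read_text()).items():
            # info would describe inputs/outputs and general functionality
            node_map[name] = {**info, "repo": repo}
    return node_map

if __name__ == "__main__":
    nodes = build_node_map("custom_nodes")
    print(f"indexed {len(nodes)} node types across all repos")
```

With such a map, a viewer could draw any shared workflow JSON without installing every custom node first.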
Thanks for the responses, though - I was unaware that the metadata of the generated files contains the entire workflow (see the extraction sketch earlier). The WAS suite has some workflow stuff in its GitHub links somewhere as well. Always refresh your browser and click refresh in the ComfyUI window after adding models or custom_nodes.

Hi everyone - I've been using SD / ComfyUI for a few weeks now, and I find myself overwhelmed by the number of ways to do upscaling; there are a lot of upscale variants in ComfyUI, and there is also an UltimateSDUpscale node suite (as an extension). Note that this ComfyUI workflow uses the masquerade custom nodes, but they're a bit broken, so I can't be totally sure; I downloaded the JSON, but I don't have the images you set up as an example, so I just roughly reproduced the workflow shown in the video on the GitHub site - and this works! Maybe it even works better than before; at least I'm getting good results with fewer samples. Still great on OP's part to share the workflow. I am thinking of the scenario where you have generated, say, a thousand images with a randomized prompt and low-quality settings, then selected the hundred best and want to re-create them at high quality.

Upcoming tutorial: SDXL LoRA + using a 1.5 LoRA with SDXL, plus upscaling. Future tutorials planned: prompting practices, post-processing images, batch trickery, networking ComfyUI. For layout, it would be very cool if one could place the node numbers on a grid.

This is the input image that will be used in this example. Here is an example using a first pass with AnythingV3 with the ControlNet, and a second pass without the ControlNet with AOM3A3 (Abyss Orange Mix 3) using their VAE. So if you are using that, I recommend you take a look at this new one. SDXL Turbo is an SDXL model that can generate consistent images in a single step. In 1111, using image-to-image, you can batch-load all frames of a video, batch-load ControlNet images, or even masks; as long as they share the same names as the main video frames, they will be associated with those images during batch processing. (Comfy UI's inpainting and masking aren't perfect, by the way.)

Comfy UI is actually very good: it has many capabilities that are simply beyond other interfaces. Since I run it on a remote VM, every time I reconnect I have to load a pre-saved workflow to continue where I started. In addition, I provide some sample images that can be imported into the program; download the following inpainting workflow, and check the ComfyUI image examples in the link. Share, discover, and run thousands of ComfyUI workflows online. I recently discovered the existence of the GLIGEN nodes in ComfyUI and thought I would share some of the images I made using them (more in the Civitai post link) - has anyone else messed around with GLIGEN much?

More features: the ability to load prompt information from JSON and PNG files; the ability to change default paths (loaded from the paths.json file - use paths-example.json as a template); and I've also added a `TaraApiKeySaver` node. In case you ever wanted to see what happens if you go from Prompt A to Prompt B with multiple steps in between - now you can! (The workflow was intended to be attached to the screenshot at the bottom of this post; instead, here's a link - a JSON uploaded to Pastebin, link also in the comments.) This is an example of an image that I generated with the advanced workflow. The longer-term goal is using natural-language descriptions to automatically produce the corresponding JSON configurations. You can then load or drag the following image in ComfyUI to get the workflow. (Well, I feel dumb.)
Here is an example of 3 characters, each with its own pose, outfit, features, and expression. The first workflow is very similar to the old one and is just called "simple". This probably isn't the fully recommended workflow, though, as it has POS_G and NEG_G prompt windows but none for POS_L, POS_R, NEG_L, and NEG_R, which are part of SDXL's trained prompting format. See also the official list of SDXL resolutions (as defined in the SDXL paper).

You can load or drag the following image in ComfyUI to get the Flux Schnell workflow. Think about mass-producing stuff, like game assets. In the ComfyUI Manager, select Install Models and scroll down to the ControlNet models; download the second ControlNet tile model (the description specifically says you need it for tile upscaling). Then you finally have an idea of what's going on, and you can move on to ControlNets, IPAdapters, detailers, CLIP vision, and 20 other things.

A repository of well-documented, easy-to-follow workflows for ComfyUI: cubiq/ComfyUI_Workflows. Input your choice of checkpoint and LoRA in their respective nodes in Group A. The entire Comfy workflow is there for you to use: load the .json file, change your input images and your prompts, and you are good to go. Inpainting workflow: you can mix ControlNets here too.

I'm trying to get dynamic prompts to work with ComfyUI, but the random prompt string won't link with the CLIP Text Encode node as indicated on the diagram I have from the GitHub page - and the diagram doesn't load into ComfyUI, so I can't test it out. The video is just too fast. It achieves high FPS using frame interpolation (with RIFE).

Starting workflow: Img2Img works by loading an image (like this example image), converting it to latent space with the VAE, and then sampling on it with a denoise lower than 1.0. If you have previously generated images you want to upscale, you'd modify the HiRes stage to include the img2img nodes. Finally, it's pretty easy to prune a workflow's JSON before sending it to ComfyUI (see the sketch below).
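A minimal sketch of that pruning idea for API-format JSON, assuming the flat node-id layout shown earlier: keep only the ancestors of the output nodes and drop everything else (muted or disconnected branches).

```python
# Sketch: prune an API-format workflow to the nodes an output depends on.
# Assumes the flat {node_id: {"class_type": ..., "inputs": ...}} layout.
import json

OUTPUT_TYPES = {"SaveImage", "PreviewImage"}  # extend for your own outputs

def prune(workflow: dict) -> dict:
    keep: set[str] = set()
    stack = [nid for nid, node in workflow.items()
             if node["class_type"] in OUTPUT_TYPES]
    while stack:
        nid = stack.pop()
        if nid in keep:
            continue
        keep.add(nid)
        # Link values are [source_node_id, output_slot] pairs.
        for value in workflow[nid]["inputs"].values():
            if (isinstance(value, list) and len(value) == 2
                    and str(value[0]) in workflow):
                stack.append(str(value[0]))
    return {nid: node for nid, node in workflow.items() if nid in keep}

with open("workflow_api.json") as f:
    wf = json.load(f)
print(f"{len(wf)} nodes -> {len(prune(wf))} after pruning")
```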
As far as I can see from the workflow, you sent the full image to clip_vision, which basically turns the full image into an embedding. Reddit removes the ComfyUI metadata when you upload your pic. Yup, all images generated in the main ComfyUI frontend have the workflow embedded in the image like that (right now, anything that uses the ComfyUI API doesn't have that, though - and that's the best part). I understand how outpainting is supposed to work in ComfyUI from the workflow JSON, but actually I got the same problem as with "euler": just wildly different results, like in the examples above.

Can someone give examples of what you can do with the adapter in general, beyond what's in the videos? I've used it a little, and it feels like a way to have an instant LoRA for a character. The "workflow" here is different, but if you're willing to put in the effort to thoroughly learn a game like that and enjoy the process, then learning ComfyUI shouldn't be that much of a challenge.

Here's the big issue AI-only-driven techniques face for filmmaking: because there are an infinite number of things that can happen in front of a virtual camera, there are an infinite number of variables and scenarios that generative models will face.

For Flux Schnell, you can get the checkpoint here; put it in your ComfyUI/models/checkpoints/ directory. When rendering human creations, I still find significantly better results with 1.5 models like epicRealism or Jaugeraut, but I know that once more models come out on the SDXL base, we'll see incredible results. What is the best workflow you know of?

For your all-in-one workflow, use the Generate tab: it'll add nodes as needed if you enable LoRAs or ControlNet or want the image refined at 2x scale or whatever options you choose, and it can output your workflows as Comfy nodes if you ever want that. In ComfyUI, go into Settings and enable the dev-mode options to expose the Save (API Format) menu entry. There is also a node group that allows the user to perform a multitude of blends between image sources, as well as add custom effects to images.
I'm making changes to several nodes in a workflow, but only specific ones rerun - the KSampler, for example. When I change values in some other nodes, like the Canny Edge node or the DW Pose Estimator, they don't rerun. ComfyUI caches node outputs and only re-executes nodes whose inputs it sees as changed, so if an upstream node doesn't rerun after an edit, the cache comparison is likely not registering the change.

A simple workflow for using the new Stable Video Diffusion model in ComfyUI for image-to-video generation: I used the workflow kindly provided by u/LumaBrik, mainly playing with parameters like CFG guidance, augmentation level, and motion bucket. So, I started to get into AnimateDiff Vid2Vid using ComfyUI yesterday and am starting to get the hang of it; where I keep running into issues is identifying key frames for prompt travel.

When I saw a certain Reddit thread, I was immediately inspired to test and create my own PixArt-Σ (PixArt-Sigma) ComfyUI workflow. The trick of this method is to use the new SD3 ComfyUI nodes for loading t5xxl_fp8_e4m3fn.safetensors (5 GB - from the infamous SD3 - instead of the 20 GB default from PixArt). I also combined ELLA in the workflow to make it easier to get what I want.

More examples: the ControlNet Inpaint example, and workflows shared on OpenArt. Hi - is there a tutorial on how to build a workflow with face restoration in ComfyUI? I downloaded the Impact Pack, but I really don't know how to go from there. Other models referenced: sd15_t2v_beta.safetensors, sd15_lora_beta.ckpt.
SECOND UPDATE - HOLY COW, I LOVE COMFYUI EDITION: look at that beauty! Spaghetti no more. (And to correct an earlier note: natsort is not involved in Junction at all.)