Best upscale model for ComfyUI

Edit: I changed models a couple of times, restarted Comfy a couple of times… and it started working again…

r/StableDiffusion: finally, AUTOMATIC1111 has fixed the high VRAM issue in pre-release version 1.0-RC; it's taking only 7.5GB VRAM and swapping the refiner too. Use the --medvram-sdxl flag when starting.

TLDR: Both seem to do better and worse in different parts of the image, so potentially combining the best of both (Photoshop, seg/masking) can improve your upscales.

I'd say it allows a very high level of access and customization, more than A1111, but with added complexity.

Flux has been out for under a week and we're already seeing some great innovation in the open source community.

Does anyone have any suggestions, would it be better to do an iterative upscale? From what I've generated so far, the model upscale does edges slightly better than the Ultimate Upscale. For example, I can load an image, select a model (4xUltraSharp, for example), and select the final resolution (from 1024 to 1500, for example).

Maybe it doesn't seem intuitive, but it's better to go with a 4x upscaler for a 2x upscale and an 8x upscaler for a 4x upscale.

Even with ControlNets, if you simply upscale and then de-noise latents, you'll get weird artifacts like the face in the bottom right instead of a teddy bear.

Forgot to mention, you will have to download this inpaint model from Hugging Face and put it in your ComfyUI "unet" folder, which can be found in the models folder.

To get the absolute best upscales requires a variety of techniques, and often requires regional upscaling at some points. Model: base SD v1.5, see workflow for more info.

Click on Install Models on the ComfyUI Manager menu.

There is no tiling in the default A1111 hires.fix. From the ComfyUI_examples, there are two different 2-pass (hires fix) methods: one is latent scaling, one is non-latent scaling. If you want to use RealESRGAN_x4plus_anime_6B you need to work in pixel space and forget any latent upscale. I've so far achieved this with the Ultimate SD image upscale, using the 4x-Ultramix_restore upscale model.

Fastest would be a simple pixel upscale with lanczos; that's practically instant but doesn't do much either. A pixel upscale using a model like UltraSharp is a bit better (and slower), but it'll still be fake detail when examined closely. If you want actual detail at a reasonable amount of time, you'll need a 2nd pass with a 2nd sampler.

Reactor has built-in CodeFormer and GFPGAN, but all the advice I've read said to avoid them.

Model: base SD v1.5, don't need that many steps. From there you can use a 4x upscale model and run the sampler again at low denoise if you want higher resolution. But basically: txt2img, img2img, 4x upscale with a few different upscalers. Sometimes models appear twice, for example "4xESRGAN" used by chaiNNer and "4x_ESRGAN" used by Automatic1111.

And when purely upscaling, the best upscaler is called LDSR. The downside is that it takes a very long time.

There are also other upscale methods that can upscale latents with less distortion; the standard ones are going to be bicubic, bilinear, and bislerp.
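To make the "cheap and instant" option above concrete, here is a minimal sketch of a plain Lanczos pixel upscale using Pillow; the file names are placeholders, and ComfyUI's own image-scale node does the equivalent internally:

```python
from PIL import Image

# Plain pixel upscale with Lanczos resampling: near-instant, but it only
# interpolates existing pixels; no new detail is invented.
img = Image.open("input.png")  # placeholder file name
upscaled = img.resize((img.width * 2, img.height * 2), Image.LANCZOS)
upscaled.save("input_2x.png")
```

This is the baseline the comments above compare against: model upscalers (ESRGAN and friends) and a second sampling pass both exist to add the detail this resize cannot.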
I was working on exploring and putting together my guide on running Flux on RunPod ($0.34 per hour) and discovered this workflow by @plasm0 that runs locally and supports upscaling as well.

Import times for custom nodes: 0.0 seconds (IMPORT FAILED): R:\diffusion\ComfyUI\ComfyUI\custom_nodes\ComfyUI_UltimateSDUpscale. This custom node is failing to load, but I think this is a separate issue.

The custom node suites I found so far either lack the actual score calculator, don't support anything but CUDA, or have very basic rankers (unable to process a batch, for example, or only…

So I was looking through the ComfyUI nodes today and noticed that there is a new one, called SD_4XUpscale_Conditioning, which adds support for x4-upscaler-ema.safetensors (SD 4X Upscale Model). I decided to pit the two head to head; here are the results, workflow pasted below (did not bind to image metadata because I am using a very custom, weird…

Generates an SD1.5 image and upscales it to 4x the original resolution (512 x 512 to 2048 x 2048) using Upscale with Model, Tile ControlNet, Tiled KSampler, Tiled VAE Decode, and colour matching.

That's because of the model upscale. But for the other stuff, super small models and good results. You can easily utilize the schemes below for your custom setups.

Attach to it a "latent_image"; in this case it's the "upscale latent".

The resolution is okay, but if possible I would like to get something better. I believe it should work with 8GB VRAM provided your SDXL model and upscale model are not super huge, e.g. use a 2X upscaler model.

An alternative method is: make sure you are using the KSampler (Efficient) version, or another sampler node that has the 'sampler state' setting, for the first pass (low resolution) sample…

Upscale x1.5 ~ x2: no need for a model, can be a cheap latent upscale. Sample again, denoise=0.… That's because latent upscale turns the base image into noise (blur).

I took a 2-4 month hiatus, basically when the OG upscale checkpoints like SUPIR came out, so I have no heckin' idea what is the go-to these days. Super late here, but is this still the case? I've got CCSR & TTPlanet.

I usually use 4x-UltraSharp for realistic videos and 4x-AnimeSharp for anime videos. Then output everything to Video Combine.

Search for "upscale" and click on Install for the models you want.

Yep, people do say that Ultimate SD works for SDXL as well now, but it didn't work for me. Ultimate SD Upscale is the best for me; you can use it with ControlNet tile in SD 1.5.

I want to upscale my image with a model, and then select the final size of it.

You could also try a standard checkpoint with, say, 13 and 30. But I probably wouldn't upscale by 4x at all if fidelity is important. If you use Iterative Upscale, it might be better to approach it by adding noise using techniques like noise injection or an unsampler hook.

This is the 'latent chooser' node; it works but is slightly unreliable.

Though, from what someone else stated, it comes down to use case.
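As a sketch of what that "cheap latent upscale" amounts to, here is the operation in plain PyTorch (an assumption for illustration; ComfyUI's latent-upscale nodes perform this kind of interpolation internally, with bislerp as an extra option):

```python
import torch
import torch.nn.functional as F

# A latent from a first sampling pass: [batch, channels, H/8, W/8] for SD.
latent = torch.randn(1, 4, 64, 64)

# Cheap x1.5 latent upscale: no upscale model involved, just interpolation.
upscaled = F.interpolate(latent, scale_factor=1.5, mode="bilinear")

# Interpolated latents decode blurry, which is why the comments above say
# to sample again afterwards: the second pass re-invents the detail the
# interpolation smeared away.
print(upscaled.shape)  # torch.Size([1, 4, 96, 96])
```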
It's possible that MoonDream is competitive if the user spends a lot of time crafting the perfect prompt, but if the prompt simply is "Caption the image" or "Describe the image", Florence2 wins. Florence2 (large, not FT, in more_detailed_captioning mode) beats MoonDream v1 and v2 in out-of-the-box captioning.

There are also "face detailer" workflows for faces specifically. …SD 1.5 models such as DreamShaper, or those which provide good details.

Hey folks, lately I have been getting into the whole ComfyUI thing and trying different things out. Now I have made a workflow that has an upscaler in it and it works fine; the only thing is that it upscales everything, and that is not worth the wait with most outputs. So my question is: is there a way to upscale an already-existing image in Comfy, or do I need to do that in A1111?

It turns out lovely results, but I'm finding that when I get to the upscale stage the face changes to something very similar every time.

For comparison, in A1111 I drop the ReActor output image in the img2img tab, keep the same latent size, use a tile ControlNet model, and choose the Ultimate SD Upscale script and scale it by a factor of 2. With a denoise setting of 0.25 I get a good blending of the face without changing the image too much. I haven't been able to replicate this in Comfy.

Tried the llite custom nodes with lllite models and was impressed. Good for depth, openpose; so far so good.

Now go back to img2img, mask the important parts of your image, and upscale that.

Usually I use two of my workflows: for upscaling I mainly used the chaiNNer application with models from the Upscale Wiki Model Database, but I also used the fast stable diffusion Automatic1111 Google Colab and the Replicate website's super-resolution collection.

I'm using a workflow that is, in short, SDXL >> ImageUpscaleWithModel (using a 1.5 model) >> FaceDetailer. I was just using Sytan's workflow with a few changes to some of the settings, and I replaced the last part of his workflow with a 2-step upscale using the refiner model via Ultimate SD Upscale like you mentioned. Like many XL users out there, I'm also new to ComfyUI and very much just a beginner in this regard.

I ran some tests this morning. It's a lot faster than tiling, but the outputs aren't detailed.

diffusers/stable-diffusion-xl-1.0-inpainting-0.1 at main (huggingface.co)

Image generated with my new, hopefully upcoming "Instantly Transfer Face By Using IP-Adapter-FaceID: Full Tutorial & GUI For Windows, RunPod & Kaggle" tutorial and web app.

Hi everyone, I've been using SD / ComfyUI for a few weeks now and I find myself overwhelmed with the number of ways to do upscaling.

If you want a better grounding in making your own ComfyUI systems, consider checking out my tutorials. A few examples of my ComfyUI workflow to make very detailed 2K images of real people (cosplayers in my case) using LoRAs and with fast renders (10 minutes on a laptop RTX 3060).

Hi, is there a tutorial on how to do a workflow with face restoration in ComfyUI? I downloaded the Impact Pack, but I really don't know how to go from…

Edit: Also, I wouldn't recommend doing a 4x upscale using a 4x upscaler (such as 4x Siax).

That's a good model, but to be very clear, it's not "objectively better" than anything else on that site; OP's entire basis for the post is just wrong. Purpose-built upscale models are NOT "advancing" in the way they seem to believe.

Best aesthetic scorer custom node suite for ComfyUI? I'm working on the upcoming AP Workflow 8.0 and want to add an Aesthetic Score Predictor function.

DirectML (AMD cards on Windows): pip install torch-directml, then you can launch ComfyUI with: python main.py --directml

In the saved workflow it's at 4, with 10 steps (Turbo model), which is like a 60% denoise.
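Since tiled upscaling keeps coming up in these comments, here is a rough sketch of the idea behind Ultimate-SD-Upscale-style tiling. This is plain Pillow with placeholder file names, not the node's actual code; the real node also runs each tile through img2img and blends the seams:

```python
from PIL import Image

def tile_boxes(width, height, tile=1024, overlap=64):
    # Overlapping tiles keep VRAM bounded: each tile is diffused on its own,
    # and the overlap gives the blending step something to feather across.
    step = tile - overlap
    for top in range(0, height, step):
        for left in range(0, width, step):
            yield (left, top, min(left + tile, width), min(top + tile, height))

img = Image.open("upscaled.png")  # placeholder: a pre-upscaled image
for box in tile_boxes(img.width, img.height):
    patch = img.crop(box)
    # ...here the real node would run `patch` through img2img at low denoise...
    img.paste(patch, box[:2])
```

This is also why the tile ControlNet matters: it keeps each independently diffused tile consistent with the underlying image instead of hallucinating a new scene per tile.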
Adding in Iterative Mixing KSampler from the early work on DemoFusion produces far more spatially consistent results, as shown in the second image.

1 - latent upscale looks much more detailed, but gets rid of the detail of the original image; 2 - image upscale is less detailed, but more faithful to the image you upscale. Also, both have a denoise value that drastically changes the result.

You can also run a regular AI upscale then a downscale (4x * 0.5), with an ESRGAN model. So your workflow should look like this: KSampler (1) -> VAE Decode -> Upscale Image (using Model) -> Upscale Image By (to downscale the 4x result to the desired size) -> VAE Encode -> KSampler (2).

I tried all the possible upscalers in ComfyUI (LDSR, latent upscale, several models such as NMKD, the Ultimate SD Upscale node, "hires fix" (yuck!), the iterative latent upscale via pixel space node (mouthful)), and even bought a license from Topaz to compare the results with FastStone (which is great, btw, for this type of work).

ComfyUI uses a flowchart diagram model: you create nodes and "wire" them together.

I found a tile model but could not figure it out, as lllite seems to require the input image to match the output, so I'm unsure how it works for scaling with tile.

So from VAE Decode you need an "Upscale Image (using Model)" node.

In other UIs, one can upscale by any model (say, 4xSharp), and there is an additional control on how much that model will multiply (often a slider from 1 to 4 or more). So in those other UIs I can use my favorite upscaler (like NMKD's 4xSuperscalers), but I'm not forced to have them only multiply by 4x.

There's "latent upscale by", but I don't want to upscale the latent image. "Upscaling with model" is an operation with normal images, and we can operate with a corresponding model such as 4x_NMKD-Siax_200k.pth or 4x_foolhardy_Remacri.pth. "Latent upscale" is an operation in latent space, and I don't know any way to use the models mentioned above in latent space.

This ComfyUI nodes setup lets you use Ultimate SD Upscale custom nodes in your ComfyUI AI generation routine.

For the samplers I've used dpmpp_2a (as this works with the Turbo model) but unsample with dpmpp_2m; for me this gives the best results. Same seed is probably not necessary and can cause bad artifacting via the "burn-in" problem when you stack same-seed samplers.

Curious if anyone knows the most modern, best ComfyUI solutions for these problems? Detailing/refiner: keeping the same resolution but re-rendering it with a neural network to get a sharper, clearer image. Upscaling: increasing the resolution and sharpness at the same time. I am curious both which nodes are the best for this, and which models.

Best method to upscale faces after doing a faceswap with ReActor? It's a 128px model, so the output faces after faceswapping are blurry and low-res.

Connect the Load Upscale Model with the Upscale Image (using Model) to VAE Decode, then from that image to your preview/save image.

Messing around with upscale-by-model is pointless for hires fix.
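A sketch of the "4x * 0.5" trick in plain PyTorch, assuming the spandrel package (the loader ComfyUI itself uses for ESRGAN-family checkpoints); the model path is a placeholder, and the exact spandrel call style is an assumption rather than ComfyUI's own code:

```python
import torch
import torch.nn.functional as F
from spandrel import ModelLoader  # assumption: spandrel is installed

# Load a 4x ESRGAN-family upscale model (placeholder path).
model = ModelLoader().load_from_file("models/upscale_models/4x-UltraSharp.pth")

# Image tensor in [0, 1], shape [batch, 3, H, W].
image = torch.rand(1, 3, 512, 512)

with torch.no_grad():
    up4x = model(image)  # 4x model upscale: 512 -> 2048

# Downscale the 4x result to get a 2x image: the "4x * 0.5" described above.
up2x = F.interpolate(up4x, scale_factor=0.5, mode="bicubic")
print(up2x.shape)  # torch.Size([1, 3, 1024, 1024])
```

Downscaling a 4x result this way tends to give cleaner edges than a native 2x pass, which matches the "use a 4x upscaler for a 2x upscale" advice earlier in the thread.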
Note: remember to add your models, VAE, LoRAs etc. to the corresponding Comfy folders, as discussed in the ComfyUI manual installation.

FWIW, I was using it WITH the PatchModelAddDownscale node to generate with RV 5.1 and LCM for 12 samples at 768x1152, then using a 2x image upscale model, and consistently getting the best skin and hair details I've ever seen. All of this can be done in Comfy with a few nodes.

Like I can understand that using the Ultimate Upscale one could add more details through adding steps/noise or whatever you'd like to tweak on the node.

If you don't want the distortion: decode the latent, use Upscale Image By, then encode it for whatever you want to do next; the image upscale is pretty much the only distortion-"free" way to do it.

Then another node under loaders: the "Load Upscale Model" node.

I first create the image with SDXL, then Ultimate Upscale using SD 1.5; now I use it only with SDXL (bigger tiles, 1024x1024) and I do it multiple times with decreasing denoise and CFG.
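That last comment describes a loop that is easy to picture in code. A pseudo-Python sketch of repeated upscale passes with decreasing denoise and CFG; every helper name here is a hypothetical stand-in for the corresponding ComfyUI node, not a real API, and the numbers are illustrative rather than taken from the comment:

```python
# Hypothetical helpers standing in for nodes: upscale_with_model, vae_encode,
# ksampler, vae_decode. `image` and `sdxl_model` come from an earlier pass.
schedule = [
    {"denoise": 0.40, "cfg": 7.0},  # first pass: biggest change
    {"denoise": 0.30, "cfg": 6.0},
    {"denoise": 0.20, "cfg": 5.0},  # last pass: light polish only
]

for step in schedule:
    image = upscale_with_model(image, model_name="4x-UltraSharp.pth")
    latent = vae_encode(image)
    latent = ksampler(sdxl_model, latent, denoise=step["denoise"], cfg=step["cfg"])
    image = vae_decode(latent)
```

The design intuition is the same as in the comment: each pass adds resolution while the falling denoise and CFG give the sampler progressively less freedom to drift from the previous result.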