Multi-Angle Editing with Qwen-Edit-2509 (ComfyUI Local + API Ready)
Sharing a workflow for anyone exploring multi-angle image generation and camera-style edits in ComfyUI, powered by Qwen-Image-Edit-2509-Lightning-4steps-V1.0-bf16 for lightning-fast outputs.
https://preview.redd.it/8b7ckbj0j10g1.png?width=1529&format=png&auto=webp&s=28d011f82bf245b610506b6d371cc78bfdd30d69
You can rotate your scene by 45° or 90°, switch to top-down, low-angle, or close-up views, and experiment with cinematic lens presets using simple text prompts.
🔗 Setup & Links:
• API ready: Replicate – Any ComfyUI Workflow + Workflow
• LoRA: Qwen-Edit-2509-Multiple-Angles
• Workflow: GitHub – ComfyUI-Workflows
📸 Example Prompts:
Use any of these supported commands directly in your prompt:
• Rotate camera 45° left
• Rotate camera 90° right
• Switch to top-down view
• Switch to low-angle view
• Switch to close-up lens
• Switch to medium close-up lens
• Switch to zoom out lens
You can combine them with your main description, for example:
portrait of a knight in forest, cinematic lighting, rotate camera 45° left, switch to low-angle view
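For the API route, here's a minimal sketch of driving this through Replicate's Python client. The model slug, input key, and node id below are assumptions for illustration; check the Replicate page linked above and your own exported workflow JSON for the real names:

```python
# Minimal sketch: run the multi-angle edit via Replicate's "Any ComfyUI Workflow" model.
# Assumptions: `pip install replicate`, REPLICATE_API_TOKEN is set, and the workflow
# was exported from ComfyUI in API format. Slug and input key are illustrative.
import json
import replicate

base_prompt = "portrait of a knight in forest, cinematic lighting"
camera_edit = "rotate camera 45° left, switch to low-angle view"

with open("multi_angle_workflow_api.json") as f:
    workflow = json.load(f)

# "6" is a hypothetical node id; point this at your positive CLIPTextEncode node.
workflow["6"]["inputs"]["text"] = f"{base_prompt}, {camera_edit}"

output = replicate.run(
    "fofr/any-comfyui-workflow",  # hypothetical slug; use the deployment linked above
    input={"workflow_json": json.dumps(workflow)},
)
print(output)
```

Running it locally is the same idea without the client: the combined prompt string just goes into the workflow's text encode node.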
If you’re into building, experimenting, or creating with AI, feel free to follow or connect. Excited to see how you use this workflow to capture new perspectives.
Credits: dx8152 – Original Model
https://redd.it/1orq4s3
@rStableDiffusion
I made a set of enhancers and fixers for sdxl (yellow cast remover, skin detail, hand fix, image composition, add detail and many others)
https://redd.it/1orzfn4
@rStableDiffusion
Cathedral (video version). Chroma Radiance + wan refiner, wan 2.2 3 steps in total workflow, topaz upscaling and interpolation
https://www.youtube.com/watch?v=R0hlhm_P3W8
https://redd.it/1ortlkn
@rStableDiffusion
[LoRA] PanelPainter — Manga Panel Coloring (Qwen Image Edit 2509)
https://redd.it/1os7ut5
@rStableDiffusion
Image MetaHub 0.9.5 – Search by prompt, model, LoRAs, etc. Now supports Fooocus, Midjourney, Forge, SwarmUI, & more
https://redd.it/1os6ijn
@rStableDiffusion
WAN 2.2 ANIMATE - how to make long videos, higher than 480p?
Is it possible to use a resolution above 480p if I have 16GB VRAM (RTX 4070 Ti SUPER)?
I'm struggling with workflows that allow generating long videos, but only at low resolutions: when I go above 640x480, I get VRAM allocation errors, regardless of the requested frame count, fps, and block swaps.
The official Animate workflow from the Comfy templates lets me make videos at 1024x768 and even 1200x900 that look awesome, but they can have a maximum of 77 frames (which is 4 seconds). Of course, it can handle more than 4 seconds, but only with a terrible workaround: making a batch of new separate videos, one by one, and connecting them via first and last frames. That causes glitches and ugly transitions that are not acceptable.
Is there any way to make, say, an 8-second video at 1280x720?
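For reference, the batch-and-stitch workaround described above can at least be scripted; this is only a sketch of that approach (assuming ffmpeg is on PATH), not a fix for the transition glitches:

```python
# Sketch of the first/last-frame chaining workaround: grab the final frame of each
# rendered clip to seed the next generation, then losslessly concatenate the clips.
# Assumes ffmpeg is on PATH; the generation itself still happens in ComfyUI.
import subprocess

clips = ["clip_001.mp4", "clip_002.mp4"]  # outputs of successive 77-frame runs

def last_frame(clip: str, out_png: str) -> None:
    # Seek close to the end, then let the remaining frames overwrite the same
    # image file (-update 1); whichever frame comes last survives.
    subprocess.run(
        ["ffmpeg", "-y", "-sseof", "-0.5", "-i", clip, "-update", "1", out_png],
        check=True,
    )

last_frame(clips[0], "seed_next.png")  # feed this in as the next clip's first frame

# Concatenate finished clips without re-encoding (same codec/params assumed).
with open("list.txt", "w") as f:
    f.writelines(f"file '{c}'\n" for c in clips)
subprocess.run(
    ["ffmpeg", "-y", "-f", "concat", "-safe", "0", "-i", "list.txt",
     "-c", "copy", "stitched.mp4"],
    check=True,
)
```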
https://redd.it/1oserqg
@rStableDiffusion
Looking for a local alternative to Nano Banana for consistent character scene generation
https://redd.it/1oshicn
@rStableDiffusion
Is it possible to create FP8 GGUF?
Recently I've started creating GGUFs, but the requests I had were for FP8 merged models, and I noticed that the conversion script would turn FP8 into FP16.
I did some searching and found that those are the weight types GGUF accepts, but then I saw this issue (https://github.com/ggml-org/llama.cpp/issues/14762) and would like to know if anyone has been able to make it work.
The main issue at the moment is the size of the GGUF versus the initial model, since it converts to FP16.
The other is that I don't know whether the result is better because of FP16, or even worse because of the script conversion.
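To check what the converter actually wrote (i.e., whether the FP8 weights really ended up stored as F16), you can inspect the tensor types with the `gguf` Python package. A small sketch, assuming `pip install gguf`:

```python
# Sketch: count the stored tensor types in a GGUF file to confirm what the
# conversion script produced. Assumes `pip install gguf` and a local model file.
from collections import Counter
from gguf import GGUFReader

reader = GGUFReader("model.gguf")
counts = Counter(t.tensor_type.name for t in reader.tensors)

for dtype, n in counts.items():
    print(f"{dtype}: {n} tensors")  # e.g. "F16: 219 tensors" if FP8 was upcast
```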
https://redd.it/1oshg41
@rStableDiffusion