r/StableDiffusion – Telegram
Multi-Angle Editing with Qwen-Edit-2509 (ComfyUI Local + API Ready)

Sharing a workflow for anyone exploring multi-angle image generation and camera-style edits in ComfyUI, powered by Qwen-Image-Edit-2509-Lightning-4steps-V1.0-bf16 for lightning-fast outputs.

https://preview.redd.it/8b7ckbj0j10g1.png?width=1529&format=png&auto=webp&s=28d011f82bf245b610506b6d371cc78bfdd30d69

You can rotate your scene by 45° or 90°, switch to top-down, low-angle, or close-up views, and experiment with cinematic lens presets using simple text prompts.

🔗 Setup & Links:
• API ready: Replicate – Any ComfyUI Workflow + Workflow
• LoRA: Qwen-Edit-2509-Multiple-Angles
• Workflow: GitHub – ComfyUI-Workflows

📸 Example Prompts:
Use any of these supported commands directly in your prompt:
• Rotate camera 45° left
• Rotate camera 90° right
• Switch to top-down view
• Switch to low-angle view
• Switch to close-up lens
• Switch to medium close-up lens
• Switch to zoom out lens

You can combine them with your main description, for example:

portrait of a knight in forest, cinematic lighting, rotate camera 45° left, switch to low-angle view
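If you are scripting batch runs, the base description and camera commands can be joined programmatically before the prompt is sent to the workflow. A minimal sketch (the command strings mirror the supported prompts listed above; `build_prompt` and the dictionary keys are just illustrative names, not part of the workflow):

```python
# Supported camera commands from the LoRA, keyed by short illustrative names.
CAMERA_COMMANDS = {
    "rotate_left_45": "rotate camera 45° left",
    "rotate_right_90": "rotate camera 90° right",
    "top_down": "switch to top-down view",
    "low_angle": "switch to low-angle view",
    "close_up": "switch to close-up lens",
    "medium_close_up": "switch to medium close-up lens",
    "zoom_out": "switch to zoom out lens",
}

def build_prompt(base: str, *commands: str) -> str:
    """Append camera commands to the base description, comma-separated."""
    parts = [base] + [CAMERA_COMMANDS[c] for c in commands]
    return ", ".join(parts)

print(build_prompt(
    "portrait of a knight in forest, cinematic lighting",
    "rotate_left_45", "low_angle",
))
```

This reproduces the combined example prompt above and makes it easy to sweep every angle preset over the same base description.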

If you’re into building, experimenting, or creating with AI, feel free to follow or connect. Excited to see how you use this workflow to capture new perspectives.

Credits: dx8152 – Original Model

https://redd.it/1orq4s3
@rStableDiffusion
Cathedral (video version). Chroma Radiance + wan refiner, wan 2.2 3 steps in total workflow, topaz upscaling and interpolation
https://www.youtube.com/watch?v=R0hlhm_P3W8

https://redd.it/1ortlkn
@rStableDiffusion
[LoRA] PanelPainter — Manga Panel Coloring (Qwen Image Edit 2509)
https://redd.it/1os7ut5
@rStableDiffusion
Image MetaHub 0.9.5 – Search by prompt, model, LoRAs, etc. Now supports Fooocus, Midjourney, Forge, SwarmUI, & more
https://redd.it/1os6ijn
@rStableDiffusion
WAN 2.2 ANIMATE - how to make long videos, higher than 480p?

Is it possible to use a resolution higher than 480p if I have 16GB of VRAM (RTX 4070 Ti SUPER)?

I'm struggling with workflows that allow generating long videos, but only at low resolutions: when I go above 640x480, I get VRAM allocation errors, regardless of the requested frame count, fps, and block swaps.

The official Animate workflow from the Comfy templates lets me make videos at 1024x768 and even 1200x900 that look awesome, but they can have at most 77 frames, which is 4 seconds. Of course, it can handle more than 4 seconds, but only with a terrible workaround: generating a batch of separate videos one by one and connecting them via first and last frames. That causes glitches and ugly transitions that are not acceptable.

Is there any way to make, say, an 8-second video at 1280x720?

https://redd.it/1oserqg
@rStableDiffusion
Is it possible to create FP8 GGUF?

Recently I've started creating GGUFs, but the requests I had were for FP8 merged models, and I noticed that the script would turn FP8 into FP16.

I did some searching and found that FP16 is the weight format GGUF accepts, but then I saw this issue - https://github.com/ggml-org/llama.cpp/issues/14762 - and would like to know if anyone has been able to make this work.

The main issue at the moment is the size of the GGUF vs. the initial model, since it converts to FP16.
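The size penalty is easy to estimate: FP8 stores one byte per weight and FP16 stores two, so a straight FP8-to-FP16 conversion roughly doubles the tensor data on disk before any GGUF quantization is applied. A quick back-of-the-envelope check (the parameter count is illustrative, and headers/metadata are ignored):

```python
def model_size_gib(num_params: float, bytes_per_param: float) -> float:
    """Approximate on-disk tensor size in GiB (ignores headers/metadata)."""
    return num_params * bytes_per_param / (1024 ** 3)

params = 12e9  # illustrative ~12B-parameter model
fp8_gib = model_size_gib(params, 1)   # FP8: 1 byte per weight
fp16_gib = model_size_gib(params, 2)  # FP16: 2 bytes per weight
print(f"FP8 ≈ {fp8_gib:.1f} GiB, FP16 ≈ {fp16_gib:.1f} GiB")
```

So the intermediate FP16 file is expected to be about twice the FP8 source, which matches the size gap described above.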

The other is that I don't know whether the conversion makes the model better, due to FP16, or even worse, because of the script conversion.

https://redd.it/1oshg41
@rStableDiffusion