Looks like someone beat Z-Image to the punch with a dedicated anime-style model. Very lightweight too, insanity. Even on a goddamn Sunday we're getting new releases.
https://x.com/ModelScope2022/status/1997543466587636209?t=kA4UmA71XRPsyJS515WaOQ&s=19
https://redd.it/1pgdzf1
@rStableDiffusion
🚀 NewBieAI-Lab drops NewBie-image-Exp0.1 — a 3.5B open-source ACG-native DiT model built for precise, fast, and high-quality anime generation.
✅ 3.5B params (8GB VRAM friendly — RTX 4060? ✅)
✅ Dual text encoders: Gemma-3-4B-it + Jina CLIP v2 → deep prompt…
NewBie Image Exp0.1: a 3.5B open-source ACG-native DiT model built for high-quality anime generation
https://modelscope.cn/models/NewBieAi-lab/NewBie-image-Exp0.1
https://redd.it/1pgehp8
@rStableDiffusion
AI-Toolkit: Use local model directories for training
For AI-Toolkit training runs, I suggest downloading the models manually and storing them locally, outside the Hugging Face cache. This should work for all training types and usually removes the need for an online connection at the start of each training run.
**Example for Z-Image-Turbo with the training adapter LoRA, but the process is the same for any other training:**
1. Go to [https://huggingface.co/Tongyi-MAI/Z-Image-Turbo/tree/main](https://huggingface.co/Tongyi-MAI/Z-Image-Turbo/tree/main) and download the folders marked in the screenshot (text\_encoder, tokenizer, transformer, vae).
2. Store this directory structure in a dedicated training models folder, in my case "**g:\\Training\\Models\\Tongyi-MAI--Z-Image-Turbo\\**"
3. Go to [https://huggingface.co/ostris/zimage\_turbo\_training\_adapter/tree/main](https://huggingface.co/ostris/zimage_turbo_training_adapter/tree/main) and download one or both of the training adapters zimage\_turbo\_training\_adapter\_v1.safetensors or zimage\_turbo\_training\_adapter\_v2.safetensors. After some training tests I am still not sure whether V1 or V2 works better; I tend to say V1.
4. Store the LoRAs in the dedicated training models folder, in my case "**g:\\Training\\Models\\ostris--zimage\_turbo\_training\_adapter\\**"
5. Create a new job, set the correct training type, and enter the paths to the downloaded models in this format: "**g://Training//Models//Tongyi-MAI--Z-Image-Turbo**" and "**g://Training//Models//ostris--zimage\_turbo\_training\_adapter//zimage\_turbo\_training\_adapter\_v1**"
6. Select the training dataset and make other changes as needed, then save the job.
https://preview.redd.it/f024xhmper5g1.png?width=1731&format=png&auto=webp&s=b78cd06e4e891c89deb2bb542d89dc21e91b509b
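If you prefer to script steps 1–4 instead of clicking through the browser, a minimal sketch using the `huggingface_hub` library would look roughly like this (my own example, not part of AI-Toolkit; the paths match the ones above, so adjust them to your setup):

```python
# Sketch: fetch only the folders AI-Toolkit needs into a dedicated local
# models directory, bypassing the .cache\huggingface\hub layout.
from huggingface_hub import snapshot_download, hf_hub_download

# Base model: only text_encoder, tokenizer, transformer and vae are needed
# (add top-level *.json config files if your setup expects them).
snapshot_download(
    repo_id="Tongyi-MAI/Z-Image-Turbo",
    local_dir=r"g:\Training\Models\Tongyi-MAI--Z-Image-Turbo",
    allow_patterns=["text_encoder/*", "tokenizer/*", "transformer/*", "vae/*"],
)

# Training adapter LoRA (v1 in this example).
hf_hub_download(
    repo_id="ostris/zimage_turbo_training_adapter",
    filename="zimage_turbo_training_adapter_v1.safetensors",
    local_dir=r"g:\Training\Models\ostris--zimage_turbo_training_adapter",
)
```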
This setup also prevents the annoying re-downloads of the complete model set when minor changes happen in the Hugging Face repository, e.g. if the readme file is updated. Such changes trigger the download of a new snapshot into the .cache\\huggingface\\hub\\ folder each time, creating duplicate data.
If you have already downloaded the models earlier into the .cache\\huggingface\\hub\\ folder via AI-Toolkit, you can just copy/move the folders to your dedicated training models folder and set the local paths in the training setup as described above.
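If you are not sure which repos are already in the cache, or where exactly their snapshot folders live, a quick way to list them (again just a `huggingface_hub` sketch, not an AI-Toolkit feature) is:

```python
# List repos currently in the Hugging Face cache and where they live on disk,
# so you know which snapshot folders to copy into your training models folder.
from huggingface_hub import scan_cache_dir

for repo in scan_cache_dir().repos:
    print(repo.repo_id, "->", repo.repo_path)
```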
Finally, if you need a really comprehensive overview and explanation of the latest AI-Toolkit training settings, I can recommend this video: [https://www.youtube.com/watch?v=liFFrvIndl4&t=2s](https://www.youtube.com/watch?v=liFFrvIndl4&t=2s)
This video was made for Z-Image, but the detailed settings descriptions are relevant for all training types.
https://redd.it/1pgfkoa
@rStableDiffusion
Z-Image trainer that can train the distilled version of LoRA (in 4~8 steps)
https://redd.it/1pgjpec
@rStableDiffusion
🚀 ComfyUI_StarNodes v1.9.2 is out! ✨
Hey folks, just pushed a fresh update of StarNodes and wanted to share what’s new. 😊
https://preview.redd.it/r4yhqrzn8s5g1.png?width=2048&format=png&auto=webp&s=046c50d0b09afb352d1156b1d5d672d36b1ec217
**New nodes in 1.9.2:**
* ⭐ **Star Stop And Go** – Lets you pause your workflow, preview results, and then decide if you want to continue, pause, or bypass, so you don’t waste time on bad runs.
* ⭐ **Star Model Packer** – Combines split `.safetensors` model shards into one file and converts them to FP8 / FP16 / FP32 in a single, convenient node.
* ⭐ **Star FP8 Converter** – Takes an existing `.safetensors` checkpoint and converts it to FP8 (`float8_e4m3fn`), saving it into your standard ComfyUI output models folder for easy use.
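For anyone curious what that kind of FP8 conversion roughly looks like under the hood, here is a minimal standalone sketch using `torch` and `safetensors` (my own illustration of the general technique, not the actual StarNodes implementation):

```python
# Rough sketch of an FP8 (float8_e4m3fn) conversion of a .safetensors
# checkpoint. Requires a recent torch build with float8 dtypes.
import torch
from safetensors.torch import load_file, save_file


def convert_to_fp8(src_path: str, dst_path: str) -> None:
    tensors = load_file(src_path)
    converted = {}
    for name, tensor in tensors.items():
        # Only cast floating-point weights; integer/bool buffers stay as-is.
        # Real converters often keep norm/bias tensors in higher precision.
        if tensor.is_floating_point():
            converted[name] = tensor.to(torch.float8_e4m3fn)
        else:
            converted[name] = tensor
    save_file(converted, dst_path)


# Hypothetical file names, for illustration only.
convert_to_fp8("model_fp16.safetensors", "model_fp8.safetensors")
```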
On top of that, **a bunch of issues have been fixed** and the docs/versions are cleaned up so things should feel a bit smoother overall. 🧹✅
You can install/update **via ComfyUI Manager** (just search for “Starnodes”)
or check out the full details and docs on GitHub:
👉 [https://github.com/Starnodes2024/ComfyUI\_StarNodes](https://github.com/Starnodes2024/ComfyUI_StarNodes)
https://preview.redd.it/ge2e3lwp8s5g1.png?width=1545&format=png&auto=webp&s=258593f4990ae52dc9bbbc479178f35bf5a71307
Thanks for all the feedback and bug reports – it really helps make these nodes better for everyone. 💛
https://redd.it/1pgi6el
@rStableDiffusion
I trained Z-Image lora with prodigy-plus-schedule-free and it seems to work.
https://redd.it/1pgkxyq
@rStableDiffusion