Looks like someone beat Z-Image to the punch with a dedicated anime-style model. Very lightweight too, insane. Even on a goddamn Sunday we're getting new releases.
https://x.com/ModelScope2022/status/1997543466587636209?t=kA4UmA71XRPsyJS515WaOQ&s=19
https://redd.it/1pgdzf1
@rStableDiffusion
ModelScope (@ModelScope2022) on X
🚀 NewBieAI-Lab drops NewBie-image-Exp0.1 — a 3.5B open-source ACG-native DiT model built for precise, fast, and high-quality anime generation.
✅ 3.5B params (8GB VRAM friendly — RTX 4060? ✅)
✅ Dual text encoders: Gemma-3-4B-it + Jina CLIP v2 → deep prompt…
NewBie Image Exp0.1: a 3.5B open-source ACG-native DiT model built for high-quality anime generation
https://modelscope.cn/models/NewBieAi-lab/NewBie-image-Exp0.1
https://redd.it/1pgehp8
@rStableDiffusion
AI-Toolkit: Use local model directories for training
For AI-Toolkit trainings, I suggest downloading the models manually and storing them locally, outside the Hugging Face cache. This works for all training types and usually removes the need for an online connection at the start of each training run.
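As a sketch of the manual download, this is roughly how it could be scripted with `huggingface_hub`'s `snapshot_download`. The `ORG--NAME` folder naming and the `allow_patterns` list are my own convention for the steps below, not something AI-Toolkit requires:

```python
# Sketch: pull only the subfolders a training needs into a dedicated
# local directory, bypassing the Hugging Face hub cache.
# Folder-naming convention and pattern list are illustrative.

def local_dir_name(repo_id: str) -> str:
    """'Tongyi-MAI/Z-Image-Turbo' -> 'Tongyi-MAI--Z-Image-Turbo'."""
    return repo_id.replace("/", "--")

def fetch_model(repo_id: str, models_root: str) -> str:
    # requires: pip install huggingface_hub
    from huggingface_hub import snapshot_download
    dest = f"{models_root}/{local_dir_name(repo_id)}"
    snapshot_download(
        repo_id=repo_id,
        allow_patterns=["text_encoder/*", "tokenizer/*",
                        "transformer/*", "vae/*", "*.json"],
        local_dir=dest,  # real files land here, not in ~/.cache
    )
    return dest

# e.g. fetch_model("Tongyi-MAI/Z-Image-Turbo", "g:/Training/Models")
```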
**Example for Z-Image-Turbo with a training-adapter LoRA; the process is the same for any other training:**
1. Go to [https://huggingface.co/Tongyi-MAI/Z-Image-Turbo/tree/main](https://huggingface.co/Tongyi-MAI/Z-Image-Turbo/tree/main) and download the folders marked in the screenshot (text\_encoder, tokenizer, transformer, vae).
2. Store this directory structure to a dedicated training models folder, in my case "**g:\\Training\\Models\\Tongyi-MAI--Z-Image-Turbo\\**"
3. Go to [https://huggingface.co/ostris/zimage\_turbo\_training\_adapter/tree/main](https://huggingface.co/ostris/zimage_turbo_training_adapter/tree/main) and download one or both of the training adapters, zimage\_turbo\_training\_adapter\_v1.safetensors or zimage\_turbo\_training\_adapter\_v2.safetensors. After some training tests I am still not sure whether V1 or V2 works better; I tend to say V1.
4. Store the LoRAs in the dedicated training models folder, in my case "**g:\\Training\\Models\\ostris--zimage\_turbo\_training\_adapter\\**"
5. Create a new job, set the correct training type, and for the models enter the paths to the downloaded models in this format: "**g://Training//Models//Tongyi-MAI--Z-Image-Turbo**" and "**g://Training//Models//ostris--zimage\_turbo\_training\_adapter//zimage\_turbo\_training\_adapter\_v1**"
6. Select the training dataset and make other changes as needed, then save the job.
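For reference, in a raw AI-Toolkit YAML job the local paths would plausibly slot in like this. The field names (`name_or_path`, `assistant_lora_path`) are borrowed from AI-Toolkit's published FLUX example configs; the Z-Image job type may use different keys, so treat this purely as a sketch:

```yaml
# Sketch only: keys taken from AI-Toolkit's FLUX example configs,
# not verified against the Z-Image job type.
model:
  name_or_path: "g://Training//Models//Tongyi-MAI--Z-Image-Turbo"
  assistant_lora_path: "g://Training//Models//ostris--zimage_turbo_training_adapter//zimage_turbo_training_adapter_v1"
```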
https://preview.redd.it/f024xhmper5g1.png?width=1731&format=png&auto=webp&s=b78cd06e4e891c89deb2bb542d89dc21e91b509b
This setup also prevents the annoying re-downloads of the complete model set when minor changes happen in the Hugging Face repository, e.g. if the readme file is updated. Each such change triggers the download of a new snapshot into the .cache\\huggingface\\hub\\ folder, creating duplicate data.
If you have already downloaded the models earlier into the .cache\\huggingface\\hub\\ folder via AI-Toolkit, you can simply copy/move the folders to your dedicated training models folder and set the local paths in the training setup as described above.
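If you want to script that copy, note that the hub cache stores each repo as `models--ORG--NAME/snapshots/<revision>/` with symlinked files. A small sketch (paths illustrative) that resolves the symlinks into a standalone folder:

```python
# Sketch: export the newest cached snapshot of a repo into a flat,
# standalone folder, with the cache's symlinks resolved to real files.
import shutil
from pathlib import Path

def export_cached_model(cache_repo: Path, dest: Path) -> Path:
    """cache_repo: e.g. ~/.cache/huggingface/hub/models--Tongyi-MAI--Z-Image-Turbo"""
    snapshots = sorted((cache_repo / "snapshots").iterdir(),
                       key=lambda p: p.stat().st_mtime)
    latest = snapshots[-1]  # most recently downloaded revision
    # symlinks=False follows the blob symlinks, copying actual file contents
    shutil.copytree(latest, dest, symlinks=False, dirs_exist_ok=True)
    return dest
```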
Finally, if you need a really comprehensive overview and explanation of the latest AI-Toolkit training settings, I can recommend this video: [https://www.youtube.com/watch?v=liFFrvIndl4&t=2s](https://www.youtube.com/watch?v=liFFrvIndl4&t=2s)
The video was made for Z-Image, but the detailed settings descriptions are relevant for all training types.
https://redd.it/1pgfkoa
@rStableDiffusion