How To Write Scenes.pdf
185 KB
How to write scenes
A short guide to writing fictional scenes
Machine_Learning_Design_Patterns_Solutions_to_Common_Challenges.pdf
18.7 MB
Machine Learning Design Patterns: Solutions to Common Challenges in Data Preparation, Model Building, and MLOps
I haven't read this one, but it really looks like a very interesting book.
The music conservatory (honarestan) textbooks can be a good, accessible reference for learning the fundamentals of music:
http://chap.sch.ir/category/100
I found an interesting documentary called Coded Bias.
When MIT researcher, poet and computer scientist Joy Buolamwini uncovers racial and gender bias in AI systems sold by big tech companies, she embarks on a journey alongside pioneering women sounding the alarm about the dangers of unchecked artificial intelligence that impacts us all. Through Joy’s transformation from scientist to steadfast advocate and the stories of everyday people experiencing technical harms, Coded Bias sheds light on the threats A.I. poses to civil rights and democracy.
https://ezbebin.com/coded-bias-2020/
Federated Learning.pdf
496.4 KB
A review paper on federated learning that I wrote a year or two ago.
Character Outline Questions.pdf
474.1 KB
A fairly good list of questions for designing fictional characters.
Answering these questions, even when they aren't directly relevant to our story, usually helps a lot in fleshing out that character.
Forwarded from 🎬 Movie Cottage (Elham)
After watching "BoJack Horseman", I realized nothing is worse than being stuck in the past and unable to love yourself.
This is one of the best series ever made at portraying depression; it makes you empathize with a horse so deeply you can hardly believe it yourself.
I recommend it to everyone; you'll learn a lot from it.
You can download this animated series for free, or watch it online, via the Movie Cottage website and bot. #معرفیانیمیشن
When I was studying for IELTS, I found these two, which were said to cover all the likely questions for speaking parts 2 and 3. You can probably find a newer edition if you look around:
I got them from this channel: https://news.1rj.ru/str/afarineshbooks
Building Microservices.PDF
16.9 MB
This is also a very good book on designing microservices.
data-driven.pdf
16.7 MB
Data Driven: Creating a Data Culture
This is a short book by DJ Patil and Hilary Mason, two heavyweights of the data field, about what being data-driven means and how to build a data-driven culture in an organization.
The copy I'm sending here is the one I read and highlighted myself.
NIPS_2015_hidden_technical_debt_in_machine_learning_systems_Paper.pdf
187.7 KB
Hidden Technical Debt in Machine Learning Systems
A very, very good paper, along with my highlights :)
BREAKING: White House issues new policy that will require, by 2026, all federally-funded research results to be freely available to public without delay, ending longstanding ability of journals to paywall results for up to 1 year.
Some journal publishers had long fought this open access requirement, fearing it would harm their subscription business model...
- ScienceInsider.
A wild neural network.
It's really cool.
Twitter thread:
TwitterThreadUnrollBot:
Today, along with my collaborators at @GoogleAI, we announce DreamBooth! It allows a user to generate a subject of choice (pet, object, etc.) in myriad contexts and with text-guided semantic variations! The options are endless. (Thread 👇)
webpage: https://t.co/EDpIyalqiK
1/N https://t.co/FhHFAMtLwS
Text-to-image diffusion models are extremely powerful and allow for flexible generation of images with complex user captions. One limitation is that controlling the subject’s appearance and identity using text is very hard.
2/N https://t.co/1y7C0hnUr4
By finetuning a model (Imagen here) with few images of a subject (~3-5), a user can generate variations of the subject. E.g. by controlling the environment and context of the subject. Ever wanted to have a high-quality picture of your dog in Paris (no travel required)?
3/N https://t.co/iSb04jcU5R
Our method has some surprising capabilities inherited from large diffusion models. For example it can generate novel art renditions of a subject! Here are some renditions of a specific dog in the style of famous painters.
4/N https://t.co/Oyg2j3SK1B
We can also change semantic attributes of a subject. Re-coloring, chimeras, material changes, etc.
5/N https://t.co/sRd6356mdW
What about accessorization? Given a few images of your pet you could accessorize them with extreme flexibility. Imagination is the limit!
6/N https://t.co/ovNNGvMb1b
We can even do realistic viewpoint changes for some subjects which have a strong class prior! Here are some examples of different viewpoints for a cat. Notice the detailed fur patterns in the forehead are conserved. 🤯
7/N https://t.co/XH2Jki79s3
Finally, our method can generate new images of a subject with different expressions/emotions. Note that the original images of the subject dog here did not exhibit any of these expressions.
8/N https://t.co/Fmv6IJOaJN
One main difficulty in finetuning a diffusion model using few images is overfitting. We tackle this problem by presenting an autogenous class-specific prior preservation loss. More details in the paper.
9/N https://t.co/4bqLW1qDwi
We are able to alleviate overfitting using this approach. We show that finetuning without this loss term leads to accelerated overfitting of subject pose and appearance, or of context, which reduces generation variability and produces incorrect scenes.
10/N https://t.co/cCpR1L2r4m
We have many other details on the method in the paper. Feel free to check it out!
arxiv: https://t.co/GelJOBDa7H
11/N
Thank you for your time. And thank you to all of my collaborators @AbermanKfir, Yuanzhen Li, @jampani_varun, @MikiRubinstein, Yael Pritch. I had an amazing time working on this with you and am looking forward to future uses of this technology and more research!
12/N
We also thank the Imagen team for lending us access to their incredible model. And we deeply thank all of the great people who helped with reviews and feedback (all acknowledged in the paper).
Again, our project website is: https://t.co/EDpIyaDzwS
13/13 (END)
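The prior-preservation loss described in tweets 9-10 above can be made concrete with a small sketch. What follows is a hedged, toy PyTorch version and not the authors' code: `TinyDenoiser`, the linear noising schedule, the random conditioning vectors, and `prior_weight` are all illustrative stand-ins (the real method finetunes Imagen's text-conditioned denoiser on roughly 3-5 subject photos plus class images sampled from the frozen pretrained model).

```python
# Hedged, toy sketch of a DreamBooth-style finetuning step with a
# class-specific prior-preservation loss (tweets 9-10 above). Everything
# here is an illustrative stand-in, not the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyDenoiser(nn.Module):
    """Stand-in for the text-conditioned diffusion denoiser (a UNet in practice)."""
    def __init__(self, channels: int = 3, cond_dim: int = 16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels + cond_dim, 32, 3, padding=1), nn.SiLU(),
            nn.Conv2d(32, channels, 3, padding=1),
        )

    def forward(self, x_noisy, t, cond):
        # Broadcast the conditioning vector over space; the timestep
        # embedding is omitted in this toy stand-in.
        b, _, h, w = x_noisy.shape
        cond_map = cond[:, :, None, None].expand(b, -1, h, w)
        return self.net(torch.cat([x_noisy, cond_map], dim=1))

def add_noise(x0, t, T: int = 1000):
    """Toy linear noising schedule: returns (x_t, injected_noise)."""
    alpha = (1.0 - t.float() / T)[:, None, None, None]
    noise = torch.randn_like(x0)
    return alpha.sqrt() * x0 + (1 - alpha).sqrt() * noise, noise

def dreambooth_step(model, opt, subject_imgs, subject_cond,
                    class_imgs, class_cond, prior_weight: float = 1.0):
    """One step: subject denoising loss + prior-preservation loss.

    class_imgs play the role of images of the broad class ("a dog")
    sampled from the frozen pretrained model; keeping the denoiser
    accurate on them discourages overfitting to the few subject photos.
    """
    t_s = torch.randint(0, 1000, (subject_imgs.shape[0],))
    t_c = torch.randint(0, 1000, (class_imgs.shape[0],))
    noisy_s, eps_s = add_noise(subject_imgs, t_s)
    noisy_c, eps_c = add_noise(class_imgs, t_c)

    loss_subject = F.mse_loss(model(noisy_s, t_s, subject_cond), eps_s)
    loss_prior = F.mse_loss(model(noisy_c, t_c, class_cond), eps_c)
    loss = loss_subject + prior_weight * loss_prior

    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Usage with random stand-in data: 4 subject photos, 8 class-prior images.
model = TinyDenoiser()
opt = torch.optim.AdamW(model.parameters(), lr=1e-4)
subject_imgs, class_imgs = torch.randn(4, 3, 32, 32), torch.randn(8, 3, 32, 32)
subject_cond, class_cond = torch.randn(4, 16), torch.randn(8, 16)
print(dreambooth_step(model, opt, subject_imgs, subject_cond, class_imgs, class_cond))
```

The structure makes the intent visible: the finetuned denoiser is still asked to handle generic class images, so it cannot collapse onto the handful of subject photos.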
Understanding Diffusion Models: A Unified Perspective
Diffusion models have shown incredible capabilities as generative models; indeed, they power the current state-of-the-art models on text-conditioned image generation such as Imagen and DALL-E 2. In this work we review, demystify, and unify the understanding of diffusion models across both variational and score-based perspectives. We first derive Variational Diffusion Models (VDM) as a special case of a Markovian Hierarchical Variational Autoencoder, where three key assumptions enable tractable computation and scalable optimization of the ELBO. We then prove that optimizing a VDM boils down to learning a neural network to predict one of three potential objectives: the original source input from any arbitrary noisification of it, the original source noise from any arbitrarily noisified input, or the score function of a noisified input at any arbitrary noise level. We then dive deeper into what it means to learn the score function, and connect the variational perspective of a diffusion model explicitly with the Score-based Generative Modeling perspective through Tweedie's Formula. Lastly, we cover how to learn a conditional distribution using diffusion models via guidance.
https://arxiv.org/abs/2208.11970
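The abstract's point that the three prediction targets (the clean input, the injected noise, and the score) are interchangeable can be checked numerically in a few lines. The sketch below uses made-up tensors and a single hand-picked noise level `a_bar` (standing for the cumulative signal level at some timestep), so it is an illustration rather than anything taken from the paper.

```python
# Hedged numerical sketch (not code from the paper) of the equivalence the
# abstract mentions: under x_t = sqrt(a_bar) * x0 + sqrt(1 - a_bar) * eps,
# predicting x0, predicting eps, or predicting the score of q(x_t | x0)
# are interchangeable parameterizations. The tensors below are made up.
import torch

torch.manual_seed(0)
x0 = torch.randn(5)            # "clean" sample (stand-in data)
eps = torch.randn(5)           # the injected Gaussian noise
a_bar = torch.tensor(0.37)     # cumulative signal level at the chosen step

x_t = a_bar.sqrt() * x0 + (1 - a_bar).sqrt() * eps

# 1) Recover x0 from a perfect eps prediction.
x0_from_eps = (x_t - (1 - a_bar).sqrt() * eps) / a_bar.sqrt()

# 2) Recover eps from a perfect x0 prediction.
eps_from_x0 = (x_t - a_bar.sqrt() * x0) / (1 - a_bar).sqrt()

# 3) Score of q(x_t | x0) = N(sqrt(a_bar) * x0, (1 - a_bar) * I):
#    grad_{x_t} log q = -(x_t - sqrt(a_bar) * x0) / (1 - a_bar)
#                     = -eps / sqrt(1 - a_bar)
score = -(x_t - a_bar.sqrt() * x0) / (1 - a_bar)
eps_from_score = -(1 - a_bar).sqrt() * score

print(torch.allclose(x0_from_eps, x0),      # True
      torch.allclose(eps_from_x0, eps),     # True
      torch.allclose(eps_from_score, eps))  # True
```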