شبکه داستانی عصبی – Telegram
793 subscribers
746 photos
35 videos
96 files
1.9K links
Here I talk about the things I love: fiction, AI, music, software, art, psychology, and more :)

If you ever want to chat, you'd make me very happy:
@alimirferdos
When I was studying for IELTS, I found these two files, which claimed to cover all the likely questions for speaking parts 2 and 3. You can probably find a newer edition if you look around:
I got them from this channel myself: https://news.1rj.ru/str/afarineshbooks
Building Microservices.PDF
16.9 MB
This is also a very good book on microservice design.
data-driven.pdf
16.7 MB
Data Driven: Creating a Data Culture

This is a booklet by DJ Patil and Hilary Mason, two heavyweights in the data field, about what being data-driven means and how to build a data-driven culture in an organization.
The copy I'm posting here is the one I read and highlighted myself.
NIPS_2015_hidden_technical_debt_in_machine_learning_systems_Paper.pdf
187.7 KB
Hidden Technical Debt in Machine Learning Systems

A very, very good paper, along with my highlights :)
Stable diffusion

Prompt:
Game of thrones written by franz kafka
A tool for drawing deep learning architectures:

https://alexlenail.me/NN-SVG/index.html
BREAKING: White House issues new policy that will require, by 2026, all federally-funded research results to be freely available to public without delay, ending longstanding ability of journals to paywall results for up to 1 year.
Some journal publishers had long fought this open access requirement, fearing it would harm their subscription business model...

- ScienceInsider.
This is seriously cool.
Twitter thread:

TwitterThreadUnrollBot:
Today, along with my collaborators at @GoogleAI, we announce DreamBooth! It allows a user to generate a subject of choice (pet, object, etc.) in myriad contexts and with text-guided semantic variations! The options are endless. (Thread 👇)
webpage: https://t.co/EDpIyalqiK
1/N https://t.co/FhHFAMtLwS

Text-to-image diffusion models are extremely powerful and allow for flexible generation of images with complex user captions. One limitation is that controlling the subject’s appearance and identity using text is very hard.
2/N https://t.co/1y7C0hnUr4

By finetuning a model (Imagen here) with few images of a subject (~3-5), a user can generate variations of the subject. E.g. by controlling the environment and context of the subject. Ever wanted to have a high-quality picture of your dog in Paris (no travel required)?
3/N https://t.co/iSb04jcU5R

Our method has some surprising capabilities inherited from large diffusion models. For example it can generate novel art renditions of a subject! Here are some renditions of a specific dog in the style of famous painters.
4/N https://t.co/Oyg2j3SK1B

We can also change semantic attributes of a subject. Re-coloring, chimeras, material changes, etc.
5/N https://t.co/sRd6356mdW

What about accessorization? Given a few images of your pet you could accessorize them with extreme flexibility. Imagination is the limit!
6/N https://t.co/ovNNGvMb1b

We can even do realistic viewpoint changes for some subjects which have a strong class prior! Here are some examples of different viewpoints for a cat. Notice the detailed fur patterns in the forehead are conserved. 🤯
7/N https://t.co/XH2Jki79s3

Finally, our method can generate new images of a subject with different expressions/emotions. Note that the original images of the subject dog here did not exhibit any of these expressions.
8/N https://t.co/Fmv6IJOaJN

One main difficulty in finetuning a diffusion model using few images is overfitting. We tackle this problem by presenting an autogenous class-specific prior preservation loss. More details in the paper.
9/N https://t.co/4bqLW1qDwi

We are able to alleviate overfitting using this approach. We show that finetuning without this loss term leads to accelerated overfitting of subject pose and appearance, or context. This decreases generation variability and incorrect scenes.
10/N https://t.co/cCpR1L2r4m
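The prior-preservation idea in tweets 9-10 can be sketched in a few lines. This is a simplified toy version, not the authors' implementation: the fine-tuning objective adds, on top of the usual denoising loss over the ~3-5 subject images, a second denoising term on class images sampled from the frozen pre-trained model, weighted by a hypothetical coefficient `lam`.

```python
import numpy as np

def denoising_loss(pred_noise, true_noise):
    # Standard epsilon-prediction MSE used when training diffusion models.
    return np.mean((pred_noise - true_noise) ** 2)

def dreambooth_loss(subject_pred, subject_noise, prior_pred, prior_noise, lam=1.0):
    # Sketch of a class-specific prior-preservation objective:
    # the usual denoising loss on the few subject images, plus a second
    # denoising term on class-prior images generated by the frozen model,
    # which discourages the fine-tuned model from forgetting the class.
    return denoising_loss(subject_pred, subject_noise) + lam * denoising_loss(prior_pred, prior_noise)
```

Setting `lam=0` recovers naive fine-tuning, which (per tweet 10) overfits the subject's pose and context much faster.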

We have many other details on the method in the paper. Feel free to check it out!
arxiv: https://t.co/GelJOBDa7H
11/N

Thank you for your time. And thank you to all of my collaborators @AbermanKfir, Yuanzhen Li, @jampani_varun, @MikiRubinstein, Yael Pritch. I had an amazing time working on this with you and am looking forward to future uses of this technology and more research!
12/N

We also thank the Imagen team for lending us access to their incredible model. And we deeply thank all of the great people who helped with reviews and feedback (all acknowledged in the paper).
Again, our project website is: https://t.co/EDpIyaDzwS
13/13 (END)
Understanding Diffusion Models: A Unified Perspective

Diffusion models have shown incredible capabilities as generative models; indeed, they power the current state-of-the-art models on text-conditioned image generation such as Imagen and DALL-E 2. In this work we review, demystify, and unify the understanding of diffusion models across both variational and score-based perspectives. We first derive Variational Diffusion Models (VDM) as a special case of a Markovian Hierarchical Variational Autoencoder, where three key assumptions enable tractable computation and scalable optimization of the ELBO. We then prove that optimizing a VDM boils down to learning a neural network to predict one of three potential objectives: the original source input from any arbitrary noisification of it, the original source noise from any arbitrarily noisified input, or the score function of a noisified input at any arbitrary noise level. We then dive deeper into what it means to learn the score function, and connect the variational perspective of a diffusion model explicitly with the Score-based Generative Modeling perspective through Tweedie's Formula. Lastly, we cover how to learn a conditional distribution using diffusion models via guidance.

https://arxiv.org/abs/2208.11970
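The second of the three objectives named in the abstract, predicting the source noise from an arbitrarily noisified input, can be illustrated with a toy NumPy sketch. The closed-form noising step follows the standard forward process q(x_t | x_0); the `model` callable and `alpha_bar` schedule here are illustrative assumptions, not code from the paper.

```python
import numpy as np

def noisify(x0, alpha_bar, eps):
    # Forward process q(x_t | x_0): blend data and Gaussian noise,
    # with alpha_bar in (0, 1) controlling how much signal survives.
    return np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * eps

def epsilon_objective(model, x0, alpha_bar, eps):
    # Train the network to recover eps from the noisified sample:
    # the "predict the source noise" objective from the abstract.
    x_t = noisify(x0, alpha_bar, eps)
    return np.mean((model(x_t, alpha_bar) - eps) ** 2)
```

A model that inverts the noising formula exactly drives this loss to zero, which is the sense in which learning epsilon is equivalent to learning the denoiser.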
ya tayr snounou
musicweek.ir
The track «Ya Tayr Snounou» («O Swallow»)

Performed by #Lama_Sharif,
a Lebanese singer

ياطير السنونو سلملي عيونو
قلو اني بحبو بحبو وبعمري ما بخونو
قلو وحياة الله عنو ما بتخلى 
قلو اني انشالله انشالله ما بخيبلو ظنونو

Migrating swallow, carry my greetings to his eyes
and tell him I love him and will never betray him

Tell him, God willing, I will not let him down
Tell him, I swear to God, I will never give him up
and will never disappoint him
I just came across this and I don't know why it's such a mood.
Hello everyone, here's a new notebook.

It's called Chrysalis, and it uses stable diffusion to generate a video that interpolates smoothly between any two prompts. It is pretty much idiot-proof. Just put in your prompts and run it, the defaults work fine.

https://twitter.com/ai_curio/status/1563250436450418689?t=othXx70pKRH9QkbAtAAwhw&s=19
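Prompt-interpolation videos like this typically blend the two prompts' text embeddings frame by frame; spherical linear interpolation (slerp) is a common choice because diffusion conditioning spaces tend to behave better along the sphere than along straight lines. A sketch under that assumption (I have not inspected Chrysalis's source; the function names are mine):

```python
import numpy as np

def slerp(t, v0, v1, eps=1e-8):
    # Spherical linear interpolation between two embedding vectors.
    v0n = v0 / (np.linalg.norm(v0) + eps)
    v1n = v1 / (np.linalg.norm(v1) + eps)
    dot = np.clip(np.dot(v0n, v1n), -1.0, 1.0)
    theta = np.arccos(dot)
    if theta < eps:  # nearly parallel vectors: plain lerp is fine
        return (1 - t) * v0 + t * v1
    s0 = np.sin((1 - t) * theta) / np.sin(theta)
    s1 = np.sin(t * theta) / np.sin(theta)
    return s0 * v0 + s1 * v1

def frames(emb_a, emb_b, n=30):
    # One interpolated conditioning vector per video frame,
    # sweeping smoothly from prompt A's embedding to prompt B's.
    return [slerp(t, emb_a, emb_b) for t in np.linspace(0.0, 1.0, n)]
```

Each interpolated vector would then be fed to the diffusion sampler in place of a single prompt embedding, with a fixed seed so only the conditioning changes between frames.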
Forwarded from Programming Resources via @like
A Persian book by someone who has worked at Google and several startups in the USA, with tips and tricks on both tech and non-tech topics.
«Locomotive» is the pen name of someone born in the Iranian 1360s (roughly the 1980s) who has spent a third of his life living and working in the US (Google, startups, ...) and is collecting his experiences with personal life, social life, work, immigration, and programming.
So far, within the first three months, the collection has grown to 322 pages (everything up to page 250 is independent of programming).

#tips #life #hacks #book #free #persian #farsi #tricks #psychology
@pythony

https://locomo.tips
Forwarded from Programming Resources via @like
This is the online home for Debugging Teams, a book about the human side of software engineering, and a close companion to the Team Geek book.
An online book called Debugging Teams that deals with the human side and soft skills of software engineering. There is another book, Team Geek, which is something of a companion to this one.

#team #teams #soft #skill #book #human #software #engineering #softskill #manage #leader #books #manager #coach
@pythony

https://www.debuggingteams.com