Building Microservices.PDF
16.9 MB
Here's another really good book, this one on microservices design.
data-driven.pdf
16.7 MB
Data Driven: Creating a Data Culture
This is a short book by DJ Patil and Hilary Mason, two of the heavy hitters of the data world, about what being data-driven means and how to build a data-driven culture in an organization.
The copy I'm sending here is the one I read and highlighted myself.
NIPS_2015_hidden_technical_debt_in_machine_learning_systems_Paper.pdf
187.7 KB
Hidden Technical Debt in Machine Learning Systems
A really, really good paper, together with my highlights :)
BREAKING: White House issues new policy that will require, by 2026, all federally-funded research results to be freely available to public without delay, ending longstanding ability of journals to paywall results for up to 1 year.
Some journal publishers had long fought this open access requirement, fearing it would harm their subscription business model...
- ScienceInsider.
شبکه داستانی عصبی
This is really cool.
Twitter thread:
TwitterThreadUnrollBot:
Today, along with my collaborators at @GoogleAI, we announce DreamBooth! It allows a user to generate a subject of choice (pet, object, etc.) in myriad contexts and with text-guided semantic variations! The options are endless. (Thread 👇)
webpage: https://t.co/EDpIyalqiK
1/N https://t.co/FhHFAMtLwS
Text-to-image diffusion models are extremely powerful and allow for flexible generation of images with complex user captions. One limitation is that controlling the subject’s appearance and identity using text is very hard.
2/N https://t.co/1y7C0hnUr4
By finetuning a model (Imagen here) with few images of a subject (~3-5), a user can generate variations of the subject. E.g. by controlling the environment and context of the subject. Ever wanted to have a high-quality picture of your dog in Paris (no travel required)?
3/N https://t.co/iSb04jcU5R
Our method has some surprising capabilities inherited from large diffusion models. For example it can generate novel art renditions of a subject! Here are some renditions of a specific dog in the style of famous painters.
4/N https://t.co/Oyg2j3SK1B
We can also change semantic attributes of a subject. Re-coloring, chimeras, material changes, etc.
5/N https://t.co/sRd6356mdW
What about accessorization? Given a few images of your pet you could accessorize them with extreme flexibility. Imagination is the limit!
6/N https://t.co/ovNNGvMb1b
We can even do realistic viewpoint changes for some subjects which have a strong class prior! Here are some examples of different viewpoints for a cat. Notice the detailed fur patterns in the forehead are conserved. 🤯
7/N https://t.co/XH2Jki79s3
Finally, our method can generate new images of a subject with different expressions/emotions. Note that the original images of the subject dog here did not exhibit any of these expressions.
8/N https://t.co/Fmv6IJOaJN
One main difficulty in finetuning a diffusion model using few images is overfitting. We tackle this problem by presenting an autogenous class-specific prior preservation loss. More details in the paper.
9/N https://t.co/4bqLW1qDwi
We are able to alleviate overfitting using this approach. We show that finetuning without this loss term leads to accelerated overfitting of subject pose, appearance, or context, which reduces generation variability and produces incorrect scenes.
10/N https://t.co/cCpR1L2r4m
We have many other details on the method in the paper. Feel free to check it out!
arxiv: https://t.co/GelJOBDa7H
11/N
Thank you for your time. And thank you to all of my collaborators @AbermanKfir, Yuanzhen Li, @jampani_varun, @MikiRubinstein, Yael Pritch. I had an amazing time working on this with you and am looking forward to future uses of this technology and more research!
12/N
We also thank the Imagen team for lending us access to their incredible model. And we deeply thank all of the great people who helped with reviews and feedback (all acknowledged in the paper).
Again, our project website is: https://t.co/EDpIyaDzwS
13/13 (END)
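For anyone wondering what the class-specific prior-preservation loss from tweet 9/N might look like in practice, here is a minimal PyTorch-style sketch of one combined finetuning step. This is not the authors' code: `eps_model` and `noise_schedule` are hypothetical stand-ins for a noise-prediction network and its noising schedule, and the prior batch is assumed to hold images pre-generated by the frozen pretrained model from the generic class prompt.

```python
import torch
import torch.nn.functional as F

def dreambooth_step(eps_model, noise_schedule, subject_batch, prior_batch, lambda_prior=1.0):
    """One finetuning step: subject reconstruction loss plus class-specific
    prior-preservation loss. `eps_model(x_t, t, text_emb)` predicts the added noise;
    `noise_schedule.alpha_bar(t)` returns the cumulative signal level (both hypothetical)."""

    def diffusion_loss(x0, text_emb):
        b = x0.shape[0]
        t = torch.randint(0, noise_schedule.num_steps, (b,), device=x0.device)
        eps = torch.randn_like(x0)
        a_bar = noise_schedule.alpha_bar(t).view(b, 1, 1, 1)
        x_t = a_bar.sqrt() * x0 + (1.0 - a_bar).sqrt() * eps  # forward noising
        return F.mse_loss(eps_model(x_t, t, text_emb), eps)   # predict the noise

    # Subject term: the 3-5 user-provided images paired with the subject-specific prompt.
    loss_subject = diffusion_loss(subject_batch["images"], subject_batch["text_emb"])
    # Prior term: samples drawn from the frozen pretrained model for the generic class
    # prompt, which keeps the class prior from collapsing onto the single subject.
    loss_prior = diffusion_loss(prior_batch["images"], prior_batch["text_emb"])
    return loss_subject + lambda_prior * loss_prior
```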
Understanding Diffusion Models: A Unified Perspective
Diffusion models have shown incredible capabilities as generative models; indeed, they power the current state-of-the-art models on text-conditioned image generation such as Imagen and DALL-E 2. In this work we review, demystify, and unify the understanding of diffusion models across both variational and score-based perspectives. We first derive Variational Diffusion Models (VDM) as a special case of a Markovian Hierarchical Variational Autoencoder, where three key assumptions enable tractable computation and scalable optimization of the ELBO. We then prove that optimizing a VDM boils down to learning a neural network to predict one of three potential objectives: the original source input from any arbitrary noisification of it, the original source noise from any arbitrarily noisified input, or the score function of a noisified input at any arbitrary noise level. We then dive deeper into what it means to learn the score function, and connect the variational perspective of a diffusion model explicitly with the Score-based Generative Modeling perspective through Tweedie's Formula. Lastly, we cover how to learn a conditional distribution using diffusion models via guidance.
https://arxiv.org/abs/2208.11970
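As a quick reference for the objectives the abstract lists, here they are in the notation commonly used for variational diffusion models (a sketch in standard convention; the paper's own notation and loss weightings may differ slightly):

```latex
% Forward noising of a clean sample x_0 at timestep t:
\[
x_t = \sqrt{\bar\alpha_t}\, x_0 + \sqrt{1-\bar\alpha_t}\,\epsilon,
\qquad \epsilon \sim \mathcal{N}(0, I).
\]
% The three equivalent prediction targets: the source x_0, the noise epsilon,
% or the score of the noised input:
\[
\big\lVert x_0 - \hat{x}_\theta(x_t, t)\big\rVert^2, \qquad
\big\lVert \epsilon - \hat{\epsilon}_\theta(x_t, t)\big\rVert^2, \qquad
\big\lVert \nabla_{x_t}\log p(x_t) - s_\theta(x_t, t)\big\rVert^2.
\]
% Tweedie's formula recovers the denoised mean from the score, which is what
% links the variational and score-based views:
\[
\mathbb{E}[x_0 \mid x_t]
= \frac{x_t + (1-\bar\alpha_t)\,\nabla_{x_t}\log p(x_t)}{\sqrt{\bar\alpha_t}}.
\]
```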
ya tayr snounou
musicweek.ir
The song "Ya Tayr al-Snounou" ("O Swallow Bird")
Performed by the Lebanese singer Lama Sharif (#لما_شریف)
ياطير السنونو سلملي عيونو
قلو اني بحبو بحبو وبعمري ما بخونو
قلو وحياة الله عنو ما بتخلى
قلو اني انشالله انشالله ما بخيبلو ظنونو
O migrating swallow, carry my greetings to his eyes
And tell him I love him and will never be unfaithful to him
Tell him, God willing, I will not disappoint him
Tell him, I swear to God, I will not let go of him
and I will not let him down
شبکه داستانی عصبی
musicweek.ir – ya tayr snounou
I just came across this and I don't know why it's such a mood.
Hello everyone, here's a new notebook.
It's called Chrysalis, and it uses stable diffusion to generate a video that interpolates smoothly between any two prompts. It is pretty much idiot-proof. Just put in your prompts and run it, the defaults work fine.
https://twitter.com/ai_curio/status/1563250436450418689?t=othXx70pKRH9QkbAtAAwhw&s=19
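To give a feel for what "interpolating smoothly between two prompts" means in code, here is a rough sketch using the Hugging Face diffusers library. This is not the Chrysalis notebook's own implementation; the model id, prompts, frame count, and latent size are illustrative assumptions.

```python
# Sketch: spherically interpolate between two prompt embeddings while keeping
# the initial latent noise fixed, so consecutive frames morph smoothly.
import torch
from diffusers import StableDiffusionPipeline

def slerp(t, v0, v1, eps=1e-7):
    """Spherical interpolation between two embedding tensors."""
    v0n, v1n = v0 / v0.norm(), v1 / v1.norm()
    omega = torch.acos((v0n * v1n).sum().clamp(-1 + eps, 1 - eps))
    return (torch.sin((1 - t) * omega) * v0 + torch.sin(t * omega) * v1) / torch.sin(omega)

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

def encode(prompt):
    ids = pipe.tokenizer(prompt, padding="max_length",
                         max_length=pipe.tokenizer.model_max_length,
                         return_tensors="pt").input_ids.to("cuda")
    return pipe.text_encoder(ids)[0]

emb_a, emb_b = encode("a caterpillar"), encode("a butterfly")
# One fixed initial latent so only the prompt changes from frame to frame.
latent = torch.randn((1, pipe.unet.config.in_channels, 64, 64),
                     generator=torch.Generator("cuda").manual_seed(0),
                     device="cuda", dtype=torch.float16)

frames = []
for i in range(32):                          # 32 interpolation steps -> video frames
    emb = slerp(i / 31, emb_a, emb_b)
    frames.append(pipe(prompt_embeds=emb, latents=latent.clone()).images[0])
# Stitch `frames` into a video with e.g. imageio or ffmpeg.
```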
Forwarded from Programming Resources via @like
A Persian book by someone who has worked at Google and a few startups in the US, covering tips and tricks on both tech and non-tech topics.
"Locomotive" is the pen name of a guy born in the Iranian 1360s (the 1980s) who has spent a third of his life living and working in the US (Google, startups, ...) and is collecting his experiences on personal life, social life, work, immigration, and programming.
So far, within the first three months, these notes have grown to 322 pages (everything up to page 250 is independent of programming).
#tips #life #hacks #book #free #persian #farsi #tricks #psychology
@pythony
https://locomo.tips
Forwarded from Programming Resources via @like
This is the online home for Debugging Teams, a book about the human side of software engineering; a neighbor of the Team Geek book.
An online book called Debugging Teams that covers the human side and soft skills of the software engineering profession. There is also another book called Team Geek, which is in many ways a neighbor of this one.
#team #teams #soft #skill #book #human #software #engineering #softskill #manage #leader #books #manager #coach
@pythony
https://www.debuggingteams.com
شبکه داستانی عصبی
"Our most unconditional task is to learn, each day, how to die. But the way to a deeper understanding of death is not to turn away from life. A deeper understanding of death is the ripened fruit of living in the here and now; a fruit which, if we attain it and bring it to our lips, spreads its indescribable taste through our being." لنگرگاهی…
After life: an anchorage in shifting sands («لنگرگاهی در شن روان»).