💡 https://www.quantamagazine.org/the-atomic-theory-of-origami-20171031/
#Geometry #Mathematics #Origami #Phase_Transitions #Statistical_Physics #Topology
Quanta Magazine
The Atomic Theory of Origami
By reimagining the kinks and folds of origami as atoms in a lattice, researchers are uncovering strange behavior hiding in simple structures.
🎞 Second-Order Phase Transitions: Beyond Landau-Ginzburg Theory
Zohar Komargodski (Weizmann)
http://media.physics.harvard.edu/video/html5/?id=COLLOQ_KOMARGODSKI_111416
🎞 Learning and Inference When There Is Little Data
Yasser Roudi (NTNU)
http://media.physics.harvard.edu/video/html5/?id=COLLOQ_ROUDI_050216
Forwarded from the Physics Scientific Association of Shahid Beheshti University (SBU)
The first #data_science #workshop:
"Big Data"
🗓 25 Aban, and 2 and 9 Azar 1396
🕰 9:00 to 12:30
📍 Faculty of Physics, Shahid Beheshti University
Registration and further information:
http://sbuphysics.ir
http://rusherg.com
@sbu_physics
"Big Data"
🗓 25آبان، 2 و 9 آذر 96
🕰 ساعت 9 الی 12:30
📍دانشكده فيزيك دانشگاه شهيد بهشتی
ثبت نام و اطلاعات تکمیلی:
http://sbuphysics.ir
http://rusherg.com
@sbu_physics
1️⃣ But what *is* a Neural Network? | Deep learning, Part 1
🔗 https://www.aparat.com/v/XkQFy
2️⃣ Gradient descent, how neural networks learn | Deep learning, part 2
🔗 https://www.aparat.com/v/uvUxW
3️⃣ What is backpropagation and what is it actually doing? | Deep learning
🔗 https://www.aparat.com/v/EZ9RV
3️⃣*️⃣ Backpropagation calculus | Appendix to deep learning
🔗 https://www.aparat.com/v/0tSKg
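The videos above cover the basic picture: a network is a stack of weighted sums and nonlinearities, gradient descent nudges the weights downhill on a loss, and backpropagation is the chain rule that supplies the gradients. As a rough companion (not the videos' own code), here is a minimal numpy sketch of full-batch gradient descent with hand-written backpropagation for a tiny one-hidden-layer network; the architecture, learning rate, and toy dataset are illustrative choices.

```python
# Minimal sketch: gradient descent + backpropagation for a 1-16-1 tanh
# network fit to y = sin(x).  All choices here are illustrative.
import numpy as np

rng = np.random.default_rng(0)
X = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)   # toy inputs
y = np.sin(X)                                        # toy targets

W1, b1 = rng.normal(0, 0.5, (1, 16)), np.zeros(16)   # hidden layer
W2, b2 = rng.normal(0, 0.5, (16, 1)), np.zeros(1)    # output layer
lr = 0.05                                            # learning rate

for _ in range(2000):
    # Forward pass
    h = np.tanh(X @ W1 + b1)            # hidden activations
    y_hat = h @ W2 + b2                 # network output
    loss = np.mean((y_hat - y) ** 2)    # mean squared error

    # Backward pass: the chain rule, i.e. backpropagation
    d_yhat = 2 * (y_hat - y) / y.size      # dLoss/dy_hat
    dW2 = h.T @ d_yhat
    db2 = d_yhat.sum(axis=0)
    d_h = (d_yhat @ W2.T) * (1 - h ** 2)   # tanh'(z) = 1 - tanh(z)^2
    dW1 = X.T @ d_h
    db1 = d_h.sum(axis=0)

    # Gradient-descent step: move each parameter against its gradient
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print("final MSE:", loss)
```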
Aparat - video sharing service
But what *is* a Neural Network? | Deep learning, Part 1
Subscribe to stay notified about part 2 on backpropagation: http://3b1b.co/subscribe
Support more videos like this on Patreon: https://www.patreon.com/3blue1brown
For any early-stage ML entrepreneurs, Amplify Partners would love to hear from you: 3bl…
#Weekly_seminars of the Complex Systems and Network Science Group, Shahid Beheshti University
🔹 Monday, 15 Aban, at 4:00, Classroom 1, Faculty of Physics, Shahid Beheshti University.
@carimi
Forwarded from the Physics Scientific Association of Shahid Beheshti University (SBU)
This week's #public_seminar
Quantum: the Brain and Artificial Intelligence
- Tuesday, 16 Aban, 16:00
- Ibn al-Haytham Hall, Faculty of Physics
Channel of the Physics Students' Scientific Association
@sbu_physics
🔖 Variational Inference: A Review for Statisticians
David M. Blei, Alp Kucukelbir, Jon D. McAuliffe
🔗 https://arxiv.org/pdf/1601.00670
📌 ABSTRACT
One of the core problems of modern statistics is to approximate difficult-to-compute probability densities. This problem is especially important in Bayesian statistics, which frames all inference about unknown quantities as a calculation involving the posterior density. In this paper, we review variational inference (VI), a method from machine learning that approximates probability densities through optimization. VI has been used in many applications and tends to be faster than classical methods, such as Markov chain Monte Carlo sampling. The idea behind VI is to first posit a family of densities and then to find the member of that family which is close to the target. Closeness is measured by Kullback-Leibler divergence. We review the ideas behind mean-field variational inference, discuss the special case of VI applied to exponential family models, present a full example with a Bayesian mixture of Gaussians, and derive a variant that uses stochastic optimization to scale up to massive data. We discuss modern research in VI and highlight important open problems. VI is powerful, but it is not yet well understood. Our hope in writing this paper is to catalyze statistical research on this class of algorithms.
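The abstract sketches the recipe: posit a variational family, then optimize it toward the posterior in KL divergence. As a companion, here is a minimal numpy sketch of coordinate-ascent mean-field VI (CAVI) for the review's running example, a univariate Bayesian mixture of Gaussians with unit observation variance and prior mu_k ~ N(0, sigma^2). The hyperparameter values, variable names, and toy data below are illustrative, not taken from the paper.

```python
# Minimal CAVI sketch for a univariate Bayesian mixture of Gaussians,
# following the coordinate-ascent updates derived in Blei et al. (2017).
import numpy as np

def cavi_gmm(x, K, mu_prior_var=5.0, n_iters=100, seed=0):
    """Mean-field VI for K Gaussian components with unit observation
    variance and prior mu_k ~ N(0, mu_prior_var)."""
    rng = np.random.default_rng(seed)
    n = x.shape[0]
    # Variational factors: q(mu_k) = N(m[k], s2[k]), q(c_i) = Cat(phi[i])
    m = rng.normal(size=K)
    s2 = np.ones(K)
    phi = np.full((n, K), 1.0 / K)

    for _ in range(n_iters):
        # Update assignment factors: phi_ik ∝ exp(E[mu_k] x_i - E[mu_k^2]/2)
        logits = np.outer(x, m) - 0.5 * (s2 + m**2)      # shape (n, K)
        logits -= logits.max(axis=1, keepdims=True)      # numerical stability
        phi = np.exp(logits)
        phi /= phi.sum(axis=1, keepdims=True)
        # Update component-mean factors q(mu_k)
        denom = 1.0 / mu_prior_var + phi.sum(axis=0)
        m = (phi * x[:, None]).sum(axis=0) / denom
        s2 = 1.0 / denom
    return m, s2, phi

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    x = np.concatenate([rng.normal(-3, 1, 200), rng.normal(3, 1, 200)])
    m, s2, phi = cavi_gmm(x, K=2)
    print("variational posterior means:", m)   # roughly [-3, 3], up to label order
```

Each sweep alternates the two closed-form coordinate updates, which monotonically improves the ELBO; for the stochastic, large-data variant the paper also reviews, the same updates would be computed on minibatches with a decaying step size.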