🔖Phase Coexistence in Insect Swarms
Michael Sinhuber and Nicholas T. Ouellette
🔗 https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.119.178003
📌 ABSTRACT
Animal aggregations are visually striking, and as such are popular examples of collective behavior in the natural world. Quantitatively demonstrating the collective nature of such groups, however, remains surprisingly difficult. Inspired by thermodynamics, we applied topological data analysis to laboratory insect swarms and found evidence for emergent, material-like states. We show that the swarms consist of a core “condensed” phase surrounded by a dilute “vapor” phase. These two phases coexist in equilibrium, and maintain their distinct macroscopic properties even though individual insects pass freely between them. We further define a pressure and chemical potential to describe these phases, extending theories of active matter to aggregations of macroscopic animals and laying the groundwork for a thermodynamic description of collective animal groups.
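The condensed-core/vapor picture in this abstract lends itself to a toy illustration. The sketch below is not the authors' analysis: it builds a hypothetical 3-D "swarm" (a dense Gaussian core plus a dilute halo), estimates each individual's local number density from the distance to its k-th nearest neighbour, and splits the group at the median density.
```python
import numpy as np

def local_density(points, k=6):
    """k-nearest-neighbour estimate of local number density:
    rho_i ~ k / ((4/3) * pi * r_k**3), with r_k the distance from
    point i to its k-th nearest neighbour."""
    diffs = points[:, None, :] - points[None, :, :]
    dists = np.sqrt((diffs ** 2).sum(axis=-1))
    dists.sort(axis=1)                      # column 0 is the zero self-distance
    r_k = dists[:, k]
    return k / ((4.0 / 3.0) * np.pi * r_k ** 3)

rng = np.random.default_rng(0)
core = rng.normal(0.0, 1.0, size=(300, 3))  # hypothetical dense core
halo = rng.normal(0.0, 4.0, size=(100, 3))  # hypothetical dilute surroundings
swarm = np.vstack([core, halo])

rho = local_density(swarm)
threshold = np.median(rho)                  # crude split, not the paper's criterion
condensed, vapor = swarm[rho >= threshold], swarm[rho < threshold]
print(f"condensed phase: {len(condensed)} individuals, vapor phase: {len(vapor)}")
```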
🎼 Emergence
🔗 http://radio.seti.org/episodes/Emergence
Your brain is made up of cells. Each one does its own, cell thing. But remarkable behavior emerges when lots of them join up in the grey matter club. You are a conscious being – a single neuron isn’t.
Find out about the counter-intuitive process known as emergence – when simple stuff develops complex forms and complex behavior – and all without a blueprint.
🔗 http://traffic.libsyn.com/arewealone/BiPiSci13-10-14.mp3
Guests:
👨🏻💼 Randy Schekman - Professor of molecular and cell biology, University of California, Berkeley, 2013 Nobel Prize-winner
👨🏻💼 Steve Potter - Neurobiologist, biomedical engineer, Georgia Institute of Technology
👨🏻💼 Terence Deacon - Biological anthropologist, University of California, Berkeley
👨🏻💼 Simon DeDeo - Research fellow at the Santa Fe Institute
👨🏻💼 Leslie Valiant - Computer scientist, Harvard University, author of Probably Approximately Correct: Nature's Algorithms for Learning and Prospering in a Complex World
💡 https://www.quantamagazine.org/the-atomic-theory-of-origami-20171031/
#Geometry #Mathematics #Origami #Phase_Transitions #Statistical_Physics #Topology
Quanta Magazine
The Atomic Theory of Origami
By reimagining the kinks and folds of origami as atoms in a lattice, researchers are uncovering strange behavior hiding in simple structures.
🎞 Second-Order Phase Transitions: Beyond Landau-Ginzburg Theory
Zohar Komargodski (Weizmann)
http://media.physics.harvard.edu/video/html5/?id=COLLOQ_KOMARGODSKI_111416
🎞 Learning and Inference When There Is Little Data
Yasser Roudi (NTNU)
http://media.physics.harvard.edu/video/html5/?id=COLLOQ_ROUDI_050216
Forwarded from the Physics Scientific Association of Shahid Beheshti University (SBU)
First #Data_Science #Workshop:
"Big Data"
🗓 Aban 25, Azar 2, and Azar 9, 1396
🕰 9:00 to 12:30
📍 Department of Physics, Shahid Beheshti University
Registration and further information:
http://sbuphysics.ir
http://rusherg.com
@sbu_physics
"Big Data"
🗓 25آبان، 2 و 9 آذر 96
🕰 ساعت 9 الی 12:30
📍دانشكده فيزيك دانشگاه شهيد بهشتی
ثبت نام و اطلاعات تکمیلی:
http://sbuphysics.ir
http://rusherg.com
@sbu_physics
1️⃣ But what *is* a Neural Network? | Deep learning, Part 1
🔗 https://www.aparat.com/v/XkQFy
2️⃣ Gradient descent, how neural networks learn | Deep learning, part 2
🔗 https://www.aparat.com/v/uvUxW
3️⃣ What is backpropagation and what is it actually doing? | Deep learning
🔗 https://www.aparat.com/v/EZ9RV
3️⃣*️⃣ Backpropagation calculus | Appendix to deep learning
🔗 https://www.aparat.com/v/0tSKg
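As a companion to these videos, here is a minimal NumPy sketch of the pipeline they cover: a tiny two-layer network, a hand-written backward pass (backpropagation), and plain gradient descent, trained on XOR. The layer sizes, learning rate, and iteration count are arbitrary choices for illustration, not values from the videos.
```python
import numpy as np

# Toy data: XOR, the standard example that a single linear layer cannot fit.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(1)
W1, b1 = rng.normal(0, 1, (2, 8)), np.zeros((1, 8))   # input -> hidden
W2, b2 = rng.normal(0, 1, (8, 1)), np.zeros((1, 1))   # hidden -> output
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
lr = 1.0

for step in range(10000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    loss = np.mean((out - y) ** 2)

    # Backward pass: the chain rule applied layer by layer (backpropagation).
    d_out = 2 * (out - y) / len(X) * out * (1 - out)
    dW2, db2 = h.T @ d_out, d_out.sum(axis=0, keepdims=True)
    d_h = d_out @ W2.T * h * (1 - h)
    dW1, db1 = X.T @ d_h, d_h.sum(axis=0, keepdims=True)

    # Gradient descent: step each parameter against its gradient.
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print("final loss:", round(float(loss), 4), "| predictions:", out.ravel().round(2))
```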
#Weekly_Seminars of the Complex Systems and Network Science Group, Shahid Beheshti University
🔹 Monday, Aban 15, 4:00 - Classroom 1, Department of Physics, Shahid Beheshti University.
@carimi
Forwarded from the Physics Scientific Association of Shahid Beheshti University (SBU)
This week's #Public_Seminar
Quantum; the Brain and Artificial Intelligence
- Tuesday, Aban 16; 16:00
- Ibn al-Haytham Hall, Department of Physics
Channel of the Physics Student Scientific Association
@sbu_physics
🔖 Variational Inference: A Review for Statisticians
David M. Blei, Alp Kucukelbir, Jon D. McAuliffe
🔗 https://arxiv.org/pdf/1601.00670
📌 ABSTRACT
One of the core problems of modern statistics is to approximate difficult-to-compute probability densities. This problem is especially important in Bayesian statistics, which frames all inference about unknown quantities as a calculation involving the posterior density. In this paper, we review variational inference (VI), a method from machine learning that approximates probability densities through optimization. VI has been used in many applications and tends to be faster than classical methods, such as Markov chain Monte Carlo sampling. The idea behind VI is to first posit a family of densities and then to find the member of that family which is close to the target. Closeness is measured by Kullback-Leibler divergence. We review the ideas behind mean-field variational inference, discuss the special case of VI applied to exponential family models, present a full example with a Bayesian mixture of Gaussians, and derive a variant that uses stochastic optimization to scale up to massive data. We discuss modern research in VI and highlight important open problems. VI is powerful, but it is not yet well understood. Our hope in writing this paper is to catalyze statistical research on this class of algorithms.
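The Bayesian mixture-of-Gaussians example mentioned in the abstract can be sketched in a few lines of NumPy. This is a toy reconstruction of the standard mean-field coordinate-ascent (CAVI) updates for a 1-D mixture with unit observation variance, not the paper's code; the synthetic data and the prior variance are arbitrary choices.
```python
import numpy as np

# Synthetic 1-D data: two well-separated Gaussian clusters (illustration only).
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-3.0, 1.0, 150), rng.normal(3.0, 1.0, 150)])
n, K, sigma2 = len(x), 2, 10.0          # sigma2: prior variance of the component means

# Mean-field factors: q(mu_k) = N(m[k], s2[k]), q(c_i) = Categorical(phi[i]).
m, s2 = rng.normal(0.0, 1.0, K), np.ones(K)
phi = np.full((n, K), 1.0 / K)

for _ in range(50):                      # coordinate ascent (CAVI)
    # Responsibilities: phi_ik proportional to exp{ E[mu_k] x_i - E[mu_k^2] / 2 }.
    logits = np.outer(x, m) - 0.5 * (s2 + m ** 2)
    logits -= logits.max(axis=1, keepdims=True)      # numerical stability
    phi = np.exp(logits)
    phi /= phi.sum(axis=1, keepdims=True)

    # Gaussian factors for the component means.
    nk = phi.sum(axis=0)
    m = (phi * x[:, None]).sum(axis=0) / (1.0 / sigma2 + nk)
    s2 = 1.0 / (1.0 / sigma2 + nk)

# With luck the two variational means land near -3 and 3 (CAVI can hit local optima).
print("variational means of the components:", np.round(m, 2))
```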