1️⃣ But what *is* a Neural Network? | Deep learning, Part 1
🔗 https://www.aparat.com/v/XkQFy
2️⃣ Gradient descent, how neural networks learn | Deep learning, part 2
🔗 https://www.aparat.com/v/uvUxW
3️⃣ What is backpropagation and what is it actually doing? | Deep learning
🔗 https://www.aparat.com/v/EZ9RV
3️⃣*️⃣ Backpropagation calculus | Appendix to deep learning
🔗 https://www.aparat.com/v/0tSKg
Aparat - video sharing service
But what *is* a Neural Network? | Deep learning, Part 1
Subscribe to stay notified about part 2 on backpropagation: http://3b1b.co/subscribe
Support more videos like this on Patreon: https://www.patreon.com/3blue1brown
For any early-stage ML entrepreneurs, Amplify Partners would love to hear from you: 3bl…
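As a companion to the series above, here is a minimal sketch (my own illustration, not code from the videos) of the three ideas the titles name: a one-hidden-layer network, a cost function, and hand-derived backpropagation driving plain gradient descent. The XOR data, layer sizes, and learning rate are assumptions chosen only to keep the example small.

```python
# Illustrative sketch only (not 3Blue1Brown's code): a one-hidden-layer
# network trained on XOR with plain gradient descent and hand-derived
# backpropagation, mirroring the ideas in parts 1-3 of the series.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)   # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)               # XOR targets

W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)                # input -> hidden
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)                # hidden -> output
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

lr = 1.0                                                      # learning rate (assumed)
for step in range(5000):
    # forward pass
    h = sigmoid(X @ W1 + b1)                                  # hidden activations
    out = sigmoid(h @ W2 + b2)                                 # network output
    loss = np.mean((out - y) ** 2)                             # mean squared error cost

    # backward pass: chain rule applied layer by layer
    d_out = 2 * (out - y) / len(X) * out * (1 - out)
    d_W2 = h.T @ d_out
    d_b2 = d_out.sum(axis=0)
    d_h = d_out @ W2.T * h * (1 - h)
    d_W1 = X.T @ d_h
    d_b1 = d_h.sum(axis=0)

    # gradient descent: step downhill on the cost surface
    W1 -= lr * d_W1; b1 -= lr * d_b1
    W2 -= lr * d_W2; b2 -= lr * d_b2

print(np.round(out, 2))   # should approach [[0], [1], [1], [0]]
```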
#Weekly_Seminars of the Complex Systems and Network Science Group, Shahid Beheshti University
🔹Monday, Aban 15, 4:00 PM - Class 1, Faculty of Physics, Shahid Beheshti University.
@carimi
Forwarded from the Beheshti Physics Scientific Association (SBU)
This week's #public_seminar
Quantum: the Brain and Artificial Intelligence
- Tuesday, Aban 16, 16:00
- Ibn al-Haytham Hall, Faculty of Physics
Channel of the Physics Student Scientific Association
@sbu_physics
🔖 Variational Inference: A Review for Statisticians
David M. Blei, Alp Kucukelbir, Jon D. McAuliffe
🔗 https://arxiv.org/pdf/1601.00670
📌 ABSTRACT
One of the core problems of modern statistics is to approximate difficult-to-compute probability densities. This problem is especially important in Bayesian statistics, which frames all inference about unknown quantities as a calculation involving the posterior density. In this paper, we review variational inference (VI), a method from machine learning that approximates probability densities through optimization. VI has been used in many applications and tends to be faster than classical methods, such as Markov chain Monte Carlo sampling. The idea behind VI is to first posit a family of densities and then to find the member of that family which is close to the target. Closeness is measured by Kullback-Leibler divergence. We review the ideas behind mean-field variational inference, discuss the special case of VI applied to exponential family models, present a full example with a Bayesian mixture of Gaussians, and derive a variant that uses stochastic optimization to scale up to massive data. We discuss modern research in VI and highlight important open problems. VI is powerful, but it is not yet well understood. Our hope in writing this paper is to catalyze statistical research on this class of algorithms.
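For reference, the identity behind the abstract's "closeness measured by Kullback-Leibler divergence": since log p(x) is a constant with respect to q, minimizing the KL divergence from q(z) to the posterior p(z|x) is equivalent to maximizing the evidence lower bound (ELBO), which involves only the joint p(x, z). Notation (latent variables z, observations x) follows the paper's setup.

```latex
\mathrm{KL}\bigl(q(z)\,\|\,p(z \mid x)\bigr)
  = \mathbb{E}_{q}[\log q(z)] - \mathbb{E}_{q}[\log p(z \mid x)]
  = \log p(x) - \underbrace{\bigl(\mathbb{E}_{q}[\log p(x, z)] - \mathbb{E}_{q}[\log q(z)]\bigr)}_{\mathrm{ELBO}(q)}
```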
Really helpful, thorough, and insightful network tutorial .ipynb
https://github.com/vtraag/4TU-CSS/blob/master/presentations/traag/notebook/Network.ipynb
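To give a flavor of the kind of analysis a notebook like this covers, here is a small stand-in sketch in Python using networkx and its built-in karate-club graph; the actual notebook may rely on different libraries and data.

```python
# Illustrative network-analysis sketch (a stand-in for the linked notebook,
# which may use different libraries and data).
import networkx as nx
from networkx.algorithms import community

G = nx.karate_club_graph()                      # classic 34-node test network

# Basic descriptive statistics
print("nodes:", G.number_of_nodes(), "edges:", G.number_of_edges())
print("density:", round(nx.density(G), 3))

# Centrality: which nodes are structurally important?
deg = nx.degree_centrality(G)
top = sorted(deg, key=deg.get, reverse=True)[:5]
print("top-degree nodes:", top)

# Community detection via modularity maximization
parts = community.greedy_modularity_communities(G)
print("communities found:", len(parts))
print("modularity:", round(community.modularity(G, parts), 3))
```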
👌🏾 An Introduction to Complex Systems: Society, Ecology, and Nonlinear Dynamics
http://physicstoday.scitation.org/doi/full/10.1063/PT.3.3766
Physics Today 70, 11, 51 (2017); https://doi.org/10.1063/PT.3.3766
⚙ PDF 👇🏼👇🏻👇🏽
Using Big Data, Social Networks, and Agent-Based Modeling to Understand Information Diffusion
https://vimeo.com/241857550
Vimeo
Using Big Data, Social Networks, and Agent-Based Modeling to Understand Information Diffusion
Center for Collective Dynamics of Complex Systems (CoCo) Seminar Series November 7, 2017 Bill Rand (Marketing, North Carolina State University) "Using Big…
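As a toy illustration of the agent-based diffusion theme of the talk (a hedged sketch, not the speaker's actual model), the snippet below runs an independent-cascade process on a synthetic network: each newly informed node gets one chance to pass the message to each neighbor with a fixed probability.

```python
# Toy independent-cascade diffusion on a random network; a hedged
# illustration of agent-based information spread, not the speaker's model.
import random
import networkx as nx

def independent_cascade(G, seeds, p, rng):
    """Each newly activated node tries once to activate each neighbor with prob p."""
    active = set(seeds)
    frontier = list(seeds)
    while frontier:
        new = []
        for u in frontier:
            for v in G.neighbors(u):
                if v not in active and rng.random() < p:
                    active.add(v)
                    new.append(v)
        frontier = new
    return active

G = nx.barabasi_albert_graph(1000, 3, seed=1)   # scale-free-ish social network
seeds = [0, 1, 2]                               # initially informed agents
reached = independent_cascade(G, seeds, p=0.05, rng=random.Random(42))
print(f"{len(reached)} of {G.number_of_nodes()} nodes informed")
```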
⚙ This collection of over 500 MATLAB examples can help you with #machinelearning, #statistics, and #math problems
https://www.mathworks.com/examples/product-group/matlab-math-statistics-and-optimization?s_eid=PSM_da&hootPostID=70cb4d7118cfa4662cc041050f5e8ff1
Mathworks
Math, Statistics, and Optimization Examples
Explore thousands of code examples for MATLAB, Simulink, and other MathWorks products.