Channel name was changed to «Computational and Mathematical Psychology Lab»
The Institute for Cognitive and Brain Sciences of Shahid Beheshti University, in collaboration with Loop Academy, presents 📣📣📣
Three-day workshop on deep learning and neural networks:
🕒 Time: Saturday, Esfand 3 to Monday, Esfand 5
🏫 Venue: Auditorium of the Faculty of Electrical Engineering, Shahid Beheshti University
Three-day symposium on deep learning and neural networks:
🕒 Time: Tuesday, Esfand 6 to Thursday, Esfand 8
🏫 Venue: Auditorium of the Faculty of Electrical Engineering, Shahid Beheshti University
📃 Accredited certificates will be issued by the Institute for Cognitive and Brain Sciences of Shahid Beheshti University and Loop Academy.
⭕️ For more details and registration, click this link.
Follow us!
@CMPlab
@LoopAcademy
🌐 www.CMPLab.ir
🌐 www.LoopAcademy.ir
📣 Introducing the speakers of the three-day symposium on deep learning and neural networks
🏢 Organized by: Institute for Cognitive and Brain Sciences, Shahid Beheshti University and Loop Academy
🕒 Workshop dates: Saturday, Esfand 3 to Monday, Esfand 5
🕒 Symposium dates: Tuesday, Esfand 6 to Thursday, Esfand 8
🏫 Venue: Auditorium of the Faculty of Electrical Engineering, Shahid Beheshti University
⭕️ For more details and registration, click this link.
Follow us!
@CMPlab
@LoopAcademy
🌐 www.CMPLab.ir
🌐 www.LoopAcademy.ir
📣 Introducing the speakers of the three-day workshop on deep learning and neural networks
🏢 Organized by: Institute for Cognitive and Brain Sciences, Shahid Beheshti University and Loop Academy
🕒 Workshop dates: Saturday, Esfand 3 to Monday, Esfand 5
🕒 Symposium dates: Tuesday, Esfand 6 to Thursday, Esfand 8
🏫 Venue: Auditorium of the Faculty of Electrical Engineering, Shahid Beheshti University
⭕️ For more details and registration, click this link.
Follow us!
@CMPlab
@LoopAcademy
🌐 www.CMPLab.ir
🌐 www.LoopAcademy.ir
📣📣📣 Schedule of the three-day workshop on deep learning and neural networks
🏢 Organized by: Institute for Cognitive and Brain Sciences, Shahid Beheshti University and Loop Academy
🕒 Workshop dates: Saturday, Esfand 3 to Monday, Esfand 5
🕒 Symposium dates: Tuesday, Esfand 6 to Thursday, Esfand 8
🏫 Venue: Auditorium of the Faculty of Electrical Engineering, Shahid Beheshti University
⭕️ For more details and registration, click this link.
Follow us!
@CMPlab
@LoopAcademy
🌐 www.CMPLab.ir
🌐 www.LoopAcademy.ir
📣📣📣 Schedule of the three-day symposium on deep learning and neural networks
🏢 Organized by: Institute for Cognitive and Brain Sciences, Shahid Beheshti University and Loop Academy
🕒 Workshop dates: Saturday, Esfand 3 to Monday, Esfand 5
🕒 Symposium dates: Tuesday, Esfand 6 to Thursday, Esfand 8
🏫 Venue: Auditorium of the Faculty of Electrical Engineering, Shahid Beheshti University
⭕️ For more details and registration, click this link.
Follow us!
@CMPlab
@LoopAcademy
🌐 www.CMPLab.ir
🌐 www.LoopAcademy.ir
📣📣📣 Deep Learning and Neural Networks Symposium and Workshop
⚙️ Organized by: Institute for Cognitive and Brain Sciences, Shahid Beheshti University and Loop Academy
👨🏻🎓 Speaker introduction
Professor Wulfram Gerstner,
Director of the Laboratory of Computational Neuroscience (LCN) at the EPFL
Title: Eligibility traces and three-factor rules of synaptic plasticity
Abstract: Hebbian plasticity combines two factors: presynaptic activity must occur together with some postsynaptic variable (spikes, voltage deflection, calcium elevation, ...). In three-factor learning rules, the combination of the two Hebbian factors is not sufficient on its own, but leaves a trace at the synapse (an eligibility trace) that decays over a few seconds; only if a third factor (a neuromodulator signal) arrives, either simultaneously or within a short delay, is the actual change of the synapse via long-term plasticity triggered. After a review of classic theories and recent evidence for plasticity traces from experiments in rodents, I will discuss two studies from my own lab: the first is a modeling study of reward-based learning with spiking neurons using an actor-critic architecture; the second is a joint theory-experiment study showing evidence for eligibility traces in human behavior and pupillometry. Extensions from reward-based learning to surprise-based learning will be indicated.
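The three-factor mechanism described in the abstract can be sketched in a few lines. The time constants, learning rate, and signal shapes below are illustrative assumptions, not values from the talk:

```python
import numpy as np

def three_factor_update(pre, post, neuromod, w, tau_e=2.0, dt=0.1, lr=0.5):
    """Sketch of a three-factor rule: the Hebbian coincidence pre*post
    feeds an eligibility trace e that decays with time constant tau_e;
    the weight w only changes when the third factor (a neuromodulator
    signal) arrives while the trace is still nonzero."""
    e = 0.0
    for t in range(len(pre)):
        e += dt * (-e / tau_e + pre[t] * post[t])  # trace: leak plus Hebbian term
        w += lr * neuromod[t] * e                  # third factor gates the change
    return w
```

With a pre/post coincidence at one time step and a delayed neuromodulator pulse arriving while the trace persists, the weight increases; without the third factor, the Hebbian coincidence alone leaves the weight unchanged.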
⭕️ For more details please see here
Follow us!
@CMPlab
@LoopAcademy
🌐 www.CMPLab.ir
🌐 www.LoopAcademy.ir
📣📣📣 Deep Learning and Neural Networks Symposium and Workshop
⚙️ Organized by: Institute for Cognitive and Brain Sciences, Shahid Beheshti University and Loop Academy
👨🏻🎓 Speaker introduction
Professor James L. McClelland,
Co-Director, Center for Mind, Brain, Computation and Technology, Stanford University
Title: Human and Machine Learning: How each has taught us about the other, and what is left to learn
Abstract: In this talk, I will describe work at the interface between human and machine learning. The talk will draw on the effects of brain damage on human learning and memory, the patterns of learning that humans exhibit, and computational models based on artificial neural networks that reveal properties shared by human and artificial neural networks. In the latter part of the talk, we will discuss challenges posed to artificial learning systems by aspects of human learning we still do not fully understand in terms of the underlying neural computations.
⭕️ For more details please see here
Follow us!
@CMPlab
@LoopAcademy
🌐 www.CMPLab.ir
🌐 www.LoopAcademy.ir
📣📣📣 Deep Learning and Neural Networks Symposium and Workshop
⚙️ Organized by: Institute for Cognitive and Brain Sciences, Shahid Beheshti University and Loop Academy
👨🏻🎓 Speaker introduction
Professor Hugo Larochelle,
Google Brain
Title: Learning to generalize from few examples with meta-learning
Abstract: Much of the recent progress on many AI tasks was enabled in part by the availability of large quantities of labeled data for deep learning. Yet humans are able to learn concepts from as little as a handful of examples. Meta-learning has been a very promising framework for addressing the problem of generalizing from small amounts of data, known as few-shot learning. In this talk, I’ll present an overview of the recent research that has made exciting progress on this topic. I will also share my thoughts on the challenges and research opportunities that remain in few-shot learning, including a proposal for a new benchmark.
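To illustrate the metric-based flavor of few-shot learning (a prototypical-networks-style classifier, chosen here as an example rather than taken from the talk itself), a query can be classified against class prototypes computed from a handful of support examples:

```python
import numpy as np

def nearest_prototype(support, support_labels, query):
    """Classify a query embedding in an N-way, K-shot episode:
    each class prototype is the mean of its K support embeddings,
    and the query gets the label of the nearest prototype.
    In full meta-learning, the embedding network that produces
    these vectors is itself trained across many such episodes."""
    classes = np.unique(support_labels)
    protos = np.stack([support[support_labels == c].mean(axis=0) for c in classes])
    dists = np.linalg.norm(protos - query, axis=1)
    return int(classes[np.argmin(dists)])
```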
⭕️ For more details please see here
Follow us!
@CMPlab
@LoopAcademy
🌐 www.CMPLab.ir
🌐 www.LoopAcademy.ir
📣📣📣 Dormitory notice for participants in the deep learning and neural networks workshop and symposium
For the convenience of students coming from other cities, and in coordination with Shahid Beheshti University, a limited number of dormitory places have been arranged for students registered for the deep learning and neural networks workshop and symposium. Students who wish to stay in the dormitory should, before registering, call 09195849138 (Amir Hosein Hadian) or message the Telegram ID @AmirHoseinHadian, and complete their registration on the site only after the dormitory arrangement is confirmed.
❗️ Dormitory places are allocated on a first-come, first-served basis: priority goes to students who apply and register earliest.
🏢 Organized by: Institute for Cognitive and Brain Sciences, Shahid Beheshti University and Loop Academy
🕒 Workshop dates: Saturday, Esfand 3 to Monday, Esfand 5
🕒 Symposium dates: Tuesday, Esfand 6 to Thursday, Esfand 8
🏫 Venue: Auditorium of the Faculty of Electrical Engineering, Shahid Beheshti University
⭕️ For more details and registration, click this link.
Follow us!
@CMPlab
@LoopAcademy
🌐 www.CMPLab.ir
🌐 www.LoopAcademy.ir
📣📣📣 Deep Learning and Neural Networks Symposium and Workshop
⚙️ Organized by: Institute for Cognitive and Brain Sciences, Shahid Beheshti University and Loop Academy
👨🏻🎓 Speaker introduction
Dr. Timothée Masquelier,
CNRS Researcher (CR1) in Computational Neuroscience
Title: Supervised learning in spiking neural networks
Abstract: I will present two recent works on supervised learning in spiking neural networks.
In the first one, we used backpropagation through time. The most commonly used spiking neuron model, the leaky integrate-and-fire neuron, obeys a differential equation that can be approximated using discrete time steps, leading to a recurrent relation for the membrane potential. The firing threshold causes optimization issues, but these can be overcome using a surrogate gradient. We extended previous approaches in two ways. Firstly, we showed that the approach can be used to train convolutional layers. Secondly, we included fast horizontal connections à la Denève: when a neuron N fires, we subtract from the potentials of all the neurons with the same receptive field the dot product between their weight vectors and that of neuron N. Such connections improved performance.
The second project focuses on SNNs which use at most one spike per neuron per stimulus, and latency coding. We derived a new learning rule for this sort of network, termed S4NN, akin to traditional error backpropagation, yet based on latencies. We show how approximate error gradients can be computed backward in a feedforward network with any number of layers.
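The discrete-time LIF recurrence from the first project can be sketched as a forward pass. The parameter values are illustrative; in training, the hard threshold's gradient would be replaced by a surrogate:

```python
import numpy as np

def lif_forward(inputs, w, beta=0.9, threshold=1.0):
    """Forward pass of a discrete-time leaky integrate-and-fire layer.
    The membrane potential follows v[t] = beta * v[t-1] + w @ x[t];
    a spike is emitted when v crosses the threshold, followed by a
    subtractive reset.  The Heaviside step here is where a surrogate
    gradient would be substituted during backpropagation through time."""
    T = inputs.shape[0]
    v = np.zeros(w.shape[0])
    spikes = np.zeros((T, w.shape[0]))
    for t in range(T):
        v = beta * v + w @ inputs[t]
        fired = v >= threshold
        spikes[t] = fired.astype(float)
        v = np.where(fired, v - threshold, v)  # reset by subtraction
    return spikes
```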
⭕️ For more details please see here
Follow us!
@CMPlab
@LoopAcademy
🌐 www.CMPLab.ir
🌐 www.LoopAcademy.ir
📣📣📣 Deep Learning and Neural Networks Symposium and Workshop
⚙️ Organized by: Institute for Cognitive and Brain Sciences, Shahid Beheshti University and Loop Academy
👨🏻🎓 Speaker introduction
Dr. Ali Yoonessi,
Tehran University of Medical Sciences
Title: What can visual system neural networks tell us about better agent-based deep learning models? A prospect.
Abstract: Ample evidence suggests that the visual system is optimized to process the environment we live in. Interactions among several types of neurons during development create a sophisticated neural network. What properties of these biological neural cells, or agents, can we use to create new models of agent-based neural networks?
⭕️ For more details please see here
Follow us!
@CMPlab
@LoopAcademy
🌐 www.CMPLab.ir
🌐 www.LoopAcademy.ir
📣📣📣 Deep Learning and Neural Networks Symposium and Workshop
⚙️ Organized by: Institute for Cognitive and Brain Sciences, Shahid Beheshti University and Loop Academy
👨🏻🎓 Speaker introduction
Dr. Mir-Shahram Safari,
Shahid Beheshti University of Medical Sciences (SBMU)
Title: Neurobiology and Neurophysiology of Neural Networks
Abstract: Neural networks in the brain are built from different cell types with distinct morphologies, molecular profiles, and electrophysiological properties, connected together with precise targeting biases. Synaptic connections between specific cell types have particular structural and functional features that set them apart. Learning mechanisms in the brain follow the architecture of neural microcircuits and their synaptic features. Inhibitory control by interneurons over different dendritic compartments plays an important role in information processing, synaptic plasticity, and learning in neural microcircuits. Different organizations of interneurons in neural motifs provide the required control, for example through feedback, feedforward, or lateral inhibition. How different brain microcircuits are involved in information processing, learning, and memory is a major open question in neuroscience. I will review the latest updates on this issue in my talk.
⭕️ For more details please see here
Follow us!
@CMPlab
@LoopAcademy
🌐 www.CMPLab.ir
🌐 www.LoopAcademy.ir
📣📣📣 Deep Learning and Neural Networks Symposium and Workshop
⚙️ Organized by: Institute for Cognitive and Brain Sciences, Shahid Beheshti University and Loop Academy
👨🏻🎓 Speaker introduction
Dr. Mohammad Ganjtabesh,
University of Tehran
Title: Bio-inspired Learning of Visual Features in Shallow and Deep Spiking Neural Networks
Abstract: To date, various computational models have been proposed to mimic the hierarchical processing of the ventral visual pathway in the cortex, with limited success. In this talk, we show how combining a biologically inspired network architecture with a biologically inspired learning rule significantly improves the models' performance on challenging invariant object recognition problems. In all experiments, we used a feedforward convolutional SNN and a temporal coding scheme in which the most strongly activated neurons fire first, while less activated ones fire later, or not at all. We start with a shallow network in which neurons in the higher trainable layer are equipped with the STDP learning rule and progressively become selective to intermediate-complexity visual features appropriate for object recognition. Then, a deeper model comprising several convolutional (STDP-trainable) and pooling layers is presented, in which the complexity of the extracted features increases along the hierarchy, from edge detectors in the first layer to object prototypes in the last layer. Finally, we show how reinforcement learning can be used efficiently to train a deep SNN to perform object recognition in natural images without any external classifier, and the superiority of reward-modulated STDP (R-STDP) over STDP in extracting discriminative visual features will be discussed.
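A pair-based STDP update of the kind that drives feature learning in such networks can be sketched as follows; the amplitudes and time constant are illustrative assumptions:

```python
import numpy as np

def stdp_update(w, t_pre, t_post, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Pair-based STDP: potentiate when the presynaptic spike precedes
    the postsynaptic one (causal pairing), depress otherwise, with an
    exponential dependence on the spike-time difference (in ms)."""
    dt = t_post - t_pre
    if dt > 0:
        w += a_plus * np.exp(-dt / tau)   # pre before post -> LTP
    else:
        w -= a_minus * np.exp(dt / tau)   # post before pre -> LTD
    return float(np.clip(w, 0.0, 1.0))    # keep the weight bounded
```

In R-STDP, broadly speaking, a reward signal would additionally modulate the sign or magnitude of such updates.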
⭕️ For more details please see here
Follow us!
@CMPlab
@LoopAcademy
🌐 www.CMPLab.ir
🌐 www.LoopAcademy.ir
📣📣📣 Deep Learning and Neural Networks Symposium and Workshop
⚙️ Organized by: Institute for Cognitive and Brain Sciences, Shahid Beheshti University and Loop Academy
👨🏻🎓 Speaker introduction
Dr. Milad Mozafari,
CerCo, CNRS, France
Title: Reconstructing Natural Scenes from fMRI Patterns using Bi-directional Generative Neural Networks
Abstract: Decoding and reconstructing images from brain imaging data is a research area of high interest. Recent progress in deep generative neural networks has introduced new opportunities to tackle this problem. Here, we employ a recently proposed large-scale bi-directional generative adversarial network, called BigBiGAN, to decode and reconstruct natural scenes from fMRI patterns. BigBiGAN converts images into a 120-dimensional latent space which encodes class and attribute information together, and can also reconstruct images based on their latent vectors. We trained a linear mapping between fMRI data, acquired over images from 150 different categories of ImageNet, and their corresponding BigBiGAN latent vectors. Then, we applied this mapping to the fMRI activity patterns obtained from 50 new test images from 50 unseen categories in order to retrieve their latent vectors, and reconstruct the corresponding images. Pairwise image decoding from the predicted latent vectors was highly accurate (84%). Moreover, qualitative and quantitative assessments revealed that the resulting image reconstructions were visually plausible, successfully captured many attributes of the original images, and had high perceptual similarity with the original content. This method establishes a new state-of-the-art for fMRI-based natural image reconstruction, and can be flexibly updated to take into account any future improvements in generative models of natural scene images.
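The core of the pipeline, the linear mapping from fMRI patterns to 120-dimensional latent vectors, amounts to a regularized multi-output regression. A minimal sketch with synthetic data standing in for the fMRI recordings (the ridge regularizer and toy shapes are illustrative assumptions):

```python
import numpy as np

def fit_linear_mapping(fmri, latents, alpha=1.0):
    """Ridge-regression sketch of a linear map from fMRI patterns
    (n_samples, n_voxels) to latent vectors (n_samples, 120).
    Predict latents for new patterns with fmri_new @ W."""
    X, Y = fmri, latents
    n_vox = X.shape[1]
    # Closed-form ridge solution: (X'X + alpha*I)^{-1} X'Y
    W = np.linalg.solve(X.T @ X + alpha * np.eye(n_vox), X.T @ Y)
    return W

# Synthetic stand-in: 150 training "stimuli", 40 "voxels", 120-d latents.
rng = np.random.default_rng(0)
X = rng.normal(size=(150, 40))
W_true = rng.normal(size=(40, 120))
Y = X @ W_true
W = fit_linear_mapping(X, Y, alpha=1e-6)
```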
⭕️ For more details please see here
Follow us!
@CMPlab
@LoopAcademy
🌐 www.CMPLab.ir
🌐 www.LoopAcademy.ir