EEG workshop – Telegram
Channel for brain signal processing workshops

Channel membership link:
https://news.1rj.ru/str/joinchat/AAAAAD7DKCCSYN9in_TFWQ
Contact the site administrator:
@eegworkshop0
Contact us: @EEGWorkshops
Contact Dr. Nasrabadi:
@ali_m_n2015
#book

Fundamentals of Computational Neuroscience

Author(s): Thomas Trappenberg
Publisher: Oxford University Press, Year: 2023
Description:
Computational neuroscience is the theoretical study of the brain to uncover the principles and mechanisms that guide the development, organization, information processing, and mental functions of the nervous system. Although not a new area, it is only recently that enough knowledge has been gathered to establish computational neuroscience as a scientific discipline in its own right. Given the complexity of the field, and its increasing importance in progressing our understanding of how the brain works, there has long been a need for an introductory text on what is often assumed to be an impenetrable topic.
Table of contents :
Cover
Fundamentals of Computational Neuroscience - Third Edition
Copyright
Preface
Mathematical formulas
Programming examples
References
Acknowledgements
Contents
I Background
1 Introduction and outlook
1.1 What is computational neuroscience?
1.1.1 Embedding within neuroscience
1.2 Organization in the brain
1.2.1 Levels of organization in the brain
1.2.2 Large-scale brain anatomy
1.2.3 Hierarchical organization of cortex
1.2.4 Rapid data transmission in the brain
1.2.5 The layered structure of neocortex
1.2.6 Columnar organization and cortical modules
1.2.7 Connectivity between neocortical layers
1.2.8 Cortical parameters
1.3 What is a model?
1.3.1 Phenomenological and explanatory models
1.3.2 Models in computational neuroscience
1.4 Is there a brain theory?
1.4.1 Emergence and adaptation
1.4.2 Levels of analysis
1.5 A computational theory of the brain
1.5.1 Why do we have brains?
1.5.2 The anticipating brain
1.5.3 Deep sparse predictive coding and the uncertain brain
2 Scientific programming with Python
2.1 The Python programming environment
2.2 Basic language elements
2.2.1 Basic data types and arrays
2.2.2 Control flow
2.2.3 Functions
2.2.4 Plotting
2.2.5 Timing the program
2.3 Code efficiency and vectorization
3 Math and Stats
3.1 Vector and matrix notations
3.2 Distance measures
3.3 The δ-function
3.4 Numerical calculus
3.4.1 Differences and sums
3.4.2 Numerical integration of an initial value problem
3.4.3 Euler method
3.4.4 Higher-order methods
3.5 Basic probability theory
3.5.1 Random numbers and their probability (density) function
3.5.2 Moments: mean, variance, etc.
3.5.3 Examples of probability (density) functions
3.5.3.1 Uniform distribution
3.5.3.2 Normal (Gaussian) distribution
3.5.3.3 Bernoulli distribution
3.5.3.4 Binomial distribution
3.5.3.5 Multinomial distribution
3.5.3.6 Poisson distribution
3.5.4 Cumulative probability (density) function and the Gaussian error function
3.5.5 Functions of random variables and the central limit theorem
3.5.6 Measuring the difference between distributions
3.5.7 Marginal, joint, and conditional distributions
II Neurons
4 Neurons and conductance-based models
4.1 Biological background
4.1.1 Structural properties
4.1.2 Information-processing mechanisms
4.1.3 Membrane potential
4.1.4 Ion channels
4.2 Synaptic mechanisms and dendritic processing
4.2.1 Chemical synapses and neurotransmitters
4.2.2 Excitatory/inhibitory synapses
4.2.3 Modelling synaptic responses
Simulation
4.2.4 Different levels of modelling
4.3 The generation of action potentials: Hodgkin–Huxley
4.3.1 The minimal mechanisms
4.3.2 Ion pumps
4.3.3 Hodgkin–Huxley equations
4.3.4 Propagation of action potentials
4.3.5 Above and beyond the Hodgkin–Huxley neuron: the Wilson model
4.4 FitzHugh–Nagumo model
4.5 Neuronal morphologies: compartmental models
4.5.1 Cable theory
4.5.2 Physical shape of neurons
4.5.3 Neuron simulators
5 Integrate-and-fire neurons and population models
5.1 The leaky integrate-and-fire models
5.1.1 Response of IF neurons to very short and constant input currents
5.1.2 Rate gain function
5.1.3 The spike-response model
5.1.4 The Generalized LIF model
5.1.5 The McCulloch–Pitts neuron
5.2 Spike-time variability
5.2.1 Biological irregularities
5.2.2 Noise models for IF neurons
5.2.3 Simulating the variability of real neurons
5.2.4 The activation function depends on input
5.3 Advanced integrate-and-fire models
5.3.1 The Izhikevich neuron
5.4 The neural code and the firing rate hypothesis
5.4.1 Correlation codes and coincidence detectors
5.4.2 How accurate is spike timing?
5.5 Population dynamics: modelling the average behaviour of neurons
5.5.1 Firing rates and population averages
5.5.2 Population dynamics for slowly varying input
5.5.3 Motivations for population dynamics
5.5.4 Rapid response of populations
5.5.5 Common activation functions
5.6 Networks with non-classical synapses
5.6.1 Logical AND and sigma–pi nodes
5.6.2 Divisive inhibition
5.6.3 Further sources of modulatory effects between synaptic inputs
6 Associators and synaptic plasticity
6.1 Associative memory and Hebbian learning
6.1.1 Hebbian learning
6.1.2 Associations
6.1.3 Hebbian learning in the conditioning framework
6.1.4 Features of associators and Hebbian learning
Pattern completion and generalization
Prototypes and extraction of central tendencies
Graceful degradation
6.2 The physiology and biophysics of synaptic plasticity
6.2.1 Typical plasticity experiments
6.2.2 Spike timing dependent plasticity
6.2.3 The calcium hypothesis and modelling chemical pathways
6.3 Mathematical formulation of Hebbian plasticity
6.3.1 Spike timing dependent plasticity rules
6.3.2 Hebbian learning in population and rate models
Simulation
6.3.3 Negative weights and crossing synapses
6.4 Synaptic scaling and weight distributions
6.4.1 Examples of STDP with spiking neurons
6.4.2 Weight distributions in rate models
6.4.3 Competitive synaptic scaling and weight decay
6.4.4 Oja’s rule and principal component analysis
6.5 Plasticity with pre- and postsynaptic dynamics
III Networks
7 Feed-forward mapping networks
7.1 Deep representational learning
7.2 The perceptron
7.2.1 The simple perceptron as a Boolean function
7.2.2 Multilayer perceptron (MLP)
7.2.3 MNIST with MLP
7.2.4 MLP with Keras
7.2.5 Some remarks on gradient learning and biological plausibility of MLPs
7.3 Convolutional neural networks (CNNs)
7.3.1 Invariant object recognition
7.3.2 Image processing and convolution filters
7.3.3 CNN and MNIST
7.4 Probabilistic interpretation of MLPs
7.4.1 Probabilistic regression
7.4.2 Probabilistic classification
7.4.3 Maximum a posteriori (MAP) and regularization with priors
7.4.4 Mapping networks with context units
7.5 The anticipating brain
7.5.1 The brain as anticipatory system in a probabilistic framework
7.5.2 Variational free energy principle
7.5.3 Deep sparse predictive coding
7.5.4 Predictive coding of MNIST
8 Feature maps and competitive population coding
8.1 Competitive feature representations in cortical tissue
8.2 Self-organizing maps
8.2.1 The basic cortical map model
8.2.2 The Kohonen model
8.2.3 Ongoing refinements of cortical maps
8.3 Dynamic neural field theory
8.3.1 The centre-surround interaction kernel
8.3.2 Asymptotic states and the dynamics of neural fields
8.3.3 Examples of competitive representations in the brain
8.3.4 Formal analysis of attractor states
8.4 ‘Path’ integration and the Hebbian trace rule
8.4.1 Path integration with asymmetrical weight kernels
8.4.2 Self-organization of a rotation network
8.4.3 Updating the network after learning
8.5 Distributed representation and population coding
8.5.1 Sparseness
8.5.2 Probabilistic population coding
8.5.3 Optimal decoding with tuning curves
8.5.4 Implementations of decoding mechanisms
9 Recurrent associative networks and episodic memory
9.1 The auto-associative network and the hippocampus
9.1.1 Different memory types
9.1.2 The hippocampus and episodic memory
9.1.3 Learning and retrieval phase
9.2 Point-attractor neural networks (ANN)
9.2.1 Network dynamics and training
9.2.2 Signal-to-noise analysis
9.2.3 The phase diagram
9.2.4 Spurious states and the advantage of noise
9.2.5 Noisy weights and diluted attractor networks
9.3 Sparse attractor networks and correlated patterns
9.3.1 Sparse patterns and expansion recoding
9.3.2 Control of sparseness in attractor networks
9.4 Chaotic networks: a dynamic systems view
9.4.1 Attractors
9.4.2 Lyapunov functions
9.4.3 The Cohen–Grossberg theorem
9.4.4 Asymmetrical networks
9.4.5 Non-monotonic networks
9.5 The Boltzmann Machine
9.5.1 ANN with hidden nodes
9.5.2 The restricted Boltzmann machine and contrastive Hebbian learning
9.5.3 Example of basic RBM on MNIST data
9.6 Re-entry and gated recurrent networks
9.6.1 Sequence processing
9.6.2 Basic sequence processing with multilayer perceptrons and recurrent neural networks in Keras
9.6.3 Long short-term memory (LSTM) and sentiment analysis
9.6.4 Other gated architectures and attention
IV System-level models
10 Modular networks and complementary systems
10.1 Modular mapping networks
10.1.1 Mixture of experts
10.1.2 The ‘what-and-where’ task
10.1.3 Product of experts
10.2 Coupled attractor networks
10.2.1 Imprinted and composite patterns
10.2.2 Signal-to-noise analysis
10.3 Sequence learning
10.4 Complementary memory systems
10.4.1 Distributed model of working memory
10.4.2 Limited capacity of working memory
10.4.3 The spurious synchronization hypothesis
10.4.4 The interacting-reverberating-memory hypothesis
11 Motor control and reinforcement learning
11.1 Motor learning and control
11.1.1 Feedback controller
11.1.2 Forward and inverse model controller
11.1.3 The actor–critic scheme
11.2 Classical conditioning and reinforcement learning
11.3 Formalization of reinforcement learning
11.3.1 The environmental setting of a Markov decision process
11.3.2 Model-based reinforcement learning
11.3.2.1 The basic Bellman equation
11.3.2.2 Policy iteration
11.3.2.3 Bellman function for optimal policy and value (Q) iteration
11.3.3 Model-free reinforcement learning
11.3.3.1 Temporal difference method for value iteration
11.3.3.2 TD(λ)
11.4 Deep reinforcement learning
11.4.1 Value-function approximation with ANN
11.4.2 Deep Q-learning
11.4.3 Actors and policy search
11.4.4 Actor-critic schemes
11.4.5 Reinforcement learning in the brain
11.4.6 The cerebellum and motor control
11.4.7 Neural implementations of TD learning
11.4.8 Basal Ganglia
12 The cognitive brain
12.1 Attentive vision
12.1.1 Attentive vision
12.1.2 Attentional bias in visual search and object recognition
12.2 An interconnecting workspace hypothesis
12.2.1 The global workspace
12.2.2 Demonstration of the global workspace in the Stroop task
12.3 Complementary decision systems
12.4 Probabilistic reasoning: causal models and Bayesian networks
12.4.1 Graphical models
12.4.2 The Pearl-example
12.4.3 Probabilistic reasoning in Python using LEA
12.4.4 Expectation maximization
12.5 Structural causal models and learning causality
12.5.1 Out-of-distribution generalization
12.5.2 Structural causal models
12.5.3 Learning causality and explainable AI
12.5.4 The way forward
Index
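
Since the book develops its models through Python programming examples (Chapter 2), here is a minimal sketch in that spirit: a leaky integrate-and-fire neuron (Section 5.1) integrated with the forward Euler method (Section 3.4.3). The parameter values and variable names are illustrative assumptions, not code from the book.

```python
import numpy as np

# Minimal leaky integrate-and-fire (LIF) neuron integrated with the
# forward Euler method. All parameter values are illustrative.
tau_m    = 10.0   # membrane time constant (ms)
v_rest   = -65.0  # resting potential (mV)
v_thresh = -55.0  # firing threshold (mV)
v_reset  = -70.0  # reset potential after a spike (mV)
r_m      = 10.0   # membrane resistance (MOhm)
i_ext    = 1.2    # constant input current (nA); r_m * i_ext = 12 mV

dt = 0.1                        # Euler step (ms)
t  = np.arange(0.0, 100.0, dt)  # 100 ms of simulated time
v  = np.full_like(t, v_rest)    # membrane potential trace
spike_times = []

for k in range(1, len(t)):
    # Euler step of: tau_m * dV/dt = -(V - v_rest) + r_m * i_ext
    dv   = (-(v[k - 1] - v_rest) + r_m * i_ext) / tau_m
    v[k] = v[k - 1] + dt * dv
    if v[k] >= v_thresh:        # threshold crossing: record a spike
        spike_times.append(t[k])
        v[k] = v_reset          # and reset the membrane potential

print(f"{len(spike_times)} spikes in 100 ms")
```

With these illustrative values the steady-state depolarization (r_m * i_ext = 12 mV above rest) exceeds the 10 mV threshold, so the neuron fires regularly; dropping i_ext below 1.0 nA silences it.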
#book

Computational Neuroscience (Neuromethods, 199)

This volume looks at the latest advancements in imaging neuroscience methods using magnetic resonance imaging (MRI) and electroencephalography (EEG) to study the healthy and diseased brain. The chapters in this book are organized into five parts. Parts One and Two cover an introduction to the field and the latest use of molecular models. Part Three explores neurophysiological methods for assessment, such as quantitative EEG and event-related potentials. Part Four discusses the advances and innovations made in computational anatomy, and Part Five addresses the challenges researchers must overcome before computational neuroscience can find wider translational applications in psychiatry and mental health. In the Neuromethods series style, chapters include the kind of detail and key advice from the specialists needed to get successful results in your laboratory.
Author(s): Drozdstoy Stoyanov, Bogdan Draganski, Paolo Brambilla, Claus Lamm (editors)
Publisher: Humana, Year: 2023, Edition: 1st ed.
Language: English, Pages: 288
ISBN: 1071632299, 9781071632291
Cutting-edge and comprehensive, Computational Neuroscience is a valuable tool for researchers in the psychiatry and mental health fields who want to learn how computational approaches can improve the utility and validity of clinical methods.


Table of contents :
Preface to the Series
Preface
Contents
Contributors
Part I: Introduction
Chapter 1: Toward Methodology for Strategic Innovations in Translational and Computational Neuroscience in Psychiatry
1 Background
2 Current Advancements
2.1 Future Research Goals
2.2 Expected Results
3 Impact
References
Part II: Molecular Methods
Chapter 2: Molecular Methods in Neuroscience and Psychiatry
1 Introduction
2 Methods
3 Results and Discussions
3.1 Methods in Neurotranscriptomics
3.2 Methods in Neuroproteomics
3.3 Methods in Epigenetics
3.4 Flow Cytometry in Psychiatry and Neuroscience
3.5 ELISpot/FluoroSpot in Psychiatry and Neuroscience
4 Conclusions
References
Chapter 3: Toward the Use of Research and Diagnostic Algorithmic Rules to Assess the Recurrence of Illness and Major Dysmood Disorder...
1 Mood Disorder Concepts: The Ultimate Chaos
2 Diagnosis of Mood Disorders: The Ultimate Chaos
3 Lack of a Correct Model Prevents Targeted Research
4 Machine Learning Models
5 RADAR Scores and Plots
6 Why the Diagnosis "Bipolar Disorder" Is Useless
6.1 Patients with BP1 and BP2 May Be Classified as SMDM or MDMD
6.2 Depressive and Manic Episodes Are Manifestations of ROI
6.3 The Diagnoses of MDD, MDE, BP1, and BP2 Are Irrelevant in Our Precision Models
6.4 No Model Differences Between Unipolar and Bipolar Disorders
7 Conclusions
References
Part III: Neurophysiological Methods
Chapter 4: The Concept of Event-Related Oscillations: A Spotlight on Extended Applications
Abbreviations
1 The Concept of Event-Related Oscillations
1.1 Conceptual Framework
1.1.1 Event-Related Potentials
1.1.2 Event-Related EEG Oscillations
1.2 Advantages
1.2.1 A Full Characterization of Event-Related EEG Signals
1.2.2 Evaluation of Parallel Processes in the Brain
1.2.3 A Physiological Approach to the Event-Related EEG Activity
2 Methodology
2.1 Analysis in the Frequency Domain
2.2 Analysis in the Time-Frequency Domain
2.2.1 Phase-Locked Power
2.2.2 Total Power
2.2.3 Temporal Phase Locking
Single-Sweep Wave Identification Method
Phase-Locking Factor
2.2.4 Event-Related Spatial Synchronization: Spatial Phase Locking
Phase-Locking Value
Phase-Lag Index
3 Extended Applications
3.1 Internal Information Processing
3.1.1 Response-Related Potentials
3.1.2 Coupling Between Slow Oscillations and Sleep Spindles
3.1.3 Additional Internal Potentials: A Future Focus of Research
3.2 Event-Related Frequency Tuning
3.3 Event-Related Spatial Synchronization
3.4 Multi-Second Behavioral Patterns
4 Concluding Remarks
References
Chapter 5: Quantitative EEG Analysis: Introduction and Basic Principles
1 Introduction
References
Chapter 6: QEEG and ERP Biomarkers of Psychotic and Mood Disorders and Their Treatment Response
1 The Perspective of Clinical Utility of Mismatch Negativity and P300 Event-Related Potentials in Psychotic Disorders
1.1 MMN
1.2 P300
2 Quantitative EEG Biomarkers of Depression and Antidepressant Treatment Response
References
Chapter 7: Quantitative EEG in Patients with Schizophrenia
1 Introduction
References
Part IV: Neuroimaging Methods
Chapter 8: Computational Anatomy Going Beyond Brain Morphometry
1 Introduction
2 Historical Overview
3 Computational Anatomy in Basic and Clinical Neuroscience
4 Limitations of Computational Anatomy Using T1-Weighted Data
5 Improved Brain Tissue Classification Using qMRI
6 "In Vivo Histology" Using qMRI
7 Current Limitations of qMRI
8 Outlook
References
Chapter 9: Nonlinear Methods for the Investigation of Psychotic Disorders
1 Introduction
2 How to Evaluate Nonlinear Dynamical Systems?
3 Methods
4 Use Cases
5 Summary and Outlook
References
Chapter 10: Carving Out the Path to Computational Biomarkers for Mental Disorders
1 Introduction
1.1 The Complexity of Understanding Brain Function and Dysfunction
1.2 The Role of Emotions in Anxiety and Other Mental Disorders
1.3 The Role of the Amygdala in Emotions
1.4 Amygdala Activation and Connectivity in Anxiety and Other Mental Disorders
1.5 Structural and Functional Alterations in the Amygdala as a Possible Differential Diagnostic Biomarker for Mental Disorders
1.6 Real World Challenges for Amygdala-Based Biomarkers
2 Materials
2.1 Magnetic Resonance Imaging Hardware
2.2 Computing Hardware
3 Methods
3.1 Experimental Design: Amygdala Function
3.2 Experimental Design: Amygdala Anatomy
3.3 Experimental Design: Connectivity and Amygdala Regulation
3.4 Functional MRI of the Amygdala
3.5 Ultra-High Field Functional MRI of the Amygdala
4 Conclusion
References
Chapter 11: Neuroimaging Methods Investigating Anterior Insula Alterations in Schizophrenia and Mood Disorders
1 Introduction
2 Structural Changes
3 Functional Alterations
4 Impaired Connectivity
5 Conclusion
References
Chapter 12: Magnetic Resonance Spectroscopy
1 Introduction
1.1 Magnetic Resonance Spectroscopy: Principles
1.2 Clinical Applications of MRS
1.3 MRS Applications in Neurology
1.4 MRS Applications in Psychiatry
1.5 Functional MRS
2 Materials
3 Methods
3.1 Data Acquisition
3.1.1 Single Voxel Spectroscopy
3.1.2 MRS Imaging
3.1.3 Water and Lipid Suppression
3.2 Data Processing
4 Notes
4.1 Understanding Data Quality
4.2 Artifacts
4.3 Long and Short TE
References
Chapter 13: The Effect of Exogenous and Endogenous Parameters on Group Resting-State Effective Connectivity and BOLD Signal
1 Introduction
2 Methods
2.1 Participants
2.2 BETULA Data Collection
2.3 Data Analysis
2.3.1 Pre-processing
2.3.2 ROI Selection
2.3.3 DCM Analysis
3 Results
3.1 Effective Connectivity
3.2 Parameters Defining the BOLD Signal
4 Discussions
5 Conclusions
References
Chapter 14: Utility of Computational Approaches for Precision Psychiatry: Applications to Substance Use Disorders
1 Introduction
2 Theory-Driven Approaches: Computational Modeling and Computational Phenotyping
2.1 Joint Modeling
3 Hybrid Approaches/Adaptive Design Optimization
4 Data-Driven Approaches/Machine Learning
5 Summary and Conclusion
References
Part V: Integrative Computational Neuroscience
Chapter 15: Multimodal Integration in Psychiatry: Clinical Potential and Challenges
1 Introduction
2 Materials
3 Methods
3.1 Multimodality of Magnetic Resonance Techniques
3.2 Multimodal Magnetic Resonance in the Study of Major Psychoses
3.3 Functional Neuroimaging and Neurophysiologic Techniques
3.3.1 Magnetic Resonance Imaging of the BOLD Effect
3.4 MRI Techniques Sensitive to Perfusion and Oxidative Metabolism
3.5 Optical Neuroimaging Techniques
3.6 Positron Emission Tomography
3.7 Electroencephalography
4 Conclusions
References
Chapter 16: Premises of Computational Neuroscience: Machine Learning Tools and Multivariate Analyses
1 Introduction
2 Guide to the Methodology
2.1 Overview
2.2 Mathematical Formulations
2.3 Benefits and Limitations of Using Multivariate Methods for Mental Health
3 Examples of Application of Multivariate Methods in Mental Health
3.1 Multivariate Methods Applied to the Classification of Schizophrenia
3.1.1 Objective
3.1.2 Data Used
3.1.3 Method Used
3.1.4 Results
3.2 Individual- and Group-Level Brain Signatures of Schizophrenia, Major Depressive, and Bipolar Disorders
3.2.1 Objective
3.2.2 Data
3.2.3 Method
3.2.4 Results
3.3 Multimodal Brain Signature with Task-fMRI, Resting State, and Morphometry in Schizophrenia and Major Depressive Disorder
3.3.1 Objective
3.3.2 Data
3.3.3 Methods
3.3.4 Results
4 Code and Toolbox Availability
5 Conclusion
References
Index
Chapter 4 is useful for studying event-related oscillations.
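
Among the synchronization measures Chapter 4's methodology section lists is the phase-locking value (PLV, Sect. 2.2.4 above). As a rough sketch of the idea, a PLV across trials could be computed in Python as below; the function, the synthetic 10 Hz data, and all parameters are illustrative assumptions, not code from the book.

```python
import numpy as np
from scipy.signal import hilbert

def phase_locking_value(trials):
    """Phase-locking value across trials for one channel.

    trials: array of shape (n_trials, n_samples), band-pass filtered
    EEG segments aligned to the event. Returns PLV per time point:
    PLV(t) = | (1/N) * sum_n exp(i * phi_n(t)) |.
    """
    phases = np.angle(hilbert(trials, axis=1))  # instantaneous phase
    return np.abs(np.mean(np.exp(1j * phases), axis=0))

# Toy demonstration with synthetic, hypothetical data: a common 10 Hz
# component (phase-locked across trials) plus per-trial noise.
rng = np.random.default_rng(0)
fs, n_trials, n_samples = 250, 40, 500
t = np.arange(n_samples) / fs
trials = (np.sin(2 * np.pi * 10 * t)
          + 0.5 * rng.standard_normal((n_trials, n_samples)))
plv = phase_locking_value(trials)
print(plv.mean())  # high (near 1) for strongly phase-locked activity
```

In the chapter's distinction between phase-locked and total power, the PLV isolates the consistency of phase across trials, independent of amplitude.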
🔰 The National Brain Mapping Laboratory presents:

💠 The 30th Workshop on EEG Signal Recording, Processing, and Analysis 💠

🟢 Instructors:
☑️ Dr. Ali Motie Nasrabadi
☑️ Dr. Mohammad Mikaeili

🕑 Time: Wednesday to Friday, 23–25 Aban 1403

‼️ Limited capacity ‼️

🌐 Click for registration and more information.

☎️ Contact us: 02186093155

💠Telegram
💠Instagram
💠LinkedIn
🌐Website