the Turing Machine
Join me on a journey through Computational Neuroscience topics.
Useful resources, open positions and much more!
Get in touch: @nosratullah
Website: nosratullah.github.io
Analysis and interpretation of massively parallel electrophysiological data

This workshop showcases a few different approaches and analysis tools to exploit electrophysiological data.
Workshop at Neuroinformatics 2013 in Stockholm, Sweden
Workshop description: Analysis and Interpretation of Massively Parallel Electrophysiological Data
Probing the organization of interactions within and across neuronal populations is a promising approach to understanding the principles of brain processing. The rapidly advancing technical capabilities to record from hundreds of neurons in parallel open up new possibilities to disentangle the correlative structure within neuronal networks. However, the complexity of these massive data streams calls for novel, tractable analysis tools that exploit the parallel aspect of the data.
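
The workshop's specific tools sit behind the link below; as a minimal, hedged illustration of the kind of parallel analysis meant here, the sketch bins synthetic, jointly modulated spike trains and computes their pairwise correlation matrix with plain NumPy. Dedicated libraries such as Elephant provide proper versions of this.

import numpy as np

rng = np.random.default_rng(1)
n_neurons, duration, bin_size = 10, 10.0, 0.05  # 10 neurons, 10 s, 50 ms bins

# Synthetic Poisson spike counts driven by a shared rate fluctuation,
# standing in for a massively parallel recording.
t = np.arange(0, duration, bin_size)
common_rate = 5.0 + 3.0 * np.sin(2 * np.pi * t)  # Hz, shared across neurons
counts = np.stack([
    rng.poisson(np.clip(common_rate + rng.normal(0, 1, t.size), 0, None) * bin_size)
    for _ in range(n_neurons)
])

corr = np.corrcoef(counts)  # n_neurons x n_neurons correlation matrix
print(np.round(corr, 2))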

[ link ]

Follow: @theTuringMachine
the Turing Machine
https://www.youtube.com/watch?v=NFeGW5ljUoI
Weber17_IzhikevichGLM_NC.pdf
1.5 MB
Capturing the Dynamical Repertoire of Single Neurons with Generalized Linear Models

A key problem in computational neuroscience is to find simple, tractable models that are nevertheless flexible enough to capture the response properties of real neurons. Here we examine the capabilities of recurrent point process models known as Poisson generalized linear models (GLMs). These models are defined by a set of linear filters and a point nonlinearity and are conditionally Poisson spiking....
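
A minimal sketch of the model class the abstract describes: linear filters feeding an exponential point nonlinearity, with conditionally Poisson spiking. The filter shapes and parameter values below are illustrative assumptions, not the paper's.

import numpy as np

rng = np.random.default_rng(0)
dt = 0.001                       # 1 ms time bins
T = 5000
stim = rng.standard_normal(T)    # white-noise stimulus

# Linear filters: a stimulus filter and a suppressive spike-history filter.
k = np.exp(-np.arange(20) / 5.0)           # stimulus filter (20 bins)
h = -2.0 * np.exp(-np.arange(30) / 10.0)   # spike-history filter (30 bins)
b = 2.0                                    # bias (log baseline rate)

spikes = np.zeros(T)
for t in range(T):
    x_stim = stim[max(0, t - 20):t][::-1]    # most recent sample first
    x_hist = spikes[max(0, t - 30):t][::-1]
    drive = b + k[:x_stim.size] @ x_stim + h[:x_hist.size] @ x_hist
    rate = np.exp(drive)                     # conditional intensity (Hz)
    spikes[t] = rng.poisson(rate * dt)       # conditionally Poisson count

print(f"simulated {int(spikes.sum())} spikes in {T * dt:.0f} s")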

#paper

Follow: @theTuringMachine
Forwarded from Scientific Programming (Ziaee (he/him))
Have you ever wanted to quickly try out an idea in a Python shell (REPL)? You might not want to open a new Jupyter Notebook just to experiment with a few lines of code.
But you might also hesitate to use the classic Python shell, since it doesn't support auto-completion or docstrings the way Jupyter Notebook does, and you cannot fix a mistake in your code after hitting Enter.
What if you could turn your boring Python shell into a multi-functional shell like the one below?

Features:
🌱 Syntax highlighting
🌱 Multiline editing (the up arrow works)
🌱 Autocompletion
🌱 Mouse support
🌱 Support for color schemes
🌱 Support for bracketed paste
🌱 Both Vi and Emacs key bindings
🌱 Support for double-width (Chinese) characters
🌱 ... and many other things


Link
Open in an incognito tab!
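
The post doesn't name the tool beyond the link, but as a hedged pointer, ptpython is one real REPL with this feature set. You can install it with pip install ptpython and launch it directly, or embed it inside a running program:

from ptpython.repl import embed

def main():
    x = 42  # local state you want to inspect interactively
    # Drops into the enhanced REPL with access to the surrounding scope.
    embed(globals(), locals())

if __name__ == "__main__":
    main()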
Computational Neuroscience Symposium

This is the annual symposium of NYU's Training Program in Computational Neuroscience, but with a twist: we will have a joint event with the three other training programs funded by the same NIH grant: Brandeis University, Carnegie Mellon University, and University of Washington. There will be five student talks per site. Keynote lectures will be given by Joshua Gordon (NIMH) and Adrienne Fairhall (University of Washington). We hope you will join us!

Monday Jun. 7
12:00pm - 6:00pm

[ link ]-[ registration ]

Follow: @theTuringMachine
A Gaussian process can be thought of as an extension of the multivariate normal distribution to an infinite number of random variables covering each point on the input domain. The covariance between function values at any two points is given by the evaluation of the kernel of the Gaussian process.
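
A small NumPy illustration of the statement above: on any finite grid of input points, the function values drawn from a GP are jointly multivariate normal, with covariances given by evaluating the kernel. The squared-exponential kernel and its hyperparameters here are illustrative assumptions.

import numpy as np

def rbf_kernel(x1, x2, length_scale=1.0, variance=1.0):
    # k(x, x') = variance * exp(-(x - x')^2 / (2 * length_scale^2))
    sq_dist = (x1[:, None] - x2[None, :]) ** 2
    return variance * np.exp(-0.5 * sq_dist / length_scale ** 2)

x = np.linspace(-5, 5, 100)      # a finite grid standing in for the input domain
K = rbf_kernel(x, x)             # covariance between function values at all pairs
K += 1e-8 * np.eye(x.size)       # jitter for numerical stability

rng = np.random.default_rng(42)
# Any finite collection of function values is multivariate normal:
samples = rng.multivariate_normal(np.zeros(x.size), K, size=3)
print(samples.shape)             # (3, 100): three functions from the GP prior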

[ link ]

Follow: @theTuringMachine
Here, we bridge these different levels of description by showing how computational models parametrically map classic neuromodulatory processes onto systems-level models of neural activity. The ensuing critical balance of systems-level activity supports perception and action, although our knowledge of this mapping remains incomplete. In this way, quantitative models that link microscale neuronal neuromodulation to systems-level brain function highlight gaps in knowledge and suggest new directions for integrating theoretical and experimental work.

[ link ]
Follow: @theTuringMachine
the Turing Machine
https://www.nature.com/articles/nrn3962
From the neuron doctrine to neural networks
---
Abstract | For over a century, the neuron doctrine — which states that the neuron is the structural and functional unit of the nervous system — has provided a conceptual foundation for neuroscience. This viewpoint reflects its origins in a time when the use of single-neuron anatomical and physiological techniques was prominent. However, newer multineuronal recording methods have revealed that ensembles of neurons, rather than individual cells, can form physiological units and generate emergent functional properties and states. As a new paradigm for neuroscience, neural network models have the potential to incorporate knowledge acquired with single-neuron approaches to help us understand how emergent functional states generate behaviour, cognition and mental disease.

[ link ]

Follow: @theTuringMachine
Forwarded from Complex Systems Studies
We are hiring! Please get in touch if you are looking for an interdisciplinary comp neuro #PhD or #Postdoc. We are looking to fill two positions 1) focusing on dendritic dynamics and synaptic plasticity and 2) on neural network analysis in health and disease.

https://twitter.com/TTchumatchenko/status/1401987678707531783?s=19
NLTools is a Python package for analyzing neuroimaging data. It is the analysis engine powering neuro-learn. There are tools to perform data manipulation and analyses such as univariate GLMs, predictive multivariate modeling, and representational similarity analyses. It is based loosely on Tor Wager's object-oriented Matlab toolbox and leverages much code from nilearn and scikit-learn.
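
A minimal sketch of a univariate GLM with NLTools, following the Brain_Data interface in the nltools documentation; the file name and design matrix below are hypothetical placeholders.

import pandas as pd
from nltools.data import Brain_Data

# Load a 4D NIfTI of per-trial images (hypothetical path).
data = Brain_Data('sub-01_trial_betas.nii.gz')

# Attach a design matrix: one row per image, one column per regressor.
n = len(data)
data.X = pd.DataFrame({
    'intercept': [1] * n,
    'condition': ([0, 1] * n)[:n],  # hypothetical alternating conditions
})

stats = data.regress()   # voxel-wise OLS regression
t_map = stats['t']       # Brain_Data of t-statistics, one image per regressor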

[ link ]

Follow: @theTuringMachine
Listening through the noise
We are all familiar with the difficulty of trying to pay attention to a person speaking in a noisy environment, something often known as the ‘cocktail party problem’. This can be especially…

[ link ]

#spare_time

Follow: @theTuringMachine