Forwarded from Complex Systems Studies
The #Coxeter Lecture Series will be delivered by 2022 Fields Medallist Hugo Duminil-Copin.
Do NOT miss the opportunity to hear his talks in person or online!
Register: http://www.fields.utoronto.ca/activities/23-24/Duminil-Copin
Summer School | Advanced tools for data analysis in neuroscience
Advanced tools for data analysis in neuroscience
Research discoveries are increasingly dependent on the development of new tools and technologies, as well as on the ability to process, manage and analyze the large amounts of data collected with these tools....
[ link ]
More: @theTuringMachine
IPython Interactive Computing and Visualization Cookbook, Second Edition (2018), by Cyrille Rossant, contains over 100 hands-on recipes on high-performance numerical computing and data science in the Jupyter Notebook.
[ link ]
More: @theTuringMachine
Forwarded from Scientific Programming (SCI_dev(he/him))
Datasets for machine learning typically contain a large number of features, but such high-dimensional feature spaces are not always helpful.
In general, not all features are equally important: a few account for a large share of the variance in the dataset. Dimensionality reduction algorithms aim to shrink the feature space to a fraction of its original number of dimensions while still retaining the high-variance features, now expressed in a transformed feature space. Principal component analysis (PCA) is one of the most popular dimensionality reduction algorithms.
Here's a simple example in Python demonstrating PCA for dimensionality reduction before training a scikit-learn classifier:
Github
You may also want to read more about PCA here.
GitHub
workshop_ML/pca/classify_use_pca.ipynb at main · Ziaeemehr/workshop_ML
Machine learning tutorials and examples. Contribute to Ziaeemehr/workshop_ML development by creating an account on GitHub.
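A minimal sketch of the same idea (the digits dataset and pipeline choices are my assumptions, not necessarily what the linked notebook uses): fit PCA to keep just enough components to explain 95% of the variance, then train a classifier in the reduced space.

```python
# PCA for dimensionality reduction before a scikit-learn classifier.
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_digits(return_X_y=True)  # 64-dimensional feature space
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Keep just enough principal components to explain 95% of the variance.
clf = make_pipeline(
    StandardScaler(),
    PCA(n_components=0.95),
    LogisticRegression(max_iter=1000),
)
clf.fit(X_tr, y_tr)
print("components kept:", clf.named_steps["pca"].n_components_)  # far fewer than 64
print("test accuracy:", round(clf.score(X_te, y_te), 3))
```

Passing a float in (0, 1) as `n_components` tells scikit-learn to choose the number of components automatically from the explained-variance ratio, so the fraction of retained variance, not the dimension, is the tuning knob.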
Beyond Geometry: Comparing the Temporal Structure of Computation in Neural Circuits with Dynamical Similarity Analysis
How can we tell whether two neural networks utilize the same internal processes for a particular computation?
[ Talk ] [ paper ][ git ]
#Cosyne2024
More: @theTuringMachine
Deep neural networks reveal context-sensitive speech encoding in single neurons of human cortex.
Shailee Jain, Matthew K. Leonard, Edward F. Chang
[ Talk ]
#Cosyne2024
More: @theTuringMachine
Forwarded from Scientific Programming (SCI_dev(he/him))
Post-doctoral position in Marseille.
Project Title: Higher-order interactions in human brain networks supporting causal learning
Forwarded from the last neural cell (Aleksandr Kovalev)
Brain-To-Text Competition 2024
This is the most fascinating BCI competition yet, organized by Stanford. Everyone has one month to develop the world's best brain-to-speech decoder!
Task: Predict attempted speech from brain activity.
Deadline: June 2, 2024
Dataset: They've recorded 12,100 sentences from a patient who can no longer speak intelligibly due to amyotrophic lateral sclerosis (ALS).
For each sentence, they provide the transcript of what the participant was attempting to say, along with the corresponding time series of neural spiking activity recorded from 256 microelectrodes in speech-related areas of cortex.
Just letting you know we're jumping into this challenge!
Together with @Altime and @kovalev_alvi, we're going to create something interesting.
Like this post if you want to follow our updates❤️
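To make the shape of the problem concrete, here is a toy, self-contained sketch on synthetic data — emphatically NOT the actual competition format (the labels, bin sizes, and tuning profiles are all made up): binned spike counts from 256 channels are decoded into a handful of speech classes with a linear classifier.

```python
# Toy brain-to-text-style decoding on synthetic spike counts.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_bins, n_channels, n_classes = 2000, 256, 5   # 256 channels, as in the dataset
labels = rng.integers(0, n_classes, size=n_bins)          # "attempted phoneme" per time bin
tuning = rng.poisson(3.0, size=(n_classes, n_channels))   # each class gets its own firing profile
X = rng.poisson(tuning[labels]).astype(float)             # spike counts per bin and channel

X_tr, X_te, y_tr, y_te = train_test_split(X, labels, random_state=0)
clf = LogisticRegression(max_iter=2000).fit(X_tr, y_tr)
print("decoding accuracy:", round(clf.score(X_te, y_te), 2))
```

Real decoders for this task use sequence models with a language model on top; the point here is only the data layout: a (time bins × channels) count matrix paired with per-bin speech labels.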
Massive Open Online Courses
Simulation Neuroscience is an emerging approach to integrate the knowledge dispersed throughout the field of neuroscience.
[ link ]
Follow: @theTuringMachine
Neurodata Without Borders (NWB) is a data standard for neurophysiology, providing neuroscientists with a common standard to share, archive, use, and build analysis tools for neurophysiology data. NWB is designed to store a variety of neurophysiology data, including data from intracellular and extracellular electrophysiology experiments, data from optical physiology experiments, and tracking and stimulus data.
https://www.nwb.org
CRCNS - Collaborative Research in Computational Neuroscience - Data Sharing
To enable concerted efforts in understanding the brain, experimental data and other resources such as stimuli and analysis tools should be widely shared by researchers all over the world. To serve this purpose, this website provides a marketplace and discussion forum for sharing tools and data in neuroscience.
https://crcns.org
In Search of Invariance in Brains and Machines
Despite their seemingly impressive performance at image recognition and other perceptual tasks, deep convolutional neural networks are prone to be easily fooled, sensitive to adversarial attack, and have trouble generalizing to data outside the training domain that arise from everyday interactions with the real world. The premise of this talk is that these shortcomings stem from the lack of an appropriate mathematical framework for posing the problems at the core of deep learning - in particular, modeling hierarchical structure, and the ability to describe transformations, such as variations in pose, that occur when viewing objects in the real world. Here I will describe an approach that draws from a well-developed branch of mathematics for representing and computing these transformations: Lie theory. In particular, I shall describe a method for learning shapes and their transformations from images in an unsupervised manner using Lie Group Sparse Coding. Additionally, I will show how the generalized bispectrum can potentially be used to learn invariant representations that are complete and impossible to fool.
https://www.youtube.com/watch?v=GPMcJa88qaE
YouTube
In Search of Invariance in Brains and Machines
Presented By: Bruno Olshausen | Professor; Helen Wills Neuroscience Institute & School of Optometry and Director; Redwood Center for Theoretical Neuroscience, U.C. Berkeley
Presented: May 15th | 11am - 12pm | Georgia Institute of Technology | IBB 1128
Talk…
What are the necessary steps toward open neuroscience? Below is a short note by Samuel Gershman on the matter.
[ read more ]
Follow for more: @theTuringMachine
The Transmitter
A README for open neuroscience
Making data (and code) useful for yourself automatically makes it useful for others.
What's so special about the human brain?
Torrents of data from cell atlases, brain organoids and other methods are finally delivering answers to an age-old question.
By Kerri Smith
Infographics by Nik Spencer
[ link ]
More: @theTuringMachine
Breaking Free from Neural Networks and Dynamical Systems
By Hessam Akhlaghpour
This blog post is written as a dialogue between two imaginary characters debating the mainstream idea that the brain is a dynamical system.
[ link ]
Follow: @theTuringMachine
I'm delighted to share that I published my first preprint,
"From spiking neuronal networks to interpretable dynamics: a diffusion-approximation framework", on bioRxiv. In this study we introduce a framework for interpreting complex spiking neuronal network dynamics using nonlinear Hawkes process models and diffusion approximations. This approach allows us to extract tractable equations, offering dynamical insights into behavior and cognition.... [ read more ]
Follow: @theTuringMachine
"From spiking neuronal networks to interpretable dynamics: a diffusion-approximation framework" in bioRxiv. In this study we introduced a framework to interpret complex spiking neuronal network dynamics using nonlinear Hawkes process models and diffusion approximations. This approach allows extracting tractable equations, offering dynamical insights into behavior and cognition.... [ read more ]
Follow: @theTuringMachine
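For background, a multivariate nonlinear Hawkes process has the standard intensity below (textbook notation, not necessarily the preprint's exact equations):

```latex
\lambda_i(t) = \phi\Big( \mu_i + \sum_j \int_0^t h_{ij}(t-s)\, \mathrm{d}N_j(s) \Big)
```

where $N_j(t)$ counts the spikes of population $j$, $\mu_i$ is a baseline drive, $h_{ij}$ are interaction kernels, and $\phi$ is a nonlinearity. A diffusion approximation then replaces the point process with a tractable stochastic differential equation for the population-averaged activity, which is what makes the extracted dynamics interpretable.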