How to Develop a Cost-Sensitive Neural Network for Imbalanced Classification
After completing this tutorial, you will know:
How the standard neural network algorithm does not support imbalanced classification.
How the neural network training algorithm can be modified to weight misclassification errors in proportion to class importance.
How to configure class weight for neural networks and evaluate the effect on model performance.
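The weighting idea in the bullets above can be sketched without any framework: scale each example's loss by a per-class weight so minority-class mistakes cost more. This is a minimal NumPy illustration, not the tutorial's exact code (the class weights and predictions below are made up for the demo).

```python
import numpy as np

def weighted_binary_cross_entropy(y_true, y_pred, class_weight):
    """Binary cross-entropy where each example's error is scaled by its class weight."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.clip(np.asarray(y_pred, dtype=float), 1e-7, 1 - 1e-7)
    # Per-example weight: class_weight[1] for positives, class_weight[0] for negatives.
    w = np.where(y_true == 1, class_weight[1], class_weight[0])
    losses = -(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))
    return float(np.mean(w * losses))

y_true = [0, 0, 0, 1]          # imbalanced: one positive among four examples
y_pred = [0.1, 0.2, 0.1, 0.3]  # model badly under-predicts the positive

# Unweighted loss vs. weighting the minority (positive) class 10x:
plain = weighted_binary_cross_entropy(y_true, y_pred, {0: 1.0, 1: 1.0})
weighted = weighted_binary_cross_entropy(y_true, y_pred, {0: 1.0, 1: 10.0})
```

With the 10x weight, the single misclassified positive dominates the loss, so gradient descent is pushed to fix exactly that error; Keras exposes the same idea via the `class_weight` argument to `fit()`.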
Link
🔭 @DeepGravity
Machine Learning Mastery
How to Develop a Cost-Sensitive Neural Network for Imbalanced Classification - Machine Learning Mastery
Deep learning neural networks are a flexible class of machine learning algorithms that perform well on a wide range of problems.
Neural networks are trained using the backpropagation of error algorithm that involves calculating errors made by the model…
This Python Package ‘Causal ML’ Provides a Suite of Uplift Modeling and Causal Inference with Machine Learning
Link
🔭 @DeepGravity
MarkTechPost
This Python Package 'Causal ML' Provides a Suite of Uplift Modeling and Causal Inference with Machine Learning | MarkTechPost
This Python Package ‘Causal ML’ Provides a Suite of Uplift Modeling and Causal Inference with Machine Learning. ‘Causal ML’ is a Python package that deals
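For context on what uplift modeling estimates: this is a hypothetical NumPy sketch of the two-model ("T-learner") idea that packages like Causal ML implement, not Causal ML's actual API. The data is synthetic, and crude per-bin means stand in for real regressors.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
x = rng.normal(size=n)              # single covariate
treat = rng.integers(0, 2, size=n)  # random treatment assignment
# Synthetic outcome: the treatment helps more when x is positive.
y = 0.5 * x + treat * (0.2 + 0.3 * (x > 0)) + rng.normal(scale=0.1, size=n)

# "Two models": per-arm outcome means within coarse x bins
# (a stand-in for fitting one regressor per treatment arm).
bins = np.digitize(x, [-1.0, 0.0, 1.0])
uplift = np.zeros(4)
for b in range(4):
    mask = bins == b
    mu_treated = y[mask & (treat == 1)].mean()
    mu_control = y[mask & (treat == 0)].mean()
    uplift[b] = mu_treated - mu_control  # estimated treatment effect in this bin
```

The estimated uplift is larger in the bins with x > 0, recovering the heterogeneous effect baked into the simulation; uplift libraries automate this with proper learners and evaluation metrics.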
A Gentle Introduction to Cross-Entropy for Machine Learning
After completing this tutorial, you will know:
How to calculate cross-entropy from scratch and using standard machine learning libraries.
Cross-entropy can be used as a loss function when optimizing classification models like logistic regression and artificial neural networks.
Cross-entropy is different from KL divergence but can be calculated using KL divergence, and is different from log loss but calculates the same quantity when used as a loss function.
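The relationship in the last bullet, H(P, Q) = H(P) + KL(P || Q), is easy to verify from scratch; the two distributions below are arbitrary examples chosen for the demo.

```python
import numpy as np

def cross_entropy(p, q):
    """H(P, Q) = -sum_i p_i * log(q_i), in nats."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return -np.sum(p * np.log(q))

def entropy(p):
    """H(P) = -sum_i p_i * log(p_i)."""
    p = np.asarray(p, float)
    return -np.sum(p * np.log(p))

def kl_divergence(p, q):
    """KL(P || Q) = sum_i p_i * log(p_i / q_i)."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return np.sum(p * np.log(p / q))

p = [0.10, 0.40, 0.50]
q = [0.80, 0.15, 0.05]
# Cross-entropy decomposes into entropy plus KL divergence.
assert np.isclose(cross_entropy(p, q), entropy(p) + kl_divergence(p, q))
```

When P is a one-hot label distribution, H(P) is zero and cross-entropy equals KL divergence, which is why minimizing log loss in classification is equivalent to minimizing KL divergence to the labels.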
Link
🔭 @DeepGravity
Methods in Computational Neuroscience
Course Date: August 2 – August 28, 2020
Deadline: March 16, 2020
Link
🔭 @DeepGravity
#MIT Deep Learning #course, with Alexander Amini & Ava Soleimany
Link to the course page
🔭 @DeepGravity
MIT Deep Learning 6.S191
MIT's introductory course on deep learning methods and applications
#Microsoft Research Webinar Series
Data Visualization: Bridging the Gap Between Users and Information
Link
🔭 @DeepGravity
A deep learning system accurately classifies primary and metastatic cancers using passenger mutation patterns
Abstract
In cancer, the primary tumour’s organ of origin and histopathology are the strongest determinants of its clinical behaviour, but in 3% of cases a patient presents with a metastatic tumour and no obvious primary. Here, as part of the ICGC/TCGA Pan-Cancer Analysis of Whole Genomes (PCAWG) Consortium, we train a deep learning classifier to predict cancer type based on patterns of somatic passenger mutations detected in whole genome sequencing (WGS) of 2606 tumours representing 24 common cancer types produced by the PCAWG Consortium. Our classifier achieves an accuracy of 91% on held-out tumor samples and 88% and 83% respectively on independent primary and metastatic samples, roughly double the accuracy of trained pathologists when presented with a metastatic tumour without knowledge of the primary. Surprisingly, adding information on driver mutations reduced accuracy. Our results have clinical applicability, underscore how patterns of somatic passenger mutations encode the state of the cell of origin, and can inform future strategies to detect the source of circulating tumour DNA.
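As a toy illustration of the abstract's core idea (entirely synthetic data, not the paper's deep network or the PCAWG cohort): represent each tumour by its normalised passenger-mutation spectrum and classify by nearest class centroid.

```python
import numpy as np

rng = np.random.default_rng(1)
n_types, n_channels, n_per = 3, 96, 50   # e.g. 96 trinucleotide mutation contexts
# Give each simulated cancer type a characteristic mutation spectrum.
type_spectra = rng.dirichlet(np.ones(n_channels) * 0.5, size=n_types)

X, y = [], []
for t in range(n_types):
    # Each tumour: 2000 passenger mutations drawn from its type's spectrum.
    counts = rng.multinomial(2000, type_spectra[t], size=n_per)
    X.append(counts / counts.sum(axis=1, keepdims=True))
    y += [t] * n_per
X, y = np.vstack(X), np.array(y)

# Nearest-centroid classification on the normalised spectra.
centroids = np.vstack([X[y == t].mean(axis=0) for t in range(n_types)])
pred = np.argmin(((X[:, None, :] - centroids[None]) ** 2).sum(-1), axis=1)
accuracy = (pred == y).mean()
```

Even this crude model separates the simulated types, which conveys why passenger-mutation patterns carry a strong signal of the cell of origin; the paper's classifier does far more, on real whole genomes.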
Paper
🔭 @DeepGravity
Nature
A deep learning system accurately classifies primary and metastatic cancers using passenger mutation patterns
Nature Communications - Some cancer patients first present with metastases where the location of the primary is unidentified; these are difficult to treat. In this study, using machine learning,...
ZeRO & DeepSpeed: New system optimizations enable training models with over 100 billion parameters
Link
🔭 @DeepGravity
Microsoft Research
ZeRO & DeepSpeed: New system optimizations enable training models with over 100 billion parameters - Microsoft Research
The latest trend in AI is that larger natural language models provide better accuracy; however, larger models are difficult to train because of cost, time, and ease of code integration. Microsoft is releasing an open-source library called DeepSpeed, which…
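ZeRO's core memory trick can be shown with back-of-the-envelope arithmetic (a conceptual sketch of stage-1-style optimizer-state partitioning, not the DeepSpeed API; the parameter and worker counts are made up):

```python
# Plain data parallelism replicates all optimizer state on every worker;
# ZeRO stage 1 instead shards it, so each worker keeps moments for only
# its own slice of the flattened parameters.
n_params, n_workers = 1_000_000, 8
bytes_per_moment = 4  # fp32

# Baseline: every worker stores both Adam moments for all parameters.
full_state_per_worker = 2 * n_params * bytes_per_moment
# ZeRO-1: every worker stores moments for ~1/n_workers of the parameters.
shard = (n_params + n_workers - 1) // n_workers
sharded_state_per_worker = 2 * shard * bytes_per_moment

savings = full_state_per_worker / sharded_state_per_worker  # ~n_workers-fold
```

Per-worker optimizer memory drops roughly by the number of workers, which is what lets the same GPUs hold the state for far larger models.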
AI on steroids: Much bigger neural nets to come with new hardware, say Bengio, Hinton, and LeCun
Link
🔭 @DeepGravity
ZDNet
AI on steroids: Much bigger neural nets to come with new hardware, say Bengio, Hinton, and LeCun
Novel hardware to accelerate training and inference of neural nets can lead to much larger models, perhaps someday making possible the trillion-synapse neural net, say deep learning creators Geoffrey Hinton, Yoshua Bengio, and Yann LeCun. Also important is…