A deep learning system accurately classifies primary and metastatic cancers using passenger mutation patterns
Abstract
In cancer, the primary tumour’s organ of origin and histopathology are the strongest determinants of its clinical behaviour, but in 3% of cases a patient presents with a metastatic tumour and no obvious primary. Here, as part of the ICGC/TCGA Pan-Cancer Analysis of Whole Genomes (PCAWG) Consortium, we train a deep learning classifier to predict cancer type based on patterns of somatic passenger mutations detected in whole genome sequencing (WGS) of 2606 tumours representing 24 common cancer types produced by the PCAWG Consortium. Our classifier achieves an accuracy of 91% on held-out tumour samples and 88% and 83% respectively on independent primary and metastatic samples, roughly double the accuracy of trained pathologists when presented with a metastatic tumour without knowledge of the primary. Surprisingly, adding information on driver mutations reduced accuracy. Our results have clinical applicability, underscore how patterns of somatic passenger mutations encode the state of the cell of origin, and can inform future strategies to detect the source of circulating tumour DNA.
Paper
🔭 @DeepGravity
Nature
A deep learning system accurately classifies primary and metastatic cancers using passenger mutation patterns
Nature Communications - Some cancer patients first present with metastases where the location of the primary is unidentified; these are difficult to treat. In this study, using machine learning,...
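The classifier's input is built from passenger-mutation patterns rather than drivers. As a rough illustration of a common starting representation (not the paper's actual feature pipeline), here is how SNVs can be binned into the standard 96 trinucleotide contexts; the tuple format and helper names are assumptions of this sketch:

```python
BASES = "ACGT"
COMPLEMENT = {"A": "T", "C": "G", "G": "C", "T": "A"}

# The 6 pyrimidine-reference substitution classes: C>A, C>G, C>T, T>A, T>C, T>G
SUBS = [("C", a) for a in "AGT"] + [("T", a) for a in "ACG"]

# 96 contexts = 6 substitution classes x 4 possible 5' bases x 4 possible 3' bases
CONTEXTS = [(five, ref, alt, three)
            for ref, alt in SUBS for five in BASES for three in BASES]
INDEX = {c: i for i, c in enumerate(CONTEXTS)}

def spectrum(mutations):
    """Count SNVs into the 96 trinucleotide-context bins.

    `mutations` is a list of (5' base, ref, alt, 3' base) tuples. SNVs with a
    purine reference (A or G) are folded onto the reverse-complement strand,
    following the usual convention for mutation spectra.
    """
    counts = [0] * len(CONTEXTS)
    for five, ref, alt, three in mutations:
        if ref in "AG":  # fold onto the pyrimidine strand
            five, ref, alt, three = (COMPLEMENT[three], COMPLEMENT[ref],
                                     COMPLEMENT[alt], COMPLEMENT[five])
        counts[INDEX[(five, ref, alt, three)]] += 1
    return counts
```

A vector like this (plus, in the paper, mutation-distribution features) is what a downstream classifier would consume.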
ZeRO & DeepSpeed: New system optimizations enable training models with over 100 billion parameters
Link
🔭 @DeepGravity
Microsoft Research
ZeRO & DeepSpeed: New system optimizations enable training models with over 100 billion parameters - Microsoft Research
The latest trend in AI is that larger natural language models provide better accuracy; however, larger models are difficult to train because of cost, time, and ease of code integration. Microsoft is releasing an open-source library called DeepSpeed, which…
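The headline numbers follow from simple bookkeeping over model states. A back-of-the-envelope sketch, using the per-parameter byte counts for mixed-precision Adam given in the ZeRO paper (the function name is illustrative, and real deployments also need activation and buffer memory):

```python
def zero_memory_per_gpu_gb(params_billion, n_gpus, stage):
    """Rough per-GPU model-state memory for mixed-precision Adam under ZeRO.

    Per parameter: 2 B fp16 weights + 2 B fp16 gradients + 12 B fp32 optimizer
    state (master weights, momentum, variance). ZeRO stage 1 partitions the
    optimizer state across GPUs, stage 2 also partitions gradients, and
    stage 3 also partitions the weights themselves.
    """
    if stage == 0:                       # plain data parallelism: full replica
        bytes_per_param = 2 + 2 + 12
    elif stage == 1:
        bytes_per_param = 2 + 2 + 12 / n_gpus
    elif stage == 2:
        bytes_per_param = 2 + (2 + 12) / n_gpus
    else:                                # stage 3: everything sharded
        bytes_per_param = (2 + 2 + 12) / n_gpus
    return params_billion * 1e9 * bytes_per_param / 2**30
```

For example, a 1-billion-parameter model needs roughly 15 GB of model state per GPU under plain data parallelism, but only a small fraction of that under stage 3 with many GPUs.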
AI on steroids: Much bigger neural nets to come with new hardware, say Bengio, Hinton, and LeCun
Link
🔭 @DeepGravity
ZDNet
AI on steroids: Much bigger neural nets to come with new hardware, say Bengio, Hinton, and LeCun
Novel hardware to accelerate training and inference of neural nets can lead to much larger models, perhaps someday making possible the trillion-synapse neural net, say deep learning creators Geoffrey Hinton, Yoshua Bengio, and Yann LeCun. Also important is…
Two nice tools for drawing deep nets for your papers/presentations:
alexlenail
PlotNeuralNet
🔭 @DeepGravity
GitHub
GitHub - HarisIqbal88/PlotNeuralNet: Latex code for making neural networks diagrams
Latex code for making neural networks diagrams. Contribute to HarisIqbal88/PlotNeuralNet development by creating an account on GitHub.
Complete Statistical Theory of Learning (Vladimir #Vapnik) | MIT Deep Learning Series
YouTube
🔭 @DeepGravity
YouTube
Complete Statistical Theory of Learning (Vladimir Vapnik) | MIT Deep Learning Series
Lecture by Vladimir Vapnik in January 2020, part of the MIT Deep Learning Lecture Series.
Slides: http://bit.ly/2ORVofC
Associated podcast conversation: https://www.youtube.com/watch?v=bQa7hpUpMzM
Series website: https://deeplearning.mit.edu
Playlist: ht…
#Position
Dear all,
Vivacity Labs, a fast-growing startup in London, is looking for a Machine Learning Researcher to work on using Reinforcement Learning for Traffic Signal Control. You would have extensive simulations & existing datasets to work with, and would be deploying your technology to live junctions in the UK from day 1.
For more details, please see:
https://angel.co/company/vivacity-labs/jobs/664275-machine-learning-researcher
Please feel free to apply directly to joinus@vivacitylabs.com with a CV and covering note.
Kind regards,
Mark
🔭 @DeepGravity
Stanford CS330: Deep Multi-Task and #MetaLearning
cs330.stanford.edu
Lecture Videos:
YouTube
🔭 @DeepGravity
Eastern European Machine Learning Summer School
6-11 July 2020, Krakow, Poland
Deep Learning and Reinforcement Learning
Link
🔭 @DeepGravity
www.eeml.eu
Forwarded from Deep Learning Achievements (InTec)
If you have written Keras code on a large project, you know there are some tiresome limitations around
distributed training
The problems can be worked around, but that takes time.
TensorFlow 1 also has another submodule, called estimator, which has received good attention and solid improvements in version 2; it is even strongly recommended for
Distributed Processing
instead of Keras. But since it has its own syntax, it does not look practical for large codebases already written in Keras.
However:
tf.keras.estimator.model_to_estimator()
solves the problem. In my tests, if you use standalone Keras directly
pip install keras
and try to do this, problems come up, but the TensorFlow 2 version handles it smoothly and well. Second, you no longer need to remember to always use
@tf.function
since this path applies those optimizations itself, and distributed processing is the main reason to use it anyway.
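A minimal sketch of the conversion described above, assuming TensorFlow 2; the toy model, shapes, and hyperparameters are placeholders, and this is an unverified illustration rather than a tested recipe:

```python
import tensorflow as tf

# Build and compile an ordinary Keras model as usual.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="relu", input_shape=(10,)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")

# Convert it to an Estimator. From here the Estimator API takes over
# training, checkpointing, and distribution (e.g. via a tf.estimator.RunConfig
# with a distribution strategy), without rewriting the Keras model itself.
estimator = tf.keras.estimator.model_to_estimator(keras_model=model)
```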
Statistical Modelling vs Machine Learning
At times it may seem that machine learning can be done these days without a sound statistical background, but people who believe this are missing important nuances. Code written to make the work easier does not negate the need for an in-depth understanding of the problem.
Link
🔭 @DeepGravity
KDnuggets
Statistical Modelling vs Machine Learning - KDnuggets
At times it may seem Machine Learning can be done these days without a sound statistical background but those people are not really understanding the different nuances. Code written to make it easier does not negate the need for an in-depth understanding…
Self-Tuning Deep Reinforcement Learning
Reinforcement learning (RL) algorithms often require expensive manual or automated hyperparameter searches in order to perform well on a new domain. This need is particularly acute in modern deep RL architectures, which often incorporate many modules and multiple loss functions. In this paper, we take a step towards addressing this issue by using metagradients (Xu et al., 2018) to tune these hyperparameters via differentiable cross-validation, whilst the agent interacts with and learns from the environment. We present the Self-Tuning Actor Critic (STAC), which uses this process to tune the hyperparameters of the usual loss function of the IMPALA actor-critic agent (Espeholt et al., 2018), to learn the hyperparameters that define auxiliary loss functions, and to balance trade-offs in off-policy learning by introducing and adapting the hyperparameters of a novel leaky V-trace operator. The method is simple to use, sample efficient and does not require a significant increase in compute. Ablative studies show that the overall performance of STAC improves as we adapt more hyperparameters. When applied to 57 games on the Atari 2600 environment over 200 million frames, our algorithm improves the median human normalized score of the baseline from 243
Paper
🔭 @DeepGravity
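The metagradient idea at the heart of the abstract can be shown on a toy problem: take one SGD step, then differentiate the post-update loss with respect to the step size itself and update the step size with that gradient. This is a hand-derived sketch on a quadratic loss, not the STAC implementation; all names and constants are illustrative:

```python
def metagradient_tune(theta=5.0, eta=0.1, meta_lr=0.01, steps=50):
    """Toy metagradient loop on the quadratic loss L(theta) = theta**2 / 2.

    Each iteration takes one inner SGD step with step size eta, then updates
    eta using the gradient of the post-update loss with respect to eta.
    """
    for _ in range(steps):
        grad = theta                    # dL/dtheta for the quadratic
        theta_new = theta - eta * grad  # inner SGD step
        # Outer loss is L(theta_new); by the chain rule,
        # dL(theta_new)/deta = theta_new * d(theta_new)/deta = theta_new * (-grad)
        meta_grad = theta_new * (-grad)
        eta -= meta_lr * meta_grad      # metagradient step on the step size
        theta = theta_new
    return theta, eta
```

Starting from a deliberately small eta, the meta-updates grow the step size while the inner loss is driven towards zero; STAC applies the same principle to many hyperparameters of a deep RL loss at once.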
Attention Augmented Convolutional Networks
Convolutional neural networks have proven to be a powerful tool for image recognition, allowing for ever-improving results in image classification (ImageNet), object detection (COCO), and other tasks. Despite their success, convolutions are limited by their locality, i.e. their inability to consider relations between different areas of an image. On the other hand, a popular mechanism which has proven success in overcoming locality is self-attention, which has shown to be able to capture long-range interactions (e.g. Show, Attend and Tell).
Article
🔭 @DeepGravity
Lyrn.AI
Attention Augmented Convolutional Networks | Lyrn.AI
Convolutional neural networks have proven to be a powerful tool for image recognition, allowing for ever-improving results in image classification (ImageNet), object detection (COCO), and other tasks. Despite their success, convolutions are limited by their…
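The locality contrast is easiest to see in code: in self-attention, every position's output is a weighted sum over all positions, with weights derived from pairwise similarity. A bare-bones, unprojected sketch in pure Python; real attention-augmented convolutions add learned query/key/value projections, multiple heads, and relative position encodings:

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def self_attention(X):
    """Unprojected single-head self-attention.

    X is a list of d-dimensional positions. Output position i is a weighted
    average of ALL positions, weighted by softmax of scaled dot products,
    so every output can depend on arbitrarily distant inputs (unlike a
    convolution, whose receptive field is a fixed local window).
    """
    d = len(X[0])
    out = []
    for q in X:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in X]
        weights = softmax(scores)
        out.append([sum(w * k[t] for w, k in zip(weights, X)) for t in range(d)])
    return out
```

Because the weights sum to one, each output is a convex combination of the inputs, which is one way to see why attention mixes information globally rather than locally.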
#probability and Coronavirus
From a #statistical point of view, there is an important difference between having some direct or indirect interaction with a person who has #coronavirus and the risk of actually getting the disease. For example, passengers in window seats have a lower chance of infection if someone carrying the virus is on the flight (see the figure). And if an infected person walks around and interacts directly or indirectly with other passengers, it is mainly the people within two seats of them who have a high probability of getting the disease.
These are just statistical observations, but we still have to be very cautious.
Note. Don’t avoid interaction with people based on race.
Credit goes to Seyed Ali Madani
🔭 @DeepGravity
Google's DeepMind just shared AI-generated predictions about the #coronavirus that could help researchers stem the global outbreak
Link
🔭 @DeepGravity
Business Insider
Google's DeepMind just shared AI-generated predictions about the coronavirus that could help researchers stem the global outbreak
These predictions were drawn from DeepMind's new deep learning system but have yet to be experimentally verified, DeepMind noted.