Deep Gravity
#DeepFovea: Neural Reconstruction for Foveated Rendering and Video Compression using Learned Statistics of Natural Videos
In order to provide an immersive visual experience, modern displays require head mounting, high image resolution, low latency, as well as high refresh rate. This poses a challenging computational problem. On the other hand, the human visual system can consume only a tiny fraction of this video stream due to the drastic acuity loss in the peripheral vision. Foveated rendering and compression can save computations by reducing the image quality in the peripheral vision. However, this can cause noticeable artifacts in the periphery, or, if done conservatively, would provide only modest savings. In this work, we explore a novel foveated reconstruction method that employs the recent advances in generative adversarial neural networks. We reconstruct a plausible peripheral video from a small fraction of pixels provided every frame. The reconstruction is done by finding the closest matching video to this sparse input stream of pixels on the learned manifold of natural videos. Our method is more efficient than the state-of-the-art foveated rendering, while providing the visual experience with no noticeable quality degradation. We conducted a user study to validate our reconstruction method and compare it against existing foveated rendering and video compression techniques. Our method is fast enough to drive gaze-contingent head-mounted displays in real time on modern hardware. We plan to publish the trained network to establish a new quality bar for foveated rendering and compression as well as encourage follow-up research.
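The paper's pipeline starts from a sparse, gaze-contingent subset of rendered pixels. As a rough illustration of what such an input looks like, the sketch below builds a sampling mask whose density is high around the gaze point and falls off toward the periphery; the function name, fovea radius and peripheral density are illustrative assumptions, not DeepFovea's actual settings.

```python
import numpy as np

def foveated_sampling_mask(height, width, gaze_xy, fovea_radius=40.0,
                           peripheral_density=0.05, seed=None):
    """Keep every pixel near the gaze point and only a sparse random subset elsewhere.

    Illustrative approximation of a foveated sampling pattern; the radius and
    density defaults are assumptions, not the paper's parameters.
    """
    rng = np.random.default_rng(seed)
    ys, xs = np.mgrid[0:height, 0:width]
    gx, gy = gaze_xy
    eccentricity = np.sqrt((xs - gx) ** 2 + (ys - gy) ** 2)   # distance from gaze, in pixels
    # Sampling probability: 1.0 inside the fovea, decaying to a sparse floor in the periphery.
    prob = np.maximum(peripheral_density, np.exp(-(eccentricity / fovea_radius) ** 2))
    return rng.random((height, width)) < prob                 # boolean keep-mask

# Only a small fraction of a 720p frame survives outside the fovea.
mask = foveated_sampling_mask(720, 1280, gaze_xy=(640, 360))
print("fraction of pixels rendered:", mask.mean())
```

The reconstruction network itself (an adversarially trained video network in the paper) would then in-paint frames defined by such a mask; that part is not sketched here.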
Link
#Facebook
🔭 @DeepGravity
Facebook Research
DeepFovea: Neural Reconstruction for Foveated Rendering and Video Compression using Learned Statistics of Natural Videos
Multi-Object Portion Tracking in 4D Fluorescence Microscopy Imagery with #Deep Feature Maps
3D fluorescence microscopy of living organisms has increasingly become an essential and powerful tool in biomedical research and diagnosis. An exploding amount of imaging #data has been collected, whereas efficient and effective computational tools to extract information from it are still lagging behind. This is largely due to the challenges in analyzing biological data: interesting biological structures are not only small, but also often morphologically irregular and highly dynamic. Although tracking cells in live organisms has been studied for years, existing tracking methods are not effective for subcellular structures such as protein complexes, which exhibit continuous morphological changes, including splits and merges, in addition to fast migration and complex motion. In this paper, we first define the problem of multi-object portion tracking to model the protein object tracking process. A multi-object tracking method with portion matching is then proposed based on 3D segmentation results. The proposed method distills deep feature maps from deep networks, then recognizes and matches object portions using an extended search. Experimental results confirm that the proposed method achieves 2.96% higher consistent tracking accuracy and a 35.48% improvement over state-of-the-art methods.
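The paper's own matching procedure is not reproduced here. As a generic sketch of the underlying idea, the code below matches segmented object portions between consecutive frames by combining spatial overlap with deep-feature similarity in a cost matrix and solving the assignment with the Hungarian algorithm (a standard choice, not necessarily the paper's extended search); the cost weighting and function names are assumptions.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def iou(mask_a, mask_b):
    """Intersection-over-union of two boolean 3D masks."""
    union = np.logical_or(mask_a, mask_b).sum()
    return np.logical_and(mask_a, mask_b).sum() / union if union else 0.0

def match_portions(prev_masks, prev_feats, curr_masks, curr_feats, alpha=0.5):
    """Match object portions between consecutive frames.

    prev_masks / curr_masks: lists of boolean 3D segmentation masks.
    prev_feats / curr_feats: per-object feature vectors (e.g. pooled deep feature maps).
    alpha trades off spatial overlap against feature similarity; this weighting is an
    assumed heuristic, not the paper's exact cost.
    Returns a list of (prev_index, curr_index) pairs.
    """
    cost = np.zeros((len(prev_masks), len(curr_masks)))
    for i, (m_i, f_i) in enumerate(zip(prev_masks, prev_feats)):
        for j, (m_j, f_j) in enumerate(zip(curr_masks, curr_feats)):
            overlap = iou(m_i, m_j)
            sim = f_i @ f_j / (np.linalg.norm(f_i) * np.linalg.norm(f_j) + 1e-8)
            cost[i, j] = -(alpha * overlap + (1 - alpha) * sim)  # negate: assignment minimizes cost
    rows, cols = linear_sum_assignment(cost)
    return list(zip(rows.tolist(), cols.tolist()))
```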
Link
🔭 @DeepGravity
The Mind at Work: Guido van #Rossum on how #Python makes thinking in code easier
A conversation with the creator of the world’s most popular programming language on removing brain friction for better work
“You primarily write your code to communicate with other coders, and, to a lesser extent, to impose your will on the computer.”
—Guido van Rossum
Link
🔭 @DeepGravity
Dropbox
The Mind at Work: Guido van Rossum on how Python makes thinking in code easier
A massive new #blackhole discovered in the #MilkyWay!
Astronomers in China made a surprising discovery: a massive black hole in our galaxy. Called LB-1, this one is about 70 times the mass of the sun. It's the first time a black hole of this size has been detected in the Milky Way.
Link CNN
YouTube
🔭 @DeepGravity
CNN
Scientists find ‘monster’ black hole so big it shouldn’t exist | CNN
Scientists have discovered the black hole LB-1, so massive that it shouldn’t exist.
A #DeepLearning framework for #neuroscience
Systems neuroscience seeks explanations for how the brain implements a wide variety of perceptual, #cognitive and motor tasks. Conversely, #ArtificialIntelligence attempts to design computational systems based on the tasks they will have to solve. In artificial #NeuralNetworks, the three components specified by design are the objective functions, the learning rules and the architectures. With the growing success of #deep learning, which utilizes brain-inspired architectures, these three designed components have increasingly become central to how we model, engineer and optimize complex artificial learning systems. Here we argue that a greater focus on these components would also benefit systems neuroscience. We give examples of how this optimization-based framework can drive theoretical and experimental progress in neuroscience. We contend that this principled perspective on systems neuroscience will help to generate more rapid progress.
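To make the three designed components concrete, the toy PyTorch snippet below spells each of them out: an architecture, an objective function and a learning rule. The specific choices are arbitrary placeholders for illustration, not anything proposed in the paper.

```python
import torch
from torch import nn, optim

# 1. Architecture: how the units are wired together.
architecture = nn.Sequential(nn.Linear(100, 64), nn.ReLU(), nn.Linear(64, 10))

# 2. Objective function: what the system is optimized for.
objective = nn.CrossEntropyLoss()

# 3. Learning rule: how parameters are updated from the objective's gradients.
learning_rule = optim.SGD(architecture.parameters(), lr=1e-2)

# One optimization step on a random placeholder batch.
x, y = torch.randn(32, 100), torch.randint(0, 10, (32,))
loss = objective(architecture(x), y)
learning_rule.zero_grad()
loss.backward()
learning_rule.step()
print(loss.item())  # loss value for this single step
```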
Link
🔭 @DeepGravity
Nature Neuroscience
A deep learning framework for neuroscience
A deep network is best understood in terms of components used to design it—objective functions, architecture and learning rules—rather than unit-by-unit computation. Richards et al. argue that this inspires fruitful approaches to systems neuroscience.
The Top 10 #Books on #AI recommended by #ElonMusk and #BillGates
1 Superintelligence: Paths, Dangers, Strategies
2 The Singularity Is Near: When Humans Transcend Biology
3 Life 3.0: Being Human in the Age of Artificial Intelligence
4 Singularity Rising: Surviving and Thriving in a Smarter, Richer, and More Dangerous World
5 The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies
6 Machine, Platform, Crowd: Harnessing Our Digital Future
7 AI Superpowers: China, Silicon Valley, and the New World Order
8 The Sentient Machine: The Coming Age of Artificial Intelligence
9 Our Final Invention: Artificial Intelligence and the End of the Human Era
10 Army of None: Autonomous Weapons and the Future of War
Link
🔭 @DeepGravity
Medium
The Top 10 Books on AI recommended by Elon Musk and Bill Gates
The best books on AI for leaders in the new Age, recommended by the likes of Elon Musk, Bill Gates, Eric Schmidt, Reid Hoffman, Mark…
3 Main Approaches to #MachineLearning Models
Machine learning encompasses a vast set of conceptual approaches. We classify the three main #algorithmic methods based on #mathematical foundations to guide your exploration for developing models.
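The article's own taxonomy is behind the link. As one common (and here assumed, not necessarily the article's) three-way split, models are often grouped into geometric, probabilistic and logical/rule-based families; the scikit-learn snippet below instantiates one representative of each on a toy dataset.

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression   # geometric: decision boundaries in feature space
from sklearn.naive_bayes import GaussianNB            # probabilistic: models class-conditional densities
from sklearn.tree import DecisionTreeClassifier       # logical/rule-based: axis-aligned splits
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)
for model in (LogisticRegression(max_iter=1000), GaussianNB(), DecisionTreeClassifier()):
    score = cross_val_score(model, X, y, cv=5).mean()
    print(f"{model.__class__.__name__}: {score:.3f}")
```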
Link
🔭 @DeepGravity
KDnuggets
3 Main Approaches to Machine Learning Models - KDnuggets
François #Chollet is the creator of #Keras, an open source #DeepLearning library designed to enable fast, user-friendly experimentation with #deepNeuralNetworks. It serves as an interface to several deep learning libraries, the most popular of which is #TensorFlow, and it was integrated into the main TensorFlow codebase a while back. Aside from creating an exceptionally useful and popular library, François is also a world-class #AI researcher and software engineer at #Google, and is definitely an outspoken, if not controversial, personality in the AI world, especially when it comes to ideas about the future of #ArtificialIntelligence. This conversation is part of the Artificial Intelligence podcast.
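For readers new to Keras, this is what the "fast, user-friendly experimentation" workflow looks like in practice: define a model, compile it, fit it. The layer sizes and toy data below are arbitrary.

```python
import numpy as np
import tensorflow as tf

# A minimal Keras workflow: define a model, compile it, fit it.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(20,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Toy data, just to make the example runnable end to end.
x = np.random.rand(256, 20).astype("float32")
y = (x.sum(axis=1) > 10).astype("float32")
model.fit(x, y, epochs=3, batch_size=32, verbose=0)
print(model.evaluate(x, y, verbose=0))
```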
Link
🔭 @DeepGravity
YouTube
François Chollet: Keras, Deep Learning, and the Progress of AI | Lex Fridman Podcast #38
Google’s new ‘#ExplainableAI’ (#xAI) service
#Google has started offering a new service for “explainable AI”, or XAI as it is fashionably called. The tools offered at present are modest, but the intent is in the right direction.
Link
🔭 @DeepGravity
Medium
Google’s new ‘Explainable AI’ (xAI) service
Google has started offering a new service for “explainable AI” or XAI, as it is fashionably called. We take a look at the intent.
Gilbert Strang: #DeepLearning and #NeuralNetworks
Part of Lex Fridman’s conversation with Gilbert Strang
Gilbert Strang is a professor of mathematics at #MIT and perhaps one of the most famous and impactful teachers of #math in the world. His MIT OpenCourseWare lectures on linear algebra have been viewed millions of times.
🔭 @DeepGravity
YouTube
Gilbert Strang: Deep Learning and Neural Networks
Full episode with Gilbert Strang (Nov 2019): https://www.youtube.com/watch?v=lEZPfmGCEk0
#Meta #TransferLearning for factorizing representations and knowledge for #AI - Yoshua #Bengio
Abstract:
Whereas #MachineLearning theory has focused on generalization to examples from the same distribution as the training data, a better understanding of transfer scenarios, where the observed distribution changes often during the lifetime of the learning agent, is important both for robust deployment and for achieving the more powerful form of generalization that humans seem to enjoy and that learning agents appear to need. Whereas most machine learning algorithms and architectures can be traced back to assumptions about the training distribution, we also need to explore assumptions about how the observed distribution changes. We propose that sparsity of change in distribution is a good assumption for this purpose: if it holds and knowledge is represented appropriately, it leads to fast adaptation to changes in distribution, so the speed of adaptation can be used as a meta-objective that drives the discovery of knowledge representations compatible with the assumption. We illustrate these ideas in causal discovery: is some variable a direct cause of another, and how can raw data be mapped to a representation space whose dimensions correspond to causal variables with clear causal relationships? We propose a large research program in which this non-stationarity assumption and meta-transfer objective are combined with other closely related assumptions about the world embodied in a world model, such as the consciousness prior (the causal graph is captured by a sparse factor graph) and the assumption that the causal variables are often those an agent can act upon (the independently controllable factors prior), both of which should be useful for agents that plan, imagine and try to find explanations for what they observe.
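The central claim, that the correct causal factorization adapts faster after an intervention and that this adaptation speed can therefore serve as a meta-objective, can be illustrated with a deliberately tiny two-variable experiment. The sketch below is in the spirit of the talk, but the toy sizes and training settings are assumptions, not the authors' setup.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
N = 10  # categories per discrete variable

def sample(p_a, p_b_given_a, n=256):
    """Sample (A, B) from the ground-truth causal model A -> B."""
    a = torch.multinomial(p_a, n, replacement=True)
    b = torch.multinomial(p_b_given_a[a], 1).squeeze(1)
    return a, b

class Factorization(torch.nn.Module):
    """Models p(first) * p(second | first) with free logits over N categories."""
    def __init__(self):
        super().__init__()
        self.marginal = torch.nn.Parameter(torch.zeros(N))
        self.conditional = torch.nn.Parameter(torch.zeros(N, N))

    def nll(self, first, second):
        logp = (F.log_softmax(self.marginal, 0)[first]
                + F.log_softmax(self.conditional, 1)[first, second])
        return -logp.mean()

def fit(model, batches, lr=0.1):
    """Online training; returns the cumulative NLL (lower = faster adaptation)."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    total = 0.0
    for first, second in batches:
        loss = model.nll(first, second)
        opt.zero_grad()
        loss.backward()
        opt.step()
        total += loss.item()
    return total

# Ground truth: a random marginal p(A) and mechanism p(B|A).
p_a = torch.softmax(torch.randn(N), 0)
p_b_given_a = torch.softmax(torch.randn(N, N), 1)

# Pretrain both candidate factorizations on the training distribution.
model_ab, model_ba = Factorization(), Factorization()
pretrain = [sample(p_a, p_b_given_a) for _ in range(300)]
fit(model_ab, pretrain)                                   # models p(A) p(B|A)
fit(model_ba, [(b, a) for a, b in pretrain])              # models p(B) p(A|B)

# Intervention: only p(A) changes; the mechanism p(B|A) stays fixed.
p_a_shifted = torch.softmax(torch.randn(N), 0)
adapt = [sample(p_a_shifted, p_b_given_a) for _ in range(20)]
loss_ab = fit(model_ab, adapt)
loss_ba = fit(model_ba, [(b, a) for a, b in adapt])

# The causal factorization usually adapts with a smaller cumulative loss.
print("A->B adaptation loss:", loss_ab, "  B->A adaptation loss:", loss_ba)
```

On most seeds the A->B factorization accumulates less loss while adapting, because only its marginal over A has to change; that gap is the kind of signal the proposed meta-objective would exploit.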
Lecture
🔭 @DeepGravity
YouTube
Meta transfer learning for factorizing representations and knowledge for AI - Yoshua Bengio
#NeuralNetworks: Feedforward and #Backpropagation Explained & Optimization
What are neural networks? Developers should understand backpropagation to figure out why their code sometimes does not work. A visual and down-to-earth explanation of the math behind backpropagation.
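To make the feedforward/backpropagation pairing concrete, here is a minimal two-layer network written from scratch in NumPy; the XOR task, layer sizes and learning rate are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: XOR, the classic task a linear model cannot fit.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Parameters of a 2-16-1 network with sigmoid activations.
W1, b1 = rng.normal(0, 1, (2, 16)), np.zeros(16)
W2, b2 = rng.normal(0, 1, (16, 1)), np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
lr = 0.5

for step in range(5000):
    # Feedforward: propagate inputs through the layers.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backpropagation: chain rule applied layer by layer (squared-error loss).
    d_out = (out - y) * out * (1 - out)        # dL/d(pre-activation of output layer)
    d_h = (d_out @ W2.T) * h * (1 - h)         # dL/d(pre-activation of hidden layer)
    W2 -= lr * h.T @ d_out;  b2 -= lr * d_out.sum(0)
    W1 -= lr * X.T @ d_h;    b1 -= lr * d_h.sum(0)

print(np.round(out, 2))  # should approach [[0], [1], [1], [0]]
```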
Link
🔭 @DeepGravity
Machine Learning From Scratch
Neural Networks: Feedforward and Backpropagation Explained
Semantic Segmentation of Thigh Muscle using 2.5D #DeepLearning Network Trained with Limited Datasets
Purpose: We propose a 2.5D #deep learning #NeuralNetwork (#DLNN) to automatically classify thigh muscle into 11 classes and evaluate its classification accuracy against 2D and 3D DLNNs when trained with limited datasets. This enables operator-invariant, quantitative assessment of thigh muscle volume change with disease progression. Materials and methods: The retrospective datasets consist of 48 thigh volumes (TV) cropped from CT DICOM images. Cropped volumes were aligned with the femur axis and resampled to 2 mm voxel spacing. The proposed 2.5D DLNN consists of three 2D U-Nets trained on axial, coronal and sagittal muscle slices, respectively. A voting algorithm was used to combine the outputs of the U-Nets into the final segmentation. The 2.5D U-Net was trained on a PC with 38 TVs, and the remaining 10 TVs were used to evaluate the segmentation accuracy of 10 classes within the thigh. The resulting segmentations of both the left and right thigh were de-cropped back to the original CT volume space. Finally, segmentation accuracies were compared between the proposed DLNN and the 2D/3D U-Nets. Results: The average DSC score across all classes with the 2.5D U-Net was 91.18%; the mean DSC score of the 2D U-Net was 3.3% lower and that of the 3D U-Net was 5.7% lower on the same datasets. Conclusion: We achieved a fast, computationally efficient and automatic segmentation of thigh muscle into 11 classes with reasonable accuracy, enabling quantitative evaluation of muscle atrophy with disease progression.
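The voting step in the abstract, combining the three per-orientation U-Net outputs into one label volume, can be sketched as follows. The abstract does not spell out the exact rule, so a soft vote (mean of class probabilities) is assumed here.

```python
import numpy as np

def combine_by_voting(prob_axial, prob_coronal, prob_sagittal):
    """Fuse per-orientation predictions into a single label volume.

    Each input has shape (num_classes, D, H, W): per-voxel class probabilities
    predicted by a 2D U-Net run slice-by-slice along one orientation and
    re-stacked into the common volume space. A soft vote (mean probability)
    is assumed; the paper's exact voting rule may differ.
    """
    mean_prob = (prob_axial + prob_coronal + prob_sagittal) / 3.0
    return np.argmax(mean_prob, axis=0)   # (D, H, W) label volume

# Example with random placeholders standing in for network outputs.
shape = (12, 4, 64, 64)                   # 11 muscle classes + background, small toy volume
preds = [np.random.dirichlet(np.ones(shape[0]), size=shape[1:]).transpose(3, 0, 1, 2)
         for _ in range(3)]
labels = combine_by_voting(*preds)
print(labels.shape)
```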
Link
🔭 @DeepGravity
#Classification-driven Single Image Dehazing
Most existing dehazing algorithms use hand-crafted features or #ConvolutionalNeuralNetworks (#CNN)-based methods to generate clear images with a pixel-level Mean Square Error (MSE) loss. The generated images generally have better visual appeal, but do not always perform better on high-level vision tasks, e.g. image classification. In this paper, we investigate a new point of view on this problem. Instead of focusing only on achieving good quantitative performance on pixel-based metrics such as Peak Signal-to-Noise Ratio (PSNR), we also ensure that the dehazed image does not degrade the performance of high-level vision tasks such as image classification. To this end, we present a unified CNN architecture that includes three parts: a dehazing sub-network (DNet), a classification-driven Conditional #GenerativeAdversarialNetworks sub-network (CCGAN) and a classification sub-network (CNet), and that performs better both in visual appeal and in image classification. We conduct comprehensive experiments on two challenging benchmark datasets for fine-grained and object classification, CUB-200-2011 and Caltech-256. Experimental results demonstrate that the proposed method outperforms many recent state-of-the-art single-image dehazing methods in terms of both image dehazing metrics and classification accuracy.
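The key idea, training the dehazer for downstream classification as well as pixel fidelity, amounts to a combined objective of roughly the following form. This is a sketch only: the sub-network interfaces, loss weights and the conditional-GAN term are placeholder assumptions, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def dehazing_training_loss(dnet, cnet, discriminator, hazy, clean, labels,
                           w_pixel=1.0, w_cls=0.1, w_adv=0.01):
    """One combined objective for a classification-driven dehazer.

    dnet: dehazing network, cnet: image classifier, discriminator: conditional GAN critic.
    The weights are illustrative assumptions.
    """
    dehazed = dnet(hazy)

    # Pixel-level fidelity: the usual MSE / PSNR-oriented term.
    loss_pixel = F.mse_loss(dehazed, clean)

    # Classification-driven term: the dehazed image should still be classified correctly.
    loss_cls = F.cross_entropy(cnet(dehazed), labels)

    # Adversarial term: the critic should judge the dehazed image as a real clear image,
    # conditioned on the hazy input (non-saturating generator loss assumed here).
    fake_score = discriminator(torch.cat([hazy, dehazed], dim=1))
    loss_adv = F.binary_cross_entropy_with_logits(fake_score, torch.ones_like(fake_score))

    return w_pixel * loss_pixel + w_cls * loss_cls + w_adv * loss_adv
```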
Link
🔭 @DeepGravity
A nice and simple introduction to #MachineLearning
Machine Learning is undeniably one of the most influential and powerful technologies in today’s world. More importantly, we are far from seeing its full potential, and there is no doubt it will continue to make headlines for the foreseeable future. This article is designed as an introduction to Machine Learning concepts, covering the fundamental ideas without being too high-level.
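As the kind of "hello world" such introductions usually build toward, here is a complete supervised-learning example in scikit-learn: fit a model on labeled data, then check how well it generalizes to held-out data. The dataset and model choice are arbitrary.

```python
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Labeled examples: 8x8 images of handwritten digits and their true labels.
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# Learn a mapping from images to labels, then evaluate on data the model has not seen.
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)
print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))
```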
Link
🔭 @DeepGravity
Medium
Machine Learning | An Introduction
An introduction to Machine Learning and its 4 approaches