Deep Gravity
393 subscribers
60 photos
35 videos
17 files
495 links
AI

Contact:
DeepL.Gravity@gmail.com
The Mind at Work: Guido van #Rossum on how #Python makes thinking in code easier

A conversation with the creator of the world’s most popular programming language on removing brain friction for better work

“You primarily write your code to communicate with other coders, and, to a lesser extent, to impose your will on the computer.”
—Guido van Rossum

Link

🔭 @DeepGravity
A massive new #blackhole discovered in the #MilkyWay!

Astronomers in China made a surprising discovery: a massive black hole in our galaxy. Called LB-1, this one is about 70 times the mass of the sun. It's the first time a black hole of this size has been detected in the Milky Way.

Link CNN

YouTube

🔭 @DeepGravity
A #DeepLearning framework for #neuroscience

Systems neuroscience seeks explanations for how the brain implements a wide variety of perceptual, #cognitive and motor tasks. Conversely, #ArtificialIntelligence attempts to design computational systems based on the tasks they will have to solve. In artificial #NeuralNetworks, the three components specified by design are the objective functions, the learning rules and the architectures. With the growing success of #deep learning, which utilizes brain-inspired architectures, these three designed components have increasingly become central to how we model, engineer and optimize complex artificial learning systems. Here we argue that a greater focus on these components would also benefit systems neuroscience. We give examples of how this optimization-based framework can drive theoretical and experimental progress in neuroscience. We contend that this principled perspective on systems neuroscience will help to generate more rapid progress.
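
To make the three designed components concrete, here is a minimal sketch (my own, not from the paper) showing where architecture, objective function and learning rule each appear in a tiny artificial network; all sizes and values are arbitrary:

```python
import torch
from torch import nn, optim

# 1) Architecture: the wiring of the network (layer sizes here are arbitrary).
model = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10))

# 2) Objective function: what the system is optimized for.
objective = nn.CrossEntropyLoss()

# 3) Learning rule: how the parameters change in response to the objective
#    (here backprop + SGD, one of many possible rules).
learning_rule = optim.SGD(model.parameters(), lr=0.1)

# One optimization step on a random batch, just to show how the pieces interact.
x, y = torch.randn(32, 784), torch.randint(0, 10, (32,))
loss = objective(model(x), y)
learning_rule.zero_grad()
loss.backward()
learning_rule.step()
```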

Link

🔭 @DeepGravity
The Top 10 #Books on #AI recommended by #ElonMusk and #BillGates

1 Superintelligence: Paths, Dangers, Strategies
2 The Singularity Is Near: When Humans Transcend Biology
3 Life 3.0: Being Human in the Age of Artificial Intelligence
4 Singularity Rising: Surviving and Thriving in a Smarter, Richer, and More Dangerous World
5 The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies
6 Machine, Platform, Crowd: Harnessing Our Digital Future
7 AI Superpowers: China, Silicon Valley, and the New World Order
8 The Sentient Machine: The Coming Age of Artificial Intelligence
9 Our Final Invention: Artificial Intelligence and the End of the Human Era
10 Army of None: Autonomous Weapons and the Future of War

Link

🔭 @DeepGravity
François #Chollet is the creator of #Keras, an open-source #DeepLearning library designed to enable fast, user-friendly experimentation with #deepNeuralNetworks. It serves as an interface to several deep learning libraries, the most popular of which is #TensorFlow, and it was integrated into the main TensorFlow codebase a while back. Aside from creating an exceptionally useful and popular library, François is also a world-class #AI researcher and software engineer at #Google, and is definitely an outspoken, if not controversial, personality in the AI world, especially in the realm of ideas around the future of #ArtificialIntelligence. This conversation is part of the Artificial Intelligence podcast.
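
For a sense of the fast, user-friendly experimentation Keras is built for, here is a minimal sketch; the data is random and the model sizes are arbitrary:

```python
import numpy as np
from tensorflow import keras

# Random stand-in data for a binary classification problem.
x = np.random.rand(256, 20).astype("float32")
y = np.random.randint(0, 2, size=(256,))

# Define, compile and fit a small model in a few lines.
model = keras.Sequential([
    keras.layers.Dense(32, activation="relu", input_shape=(20,)),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x, y, epochs=3, batch_size=32, verbose=0)
```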

Link

🔭 @DeepGravity
Google’s new “#ExplainableAI” (#xAI) service

#Google has started offering a new service for “explainable AI”, or XAI as it is fashionably called. The tools offered so far are modest, but the intent is a step in the right direction.

Link

🔭 @DeepGravity
#Meta #TransferLearning for factorizing representations and knowledge for #AI - Yoshua #Bengio

Abstract:
Whereas #MachineLearning theory has focused on generalization to examples from the same distribution as the training data, a better understanding of transfer scenarios, where the observed distribution changes often over the lifetime of the learning agent, is important both for robust deployment and for achieving the more powerful form of generalization that humans seem able to enjoy and that seems necessary for learning agents. Whereas most machine learning algorithms and architectures can be traced back to assumptions about the training distributions, we also need to explore assumptions about how the observed distribution changes. We propose that sparsity of change in distribution, when knowledge is represented appropriately, is a good assumption for this purpose; if that assumption is verified and knowledge is represented appropriately, it leads to fast adaptation to changes in distribution, and thus the speed of adaptation to changes in distribution can be used as a meta-objective that drives the discovery of knowledge representations compatible with that assumption. We illustrate these ideas in causal discovery: is some variable a direct cause of another, and how can raw data be mapped to a representation space in which different dimensions correspond to causal variables with clear causal relationships between them? We propose a large research program in which this non-stationarity assumption and meta-transfer objective are combined with other closely related assumptions about the world embodied in a world model, such as the consciousness prior (the causal graph is captured by a sparse factor graph) and the assumption that the causal variables are often those agents can act upon (the independently controllable factors prior), both of which should be useful for agents that plan, imagine and try to find explanations for what they observe.
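
As a rough toy illustration of the "speed of adaptation as a meta-objective" idea (my own sketch, not the lecture's code): two factorizations of the same pair of binary variables are fit to the training distribution; after an intervention on the cause, the model factored in the true causal direction recovers a good likelihood in fewer adaptation steps. All distributions, step counts and learning rates below are made up.

```python
import torch
from torch import nn

def sample(p_a, p_b_given_a, n=2000):
    """Sample binary pairs (A, B) where the true mechanism is A -> B."""
    a = torch.bernoulli(torch.full((n,), p_a))
    b = torch.bernoulli(torch.where(a == 1, p_b_given_a[1], p_b_given_a[0]))
    return a, b

class Factorized(nn.Module):
    """Logit-parameterized model P(X) * P(Y | X) over two binary variables."""
    def __init__(self):
        super().__init__()
        self.x_logit = nn.Parameter(torch.zeros(1))
        self.y_logits = nn.Parameter(torch.zeros(2))  # one logit per value of X

    def nll(self, x, y):
        bce = nn.functional.binary_cross_entropy_with_logits
        return bce(self.x_logit.expand_as(x), x) + bce(self.y_logits[x.long()], y)

def adapt(model, x, y, steps=20, lr=0.5):
    """Run a few gradient steps and return the final negative log-likelihood."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(steps):
        loss = model.nll(x, y)
        opt.zero_grad(); loss.backward(); opt.step()
    return model.nll(x, y).item()

p_b_given_a = torch.tensor([0.2, 0.9])   # the mechanism P(B | A), kept fixed
causal, anticausal = Factorized(), Factorized()

# Fit both factorizations on the training distribution (P(A=1) = 0.3).
a, b = sample(0.3, p_b_given_a)
adapt(causal, a, b, steps=500)
adapt(anticausal, b, a, steps=500)

# Intervene on the cause: shift P(A) while leaving the mechanism untouched.
a2, b2 = sample(0.8, p_b_given_a)
print("NLL after 20 adaptation steps -",
      "causal:", round(adapt(causal, a2, b2), 3),
      "anti-causal:", round(adapt(anticausal, b2, a2), 3))
# The causal factorization typically reaches a lower NLL, because only P(A)
# had to change; that gap is the signal the meta-objective exploits.
```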

Lecture

🔭 @DeepGravity
#NeuralNetworks: Feedforward and #Backpropagation Explained & Optimization

What are neural networks? Developers should understand backpropagation in order to figure out why their code sometimes does not work. A visual, down-to-earth explanation of the math behind backpropagation and the optimization step that follows it.
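
As a taste of what the article covers, here is a minimal from-scratch sketch of the feedforward pass, backpropagation and the gradient-descent update; the XOR data, layer sizes and learning rate are arbitrary choices of mine:

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)          # XOR targets

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    # Feedforward pass.
    h = sigmoid(X @ W1 + b1)              # hidden activations
    p = sigmoid(h @ W2 + b2)              # predictions

    # Backpropagation: the chain rule applied layer by layer (squared-error loss).
    dp = (p - y) * p * (1 - p)            # dLoss/dz2
    dW2, db2 = h.T @ dp, dp.sum(axis=0)
    dh = dp @ W2.T * h * (1 - h)          # dLoss/dz1
    dW1, db1 = X.T @ dh, dh.sum(axis=0)

    # Gradient-descent update (the optimization step).
    lr = 0.5
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print(np.round(p.ravel(), 2))             # should approach [0, 1, 1, 0]
```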

Link

🔭 @DeepGravity
Semantic Segmentation of Thigh Muscle using 2.5D #DeepLearning Network Trained with Limited Datasets

Purpose: We propose a 2.5D #DeepLearning #NeuralNetwork (#DLNN) to automatically classify thigh muscle into 11 classes and evaluate its classification accuracy against 2D and 3D DLNNs when trained with limited datasets. This enables operator-invariant quantitative assessment of thigh muscle volume change with disease progression. Materials and methods: The retrospective datasets consist of 48 thigh volumes (TVs) cropped from CT DICOM images. Cropped volumes were aligned with the femur axis and resampled to 2 mm voxel spacing. The proposed 2.5D DLNN consists of three 2D U-Nets trained on axial, coronal and sagittal muscle slices, respectively, with a voting algorithm combining the U-Net outputs into the final segmentation. The 2.5D U-Net was trained on a PC with 38 TVs, and the remaining 10 TVs were used to evaluate segmentation accuracy for 10 classes within the thigh. The resulting segmentations of both left and right thighs were de-cropped back to the original CT volume space. Finally, segmentation accuracies were compared between the proposed DLNN and the 2D/3D U-Nets. Results: The average DSC score across all classes was 91.18 with the 2.5D U-Net; the mean DSC scores of the 2D and 3D U-Nets were 3.3 and 5.7 points lower, respectively, on the same datasets. Conclusion: We achieved a fast, computationally efficient and automatic segmentation of thigh muscle into 11 classes with reasonable accuracy, enabling quantitative evaluation of muscle atrophy with disease progression.
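
The paper does not spell out the exact voting rule, but a toy sketch of the fusion step might look like the following; the shapes, random inputs and tie-breaking choice are assumptions of mine, not the authors' code:

```python
import numpy as np

N_CLASSES = 11
vol_shape = (32, 32, 32)                       # made-up volume size

# Stand-ins for the per-voxel class probabilities from the axial, coronal and
# sagittal U-Nets, each already reassembled into the full 3D volume.
axial, coronal, sagittal = (np.random.rand(*vol_shape, N_CLASSES) for _ in range(3))

# Each plane casts one vote per voxel: its argmax class.
votes = np.stack([p.argmax(axis=-1) for p in (axial, coronal, sagittal)])

# Majority vote per voxel; when all three planes disagree, fall back to the
# class with the highest summed probability.
counts = np.eye(N_CLASSES, dtype=int)[votes].sum(axis=0)   # (D, H, W, C)
majority = counts.argmax(axis=-1)
all_disagree = counts.max(axis=-1) == 1
fallback = (axial + coronal + sagittal).argmax(axis=-1)
segmentation = np.where(all_disagree, fallback, majority)

print(segmentation.shape, segmentation.min(), segmentation.max())
```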

Link

🔭 @DeepGravity
#Classification-driven Single Image Dehazing

Most existing dehazing algorithms use hand-crafted features or #ConvolutionalNeuralNetwork (#CNN)-based methods to generate clear images using a pixel-level Mean Square Error (MSE) loss. The generated images generally have better visual appeal, but do not always perform better on high-level vision tasks, e.g. image classification. In this paper, we investigate a new point of view for addressing this problem. Instead of focusing only on achieving good quantitative performance on pixel-based metrics such as Peak Signal to Noise Ratio (PSNR), we also ensure that the dehazed image itself does not degrade the performance of high-level vision tasks such as image classification. To this end, we present a unified CNN architecture with three parts: a dehazing sub-network (DNet), a classification-driven Conditional #GenerativeAdversarialNetwork sub-network (CCGAN) and a classification sub-network (CNet), which achieves better performance in both visual appeal and image classification. We conduct comprehensive experiments on two challenging benchmark datasets for fine-grained and object classification: CUB-200-2011 and Caltech-256. Experimental results demonstrate that the proposed method outperforms many recent state-of-the-art single-image dehazing methods in terms of both dehazing metrics and classification accuracy.
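
A rough sketch (mine, not the authors' code) of how the three parts can be trained jointly: the dehazer is driven by a weighted sum of a pixel-level MSE loss, an adversarial loss from the conditional GAN and a classification loss. The dummy networks, image sizes and loss weights below are placeholders:

```python
import torch
from torch import nn

# Dummy stand-ins for the three sub-networks (the real ones are much larger).
dnet = nn.Sequential(nn.Conv2d(3, 3, 3, padding=1))              # dehazing net
cnet = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 200))  # classifier
disc = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 1))    # discriminator

hazy = torch.rand(4, 3, 32, 32)
clear = torch.rand(4, 3, 32, 32)
labels = torch.randint(0, 200, (4,))

dehazed = dnet(hazy)
pixel_loss = nn.functional.mse_loss(dehazed, clear)               # low-level fidelity
adv_loss = nn.functional.binary_cross_entropy_with_logits(
    disc(dehazed), torch.ones(4, 1))                              # fool the discriminator
cls_loss = nn.functional.cross_entropy(cnet(dehazed), labels)     # stay classifiable

# The weighted sum pushes the dehazer toward images that are both close to the
# ground truth and easy to classify (the weights here are arbitrary).
total = pixel_loss + 0.1 * adv_loss + 0.5 * cls_loss
total.backward()
```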

Link

🔭 @DeepGravity
A nice and simple introduction to #MachineLearning

Machine Learning is undeniably one of the most influential and powerful technologies in today’s world. More importantly, we are far from seeing its full potential, and there is no doubt that it will continue to make headlines for the foreseeable future. This article is designed as an introduction to Machine Learning concepts, covering all the fundamental ideas without being too high-level.

Link

🔭 @DeepGravity
How to Perform #ObjectDetection With #YOLOv3 in #Keras

After completing this tutorial, you will know:

* The YOLO-based #ConvolutionalNeuralNetwork family of models for object detection, and the most recent variation, YOLOv3.
* The best-of-breed open-source library implementation of YOLOv3 for the Keras deep learning library.
* How to use a pre-trained YOLOv3 model to perform object localization and detection on new photographs (a rough sketch of this step follows below).
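
Here is a rough sketch of the prediction step, assuming the model has already been converted to a Keras .h5 file; the filename, photo path, input size and the omitted anchor-based decoding of the outputs are assumptions of mine, not the tutorial's exact API:

```python
import numpy as np
from tensorflow import keras

model = keras.models.load_model("yolov3.h5")                # hypothetical path

image = keras.preprocessing.image.load_img("photo.jpg",     # hypothetical photo
                                            target_size=(416, 416))
x = keras.preprocessing.image.img_to_array(image) / 255.0
x = np.expand_dims(x, axis=0)

# YOLOv3 predicts at three scales; each output encodes box offsets, objectness
# and class scores that still need to be decoded against the anchor boxes
# (the tutorial's library provides helpers for that step).
outputs = model.predict(x)
for o in outputs:
    print(o.shape)
```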

Link

🔭 @DeepGravity
Iteratively-Refined Interactive 3D Medical Image Segmentation with Multi-Agent #ReinforcementLearning

Existing automatic 3D image segmentation methods usually fail to meet the needs of clinical use. Many studies have explored interactive strategies that improve segmentation performance by iteratively incorporating user hints; however, the dynamic process of successive interactions is largely ignored. Here we propose to model the dynamic process of iterative interactive image segmentation as a #MarkovDecisionProcess (#MDP) and solve it with reinforcement learning (#RL). Unfortunately, it is intractable to use single-agent RL for voxel-wise prediction due to the large exploration space. To reduce the exploration space to a tractable size, we treat each voxel as an agent with a shared voxel-level behavior strategy, so that the problem can be solved with multi-agent reinforcement learning. An additional advantage of this multi-agent model is that it captures the dependency among voxels for the segmentation task. Meanwhile, to enrich the information carried over from previous segmentations, we retain the prediction uncertainty in the state space of the MDP and derive an adjustment action space that leads to a more precise and finer segmentation. In addition, to improve the efficiency of exploration, we design a relative cross-entropy gain-based reward to update the policy in a constrained direction. Experimental results on various medical datasets show that our method significantly outperforms existing state-of-the-art methods, with the advantages of fewer interactions and faster convergence.
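
My own reading of the "relative cross-entropy gain" reward, as a small sketch rather than the authors' code: each voxel is rewarded by how much the latest interaction reduced the cross-entropy between its predicted probability and the ground truth. The toy volumes and the "helpful update" below are made up:

```python
import numpy as np

def cross_entropy(p, label, eps=1e-8):
    """Per-voxel binary cross-entropy between predicted probability p and label."""
    return -(label * np.log(p + eps) + (1 - label) * np.log(1 - p + eps))

def relative_gain_reward(prev_prob, new_prob, ground_truth):
    """Positive wherever the new segmentation moved closer to the ground truth."""
    return cross_entropy(prev_prob, ground_truth) - cross_entropy(new_prob, ground_truth)

# Toy volumes: previous and updated per-voxel foreground probabilities.
gt = np.random.randint(0, 2, size=(8, 8, 8)).astype(float)
prev = np.random.rand(8, 8, 8)
new = np.clip(prev + 0.2 * (gt - prev), 0, 1)   # pretend the update helped a bit

reward = relative_gain_reward(prev, new, gt)
print(reward.mean())   # positive on average, since the update moved toward gt
```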

Link

🔭 @DeepGravity