Deep Gravity
AI

Contact:
DeepL.Gravity@gmail.com
#Keras inventor #Chollet charts a new direction for #AI: a Q&A

#Google scientist François Chollet has made a lasting contribution to AI in the wildly popular Keras application programming interface. He now hopes to move the field toward a new approach to intelligence. He talked with ZDNet about what he hopes to accomplish.

Link

🔭 @DeepGravity
Introduction to Applied #LinearAlgebra – Vectors, Matrices, and Least Squares
by
Stephen Boyd and Lieven Vandenberghe

#Cambridge University Press

Link

#Book

🔭 @DeepGravity
#LSTM: A Search Space Odyssey
Klaus Greff, Rupesh K. Srivastava, Jan Koutník, Bas R. Steunebrink, Jürgen #Schmidhuber

Abstract—Several variants of the Long Short-Term Memory (LSTM) architecture for recurrent neural networks have been proposed since its inception in 1995. In recent years, these networks have become the state-of-the-art models for a variety of machine learning problems. This has led to a renewed interest in understanding the role and utility of various computational components of typical LSTM variants. In this paper, we present the first large-scale analysis of eight LSTM variants on three representative tasks: speech recognition, handwriting recognition, and polyphonic music modeling. The hyperparameters of all LSTM variants for each task were optimized separately using random search, and their importance was assessed using the powerful fANOVA framework. In total, we summarize the results of 5400 experimental runs (≈ 15 years of CPU time), which makes our study the largest of its kind on LSTM networks. Our results show that none of the variants can improve upon the standard LSTM architecture significantly, and demonstrate the forget gate and the output activation function to be its most critical components. We further observe that the studied hyperparameters are virtually independent and derive guidelines for their efficient adjustment.
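
For reference, here is a minimal sketch of a single vanilla LSTM step in plain Python/NumPy, highlighting the forget gate and the output activation that the study identifies as the most critical components. Variable names and the stacked parameter layout are illustrative assumptions, not taken from the paper.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, U, b):
    # W, U, b stack the parameters of the input, forget, cell, and output
    # gates (4 * hidden_size rows each) -- an illustrative layout.
    z = W @ x + U @ h_prev + b
    zi, zf, zg, zo = np.split(z, 4)
    i = sigmoid(zi)            # input gate
    f = sigmoid(zf)            # forget gate (identified as critical)
    g = np.tanh(zg)            # candidate cell state
    o = sigmoid(zo)            # output gate
    c = f * c_prev + i * g     # new cell state
    h = o * np.tanh(c)         # tanh is the output activation (also critical)
    return h, c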

Link

🔭 @DeepGravity
Learning Efficient Video Representation with #Video Shuffle Networks

3D #CNNs have shown a strong ability to learn spatiotemporal representations in recent video recognition tasks. However, inflating 2D convolution to 3D inevitably introduces additional computational cost, making it cumbersome in practical deployment. We consider whether there is a way to equip the conventional 2D convolution with temporal #vision without requiring its kernel to be expanded. To this end, we propose video shuffle, a parameter-free plug-in component that efficiently reallocates the inputs of a 2D convolution so that its receptive field can be extended to the temporal dimension. In practice, video shuffle first divides each frame feature into multiple groups and then aggregates the grouped features via a temporal shuffle operation. This allows the following 2D convolution to aggregate global spatiotemporal features. The proposed video shuffle can be flexibly inserted into popular 2D #CNNs, forming Video Shuffle Networks (VSN). With a simple yet efficient implementation, VSN performs surprisingly well on temporal modeling benchmarks. In experiments, VSN not only gains non-trivial improvements on Kinetics and Moments in Time, but also achieves state-of-the-art performance on the Something-Something-V1 and Something-Something-V2 datasets.
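
As a rough illustration (not the authors' implementation), the temporal shuffle described above can be sketched as a regrouping of channels across frames; the (T, C, H, W) tensor layout and group handling below are assumptions.

import numpy as np

def video_shuffle(features, groups):
    # features: per-frame feature maps of shape (T, C, H, W) -- assumed layout.
    # groups:   number of channel groups to exchange across the time axis.
    T, C, H, W = features.shape
    assert C % groups == 0
    # Split channels into groups, then interleave groups across frames so each
    # resulting frame mixes channel groups that originate from other frames.
    x = features.reshape(T, groups, C // groups, H, W)
    x = x.transpose(1, 0, 2, 3, 4).reshape(T, C, H, W)
    return x

A following 2D convolution applied frame by frame then sees channels drawn from several time steps, which is how the shuffle extends its receptive field temporally without adding parameters.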

Link

🔭 @DeepGravity
How to Visualize Filters and Feature Maps in #ConvolutionalNeuralNetworks


After completing this tutorial, you will know:

* How to develop a visualization for specific filters in a convolutional neural network.
* How to develop a visualization for specific feature maps in a convolutional neural network.
* How to systematically visualize feature maps for each block in a #deep convolutional neural network.
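
As a quick illustration of the first point above, here is a minimal sketch that plots the filters of VGG16's first convolutional layer, assuming TensorFlow/Keras with the pre-trained ImageNet weights and matplotlib are available:

import matplotlib.pyplot as plt
from tensorflow.keras.applications import VGG16

model = VGG16(weights='imagenet')
# The first conv layer's kernels have shape (3, 3, 3, 64).
filters, biases = model.get_layer('block1_conv1').get_weights()
# Normalize filter values to 0..1 so they can be displayed as images.
filters = (filters - filters.min()) / (filters.max() - filters.min())

for i in range(6):                       # show the first six filters
    ax = plt.subplot(1, 6, i + 1)
    ax.set_xticks([]); ax.set_yticks([])
    ax.imshow(filters[:, :, :, i])       # each filter as a 3x3 RGB patch
plt.show()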

Link

🔭 @DeepGravity
Which Channel to Ask My Question? Personalized Customer Service Request Stream Routing using #DeepReinforcementLearning


Customer services are critical to all companies, as they may directly connect to the brand reputation. Due to a great number of customers, e-commerce companies often employ multiple communication channels to answer customers' questions, for example, chatbot and hotline. On one hand, each channel has limited capacity to respond to customers' requests; on the other hand, customers have different preferences over these channels. The current production systems are mainly built on business rules, which merely consider the tradeoff between resources and customers' satisfaction. To achieve the optimal tradeoff between resources and customers' satisfaction, we propose a new framework based on deep reinforcement learning, which directly takes both resources and the user model into account. In addition to the framework, we also propose a new deep-reinforcement-learning-based routing method: double dueling deep Q-learning with prioritized experience replay (PER-DoDDQN). We evaluate our proposed framework and method using both synthetic data and real customer service log data from a large financial technology company. We show that our proposed deep-reinforcement-learning-based framework is superior to the existing production system. Moreover, we show that our proposed PER-DoDDQN is better than all other deep Q-learning variants in practice, providing a better routing plan. These observations suggest that our proposed method can find the trade-off where both channel resources and customers' satisfaction are optimized.
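
For readers unfamiliar with the dueling architecture mentioned above, here is a minimal sketch of a dueling Q-network head; PyTorch and the layer sizes are assumptions (the abstract does not specify a framework), and double Q-learning plus prioritized experience replay would sit on top of this network.

import torch
import torch.nn as nn

class DuelingQNet(nn.Module):
    def __init__(self, state_dim, n_actions, hidden=128):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
        self.value = nn.Linear(hidden, 1)              # state-value stream V(s)
        self.advantage = nn.Linear(hidden, n_actions)  # advantage stream A(s, a)

    def forward(self, state):
        h = self.trunk(state)
        v = self.value(h)
        a = self.advantage(h)
        # Combine the streams; subtracting the mean advantage keeps Q identifiable.
        return v + a - a.mean(dim=-1, keepdim=True)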

Link

🔭 @DeepGravity
Introducing #Google Research Football: A Novel #ReinforcementLearning Environment

The goal of reinforcement learning (RL) is to train smart agents that can interact with their environment and solve complex tasks, with real-world applications in robotics, self-driving cars, and more. The rapid progress in this field has been fueled by making agents play games such as the iconic Atari console games, the ancient game of Go, or professionally played video games like Dota 2 or StarCraft 2, all of which provide challenging environments where new algorithms and ideas can be quickly tested in a safe and reproducible manner. The game of football is particularly challenging for RL, as it requires a natural balance between short-term control, learned concepts such as passing, and high-level strategy.
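
For context, the environment exposes a Gym-style interface. A minimal interaction loop might look like the sketch below; it assumes the gfootball package is installed and that the create_environment helper and scenario name match the public repository.

import gfootball.env as football_env

env = football_env.create_environment(
    env_name='academy_empty_goal_close',   # a simple academy scenario
    representation='simple115',            # compact state-vector observation
    render=False)

obs = env.reset()
done = False
while not done:
    action = env.action_space.sample()     # random policy as a placeholder
    obs, reward, done, info = env.step(action)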

Link

🔭 @DeepGravity
#DeepFovea: Neural Reconstruction for Foveated Rendering and Video Compression using Learned Statistics of Natural Videos

In order to provide an immersive visual experience, modern displays require head mounting, high image resolution, low latency, as well as high refresh rate. This poses a challenging computational problem. On the other hand, the human visual system can consume only a tiny fraction of this video stream due to the drastic acuity loss in the peripheral vision. Foveated rendering and compression can save computations by reducing the image quality in the peripheral vision. However, this can cause noticeable artifacts in the periphery, or, if done conservatively, would provide only modest savings. In this work, we explore a novel foveated reconstruction method that employs the recent advances in generative adversarial neural networks. We reconstruct a plausible peripheral video from a small fraction of pixels provided every frame. The reconstruction is done by finding the closest matching video to this sparse input stream of pixels on the learned manifold of natural videos. Our method is more efficient than the state-of-the-art foveated rendering, while providing the visual experience with no noticeable quality degradation. We conducted a user study to validate our reconstruction method and compare it against existing foveated rendering and video compression techniques. Our method is fast enough to drive gaze-contingent head-mounted displays in real time on modern hardware. We plan to publish the trained network to establish a new quality bar for foveated rendering and compression as well as encourage follow-up research.

Link

#Facebook

🔭 @DeepGravity
Multi-Object Portion Tracking in 4D Fluorescence Microscopy Imagery with #Deep Feature Maps

3D fluorescence microscopy of living organisms has increasingly become an essential and powerful tool in biomedical research and diagnosis. An exploding amount of imaging #data has been collected, whereas efficient and effective computational tools to extract information from it are still lagging behind. This is largely due to the challenges in analyzing biological data. Interesting biological structures are not only small, but are often morphologically irregular and highly dynamic. Although tracking cells in live organisms has been studied for years, existing tracking methods for cells are not effective in tracking subcellular structures, such as protein complexes, which feature continuous morphological changes, including split and merge, in addition to fast migration and complex motion. In this paper, we first define the problem of multi-object portion tracking to model the protein object tracking process. A multi-object tracking method with portion matching is proposed based on 3D segmentation results. The proposed method distills deep feature maps from deep networks, then recognizes and matches object portions using an extended search. Experimental results confirm that the proposed method achieves consistent tracking accuracy gains of 2.96 and 35.48 over the state-of-the-art methods.

Link

🔭 @DeepGravity
The Mind at Work: Guido van #Rossum on how #Python makes thinking in code easier

A conversation with the creator of the world’s most popular programming language on removing brain friction for better work

“You primarily write your code to communicate with other coders, and, to a lesser extent, to impose your will on the computer.”
—Guido van Rossum

Link

🔭 @DeepGravity
A massive new #blackhole discovered in the #MilkyWay!

Astronomers in China made a surprising discovery: a massive black hole in our galaxy. Called LB-1, this one is about 70 times the mass of the sun. It's the first time a black hole of this size has been detected in the Milky Way.

Link CNN

YouTube

🔭 @DeepGravity
A #DeepLearning framework for #neuroscience

Systems neuroscience seeks explanations for how the brain implements a wide variety of perceptual, #cognitive and motor tasks. Conversely, #ArtificialIntelligence attempts to design computational systems based on the tasks they will have to solve. In artificial #NeuralNetworks, the three components specified by design are the objective functions, the learning rules and the architectures. With the growing success of #deep learning, which utilizes brain-inspired architectures, these three designed components have increasingly become central to how we model, engineer and optimize complex artificial learning systems. Here we argue that a greater focus on these components would also benefit systems neuroscience. We give examples of how this optimization-based framework can drive theoretical and experimental progress in neuroscience. We contend that this principled perspective on systems neuroscience will help to generate more rapid progress.

Link

🔭 @DeepGravity
The Top 10 #Books on #AI recommended by #ElonMusk and #BillGates

1 Superintelligence: Paths, Dangers, Strategies
2 The Singularity Is Near: When Humans Transcend Biology
3 Life 3.0: Being Human in the Age of Artificial Intelligence
4 Singularity Rising: Surviving and Thriving in a Smarter, Richer, and More Dangerous World
5 The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies
6 Machine, Platform, Crowd: Harnessing Our Digital Future
7 AI Superpowers: China, Silicon Valley, and the New World Order
8 The Sentient Machine: The Coming Age of Artificial Intelligence
9 Our Final Invention: Artificial Intelligence and the End of the Human Era
10 Army of None: Autonomous Weapons and the Future of War

Link

🔭 @DeepGravity
François #Chollet is the creator of #Keras, an open-source #DeepLearning library designed to enable fast, user-friendly experimentation with #deepNeuralNetworks. It serves as an interface to several deep learning libraries, the most popular of which is #TensorFlow, and it was integrated into TensorFlow's main codebase a while back. Aside from creating an exceptionally useful and popular library, François is also a world-class #AI researcher and software engineer at #Google, and is definitely an outspoken, if not controversial, personality in the AI world, especially in the realm of ideas around the future of #ArtificialIntelligence. This conversation is part of the Artificial Intelligence podcast.
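
To give a flavour of the user-friendly experimentation the description refers to, here is a minimal tf.keras model definition; the architecture and hyperparameters are arbitrary illustrations, not anything discussed in the conversation.

from tensorflow import keras

# A tiny fully connected classifier, defined and compiled in a few lines.
model = keras.Sequential([
    keras.layers.Dense(64, activation='relu', input_shape=(784,)),
    keras.layers.Dense(10, activation='softmax'),
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.summary()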

Link

🔭 @DeepGravity
Google’s new ‘#ExplainableAI’ (#xAI) service

#Google has started offering a new service for “explainable AI,” or XAI, as it is fashionably called. The tools presently offered are modest, but the intent is in the right direction.

Link

🔭 @DeepGravity