Tensorflow(@CVision)
Andrew-Ng.pdf
Draft version of Andrew Ng's machine learning book
#book
Tensorflow(@CVision)
Video
Amazon Go
An application of AI, computer vision, and deep learning to frictionless shopping 👌
How does Amazon Go work?
Our checkout-free shopping experience is made possible by the same types of technologies used in self-driving cars: #computer_vision, sensor fusion, and #deep_learning. Our Just Walk Out technology automatically detects when products are taken from or returned to the shelves and keeps track of them in a virtual cart. When you’re done shopping, you can just leave the store. Shortly after, we’ll charge your Amazon account and send you a receipt.
https://www.amazon.com/b?ie=UTF8&node=16008589011
Tensorflow(@CVision)
A Conversation with Andrew Ng _ Andrew Ng _ TEDxBoston.mp4
Andrew Ng's interesting and distinctive appearance at TED 😊
A Conversation with Andrew Ng | Andrew Ng | TEDxBoston
Dr. Andrew Ng has published more than 100 research papers in machine learning, robotics, and related fields.
In 2013 he was named to the list of the 100 most influential people in the world.
He holds degrees from Carnegie Mellon, MIT, and the University of California, Berkeley, and is currently a professor at Stanford University.
Tensorflow(@CVision)
Vision Reconstruction.mp4
Reconstructing movie scenes watched by a subject from the activity of the brain's visual areas.
Could this technology one day record dreams and play them back as video?!
UC Berkeley researchers have succeeded in #decoding and #reconstructing people's dynamic #visual experiences.
The #brain activity recorded while subjects viewed a set of film clips was used to create a computer program that learned to associate visual patterns in the movie with the corresponding brain activity. The brain activity evoked by a second set of clips was used to test the movie reconstruction algorithm. This was done by feeding 18 million seconds of random YouTube videos into the computer program so that it could predict the brain activity that each film clip would most likely evoke in each subject. Using the new computer model, researchers were able to decode brain signals generated by the films and then reconstruct those moving images.
Eventually, practical applications of the technology could include a better understanding of what goes on in the minds of people who cannot communicate verbally, such as stroke victims, coma patients and people with neurodegenerative diseases. It may also lay the groundwork for brain-machine devices that would allow people with cerebral palsy or paralysis, for example, to guide computers with their minds.
The lead author of the study, published in Current Biology on September 22, 2011, is Shinji Nishimoto, a post-doctoral researcher in the laboratory of Professor Jack Gallant, neuroscientist and coauthor of the study. Other coauthors include Thomas Naselaris with UC Berkeley's Helen Wills #Neuroscience Institute, An T. Vu with UC Berkeley's Joint Graduate Group in Bioengineering, and Yuval Benjamini and Professor Bin Yu with the UC Berkeley Department of Statistics.
more:
http://news.berkeley.edu/2011/09/22/brain-movies/
Berkeley News
Scientists use brain imaging to reveal the movies in our mind - Berkeley News
Imagine tapping into the mind of a coma patient, or watching one’s own dream on YouTube. With a cutting-edge blend of brain imaging and computer simulation, UC Berkeley scientists are bringing these futuristic scenarios within reach. Using functional Magnetic…
'Your #face is big data:' The noscript of this photographer's experiment says it all...
http://images.techhive.com/images/article/2016/04/your-face-is-big-data-100655649-large.jpg
http://www.pcworld.com/article/3055305/analytics/your-face-is-big-data-the-noscript-of-this-photographers-experiment-says-it-all.html
A Russian man photographed passengers on the metro, then used face recognition algorithms to find their social media accounts;
he has posted several of the passengers on his website...
https://birdinflight.com/ru/vdohnovenie/fotoproect/06042016-face-big-data.html
#face_recognition
21 #Deep_Learning #Videos, #Tutorials & #Courses on Youtube from 2016:
https://www.analyticsvidhya.com/blog/2016/12/21-deep-learning-videos-tutorials-courses-on-youtube-from-2016/
Analytics Vidhya
21 Deep Learning Videos, Tutorials & Courses on Youtube from 2016
Introduction Until a few years back, deep learning was considered of a lesser importance as compared to machine learning. The emergence of neural networks & big-data has made various tasks possible. Back in 2009, deep learning was only an emerging field and…
Deep learning
Yann LeCun, Yoshua Bengio & Geoffrey Hinton
http://www.nature.com/nature/journal/v521/n7553/full/nature14539.html
#Deep_learning allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction. These methods have dramatically improved the state-of-the-art in speech recognition, visual object #recognition, object detection and many other domains such as drug discovery and genomics. Deep learning discovers intricate structure in large data sets by using the backpropagation algorithm to indicate how a machine should change its internal parameters that are used to compute the representation in each layer from the representation in the previous layer. Deep #convolutional nets have brought about breakthroughs in processing #images, #video, #speech and #audio, whereas #recurrent nets have shone light on sequential data such as #text and speech.
Tensorflow(@CVision)
Deep learning Yann LeCun, Yoshua Bengio & Geoffrey Hinton http://www.nature.com/nature/journal/v521/n7553/full/nature14539.html #Deep_learning allows computational models that are composed of multiple processing layers to learn representations of data with…
NatureDeepReview.pdf
2 MB
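The abstract above hinges on backpropagation telling each layer how to change the parameters that compute its representation from the previous layer's. As a minimal, hedged sketch of that idea (a one-hidden-layer NumPy network on made-up toy data, not code from the paper):
```python
import numpy as np

# Toy data: 4 samples, 3 features, XOR-like binary labels (illustrative only).
X = np.array([[0., 0., 1.], [0., 1., 1.], [1., 0., 1.], [1., 1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])

rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.5, size=(3, 4))   # input -> hidden
W2 = rng.normal(scale=0.5, size=(4, 1))   # hidden -> output

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    # Forward pass: each layer computes its representation from the previous one.
    h = sigmoid(X @ W1)
    out = sigmoid(h @ W2)

    # Backward pass: propagate the error signal back through the layers.
    err_out = (out - y) * out * (1 - out)      # gradient at the output pre-activation
    err_h = (err_out @ W2.T) * h * (1 - h)     # gradient at the hidden pre-activation

    # Gradient step: backprop indicates how each layer's parameters should change.
    W2 -= 0.5 * h.T @ err_out
    W1 -= 0.5 * X.T @ err_h

print(out.round(3))  # predictions should approach the labels above
```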
Deep Residual Learning for Image Recognition
https://arxiv.org/pdf/1512.03385v1.pdf
A 152-layer deep network...
#Winter #Seminar Series
Advanced Topics in Computer Science and Engineering
Sharif University of Technology / 28-29 December 2016
The second #winter #seminar on advanced topics in computer science and engineering / 8-9 Dey (28-29 December 2016)
http://wss.ce.sharif.edu/
Registration link:
http://ssc.ce.sharif.edu/out-of-menu-static-pages/payment-pages/wss-2016-registration/
Taking advantage of the winter break, Iranian researchers based abroad come to Iran on these dates and present their computer-science-related research.
The talks related to #deep_learning are listed below.
✅Ali Eslami
(Google DeepMind)
🔹Title:
Beyond Supervised Deep Learning
🔗 http://wss.ce.sharif.edu/speakers/ali-eslami.html#speaker
✅Ali Sharifi Zarechi
(Research Associate, Colorado State University, Fort Collins)
🔹Title:
Using Deep Neural Networks to Understand the Cell Identity by Expression Fingerprints
🔗 http://wss.ce.sharif.edu/speakers/ali-sharifi-zarechi.html#speaker
✅Ehsan Asgari
(PhD Candidate, University of California, Berkeley)
🔹Title:
Bioinformatics, Natural Language Processing, Character based Deep NLP, Machine Learning, Deep Learning, Digital Humanities
🔗 http://wss.ce.sharif.edu/speakers/ehsan-asgari.html#speaker
✅Mohammad Babaiezadeh
(PhD, University of Illinois at Urbana–Champaign)
🔹Title:
Deep Learning at Scale
🔗 http://wss.ce.sharif.edu/speakers/mohammad-babaiezadeh.html#speaker
✅Mohsen Mousavi Dezfouli
(PhD, École Polytechnique Fédérale de Lausanne)
🔹Title:
Robustness of Image Classifiers
🔗 http://wss.ce.sharif.edu/speakers/mohsen-mousavi-dezfouli.html#speaker
✅Naeimeh Omidvar
(PhD, Hong Kong University of Science and Technology)
🔹Title:
Online Stochastic Optimisation for Large-Scale Machine Learning Problems in Big Data
🔗 http://wss.ce.sharif.edu/speakers/naeimeh-omidvar.html#speaker
ssc.ce.sharif.edu
Registration for the 2016 Winter Seminar | Scientific Student Chapter, Computer Engineering Department, Sharif University of Technology
Please fill out the form below to register for the Winter Seminar on Advanced Topics in Computer Science and Engineering.
An interactive, online guide for learning neural networks:
https://jalammar.github.io/visual-interactive-guide-basics-neural-networks/
#interactive #neural_network
jalammar.github.io
A Visual and Interactive Guide to the Basics of Neural Networks
Discussions:
Hacker News (63 points, 8 comments), Reddit r/programming (312 points, 37 comments)
Translations: Arabic, French, Spanish
Update: Part 2 is now live: A Visual And Interactive Look at Basic Neural Network Math
Motivation
I’m not a…
#MXNet review: Amazon's scalable #deep_learning
http://www.infoworld.com/article/3149598/artificial-intelligence/mxnet-review-amazons-scalable-deep-learning.html
Amazon’s favorite deep learning #framework scales across multiple GPUs and hosts, but it's rough around the edges
#amazon
InfoWorld
MXNet review: Amazon's scalable deep learning
Amazon’s favorite deep learning framework scales across multiple GPUs and hosts, but it's rough around the edges
Review: TensorFlow shines a light on deep learning | InfoWorld
http://www.infoworld.com/article/3127397/artificial-intelligence/review-tensorflow-shines-a-light-on-deep-learning.html
InfoWorld
Review: TensorFlow shines a light on deep learning
Google's open source framework for machine learning and neural networks is fast and flexible, rich in models, and easy to run on CPUs or GPUs
⭕️ What is a tensor?
In mathematics, a #tensor is an array of numbers: a set of numbers arranged in a specific way, as if laid out in a notional table. A tensor is in fact a generalization of the concepts of scalar, vector, and #matrix.
In general this table can have the shape N x M x O x P x ..., where each capital letter stands for a natural number and x denotes the product between them. In its simplest form a tensor has a single element. One step up, a tensor can be a vector: when you write a vector A as (x, y, z), you already have a tensor. One step further, a tensor can be two-dimensional, i.e. a matrix, for example a 2×2 table with two rows and two columns.
Such a tensor has 4 elements. Two-dimensional tensors are also known as matrices, and higher-dimensional tensors are often treated as their multi-dimensional generalization.
More:
🔗https://en.wikipedia.org/wiki/Tensor
⭕️ Tensors in TensorFlow:
TensorFlow uses the tensor data structure for all of its data.
You can think of a tensor in TensorFlow as an n-dimensional array (matrix).
Tensors in TensorFlow have a static type but dynamic dimensions.
Only tensor-valued data can move between the nodes of TensorFlow's computational graph (as node inputs and outputs).
More details, including tensor rank, type, and shape:
🔗https://www.tensorflow.org/versions/r0.10/resources/dims_types
#tensor #tensorflow
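As a small illustration of the rank / shape / type terminology above, written against the TensorFlow 1.x graph API that the linked r0.10 documentation describes (the constants and shapes are arbitrary examples):
```python
import tensorflow as tf

scalar = tf.constant(3.0)                       # rank 0, shape ()
vector = tf.constant([1.0, 2.0, 3.0])           # rank 1, shape (3,)
matrix = tf.constant([[1.0, 2.0], [3.0, 4.0]])  # rank 2, shape (2, 2) -- 4 elements
cube   = tf.zeros([2, 3, 4])                    # rank 3, shape (2, 3, 4)

# Static type, dynamic dimensions: the dtype is fixed, but a dimension
# may be left unknown (None) and supplied at run time.
x = tf.placeholder(tf.float32, shape=[None, 3])  # any batch size, 3 features
y = tf.matmul(x, tf.ones([3, 1]))                # only tensors flow between graph nodes

with tf.Session() as sess:
    print(sess.run(matrix))                              # [[1. 2.] [3. 4.]]
    print(sess.run(y, feed_dict={x: [[1., 2., 3.]]}))    # [[6.]]
```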
What is a TPU?!
(Tensor processing unit)
Image: http://cdn.mos.cms.futurecdn.net/95b214f2e6ed15a55df3e6a46d28f768-970-80.jpg
A TPU, or tensor processing unit, is a kind of application-specific integrated circuit developed specifically for machine learning workloads.
Compared with GPUs, which in recent years have been applied to these problems on top of their original graphics role, TPUs are designed for higher data volumes at lower precision (e.g. 8-bit) and with reduced computation.
Google, which designed these TPUs for its own workloads, claims they are up to 10x faster than GPUs for #machine_learning tasks.
"The TPU used lower precision of 8 bit and possibly lower to get similar performance to what Myriad 2 delivers today. Similar to us they optimized for use with #TensorFlow," said Dr. David Moloney, Movidius' CTO.
More information:
Google's Big Chip Unveil For Machine Learning: Tensor Processing Unit With 10x Better Efficiency
🔗http://www.tomshardware.com/news/google-tensor-processing-unit-machine-learning,31834.html
Google's Tensor Processing Unit explained: this is what the future of computing looks like
🔗http://www.techradar.com/news/computing-components/processors/google-s-tensor-processing-unit-explained-this-is-what-the-future-of-computing-looks-like-1326915
Related:
Google’s tensor processing units (TPUs) are interesting, but Nvidia is essential
🔗http://tech.firstpost.com/news-analysis/googles-tensor-processing-units-tpus-are-interesting-but-nvidia-is-essential-316577.html
Google's Tensor Processing Unit could advance Moore's Law 7 years into the future
🔗http://www.pcworld.com/article/3072256/google-io/googles-tensor-processing-unit-said-to-advance-moores-law-seven-years-into-the-future.html
#TPU #GPU #ASIC @CVISION
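To make the "lower precision (e.g. 8-bit)" trade-off above concrete, here is a generic NumPy sketch of quantizing float32 weights to 8-bit integers and back; it only illustrates why 8-bit arithmetic moves less data, and is not a description of the TPU's actual internals (the function names are made up):
```python
import numpy as np

def quantize_uint8(w):
    """Map float32 values linearly onto 0..255 (a common 8-bit scheme)."""
    lo, hi = w.min(), w.max()
    scale = (hi - lo) / 255.0 if hi > lo else 1.0
    q = np.round((w - lo) / scale).astype(np.uint8)
    return q, scale, lo

def dequantize(q, scale, lo):
    return q.astype(np.float32) * scale + lo

w = np.random.randn(4, 4).astype(np.float32)     # example float32 weights
q, scale, lo = quantize_uint8(w)

print(q.nbytes, "bytes vs", w.nbytes, "bytes")   # 16 vs 64: 4x less memory traffic
print("max error:", np.abs(dequantize(q, scale, lo) - w).max())  # bounded by scale/2
```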
TensorFlow: A Flexible, Scalable & Portable System
+ video and slides
Summary
Rajat Monga talks about why engineers at #Google built #TensorFlow, an open source software library for numerical computation using data flow graphs, and what were some of the technical challenges in building it. TensorFlow leverages a general computational model that is applicable in a wide variety of other domains, especially for performing large-scale numerical computations.
https://www.infoq.com/presentations/tensorflow
InfoQ
TensorFlow: A Flexible, Scalable & Portable System
Rajat Monga talks about why engineers at Google built TensorFlow, an open source software library for numerical computation using data flow graphs, and what were some of the technical challenges in building it. TensorFlow leverages a general computational…
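A minimal sketch of the "numerical computation using data flow graphs" model described in the summary above, using the TensorFlow 1.x graph-and-session API (the operations and values are arbitrary examples):
```python
import tensorflow as tf

# Build the data flow graph: nodes are operations, edges carry tensors.
a = tf.placeholder(tf.float32, name="a")
b = tf.placeholder(tf.float32, name="b")
c = tf.add(a, b, name="sum")
d = tf.multiply(c, 3.0, name="scaled")

# Nothing has been computed yet; a Session executes the graph on request.
with tf.Session() as sess:
    print(sess.run(d, feed_dict={a: 2.0, b: 5.0}))  # 21.0
```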
The #smart home of Mark #Zuckerberg, the founder of Facebook, which uses modern AI methods such as object recognition, face recognition, speech recognition, natural language processing, and more.
Zuckerberg writes about his motivation for the project and the steps he took to build it:
https://www.facebook.com/notes/mark-zuckerberg/building-jarvis/10154361492931634/
"My personal challenge for 2016 was to build a simple AI for my home - like Jarvis in the Iron Man movies..."
Building Jarvis:
- Getting Started: Connecting the Home
- #Natural_Language
- #Vision and #Face_Recognition
- Messenger Bot
- Voice and #Speech_Recognition
- Facebook Engineering Environment
----------
Vision and Face Recognition:
About one-third of the human #brain is dedicated to vision, and there are many important #AI problems related to understanding what is happening in images and videos. These problems include #tracking (eg is Max awake and moving around in her crib?), #object_recognition (eg is that Beast or a rug in that room?), and face recognition (eg who is at the door?).
Face recognition is a particularly difficult version of object recognition because most people look relatively similar compared to telling apart two random objects — for example, a sandwich and a house. But Facebook has gotten very good at face recognition for identifying when your friends are in your photos. That expertise is also useful when your friends are at your door and your AI needs to determine whether to let them in.
To do this, I installed a few cameras at my door that can capture images from all angles. AI systems today cannot identify people from the back of their heads, so having a few angles ensures we see the person's face. I built a simple server that continuously watches the cameras and runs a two step process: first, it runs face detection to see if any person has come into view, and second, if it finds a face, then it runs face recognition to identify who the person is. Once it identifies the person, it checks a list to confirm I'm expecting that person, and if I am then it will let them in and tell me they're here.
This type of visual AI system is useful for a number of things, including knowing when Max is awake so it can start playing music or a Mandarin lesson, or solving the context problem of knowing which room in the house we're in so the AI can correctly respond to context-free requests like "turn the lights on" without providing a location. Like most aspects of this AI, vision is most useful when it informs a broader model of the world, connected with other abilities like knowing who your friends are and how to open the door when they're here. The more context the system has, the smarter it gets overall.
#mark_zuckerberg #smart_home
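A rough sketch of the two-step detect-then-recognize loop described above, written with the open-source OpenCV and face_recognition Python packages; the camera index, the reference photo, the guest list, and open_door() are made-up placeholders, and this illustrates the pipeline rather than Zuckerberg's actual system:
```python
import cv2                    # pip install opencv-python
import face_recognition       # pip install face_recognition

# Hypothetical guest list: name -> face encoding from a reference photo.
expected = {
    "alice": face_recognition.face_encodings(
        face_recognition.load_image_file("alice.jpg"))[0],
}

def open_door(name):          # placeholder for the real door actuator
    print("Letting in", name)

camera = cv2.VideoCapture(0)  # door camera (index is an assumption)
while True:
    ok, frame = camera.read()
    if not ok:
        continue
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)

    # Step 1: face detection -- has anyone come into view?
    locations = face_recognition.face_locations(rgb)
    if not locations:
        continue

    # Step 2: face recognition -- who is it? Check against the expected list.
    for encoding in face_recognition.face_encodings(rgb, locations):
        for name, known in expected.items():
            if face_recognition.compare_faces([known], encoding)[0]:
                open_door(name)
```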