Yesterday's post on the TensorFlow blog about the framework's roadmap for the coming year:
https://blog.tensorflow.org/2022/10/building-the-future-of-tensorflow.html
Building the Future of TensorFlow
The TensorFlow roadmap for 2023 and beyond
Stanford CS330: Deep Multi-Task & Meta Learning 2021 - New Lectures 🥳
CS330 is a freely available course that covers a number of topics related to multi-task & meta-learning such as transfer learning, life-long learning, unsupervised pretraining, etc...
https://twitter.com/Jeande_d/status/1583925469937688576?t=rAl7-TCZT5kSKpxEMNGi9A&s=19
A podcast interview between Joe Rogan and Steve Jobs, generated entirely by AI!!!!
Joe Rogan Meets Steve Jobs in an AI-Generated Podcast!!!!
:)))))))))))
https://www.deeplearning.ai/the-batch/joe-rogan-meets-steve-jobs-in-an-ai-generated-podcast
Listen to it here:
https://podcast.ai/?utm_campaign=The%20Batch&utm_medium=email&_hsmi=2&_hsenc=p2ANqtz-8vwWIsI1L8w_xCxcGrfR9kpKsp0OdvcEuHJXpjiRpl6LNFzYw1zz1WVaPLR5s63Ffo9ZFPEvrr3ChVJPd_LG9RxuzC33BwrocGA2L2NuDPxXit4yU&utm_content=2&utm_source=hs_email
Joe Rogan Meets Steve Jobs in an AI-Generated Podcast
For the debut episode of a new podcast series, Play.ht synthesized a 19-minute interview between the rock-star podcaster and late Apple CEO...
Why someone with depression may feel like they're "Faking it" even though they're not
New open-source language model from Google AI: Flan-T5 🍮
Flan-T5 is instruction-finetuned on 1,800+ language tasks, leading to dramatically improved prompting and multi-step reasoning abilities.
https://twitter.com/quocleix/status/1583523186376785921?t=xQSasRKf7ldXiTbjIEQ-_A&s=19
Public models: https://t.co/bnYVnocJW2
Paper: https://t.co/3KPGJ3tgMw
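For reference, a minimal sketch of prompting one of the released checkpoints through the Hugging Face transformers library. It assumes the public models are published on the Hub under names like google/flan-t5-base, and the prompt is just an illustrative example:

```python
# Minimal sketch of prompting Flan-T5 via Hugging Face transformers.
# Assumes the public checkpoint is available as "google/flan-t5-base"
# (swap in -small/-large/-xl/-xxl depending on your hardware).
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "google/flan-t5-base"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Flan-T5 is tuned to follow natural-language instructions, so the task
# is stated directly in the prompt.
prompt = (
    "Answer the following question by reasoning step by step. "
    "The cafeteria had 23 apples. They used 20 for lunch and bought 6 more. "
    "How many apples do they have?"
)

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```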
AI is moving sooo fast.
Just look at this new model from Google. Time from tweet to huggingface API is ~3 hrs.
I remember when there was still new content at conferences. When you had to spend months trying to replicate some result on MNIST.
Maybe it’s like, as you write a paper, you’re assisted by an LLM and that LLM is simultaneously summarizing it into a tweet, writing the code, and porting it over to HF. Now that’s the multitask I’m talking about.
https://twitter.com/realSharonZhou/status/1583869609244905474?t=Y0WVHLOJUjtD9nLkBL3N2A&s=19
Ama to Nisti (But You're Not Here)
King Raam
It's the first time it's raining,
and instead of joy, my heart aches.
It's the first time it's snowing;
the snowman is here,
but you're not here,
you're not here. ;)
🧡🍂
This looks like an interesting course:
Serverless ML Course for building AI-enabled Prediction Services from models and features
https://github.com/featurestoreorg/serverless-ml-course
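As a rough idea of what a "prediction service" boils down to, here is a minimal sketch of a serverless-style handler that loads a trained model once and answers single requests. The model file, feature schema, and handler signature are hypothetical placeholders for illustration, not the course's actual stack:

```python
# Minimal sketch of a serverless-style prediction service: a stateless handler
# that loads a trained model at cold start and serves one prediction per request.
# "model.pkl" and the feature names in the request body are hypothetical.
import json
import joblib
import pandas as pd

# Loaded once at cold start and reused across warm invocations.
_model = joblib.load("model.pkl")

def handler(event, context=None):
    """Prediction endpoint: expects a JSON body with one row of features,
    e.g. {"temp": 21.5, "humidity": 0.4}."""
    features = pd.DataFrame([json.loads(event["body"])])
    prediction = _model.predict(features)[0]
    return {
        "statusCode": 200,
        "body": json.dumps({"prediction": float(prediction)}),
    }
```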