A learning roadmap and nine free courses
Generative AI Learning Path
cloudskillsboost.google/paths/118
#هوش_مصنوعی #منابع #منابع_پیشنهادی
If you've been looking for good resources and datasets on graphs in language models, I recommend this one.
Graph-Related Large Language Models (LLMs).
https://github.com/XiaoxinHe/Awesome-Graph-LLM
#هوش_مصنوعی #منابع #منابع_پیشنهادی
This is a good course; if you like, stop by and take a look:
https://maktabkhooneh.org/course/%D8%A2%D9%85%D9%88%D8%B2%D8%B4-%DB%8C%D8%A7%D8%AF%DA%AF%DB%8C%D8%B1%DB%8C-%D9%85%D8%A7%D8%B4%DB%8C%D9%86-%DA%A9%D8%A7%D8%B1%D8%A8%D8%B1%D8%AF%DB%8C-mk2450/
Prepared by the dear Dr. Tehranipour.
#هوش_مصنوعی #منابع #منابع_پیشنهادی
Maktabkhooneh
Machine Learning Training with 10 Practical Projects and Key Library Tips
If you want to learn machine learning hands-on and are looking for a course that teaches the most important points of the practical libraries, join us in the Applied Machine Learning course, where we work through 10 substantial, well-chosen projects together.
In this paper they take the Segment Anything Model (SAM) and add a lightweight Mask-to-Matte (M2M) module that adapts its segmentation masks into alpha mattes for images, which I think is quite revolutionary!
Matting Anything Model (MAM)
https://huggingface.co/papers/2306.05399
P.S.: I think this paper backs up Dr. Asgari's remark that image processing is "game over"!
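As a rough illustration of the idea (not the paper's actual architecture), here is a minimal sketch: a coarse binary mask, such as one produced by SAM from a point or box prompt, is concatenated with the image and refined by a small convolutional mask-to-matte head into a continuous alpha matte. The module name, layer sizes, and the random stand-in mask below are placeholders for illustration only.

```python
# Minimal sketch of the "coarse mask -> alpha matte" idea behind MAM.
# The real model works from SAM's features with multi-scale refinement;
# this toy M2M head only shows the data flow.
import torch
import torch.nn as nn

class TinyMaskToMatte(nn.Module):
    """Hypothetical lightweight head: image (3 ch) + coarse mask (1 ch) -> alpha (1 ch)."""
    def __init__(self, hidden: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, hidden, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(hidden, hidden, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(hidden, 1, 3, padding=1),
        )

    def forward(self, image: torch.Tensor, coarse_mask: torch.Tensor) -> torch.Tensor:
        x = torch.cat([image, coarse_mask], dim=1)   # (B, 4, H, W)
        return torch.sigmoid(self.net(x))            # alpha values in [0, 1]

# Usage: in the real pipeline, `coarse_mask` would come from SAM.
image = torch.rand(1, 3, 256, 256)
coarse_mask = (torch.rand(1, 1, 256, 256) > 0.5).float()  # stand-in for a SAM mask
alpha = TinyMaskToMatte()(image, coarse_mask)
print(alpha.shape)  # torch.Size([1, 1, 256, 256])
```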
#مقاله #ایده_جذاب
Dr. Ali Sharifi-Zarchi, a computer science professor at Sharif University of Technology, shares his thoughts on the stages of learning #هوش_مصنوعی
https://twitter.com/SharifiZarchi/status/1667131051104149505
Forwarded from Meysam
Tracking everything everywhere all at once!
This paper is really, really good. Definitely read it; it's about tracking:
https://arxiv.org/abs/2306.05422
(Remember when I said image processing was game over? After the Segment Anything model, a lot of tasks have become much simpler.)
To follow up and round out this idea from the papers above, also read this one:
Background Prompting for Improved Object Depth
https://huggingface.co/papers/2306.05428
#مقاله #ایده_جذاب
Transformers as Statisticians
Unveiling a new mechanism "In-Context Algorithm Selection" for In-Context Learning (ICL) in LLMs/transformers.
arxiv.org/abs/2306.04637
#مقاله #ایده_جذاب
Ten #ایده_جذاب (interesting ideas) that came out over the past week.
1) Tracking Everything Everywhere All at Once - propose a test-time optimization method for estimating dense and long-range motion; enables accurate, full-length motion estimation of every pixel in a video.
2) AlphaDev - a deep reinforcement learning agent which discovers faster sorting algorithms from scratch; the algorithms outperform previously known human benchmarks and have been integrated into the LLVM C++ library.
3) Sparse-Quantized Representation - a new compressed format and quantization technique that enables near-lossless compression of LLMs across model scales; “allows LLM inference at 4.75 bits with a 15% speedup”.
4) MusicGen - a simple and controllable model for music generation built on top of a single-stage transformer LM together with efficient token interleaving patterns; it can be conditioned on textual descriptions or melodic features and shows high performance on a standard text-to-music benchmark.
5) Augmenting LLMs with Databases - combines an LLM with a set of SQL databases, enabling a symbolic memory framework; completes tasks via the LLM generating SQL instructions that manipulate the DB autonomously (a minimal sketch of this pattern follows after the list).
6) Concept Scrubbing in LLM - presents a method called LEAst-squares Concept Erasure (LEACE) to erase target concept information from every layer in a neural network; it’s used for reducing gender bias in BERT embeddings.
7) Fine-Grained RLHF - trains LMs with fine-grained human feedback; instead of using overall preference, more explicit feedback is provided at the segment level which helps to improve efficacy on long-form question answering, reduce toxicity, and enables LM customization.
8) Hierarchical Vision Transformer - pretrains vision transformers with a visual pretext task (MAE), while removing unnecessary components from a state-of-the-art multi-stage vision transformer; this enables a simple hierarchical vision transformer that’s more accurate and faster at inference and during training.
9) Humor in ChatGPT - explores ChatGPT’s capabilities to grasp and reproduce humor; finds that over 90% of 1008 generated jokes were the same 25 jokes and that ChatGPT is also overfitted to a particular joke structure.
10) Imitating Reasoning Process of Larger LLMs - develops a 13B parameter model that learns to imitate the reasoning process of large foundational models like GPT-4; it leverages large-scale and diverse imitation data and surpasses instruction-tuned models such as Vicuna-13B in zero-shot reasoning.
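For item 5 above, here is a minimal, self-contained sketch of the "LLM writes SQL against a database as symbolic memory" loop. The `llm` function is a stand-in I made up so the example runs offline; in practice it would be a real model call prompted with the schema and the user request.

```python
# Toy version of the "LLM emits SQL, the database acts as symbolic memory" loop.
import sqlite3

def llm(request: str, schema: str) -> str:
    """Stand-in for a chat-model call that returns a single SQL statement."""
    canned = {
        "add Ada": "INSERT INTO people (name, role) VALUES ('Ada', 'engineer')",
        "list engineers": "SELECT name FROM people WHERE role = 'engineer'",
    }
    return canned[request]

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE people (name TEXT, role TEXT)")
schema = "people(name TEXT, role TEXT)"

for request in ["add Ada", "list engineers"]:
    sql = llm(request, schema)           # the model's "action" is a SQL statement
    rows = db.execute(sql).fetchall()    # executing it updates or queries memory
    db.commit()
    print(f"{request!r} -> {sql!r} -> {rows}")
```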
#مقاله
Applications of Transformers
New survey paper highlighting major applications of Transformers for deep learning tasks. Includes a comprehensive list of Transformer models.
arxiv.org/abs/2306.07303
#مقاله
Exploring the MIT Mathematics and EECS Curriculum Using LLMs
"GPT-3.5 successfully solves a third of the entire MIT curriculum, while GPT-4, with prompt engineering, achieves a perfect solve rate on a test set excluding questions based on images."
arxiv.org/abs/2306.08997
#مقاله #ایده_جذاب
Voicebox: Text-Guided Multilingual Universal Speech Generation at Scale
https://ai.facebook.com/blog/voicebox-generative-ai-model-speech/
Large-scale generative models such as GPT and DALL-E have revolutionized natural language processing and computer vision research. These models not only generate high fidelity text or image outputs, but are also generalists which can solve tasks not explicitly taught. In contrast, speech generative models are still primitive in terms of scale and task generalization. In this paper, we present Voicebox, the most versatile text-guided generative model for speech at scale. Voicebox is a non-autoregressive flow-matching model trained to infill speech, given audio context and text, trained on over 50K hours of speech that are neither filtered nor enhanced. Similar to GPT, Voicebox can perform many different tasks through in-context learning, but is more flexible as it can also condition on future context.
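Because the abstract's key phrase is "a non-autoregressive flow-matching model trained to infill speech," here is a minimal, generic conditional flow-matching training step for a masked-infilling setup (a sketch under my own assumptions, not Meta's implementation): the network sees an interpolant between noise and the target features plus the unmasked context, and regresses the constant velocity of the straight-line path, with the loss applied only on the masked region.

```python
# Generic conditional flow-matching step for masked infilling (illustrative only).
# x1: target features (e.g., mel-spectrogram frames), x0: Gaussian noise.
import torch
import torch.nn as nn

dim = 80                                         # stand-in for mel bins
model = nn.Sequential(                           # placeholder for the real transformer
    nn.Linear(dim * 2 + 1, 256), nn.SiLU(), nn.Linear(256, dim)
)
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

x1 = torch.randn(16, dim)                        # ground-truth features to infill
mask = (torch.rand(16, 1) < 0.7).float()         # 1 = region the model must generate
context = x1 * (1 - mask)                        # visible (unmasked) audio context

x0 = torch.randn_like(x1)                        # noise sample
t = torch.rand(16, 1)                            # time in [0, 1]
x_t = (1 - t) * x0 + t * x1                      # straight-line interpolant
target_v = x1 - x0                               # its velocity d(x_t)/dt

pred_v = model(torch.cat([x_t, context, t], dim=-1))
loss = ((pred_v - target_v) ** 2 * mask).mean()  # supervise only the masked region
loss.backward(); opt.step(); opt.zero_grad()
print(float(loss))
```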
#مقاله #ایده_جذاب
Can LLMs Teach Weaker Agents?
Aligned teachers can intervene w/ free-text explanations using Theory of Mind (ExpUtility+Personalization) to improve students on future unexplained data🙂
Misaligned teachers hurt students😢
arxiv.org/abs/2306.09299
#مقاله #ایده_جذاب
If you want to get news, articles, and more about startups and companies, sign up here:
https://www.joinsuperhuman.ai/subscribe
#خبر
Take the quizzes for free and follow the lessons for free:
https://lightning.ai/pages/ai-education/deep-learning-fundamentals/
#یادگیری_عمیق #منابع #منابع_پیشنهادی
Unifying Large Language Models and Knowledge Graphs: A Roadmap
arxiv.org/abs/2306.08302
#مقاله #ایده_جذاب
Sentiment Analysis Of Twitter Data Towards COVID-19 Vaccines Using A Deep Learning Approach
https://ieeexplore.ieee.org/abstract/document/10139297
#مقاله #ایده_جذاب
Roughly 1.8 million papers are published every year.
AI researchers have introduced this tool for explaining and summarizing papers:
https://www.explainpaper.com/
AI explaining AI!
#خبر #هوش_مصنوعی
How can we generate our own artistic QR codes with AI?!
https://huggingface.co/spaces/huggingface-projects/QR-code-AI-art-generator
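If you want to try the same trick locally, the hedged sketch below shows the usual recipe: render a QR code with high error correction, then feed it as the conditioning image to a Stable Diffusion + ControlNet pipeline. The checkpoint names (especially the QR-trained ControlNet) are my assumptions about commonly used community models, not necessarily what this Space runs, and the prompt and scales are just starting points.

```python
# Sketch: artistic QR codes = plain QR image + a QR-trained ControlNet steering SD.
# Requires: pip install qrcode pillow torch diffusers transformers accelerate
import qrcode
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

# 1) Render a QR code with high error correction so the artwork can distort it.
qr = qrcode.QRCode(error_correction=qrcode.constants.ERROR_CORRECT_H, box_size=16, border=4)
qr.add_data("https://t.me/AI_DeepMind")
qr.make(fit=True)
qr_image = qr.make_image(fill_color="black", back_color="white").convert("RGB").resize((768, 768))

# 2) Load SD 1.5 plus a ControlNet trained on QR codes (checkpoint IDs are assumptions).
controlnet = ControlNetModel.from_pretrained(
    "DionTimmer/controlnet_qrcode-control_v1p_sd15", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

# 3) Generate: the conditioning scale trades off scannability against artistic freedom.
result = pipe(
    prompt="a sunlit japanese garden, watercolor, highly detailed",
    image=qr_image,
    num_inference_steps=30,
    guidance_scale=7.5,
    controlnet_conditioning_scale=1.4,
).images[0]
result.save("qr_art.png")  # verify it still scans before sharing
```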
#خبر #هوش_مصنوعی