شبکه داستانی عصبی (Neural Story Network)
793 subscribers
746 photos
35 videos
96 files
1.9K links
Here I talk about the things I love: fiction, AI, music, software, art, psychology, and more :)

If you feel like chatting, it would really make my day:
@alimirferdos
Hey, how about we get together this weekend and whip up some LK99?
Last night, in my dream, I was synthesizing it in dead earnest 😐😐😐
Why though?!
Warsaw Village Band Live - Hola Byśki, Hola Sziget 2013


~~~
Hola Byśki Hola
Warsaw Village Band
Polish folk music

لُجّه
It guesses what you're typing with high accuracy.
From what?
Just from the sound of your keyboard keys!

https://arxiv.org/abs/2308.01074
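The paper turns each keystroke's sound into a mel-spectrogram and feeds it to an image classifier (they use CoAtNet). Here's a minimal sketch of that pipeline with a toy CNN standing in for their model; the key set, layer sizes, and the random input are illustrative assumptions, not the paper's setup:

```python
# Toy version of the acoustic side-channel idea: classify WHICH key was
# pressed from its sound alone. Input is a log-mel spectrogram of one
# isolated keystroke; the CNN below is a stand-in for the paper's CoAtNet.
import torch
import torch.nn as nn

N_KEYS = 36  # e.g. a-z plus 0-9; an assumption, not the paper's exact set

class KeystrokeCNN(nn.Module):
    def __init__(self, n_keys: int = N_KEYS):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, n_keys)

    def forward(self, mel_spec: torch.Tensor) -> torch.Tensor:
        # mel_spec: (batch, 1, n_mels, time) for one keystroke each
        x = self.features(mel_spec).flatten(1)
        return self.classifier(x)  # logits over candidate keys

# Dummy batch: 8 keystrokes, 64 mel bins, 64 time frames
model = KeystrokeCNN()
logits = model(torch.randn(8, 1, 64, 64))
print(logits.argmax(dim=1))  # predicted key index per keystroke
```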
Pre-train, Prompt, and Predict: A Systematic Survey of Prompting Methods in Natural Language Processing


https://arxiv.org/abs/2107.13586
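The survey's central idea: instead of fine-tuning a model per task, recast the task as a prompt the pretrained LM can already complete. A tiny sketch of cloze-style prompting for sentiment with a masked LM; the template wording is my own assumption, not from the paper:

```python
# "Pre-train, prompt, predict": wrap the input in a template with a mask
# slot and let the pretrained LM's fill-in scores act as the classifier.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

review = "The movie was absolutely wonderful."
prompt = f"{review} Overall, it was a [MASK] film."

# Restrict predictions to two verbalizer words and compare their scores
for pred in fill(prompt, targets=["good", "bad"]):
    print(pred["token_str"], round(pred["score"], 4))
```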
Mr. Jeremy Howard has proclaimed:

Now that ChatGPT has rolled out custom instructions to most users, try out this instruction -- it makes GPT 4 far more accurate for me


And here's the text:

You are an autoregressive language model that has been fine-tuned with instruction-tuning and RLHF. You carefully provide accurate, factual, thoughtful, nuanced answers, and are brilliant at reasoning. If you think there might not be a correct answer, you say so.
Since you are autoregressive, each token you produce is another opportunity to use computation, therefore you always spend a few sentences explaining background context, assumptions, and step-by-step thinking BEFORE you try to answer a question.
Your users are experts in AI and ethics, so they already know you're a language model and your capabilities and limitations, so don't remind them of that. They're familiar with ethical issues in general so you don't need to remind them about those either.
Don't be verbose in your answers, but do provide details and examples where it might help the explanation.
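Custom instructions in the ChatGPT UI roughly correspond to a system message in the API. A minimal sketch of trying the same trick there (OpenAI Python SDK v1 style; the model name and the shortened instruction text are assumptions):

```python
# Send a condensed version of the custom instruction as the system message.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are an autoregressive language model fine-tuned with RLHF. "
    "Explain background context, assumptions, and step-by-step thinking "
    "BEFORE answering. Don't be verbose, but give details and examples."
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "Why does chain-of-thought help LLMs?"},
    ],
)
print(response.choices[0].message.content)
```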
Hackable implementation of state-of-the-art open-source LLMs based on nanoGPT. Supports flash attention, 4-bit and 8-bit quantization, LoRA and LLaMA-Adapter fine-tuning, pre-training. Apache 2.0-licensed.


https://github.com/Lightning-AI/lit-gpt
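Not lit-gpt's actual code, just a sketch of the LoRA idea it implements: freeze the pretrained weight and learn a low-rank update on top, so only two small matrices train. The rank and scaling values below are illustrative:

```python
# LoRA in one layer: y = x W^T + scale * x (B A)^T, with W frozen and
# only the small A (rank x in) and B (out x rank) matrices trainable.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, in_f: int, out_f: int, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = nn.Linear(in_f, out_f, bias=False)
        self.base.weight.requires_grad_(False)  # frozen pretrained weight
        self.lora_a = nn.Parameter(torch.randn(rank, in_f) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(out_f, rank))  # zero init: no change at start
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * (x @ self.lora_a.T @ self.lora_b.T)

layer = LoRALinear(512, 512)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(trainable)  # only A and B train: 8*512 + 512*8 = 8192 params
```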
We're excited to announce Meta Kaggle for Code, a new open-source dataset made up of ML code created and publicly shared by Kaggle's community.

It contains hundreds of thousands of Apache 2.0 licensed Python and R notebooks used to analyze Datasets, make submissions to Competitions, and more. This represents nearly a decade of data spanning a period of tremendous evolution in the ways ML work is done.

https://www.kaggle.com/datasets/kaggle/meta-kaggle-code
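A quick sketch of poking at the corpus after grabbing it with the Kaggle CLI (kaggle datasets download -d kaggle/meta-kaggle-code); the local directory name and file layout here are assumptions, so adjust paths to wherever you unzip:

```python
# Count notebooks/scripts by extension to get a feel for the corpus size
# and the Python/R split mentioned in the announcement.
from collections import Counter
from pathlib import Path

root = Path("meta-kaggle-code")  # assumed unzip location

counts = Counter(p.suffix for p in root.rglob("*") if p.is_file())
for ext, n in counts.most_common():
    print(ext or "<none>", n)
```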