Forwarded from Machine Learning
100+ LLM Interview Questions and Answers (GitHub Repo)
If you're preparing for #AI/#ML interviews, solid knowledge of #LLM topics is essential.
This repo includes 100+ LLM interview questions (with answers) spanning topics like
LLM Inference
LLM Fine-Tuning
LLM Architectures
LLM Pretraining
Prompt Engineering
etc.
👉 GitHub Repo - https://github.com/KalyanKS-NLP/LLM-Interview-Questions-and-Answers-Hub
https://news.1rj.ru/str/DataScienceM
I'm happy to announce that freeCodeCamp has launched a new certification in #Python 🐍
» Learning the basics of programming
» Project development
» Final exam
» Obtaining a certificate
Everything takes place directly in the browser, without installation. This is one of the six certificates in version 10 of the Full Stack Developer training program.
Full announcement with a detailed FAQ about the certificate, the course, and the exams
Link: https://www.freecodecamp.org/news/freecodecamps-new-python-certification-is-now-live/
👉 @codeprogrammer
1. What will be the output of the following code?
def add_item(item, lst=None):
    if lst is None:
        lst = []
    lst.append(item)
    return lst
print(add_item(1))
print(add_item(2))
A. [1] then [2]
B. [1] then [1, 2]
C. [] then []
D. Raises TypeError
Correct answer: A. The default is None, so a fresh list is created on every call.
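For contrast, a minimal sketch of the pitfall this question is built around (my own illustration, not part of the quiz): writing the mutable default directly makes the same list persist across calls.

def add_item_buggy(item, lst=[]):  # the default list is created once, at function definition time
    lst.append(item)
    return lst

print(add_item_buggy(1))  # [1]
print(add_item_buggy(2))  # [1, 2]: the same list object is reused across calls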
2. What is printed by this code?
x = 10
def func():
    print(x)
    x = 5
func()
A. 10
B. 5
C. None
D. UnboundLocalError
Correct answer: D. The assignment x = 5 makes x local to the whole function body, so print(x) runs before any local binding exists.
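A quick fix worth knowing (my own sketch, not part of the quiz): declaring the name global restores access to the module-level variable.

x = 10

def func():
    global x  # without this, the assignment below makes x local and print(x) raises UnboundLocalError
    print(x)  # 10
    x = 5

func()
print(x)      # 5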
3. What is the result of executing this code?
a = [1, 2, 3]
b = a[:]
a.append(4)
print(b)
A. [1, 2, 3, 4]
B. [4]
C. [1, 2, 3]
D. []
Correct answer: C.
4. What does the following expression evaluate to?
bool("False")A. False
B. True
C. Raises ValueError
D. None
Correct answer: B. Any non-empty string is truthy; bool() does not parse the string's contents.
5. What will be the output?
print(type({}))
A. <class 'list'>
B. <class 'set'>
C. <class 'dict'>
D. <class 'tuple'>
Correct answer: C.
6. What is printed by this code?
x = (1, 2, [3])
x[2] += [4]
print(x)
A. (1, 2, [3])
B. (1, 2, [3, 4])
C. TypeError
D. AttributeError
Correct answer: C. The in-place list extension succeeds, but rebinding the tuple slot then raises TypeError.
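This gotcha is worth seeing run (my own sketch): the exception fires when Python tries to reassign x[2], after the list has already been extended in place.

x = (1, 2, [3])
try:
    x[2] += [4]  # list.__iadd__ mutates the list, then the tuple item assignment fails
except TypeError as e:
    print("raised:", e)
print(x)  # (1, 2, [3, 4]): the inner list was mutated anyway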
7. What does this code output?
print([i for i in range(3) if i])
A. [0, 1, 2]
B. [1, 2]
C. [0]
D. []
Correct answer: B.
8. What will be printed?
d = {"a": 1}
print(d.get("b", 2))A. None
B. KeyError
C. 2
D. "b"
Correct answer: C.
9. What is the output?
print(1 in [1, 2], 1 is 1)
A. True True
B. True False
C. False True
D. False False
Correct answer: A. 1 is in the list, and CPython caches small integers, so 1 is 1 is also True (recent versions emit a SyntaxWarning for is on literals).
10. What does this code produce?
def gen():
    for i in range(2):
        yield i
g = gen()
print(next(g), next(g))
A. 0 1
B. 1 2
C. 0 0
D. StopIteration
Correct answer: A.
11. What is printed?
print({x: x*x for x in range(2)})
A. {0, 1}
B. {0: 0, 1: 1}
C. [(0,0),(1,1)]
D. Error
Correct answer: B.
12. What is the result of this comparison?
print([] == [], [] is [])
A. True True
B. False False
C. True False
D. False True
Correct answer: C. The two lists have equal contents but are distinct objects.
13. What will be printed?
def f():
    try:
        return "A"
    finally:
        print("B")
print(f())
A. A
B. B
C. B then A
D. A then B
Correct answer: C. The finally block runs (printing B) before the caller prints the returned A.
14. What does this code output?
x = [1, 2]
y = x
x = x + [3]
print(y)
A. [1, 2, 3]
B. [3]
C. [1, 2]
D. Error
Correct answer: C. x + [3] builds a new list and rebinds x; y still references the original.
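The outcome flips with augmented assignment, which mutates the shared list in place (my own sketch):

x = [1, 2]
y = x
x += [3]     # in-place: both names still point at the same list
print(y)     # [1, 2, 3]

x = [1, 2]
y = x
x = x + [3]  # rebinding: x now names a brand-new list
print(y)     # [1, 2]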
15. What is printed?
print(type(i for i in range(3)))
A. <class 'list'>
B. <class 'tuple'>
C. <class 'generator'>
D. <class 'range'>
Correct answer: C.
🔥 NEW YEAR 2026 – PREMIUM SCIENTIFIC PAPER WRITING OFFER 🔥
Q1-Ready | Journal-Targeted | Publication-Focused
Serious researchers, PhD & MSc students, postdocs, universities, and funded startups only.
To start 2026 strong, we’re offering a limited New Year scientific writing package designed for fast-track publication, not academic busywork.
🎯 What We Offer (End-of-Year Special):
✍️ Full Research Paper Writing – $400
(Q1 / Q2 journal–ready)
Includes:
✅ Journal-targeted manuscript (Elsevier / Springer / Wiley / IEEE / MDPI)
✅ IMRAD structure (Introduction–Methods–Results–Discussion)
✅ Strong problem formulation & novelty framing
✅ Methodology written to reviewer standards
✅ Professional academic English (native-level)
✅ Plagiarism-free (Turnitin <10%)
✅ Ready for immediate submission
📊 Available Paper Types:
Original Research Articles
Review & Systematic Review
AI / Machine Learning Papers
Engineering & Medical Research
Health AI & Clinical Data Studies
Interdisciplinary & Applied Research
🧠 Optional Add-ons (if needed):
Journal selection & scope matching
Cover letter to editor
Reviewer response (after review)
Statistical validation & result polishing
Figure & table redesign (publication quality)
🚀 Why This Is Different
We don’t “write generic papers.”
We engineer publishable research.
✔️ Real novelty positioning
✔️ Reviewer-proof logic
✔️ Data-driven arguments
✔️ Aligned with current 2025–2026 journal expectations
Many of our papers are built on real-world datasets and are already aligned with Q1 journal standards.
⏳ New Year Offer – Limited Time
Regular price: $1,500 – $3,000
New Year 2026 price: $400
Limited slots (quality > quantity)
🎓 Priority given to:
PhD / MSc students
Active researchers
Funded startups
Universities & labs
📩 DM for details, samples & timelines
Contact:
@Omidyzd62
Start 2026 with a submitted paper—not just a plan
Forwarded from Machine Learning with Python
🚀 Stanford just completed a must-watch for anyone serious about AI:
🎓 “CME 295: Transformers & Large Language Models” is now live in its entirety on YouTube, and it’s pure gold.
If you’re building your AI career, stop scrolling.
This isn’t another surface-level overview. It’s the clearest, most structured intro to LLMs you could follow, straight from the Stanford Autumn 2025 curriculum.
📚 Topics covered include:
• How Transformers actually work (tokenization, attention, embeddings)
• Decoding strategies & MoEs
• LLM finetuning (LoRA, RLHF, supervised)
• Evaluation techniques (LLM-as-a-judge)
• Optimization tricks (RoPE, quantization, approximations)
• Reasoning & scaling
• Agentic workflows (RAG, tool calling)
🧠 My workflow: I take the transcripts, feed them into NotebookLM, and once I’ve finished the lectures, I replay them during walks or commutes. That combo works wonders for retention.
🎥 Watch these now:
- Lecture 1: https://lnkd.in/dDER-qyp
- Lecture 2: https://lnkd.in/dk-tGUDm
- Lecture 3: https://lnkd.in/drAPdjJY
- Lecture 4: https://lnkd.in/e_RSgMz7
- Lecture 5: https://lnkd.in/eivMA9pe
- Lecture 6: https://lnkd.in/eYwwwMXn
- Lecture 7: https://lnkd.in/eKwkEDXV
- Lecture 8: https://lnkd.in/eEWvyfyK
- Lecture 9: https://lnkd.in/euiKRGaQ
🗓 Do yourself a favor in 2026: block 2-3 hours per lecture each week and work through them.
If you’re in AI — whether building infra, agents, or apps — this is the foundational course you don’t want to miss.
Let’s level up.
https://news.1rj.ru/str/CodeProgrammer
Forwarded from Code With Python
Automatic translator in Python!
We can translate text in a few lines using deep-translator. It supports dozens of languages, from English and Russian to Japanese and Arabic.
Install the library:
pip install deep-translator
Example of use:
from deep_translator import GoogleTranslator

text = "Hello, how are you?"
result = GoogleTranslator(source="en", target="es").translate(text)
print("Original:", text)
print("Translation:", result)
Mass translation of a list:
texts = ["Hello", "What's your name?", "See you later"]
for t in texts:
    print("→", GoogleTranslator(source="en", target="es").translate(t))
🔥 The result is a mini Google Translate right in Python: you can embed it in a chatbot, use it for notes, or automate work via the API.
🚪 @DataScience4
In scientific work, most time is spent reading articles, data, and reports.
On GitHub there is a collection called Awesome AI for Science: a catalog of AI tools for every stage of research.
Inside:
» working with literature
» data analysis
» turning articles into posters
» automating experiments
» tools for biology, chemistry, physics, and other fields
GitHub: http://github.com/ai-boost/awesome-ai-for-science
The list includes Paper2Poster, MinerU, The AI Scientist, as well as articles, datasets, and frameworks.
In effect, it is a complete toolkit for AI-assisted scientific research.
👉 https://news.1rj.ru/str/CodeProgrammer
AI-ML Roadmap from Scratch
👉 https://github.com/aadi1011/AI-ML-Roadmap-from-scratch?tab=readme-ov-file
https://news.1rj.ru/str/CodeProgrammer
Like and Share
This GitHub repository is not a dump of tutorials.
Inside are 28 production-ready AI projects you can use.
What's there:
Machine learning projects
→ Airbnb price forecasting
→ Air ticket cost calculator
→ Student performance tracker
AI for medicine
→ Chest disease detection
→ Heart disease prediction
→ Diabetes risk analysis
Generative AI applications
→ Live chatbot on Gemini
→ Medical assistant tool
→ Document analysis tool
Computer vision projects
→ Hand tracking system
→ Drug recognition app
→ OpenCV implementations
Data analysis dashboards
→ E-commerce analytics
→ Restaurant analytics
→ Cricket statistics tracker
And 10 more advanced projects coming soon:
→ Deepfake detection
→ Brain tumor classification
→ Driver drowsiness alert system
This is not just a collection of code files.
These are end-to-end working applications.
View the repository:
https://github.com/KalyanM45/AI-Project-Gallery
👉 @codeprogrammer
Like and Share
transformer Q&A.pdf
1.3 MB
Here’s a quick breakdown from the top Transformers interview questions 🔥👇
✅ What is a Transformer and why was it introduced?
It solved the limitations of RNNs & LSTMs by using self-attention, enabling parallel processing and capturing long-range dependencies like never before!
✅ Self-Attention – The magic behind it
Every word understands its context in relation to the others, making embeddings smarter and models more context-aware.
✅ Multi-Head Attention – Seeing from multiple angles
Different attention heads focus on different relationships in the data. It’s like having multiple experts analyze the same information!
✅ Positional Encoding – Teaching the model that order matters
Since Transformers don’t process data sequentially, this trick ensures they “know” the position of each token.
✅ Layer Normalization – Stabilizing the learning process
It speeds up training and avoids vanishing gradients, letting models go deeper and learn better. A runnable self-attention sketch follows below.
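To make the self-attention and positional-encoding ideas concrete, here is a minimal single-head NumPy sketch (my own illustration with toy shapes, not material from the attached PDF):

import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))  # subtract max for numerical stability
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    # Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)      # (seq, seq): similarity of every token to every other
    weights = softmax(scores, axis=-1)   # each row sums to 1: how much a token attends to the rest
    return weights @ V                   # context-aware mixture of value vectors

def sinusoidal_positional_encoding(seq_len, d_model):
    # Injects token order: even dims get sin, odd dims get cos, one frequency per dim pair.
    pos = np.arange(seq_len)[:, None]
    i = np.arange(d_model)[None, :]
    angles = pos / np.power(10000.0, (2 * (i // 2)) / d_model)
    return np.where(i % 2 == 0, np.sin(angles), np.cos(angles))

seq_len, d_model = 4, 8  # toy sizes: 4 tokens, 8-dim embeddings
x = np.random.randn(seq_len, d_model) + sinusoidal_positional_encoding(seq_len, d_model)
Wq, Wk, Wv = (np.random.randn(d_model, d_model) for _ in range(3))  # learned in a real model
out = scaled_dot_product_attention(x @ Wq, x @ Wk, x @ Wv)
print(out.shape)  # (4, 8): one context-aware vector per token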
👉 @codeprogrammer
Like and Share👍
Forwarded from Code With Python
Canva Pro admin panel available at a cheap price!
Other subscriptions: AutoCAD (1 year), GitHub Student Developer Pack (1 year), ChatGPT Go, LinkedIn (3 months), Adobe Photoshop (6 months), plus more tools.
To buy, contact us:
WhatsApp : +918004898515
Telegram : desktoppro89
#ad InsideAds
Forwarded from Code With Python
A cheat sheet on Python functions and techniques: useful built-in functions; working with iterators, strings, and collections; and popular tricks with unpacking, zip, enumerate, map, filter, and dictionaries. A few of those tricks are sketched below.
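As a taste of what such a cheat sheet covers, here is a minimal sketch of those tricks (examples of my own choosing, not copied from the sheet):

# Unpacking, zip, enumerate, map, filter, and dictionary tricks in a few lines.
first, *rest = [1, 2, 3, 4]                # star-unpacking: first=1, rest=[2, 3, 4]

names = ["Ada", "Alan", "Grace"]
years = [1815, 1912, 1906]
pairs = dict(zip(names, years))            # zip two lists into a dict

for i, name in enumerate(names, start=1):  # enumerate with a custom start index
    print(i, name)

squares = list(map(lambda n: n * n, years))       # map: apply a function to every item
modern = list(filter(lambda y: y > 1900, years))  # filter: keep items matching a predicate

merged = {**pairs, "Edsger": 1930}         # dict unpacking to merge or extend
print(first, rest, pairs, squares, modern, merged)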
@DataScience4