✅ Python Practice Questions – Part 4 🐍💻
🔹 Q1. Merge two lists
list1 = [1, 2]
list2 = [3, 4]
merged = list1 + list2
print(merged)
📌 Explanation: + joins two lists
🔹 Q2. Find the minimum number in a list
nums = [5, 2, 9, 1]
print(min(nums))
📌 Explanation: min() returns the smallest value
🔹 Q3. Convert a string to uppercase
text = "hello"
print(text.upper())
📌 Explanation: upper() converts to uppercase
🔹 Q4. Check if a list contains an element
items = [1, 2, 3]
print(2 in items)
📌 Explanation: in checks if value exists in list
🔹 Q5. Square all numbers in a list
nums = [1, 2, 3, 4]
squared = [x**2 for x in nums]
print(squared)
📌 Explanation: List comprehension squares each item
🔹 Q6. Remove an element from a list
items = [1, 2, 3]
items.remove(2)
print(items)
📌 Explanation: remove() deletes the first occurrence of the value 2 from the list
💬 Double Tap ❤️ For More
What is the output of this code?
x = 5
y = "5"
print(x + int(y))
Anonymous Quiz
17%
A) 55
61%
B) 10
19%
C) TypeError
3%
D) 5
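📌 Explanation: int(y) converts the string "5" into the number 5, so the sum is 10. A quick sketch you can run to check both directions of the conversion:
x = 5
y = "5"
print(x + int(y))   # 10: int() turns "5" into a number first
print(str(x) + y)   # 55: converting the other way joins the strings instead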
How do you get user input in Python?
Anonymous Quiz
13%
A) input.get()
12%
B) get.input()
73%
C) input()
2%
D) read()
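📌 Explanation: input() reads one line of text typed by the user. A tiny sketch (the prompt text and name here are just placeholders):
name = input("Enter your name: ")  # waits for the user to type a line
print("Hello,", name)              # input() always returns a string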
What is the output of this code?
print(10 // 3)
Anonymous Quiz
35%
A) 3.33
54%
B) 3
2%
C) 4
9%
D) Error
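📌 Explanation: // is floor division, so 10 // 3 gives 3. Compare it with /:
print(10 // 3)   # 3, floor division drops the fractional part
print(10 / 3)    # 3.3333333333333335, true division returns a float
print(-7 // 2)   # -4, flooring goes toward negative infinity, not toward zero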
Which keyword is used to define a function in Python?
Anonymous Quiz
8%
A) func
7%
B) define
8%
C) function
77%
D) def
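📌 Explanation: def starts a function definition. A minimal example:
def greet(name):              # def introduces the function
    return "Hello, " + name

print(greet("Python"))        # Hello, Python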
What is the result of this code?
a = [1, 2, 3]
a.append(4)
print(a)
Anonymous Quiz
9%
A) [1, 2, 3]
9%
B) [4, 1, 2, 3]
82%
C) [1, 2, 3, 4]
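📌 Explanation: append() adds the item to the end of the list in place, so the answer is [1, 2, 3, 4]. Note that append() itself returns None:
a = [1, 2, 3]
result = a.append(4)   # changes the list in place
print(a)               # [1, 2, 3, 4]
print(result)          # None, because append() returns nothing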
What is the output?
x = 10
if x > 5:
    print("Yes")
else:
    print("No")
Anonymous Quiz
82%
A) Yes
11%
B) No
5%
C) Error
1%
D) Nothing
Which symbol is used for writing comments in Python?
Anonymous Quiz
25%
A) //
8%
B) <!-- -->
59%
C) #
8%
D) /* */
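📌 Explanation: Python comments start with #; there is no /* */ or // style. A short sketch:
# this whole line is a comment and is ignored by Python
x = 10  # everything after the # on a code line is ignored too
print(x)  # 10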
Which data structure does not allow duplicate values?
Anonymous Quiz
8%
A) List
33%
B) Tuple
42%
C) Set
16%
D) Dictionary
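📌 Explanation: A set stores only unique values, so duplicates disappear automatically:
nums = [1, 2, 2, 3, 3, 3]
unique = set(nums)     # a set keeps only one copy of each value
print(unique)          # {1, 2, 3} (element order is not guaranteed)
print(len(unique))     # 3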
Which method is used to add a key-value pair to a dictionary?
Anonymous Quiz
10%
A) add()
18%
B) append()
7%
C) insert()
64%
D) dict[key] = value
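📌 Explanation: Dictionaries have no add() or append(); you assign with dict[key] = value (update() also works). A small sketch, with the keys and values as placeholder data:
person = {"name": "Asha"}            # sample data only
person["age"] = 25                   # assignment adds a new key-value pair
person.update({"city": "Pune"})      # update() adds one or many pairs at once
print(person)                        # {'name': 'Asha', 'age': 25, 'city': 'Pune'}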
Which of the following is a valid list comprehension?
Anonymous Quiz
38%
A) [x for x in range(5) if x%2==0]
24%
B) for x in range(5): if x%2==0
14%
C) x = [range(5) if x%2==0]
24%
D) list(x for x in range(5) if x%2==0)
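📌 Explanation: Option A is the list-comprehension syntax. Option D also runs, but it is a generator expression passed to list(). Both build the same list:
evens_a = [x for x in range(5) if x % 2 == 0]        # list comprehension (option A)
evens_d = list(x for x in range(5) if x % 2 == 0)    # generator expression wrapped in list()
print(evens_a)   # [0, 2, 4]
print(evens_d)   # [0, 2, 4]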
Which structure is best for storing key-value pairs?
Anonymous Quiz
10%
A) List
11%
B) Tuple
11%
C) Set
68%
D) Dictionary
What does the return keyword do in a function?
Anonymous Quiz
6%
A) Exits Python
10%
B) Prints the output
80%
C) Returns a value to the caller
5%
D) Skips execution
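📌 Explanation: return hands a value back to the caller so it can be stored or reused; print() only displays it. A quick sketch:
def square(n):
    return n * n          # the value goes back to the caller

result = square(4)        # the returned value can be stored...
print(result)             # 16
print(square(5) + 1)      # ...or used in a bigger expression: 26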
What is the output of this code?
def add(x, y=2):
    return x + y

print(add(3))
Anonymous Quiz
6%
2
19%
3
75%
5
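📌 Explanation: y defaults to 2, so add(3) is 3 + 2 = 5. Passing y explicitly overrides the default:
def add(x, y=2):
    return x + y

print(add(3))      # 5, y falls back to the default 2
print(add(3, 10))  # 13, an explicit argument overrides the default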
What is *args used for in a function?
Anonymous Quiz
8%
A) To pass a list
7%
B) To define a loop
71%
C) To pass variable number of positional arguments
15%
D) To return multiple values
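📌 Explanation: *args collects any number of positional arguments into a tuple. A small sketch:
def total(*args):         # args is a tuple of all positional arguments
    return sum(args)

print(total(1, 2))        # 3
print(total(1, 2, 3, 4))  # 10, any number of arguments is accepted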
What does this function return?
def func(a, b):
    print(a + b)
Anonymous Quiz
49%
A) a + b
30%
B) None
8%
C) 0
14%
D) Error
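📌 Explanation: The function only prints a + b; with no return statement it returns None. You can see this by storing the result:
def func(a, b):
    print(a + b)      # prints the sum but does not return it

result = func(1, 2)   # prints 3
print(result)         # None, the default return value of a function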
Sometimes reality outpaces expectations in the most unexpected ways.
While global AI development seems increasingly fragmented, Sber just released Europe's largest open-source AI collection—full weights, code, and commercial rights included.
✅ No API paywalls.
✅ No usage restrictions.
✅ Just four complete model families ready to run in your private infrastructure, fine-tuned on your data, serving your specific needs.
What makes this release remarkable isn't merely the technical prowess, but the quiet confidence behind sharing it openly when others are building walls. Find out more in the article from the developers.
GigaChat Ultra Preview: 702B-parameter MoE model (36B active per token) with 128K context window. Trained from scratch, it outperforms DeepSeek V3.1 on specialized benchmarks while maintaining faster inference than previous flagships. Enterprise-ready with offline fine-tuning for secure environments.
GitHub | HuggingFace | GitVerse
GigaChat Lightning offers the opposite balance: compact yet powerful MoE architecture running on your laptop. It competes with Qwen3-4B in quality, matches the speed of Qwen3-1.7B, yet is significantly smarter and larger in parameter count.
Lightning holds its own against the best open-source models in its class, outperforms comparable models on different tasks, and delivers ultra-fast inference—making it ideal for scenarios where Ultra would be overkill and speed is critical. Plus, it features stable expert routing and a welcome bonus: 256K context support.
GitHub | Hugging Face | GitVerse
Kandinsky 5.0 brings a significant step forward in open generative models. The flagship Video Pro matches Veo 3 in visual quality and outperforms Wan 2.2-A14B, while Video Lite and Image Lite offer fast, lightweight alternatives for real-time use cases. The suite is powered by K-VAE 1.0, a high-efficiency open-source visual encoder that enables strong compression and serves as a solid base for training generative models. This stack balances performance, scalability, and practicality—whether you're building video pipelines or experimenting with multimodal generation.
GitHub | GitVerse | Hugging Face | Technical report
Audio gets its upgrade too: GigaAM-v3 delivers a speech recognition model with 50% lower WER than Whisper-large-v3, trained on 700k hours of audio, with punctuation and normalization for spontaneous speech.
GitHub | HuggingFace | GitVerse
Every model can be deployed on-premises, fine-tuned on your data, and used commercially. It's not just about catching up – it's about building sovereign AI infrastructure that belongs to everyone who needs it.