Machine Learning with Python
68.3K subscribers
1.29K photos
95 videos
169 files
952 links
Learn Machine Learning with hands-on Python tutorials, real-world code examples, and clear explanations for researchers and developers.

Admin: @HusseinSheikho || @Hussein_Sheikho
🔭 Daily Useful Scripts

Daily.py is a repository that provides a collection of ready-to-use Python scripts for automating common daily tasks.

git clone https://github.com/Chamepp/Daily.py.git

Github: https://github.com/Chamepp/Daily.py
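
The scripts in the repo vary; as a rough illustration of the kind of daily task they automate, here is a minimal sketch (function name, path, and behavior are hypothetical, not taken from Daily.py) that sorts a folder's files into subfolders by extension:

from pathlib import Path

def organize_by_extension(folder):
    # Move every file in `folder` into a subfolder named after its extension
    root = Path(folder).expanduser()
    for item in root.iterdir():
        if item.is_file():
            ext = item.suffix.lstrip(".").lower() or "no_extension"
            target = root / ext
            target.mkdir(exist_ok=True)
            item.rename(target / item.name)

organize_by_extension("~/Downloads")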

https://news.1rj.ru/str/CodeProgrammer
Introduction to Python

Learn fundamental concepts for Python beginners that will help you get started on your journey to learn Python. These tutorials focus on the absolutely essential things you need to know about Python.

What You’ll Learn:
• Installing a Python environment
• The basics of the Python language

https://realpython.com/learning-paths/python3-introduction/
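
A few of those essentials in one illustrative snippet (not taken from the course):

# Variables, f-strings, functions, and loops -- the bare essentials
name = "Python"

def greet(language, version=3):
    return f"Hello from {language} {version}!"

for version in [3.10, 3.11, 3.12]:
    print(greet(name, version))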

https://news.1rj.ru/str/CodeProgrammer
Flask by Example

You’re going to start building a Flask app that calculates word-frequency pairs based on the text from a given URL. This is a full-stack tutorial covering a number of web development techniques. Jump right in and discover the basics of Python web development with the Flask microframework.

https://realpython.com/learning-paths/flask-by-example/
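
A condensed sketch of the core idea (counting word frequencies in the text fetched from a URL), assuming flask, requests, and beautifulsoup4 are installed; the route and helper names are illustrative, not the tutorial's exact code:

from collections import Counter
import re

import requests
from bs4 import BeautifulSoup
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/count")
def count_words():
    # Fetch the page passed as ?url=... and strip the HTML down to text
    url = request.args.get("url")
    if not url:
        return jsonify({"error": "pass a ?url= query parameter"}), 400
    html = requests.get(url, timeout=10).text
    text = BeautifulSoup(html, "html.parser").get_text()

    # Count word frequencies and return the top 10 pairs as JSON
    words = re.findall(r"[a-z]+", text.lower())
    return jsonify(dict(Counter(words).most_common(10)))

if __name__ == "__main__":
    app.run(debug=True)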

https://news.1rj.ru/str/CodeProgrammer
👁‍🗨 Running YOLOv7 algorithm on your webcam using Ikomia API

from ikomia.dataprocess.workflow import Workflow
from ikomia.utils import ik
from ikomia.utils.displayIO import display
import cv2

stream = cv2.VideoCapture(0)

# Init the workflow
wf = Workflow()

# Add color conversion
cvt = wf.add_task(ik.ocv_color_conversion(code=str(cv2.COLOR_BGR2RGB)), auto_connect=True)

# Add YOLOv7 detection
yolo = wf.add_task(ik.infer_yolo_v7(conf_thres="0.7"), auto_connect=True)

while True:
    ret, frame = stream.read()

    # Test if streaming is OK
    if not ret:
        continue

    # Run workflow on image
    wf.run_on(frame)

    # Display results from "yolo"
    display(
        yolo.get_image_with_graphics(),
        title="Object Detection - press 'q' to quit",
        viewer="opencv"
    )

    # Press 'q' to quit the streaming process
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# After the loop release the stream object
stream.release()

# Destroy all windows
cv2.destroyAllWindows()
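
Setup note: the snippet expects the Ikomia API and OpenCV installed from PyPI (package names assumed current):

pip install ikomia opencv-python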


https://news.1rj.ru/str/CodeProgrammer
🖥 Generate API docs under a minute in Django
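
The post does not name a tool; one common route in a Django REST Framework project is drf-spectacular, which serves an OpenAPI schema plus Swagger UI with a few lines (sketch below assumes DRF is already configured):

# settings.py
INSTALLED_APPS += ["drf_spectacular"]
REST_FRAMEWORK = {"DEFAULT_SCHEMA_CLASS": "drf_spectacular.openapi.AutoSchema"}

# urls.py
from django.urls import path
from drf_spectacular.views import SpectacularAPIView, SpectacularSwaggerView

urlpatterns += [
    path("api/schema/", SpectacularAPIView.as_view(), name="schema"),
    path("api/docs/", SpectacularSwaggerView.as_view(url_name="schema"), name="swagger-ui"),
]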

https://news.1rj.ru/str/CodeProgrammer
👁 Savant: Supercharged Computer Vision and Video Analytics Framework on DeepStream

git clone https://github.com/insight-platform/Savant.git

cd Savant/samples/peoplenet_detector

git lfs pull


Github: https://github.com/insight-platform/Savant

https://news.1rj.ru/str/CodeProgrammer
🖥 Convert PDF to docx using Python

Github: https://github.com/dothinking/pdf2docx
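
Typical usage (file names are placeholders):

from pdf2docx import Converter

cv = Converter("input.pdf")   # source PDF (placeholder path)
cv.convert("output.docx")     # write the converted Word document
cv.close()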

https://news.1rj.ru/str/CodeProgrammer

Please give our posts more reactions
Hand gesture recognition

Full Source Code 👇👇👇👇

import cv2
import mediapipe as mp

# Initialize MediaPipe Hands module
mp_hands = mp.solutions.hands
hands = mp_hands.Hands()

# Initialize MediaPipe Drawing module for drawing landmarks
mp_drawing = mp.solutions.drawing_utils

# Open a video capture object (0 for the default camera)
cap = cv2.VideoCapture(0)

while cap.isOpened():
    ret, frame = cap.read()

    if not ret:
        continue

    # Convert the frame to RGB format
    frame_rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)

    # Process the frame to detect hands
    results = hands.process(frame_rgb)

    # Check if hands are detected
    if results.multi_hand_landmarks:
        for hand_landmarks in results.multi_hand_landmarks:
            # Draw landmarks on the frame
            mp_drawing.draw_landmarks(frame, hand_landmarks, mp_hands.HAND_CONNECTIONS)

    # Display the frame with hand landmarks
    cv2.imshow('Hand Recognition', frame)

    # Exit when 'q' is pressed
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# Release the video capture object and close the OpenCV windows
cap.release()
cv2.destroyAllWindows()
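
Setup note: the script needs the MediaPipe and OpenCV packages from PyPI:

pip install mediapipe opencv-python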


https://news.1rj.ru/str/CodeProgrammer

Please give our posts more reactions