Python Daily – Telegram
Daily Python News
Questions, tips and tricks, and best practices for the Python programming language
Find more reddit channels over at @r_channels
Creating a Python System to Turn All PostgreSQL Servers into Masters with Auto-Recovery and Sync – N

Hello Python community!

I'm currently working on developing a distributed PostgreSQL system using Python, where all servers act as masters. I'm also adopting a clear separation between servers and clients to create a flexible and efficient architecture.

The primary goals of this project are as follows:

1. Master-Master architecture
All servers operate equally, eliminating single points of failure (SPOF).
2. Server-Client separation
Clients can seamlessly access the system while the internal operations are optimized for distributed workloads.
3. Automatic recovery
In case of server failures, other nodes automatically handle recovery to maintain uninterrupted service.
4. Automatic data synchronization
Efficiently synchronizing data across nodes while ensuring consistency.
5. Leveraging Python and PostgreSQL
Combining Python's flexibility with PostgreSQL's robust features.

# Current Tools

For this project, I’m focusing on the following two key modules:

psycopg3: To enable efficient communication with PostgreSQL, especially with its asynchronous capabilities.
aioquic: For leveraging the QUIC protocol to achieve fast and reliable data synchronization, particularly for server-client communications in a distributed setup.
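To make the master-master goal concrete, here is a toy sketch of the write fan-out idea using only stdlib asyncio. Everything here is illustrative: the `Node` class, its in-memory dict, and the direct peer calls are stand-ins for what psycopg (local apply) and aioquic (peer transport) would do in the real system; it ignores conflict resolution entirely.

```python
import asyncio

class Node:
    """Toy master node: applies writes locally and fans them out to its peers."""

    def __init__(self, name):
        self.name = name
        self.data = {}      # stands in for a PostgreSQL table
        self.peers = []

    async def write(self, key, value, replicate=True):
        self.data[key] = value  # local apply (a psycopg INSERT in the real system)
        if replicate:
            # Fan out to every peer concurrently; the real system would ship
            # this over aioquic streams instead of direct method calls.
            await asyncio.gather(
                *(peer.write(key, value, replicate=False) for peer in self.peers)
            )

async def demo():
    a, b, c = Node("a"), Node("b"), Node("c")
    for node in (a, b, c):
        node.peers = [p for p in (a, b, c) if p is not node]
    await a.write("invoice:42", "paid")   # write to ANY master...
    return [n.data for n in (a, b, c)]    # ...and every node sees the write

results = asyncio.run(demo())
```

The `replicate=False` flag on forwarded writes is the minimal trick that stops replication loops; a production design would need vector clocks or similar to handle concurrent writes to the same key on different masters.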

# Challenges and Feedback Needed

Here are some specific points where I’d love to get your insights:

1. Server-Client Design Approach
What’s the best way to dynamically determine which server the client should

/r/Python
https://redd.it/1gwghji
Best Tech Stack for a Chat App with AI: Python vs Nest.js for Backend?

I am working on a B2C startup and need to design the backend for a website and mobile apps supporting a chat application. The platform will incorporate AI/ML models to analyze chats and user inputs, alongside a notification system for users. My initial idea is to separate the backend and AI services. Should I use Python for both the backend (with Flask or Django) and the AI components, or would it be better to use Nest.js for the backend while using Python for AI?

/r/flask
https://redd.it/1gwatcn
R BALROG: Benchmarking Agentic LLM and VLM Reasoning On Games

Tired of saturated benchmarks? Want scope for a significant leap in capabilities? 

Introducing BALROG: a Benchmark for Agentic LLM and VLM Reasoning On Games!

BALROG is a challenging benchmark for LLM agentic capabilities, designed to stay relevant for years to come.


Check it out!

GitHub: https://github.com/balrog-ai/BALROG

Leaderboard: https://balrogai.com

Paper: https://arxiv.org/abs/2411.13543

/r/MachineLearning
https://redd.it/1gwhnf8
HPC-Style Job Scripts in the Cloud

The first parallel computing systems I ever used were job scripts on HPC job schedulers (like SLURM, PBS, SGE, ...). They had an API straight out of the '90s, but they were super straightforward and helped me do research when I was still just a baby programmer.

The cloud is way more powerful than these systems, but kinda sucks from a UX perspective. I wanted to replicate the experience I had on HPC on the cloud with Cloud-based Job Arrays. It wasn't actually all that hard.

[Post here](https://docs.coiled.io/blog/slurm-job-arrays.html)
Video here

This is still super new (we haven't even put up proper docs yet) but I'm excited about the feature. Thoughts/questions/critiques welcome.

/r/Python
https://redd.it/1gwj7e6
Finished my first Django app! (But deployment is hell)

I just finished my first Django app: a simple CRM for my company. Developing it was an experience that makes me want to switch careers into web app development. It's been really, really awesome. Sadly, I can't say the same about deploying the app. I've been trying on and off to get it to work, without complete success.

This is what my process looks like:
Pull from repo -> break gunicorn in various ways and spend half an hour figuring out what broke -> get asked to change something -> have fun modifying stuff in my development environment -> pull from repo -> break gunicorn in various ways and spend half an hour figuring out what broke -> get asked to change something -> have fun modifying stuff in my development environment -> ...

Is it always like this or am I missing something?

I am just a python/django enthusiast. I know about css and html, but I am not an engineer by any means.

I really enjoy developing in Django but why is deployment hell?

/r/django
https://redd.it/1gwly9h
Friday Daily Thread: r/Python Meta and Free-Talk Fridays

# Weekly Thread: Meta Discussions and Free Talk Friday 🎙️

Welcome to Free Talk Friday on /r/Python! This is the place to discuss the r/Python community (meta discussions), Python news, projects, or anything else Python-related!

## How it Works:

1. Open Mic: Share your thoughts, questions, or anything you'd like related to Python or the community.
2. Community Pulse: Discuss what you feel is working well or what could be improved in the /r/python community.
3. News & Updates: Keep up-to-date with the latest in Python and share any news you find interesting.

## Guidelines:

All topics should be related to Python or the /r/python community.
Be respectful and follow Reddit's Code of Conduct.

## Example Topics:

1. New Python Release: What do you think about the new features in Python 3.11?
2. Community Events: Any Python meetups or webinars coming up?
3. Learning Resources: Found a great Python tutorial? Share it here!
4. Job Market: How has Python impacted your career?
5. Hot Takes: Got a controversial Python opinion? Let's hear it!
6. Community Ideas: Something you'd like to see us do? Tell us!

Let's keep the conversation going. Happy discussing! 🌟

/r/Python
https://redd.it/1gwub4n
MetaDataScraper: A Python Package for scraping Facebook page data with ease!

Hey everyone! 👋

I’m excited to introduce MetaDataScraper, a Python package designed to automate the extraction of valuable data from Facebook pages. Whether you're tracking follower counts, post interactions, or multimedia content like videos, this tool makes scraping Facebook page data a breeze. No API keys or tedious manual effort required — just pure automation! 😎

Usage docs here at ReadTheDocs.

# Key Features:

Automated Extraction: Instantly fetch follower counts, post texts, likes, shares, and video links from public Facebook pages.
Comprehensive Data Retrieval: Get detailed insights from posts, including text content, interactions (likes, shares), and multimedia (videos, reels, etc.).
Loginless Scraping: With the LoginlessScraper class, no Facebook login is needed. Perfect for scraping public pages.
Logged-In Scraping: The LoggedInScraper class allows you to login to Facebook and bypass the limitations of loginless scraping. Access more content and private posts if needed.
Headless Operation: Scrapes data silently in the background (without opening a visible browser window) — perfect for automated tasks or server environments.
Flexible & Easy-to-Use: Simple setup, clear method calls, and works seamlessly with Selenium WebDriver.

# Example Usage:

1. Installation: Simply install via pip:

pip install MetaDataScraper

2. Loginless Scraping (no Facebook login required):

from MetaDataScraper import

/r/Python
https://redd.it/1gwn9yd
Finally launched my portfolio with Django


After years of working with Django, I always postponed building my own personal site. Recently, I decided it was time, and that’s how eriktaveras.com came to life.

# What’s included?

Backend: Django to manage projects and a contact form with spam protection (rate limiting and content detection).
Frontend: Tailwind CSS for a clean design and Alpine.js for light interactivity.
Extras: Automatic Telegram notifications whenever someone submits the contact form.

I’m also working on adding a blog and still uploading more projects to the portfolio, so it’s very much a work in progress.

# What I learned

Using Tailwind CSS for quick, efficient design.
Combining Django with lightweight frontend tools like Alpine.js.
Building a secure contact form without relying on external services.

If you’re curious, feel free to check it out at www.eriktaveras.com. I’d love to hear your feedback or ideas for improvement!

Thanks for reading! 🚀

/r/django
https://redd.it/1gwlx9o
How do you handle speeding up frequent reads on aggregations without redundancy risks?

I've built an internal tool for Estimating, Inventory, Schedule & Dispatch, Job Costing & Reconciliation at a construction contractor business. We're using postgres. Now that much of the operational functionality is there with proper normalization, I'm building out dashboards that do a lot of aggregation on deeply nested fields.

So the (possibly misguided/skill issue?) goal is to persist some aggregated values to distant parent model objects. But the values can never be out of sync!

I've implemented the new GeneratedField with db_persist=True in a number of places, which just simplifies some things, but as I understand it I can't use a GeneratedField to sum a value on a child related model.

So there's a few options I'm aware of, and I'm curious what you use in production environments where data validity and integrity is vital (this affects what people are paid, records for taxes, etc).

Side effects in the child model's `save()` method override
1. Slow on save
2. Error prone, No guarantees on data integrity
3. Tons of clutter and poor maintainability in models.py
Django Signals to update affected parent fields
1. Slow on save
2. Does this roll back
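A third option, beyond `save()` overrides and signals, is to push the bookkeeping into the database itself with triggers: the update fires in the same transaction as the child-row write, so the denormalized parent value can never drift out of sync. The sketch below uses stdlib sqlite3 purely for runnability; the `job`/`line_item` tables are hypothetical, and Postgres `CREATE TRIGGER` with a PL/pgSQL function works the same way.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE job (
        id INTEGER PRIMARY KEY,
        total_cost REAL NOT NULL DEFAULT 0   -- denormalized aggregate
    );
    CREATE TABLE line_item (
        id INTEGER PRIMARY KEY,
        job_id INTEGER REFERENCES job(id),
        cost REAL NOT NULL
    );
    -- The trigger fires in the same transaction as the insert, so the
    -- parent total cannot drift out of sync with its children.
    CREATE TRIGGER line_item_insert AFTER INSERT ON line_item
    BEGIN
        UPDATE job SET total_cost = total_cost + NEW.cost
        WHERE id = NEW.job_id;
    END;
""")
conn.execute("INSERT INTO job (id) VALUES (1)")
conn.execute("INSERT INTO line_item (job_id, cost) VALUES (1, 100.0)")
conn.execute("INSERT INTO line_item (job_id, cost) VALUES (1, 50.0)")
total = conn.execute("SELECT total_cost FROM job WHERE id = 1").fetchone()[0]
```

A real setup would also need UPDATE and DELETE triggers on `line_item`, and the trigger logic lives outside Django (a migration with `RunSQL` is the usual home for it), which trades some maintainability for the integrity guarantee.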

/r/django
https://redd.it/1gwt719
Networking applications should not be opening sockets

From my first development project involving networking, I was hooked. I also found some areas of networking software a bit unresolved. There was some strong modeling for people who make networking components, but that seemed to peter out after the sockets library. Nobody seemed to have a compelling way to bundle all that blocking I/O, byte framing, encoding/decoding, message dispatching, etc. into something that could be reused from project to project.

I finally did something about this and have produced a software library. I also wrote a discussion paper that is the first link in the readme of the following github repo. The repo contains demonstration modules that are referred to in the other readme links.

Networking is not about sockets

Is there anyone else out there that has thought along similar lines? Has anyone seen something better?

/r/Python
https://redd.it/1gw3hwi
Use Python to build a Markdown converter

🚀 Check out my URL/PDF/DOCX to Markdown Converter!

Hey fellow developers! 👋

I'm super excited to share a tool I've been working on that I think might make your life a bit easier. You know that annoying process of converting documents to Markdown? Well, I built something to handle that!

What does it do?
- Converts web pages to Markdown with just a URL
- Transforms PDF files to Markdown (using pdfplumber)
- Converts DOCX files to clean Markdown
- Lets you preview the rendered result right there
- Comes with copy/download buttons for quick access

I built it using FastAPI for the backend (it's crazy fast!) and kept the frontend super clean and simple. You literally just paste a URL or upload a file, hit convert, and boom! 💥 You've got your Markdown.

Why I made this:
I got tired of manually converting docs for my documentation work, and thought others might find this useful too. Plus, I wanted to learn more about FastAPI and document processing in Python.

Tech stack:
- FastAPI (because who doesn't love async Python? 🐍)
- pdfplumber for PDF parsing
- python-docx for Word docs
- marked.js for the preview
- Basic HTML/CSS/JS for the frontend

The code is open source, and I'd love to get your feedback or contributions!

/r/Python
https://redd.it/1gwfscj
Project Guide: AI-Powered Documentation Generator for Codebases

What My Project Does:
Project Guide is an AI-powered tool that analyzes codebases and automatically generates comprehensive documentation. It aims to simplify the process of understanding and navigating complex projects, especially those written by others.

Target Audience:
This tool is intended for developers, both professionals and hobbyists, who work with existing codebases or want to improve documentation for their own projects. It's suitable for production use but can also be valuable for learning and project management.

Comparison:
Unlike traditional documentation tools that require manual input, Project Guide uses AI to analyze code and generate insights automatically. It differs from static analysis tools by providing higher-level, context-aware documentation that explains project architecture and purpose.

Showcase:
Ever wished your project could explain itself? Now it can! 🪄 Project Guide uses AI to analyze your codebase and generate comprehensive documentation automagically.

Features:
🔍 Deep code analysis
📚 Generates detailed developer guides
🎯 Identifies project purpose and architecture
🗺️ Creates clear documentation structure
🤖 AI-powered insights
📝 Markdown-formatted output
🔄 Recursive directory analysis
🎨 Well-organized documentation

Check it out: https://github.com/sojohnnysaid/project-guide

Going through codebases that someone else wrote is hard, no matter how long you've been at this. This tool

/r/Python
https://redd.it/1gx2515
11 Python Boilerplate Code Snippets Every Developer Needs

Python's simplicity makes it a favorite among developers, especially in trending fields like AI, machine learning, and automation. But let's face it—repeating boilerplate code can be a drag. That’s where Python snippets come in!

From validating emails to shuffling lists, we’ve rounded up 11 essential Python boilerplate snippets to simplify your daily tasks and supercharge your workflow:

# 🔍 1. Validate Email Formats (Regex Simplified)

Use regular expressions to validate email strings efficiently:

import re

def validate_email(email):
    email_pattern = re.compile(r'^[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}$')
    return bool(email_pattern.match(email))


# ✂️ 2. Slice Strings & Lists Like a Pro

Access sub-elements directly without loops for cleaner code:

my_string = "Hello, World!"
print(my_string[0:5])  # Output: Hello


# 🔄 3. Compare Words: Are They Anagrams?

Quickly check if two strings are anagrams with `collections.Counter`:

from collections import Counter

def are_anagrams(word1, word2):
    return Counter(word1) == Counter(word2)
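A quick self-contained smoke test of the three patterns above (names are redefined here so this snippet runs standalone):

```python
import re
from collections import Counter

# 1. Email validation with a character-class regex
email_pattern = re.compile(r'^[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}$')
assert email_pattern.match("user@example.com")
assert not email_pattern.match("not-an-email")

# 2. Slicing grabs a sub-string without a loop
my_string = "Hello, World!"
assert my_string[0:5] == "Hello"

# 3. Counter compares letter frequencies, so order is ignored
assert Counter("listen") == Counter("silent")      # anagrams
assert Counter("python") != Counter("typhoon")     # not anagrams
```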


/r/Python
https://redd.it/1gx1nwb
Running concurrent tasks for streaming in a flask route

Hi guys I'm trying to figure out the best way to solve my issue, whether it be threads, or asyncio, or something other than flask.

Heres my route handler:

def route_handler():
    def stream_response():
        def process(connection):
            do_something()

        processing_thread = CancellableThreadWithDBConnection(target=process)
        processing_thread.start()

        while not processing_done:
            try:
                yield json.dumps("")


/r/flask
https://redd.it/1gwq16h
Async or new thread?

Hi guys my flask route is streaming and “yield”s data every 1 second to check if the client connection has been closed. Meanwhile I want the actual route handler logic to run.

Right now I create a separate thread in the route handler to run the actual logic then just have a while loop with yield “” in the main thread.

But this just seems so hacky, since I have to terminate the child thread from the main thread if the client closed the connection and `yield ""` raised a GeneratorExit.

I saw that flask has an event loop and just wanted to check with you all and see if anyone has had experience with it. Obviously it’s a much better solution if it works. Thanks!
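For what it's worth, the thread-plus-heartbeat pattern can be made less hacky with cooperative cancellation via a `threading.Event` instead of terminating the thread from outside. Below is a framework-free sketch using only the stdlib; the function names are illustrative, not Flask API, and in Flask the generator would be wrapped in a `Response`.

```python
import queue
import threading
import time

def run_with_heartbeat(work, poll=0.05):
    """Yield heartbeat strings while `work` runs in a thread, then its result.

    If the client disconnects, the streaming framework raises GeneratorExit
    inside this generator; the finally-block then signals the worker to stop
    via the Event rather than force-killing the thread.
    """
    result = queue.Queue()
    cancel = threading.Event()

    def worker():
        result.put(work(cancel))

    t = threading.Thread(target=worker, daemon=True)
    t.start()
    try:
        while result.empty():
            yield ""            # heartbeat; fails fast if the client is gone
            time.sleep(poll)
        yield result.get()
    finally:
        cancel.set()            # cooperative cancellation, no thread killing
        t.join(timeout=1)

def job(cancel):
    # A well-behaved worker checks cancel.is_set() at convenient points.
    time.sleep(0.1)
    return "done"

out = list(run_with_heartbeat(job))
```

The key design choice is that the worker is responsible for noticing `cancel` and exiting cleanly, which sidesteps the unsafe "terminate the child thread from the main thread" step entirely.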

/r/flask
https://redd.it/1gw6gxe
Fellow django developers let's connect! Let's learn and create something together!

I'm creating a Discord channel where developers can chat, mentor other people, and even create projects together. We'd be happy if you joined our community!

Discord link: https://discord.gg/SD5b4NP4Sq

/r/djangolearning
https://redd.it/1gx7yyi
Data model

Hi there, fellows,
I have the feeling I am wasting a lot of time reading the flask-sqlalchemy documentation ("Define Models" section) without making real progress.

I'm looking for some advice to reach my goal faster: load a pandas DataFrame into a nice class like ExcelData().
I can already load an Excel file and display it via a route and template, but I now want to save it into a DB via a class. My skills seem to be blocked at this step.

Any hints? Links? Templates? Tutorials? A favorite YouTuber?
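One possible shape, sketched with the standard library only so it runs anywhere: a model class plus a bulk insert of the DataFrame's rows. Here sqlite3 stands in for the flask-sqlalchemy session, and `ExcelData` with its columns is hypothetical; the flask-sqlalchemy equivalent is noted in the comment. With pandas installed, `df.to_sql("excel_data", db.engine, if_exists="append", index=False)` does the whole load in one call.

```python
import sqlite3
from dataclasses import dataclass

# Hypothetical model; with flask-sqlalchemy it would look like:
#   class ExcelData(db.Model):
#       id = db.Column(db.Integer, primary_key=True)
#       name = db.Column(db.String)
#       amount = db.Column(db.Float)
@dataclass
class ExcelData:
    name: str
    amount: float

# Stand-in for rows coming out of df.itertuples()
rows = [ExcelData("widget", 9.5), ExcelData("gadget", 12.0)]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE excel_data (name TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO excel_data (name, amount) VALUES (?, ?)",
    [(r.name, r.amount) for r in rows],
)
count = conn.execute("SELECT COUNT(*) FROM excel_data").fetchone()[0]
```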

/r/flask
https://redd.it/1gxagf9
🚀 Feature Friday: assertNumQueries!

Today's Feature Friday reaches back into Django's history for a small-but-powerful tool: assertNumQueries!

This method from TransactionTestCase helps you write tests that verify the number of queries made by a piece of code.

It is a great way to check DB performance and catch regressions or "n+1" issues (when you accidentally make a single DB query for every object in a loop instead of loading everything up front from the database).

You can pass a function to assertNumQueries, or use it as a context manager, as shown in the example below:

from django.test import TransactionTestCase
from .services import my_function_that_hits_the_db

class MyTest(TransactionTestCase):

    def test_db_performance(self):
        # called directly
        self.assertNumQueries(7, my_function_that_hits_the_db)

        # used as a context manager
        with self.assertNumQueries(2):
            Person.objects.create(name="Aaron")


/r/django
https://redd.it/1gx8b9a
Django mastery?

Hi, I want to ask: how would I master Django?

How do I start making projects in Django?

Please, seniors, give me advice.

/r/djangolearning
https://redd.it/1gx7hy3