Python Daily – Telegram
Daily Python News
Questions, Tips and Tricks, and Best Practices on the Python Programming Language
Find more reddit channels over at @r_channels
React-Django Deployment

I have been working on Nginx and Gunicorn the whole day with no luck. It's crazy. Both the backend and frontend deployed successfully, but when I try to access the backend from the browser I get no response. I need help with the configuration. Any leads?
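In case it helps, a minimal Nginx server block for proxying API traffic to a Gunicorn backend typically looks like the following (port, path, and domain are assumptions, not the poster's actual config):

```nginx
server {
    listen 80;
    server_name example.com;

    # forward API traffic to the address Gunicorn binds to
    location /api/ {
        proxy_pass http://127.0.0.1:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

A common pitfall is binding Gunicorn to `localhost` while Nginx proxies to a different interface, or forgetting to reload Nginx after editing the config.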

/r/djangolearning
https://redd.it/1gsv99j
Write any Python script in 30 characters (plus an ungodly amount of whitespace)

Hey all!

My friend challenged me to find the shortest solution to a certain Leetcode-style problem in Python. They were generous enough to let me use whitespace for free, so that the code stays readable.

# What My Project Does

I like abusing rules, so I made a tool to encode any Python script in just 30 bytes, plus a lot of whitespace.

This result is somewhat harder to achieve than it looks at first, so you might want to check out the post I wrote about it. Alternatively, jump straight to the code if that's more your thing: GitHub.
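The whitespace part of the trick can be sketched in a few lines. This is just the underlying idea, not the actual 30-byte encoder:

```python
# Write the script's bytes as a binary number using only spaces and tabs,
# then decode the whitespace back into the original source.
src = "print('hi')"
bits = "".join(f"{b:08b}" for b in src.encode())
payload = bits.replace("0", " ").replace("1", "\t")  # whitespace-only encoding

decoded_bits = payload.replace(" ", "0").replace("\t", "1")
decoded = bytes(
    int(decoded_bits[i:i + 8], 2) for i in range(0, len(decoded_bits), 8)
).decode()
assert decoded == src  # round-trips back to the original script
```

The hard part the post discusses is making the *decoder* itself fit in 30 non-whitespace characters, which this sketch does not attempt.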

# Target Audience

This is a toy project, nothing serious, but it was fun for me to work on. I hope you find it entertaining too!

# Comparison

This is honestly the first time I've seen anyone do this with the specific goal of reducing the number of non-whitespace characters at any cost, so this might well be a unique project.

As an honorary mention, though, it builds on another project I think deserves recognition: PyFuck. It's JSFuck for Python, using 8 different characters to encode any (short enough) Python program.

/r/Python
https://redd.it/1gsyls8
AnyModal: A Python Framework for Multimodal LLMs

[AnyModal](https://github.com/ritabratamaiti/AnyModal) is a modular and extensible framework for integrating diverse input modalities (e.g., images, audio) into large language models (LLMs). It enables seamless tokenization, encoding, and language generation using pre-trained models for various modalities.

### Why I Built AnyModal

I created AnyModal to address a gap in existing resources for designing vision-language models (VLMs) or other multimodal LLMs. While there are excellent tools for specific tasks, there wasn’t a cohesive framework for easily combining different input types with LLMs. AnyModal aims to fill that gap by simplifying the process of adding new input processors and tokenizers while leveraging the strengths of pre-trained language models.

### Features

- **Modular Design**: Plug and play with different modalities like vision, audio, or custom data types.
- **Ease of Use**: Minimal setup—just implement your modality-specific tokenization and pass it to the framework.
- **Extensibility**: Add support for new modalities with only a few lines of code.

### Example Usage

```python
from transformers import ViTImageProcessor, ViTForImageClassification
from anymodal import MultiModalModel
from vision import VisionEncoder, Projector

# Load vision processor and model
processor = ViTImageProcessor.from_pretrained('google/vit-base-patch16-224')
vision_model = ViTForImageClassification.from_pretrained('google/vit-base-patch16-224')
hidden_size = vision_model.config.hidden_size

# Initialize vision encoder and projector
vision_encoder = VisionEncoder(vision_model)
vision_tokenizer = Projector(in_features=hidden_size, out_features=768)

# Load LLM components
from transformers import AutoTokenizer, AutoModelForCausalLM
llm_tokenizer = AutoTokenizer.from_pretrained("gpt2")
llm_model = AutoModelForCausalLM.from_pretrained("gpt2")

# Initialize AnyModal
multimodal_model = MultiModalModel(
    input_processor=None,
    # ... (snippet truncated in the original post)
```

/r/Python
https://redd.it/1gtbrzb
Why is my django-cte manager a lot faster than a custom QuerySet?

I have this Car model that I want to sort by speed. I implemented this in two different ways: one using a custom QuerySet and the other using an external package, django-cte (see below). For some reason, the CTE implementation is a lot faster even though the queries are the same (same limit, same offset, same filters, ...). And I'm talking about an order of magnitude better: for 1 million records the custom QuerySet runs for approx. 21 s while the CTE one runs for only 2 s. Why is this happening? Is it because the custom QuerySet sorts first and then applies the necessary filters?
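One plausible explanation is *where* the window function is evaluated relative to the filters and limit. A standalone `sqlite3` sketch (hypothetical schema, not the SQL Django actually generates) shows the CTE shape: the rank is computed once inside the CTE, so the outer query can filter and limit on it directly instead of sorting the full set first:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE car (id INTEGER PRIMARY KEY, name TEXT, speed INTEGER)")
conn.executemany(
    "INSERT INTO car (name, speed) VALUES (?, ?)",
    [("a", 120), ("b", 200), ("c", 160)],
)

# CTE form: RANK() is evaluated inside the CTE, and the outer query
# filters on the already-computed rank.
rows = conn.execute("""
    WITH ranked AS (
        SELECT name, RANK() OVER (ORDER BY speed DESC) AS rnk FROM car
    )
    SELECT name, rnk FROM ranked WHERE rnk <= 2 ORDER BY rnk
""").fetchall()
print(rows)  # [('b', 1), ('c', 2)]
```

Comparing `EXPLAIN` output (or Django's `.query`) for both implementations on the real database would confirm whether the planner treats them differently.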

```
from django.db import models
from django.utils.translation import gettext_lazy as _
from django_cte import CTEManager, With


class CarCTEManager(CTEManager):
    def sort_speed(self):
        cte = With(
            Car.objects.annotate(
                rank=models.Window(
                    expression=models.functions.Rank(),
                    # ... (snippet truncated in the original post)
```

/r/django
https://redd.it/1gt9q67
Deply: keep your python architecture clean

Hello everyone,

My name is Archil. I'm a Python/PHP developer originally from Ukraine, now living in Wrocław, Poland. I've been working on a tool called [Deply](https://github.com/Vashkatsi/deply), and I'd love to get your feedback and thoughts on it.

# What My Project Does

**Deply** is a standalone Python tool designed to enforce architectural patterns and dependencies in large Python projects. Deply analyzes your code structure and dependencies to ensure that architectural rules are followed. This promotes cleaner, more maintainable, and modular codebases.

**Key Features:**

* **Layer-Based Analysis**: Define custom layers (e.g., models, views, services) and restrict their dependencies.
* **Dynamic Configuration**: Easily configure collectors for each layer using file patterns and class inheritance.
* **CI Integration**: Integrate Deply into your Continuous Integration pipeline to automatically detect and prevent architecture violations before they reach production.
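To illustrate the kind of rule such a tool enforces, here is a generic `ast`-based sketch (not Deply's actual implementation; the layer names and the `FORBIDDEN` rule table are hypothetical):

```python
import ast

# Hypothetical rule: the "views" layer may not import from the "models" layer.
FORBIDDEN = {"views": {"models"}}  # layer -> layers it may not import from

def violations(layer: str, source: str) -> list[str]:
    """Return modules imported from layers forbidden to `layer`."""
    banned = FORBIDDEN.get(layer, set())
    found = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.ImportFrom) and node.module:
            if node.module.split(".")[0] in banned:
                found.append(node.module)
        elif isinstance(node, ast.Import):
            for alias in node.names:
                if alias.name.split(".")[0] in banned:
                    found.append(alias.name)
    return found

print(violations("views", "from models.car import Car\nimport os"))  # ['models.car']
```

Running a check like this per file in CI is what turns an architecture convention into an enforced rule.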

# Target Audience

* **Who It's For**: Developers and teams working on medium to large Python projects who want to maintain a clean architecture.
* **Intended Use**: Ideal for production environments where enforcing module boundaries is critical, as well as educational purposes to teach best practices.

# Use Cases

* **Continuous Integration**: Add Deply to your CI/CD pipeline to catch architectural violations early in the development process.
* **Refactoring**: Use Deply to understand existing dependencies in your codebase, making large-scale refactoring safer.

/r/Python
https://redd.it/1gthdpy
Best host for webapp?

I have a web app running Flask-Login, SQLAlchemy for the DB, and React for the frontend. I don't particularly want to spend more than 10-20€ a month (based in western Europe), but I do want the option to expand if the website starts getting traction. I've looked around and there are so many options it's giving me a bit of a headache.

AWS Elastic Beanstalk seems like the obvious initial choice, but I feel like the price can really balloon after the first year from what I've read. I've heard about other places to host, but nothing has stood out yet.

Idk if this is relevant for the choice, but OVH is my registrar. I'm not really considering them, as I've heard it's a bit of a nightmare to host on.

/r/flask
https://redd.it/1gtk0wa
I started implementing an AsyncIO event loop in Rust

The project is called *RLoop* and available [in the relevant GH repository](https://github.com/gi0baro/rloop).

# What My Project Does

RLoop is intended to be a 1:1 replacement for the standard library asyncio event loop. At the moment RLoop is still very pre-alpha, as it only supports I/O handles involving raw socket file descriptors. The aim is to reach a stable and feature-complete release in the next few months.

# Target Audience

RLoop is intended for every `asyncio` developer. Until the project reaches a stable state, though, it is intended for non-production environments and testing purposes only.

# Comparison to Existing Alternatives

The main existing alternatives to RLoop are the standard library implementation and `uvloop`.

Aside from RLoop's lack of features at this stage, some preliminary benchmarks on macOS and Python 3.11 with a basic TCP echo server show a 30% gain over the default `asyncio` implementation, while `uvloop` is still 50% faster.
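For reference, the TCP echo workload used in this kind of benchmark looks roughly like the following minimal asyncio sketch (not the actual benchmark code):

```python
import asyncio

async def handle(reader, writer):
    # echo every chunk straight back to the client
    while data := await reader.read(1024):
        writer.write(data)
        await writer.drain()
    writer.close()

async def main():
    server = await asyncio.start_server(handle, "127.0.0.1", 0)
    host, port = server.sockets[0].getsockname()

    # exercise the echo path once as a client
    reader, writer = await asyncio.open_connection(host, port)
    writer.write(b"ping")
    await writer.drain()
    reply = await reader.read(1024)

    writer.close()
    await writer.wait_closed()
    server.close()
    await server.wait_closed()
    return reply

result = asyncio.run(main())
print(result)  # b'ping'
```

Since the loop implementation sits underneath these stream APIs, the same script exercises whichever event loop is installed, which is what makes loop-for-loop comparisons meaningful.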

___

Feel free to post your feedback, test RLoop within your environment, and contribute :)

/r/Python
https://redd.it/1gtmvdb
Jupyter Enterprise Gateway on Windows Server?

Hi,

I'm trying to run JEG on my Windows Server 2019 machine to connect my laptop to the kernels on the server.

The connection works fine and kernels start, but they close after a WebSocket timeout.

Here is what I can see in the JEG console:

```
[D 2024-11-17 18:54:53.267 EnterpriseGatewayApp] Launching kernel: 'Python 3 (ETL)' with command: ['C:\Users\\venvs\etl-env\scripts\python.exe', '-Xfrozen_modules=off', '-m', 'ipykernel_launcher', '-f', 'C:\Users\\AppData\Roaming\jupyter\runtime\kernel-c66b786d-403c-493f-84f4-458b61a41541.json']
[D 2024-11-17 18:54:53.267 EnterpriseGatewayApp] BaseProcessProxy.launch_process() env: {'KERNEL_LAUNCH_TIMEOUT': '', 'KERNEL_WORKING_DIR': '', 'KERNEL_USERNAME': '', 'KERNEL_GATEWAY': '', 'KERNEL_ID': '', 'KERNEL_LANGUAGE': '', 'EG_IMPERSONATION_ENABLED': ''}
[I 2024-11-17 18:54:53.273 EnterpriseGatewayApp] Local kernel launched on 'ip', pid: 16132, pgid: 0, KernelID: c66b786d-403c-493f-84f4-458b61a41541, cmd: '['C:\Users\\venvs\etl-env\scripts\python.exe', '-Xfrozen_modules=off', '-m', 'ipykernel_launcher', '-f', 'C:\Users\\AppData\Roaming\jupyter\runtime\kernel-c66b786d-403c-493f-84f4-458b61a41541.json']'
[D 2024-11-17 18:54:53.274 EnterpriseGatewayApp] Connecting to: tcp://127.0.0.1:61198
[D 2024-11-17 18:54:53.281 EnterpriseGatewayApp] Connecting to: tcp://127.0.0.1:61195
[I 2024-11-17 18:54:53.284 EnterpriseGatewayApp] Kernel started: c66b786d-403c-493f-84f4-458b61a41541
[D 2024-11-17 18:54:53.284 EnterpriseGatewayApp] Kernel args: {'env': {'KERNEL_LAUNCH_TIMEOUT': '40', 'KERNEL_WORKING_DIR': 'a path on my laptop', 'KERNEL_USERNAME': 'Laptop username'}, 'kernel_headers': {}, 'kernel_name': 'etl-env'}
[I 241117 18:54:53 web:2348] 201 POST /api/kernels (ip) 29.00ms
[D 2024-11-17 18:54:53.344 EnterpriseGatewayApp] Initializing websocket connection /api/kernels/c66b786d-403c-493f-84f4-458b61a41541/channels
[D 2024-11-17 18:54:53.344 EnterpriseGatewayApp] Requesting kernel info from c66b786d-403c-493f-84f4-458b61a41541
[D 2024-11-17 18:54:53.346 EnterpriseGatewayApp] Connecting to: tcp://127.0.0.1:61194
[I 241117 18:54:53 web:2348] 200 GET /api/kernels (ip) 0.00ms
[D 2024-11-17 18:54:53.367 EnterpriseGatewayApp] Initializing websocket connection /api/kernels/c66b786d-403c-493f-84f4-458b61a41541/channels
[D 2024-11-17 18:54:53.368 EnterpriseGatewayApp] Waiting for pending kernel_info request
[D 2024-11-17 18:54:53.378 EnterpriseGatewayApp] Initializing websocket connection /api/kernels/c66b786d-403c-493f-84f4-458b61a41541/channels
[W 2024-11-17 18:54:53.379 EnterpriseGatewayApp] Replacing stale connection: c66b786d-403c-493f-84f4-458b61a41541:66351527-a8ee-422a-9305-f3b432ee58df
[D
```

/r/IPython
https://redd.it/1gtkajw
Monday Daily Thread: Project ideas!

# Weekly Thread: Project Ideas 💡

Welcome to our weekly Project Ideas thread! Whether you're a newbie looking for a first project or an expert seeking a new challenge, this is the place for you.

## How it Works:

1. **Suggest a Project**: Comment your project idea—be it beginner-friendly or advanced.
2. **Build & Share**: If you complete a project, reply to the original comment, share your experience, and attach your source code.
3. **Explore**: Looking for ideas? Check out Al Sweigart's ["The Big Book of Small Python Projects"](https://www.amazon.com/Big-Book-Small-Python-Programming/dp/1718501242) for inspiration.

## Guidelines:

* Clearly state the difficulty level.
* Provide a brief description and, if possible, outline the tech stack.
* Feel free to link to tutorials or resources that might help.

## Example Submissions:

## Project Idea: Chatbot

**Difficulty**: Intermediate

**Tech Stack**: Python, NLP, Flask/FastAPI/Litestar

**Description**: Create a chatbot that can answer FAQs for a website.

**Resources**: [Building a Chatbot with Python](https://www.youtube.com/watch?v=a37BL0stIuM)

## Project Idea: Weather Dashboard

**Difficulty**: Beginner

**Tech Stack**: HTML, CSS, JavaScript, API

**Description**: Build a dashboard that displays real-time weather information using a weather API.

**Resources**: [Weather API Tutorial](https://www.youtube.com/watch?v=9P5MY_2i7K8)

## Project Idea: File Organizer

**Difficulty**: Beginner

**Tech Stack**: Python, File I/O

**Description**: Create a script that organizes files in a directory into sub-folders based on file type.

**Resources**: [Automate the Boring Stuff: Organizing Files](https://automatetheboringstuff.com/2e/chapter9/)

Let's help each other grow. Happy coding!

/r/Python
https://redd.it/1gtrhgb
ididi, now with version 1.0.4, supports infinite number of nested scopes!

Hello my peer pythonistas!

9 days ago, I posted my dependency injection lib here

https://www.reddit.com/r/Python/comments/1gn5erp/ididi_dependency_injection_in_a_single_line_of

In those 9 days, ididi has iterated through 13 versions and reached the 1.0.0 milestone (1.0.4 now, actually).

https://github.com/raceychan/ididi

I am bringing ididi back to you with a powerful new feature and nice new documentation.

https://raceychan.github.io/ididi/features/#using-scope-to-manage-resources

A scope is a context that manages the lifespan of resources. A classic use case is creating a separate database session/connection for each request a user sends to your endpoint; this separates data access among users.
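The per-request pattern described above can be sketched with a plain async context manager (stdlib only, not ididi's API; the connection is a stand-in string):

```python
import asyncio
from contextlib import asynccontextmanager

@asynccontextmanager
async def request_scope(request_id):
    conn = f"conn-for-{request_id}"  # stand-in for a real DB connection
    try:
        yield conn                   # the resource lives only inside this scope
    finally:
        pass                         # a real connection would be closed here

async def handle(request_id):
    async with request_scope(request_id) as conn:
        return conn                  # each request sees its own connection

result = asyncio.run(handle("r1"))
print(result)  # conn-for-r1
```

What a DI library adds on top of this is wiring: dependencies declared against the scope get the scoped resource injected automatically instead of threading it through by hand.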

Unlike most alternatives, which either do not support scopes or support only a finite number of pre-defined scopes, ididi now supports an infinite number of nested scopes.

Let's take a glance at basic usage:

```python
async with dg.scope(appname) as appscope:
    async with dg.scope(router) as routerscope:
        async with dg.scope(endpoint) as endpointscope:
            async with dg.scope(userid) as userscope:
                async with dg.scope(requestid) as requestscope:
                    ...
```

/r/Python
https://redd.it/1gtr77s
ansiplot: Pretty (and legible) small console plots.

# What My Project Does

Hi all! While developing something different I realized that it would be nice to have a way of plotting multiple curves in the console to get comparative insights (which of those curves is better or worse at certain regions). I am thinking of a 40x10 to 60x20 canvas and maybe 10+ curves that will probably be overlapping a lot.

I couldn't find something to match the exact use case, so I made yet another console plotter:

https://github.com/maniospas/ansiplot
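The core of such a plotter is just mapping samples onto a character grid. A minimal stdlib sketch of the idea (not ansiplot's implementation; the function name and defaults are made up):

```python
def console_plot(ys, width=40, height=10, symbol="*"):
    """Render a list of samples as a width x height ASCII plot."""
    lo, hi = min(ys), max(ys)
    span = (hi - lo) or 1                          # avoid dividing by zero
    grid = [[" "] * width for _ in range(height)]
    for col in range(width):
        y = ys[col * len(ys) // width]             # nearest sample for this column
        row = round((y - lo) / span * (height - 1))
        grid[height - 1 - row][col] = symbol       # row 0 is the top of the plot
    return "\n".join("".join(r) for r in grid)

plot = console_plot([i * i for i in range(20)])
print(plot)
```

Overlapping curves then reduce to drawing each series with its own symbol onto the same grid, which is exactly where the one-symbol-per-curve design decision comes from.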

# Target Audience

This is mostly a toy project in the sense that it covers the functionalities I am interested in and was made pretty quickly (in an evening). That said, I am creating it for my own production and will be polishing it as needed, so all feedback is welcome.

# Comparison

My previous options were [asciichart](https://github.com/kroitor/asciichart), [drawilleplot](https://github.com/gooofy/drawilleplot) and [asciiplot](https://github.com/w2sv/asciiplot). I think ansiplot looks less "clean" because it is restricted to one symbol per curve, creates thicker lines, and does not show axis ticks other than the min and max values (of course, one can add bars to mark precise points).

The first two shortcomings are conscious design decisions in service of two features I consider very important:
- The plots look pretty with

/r/Python
https://redd.it/1gtvy3o
I Understand Machine Learning Models Better by Combining Python Libraries

Hi folks,


I’m currently super interested in neural networks, and I bet many of you are too. PyTorch is the hottest Python library for Machine Learning right now. For anyone starting out, PyTorch can be hard to understand. That’s why I combined PyTorch with Manim (3b1b) to:

1. Train a neural network (PyTorch), and
2. Visualize the model architecture (Manim).

I think the combination of these two Python libraries makes it relatively easy to get started with ML. https://youtu.be/zLEt5oz5Mr8?si=cY-Riirhdi66Zqfy

Have you worked with PyTorch and Manim before? I find Manim particularly challenging, as it often feels like a work in progress.

/r/Python
https://redd.it/1gtyh9o
Safe to delete all migration files, run makemigrations and apply the new migration?

Have a repo with 3+ years of migrations and wanted to clean them up. I have read a bit about the squashmigrations command, but shouldn't I just be able to delete all the migration files, create a new migration containing all the changes, and then run migrate, applying that one?

We don't have any custom migrations that need to be run.

/r/django
https://redd.it/1gtzh7o