Python Daily – Telegram
Daily Python News
Questions, Tips and Tricks, and Best Practices on the Python Programming Language
Find more reddit channels over at @r_channels
Any better way for JavaScript and Django to communicate with each other?

I am designing a front-end for an API of mine. As of now, the only way for the JavaScript and Django to communicate is through cookies.

For example, if a sign-in attempt is made with incorrect credentials, the server receives the sign-in form and makes a POST request to the API; the API returns an error message that the credentials are incorrect; the Django server then sets a temporary cookie named "errorMessage" and redirects the user to the sign-in page again. The cookie is then read and deleted by the JavaScript, which calls alert() with the error message to let the user know that the credentials were wrong.

Is there any better, simpler, or more efficient way?
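One common alternative (a sketch, not the only way) is to skip the cookie-and-redirect round trip entirely: have the Django view return JSON, and let the JavaScript fetch() the endpoint and display the message without a reload. Here call_my_api is a hypothetical helper standing in for your API call:

    from django.http import JsonResponse

    def sign_in(request):
        # call_my_api is hypothetical: returns an error string, or None on success
        error = call_my_api(request.POST)
        if error:
            return JsonResponse({"error": error}, status=401)
        return JsonResponse({"ok": True})

On the client, fetch("/sign-in/", {method: "POST", body: form}) can read response.json() and show the error inline instead of via alert().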

/r/django
https://redd.it/1fwkld2
[R] Meta releases SOTA video and audio generation models at less than 40 billion parameters.

Today, Meta released a SOTA set of text-to-video models. These are small enough to potentially run locally. It doesn't seem like they plan on releasing the code or dataset, but they give virtually all details of the model. The fact that this model is already this coherent really points to how quickly development is occurring.


https://ai.meta.com/research/movie-gen/?utm_source=linkedin&utm_medium=organic_social&utm_content=video&utm_campaign=moviegen

This suite of models (Movie Gen) contains many model architectures, but it's very interesting to see training that synchronizes sound with pictures. That actually makes a lot of sense from a training POV.

https://preview.redd.it/047ddxdb7vsd1.png?width=1116&format=png&auto=webp&s=a7cd628a8b2dde9824b27983a430217123c297d8




/r/MachineLearning
https://redd.it/1fwic4m
Segregate By Date: Sort your photos into year and month folders based on filename and EXIF metadata

What My Project Does

This Python code I developed reads a folder containing images and sorts them into folders: parent folders named "2024", "2023", etc., and child folders named "Jan", "Feb", etc. The program can read files no matter how they are nested, how many sub-folders there are, or where they came from. For instance, suppose we have 100 files directly in a folder with ordinary names, 50 files with timestamps in the filename (like IMG_20210912_120000.jpg), 100 files already sorted into years but not months, and 50 files already fully sorted into year and month. Once the program is run, all 300 files will be properly sorted into year and month folders.

You can also set the input folder to a new set of images and the output folder to a previous output of this program; the output folder will then be modified in place to produce a new fully sorted set of photos (in other words, previous results are implicitly merged with the new ones).
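For anyone curious what the core of such a tool looks like, here is a minimal sketch (my own, not the project's code) that sorts by EXIF date using Pillow; the real project also parses dates out of filenames:

    from datetime import datetime
    from pathlib import Path
    import shutil

    from PIL import Image

    MONTHS = ["Jan", "Feb", "Mar", "Apr", "May", "Jun",
              "Jul", "Aug", "Sep", "Oct", "Nov", "Dec"]

    def taken_at(path: Path) -> datetime | None:
        """Read the capture date from EXIF; return None if unavailable."""
        try:
            exif = Image.open(path).getexif()
            # 36867 = DateTimeOriginal (Exif IFD), 306 = DateTime (IFD0)
            raw = exif.get_ifd(0x8769).get(36867) or exif.get(306)
            return datetime.strptime(raw, "%Y:%m:%d %H:%M:%S") if raw else None
        except Exception:
            return None

    def sort_tree(src: Path, dst: Path) -> None:
        for f in src.rglob("*.jpg"):  # rglob handles any nesting depth
            dt = taken_at(f)
            if dt is None:
                continue  # the real tool falls back to filename timestamps
            target = dst / str(dt.year) / MONTHS[dt.month - 1]
            target.mkdir(parents=True, exist_ok=True)
            shutil.move(str(f), str(target / f.name))

    sort_tree(Path("unsorted"), Path("sorted"))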

Target Audience

1. People or families who regularly take pictures on multiple devices, later wanting to store them all in one place, perhaps to maintain a long-term memories album, or to

/r/Python
https://redd.it/1fwo463
I made a dumb simple GMAIL client... only for sending emails from gmail.

I wanted to automatically send emails from my Gmail account but didn't want to go through the whole Google Cloud Platform / etc. setup... this just requires an app password for your Gmail.

(note: I'm not great at packaging, so currently this only works from a GitHub install)

# What my project does:

Lets you send email from your Gmail account in Python without all the GCP setup.
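Under the hood, sending via Gmail with an app password needs nothing beyond the standard library; here's a hedged sketch of the plain-smtplib equivalent (my sketch, not this package's actual API):

    import smtplib
    from email.message import EmailMessage

    msg = EmailMessage()
    msg["From"] = "you@gmail.com"
    msg["To"] = "friend@example.com"
    msg["Subject"] = "Hello from Python"
    msg.set_content("Sent with smtplib and a Gmail app password.")

    # smtp.gmail.com:465 is Gmail's SSL SMTP endpoint; log in with the
    # 16-character app password, not your account password.
    with smtplib.SMTP_SSL("smtp.gmail.com", 465) as smtp:
        smtp.login("you@gmail.com", "your-16-char-app-password")
        smtp.send_message(msg)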

# Target audience:

Simpletons like myself.

# Comparison:

I couldn't find an easy way to send Gmail from Python without all the complicated Google Cloud Platform jazz... so if you only want to automatically send emails from your Gmail account, this is for you!

Let me know what you guys think! Look at the source, it's pretty simple to use haha.

https://github.com/zackplauche/python-gmail

/r/Python
https://redd.it/1fvxpkj
ovld - fast and featureful multiple dispatch

## What My Project Does

[ovld](https://github.com/breuleux/ovld) implements multiple dispatch in Python. This lets you define multiple versions of the same function with different type signatures.

For example:

import math
from typing import Literal
from ovld import ovld

@ovld
def div(x: int, y: int):
    return x / y

@ovld
def div(x: str, y: str):
    return f"{x}/{y}"

@ovld
def div(x: int, y: Literal[0]):
    return math.inf

assert div(8, 2) == 4
assert div("/home", "user") == "/home/user"
assert div(10, 0) == math.inf

## Target Audience

Ovld is pretty generally applicable: multiple dispatch is a central feature of several programming languages, e.g. Julia. I find it particularly useful when doing work on complex heterogeneous data structures, for instance walking an AST, serializing/deserializing data, generating HTML representations of data, etc.
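For instance, here is a hedged sketch of that recursive-walk pattern (my example, not from ovld's docs); recursive calls go through the same dispatcher, so each element is routed by its type:

    from ovld import ovld

    @ovld
    def total(x: int):
        return x

    @ovld
    def total(x: list):
        return sum(total(item) for item in x)

    @ovld
    def total(x: dict):
        return sum(total(value) for value in x.values())

    assert total({"a": [1, 2], "b": {"c": 3}}) == 6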


## Features

* Wide range of supported annotations: normal

/r/Python
https://redd.it/1fwdgal
Currently Seeking Entry/Junior Level Developer Work: How Does My Resume Look?

I'm actively looking for entry-level or junior developer positions and would love feedback on my resume. If you're a seasoned developer or someone involved in hiring junior devs, your insights would be invaluable! Here is the resume on [Google Drive](https://docs.google.com/document/d/1k2EtRjwHxQBobh2tMVV5ZCxaGHZHUPGGgb7oVDQMDDM/edit?usp=sharing).

* What do you think about the structure and content?
* Are there any areas for improvement?
* Does it effectively showcase my skills and projects for this level?

Thank you in advance for your help!

/r/djangolearning
https://redd.it/1fwqg36
Free Python Learning with Literal Baby Steps

I was using Coddy, but then I ran into a paywall and couldn't execute any more functions unless I waited a day. I'm looking for something that helps me repeat the same things over and over to memorize syntax and learn.

For example, SQL Climber has been wonderful for very slowly learning SQL, repeating the same commands over and over so I memorize them, and very slowly progressing to more concepts. I'm looking for something similar, but with Python, and completely free. I tried Exercism, but I didn't find it very accessible: it was confusing to navigate, and I got stuck on the first main exercise of "cooking a lasagne" because it didn't explain very well what I'm putting in, where, and why. I also tried HackInScience, but it progressed way too fast and was more focused on the problem-solving aspect, when all I want is to learn the syntax and repeat it until it's memorized.


I also want something with an online editor that checks my work and then moves on if it's correct (not a book or online book).

/r/Python
https://redd.it/1fwun6b
Server-side rendered DataTables with Django

Just wrapped up a project where I had to handle a massive table with DataTables and Django. Thought I'd share my experience and maybe save someone else a headache.

The challenge: Display thousands of records with dynamic columns, sorting, and filtering - all server-side. Oh, and it needed to be blazing fast.

Here's what worked for me:

1. Custom Django view to process DataTables requests
2. Dynamic column generation based on user permissions
3. Efficient database queries with select_related()
4. Complex sorting and filtering logic handled server-side
5. Pagination to keep things snappy

The trickiest part was definitely the dynamic ordering. I ended up writing a function to translate DataTables' sorting parameters into a Django ORM-friendly format. It was a pain to debug, but it works like a charm now.
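Not the blog's exact code, but a minimal sketch of that translation, assuming DataTables' standard server-side request parameters (order[i][column] and order[i][dir]):

    def datatables_ordering(request, columns):
        """columns maps DataTables column indexes to ORM field names."""
        ordering = []
        i = 0
        while f"order[{i}][column]" in request.GET:
            index = int(request.GET[f"order[{i}][column]"])
            direction = request.GET.get(f"order[{i}][dir]", "asc")
            field = columns[index]
            ordering.append(field if direction == "asc" else f"-{field}")
            i += 1
        return ordering

    # usage sketch:
    # queryset = queryset.order_by(*datatables_ordering(request, {0: "name", 1: "created_at"}))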

Performance-wise, it's holding up well. Tables load quickly, and sorting/filtering feels smooth.

Key takeaways:

* Server-side processing is crucial for large datasets
* Plan your dynamic columns carefully
* Efficient querying is your best friend


i also wrote a blog about this - [https://selftaughtdev.hashnode.dev/mastering-complex-datatables-with-django-a-deep-dive-into-server-side-processing](https://selftaughtdev.hashnode.dev/mastering-complex-datatables-with-django-a-deep-dive-into-server-side-processing)

/r/django
https://redd.it/1fwqlwk
Sunday Daily Thread: What's everyone working on this week?

# Weekly Thread: What's Everyone Working On This Week? 🛠️

Hello /r/Python! It's time to share what you've been working on! Whether it's a work-in-progress, a completed masterpiece, or just a rough idea, let us know what you're up to!

## How it Works:

1. Show & Tell: Share your current projects, completed works, or future ideas.
2. Discuss: Get feedback, find collaborators, or just chat about your project.
3. Inspire: Your project might inspire someone else, just as you might get inspired here.

## Guidelines:

Feel free to include as many details as you'd like. Code snippets, screenshots, and links are all welcome.
Whether it's your job, your hobby, or your passion project, all Python-related work is welcome here.

## Example Shares:

1. Machine Learning Model: Working on a ML model to predict stock prices. Just cracked a 90% accuracy rate!
2. Web Scraping: Built a script to scrape and analyze news articles. It's helped me understand media bias better.
3. Automation: Automated my home lighting with Python and Raspberry Pi. My life has never been easier!

Let's build and grow together! Share your journey and learn from others. Happy coding! 🌟

/r/Python
https://redd.it/1fx3vfc
Terminal Anime Browsing redefined

What my project does:

I made a Python package, FastAnime, that replicates in the terminal the experience you would get from watching anime in a browser. It uses yt-dlp to scrape the sites, rich and InquirerPy for the UI, and click for the command-line interface. It also supports fzf and rofi as external menus.

It mostly integrates the AniList API to achieve most of this functionality.

Target Audience:

The project's goal was to bring my love of anime to the terminal.

So it's aimed at those anime enthusiasts who prefer doing everything from the terminal.

Comparison:

The main difference between it and other tools like it is how robust and featureful it is:

* SyncPlay integration
* AniList syncing
* view what's trending
* watch trailers of upcoming anime
* score anime directly from your terminal
* powerful search and filter capability akin to the one in a browser
* integration with python-mpv to enable a seamless viewing experience without ever closing the player
* batch downloading
* manage your AniList anime lists directly from the terminal
* highly configurable
* nice UI
* and so on...

https://github.com/Benex254/FastAnime




/r/Python
https://redd.it/1fwt65b
Python 3 Reduction of privileges in code - problem (Windows)


The system is Windows 10/Windows 11. I am logged in and see the desktop in Account5 (no administrator privileges). A Python script is run in this account via right-click, Run as administrator; the script performs many operations that require administrator privileges. Nevertheless, one piece of code must run in the context of, and with access to, the logged-in Windows account (Account5), because net use has to be executed in the context of the logged-in Windows account.

Here is the code snippet:

def connect_drive(self):
    login = self.entry_login.get()
    password = self.entry_password.get()
    if not login or not password:
        messagebox.showerror("Error", "Please enter a login and password before attempting to connect.")
        return
    try:


/r/Python
https://redd.it/1fwze07
How to measure Python coroutine context-switch time?

I am trying to measure the context-switch time of coroutines and Python threads by having two workers each wait on an event that is set by the other. The threading context switch takes 3.87 µs, which matches my expectation, since an OS context switch does take a few thousand instructions. The coroutine version's context switch is 14.43 µs, which surprises me: I expected coroutine context switches to be an order of magnitude faster. Is it a Python coroutine issue, or is my program wrong?

Code can be found in this gist.

Rewriting the program in Rust gives more reasonable results: coro: 163 ns, thread: 1989 ns.
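For reference, here is a minimal sketch of this kind of ping-pong benchmark (my reconstruction, not the author's gist). Note that asyncio.Event wakeups go through the event loop's call_soon machinery, so this measures loop dispatch overhead on top of the raw coroutine switch, which may explain the gap:

    import asyncio
    import time

    N = 100_000

    async def pinger(ping: asyncio.Event, pong: asyncio.Event):
        for _ in range(N):
            ping.set()
            await pong.wait()
            pong.clear()

    async def ponger(ping: asyncio.Event, pong: asyncio.Event):
        for _ in range(N):
            await ping.wait()
            ping.clear()
            pong.set()

    async def main():
        ping, pong = asyncio.Event(), asyncio.Event()
        start = time.perf_counter()
        await asyncio.gather(pinger(ping, pong), ponger(ping, pong))
        elapsed = time.perf_counter() - start
        # each loop iteration wakes each task once: ~2N switches total
        print(f"{elapsed / (2 * N) * 1e6:.2f} us per switch")

    asyncio.run(main())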

/r/Python
https://redd.it/1fx9tgr
Having trouble inserting a new element into a table

I'm new to Flask and not used to working with tables in Python. I wanted to ask for a hint on how to solve the following problem, and I would really appreciate some help if possible. Thanks in advance.

sqlalchemy.exc.OperationalError: (sqlite3.OperationalError) table posts has no column named user_id
[SQL: INSERT INTO posts (script, content, user_id) VALUES (?, ?, ?)]
[parameters: ('First Post', 'hi', 3)]

Here's the code:

class users(db.Model):
    id = db.Column("id", db.Integer, primary_key=True)
    name = db.Column(db.String(100))
    email = db.Column(db.String(100))

    def __init__(self, name, email):
        self.name = name
        self.email = email

class posts(db.Model):
    id = db.Column("id", db.Integer, primary_key=True)
    script = db.Column(db.String(255), nullable=False)
    content = db.Column(db.String(1000))
    user_id = db.Column(db.Integer, db.ForeignKey('users.id'), nullable=False)

    def __init__(self, script, content, user_id):
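A hedged guess at the cause (based on the error alone, not on seeing the full app): SQLAlchemy's create_all() never alters tables that already exist, so a posts table created before the user_id column was added keeps its old schema. In development you can drop and recreate the tables; for real data, use a migration tool such as Flask-Migrate:

    # development-only fix: rebuilds all tables (destroys existing rows)
    with app.app_context():
        db.drop_all()
        db.create_all()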
       

/r/flask
https://redd.it/1fwvpgy
I wanna create something fun and useful in Python

So recently, I wrote a script in Python that grabbed my Spotify liked songs, searched for them on YouTube, and downloaded them in seconds. I downloaded over 500 songs in minutes using this simple program, and now I wanna build something more. I have intermediate Python skills and am exploring web scraping (and enjoying it too!!).

What fun ideas do you have that I can check out?

/r/Python
https://redd.it/1fxd8g3
Python is awesome! Speed up Pandas point queries by 100x or even 1000x.

Introducing NanoCube! I'm currently working on another Python library, called CubedPandas, that aims to make working with Pandas more convenient and fun, but it suffers from Pandas' low performance when it comes to filtering data and executing aggregative point queries like the following:

value = df.loc[df['make'].isin(['Audi', 'BMW']) & (df['engine'] == 'hybrid')]['revenue'].sum()

So, can we do better? Yes, multi-dimensional OLAP-databases are a common solution. But, they're quite heavy and often not available for free. I needed something super lightweight, a minimal in-process in-memory OLAP engine that can convert a Pandas DataFrame into a multi-dimensional index for point queries only.

Thanks to the greatness of the Python language and ecosystem, I ended up with less than 30 lines of (admittedly ugly) code that can speed up Pandas point queries by a factor of 10x, 100x, or even 1,000x.

I wrapped it into a library called NanoCube, available via pip install nanocube. For source code, further details, and some benchmarks, please visit https://github.com/Zeutschler/nanocube.

from nanocube import NanoCube
nc = NanoCube(df)
value = nc.get('revenue', make=['Audi', 'BMW'], engine='hybrid')

Target audience: NanoCube is useful for data engineers, analysts and scientists who want to speed up their data processing. Due

/r/Python
https://redd.it/1fxgkj6
Complete Reddit Backup - A BDFR enhancement: Archive Reddit saved posts periodically

What My Project Does

The BDFR tool is an existing, popular, and thoroughly useful way to archive Reddit saved posts offline, supporting JSON and XML formats. But if you're someone like me who saves hundreds of posts a month, moves the older saved posts to an offline backup, and then un-saves them from your Reddit account, you'd have to manually merge last month's BDFR output with this month's. You'd then need to convert the BDFR tool's JSON files to HTML separately, in case the original post is taken down.

For instance, on September 1st you have a folder for (https://www.reddit.com/r/soccer/) containing your saved posts from August, produced by the BDFR tool. You then remove August's saved posts from your account to keep your saved-posts list concise. On October 1st, you run it again for posts saved in September. Now you need to merge (https://www.reddit.com/r/soccer/)'s September posts with August's by manually copy-pasting and removing duplicates, if any, and then repeat the same process for every subreddit.
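A minimal sketch of the manual merge the author's script automates (my own code, not the project's): copy each new per-subreddit folder into the archive, skipping anything already present:

    from pathlib import Path
    import shutil

    def merge_backups(new_run: Path, archive: Path) -> None:
        for src in new_run.rglob("*"):
            if not src.is_file():
                continue
            dst = archive / src.relative_to(new_run)
            if dst.exists():
                continue  # duplicate: already archived in a previous month
            dst.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(src, dst)

    merge_backups(Path("bdfr_output_september"), Path("reddit_archive"))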

I made a script to do this, while also using bdfrtohtml to render the final BDFR output (instead of leaving the output in BDFR's JSON/XML). I have also grouped saved posts by subreddit in

/r/Python
https://redd.it/1fxeglk
Are there any DX standards for building an API in a Python library that works with dataframes?

I'm currently working on a Python library (kawa) that handles and manipulates dataframes. My goal is to design the library so that its "backend" can be swapped for other implementations while the calling code (method calls, etc.) does not need to change. This could make it easier for consumers to switch to other libraries later if they don't want to keep using mine.

I'm looking for some existing standard or conventions used in other similar libraries that I can use as inspiration.

For example, here's how I create and load a datasource:

import uuid

import pandas as pd
import kawa
...

cities_and_countries = pd.DataFrame([
    {'id': 'a', 'country': 'FR', 'city': 'Paris', 'measure': 1},
    {'id': 'b', 'country': 'FR', 'city': 'Lyon', 'measure': 2},
])

unique_id = 'resource{}'.format(uuid.uuid4())
loader = kawa.new_data_loader(df=cities_and_countries, datasource_name=unique_id)
loader.create_datasource(primary_keys='id')
loader.load_data(reset_before_insert=True, create_sheet=True)

and here's how I manipulate (run compute) the created datasource (dataframe):

import pandas as pd
import kawa
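There's no single established standard here, but one common convention is to pin the public surface down as a typing.Protocol, so alternative backends can satisfy the same signatures without callers changing (a hedged sketch reusing the method names from the snippet above):

    from typing import Protocol

    import pandas as pd

    class DataLoader(Protocol):
        def create_datasource(self, primary_keys: str) -> None: ...
        def load_data(self, reset_before_insert: bool, create_sheet: bool) -> None: ...

    def new_data_loader(df: pd.DataFrame, datasource_name: str) -> DataLoader:
        """Factory: resolve the concrete backend here, never at call sites."""
        raise NotImplementedError("wire up kawa's loader, or a swapped-in backend")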


/r/Python
https://redd.it/1fxbf9o
What is the best approach to avoid repetition of a try-except structure when fetching models?

I'm fetching data across multiple models and have the following try-except structure repeated a lot:

try:
    obj = Model.objects.get(...)  # or .filter(...)
except Model.DoesNotExist:
    ...  # handle
except Model.MultipleObjectsReturned:  # Django's actual exception name
    ...  # handle

Is it bad to just have this structure repeated across every model I'm querying, or is there a cleaner way to generalize it without so much repetition?
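One common way to factor this out (a sketch of one option, not the only pattern) is a small helper that normalizes both exceptions into a return value:

    def fetch_one(queryset):
        """Return (obj, error), where error is None, "missing", or "ambiguous"."""
        model = queryset.model
        try:
            return queryset.get(), None
        except model.DoesNotExist:
            return None, "missing"
        except model.MultipleObjectsReturned:
            return None, "ambiguous"

    # usage: obj, err = fetch_one(Book.objects.filter(isbn=isbn))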

/r/django
https://redd.it/1fxk9xr
Monday Daily Thread: Project ideas!

# Weekly Thread: Project Ideas 💡

Welcome to our weekly Project Ideas thread! Whether you're a newbie looking for a first project or an expert seeking a new challenge, this is the place for you.

## How it Works:

1. **Suggest a Project**: Comment your project idea—be it beginner-friendly or advanced.
2. **Build & Share**: If you complete a project, reply to the original comment, share your experience, and attach your source code.
3. **Explore**: Looking for ideas? Check out Al Sweigart's ["The Big Book of Small Python Projects"](https://www.amazon.com/Big-Book-Small-Python-Programming/dp/1718501242) for inspiration.

## Guidelines:

* Clearly state the difficulty level.
* Provide a brief description and, if possible, outline the tech stack.
* Feel free to link to tutorials or resources that might help.

# Example Submissions:

## Project Idea: Chatbot

**Difficulty**: Intermediate

**Tech Stack**: Python, NLP, Flask/FastAPI/Litestar

**Description**: Create a chatbot that can answer FAQs for a website.

**Resources**: [Building a Chatbot with Python](https://www.youtube.com/watch?v=a37BL0stIuM)

## Project Idea: Weather Dashboard

**Difficulty**: Beginner

**Tech Stack**: HTML, CSS, JavaScript, API

**Description**: Build a dashboard that displays real-time weather information using a weather API.

**Resources**: [Weather API Tutorial](https://www.youtube.com/watch?v=9P5MY_2i7K8)

## Project Idea: File Organizer

**Difficulty**: Beginner

**Tech Stack**: Python, File I/O

**Description**: Create a script that organizes files in a directory into sub-folders based on file type.

**Resources**: [Automate the Boring Stuff: Organizing Files](https://automatetheboringstuff.com/2e/chapter9/)

Let's help each other grow. Happy

/r/Python
https://redd.it/1fxukcp
Arakawa: Build data reports in 100% Python (a fork of Datapane)


I forked Datapane (https://github.com/datapane/datapane) because it's no longer maintained, but I think it's very useful for data analysis, so I published a new version under a new name.

https://github.com/ninoseki/arakawa

The functionality is the same as Datapane's, but it works with newer DS/ML libraries such as Pandas v2, NumPy v2, etc.

## What My Project Does

Arakawa makes it simple to build interactive reports in seconds using Python.

Import Arakawa's Python library into your script or notebook and build reports programmatically by wrapping components such as:

- Pandas DataFrames
- Plots from Python visualization libraries such as Bokeh, Altair, Plotly, and Folium
- Markdown and text
- Files, such as images, PDFs, JSON data, etc.

Arakawa reports are interactive and can also contain pages, tabs, dropdowns, and more. Once created, reports can be exported as HTML, shared as standalone files, or embedded into your own application, where your viewers can interact with your data and visualizations.
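Typical usage looks something like the following hedged sketch; it assumes the fork keeps Datapane's classic report API, so check the Arakawa README for the exact names:

    import altair as alt
    import arakawa as ar
    import pandas as pd

    df = pd.DataFrame({"x": list(range(10)), "y": [v * v for v in range(10)]})
    chart = alt.Chart(df).mark_line().encode(x="x", y="y")

    report = ar.Report(
        ar.Text("# Monthly numbers"),  # Markdown block
        ar.Plot(chart),                # Altair figure
        ar.DataTable(df),              # interactive table
    )
    report.save(path="report.html")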

## Target Audience

DS/ML people, or anyone who needs to create a visually rich report.

## Comparison

Possibly Streamlit and Plotly Dash. But a key difference is whether the output is dynamic or static:
Arakawa creates a static HTML report, which makes it suitable for periodic reporting.

/r/Python
https://redd.it/1fxuqh5