How to measure python coroutine context switch time?
I am trying to measure the context-switch time of coroutines and Python threads by having two threads wait on an event that is set by the other thread. The threading context switch takes 3.87 µs, which matches my expectation, since an OS context switch does take a few thousand instructions. The coroutine version's context switch is 14.43 µs, which surprises me: I was expecting a coroutine context switch to be an order of magnitude faster. Is this a Python coroutine issue, or is my program wrong?
Code can be found in this gist.
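Since the gist isn't reproduced here, a minimal sketch of the coroutine half of such a benchmark (my reconstruction, not the poster's exact code): two coroutines ping-pong on a pair of asyncio.Event objects, so each iteration forces two trips through the event loop.

```python
import asyncio
import time

N = 10_000  # iterations; each one costs two context switches

async def ping(ev_a: asyncio.Event, ev_b: asyncio.Event) -> None:
    for _ in range(N):
        await ev_a.wait()   # block until pong signals us
        ev_a.clear()
        ev_b.set()          # hand control back to pong

async def pong(ev_a: asyncio.Event, ev_b: asyncio.Event) -> None:
    for _ in range(N):
        ev_a.set()          # wake ping
        await ev_b.wait()   # block until ping answers
        ev_b.clear()

async def main() -> None:
    ev_a, ev_b = asyncio.Event(), asyncio.Event()
    t0 = time.perf_counter()
    await asyncio.gather(ping(ev_a, ev_b), pong(ev_a, ev_b))
    dt = time.perf_counter() - t0
    print(f"{dt / (2 * N) * 1e6:.2f} µs per switch")

asyncio.run(main())
```

Note that each `await` here goes through `Event.wait`, the loop's ready queue, and a callback wake-up, so this measures far more than a bare generator-style `send()`; that overhead is one plausible reason the asyncio figure comes out above the threading one.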
Rewriting the program in Rust gives more reasonable results: coro: 163 ns, thread: 1,989 ns.
/r/Python
https://redd.it/1fx9tgr
Gist
python context switch time measurement
python context switch time measurement. GitHub Gist: instantly share code, notes, and snippets.
Having trouble inserting new element on table
I'm new to Flask and not used to working with tables in Python. I wanted to ask for a hint on how to solve the following problem; I would really appreciate any help. Thanks in advance.
sqlalchemy.exc.OperationalError: (sqlite3.OperationalError) table posts has no column named user_id
[SQL: INSERT INTO posts (script, content, user_id) VALUES (?, ?, ?)]
[parameters: ('First Post', 'hi', 3)]
Here's the code,
class users(db.Model):
    id = db.Column("id", db.Integer, primary_key=True)
    name = db.Column(db.String(100))
    email = db.Column(db.String(100))

    def __init__(self, name, email):
        self.name = name
        self.email = email

class posts(db.Model):
    id = db.Column("id", db.Integer, primary_key=True)
    script = db.Column(db.String(255), nullable=False)
    content = db.Column(db.String(1000))
    user_id = db.Column(db.Integer, db.ForeignKey('users.id'), nullable=False)

    def __init__(self, script, content, user_id):
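For what it's worth, this error usually means the posts table in the SQLite file was created before the user_id column was added to the model; db.create_all() never alters tables that already exist. You can confirm with a PRAGMA query (in-memory demo below; point connect() at your real database file, whose path isn't given in the post):

```python
import sqlite3

# Simulate a stale schema: the table was created before user_id existed.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE posts (id INTEGER PRIMARY KEY, script TEXT, content TEXT)")

# List the columns SQLite actually has for the posts table.
cols = [row[1] for row in con.execute("PRAGMA table_info(posts)")]
print(cols)  # no user_id -> INSERTs mentioning it fail exactly as above
```

During development the quickest fix is to delete the .db file (or call db.drop_all() then db.create_all()), which destroys existing data; for anything real, use a migration tool such as Flask-Migrate/Alembic.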
/r/flask
https://redd.it/1fwvpgy
I wanna create something fun and useful in Python
So recently, I wrote a script in Python that grabbed my Spotify liked songs, searched for them on YouTube, and downloaded them in seconds. I downloaded over 500 songs in minutes using this simple program, and now I wanna build something more. I have intermediate Python skills and am exploring web scraping (and enjoying it!!).
What fun ideas do you have that I can check out?
/r/Python
https://redd.it/1fxd8g3
Python is awesome! Speed up Pandas point queries by 100x or even 1000x times.
Introducing NanoCube! I'm currently working on another Python library, called CubedPandas, that aims to make working with Pandas more convenient and fun, but it suffers from Pandas' low performance when it comes to filtering data and executing aggregative point queries like the following:
value = df.loc[df['make'].isin(['Audi', 'BMW']) & (df['engine'] == 'hybrid')]['revenue'].sum()
So, can we do better? Yes, multi-dimensional OLAP-databases are a common solution. But, they're quite heavy and often not available for free. I needed something super lightweight, a minimal in-process in-memory OLAP engine that can convert a Pandas DataFrame into a multi-dimensional index for point queries only.
Thanks to the greatness of the Python language and ecosystem I ended up with less than 30 lines of (admittedly ugly) code that can speed up Pandas point queries by factor 10x, 100x or even 1,000x.
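The core trick behind this kind of speedup (a sketch of the general idea, not NanoCube's actual code) is a per-column inverted index: map each dimension value to the set of row positions once, then answer a point query by intersecting row sets and summing only the matching rows instead of scanning the whole frame.

```python
import pandas as pd

def build_index(df: pd.DataFrame, dims: list[str]) -> dict:
    # For each dimension column, map value -> set of row positions.
    return {d: {v: set(pos) for v, pos in df.groupby(d).indices.items()} for d in dims}

def point_sum(df: pd.DataFrame, index: dict, measure: str, **filters) -> float:
    rows = None
    for col, vals in filters.items():
        if not isinstance(vals, (list, tuple)):
            vals = [vals]
        # Union within one column (OR), intersection across columns (AND).
        hits = set().union(*(index[col].get(v, set()) for v in vals))
        rows = hits if rows is None else rows & hits
    return df[measure].to_numpy()[sorted(rows)].sum()

df = pd.DataFrame({
    "make":    ["Audi",   "BMW",    "Audi",   "VW"],
    "engine":  ["hybrid", "hybrid", "diesel", "hybrid"],
    "revenue": [10,       20,       30,       40],
})
idx = build_index(df, ["make", "engine"])
print(point_sum(df, idx, "revenue", make=["Audi", "BMW"], engine="hybrid"))  # 30
```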
I wrapped it into a library called NanoCube, available through pip install nanocube. For source code, further details and some benchmarks please visit https://github.com/Zeutschler/nanocube.
from nanocube import NanoCube
nc = NanoCube(df)
value = nc.get('revenue', make=['Audi', 'BMW'], engine='hybrid')
Target audience: NanoCube is useful for data engineers, analysts and scientists who want to speed up their data processing. Due
/r/Python
https://redd.it/1fxgkj6
GitHub
GitHub - Zeutschler/nanocube: Lightning fast OLAP-style point queries on Pandas DataFrames.
Lightning fast OLAP-style point queries on Pandas DataFrames. - Zeutschler/nanocube
Complete Reddit Backup- A BDFR enhancement: Archive reddit saved posts periodically
What My Project Does
The BDFR tool is an existing, popular, and thoroughly useful way to archive Reddit saved posts offline, supporting JSON and XML formats. But if, like me, you save hundreds of posts a month, move the older saved posts to an offline backup, and then un-save them from your Reddit account, you have to manually merge last month's BDFR output with this month's. You then need to convert the BDFR tool's JSON files to HTML separately in case the original post was taken down.
For instance, on September 1st you have a folder for (https://www.reddit.com/r/soccer/) containing your saved posts from August from the BDFR tool. You then remove August's saved posts from your account to keep your saved-posts list concise. On October 1st you run it again for posts saved in September. Now you need to merge (https://www.reddit.com/r/soccer/)'s posts saved in September with August's, manually copy-pasting and removing duplicates, if any, and then repeat the same process for each subreddit.
I made a script to do this, while also using bdfrtohtml to render the final BDFR output (instead of leaving the output in BDFR's JSON/XML). I have also grouped saved posts by subreddit in
/r/Python
https://redd.it/1fxeglk
GitHub
GitHub - Serene-Arc/bulk-downloader-for-reddit: Downloads and archives content from reddit
Downloads and archives content from reddit. Contribute to Serene-Arc/bulk-downloader-for-reddit development by creating an account on GitHub.
Are there any DX standards for building API in a Python library that works with dataframes?
I'm currently working on a Python library (kawa) that handles and manipulates dataframes. My goal is to design the library so that its "backend" can be swapped with other implementations if needed, while code written against the library (method calls, etc.) does not need changing. This could make it easier for consumers to switch to other libraries later if they don't want to keep using mine.
I'm looking for some existing standard or conventions used in other similar libraries that I can use as inspiration.
For example, here's how I create and load a datasource:
import uuid

import pandas as pd
import kawa
...
cities_and_countries = pd.DataFrame([
    {'id': 'a', 'country': 'FR', 'city': 'Paris', 'measure': 1},
    {'id': 'b', 'country': 'FR', 'city': 'Lyon', 'measure': 2},
])
unique_id = 'resource_{}'.format(uuid.uuid4())
loader = kawa.new_data_loader(df=cities_and_countries, datasource_name=unique_id)
loader.create_datasource(primary_keys='id')
loader.load_data(reset_before_insert=True, create_sheet=True)
and here's how I manipulate (run compute) the created datasource (dataframe):
import pandas as pd
import kawa
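On the conventions question: one pattern other dataframe libraries use for swappable backends (a sketch with illustrative names, not kawa's real API) is to publish the consumer-facing surface as a typing.Protocol, so any backend implementing those methods can be dropped in without changing caller code:

```python
from typing import Protocol
import pandas as pd

class DataLoader(Protocol):
    # The public surface consumers code against. Backends only need to
    # provide these methods; callers never import a concrete class.
    def create_datasource(self, primary_keys: str) -> None: ...
    def load_data(self, reset_before_insert: bool, create_sheet: bool) -> None: ...

class InMemoryLoader:
    """Hypothetical reference backend keeping frames in a dict."""

    def __init__(self, df: pd.DataFrame, datasource_name: str):
        self.df, self.name = df, datasource_name
        self.store: dict[str, pd.DataFrame] = {}

    def create_datasource(self, primary_keys: str) -> None:
        self.df = self.df.set_index(primary_keys, drop=False)

    def load_data(self, reset_before_insert: bool, create_sheet: bool) -> None:
        if reset_before_insert:
            self.store.pop(self.name, None)
        self.store[self.name] = self.df

loader: DataLoader = InMemoryLoader(pd.DataFrame({"id": ["a"], "measure": [1]}), "demo")
loader.create_datasource(primary_keys="id")
loader.load_data(reset_before_insert=True, create_sheet=False)
```

The design point is that the stable part of the API is the method surface, not the implementation.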
/r/Python
https://redd.it/1fxbf9o
What is the best approach to avoid repetition of a try-except structure when fetching models?
I’m fetching data across multiple models and have the following try/except structure repeated a lot:
try:
    Model.objects.get(...)  # or .filter(...)
except Model.DoesNotExist:
    handle…
except Model.MultipleObjectsReturned:
    handle…
Is it bad to just have this structure repeated across every model I’m querying or is there a cleaner way to generalize this without so much repetition?
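One common way to cut the repetition (a sketch; get_or_none is an illustrative name, not a Django built-in) is a small helper that performs the lookup once and maps both exceptions to a single outcome:

```python
# Hypothetical helper: wraps Model.objects.get and turns Django's two
# lookup exceptions into results the caller handles uniformly.
def get_or_none(model, **lookup):
    try:
        return model.objects.get(**lookup)
    except model.DoesNotExist:
        return None
    except model.MultipleObjectsReturned:
        # Choose a deterministic row instead of raising; re-raise here
        # instead if duplicates indicate a data bug in your domain.
        return model.objects.filter(**lookup).first()
```

Because DoesNotExist and MultipleObjectsReturned are attributes of each model class, the one helper works for every model you query.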
/r/django
https://redd.it/1fxk9xr
Monday Daily Thread: Project ideas!
# Weekly Thread: Project Ideas 💡
Welcome to our weekly Project Ideas thread! Whether you're a newbie looking for a first project or an expert seeking a new challenge, this is the place for you.
## How it Works:
1. **Suggest a Project**: Comment your project idea—be it beginner-friendly or advanced.
2. **Build & Share**: If you complete a project, reply to the original comment, share your experience, and attach your source code.
3. **Explore**: Looking for ideas? Check out Al Sweigart's ["The Big Book of Small Python Projects"](https://www.amazon.com/Big-Book-Small-Python-Programming/dp/1718501242) for inspiration.
## Guidelines:
* Clearly state the difficulty level.
* Provide a brief description and, if possible, outline the tech stack.
* Feel free to link to tutorials or resources that might help.
# Example Submissions:
## Project Idea: Chatbot
**Difficulty**: Intermediate
**Tech Stack**: Python, NLP, Flask/FastAPI/Litestar
**Description**: Create a chatbot that can answer FAQs for a website.
**Resources**: [Building a Chatbot with Python](https://www.youtube.com/watch?v=a37BL0stIuM)
## Project Idea: Weather Dashboard
**Difficulty**: Beginner
**Tech Stack**: HTML, CSS, JavaScript, API
**Description**: Build a dashboard that displays real-time weather information using a weather API.
**Resources**: [Weather API Tutorial](https://www.youtube.com/watch?v=9P5MY_2i7K8)
## Project Idea: File Organizer
**Difficulty**: Beginner
**Tech Stack**: Python, File I/O
**Description**: Create a script that organizes files in a directory into sub-folders based on file type.
**Resources**: [Automate the Boring Stuff: Organizing Files](https://automatetheboringstuff.com/2e/chapter9/)
Let's help each other grow. Happy
/r/Python
https://redd.it/1fxukcp
YouTube
Build & Integrate your own custom chatbot to a website (Python & JavaScript)
In this fun project you learn how to build a custom chatbot in Python and then integrate this to a website using Flask and JavaScript.
Starter Files: https://github.com/patrickloeber/chatbot-deployment
Get my Free NumPy Handbook: https://www.python-engi…
Arakawa: Build data reports in 100% Python (a fork of Datapane)
I forked Datapane (https://github.com/datapane/datapane) because it's not maintained but I think it's very useful for data analysis and published a new version under a new name.
https://github.com/ninoseki/arakawa
The functionality is the same as Datapane's, but it works with newer DS/ML libraries such as Pandas v2, NumPy v2, etc.
## What My Project Does
Arakawa makes it simple to build interactive reports in seconds using Python.
Import Arakawa's Python library into your script or notebook and build reports programmatically by wrapping components such as:
- Pandas DataFrames
- Plots from Python visualization libraries such as Bokeh, Altair, Plotly, and Folium
- Markdown and text
- Files, such as images, PDFs, JSON data, etc.
Arakawa reports are interactive and can also contain pages, tabs, drop downs, and more. Once created, reports can be exported as HTML, shared as standalone files, or embedded into your own application, where your viewers can interact with your data and visualizations.
## Target Audience
DS/ML people, or anyone who needs to create a visually rich report.
## Comparison
Possibly Streamlit and Plotly Dash. But a key difference is whether the output is dynamic or static:
Arakawa creates a static HTML report, making it suitable for periodic reporting.
/r/Python
https://redd.it/1fxuqh5
GitHub
GitHub - datapane/datapane: Build and share data reports in 100% Python
Build and share data reports in 100% Python. Contribute to datapane/datapane development by creating an account on GitHub.
Helios: a light-weight system for training AI networks using PyTorch
# What My Project Does
Helios is a light-weight package for training ML networks built on top of PyTorch. I initially developed this as a way to abstract the boiler-plate code that I kept copying around whenever I started a new project, but it's evolved to do much more than that. The main features are:
* It natively supports training by number of epochs, number of iterations, or until some condition is met.
* Ensures (as far as possible) to maintain reproducibility whenever training runs are stopped and restarted.
* An extensive registry system that enables writing generic training code for testing multiple networks with the same codebase. It also includes a way to automatically register all classes into the corresponding registries without having to manually import them.
* Native support for both single and multi-GPU training. Helios will automatically detect and use all GPUs available, or only those specified by the user. In addition, Helios supports training through torchrun.
* Automatic support for gradient accumulation when training by iteration count.
# Target Audience
* Developers who want a simpler alternative to the big training packages but still want to abstract portions of the training code.
* Developers who need to test multiple
/r/Python
https://redd.it/1fxuqid
Bleak and Kivy: can somebody share a working example for Android?
Hi.
I tried the Bleak example to run a Kivy app with Bluetooth support on Android.
https://github.com/hbldh/bleak/tree/develop/examples/kivy
But I cannot get it to work.
Can somebody please share code related to that, i.e. Bleak, Kivy, Android?
Thanks!
/r/Python
https://redd.it/1fxx23a
GitHub
bleak/examples/kivy at develop · hbldh/bleak
A cross platform Bluetooth Low Energy Client for Python using asyncio - hbldh/bleak
Built an AI Engineer to Help with Django – Try it Out!
Hey r/Django,
I’ve been building an AI engineer designed to help with Django coding. It’s connected to the codebase, so you can ask technical questions and get help with issues. It’s not just an LLM, but an agent that thinks through your questions and steps to resolve them.
As a fellow Django dev, I know how frustrating it is to sift through documentation, log files and forums to find answers. I trained it on the Django open-source repo, so whether you’re exploring features, checking issues, or troubleshooting your code, Remedee is ready to go.
Try it here: chat.remedee.ai
I’d love your feedback – let me know what you think!
/r/django
https://redd.it/1fy3ss4
chat.remedee.ai
Remedee AI
Chat with Remedee AI
[D] Simple Questions Thread
Please post your questions here instead of creating a new thread. Encourage others who create new posts for questions to post here instead!
The thread will stay alive until the next one, so keep posting after the date in the script.
Thanks to everyone for answering questions in the previous thread!
/r/MachineLearning
https://redd.it/1fxif7x
How to find remote world wide jobs?
I'm currently working at a small company in my hometown. Python, and especially Django, are not common here. Where I work, modern technologies like microservices architecture, test-driven development (TDD), or cutting-edge tools aren't used. I'm eager to work on high-load projects, and I'm also looking for opportunities with a higher salary. Are there any platforms besides Upwork where I can find worldwide remote jobs or roles that offer relocation?
/r/django
https://redd.it/1fy7jhf
Python 3.13 released
https://www.python.org/downloads/release/python-3130/
> This is the stable release of Python 3.13.0
>
> Python 3.13.0 is the newest major release of the Python programming language, and it contains many new features and optimizations compared to Python 3.12. (Compared to the last release candidate, 3.13.0rc3, 3.13.0 contains two small bug fixes and some documentation and testing changes.)
>
> Major new features of the 3.13 series, compared to 3.12
>
> Some of the major new features and changes in Python 3.13 are:
>
> New features
>
> - A new and improved interactive interpreter, based on PyPy's, featuring multi-line editing and color support, as well as colorized exception tracebacks.
> - An experimental free-threaded build mode, which disables the Global Interpreter Lock, allowing threads to run more concurrently. The build mode is available as an experimental feature in the Windows and macOS installers as well.
> - A preliminary, experimental JIT, providing the ground work for significant performance improvements.
> - The locals() builtin function (and its C equivalent) now has well-defined semantics when mutating the returned mapping, which allows debuggers to operate more consistently.
> - A modified version of mimalloc is now included, optional but
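A quick way to see which build you are running (sys._is_gil_enabled() only exists on 3.13's free-threaded builds, hence the hasattr guard):

```python
import sys

# Prints the interpreter version; on a free-threaded ("t") build of
# Python 3.13 it also reports whether the GIL is currently enabled.
print(sys.version_info[:3])
if hasattr(sys, "_is_gil_enabled"):
    print("GIL enabled:", sys._is_gil_enabled())
else:
    print("standard (GIL) build")
```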
/r/Python
https://redd.it/1fybncq
Python.org
Python Release Python 3.13.0
The official home of the Python Programming Language
Tuesday Daily Thread: Advanced questions
# Weekly Wednesday Thread: Advanced Questions 🐍
Dive deep into Python with our Advanced Questions thread! This space is reserved for questions about more advanced Python topics, frameworks, and best practices.
## How it Works:
1. **Ask Away**: Post your advanced Python questions here.
2. **Expert Insights**: Get answers from experienced developers.
3. **Resource Pool**: Share or discover tutorials, articles, and tips.
## Guidelines:
* This thread is for **advanced questions only**. Beginner questions are welcome in our [Daily Beginner Thread](#daily-beginner-thread-link) every Thursday.
* Questions that are not advanced may be removed and redirected to the appropriate thread.
## Recommended Resources:
* If you don't receive a response, consider exploring r/LearnPython or join the [Python Discord Server](https://discord.gg/python) for quicker assistance.
## Example Questions:
1. **How can you implement a custom memory allocator in Python?**
2. **What are the best practices for optimizing Cython code for heavy numerical computations?**
3. **How do you set up a multi-threaded architecture using Python's Global Interpreter Lock (GIL)?**
4. **Can you explain the intricacies of metaclasses and how they influence object-oriented design in Python?**
5. **How would you go about implementing a distributed task queue using Celery and RabbitMQ?**
6. **What are some advanced use-cases for Python's decorators?**
7. **How can you achieve real-time data streaming in Python with WebSockets?**
8. **What are the
/r/Python
https://redd.it/1fymjo9
Just Released Version 0.5.0 of Django Action Triggers!
First off, a huge thank you to everyone who provided feedback after the release of version 0.1.0! I've taken your input to heart and have been hard at work iterating and improving this tool. I’m excited to announce the release of version 0.5.0 of django-action-triggers.
There’s still more to come in terms of features and addressing suggestions, but here’s an overview of the current progress.
# What is Django Action Triggers
Django Action Triggers is a Django library that lets you trigger specific actions based on database events, detected via Django Signals. With this library, you can configure actions that run asynchronously when certain triggers (e.g., a model save) are detected.
For example, you could set up a trigger that hits a webhook and sends a message to AWS SQS whenever a new sale record is saved.
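The underlying signal-and-receiver pattern the library builds on can be sketched with a stdlib-only dispatcher (the names here are hypothetical illustrations, not django-action-triggers' actual API):

```python
# Minimal stdlib-only sketch of the signal/trigger pattern: receivers
# register for a "post_save"-style event and run when it is fired.
# All names here are illustrative, not the library's real API.
from collections import defaultdict
from typing import Callable

_receivers: dict[str, list[Callable]] = defaultdict(list)

def connect(signal: str, receiver: Callable) -> None:
    """Register a receiver to run whenever `signal` fires."""
    _receivers[signal].append(receiver)

def send(signal: str, **payload) -> None:
    """Fire `signal`, invoking every registered receiver with the payload."""
    for receiver in _receivers[signal]:
        receiver(**payload)

# Hypothetical action: forward every saved Sale record to a webhook/queue.
sent = []
connect("sale.post_save", lambda instance, **kw: sent.append(instance))
send("sale.post_save", instance={"id": 1, "amount": 9.99})
print(sent)  # [{'id': 1, 'amount': 9.99}]
```

In the real library the `connect` step corresponds to Django's `post_save` signal machinery, and the receiver dispatches the configured action (webhook, queue message, etc.) asynchronously.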
# Supported Integrations
Here’s an overview of what integrations are currently supported:
* Webhooks
* RabbitMQ
* Kafka
* Redis
* AWS SQS (Simple Queue Service)
* AWS SNS (Simple Notification Service)
* AWS Lambda (new in version 0.5.0)
* GCP Pub/Sub (new in version 0.5.0)
# Comparison
The closest alternative I've come across is Debezium. Debezium allows streaming changes from databases. This project is different and is more suited for people who want a Django integration in the form of a library. Debezium on
/r/django
https://redd.it/1fyn1vo
Flask Ecomm project
Hi all, I made this e-commerce project using Flask! I could use some help listing features I could add, as well as some more general feedback. Also, if anyone wants to look at or use the repo, please DM me and I'll share the link once I upload it to GitHub; just make sure to leave a star lol ;)
https://reddit.com/link/1fy34of/video/6l1piixvsatd1/player
/r/flask
https://redd.it/1fy34of
Just Released Version 0.5.0 of Django Action Triggers!
/r/djangolearning
https://redd.it/1fyn2vt