N The 2024 Nobel Prize in Chemistry goes in part to the people behind Google DeepMind's AlphaFold: one half to David Baker "for computational protein design" and the other half jointly to Demis Hassabis and John M. Jumper "for protein structure prediction".
Announcement: https://twitter.com/NobelPrize/status/1843951197960777760
/r/MachineLearning
https://redd.it/1fznxyr
is there a way to make pyright recognize related_name fields?
```python
from django.db import models

class A(models.Model):
    id = models.IntegerField(primary_key=True)

class B(models.Model):
    id = models.IntegerField(primary_key=True)
    fkA = models.ForeignKey(A, on_delete=models.CASCADE, related_name="fkB")

a = A(id=1)
b = B(id=2, fkA=a)
a.fkB  # Here Pyright says it cannot access attribute "fkB" for class "A"
```
Take the code snippet above, for example. Is there a way to make Pyright know that the fkB field on class A exists?
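One common workaround (a sketch, not an official Pyright feature) is to declare the reverse accessor on A purely for the type checker. It assumes django-stubs is installed, which makes `Manager` generic; the `TYPE_CHECKING` guard keeps the declaration out of the runtime class body, where Django adds the real attribute:

```python
from __future__ import annotations

from typing import TYPE_CHECKING

from django.db import models

class A(models.Model):
    id = models.IntegerField(primary_key=True)

    if TYPE_CHECKING:
        # Seen only by the type checker; Django creates the real
        # attribute at runtime from related_name="fkB" on B.fkA.
        fkB: models.Manager[B]

class B(models.Model):
    id = models.IntegerField(primary_key=True)
    fkA = models.ForeignKey(A, on_delete=models.CASCADE, related_name="fkB")
```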
/r/django
https://redd.it/1fzlr75
Do multiple requests create multiple instances of a middleware object?
For example, let's say you have some middleware that does something with process_view(), to run some code before a view is rendered. If you have your Django app deployed with Gunicorn and Nginx, does every client request get its own instance of that middleware class? Or do they all share the same instance in memory wherever your code is deployed? (Or does each of Gunicorn's workers create its own instance?)
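For reference, a minimal middleware sketch (names illustrative): Django instantiates the class once per worker process at startup, so `__init__` runs once per process, while `__call__` and `process_view` run per request; separate Gunicorn workers each build their own instance.

```python
class ExampleMiddleware:
    def __init__(self, get_response):
        # Runs once per process, when the server starts.
        self.get_response = get_response

    def __call__(self, request):
        # Runs for every request handled by this process.
        return self.get_response(request)

    def process_view(self, request, view_func, view_args, view_kwargs):
        # Runs just before Django calls the view; return None to continue.
        return None
```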
/r/djangolearning
https://redd.it/1fzswzm
What personal challenges have you solved using Python? Any interesting projects or automations?
Hey everyone! I'm curious—what have you used Python for in your daily life? Are there any small, repetitive tasks you've automated that made things easier or saved you time? I'd love to hear about it!
I stumbled upon an old article on this topic a while ago, and I think it's worth revisiting.
/r/Python
https://redd.it/1fzupwm
N Jurgen Schmidhuber on 2024 Physics Nobel Prize
The #NobelPrizeinPhysics2024 for Hopfield & Hinton rewards plagiarism and incorrect attribution in computer science. It's mostly about Amari's "Hopfield network" and the "Boltzmann Machine."
1. The Lenz-Ising recurrent architecture with neuron-like elements was published in 1925. In 1972, Shun-Ichi Amari made it adaptive such that it could learn to associate input patterns with output patterns by changing its connection weights. However, Amari is only briefly cited in the "Scientific Background to the Nobel Prize in Physics 2024." Unfortunately, Amari's net was later called the "Hopfield network." Hopfield republished it 10 years later, without citing Amari, not even in later papers.
2. The related Boltzmann Machine paper by Ackley, Hinton, and Sejnowski (1985) was about learning internal representations in hidden units of neural networks (NNs) [S20]. It didn't cite the first working algorithm for deep learning of internal representations by Ivakhnenko & Lapa. It didn't cite Amari's separate work (1967-68) on learning internal representations in deep NNs end-to-end through stochastic gradient descent (SGD). Not even the later surveys by the authors nor the "Scientific Background to the Nobel Prize in Physics 2024" mention these origins of deep learning. (BM also did not cite relevant prior work by Sherrington & Kirkpatrick &
/r/MachineLearning
https://redd.it/1fzw5b1
PEP 760 – No More Bare Excepts
This PEP proposes disallowing bare except: clauses in Python's exception-handling syntax.
- https://peps.python.org/pep-0760/
- https://discuss.python.org/t/pep-760-no-more-bare-excepts/
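A short illustration of what the proposal targets (a sketch based on the PEP's summary above, not its full text):

```python
def risky() -> None:
    raise ValueError("boom")

# Currently legal, but a syntax error under PEP 760 as proposed:
try:
    risky()
except:                # bare except: catches everything, even KeyboardInterrupt
    pass

# The explicit spellings the proposal steers you toward:
try:
    risky()
except Exception:      # ordinary errors
    pass

try:
    risky()
except BaseException:  # truly everything, stated explicitly
    pass
```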
/r/Python
https://redd.it/1fzxwj3
Speeding up unit tests in CI/CD
I have a large Django project that currently takes ca. 30 minutes to run all the unit tests serially in our CI/CD pipeline, and we want to speed this up as it's blocking our releases.
I have a Ruby background and am new to Python, so I'm investigating the options available in the Python ecosystem. So far I've found:
- [pytest-xdist](https://pypi.org/project/pytest-xdist/)
- pytest-split
- [pytest-parallel](https://pypi.org/project/pytest-parallel/)
- pytest-run-parallel
- [tox](https://tox.wiki/en/latest/index.html) parallel (not exactly what I need, as I only have one environment)
- CircleCI's test splitting - I've used this for Ruby, and it didn't do so well when some classes had a lot of tests in them
I'd love to hear your experiences of these tools and if you have any other suggestions.
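For context, a minimal sketch of the most common of these, pytest-xdist, driven from Python (equivalent to `pytest -n auto` on the command line; assumes pytest and pytest-xdist are installed):

```python
import sys

import pytest  # pytest-xdist registers the -n/--dist options when installed

if __name__ == "__main__":
    # "-n auto" spawns one worker per CPU core; "--dist loadscope" keeps
    # tests from the same module/class on the same worker, which helps
    # when class-level fixtures are expensive to set up.
    sys.exit(pytest.main(["-n", "auto", "--dist", "loadscope"]))
```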
/r/Python
https://redd.it/1fzreee
How should I get started with Django?
I recently started to work with Django, but I'm completely, utterly humbled and devastated at the same time whenever I try to add a new function with an API call into my React project. I really don't understand the magic behind it and usually need to get help from colleagues.
The Django docs are (I'm sorry) terrible. The more I read, the more questions arise.
Are there any sources that can give me better insight into how to work with APIs in Django, and maybe APIs in general?
I'd appreciate any sources (except the Django docs).
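For what it's worth, the smallest possible "API in Django" needs no extra packages (many projects reach for Django REST Framework instead); a minimal sketch, with illustrative names:

```python
# views.py
from django.http import JsonResponse

def hello_api(request):
    # A React frontend can fetch("/api/hello/") and read this JSON.
    return JsonResponse({"message": "hello from Django"})

# urls.py
from django.urls import path

urlpatterns = [
    path("api/hello/", hello_api),
]
```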
/r/djangolearning
https://redd.it/1g02p1b
in 2024 learn flask or django?
hi everyone, i was wondering which of these frameworks is better and worth learning to make money? flask? django? or both?
/r/flask
https://redd.it/1fzsrct
Static files serving - S3 bucket alternative
Hello guys. I want to build an app with Angular on the frontend and Django on the backend. The client must be able to click a link and download a PDF, but the user must log in to enter the app. How can I serve these PDFs? A friend told me about an S3 bucket, but is there an open-source alternative? Is there a better solution? And how do I integrate it with my Django authentication?
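One sketch of the Django side, assuming the PDFs live on disk somewhere the app can read (MinIO is a common self-hosted, S3-compatible alternative if object storage is still wanted); the path and view names are illustrative:

```python
from pathlib import Path

from django.contrib.auth.decorators import login_required
from django.http import FileResponse, Http404

DOCS_DIR = Path("/srv/protected-docs")  # assumed location, outside STATIC_ROOT

@login_required  # integrates with Django auth: anonymous users are redirected
def download_pdf(request, name):
    path = (DOCS_DIR / name).resolve()
    # Refuse anything that escapes DOCS_DIR (e.g. "../../etc/passwd").
    if DOCS_DIR not in path.parents or not path.is_file():
        raise Http404
    return FileResponse(path.open("rb"), as_attachment=True, filename=path.name)
```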
/r/django
https://redd.it/1fzuosv
Thursday Daily Thread: Python Careers, Courses, and Furthering Education!
# Weekly Thread: Professional Use, Jobs, and Education 🏢
Welcome to this week's discussion on Python in the professional world! This is your spot to talk about job hunting, career growth, and educational resources in Python. Please note, this thread is not for recruitment.
---
## How it Works:
1. Career Talk: Discuss using Python in your job, or the job market for Python roles.
2. Education Q&A: Ask or answer questions about Python courses, certifications, and educational resources.
3. Workplace Chat: Share your experiences, challenges, or success stories about using Python professionally.
---
## Guidelines:
- This thread is not for recruitment. For job postings, please see r/PythonJobs or the recruitment thread in the sidebar.
- Keep discussions relevant to Python in the professional and educational context.
---
## Example Topics:
1. Career Paths: What kinds of roles are out there for Python developers?
2. Certifications: Are Python certifications worth it?
3. Course Recommendations: Any good advanced Python courses to recommend?
4. Workplace Tools: What Python libraries are indispensable in your professional work?
5. Interview Tips: What types of Python questions are commonly asked in interviews?
---
Let's help each other grow in our careers and education. Happy discussing! 🌟
/r/Python
https://redd.it/1g05xw8
D Why is there so little statistical analysis in ML research?
Why is it so common in ML research not to do any statistical test to verify that results are actually significant? Most of the time a single outcome is presented instead of doing multiple runs and performing something like a t-test or a Mann-Whitney U test. Drawing conclusions from a single sample would be impossible in other disciplines, like psychology or medicine, so why is this not considered a problem in ML research?
Also, can someone recommend a book on exactly this: statistical tests in the context of ML?
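As a concrete sketch of what the post is asking for (made-up numbers, SciPy's standard tests):

```python
from scipy import stats

# Accuracy over five seeds for two models (illustrative numbers only).
model_a = [0.812, 0.807, 0.821, 0.815, 0.809]
model_b = [0.798, 0.804, 0.801, 0.795, 0.803]

t_stat, p_t = stats.ttest_ind(model_a, model_b, equal_var=False)  # Welch's t-test
u_stat, p_u = stats.mannwhitneyu(model_a, model_b)

print(f"Welch's t-test: p = {p_t:.4f}")
print(f"Mann-Whitney U: p = {p_u:.4f}")
```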
/r/MachineLearning
https://redd.it/1fznaa9
ParScrape v0.4.6 Released
# What My Project Does
Scrapes data from sites and uses AI to extract structured data from it.
# What's New
- Added more AI providers
- Updated provider pricing data
- Minor code cleanup and bug fixes
- Better cleaning of HTML
# Key Features
- Uses Playwright / Selenium to bypass most simple bot checks.
- Uses AI to extract data from a page and save it in various formats such as CSV, XLSX, JSON, and Markdown.
- Has rich console output to display data right in your terminal.
# GitHub and PyPI
PAR Scrape is under active development and getting new features all the time.
Check out the project on GitHub for full documentation, installation instructions, and to contribute: [https://github.com/paulrobello/par_scrape](https://github.com/paulrobello/par_scrape)
PyPI: https://pypi.org/project/par_scrape/
# Comparison
I have seen many command-line and web applications for scraping, but none as simple, flexible, and fast as ParScrape.
# Target Audience
AI enthusiasts and data-hungry hobbyists
/r/Python
https://redd.it/1g06arb
What to use instead of callbacks?
I have a lot of experience with Python, but I've also worked with JavaScript and Go, and in some cases it just makes sense to allow the caller to pass a callback (or, more likely, a closure), for example to notify the caller of an event or to let it make a decision. I'm considering this in the context of writing library code.
Python lambdas are limited, and writing named functions is clumsier than anonymous functions in other languages. Is there something less clumsy, more Pythonic?
In my example, there's a long-ish multi-stage process, and I'd like to give the caller an opportunity to validate or modify the result of each step in a simple way. I've considered class inheritance and mixins, but that seems like too much setup for just a callback. Is there some Python pattern I'm missing?
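One idiomatic shape for the multi-stage case (a sketch with illustrative names, not a prescription): accept an optional per-stage hook as a plain callable, so trivial callers pass a lambda and bigger ones pass a named function or a bound method:

```python
from typing import Callable, Optional

# (stage_name, result) -> replacement result, or None to keep it as-is
Hook = Callable[[str, dict], Optional[dict]]

def run_pipeline(data: dict, on_stage: Optional[Hook] = None) -> dict:
    for stage in ("parse", "validate", "enrich"):  # stand-ins for real steps
        result = {**data, stage: True}             # stand-in for real work
        if on_stage is not None:
            override = on_stage(stage, result)
            if override is not None:               # caller may veto or modify
                result = override
        data = result
    return data

# A lambda suffices for small tweaks; anything longer gets a named function.
out = run_pipeline({}, on_stage=lambda stage, r: {**r, "checked": True})
```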
/r/Python
https://redd.it/1g02dtg
[blog post] Hugging Face + Dask for parallel data processing and model inference
Wanted to share a blog post on using Hugging Face with Dask to process the FineWeb dataset. The example goes through:
* Reading directly from Hugging Face with Dask, e.g. `df = dask.dataframe.read_parquet("hf://...")`
* Using a Hugging Face Language Model to classify the educational level of the text.
* Filtering the highly educational web pages as a new dataset and writing in parallel directly from Dask to Hugging Face storage.
The example goes through processing a small subset of the FineWeb dataset with pandas and then scaling out to multiple GPUs with Dask.
Blog post: [https://huggingface.co/blog/dask-scaling](https://huggingface.co/blog/dask-scaling)
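A compressed sketch of that workflow (repo path, column name, and destination are illustrative; `hf://` paths are resolved via the huggingface_hub fsspec integration):

```python
import dask.dataframe as dd

# Read a FineWeb subset straight from the Hugging Face Hub (path illustrative).
df = dd.read_parquet("hf://datasets/HuggingFaceFW/fineweb/data/CC-MAIN-2024-10/*.parquet")

# Keep only pages a classifier scored as highly educational
# ("edu_score" is a hypothetical column produced by the model-inference step).
educational = df[df["edu_score"] >= 3]

# Write the filtered dataset back to the Hub in parallel (destination illustrative).
educational.to_parquet("hf://datasets/<your-user>/fineweb-edu-subset")
```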
/r/Python
https://redd.it/1fzyh7x
Considering moving from Flask-SQLAlchemy to Flask and plain SQLAlchemy: not sure how to start, or if it's useful
Hi all,
I wrote a free language-learning tool called Lute. I'm happy with how the project's been going, I and a bunch of other people use it.
I wrote Lute using Flask, and overall it's been very good. Recently I've been wondering if I should have tried to avoid Flask-SQLAlchemy -- I was in over my head when I started, and did the best I could.
My reasons for wondering:
- When I'm running service- or domain-level tests, e.g., I'm connecting to the db but not using Flask; it's just Python code creating objects, calling methods, etc. The tests all need an app context to execute, but that doesn't seem to be adding anything.
- Simple data-crunching scripts have to call the app initializer and again push an app context, when really all I need is the service layer and domain objects. Unfortunately, a lot of the code has stuff like "from lute.db import db" and "db.session.query() ...", so db usage is scattered around the code.
Today I hacked at changing it to plain SQLAlchemy, but it ended up spiralling out of my control, so I put that on ice to think a bit more.
These
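For reference, a sketch of the "plain SQLAlchemy" shape being weighed here: the engine and sessions live in ordinary module code, so tests and scripts can use them without pushing a Flask app context (names and URL illustrative, SQLAlchemy 2.0 style):

```python
# db.py
from sqlalchemy import create_engine, text
from sqlalchemy.orm import DeclarativeBase, sessionmaker

class Base(DeclarativeBase):
    pass

engine = create_engine("sqlite:///:memory:")  # illustrative URL
SessionLocal = sessionmaker(bind=engine)

# In a test or a data-crunching script -- no Flask app context needed:
with SessionLocal() as session:
    session.execute(text("SELECT 1"))
```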
/r/flask
https://redd.it/1g0ajo0
Nginx 404'ing all images.
I'm not sure if this should be in the nginx or Django Reddit, but I'll try here first. My blog is running on Docker. Initially, all images in the static files from the first set of articles I created while coding the blog were accessible to nginx. However, when I tried adding articles from the admin panel after deployment, the new images returned a 404 error. I tried debugging by checking my code and realized I didn't include a path for the media folder in the `settings.py` file. After adding that line and rebuilding the container... well, now the previously accessible images are returning a 404. I think my nginx server might not be configured correctly. *I've entered the container and verified that files are present*
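For context, the settings.py piece the post refers to is typically just this (paths illustrative; nginx must also have a matching location block pointing at the same volume, and admin uploads land in MEDIA_ROOT at runtime, not in the static files collected at build time):

```python
# settings.py (sketch; BASE_DIR as generated by startproject)
STATIC_URL = "/static/"
STATIC_ROOT = BASE_DIR / "staticfiles"   # collectstatic target, baked at build

MEDIA_URL = "/media/"
MEDIA_ROOT = BASE_DIR / "media"          # admin uploads land here at runtime
```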
Dockerfile:
```dockerfile
# Use the official Python image from the Docker Hub
FROM python:3.11

# Set environment variables
ENV PYTHONDONTWRITEBYTECODE=1
ENV PYTHONUNBUFFERED=1

# Set the working directory
WORKDIR /app

# Copy the requirements file into the container
COPY requirements.txt /app/

# Install the dependencies
RUN pip install --upgrade pip && pip install -r requirements.txt

# Copy the entire project into the container
COPY . /app/

# Collect static files
RUN python manage.py collectstatic --noinput

EXPOSE 1617

# Run the Gunicorn server
CMD ["gunicorn", "redacted.wsgi:application", "--bind", "0.0.0.0:1617"]
```
docker-compose:
```yaml
version: '3'
services:
  web:
    build: .
    command: gunicorn --workers
```
/r/django
https://redd.it/1g0cvxr
How do I make a number column that automatically increases, but only within a group of three columns?
The model:
```python
from django.db import models

class Bunny(models.Model):
    l_ear = models.CharField(max_length=10)
    r_ear = models.CharField(max_length=10)
    sex = models.CharField(max_length=1, choices=[('M', 'Male'), ('F', 'Female')])
    tattooed_on = models.DateTimeField(null=True)
    CID = models.ForeignKey(Club, on_delete=models.PROTECT)
    UID_owner = models.ForeignKey("authservice.User", on_delete=models.PROTECT)
    TID = models.ForeignKey("coverslip.Throw", on_delete=models.PROTECT, null=True, blank=True)
    RID = models.ForeignKey("bunnies.Race", on_delete=models.PROTECT)
    COLID = models.ForeignKey("bunnies.Color", on_delete=models.PROTECT, null=True)
    UID_tattoo_master = models.ForeignKey("authservice.User", on_delete=models.PROTECT, related_name='tattooed_bunnies')
    serial_number = models.PositiveIntegerField(null=True, blank=True)

    def __str__(self):
        return self.l_ear + " / " + self.r_ear

    class Meta:
        pass
```
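One common approach (a sketch, assuming the group is the (CID, RID, COLID) trio; swap in whichever three columns actually define the group): assign serial_number on first save as the group's current maximum plus one. Note it is racy under concurrent writes, so guard it with a lock or a unique constraint plus retry if that matters:

```python
from django.db.models import Max

class Bunny(models.Model):
    # ... fields as above ...

    def save(self, *args, **kwargs):
        if self.serial_number is None:
            current = (
                Bunny.objects
                .filter(CID=self.CID, RID=self.RID, COLID=self.COLID)
                .aggregate(Max("serial_number"))["serial_number__max"]
            )
            self.serial_number = (current or 0) + 1
        super().save(*args, **kwargs)
```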
/r/django
https://redd.it/1g0gjp8
PEP 735 Dependency Groups is accepted
https://peps.python.org/pep-0735/
https://discuss.python.org/t/pep-735-dependency-groups-in-pyproject-toml/39233/312
> This PEP specifies a mechanism for storing package requirements in pyproject.toml files such that they are not included in any built distribution of the project.
>
> This is suitable for creating named groups of dependencies, similar to requirements.txt files, which launchers, IDEs, and other tools can find and identify by name.
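The groups look like this in pyproject.toml (example adapted from the PEP; group names are illustrative):

```toml
[dependency-groups]
test = ["pytest", "coverage"]
docs = ["sphinx"]
all = [{include-group = "test"}, {include-group = "docs"}]
```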
/r/Python
https://redd.it/1g0iqfr
What I Learned from Making the Python Back End for My New Webapp
I learned a lot from making this, and think a lot of it would be interesting to others making web apps in Python:
https://youtubetranscriptoptimizer.com/blog/02_what_i_learned_making_the_python_backend_for_yto
/r/Python
https://redd.it/1g0jybv
R nGPT: Normalized Transformer with Representation Learning on the Hypersphere
Paper: https://arxiv.org/pdf/2410.01131
Abstract:
>We propose a novel neural network architecture, the normalized Transformer (nGPT) with representation learning on the hypersphere. In nGPT, all vectors forming the embeddings, MLP, attention matrices and hidden states are unit norm normalized. The input stream of tokens travels on the surface of a hypersphere, with each layer contributing a displacement towards the target output predictions. These displacements are defined by the MLP and attention blocks, whose vector components also reside on the same hypersphere. Experiments show that nGPT learns much faster, reducing the number of training steps required to achieve the same accuracy by a factor of 4 to 20, depending on the sequence length.
Highlights:
>Our key contributions are as follows:
>1. Optimization of network parameters on the hypersphere. We propose to normalize all vectors forming the embedding dimensions of network matrices to lie on a unit norm hypersphere. This allows us to view matrix-vector multiplications as dot products representing cosine similarities bounded in [-1,1]. The normalization renders weight decay unnecessary.
>2. Normalized Transformer as a variable-metric optimizer on the hypersphere. The normalized Transformer itself performs a multi-step optimization (two steps per layer) on a hypersphere, where each step of the attention
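A toy sketch of the core operation the abstract describes (just the renormalization, not the paper's full method):

```python
import torch

def to_hypersphere(w: torch.Tensor, dim: int = -1, eps: float = 1e-8) -> torch.Tensor:
    # Rescale each vector along `dim` to unit L2 norm.
    return w / (w.norm(dim=dim, keepdim=True) + eps)

x = torch.randn(4, 16)       # e.g. 4 embedding vectors of width 16
x = to_hypersphere(x)
print(x.norm(dim=-1))        # ~1.0 for every row; dot products are now cosines
```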
/r/MachineLearning
https://redd.it/1g0lnij