real-time preview. Drag a slider, see it instantly. The way it should be.

**The shader pipeline:**

```
// Simplified version of the color grading shader
uniform float u_exposure;
uniform float u_contrast;
uniform float u_saturation;
uniform mat3 u_perspectiveMatrix;

void main() {
    vec4 color = texture(u_texture, transformedCoord);

    // Exposure (stops)
    color.rgb *= pow(2.0, u_exposure);

    // Contrast (pivot at 0.5)
    color.rgb = (color.rgb - 0.5) * u_contrast + 0.5;

    // Saturation (luminance-preserving)
    float luma = dot(color.rgb, vec3(0.299, 0.587, 0.114));
    color.rgb = mix(vec3(luma), color.rgb, u_saturation);

    gl_FragColor = color;
}
```
All calculations happen on the GPU in parallel — millions of pixels processed simultaneously. The CPU just uploads uniforms and lets the GPU do what it's designed for.
# Non-destructive editing with real-time preview
The edit mode is fully non-destructive:
* **Light adjustments:** Brilliance, Exposure, Highlights, Shadows, Brightness, Contrast, Black Point
* **Color grading:** Saturation, Vibrance, White Balance
* **Black & White:** Intensity, Neutrals, Tone, Grain with artistic film presets
* **Perspective correction:** Vertical/horizontal keystoning, ±45° rotation
* **Black border prevention:** Geometric validation ensures no black pixels after transforms

All edits are stored in `.ipo` sidecar files. Your originals stay untouched forever.

**The math behind perspective correction:** I defined three coordinate systems:

* **Texture Space** — raw pixels from the source image
* **Projected Space** — after the perspective matrix (where validation happens)
* **Screen Space** — for mouse interaction

The crop box must be fully contained within the transformed quadrilateral. I use `point_in_convex_polygon` checks to prevent any black borders before applying the crop.
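That containment test can be sketched in plain Python (a hypothetical reimplementation for illustration, not the project's actual `point_in_convex_polygon`): a point lies inside a convex polygon exactly when the cross product against every edge has a consistent sign.

```python
def point_in_convex_polygon(px, py, vertices):
    """Return True if (px, py) lies inside (or on) the convex polygon.

    vertices: list of (x, y) corners in consistent winding order
    (CW or CCW). The point is inside iff the 2D cross product with
    every edge has the same sign (zero means on the boundary).
    """
    sign = 0
    n = len(vertices)
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]
        cross = (x2 - x1) * (py - y1) - (y2 - y1) * (px - x1)
        if cross != 0:
            if sign == 0:
                sign = 1 if cross > 0 else -1
            elif (cross > 0) != (sign > 0):
                return False
    return True


def crop_is_valid(crop_corners, projected_quad):
    """A crop passes only if every corner sits inside the projected quad."""
    return all(point_in_convex_polygon(x, y, projected_quad)
               for x, y in crop_corners)
```

Running this for all four crop corners against the transformed quad is cheap enough to do on every slider tick, which is what makes live black-border prevention feasible.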
# Map view with GPS clustering
Every photo with GPS metadata appears on an interactive map. I built a custom MapLibre-style vector tile renderer in PySide6/Qt6 — no web view, pure OpenGL. Tiles are cached locally. Reverse geocoding converts coordinates to human-readable locations ("Tokyo, Japan"). Perfect for reliving travel memories — see all photos from your trip plotted on an actual map.
# The architecture
Backend (Pure Python, no GUI dependency):
├── models/ → Album, LiveGroup data structures
├── io/ → Scanner, metadata extraction
├── core/ → Live Photo pairing, image filters (NumPy → Numba JIT fallback)
├── cache/ → index.jsonl, file locking
└── app.py → Facade coordinating everything
GUI (PySide6/Qt6):
├── facade.py → Qt signals/slots bridge to backend
├── services/ → Async tasks (scan, import, move)
├── controllers/→ MVC pattern
├── widgets/ → Edit panels, map view
└── gl_*/ → OpenGL renderers (image viewer, crop tool, perspective)
The backend is fully testable without any GUI. The GUI layer uses strict MVC — Controllers trigger actions, Models hold state, Widgets render.

**Performance tier fallback:**
GPU (OpenGL 3.3) → NumPy vectorized → Numba JIT → Pure Python
↑ preferred fallback →
If your machine somehow doesn't support OpenGL 3.3, the app falls back to CPU processing. It'll be slow, but it'll work.
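A tier ladder like this usually comes down to probing capabilities at import time and binding the best available implementation to one public name. Here is a minimal sketch of just the two CPU tiers for the saturation filter (hypothetical function names; the real app's filters and probing logic will differ):

```python
def saturate_pure(rgb_rows, s):
    """Pure-Python last resort: luminance-preserving saturation per pixel.

    rgb_rows is a nested list of (r, g, b) floats in [0, 1].
    """
    out = []
    for row in rgb_rows:
        new_row = []
        for r, g, b in row:
            luma = 0.299 * r + 0.587 * g + 0.114 * b
            new_row.append(tuple(luma + (c - luma) * s for c in (r, g, b)))
        out.append(new_row)
    return out


try:
    import numpy as np

    def saturate(rgb, s):
        """NumPy tier: identical math, vectorized over the whole image."""
        rgb = np.asarray(rgb, dtype=np.float64)
        luma = rgb @ np.array([0.299, 0.587, 0.114])
        return luma[..., None] + (rgb - luma[..., None]) * s
except ImportError:
    # No NumPy available: fall back to the slow but dependency-free path.
    saturate = saturate_pure
```

Every tier implements the same signature, so callers never care which one they got — only the startup probe does.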
# Why I'm posting
I've been using this daily for 6 months with my 80,000+ photo library. It genuinely solved a problem that frustrated me for years. But I don't know if anyone else has this pain. Are there other iPhone users stuck on Windows who miss their Live Photos? Is "folder = album" a philosophy that resonates? Or am I solving a problem only I have? **The app is:**
* 🆓 Free and open source (MIT)
* 💾 100% local, no cloud, no account
* 🪟 Windows native (Linux support planned)
* ⚡ GPU-accelerated, but runs on old laptops too
* 📱 Built specifically for iPhone Live Photo support

GitHub: https://github.com/OliverZhaohaibin/iPhotos-LocalPhotoAlbumManager

Would love feedback on both the concept and execution. Roast my architecture. Tell me what's missing. Or just tell me if you've
Built a small open source analytics tool for GitHub repos
I started Highfly (not open source atm), a project management tool geared towards devs. I also built a small analytics page for GitHub open source repos and figured others might find it useful too. It came out of some internal work I was doing around repo activity, and it felt simple enough to separate and share. It’s free, works on any public repo, and doesn’t require an account.
It shows things like:
* Reviewer activity
* Contributor activity
* First-time contributor patterns
* Issue creation trends
* Issue lifecycle health
* Backlog health
* PR review lag
Nothing crazy, but seemed cool to me.
Here’s the link if you want to try it:
[github link](https://github.com/highfly-app/github-analytics)
[analytics page link](https://highfly.app/analytics?ref=reddit)
Example: [vercel/next.js repo](https://highfly.app/analytics/vercel/next.js?ref=reddit&timeRange=3months)
If you’ve got thoughts or ideas on more things to add, let me know.
Note: It takes a couple of minutes to collect all the data, which is then cached for 2 weeks. Not trying to hit GitHub's rate limits.
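A fixed-TTL cache like that is often just a timestamped JSON blob per repo; a hypothetical sketch of the idea (not Highfly's actual code — `cached_fetch` and the layout are invented for illustration):

```python
import json
import time
from pathlib import Path

CACHE_DIR = Path("cache")
TTL_SECONDS = 14 * 24 * 3600  # two weeks


def cached_fetch(repo, fetch_fn, now=None):
    """Return cached analytics for `repo` if still fresh, else re-fetch.

    fetch_fn(repo) is the expensive GitHub API crawl; its result must
    be JSON-serializable.
    """
    now = time.time() if now is None else now
    CACHE_DIR.mkdir(exist_ok=True)
    path = CACHE_DIR / (repo.replace("/", "__") + ".json")
    if path.exists():
        entry = json.loads(path.read_text())
        if now - entry["fetched_at"] < TTL_SECONDS:
            return entry["data"]
    data = fetch_fn(repo)
    path.write_text(json.dumps({"fetched_at": now, "data": data}))
    return data
```

The long TTL trades freshness for staying far under the API rate limit, which matches the author's stated goal.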
Please star it if you can
https://redd.it/1pez878
@r_opensource
OpenScad type of app for 2D graphic design?
Hi! Does anyone know a 2D graphic design application when you design by code, like OpenScad?
https://redd.it/1pf598o
@r_opensource
Creator of Ruby on Rails denounces OSI's definition of "open source"
https://x.com/dhh/status/1996643925126533282
https://redd.it/1pf6cc5
@r_opensource
GitHub - larswaechter/tokemon: A Node.js library for reading streamed JSON.
https://github.com/larswaechter/tokemon
https://redd.it/1pf6v2p
@r_opensource
CloudMeet - self-hosted Calendly alternative running on Cloudflare's free tier
Built a simple meeting scheduler because I didn't want to pay for Calendly.
It syncs with Google Calendar, handles availability, sends email confirmations/reminders, and runs entirely on Cloudflare's free tier (Pages + D1 + Workers).
Deployment is very easy - fork the repo, add your API keys as GitHub secrets, run the workflow. That's it.
Stack: SvelteKit, Cloudflare Pages, D1 (SQLite), Workers for cron.
Demo: https://meet.klappe.dev/cloudmeet
GitHub: https://github.com/dennisklappe/CloudMeet
MIT licensed. Happy to hear feedback or answer questions.
https://redd.it/1pfbc74
@r_opensource
I built an automated court scraper because finding a good lawyer shouldn't be a guessing game
Hey everyone,
I recently caught 2 cases, 1 criminal and 1 civil, and I realized how incredibly difficult it is for the average person to find a suitable lawyer for their specific situation. There are two ways the average person looks for a lawyer: a simple Google search based on SEO (Google doesn't know how to rank attorneys) or through connections, which is basically flying blind. Trying to navigate court systems to actually see a lawyer's track record is a nightmare; the portals are clunky, slow, and often require manual searching case-by-case. It's as if they were built by people who DON'T want you to use their system.
So, I built CourtScrapper to fix this.
It’s an open-source Python tool that automates extracting case information from the Dallas County Courts Portal (with plans to expand). It lets you essentially "background check" an attorney's actual case history to see what they’ve handled and how it went.
What My Project Does
Multi-lawyer Search: You can input a list of attorneys and it searches them all concurrently.
Deep Filtering: Filters by case type (e.g., Felony), charge keywords (e.g., "Assault", "Theft"), and date ranges.
Captcha Handling: Automatically handles the court’s captchas using 2Captcha (or manual input if you prefer).
Data Export: Dumps everything into clean Excel/CSV/JSON files so you can actually analyze the data.
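The concurrent multi-lawyer search boils down to fanning the attorney list out over a bounded number of browser sessions. Here is a sketch of just that orchestration, with the Playwright-driven scraping stubbed out behind an assumed `scrape_one` coroutine (an invented name, not the repo's actual API):

```python
import asyncio


async def search_all(attorneys, scrape_one, max_concurrent=3):
    """Run scrape_one(name) for every attorney, at most max_concurrent at once.

    Bounding concurrency keeps the court portal (and the captcha solver)
    from being hammered by dozens of simultaneous sessions.
    """
    sem = asyncio.Semaphore(max_concurrent)

    async def bounded(name):
        async with sem:
            return name, await scrape_one(name)

    # gather preserves input order, so results line up with the attorney list
    results = await asyncio.gather(*(bounded(n) for n in attorneys))
    return dict(results)
```

In the real tool, `scrape_one` would open a Playwright page, fill the portal's search form, and page through results; the orchestration layer stays the same regardless.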
Target Audience
The average person who is looking for a lawyer that makes sense for their particular situation
Comparison
Enterprise software that has API connections to state courts, e.g. LexisNexis, Westlaw
The Tech Stack:
Python
Playwright (for browser automation/stealth)
Pandas (for data formatting)
My personal use case:
1. Gather a list of lawyers I found through google
2. Adjust the values in the config file to determine the cases to be scraped
3. Program generates the excel sheet with the relevant cases for the listed attorneys
4. I personally go through each case to determine if I should consider it for my particular situation. The analysis is as follows
1. Determine whether my case's prosecutor/opposing lawyer/judge is someone the lawyer has dealt with
2. How recent are similar cases handled by the lawyer?
3. Is the nature of the case similar to my situation? If so, what is the result of the case?
4. Has the lawyer trialed any similar cases or is every filtered case settled in pre trial?
5. Upon shortlisting the lawyers, I can then go into each document in each of the cases of the shortlisted lawyer to get details on how exactly they handle them, saving me a lot of time as compared to just blindly researching cases
Note:
Many people assume the program generates some form of win/loss ratio from the information gathered. It doesn't. It generates a list of relevant cases with their respective case details.
I have tried AI scrapers, and the problem with them is that they don't work well when a task requires a lot of clicking and typing.
Expanding to other court systems will require manual coding, which is tedious. So when I do expand to other courts, it will only make sense to do it for big cities, e.g. Houston, NYC, LA, SF.
I'm running this program as a proof of concept for now, so it only covers Dallas.
I'll be working on a frontend so non-technical users can access the program easily. It will be free, with a donation portal to fund the hosting.
If you would like to contribute, I have very clear documentation on the various code flows in my repo under the Docs folder. Please read it before asking any questions
Same for any technical questions, read the documentation before asking any questions
I’d love for you guys to roast my code or give me some feedback. I’m looking to make this more robust and potentially support more counties.
Repo
Multi Agent Healthcare Assistant
As part of the Kaggle “5-Day Agents” program, I built a LLM-Based Multi-Agent Healthcare Assistant — a compact but powerful project demonstrating how AI agents can work together to support medical decision workflows.
What it does:
- Uses multiple AI agents for symptom analysis, triage, medical Q&A, and report summarization
- Provides structured outputs and risk categories
- Built with Google ADK, Python, and a clean Streamlit UI
🔗 Project & Code:
Web Application: https://medsense-ai.streamlit.app/
Code: https://github.com/Arvindh99/Multi-Level-AI-Healthcare-Agent-Google-ADK
https://redd.it/1pfi881
@r_opensource
A fast lightweight similarity search engine built in Rust
https://ahnlich.dev
https://redd.it/1pfkymi
@r_opensource
Advice on Getting Started with Open Source Contributions ?
Hey,
I’ve been wanting to get into open source for a while but I’m feeling stuck. I really want to improve my development skills and not rely on vibe coding too much. There’s so much info out there, it’s overwhelming. For someone totally new, what’s the easiest way to find a project that’s actually friendly to beginners?
Also, I’m nervous about accidentally breaking stuff or messing things up for others. I know maintainers review PRs, but how did you get over that fear when you first started? I want to be responsible and make sure my code works before submitting. How do you test your changes locally? What’s a good way to self-review so I’m confident I’m not wasting anyone’s time?
I’m decent with git and GitHub and have been working as an intern for 7 months, so I’m not a complete newbie. Any advice, tips, or been-there-done-that stories would be great.
Thanks a lot!
https://redd.it/1pfmghg
@r_opensource
I built my own Open Source extension for Broken Link Building & Site Audits
Hi,
I wanted to share a project I’ve been working on recently.
Originally, I started coding this because I just needed a quick way to spot broken backlinks on a page to do outreach (Broken Link Building). However, I got a bit carried away and it evolved into a full suite for analyzing on-page SEO, link integrity, and site structure.
It is 100% Open Source and runs locally in your browser.
Key Features for SEOs:
Status Analysis: Instantly detects broken links (404/500/Timeouts) and traces full redirect chains (e.g., 301 -> 302 -> 200).
Visual Site Audit: This is the biggest feature. It recursively crawls a website (up to 4 levels deep) and builds an interactive Force-Directed Graph. This helps you visualize internal linking structures and spot isolated nodes or errors visually.
SEO Metrics: Integrates with Moz API (V2) to show DA scores directly in the table and flags Rel attributes (dofollow/sponsored/ugc).
Automation: You can set it to monitor specific URLs daily in the background. It sends an email or browser notification if a backlink drops or breaks.
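Tracing a redirect chain means issuing single-hop requests with automatic redirects disabled and following the `Location` header manually. The extension itself runs as browser JavaScript, so this Python sketch (with the HTTP call abstracted behind a stubbed `fetch_status`, an invented helper) only illustrates the logic:

```python
def trace_redirect_chain(url, fetch_status, max_hops=10):
    """Follow `url` hop by hop and return the chain as (status, url) pairs.

    fetch_status(url) must return (status_code, location_or_None) for a
    single request made with automatic redirects disabled. A chain like
    301 -> 302 -> 200 comes back as [(301, a), (302, b), (200, c)].
    """
    chain = []
    for _ in range(max_hops):
        status, location = fetch_status(url)
        chain.append((status, url))
        if status in (301, 302, 303, 307, 308) and location:
            url = location  # keep following
        else:
            return chain  # terminal status: 200, 404, 500, ...
    chain.append(("too-many-redirects", url))
    return chain
```

Capping the hop count matters: redirect loops are exactly the kind of breakage a link auditor needs to report rather than hang on.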
⚠️ : I built this entirely on my own in my free time. While I use it daily, you might encounter some bugs or unpolished features depending on the specific site structure you are analyzing.
I’m constantly working to fix them, but please be patient! If you are a dev or just want to help, I would be extremely happy to receive feedback, bug reports, or even Pull Requests on GitHub.
🔗 You can check the code or download it here: https://github.com/lucalocastro/TaliaLink
https://redd.it/1pfoys6
@r_opensource
Starting with source available until it gets stable, then open sourcing it?
I am creating an application. I want it to be open source under AGPLv3, but I want to start with a source-available license (BSL 1.1) until I reach a stable v1. Is this good practice, or will I get burned for it?
https://redd.it/1pfq92u
@r_opensource
Looking for a solution for video upload + registration for a music competition
Hey Everyone ,
We are organizing a classical music competition for our non-profit, and I’m looking for recommendations for a WordPress plugin or open-source solution that can handle:
🎤 What we need:
• A registration form (Name, Phone, Category etc.)
• Video upload by participants (3–5 min performance recorded on mobile)
• Large file support (300–500 MB or more)
• Store videos outside WordPress, ideally in Cloudflare R2 or S3 compatible storage
• Payment integration (UPI/Razorpay/Stripe/etc. based on the country)
If you’ve done something like this (contest, talent hunt, audition submissions, etc.), your input would help us a lot! 🙏
https://redd.it/1pfsheb
@r_opensource
Contribute to open source
Hello, I am a young developer and I would like to participate in open source projects.
Do you have any ideas how to do it?
How do I start?
https://redd.it/1pft9rv
@r_opensource
Merging Fork back into Main Repo
I'm the current lead developer for PySolFC, an open source solitaire app, licensed under the GPL v3. Some time back, I identified a fork of the project called PySolIII, which was branched off the main project sometime before I joined, and was developed for a few years before it stopped around 2020. Though the lead developer is named, there is no contact information on the site.
There is a lot of good code and there are features there I would like to try to merge back into the main branch. It wouldn't be a perfect merge, as a few years of updates have caused some ID conflicts, and there are a few features I'd prefer to frame a little differently.
I know because of the viral GPL v3 (it is cited in the PySolIII docs), I'm legally in the clear to merge the code, as long as I give it proper attribution and preserve any copyright notices. Though I'm wondering about etiquette. While PySolIII has not been updated in about 5 years, I still worry about going forward with merging too much over without getting in contact with the original developer.
Also, there is a mention of some of the new images being licensed under an OSI two clause license (http://pysoliii.freeshell.org/pysol/html/pg10.html).
Is there a reason to be cautious about doing such a code merge? Or am I overthinking things?
For context:
- PySolFC main repo: https://github.com/shlomif/PySolFC
- PySolIII site: http://pysoliii.freeshell.org/pysol/
https://redd.it/1pfudvw
@r_opensource
HIRING Open Source Developers (Remote) - $90-$120 / hr
We’re looking for open-source contributors and experienced engineers who understand how to review, maintain, and troubleshoot live repositories.
# Who You Are
An open-source developer or maintainer who has contributed to or reviewed code in live repositories
Comfortable reasoning about Git at a deep level
Adept at debugging repository states and fixing broken histories without data loss
# Preferred Qualifications
3+ years of software engineering experience in open-source, backend, or DevOps roles
Demonstrated history of contributions on GitHub, GitLab, or other OSS platforms
(Bonus) Experience in code review or AI/LLM model evaluation
# Why Join
Turn your open-source experience into valuable, high-impact data
Fully remote, flexible work, with competitive compensation
We consider all qualified applicants without regard to legally protected characteristics and provide reasonable accommodations upon request.
**CLICK HERE TO APPLY!**
https://redd.it/1pfx8gh
@r_opensource
Daily Linux command
Hello!
I just wanted to share a site I made. I'm not really a developer, just a Linux noob.
The site does pretty much one thing: present a Linux command each day, with some examples of its usage. I added it to my phone's home screen and have actually found myself using it daily, when on the go or just when I'm bored.
Anyway, here it is: https://licod.io
GitHub: https://github.com/fredrikk1/licode
https://redd.it/1pfy0s4
@r_opensource
Duplicate file finder recommendations?
To find and remove duplicate files like photo backups.
https://redd.it/1pfywwz
@r_opensource
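For the question above: dedicated tools exist for this (fdupes and rmlint are common choices on Linux), but the core idea is simple enough to sketch. A minimal, hypothetical Python version groups files by a hash of their contents; real tools speed this up by first grouping on file size and only hashing same-size candidates:

```python
import hashlib
import os
from collections import defaultdict

def find_duplicates(root):
    """Group files under `root` by the SHA-256 of their contents.

    Returns a list of lists, each holding the paths of identical files.
    Hashing every file in full is simple but slow on large photo
    libraries; it is meant as a sketch, not a production tool.
    """
    by_hash = defaultdict(list)
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            digest = hashlib.sha256()
            # Read in 1 MiB chunks so large files don't load into memory.
            with open(path, "rb") as f:
                for chunk in iter(lambda: f.read(1 << 20), b""):
                    digest.update(chunk)
            by_hash[digest.hexdigest()].append(path)
    return [paths for paths in by_hash.values() if len(paths) > 1]
```

Each returned group can then be reviewed before deleting all but one copy; deleting automatically based on a sketch like this is risky for irreplaceable photos.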
Built an offline voice-to-text tool for macOS using Parakeet
https://github.com/gptguy/silentkeys
https://redd.it/1pg0lgv
@r_opensource
GitHub - gptguy/silentkeys: Real time, privacy first, low latency push to talk using Parakeet fully on device with Tauri and ORT.