Using OpnForm?
I’ve been tracking OpnForm for a while and recently had a chance to chat one-on-one with its creator, Julien Nahum. We dove into the early decisions, AWESOME growth hacks, and cool future plans for the project — here’s the actual recorded convo if you’re curious.
But here’s where I need help:
Are any of you using OpnForm in production for more advanced or large-scale form use cases? Any unexpected blockers or gotchas? He mentioned embeds are iframe-based rather than native. Honest opinions encouraged.
https://redd.it/1peyj43
@r_opensource
YouTube: The Birth and Early Growth Hacks Behind OpnForm and NoteForms with Julien Nahum
Introducing AllTheSubs - A Collaborative Subreddit Analysis Database
https://allthesubs.ericrosenberg.com/
https://redd.it/1pf0249
@r_opensource
I built a macOS Photos-style manager for Windows
I built a macOS Photos-style manager for Windows because I couldn't view my iPhone Live Photos on my engineering laptop
[Show & Tell] I'm an electrical engineering student. I also love photography — specifically, I love Live Photos on my iPhone. Those 1.5-second motion clips capture moments that still photos can't: my cat mid-pounce, friends bursting into laughter, waves crashing on rocks.

The problem? My field runs on Windows. MATLAB, LTspice, Altium Designer, Cadence, Multisim — almost every EE tool requires Windows. I can't switch to Mac for school. But every time I transfer my photos to my laptop, the magic dies. My HEIC stills become orphaned files. The MOV motion clips scatter into random folders. Windows Photos app shows them as separate, unrelated files. The "Live" part of Live Photo? Gone.

I searched everywhere for a solution. Stack Overflow. Reddit. Apple forums. Nothing. Some suggested "just use iCloud web" — but it's painfully slow and requires constant internet. Others said "convert to GIF" — destroying quality and losing the original. A few recommended paid software that wanted to import everything into proprietary databases, corrupting my folder structure in the process. So I spent 6 months building what I actually needed.
# How it works: Folder = Album
[https://github.com/OliverZhaohaibin/iPhotos-LocalPhotoAlbumManager](https://github.com/OliverZhaohaibin/iPhotos-LocalPhotoAlbumManager)

**No database. No import step. Every folder is an album.** The app uses lightweight `.iphoto.album.json` manifests to store your "human decisions" — cover photo, featured images, custom order. Your original files are **never touched**. This means:
* ✅ You can browse your library with any file manager
* ✅ You can sync with any cloud service
* ✅ If my app dies tomorrow, your photos are still perfectly organized
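For illustration, opening a folder as an album might look roughly like the sketch below. The manifest key names (`cover`, `featured`, `order`) are guesses based on the description above, not the app's actual schema.

```python
# Sketch only: manifest keys are assumed from the post's description,
# not taken from the app's real schema.
import json
from pathlib import Path

MANIFEST_NAME = ".iphoto.album.json"
MEDIA_SUFFIXES = {".heic", ".jpg", ".jpeg", ".mov"}

def load_album(folder: Path) -> dict:
    meta = {"cover": None, "featured": [], "order": []}  # assumed defaults
    manifest = folder / MANIFEST_NAME
    if manifest.exists():
        meta.update(json.loads(manifest.read_text(encoding="utf-8")))
    # The photos are just whatever is in the folder: no import step, no database.
    photos = sorted(p.name for p in folder.iterdir() if p.suffix.lower() in MEDIA_SUFFIXES)
    return {"folder": str(folder), "photos": photos, **meta}
```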
# The killer feature: Live Photo pairing
The app automatically pairs your HEIC/JPG stills with their MOV motion clips using Apple's `ContentIdentifier` metadata. A "LIVE" badge appears — hover to play the motion inline, just like on your iPhone. **Finally, I can show my Live Photos on Windows.** **Technical details for the curious:**
Live Photo Detection Pipeline:

1. ExifTool extracts `ContentIdentifier` from the HEIC/MOV
2. Fallback: time-proximity matching (±1.5s capture time)
3. Paired assets stored in `index.jsonl` for instant reload
I spent weeks reverse-engineering how Apple stores this metadata. Turns out the ContentIdentifier is embedded in QuickTime atoms — ExifTool can read it, but you need to know exactly where to look.
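A minimal sketch of that pipeline, assuming the `exiftool` binary is on PATH (illustrative, not the app's actual `core/` pairing code):

```python
# Minimal sketch of the detection pipeline described above; assumes exiftool
# is installed and on PATH. The time-proximity fallback is only stubbed.
import json
import subprocess
from pathlib import Path

def content_identifier(path: Path) -> str | None:
    # ExifTool reads Apple's ContentIdentifier from both the HEIC still
    # and the MOV motion clip (where it lives in QuickTime metadata).
    out = subprocess.run(
        ["exiftool", "-json", "-ContentIdentifier", str(path)],
        capture_output=True, text=True, check=True,
    ).stdout
    return json.loads(out)[0].get("ContentIdentifier")

def pair_live_photos(stills: list[Path], movs: list[Path]) -> list[tuple[Path, Path]]:
    movs_by_id = {content_identifier(m): m for m in movs}
    pairs = []
    for still in stills:
        cid = content_identifier(still)
        if cid is not None and cid in movs_by_id:
            pairs.append((still, movs_by_id[cid]))
        # else: fall back to capture-time proximity matching (±1.5 s), omitted here
    return pairs
```

In practice you would batch many files into a single exiftool invocation; spawning one process per file, as above, is slow on a large library.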
# The performance nightmare that forced me into GPU programming
My first version did everything on CPU with pure Python + NumPy. It worked... technically. Then I tried editing a 48MP photo. **Nearly 3 minutes to apply a single brightness adjustment.** I watched the progress bar crawl. I alt-tabbed. I made coffee. I came back. Still processing.

This was unacceptable. Photo editing needs to feel instant — you drag a slider, you see the result. Not "drag a slider, go make lunch."

I profiled the code. The bottleneck was clear: Python's GIL + CPU-bound pixel operations = death by a thousand loops. Even with NumPy vectorization and Numba JIT compilation, I was hitting a wall. A 48MP image is 48 million pixels. Each pixel needs multiple operations for exposure, contrast, saturation... that's billions of calculations per adjustment.

**So I rewrote the entire rendering pipeline in OpenGL 3.3.** Why OpenGL 3.3 specifically?
* ✅ **Maximum compatibility** — runs on integrated GPUs from 2012, no dedicated GPU required
* ✅ **Cross-platform** — same shaders work on Windows, macOS, Linux
* ✅ **Sufficient power** — for 2D image processing, I don't need Vulkan's complexity

As a student, I know many of us run old ThinkPads or budget laptops. I needed something that works on a 10-year-old machine with Intel HD Graphics, not just RTX 4090s. The result? That same 48MP photo now renders adjustments in **under 16ms** — 60fps
real-time preview. Drag a slider, see it instantly. The way it should be.

**The shader pipeline:**

```glsl
// Simplified version of the color grading shader
uniform sampler2D u_texture;
uniform float u_exposure;
uniform float u_contrast;
uniform float u_saturation;
uniform mat3  u_perspectiveMatrix;

in vec2 transformedCoord; // interpolated from the vertex shader

void main() {
    vec4 color = texture(u_texture, transformedCoord);

    // Exposure (stops)
    color.rgb *= pow(2.0, u_exposure);

    // Contrast (pivot at 0.5)
    color.rgb = (color.rgb - 0.5) * u_contrast + 0.5;

    // Saturation (luminance-preserving)
    float luma = dot(color.rgb, vec3(0.299, 0.587, 0.114));
    color.rgb = mix(vec3(luma), color.rgb, u_saturation);

    gl_FragColor = color;
}
```
All calculations happen on the GPU in parallel — millions of pixels processed simultaneously. The CPU just uploads uniforms and lets the GPU do what it's designed for.
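For a sense of how little CPU-side work that is, here is a hedged illustration using PyOpenGL; the app may well use Qt's own GL wrappers instead, and `program`/`repaint` are assumed handles, not names from the repo.

```python
# Illustrative only: `program` is a linked shader program handle and
# `repaint` schedules a redraw; both are assumptions, not the app's API.
from OpenGL.GL import glGetUniformLocation, glUniform1f, glUseProgram

def on_slider_changed(program: int, name: str, value: float, repaint) -> None:
    glUseProgram(program)
    # Upload one float (e.g. "u_exposure"); the fragment shader then
    # reprocesses all ~48M pixels in parallel on the GPU.
    glUniform1f(glGetUniformLocation(program, name), value)
    repaint()
```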
# Non-destructive editing with real-time preview
The edit mode is fully non-destructive:
* **Light adjustments:** Brilliance, Exposure, Highlights, Shadows, Brightness, Contrast, Black Point
* **Color grading:** Saturation, Vibrance, White Balance
* **Black & White:** Intensity, Neutrals, Tone, Grain with artistic film presets
* **Perspective correction:** Vertical/horizontal keystoning, ±45° rotation
* **Black border prevention:** Geometric validation ensures no black pixels after transforms

All edits are stored in `.ipo` sidecar files. Your originals stay untouched forever.

**The math behind perspective correction:** I defined three coordinate systems:

* **Texture Space** — raw pixels from the source image
* **Projected Space** — after the perspective matrix (where validation happens)
* **Screen Space** — for mouse interaction

The crop box must be fully contained within the transformed quadrilateral. I use `point_in_convex_polygon` checks to prevent any black borders before applying the crop.
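A sketch of that containment test (the app's `point_in_convex_polygon` may differ in details):

```python
# Point-in-convex-polygon via edge cross products. Sketch of the technique
# named above; not copied from the repo.
def point_in_convex_polygon(pt: tuple[float, float],
                            poly: list[tuple[float, float]]) -> bool:
    """poly: convex polygon vertices in consistent winding order."""
    sign = 0.0
    n = len(poly)
    for i in range(n):
        ax, ay = poly[i]
        bx, by = poly[(i + 1) % n]
        # Cross product of edge (a->b) with (a->pt): its sign says which side pt is on.
        cross = (bx - ax) * (pt[1] - ay) - (by - ay) * (pt[0] - ax)
        if cross != 0.0:
            if sign == 0.0:
                sign = cross
            elif (cross > 0.0) != (sign > 0.0):
                return False  # pt switched sides between edges: it is outside
    return True

# No black borders iff every corner of the crop box passes this test
# against the perspective-transformed image quadrilateral.
```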
# Map view with GPS clustering
Every photo with GPS metadata appears on an interactive map. I built a custom MapLibre-style vector tile renderer in PySide6/Qt6 — no web view, pure OpenGL. Tiles are cached locally. Reverse geocoding converts coordinates to human-readable locations ("Tokyo, Japan"). Perfect for reliving travel memories — see all photos from your trip plotted on an actual map.
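For reference, the standard Web-Mercator ("slippy map") conversion from GPS coordinates to tile indices that any such tile renderer relies on; this is general map math, not code from the app:

```python
# Standard OSM/Web-Mercator tile math; general-purpose, not from the repo.
import math

def deg_to_tile(lat: float, lon: float, zoom: int) -> tuple[int, int]:
    n = 2 ** zoom  # tiles per axis at this zoom level
    x = int((lon + 180.0) / 360.0 * n)
    y = int((1.0 - math.asinh(math.tan(math.radians(lat))) / math.pi) / 2.0 * n)
    return x, y

print(deg_to_tile(35.68, 139.69, 10))  # Tokyo at zoom 10 -> (909, 403)
```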
# The architecture
Backend (Pure Python, no GUI dependency):
├── models/ → Album, LiveGroup data structures
├── io/ → Scanner, metadata extraction
├── core/ → Live Photo pairing, image filters (NumPy → Numba JIT fallback)
├── cache/ → index.jsonl, file locking
└── app.py → Facade coordinating everything
GUI (PySide6/Qt6):
├── facade.py → Qt signals/slots bridge to backend
├── services/ → Async tasks (scan, import, move)
├── controllers/→ MVC pattern
├── widgets/ → Edit panels, map view
└── gl_*/ → OpenGL renderers (image viewer, crop tool, perspective)
The backend is fully testable without any GUI. The GUI layer uses strict MVC — Controllers trigger actions, Models hold state, Widgets render. **Performance tier fallback:**
GPU (OpenGL 3.3) → NumPy vectorized → Numba JIT → Pure Python
(leftmost tier preferred; each tier falls back to the next)
If your machine somehow doesn't support OpenGL 3.3, the app falls back to CPU processing. It'll be slow, but it'll work.
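A hedged sketch of how that tier probe could be ordered, mirroring the chain above (the app's real detection logic may differ, and import success alone doesn't prove a working GL 3.3 context):

```python
# Sketch of the fallback chain above; illustrative, not the app's code.
def pick_backend() -> str:
    try:
        import OpenGL.GL  # noqa: F401  # a usable GL 3.3 context is still needed
        return "opengl-3.3"
    except ImportError:
        pass
    try:
        import numpy  # noqa: F401
        return "numpy-vectorized"
    except ImportError:
        pass
    try:
        import numba  # noqa: F401
        return "numba-jit"
    except ImportError:
        return "pure-python"  # slow, but it always works
```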
# Why I'm posting
I've been using this daily for 6 months with my 80,000+ photo library. It genuinely solved a problem that frustrated me for years. But I don't know if anyone else has this pain. Are there other iPhone users stuck on Windows who miss their Live Photos? Is "folder = album" a philosophy that resonates? Or am I solving a problem only I have? **The app is:**
* 🆓 Free and open source (MIT)
* 💾 100% local, no cloud, no account
* 🪟 Windows native (Linux support planned)
* ⚡ GPU-accelerated, but runs on old laptops too
* 📱 Built specifically for iPhone Live Photo support

GitHub: https://github.com/OliverZhaohaibin/iPhotos-LocalPhotoAlbumManager

Would love feedback on both the concept and execution. Roast my architecture. Tell me what's missing. Or just tell me if you've
Built a small open source analytics tool for GitHub repos
I started Highfly (not open source atm), a project management tool geared towards devs. I also built a small analytics page for GitHub open source repos and figured others might find it useful too. It came out of some internal work I was doing around repo activity, and it felt simple enough to separate and share. It’s free, works on any public repo, and doesn’t require an account.
It shows things like:
* Reviewer activity
* Contributor activity
* First-time contributor patterns
* Issue creation trends
* Issue lifecycle health
* Backlog health
* PR review lag
Nothing crazy, but seemed cool to me.
Here’s the link if you want to try it:
[github link](https://github.com/highfly-app/github-analytics)
[analytics page link](https://highfly.app/analytics?ref=reddit)
Example: [vercel/next.js repo](https://highfly.app/analytics/vercel/next.js?ref=reddit&timeRange=3months)
If you’ve got thoughts or ideas on more things to add, let me know.
Note: it takes a couple of minutes to collect all the data, which is then cached for two weeks, since I'm not trying to hit GitHub's rate limits.
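For illustration, that kind of time-based caching can be as simple as the sketch below (made-up names, not Highfly's actual code):

```python
# Minimal two-week, file-based response cache; illustrative only.
import json
import time
from pathlib import Path

TTL_SECONDS = 14 * 24 * 3600  # two weeks, per the note above

def cached_fetch(key: str, fetch, cache_dir: Path = Path(".cache")) -> dict:
    cache_dir.mkdir(exist_ok=True)
    path = cache_dir / f"{key}.json"
    if path.exists() and time.time() - path.stat().st_mtime < TTL_SECONDS:
        return json.loads(path.read_text())  # fresh enough: no API calls
    data = fetch()                           # e.g. paginated GitHub API requests
    path.write_text(json.dumps(data))
    return data
```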
Please star it if you can
https://redd.it/1pez878
@r_opensource
OpenSCAD-type app for 2D graphic design?
Hi! Does anyone know of a 2D graphic design application where you design by code, like OpenSCAD?
https://redd.it/1pf598o
@r_opensource
Creator of Ruby on Rails denounces OSI's definition of "open source"
https://x.com/dhh/status/1996643925126533282
https://redd.it/1pf6cc5
@r_opensource
X (formerly Twitter), DHH (@dhh):
@codejake I have no interest in playing capitalization games from a "complainer's viewpoint". Take the gift, don't take the gift. Both fine options! But get the fuck out of here trying to assert some narrow, proprietary definition of common words like "open"…
GitHub - larswaechter/tokemon: A Node.js library for reading streamed JSON.
https://github.com/larswaechter/tokemon
https://redd.it/1pf6v2p
@r_opensource
CloudMeet - self-hosted Calendly alternative running on Cloudflare's free tier
Built a simple meeting scheduler because I didn't want to pay for Calendly.
It syncs with Google Calendar, handles availability, sends email confirmations/reminders, and runs entirely on Cloudflare's free tier (Pages + D1 + Workers).
Deployment is very easy - fork the repo, add your API keys as GitHub secrets, run the workflow. That's it.
Stack: SvelteKit, Cloudflare Pages, D1 (SQLite), Workers for cron.
Demo: https://meet.klappe.dev/cloudmeet
GitHub: https://github.com/dennisklappe/CloudMeet
MIT licensed. Happy to hear feedback or answer questions.
https://redd.it/1pfbc74
@r_opensource
I built an automated court scraper because finding a good lawyer shouldn't be a guessing game
Hey everyone,
I recently caught two cases, one criminal and one civil, and I realized how incredibly difficult it is for the average person to find a suitable lawyer for their specific situation. There are two ways the average person looks for a lawyer: a simple Google search based on SEO (Google doesn't know how to rank attorneys) or through connections, which is basically flying blind. And trying to navigate court systems to actually see a lawyer's track record is a nightmare: the portals are clunky, slow, and often require manual searching case by case, as if they were built by people who DON'T want you to use their system.
So, I built CourtScrapper to fix this.
It’s an open-source Python tool that automates extracting case information from the Dallas County Courts Portal (with plans to expand). It lets you essentially "background check" an attorney's actual case history to see what they’ve handled and how it went.
What My Project Does
Multi-lawyer Search: You can input a list of attorneys and it searches them all concurrently.
Deep Filtering: Filters by case type (e.g., Felony), charge keywords (e.g., "Assault", "Theft"), and date ranges.
Captcha Handling: Automatically handles the court’s captchas using 2Captcha (or manual input if you prefer).
Data Export: Dumps everything into clean Excel/CSV/JSON files so you can actually analyze the data.
Target Audience
The average person who is looking for a lawyer that makes sense for their particular situation
Comparison
Enterprise software that has API connections to state courts, e.g. LexisNexis, Westlaw
The Tech Stack:
Python
Playwright (for browser automation/stealth)
Pandas (for data formatting)
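As a rough idea of what concurrent portal searches with Playwright look like, here is a hedged sketch; the URL, selectors, and field names are hypothetical, not taken from the CourtScrapper repo, and captcha handling is omitted.

```python
# Illustrative sketch only: portal URL and selectors are hypothetical.
import asyncio
from playwright.async_api import async_playwright

PORTAL_URL = "https://example-courts-portal.test/search"  # hypothetical

async def search_attorney(context, name: str) -> list[str]:
    page = await context.new_page()
    await page.goto(PORTAL_URL)
    await page.fill("#attorney-name", name)   # hypothetical selector
    await page.click("#search")               # hypothetical selector
    rows = await page.locator("table.results tr").all_text_contents()
    await page.close()
    return rows

async def main(attorneys: list[str]) -> dict[str, list[str]]:
    async with async_playwright() as pw:
        browser = await pw.chromium.launch(headless=True)
        context = await browser.new_context()
        # One concurrent task per attorney, as the multi-lawyer search describes.
        results = await asyncio.gather(*(search_attorney(context, a) for a in attorneys))
        await browser.close()
        return dict(zip(attorneys, results))

if __name__ == "__main__":
    print(asyncio.run(main(["Jane Doe", "John Roe"])))
```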
My personal use case:
1. Gather a list of lawyers found through Google
2. Adjust the values in the config file to determine the cases to be scraped
3. The program generates an Excel sheet with the relevant cases for the listed attorneys
4. I personally go through each case to determine if I should consider it for my particular situation. The analysis is as follows:
   1. Determine whether my case's prosecutor, opposing lawyer, or judge is someone the lawyer has dealt with
   2. How recently has the lawyer handled similar cases?
   3. Is the nature of the case similar to my situation? If so, what was the result?
   4. Has the lawyer taken any similar cases to trial, or is every filtered case settled pretrial?
5. After shortlisting lawyers, I can go into each document in each of their cases to see exactly how they handled them, saving a lot of time compared to blindly researching cases
Note:
Many people assume the program generates some form of win/loss ratio from the information gathered. It doesn't. It generates a list of relevant cases with their respective case details.
I have tried AI scrapers, and the problem with them is that they don't work well when a task requires a lot of clicking and typing.
Expanding to other court systems will require manual coding, which is tedious. So when I do expand to other courts, it will only make sense to do it for the big cities, e.g. Houston, NYC, LA, SF.
I'm running this program as a proof of concept for now, so it is Dallas only.
I'll be working on a frontend so non-technical users can access the program easily; it will be free, with a donation portal to fund the hosting.
If you would like to contribute, I have clear documentation of the various code flows in my repo under the Docs folder. Please read it before asking any questions; the same goes for technical questions.
I'd love for you to roast my code or give me some feedback. I'm looking to make this more robust and potentially support more counties.
Repo
Multi Agent Healthcare Assistant
As part of the Kaggle “5-Day Agents” program, I built an LLM-based Multi-Agent Healthcare Assistant — a compact but powerful project demonstrating how AI agents can work together to support medical decision workflows.
What it does:
- Uses multiple AI agents for symptom analysis, triage, medical Q&A, and report summarization
- Provides structured outputs and risk categories
- Built with Google ADK, Python, and a clean Streamlit UI
🔗 Project & Code:
Web Application: https://medsense-ai.streamlit.app/
Code: https://github.com/Arvindh99/Multi-Level-AI-Healthcare-Agent-Google-ADK
https://redd.it/1pfi881
@r_opensource
A fast lightweight similarity search engine built in Rust
https://ahnlich.dev
https://redd.it/1pfkymi
@r_opensource
ahnlich.dev
A project by developers bringing vector database and artificial intelligence powered semantic search abilities closer to you
Advice on Getting Started with Open Source Contributions?
Hey,
I’ve been wanting to get into open source for a while, but I'm feeling stuck. I really want to improve my development skills and not rely on vibe coding too much. There’s so much info out there that it’s overwhelming. For someone totally new, what’s the easiest way to find a project that’s actually friendly to beginners?
Also, I’m nervous about accidentally breaking stuff or messing things up for others. I know maintainers review PRs, but how did you get over that fear when you first started? I want to be responsible and make sure my code works before submitting. How do you test your changes locally? What’s a good way to self-review so I’m confident I’m not wasting anyone’s time?
I’m decent with git and GitHub and have been working as an intern for 7 months, so I’m not a complete newbie. Any advice, tips, or been-there-done-that stories would be great.
Thanks a lot!
https://redd.it/1pfmghg
@r_opensource
I built my own Open Source extension for Broken Link Building & Site Audits
Hi,
I wanted to share a project I’ve been working on recently.
Originally, I started coding this because I just needed a quick way to spot broken backlinks on a page to do outreach (Broken Link Building). However, I got a bit carried away and it evolved into a full suite for analyzing on-page SEO, link integrity, and site structure.
It is 100% Open Source and runs locally in your browser.
Key Features for SEOs:
Status Analysis: Instantly detects broken links (404/500/Timeouts) and traces full redirect chains (e.g., 301 -> 302 -> 200); see the sketch after this list.
Visual Site Audit: This is the biggest feature. It recursively crawls a website (up to 4 levels deep) and builds an interactive Force-Directed Graph. This helps you visualize internal linking structures and spot isolated nodes or errors visually.
SEO Metrics: Integrates with Moz API (V2) to show DA scores directly in the table and flags Rel attributes (dofollow/sponsored/ugc).
Automation: You can set it to monitor specific URLs daily in the background. It sends an email or browser notification if a backlink drops or breaks.
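As a conceptual Python sketch of that redirect-chain tracing (the extension itself runs as browser JavaScript, so this only illustrates the idea):

```python
# Conceptual sketch of "Status Analysis" using the requests library;
# the actual extension is browser JavaScript.
import requests

def trace_chain(url: str, timeout: float = 10.0) -> list[tuple[int, str]]:
    try:
        resp = requests.get(url, timeout=timeout, allow_redirects=True)
    except requests.RequestException as exc:
        return [(0, f"broken: {exc}")]  # timeouts/connection errors count as broken
    # resp.history holds each intermediate hop, e.g. 301 -> 302 -> 200
    chain = [(r.status_code, r.url) for r in resp.history]
    chain.append((resp.status_code, resp.url))
    return chain

print(trace_chain("http://github.com"))  # e.g. [(301, ...), (200, 'https://github.com/')]
```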
⚠️ I built this entirely on my own in my free time. While I use it daily, you might encounter some bugs or unpolished features depending on the specific site structure you are analyzing.
I’m constantly working to fix them, but please be patient! If you are a dev or just want to help, I would be extremely happy to receive feedback, bug reports, or even Pull Requests on GitHub.
🔗 You can check the code or download it here: https://github.com/lucalocastro/TaliaLink
https://redd.it/1pfoys6
@r_opensource
Starting source-available until stable, then going open source?
I am creating an application that I want to be open source under AGPLv3, but I want to start with the source-available BSL 1.1 license until I reach a stable v1. Is this good practice, or will I get burned for it?
https://redd.it/1pfq92u
@r_opensource
Looking for a solution for video upload + registration for a music competition
Hey everyone,
We are organizing a classical music competition for our non-profit, and I’m looking for recommendations for a WordPress plugin or open-source solution that can handle:
🎤 What we need:
• A registration form (Name, Phone, Category etc.)
• Video upload by participants (3–5 min performance recorded on mobile)
• Large file support (300–500 MB or more)
• Store videos outside WordPress, ideally in Cloudflare R2 or S3 compatible storage
• Payment integration (UPI/Razorpay/Stripe/etc. based on the country)
If you’ve done something like this (contest, talent hunt, audition submissions, etc.), your input would help us a lot! 🙏
https://redd.it/1pfsheb
@r_opensource
Contribute to open source
Hello, I am a young developer and I would like to contribute to open source projects.
Do you have any ideas on how to do it?
How should I start?
https://redd.it/1pft9rv
@r_opensource
Merging Fork back into Main Repo
I'm the current lead developer for PySolFC, an open source solitaire app, licensed under the GPL v3. Some time back, I identified a fork of the project called PySolIII, which was branched off the main project sometime before I joined, and was developed for a few years before it stopped around 2020. Though the lead developer is named, there is no contact information on the site.
There is a lot of good code/features there, and I would like to try to merge the fork back into the main branch. Though it wouldn't be a perfect merge as a few years of updates cause some ID conflicts, and there are a few features I'd prefer to frame a little differently.
I know because of the viral GPL v3 (it is cited in the PySolIII docs), I'm legally in the clear to merge the code, as long as I give it proper attribution and preserve any copyright notices. Though I'm wondering about etiquette. While PySolIII has not been updated in about 5 years, I still worry about going forward with merging too much over without getting in contact with the original developer.
Also, there is a mention of some of the new images being licensed under an OSI two-clause license (http://pysoliii.freeshell.org/pysol/html/pg10.html).
Is there a reason to be cautious about doing such a code merge? Or am I overthinking things?
For context:
- PySolFC main repo: https://github.com/shlomif/PySolFC
- PySolIII site: http://pysoliii.freeshell.org/pysol/
https://redd.it/1pfudvw
@r_opensource