Vllama: a CLI-based framework to run vision models on local and remote GPUs
Hello all, this is my first post. I have built a simple CLI tool that lets anyone run LLMs and vision models (image and video generation) on their local system, and if the system doesn't have a GPU or enough RAM, they can also run them on Kaggle's GPUs (which are free for 30 hours a week).
It is inspired by Ollama, which made downloading LLMs and interacting with them easy, so I wondered why the same couldn't exist for vision models. I tried it on my own system first; basic image generation works, though not that well. Then I thought: why not use Kaggle's GPUs to generate images and videos directly from the terminal in a single step, so that everyone can use it? So I built VLLAMA.
It currently has quite a few features: image and video generation locally or in a Kaggle GPU session, and downloading, running, and interacting with LLMs from anywhere (inspired by Ollama). I took this further with a VS Code extension, VLLAMA, which lets you chat with the locally running LLM straight from VS Code's chat panel by starting a message with "@vllama". There is no usage cost and no limit; you can find it in the VS Code extension marketplace.
I want to take this further so that companies, or anyone with GPU access, can download the LLMs that suit them, initialize them on their own GPU servers, and interact with them directly from VS Code's chat panel. In future versions I'm also planning agentic features, so users can use a local LLM for code editing and inline suggestions instead of paying for premium services.
It also has simple text-to-speech and speech-to-text, which I plan to build out in future versions using open-source audio models, along with 3D generation models, so that everyone can use open models directly from their terminal and the complex setup is reduced to a single command.
I have also implemented smaller helpers, like listing the downloaded models and their sizes, basic dataset preprocessing, and training ML models with just two commands by pointing them at a dataset. This is a basic implementation; I want to improve it so that users with nothing but a dataset can clean and preprocess it, train models locally, on Kaggle or other free GPU services, or on their own or cloud GPUs, and then deploy the result for any use case.
That's what it does today, and I want to keep improving it so that anyone can use it for their AI use case and leverage open models.
Please check out the work at: https://github.com/ManvithGopu13/Vllama
Published version at: https://pypi.org/project/vllama/
Also the extension: https://marketplace.visualstudio.com/items?itemName=ManvithGopu.vllama
Thanks for taking the time to read this, and I'm grateful to everyone who wants to contribute or spread the word.
Please leave improvement requests, suggestions, ideas, or even roasts in the comments or in the issues; it's all welcome and appreciated. Thanks in advance. If you find the project useful, please consider contributing and starring it.
https://redd.it/1penhp2
@r_opensource
Submitted my FOSS privacy-focused app that protects files from apps that require storage or all-files access permission
Hey Everyone,
I'm the developer of the Seek Privacy Android app. A week ago I submitted it to F-Droid, and it's in the last step of being merged.
The app may feel like a vault app, but the purpose wasn't simply to secure, hide, or encrypt files; it was to protect any type of file from apps with storage access.
We download many apps from the Play Store that have internet access, and to function they require various storage permissions. We can avoid a few of them, but for the apps we actually need, we're forced to grant those permissions. I always felt uneasy about what these internet-connected apps might be doing, and I didn't want to just trust them. So I wanted to be able to grant them all-files permission and still keep them from ever touching specific files on storage, while I could still access those files normally myself.
The app differs from other vault-like apps because I tried to combine ease of use with privacy, which I felt was lacking in other FOSS apps. Files are removed from external storage and encrypted, but you can still easily access, open, and share them through the SeekPrivacy app.
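For anyone curious what that flow looks like in code, here is a minimal sketch of the protect/restore idea. It is a generic Python illustration only: the actual app is an Android project, and the key handling, file names, and function names here are assumptions, not its real implementation.

```python
# Minimal sketch of "protect then restore": encrypt into an app-private vault,
# delete the original from shared storage, decrypt on demand for the owner.
# Generic illustration only, not the SeekPrivacy Android code.
from pathlib import Path
from cryptography.fernet import Fernet


def protect(src: Path, vault_dir: Path, key: bytes) -> Path:
    """Encrypt a file into the vault and remove the original from shared storage."""
    token = Fernet(key).encrypt(src.read_bytes())
    dst = vault_dir / (src.name + ".enc")
    dst.write_bytes(token)
    src.unlink()  # apps with storage access can no longer see the plain file
    return dst


def restore(enc: Path, out_dir: Path, key: bytes) -> Path:
    """Decrypt a protected file on demand so the owner can open or share it."""
    data = Fernet(key).decrypt(enc.read_bytes())
    out = out_dir / enc.name.removesuffix(".enc")
    out.write_bytes(data)
    return out
```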
Upcoming updates will add categorization for easier use and thumbnails to preview stored files.
Any feedback on the concept is welcome! Excited to contribute to the FOSS and privacy community.
GitHub link : https://github.com/duckniii/SeekPrivacy
https://redd.it/1peq2sx
@r_opensource
Combining Kubescape with ARMO CADR: Effective or Overkill?
Comparing Kubescape vs ARMO CADR for cloud security. CADR’s runtime monitoring seems to complement Kubescape’s scanning. Thoughts on integrating both in workflows?
https://redd.it/1pepvh1
@r_opensource
I built a productivity app with one rule: if it's not scheduled, it won't get done
I built a personal productivity app based on a controversial belief: unscheduled tasks don't get done. They sit in "someday/maybe" lists forever, creating guilt while you ignore them.
So I made something stricter than GTD. No inbox. No weekly review. Just daily accountability.
## How it works: Two panes
https://imgur.com/a/a2rCTBw
Left pane (Thoughts): Your journal. Write anything as it comes - notes, ideas, tasks. Chronological, like a diary.
Right pane (Time): Your timeline. The app extracts all time-sensitive items from your thoughts and puts them in a schedule.
You can be messy in your thinking (left), but your commitments are crystal clear (right).
## The forcing function: Daily Review
Every morning, the Time pane shows Daily Review - all your undone items from the past. You must deal with each one:
- ✓ Mark done (if you forgot)
- ↷ Reschedule
- × Cancel permanently
If you keep rescheduling something, you'll see "10 days old" staring at you. Eventually you either do it or admit you don't care.
Daily accountability, not weekly. No escape.
## Natural language scheduling
t buy milk at 5pm
t call mom Friday 2pm
e team meeting from 2pm to 3pm
Type it naturally. The app parses the time and schedules it automatically.
The key: When you write a task, you schedule it right then. The app forces you to answer "when will you do this?" You can't skip it.
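As a rough illustration of that parsing step, here is a hypothetical Python sketch that pulls an "at 5pm"-style time out of a task line and refuses anything unscheduled. The real app is not written in Python; the grammar, function names, and the "roll past times to tomorrow" rule are assumptions for the example only (day words like "Friday" are ignored here).

```python
# Toy parser for entries like "t buy milk at 5pm": extract a time, strip the
# prefix, and return a scheduled task. Illustrative only, not the app's code.
import re
from datetime import datetime, timedelta

TIME_RE = re.compile(r"\b(?:at )?(\d{1,2})(?::(\d{2}))?\s*(am|pm)\b", re.IGNORECASE)


def schedule(line: str, now: datetime | None = None) -> tuple[str, datetime]:
    """Return (task text, scheduled datetime) for a line like 't buy milk at 5pm'."""
    now = now or datetime.now()
    m = TIME_RE.search(line)
    if not m:
        raise ValueError("unscheduled tasks are not allowed: add a time")
    hour = int(m.group(1)) % 12
    if m.group(3).lower() == "pm":
        hour += 12
    when = now.replace(hour=hour, minute=int(m.group(2) or 0), second=0, microsecond=0)
    if when < now:  # a time already in the past rolls over to tomorrow
        when += timedelta(days=1)
    task = TIME_RE.sub("", line).removeprefix("t ").strip()
    return task, when


print(schedule("t buy milk at 5pm"))
```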
## Two viewing modes
- Infinite scroll: See 30 days past/future at once
- Book mode: One day per page, flip like a journal
## My stance
If something matters enough to write down, it matters enough to schedule. No "I'll prioritize later." Either:
- Do it now (IRL)
- Schedule it for a specific time
- Don't write it down
This isn't for everyone. It's for people who know unscheduled work doesn't get done and want daily accountability instead of weekly reviews.
## Why I'm posting
I've used this daily for months and it changed how I work. But I don't know if this philosophy resonates with anyone else.
Is "schedule it or don't write it" too strict? Do you also believe unscheduled tasks are just guilt generators? Or am I solving a problem only I have?
If this resonates, I'll keep improving it. It's open source, no backend, local storage only.
GitHub: https://github.com/sawtdakhili/Thoughts-Time
Would love honest feedback on both the philosophy and execution.
https://redd.it/1pesd80
@r_opensource
GitHub - artcore-c/email-xray: Chrome extension to detect hidden text in email
https://github.com/artcore-c/email-xray
https://redd.it/1pev0cj
@r_opensource
Using OpnForm?
I’ve been tracking OpnForm for a while and recently had a chance to chat one-on-one with its creator, Julien Nahum. We dove into the early decisions, AWESOME growth hacks, and cool future plans for the project — here’s the actual recorded convo if you’re curious.
But here’s where I need help:
Are any of you using OpnForm in production for more advanced or large-scale form use cases? Any unexpected blockers, gotchas, etc? He mentioned it was iframe embedded vs natively embedded. Honest opinions encouraged.
https://redd.it/1peyj43
@r_opensource
Introducing AllTheSubs - A Collaborative Subreddit Analysis Database
https://allthesubs.ericrosenberg.com/
https://redd.it/1pf0249
@r_opensource
I built a macOS Photos-style manager for Windows
I built a macOS Photos-style manager for Windows because I couldn't view my iPhone Live Photos on my engineering laptop
[Show & Tell] I'm an electrical engineering student. I also love photography — specifically, I love Live Photos on my iPhone. Those 1.5-second motion clips capture moments that still photos can't: my cat mid-pounce, friends bursting into laughter, waves crashing on rocks. The problem? My field runs on Windows. MATLAB, LTspice, Altium Designer, Cadence, Multisim — almost every EE tool requires Windows. I can't switch to Mac for school. But every time I transfer my photos to my laptop, the magic dies. My HEIC stills become orphaned files. The MOV motion clips scatter into random folders. Windows Photos app shows them as separate, unrelated files. The "Live" part of Live Photo? Gone. I searched everywhere for a solution. Stack Overflow. Reddit. Apple forums. Nothing. Some suggested "just use iCloud web" — but it's painfully slow and requires constant internet. Others said "convert to GIF" — destroying quality and losing the original. A few recommended paid software that wanted to import everything into proprietary databases, corrupting my folder structure in the process. So I spent 6 months building what I actually needed.
# How it works: Folder = Album
https://github.com/OliverZhaohaibin/iPhotos-LocalPhotoAlbumManager No database. No import step. Every folder is an album. The app uses lightweight `.iphoto.album.json` manifests to store your "human decisions" — cover photo, featured images, custom order. Your original files are never touched. This means:
✅ You can browse your library with any file manager
✅ You can sync with any cloud service
✅ If my app dies tomorrow, your photos are still perfectly organized
# The killer feature: Live Photo pairing
The app automatically pairs your HEIC/JPG stills with their MOV motion clips using Apple's `ContentIdentifier` metadata. A "LIVE" badge appears — hover to play the motion inline, just like on your iPhone. Finally, I can show my Live Photos on Windows. Technical details for the curious:
Live Photo Detection Pipeline:
ExifTool extracts ContentIdentifier from HEIC/MOV
Fallback: time-proximity matching (±1.5 s capture time)
Paired assets stored in index.jsonl for instant reload
I spent weeks reverse-engineering how Apple stores this metadata. Turns out the ContentIdentifier is embedded in QuickTime atoms — ExifTool can read it, but you need to know exactly where to look.
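To give a feel for the pipeline above, here is a hedged Python sketch of the two matching strategies, assuming `exiftool` is on PATH and reports `ContentIdentifier` and `DateTimeOriginal` the way it commonly does for Apple assets. The function names are illustrative, not the project's actual modules.

```python
# Sketch of Live Photo pairing: ContentIdentifier match first, then the
# ±1.5 s time-proximity fallback. Assumes `exiftool` is installed on PATH.
import json
import subprocess
from datetime import datetime
from pathlib import Path


def read_meta(path: Path) -> dict:
    """Read the tags needed for pairing from one HEIC/JPG/MOV file."""
    out = subprocess.run(
        ["exiftool", "-json", "-ContentIdentifier", "-DateTimeOriginal", str(path)],
        capture_output=True, text=True, check=True,
    ).stdout
    return json.loads(out)[0]


def capture_time(meta: dict) -> datetime | None:
    raw = meta.get("DateTimeOriginal")
    return datetime.strptime(raw, "%Y:%m:%d %H:%M:%S") if raw else None


def is_pair(still_meta: dict, mov_meta: dict) -> bool:
    """True if the still and motion clip belong to the same Live Photo."""
    cid_s, cid_m = still_meta.get("ContentIdentifier"), mov_meta.get("ContentIdentifier")
    if cid_s and cid_s == cid_m:
        return True
    ts, tm = capture_time(still_meta), capture_time(mov_meta)
    return ts is not None and tm is not None and abs((ts - tm).total_seconds()) <= 1.5
```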
# The performance nightmare that forced me into GPU programming
My first version did everything on CPU with pure Python + NumPy. It worked... technically. Then I tried editing a 48MP photo. Nearly 3 minutes to apply a single brightness adjustment. I watched the progress bar crawl. I alt-tabbed. I made coffee. I came back. Still processing. This was unacceptable. Photo editing needs to feel instant — you drag a slider, you see the result. Not "drag a slider, go make lunch." I profiled the code. The bottleneck was clear: Python's GIL + CPU-bound pixel operations = death by a thousand loops. Even with NumPy vectorization and Numba JIT compilation, I was hitting a wall. A 48MP image is 48 million pixels. Each pixel needs multiple operations for exposure, contrast, saturation... that's billions of calculations per adjustment. So I rewrote the entire rendering pipeline in OpenGL 3.3. Why OpenGL 3.3 specifically?
✅ Maximum compatibility — runs on integrated GPUs from 2012, no dedicated GPU required
✅ Cross-platform — same shaders work on Windows, macOS, Linux
✅ Sufficient power — for 2D image processing, I don't need Vulkan's complexity. As a student, I know many of us run old ThinkPads or budget laptops. I needed something that works on a 10-year-old machine with Intel HD Graphics, not just RTX 4090s. The result? That same 48MP photo now renders adjustments in under 16ms — 60fps
real-time preview. Drag a slider, see it instantly. The way it should be. **The shader pipeline** (a simplified version of the color grading shader):

    uniform float u_exposure;
    uniform float u_contrast;
    uniform float u_saturation;
    uniform mat3 u_perspectiveMatrix;

    void main() {
        vec4 color = texture(u_texture, transformedCoord);
        // Exposure (stops)
        color.rgb *= pow(2.0, u_exposure);
        // Contrast (pivot at 0.5)
        color.rgb = (color.rgb - 0.5) * u_contrast + 0.5;
        // Saturation (luminance-preserving)
        float luma = dot(color.rgb, vec3(0.299, 0.587, 0.114));
        color.rgb = mix(vec3(luma), color.rgb, u_saturation);
        gl_FragColor = color;
    }

All calculations happen on the GPU in parallel — millions of pixels processed simultaneously. The CPU just uploads uniforms and lets the GPU do what it's designed for.
# Non-destructive editing with real-time preview
The edit mode is fully non-destructive:
* **Light adjustments:** Brilliance, Exposure, Highlights, Shadows, Brightness, Contrast, Black Point
* **Color grading:** Saturation, Vibrance, White Balance
* **Black & White:** Intensity, Neutrals, Tone, Grain with artistic film presets
* **Perspective correction:** Vertical/horizontal keystoning, ±45° rotation
* **Black border prevention:** Geometric validation ensures no black pixels after transforms
All edits are stored in `.ipo` sidecar files. Your originals stay untouched forever.
**The math behind perspective correction:** I defined three coordinate systems: **Texture Space** — raw pixels from the source image; **Projected Space** — after the perspective matrix (where validation happens); **Screen Space** — for mouse interaction. The crop box must be fully contained within the transformed quadrilateral. I use `point_in_convex_polygon` checks to prevent any black borders before applying the crop.
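Here is a minimal sketch of that containment check, assuming the projected image is given as an ordered convex quadrilateral; the helper names mirror the description above, but the exact signatures are assumptions rather than the project's code.

```python
# Crop validation sketch: every crop corner must lie inside the convex
# quadrilateral produced by the perspective transform, otherwise the crop
# would expose black pixels.
def cross(o, a, b) -> float:
    """2D cross product of vectors OA and OB."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])


def point_in_convex_polygon(pt, poly) -> bool:
    """True if pt is inside (or on the edge of) a convex polygon given in order."""
    signs = [cross(poly[i], poly[(i + 1) % len(poly)], pt) for i in range(len(poly))]
    return all(s >= 0 for s in signs) or all(s <= 0 for s in signs)


def crop_is_valid(crop_corners, projected_quad) -> bool:
    """Reject crops that would reach outside the projected image."""
    return all(point_in_convex_polygon(c, projected_quad) for c in crop_corners)


# Example: a square crop fully inside an already-projected image quadrilateral
quad = [(0.0, 0.0), (1.0, 0.1), (0.9, 1.0), (0.05, 0.95)]
print(crop_is_valid([(0.2, 0.2), (0.8, 0.2), (0.8, 0.8), (0.2, 0.8)], quad))  # True
```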
# Map view with GPS clustering
Every photo with GPS metadata appears on an interactive map. I built a custom MapLibre-style vector tile renderer in PySide6/Qt6 — no web view, pure OpenGL. Tiles are cached locally. Reverse geocoding converts coordinates to human-readable locations ("Tokyo, Japan"). Perfect for reliving travel memories — see all photos from your trip plotted on an actual map.
# The architecture
Backend (Pure Python, no GUI dependency):
├── models/ → Album, LiveGroup data structures
├── io/ → Scanner, metadata extraction
├── core/ → Live Photo pairing, image filters (NumPy → Numba JIT fallback)
├── cache/ → index.jsonl, file locking
└── app.py → Facade coordinating everything
GUI (PySide6/Qt6):
├── facade.py → Qt signals/slots bridge to backend
├── services/ → Async tasks (scan, import, move)
├── controllers/→ MVC pattern
├── widgets/ → Edit panels, map view
└── gl_*/ → OpenGL renderers (image viewer, crop tool, perspective)
The backend is fully testable without any GUI. The GUI layer uses strict MVC — Controllers trigger actions, Models hold state, Widgets render. **Performance tier fallback:**
GPU (OpenGL 3.3) → NumPy vectorized → Numba JIT → Pure Python
(leftmost is preferred; each tier falls back to the next one on the right)
If your machine somehow doesn't support OpenGL 3.3, the app falls back to CPU processing. It'll be slow, but it'll work.
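A tier-selection routine in that spirit might look like the sketch below. The `moderngl` probe is just a stand-in for "OpenGL 3.3 is available", and the module names are assumptions for illustration, not the project's actual code.

```python
# Pick the best available processing tier, in the order listed above:
# GPU (OpenGL 3.3) -> NumPy -> Numba -> pure Python.
def pick_renderer() -> str:
    try:
        import moderngl                      # any OpenGL wrapper would do here
        moderngl.create_standalone_context(require=330)
        return "gpu"
    except Exception:
        pass
    for name in ("numpy", "numba"):          # vectorized CPU, then JIT CPU
        try:
            __import__(name)
            return name
        except ImportError:
            continue
    return "pure_python"                     # always available, just slow


print(pick_renderer())
```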
# Why I'm posting
I've been using this daily for 6 months with my 80,000+ photo library. It genuinely solved a problem that frustrated me for years. But I don't know if anyone else has this pain. Are there other iPhone users stuck on Windows who miss their Live Photos? Is "folder = album" a philosophy that resonates? Or am I solving a problem only I have? **The app is:**
* 🆓 Free and open source (MIT)
* 💾 100% local, no cloud, no account
* 🪟 Windows native (Linux support planned)
* ⚡ GPU-accelerated, but runs on old laptops too
* 📱 Built specifically for iPhone Live Photo support
GitHub: https://github.com/OliverZhaohaibin/iPhotos-LocalPhotoAlbumManager
Would love feedback on both the concept and execution. Roast my architecture. Tell me what's missing. Or just tell me if you've run into the same problem.
Built a small open source analytics tool for GitHub repos
I started Highfly (not open source atm), a project management tool geared towards devs. I also built a small analytics page for GitHub open source repos and figured others might find it useful too. It came out of some internal work I was doing around repo activity, and it felt simple enough to separate and share. It’s free, works on any public repo, and doesn’t require an account.
It shows things like:
* Reviewer activity
* Contributor activity
* First-time contributor patterns
* Issue creation trends
* Issue lifecycle health
* Backlog health
* PR review lag
Nothing crazy, but seemed cool to me.
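For a concrete sense of what a metric like PR review lag involves, here is a rough Python sketch against the public GitHub REST API. It is not the tool's implementation; it skips pagination, rate-limit handling, and the two-week caching mentioned below.

```python
# Median hours from PR creation to its first submitted review, using the
# documented GitHub REST endpoints. Rough sketch only.
import statistics
from datetime import datetime

import requests

API = "https://api.github.com"


def review_lag_hours(owner: str, repo: str, token: str | None = None) -> float:
    """Median review lag in hours over the 30 most recently closed PRs."""
    headers = {"Authorization": f"Bearer {token}"} if token else {}
    prs = requests.get(f"{API}/repos/{owner}/{repo}/pulls",
                       params={"state": "closed", "per_page": 30},
                       headers=headers, timeout=30).json()
    lags = []
    for pr in prs:
        reviews = requests.get(f"{API}/repos/{owner}/{repo}/pulls/{pr['number']}/reviews",
                               headers=headers, timeout=30).json()
        if reviews:
            opened = datetime.fromisoformat(pr["created_at"].replace("Z", "+00:00"))
            first = datetime.fromisoformat(reviews[0]["submitted_at"].replace("Z", "+00:00"))
            lags.append((first - opened).total_seconds() / 3600)
    return statistics.median(lags) if lags else float("nan")
```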
Here’s the link if you want to try it:
[github link](https://github.com/highfly-app/github-analytics)
[analytics page link](https://highfly.app/analytics?ref=reddit)
Example: [vercel/next.js repo](https://highfly.app/analytics/vercel/next.js?ref=reddit&timeRange=3months)
If you’ve got thoughts or ideas on more things to add, let me know.
Note: it takes a couple of minutes to collect all the data, which is then cached for 2 weeks. Not trying to hit GitHub's rate limits.
Please star it if you can
https://redd.it/1pez878
@r_opensource
OpenSCAD-type app for 2D graphic design?
Hi! Does anyone know a 2D graphic design application where you design by code, like OpenSCAD?
https://redd.it/1pf598o
@r_opensource
Creator of Ruby on Rails denounces OSI's definition of "open source"
https://x.com/dhh/status/1996643925126533282
https://redd.it/1pf6cc5
@r_opensource
GitHub - larswaechter/tokemon: A Node.js library for reading streamed JSON.
https://github.com/larswaechter/tokemon
https://redd.it/1pf6v2p
@r_opensource
CloudMeet - self-hosted Calendly alternative running on Cloudflare's free tier
Built a simple meeting scheduler because I didn't want to pay for Calendly.
It syncs with Google Calendar, handles availability, sends email confirmations/reminders, and runs entirely on Cloudflare's free tier (Pages + D1 + Workers).
Deployment is very easy - fork the repo, add your API keys as GitHub secrets, run the workflow. That's it.
Stack: SvelteKit, Cloudflare Pages, D1 (SQLite), Workers for cron.
Demo: https://meet.klappe.dev/cloudmeet
GitHub: https://github.com/dennisklappe/CloudMeet
MIT licensed. Happy to hear feedback or answer questions.
https://redd.it/1pfbc74
@r_opensource
I built an automated court scraper because finding a good lawyer shouldn't be a guessing game
Hey everyone,
I recently caught two cases, one criminal and one civil, and I realized how incredibly difficult it is for the average person to find a suitable lawyer for their specific situation. There are two ways the average person looks for a lawyer: a simple Google search based on SEO (Google doesn't know how to rank attorneys) or through connections, which is basically flying blind. Trying to navigate court systems to actually see a lawyer's track record is a nightmare; the portals are clunky, slow, and often require manual searching case by case, as if they were built by people who DON'T want you to use their system.
So, I built CourtScrapper to fix this.
It’s an open-source Python tool that automates extracting case information from the Dallas County Courts Portal (with plans to expand). It lets you essentially "background check" an attorney's actual case history to see what they’ve handled and how it went.
What My Project Does
Multi-lawyer Search: You can input a list of attorneys and it searches them all concurrently.
Deep Filtering: Filters by case type (e.g., Felony), charge keywords (e.g., "Assault", "Theft"), and date ranges.
Captcha Handling: Automatically handles the court’s captchas using 2Captcha (or manual input if you prefer).
Data Export: Dumps everything into clean Excel/CSV/JSON files so you can actually analyze the data.
Target Audience
The average person who is looking for a lawyer that makes sense for their particular situation
Comparison
Enterprise software with API connections to state courts, e.g. LexisNexis, Westlaw
The Tech Stack:
Python
Playwright (for browser automation/stealth)
Pandas (for data formatting)
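To give a flavor of how that stack fits together, here is a hedged sketch of the scrape-and-export loop with Playwright and pandas. The portal URL and CSS selectors below are placeholders, not the real Dallas portal markup, and captcha handling is omitted; see the Docs folder in the repo for the actual code flows.

```python
# Illustrative Playwright + pandas flow: search each attorney, collect case
# rows, and dump everything to Excel. Placeholders only, not the real scraper.
import pandas as pd
from playwright.sync_api import sync_playwright


def scrape_attorney(name: str) -> list[dict]:
    rows = []
    with sync_playwright() as p:
        page = p.chromium.launch(headless=True).new_page()
        page.goto("https://example-court-portal.invalid/search")  # placeholder URL
        page.fill("#attorney-name", name)                         # placeholder selector
        page.click("#search-button")                              # placeholder selector
        page.wait_for_selector(".case-row")                       # placeholder selector
        for row in page.query_selector_all(".case-row"):
            rows.append({"attorney": name, "case": row.inner_text()})
    return rows


cases = [c for name in ["Jane Doe", "John Roe"] for c in scrape_attorney(name)]
pd.DataFrame(cases).to_excel("cases.xlsx", index=False)
```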
My personal use case:
1. Gather a list of lawyers I found through google
2. Adjust the values in the config file to determine the cases to be scraped
3. Program generates the excel sheet with the relevant cases for the listed attorneys
4. I personally go through each case to determine if I should consider it for my particular situation. The analysis is as follows
1. Determine whether my case's prosecutor/opposing lawyer/judge is someone the lawyer has dealt with before
2. How recent are similar cases handled by the lawyer?
3. Is the nature of the case similar to my situation? If so, what is the result of the case?
4. Has the lawyer taken any similar cases to trial, or was every filtered case settled in pretrial?
5. After shortlisting lawyers, I can then go into each document in each of their cases to see exactly how they handled them, saving me a lot of time compared to blindly researching cases
Note:
Many people assume the program generates some form of win/loss ratio from the information gathered. It doesn't. It generates a list of relevant cases with their respective case details.
I have tried AI scrapers, and the problem with them is that they don't work well when a lot of clicking and typing is required.
Expanding to other court systems will require manual coding, which is tedious. So when I do expand to other courts, it will only make sense to cover the big cities, e.g. Houston, NYC, LA, SF, etc.
I'm running this program as a proof of concept for now, so it only covers Dallas.
I'll be working on a frontend so non-technical users can access the program easily; it will be free, with a donation portal to fund the hosting.
If you would like to contribute, I have very clear documentation on the various code flows in my repo under the Docs folder. Please read it before asking any questions
The same goes for technical questions: please read the documentation before asking.
I’d love for you guys to roast my code or give me some feedback. I’m looking to make this more robust and potentially support more counties.
Repo
Multi Agent Healthcare Assistant
As part of the Kaggle “5-Day Agents” program, I built an LLM-based Multi-Agent Healthcare Assistant — a compact but powerful project demonstrating how AI agents can work together to support medical decision workflows.
What it does:
- Uses multiple AI agents for symptom analysis, triage, medical Q&A, and report summarization
- Provides structured outputs and risk categories
- Built with Google ADK, Python, and a clean Streamlit UI
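As a generic illustration of the multi-agent routing idea (not Google ADK code), a dispatcher that hands each request to a specialist prompt might look like this; `call_llm` is a placeholder you would wire to your model of choice.

```python
# Generic multi-agent dispatch sketch: each "agent" is a prompt template plus
# a model call, and a router picks the right one. Illustration only.
AGENT_PROMPTS = {
    "symptoms": "Analyze these symptoms and list non-diagnostic considerations:\n{text}",
    "triage": "Assign a risk category (low/medium/high) with a short rationale:\n{text}",
    "qa": "Answer this medical question with general, cited guidance:\n{text}",
    "summary": "Summarize this medical report for a patient in plain language:\n{text}",
}


def call_llm(prompt: str) -> str:
    """Placeholder for the real model call (Gemini, local LLM, etc.)."""
    raise NotImplementedError


def route(task: str, text: str) -> str:
    """Dispatch a request to the matching specialist agent."""
    prompt = AGENT_PROMPTS[task].format(text=text)
    return call_llm(prompt)
```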
🔗 Project & Code:
Web Application: https://medsense-ai.streamlit.app/
Code: https://github.com/Arvindh99/Multi-Level-AI-Healthcare-Agent-Google-ADK
https://redd.it/1pfi881
@r_opensource
A fast lightweight similarity search engine built in Rust
https://ahnlich.dev
https://redd.it/1pfkymi
@r_opensource
Advice on Getting Started with Open Source Contributions ?
Hey,
I’ve been wanting to get into open source for a while, but I’m feeling stuck. I really want to improve my development skills and not rely on vibe coding too much. There’s so much info out there that it’s overwhelming. For someone totally new, what’s the easiest way to find a project that’s actually friendly to beginners?
Also, I’m nervous about accidentally breaking stuff or messing things up for others. I know maintainers review PRs, but how did you get over that fear when you first started? I want to be responsible and make sure my code works before submitting. How do you test your changes locally? What’s a good way to self-review so I’m confident I’m not wasting anyone’s time?
I’m decent with Git and GitHub and have been working as an intern for 7 months, so I’m not a complete newbie. Any advice, tips, or been-there-done-that stories would be great.
Thanks a lot!
https://redd.it/1pfmghg
@r_opensource