Opensource by Reddit – Telegram
Opensource by Reddit
20 subscribers
5 photos
2 videos
9.51K links
Reddit's ♨️ take on Open Source Technology.

Join the discussion ➡️ @opensource_chats

Channel Inquiries ➡️ @group_contacts_bot

👄 TIPS ➡️➡️➡️ https://news.1rj.ru/str/addlist/mB9fRZOHTUk5ZjZk

🌈 made possible by
@reddit2telegram
@r_channels
Gommitlint - A CLI tool for linting commits written in Go

My second Go project. I needed a CLI commit linter, and none of the available ones covered the functionality I wanted (Conform from Siderolabs came close). So here is my take. I would say it turned out pretty OK, even if there is still cleanup and polish to do before 1.0. I also built a GitHub Action and a Forgejo Action to go with it, and made an effort to follow good open-source practices. Read more here, where you'll also find the links: https://itiquette.codeberg.page/posts/gommitlint-release/ Will continue to polish it, of course! Cheers!
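For readers unfamiliar with commit linting, the core idea fits in a few lines. Below is an illustrative Conventional-Commits-style subject check in Python, purely as a sketch of what such a linter does; it is not Gommitlint's actual rule set (Gommitlint itself is written in Go):

```python
import re

# Illustrative only: a minimal Conventional Commits subject check.
SUBJECT_RE = re.compile(
    r"^(feat|fix|docs|style|refactor|perf|test|build|ci|chore)"
    r"(\([a-z0-9-]+\))?(!)?: .+"
)

def lint_subject(subject: str, max_len: int = 72) -> list[str]:
    """Return a list of problems found in a commit subject line."""
    problems = []
    if not SUBJECT_RE.match(subject):
        problems.append("subject does not follow type(scope): description")
    if len(subject) > max_len:
        problems.append(f"subject longer than {max_len} characters")
    if subject.rstrip().endswith("."):
        problems.append("subject ends with a period")
    return problems

print(lint_subject("feat(parser): add Forgejo action support"))  # []
print(lint_subject("Fixed stuff."))
```

A real linter adds rules for body line length, sign-off trailers, and so on, but they all follow this same check-and-report shape.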

https://redd.it/1qe8q3f
@r_opensource
[GPL-3.0] I built an open-source, bloat-free image viewer to replace Windows Photos. Written in C++23.

I am the developer of **QuickView**, a lightweight, open-source image viewer for Windows.



I'm posting today because I just released **v3.0**, which is a massive rewrite of the core engine. My goal was to create something significantly faster than the default Windows Photos app, specifically for handling heavy formats like RAW, PSD, and EXR.

**The Source Code (GPL-3.0):** [https://github.com/justnullname/QuickView](https://github.com/justnullname/QuickView)

**Why is it different?** It's not an Electron app wrapper. It's built with modern **C++23** and uses a **Direct2D** native rendering pipeline. We just released v3.0.5 with a new "Quantum Stream" architecture that separates UI and decoding threads, meaning the interface never freezes, even when loading huge 100MB+ RAW files.
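The thread-separation idea described above is a general pattern worth sketching: decoding runs on a worker thread, and the UI thread only enqueues requests and polls for results, so it never blocks. This is a minimal Python illustration of the pattern under those assumptions, not QuickView's actual C++23/Direct2D code:

```python
import queue
import threading
import time

# Two queues decouple the "UI thread" from the decoder worker.
decode_requests: "queue.Queue[str]" = queue.Queue()
decoded_images: "queue.Queue[tuple[str, bytes]]" = queue.Queue()

def decoder_worker() -> None:
    """Runs on a background thread: heavy decoding never blocks the UI."""
    while True:
        path = decode_requests.get()
        time.sleep(0.05)  # stand-in for an expensive RAW/PSD/EXR decode
        decoded_images.put((path, b"<pixels>"))

threading.Thread(target=decoder_worker, daemon=True).start()

# "UI thread": enqueue work, then collect the result when it is ready.
decode_requests.put("photo.raw")
path, pixels = decoded_images.get(timeout=2)
print(path, len(pixels))
```

The payoff is exactly the behavior claimed in the post: pan/zoom input keeps being processed while a large file decodes in the background.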

**Key Features:**

* **Instant Startup:** Opens almost instantly.
* 🎞️ **Format Beast:** Supports everything from standard JPG/PNG to modern **JXL/AVIF** and pro formats like **RAW/EXR/PSD**.
* 🎮 **144Hz+ Ready:** Pan and zoom are incredibly smooth, utilizing SIMD (AVX2) acceleration.
* 🛠️ **Geek Tools:** Includes a real-time RGB histogram and a "Photo Wall" overlay mode (press T).
* 🍃 **Portable & Free:** No installation needed, no ads, just one EXE.

It is completely **free, portable (single .exe), and has no ads/telemetry**.

I'd appreciate any feedback on the new rendering performance!

Download available on the GitHub Releases page.

https://redd.it/1qe9wtp
@r_opensource
Maintainers & contributors: How can I make my project docs clearer?

Hey everyone!

I’m maintaining Img2Num. It started as an app that turned images into color-by-number SVGs, but now it’s shifting focus to being a raster-to-SVG vectorization library.

I’ve written a bunch of docs, guides, and rules for contributors, but people still get confused or miss steps. I’d love some honest feedback on making the project easier to understand and contribute to.

Some things I’d like feedback on:

- Are the setup and usage instructions clear enough?

- Do the contributing guidelines make sense, especially around CI and formatting rules?

- Do the docs explain the project purpose and structure well now that the focus has shifted?

- Any general tips to make it more approachable for first-time contributors?

Repo link: https://github.com/Ryan-Millard/Img2Num

Thanks a ton for any suggestions!

https://redd.it/1qefdvl
@r_opensource
Open sourcing my research paper

I have submitted my research paper to IEEE Transactions on Signal Processing, and I want to make the paper openly available on arXiv. What are the steps to follow, and what should I take into consideration?

The IEEE submission is still under review: an Area Editor has been assigned, and successful manuscripts will be assigned to an Associate Editor.

Please provide some guidance, as this is the first time I am publishing a research paper.

https://redd.it/1qell0t
@r_opensource
I built an open-source job tracker to organize my job search

Job hunting is exhausting. Between crafting tailored resumes, tracking multiple applications, and remembering which stage each one is in, it's easy to lose track of everything. I experienced this firsthand during my own job search, and like any developer facing a problem, I decided to build a solution. I would love to hear your feedback!

🛠️ Stack: Next.js 16, Supabase, TypeScript, Tailwind

Live: jobapplytracker.com

GitHub: https://github.com/berkinduz/job-apply-tracker





https://redd.it/1qesakt
@r_opensource
Am I Cheating?

So, I'm running a smaller-sized open-source project on GitHub with around 1.2k stars (interestingly enough, it's neither a dev tool nor a library, but a super niche, consumer-facing educational tool that I host online).

Recently, I've had the idea of automatically generating "good first issues" for the repo to encourage growth and drive traffic to the project. The issues are so dead simple that anyone with 0 experience in our tech stack or even programming in general can come in, get them done in under a minute, open a PR and be done with it.

Lo and behold, the repo has gained 100+ new, one-and-done contributors and a corresponding number of stars and forks, to the point where I feel I'm cheating the system and GitHub's algorithm by doing this; the automatically created "good first issues" are monotonous and brain-dead at best, and even though their contents technically reach the end users, these issues/contributions provide no real, meaningful value beyond consistently and artificially inflating my repo's star/fork/contributor counts.

So, am I cheating? All feedback welcome.

https://redd.it/1qiuagx
@r_opensource
I made a documentary about Open Source in Ukraine and around the world

Hey all, I wanted to share with you a documentary I just published yesterday called "Gift Community: A Story About Open Source." I visited the Open Source community in Los Angeles, Denmark, India ... and, yes, Ukraine. I met legendary developers like Mitchell Hashimoto (HashiCorp, Terraform, Vault, etc., now Ghostty), Poul-Henning Kamp (FreeBSD, Varnish/Vinyl), and Kailash Nadh (Zerodha). Along the way, I slept in an air-raid shelter, flew in Mitchell's private jet, and ventured out into Bangalore traffic. In the doc I tried to weave it all together into a story about "the deeper meaning of Open Source." Let me know how I did. :-)

https://www.youtube.com/watch?v=tOn-L3tGKw0

https://redd.it/1qj9bnt
@r_opensource
Pedro Organiza: a deterministic, non-destructive, review-before-apply, local-first music library organizer I've been building for people with big, messy collections

Hi everyone,

For the past months I’ve been building a personal project called Pedro Organiza — a desktop tool to analyze, clean, and reorganize large music libraries in a safe, deterministic, review-before-apply way.

It started as a personal need: I have a very large, messy music collection with duplicates, inconsistent tags, broken albums, and many years of accumulated chaos. Existing tools were either too automatic, too destructive, or too opaque for my taste.

So I decided to build something with a very strict philosophy:

No silent destructive actions
No “magic” operations you can’t inspect
Always analyze → review → apply
Local-first: your music never leaves your machine
Deterministic behavior: same input, same result

# What Pedro can already do

Current core features:

Recursive scanning of large libraries (tens of thousands of files)
Metadata extraction using Mutagen
Fingerprinting and hashing of files
Intelligent alias normalization (artist/noscript/album variants)
Duplicate detection using:
File hashes
Metadata similarity
Fuzzy matching
Clustering of potential duplicates and aliases
Two-phase workflow:
1. Analyze & propose actions
2. Review in UI
3. Apply explicitly
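The duplicate-detection signals listed above (exact file hashes plus fuzzy metadata similarity) can be sketched roughly as follows. The field names, helper names, and threshold here are assumptions for illustration, not Pedro Organiza's actual implementation:

```python
import hashlib
from difflib import SequenceMatcher
from pathlib import Path

def file_hash(path: Path) -> str:
    """Exact-duplicate signal: hash of the raw file bytes."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def tag_similarity(a: dict, b: dict) -> float:
    """Metadata signal: fuzzy similarity of normalized artist + title."""
    left = f"{a.get('artist', '')} {a.get('title', '')}".lower()
    right = f"{b.get('artist', '')} {b.get('title', '')}".lower()
    return SequenceMatcher(None, left, right).ratio()

def likely_duplicates(a: dict, b: dict, threshold: float = 0.9) -> bool:
    """Flag a pair for review; in a review-before-apply tool this only
    proposes a cluster, it never deletes anything."""
    return tag_similarity(a, b) >= threshold

t1 = {"artist": "The Beatles", "title": "Let It Be"}
t2 = {"artist": "THE BEATLES", "title": "Let it be"}
print(likely_duplicates(t1, t2))  # True: same pair after normalization
```

In a real pipeline these two signals would be combined (identical hashes short-circuit to "exact duplicate"; fuzzy matches only form candidate clusters for the review step).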

The UI lets you:

Browse and search your entire library from a local SQLite DB
Edit tags individually or in bulk
Inspect duplicate clusters before touching anything
See exactly what will be changed before executing

Backend is Python (FastAPI + CLI tools), frontend is React.

# Design philosophy

Some principles I’ve been following very strictly:

No automatic deletions
No irreversible actions without review
Transparency over convenience
UI-first for non-technical users, but CLI still exists
Additive database schema (no forced rescans when schema evolves)

In short: Pedro is meant for people who care deeply about their music and don’t trust black boxes.

# What’s work-in-progress right now

Currently working on:

Polishing the startup / first-run UX
Improving performance with very large libraries (50k+ tracks)
Refining alias normalization and cluster quality
Better progress reporting and logging in the UI
Tag side-panel for faster metadata editing

# Planned next features

Some ideas already planned for future versions:

Background watcher for new files
Drag & drop support in the UI
Album art fetching and management
Export filtered views as playlists (.m3u, etc.)
Packaging for Windows / macOS / Linux (AppImage, .exe, .dmg)
Flatpak release

Longer term:

Plugin system for custom analyzers
Optional online metadata providers
Better visualization of library health

# Project status

Actively developed
Not “1.0” yet, but already usable
Open-source (license still being finalized)
Currently running on Linux, Windows support in progress

I’m not trying to build a commercial product — this is a serious long-term open-source tool for people with large, messy collections.

# Looking for

I’d really appreciate feedback from people who:

Have large music libraries
Have tried tools like beets, MusicBrainz Picard, MediaMonkey, etc.
Care about safe workflows

Questions I’m particularly interested in:

What’s your biggest pain point organizing music?
What features do you miss in existing tools?
Would you prefer more automation or more control?

If there’s interest, I’m happy to share screenshots, design notes, and such once the next milestone is published.

You can check it out (work in development, so expect regular updates):
https://github.com/crevilla2050/pedro-organiza/

Thanks for reading — and thanks in advance for any feedback.

https://redd.it/1qje7mb
@r_opensource
Repo Fork Etiquette Question

(To preface this I am relatively new to open source and only have one other slightly-used project on GitHub.)

I made a feature request on a repository asking to add a piracy aspect to a self-hosted music service. The author replied that it was out of scope, difficult to implement, and that they were fundamentally against the idea, then closed the issue as unplanned. So I made a fork and implemented it myself.

My fork is now approaching similar visibility to the original repo, and the author has since reopened the original issue, done a complete 180 on their stance, and said that after seeing my fork they think it would be a good idea to implement and they are going to begin working on it.

Am I wrong to be annoyed by this? I've told the author that I think it would be a good idea to keep the original/fork separated due to one using piracy and one not using it, but they remained adamant that they wanted to take my idea and implement it in their repo. To me, this seems like they just want to remove the need/viability of my fork after seeing it growing in popularity.

https://redd.it/1qj9k0a
@r_opensource
I open-sourced a single file of less than 30 lines to help you write structured git commit messages
https://redd.it/1qjngly
@r_opensource
I built a GPS tracker that sends data directly to YOUR server (no cloud)

Hey r/opensource,

I got tired of GPS apps that force you through their cloud or make you run complex middleware just to log locations to your own database.

So I built Colota - it's dead simple:

Point it at your server (any HTTPS endpoint)
It POSTs JSON with your GPS coords
Done.
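A receiving endpoint really can be a few lines. Here is a hypothetical sketch using Python's standard library; the JSON field names are assumptions, not Colota's exact payload schema (per the post, the app lets you rename fields to match your own API):

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def parse_fix(body: bytes) -> tuple[float, float]:
    """Extract coordinates from one POSTed location payload."""
    point = json.loads(body)
    return point["latitude"], point["longitude"]

class LocationHandler(BaseHTTPRequestHandler):
    def do_POST(self) -> None:
        length = int(self.headers.get("Content-Length", 0))
        lat, lon = parse_fix(self.rfile.read(length))
        # Here you might run: INSERT INTO locations (lat, lon) VALUES (...)
        print(f"got fix: {lat}, {lon}")
        self.send_response(200)
        self.end_headers()

def serve(port: int = 8000) -> None:
    """Blocks forever, handling one location POST at a time."""
    HTTPServer(("", port), LocationHandler).serve_forever()
```

In production you would want HTTPS termination (e.g. behind a reverse proxy) and some authentication on the endpoint, since it accepts your location history.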

Why you might care:

Works offline-first - saves to local SQLite, syncs when it can
Custom JSON fields - your API wants `latitude` instead of `lat`? Just rename it in settings
Geofences - auto-pause tracking at home/work (privacy + battery)
No cloud/telemetry/tracking - your data stays on your device or YOUR server
Open source (Github)

Example use cases:

Live map on your personal website
Simple INSERT INTO locations to PostgreSQL
Home Assistant webhook
Literally any server that accepts POST requests

Current integrations that work:

Dawarich (works great out of the box)
OwnTracks Recorder
Home Assistant
Custom backends (just needs to accept JSON)

Features in roadmap:

Smart geofence management: visual geofence editor (drag to resize) and statistics ("You spent 8 hours at work today")
Location history trail with date filter (see your movement paths over time)
Statistics dashboard: distance traveled (daily/weekly/monthly) and most visited locations

I need 12 beta testers for Google Play requirements (14 days)

If you have:

Android phone
Your own server (or want to test offline mode)
5 minutes to install and give feedback

Join the Google Group colota-beta-testing@googlegroups.com and then you can download the beta version at https://play.google.com/apps/testing/com.Colota

FAQ:

Q: Does it drain battery?
A: ~5-10% per hour with optimizations. Silent zones help a lot.

Q: What's the difference vs OwnTracks?
A: Persistent SQLite (OwnTracks uses memory), better retry logic, built-in export, no server software required.

Q: Do I NEED a server?
A: Nope. It works 100% offline; the server is optional. You can export data from the app and use it in, e.g., QGIS.

Q: What data does it send?
A: Only GPS coords to YOUR endpoint. Zero telemetry.

Free forever. No ads. Open source.

https://redd.it/1qiwua1
@r_opensource
I made a visual grid that shows your subscriptions sized by how much they actually cost you

Hey everyone! I built a simple tool that turns my subscriptions into a proportional treemap - bigger box = bigger monthly spend.

Seeing it visually was honestly a bit confronting. I knew streaming services cost money, but I didn't realize they made up quite so much of my total subscription spend until I saw them as massive boxes. Made it pretty easy to decide what to cut first.

What it does:

Shows all your subscriptions as proportional boxes
Instantly highlights which services dominate your budget
Useful for deciding what's actually worth keeping vs what to cancel

Privacy-focused:

No signup required
100% free (personal project, I make nothing from this)
All data stays in your browser - nothing sent anywhere

Try it here: visualize.nguyenvu.dev
Source code: hoangvu12/subgrid

Would love feedback: is this actually useful, or am I the only one who needed to see it visually to take action? Open to suggestions on what would make it better.

https://redd.it/1qjqitk
@r_opensource
MIT License Question

Just to clarify if I'm understanding it right, can I use an MIT License open source software (without modifying its source code) and integrate or embed it on my own project?

I will also distribute it.

https://redd.it/1qjtzeq
@r_opensource
Anyone else frustrated with internal technical documentation? KT sessions, READMEs, Lucid charts, etc. are all painful.

## Internal documentation tends to rot — why is that?

Most teams end up with internal documentation spread across a few familiar places:

- Knowledge-transfer sessions that go stale within months
- README files that never quite get filled in
- Multiple diagrams of the same system, each slightly different
- Notes in tools like Notion with code copied long ago
- Shared docs that are hard to maintain and rarely revisited

In practice, internal documentation is often outdated. When it isn’t, it’s usually fragmented or painful to work with.

That doesn’t seem like a motivation problem. It feels more structural. Documentation lives across too many tools, and keeping everything aligned with the code requires constant context switching — something most engineers actively avoid.

---

## Thinking about alternatives

A lot of documentation tooling works well for public or external docs, but internal technical documentation has different needs. It’s closer to the codebase, changes more frequently, and often needs diagrams, snippets, and architecture notes to stay useful.

One approach I’ve been exploring is keeping documentation directly alongside the repository and consolidating common internal-doc formats into a single editor. The idea is to reduce context switching and make it harder for docs to drift away from the source of truth.

That raises a few open questions:

- Does keeping docs in the repo actually change how often they’re updated?
- Are inline ERDs and architecture diagrams useful in practice, or do teams still export them elsewhere?
- Is “documentation drift” something teams would want tooling to actively detect?

---

## Curious how others handle this

How does your team handle internal technical documentation today?

- Where does it tend to break down?
- What tools do you rely on?
- What would make documentation easier to keep accurate?

For anyone interested, I’ve been experimenting with an open-source tool around this idea here:

https://doclific.com / https://github.com/muellerluke/doclific

Let me know if you have any feedback!

https://redd.it/1qjvd4z
@r_opensource
Data Modeling Tool

Are there any tools on the market today that use open-source code for data modeling?

https://redd.it/1qjv8r4
@r_opensource
Separate licenses for assets?

I'm working on an open-source game engine project where in addition to the engine, there will be community-contributed assets including models, music, and levels. I want to better protect people's work so they'll be more comfortable contributing it, so I'd like the assets to be under separate licenses to prevent them from being used commercially without the creator's permission. What is the best way to go about doing that?

https://redd.it/1qjz2z8
@r_opensource
Speak up for more funding for Open Source from the EU!

The European Commission has just [launched a consultation on the EU's future Open Source strategy](https://ec.europa.eu/info/law/better-regulation/have-your-say/initiatives/16213-European-Open-Digital-Ecosystems_en). The deadline is the **3rd of February**.

The Strategy will cover:

* EU stance towards deploying Open Source software in the EU institutions.
* EU stance towards procurement of Open Source solutions.
* The EU's own Open Source contributions.
* Funding for Open Source developers, companies, and foundations.
* Helping Open Source foundations to get established in Europe.
* Implementation of EU laws like the CRA.
* Future consultation of the Open Source community in lawmaking.

The EU has funded loads of Open Source projects (like Mastodon) through its Next-Generation Internet initiative. (You can see a full list [here](https://nlnet.nl/project/) (all the ones with NGI))

That initiative is currently up for review.

**If you want more funding from the EU for Open Source, now is the time to speak up!**

https://redd.it/1qk1lc1
@r_opensource
Open sourcing a UI engine that replaces component trees with semantic intent

I'm opening up a project I've been working on called **Ripple**.

**The concept:**
Most AI UI generation today relies on LLMs knowing specific component libraries ("render Card with props X"). I wanted to build something more abstract: an engine where the AI just declares *intent* ("show flight options") and the engine handles the rendering implementation.
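As I read the concept, it amounts to a registry that maps semantic intents to renderers, so the model only ever emits an intent plus data and never names a concrete component. Here is a toy Python sketch of that idea; the names and API are purely illustrative, not Ripple's actual interface (which is Svelte-based):

```python
# Registry: intent name -> rendering function chosen by the engine.
RENDERERS = {}

def renders(intent: str):
    """Decorator registering a renderer for a semantic intent."""
    def register(fn):
        RENDERERS[intent] = fn
        return fn
    return register

@renders("show_options")
def render_options(payload: dict) -> str:
    items = "".join(f"<li>{o}</li>" for o in payload["options"])
    return f"<ul>{items}</ul>"

def render(intent: str, payload: dict) -> str:
    # The AI never picks a component; the engine owns the implementation.
    return RENDERERS[intent](payload)

print(render("show_options", {"options": ["LHR-JFK", "LHR-EWR"]}))
# -> <ul><li>LHR-JFK</li><li>LHR-EWR</li></ul>
```

The design benefit is that renderers can be swapped per framework or theme without retraining or re-prompting the model, since the contract is the intent, not the component tree.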

**Status:**

* Core engine (Svelte) is live in the demo.
* Docs and specs are up on GitHub.
* Full code release is scheduled for Q2.

**Demo (FREE)** inside
**GitHub:** [github.com/interacly/ripple](http://github.com/interacly/ripple) (we are cleaning up the code and building a standalone core to support more frontend frameworks)

Would love to hear your thoughts on this!

https://redd.it/1qk2v36
@r_opensource