Opensource by Reddit
Reddit's ♨️ take on Open Source Technology.

Join the discussion ➡️ @opensource_chats

Channel Inquiries ➡️ @group_contacts_bot

👄 TIPS ➡️➡️➡️ https://news.1rj.ru/str/addlist/mB9fRZOHTUk5ZjZk

🌈 made possible by
@reddit2telegram
@r_channels
Need advice on maintaining a healthy open-source community (not the code side)

I’m maintaining an open-source project (Img2Num, a browser-based image-to-colour-by-number tool built with React and C++ compiled to WebAssembly), and I’m trying to be intentional about the community and maintenance side, not just shipping features.

I’d love advice, resources, or hard-earned lessons around things like:
- Contributor onboarding (what actually works vs. noise), e.g., good docs, good first issues, or other important things
- Issue & PR management without burning out. I find it tough to keep track of everything the project needs to get done since it's still quite new.
- Setting contribution norms and boundaries
- Roadmaps:
  - How detailed should they be?
  - Where should they live (README, GitHub Projects, docs, elsewhere)?
- Releases:
  - Release early/often vs. fewer “stable” releases
  - Communicating breaking changes
- Community spaces:
  - When (if ever) does Discord/Slack make sense?
  - Signs it’s too early or not worth the overhead
- Social media:
  - Useful for OSS communities or mostly just a distraction? If yes, what should actually be shared?
- Long-term sustainability:
  - Avoiding maintainer burnout
  - Keeping expectations realistic as the project grows

If you’ve maintained or helped grow an open-source project (especially a small or mid-sized one), what do you wish you’d known earlier?

Any resources (such as blogs, talks, books, examples, or just candid experience) are all very welcome. I just want to learn whatever I can before it's too late.

Thanks for getting this far! I’m specifically trying to learn how to do this well rather than accidentally harming the community. Any help would be amazing.

https://redd.it/1ps4czt
@r_opensource
Can I use AGPL for my project but also use MIT for some parts of the code?

I wrote a project with Kotlin Multiplatform, which compiles to JVM, Android, iOS, and web. Because of the web part I want to use the AGPL.

But there might be parts of the code that are interesting for others to use (smaller solutions). Can I set up another license for that part or would it be confusing or a legal problem?

Maybe it would be easier to copy these parts to another project and put that under MIT license.

There are no other contributors so far.

I just want to prevent anybody from taking the code, making it proprietary, and making money off it.

https://redd.it/1ps896r
@r_opensource
Built ReSearch: a self-hosted search engine with its own crawler, aimed at killing AI slop. Uses Go with a KVRocks and Quickwit backend. Focuses on crawling and searching a configurable list of curated domains.

I saw a YouTube video from Kurzgesagt titled "AI Slop Is Destroying The Internet". It describes how the internet is getting filled with AI-generated slop and how existing LLMs treat misinformation and inaccurate AI slop as verified sources and confidently hallucinate. A thought struck me: what if, instead of crawling the entire internet, we had a search engine that crawls only a curated list of domains? The internet is filled with social media, porn, SEO junk, and AI slop, so by doing this we can create a mini internet of high-value results: less noise, higher quality.
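
As a rough illustration of the curated-domain idea (this is a Python sketch, not ReSearch's actual Go code; the domain allowlist and seed URLs below are made up), a crawler that refuses to leave the curated list can be as small as this:

```python
import urllib.request
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse

ALLOWED = {"docs.python.org", "arxiv.org"}   # hypothetical curated domain list
SEEDS = ["https://docs.python.org/3/"]       # hypothetical seed URLs

class LinkParser(HTMLParser):
    """Collects href attributes from <a> tags."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(seeds, limit=100):
    seen, queue, pages = set(), list(seeds), []
    while queue and len(pages) < limit:
        url = queue.pop(0)
        if url in seen or urlparse(url).hostname not in ALLOWED:
            continue  # anything outside the curated list is never fetched
        seen.add(url)
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                html = resp.read().decode("utf-8", errors="replace")
        except OSError:
            continue  # unreachable page: skip it
        pages.append({"url": url, "body": html})
        parser = LinkParser()
        parser.feed(html)
        queue.extend(urljoin(url, link) for link in parser.links)
    return pages

if __name__ == "__main__":
    for page in crawl(SEEDS, limit=5):
        print(page["url"], len(page["body"]), "bytes")
```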

The primary target clients were AI and LLM companies. I could run multiple clusters, with each cluster focused on a particular topic: research papers (Google Scholar), code documentation (for code-generation LLMs), one for the dark web, one targeting cybersecurity sites, etc.

But then I thought it would be a failed business, so I planned to make it open source xD

I planned and implemented it to handle 50M+ search results. There are some bottlenecks, and you can definitely raise that limit by fixing them. The code is optimised, efficient, and functional, but I probably won't be maintaining it.

It is built with scalability and a distributed architecture in mind. KVRocks and Quickwit are both extremely scalable, and you can run multiple crawling engines in parallel writing to the same DB. I didn't get to test it to the extremes, but I worked with 20 domains that weren't blocking me from scraping (am I going to jail?), and the maximum run scraped about 200k records. Search results were pretty fast, since Quickwit uses an inverted index, so it stays fast despite the scale.
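
If I have the Quickwit REST API right, ingestion is just newline-delimited JSON POSTed to an index endpoint, which is what makes running several crawler workers against the same backend straightforward. A minimal Python sketch (the port, index name, and document fields here are assumptions, not ReSearch's actual schema):

```python
import json
import urllib.request

QUICKWIT_URL = "http://localhost:7280"   # Quickwit's default REST port (assumption)
INDEX = "research-pages"                 # hypothetical index name

def ingest(pages):
    """Push crawled pages to Quickwit as newline-delimited JSON.

    Each crawler worker can call this on its own; they all share the
    same ingest endpoint, so parallel crawling just adds up.
    """
    ndjson = "\n".join(
        json.dumps({"url": p["url"], "body": p["body"]}) for p in pages
    )
    req = urllib.request.Request(
        f"{QUICKWIT_URL}/api/v1/{INDEX}/ingest",
        data=ndjson.encode("utf-8"),
        headers={"Content-Type": "application/x-ndjson"},
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        return json.loads(resp.read())

# Usage, building on the crawl() sketch above:
#   ingest(crawl(SEEDS, limit=100))
```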

Also, the sitemap-following logic still needs work, and I had plans to add detection of AI-generated content so those sites could be skipped during indexing.

I would appreciate any review of the architecture, code quality, or scalability; feel free to reach out for anything :")

Tech stack: Go (I wanted to use Rust, but I hadn't worked with Rust before and it felt like too much to trade for a slight performance gain).

Quickwit: powerful, efficient, fast, Rust-based (why not OpenSearch? No budget for RAM in this economy, and I definitely hate the Java stack).

I did use AI here and there to improve my efficiency and reduce manual work, but it's not entirely built on vibes.

You can deploy it on your local machine if you wanna run your own search engine.

github link: https://github.com/apigate-in/ReSearch

https://redd.it/1psalit
@r_opensource
Looking for Contributors – Modern C++ CLI Calculator (OSS)

Hi! 👋
I’m maintaining an open-source **C++ CLI Calculator** and I’m looking for contributors.

**Project highlights:**

* Modern C++ (CMake, clean architecture)
* REPL mode, expression parsing
* Matrices, prime factorization, base conversion
* Batch scripting support
* CI/CD and multi-platform packaging (Snap, DEB)

**Looking for help with:**

* Core C++ features and refactors
* Performance and parsing improvements
* Tests and CI
* Documentation and CLI UX polish

Beginner-friendly issues are available, and code reviews are provided.
If you enjoy clean C++ and well-structured CLI tools, you’ll feel at home.

👉 GitHub: https://github.com/Benedek553/cli-calculator

PRs, issues, and discussions are welcome.

https://redd.it/1ps8dct
@r_opensource
Anyone know of any (free) open-source git repository sites like GitHub/GitLab?

Like with (near) complete privacy (as in no data shared and no data visible to Microsoft, for example), and being completely open source and free. (Hopefully free, but if it's completely open source and private, I'm willing to pay some money to use it.)


Edit: I also mean FOSS code repositories, not just git.

https://redd.it/1pshict
@r_opensource
Why is open-source maintenance so hard?💔

Good after-breakfast

I feel like I'm jumping through hoops just to marvel at my own reflection.

I’ve been working on an open source project recently, and it's just so hard to keep it maintained and release new features consistently. Even with contributors and users who seem interested, there’s always this constant pressure: fixing bugs, reviewing PRs, updating dependencies, handling feature requests, and keeping documentation up to date, which I initially neglected and am now burdened by - nobody wants to help with that either, and I don't blame them. :(

I’ve noticed that contributors sometimes drop off, issues pile up, and maintaining consistency becomes overwhelming. It makes me wonder: is this just the nature of open source, or are there strategies that successful projects use to make maintenance sustainable? When I make posts on places like Reddit, people just respond with acidic comments, and it takes all of the joy out of OSS for me.

I want to hear from you.

What are the biggest challenges you face in maintaining an open source project?

How do you manage your community's expectations while keeping your sanity?

Are there tools, workflows, or approaches that make maintenance easier? I've tried things like CodeRabbit after someone recommended it to me, but now I'm considered a script kiddie for using half a second of AI per week.

I simply want to understand why it's so hard and what can be done to survive in the long term.
Thanks in advance for your thoughts!

https://redd.it/1psif9g
@r_opensource
I built a "Serverless" Tamagotchi that lives in a GitHub repo (MIT License)

Hi r/opensource,

I'm sharing **ForkMonkey**, an experimental project I released this week.

It's a digital pet that lives entirely inside a GitHub repository. The goal was to build a self-sustaining application using **zero external infrastructure**, relying only on the GitHub ecosystem (Actions, Pages, Git history).

**Repository:** [https://github.com/roeiba/forkMonkey](https://github.com/roeiba/forkMonkey) **License:** MIT

# The "Not Open Source" Elephant in the Room 🐘

I posted here yesterday and got some heat because the default configuration uses GPT-4o (via GitHub Models) for the evolution logic. **That's a fair point.**

I want to clarify: **The engine is 100% open source and model-agnostic.**

The core logic (`src/evolution.py`) uses an abstract `AIProvider` interface.

* **Default:** Uses GitHub Models (free tier) for zero-setup accessibility.
* **Open Models:** You can switch the underlying model to **Llama 3** (or any other model hosted on GitHub Models) just by setting an environment variable (`GITHUB_MODEL=Meta-Llama-3-70B-Instruct`).
* **Bring Your Own:** It also supports Anthropic natively, and because it's standard Python, plugging in a local Ollama endpoint or HuggingFace inference is a trivial PR away.
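
As a rough sketch of what a model-agnostic provider layer like this tends to look like (this is not the actual `src/evolution.py` code; the class and method names below are hypothetical), in Python:

```python
import os
from abc import ABC, abstractmethod

class AIProvider(ABC):
    """Anything that can turn an evolution prompt into new pet DNA."""

    @abstractmethod
    def complete(self, prompt: str) -> str: ...

class GitHubModelsProvider(AIProvider):
    """Default path: the GitHub Models free tier, model chosen via env var."""

    def __init__(self, model: str):
        self.model = model

    def complete(self, prompt: str) -> str:
        raise NotImplementedError("call the GitHub Models endpoint here")

class AnthropicProvider(AIProvider):
    def complete(self, prompt: str) -> str:
        raise NotImplementedError("call the Anthropic API here")

def make_provider() -> AIProvider:
    # Swapping models is just an environment variable, e.g.
    #   GITHUB_MODEL=Meta-Llama-3-70B-Instruct
    # A local Ollama or HuggingFace provider would be one more subclass.
    if os.environ.get("ANTHROPIC_API_KEY"):
        return AnthropicProvider()
    return GitHubModelsProvider(os.environ.get("GITHUB_MODEL", "gpt-4o"))
```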

# How it works

1. **Repo as DB:** The state of your pet is a JSON file. Every "evolution" is a Git commit. The history of your pet is literally the `git log`.
2. **Actions as Backend:** A cron workflow runs daily, invokes the Python engine, calculates changes, and commits them back.
3. **Forking as functionality:** This is the mechanic I'm most proud of. "Adopting" a monkey means **Forking** the repo. This isn't just a copy; the setup script reads the *upstream* parent's DNA and mutates it. This creates a traceable genealogical tree across the entire GitHub network.
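
To make the "Repo as DB" idea concrete, the daily job reduces to something like the Python sketch below (the file name and DNA fields are hypothetical, not the project's actual schema); the Actions workflow then commits the file back, so every run becomes one entry in `git log`:

```python
import json
import random
from pathlib import Path

STATE_FILE = Path("pet.json")   # hypothetical state file name

def evolve_once():
    """Read the pet's state, apply one evolution step, write it back."""
    pet = json.loads(STATE_FILE.read_text()) if STATE_FILE.exists() else {}
    pet["generation"] = pet.get("generation", 0) + 1
    pet["mood"] = random.choice(["sleepy", "curious", "mischievous"])
    STATE_FILE.write_text(json.dumps(pet, indent=2) + "\n")

if __name__ == "__main__":
    evolve_once()
    # The surrounding GitHub Actions cron workflow then runs roughly:
    #   git add pet.json && git commit -m "evolve" && git push
```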

I'd love for this community to play with the **"Git as a Database"** concepts or help add providers for fully local/open-weight model execution.

Feedback and PRs welcome!

https://redd.it/1psk4pe
@r_opensource
XMRetail POS @SlowBearDigger

Hey yall Monero frens!

Just dropped https://xmretail.store

it's a zero knowledge POS system for sellers rockin Monero.

https://github.com/SlowBearDigger/xmretail-pos (OPEN SOURCE, MIT license - fork it, mod it, whatever ya want!)

All opensource, no servers spyin on ya biz, data stays encrypted in ya browser with AES GCM 256 bit. Watch only security using your private view key to scan the blockchain for sales, no spend key needed so funds stay safe in your wallet. Direct P2P to any node, no fees no middlemen, full privacy straight from the Monero protocol.
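
For anyone wondering what "AES GCM 256 bit" browser-side encryption amounts to conceptually, here is a rough Python equivalent using the third-party cryptography package (XMRetail itself uses the browser's Web Crypto API, so this is only an illustration, and the record contents are made up):

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_record(key: bytes, plaintext: bytes) -> bytes:
    """AES-256-GCM with a fresh 12-byte nonce per record, prepended to the ciphertext."""
    nonce = os.urandom(12)
    return nonce + AESGCM(key).encrypt(nonce, plaintext, None)

def decrypt_record(key: bytes, blob: bytes) -> bytes:
    nonce, ciphertext = blob[:12], blob[12:]
    return AESGCM(key).decrypt(nonce, ciphertext, None)

if __name__ == "__main__":
    key = AESGCM.generate_key(bit_length=256)   # 256-bit key that never leaves the client
    blob = encrypt_record(key, b'{"item": "coffee", "xmr": "0.05"}')
    print(decrypt_record(key, blob))
```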

Features: fast sync with monero-ts WASM, fresh subaddresses per tx (it will generate after your first payment, or if you already have sub addresses verify that they match), custom QR codes to match your brand, inventory tracking with stock and variations, slick industrial design for busy spots.

Built with React Vite TS Tailwind n Web Crypto API (Yes, I used AI to do this faster, you can always check the code and help it grow). Quick start: clone repo, npm install, npm run dev.

It's in testin phase so PLEASE try it out and lemme know if it works for you or not - bugs, ideas, whatever! Comment below with feedback, the more the better it gets. If you got the skills, audit the code plz, it's all there on GitHub for you to poke at.

Help the XMR community grow!!

My X @SlowBearDigger

https://redd.it/1psluo3
@r_opensource
Free language translation package, 15 languages

Published my first NPM package a little while ago and wanted to share. I was working for an ed-tech startup and found a concerning lack of accessible translation APIs at scale, despite the information being out there via Wiktionary. Using Wiktionary HTML dumps, I was able to parse out the information needed for most use cases.

Features:

* automatic accent correction
* verb form detection and base verb translation
* returns word type (adjective, noun etc.)
* requires one of the two languages to be English, but translates between it and 14 other languages ranging from Spanish to Chinese
* roman and character based translation for character languages

Would love some feedback and to see what else would be helpful to add. Please feel free to contribute directly as well! Hope this makes life a little easier for anyone building language-based apps who doesn't have the budget for super expensive APIs.

[https://github.com/akassa01/wikiglot](https://github.com/akassa01/wikiglot)

[https://www.npmjs.com/package/wikiglot](https://www.npmjs.com/package/wikiglot)

https://redd.it/1psook4
@r_opensource
Case Management software

Hi there, I work in HR for a large NHS Trust and they won’t pay for my team to have casework management software. I have AuADHD and really struggle with the sheer volume of work that comes in, and I know that a management system could help me track all my cases and tasks, plus keep a knowledge base, in one place.

I could try to build something using PowerApps and SharePoint; however, if there's something decent out there I’d love to try it out. Thank you ☺️

https://redd.it/1psu2sv
@r_opensource
Made a free is-an-ai.dev subdomain service!

Hey r/opensource,

I recently built a small open-source project that gives people (preferably developers) a free subdomain on the is-an-ai.dev domain.

We have a PR-based workflow: submit a PR to our GitHub repo, maintainers will review and approve it, and your DNS records will be added.

Links:
Website: https://www.is-an-ai.dev

Docs: https://docs.is-an-ai.dev

GitHub org: https://github.com/is-an-ai-dev

Register: https://github.com/is-an-ai-dev/register

If you think it’s useful, feel free to try it out or contribute.
If not, I’d still appreciate your opinion on the service and any advice on how I can make this free service better known!

Thanks in anticipation!

https://redd.it/1pstzuo
@r_opensource
lagident - A tool to find poor quality network connections

Hi community,

I have finally published a project that was sleeping on my disk for 11 months: lagident.

The idea is to run lagident on one (or better, multiple) computers in your network to identify weak and poor-quality connections. By taking measurements from multiple points, it is easier to tell whether you are dealing with a bad network card, a broken switch, or a faulty router.
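
Lagident does the measuring for you; purely to illustrate the kind of data point involved (this is not lagident's code, and the targets below are placeholders), a minimal Python probe that records TCP connect latency from wherever it runs might look like this:

```python
import socket
import time

def connect_latency(host: str, port: int = 443, timeout: float = 2.0) -> float | None:
    """Return the TCP connect time to host:port in milliseconds, or None on failure."""
    start = time.perf_counter()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return (time.perf_counter() - start) * 1000
    except OSError:
        return None  # count failures as lost samples

if __name__ == "__main__":
    # Run the same probe from several machines (desktop, laptop, Raspberry Pi)
    # against the same targets to see which network segment adds the latency.
    for target in ["192.168.1.1", "example.com"]:
        print(target, connect_latency(target))
```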

In my case I had issues while online gaming with my desktop PC, but I wasn't sure about the root cause. So I created lagident to find the issue in my network (it was a bad driver for my network card).

Today I have all my network devices monitored by lagident. For example, if I move my router, I can see whether this decreases the Wi-Fi quality for my smart TV.


Please see the GitHub repo for screenshots.

https://github.com/nook24/lagident


Happy holidays!

https://redd.it/1psv6uk
@r_opensource
Ephemera. An open-source, self-hosted SSH Certificate Authority built on native OpenSSH

Hey everyone,

I’ve been working on a self-hosted project exploring a different approach to SSH access and sudo control, without cloud dependencies or SSH proxies.

Ephemera is an air-gap friendly SSH Certificate Authority built entirely on native OpenSSH and PAM primitives. The core idea is to eliminate long-lived trust rather than rotate it.

Repo:
[https://github.com/Qarait/ephemera](https://github.com/Qarait/ephemera)

Documentation/architecture notes:

[https://qarait.github.io/ephemera/index.html](https://qarait.github.io/ephemera/index.html)

Documentation includes a published threat model and security disclosure policy.

At a high level, Ephemera:

* Replaces static SSH keys with short-lived certificates (minutes)
* Requires WebAuthn hardware-backed presence for certificate issuance
* Implements just-in-time sudo: privileged commands pause until explicitly approved
* Uses policy-driven RBAC (OIDC groups, IP ranges, time windows)
* Produces tamper-evident, hash-chained audit logs
* Supports encrypted, sovereign recovery via Shamir secret sharing
* Runs fully self-hosted, Dockerized, and air-gap capable
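
For readers who haven't used OpenSSH certificates: the short-lived certificate bullet above builds on plain `ssh-keygen` signing. As a minimal illustration only (not Ephemera's actual issuance path; the paths below are made up), the issuance step in Python comes down to:

```python
import subprocess

def issue_cert(ca_key: str, user_pubkey: str, principal: str, minutes: int = 5) -> None:
    """Sign user_pubkey with the CA key, valid for only a few minutes.

    ssh-keygen writes the certificate next to the input key as <name>-cert.pub.
    """
    subprocess.run(
        [
            "ssh-keygen",
            "-s", ca_key,              # CA private key used for signing
            "-I", f"{principal}-cert", # certificate identity (appears in sshd logs)
            "-n", principal,           # principals the certificate is valid for
            "-V", f"+{minutes}m",      # validity window: now until now + N minutes
            user_pubkey,
        ],
        check=True,
    )

if __name__ == "__main__":
    # Hypothetical paths; sshd must trust the CA via TrustedUserCAKeys.
    issue_cert("/etc/ephemera/ca", "/home/alice/.ssh/id_ed25519.pub", "alice")
```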

Explicit non-goals (intentional design choices):

1) No MITM SSH proxy: direct OpenSSH connections only, no traffic interception layer.

2) No custom SSH protocol: relies exclusively on upstream OpenSSH semantics.

3) No always-on root access: all privilege escalation is time-bound and explicitly approved.

4) Prefer native OpenSSH and PAM over agents or sidecars.

This is not a commercial project. I’m looking for architecture and threat model feedback, especially from people who’ve designed or operated access control systems at scale.

https://redd.it/1pt267m
@r_opensource
Open-source cross-platform media player using QtMultimedia + FFmpeg with hardware acceleration

Pars Local Player (PLP) is an open-source media player focused on simple and reliable radio streams and video playback.

It was created because existing players were often unreliable for streams and had inconsistent controls and outdated UI.

Key points:

- Cross-platform: Windows and Linux (64-bit)
- Clean and predictable UI
- Reliable radio and network stream playback
- Hardware-accelerated decoding (DirectX 11 on Windows, VAAPI on Linux)
- Wide format support for video, audio, and playlists
- No telemetry or analytics
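
PLP itself is C++ on QtMultimedia with FFmpeg underneath; purely to illustrate how compact the QtMultimedia playback path is, here is a Python/PySide6 sketch (not PLP's code, and the stream URL is a placeholder):

```python
import sys

from PySide6.QtCore import QUrl
from PySide6.QtMultimedia import QAudioOutput, QMediaPlayer
from PySide6.QtWidgets import QApplication

app = QApplication(sys.argv)

# QMediaPlayer handles network radio streams as well as local files;
# hardware-accelerated decoding is negotiated by the platform backend.
audio = QAudioOutput()
player = QMediaPlayer()
player.setAudioOutput(audio)
player.setSource(QUrl("https://example.com/radio-stream"))  # placeholder stream URL
player.play()

sys.exit(app.exec())
```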

Help and documentation:
https://parrothat.com/plp (Help section)
https://parrothat.com/plp/linuxdguides.html (Linux Distros Guides)

Source code:
https://github.com/parrothat/plp

https://redd.it/1pt1drv
@r_opensource
built a minimal neofetch-style tool in Python — feedback welcome

Hey all,

I’ve been using neofetch / fastfetch for a long time, but I wanted something much simpler: no config files, no themes, no plugins, just a fast snapshot of system info when I open a terminal.

So I built fetchx.

Goals:
- Minimal output by default
- Zero configuration
- No external dependencies (Python stdlib only)
- Clear modes instead of endless flags
- Works cleanly on Linux and WSL

Usage:
- fetchx → default system snapshot
- fetchx --network → network info only
- fetchx --full → everything fetchx can detect

It’s a single-file tool, installs system-wide with a curl command, and runs in milliseconds.
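
Not fetchx's actual code, but as a sketch of what a zero-dependency, stdlib-only snapshot boils down to (the choice of fields here is mine, not fetchx's):

```python
import os
import platform
import socket

def snapshot() -> dict:
    """Collect a few basic system facts using only the standard library."""
    uname = platform.uname()
    return {
        "host": socket.gethostname(),
        "os": f"{uname.system} {uname.release}",
        "arch": uname.machine,
        "python": platform.python_version(),
        "cpus": os.cpu_count(),
    }

if __name__ == "__main__":
    for key, value in snapshot().items():
        print(f"{key:>8}: {value}")
```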

Repo: https://github.com/v9mirza/fetchx

This is an early version — I’m mainly looking for feedback on:
- output choices
- missing info that should be included
- things that should not be included

Appreciate any thoughts.

https://redd.it/1pt5vkp
@r_opensource