Opensource by Reddit
Reddit's ♨️ take on Open Source Technology.

Join the discussion ➡️ @opensource_chats

Channel Inquiries ➡️ @group_contacts_bot

👄 TIPS ➡️➡️➡️ https://news.1rj.ru/str/addlist/mB9fRZOHTUk5ZjZk

🌈 made possible by
@reddit2telegram
@r_channels
Free language translation package, 15 languages

Published my first NPM package a little while ago and wanted to share. I was working for an ed-tech startup and found a concerning lack of affordable, scalable translation APIs, even though the information is out there via Wiktionary. Using Wiktionary HTML dumps, I was able to parse out the information for most use cases.

Features:

* automatic accent correction
* verb form detection and base verb translation
* returns word type (adjective, noun etc.)
* requires one of the two languages to be English, but translates between it and 14 other languages ranging from Spanish to Chinese
* romanized and character-based translation for character-based languages

Would love some feedback and to see what else would be helpful to add. Please feel free to contribute directly as well! Hope this makes life a little easier for anyone building language-based apps who doesn't have the budget for super expensive APIs.

[https://github.com/akassa01/wikiglot](https://github.com/akassa01/wikiglot)

[https://www.npmjs.com/package/wikiglot](https://www.npmjs.com/package/wikiglot)

https://redd.it/1psook4
@r_opensource
Case Management software

Hi there, I work in HR for a large NHS Trust and they won’t pay for my team to have casework management software. I have AuADHD and really struggle with the sheer volume of work that comes in, and I know that a management system could help me track all my cases and tasks, plus keep a knowledge base, in one place.

I could try to build something using PowerApps and SharePoint; however, if there were something decent out there, I’d love to try it out. Thank you ☺️

https://redd.it/1psu2sv
@r_opensource
Made a free is-an-ai.dev subdomain service!

Hey r/opensource,

I recently built a small open-source project that gives people (preferably developers) a free subdomain on the is-an-ai.dev domain.

We use a PR-based workflow: submit a PR to our GitHub repo, maintainers will review and approve it, and your DNS records will be added.

Links:
Website: https://www.is-an-ai.dev

Docs: https://docs.is-an-ai.dev

GitHub org: https://github.com/is-an-ai-dev

Register: https://github.com/is-an-ai-dev/register

If you think it’s useful, feel free to try it out or contribute.
If not, I’d still appreciate your opinion on the service and how I can make this free service better known!

Thanks in anticipation!

https://redd.it/1pstzuo
@r_opensource
lagident - A tool to find poor quality network connections

Hi community,

I have finally published a project that was sleeping on my disk for 11 months: Lagident.

The idea is to run lagident on one (or better, multiple) computers in your network to identify weak and poor-quality connections. By taking measurements from multiple points, it is easier to tell whether you are dealing with a bad network card, a broken switch, or a faulty router.



In my case I had issues while online gaming on my desktop PC, but I wasn't sure about the root cause. So I created lagident to find the issue in my network (it turned out to be a bad driver for my network card).

Today I have all my network devices monitored by Lagident. For example, if I move my router, I can see whether this decreases the Wi-Fi quality for my smart TV.
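
If you're wondering what this kind of measurement looks like in practice, here is a rough sketch of latency sampling in Python. This is not lagident's actual code, just an illustration of the idea; the target host and port are placeholders.

```python
# Hypothetical sketch: sample TCP connect latency to a host and report
# average, jitter and failure rate. Not lagident's implementation.
import socket
import statistics
import time

def sample_latency(host: str, port: int = 443, samples: int = 20, timeout: float = 2.0) -> None:
    rtts, failures = [], 0
    for _ in range(samples):
        start = time.perf_counter()
        try:
            with socket.create_connection((host, port), timeout=timeout):
                rtts.append((time.perf_counter() - start) * 1000.0)  # milliseconds
        except OSError:
            failures += 1
        time.sleep(0.5)  # be gentle with the target
    if rtts:
        print(f"{host}: avg {statistics.mean(rtts):.1f} ms, "
              f"jitter {statistics.pstdev(rtts):.1f} ms, "
              f"failed {failures}/{samples}")
    else:
        print(f"{host}: all {samples} probes failed")

if __name__ == "__main__":
    sample_latency("192.168.1.1")  # e.g. your router
```

Lagident does the real work of collecting this kind of data continuously from multiple machines and visualising it.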


Please see the GitHub repo for screenshots.

https://github.com/nook24/lagident


Happy holidays!

https://redd.it/1psv6uk
@r_opensource
Ephemera. An open-source, self-hosted SSH Certificate Authority built on native OpenSSH

Hey everyone,

I’ve been working on a self-hosted project exploring a different approach to SSH access and sudo control, without cloud dependencies or SSH proxies.

Ephemera is an air-gap friendly SSH Certificate Authority built entirely on native OpenSSH and PAM primitives. The core idea is to eliminate long-lived trust rather than rotate it.

Repo:
[https://github.com/Qarait/ephemera](https://github.com/Qarait/ephemera)

Documentation/architecture notes:

[https://qarait.github.io/ephemera/index.html](https://qarait.github.io/ephemera/index.html)

Documentation includes a published threat model and security disclosure policy.

At a high level, Ephemera:

* Replaces static SSH keys with short-lived certificates (minutes)
* Requires WebAuthn hardware-backed presence for cert issuance
* Implements Just-in-Time sudo: privileged commands pause until explicitly approved
* Uses policy-driven RBAC (OIDC groups, IP ranges, time windows)
* Produces tamper-evident, hash-chained audit logs
* Supports encrypted, sovereign recovery via Shamir secret sharing
* Runs fully self-hosted, Dockerized and air-gap capable

Explicit non-goals (intentional design choices):

1) No MITM SSH proxy: direct OpenSSH connections only, no traffic interception layer.

2) No custom SSH protocol: relies exclusively on upstream OpenSSH semantics.

3) No always-on root access: all privilege escalation is time-bound and explicitly approved.

4) Prefer native OpenSSH and PAM over agents or sidecars.

This is not a commercial project. I’m looking for architecture and threat model feedback, especially from people who’ve designed or operated access control systems at scale.
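
For context, the native OpenSSH primitive everything hangs off is certificate signing with ssh-keygen. A minimal sketch of that mechanism in Python (not Ephemera's actual issuance code; the paths and identifiers are made up):

```python
# Sketch of the OpenSSH building block: sign a user's public key with a CA
# key so it is only valid for a few minutes. NOT Ephemera's code; paths and
# identifiers below are hypothetical.
import subprocess

def issue_short_lived_cert(ca_key: str, user_pubkey: str,
                           identity: str, principal: str,
                           validity: str = "+5m") -> None:
    # Writes <user_pubkey without .pub>-cert.pub next to the public key.
    subprocess.run(
        ["ssh-keygen",
         "-s", ca_key,      # CA private key used for signing
         "-I", identity,    # key identity recorded in the cert and server logs
         "-n", principal,   # principal(s) the certificate is valid for
         "-V", validity,    # validity window, e.g. +5m = five minutes
         user_pubkey],
        check=True,
    )

if __name__ == "__main__":
    issue_short_lived_cert("/etc/ephemera/ca", "/home/alice/.ssh/id_ed25519.pub",
                           identity="alice@laptop", principal="alice")
```

On the server side, a single TrustedUserCAKeys line in sshd_config is enough to trust certificates signed by that CA, which is why no proxy or custom protocol layer is needed.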

https://redd.it/1pt267m
@r_opensource
Open-source cross-platform media player using QtMultimedia + FFmpeg with hardware acceleration

Pars Local Player (PLP) is an open-source media player focused on simple and reliable radio streams and video playback.

It was created because existing players were often unreliable for streams and had inconsistent controls and outdated UI.

Key points:

- Cross-platform: Windows and Linux (64-bit)
- Clean and predictable UI
- Reliable radio and network stream playback
- Hardware-accelerated decoding (DirectX 11 on Windows, VAAPI on Linux)
- Wide format support for video, audio, and playlists
- No telemetry or analytics



Help and documentation:

https://parrothat.com/plp (Help section)

https://parrothat.com/plp/linuxdguides.html (Linux Distros Guides)

Source code:

https://github.com/parrothat/plp



https://redd.it/1pt1drv
@r_opensource
built a minimal neofetch-style tool in Python — feedback welcome

Hey all,

I’ve been using neofetch / fastfetch for a long time, but I wanted something
much simpler — no config files, no themes, no plugins, just a fast snapshot
of system info when I open a terminal.

So I built fetchx.

Goals:
- Minimal output by default
- Zero configuration
- No external dependencies (Python stdlib only)
- Clear modes instead of endless flags
- Works cleanly on Linux and WSL

Usage:
- fetchx → default system snapshot
- fetchx --network → network info only
- fetchx --full → everything fetchx can detect

It’s a single-file tool, installs system-wide with a curl command, and runs in
milliseconds.
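
To give a feel for what "stdlib only" means in practice, here is a tiny hypothetical sketch of the kind of snapshot such a tool can assemble without any dependencies (not fetchx's actual source):

```python
# Rough idea of a zero-dependency system snapshot using only the stdlib.
import getpass
import os
import platform
import shutil
import socket

def snapshot() -> dict:
    total, used, _free = shutil.disk_usage("/")
    return {
        "user": f"{getpass.getuser()}@{socket.gethostname()}",
        "os": f"{platform.system()} {platform.release()} ({platform.machine()})",
        "python": platform.python_version(),
        "cpus": os.cpu_count(),
        "disk": f"{used / 2**30:.1f} / {total / 2**30:.1f} GiB used",
    }

if __name__ == "__main__":
    for key, value in snapshot().items():
        print(f"{key:>8}: {value}")
```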

Repo: https://github.com/v9mirza/fetchx

This is an early version — I’m mainly looking for feedback on:
- output choices
- missing info that should be included
- things that should not be included

Appreciate any thoughts.

https://redd.it/1pt5vkp
@r_opensource
Trigger dot dev

Can anyone help me understand how projects like trigger dot dev make money while open-sourcing their whole project? I asked Antigravity to tell me how the project was built, and it seems to be fairly simple, mostly Redis and PostgreSQL. Are people now willing to pay more for an expert to maintain the tech than for running the tech itself?

I am trying to wrap my brain around this.

https://redd.it/1pt76vg
@r_opensource
I built a free Snapchat Memories downloader that also fixes missing capture time + GPS metadata (EXIF/XMP)

Hey everyone,

Snapchat’s “My Data” export for Memories gives you a memories_history.html file with download links, but the downloaded photos/videos often don’t end up with correct embedded metadata (capture time and location). That makes imports into Photos / Google Photos / Lightroom messy because everything sorts by download date.

So I put together a small Python tool that:

- Parses your memories_history.html
- Downloads all your Memories media (supports the GET/POST link variants Snapchat uses)
- Extracts ZIP bundles (some filtered snaps)
- Writes proper capture date/time + GPS into the files using ExifTool (EXIF/XMP)
- Updates filesystem timestamps (helps Finder sorting on macOS)
- Supports an aggressive parallel download mode (--concurrency)
- Creates `manifest.csv` and a `download_state.json` so reruns can skip already-downloaded items
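
For the curious, the metadata step essentially boils down to an ExifTool invocation like the one below. This is a simplified sketch, not the repo's exact code; the tag names are standard EXIF ones.

```python
# Simplified sketch of writing capture time + GPS with ExifTool.
import subprocess

def write_metadata(path: str, captured_at: str, lat: float, lon: float) -> None:
    # captured_at uses EXIF date format, e.g. "2021:07:04 18:23:05"
    subprocess.run(
        ["exiftool",
         f"-DateTimeOriginal={captured_at}",
         f"-CreateDate={captured_at}",
         f"-GPSLatitude={abs(lat)}",
         f"-GPSLatitudeRef={'N' if lat >= 0 else 'S'}",
         f"-GPSLongitude={abs(lon)}",
         f"-GPSLongitudeRef={'E' if lon >= 0 else 'W'}",
         "-overwrite_original",
         path],
        check=True,
    )

if __name__ == "__main__":
    write_metadata("downloads/2021-07-04_memory.jpg",
                   "2021:07:04 18:23:05", 40.7128, -74.0060)
```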

Repo: https://github.com/jbisinger/Snapchat_Memories_Downloader

How to use (high level):

1. Export your Snapchat data: https://accounts.snapchat.com/ → My Data → Request Data → extract the ZIP → find memories_history.html
2. Install ExifTool (macOS: `brew install exiftool`)
3. Install Python deps: `pip install -r requirements.txt`
4. Run: `python main.py -m memories_history.html -d ./downloads`

Optional fast mode:

`python main.py -m memories_history.html -d ./downloads --concurrency 200 --delay 2`

Important notes / disclaimers:

- This is for personal backups/organization. Use it at your own risk.
- Snapchat links can expire; you may need to re-export if downloads fail.
- High concurrency can stress your connection (and may trigger rate limiting). If you get errors, reduce --concurrency or increase --delay.
- Some file formats may not accept every metadata tag consistently; the tool still downloads the media even if metadata writing fails.
- I’m not affiliated with Snapchat. No warranty, no guarantees.

If you try it, I’d love feedback: performance issues, file types that break metadata, or any improvements you’d want (better filename scheme, progress UI, etc.).

https://redd.it/1pt8nyl
@r_opensource
sketch2prompt (MIT): planning step + generated specs for AI-assisted workflows

I open-sourced a planning tool I built to speed up my AI coding workflow.

I got tired of AI assistants guessing wrong about how my projects should be structured. So I built a tool where you sketch out your system visually first, then export specs that tell the AI "here's what exists, here's what talks to what, here's what's off limits."

It's a canvas where you drag out components (frontend, backend, database, auth, etc), give them names and tech choices, and draw lines showing how they connect. When you hit export, you get a ZIP with markdown and YAML files that you drop in your project folder. Your AI assistant reads those instead of making stuff up.



The goal is basically: freeze the architecture decisions before the AI starts building, so it works within your plan instead of inventing its own.

No account needed, no API keys stored on my end (bring your own if you want AI-enhanced output, otherwise it uses templates). MIT licensed.

Repo: https://github.com/jmassengille/sketch2prompt

Live: https://www.sketch2prompt.com/

DemoVid: https://www.reddit.com/user/jmGille/comments/1ptaboa/sketch2prompt_demo/

If anyone gives it a shot, would love to hear if the output actually makes sense or if something's confusing. Still iterating on it.

https://redd.it/1ptanpm
@r_opensource
Built an open-source frontend security scanner with a desktop GUI (ShieldEye SurfaceScan) 🔍🛡️

Hi all,

over the last months I’ve been tinkering with a side project in my spare time and it slowly grew into something that feels usable, so I decided to put it out there.
It ended up as **ShieldEye SurfaceScan** – an open-source desktop app that looks at the **frontend attack surface** of a site. 🔍

The idea is simple: you point it at a URL, it spins up a headless browser, lets the page execute its JavaScript and then tries to make sense of what it sees. It looks at HTML and scripts, guesses which third-party libraries are in use, checks HTTP security headers and cookies, and then puts everything into a few views: dashboard, detailed results and some basic analytics. If you have Ollama running locally, it can also add a short AI-generated summary of the situation, but that part is completely optional. 🤖
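
To give a feel for one of those checks, here is a trivial security-header probe in Python. Purely illustrative: ShieldEye itself does this through its headless-browser worker, and the header list below is just a common baseline, not the app's exact rule set.

```python
# Illustrative only: check a few common HTTP security headers on a URL.
import urllib.request

BASELINE = [
    "Content-Security-Policy",
    "Strict-Transport-Security",
    "X-Content-Type-Options",
    "X-Frame-Options",
    "Referrer-Policy",
]

def check_headers(url: str) -> None:
    with urllib.request.urlopen(url, timeout=10) as resp:
        headers = resp.headers
    for name in BASELINE:
        value = headers.get(name)
        print(f"{'OK     ' if value else 'MISSING'} {name}: {value or ''}")

if __name__ == "__main__":
    check_headers("https://example.com")
```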

Under the hood it’s a small stack of services talking to each other:

- a GTK desktop GUI written in Python,
- an API in Node + TypeScript + Express,
- a Playwright-based worker that does the actual page loading and analysis,
- PostgreSQL, Redis and MinIO for data, queues and storage.

Even though I mainly use it through the GUI, there is also a JSON API behind it (for scans, results and analytics), so it can be driven from scripts or CI if someone prefers to keep it headless.

In my head the main audience is:

- people learning web security who want something to poke at the frontend surface of their own projects,
- developers who like a quick sanity check of headers / JS / deps without wiring a whole pipeline,
- anyone who enjoys self-hosted tools with a native-style UI instead of another browser tab. 🖥️

The code is on GitHub (MIT‑licensed):

https://github.com/exiv703/ShieldEye-SurfaceScan

There’s a README with a bit more detail about the architecture, Docker setup and some screenshots.

If you do take it for a spin, I’d be interested in any feedback on:
- how the GUI feels to use (what’s confusing or clunky),
- what kind of checks you’d expect from a tool focused on the frontend surface,
- anything that breaks on other systems (I mostly run it on Linux 🐧).

Still treating this as a work in progress, but it’s already at the point where it can run real scans against your own apps and show something useful.

https://redd.it/1ptbnko
@r_opensource
A simple CLI file encrypter in Go

GitHub: https://github.com/pingminus/SafeGuard

A simple CLI file encryption tool in Go with AES-GCM, XOR, and Caesar ciphers. Great for learning and experimentation. Not for high-security use. Contributions and improvements are welcome! I originally started writing it in C++, but ran into library issues, so I switched to Go.
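
For anyone using it to learn, AES-GCM is the authenticated-encryption mode doing the serious work here (XOR and Caesar are the toys). A minimal sketch of the pattern, shown in Python with the cryptography package rather than the project's Go code:

```python
# Sketch of the AES-GCM pattern: encrypt with a fresh random nonce,
# store the nonce next to the ciphertext. Not SafeGuard's Go code.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_file(path: str, key: bytes) -> None:
    nonce = os.urandom(12)  # 96-bit nonce, the standard size for GCM
    with open(path, "rb") as f:
        data = f.read()
    ciphertext = AESGCM(key).encrypt(nonce, data, None)
    with open(path + ".enc", "wb") as out:
        out.write(nonce + ciphertext)

def decrypt_file(path: str, key: bytes) -> bytes:
    with open(path, "rb") as f:
        blob = f.read()
    nonce, ciphertext = blob[:12], blob[12:]
    return AESGCM(key).decrypt(nonce, ciphertext, None)  # raises if tampered

if __name__ == "__main__":
    key = AESGCM.generate_key(bit_length=256)
    encrypt_file("notes.txt", key)
    print(decrypt_file("notes.txt.enc", key)[:40])
```

Never reuse a nonce with the same key; GCM's guarantees collapse if you do, which is why the sketch generates a fresh one per file.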



https://redd.it/1pt9q0f
@r_opensource
Struggling with SEO in Vite + React FOSS. Am I screwed?😭😭

Hello everyone,

I hope at least one of you can help me...

I maintain a FOSS Vite + React project that’s still pre-v1 and needs a lot of work. I want it to be discoverable so new devs can find it and help implement the long list of features needed before the first proper release, but I’m running into serious SEO headaches and honestly don't know what to do.

I’ve tried a bunch of approaches across many projects: react-helmet (and the async version), Vite SSG, static rendering plugins, and server-side rendering with things like vite-plugin-ssr, but I keep running into similar problems.

The head tags just don’t want to update properly for different pages - they update, but only after a short while and only when JS is enabled. Meta tags, scripts, descriptions, and whatnot often stay the same or don't show the right stuff. Am I doing it wrong?

What can I do about crawlers that don’t execute JavaScript? How do I make sure they actually see the right content?

I’m also not sure if things like Algolia DocSearch will work properly if pages aren’t statically rendered or SEO-friendly.
I'm 100% missing something fundamental about SEO in modern React apps because many of them out there are fine - my apps just aren't.🥲

Is it even feasible to do “good” SEO in a Vite + SPA setup without full SSR or am I basically screwed if I want pages to be crawlable by non-JS bots?😭

At this point, I'll happily accept any advice, experiences, or recommended approaches, especially if you’ve done SEO for an open-source project that needs to attract contributors.

I just need a solid way to get it to work because I don't want to waste my time again in another project.😭😭😭😭

https://redd.it/1pta5ql
@r_opensource
Leaving the Big Tech behind

Doctorow has been all over the media on both sides of the Atlantic.
Yes, much has gone to shit. People put up with no end of it because of the Sunk Cost Fallacy.
Surely now is the time to challenge that fallacy, on the brink of a huge tech downturn.
Federated social media, privacy-focused mobile devices, the right to repair, open-source operating systems: all of these are within the grasp of anyone who is prepared to make a little effort. Secure, paid mail services abound.
But there needs to be a concerted off-ramp from things like Facebook. It's no use signing up to Mastodon and finding literal crickets.
Is anyone up for creating a welcome committee, so people might actually find a friendly face if they take the plunge into Mastodon or Pixelfed?

https://redd.it/1pth83u
@r_opensource
Ephemera: an open-source, self-hosted SSH Certificate Authority built on native OpenSSH (seeking architecture review)

Hey everyone,

I’ve been working on a self-hosted project exploring a different approach to SSH access and sudo control, without cloud dependencies or SSH proxies.

Ephemera is an air-gap-friendly SSH Certificate Authority built entirely on native OpenSSH and PAM primitives. The core idea is to eliminate long-lived trust rather than rotate it.

Repo:

https://github.com/Qarait/ephemera

Documentation:

https://qarait.github.io/ephemera/index.html



At a high level, Ephemera:

1. Replaces static SSH keys with short-lived certificates (minutes)
2. Requires WebAuthn hardware-backed presence for cert issuance
3. Implements Just-in-Time sudo: privileged commands pause until explicitly approved
4. Uses policy-driven RBAC (OIDC groups, IP ranges, time windows)
5. Produces tamper-evident, hash-chained audit logs
6. Supports encrypted, sovereign recovery via Shamir secret sharing
7. Runs fully self-hosted, Dockerized and air-gap capable



Explicit non-goals (intentional design choices):

No MITM SSH proxy, direct OpenSSH connections only; no traffic interception layer.

No custom SSH protocol, relies exclusively on upstream OpenSSH semantics.

No always-on root access, all privilege escalation is time-bound and explicitly approved.

Prefer native OpenSSH and PAM primitives over agents, sidecars or long-running daemons.

https://redd.it/1pthmwx
@r_opensource