Can I use AGPL for my project but also use MIT for some parts of the code?
I wrote a project with Kotlin Multiplatform, which compiles to JVM, Android, iOS, and web. Because of the web part, I want to use the AGPL.
But there might be parts of the code that are interesting for others to use (smaller solutions). Can I set up another license for that part or would it be confusing or a legal problem?
Maybe it would be easier to copy these parts to another project and put that under MIT license.
There are no other contributors so far.
I just want to prevent anybody from taking the code, making it proprietary, and making money off it.
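One common way to mark a differently licensed subset, assuming the MIT parts live in their own module with their own LICENSE file, is a per-file SPDX identifier. A sketch (an illustrative file, not from the actual project):

```python
# SPDX-License-Identifier: MIT
# This module is offered under MIT; the rest of the repository is AGPL-3.0-or-later.
# The full license texts would live in LICENSES/MIT.txt and LICENSES/AGPL-3.0-or-later.txt.

def slugify(title: str) -> str:
    """Example of a small, reusable helper that could reasonably be MIT-licensed."""
    return "-".join(title.lower().split())
```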
https://redd.it/1ps896r
@r_opensource
Unique features of C++ DataFrame (1)
https://github.com/hosseinmoein/DataFrame
https://redd.it/1ps77kb
@r_opensource
GitHub: C++ DataFrame for statistical, financial, and ML analysis in modern C++ (hosseinmoein/DataFrame)
Built ReSearch: a self-hosted search engine with its own crawler, aimed at killing AI slop. Uses Go, KVRocks, and Quickwit on the backend. Focuses on crawling/searching a configurable, curated domain list.
I saw a YouTube video from "kurzgesagt" titled "AI Slop Is Destroying The Internet". The video describes how the internet is getting filled with AI-generated slop, and how existing LLMs use misinformation and inaccurate AI slop as verified sources from the internet while confidently hallucinating. A thought struck me: what if, instead of crawling the entire internet, we had a search engine with a curated domain list to crawl? The internet is filled with social media, porn, SEO junk, and AI slop, so I thought that by doing this we could create a mini internet of high-value results: less noise, higher-quality results.
The primary target clients were AI and LLM companies. I can run multiple clusters, with each cluster focused on a particular topic: research papers (Google Scholar), code documentation (for code-generation LLMs), one for the dark web, one targeting cybersecurity sites, and so on.
But then I thought it would be a failed business, and I planned to make it open source xD
I planned and implemented it to handle 50M+ search results. There are some bottlenecks; you can definitely increase the limit by fixing those. The code is optimised, efficient, and functional, but I probably won't be maintaining it.
It is built with scalability and a distributed architecture in mind. KVRocks and Quickwit are both extremely scalable, and you can run multiple crawling engines in parallel writing to the same DB. I didn't get to test the product to the extremes, but I worked with 20 domains that weren't blocking me from scraping (am I going to jail?), with a max of about 200k records scraped. Search results were pretty fast, since Quickwit uses an inverted index for search, so it stays fast despite the scale.
You do still need to work on the sitemap-following logic, and I had plans to add AI-generated-content detection and to skip indexing those sites.
I would appreciate any review of the architecture, code quality, or scalability; feel free to reach out for anything :")
Tech stack: Go (I wanted to use Rust, but I hadn't worked with Rust before and it felt like too much to trade for a slight performance gain).
Quickwit: powerful, efficient, fast, Rust-based (why not OpenSearch? No budget for RAM in this economy, and I definitely hate the Java stack).
I did use AI here and there to improve my efficiency and reduce manual work, but it's not entirely built on vibes.
You can deploy it on your local machine if you want to run your own search engine.
github link: https://github.com/apigate-in/ReSearch
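For a rough idea of the core loop (crawl only an allowlist of domains, store documents, keep an inverted index), here is a minimal stdlib-Python sketch. It is an illustration of the concept, not ReSearch's actual Go code; the domain list, in-memory store, and tokenizer are all placeholders standing in for KVRocks and Quickwit:

```python
import re
import urllib.parse
import urllib.request
from collections import defaultdict, deque
from html.parser import HTMLParser

ALLOWED_DOMAINS = {"docs.python.org"}  # curated list (placeholder)

class LinkAndTextParser(HTMLParser):
    """Collects href links and visible text from one HTML page."""
    def __init__(self):
        super().__init__()
        self.links, self.text = [], []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)
    def handle_data(self, data):
        self.text.append(data)

def crawl(seeds, max_pages=10):
    store = {}                  # url -> page text (stands in for KVRocks)
    index = defaultdict(set)    # term -> set of urls (stands in for Quickwit)
    queue, seen = deque(seeds), set(seeds)
    while queue and len(store) < max_pages:
        url = queue.popleft()
        if urllib.parse.urlparse(url).netloc not in ALLOWED_DOMAINS:
            continue            # curated-domain filter: skip everything off-list
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                html = resp.read().decode("utf-8", errors="replace")
        except OSError:
            continue
        parser = LinkAndTextParser()
        parser.feed(html)
        text = " ".join(parser.text)
        store[url] = text
        for term in set(re.findall(r"[a-z0-9]+", text.lower())):
            index[term].add(url)  # naive inverted index
        for link in parser.links:
            absolute = urllib.parse.urljoin(url, link)
            if absolute not in seen:
                seen.add(absolute)
                queue.append(absolute)
    return store, index

def search(index, query):
    """AND-search over the inverted index."""
    hits = [index.get(t, set()) for t in query.lower().split()]
    return set.intersection(*hits) if hits else set()

if __name__ == "__main__":
    store, index = crawl(["https://docs.python.org/3/"])
    print(search(index, "tutorial"))
```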
https://redd.it/1psalit
@r_opensource
Looking for Contributors – Modern C++ CLI Calculator (OSS)
Hi! 👋
I’m maintaining an open-source **C++ CLI Calculator** and I’m looking for contributors.
**Project highlights:**
* Modern C++ (CMake, clean architecture)
* REPL mode, expression parsing
* Matrices, prime factorization, base conversion
* Batch scripting support
* CI/CD and multi-platform packaging (Snap, DEB)
**Looking for help with:**
* Core C++ features and refactors
* Performance and parsing improvements
* Tests and CI
* Documentation and CLI UX polish
Beginner-friendly issues are available, and code reviews are provided.
If you enjoy clean C++ and well-structured CLI tools, you’ll feel at home.
👉 GitHub: [https://github.com/Benedek553/cli-calculator](https://github.com/Benedek553/cli-calculator)
PRs, issues, and discussions are welcome.
https://redd.it/1ps8dct
@r_opensource
Anyone know of any (free) open source git repository sites like github/gitlab?
Like with (near) complete privacy (as in no data shared and no data visible to Microsoft, for example), and being completely open source and free. (Hopefully free, but if it's completely open source and private, I'm willing to pay some money to use it.)
Edit: I also mean FOSS code repositories, not just Git.
https://redd.it/1pshict
@r_opensource
Why is open-source maintenance so hard?💔
Good after-breakfast
I feel like I'm jumping through hoops just to marvel at my own reflection.
I’ve been working on an open source project recently, and it's just so hard to keep it maintained and release new features consistently. Even with contributors and users who seem interested, there’s always this constant pressure: fixing bugs, reviewing PRs, updating dependencies, handling feature requests, and keeping documentation up to date, which I initially neglected and am now burdened by - nobody wants to help with that either, and I don't blame them. :(
I’ve noticed that contributors sometimes drop off, issues pile up, and maintaining consistency becomes overwhelming. It makes me wonder: is this just the nature of open source, or are there strategies that successful projects use to make maintenance sustainable? When I make posts on places like Reddit, people just respond with acidic comments, and it takes all of the joy out of OSS for me.
I want to hear from you.
What are the biggest challenges you face in maintaining an open source project?
How do you manage your community's expectations while keeping your sanity?
Are there tools, workflows, or approaches that make maintenance easier? I've tried things like CodeRabbit after someone recommended it to me, but now I'm considered a script kiddie for using half a second of AI per week.
I simply want to understand why it's so hard and what can be done to survive in the long term.
Thanks in advance for your thoughts!
https://redd.it/1psif9g
@r_opensource
I built a "Serverless" Tamagotchi that lives in a GitHub repo (MIT License)
Hi r/opensource,
I'm sharing **ForkMonkey**, an experimental project I released this week.
It's a digital pet that lives entirely inside a GitHub repository. The goal was to build a self-sustaining application using **zero external infrastructure**, relying only on the GitHub ecosystem (Actions, Pages, Git history).
**Repository:** [https://github.com/roeiba/forkMonkey](https://github.com/roeiba/forkMonkey) **License:** MIT
# The "Not Open Source" Elephant in the Room 🐘
I posted here yesterday and got some heat because the default configuration uses GPT-4o (via GitHub Models) for the evolution logic. **That's a fair point.**
I want to clarify: **The engine is 100% open source and model-agnostic.**
The core logic (`src/evolution.py`) uses an abstract `AIProvider` interface.
* **Default:** Uses GitHub Models (free tier) for zero-setup accessibility.
* **Open Models:** You can switch the underlying model to **Llama 3** (or any other model hosted on GitHub Models) just by setting an environment variable (`GITHUB_MODEL=Meta-Llama-3-70B-Instruct`).
* **Bring Your Own:** It also supports Anthropic natively, and because it's standard Python, plugging in a local Ollama endpoint or HuggingFace inference is a trivial PR away.
# How it works
1. **Repo as DB:** The state of your pet is a JSON file. Every "evolution" is a Git commit. The history of your pet is literally the `git log`.
2. **Actions as Backend:** A cron workflow runs daily, invokes the Python engine, calculates changes, and commits them back.
3. **Forking as functionality:** This is the mechanic I'm most proud of. "Adopting" a monkey means **forking** the repo. This isn't just a copy; the setup script reads the *upstream* parent's DNA and mutates it. This creates a traceable genealogical tree across the entire GitHub network.
I'd love for this community to play with the **"Git as a Database"** concepts or help add providers for fully local/open-weight model execution.
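If you want to poke at that "Git as a Database" idea outside ForkMonkey, a minimal sketch looks like this: the state lives in one JSON file and every mutation is a commit. Illustrative only; it shells out to plain `git`, and the `pet.json` layout here is made up rather than ForkMonkey's actual schema:

```python
import json
import random
import subprocess
from pathlib import Path

STATE_FILE = Path("pet.json")  # the whole "database" is one JSON file in the repo

def load_state():
    if STATE_FILE.exists():
        return json.loads(STATE_FILE.read_text())
    return {"name": "monkey", "generation": 0, "traits": {"fur": "brown"}}

def evolve(state):
    """One 'evolution' step: mutate a trait and bump the generation counter."""
    state["generation"] += 1
    state["traits"]["fur"] = random.choice(["brown", "golden", "silver"])
    return state

def commit_state(state, message):
    STATE_FILE.write_text(json.dumps(state, indent=2))
    subprocess.run(["git", "add", str(STATE_FILE)], check=True)
    subprocess.run(["git", "commit", "-m", message], check=True)  # history == git log

if __name__ == "__main__":
    new_state = evolve(load_state())
    commit_state(new_state, f"evolution: generation {new_state['generation']}")
```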
Feedback and PRs welcome!
https://redd.it/1psk4pe
@r_opensource
XMRetail POS @SlowBearDigger
Hey yall Monero frens!
Just dropped https://xmretail.store
it's a zero knowledge POS system for sellers rockin Monero.
https://github.com/SlowBearDigger/xmretail-pos (OPEN SOURCE, MIT license - fork it, mod it, whatever ya want!)
All opensource, no servers spyin on ya biz, data stays encrypted in ya browser with AES GCM 256 bit. Watch only security using your private view key to scan the blockchain for sales, no spend key needed so funds stay safe in your wallet. Direct P2P to any node, no fees no middlemen, full privacy straight from the Monero protocol.
Features: fast sync with monero-ts WASM, fresh subaddresses per tx (it will generate after your first payment, or if you already have sub addresses verify that they match), custom QR codes to match your brand, inventory tracking with stock and variations, slick industrial design for busy spots.
Built with React Vite TS Tailwind n Web Crypto API (Yes, i used AI to do this faster, you can always check the code and help this to grow). Quick start: clone repo, npm install, npm run dev.
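For anyone curious what the "AES-GCM 256-bit, data stays in your browser" part means in practice, here is the same primitive sketched in Python with the `cryptography` package. The real app uses the browser's Web Crypto API, so this is only an illustration, and the sale record and key handling are placeholders:

```python
import json
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# 256-bit key; in the POS this would stay client-side (e.g. derived from a passphrase)
key = AESGCM.generate_key(bit_length=256)
aesgcm = AESGCM(key)

sale = {"item": "coffee", "amount_xmr": "0.05", "subaddress": "8xx...placeholder"}
plaintext = json.dumps(sale).encode()

nonce = os.urandom(12)                         # fresh 96-bit nonce per record
ciphertext = aesgcm.encrypt(nonce, plaintext, None)

# Only nonce + ciphertext would ever be persisted; decryption needs the key.
restored = json.loads(aesgcm.decrypt(nonce, ciphertext, None))
assert restored == sale
```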
It's in testin phase so PLEASE try it out and lemme know if it works for you or not - bugs, ideas, whatever! Comment below with feedback, the more the better it gets. If you got the skills, audit the code plz, it's all there on GitHub for you to poke at.
Help the XMR community grow!!
My X @SlowBearDigger
https://redd.it/1psluo3
@r_opensource
TrieLingual: A language learning tool
https://github.com/mreichhoff/TrieLingual
https://redd.it/1pshhp6
@r_opensource
GitHub: Learn languages by studying the building blocks of their sentences (mreichhoff/TrieLingual)
Free language translation package, 15 languages
Published my first NPM package a little while ago and wanted to share. I was working for an ed-tech startup and found a concerning lack of accessible translation APIs at scale, despite the information being out there via Wiktionary. Using Wiktionary HTML dumps, I was able to parse out the information for most use cases.
Features:
* automatic accent correction
* verb form detection and base verb translation
* returns word type (adjective, noun etc.)
* requires one of the two languages to be English, but translates between it and 14 other languages ranging from Spanish to Chinese
* roman and character based translation for character languages
Would love some feedback and to see what else would be helpful to add. Please feel free to contribute directly as well! Hope this makes life a little easier for anyone building language-based apps who doesn't have the budget for super expensive APIs.
[https://github.com/akassa01/wikiglot](https://github.com/akassa01/wikiglot)
[https://www.npmjs.com/package/wikiglot](https://www.npmjs.com/package/wikiglot)
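On the "automatic accent correction" feature: the core idea is accent-insensitive lookup, which a few lines of Python can illustrate. This is only the general technique, not wikiglot's actual TypeScript code, and the toy dictionary entries are placeholders:

```python
import unicodedata

def strip_accents(word: str) -> str:
    """Fold 'cafe' and 'café' onto the same key by removing combining marks."""
    decomposed = unicodedata.normalize("NFD", word.lower())
    return "".join(ch for ch in decomposed if not unicodedata.combining(ch))

# toy dictionary keyed by the accent-stripped form (entries are placeholders)
entries = {
    strip_accents("café"): {"word": "café", "type": "noun", "en": "coffee"},
    strip_accents("está"): {"word": "está", "type": "verb", "base": "estar", "en": "is"},
}

def lookup(query: str):
    return entries.get(strip_accents(query))

print(lookup("cafe"))   # finds café even without the accent
print(lookup("esta"))   # finds está
```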
https://redd.it/1psook4
@r_opensource
Case Management software
Hi there, I work in HR for a large NHS Trust and they won’t pay for my team to have Casework management software. I have AuADHD and really struggle with the sheer volume of work that comes in and know that a management system could help me track all my cases and tasks plus a knowledge base in one place.
I could try to build something using PowerApps and SharePoint however if there was something decent out there I’d love to try that out. Thank you ☺️
https://redd.it/1psu2sv
@r_opensource
Made a free is-an-ai.dev subdomain service!
Hey r/opensource,
I recently built a small open-source project that gives people (preferably developers) a free subdomain on the is-an-ai.dev domain.
We have a PR-based workflow: submit a PR to our GitHub repo, maintainers will review and approve it, and your DNS records will be added.
Links:
Website: https://www.is-an-ai.dev
Docs: https://docs.is-an-ai.dev
GitHub org: https://github.com/is-an-ai-dev
Register: https://github.com/is-an-ai-dev/register
If you think it’s useful, feel free to try it out or contribute.
If not, I'd still appreciate your opinion on the service and how I can make this free service known!
Thanks in anticipation!
https://redd.it/1pstzuo
@r_opensource
A tool for detecting and diagnosing node-level issues in AI environments
https://github.com/scitix/sichek
https://redd.it/1pnoscripttf
@r_opensource
GitHub: Sichek is a tool for detecting and diagnosing node-level issues in AI environments, ensuring the reliability and high performance of GPU-intensive workloads (scitix/sichek)
lagident - A tool to find poor quality network connections
Hi community,
I have finally published a project that was sleeping on my disk for 11 months: Lagident.
The idea is to run Lagident on one (or, better, multiple) computers in your network to identify weak and poor-quality connections. By taking measurements from multiple points, it is easier to tell whether you are dealing with a bad network card, a broken switch, or a broken router.
In my case I had issues while online gaming on my desktop PC, but I wasn't sure about the root cause, so I created Lagident to find the issue in my network (it was a bad driver for my network card).
Today I have all my network devices monitored by Lagident. For example, if I move my router, I can see whether this decreases the Wi-Fi quality for my smart TV.
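The core measurement is simple: ping a target repeatedly and track response time and packet loss. A rough stdlib-Python illustration of that idea (not Lagident's actual code; it shells out to the system `ping` on Linux/macOS, and the target address is a placeholder):

```python
import re
import statistics
import subprocess

def probe(host: str, count: int = 5):
    """Ping a host and report packet loss and latency stats (Linux/macOS ping output)."""
    result = subprocess.run(
        ["ping", "-c", str(count), host],
        capture_output=True, text=True,
    )
    times = [float(m) for m in re.findall(r"time=([\d.]+)", result.stdout)]
    received = len(times)
    return {
        "host": host,
        "loss_pct": 100.0 * (count - received) / count,
        "avg_ms": statistics.mean(times) if times else None,
        "max_ms": max(times) if times else None,
    }

if __name__ == "__main__":
    # measuring the same target from several machines helps localise the fault
    print(probe("192.168.1.1"))
```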
Please see the GitHub repo for screenshots.
https://github.com/nook24/lagident
Happy holidays!
https://redd.it/1psv6uk
@r_opensource
Why I use 100% Open-source for my webcomic - David Revoy
https://www.davidrevoy.com/article170/why-i-m-using-100-open-source
https://redd.it/1psxf2d
@r_opensource
Mosaico: open-source data platform for robotics and physical AI
https://mosaico.dev/
https://redd.it/1pt0wkd
@r_opensource
Ephemera. An open-source, self-hosted SSH Certificate Authority built on native OpenSSH
Hey everyone,
I’ve been working on a self-hosted project exploring a different approach to SSH access and sudo control, without cloud dependencies or SSH proxies.
Ephemera is an air-gap friendly SSH Certificate Authority built entirely on native OpenSSH and PAM primitives. The core idea is to eliminate long-lived trust rather than rotate it.
Repo:
[https://github.com/Qarait/ephemera](https://github.com/Qarait/ephemera)
Documentation/architecture notes:
[https://qarait.github.io/ephemera/index.html](https://qarait.github.io/ephemera/index.html)
Documentation includes a published threat model and security disclosure policy.
At a high level, Ephemera:
* Replaces static SSH keys with short-lived certificates (minutes)
* Requires WebAuthn hardware-backed presence for cert issuance
* Implements just-in-time sudo: privileged commands pause until explicitly approved
* Uses policy-driven RBAC (OIDC groups, IP ranges, time windows)
* Produces tamper-evident, hash-chained audit logs
* Supports encrypted, sovereign recovery via Shamir secret sharing
* Runs fully self-hosted, Dockerized, and air-gap capable
Explicit non-goals (intentional design choices):
1) No MITM SSH proxy: direct OpenSSH connections only, no traffic-interception layer.
2) No custom SSH protocol: relies exclusively on upstream OpenSSH semantics.
3) No always-on root access: all privilege escalation is time-bound and explicitly approved.
4) Prefer native OpenSSH and PAM over agents or sidecars.
This is not a commercial project. I’m looking for architecture and threat model feedback, especially from people who’ve designed or operated access control systems at scale.
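As a concrete illustration of the "tamper-evident, hash-chained audit logs" bullet above, here is the general technique in a minimal Python sketch. This is not Ephemera's implementation; the field names, events, and log path are placeholders:

```python
import hashlib
import json
import time

LOG_PATH = "audit.log"  # placeholder path; one JSON object per line

def _entry_hash(ts, event, prev):
    payload = json.dumps({"ts": ts, "event": event, "prev": prev}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def last_hash():
    """Hash of the most recent entry, or the genesis marker for an empty log."""
    try:
        lines = open(LOG_PATH).read().splitlines()
    except FileNotFoundError:
        return "GENESIS"
    return json.loads(lines[-1])["hash"] if lines else "GENESIS"

def append_event(event):
    """Append an event whose hash covers the previous entry's hash."""
    ts = time.time()
    prev = last_hash()
    entry = {"ts": ts, "event": event, "prev": prev, "hash": _entry_hash(ts, event, prev)}
    with open(LOG_PATH, "a") as f:
        f.write(json.dumps(entry) + "\n")

def verify_chain():
    """Recompute every hash; editing or deleting any line breaks the chain."""
    prev = "GENESIS"
    for line in open(LOG_PATH):
        e = json.loads(line)
        if e["prev"] != prev or e["hash"] != _entry_hash(e["ts"], e["event"], e["prev"]):
            return False
        prev = e["hash"]
    return True

if __name__ == "__main__":
    append_event("sudo approved for alice")
    append_event("certificate issued to bob (ttl=5m)")
    print("chain intact:", verify_chain())
```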
https://redd.it/1pt267m
@r_opensource
Open-source cross-platform media player using QtMultimedia + FFmpeg with hardware acceleration
Pars Local Player (PLP) is an open-source media player focused on simple and reliable radio stream and video playback.
It was created because existing players were often unreliable for streams and had inconsistent controls and outdated UIs.
Key points:
- Cross-platform: Windows and Linux (64-bit)
- Clean and predictable UI
- Reliable radio and network stream playback
- Hardware-accelerated decoding (DirectX 11 on Windows, VAAPI on Linux)
- Wide format support for video, audio, and playlists
- No telemetry or analytics
Help and documentation:
https://parrothat.com/plp (Help section)
https://parrothat.com/plp/linuxdguides.html (Linux Distros Guides)
Source code:
https://github.com/parrothat/plp
https://redd.it/1pt1drv
@r_opensource
built a minimal neofetch-style tool in Python — feedback welcome
Hey all,
I’ve been using neofetch / fastfetch for a long time, but I wanted something
much simpler — no config files, no themes, no plugins, just a fast snapshot
of system info when I open a terminal.
So I built fetchx.
Goals:
- Minimal output by default
- Zero configuration
- No external dependencies (Python stdlib only)
- Clear modes instead of endless flags
- Works cleanly on Linux and WSL
Usage:
- `fetchx` → default system snapshot
- `fetchx --network` → network info only
- `fetchx --full` → everything fetchx can detect
It’s a single-file tool, installs system-wide with a curl command, and runs in
milliseconds.
Repo: https://github.com/v9mirza/fetchx
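For a sense of what a dependency-free snapshot involves, here is a tiny stdlib-only sketch (illustrative, not fetchx's actual code; the fields shown are just examples):

```python
import os
import platform
import shutil
import socket

def snapshot() -> dict:
    """Collect a few system facts using only the standard library."""
    total, used, free = shutil.disk_usage("/")
    return {
        "host": socket.gethostname(),
        "os": f"{platform.system()} {platform.release()}",
        "arch": platform.machine(),
        "python": platform.python_version(),
        "cpus": os.cpu_count(),
        "disk_free_gb": round(free / 1e9, 1),
    }

if __name__ == "__main__":
    for key, value in snapshot().items():
        print(f"{key:>12}: {value}")
```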
This is an early version — I’m mainly looking for feedback on:
- output choices
- missing info that should be included
- things that should not be included
Appreciate any thoughts.
https://redd.it/1pt5vkp
@r_opensource
Trigger dot dev
Can anyone help me understand how projects like trigger dot dev make money while open-sourcing their whole project? I asked Antigravity to tell me how the project was built; it seems to be simple, mostly using Redis and PostgreSQL. Are people now willing to pay more for an expert to maintain the tech than for running the tech itself?
I am trying to wrap my brain around this.
https://redd.it/1pt76vg
@r_opensource
I built a free Snapchat Memories downloader that also fixes missing capture time + GPS metadata (EXIF/XMP)
Hey everyone,
Snapchat’s “My Data” export for Memories gives you a memories\_history.html file with download links, but the downloaded photos/videos often don’t end up with correct embedded metadata (capture time and location). That makes imports into Photos / Google Photos / Lightroom messy because everything sorts by download date.
So I put together a small Python tool that:
* Parses your memories_history.html
* Downloads all your Memories media (supports the GET/POST link variants Snapchat uses)
* Extracts ZIP bundles (some filtered snaps)
* Writes proper capture date/time + GPS into the files using ExifTool (EXIF/XMP)
* Updates filesystem timestamps (helps Finder sorting on macOS)
* Supports an aggressive parallel download mode (`--concurrency`)
* Creates `manifest.csv` and a `download_state.json` so reruns can skip already-downloaded items
Repo: [https://github.com/jbisinger/Snapchat\_Memories\_Downloader](https://github.com/jbisinger/Snapchat_Memories_Downloader)
How to use (high level):
1. Export your Snapchat data: https://accounts.snapchat.com/ → My Data → Request Data → extract the ZIP → find memories_history.html
2. Install ExifTool (macOS): `brew install exiftool`
3. Install Python deps: `pip install -r requirements.txt`
4. Run: `python main.py -m memories_history.html -d ./downloads`
Optional fast mode: `python main.py -m memories_history.html -d ./downloads --concurrency 200 --delay 2`
Important notes / disclaimers:
This is for personal backups/organization. Use it at your own risk.
Snapchat links can expire; you may need to re-export if downloads fail.
High concurrency can stress your connection (and may trigger rate limiting). If you get errors, reduce `--concurrency` or increase `--delay`.
Some file formats may not accept every metadata tag consistently; the tool still downloads the media even if metadata writing fails.
I’m not affiliated with Snapchat. No warranty, no guarantees.
If you try it, I’d love feedback: performance issues, file types that break metadata, or any improvements you’d want (better filename scheme, progress UI, etc.).
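The metadata-fixing step boils down to shelling out to ExifTool once per file; a simplified sketch of that idea in Python (not the tool's exact code; the tag values and file path below are placeholders):

```python
import subprocess

def write_metadata(path: str, taken_at: str, lat: float, lon: float) -> bool:
    """Embed capture time and GPS into a media file via the exiftool CLI."""
    cmd = [
        "exiftool",
        "-overwrite_original",
        f"-DateTimeOriginal={taken_at}",           # e.g. "2021:07:04 18:23:00"
        f"-GPSLatitude={abs(lat)}",
        f"-GPSLatitudeRef={'N' if lat >= 0 else 'S'}",
        f"-GPSLongitude={abs(lon)}",
        f"-GPSLongitudeRef={'E' if lon >= 0 else 'W'}",
        path,
    ]
    result = subprocess.run(cmd, capture_output=True, text=True)
    return result.returncode == 0   # the media file is kept even if tagging fails

# hypothetical example values, not real Snapchat data
write_metadata("downloads/2021-07-04_beach.jpg", "2021:07:04 18:23:00", 48.1374, 11.5755)
```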
https://redd.it/1pt8nyl
@r_opensource