Looking for someone to fork/update a project
https://github.com/szszss/TpacTool
Ever since the latest update, this tool is no longer able to open TPAC files; it crashes when you click on any of the assets. Unfortunately, I'm not a developer, just a concerned modder who'd like to be able to extract 3D models and textures again.
https://redd.it/1og2wxy
@r_opensource
Best realistic FOSS driving simulator?
Realistic not in graphics, but realistic enough to prepare for driving school; like FlightGear, but from the world of cars. It should support steering wheels, pedals, and similar hardware.
https://redd.it/1og5tnd
@r_opensource
MouseUtils - An autoclicker for Windows and Linux
MouseUtils is an autoclicker, currently supporting Windows and Linux (X11 only), written in C++ with a Qt6 GUI.
This is the first C++ project I've released, and to be honest, the first I've made (excluding the classic "Hello, world!"). There are most likely bad practices and general mistakes in the source code; I'm a bit of a newb :P
Feel free to play around, and if you're interested in contributing, please do!
GitHub: https://github.com/AngusAU293/MouseUtils
Thank you, and have a great day!
https://redd.it/1og7bu1
@r_opensource
What are your favorite open source services?
Hey everyone! I have been de-Googling and I'm prioritizing replacing Google services with ones that are also open source. I thought it would be interesting to share our favorite open-source tools!
Mine are:
Browser: Librewolf
Email: Tuta Mail
Calendar: Tuta Calendar
Photos: Ente
Password manager: Bitwarden & KeePassXC
Google Docs: CryptPad
https://redd.it/1ofz88q
@r_opensource
What are you using for mailing lists?
So I have an open source project that has a mailing list where people can sign up to hear about new versions. It has a few hundred subscribers and I send 0-2 e-mails a month, most months nothing. Everyone on it has explicitly signed up for it.
Up until now I've been running a self-hosted phpList instance but that means I'm dealing with issues with my web host's IP address reputation etc. I'd like to move to something hosted. So question 1 is: What are people using?
MailChimp is an option. I tried phpList.com but something's wrong with my account configuration and I'm not getting a response from their support. Searching around here I found someone recommend SendFox, which looks really nice, so I thought I'd try that.
But that brings us to question 2: a lot of these services, including SendFox, require a physical address to be attached to each e-mail to comply with the CAN-SPAM Act, and I'd really rather not blast my personal address out like that. What are people doing for that? Paying for a P.O. box, or one of those services that give you an address and scan your mail for you? Or should I stay self-hosted and try to fight out the deliverability issues myself? I know I'm not the first person to run into this.
https://redd.it/1ogc7sn
@r_opensource
ROM /e/OS
Is anyone else testing this ROM?
I ran into three problems with it:
1. I can't install applications from outside the store; it says "unable to install".
2. Setting a password on my phone always fails, no matter which password type I try.
3. The map cannot find where I am with geolocation.
I haven't found a solution to any of the three yet.
If anyone has managed to fix these, please share your tips.
https://redd.it/1ogcrbd
@r_opensource
Flathub announces toolchain fixes to address longstanding license and copyright compliance issues
https://docs.flathub.org/blog/enhanced-license-compliance-tools
https://redd.it/1ogdab6
@r_opensource
My First Open Source Project: GitRead
Hey everyone! 👋
I'm excited to share my first open-source project with the community — GitRead, an AI-powered README generator that helps developers create professional project documentation in seconds.
🔧 GitRead analyzes your GitHub repository, generates a high-quality README, and allows you to customize it with a live Markdown editor and preview. Whether you're launching a new project or improving an existing one, GitRead can save you time and make your repo shine!
This project means a lot to me — it’s my first open-source contribution and I'm really looking forward to feedback from other developers. I'm super happy (and a little nervous 😅).
# 💻 GitHub Repository
👉 https://github.com/PoRiFiRo123/gitread
## 🌐 Live Demo
👉 https://git-read.vercel.app
https://redd.it/1ogegx5
@r_opensource
🚀 Built a tool to make open source contributions easier — looking for feedback!
Hey everyone! 👋
I’ve been working on something called Open Source Contribution Captain — a free tool that helps newcomers find beginner-friendly GitHub issues matched to their tech stack.
It also uses AI-generated summaries to explain what each issue needs, what’s been tried, and any blockers — so you can skip hours of manual digging and get started faster.
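Matching beginner-friendly issues to a tech stack can be done with GitHub's documented search qualifiers. Here's a minimal, hypothetical sketch (not the tool's actual code) of building such a query:

```python
# Minimal sketch: build a GitHub issue-search query for beginner-friendly
# issues in a given language, using GitHub's documented search qualifiers.
# (Hypothetical helper, not code from Open Source Contribution Captain.)

def build_issue_query(language, labels=("good first issue",)):
    """Return a GitHub search query string for open, unassigned issues."""
    parts = ["is:issue", "is:open", "no:assignee", f"language:{language}"]
    parts += [f'label:"{label}"' for label in labels]
    return " ".join(parts)

query = build_issue_query("python", labels=("good first issue", "help wanted"))
print(query)
# The resulting string can be passed as the q parameter to
# GET https://api.github.com/search/issues
```

The same query string also works in GitHub's web search box, which makes the filtering logic easy to verify by hand.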
🌐 Try it here: https://opencontributioncaptain.com/
I’d really appreciate your feedback —
Does it actually help you find issues more easily?
What can be improved or added?
Thanks for checking it out! ⚓️
https://redd.it/1ogf0rq
@r_opensource
miniLLM: MIT Licensed pretrain framework for language models
It's been a long time since I published anything open source (which honestly was a shame for me); then I remembered how much I loved the idea of nanoGPT by Andrej Karpathy. Recently, however, most of my pipelines and AI-backed projects were built on Qwen models, so I thought to myself: what happens if I do the same thing with Qwen?
And here is MiniLLM, which works more like a "framework" for pretraining rather than a standalone model. That said, I have made a 360-million-parameter model with the code, and it works fine (it understands English, although it hallucinates a lot).
So here is the code:
https://github.com/prp-e/minillm
And I'd love to see your comments, contributions and opinions on the project.
https://redd.it/1oghy0q
@r_opensource
Fully open source peer-to-peer 4chan alternative built on IPFS
https://github.com/plebbit/5chan
https://redd.it/1ogbwjx
@r_opensource
I built a self-hosted alternative to Google's Video Intelligence API after spending about $450 analyzing my personal videos (MIT License)
Hey r/opensource !
I have 2TB+ of personal video footage accumulated over the years (mostly outdoor GoPro footage). Finding specific moments was nearly impossible – imagine trying to search through thousands of videos for "that scene where @ilias was riding a bike and laughing."
I tried Google's Video Intelligence API. It worked perfectly... until I got the bill: about $450 for just a few videos. Scaling to my entire library would cost $1,500+, plus I'd have to upload all my raw personal footage to their cloud. And here's the bill:
(https://preview.redd.it/i-built-a-self-hosted-alternative-to-googles-video-v0-lzejfn9i3gxf1.png?width=607&format=png&auto=webp&s=0ed90647a9832c0c469a4cabdb4864f0a959ca3b)
So I built Edit Mind – a completely self-hosted video analysis tool that runs entirely on your own hardware.
# What it does:
Indexes videos locally: Transcribes audio, detects objects (YOLOv8), recognizes faces, analyzes emotions
Semantic search: Type "scenes where u/John is happy near a campfire" and get instant results
Zero cloud dependency: Your raw videos never leave your machine
Vector database: Uses ChromaDB locally to store metadata and enable semantic search
NLP query parsing: Converts natural language to structured queries (uses Gemini API by default, but fully supports local LLMs via Ollama)
Rough cut generation: Select scenes and export as video + FCPXML for Final Cut Pro (coming soon)
# The workflow:
1. Drop your video library into the app
2. It analyzes everything once (takes time, but only happens once)
3. Search naturally: "scenes with @sarah looking surprised"
4. Get results in seconds, even across 2TB of footage
5. Export selected scenes as rough cuts
# Technical stack:
Electron app (cross-platform desktop)
Python backend for ML processing (face_recognition, YOLOv8, FER)
ChromaDB for local vector storage
FFmpeg for video processing
Plugin architecture – easy to extend with custom analyzers
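The "index scenes once, query many times" flow above can be illustrated without the real dependencies. Edit Mind uses ChromaDB with real embeddings; in this dependency-free sketch, a bag-of-words vector and cosine similarity stand in for the embedding model, and all names are illustrative rather than Edit Mind's actual API:

```python
# Dependency-free sketch of indexing per-scene metadata documents and
# ranking them against a natural-language query. A term-frequency vector
# plays the role of an embedding; ChromaDB would replace `index` below.
import math
from collections import Counter

def embed(text):
    """Toy 'embedding': a bag-of-words term-frequency vector."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a if t in b)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Index step: one metadata document per analyzed scene
# (transcript words, detected objects, recognized faces, emotions).
scenes = {
    "clip01_0042": "john happy campfire night laughing",
    "clip07_0113": "sarah surprised kitchen morning",
    "clip12_0007": "bike trail riding laughing forest",
}
index = {scene_id: embed(doc) for scene_id, doc in scenes.items()}

def search(query, k=2):
    q = embed(query)
    ranked = sorted(index, key=lambda s: cosine(q, index[s]), reverse=True)
    return ranked[:k]

print(search("scenes where john is happy near a campfire"))
```

With a real embedding model the query no longer needs to share literal words with the scene metadata, which is what makes the search "semantic".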
# Self-hosting benefits:
Privacy: Your personal videos stay on your hardware
Cost: Free after setup (vs $0.10/min on GCP)
Speed: No upload/download bottlenecks
Customization: Plugin system for custom analyzers
Offline capable: Can run 100% offline with local LLM
# Current limitations:
Needs decent hardware (GPU recommended, but CPU works)
Face recognition requires initial training (adding known faces)
First-time indexing is slow (but only done once)
Query parsing uses Gemini API by default (easily swappable for Ollama)
# Why share this:
I can't be the only person drowning in video files. Parents with family footage, content creators, documentary makers, security camera hoarders – anyone with large video libraries who wants semantic search without cloud costs.
Repo: https://github.com/iliashad/edit-mind
Demo: https://youtu.be/Ky9v85Mk6aY
License: MIT
Built this over a few weekends out of frustration. Would love your feedback on architecture, deployment strategies, or feature ideas!
https://redd.it/1ogj4in
@r_opensource
I built a simple Discord bot that notifies you of new GitHub issues/PRs (and lets you filter by label and type)
https://github.com/Easonliuuuuu/Github-issue-discord-bot.git
I built it to be useful for everyone. For people who have just started contributing to open source, you can set up a personal tracker for "good first issue" labels across all your favorite repos. For seasoned developers, you can set it up in your team's channel to monitor all new PRs, or filter for issues with a specific "bug" or "needs-review" label.
It's a Python bot, and it's 100% open-source.
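The label and issue/PR filtering can be sketched against GitHub's REST issues payload, where pull requests appear alongside issues and carry a "pull_request" key. This is an illustrative, offline sketch on sample data, not the bot's actual code:

```python
# Sketch of filtering a GitHub REST /issues payload by label and type.
# In that payload, PRs are list items with a "pull_request" key, and
# labels are objects with a "name" field. (Illustrative only.)

def filter_items(items, want_label=None, want_type="issue"):
    out = []
    for item in items:
        is_pr = "pull_request" in item
        if want_type == "issue" and is_pr:
            continue
        if want_type == "pr" and not is_pr:
            continue
        names = {lbl["name"] for lbl in item.get("labels", [])}
        if want_label and want_label not in names:
            continue
        out.append(item["title"])
    return out

sample = [
    {"title": "Fix typo in docs", "labels": [{"name": "good first issue"}]},
    {"title": "Refactor core", "labels": [{"name": "bug"}], "pull_request": {}},
    {"title": "Crash on startup", "labels": [{"name": "bug"}]},
]
print(filter_items(sample, want_label="good first issue"))   # issues only
print(filter_items(sample, want_label="bug", want_type="pr"))
```

A bot like this would fetch the payload periodically, remember the newest item it has seen, and post only items that arrived since.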
Invitation Link
Let me know what you think!
https://redd.it/1ogiqna
@r_opensource
I built a free tool to visualize your Google Timeline data (100% in your browser, no data uploaded)
TL;DR: Export your Google Timeline data, drag it into this tool, and see all your location history on an interactive map. Everything runs in your browser - your data never leaves your computer
I'd attach a screenshot, but this sub unfortunately doesn't allow it. Here's a link to the screenshot though: https://pbs.twimg.com/media/G3dTDt6WQAAtynK?format=jpg&name=4096x4096
# Why I built this
Google recently killed their web-based Timeline viewer and started limiting how long they keep your location history. When you export your data, you just get JSON files that are basically useless without a way to visualize them.
I mean, I already have Dawarich that could do pretty much the same, but it heavily relies on backend processing, so for a browser-based quick viewer, I had to rebuild it from scratch.
So, my Timeline Visualizer can:
Handle massive files (tested with 600k+ GPS points)
Not send my location data to yet another server
Actually work without crashing my browser
# How it works
Drop your Google Timeline JSON files into the browser. The tool:
1. Auto-detects the format (Records.json, Semantic Timeline, Location History, etc.)
2. Processes everything locally in JavaScript
3. Streams points to an interactive map in batches
4. Shows your location history with activity paths
For a 170 MB file with 630,000 points, it takes about 7-8 seconds to process on my MacBook Pro M1 Pro.
# Privacy first
Your data never leaves your browser. No uploads, no tracking, no servers. All processing happens in JavaScript on your device. Close the tab and your data is gone.
It's open source too, so you can verify exactly what it does: [GitHub](https://github.com/dawarich-app/site/blob/main/src/components/TimelineMap.js)
# Features
Year filtering - Too many points? Filter by year. The tool defaults to showing just your earliest year (usually 40-60k points instead of 600k+)
Visits - Side panel shows only actual visits/places, not every GPS ping
Activity paths - See your routes on the map
Auto-zoom - Switch years and the map automatically fits to that data
Dark mode - Because of course
# Supported formats
Everything Google exports:
Records.json (raw GPS pings)
Semantic Timeline (YYYY_MONTH.json files)
Location History (newer phone exports)
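The Records.json format above stores coordinates as integers scaled by 1e7 (the "latitudeE7"/"longitudeE7" fields). A minimal Python sketch of parsing and year-filtering such a file might look like this; field names vary between export vintages (older files use "timestampMs", for instance), so treat it as a sketch, not the visualizer's actual code:

```python
# Minimal sketch: parse a Records.json-style export and filter by year.
# Coordinates are stored as degrees * 1e7 in integer fields; newer
# exports use ISO-8601 "timestamp" strings.
import json
from datetime import datetime

sample = json.loads("""
{"locations": [
  {"latitudeE7": 524123456, "longitudeE7": 135678901, "timestamp": "2019-06-01T12:00:00Z"},
  {"latitudeE7": 524200000, "longitudeE7": 135700000, "timestamp": "2021-03-15T08:30:00Z"}
]}
""")

def points_for_year(data, year):
    out = []
    for loc in data.get("locations", []):
        ts = datetime.fromisoformat(loc["timestamp"].replace("Z", "+00:00"))
        if ts.year == year:
            # Scale the E7 integers back to decimal degrees.
            out.append((loc["latitudeE7"] / 1e7, loc["longitudeE7"] / 1e7))
    return out

print(points_for_year(sample, 2019))  # one (lat, lon) pair from 2019
```

For a 600k-point file the same loop works, but streaming the points to the map in batches (as the tool does) keeps the browser responsive.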
# Getting your data
Instructions are on the tool page, but basically:
Google Takeout - takeout.google.com (doesn't work for everyone anymore)
Android - Google Maps → Settings → Location → Location Services → Timeline → Export
iOS - Google Maps → Settings → Personal Content → Export Timeline data
# Limitations
Bigger files take time to process. My own Records.json is ~170 MB with 630,000 points and it processed quickly, but it always depends on your hardware and file size. Older computers with limited RAM might struggle with multiple huge files.
Try it: dawarich.app/tools/timeline-visualizer
Code: GitHub
Since I created Dawarich, I'm already familiar with the JSON file schemas, but I still used locationhistoryformat.com to double-check some details about the different formats Google uses. It lacks a schema for the newer phone exports, though, so I used jq to inspect those files directly.
https://redd.it/1ogkrnc
@r_opensource
Gisia: An Open Source Lightweight Self-Hosted DevOps Platform for Your Projects
https://github.com/gisiahq/gisia
https://redd.it/1ogmmk5
@r_opensource
I built an open-source Steam automation tool (Steam Game Idler) as an alternative to ArchiSteamFarm, Steam Achievement Manager, and Idle Master
**TL;DR:** Created [Steam Game Idler](https://github.com/zevnda/steam-game-idler) \- a modern, user-friendly tool for farming Steam trading cards, managing achievements, and boosting playtime. Fully open-source, no telemetry, uses official Steam APIs.
# The Problem
If you're a PC gamer, you know Steam's trading card system is tedious. You need to idle games for hours to get card drops. For 500+ game libraries, that's impractical.
Existing solutions like **ArchiSteamFarm** are powerful but complex (JSON configs, CLI-heavy, designed for headless servers). **Idle Master** was great but was abandoned in 2016, and **Idle Master Extended** has its own bugs.
# The Solution
I built **Steam Game Idler (SGI)** to be:
* **Modern stack:** Tauri + TypeScript + Rust (lightweight, fast, native)
* **User-friendly:** Actual GUI, no config files needed
* **All-in-one:** Card farming, achievement management, and playtime boosting in one app
* **Security-first:** AES-256-GCM encryption, official Steamworks SDK, zero telemetry
* **Fully open-source:** [Audit the code yourself](https://github.com/zevnda/steam-game-idler)
# Features
* Idle up to 32 games simultaneously (Steam's limit)
* Auto-unlock achievements with human-like timing
* Manually lock/unlock any achievement
* Trading card inventory manager
* Playtime booster
* Real-time notifications for card drops
# Why Open Source Matters
Steam automation tools have a bad rep - some are malware, others are sketchy. I wanted full transparency:
* Anyone can review the source code
* Build from source if you don't trust binaries
* Community contributions welcome
* No hidden telemetry or data collection
The project has 300+ GitHub stars and active issues and discussions.
# Tech Stack
* **Frontend:** TypeScript + React (NextJS)
* **Backend:** Rust (via Tauri)
* **APIs:** Official Steamworks SDK (not reverse-engineered hacks) and a custom C# tool
* **Security:** Custom XOR-based obfuscation for local credential storage
* **Platform:** Windows
# Current State
* ✅ Stable release (v2.1.20)
* ✅ Active development
* ✅ Full documentation at [steamgameidler.com](https://steamgameidler.com)
* ❌ Linux/Mac support
# Lessons Learned
Building this taught me a lot about:
* Working with proprietary APIs (Steamworks is... interesting)
* Balancing power-user features with beginner UX
* Security best practices for local credential storage
* Why Tauri is awesome for desktop apps (smaller bundle size than Electron, native performance)
# Get Involved
* **GitHub:** [github.com/zevnda/steam-game-idler](https://github.com/zevnda/steam-game-idler)
* **Docs:** [steamgameidler.com](https://steamgameidler.com)
* **Issues/PRs welcome!**
Happy to answer questions about the tech stack, Steam APIs, or anything else. Also open to feedback and feature requests.
**Note:** This is a personal project I use myself. Steam's ToS is vague on automation, so use at your own risk. No bans reported in 10+ years of similar tools existing, but YMMV.
https://redd.it/1ognzvr
@r_opensource
**TL;DR:** Created [Steam Game Idler](https://github.com/zevnda/steam-game-idler) \- a modern, user-friendly tool for farming Steam trading cards, managing achievements, and boosting playtime. Fully open-source, no telemetry, uses official Steam APIs.
# The Problem
If you're a PC gamer, you know Steam's trading card system is tedious. You need to idle games for hours to get card drops. For 500+ game libraries, that's impractical.
Existing solutions like **ArchiSteamFarm** are powerful but complex (JSON configs, CLI-heavy, designed for headless servers). **Idle Master** was great but abandoned in 2016 and **Idle Master Extended** has its bugs too.
# The Solution
I built **Steam Game Idler (SGI)** to be:
* **Modern stack:** Tauri + TypeScript + Rust (lightweight, fast, native)
* **User-friendly:** Actual GUI, no config files needed
* **All-in-one:** Card farming, achievement management, and playtime boosting in one app
* **Security-first:** AES-256-GCM encryption, official Steamworks SDK, zero telemetry
* **Fully open-source:** [Audit the code yourself](https://github.com/zevnda/steam-game-idler)
# Features
* Idle up to 32 games simultaneously (Steam's limit)
* Auto-unlock achievements with human-like timing
* Manually lock/unlock any achievement
* Trading card inventory manager
* Playtime booster
* Real-time notifications for card drops
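The "human-like timing" for achievement unlocks presumably means randomized gaps rather than firing everything at once. As a rough illustration only (the function and parameter names here are mine, not SGI's actual code), such a schedule could be generated like this:

```typescript
// Hypothetical sketch of "human-like" unlock timing via jittered delays.
// `jitteredDelays` is illustrative and not part of the SGI codebase.

/** Produce `count` delays (ms) uniformly jittered around a base interval. */
function jitteredDelays(count: number, baseMs: number, jitterMs: number): number[] {
  const delays: number[] = [];
  for (let i = 0; i < count; i++) {
    // Offset in [-jitterMs, +jitterMs) so unlocks never land on a fixed beat.
    const offset = (Math.random() * 2 - 1) * jitterMs;
    delays.push(Math.max(0, Math.round(baseMs + offset)));
  }
  return delays;
}

// Five unlocks roughly a minute apart, each nudged by up to ±15s;
// awaiting these delays between unlocks spreads achievements out.
const schedule = jitteredDelays(5, 60_000, 15_000);
```

Awaiting each delay before the next unlock call is what makes the pattern look organic instead of scripted.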
# Why Open Source Matters
Steam automation tools have a bad rep - some are malware, others are sketchy. I wanted full transparency:
* Anyone can review the source code
* Build from source if you don't trust binaries
* Community contributions welcome
* No hidden telemetry or data collection
The project has 300+ GitHub stars and active issues and discussions.
# Tech Stack
* **Frontend:** TypeScript + React (NextJS)
* **Backend:** Rust (via Tauri)
* **APIs:** Official Steamworks SDK (not reverse-engineered hacks) and a custom C# tool
* **Security:** Custom XOR-based obfuscation for local credential storage
* **Platform:** Windows
# Current State
* ✅ Stable release (v2.1.20)
* ✅ Active development
* ✅ Full documentation at [steamgameidler.com](https://steamgameidler.com)
* ❌ Linux/Mac support
# Lessons Learned
Building this taught me a lot about:
* Working with proprietary APIs (Steamworks is... interesting)
* Balancing power-user features with beginner UX
* Security best practices for local credential storage
* Why Tauri is awesome for desktop apps (smaller bundle size than Electron, native performance)
# Get Involved
* **GitHub:** [github.com/zevnda/steam-game-idler](https://github.com/zevnda/steam-game-idler)
* **Docs:** [steamgameidler.com](https://steamgameidler.com)
* **Issues/PRs welcome!**
Happy to answer questions about the tech stack, Steam APIs, or anything else. Also open to feedback and feature requests.
**Note:** This is a personal project I use myself. Steam's ToS is vague on automation, so use at your own risk. No bans reported in 10+ years of similar tools existing, but YMMV.
https://redd.it/1ognzvr
@r_opensource
GitHub - timeplus-io/proton: Fastest SQL pipeline engine in a single C++ binary, for stream processing, analytics, observability and AI.
https://github.com/timeplus-io/proton
https://redd.it/1ogrbv6
@r_opensource
built an app that tracks the world’s top artists
hey everyone,
i’ve been working on a small project called world's top artists: it tracks the world’s top 500 artists, updated daily, with insights, real-time stats and discovery features.
the data comes from both spotify and apple music, aggregated into one place.
it includes a bunch of cool views:
– a world map showing top cities for listeners
– a constellation graph showing how artists are connected (based on related artists)
– a “former 500” page that keeps track of artists who dropped out of the chart
– artist and music discovery features based on daily trends
right now the app pulls the top 500 from **kworb.net**, but I also keep a separate file of around 15,000 potential artists who could enter the top list.
I chose this approach because for now it’s a showcase / mvp, and I didn’t want to do heavy scraping.
if the app shows potential and people enjoy it, I plan to move it to a proper server and domain.
I already have an algorithm that can fetch the top 500 directly from spotify without relying on other sources.
the interesting part is that the whole thing is fully client-side, so no backend at all.
all data is stored as static json files on github, and a script runs every 24h via github actions to rebuild and push the new data.
it's fast, lightweight, and surprisingly capable for something that's just html, json and javascript, thanks to next.js export optimization :D
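Since everything is static JSON, a view like the "former 500" page can be computed entirely in the browser by diffing two daily snapshots. A minimal sketch of the idea (the `ArtistEntry` shape and function name are my assumptions, not the repo's actual schema):

```typescript
// Sketch only: diff two daily top-500 snapshots to find artists who
// dropped out of the chart. The ArtistEntry shape is assumed and may
// not match the actual JSON files in the spotify-artists repo.
interface ArtistEntry { id: string; name: string; rank: number }

/** Artists present in yesterday's snapshot but absent from today's. */
function droppedOut(yesterday: ArtistEntry[], today: ArtistEntry[]): ArtistEntry[] {
  const todayIds = new Set(today.map(a => a.id));
  return yesterday.filter(a => !todayIds.has(a.id));
}
```

The snapshots themselves would just be the static files the Actions job commits, loaded with a plain `fetch` against the exported site.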
link: https://music.eduardlupu.com
github: https://github.com/EduardLupu/spotify-artists
i’d really love to hear any kind of feedback: things you’d add, improve, or explore.
I want to keep working on it, but I’m kind of short on new ideas at the moment.
what features do you think would be fun or interesting to see next?
https://redd.it/1ogrv9w
@r_opensource
🌱 Just released my first small web dev project — still learning, but proud of how it’s coming along!
👋 Hey everyone!
I’ve been learning web development for a while (still a student, trying to get better every day), and I finally decided to share one of my first small projects.
It’s a simple web page I built to practice HTML, CSS, and JavaScript — nothing huge, but it helped me understand layouts, responsive design, and a bit of interactivity.
The project isn’t perfect (far from it 😅), but I’d love to get some feedback or suggestions from more experienced developers — especially on how to structure my code better or make the design more modern.
🔗 GitHub repo: https://github.com/SplashyFrost/Urban-Threads-Streetwear
I’m really open to learning and improving, so any comment or tip would mean a lot 🙏
Thanks for taking the time to check it out!
https://redd.it/1oguq9r
@r_opensource
Lightweight, minimalist/customizable software for writing?
I work better on paper, however it is wasteful and my wrist resents me.
For notes, I use Notepad++ with everything stripped down so it's nothing but a blank window, but I don't quite feel compelled to write there. LibreOffice lags like hell on my PC (3000G, 8 GB RAM) past a few dozen pages, and while I like Google Docs, especially because it works in the cloud, it also underperforms past a certain point and sometimes feels a bit clunky.
I don't need a lot of formatting options. What I want is:
- Reliable autosave (Notepad++ has failed me more than once in several different ways)
- Lightweight (I like how Notepad++ lets me instantly scroll up and down a txt file several MB in size with no issue)
- Sanity (basically nothing breaking if I copy-paste text into it from elsewhere)
- No distractions/clutter
Any advice?
https://redd.it/1ogx3lf
@r_opensource
Best open source video editor for hierarchical video clip management between multiple subfolders
There are two folders below my main project folder:
Folder01 contains its own separate project file named “project01” with its own clips.
Folder02 contains “project02”, a separate project file that only contains a few clips and nothing else.
We need to include project01 as part of project02's timeline and integrate that file into our project's timeline before exporting it as one single mp4 file as part of our greater project. Is there a way to do this without having to export project01 first in open source video editing programs like Shotcut, Kdenlive, and OpenShot?
Which of these open source video editing programs might help us achieve complex video editing tasks involving two or more subfolders like these? (By importing, I mean being able to include a project file and treat it as if it were one single clip from our timeline's perspective.)
https://redd.it/1ogywka
@r_opensource