Opensource by Reddit – Telegram
Opensource by Reddit
21 subscribers
5 photos
2 videos
9.59K links
Reddit's ♨️ take on Open Source Technology.

Join the discussion ➡️ @opensource_chats

Channel Inquiries ➡️ @group_contacts_bot

👄 TIPS ➡️➡️➡️ https://news.1rj.ru/str/addlist/mB9fRZOHTUk5ZjZk

🌈 made possible by
@reddit2telegram
@r_channels
Best alternative to Simplewall

After the creator of Simplewall left the project:

What alternative for Windows Firewall + DNS do you recommend?

https://redd.it/1n35vbx
@r_opensource
Termix - Self hosted web-based SSH terminal, reverse tunnels, and file editing

GitHub Repo: https://github.com/LukeGus/Termix

For the past couple of months, I have been working on my free self-hosted passion project, Termix.

Termix is an open-source, forever-free, self-hosted all-in-one server management platform. It provides a web-based solution for managing your servers and infrastructure through a single, intuitive interface. Termix offers SSH terminal access, SSH tunneling capabilities, and remote file editing, with many more tools to come.

Complete Feature List:

* **SSH Terminal Access** - Full-featured terminal with split-screen support (up to 4 panels) and a tab system
* **SSH Tunnel Management** - Create and manage SSH tunnels with automatic reconnection and health monitoring
* **Remote File Editor** - Edit files directly on remote servers with syntax highlighting and file management (upload, rename, delete)
* **SSH Host Manager** - Save, organize, and manage your SSH connections with tags and folders
* **Server Stats** - View CPU, memory, and disk usage on any SSH server
* **User Authentication** - Secure user management with admin controls and OIDC support, with more auth types planned
* **Modern UI** - Clean interface built with React, Tailwind CSS, and shadcn/ui

https://redd.it/1n35so1
@r_opensource
Drowning in pull requests from contributors with wildly different code quality

I maintain an open source project and get 10-15 PRs weekly from contributors ranging from complete beginners to senior engineers. The quality variance is insane.

Some PRs are production-ready; others introduce bugs that would crash the entire system. I spend more time reviewing and providing feedback than actually working on features. It's becoming unsustainable as a volunteer effort.

The challenge is being educational without being discouraging. I want to help beginners learn but also need to maintain project quality. I can't just auto-reject low-quality PRs, but I can't merge everything either.

I started using greptile to do initial screening and provide consistent feedback formatting. It helps catch obvious issues and gives me a starting point for more detailed reviews. I'm still working on finding the right balance between automation and human mentoring.
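The kind of initial screening described above can be sketched independently of greptile (whose API I'm not assuming) as a simple checklist over a PR's changed file paths. Everything here is a hypothetical illustration, not any tool's real interface:

```python
# Hypothetical pre-review gate (not greptile's API): flag PRs whose diff
# touches source files but no test files, and PRs that are very large.

def screen_pr(changed_files):
    """Return human-readable warnings for a PR, given its changed file paths."""
    warnings = []
    src = [f for f in changed_files
           if f.endswith(".py") and not f.startswith("tests/")]
    tests = [f for f in changed_files if f.startswith("tests/")]
    if src and not tests:
        warnings.append("source changed but no tests touched")
    if len(changed_files) > 50:
        warnings.append("very large PR; consider splitting")
    return warnings

print(screen_pr(["mylib/core.py"]))                          # → ['source changed but no tests touched']
print(screen_pr(["mylib/core.py", "tests/test_core.py"]))    # → []
```

A gate like this won't judge code quality, but it gives every contributor the same first round of feedback before a human review starts.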

How do other maintainers handle this? What's your process for managing PR quality at scale while staying welcoming to new contributors?

https://redd.it/1n3c097
@r_opensource
Java Project to Contribute

Hey everyone
I’m Prakash, a software developer at Oracle with ~2 years of experience. I’m really interested in contributing to OpenSearch and would like to start with a project/module that’s both beginner-friendly and offers good learning opportunities.

I initially tried diving into the main OpenSearch repo, but found it a bit overwhelming 😅. Could you suggest some good sub-projects, plugins, or areas where new contributors usually start and can learn effectively?

My priority is to contribute to Java-based projects.

https://redd.it/1n3ckjy
@r_opensource
Scenario-driven simulator for async backends (design before you code)

Hello!

I’d like to share AsyncFlow, an open-source scenario-driven simulator for asynchronous distributed backends.

With AsyncFlow you can do something like:

Declare a topology (users → client → load balancer → servers) → Run it → Inspect results (latency, queues, RAM, failures).

That’s the core idea: simulate your async system before it exists in production.

🔹 What AsyncFlow Does

• Scenario-driven simulations → inject network RTT, jitter, outages, or resource caps.

• Event-loop aware servers → CPU blocks, RAM stays allocated, I/O yields like real async frameworks.

• Network edges → model latency distributions, spikes, or degraded links.

• Metrics out of the box → p95/p99 latency, throughput, ready queues, RAM, sockets under pressure.

• Event injection → simulate what-if dynamics (traffic spikes, server crashes, link degradation).
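To make the core idea concrete, here is a dependency-free sketch of the kind of arrival/queue/service loop that produces the p50/p95 latency metrics mentioned above. All names and numbers are mine, not AsyncFlow's API:

```python
import random

def simulate(num_requests=1000, rtt_ms=20.0, service_ms=5.0, seed=42):
    """Toy single-server queue: latency = network RTT + queueing wait + service."""
    rng = random.Random(seed)
    clock = 0.0            # arrival time of the current request
    server_free_at = 0.0   # when the single server next becomes idle
    latencies = []
    for _ in range(num_requests):
        clock += rng.expovariate(1 / 6.0)         # ~6 ms mean inter-arrival time
        start = max(clock, server_free_at)        # queue if the server is busy
        server_free_at = start + rng.expovariate(1 / service_ms)
        latencies.append(rtt_ms + (server_free_at - clock))
    latencies.sort()
    return latencies[len(latencies) // 2], latencies[int(0.95 * len(latencies))]

p50, p95 = simulate()
print(f"p50={p50:.1f} ms  p95={p95:.1f} ms")
```

A real simulator layers topology, RTT distributions, resource caps, and failure injection on top of exactly this kind of event loop.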

🔹 Who Is This For?

• Planners → test capacity and resilience trade-offs pre-deploy.

• Learners → see how async event loops behave under load.

• Educators → teach distributed systems with reproducible scenarios.

Repo: https://github.com/AsyncFlow-Sim/AsyncFlow

I’d love feedback on:

• Which scenarios (failures, network anomalies, overloads) would you find most useful?

• What metrics would help you the most?

• Any OSS tips on docs and examples.

Thanks, happy to answer questions.


https://redd.it/1n3iknv
@r_opensource
Maintaining an Open Source Project in the Times of AI Coding

None of this text was written or reviewed by AI. All typos and mistakes are mine and mine alone.

After reviewing and merging dozens of PRs by external contributors who co-wrote them with AI (predominantly Claude), I thought I'd share my experiences and speculate on the state of vibe-coded projects.

tl;dr:

On one hand, I think writing and merging contributions to OSS got slower due to the availability of AI tools. It is faster to get to some sorta-working, sorta-OK-looking solution, but the review process and the ironing out of details and bugs take much longer than if the code had been written entirely without AI. I also think there would be less overall frustration on both sides. On the other hand, I think without Claude we simply wouldn't have these contributions. The extreme speed to an initial pseudo-solution and the pseudo-addressing of review comments are addictive and are probably the only reason why people consider writing a contribution. So I guess a sort of win overall?

Now the longer version with some background. I am the dev of Serena MCP, where we use language servers to provide IDE-like tools to agents. In the last months, the popularity of the project exploded and we got tons of external contributions, mainly support for more languages. Serena is not a very complex project, and we made sure that adding support for a new language is not too hard. There is a detailed guideline on how to do that, and it can be done in a test-driven way.

Here is where external contributors working with Claude show the benefits and the downsides. Due to the instructions, Claude writes some tests and spits out initial support for a new language really quickly. But it will do anything to let the tests pass - including horrible levels of cheating. I have seen code where:

1. Tests are simply skipped if the asserts fail
2. Tests that only check trivialities, like isinstance(output, list), instead of anything useful
3. Mocks used instead of testing real implementations
4. If a problem appears, instead of fixing the configuration of the language server, Claude will write horrible hacks and workarounds to "solve" a non-existing problem. Tests pass, but the implementation is brittle, wrong, and unnecessary
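The second cheating pattern is worth seeing side by side with a real test. Below is an illustration using a toy `find_symbols()` helper; the helper and its behavior are my invention for demonstration, not Serena's actual API:

```python
# Cheating pattern #2 vs. a meaningful test, using a toy helper.

def find_symbols(source):
    """Toy stand-in: return the names of functions defined in source."""
    return [line.split()[1].split("(")[0]
            for line in source.splitlines()
            if line.strip().startswith("def ")]

SOURCE = "def foo():\n    pass\ndef bar():\n    pass\n"

# Cheating test: passes even if find_symbols returns [] or garbage names.
def test_trivial():
    assert isinstance(find_symbols(SOURCE), list)

# Real test: pins down the actual behavior the feature promises.
def test_real():
    assert find_symbols(SOURCE) == ["foo", "bar"]

test_trivial()
test_real()
print("both tests pass")
```

The trivial test would keep passing after almost any regression, which is exactly why it needs to be caught in review.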

No human would ever write code this way. As you might imagine, the review process is often tedious for both sides. When I commented on a hack, the PR authors were sometimes not even aware that it was present and couldn't explain why it was necessary. The PR in the end becomes a ton of commits (we always have to squash) and takes quite a long time to complete. As I said, without Claude it would probably be faster. But then again, without Claude it would probably not happen at all...

If you have made it this far, here are some practical personal recommendations, both for maintainers and for general users of AI for coding.

1. Include extremely detailed instructions on how tests should be written, and make clear that hacks and mocks have to be avoided. Shout at Claude if you must (that helps!).
2. Roll up your sleeves and put human effort into tests; maybe go through the effort of really writing them before the feature. Pretend it's 2022.
3. Before starting with AI, consider whether some simple copy-paste and minor adjustments might get you to an initial implementation faster. You will also feel more like you own the code.
4. Know when to cut your losses. If you notice that you lose a lot of time with Claude, consider going back and doing some things on your own.
5. For maintainers: be aware of the typical cheating behavior of AI and be extremely suspicious of workarounds. Review the tests very thoroughly, more thoroughly than you'd have done a few years ago.

Finally, I don't even want to think about projects by vibe coders who are not seasoned programmers... After some weeks of development, it will probably be sandcastles with a foundation based on fantasy soap bubbles that will collapse with the first blow of the wind and can't be fixed.

Would love to
🚀 Open Source AI Smartwatch Project - Looking for Collaborators!

TL;DR: Building an open source AI model for smartwatches as a side project. I'm learning as I go and looking for fellow enthusiasts to join the journey!

# About This Project

This is my passion side project to create an open source AI-powered smartwatch platform. I'm not an expert in the field, but I'm deeply committed to learning and building something meaningful together. Looking for like-minded people who want to explore wearable AI technology!

# What We're Building

An open source AI-powered smartwatch platform that democratizes wearable intelligence. Our goal is to create accessible, privacy-focused AI models that can run efficiently on smartwatch hardware while providing meaningful insights and interactions.

# What We Need

🧠 AI/ML Engineers - Model optimization, edge AI, TinyML
📊 Data Scientists - Health analytics, user insights
📚 Technical Writers - Documentation, tutorials
🤝 Fellow Learners - Anyone excited to explore and learn together!

# My Commitment

📚 Always Learning - I'm actively studying AI, embedded systems, and wearable tech
Dedicated Time - This is my main side project, and I'm committed to consistent progress
🎯 Long-term Vision - Not just a weekend hack; I'm in this for the long haul
💬 Open Communication - Regular updates, transparent about challenges and progress

# Why Join Us?

100% Open Source - All code, models, and hardware designs will be freely available
Learn Together - Perfect for anyone wanting to dive into edge AI and wearables
No Pressure Environment - We're all learning and figuring this out together
Real Impact - Build something people will actually use and benefit from
Flexible Commitment - Contribute as much or as little as your schedule allows
Portfolio Builder - Great project to showcase your growing skills

# Current Status

🔥 Early stage - perfect time to shape the project direction
📋 Defining architecture and technical requirements
🎯 First milestone: Basic AI inference on smartwatch hardware
📖 Learning and researching best practices as we go

# Perfect If You're...

A student or professional wanting to explore wearable AI
Someone with expertise willing to mentor and guide
A beginner excited to learn alongside others
Anyone with even basic skills in relevant areas
Just curious about the intersection of AI and wearables!

# Get Involved

DM Me or Comment!

New to this stuff too? Perfect! Let's learn and build together. I believe the best projects come from passionate people willing to figure things out as they go.

Have 5 minutes? Star our repo and share with anyone who might be interested in this learning journey!

Let's build the future of wearable AI - one commit at a time! 🤝

https://redd.it/1n3u8jj
@r_opensource
Seeking feedback on rtcio: A Modern, Type-Safe WebRTC library with a decoupled signaler

For the past few weeks, I've been working on a new open-source library called rtcio. My goal was to create a WebRTC wrapper with a simple, high-level API similar to Socket.IO, but built from the ground up with modern TypeScript and a more flexible architecture than existing solutions.

Before I push for an alpha release, I would love to get some feedback from experienced TypeScript and WebRTC developers on the overall structure, API design, and any potential improvements you might see.

GitHub Repo: https://github.com/dbidwell94/rtc.io/tree/refactor

NOTE: This is not on the master branch yet as this is an extensive refactor from a previous attempt I made a couple of years ago on this matter. I have grown a lot as a dev since then and thought it would be worth it to re-do this library so I can use it in a project in the near future. You could check the master branch's current implementation if you want to laugh to yourself though :)

The "Why"

While libraries like simple-peer are great, I wanted something that felt more at home in a modern, modular project. My main goals were:

1. True Signaler Agnosticism: I wanted a library that wasn't tied to any specific signaling transport (like Socket.IO).
2. First-Class Type Safety: A fully-typed, generic event system was a must-have.
3. Modern Monorepo Structure: A clean setup for the core library and its plugins.

Key Architectural & TypeScript Decisions

This is where I'd most appreciate your feedback.

1. Decoupled Signaling Layer The core of the library is completely decoupled from the signaling server via a ClientSignaler interface (@rtcio/signaling). This means you can write (or use) any signaling implementation (WebSockets, Socket.IO, GraphQL, etc.) and plug it into the main RTC manager. This makes the library incredibly flexible and easy to test.

2. Type-Safe, Generic Event System I spent a lot of time on the event system. The RTC manager and the individual P2PConnection instances are generic. You can pass in an interface defining your custom P2P events, and you get full type safety and autocomplete when using .on() and .emit().

// Define your custom events
interface AppEvents {
  chatMessage: (from: string, message: string) => void;
  gameState: (state: GameStateObject) => void;
}

const rtc = new RTC<AppEvents>(signaler, 'my-room');

// 'peer' is now fully typed with your events
rtc.on('connected', (peer) => {
  // Autocomplete for 'chatMessage' and 'gameState' works here
  peer.on('chatMessage', (from, message) => {
    console.log(`${from}: ${message}`);
  });
});

3. Monorepo with npm Workspaces & Project References

The project is structured as a monorepo with packages/*, and I'm using TypeScript's project references to manage the dependencies between the core, signaling-interface, and signaler implementation packages. This keeps everything clean and builds efficiently.
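For reference, a project-references setup like the one described usually looks something like the fragment below. The paths and options here are illustrative, not taken from the rtc.io repo (tsconfig.json accepts comments):

```json
// packages/core/tsconfig.json -- typical composite project setup
{
  "compilerOptions": {
    "composite": true,          // required for a referenced project
    "declaration": true,        // emit .d.ts so dependents get types
    "outDir": "dist",
    "rootDir": "src"
  },
  "references": [
    { "path": "../signaling" }  // core depends on the signaler interface package
  ]
}
```

With this in place, `tsc --build` compiles the signaling interface before the core package and rebuilds only what changed.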

Request for Feedback

I'm looking for constructive criticism on:

1. API Design: Is the RTC manager and P2PConnection API intuitive?
2. Type Safety: Are there places where the types could be improved or made safer?
3. Monorepo Structure: Any best practices I might be missing for managing a TS monorepo?
4. General Code Quality: Anything you see that could be done better?

Thanks for taking the time to look this over! I look forward to your comments!

https://redd.it/1n3uyjq
@r_opensource
The story of our open source Agent!

Hey u/opensource 👋

I wanted to share the journey behind a wild couple of days building Droidrun, our open-source agent framework for automating real Android apps.

We started building Droidrun because we were frustrated: everything in automation and agent tech seemed stuck in the browser. But people live on their phones and apps are walled gardens. So we built an agent that could actually tap, scroll, and interact inside real mobile apps, like a human.

A few weeks ago, we posted a short demo: no pitch, just an agent running a real Android UI. Within 48 hours:

We hit [XXXX+ GitHub stars](https://github.com/droidrun/droidrun)
Got devs joining our Discord
Landed on the radar of investors
And closed a $2M+ funding round shortly after

What worked for us:

We led with a real demo, not a roadmap
Posted in the right communities, not product forums
Asked for feedback, not attention
And open-sourced from day one, which gave us credibility + momentum

We’re still in the early days, and there’s a ton to figure out. But the biggest lesson so far:

Don’t wait to polish. Ship the weird, broken, raw thing. If the core is strong, people will get it.

If you’re working on something agentic, mobile, or just bold, I’d love to hear what you’re building too.

AMA if helpful!

https://redd.it/1n3woa7
@r_opensource
Building an Open Source Alternative to VAPI - Seeking Community Input 🚀

Hey r/opensource community!

(I used Claude AI to help edit this post; it was an assistant, not a generator of the whole post, just to clean up grammar and present my thoughts coherently. I have also posted this in other Reddit threads.)



I'm exploring building an **open source alternative to VAPI** and wanted to start a discussion to gauge interest and gather your thoughts.



## The Problem I'm Seeing

While platforms like VAPI, Bland, and Retell are powerful, I've noticed several pain points:

- **Skyrocketing costs at scale** - VAPI bills can get expensive quickly for high-volume use cases
- **Limited transparency** and control over the underlying infrastructure
- **No self-hosting options** for compliance-heavy enterprises or those wanting full control
- **Vendor lock-in** concerns with closed-source solutions
- **Slow feature updates** in existing open source alternatives (looking at you, Vocode)
- **Evaluation and testing** often feel like afterthoughts rather than core features



## My Vision: Open Source Voice AI Platform

Think **Zapier vs n8n**, but for voice AI. Just like n8n provides an open source alternative to Zapier's workflow automation, why shouldn't there be an open source voice AI platform?

### Key Differentiators

- **Full self-hosting capabilities** - Deploy on your own infrastructure
- **BYOC (Bring Your Own Cloud)** - Perfect for compliance-heavy enterprises and high-volume use cases
- **Cost control** - Avoid those skyrocketing VAPI bills by running on your own resources
- **Complete transparency** - Open source means you can audit, modify, and extend as needed



### Core Philosophy: Testing & Observability First

Unlike other platforms that bolt on evaluation later, I want to build:

- **Concurrent voice agent testing**
- **Built-in evaluation frameworks**
- **Guardrails and safety measures**
- **Comprehensive observability**

All as **first-class citizens**, not afterthoughts.



### Beta Feature Set

Keeping it focused on assistant-related functionality for now; no workflow or tool-calling features in the beta version.

- Basic conversation builder with prompts and variables
- Basic knowledge base (one vector store to start with, possibly Postgres pgvector) with file uploads; later versions may offer multiple KB backends via tool calling
- Provider options for voice models with configuration options
- Model router options with fallback
- Voice assistants with workflow building
- Model routing and load balancing
- Basic FinOps dashboard
- Call logs with transcripts and user feedback
- No tool calling in the beta version
- Evaluation and testing suite
- Monitoring and guardrails



## Questions for the Community

I'd love to hear your thoughts:

1. **What features would you most want to see** in an open source voice AI platform as a builder?
2. **What frustrates you most** about current voice AI platforms (VAPI, Bland, Retell, etc.)? Cost scaling? Lack of control?
3. **Do you believe there's a real need** for an open source alternative, or are current solutions sufficient?
4. **Would self-hosting capabilities** be valuable for your use case?
5. **What would make you consider switching** from your current voice AI platform?



## Why This Matters

I genuinely believe that voice AI infrastructure should be:

- **Transparent and auditable** - Know exactly what's happening under the hood
- **Cost-effective at scale** - No more surprise bills when your usage grows
- **Self-hostable** - Deploy on your own infrastructure for compliance and control
- **Community-driven in product roadmap and tools** - Built by users, for users
- **Free from vendor lock-in** - Your data and workflows stay yours
- **Built with testing and observability as core principles** - Not an afterthought

I'll be publishing a detailed roadmap soon, but wanted to start this conversation first to ensure I'm building something the community actually needs and wants.



**What are your thoughts? Am I missing something obvious, or does this resonate with challenges you've faced?**



## Monetization & Sustainability

I'm exploring an **open core model** like GitLab's, and may also explore an n8n-style approach to monetization: builder-led, word-of-mouth evangelism.

This approach ensures the core platform remains freely accessible while providing a path to monetize enterprise use cases in a transparent, community-friendly way.


I have been working on this for the past three weeks; I will share the repo and version 1 of the product in the coming week.

https://redd.it/1n3y0kd
@r_opensource
Opensyte - an open-source Hubspot & Zoho alternative

I am developing an open-source, all-in-one business management software called Opensyte, which aims to serve as an alternative to HubSpot and Zoho. I have completed about 40% of the features in just one month.

What sets Opensyte apart from HubSpot and Zoho?

- Simplicity: Opensyte is much simpler to use, with all features consolidated in one location, making it both easy and quick to navigate.

- User-Friendly Interface: The user interface of Opensyte is distinctly different from other business management platforms. All features are organized in a sidebar, allowing users to switch between them effortlessly. Everything is clearly laid out, so you don't need to be an expert to use the platform!

- User Management & Access Control: I have put in significant effort to ensure that this feature stands out from those of other platforms. Our User Management & Access Control system is highly customizable. You can create custom roles with predefined permission sets and manage which features users can view and access.
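As a sketch of how such a system typically works (hypothetical names, not Opensyte's actual schema), a custom role is just a named union of predefined permission sets:

```python
# Illustrative RBAC sketch: custom roles built from predefined permission
# sets, plus a check used to gate which features a user can access.

PERMISSION_SETS = {
    "viewer": {"crm:read"},
    "editor": {"crm:read", "crm:write"},
}

def make_role(name, *sets):
    """Build a custom role by combining predefined permission sets."""
    perms = set()
    for s in sets:
        perms |= PERMISSION_SETS[s]
    return {"name": name, "permissions": perms}

def can(role, permission):
    return permission in role["permissions"]

sales = make_role("sales", "editor")
print(can(sales, "crm:write"))     # → True
print(can(sales, "billing:read"))  # → False
```

Feature visibility in the UI then reduces to one `can(role, permission)` check per sidebar entry.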

You can see right now what features are already implemented from the github link below.

Github link: https://github.com/Opensyte/opensyte

https://redd.it/1n3yuvy
@r_opensource
Mocky Balboa: A Server-Side Mocking Tool for Any SSR Framework

Have you ever struggled with writing end to end tests for your server side rendered apps? This was something I was wrangling with a couple of years ago. I scoured the internet for solutions, I wasn't the first to come up against this problem. Solutions ranged from branching logic in the application, proxy servers, to bypassing SSR completely. I felt like there was a better way.

The solution I built back then inspired a new tool, Mocky Balboa, that I'm sharing today. It's framework agnostic, with first-class support for major SSR frameworks. There's also first-class support for Cypress and Playwright. If your framework isn't listed, there's the option to build custom integrations leveraging the server and client packages.

It's easy to set up and intuitive to use. The mocking API follows a very similar pattern to Playwright's route API. Mocks are written declaratively alongside your tests, with support for serving files if you need to mock binary responses.

Here's a snippet from the Playwright docs page:

import { test, expect } from "@playwright/test";
import { createClient } from "@mocky-balboa/playwright";

test("my page loads", async ({ page, context }) => {
  // Create our Mocky Balboa client and establish a connection with the server
  const client = await createClient(context);

  // Register our fixture on routes matching '/api/users'
  client.route("/api/users", (route) => {
    return route.fulfill({
      status: 200,
      body: JSON.stringify([
        { id: "user-1", name: "John Doe" },
        { id: "user-2", name: "Jane Doe" },
      ]),
      headers: {
        "Content-Type": "application/json",
      },
    });
  });

  // Visit the page of our application in the browser
  await page.goto("http://localhost:3000");

  // Our mock above should have been returned on our server
  await expect(page.getByText("John Doe")).toBeVisible();
});

I'd love feedback, and I hope others find it as useful as I have when it comes to writing tests for your SSR frontends.

https://redd.it/1n40gkn
@r_opensource
10GB of Cannabis/Strain Images Available for Download

For anyone who’s ever needed strain images: I put together a repository with 10GB of images of various cannabis strains.

All images are organized by strain name, perfect for visual references, posts, or research.

Check it out here: https://github.com/linhacanabica/images-strains-weed

Enjoy!

https://redd.it/1n43cew
@r_opensource
What is an alternative to Spotify?

Greetings,

I wanted to ask what a good alternative to Spotify may be. I am just so sick of Spotify sending data without my knowledge to some 3rd parties and connecting to random platforms. When I look at my network traffic, I see more than *5 PORTS* occupied by Spotify.

https://redd.it/1n447l0
@r_opensource