I built an open-source IDE and framework for Android app development in Swift
http://docs.swifdroid.com/app/
https://redd.it/1q3wt4s
@r_opensource
Swifdroid
Application Development - Swift for Android
SwifDroid documentation (Android framework for Swift).
I Built a Weather App Using React Native & TypeScript
https://youtu.be/Lj7ITw_tCFc?si=rPAYh9535JJieoxA
https://redd.it/1q3yblr
@r_opensource
YouTube
React Native Weather App | Real-Time Weather & 7-Day Forecast | TypeScript
In this video, I’m showcasing a Weather App built using React Native and TypeScript.
The app displays real-time weather information, supports city-based search, and provides a 7-day weather forecast with a clean and responsive user interface.
Tech Stack:…
We are building an open-source YouTube alternative | Booster
https://www.boostervideos.net/about
We’re two brothers who decided to build a new video platform from scratch. We’ve been working on this project, called Booster, for about two months now.
The idea came from our own frustration with existing video platforms. With Booster, we’re trying to improve the experience by using voluntary ads that reward users, allowing them to boost and support their favorite channels and friends directly, and by avoiding AI-generated content and vertical short-form videos.
The theme you see on the screen is available for free to every user who creates an account and logs in. We’d like to hear from web devs how we can improve it, and whether there are any bugs or anything else you’d like to point out.
Regarding costs, we've solved the high infrastructure costs thanks to our provider's encoding and CDN, so it doesn't pose a big expense.
Regarding revenue, monetization currently would come from a virtual currency called XP, which users can either earn for free by watching voluntary feature videos or purchase. XP is used to boost channels and buy personalization assets. We also plan to implement voluntary, rewarded ads that give users free XP. The goal is to test whether users and creators actually like and adopt this model.
Moderation is handled through community votes, which let users and ordinary viewers decide whether a specific user's report was accurate.
In the link, we've included the about page, which includes how Booster works, plus the Discord and the open GitHub.
https://redd.it/1q3zt1t
@r_opensource
Booster
Video platform oriented for creators and users
Tiny PHP pretty-printer that formats arrays like PyTorch tensors
I’ve released a small helper for anyone working with PHP + data-heavy code (ML experiments, debugging, logs, educational projects, etc.).
PrettyPrint is a zero-dependency callable pretty-printer for PHP arrays with clean, Python-style formatting. It supports aligned 2D tables, PyTorch-like tensor views, summarization (head/tail rows & columns), and works both in CLI and web contexts.
Install:
composer require apphp/pretty-print
Examples:
Aligned 2D table:
pprint(1, 23, 456, 12, 3, 45);
// [ 1, 23, 456,
// 12, 3, 45]
PyTorch-style 2D output:
pprint($matrix);
// tensor(
// [ 1, 2, 3, 4, 5,
// 6, 7, 8, 9, 10,
// 11, 12, 13, 14, 15
// ])
Summaries for big matrices:
pprint($m, headRows: 2, tailRows: 1, headCols: 2, tailCols: 2);
3D tensors with ellipsis:
pprint($tensor3d, headB: 1, tailB: 1);
// tensor(
// [ 1, 2, ..., 4, 5,
// 6, 7, ..., 9, 10,
// ...,
// 21, 22, ..., 24, 25
// ])
Also supports labels, precision, start/end strings, and even acts as a callable object:
$pp = new PrettyPrint();
$pp('Hello', 42);
// Hello 42
You can find many more configuration options in the repo: *https://github.com/apphp/pretty-print*
If you often stare at messy print_r() dumps, this might make your day slightly better 😄
https://redd.it/1q40sko
@r_opensource
GitHub
GitHub - apphp/pretty-print: PrettyPrint is a small, zero-dependency PHP utility that formats arrays in a clean, readable, PyTorch…
PrettyPrint is a small, zero-dependency PHP utility that formats arrays in a clean, readable, PyTorch-inspired style. It supports aligned 2D tables, 3D tensors, summarized tensor views, and flexibl...
First OSS project: URL redirect service – what features would make you use it?
As my first contribution to the open-source community, I built a tiny URL redirect service that runs on Cloudflare Workers or a VPS (Node/Bun/Docker).
Repo: https://github.com/dima6312/gr8hopper
I’m curious: what would make you actually use something like this?
E.g. “I’d use it if it had X” (metrics, A/B testing, webhooks, multi-tenant, whatever).
If any of those “X” ideas sound fun, I’d love contributors – issues, discussions, and PRs are all very welcome. I had a very specific use case to solve, which the tool handles 100%, but it could do much more!
https://redd.it/1q41u5d
@r_opensource
GitHub
GitHub - dima6312/gr8hopper: Lightweight URL redirect service with admin UI. Runs on Cloudflare Workers or Node.js/Bun.
Lightweight URL redirect service with admin UI. Runs on Cloudflare Workers or Node.js/Bun. - dima6312/gr8hopper
pg-status — a lightweight microservice for checking PostgreSQL host status
**Hi!** I’d like to introduce my new project — [**pg-status**](https://github.com/krylosov-aa/pg-status).
It’s a lightweight, high-performance microservice designed to determine the status of PostgreSQL hosts. Its main goal is to help your backend identify a live master and a sufficiently up-to-date synchronous replica.
# Key features
* Very easy to deploy as a sidecar and integrate with your existing PostgreSQL setup
* Identifies the master and synchronous replicas, and assists with failover
* Helps balance load between hosts
If you find this project useful, I’d really appreciate your support — a [star on GitHub ](https://github.com/krylosov-aa/pg-status) would mean a lot!
But first, let’s talk about the problem **pg-status** is built to solve.
# PostgreSQL on multiple hosts
To improve the resilience and scalability of a PostgreSQL database, it’s common to run multiple hosts using the classic master–replica setup. There’s one **master** host that accepts writes, and one or more **replicas** that receive changes from the master via physical or logical replication.
Everything works great in theory — but there are a few important details to consider:
* Any host can fail
* A replica may need to take over as the master (failover)
* A replica can significantly lag behind the master
From the perspective of a backend application connecting to these databases, this introduces several practical challenges:
* How to determine which host is currently the live master
* How to identify which replicas are available
* How to measure replica lag to decide whether it’s suitable for reads
* How to switch the client connection pool (or otherwise handle reconnection) after failover
* How to distribute load effectively among hosts
There are already various approaches to solving these problems — each with its own pros and cons. Here are a few of the common methods I’ve encountered:
# Via DNS
In this approach, specific hostnames point to the master and replica instances. Essentially, there’s no built-in master failover handling, and it doesn’t help determine the replica status — you have to query it manually via SQL.
It’s possible to add an external service that detects host states and updates the DNS records accordingly, but there are a few drawbacks:
* DNS updates can take several seconds — or even tens of seconds — which can be critical
* DNS might automatically switch to read-only mode
Overall, this solution *does* work, and `pg-status` can actually serve as such a service for host state detection.
Also, as far as I know, many PostgreSQL cloud providers rely on this exact mechanism.
# Multihost in libpq
With this method, the client driver (libpq) can locate the first available host from a given list that matches the desired role (master or replica). However, it doesn’t provide any built-in load balancing.
A change in the master is detected only after an actual SQL query fails — at which point the connection crashes, and the client cycles through the hosts list again upon reconnection.
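The multihost behavior described above is driven entirely by the connection string. As a rough illustration (host names and database are placeholders; `target_session_attrs` is standard libpq syntax, with the extended values available in newer libpq versions):

```text
# Connect to the first listed host that accepts writes (i.e. the master):
postgresql://app@pg1:5432,pg2:5432,pg3:5432/mydb?target_session_attrs=read-write

# Prefer a standby for read-only traffic (extended values, libpq 14+):
postgresql://app@pg1:5432,pg2:5432,pg3:5432/mydb?target_session_attrs=prefer-standby
```

Note that libpq simply walks the list in order on each (re)connect; it does not balance load across the hosts.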
# Proxy
You can set up a proxy that supports on-the-fly configuration updates. In that case, you’ll also need some component responsible for notifying the proxy when it should switch to a different host.
This is generally a solid approach, but it still depends on an external mechanism that monitors PostgreSQL host states and communicates those changes to the proxy. `pg-status` fits perfectly for this purpose — it can serve as that mechanism.
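One way to wire this up is a small watcher that polls the master via pg-status and fires a callback when it changes. The sketch below is a minimal illustration, not part of pg-status itself: `fetch_master` would wrap an HTTP GET to the `/master` endpoint, and `on_change` could rewrite the proxy config and trigger a reload — both are placeholders supplied by the caller.

```python
import time
from typing import Callable, Optional

class MasterWatcher:
    """Polls a host-lookup function and fires a callback when the master changes.

    The callback also fires on the first poll, since no master is known yet.
    """
    def __init__(self, fetch_master: Callable[[], str],
                 on_change: Callable[[str], None]):
        self.fetch_master = fetch_master  # e.g. GET pg-status's /master endpoint
        self.on_change = on_change        # e.g. rewrite proxy config and reload
        self.current: Optional[str] = None

    def poll_once(self) -> bool:
        """Check the master once; return True if it changed."""
        host = self.fetch_master()
        if host != self.current:
            self.current = host
            self.on_change(host)
            return True
        return False

    def run(self, interval: float = 1.0) -> None:
        """Poll forever at a fixed interval."""
        while True:
            self.poll_once()
            time.sleep(interval)
```

The detection logic is deliberately separated from I/O, so it can be tested with a fake fetcher before pointing it at a real sidecar.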
Alternatively, you can use **pgpool-II**, which is specifically designed for such scenarios. It not only determines which host to route traffic to but can even perform automatic failover itself. The main downside, however, is that it can be complex to deploy and configure.
# CloudNativePG
As far as I know, CloudNativePG already provides all this functionality out of the box. The main considerations here are deployment complexity and the requirement to run within a Kubernetes environment.
# My solution - pg-status
At my workplace, we use a PostgreSQL cloud provider that offers a built-in failover mechanism and lets us connect to the master via DNS. However, I wanted to avoid situations where DNS updates take too long to reflect the new master.
I also wanted more control — not just connecting to the master, but also balancing read load across replicas and understanding how far each replica lags behind the master. At the same time, I didn’t want to complicate the system architecture with a shared proxy that could become a single point of failure.
In the end, the ideal solution turned out to be a tiny sidecar service running next to the backend. This sidecar takes responsibility for selecting the appropriate host. On the backend side, I maintain a client connection pool and, before issuing a connection, I check the current host status and immediately reconnect to the right one if needed.
The sidecar approach brings some extra benefits:
* A sidecar failure affects only the single instance it’s attached to, not the entire system.
* PostgreSQL availability is measured relative to the local instance — meaning the health check can automatically report that this instance shouldn't receive traffic if the database is unreachable (for example, due to network isolation between data centers).
That’s how **pg-status** was born. Its job is to periodically poll PostgreSQL hosts, keep track of their current state, and expose several lightweight, fast endpoints for querying this information.
You can call **pg-status** directly from your backend on each request — for example, to make sure the master hasn’t failed over, and if it has, to reconnect automatically. Alternatively, you can use its special endpoints to select an appropriate replica for read operations based on replication lag.
For example, I have a Python library - [context-async-sqlalchemy](https://github.com/krylosov-aa/context-async-sqlalchemy), which [has a dedicated section](https://krylosov-aa.github.io/context-async-sqlalchemy/master_replica/) where you can use pg-status to always reach the right host.
# How to use
# Installation
You can build **pg-status** from source, install it from a `.deb` or binary package, or run it as a Docker container (lightweight Alpine- and Ubuntu-based images are available). Currently, the target architecture is **Linux amd64**, but the microservice can be compiled for other targets using **CMake** if needed.
# Usage
The service’s behavior is configured via **environment variables**. Some variables are required (for example, connection parameters for your PostgreSQL hosts), while others are optional and have default values.
You can find the full list of parameters here: [https://github.com/krylosov-aa/pg-status?tab=readme-ov-file#parameters](https://github.com/krylosov-aa/pg-status?tab=readme-ov-file#parameters)
When running, **pg-status** exposes several simple HTTP endpoints:
* `GET /master` \- returns the current master
* `GET /replica` \- returns a replica selected via round-robin
* `GET /sync_by_time` \- returns a replica whose time-based lag behind the master is within the threshold, or the master itself
* `GET /sync_by_bytes` \- returns a replica whose byte-based lag (measured via WAL LSN positions) is within the threshold, or the master itself
* `GET /sync_by_time_or_bytes` \- returns a host that satisfies either the time or the byte criterion
* `GET /sync_by_time_and_bytes` \- returns a host that satisfies both the time and the byte criteria
* `GET /hosts` \- returns a list of all hosts and their current status: live, master, or replica.
As you can see, **pg-status** provides a flexible API for identifying the appropriate replica to use. You can also set maximum acceptable lag thresholds (in time or bytes) via environment variables.
Almost all endpoints support two response modes:
1. Plain text (default)
2. JSON — when you include the header `Accept: application/json` For example: `{"host": "localhost"}`
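As a rough sketch of the per-request pattern described above, a backend can ask the sidecar for the current master and compare it with the host its pool points at. This is only an illustration using the stdlib; the sidecar address is a placeholder, and only the `/master` endpoint and the JSON `Accept` mode come from the docs above.

```python
import json
import urllib.request

PG_STATUS_URL = "http://127.0.0.1:8080"  # placeholder sidecar address

def current_master(base_url: str = PG_STATUS_URL) -> str:
    """Ask the pg-status sidecar which host is currently the master.

    Uses the JSON response mode via the Accept header; the response
    looks like {"host": "localhost"}.
    """
    req = urllib.request.Request(
        base_url + "/master",
        headers={"Accept": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=1) as resp:
        return json.loads(resp.read())["host"]

def needs_reconnect(pool_host: str, master_host: str) -> bool:
    """True if the connection pool points at a stale master."""
    return pool_host != master_host
```

Before issuing a connection from the pool, call `current_master()` and, if `needs_reconnect(...)` is true, rebuild the pool against the new host.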
**pg-status** can also work alongside a **proxy** or any other solution responsible for handling database connections. In this setup, your backend always connects to a single proxy host (for instance, one that points to the master). The proxy itself doesn’t know the current PostgreSQL state — instead, it queries **pg-status** via its HTTP endpoints to decide when to switch to a different host.
# pg-status Implementation Details
**pg-status** is a microservice written in **C**. I chose this language for two main reasons:
* It’s extremely resource-efficient — perfect for a lightweight sidecar scenario
* I simply enjoy writing in C, and this project felt like a natural fit
The microservice consists of two core components running in two active threads:
1. PG Monitoring
The first thread is responsible for monitoring. It periodically polls all configured hosts using the **libpq** library to determine their current status. This part has an extensive list of configurable parameters, all set via environment variables:
* How often to poll hosts
* Connection timeout for each host
* Number of failed connection attempts before marking a host as dead
* Maximum acceptable replica lag (in milliseconds) considered “synchronous”
* Maximum acceptable replica lag (in bytes, based on WAL LSN) considered “synchronous”
Currently, only **physical replication** is supported.
2. HTTP Server
The second thread runs the **HTTP server**, which handles client requests and retrieves the current host status from memory. It’s implemented using [**libmicrohttpd**](https://www.gnu.org/software/libmicrohttpd/), offering great performance while keeping the footprint small.
This means your backend can safely query **pg-status** before every SQL operation without noticeable overhead.
In my testing (in a Docker container limited to 0.1 CPU and 6 MB of RAM), I achieved around **1500 RPS** with extremely low latency. You can see detailed performance metrics [here](https://github.com/krylosov-aa/pg-status?tab=readme-ov-file#performance).
# Potential Improvements
Right now, I’m happy with the functionality — **pg-status** is already used in production in my own projects. That said, some improvements I’m considering include:
* Support for **logical replication**
* Adding precise time and byte lag information directly to the JSON responses so clients can make more informed decisions
If you find the project interesting or have ideas for enhancements, feel free to open an issue on GitHub — contributions and feedback are always welcome!
# Summary
**pg-status** is a lightweight, efficient microservice designed to solve a practical problem — determining the status of PostgreSQL hosts — while being exceptionally easy to deploy and operate.
* Licensed under **MIT**
* Open source and available on GitHub: [https://github.com/krylosov-aa/pg-status](https://github.com/krylosov-aa/pg-status)
* Available as source, `.deb` binary package, or Docker container
If you like the project, I’d really appreciate your support — please ⭐ it on GitHub!
Thanks for reading!
https://redd.it/1q44hhj
@r_opensource
Reversible Debugging - Store any program state as JSON
https://youtu.be/8GpOxnFrksY
https://redd.it/1q3zd9i
@r_opensource
YouTube
I decided to represent every program ever in json
Check it out and download at https://jisp.world
I built an open-source, ephemeral voice chat app (Rust + Svelte) – voca.vc
I wanted to share my first open-source project: **voca**.
It’s a simple, ephemeral voice chat application. You create a room, share the link, and chat. No accounts, no database, and no persistent logs. Once the room is empty, it's gone.
The Tech Stack:
* Backend: Rust (Axum + Tokio) for the signaling server. It's super lightweight, handling thousands of concurrent rooms with minimal resource usage.
* Frontend: Svelte 5 + Tailwind for the UI.
* WebRTC: Pure P2P mesh for audio (audio data doesn't touch my server; only signaling does).
Why I built this: I wanted a truly private and friction-free way to hop on a voice call without signing up for Discord or generating a Zoom meeting link. I also wanted to learn Rust and deep dive into WebRTC.
For Developers: I’ve published the core logic as SDKs if you want to add voice chat to your own apps:
* `@treyorr/voca-client` (Core SDK)
* `@treyorr/voca-react`
* `@treyorr/voca-svelte`
Self-Hosting: You can just use [voca.vc](https://voca.vc/) for free, but it's also designed to be easily self-hosted. The Docker image is small and needs no external dependencies like Redis or Postgres. [Self-hosting docs here](https://voca.vc/docs/self-hosting).
Feedback: This is my first "real" open-source release, so I'd love for you to roast my code or give feedback on the architecture!
Repo: github.com/treyorr/voca
Demo: [voca.vc](https://voca.vc/)
Docs: voca.vc/docs
Thanks!
https://redd.it/1q49n5w
@r_opensource
[Web Kernel] An experimental kernel built with JavaScript for web scripting
https://github.com/thescarletgeek/web-kernel
https://redd.it/1q4fble
@r_opensource
Web Kernel is an experimental runtime layer inspired by the core concepts of operating system kernels.
I open-sourced my Amazon Scraper (AmzPy)
Hey everyone,
A while back, I was building a side project that required Amazon product data. Despite having the credentials, Amazon’s API access remained a black box for me (the classic "keys granted but not working" loop).
I decided not to let a closed API stop my project. I built and open-sourced **AmzPy**, a library that scrapes product details, search results, and variants using browser impersonation to stay under the radar.
**Source Code:** [https://github.com/theonlyanil/amzpy](https://github.com/theonlyanil/amzpy)
**Why I’m sharing it:** The project is functional and on PyPI, but I believe the community could help make it even more robust. Currently, it handles:
* Product details (Price, Ratings, Images)
* Multi-page search results
* Proxy support & Browser impersonation (via `curl_cffi`)
**Open for Contributions:** If you've ever dealt with scraping challenges or want to help add features like review scraping or automated CAPTCHA solving, I’d love for you to check out the repo and fork it.
The goal is to keep this a viable alternative for developers who are tired of gatekept data.
https://redd.it/1q4i5c4
@r_opensource
Git Brag: Highlight and Share Your Open Source Contributions
https://blog.tedivm.com/open-source/2026/01/git-brag-highlight-and-share-your-open-source-contributions/
https://redd.it/1q4nv8p
@r_opensource
Git Brag is an open source web application (or CLI) that creates a simple report of the contributions you've made to open source projects on GitHub.
How do you get eyeballs on your Open Source project?
The only downside of building something that's actually valuable (which takes time and effort) is getting zero attention.
How do you deal with that?
If you have a project with a decent number of stars, how did you do it?
https://redd.it/1q4swrx
@r_opensource
Best personal wiki software recommendation
Back when it was first becoming popular I tried Notion, but then they started adding all sorts of AI features that I neither needed nor wanted.
So I switched to Obsidian, which is much better, but it still doesn't quite achieve what I really want.
Because Obsidian is markdown-based I find it somewhat limiting: I want something closer to actual Wikipedia pages, and markdown doesn't let me customise pages to the extent I'd like. On top of that, Obsidian's folder-based tree makes it awkward to organise things the way web pages work. If I have a markdown file for, say, a zebra, I can't make another page under it, because zebra is a file; I have to create a zebra folder, put a zebra page inside it, and then place the other pages next to it and link them from zebra, even though they are technically supposed to live under zebra. I'm not sure how clear that is, but I hope it makes sense.
I also tried MediaWiki. The problem wasn't the software itself, but that I don't know enough about servers and web hosting to tell whether I could run MediaWiki as offline, single-device software; if I could, that would obviously be ideal.
So I'm wondering whether the above is possible, and/or what would be a good alternative to Obsidian with more customisable pages that isn't quite so folder-based.
Technically I don't even mind if it isn't open source; it's more important that it works offline.
I'm asking here because I have no idea where else to ask.
https://redd.it/1q4tn3r
@r_opensource
Linus Torvalds Gets Candid About Windows, Workflows, and AI
https://thenewstack.io/linus-torvalds-gets-candid-about-windows-workflows-and-ai/
https://redd.it/1q7qxo0
@r_opensource
Linux creator Linus Torvalds recently appeared on the YouTube channel Linus Tech Tips for a casual interview where he built a custom PC and shared personal insights into his low-stress workflow and "friendly" relationship with Microsoft.
Maintainer burnout is real and it is getting worse
I have been contributing to different open source projects for about five years now, and I am starting to realize why so many of them just die. It feels like we have built an ecosystem where everyone wants to consume the code but nobody wants to help maintain it. You release a tool to be helpful, and suddenly you have a thousand people demanding new features and free support like they are paying customers.
It is a weird cycle: the more successful your project gets, the more it feels like a chore. I have seen some of the best developers I know walk away from their own repos because they couldn't handle the "entitlement" from users who don't contribute a single line of code. We are basically running the internet on the unpaid overtime of a few burnt-out people.
https://redd.it/1q76f90
@r_opensource
Brave overhauls adblock engine, cutting its memory consumption by 75% | Brave
https://brave.com/privacy-updates/36-adblock-memory-reduction/
https://redd.it/1q78ugg
@r_opensource
Brave has overhauled its Rust-based adblock engine to reduce memory consumption by 75%, bringing better battery life and smoother multitasking to all users.
Released a tiny vector-field + attractor visualizer: <150 LOC and zero dependencies outside matplotlib
I was messing with some small mathematical tools lately and wrote a micro-library for visualizing 2D vector fields and simple attractors. I kept it intentionally minimal:
- pure Python
- no heavy scientific stack beyond matplotlib
- small codebase (about 150 lines)
- includes presets (saddle, spiral, circular, etc.)
- supports streamlines and field-intensity plots
- ships with a couple of example scripts + tests
It's not meant to (and definitely won't) compete with large visualization libraries; I needed a clean, lightweight tool for quick experiments. Thanks, all.
https://pypi.org/project/fieldviz-mini/
https://github.com/rjsabouhi/fieldviz-mini
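For readers curious what a "saddle preset with streamlines and field intensity" boils down to, here is a generic plain-matplotlib sketch. This is not fieldviz-mini's actual API (which I haven't inspected); it just illustrates the underlying technique the post describes.

```python
# Generic sketch of a 2D vector-field streamline plot (saddle field u = x, v = -y).
# Not fieldviz-mini's API; plain NumPy + matplotlib only.
import matplotlib
matplotlib.use("Agg")  # headless backend so this runs without a display
import matplotlib.pyplot as plt
import numpy as np

# Sample the field on a regular grid
x, y = np.meshgrid(np.linspace(-2, 2, 30), np.linspace(-2, 2, 30))
u, v = x, -y                 # saddle preset: flow out along x, in along y
speed = np.hypot(u, v)       # field intensity, used to color the streamlines

fig, ax = plt.subplots()
ax.streamplot(x, y, u, v, color=speed, cmap="viridis")
ax.set_title("Saddle field")
fig.savefig("saddle.png")
```

Swapping in another preset is just a matter of changing the `u, v` expressions (e.g. `u, v = -y, x` for a circular field).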
https://redd.it/1q7qy86
@r_opensource
Open Receipt Format (ORF): an open, payment-agnostic standard for digital receipts
https://openreceiptformat.github.io/orf-spec/
https://redd.it/1q7yx6o
@r_opensource
Favorite Permissive License: Apache 2.0 or MIT?
These are the 2 biggest permissive licenses AFAIK. Which one do you prefer and why?
https://redd.it/1q80yea
@r_opensource