Reddit Programming – Telegram
I will send you the newest posts from subreddit /r/programming
I built a "Zero Trust" linter for AI-generated code
https://www.reddit.com/r/programming/comments/1q768at/i_built_a_zero_trust_linter_for_aigenerated_code/

<!-- SC_OFF -->After catching my third `// TODO: implement this later` in production code written by an AI assistant, I decided to build a tool to catch these issues before they ship. AntiSlop is a CLI tool that acts as a safety net for AI-generated code. It scans your codebase for the "lazy artifacts" that LLMs often leave behind:

- Stub functions and empty implementations
- console.log / print() debugging statements
- Hedging comments like "temporary", "for now", "simplified"
- Unhandled errors in critical paths

The key differentiator: it uses tree-sitter AST parsing instead of regex, so it actually understands code structure and ignores string literals. Supports Rust, Python, JavaScript/TypeScript, and Go.

Install:
- cargo install antislop
- npm install -g antislop

GitHub: https://github.com/skew202/antislop

Would love feedback from others dealing with AI code in production. What's your workflow for reviewing AI-generated code? <!-- SC_ON --> submitted by /u/Diligent-Bread-6942 (https://www.reddit.com/user/Diligent-Bread-6942)
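The AST-versus-regex point is the crux: a regex scan would flag `print` even when it appears inside a string literal. AntiSlop uses tree-sitter; below is a minimal sketch of the same idea using only Python's stdlib `ast` module (the sample source and function name are invented for illustration, not taken from the tool):

```python
import ast

SOURCE = '''
def greet(name):
    print("debug:", name)
    msg = "call print() here"
    return msg
'''

def find_print_calls(source):
    """Collect line numbers of actual print() calls, ignoring string literals."""
    tree = ast.parse(source)
    hits = []
    for node in ast.walk(tree):
        # A real call site is a Call node whose callee is the bare name "print".
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id == "print"):
            hits.append(node.lineno)
    return hits

print(find_print_calls(SOURCE))  # → [3]: only the real call, not the string on line 4
```

Because the parser sees structure rather than text, the `print()` inside the string assignment is never reported, which is exactly the class of regex false positives that AST-based scanning avoids.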
[link] (https://github.com/skew202/antislop) [comments] (https://www.reddit.com/r/programming/comments/1q768at/i_built_a_zero_trust_linter_for_aigenerated_code/)
Looking for a job in Switzerland.
https://www.reddit.com/r/programming/comments/1q76qmv/looking_for_a_job_in_switzerland/

<!-- SC_OFF -->Hi, after months and months spent on LinkedIn's job board, I'm wondering whether there's a better way to search for jobs in Switzerland. Which sites can I look at besides LinkedIn? I'm looking for Python roles (fields: data science, AI, Python backend, or blockchain development). I have more than 3 years of work experience in the automotive industry working with Python, and for blockchain I'm a self-learner. I'm an EU citizen (just mentioning it; I know that Switzerland is not in the EU 🙂). <!-- SC_ON --> submitted by /u/Lane114 (https://www.reddit.com/user/Lane114)
[link] (https://blog.posttfu.com/best-cities-in-switzerland-for-job-opportunities/) [comments] (https://www.reddit.com/r/programming/comments/1q76qmv/looking_for_a_job_in_switzerland/)
pg-status — a lightweight microservice for checking PostgreSQL host status
https://www.reddit.com/r/programming/comments/1q7c3ao/pgstatus_a_lightweight_microservice_for_checking/

<!-- SC_OFF -->Hi! I'd like to introduce my new project: pg-status. It's a lightweight, high-performance microservice designed to determine the status of PostgreSQL hosts. Its main goal is to help your backend identify a live master and a sufficiently up-to-date synchronous replica.

Key features:
- Very easy to deploy as a sidecar and integrate with your existing PostgreSQL setup
- Identifies the master and synchronous replicas, and assists with failover
- Helps balance load between hosts

If you find this project useful, I'd really appreciate your support: a star on GitHub (https://github.com/krylosov-aa/pg-status) would mean a lot! But first, let's talk about the problem pg-status is built to solve.

PostgreSQL on multiple hosts

To improve the resilience and scalability of a PostgreSQL database, it's common to run multiple hosts using the classic master-replica setup. There's one master host that accepts writes, and one or more replicas that receive changes from the master via physical or logical replication. Everything works great in theory, but there are a few important details to consider:
- Any host can fail
- A replica may need to take over as the master (failover)
- A replica can significantly lag behind the master

From the perspective of a backend application connecting to these databases, this introduces several practical challenges:
- How to determine which host is currently the live master
- How to identify which replicas are available
- How to measure replica lag to decide whether a replica is suitable for reads
- How to switch the client connection pool (or otherwise handle reconnection) after failover
- How to distribute load effectively among hosts

There are already various approaches to solving these problems, each with its own pros and cons. Here are a few of the common methods I've encountered:

Via DNS

In this approach, specific hostnames point to the master and replica instances.
Essentially, there's no built-in master failover handling, and it doesn't help determine the replica status; you have to query it manually via SQL. It's possible to add an external service that detects host states and updates the DNS records accordingly, but there are a few drawbacks:
- DNS updates can take several seconds, or even tens of seconds, which can be critical
- DNS might automatically switch to read-only mode

Overall, this solution does work, and pg-status can actually serve as such a service for host state detection. Also, as far as I know, many PostgreSQL cloud providers rely on this exact mechanism.

Multihost in libpq

With this method, the client driver (libpq) can locate the first available host from a given list that matches the desired role (master or replica). However, it doesn't provide any built-in load balancing. A change in the master is detected only after an actual SQL query fails, at which point the connection crashes and the client cycles through the host list again upon reconnection.

Proxy

You can set up a proxy that supports on-the-fly configuration updates. In that case, you'll also need some component responsible for notifying the proxy when it should switch to a different host. This is generally a solid approach, but it still depends on an external mechanism that monitors PostgreSQL host states and communicates those changes to the proxy. pg-status fits perfectly for this purpose: it can serve as that mechanism. Alternatively, you can use pgpool-II, which is specifically designed for such scenarios. It not only determines which host to route traffic to but can even perform automatic failover itself. The main downside, however, is that it can be complex to deploy and configure.

CloudNativePG

As far as I know, CloudNativePG already provides all this functionality out of the box. The main considerations here are deployment complexity and the requirement to run within a Kubernetes environment.

My solution: pg-status

At my workplace, we use a PostgreSQL cloud provider that offers a built-in failover mechanism and lets us connect to the master via DNS. However, I wanted to avoid situations where DNS updates take too long to reflect the new master. I also wanted more control: not just connecting to the master, but also balancing read load across replicas and understanding how far each replica lags behind the master. At the same time, I didn't want to complicate the system architecture with a shared proxy that could become a single point of failure.

In the end, the ideal solution turned out to be a tiny sidecar service running next to the backend. This sidecar takes responsibility for selecting the appropriate host. On the backend side, I maintain a client connection pool and, before issuing a connection, I check the current host status and immediately reconnect to the right one if needed. The sidecar approach brings some extra benefits:
- A sidecar failure affects only the single instance it's attached to, not the entire system.
- PostgreSQL availability is measured relative to the local instance, meaning the health check can automatically report that this instance shouldn't receive traffic if the database is unreachable (for example, due to network isolation between data centers).

That's how pg-status was born. Its job is to periodically poll PostgreSQL hosts, keep track of their current state, and expose several lightweight, fast endpoints for querying this information. You can call pg-status directly from your backend on each request, for example, to make sure the master hasn't failed over, and if it has, to reconnect automatically. Alternatively, you can use its special endpoints to select an appropriate replica for read operations based on replication lag.
For example, I have a library for Python, context-async-sqlalchemy (https://github.com/krylosov-aa/context-async-sqlalchemy), which has a special place (https://krylosov-aa.github.io/context-async-sqlalchemy/master_replica/) where you can use pg-status to always get to the right host.

How to use

Installation

You can build pg-status from source, install it from a .deb or binary package, or run it as a Docker container (lightweight Alpine-based and Ubuntu-based images are available). Currently, the target architecture is Linux amd64, but the microservice can be compiled for other targets using CMake if needed.

Usage

The service's behavior is configured via environment variables. Some variables are required (for example, connection parameters for your PostgreSQL hosts), while others are optional and have default values. You can find the full list of parameters here: https://github.com/krylosov-aa/pg-status?tab=readme-ov-file#parameters

When running, pg-status exposes several simple HTTP endpoints:
- GET /master - returns the current master
- GET /replica - returns a random replica using the round-robin algorithm
- GET /sync_by_time - returns a synchronous replica (or the master), where the lag behind the master is measured in time
- GET /sync_by_bytes - returns a synchronous replica (or the master), where the lag behind the master is measured in bytes written to the log (based on the WAL LSN)
- GET /sync_by_time_or_bytes - essentially a host from sync_by_time or from sync_by_bytes
- GET /sync_by_time_and_bytes - essentially a host from sync_by_time and from sync_by_bytes
- GET /hosts - returns a list of all hosts and their current status: live, master, or replica

As you can see, pg-status provides a flexible API for identifying the appropriate replica to use. You can also set maximum acceptable lag thresholds (in time or bytes) via environment variables.
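To make "query pg-status before handing out a connection" concrete, here is a minimal Python sketch of a backend asking the sidecar for the current master. The sidecar is faked with an in-process HTTP stub so the snippet is self-contained; the endpoint path and the Accept-header JSON mode follow the API described here, while the host name and fake server are assumptions for illustration:

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class FakePgStatus(BaseHTTPRequestHandler):
    """In-process stand-in for the real pg-status sidecar."""
    def do_GET(self):
        if self.path == "/master":
            if self.headers.get("Accept") == "application/json":
                body = json.dumps({"host": "pg1.internal"}).encode()
            else:
                body = b"pg1.internal"
            self.send_response(200)
            self.end_headers()
            self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the example's output clean

server = HTTPServer(("127.0.0.1", 0), FakePgStatus)
threading.Thread(target=server.serve_forever, daemon=True).start()
BASE = f"http://127.0.0.1:{server.server_port}"

def current_master(accept=None):
    # Ask the sidecar which host is the live master before issuing a connection.
    req = urllib.request.Request(f"{BASE}/master")
    if accept:
        req.add_header("Accept", accept)
    with urllib.request.urlopen(req, timeout=1) as resp:
        return resp.read().decode()

print(current_master())                    # → pg1.internal
print(current_master("application/json")) # → {"host": "pg1.internal"}
```

A backend pool would compare this answer against the host it is currently connected to and reconnect when they differ.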
Almost all endpoints support two response modes:
- Plain text (default)
- JSON, when you include the header Accept: application/json. For example: {"host": "localhost"}

pg-status can also work alongside a proxy or any other solution responsible for handling database connections. In this setup, your backend always connects to a single proxy host (for instance, one that points to the master). The proxy itself doesn't know the current PostgreSQL state; instead, it queries pg-status via its HTTP endpoints to decide when to switch to a different host.

pg-status Implementation Details

pg-status is a microservice written in C. I chose this language for two main reasons:
- It's extremely resource-efficient, perfect for a lightweight sidecar scenario
- I simply enjoy writing in C, and this project felt like a natural fit

The microservice consists of two core components running in two active threads:

PG Monitoring

The first thread is responsible for monitoring. It periodically polls all configured hosts using the libpq library to determine their current status. This part has an extensive list of configurable parameters, all set via environment variables:
- How often to poll hosts
- Connection timeout for each host
- Number of failed connection attempts before marking a host as dead
- Maximum acceptable replica lag (in milliseconds) to be considered "synchronous"
- Maximum acceptable replica lag (in bytes, based on WAL LSN) to be considered "synchronous"

Currently, only physical replication is supported.

HTTP Server

The second thread runs the HTTP server, which handles client requests and retrieves the current host status from memory. It's implemented using libmicrohttpd, offering great performance while keeping the footprint small. This means your backend can safely query pg-status before every SQL operation without noticeable overhead. In my testing (in a Docker container limited to 0.1 CPU and 6 MB of RAM), I achieved around 1500 RPS with extremely low latency. You can see detailed performance metrics here (https://github.com/krylosov-aa/pg-status?tab=readme-ov-file#performance).

Potential Improvements

Right now, I'm happy with the functionality; pg-status is already used in production in my own projects.
That said, some improvements I'm considering include:
- Support for logical replication
- Adding precise time and byte lag information directly to the JSON responses so clients can make more informed decisions

If you find the project interesting or have ideas for enhancements, feel free to open an issue on GitHub; contributions and feedback are always welcome!

Summary

pg-status is a lightweight, efficient microservice designed to solve a practical problem, determining the status of PostgreSQL hosts, while being exceptionally easy to deploy and operate.
- Licensed under MIT
- Open source and available on GitHub: https://github.com/krylosov-aa/pg-status
- Available as source, .deb binary package, or Docker container

If you like the project, I'd really appreciate your support: please star it on GitHub! Thanks for reading! <!-- SC_ON --> submitted by /u/One-Novel1842 (https://www.reddit.com/user/One-Novel1842)
[link] (https://github.com/krylosov-aa/pg-status) [comments] (https://www.reddit.com/r/programming/comments/1q7c3ao/pgstatus_a_lightweight_microservice_for_checking/)
Python Typing Survey 2025: Code Quality and Flexibility As Top Reasons for Typing Adoption
https://www.reddit.com/r/programming/comments/1q7cxmb/python_typing_survey_2025_code_quality_and/

<!-- SC_OFF -->The 2025 Typed Python Survey, conducted by contributors from JetBrains, Meta, and the broader Python typing community, offers a comprehensive look at the current state of Python’s type system and developer tooling. <!-- SC_ON --> submitted by /u/BeamMeUpBiscotti (https://www.reddit.com/user/BeamMeUpBiscotti)
[link] (https://engineering.fb.com/2025/12/22/developer-tools/python-typing-survey-2025-code-quality-flexibility-typing-adoption/) [comments] (https://www.reddit.com/r/programming/comments/1q7cxmb/python_typing_survey_2025_code_quality_and/)
Sakila25: Updated Classic Sakila Database with 2025 Movies from TMDB – Now Supports Multiple DBs Including MongoDB
https://www.reddit.com/r/programming/comments/1q7d630/sakila25_updated_classic_sakila_database_with/

<!-- SC_OFF -->The Sakila sample database has been a go-to for SQL practice for years, but its data feels ancient. I recreated it as Sakila25 using Python to pull fresh 2025 movie data from TMDB, added streaming providers/subscriptions, and made it work across databases:
- MySQL / PostgreSQL / SQL Server
- MongoDB (NoSQL version)
- CSV exports

Everything is scripted and reproducible; great for learning database design, ETL, API integration, or comparing SQL vs NoSQL.

GitHub Repo: https://github.com/lilhuss26/sakila25

Includes pre-built dumps, views (e.g., revenue by provider), and modern schema tweaks like credit card info. Open source (MIT); stars, forks, and PRs welcome! What do you think? Useful for tutorials or projects? <!-- SC_ON --> submitted by /u/Think-Raccoon5197 (https://www.reddit.com/user/Think-Raccoon5197)
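To illustrate the kind of question a "revenue by provider" view answers (the mini-schema below is invented for the example and is not Sakila25's actual layout), here is the query shape against an in-memory SQLite database:

```python
import sqlite3

# Tiny invented schema: streaming providers and rentals attributed to them.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE provider (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE rental (id INTEGER PRIMARY KEY, provider_id INTEGER, amount REAL);
INSERT INTO provider VALUES (1, 'StreamA'), (2, 'StreamB');
INSERT INTO rental VALUES (1, 1, 4.99), (2, 1, 2.99), (3, 2, 9.99);
""")

# The same aggregation a revenue-by-provider view would encapsulate.
rows = con.execute("""
    SELECT p.name, ROUND(SUM(r.amount), 2) AS revenue
    FROM rental r JOIN provider p ON p.id = r.provider_id
    GROUP BY p.name
    ORDER BY revenue DESC
""").fetchall()
print(rows)  # → [('StreamB', 9.99), ('StreamA', 7.98)]
```

The MongoDB version of the dataset would answer the same question with an aggregation pipeline ($group on the provider, $sum on the amount), which is what makes the SQL-vs-NoSQL comparison instructive.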
[link] (https://github.com/lilhuss26/sakila25) [comments] (https://www.reddit.com/r/programming/comments/1q7d630/sakila25_updated_classic_sakila_database_with/)
Testing fundamentals I wish I understood earlier as a developer
https://www.reddit.com/r/programming/comments/1q7ricq/testing_fundamentals_i_wish_i_understood_earlier/

<!-- SC_OFF -->I've noticed a lot of devs (including past me) jump into frameworks before understanding why tests fail or what to test at all. I wrote a fundamentals-first piece covering:
- Unit vs integration vs end-to-end
- What makes a test useful
- Common testing anti-patterns
- How testing actually helps velocity long-term

Blog link: https://www.hexplain.space/blog/tt4bwNwfenmcQDT29U2e

What testing concept clicked late for you? <!-- SC_ON --> submitted by /u/third_void (https://www.reddit.com/user/third_void)
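On the "what makes a test useful" point, one framing that helps: a useful test pins down observable behavior, especially edge cases, rather than implementation details. A tiny sketch with hypothetical functions (not taken from the article):

```python
import unittest

def apply_discount(price, pct):
    """Return price reduced by pct percent, clamped at zero."""
    return max(price * (1 - pct / 100), 0)

class DiscountTest(unittest.TestCase):
    def test_normal_discount(self):
        # Asserts the behavior callers rely on, not how it's computed.
        self.assertEqual(apply_discount(10, 50), 5)

    def test_discount_clamps_at_zero(self):
        # Edge cases are where regressions hide; pin them down explicitly.
        self.assertEqual(apply_discount(10, 200), 0)

result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(DiscountTest)
)
```

Either test failing tells you exactly which promise was broken, which is what separates a useful test from one that merely mirrors the implementation.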
[link] (https://www.hexplain.space/blog/tt4bwNwfenmcQDT29U2e) [comments] (https://www.reddit.com/r/programming/comments/1q7ricq/testing_fundamentals_i_wish_i_understood_earlier/)
My C++ compiler just wrote its own fan-fiction (inference at compile-time)
https://www.reddit.com/r/programming/comments/1q831cb/my_c_compiler_just_wrote_its_own_fanfiction/

<!-- SC_OFF -->Not really, but at least it generated its own main characters. I've been obsessed with pushing language models into places they don't belong. Last summer it was a 1KB bigram model for the NES (https://github.com/erodola/bigram-nes) written in 6502 assembly. This week, I decided that even 1983 hardware was too much runtime for me. So I built a bigram language model that runs entirely during the C++ compilation phase.

Technically it's a Markov chain implemented via constexpr and template metaprogramming. The model's weights are hardcoded in an array. A fun part was implementing the random number generator: since compilers are (mostly) deterministic (rightfully so), I hashed __TIME__ and __DATE__ using an FNV-1a algorithm to seed a constexpr Xorshift32 RNG. When you run the binary, the CPU does zero math. It just prints a string that was hallucinated by the compiler, different at each compile.

```cpp
// this line does all the work while you're getting coffee
static constexpr NameGenerator result(seed, T);

int main() {
    // just printing a constant baked into the data segment
    std::cout << result.name << std::endl;
}
```

Aside from the fun of it, I hope it proves a point: the bottleneck isn't always our hardware. We have wiggle room to redefine when execution should happen, and bake deterministic inference directly into the binary. Code is here: https://github.com/erodola/bigram-metacpp <!-- SC_ON --> submitted by /u/Brief_Argument8155 (https://www.reddit.com/user/Brief_Argument8155)
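The seeding trick is easy to mirror at runtime. Here is a Python sketch of the two algorithms named above: 32-bit FNV-1a over stand-in __TIME__/__DATE__ strings, feeding one step of Xorshift32 (the real project does this in constexpr C++ at compile time; the literal "build time" strings below are placeholders):

```python
def fnv1a(data: bytes) -> int:
    # 32-bit FNV-1a: XOR each byte into the hash, then multiply by the FNV prime.
    h = 0x811C9DC5  # offset basis
    for b in data:
        h = ((h ^ b) * 0x01000193) & 0xFFFFFFFF
    return h

def xorshift32(state: int) -> int:
    # One step of Marsaglia's xorshift32 generator.
    state ^= (state << 13) & 0xFFFFFFFF
    state ^= state >> 17
    state ^= (state << 5) & 0xFFFFFFFF
    return state & 0xFFFFFFFF

# The compiler would substitute the actual build timestamp via __TIME__/__DATE__.
seed = fnv1a(b"12:34:56" + b"Jan  1 2025")
print(xorshift32(seed))  # deterministic for a fixed "build time", new per rebuild
```

Because the seed is a pure function of the build timestamp, each recompilation yields a different but fully deterministic value, which is exactly what lets the generated name vary per build while the binary itself stays constant.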
[link] (https://github.com/erodola/bigram-metacpp) [comments] (https://www.reddit.com/r/programming/comments/1q831cb/my_c_compiler_just_wrote_its_own_fanfiction/)