Reddit Programming – Telegram
Reddit Programming
I will send you the newest posts from the subreddit /r/programming
Follow-up: Load testing my polyglot microservices game - Results and what I learned with k6 [Case Study, Open Source]
https://www.reddit.com/r/programming/comments/1ps7duq/followup_load_testing_my_polyglot_microservices/

Some time ago, I shared my polyglot Codenames custom version here - a multiplayer game built with Java (Spring Boot), Rust (Actix), and C# (ASP.NET Core SignalR). Some people asked about the performance characteristics across the different stacks. I finally added proper load testing with k6. Here are the results.

The Setup
Services tested (Docker containers, local machine):
- Account Service - Java 25 + Spring Boot 4 + WebFlux
- Game Service - Rust + Actix-web
- Chat Service - .NET 10 + SignalR
Test scenarios:
- Smoke tests (baseline, 1 VU)
- Load tests (10 concurrent users, 6m30s ramp)
- SignalR real-time chat (2 concurrent sessions)
- Game WebSocket (3 concurrent sessions)

Results
Service | Endpoint | p95 latency
Account (Java) | Login | 64ms
Account (Java) | Register | 138ms
Game (Rust) | Create game | 15ms
Game (Rust) | Join game | 4ms
Game (Rust) | WS connect | 4ms
Chat (.NET) | WS connect | 37ms

Load test (10 VUs sustained):
- 1,411 complete user flows
- 8,469 HTTP requests
- 21.68 req/s throughput
- 63ms p95 response time
- 0% error rate
SignalR chat test (.NET):
- 84 messages sent, 178 received
- 37ms p95 connection time
- 100% message delivery
Game WebSocket test (Rust/Actix):
- 90 messages sent, 75 received
- 4ms p95 connection time
- 45 WebSocket sessions
- 100% success rate

What I learned
Rust is fast, but the gap is smaller than expected. The Game service (Rust) responds in 4-15ms, while Account (Java with WebFlux) sits at 64-138ms. That's about a 10x difference, but both are well under any reasonable SLA. For a hobby project, Java's developer experience wins.
SignalR just works. I expected WebSocket testing to be painful. The k6 implementation required a custom SignalR client, but once that was working, the .NET service handled real-time messaging flawlessly.
WebFlux handles the load. Spring Boot 4 + WebFlux on Java 25 handles concurrent requests efficiently with its reactive/non-blocking model.
The polyglot tax is real but manageable. Three different build systems, three deployment configs, three ways to handle JSON. But each service plays to its language's strengths.
The SignalR client implements the JSON protocol handshake, message framing, and hub invocation (basically what the official client does, but for k6). The Game WebSocket client is simpler: native WebSocket with JSON messages for join/leave/gameplay actions.

What's next
- Test against GCP Cloud Run (cold starts, auto-scaling)
- Stress testing to find breaking points
- Add Gatling for comparison

submitted by /u/Lightforce_ (https://www.reddit.com/user/Lightforce_)
[link] (https://gitlab.com/RobinTrassard/codenames-microservices/-/tree/account-java-version) [comments] (https://www.reddit.com/r/programming/comments/1ps7duq/followup_load_testing_my_polyglot_microservices/)
Constvector: Log-structured std::vector alternative – 30-40% faster push/pop
https://www.reddit.com/r/programming/comments/1ps8s9e/constvector_logstructured_stdvector_alternative/

Usually std::vector starts with capacity N and grows to capacity 2*N once its size exceeds N; at that point it also copies the data from the old array to the new one. That has a few problems:
1. Copy cost.
2. The allocator/OS has to manage the small (size N) array that the application frees.
3. L1 and L2 cache entries for the array are invalidated, since the array moved to a new location, and the CPU has to fetch the new location into L1/L2 even though the data itself hasn't actually changed.
std::vector's reallocations and recopies are amortised O(1), but at a low level they have a lot of negative impact. Here's a log-structured alternative (constvector) with power-of-2 blocks:
- Push: 3.5 ns/op (vs 5 ns for std::vector)
- Pop: 3.4 ns/op (vs 5.3 ns)
- Index: minor slowdown (3.8 vs 3.4 ns)
Strict worst-case O(1), Θ(N) space trade-off, only log(N) extra compared to std::vector. It reduces internal memory fragmentation, and it doesn't invalidate L1/L2 cache entries for unmodified elements, which improves performance. In the GitHub repo I benchmarked vectors from 1K to 1B elements, and constvector consistently showed better performance for push and pop operations.
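For concreteness, here is a minimal sketch (mine, not the author's constvector code) of the power-of-2 block layout described above: a fixed 64-slot meta array of block pointers covers the log(N) bookkeeping, existing elements never move, and indexing costs one extra bit-scan plus an indirection.

```cpp
// Minimal sketch of a power-of-2 block ("log-structured") vector layout.
// Not the author's constvector implementation; it only illustrates the idea:
// elements never move, so push is worst-case O(1) and old blocks stay cache-hot.
#include <bit>
#include <cstddef>
#include <memory>

template <typename T>
class BlockVector {
    // Meta array: block k holds 2^k elements, so 64 slots cover any size_t size.
    std::unique_ptr<T[]> blocks_[64] = {};
    std::size_t size_ = 0;

    // Map a flat index to (block, offset): block = floor(log2(i+1)),
    // offset = (i+1) - 2^block.
    static std::size_t block_of(std::size_t i) {
        return std::bit_width(i + 1) - 1;
    }
    static std::size_t offset_of(std::size_t i, std::size_t b) {
        return (i + 1) - (std::size_t{1} << b);
    }

public:
    void push_back(const T& value) {
        std::size_t b = block_of(size_);
        if (offset_of(size_, b) == 0)                  // first slot of a new block
            blocks_[b] = std::make_unique<T[]>(std::size_t{1} << b);
        blocks_[b][offset_of(size_, b)] = value;       // no existing element moves
        ++size_;
    }

    void pop_back() { --size_; }                       // block kept for reuse

    T& operator[](std::size_t i) {
        std::size_t b = block_of(i);                   // one extra bit-scan vs. std::vector
        return blocks_[b][offset_of(i, b)];
    }

    std::size_t size() const { return size_; }
};
```

Because blocks are never reallocated, push stays strictly worst-case O(1) and old elements remain wherever the cache already has them; the extra bit-scan and indirection in operator[] are where the small index slowdown (3.8 vs 3.4 ns) comes from.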

YouTube: https://youtu.be/ledS08GkD40
In practice a 64-entry meta array covers the log(N) extra space. I implemented the bare vector operations for the comparison, since actual std::vector implementations carry a lot of iterator-validation code, which adds extra overhead. submitted by /u/pilotwavetheory (https://www.reddit.com/user/pilotwavetheory)
[link] (https://github.com/tendulkar/) [comments] (https://www.reddit.com/r/programming/comments/1ps8s9e/constvector_logstructured_stdvector_alternative/)
Crunch: A Message Definition and Serialization Protocol for Getting Things Right
https://www.reddit.com/r/programming/comments/1ps9y9k/crunch_a_message_definition_and_serialization/

Crunch is a tool I developed in modern C++ for defining, serializing, and deserializing messages. Think along the lines of protobuf, FlatBuffers, Bebop, and MAVLink. I built Crunch to address some grievances I have with the interface design of these existing protocols. It has the following features:
1. Field- and message-level validation is required. What makes a field semantically correct in your program is baked into the C++ type system.
2. The serialization format is a plugin. You can choose read/write-speed-optimized serialization, a protobuf-esque tag-length-value plugin, or write your own.
3. Messages have integrity checks baked in. CRC-16 and parity are shipped with Crunch, or you can write your own.
4. No dynamic memory allocation. Using template magic, Crunch calculates the worst-case length for all message types, for all serialization protocols, and exposes a constexpr API to create a buffer for serialization and deserialization.
I'm very happy with how it has turned out so far. I tried to make it super easy to use by providing Bazel and CMake targets and extensive documentation. Future work involves automating cross-platform integration tests via QEMU, registering with as many package managers as I can, and creating bindings for other languages. Hopefully Crunch can be useful in your project! I have written the first in a series of blog posts about the development of Crunch, linked in my profile, if you're interested!
submitted by /u/volatile-int (https://www.reddit.com/user/volatile-int)
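The no-allocation point is the most unusual of these. Here is a rough sketch of the general technique (hypothetical type names, not Crunch's actual API): each message type reports its worst-case wire size, and a constexpr maximum over all of them sizes a stack buffer at compile time.

```cpp
// Hedged sketch of the general idea the post describes, NOT Crunch's API:
// compute a worst-case serialized size at compile time so the wire buffer
// can live on the stack, with no dynamic allocation.
#include <algorithm>
#include <array>
#include <cstddef>
#include <cstdint>

// Hypothetical message descriptions; each reports its own worst-case size.
struct Heartbeat { static constexpr std::size_t max_wire_size = 8; };
struct Telemetry { static constexpr std::size_t max_wire_size = 64; };
struct Command   { static constexpr std::size_t max_wire_size = 32; };

// Worst case across every message type the link can carry.
template <typename... Msgs>
constexpr std::size_t max_wire_size_of() {
    return std::max({Msgs::max_wire_size...});
}

int main() {
    // Buffer sized at compile time; serialization writes here, never the heap.
    constexpr std::size_t kBufSize = max_wire_size_of<Heartbeat, Telemetry, Command>();
    static_assert(kBufSize == 64);
    std::array<std::uint8_t, kBufSize> buffer{};
    return buffer.size() == kBufSize ? 0 : 1;
}
```

Per the post, Crunch extends the same calculation across all serialization plugins, so the buffer is guaranteed large enough regardless of which wire format is selected.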
[link] (https://github.com/sam-w-yellin/crunch) [comments] (https://www.reddit.com/r/programming/comments/1ps9y9k/crunch_a_message_definition_and_serialization/)
Load Balancing Sounds Simple Until Traffic Actually Spikes. Here’s What People Get Wrong
https://www.reddit.com/r/programming/comments/1psbwq0/load_balancing_sounds_simple_until_traffic/

Load balancing is often described as “just spread traffic across servers,” but that definition collapses the moment real traffic shows up. The real failures happen when a backend is technically “healthy” but painfully slow, when sticky sessions quietly break stateful apps, or when retries and timeouts double your traffic without you noticing. At scale, load balancing stops being about distribution and starts being about failure management: health checks can lie, round-robin falls apart under uneven load, and autoscaling without the right balancing strategy just multiplies problems. This breakdown explains where textbook load balancing diverges from production reality, including L4 vs L7 trade-offs and why “even traffic” is often the wrong goal: Load Balancing (https://www.netcomlearning.com/blog/what-is-load-balancing) submitted by /u/netcommah (https://www.reddit.com/user/netcommah)
[link] (https://www.netcomlearning.com/blog/what-is-load-balancing) [comments] (https://www.reddit.com/r/programming/comments/1psbwq0/load_balancing_sounds_simple_until_traffic/)
Cloud Code Feels Magical Until You Realize What It’s Actually Abstracting Away
https://www.reddit.com/r/programming/comments/1pscjp2/cloud_code_feels_magical_until_you_realize_what/

Cloud Code looks like a productivity win on day one: deploy from your IDE, preview resources instantly, fewer YAML headaches. But the real value (and risk) is in what it abstracts: IAM wiring, deployment context, environment drift, and the false sense that “local == prod.” Teams move faster, but without understanding what Cloud Code is generating and managing under the hood, debugging and scaling can get messy fast. This write-up breaks down where Cloud Code genuinely helps, where it can hide complexity, and how to use it without turning your IDE into a black box: Cloud Code (https://www.netcomlearning.com/blog/cloud-code) submitted by /u/netcommah (https://www.reddit.com/user/netcommah)
[link] (https://www.netcomlearning.com/blog/cloud-code) [comments] (https://www.reddit.com/r/programming/comments/1pscjp2/cloud_code_feels_magical_until_you_realize_what/)
AlloyDB for PostgreSQL: Familiar SQL, Very Unfamiliar Performance Characteristics
https://www.reddit.com/r/programming/comments/1psclu3/alloydb_for_postgresql_familiar_sql_very/

AlloyDB looks like “just Postgres on GCP” until you actually run real workloads on it. The surprises show up fast: query performance that doesn’t behave like vanilla Postgres, storage and compute scaling that changes how you think about bottlenecks, and read pools that quietly reshape how apps should be architected. It’s powerful, but only if you understand what Google has modified under the hood and where it diverges from self-managed or Cloud SQL Postgres. This breakdown explains what AlloyDB optimizes, where it shines, and where assumptions from traditional Postgres can get you into trouble: AlloyDB (https://www.netcomlearning.com/blog/alloydb-for-postgresql) submitted by /u/netcommah (https://www.reddit.com/user/netcommah)
[link] (https://www.netcomlearning.com/blog/alloydb-for-postgresql) [comments] (https://www.reddit.com/r/programming/comments/1psclu3/alloydb_for_postgresql_familiar_sql_very/)
A Git confusion I see a lot with junior devs: fetch vs pull
https://www.reddit.com/r/programming/comments/1psd3r3/a_git_confusion_i_see_a_lot_with_junior_devs/

I’ve seen quite a few junior devs get stuck when git pull suddenly throws conflicts, even though they “just wanted latest code”. I wrote a short explanation aimed at juniors that breaks down:
- what git fetch actually does
- why git pull behaves differently when the branch isn’t clean
- where git pull --rebase fits in
No theory dump. Just real examples and mental models that helped my teams.
Sharing in case it helps someone avoid a confusing first Git conflict. submitted by /u/sshetty03 (https://www.reddit.com/user/sshetty03)
[link] (https://medium.com/stackademic/the-real-difference-between-git-fetch-git-pull-and-git-pull-rebase-991514cb5bd6?sk=dd39ca5be91586de5ac83efe60075566) [comments] (https://www.reddit.com/r/programming/comments/1psd3r3/a_git_confusion_i_see_a_lot_with_junior_devs/)
Where should input validation and recovery logic live in a Java CLI program? (main loop vs input methods vs exceptions)
https://www.reddit.com/r/programming/comments/1psq57j/where_should_input_validation_and_recovery_logic/

I’m designing a Java CLI application based on a while loop with multiple user input points. My main question is about where input validation and error recovery logic should be placed when the user enters invalid input. Currently, I’m considering several approaches:
A. Validate in main
• Input methods return raw values
• main checks validity
• On invalid input, print an error message and continue the loop
B. Validate inside input methods
• Methods like getUserChoice() internally loop until valid input is provided
• The method guarantees returning a valid value
C. Use exceptions
• Input methods throw exceptions on invalid input
• The caller (e.g., main) catches the exception and decides how to recover
All three approaches work functionally, but I’m unsure which one is more appropriate in a teaching project or small system, especially in terms of:
• responsibility separation
• readability
• maintainability
• future extensibility
Is there a generally recommended approach for this kind of CLI application, or does it depend on context? How would you structure this in practice? submitted by /u/Mission_Upstairs_242 (https://www.reddit.com/user/Mission_Upstairs_242)
[link] (https://www.reddit.com/r/programming/submit/) [comments] (https://www.reddit.com/r/programming/comments/1psq57j/where_should_input_validation_and_recovery_logic/)
Functional Equality (rewrite)
https://www.reddit.com/r/programming/comments/1pt2c68/functional_equality_rewrite/

Three years after my original post here (https://www.reddit.com/r/programming/comments/13yjutr/functional_equality/), I've extensively rewritten my essay on Functional Equality vs. Semantic Equality in programming languages. It dives into Leibniz's Law, substitutability, caching pitfalls, and a survey of == across languages like Python, Go, and Haskell. Feedback welcome! submitted by /u/Master-Reception9062 (https://www.reddit.com/user/Master-Reception9062)
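Not from the essay itself, but a minimal C++ illustration of the kind of gap it covers between == and substitutability: two values that compare equal yet are observably different, which is exactly the trap for caches keyed on equality.

```cpp
// 0.0 and -0.0 compare equal under ==, yet they are not substitutable:
// a cache keyed on == can hand back a value that behaves differently
// from the one that was requested.
#include <cmath>
#include <iostream>

int main() {
    double pos = 0.0, neg = -0.0;
    std::cout << std::boolalpha
              << (pos == neg) << '\n'        // true: "equal" to ==
              << std::signbit(pos) << ' '    // false
              << std::signbit(neg) << '\n';  // true: observably different
    return 0;
}
```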
[link] (https://jonathanwarden.com/functional-equality/) [comments] (https://www.reddit.com/r/programming/comments/1pt2c68/functional_equality_rewrite/)
Algorithmically Generated Crosswords: Finding 'good enough' for an NP-Complete problem
https://www.reddit.com/r/programming/comments/1pt2x8x/algorithmically_generated_crosswords_finding_good/

The library is on GitHub (Eyas/xwgen) and linked from the post; you can use it with a provided sample dictionary. submitted by /u/eyassh (https://www.reddit.com/user/eyassh)
[link] (https://blog.eyas.sh/2025/12/algorithmic-crosswords/) [comments] (https://www.reddit.com/r/programming/comments/1pt2x8x/algorithmically_generated_crosswords_finding_good/)