Lessons Learned from two decades of writing bad code
https://www.reddit.com/r/programming/comments/1p45vnj/lessons_learned_from_two_decades_of_writing_bad/
submitted by /u/ReDucTor (https://www.reddit.com/user/ReDucTor)
[link] (https://youtu.be/1YrenWdSeO0) [comments] (https://www.reddit.com/r/programming/comments/1p45vnj/lessons_learned_from_two_decades_of_writing_bad/)
Why /dev/null Is an ACID Compliant Database
https://www.reddit.com/r/programming/comments/1p46nnw/why_devnull_is_an_acid_compliant_database/
submitted by /u/alexeyr (https://www.reddit.com/user/alexeyr)
[link] (https://jyu.dev/blog/why-dev-null-is-an-acid-compliant-database/) [comments] (https://www.reddit.com/r/programming/comments/1p46nnw/why_devnull_is_an_acid_compliant_database/)
Building Standalone Julia Binaries: A Complete Guide
https://www.reddit.com/r/programming/comments/1p4j2ks/building_standalone_julia_binaries_a_complete/
submitted by /u/joelreymont (https://www.reddit.com/user/joelreymont)
[link] (https://joel.id/julia-my-love/) [comments] (https://www.reddit.com/r/programming/comments/1p4j2ks/building_standalone_julia_binaries_a_complete/)
Day 121: Building Linux System Log Collectors
https://www.reddit.com/r/programming/comments/1p4jf18/day_121_building_linux_system_log_collectors/
- JSON Schema validation engine with fail-fast semantics and detailed error reporting
- Structured log pipeline processing 50K+ JSON events/second with zero data loss
- Multi-tier caching strategy reducing validation overhead by 85%
- Dead letter queue pattern for malformed messages with automatic retry logic
- Schema evolution framework supporting backward-compatible field additions

System Design Deep Dive: Five Patterns for Reliable Structured Data

Pattern 1: Producer-Side Schema Validation (Fail-Fast)

The Trade-off: Validate at the producer, at the consumer, or at both ends? Most systems validate at the consumer, and this is a mistake. By the time invalid JSON reaches Kafka, you've wasted network bandwidth, storage, and processing cycles. Worse, Kafka replication amplifies the problem 3x (leader + 2 replicas).

The Solution: Validate at the producer with a three-tier approach:

- Fast syntactic validation (is this JSON?): 100µs average latency
- Schema conformance check (does it match the expected structure?): 500µs with cached schemas
- Business rule validation (e.g. timestamp not in the future): 200µs

Dropbox uses this pattern to reject 3% of incoming logs before they hit Kafka, saving 12TB of storage daily. The key insight: failed validations are cheap at the edge, expensive in the core.

Anti-pattern Warning: Don't validate synchronously on the request path. Use async validation with immediate acknowledgment, then route failures to a dead letter queue. Otherwise, a schema validation bug can bring down your entire API.

https://sdcourse.substack.com/p/day-15-json-support-for-structured-8ba
https://github.com/sysdr/course/tree/main/day121/linux-log-collector
https://systemdr.substack.com/

submitted by /u/Extra_Ear_10 (https://www.reddit.com/user/Extra_Ear_10)
[link] (https://sdcourse.substack.com/p/day-121-building-linux-system-log) [comments] (https://www.reddit.com/r/programming/comments/1p4jf18/day_121_building_linux_system_log_collectors/)
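The three-tier producer-side validation could be sketched roughly as follows. The field names, the schema stand-in, and the dead-letter handling are illustrative assumptions for this sketch, not the course's actual code.

```python
import json
import time

# Hypothetical required fields for a log event; a real system would use a
# cached, versioned JSON Schema here.
REQUIRED_FIELDS = {"timestamp": (int, float), "level": str, "message": str}

def validate_syntax(raw):
    """Tier 1: is this JSON at all?"""
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        return None

def validate_schema(event):
    """Tier 2: does the event match the expected structure?"""
    return isinstance(event, dict) and all(
        field in event and isinstance(event[field], types)
        for field, types in REQUIRED_FIELDS.items()
    )

def validate_business_rules(event):
    """Tier 3: domain checks, e.g. the timestamp must not be in the future."""
    return event["timestamp"] <= time.time()

def accept(raw, dead_letter_queue):
    """Fail fast: any tier failing routes the raw message to a dead letter
    queue instead of the broker, so invalid JSON never reaches Kafka."""
    event = validate_syntax(raw)
    if event is None or not validate_schema(event) or not validate_business_rules(event):
        dead_letter_queue.append(raw)
        return False
    return True

dlq = []
ok = json.dumps({"timestamp": time.time() - 5, "level": "info", "message": "disk full"})
accept(ok, dlq)                  # passes all three tiers
accept('{"level": "info"', dlq)  # truncated JSON fails tier 1, lands in dlq
```

The tiers are ordered cheapest-first, so most garbage is rejected before any schema machinery runs at all.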
A bug caused by a door in a game you may have heard of called "Half Life 2" (spoiler: fp precision)
https://www.reddit.com/r/programming/comments/1p4kf9a/a_bug_caused_by_a_door_in_a_game_you_may_have/
submitted by /u/self (https://www.reddit.com/user/self)
[link] (https://mastodon.gamedev.place/@TomF/115589875974658415) [comments] (https://www.reddit.com/r/programming/comments/1p4kf9a/a_bug_caused_by_a_door_in_a_game_you_may_have/)
Floodfill algorithm in Python with interactive demos
https://www.reddit.com/r/programming/comments/1p4no76/floodfill_algorithm_in_python_with_interactive/
I wrote this tutorial because I've always liked graph-related algorithms and I wanted to try my hand at writing something with interactive demos. This article teaches you how to implement and use the floodfill algorithm and includes interactive demos to:

- use floodfill to colour regions in an image
- step through the general floodfill algorithm, with annotations of what the algorithm is doing at each step
- apply floodfill in a grid with obstacles to see how the starting point affects the process
- use floodfill to count the number of disconnected regions in a grid
- use a modified version of floodfill to simulate fluid spreading over a surface with obstacles

I know the internet can be relentless, but I'm really looking forward to everyone's comments and suggestions, since I love interactive articles and I hope to be able to create more of these in the future. Happy reading and let me know what you think!

The article: https://mathspp.com/blog/floodfill-algorithm-in-python

submitted by /u/RojerGS (https://www.reddit.com/user/RojerGS)
[link] (https://mathspp.com/blog/floodfill-algorithm-in-python) [comments] (https://www.reddit.com/r/programming/comments/1p4no76/floodfill_algorithm_in_python_with_interactive/)
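For a non-interactive taste of the technique, here is a minimal iterative floodfill sketch (a BFS variant); the grid and colour values are made up for illustration and are not taken from the article.

```python
from collections import deque

def floodfill(grid, start, new_value):
    """Replace the connected region containing `start` with `new_value`
    (breadth-first, 4-neighbour connectivity)."""
    rows, cols = len(grid), len(grid[0])
    r0, c0 = start
    old_value = grid[r0][c0]
    if old_value == new_value:
        return grid  # nothing to do; also avoids an infinite loop
    queue = deque([start])
    grid[r0][c0] = new_value
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == old_value:
                grid[nr][nc] = new_value  # mark before enqueueing to avoid duplicates
                queue.append((nr, nc))
    return grid

grid = [
    [0, 0, 1],
    [0, 1, 1],
    [1, 1, 0],
]
# Recolours the top-left region of 0s; the lone 0 at (2, 2) is disconnected
# and stays untouched.
floodfill(grid, (0, 0), 2)
```

Counting disconnected regions, as in one of the demos, is then just a loop that calls floodfill on every cell still holding the original value and counts how many calls were needed.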
No, LLVM can't fix your code
https://www.reddit.com/r/programming/comments/1p4o2qa/no_llvm_cant_fix_your_code/
submitted by /u/Commission-Either (https://www.reddit.com/user/Commission-Either)
[link] (https://daymare.net/blogs/no-llvm-cant-fix-your-code/) [comments] (https://www.reddit.com/r/programming/comments/1p4o2qa/no_llvm_cant_fix_your_code/)
Visualizing recursive merge sort with a recursive sequence diagram
https://www.reddit.com/r/programming/comments/1p4obh8/visualizing_recursive_merge_sort_with_a_recursive/
submitted by /u/Veuxdo (https://www.reddit.com/user/Veuxdo)
[link] (https://app.ilograph.com/demo.ilograph.Merge%2520Sort/Merge%2520Sort%2520Main) [comments] (https://www.reddit.com/r/programming/comments/1p4obh8/visualizing_recursive_merge_sort_with_a_recursive/)
Looking for partnership to create multiple micro-SaaS (trial and error, no attachment)
https://www.reddit.com/r/programming/comments/1p4pg4i/looking_for_partnership_to_create_multiple/
Hey guys! I'm a backend developer (Java and Python) and I recently launched a SaaS that ended up not getting any users. Instead of getting discouraged, I'm trying to regain the desire to test new ideas without getting too attached to each project (as I did with the last one, which failed), so the process stays light and we can learn quickly from each attempt.

The journey alone is very complicated: it's difficult to maintain motivation, focus, and speed when you do everything yourself, on top of the lack of time to study and keep the system progressing. That's why I'm looking for someone (or a few people) to form a pair or a small team.

The idea is simple: test several micro-SaaS or complete SaaS ideas, validate quickly, discard without drama, and continue. You can be frontend, backend, mobile, designer, marketing… any area that can help. The important thing is to have a real desire to create, launch and test.

If you are interested, comment here or send me a DM and we can exchange ideas ;)

submitted by /u/renanaq (https://www.reddit.com/user/renanaq)
[link] (https://inspiras.me/) [comments] (https://www.reddit.com/r/programming/comments/1p4pg4i/looking_for_partnership_to_create_multiple/)
How revenue decisions shape technical debt
https://www.reddit.com/r/programming/comments/1p4qvfc/how_revenue_decisions_shape_technical_debt/
submitted by /u/ArtisticProgrammer11 (https://www.reddit.com/user/ArtisticProgrammer11)
[link] (https://www.hyperact.co.uk/blog/how-revenue-decisions-shape-technical-debt) [comments] (https://www.reddit.com/r/programming/comments/1p4qvfc/how_revenue_decisions_shape_technical_debt/)
My first real Rust project
https://www.reddit.com/r/programming/comments/1p4slri/my_first_real_rust_project/
submitted by /u/nfrankel (https://www.reddit.com/user/nfrankel)
[link] (https://blog.frankel.ch/first-real-rust-project/) [comments] (https://www.reddit.com/r/programming/comments/1p4slri/my_first_real_rust_project/)
B-Trees: Why Every Database Uses Them
https://www.reddit.com/r/programming/comments/1p4ti19/btrees_why_every_database_uses_them/
submitted by /u/m3m3o (https://www.reddit.com/user/m3m3o)
[link] (https://mehmetgoekce.substack.com/p/b-trees-why-every-database-uses-them) [comments] (https://www.reddit.com/r/programming/comments/1p4ti19/btrees_why_every_database_uses_them/)
Alerts: You need a budget!
https://www.reddit.com/r/programming/comments/1p4uvhw/alerts_you_need_a_budget/
No matter the company, the domain, or the culture, I hear devops people complain about alert fatigue. This is not strange. Our work can be demanding, and alerts can be a big cause of that "demand". What is strange, in my view, is that there is a general sense of defeatism when it comes to dealing with alert fatigue. Maybe there's a quick initiative here and there to clean up this or that, but the status quo always returns. We have no structural solutions (that I've seen).
So let me try my hand at proposing a simple idea: budgeting. submitted by /u/IEavan (https://www.reddit.com/user/IEavan)
[link] (https://eavan.blog/posts/alert-budgeting.html) [comments] (https://www.reddit.com/r/programming/comments/1p4uvhw/alerts_you_need_a_budget/)
Human Capital Management Software (HCM): Why Modern Businesses Can’t Survive Without It in 2025
https://www.reddit.com/r/programming/comments/1p5a6v3/human_capital_management_software_hcm_why_modern/
In 2025, HR operations have officially moved beyond spreadsheets and traditional HRMS tools. Hybrid work, compliance pressure, rapid hiring cycles, and AI-driven workforce analytics are pushing companies toward smarter automation. I put together a complete guide covering:

- What HCM software actually is
- Why companies are switching from HRM to HCM
- Core modules every modern HCM must have
- How AI is transforming recruitment, performance, and employee engagement
- Development cost breakdown (basic → advanced AI systems)
- Why custom HCM is becoming the preferred choice over ready-made tools
- When to build vs. buy
- Examples of modern HCM capabilities

If you're in HR, tech, software development, or building SaaS products, this guide will give you a clear understanding of how HCM is evolving and why it matters. Would love to hear feedback from SaaS founders, HR managers, and dev teams using HCM or building something similar.

submitted by /u/Big-Click2648 (https://www.reddit.com/user/Big-Click2648)
[link] (https://codevian.com/blog/modern-hcm-software-guide/) [comments] (https://www.reddit.com/r/programming/comments/1p5a6v3/human_capital_management_software_hcm_why_modern/)
Why "Start Simple" Should Be Your Default in the AI-Assisted Development Era
https://www.reddit.com/r/programming/comments/1p5b0z3/why_start_simple_should_be_your_default_in_the/
A case for resisting over-engineered AI-generated architectures and instead beginning projects with the smallest viable design. Simple, explicit code provides tighter threat surfaces, faster debugging, and far fewer hidden abstractions that developers only partially understand. Before letting AI optimize anything, build the clear, boring version first so you know what the system actually does and can reason about it when things break. submitted by /u/AWildMonomAppears (https://www.reddit.com/user/AWildMonomAppears)
[link] (https://practicalsecurity.substack.com/p/why-starting-simple-is-your-secret) [comments] (https://www.reddit.com/r/programming/comments/1p5b0z3/why_start_simple_should_be_your_default_in_the/)
Celebrate fire preventers, not just firefighters. The stories you praise shape your culture. Choose heroes who build systems, not chaos.
https://www.reddit.com/r/programming/comments/1p5cqmt/celebrate_fire_preventers_not_just_firefighters/
submitted by /u/goto-con (https://www.reddit.com/user/goto-con)
[link] (https://youtube.com/shorts/WuDUJsNNlSM) [comments] (https://www.reddit.com/r/programming/comments/1p5cqmt/celebrate_fire_preventers_not_just_firefighters/)
Finly - Closing the Gap Between Schema-First and Code-First
https://www.reddit.com/r/programming/comments/1p5dh2b/finly_closing_the_gap_between_schemafirst_and/
submitted by /u/Dan6erbond2 (https://www.reddit.com/user/Dan6erbond2)
[link] (https://finly.ch/engineering-blog/350169-closing-the-gap-between-schema-first-and-code-first-graphql-development) [comments] (https://www.reddit.com/r/programming/comments/1p5dh2b/finly_closing_the_gap_between_schemafirst_and/)
TLS Handshake Latency: When Your Load Balancer Becomes a Bottleneck
https://www.reddit.com/r/programming/comments/1p5f7rq/tls_handshake_latency_when_your_load_balancer/
Most engineers think of TLS as network overhead: a few extra round trips that add maybe 50-100ms. But here's what actually happens: when your load balancer receives a new HTTPS connection, it needs to perform CPU-intensive cryptographic operations. We're talking RSA signature verification, ECDHE key exchange calculations, and symmetric key derivation. On a quiet Tuesday morning, each handshake takes 20-30ms. During a traffic spike? That same handshake can take 5 seconds.

The culprit is queueing. Your load balancer has a fixed number of worker threads handling TLS operations. When requests arrive faster than workers can process them, they queue up. Now you're not just dealing with the crypto overhead; you're also dealing with wait time in a saturated queue. I've seen production load balancers at major tech companies go from 50ms p99 handshake latency to 8 seconds during deployment events, when thousands of connections need to be re-established simultaneously.

https://systemdr.substack.com/p/tls-handshake-latency-when-your-load
https://github.com/sysdr/sdir/tree/main/tls_handshake

submitted by /u/Extra_Ear_10 (https://www.reddit.com/user/Extra_Ear_10)
[link] (https://systemdr.substack.com/p/tls-handshake-latency-when-your-load) [comments] (https://www.reddit.com/r/programming/comments/1p5f7rq/tls_handshake_latency_when_your_load_balancer/)
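The nonlinear blow-up described above can be illustrated with a textbook M/M/c queue model of the worker pool. The numbers here (25ms of CPU per handshake, 8 TLS workers) are assumptions for the sketch, not measurements from the article.

```python
from math import factorial

def mmc_wait(arrival_rate, service_rate, servers):
    """Mean time spent waiting in queue for an M/M/c system (Erlang C formula)."""
    rho = arrival_rate / (servers * service_rate)  # utilization
    if rho >= 1:
        return float("inf")  # saturated: the queue grows without bound
    a = arrival_rate / service_rate  # offered load in Erlangs
    tail = a**servers / (factorial(servers) * (1 - rho))
    p_wait = tail / (sum(a**k / factorial(k) for k in range(servers)) + tail)
    return p_wait / (servers * service_rate - arrival_rate)

service_rate = 40.0  # handshakes/second per worker, i.e. 25 ms of CPU each
workers = 8          # fixed-size TLS worker pool: 320 handshakes/s capacity

for offered in (100, 250, 310):  # offered handshakes/second
    wait_ms = mmc_wait(offered, service_rate, workers) * 1000
    print(f"{offered} handshakes/s -> mean queue wait {wait_ms:.2f} ms")
```

Queue wait is negligible at roughly 30% utilization and explodes as the offered load approaches the pool's capacity, which is why handshake latency degrades so suddenly during spikes and reconnect storms.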
Read-Through vs Write-Through Cache
https://www.reddit.com/r/programming/comments/1p5fh9z/readthrough_vs_writethrough_cache/
submitted by /u/stmoreau (https://www.reddit.com/user/stmoreau)
[link] (https://www.systemdesignbutsimple.com/p/read-through-vs-write-through-cache) [comments] (https://www.reddit.com/r/programming/comments/1p5fh9z/readthrough_vs_writethrough_cache/)
Shai-Hulud Second Coming: Software Supply Chain Attack Exposing Code and Harvesting Credentials
https://www.reddit.com/r/programming/comments/1p5g2ac/shaihulud_second_coming_software_supply_chain/
The Shai-Hulud attackers are back with a new supply chain attack targeting the npm ecosystem. Multiple popular packages were infected with a malicious payload via a preinstall script. The attack is in progress. Some of the indicators include:

- Download and installation of bun
- Execution of bun_environment.js using bun
- Credentials stolen from infected machines and CI/CD being exposed through public GitHub repositories: https://github.com/search?q=%22Sha1-Hulud%3A%20The%20Second%20Coming%22&type=repositories

submitted by /u/N1ghtCod3r (https://www.reddit.com/user/N1ghtCod3r)
[link] (https://safedep.io/shai-hulud-second-coming-supply-chain-attack/) [comments] (https://www.reddit.com/r/programming/comments/1p5g2ac/shaihulud_second_coming_software_supply_chain/)
How many HTTP requests/second can a Single Machine handle?
https://www.reddit.com/r/programming/comments/1p5gins/how_many_http_requestssecond_can_a_single_machine/
When designing systems and deciding on architecture, the use of microservices and other complex solutions is often justified on the basis of predicted performance and scalability needs. Out of curiosity, I decided to test the performance limits of the simplest possible approach: a single instance of an application, with a single instance of a database, deployed to a single machine.

To resemble real-world use cases as much as possible, the setup was:

- Java 21-based REST API built with Spring Boot 3, using Virtual Threads
- PostgreSQL as the database, loaded with over one million rows of data
- External volume for the database: it does not write to the local file system
- Realistic load characteristics: tests consist primarily of read requests, with approximately 20% writes, calling the REST API backed by the PostgreSQL database
- A single machine in a few versions: 1 CPU / 2 GB of memory, 2 CPUs / 4 GB, 4 CPUs / 8 GB
- A single LoadTest file as the testing tool, running on 4 test machines in parallel, since we usually have many HTTP clients, not just one
- Everything built and running in Docker
- DigitalOcean as the infrastructure provider

As the results below show, a single machine with a single database can handle a lot: way more than most of us will ever need. Unless we have extreme load and performance needs, microservices serve mostly as an organizational tool, allowing many teams to work in parallel more easily. Performance doesn't justify them.

The results:

Small machine - 1 CPU, 2 GB of memory:
- Can handle a sustained load of 200 - 300 RPS
- For 15 seconds, it was able to handle 1000 RPS with stats: Min 0.001s, Max 0.2s, Mean 0.013s, Percentile 90: 0.026s, Percentile 95: 0.034s, Percentile 99: 0.099s

Medium machine - 2 CPUs, 4 GB of memory:
- Can handle a sustained load of 500 - 1000 RPS
- For 15 seconds, it was able to handle 1000 RPS with stats: Min 0.001s, Max 0.135s, Mean 0.004s, Percentile 90: 0.007s, Percentile 95: 0.01s, Percentile 99: 0.023s

Large machine - 4 CPUs, 8 GB of memory:
- Can handle a sustained load of 2000 - 3000 RPS
- For 15 seconds, it was able to handle 4000 RPS with stats: Min 0.0s (less than 1ms), Max 1.05s, Mean 0.058s, Percentile 90: 0.124s, Percentile 95: 0.353s, Percentile 99: 0.746s

Huge machine - 8 CPUs, 16 GB of memory (not tested):
- Most likely can handle a sustained load of 4000 - 6000 RPS

submitted by /u/BinaryIgor (https://www.reddit.com/user/BinaryIgor)
[link] (https://binaryigor.com/how-many-http-requests-can-a-single-machine-handle.html) [comments] (https://www.reddit.com/r/programming/comments/1p5gins/how_many_http_requestssecond_can_a_single_machine/)
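The per-run stats quoted in the results (Min/Max/Mean plus the percentiles) are straightforward to derive from raw latency samples. This nearest-rank percentile helper is a generic sketch, not the post's actual tooling, and the sample latencies are made up for illustration.

```python
import math

def percentile(samples, p):
    """Nearest-rank percentile: the smallest sample value that covers
    at least p% of all samples."""
    ordered = sorted(samples)
    rank = max(1, math.ceil(p / 100 * len(ordered)))
    return ordered[rank - 1]

# Made-up latency samples in seconds, for illustration only.
latencies = [0.001, 0.002, 0.004, 0.007, 0.010, 0.013, 0.023, 0.026, 0.034, 0.099]
summary = {
    "Min": min(latencies),
    "Max": max(latencies),
    "Mean": sum(latencies) / len(latencies),
    "Percentile 90": percentile(latencies, 90),
    "Percentile 95": percentile(latencies, 95),
    "Percentile 99": percentile(latencies, 99),
}
```

Note how the mean says little on its own: a handful of slow outliers barely moves it, while p99 surfaces them immediately, which is why load-test reports lead with percentiles.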