Reducing App & Website Load Time by 40% — Production Notes
https://www.reddit.com/r/programming/comments/1pn1apa/reducing_app_website_load_time_by_40_production/
TL;DR: Most real performance wins come from removing work, not adding tools. JavaScript payloads and API over-fetching are the usual culprits. Measure real users, not just lab scores. A disciplined approach can deliver a ~40% load-time reduction within a few months.

Why This Exists
Over two decades, I’ve worked on systems ranging from early PHP monoliths to edge-deployed SPAs and mobile apps at scale. Despite better networks and faster hardware, many modern apps are slower than they should be. This write-up is not marketing. It’s a practical summary of what actually reduced app and website load time by ~40% across multiple real-world systems.

What We Measured (And What We Ignored)
We stopped obsessing over single Lighthouse scores. Metrics that actually correlated with retention and conversions:
- TTFB: < ~700–800 ms (p95)
- LCP: < ~2.3–2.5 s (real users)
- INP: < 200 ms
- Total JS executed before interaction: as low as possible
Metrics we largely ignored:
- Perfect lab scores
- Synthetic-only tests
- One-off benchmarks without production traffic
If it didn’t affect real users, it didn’t matter.

JavaScript Was the Biggest Performance Tax
Across almost every codebase, JavaScript was the dominant reason pages felt slow. What actually moved the needle:
- Deleting unused dependencies
- Removing legacy polyfills
- Replacing heavy UI libraries with simpler components
- Shipping less JS instead of “optimizing” more JS
A 25–35% JS reduction often resulted in a 15–20% load-time improvement by itself. The fastest pages usually had the least JavaScript.

Rendering Strategy Matters More Than Framework Choice
The framework wars are mostly noise. What mattered:
- Server-side rendering for initial content
- Partial hydration or island-based rendering
- Avoiding full client-side hydration when not required
Whether this was done with Next.js, Astro, SvelteKit, or a custom setup mattered less than when and how much code ran on the client.
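The "real users, not lab scores" budgets above can be checked mechanically. A minimal sketch in Python (the nearest-rank percentile and the p75 cut for LCP/INP are illustrative choices of mine, not from the post):

```python
import math

def percentile(samples, p):
    """Nearest-rank percentile of a non-empty list of numbers."""
    ordered = sorted(samples)
    k = max(1, math.ceil(p / 100 * len(ordered)))
    return ordered[k - 1]

def check_budgets(rum):
    """rum: metric name -> list of real-user measurements in milliseconds.
    Returns the set of metrics that blew their budget from the post:
    TTFB p95 < 800 ms, LCP < 2500 ms, INP < 200 ms."""
    failures = set()
    if percentile(rum["ttfb"], 95) >= 800:   # TTFB budget at p95
        failures.add("ttfb")
    if percentile(rum["lcp"], 75) >= 2500:   # LCP over real users (p75 assumed)
        failures.add("lcp")
    if percentile(rum["inp"], 75) >= 200:    # INP budget (p75 assumed)
        failures.add("inp")
    return failures
```

Run against real RUM samples rather than lab runs; a single slow synthetic test never trips these budgets, a slow tail of real users does.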
Backend Latency Was Usually Self-Inflicted
Slow backends were rarely slow because of hardware. Common causes:
- Chatty service-to-service calls
- Over-fetching data “just in case”
- Poor cache invalidation strategies
- N+1 queries hiding in plain sight
Adding more servers didn’t help. Removing unnecessary calls did.

APIs: Fewer, Smaller, Closer
API design had a direct impact on load time. Changes that consistently worked:
- Backend-for-Frontend (BFF) patterns
- Smaller, purpose-built responses
- Aggressive response caching
- Moving latency-sensitive APIs closer to users (edge)
HTTP/3 and better transport helped, but payload size and call count mattered more.

Images and Media: Still the Low-Hanging Fruit
Images often accounted for 50–60% of page weight. Non-negotiables:
- AVIF / WebP by default
- Responsive image sizing
- Lazy loading below the fold
- CDN-based image transformation
Serving raw images in production is still one of the fastest ways to waste bandwidth.

Caching: The Fastest Optimization
Caching delivered the biggest gains with the least effort. Layers that mattered:
- Browser cache with long-lived assets
- CDN caching for HTML where possible
- Server-side caching for expensive computations
- API response caching
Repeat visits often became 50%+ faster with sane caching alone.

Mobile Apps: Startup Time Is the UX
On mobile, startup time is the first impression. What worked:
- Lazy-loading non-critical modules
- Reducing third-party SDKs
- Deferring analytics and trackers
- Caching aggressively on-device
Users don’t care why an app is slow. They just uninstall it.

Observability Changed Behavior
Once teams saw real-user performance data, priorities changed. Effective practices:
- Real User Monitoring (RUM)
- Performance budgets enforced in CI
- Alerts on regressions, not just outages
Visibility alone prevented many performance regressions.

A Simple 90–180 Day Playbook
First 90 days:
- Measure real users
- Cut JS and media weight
- Add basic caching
- Fix obvious backend bottlenecks
Next 90 days:
- Rework rendering strategy
- Optimize APIs and data access
- Introduce edge delivery
- Automate performance checks
This cadence repeatedly delivered ~40% load-time reduction without rewriting entire systems.

Common Mistakes
- Adding tools before removing waste
- Chasing perfect lab scores
- Ignoring mobile users
- Treating performance as a one-time task
Performance decays unless actively defended.

A Note on Our Work
At Codevian Technologies, we apply the same constraints internally: measure real users, remove unnecessary work, and prefer boring, maintainable solutions. Most performance wins still come from deleting code.

Final Thought
Performance is not about being clever. It’s about being disciplined enough to say no to unnecessary work, over and over again. Fast systems are usually simple systems.
submitted by /u/Big-Click2648 (https://www.reddit.com/user/Big-Click2648)
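The server-side caching layer the article calls out ("expensive computations") can be sketched as a tiny TTL cache. A minimal illustration in Python; the class, names, and 60-second TTL are mine, not the article's:

```python
import time

class TTLCache:
    """Minimal in-process cache for expensive computations.
    Entries expire after ttl_seconds; stale entries are recomputed."""
    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (expires_at, value)

    def get_or_compute(self, key, compute):
        now = time.monotonic()
        hit = self._store.get(key)
        if hit is not None and hit[0] > now:
            return hit[1]  # still fresh: skip the expensive call entirely
        value = compute()
        self._store[key] = (now + self.ttl, value)
        return value

calls = 0
def expensive_report():
    global calls
    calls += 1          # counts how often the "expensive" work actually runs
    return {"total": 42}

cache = TTLCache(ttl_seconds=60)
first = cache.get_or_compute("report", expensive_report)
second = cache.get_or_compute("report", expensive_report)  # served from cache
```

The same shape applies at every layer the article lists: the browser, the CDN, and an API response cache are all "look up by key, recompute only on expiry."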
[link] (https://codevian.com/blog/how-to-reduce-app-and-website-load-time/) [comments] (https://www.reddit.com/r/programming/comments/1pn1apa/reducing_app_website_load_time_by_40_production/)
Understanding mathematics through Lean
https://www.reddit.com/r/programming/comments/1pn1w31/understanding_mathematics_through_lean/
Hi, this is my blog. I hope you like this week's post!
submitted by /u/mapehe808 (https://www.reddit.com/user/mapehe808)
[link] (https://bytesauna.com/post/proofs-as-types) [comments] (https://www.reddit.com/r/programming/comments/1pn1w31/understanding_mathematics_through_lean/)
gRPC in Spring Boot - Piotr's TechBlog
https://www.reddit.com/r/programming/comments/1pn2km9/grpc_in_spring_boot_piotrs_techblog/
submitted by /u/piotr_minkowski (https://www.reddit.com/user/piotr_minkowski)
[link] (https://piotrminkowski.com/2025/12/15/grpc-spring/) [comments] (https://www.reddit.com/r/programming/comments/1pn2km9/grpc_in_spring_boot_piotrs_techblog/)
Rejecting rebase and stacked diffs, my way of doing atomic commits
https://www.reddit.com/r/programming/comments/1pn3xns/rejecting_rebase_and_stacked_diffs_my_way_of/
submitted by /u/that_guy_iain (https://www.reddit.com/user/that_guy_iain)
[link] (https://iain.rocks/blog/2025/12/15/rejecting-rebase-and-stack-diffs-my-way-of-doing-atomic-commits) [comments] (https://www.reddit.com/r/programming/comments/1pn3xns/rejecting_rebase_and_stacked_diffs_my_way_of/)
RAG retrieves facts, not state. Why I’m experimenting with "State Injection" for coding.
https://www.reddit.com/r/programming/comments/1pn5hwt/rag_retrieves_facts_not_state_why_im/
I’ve found that RAG is great for documentation ("What is the syntax for X?"), but it fails hard at decision state ("Did we agree to use Factory or Singleton 3 turns ago?"). Even with 128k+ context windows, we hit the "Lost in the Middle" problem: the model effectively forgets negative constraints (e.g., "Don't use Lodash") established at the start of the session, even when they are technically within the history token limit.

Instead of stuffing the context or using vector search, I tried treating the LLM session like a state machine. I run a small local model (Llama-3-8B) in the background to diff the conversation. It ignores the chit-chat and extracts only decisions and negative constraints. This compressed "State Key" gets injected into the System Prompt of every new request, bypassing the chat history entirely.

System Prompt attention weight > Chat History attention weight. By forcing the "Rules" into the system slot, the instruction drift basically disappears. The trade-off: you're doubling your compute to run the background compression step.

Has anyone else experimented with state-based memory architectures rather than vector-based RAG for code? I’m looking for standards on "Semantic Compression" that are more efficient than just asking an LLM to "summarize the diff."
submitted by /u/Necessary-Ring-6060 (https://www.reddit.com/user/Necessary-Ring-6060)
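The OP links a gist for their actual setup; as a toy illustration of the idea only, with a regex stand-in for the Llama-3-8B extractor and every name invented, the state-key extraction and system-prompt injection might look like:

```python
import re

# Toy stand-in for the small background model: scan the transcript for
# decision sentences and negative-constraint sentences, drop everything else.
DECISION = re.compile(r"\b(we (?:will|agreed to) use [^.]+)", re.IGNORECASE)
CONSTRAINT = re.compile(r"\b(do(?:n'|n’| no)t use [^.]+)", re.IGNORECASE)

def extract_state(transcript):
    """Return the compressed 'State Key': decisions + negative constraints."""
    rules = []
    for line in transcript:
        rules += DECISION.findall(line)
        rules += CONSTRAINT.findall(line)
    return rules

def build_system_prompt(base, transcript):
    """Inject the State Key into the system prompt of every new request,
    bypassing the chat history entirely."""
    rules = extract_state(transcript)
    return base + "\nActive decisions and constraints:\n" + \
        "\n".join(f"- {r}" for r in rules)

chat = [
    "ok so we agreed to use the Factory pattern for exporters.",
    "Also, don't use Lodash in this repo.",
    "haha yeah, lunch first though",
]
prompt = build_system_prompt("You are a coding assistant.", chat)
```

The chit-chat line never reaches the prompt; only the decision and the negative constraint survive compression, which is the whole point of the state-machine framing.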
[link] (https://gist.github.com/justin55afdfdsf5ds45f4ds5f45ds4/da50ed029cffe31f451f84745a9b201c) [comments] (https://www.reddit.com/r/programming/comments/1pn5hwt/rag_retrieves_facts_not_state_why_im/)
How Mindset Shapes Engineering Success at Startups
https://www.reddit.com/r/programming/comments/1pn6bu3/how_mindset_shapes_engineering_success_at_startups/
submitted by /u/c-digs (https://www.reddit.com/user/c-digs)
[link] (https://chrlschn.medium.com/how-mindset-shapes-engineering-success-at-startups-4e231ebfd5db) [comments] (https://www.reddit.com/r/programming/comments/1pn6bu3/how_mindset_shapes_engineering_success_at_startups/)
Hash tables in Go and advantage of self-hosted compilers
https://www.reddit.com/r/programming/comments/1pn6gc6/hash_tables_in_go_and_advantage_of_selfhosted/
submitted by /u/f311a (https://www.reddit.com/user/f311a)
[link] (https://rushter.com/blog/go-and-hashmaps/) [comments] (https://www.reddit.com/r/programming/comments/1pn6gc6/hash_tables_in_go_and_advantage_of_selfhosted/)
Excel: The World’s Most Successful Functional Programming Platform By Houston Haynes
https://www.reddit.com/r/programming/comments/1pn7cea/excel_the_worlds_most_successful_functional/
Houston Haynes delivered one of the most surprising and thought-provoking talks of the year: a reframing of Excel not just as a spreadsheet tool, but as the world’s most widely adopted functional programming platform. The talk combined personal journey, technical insight, business strategy, and even a bit of FP philosophy, challenging the functional programming community to rethink the boundaries of their craft and the audience it serves.
submitted by /u/MagnusSedlacek (https://www.reddit.com/user/MagnusSedlacek)
[link] (https://youtu.be/rpe5vrhFATA) [comments] (https://www.reddit.com/r/programming/comments/1pn7cea/excel_the_worlds_most_successful_functional/)
CI/CD Evolution: From Pipelines to AI-Powered DevOps • Olaf Molenveld & Julian Wood
https://www.reddit.com/r/programming/comments/1pn7xgt/cicd_evolution_from_pipelines_to_aipowered_devops/
submitted by /u/goto-con (https://www.reddit.com/user/goto-con)
[link] (https://youtu.be/5QWQioN-aXc?list=PLEx5khR4g7PJozVmHNpQTVrk1QRC7YaJu) [comments] (https://www.reddit.com/r/programming/comments/1pn7xgt/cicd_evolution_from_pipelines_to_aipowered_devops/)
IPC Mechanisms: Shared Memory vs. Message Queues Performance Benchmarking
https://www.reddit.com/r/programming/comments/1pn84ce/ipc_mechanisms_shared_memory_vs_message_queues/
You’re pushing 500K messages per second between processes, and sys CPU time is through the roof. Your profiler shows mq_send() and mq_receive() dominating the flame graph. Each message is tiny, maybe 64 bytes, but you’re burning 40% CPU just on IPC overhead.

This isn’t hypothetical. LinkedIn’s Kafka producers hit exactly this wall: message queue syscalls were killing throughput. They switched to shared-memory ring buffers and saw context switches drop from 100K/sec to near zero.

The difference? Every message queue operation is a syscall with user→kernel→user memory copies. Shared memory lets you write directly to memory the other process can read: no syscall after setup, no context switch, no copy.

The performance cliff sneaks up on you. At low rates, message queues work fine: the kernel handles synchronization and you get clean blocking semantics. But scale up and suddenly you’re paying 60–100 ns per syscall, plus the cost of copying data twice and context switching when queues block. Shared memory with lock-free algorithms can hit sub-microsecond latencies, but you’re now responsible for synchronization, cache coherency, and cleanup if a process crashes mid-operation.
submitted by /u/Extra_Ear_10 (https://www.reddit.com/user/Extra_Ear_10)
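The "no syscall after setup, no copy" property is easy to see with POSIX shared memory. A minimal Python sketch via `multiprocessing.shared_memory`, with a single process standing in for both producer and consumer (segment size and message contents are illustrative):

```python
from multiprocessing import shared_memory

# One-time setup cost (shm_open + mmap under the hood). After this,
# reads and writes go straight to the mapped pages: no per-message
# kernel crossing or copy, unlike mq_send()/mq_receive().
segment = shared_memory.SharedMemory(create=True, size=64)
try:
    message = b"tick:42"
    segment.buf[: len(message)] = message  # producer writes in place

    # A consumer process would attach to the same segment by name;
    # the API is identical and the data is never copied through the kernel.
    view = shared_memory.SharedMemory(name=segment.name)
    received = bytes(view.buf[: len(message)])
    view.close()
finally:
    segment.close()
    segment.unlink()  # cleanup is on you if a process crashes mid-operation
```

What this sketch deliberately omits is exactly what the post warns about: a real ring buffer needs its own synchronization (head/tail indices, memory ordering) because the kernel is no longer arbitrating access for you.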
[link] (https://howtech.substack.com/p/ipc-mechanisms-shared-memory-vs-message) [comments] (https://www.reddit.com/r/programming/comments/1pn84ce/ipc_mechanisms_shared_memory_vs_message_queues/)
Java 25 virtual threads – what worked and what didn’t for us
https://www.reddit.com/r/programming/comments/1pn9m4u/java_25_virtual_threads_what_worked_and_what/
submitted by /u/SpringJavaLab (https://www.reddit.com/user/SpringJavaLab)
[link] (https://spring-java-lab.blogspot.com/2025/12/java-25-virtual-threads-benchmarks-pitfalls.html) [comments] (https://www.reddit.com/r/programming/comments/1pn9m4u/java_25_virtual_threads_what_worked_and_what/)
Linus Torvalds is 'a huge believer' in using AI to maintain code - just don't call it a revolution
https://www.reddit.com/r/programming/comments/1pnti2v/linus_torvalds_is_a_huge_believer_in_using_ai_to/
submitted by /u/Fcking_Chuck (https://www.reddit.com/user/Fcking_Chuck)
[link] (https://www.zdnet.com/article/linus-torvalds-ai-tool-maintaining-linux-code/) [comments] (https://www.reddit.com/r/programming/comments/1pnti2v/linus_torvalds_is_a_huge_believer_in_using_ai_to/)
Building a Brainfuck DSL in Forth using code generation
https://www.reddit.com/r/programming/comments/1pntwxq/building_a_brainfuck_dsl_in_forth_using_code/
submitted by /u/thunderseethe (https://www.reddit.com/user/thunderseethe)
[link] (https://venko.blog/articles/forth-brainfuck) [comments] (https://www.reddit.com/r/programming/comments/1pntwxq/building_a_brainfuck_dsl_in_forth_using_code/)
Analysis of the Xedni Calculus Attack on Elliptic Curves in Python
https://www.reddit.com/r/programming/comments/1pnvfnh/analysis_of_the_xedni_calculus_attack_on_elliptic/
submitted by /u/DataBaeBee (https://www.reddit.com/user/DataBaeBee)
[link] (https://leetarxiv.substack.com/p/analysis-of-the-xedni-calculus-attack) [comments] (https://www.reddit.com/r/programming/comments/1pnvfnh/analysis_of_the_xedni_calculus_attack_on_elliptic/)
Censorship Explained: Shadowsocks
https://www.reddit.com/r/programming/comments/1pnvv15/censorship_explained_shadowsocks/
submitted by /u/wallpunch_official (https://www.reddit.com/user/wallpunch_official)
[link] (https://wallpunch.net/blog/censorship-explained-shadowsocks/) [comments] (https://www.reddit.com/r/programming/comments/1pnvv15/censorship_explained_shadowsocks/)
How a Kernel Bug Froze My Machine: Debugging an Async-profiler Deadlock
https://www.reddit.com/r/programming/comments/1pnxhkd/how_a_kernel_bug_froze_my_machine_debugging_an/
submitted by /u/_shadowbannedagain (https://www.reddit.com/user/_shadowbannedagain)
[link] (https://questdb.com/blog/async-profiler-kernel-bug/) [comments] (https://www.reddit.com/r/programming/comments/1pnxhkd/how_a_kernel_bug_froze_my_machine_debugging_an/)
JetBrains Fleet dropped for AI products instead
https://www.reddit.com/r/programming/comments/1pnz3n0/jetbrains_fleet_dropped_for_ai_products_instead/
JetBrains Fleet was going to be an alternative to VS Code and seemed quite promising. After more than three years of development since the first public preview release, it has now been dropped to make room for AI (agentic) products:

“Starting December 22, 2025, Fleet will no longer be available for download. We are now building a new product focused on agentic development”

At the very least, they’re considering open-sourcing it, but nothing is definite. A comment from the author of the article (https://blog.jetbrains.com/fleet/2025/12/the-future-of-fleet/#remark42__comment-f3d6d88b-f10d-4f0a-9579-a6b940314b01) regarding open-sourcing Fleet:

“It’s something we’re considering but we don’t have immediate plans for that at the moment.”
submitted by /u/markmanam (https://www.reddit.com/user/markmanam)
[link] (https://blog.jetbrains.com/fleet/2025/12/the-future-of-fleet/) [comments] (https://www.reddit.com/r/programming/comments/1pnz3n0/jetbrains_fleet_dropped_for_ai_products_instead/)
Designing Resilient Event-Driven Systems that Scale
https://www.reddit.com/r/programming/comments/1po1dit/designing_resilient_eventdriven_systems_that_scale/
If you work on highly available & scalable systems, you might find it useful.
submitted by /u/Trust_Me_Bro_4sure (https://www.reddit.com/user/Trust_Me_Bro_4sure)
[link] (https://kapillamba4.medium.com/designing-resilient-event-driven-systems-that-scale-03da6c60b711) [comments] (https://www.reddit.com/r/programming/comments/1po1dit/designing_resilient_eventdriven_systems_that_scale/)
Odin's Most Misunderstood Feature: `context`
https://www.reddit.com/r/programming/comments/1po2i0o/odins_most_misunderstood_feature_context/
submitted by /u/gingerbill (https://www.reddit.com/user/gingerbill)
[link] (https://www.gingerbill.org/article/2025/12/15/odins-most-misunderstood-feature-context/) [comments] (https://www.reddit.com/r/programming/comments/1po2i0o/odins_most_misunderstood_feature_context/)
What can I do with ReScript?
https://www.reddit.com/r/programming/comments/1po3i99/what_can_i_do_with_renoscript/
submitted by /u/BeamMeUpBiscotti (https://www.reddit.com/user/BeamMeUpBiscotti)
[link] (https://renoscript-lang.org/blog/what-can-i-do-with-renoscript/) [comments] (https://www.reddit.com/r/programming/comments/1po3i99/what_can_i_do_with_renoscript/)