Reddit Programming – Telegram
I will send you the newest posts from the subreddit /r/programming
- Measure real users
- Cut JS and media weight
- Add basic caching (a minimal sketch follows the links below)
- Fix obvious backend bottlenecks

Next 90 days:
- Rework rendering strategy
- Optimize APIs and data access
- Introduce edge delivery
- Automate performance checks

This cadence repeatedly delivered ~40% load-time reduction without rewriting entire systems.

Common Mistakes
- Adding tools before removing waste
- Chasing perfect lab scores
- Ignoring mobile users
- Treating performance as a one-time task

Performance decays unless actively defended.

A Note on Our Work
At Codevian Technologies, we apply the same constraints internally: measure real users, remove unnecessary work, and prefer boring, maintainable solutions. Most performance wins still come from deleting code.

Final Thought
Performance is not about being clever. It’s about being disciplined enough to say no to unnecessary work, over and over again. Fast systems are usually simple systems.

submitted by /u/Big-Click2648 (https://www.reddit.com/user/Big-Click2648)
[link] (https://codevian.com/blog/how-to-reduce-app-and-website-load-time/) [comments] (https://www.reddit.com/r/programming/comments/1pn1apa/reducing_app_website_load_time_by_40_production/)
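A minimal sketch of the "Add basic caching" step above, assuming fingerprinted static assets (e.g. app.3f2a1c.js); the stdlib server, paths, and max-age values are illustrative and not from the article:

```python
# Sketch: long-lived immutable caching for fingerprinted assets,
# revalidation for HTML. Paths and values are assumptions.
from http.server import BaseHTTPRequestHandler, HTTPServer

class CachingHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        if self.path.startswith("/static/"):
            # Fingerprinted assets never change in place: cache for a year.
            self.send_header("Cache-Control", "public, max-age=31536000, immutable")
        else:
            # HTML: always revalidate so new deploys show up immediately.
            self.send_header("Cache-Control", "no-cache")
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"ok\n")

if __name__ == "__main__":
    HTTPServer(("", 8000), CachingHandler).serve_forever()
```

The split is the standard one: aggressive caching for assets whose filenames change on every deploy, and revalidation for the HTML that points at them.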
RAG retrieves facts, not state. Why I’m experimenting with "State Injection" for coding.
https://www.reddit.com/r/programming/comments/1pn5hwt/rag_retrieves_facts_not_state_why_im/

I’ve found that RAG is great for documentation ("What is the syntax for X?"), but it fails hard at decision state ("Did we agree to use Factory or Singleton 3 turns ago?"). Even with 128k+ context windows, we hit the "Lost in the Middle" problem: the model effectively forgets negative constraints (e.g., "Don't use Lodash") established at the start of the session, even when they are technically within the history token limit.

Instead of stuffing the context or using vector search, I tried treating the LLM session like a state machine. I run a small local model (Llama-3-8B) in the background to diff the conversation. It ignores the chit-chat and extracts only decisions and negative constraints. This compressed "State Key" gets injected into the System Prompt of every new request, bypassing the chat history entirely (a minimal sketch follows the links below). System Prompt attention weight > Chat History attention weight: by forcing the "Rules" into the system slot, the instruction drift basically disappears. The trade-off is that you are doubling your compute to run the background compression step.

Has anyone else experimented with "state-based" memory architectures rather than vector-based RAG for code? I’m looking for standards for "Semantic Compression" that are more efficient than just asking an LLM to "summarize the diff."

submitted by /u/Necessary-Ring-6060 (https://www.reddit.com/user/Necessary-Ring-6060)
[link] (https://gist.github.com/justin55afdfdsf5ds45f4ds5f45ds4/da50ed029cffe31f451f84745a9b201c) [comments] (https://www.reddit.com/r/programming/comments/1pn5hwt/rag_retrieves_facts_not_state_why_im/)
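A minimal Python sketch of the loop described above: a small local model diffs new turns into a running state key, which then gets injected into the system prompt of every request. The extract_state()/build_system_prompt() helpers, the JSON schema, and the local_llm callable are assumptions for illustration, not the poster's code:

```python
import json

# Prompt for the background model (the post uses Llama-3-8B); the wording
# here is an assumption, not the poster's actual prompt.
EXTRACT_PROMPT = (
    "From the new conversation turns below, list only (a) decisions made and "
    "(b) negative constraints (things we agreed NOT to do), as JSON "
    '{"decisions": [...], "constraints": [...]}. Ignore small talk.\n\n'
)

def extract_state(local_llm, new_turns, state):
    """Diff the new turns into the running state key using the small model."""
    delta = json.loads(local_llm(EXTRACT_PROMPT + "\n".join(new_turns)))
    state["decisions"].extend(delta.get("decisions", []))
    state["constraints"].extend(delta.get("constraints", []))
    return state

def build_system_prompt(base_prompt, state):
    """Inject the compressed state key into the system slot of each request."""
    decisions = "\n".join(f"- {d}" for d in state["decisions"])
    rules = "\n".join(f"- {c}" for c in state["constraints"])
    return (f"{base_prompt}\n\nDECISIONS SO FAR:\n{decisions}"
            f"\n\nHARD CONSTRAINTS (never violate):\n{rules}")

# Usage: after each exchange, run extract_state() in the background, then
# send build_system_prompt(base, state) as the system message of the next
# request instead of replaying the full chat history.
```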
Excel: The World’s Most Successful Functional Programming Platform by Houston Haynes
https://www.reddit.com/r/programming/comments/1pn7cea/excel_the_worlds_most_successful_functional/

Houston Haynes delivered one of the most surprising and thought-provoking talks of the year: a reframing of Excel not just as a spreadsheet tool, but as the world’s most widely adopted functional programming platform. The talk combined personal journey, technical insight, business strategy, and even a bit of FP philosophy — challenging the functional programming community to rethink the boundaries of their craft and the audience it serves.

submitted by /u/MagnusSedlacek (https://www.reddit.com/user/MagnusSedlacek)
[link] (https://youtu.be/rpe5vrhFATA) [comments] (https://www.reddit.com/r/programming/comments/1pn7cea/excel_the_worlds_most_successful_functional/)
IPC Mechanisms: Shared Memory vs. Message Queues Performance Benchmarking
https://www.reddit.com/r/programming/comments/1pn84ce/ipc_mechanisms_shared_memory_vs_message_queues/

You’re pushing 500K messages per second between processes, and sys CPU time is through the roof. Your profiler shows mq_send() and mq_receive() dominating the flame graph. Each message is tiny, maybe 64 bytes, but you’re burning 40% CPU just on IPC overhead.

This isn’t hypothetical. LinkedIn’s Kafka producers hit exactly this wall: message queue syscalls were killing throughput. They switched to shared memory ring buffers and saw context switches drop from 100K/sec to near zero.

The difference? Every message queue operation is a syscall with user→kernel→user memory copies. Shared memory lets you write directly to memory the other process can read: no syscall after setup, no context switch, no copy.

The performance cliff sneaks up on you. At low rates, message queues work fine; the kernel handles synchronization and you get clean blocking semantics. But scale up and suddenly you’re paying 60-100ns per syscall, plus the cost of copying data twice and context switching when queues block. Shared memory with lock-free algorithms can hit sub-microsecond latencies, but you’re now responsible for synchronization, cache coherency, and cleanup if a process crashes mid-operation (a minimal ring-buffer sketch follows the links below).

submitted by /u/Extra_Ear_10 (https://www.reddit.com/user/Extra_Ear_10)
[link] (https://howtech.substack.com/p/ipc-mechanisms-shared-memory-vs-message) [comments] (https://www.reddit.com/r/programming/comments/1pn84ce/ipc_mechanisms_shared_memory_vs_message_queues/)
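To make the shared-memory side concrete, here is a single-producer/single-consumer ring buffer sketched with Python's stdlib multiprocessing.shared_memory. This is a stand-in for the real thing: a production version would be C/C++ with atomic head/tail updates and memory barriers, which this Python sketch does not provide. The segment name and sizes are illustrative:

```python
import struct
from multiprocessing import shared_memory

SLOT_SIZE, NSLOTS, HDR = 64, 1024, 16  # header = head (u64) + tail (u64)

def create_ring(name="demo_ring"):
    """Producer creates the segment; indices start at zero."""
    shm = shared_memory.SharedMemory(name=name, create=True,
                                     size=HDR + SLOT_SIZE * NSLOTS)
    struct.pack_into("QQ", shm.buf, 0, 0, 0)  # head = tail = 0
    return shm

def push(shm, payload: bytes) -> bool:
    """Producer: write one fixed-size message; no syscall per message."""
    assert len(payload) <= SLOT_SIZE
    head, tail = struct.unpack_from("QQ", shm.buf, 0)
    if head - tail == NSLOTS:                 # ring full; caller retries
        return False
    off = HDR + (head % NSLOTS) * SLOT_SIZE
    shm.buf[off:off + len(payload)] = payload
    struct.pack_into("Q", shm.buf, 0, head + 1)   # publish new head
    return True

def pop(shm):
    """Consumer: attach via SharedMemory(name="demo_ring") and poll."""
    head, tail = struct.unpack_from("QQ", shm.buf, 0)
    if head == tail:                          # ring empty
        return None
    off = HDR + (tail % NSLOTS) * SLOT_SIZE
    msg = bytes(shm.buf[off:off + SLOT_SIZE])
    struct.pack_into("Q", shm.buf, 8, tail + 1)   # publish new tail
    return msg
```

After setup, push() and pop() touch only mapped memory, which is exactly why the syscall and copy overhead of mq_send()/mq_receive() disappears; the price is hand-rolled synchronization and cleanup (shm.unlink()) if a process dies mid-operation.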
JetBrains Fleet dropped for AI products instead
https://www.reddit.com/r/programming/comments/1pnz3n0/jetbrains_fleet_dropped_for_ai_products_instead/

JetBrains Fleet was going to be an alternative to VS Code and seemed quite promising. After over 3 years of development since the first public preview release, it has now been dropped to make room for AI (agentic) products. – “Starting December 22, 2025, Fleet will no longer be available for download. We are now building a new product focused on agentic development” At the very least, they’re considering open sourcing it, but it’s not definite. A comment from the author of the article (https://blog.jetbrains.com/fleet/2025/12/the-future-of-fleet/#remark42__comment-f3d6d88b-f10d-4f0a-9579-a6b940314b01) regarding open sourcing Fleet: – “It’s something we’re considering but we don’t have immediate plans for that at the moment.”

submitted by /u/markmanam (https://www.reddit.com/user/markmanam)
[link] (https://blog.jetbrains.com/fleet/2025/12/the-future-of-fleet/) [comments] (https://www.reddit.com/r/programming/comments/1pnz3n0/jetbrains_fleet_dropped_for_ai_products_instead/)