Building a highly-available web service without a database
https://www.reddit.com/r/programming/comments/1ooxrce/building_a_highlyavailable_web_service_without_a/
submitted by /u/self (https://www.reddit.com/user/self)
[link] (https://screenshotbot.io/blog/building-a-highly-available-web-service-without-a-database) [comments] (https://www.reddit.com/r/programming/comments/1ooxrce/building_a_highlyavailable_web_service_without_a/)
Optimizing filtered vector queries from tens of seconds to single-digit milliseconds in PostgreSQL
https://www.reddit.com/r/programming/comments/1ooxsov/optimizing_filtered_vector_queries_from_tens_of/
We actively use pgvector in production to maintain and query the HNSW vector indexes that power our recommendation algorithms. A couple of weeks ago, as we added many more candidates to our database, we noticed query times increasing linearly with the number of profiles. The cause turned out to be incorrectly structured, overly complicated SQL queries: I hadn't fully internalized how filtering vector queries really works. I knew vector indexes were fundamentally different from B-trees, hash maps, GIN indexes, and so on, but I hadn't understood that the way they are typically executed makes them essentially incompatible with more standard filtering approaches. I searched Google to page 10 and beyond with various queries, but struggled to find thorough examples of the issues I was facing in real production scenarios that I could use to ground my expectations and guide my implementation.

So I wrote a blog post about the best practices I learned for filtering vector queries with pgvector and PostgreSQL, based on all the information I could find, thoroughly tried and tested, and currently deployed in production. In it I try to provide:

- Reference points to target when optimizing vector query performance
- Clarity about your options, such as pre-filtering, post-filtering, and integrated filtering with pgvector (a rough sketch of these options follows below)
- Examples of optimized query structures in both Python + SQLAlchemy and raw SQL, as well as approaches to dynamically building more complex queries with SQLAlchemy
- Tips and tricks for constructing and understanding both indexes and queries
- Directions for even further optimization and learning

Hopefully it helps, whether you're building standard RAG systems, fully agentic AI applications, or good old semantic search! https://www.clarvo.ai/blog/optimizing-filtered-vector-queries-from-tens-of-seconds-to-single-digit-milliseconds-in-postgresql Let me know if there is anything I missed or if you have come up with better strategies!

submitted by /u/m1r0k3 (https://www.reddit.com/user/m1r0k3)
[link] (https://www.clarvo.ai/blog/optimizing-filtered-vector-queries-from-tens-of-seconds-to-single-digit-milliseconds-in-postgresql) [comments] (https://www.reddit.com/r/programming/comments/1ooxsov/optimizing_filtered_vector_queries_from_tens_of/)
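As a quick illustration of the filtering options listed above, here is a minimal sketch in Python (psycopg 3) of the integrated-filtering style, where the WHERE clause and the ANN search run in one statement against an HNSW index. The profiles(id, category, embedding) schema, the connection string, and all values are hypothetical; hnsw.ef_search and the <=> cosine-distance operator are standard pgvector features, but none of this is code from the linked post.

```python
# Illustrative sketch only: schema, connection string, and values are made up.
import psycopg  # psycopg 3

query_embedding = [0.1] * 768  # stand-in for a real embedding
vec_literal = "[" + ",".join(str(x) for x in query_embedding) + "]"

with psycopg.connect("dbname=app") as conn, conn.cursor() as cur:
    # Widen the HNSW candidate pool so that rows surviving the WHERE clause
    # can still fill the LIMIT (hnsw.ef_search is a pgvector setting).
    cur.execute("SET hnsw.ef_search = 100")

    # Filter and ANN search in one statement; <=> is pgvector's
    # cosine-distance operator, served by an index built with vector_cosine_ops.
    cur.execute(
        """
        SELECT id, embedding <=> %s::vector AS distance
        FROM profiles
        WHERE category = %s
        ORDER BY embedding <=> %s::vector
        LIMIT 10
        """,
        (vec_literal, "engineering", vec_literal),
    )
    for profile_id, distance in cur.fetchall():
        print(profile_id, round(distance, 4))
```

The underlying trade-off: widening ef_search keeps the index scan returning enough candidates that a selective filter doesn't starve the LIMIT, while a very selective filter may be better served by pre-filtering the candidate set (for example in a CTE) before the distance ordering.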
Autark: Rethinking build systems – Integrate, Don’t Outsource
https://www.reddit.com/r/programming/comments/1op09av/autark_rethinking_build_systems_integrate_dont/
submitted by /u/adamansky (https://www.reddit.com/user/adamansky)
[link] (https://blog.annapurna.cc/posts/autark-intro/) [comments] (https://www.reddit.com/r/programming/comments/1op09av/autark_rethinking_build_systems_integrate_dont/)
SPy: An interpreter and compiler for a fast statically typed variant of Python
https://www.reddit.com/r/programming/comments/1op0fy5/spy_an_interpreter_and_compiler_for_a_fast/
submitted by /u/cheerfulboy (https://www.reddit.com/user/cheerfulboy)
[link] (https://antocuni.eu/2025/10/29/inside-spy-part-1-motivations-and-goals/) [comments] (https://www.reddit.com/r/programming/comments/1op0fy5/spy_an_interpreter_and_compiler_for_a_fast/)
Understanding Spec-Driven-Development: Kiro, spec-kit, and Tessl
https://www.reddit.com/r/programming/comments/1op1ciq/understanding_specdrivendevelopment_kiro_speckit/
submitted by /u/grouvi (https://www.reddit.com/user/grouvi)
[link] (https://martinfowler.com/articles/exploring-gen-ai/sdd-3-tools.html) [comments] (https://www.reddit.com/r/programming/comments/1op1ciq/understanding_specdrivendevelopment_kiro_speckit/)
Many-to-Many Relations with 'through' in Django
https://www.reddit.com/r/programming/comments/1op20yy/manytomany_relations_with_through_in_django/
submitted by /u/Funny-Ad-5060 (https://www.reddit.com/user/Funny-Ad-5060)
[link] (https://pythonjournals.com/many-to-many-relations-with-through-in-django/) [comments] (https://www.reddit.com/r/programming/comments/1op20yy/manytomany_relations_with_through_in_django/)
Disassembling Terabytes of Random Data with Zig and Capstone to Prove a Point
https://www.reddit.com/r/programming/comments/1op56v8/disassembling_terabytes_of_random_data_with_zig/
submitted by /u/js4845 (https://www.reddit.com/user/js4845)
[link] (https://jstrieb.github.io/posts/random-instructions/) [comments] (https://www.reddit.com/r/programming/comments/1op56v8/disassembling_terabytes_of_random_data_with_zig/)
Git History Graph Command
https://www.reddit.com/r/programming/comments/1op8gkx/git_history_graph_command/
A while back a friend gave me a super useful git command for showing git history in the terminal. Here's the command:

git log --graph --decorate --all --pretty=format:'%C(auto)%h%d %C(#888888)(%an; %ar)%Creset %s'

I just made this alias with it:

alias graph="git log --graph --decorate --all --pretty=format:'%C(auto)%h%d %C(#888888)(%an; %ar)%Creset %s'"

I love this command and thought I'd share it. Here's what it looks like: [Screenshot-2025-11-05-at-9-58-20-AM.png](https://postimg.cc/Mv6xDKtq)

submitted by /u/Critical-Volume2360 (https://www.reddit.com/user/Critical-Volume2360)
[link] (https://postimg.cc/Mv6xDKtq) [comments] (https://www.reddit.com/r/programming/comments/1op8gkx/git_history_graph_command/)
I’ve indexed all Strange Loop conference talks so you can use semantic search to find relevant videos
https://www.reddit.com/r/programming/comments/1op95wf/ive_indexed_all_strange_loop_conference_talks_so/
submitted by /u/devblogs-sh (https://www.reddit.com/user/devblogs-sh)
[link] (https://devblogs.sh/library/strangeloop) [comments] (https://www.reddit.com/r/programming/comments/1op95wf/ive_indexed_all_strange_loop_conference_talks_so/)
nyno-lang can mix Python, JavaScript and PHP extensions for high-performing multi-language (AI) workflows - using the best of each language - sharing context via TCP.
https://www.reddit.com/r/programming/comments/1op9aqz/nynolang_can_mix_python_javanoscript_and_php/
submitted by /u/EveYogaTech (https://www.reddit.com/user/EveYogaTech)
[link] (https://github.com/empowerd-cms/nyno-lang) [comments] (https://www.reddit.com/r/programming/comments/1op9aqz/nynolang_can_mix_python_javanoscript_and_php/)
Hacking with AI SASTs: An overview of 'AI Security Engineers' / 'LLM Security Scanners' for Penetration Testers and Security Teams
https://www.reddit.com/r/programming/comments/1opasys/hacking_with_ai_sasts_an_overview_of_ai_security/
submitted by /u/alexeyr (https://www.reddit.com/user/alexeyr)
[link] (https://joshua.hu/llm-engineer-review-sast-security-ai-tools-pentesters) [comments] (https://www.reddit.com/r/programming/comments/1opasys/hacking_with_ai_sasts_an_overview_of_ai_security/)
Ruby And Its Neighbors: Smalltalk
https://www.reddit.com/r/programming/comments/1opayac/ruby_and_its_neighbors_smalltalk/
submitted by /u/BrewedDoritos (https://www.reddit.com/user/BrewedDoritos)
[link] (https://noelrappin.com/blog/2025/11/ruby-and-its-neighbors-smalltalk/) [comments] (https://www.reddit.com/r/programming/comments/1opayac/ruby_and_its_neighbors_smalltalk/)
Cj: a tiny no-deps JIT in C for x86-64 and ARM64
https://www.reddit.com/r/programming/comments/1opbt53/cj_a_tiny_nodeps_jit_in_c_for_x8664_and_arm64/
Hey y’all! About 7 years ago, I had this idea to write a JIT with an autogenerated backend for x86 based on the ISA specs. I sketched something out and then just kinda let it sit. I picked it up again a few weeks ago and made a complete-ish backend for both x86 and ARM64. It has no dependencies, the backends are completely autogenerated (by horrible, horrible JS scripts), and I built a small abstraction layer for things like function prologues etc.

It’s super duper early and will probably break on your machine, but it’s good enough to compile some cool examples (look at the examples directory (https://github.com/hellerve-pl-experiments/cj/tree/master/examples); my personal favorite is the minimal language implementation (https://github.com/hellerve-pl-experiments/cj/blob/master/examples/minilang.c)). It doesn’t have anything except basically a fancy JIT assembler with some helpers as of yet. There’s no register allocator, and a lot of ABI details still have to be figured out manually (though of course feel free to add anything to the abstraction layer that’s generally useful and submit a PR!).

I honestly don’t know where I’m going with this next. I kind of stumbled into the project, and I’m not sure whether I’ll consider it “exercise completed” or pursue it further. Time will tell. Feedback, questions, and bug reports are very welcome, especially on the codegen helpers, additional examples or cool things you come up with, or backend rough edges.

P.S.: I also wrote a small announcement blog post on it (https://blog.veitheller.de/cj:_Making_a_minimal,_complete_JIT.html), but it honestly doesn’t add all that much interesting info that you can’t find in the repo.

submitted by /u/hellerve (https://www.reddit.com/user/hellerve)
[link] (https://github.com/hellerve-pl-experiments/cj) [comments] (https://www.reddit.com/r/programming/comments/1opbt53/cj_a_tiny_nodeps_jit_in_c_for_x8664_and_arm64/)
Please Implement This Simple SLO
https://www.reddit.com/r/programming/comments/1opbziq/please_implement_this_simple_slo/
In all the companies I've worked for, engineers have treated SLOs as a simple and boring task. There are, however, many ways to implement them, and they all have trade-offs.
I wrote this satirical piece to illustrate the underappreciated art of writing good SLOs.

submitted by /u/IEavan (https://www.reddit.com/user/IEavan)
[link] (https://eavan.blog/posts/implement-an-slo.html) [comments] (https://www.reddit.com/r/programming/comments/1opbziq/please_implement_this_simple_slo/)
Predictive Thermal Management On Mobile: 0.27°C Accuracy 30 Seconds in Advance
https://www.reddit.com/r/programming/comments/1oprqnh/predictive_thermal_management_on_mobile_027c/
The hardware properties of modern mobile devices are perfect for modeling with physics. Here is what I have found.

Overall results (2142 predictions over 60 minutes): MAE 1.51°C, RMSE 2.70°C, bias -0.95°C, 58.2% within ±1°C, 75.6% within ±2°C.

Per-zone MAE (357 predictions each):
- BATTERY: 0.27°C
- CHASSIS: 2.92°C
- CPU_BIG: 1.60°C
- CPU_LITTLE: 2.50°C
- GPU: 0.96°C
- MODEM: 0.80°C

That's 0.27°C on the hardware that matters, 30 seconds in advance. On the S25+, throttling decisions are made almost entirely based on battery status. Predictive modeling > reactive throttling. By using Newton's law of cooling in combination with measured estimates based on hardware constraints and adaptive damping for your specific device, you can predict thermal events before they happen: defer inexpensive operations, pause expensive operations, and perform an emergency shutdown of operations in danger territory. This prevents us from ever reaching the 42°C throttle limit, at which Samsung aggressively throttles performance by about 50%; that can cause performance problems, which generate more heat, and the spiral can get out of hand quickly.

Mathematical model. Core equation (Newton's law of cooling):

T(t) = T_amb + (T₀ - T_amb)·exp(-t/τ) + (P·R)·(1 - exp(-t/τ))

Where:
- τ = thermal time constant (zone-specific)
- R = thermal resistance (°C/W)
- P = power dissipation (W)
- T_amb = ambient temperature

Per-zone constants (measured from S25+ hardware):
- Battery: τ=540s, C=45 J/K (massive thermal mass)
- CPU cores: τ=6-9s, C=0.025-0.05 J/K (fast response)
- GPU/Modem: τ=9s, C=0.02-0.035 J/K

Prediction horizon: 30s at 10s sampling intervals.

Adaptive damping: a prediction-error feedback loop, damping = f(bias, confidence, sample_count), with T_predicted_adjusted = T_predicted - damping·ΔT. It maintains a per-zone error history with confidence weighting; damping strength scales inversely with the thermal time constant (the battery gets minimal damping due to high predictability, the CPU gets aggressive damping). Result: 0.27°C MAE on the battery.

My solution is simple: never reach 42°C.

submitted by /u/DaSettingsPNGN (https://www.reddit.com/user/DaSettingsPNGN)
[link] (https://github.com/DaSettingsPNGN/S25_THERMAL-) [comments] (https://www.reddit.com/r/programming/comments/1oprqnh/predictive_thermal_management_on_mobile_027c/)
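For concreteness, here is a small sketch in Python of the core prediction step described above; the function just evaluates T(t) = T_amb + (T₀ - T_amb)·exp(-t/τ) + P·R·(1 - exp(-t/τ)) using the battery time constant quoted in the post (τ = 540 s, 30 s horizon), while the power, resistance, and temperature inputs are placeholder values rather than measurements from the linked project.

```python
import math

def predict_temp(t_now, t_ambient, tau, power, resistance, horizon_s):
    """Newton's-law-of-cooling prediction:
    T(t) = T_amb + (T0 - T_amb)*exp(-t/tau) + P*R*(1 - exp(-t/tau))."""
    decay = math.exp(-horizon_s / tau)
    return t_ambient + (t_now - t_ambient) * decay + power * resistance * (1.0 - decay)

# Battery zone constants from the post: tau = 540 s, 30 s prediction horizon.
# power (W), resistance (degC/W), and temperatures below are illustrative.
t_pred = predict_temp(t_now=38.5, t_ambient=24.0, tau=540.0,
                      power=2.0, resistance=1.5, horizon_s=30.0)

THROTTLE_LIMIT_C = 42.0
MARGIN_C = 1.0  # act before the vendor throttle point, not at it
if t_pred >= THROTTLE_LIMIT_C - MARGIN_C:
    print(f"predicted {t_pred:.2f} C in 30 s -> defer or pause heavy work")
else:
    print(f"predicted {t_pred:.2f} C in 30 s -> proceed")
```

The adaptive-damping step in the post would then subtract damping·ΔT from this raw prediction based on the recent per-zone error history; that feedback loop is omitted here.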
Anyone interested in F1?
https://www.reddit.com/r/programming/comments/1ops8jh/anyone_interested_in_f1/
I've created a fully functional F1 website that scrapes race data in real time and updates it automatically.
If you're interested, take a look at the website I created, and we can discuss it.

submitted by /u/TravelTownEnergy (https://www.reddit.com/user/TravelTownEnergy)
[link] (https://f1-news-site.lumi.ing/) [comments] (https://www.reddit.com/r/programming/comments/1ops8jh/anyone_interested_in_f1/)
The Primeagen was right: Vim motions have made me 10x faster. Here's the data to prove it
https://www.reddit.com/r/programming/comments/1opscy5/the_primeagen_was_right_vim_motions_have_made_me/
After 6 months of forcing myself to use Vim keybindings in VS Code, I tracked my productivity metrics. The results are honestly shocking.

Key findings:
- 43% reduction in time spent navigating files
- 67% fewer mouse movements per hour
- An average of 2.3 minutes saved per coding task

The vim-be-good plugin was a game changer for building muscle memory. Started at 15 WPM with motions, now consistently hitting 85+ WPM. Anyone else have similar experiences? Would love to hear if others have quantified their productivity gains.

submitted by /u/Ares2010- (https://www.reddit.com/user/Ares2010-)
[link] (https://github.com/ThePrimeagen/vim-be-good) [comments] (https://www.reddit.com/r/programming/comments/1opscy5/the_primeagen_was_right_vim_motions_have_made_me/)
PyCon US 2026 website is live & CFP is open
https://www.reddit.com/r/programming/comments/1opslgj/pycon_us_2026_website_is_live_cfp_is_open/
submitted by /u/clairegiordano (https://www.reddit.com/user/clairegiordano)
[link] (https://us.pycon.org/2026/) [comments] (https://www.reddit.com/r/programming/comments/1opslgj/pycon_us_2026_website_is_live_cfp_is_open/)
free, open-source file scanner
https://www.reddit.com/r/programming/comments/1opsy9w/free_opensource_file_scanner/
submitted by /u/JustSouochi (https://www.reddit.com/user/JustSouochi)
[link] (https://github.com/pompelmi/pompelmi/) [comments] (https://www.reddit.com/r/programming/comments/1opsy9w/free_opensource_file_scanner/)
I just launched an open-source tool that lets you build and test MCP servers right in the browser 🚀
https://www.reddit.com/r/programming/comments/1optgli/i_just_launched_an_opensource_tool_that_lets_you/
submitted by /u/ArmyBusiness6047 (https://www.reddit.com/user/ArmyBusiness6047)
[link] (https://www.producthunt.com/products/mcpwhiz?utm_source=linkedin&utm_medium=social) [comments] (https://www.reddit.com/r/programming/comments/1optgli/i_just_launched_an_opensource_tool_that_lets_you/)
Postgres is Enough
https://www.reddit.com/r/programming/comments/1opv75r/postgres_is_enough/
submitted by /u/iamkeyur (https://www.reddit.com/user/iamkeyur)
[link] (https://gist.github.com/cpursley/c8fb81fe8a7e5df038158bdfe0f06dbb) [comments] (https://www.reddit.com/r/programming/comments/1opv75r/postgres_is_enough/)