SymbAnaFis v0.4 - Symbolic differentiation in Rust
Hey r/rust! Back again with another update on SymbAnaFis, my symbolic math library.
Yes, I should be studying for my 5 exams instead of coding during the holidays. No, I will not be taking questions on my life choices. Happy New Year though! 🎄
# What it does
A symbolic math library for **differentiation, simplification, and fast numerical evaluation**:
* Symbolic derivatives with chain rule, product rule, quotient rule
* 50+ built-in functions (trig, hyperbolic, exponential, gamma, bessel, erf, etc.)
* Algebraic simplification (trig identities, power rules, constant folding)
* Compiled bytecode evaluator with SIMD batch evaluation
* Python bindings via PyO3
```rust
use symb_anafis::{diff, symb};

// String API
let result = diff("sin(x)*exp(-x^2)", "x", &[], None)?;
// → "cos(x)*exp(-x^2) - 2*x*sin(x)*exp(-x^2)"

// Type-safe API (Symbol is Copy!)
let x = symb("x");
let expr = x.sin() * (-x.pow(2.0)).exp();
```
# What's new in v0.4.0/v0.4.1
* N-ary AST (Sum/Product instead of binary tree)
* Symbol interning, SIMD batch eval, Rayon parallelism
* log(base, x) for arbitrary-base logarithms
* Dual numbers for automatic differentiation
* Switched to Apache 2.0 license
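Of the new features, dual numbers are the easiest to demo: carry (value, derivative) pairs and apply the chain rule per operation. A minimal sketch of the idea in plain Rust (not SymbAnaFis's actual API):

```rust
#[derive(Clone, Copy, Debug)]
struct Dual {
    val: f64, // f(x)
    der: f64, // f'(x)
}

impl Dual {
    // The seed derivative 1.0 marks x as the variable we differentiate by.
    fn var(x: f64) -> Self { Dual { val: x, der: 1.0 } }
    fn sin(self) -> Self { Dual { val: self.val.sin(), der: self.der * self.val.cos() } }
}

impl std::ops::Mul for Dual {
    type Output = Dual;
    // Product rule: (fg)' = f'g + fg'
    fn mul(self, rhs: Dual) -> Dual {
        Dual { val: self.val * rhs.val, der: self.der * rhs.val + self.val * rhs.der }
    }
}

fn main() {
    // d/dx [x * sin(x)] at x = 2.0 is sin(2) + 2*cos(2)
    let x = Dual::var(2.0);
    let y = x * x.sin();
    println!("value = {}, derivative = {}", y.val, y.der);
}
```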
# Where it's actually good:
* **Parsing**: 1.2-1.7x faster than Symbolica
* **Raw differentiation**: 1.1-2.4x faster
* **Compilation to bytecode**: 3-290x faster
* **Python bindings** that actually work
# Where it's not so good:
* **Evaluation speed**: Symbolica is 1.3-2.6x faster for small expressions
* **Full pipeline**: Symbolica wins by 1.06-3.2x when including simplification
# Trade-offs:
* **Simplification is slower**, but it does deep AST restructuring (trig identities, algebraic normalization, power rules), while Symbolica only does light term collection. Different goals: I prioritize simplified output over speed, which is why `skip_simplification` defaults to false.
* **Evaluator is a stack-based bytecode interpreter**, while Symbolica generates native C++/ASM/SIMD code. Would love help optimizing this if anyone has experience with fast interpreters or JIT compilation.
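For context, the shape of such an interpreter is a flat instruction list plus a value stack. A minimal sketch (illustrative opcodes, not SymbAnaFis's real instruction set), evaluating x*x + 2:

```rust
// Minimal stack-based bytecode evaluator: a flat opcode list and a value stack.
// These opcodes are illustrative, not SymbAnaFis's actual bytecode.
enum Op {
    LoadVar(usize),  // push vars[i]
    LoadConst(f64),  // push a constant
    Add,             // pop b, pop a, push a + b
    Mul,             // pop b, pop a, push a * b
}

fn eval(code: &[Op], vars: &[f64]) -> f64 {
    let mut stack: Vec<f64> = Vec::with_capacity(code.len());
    for op in code {
        match op {
            Op::LoadVar(i) => stack.push(vars[*i]),
            Op::LoadConst(c) => stack.push(*c),
            Op::Add => { let b = stack.pop().unwrap(); let a = stack.pop().unwrap(); stack.push(a + b); }
            Op::Mul => { let b = stack.pop().unwrap(); let a = stack.pop().unwrap(); stack.push(a * b); }
        }
    }
    stack.pop().unwrap()
}

fn main() {
    // x*x + 2 at x = 3.0
    let code = [Op::LoadVar(0), Op::LoadVar(0), Op::Mul, Op::LoadConst(2.0), Op::Add];
    println!("{}", eval(&code, &[3.0])); // 11
}
```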
[Full benchmarks here](https://github.com/CokieMiner/SymbAnaFis/blob/main/benches/BENCHMARK_RESULTS.md)
# Development
I use AI to help with development — I'm a 2nd-year physics student learning Rust, not a 10x dev. Everything goes through clippy and an 837-test suite.
# What I'd love help with:
* **Simplification rules**: I'm not an encyclopedia, so I'm definitely missing identities
* **Evaluator performance**: Help optimizing the bytecode interpreter (or moving to JIT?)
* **Edge cases**: Physics expressions that break things
# Links
[GitHub](https://github.com/CokieMiner/SymbAnaFis) | [crates.io](https://crates.io/crates/symb_anafis) | [PyPI](https://pypi.org/project/symb-anafis/)
Now if you'll excuse me, I have thermodynamics to avoid studying.
https://redd.it/1pyxxby
@r_rust
Published my first Crate: Sql-Docs
I was recently given a reason to finally publish a crate to crates.io, and I'm proud to announce Sql-Docs, a crate for parsing comments preceding SQL statements in `.sql` files. It took a lot of time, mostly spent polishing what I made and going through several revisions to make sure I was meeting the expectations of a published crate.
Happy to share it with the community!
https://redd.it/1pz97fm
@r_rust
I was recently given a reason to finally publish a crate to crates.io and am proud to announce Sql-Docs, a crate for parsing comments preceding SQL statements in `sql` files. This was a lot of time, mostly polishing what I made and several revisions to try and make sure I was following the expectations of a published crate.
Happy to share it with the community!
https://redd.it/1pz97fm
@r_rust
crates.io
crates.io: Rust Package Registry
err_trail: A generic logging interface
err_trail is a generic logging interface over the tracing, log, and defmt crates. It used to be part of the error_set crate as a feature flag, but I recently broke it out into a separate crate. I thought I would post about it here in case anyone else finds it useful.
All methods and macros in the crate work with the generic backends, which makes it perfect for libraries, since the downstream chooses the backend. If no backend is selected, they are compiled away.
### Macros
Familiar error!, warn!, info!, debug!, trace! macros exist to log in a way similar to the built-in Rust format! macro:
```rust
use err_trail::{error, warn, info, debug, trace};

fn main() {
    error!("An error occurred: {}", "disk full");
    warn!("This is a warning: {}", "high memory usage");
    info!("Some info: {}", "service started");
    debug!("Debugging value: {:?}", vec![1, 2, 3]);
    trace!("Trace log: {}", "function entered");
}
```
### New Result and Option methods
New methods are added to the Result and Option types - error, warn, info, debug, trace. These apply logs at various log levels and can be easily chained with other methods:
```rust
use err_trail::ErrContext;

fn main() {
    let value: Result<(), String> = result().error("If `Err`, this message is logged as error");
    let value: Result<(), String> = result().warn("If `Err`, this message is logged as warn");
    // These methods can also accept closures for lazy evaluation
    let value: Result<(), String> = result().error(|err: &String| format!("If `Err`, this message is logged as error: {}", err));
    // If the error type implements `Display`, `()` can be passed to log the error directly if `Err`
    let value: Result<(), String> = result().error(());
}

fn result() -> Result<(), String> { Ok(()) }
```
The same methods exist for Option too.
https://redd.it/1pzaa5l
@r_rust
Dynamic dispatch vs bytes() question
The intent of my question is to improve my ability to use the available Rust documentation to explain the compiler's behavior in the following relatively simple code snippet. As shown, this code fails to compile because "the `bytes` method cannot be invoked on a trait object". However, if I enable the `use std::io::Read` statement, it does compile and run as desired. Why? It appears to me that (the same) bytes is being invoked on a trait object in both cases. How should I have read the documentation to expect the actual resulting behavior?
```rust
use std::fs::File;
use std::io::{self, BufRead, BufReader};
//use std::io::Read;
...
fn dump_bytes(reader: Box<dyn BufRead>, n: u64) -> Result<()> {
    let bytes = reader.bytes().take(n as usize).collect::<Result<Vec<_>, _>>();
    print!("{}", String::from_utf8_lossy(&bytes?));
    Ok(())
}
```
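A likely-relevant detail, hedged: `Read::bytes` has a `where Self: Sized` bound, so it can be called on the `Box` itself (which is `Sized` and implements `Read`) but not on the unsized `dyn BufRead`, and Rust only considers trait methods whose trait is in scope at the call site. The in-scope rule can be reproduced minimally with a made-up trait:

```rust
// Minimal reproduction of the trait-in-scope rule: `greet` is a trait method,
// so calling it requires the trait to be imported at the call site.
mod api {
    pub trait Greet {
        fn greet(&self) -> String {
            "hello".to_string()
        }
    }
    pub struct Thing;
    impl Greet for Thing {}
}

// Commenting out this import produces: "no method named `greet` found ...
// the following trait is implemented but not in scope"
use api::Greet;

fn main() {
    let t = api::Thing;
    println!("{}", t.greet()); // compiles only because `Greet` is in scope
}
```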
As an aside, this puzzle resulted from my efforts at Rust self-education using "Command-Line Rust" and its headr exercise.
https://redd.it/1pz5tur
@r_rust
I finally stopped messaging myself to move text between devices
You know that thing where you find a link on your phone and need it on your laptop, so you... message it to yourself? Email it? I had a "Notes to self" Telegram chat dedicated to this for years. It got weird when I started replying to myself.
Apple has Universal Clipboard. Samsung has their thing. But I use a Mac at work, Linux, Windows (until Valve takes the Gaming) at home, and an Android phone - none of that helps.
So I built DecentPaste. Clipboard sync that works across Windows, Mac, Linux, and Android. Written in Rust. No cloud, no account, stays on your local WiFi.
Also doubles as a clipboard history manager - which honestly I use just as much as the sync itself. Copy something, get distracted, come back 10 minutes later and it's still there.
Alpha - I use it daily but there are rough edges.
## How it works
1. Install on your devices
2. Pair with a 4-6-digit PIN
3. Copy on one → paste on all
That's basically it.
## What it does
- P2P only - your clipboard never leaves your network. No server to breach because there's no server.
- E2E encrypted - X25519 key exchange, AES-256-GCM. Keys derived locally via ECDH, never transmitted.
- Share sheet support - share from any app to DecentPaste, or use "Share Now" in the app.
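The key property behind that E2E bullet (a shared secret derived locally, never sent) can be illustrated with a toy Diffie-Hellman over integers - emphatically not the real X25519 math, just the shape of the exchange:

```rust
// Toy Diffie-Hellman over a small prime field. Illustrates the ECDH *shape*
// (each side combines its own secret with the peer's public value and both
// arrive at the same shared key) - NOT real cryptography, NOT X25519.
const P: u64 = 0xFFFF_FFFB; // a toy-sized prime modulus
const G: u64 = 5;           // generator

fn mod_pow(base: u64, mut exp: u64, modulus: u64) -> u64 {
    let m = modulus as u128;
    let mut acc: u128 = 1;
    let mut b: u128 = (base % modulus) as u128;
    while exp > 0 {
        if exp & 1 == 1 {
            acc = acc * b % m;
        }
        b = b * b % m;
        exp >>= 1;
    }
    acc as u64
}

fn main() {
    let (a_secret, b_secret) = (123_456_789u64, 987_654_321u64); // never transmitted
    let a_public = mod_pow(G, a_secret, P); // sent over the network
    let b_public = mod_pow(G, b_secret, P); // sent over the network
    // Each side derives the key locally from the peer's public value:
    let a_shared = mod_pow(b_public, a_secret, P);
    let b_shared = mod_pow(a_public, b_secret, P);
    assert_eq!(a_shared, b_shared);
    println!("shared key material: {a_shared}");
}
```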
Oh and it's ~15MB, not 200MB. Tauri, not Electron. Your RAM will thank you.
## Tech
Rust backend:
- Tauri v2 for the app
- libp2p - mDNS for discovery, gossipsub for broadcast, request-response for pairing
- IOTA Stronghold - encrypted storage with Argon2id key derivation
First time using libp2p. Learned a lot about gossipsub mesh formation the hard way.
## What doesn't work yet
- Text only for now
- Devices need same WiFi
- No iOS yet (app store testflight only)
- No background sync on mobile - saves battery, keeps it simple
## Links
GitHub: https://github.com/decentpaste/decentpaste
Website: https://decentpaste.com
Play Store / App Store not publicly listed yet. You can grab the Android APK directly from the website, or DM me your email to join the closed testing - helps me get listed faster.
Apache 2.0. If something breaks, open an issue.
https://redd.it/1pzcqts
@r_rust
lazy-image: A drop-in replacement for Sharp using Rust, NAPI-RS, and Copy-on-Write architecture
Hi everyone,
I released **lazy-image**, a next-gen image processing library for Node.js.
While `sharp` (libvips) is fast, I wanted to leverage Rust's memory safety and create a zero-copy architecture for format conversions.
**Tech Stack:**
* **Framework**: `napi-rs`
* **Encoders**: `mozjpeg` (via crate), `ravif`, `libwebp-sys`
* **Resizing**: `fast_image_resize` (SIMD accelerated)
**Results:**
* **Faster**: Outperforms sharp in complex pipelines (resize + rotate + filter) by ~1.4x.
* **Smaller files**: mozjpeg integration yields ~10% smaller files by default.
* **Safer**: no segfaults processing user uploads.
It's currently used in my production SaaS. Check it out!
**GitHub**: https://github.com/albert-einshutoin/lazy-image
**npm**: https://www.npmjs.com/package/@alberteinshutoin/lazy-image
https://redd.it/1pzdw9l
@r_rust
A machine learning library from scratch in Rust (no torch, no candle, no ndarray) - Iron Learn
This is exactly what my machine can do now. Image Courtesy: Google Gemini
I just finished working on my machine learning library in Rust, and using it, my machine could "draw" the image fed to it.
To understand how Transformers actually work, I ditched all the libraries. I was curious to see how mere math could talk to me.
Here are a few current highlights of the library:
1. 2D Tensor Support with Parallel CPU Execution
2. Optional NVIDIA acceleration support with GPU Memory Pool
3. Linear Regression
4. Logistic Regression
5. Gradient Descent
6. Neural Net
7. Activation Functions
8. Loss Functions
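As a flavor of what items 3 and 5 involve (a generic sketch, not Iron Learn's actual API), fitting y = w*x + b by gradient descent on mean squared error:

```rust
// Generic 1-D linear regression by gradient descent (illustrative only,
// not Iron Learn's API).
fn fit(xs: &[f64], ys: &[f64], lr: f64, epochs: usize) -> (f64, f64) {
    let n = xs.len() as f64;
    let (mut w, mut b) = (0.0, 0.0);
    for _ in 0..epochs {
        // Gradients of MSE = (1/n) * sum((w*x + b - y)^2)
        let (mut dw, mut db) = (0.0, 0.0);
        for (&x, &y) in xs.iter().zip(ys) {
            let err = w * x + b - y;
            dw += 2.0 * err * x / n;
            db += 2.0 * err / n;
        }
        w -= lr * dw;
        b -= lr * db;
    }
    (w, b)
}

fn main() {
    // Data generated from y = 3x + 1; training should recover w ≈ 3, b ≈ 1.
    let xs = [0.0, 1.0, 2.0, 3.0, 4.0];
    let ys: Vec<f64> = xs.iter().map(|x| 3.0 * x + 1.0).collect();
    let (w, b) = fit(&xs, &ys, 0.05, 5000);
    println!("w = {w:.3}, b = {b:.3}");
}
```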
I have tried to provide as much documentation as possible for all the components.
Here is the repo: Palash90/iron_learn
Please share your thoughts. :)
I am open to PRs if anyone wants to join me.
https://redd.it/1pzemgm
@r_rust
No cars were harmed for this talk: Automotive Rust, amateur style - Frank Lyaruu | EuroRust 2025
https://youtu.be/4sYzxv6YijI
https://redd.it/1pzgeu9
@r_rust
[Media] Nexus: Terminal HTTP client with gRPC support and Postman imports!
https://redd.it/1pzizr0
@r_rust
ts-bridge – Rust tsserver shim for Neovim
Hey folks, I’ve been working on ts-bridge, a Rust-native shim that sits between Neovim’s LSP client and Microsoft’s tsserver. Neovim already works with typescript-language-server, but that project lives entirely in Node/TypeScript, so every buffer sync gets funneled through a JS runtime that pre-spawns tsserver and marshals JSON before the real compiler sees it.
On large TS workspaces that extra layer becomes sluggish: completions lag, diagnostics stutter, and memory usage climbs just to keep the glue alive. ts-bridge replaces that stack with a single Rust binary; the process lazily launches the tsserver you already have while streaming LSP features without the Lua/Node overhead.
Written 100% in Rust and if you’re a Neovim user, give it a shot.
Repo: https://github.com/chojs23/ts-bridge
https://redd.it/1pzik32
@r_rust
that microsoft rust rewrite post got me thinking about my own c to rust attempt
saw that microsoft post about rewriting c/c++ to rust with ai. reminded me i tried this last year
had a personal c project, around 12k lines. packet analyzer i wrote years ago. wanted to learn rust so figured id port it
tried using ai tools to speed it up. normally use verdent cause i can switch between claude and gpt for different tasks, used claude for the tricky ownership stuff and gpt for basic conversions
basic syntax stuff worked fine. loops and match expressions converted ok
pointers were a disaster tho. ai kept suggesting clone() everywhere or just slapping references on things. had to rethink the whole ownership model
i had this memory pool pattern in c that worked great. ai tried converting it literally. complete nonsense in rust. ended up just using vec and letting rust handle it
took way longer than expected. got maybe half done before i gave up and started over with a cleaner design
the "it compiles" thing bit me hard. borrow checker was happy but runtime behavior was wrong. spent days debugging that
microsofts 1 million lines per month claim seems crazy. maybe for trivial code but real systems have so much implicit knowledge baked in
ai is useful for boilerplate but the hard parts you gotta understand yourself
https://redd.it/1pzmsc9
@r_rust
Investigating and fixing a nasty clone bug
https://kobzol.github.io/rust/2025/12/30/investigating-and-fixing-a-nasty-clone-bug.html
https://redd.it/1pzjj94
@r_rust
I ported my first open-source project from Swift to Rust and reduced the ram usage by ~80%
Hi all!
I have been working on a project designed to replicate the feeling of using a trackball on macOS - specifically the "momentum pointer", a feature present on iPadOS and some Synaptics touchpads. As far as I know, my project is the only known implementation of this on macOS.
I decided to build the first version of this project in Swift. I chose Swift originally because it appeared very syntactically similar to JavaScript (which is what I use primarily in my day job) and seemed to be the easiest to integrate with Apple frameworks.
Eventually I finished it and was satisfied with the result, but felt that there was a lot of bloat (partially due to some LLM use) and that the application was not efficient enough. It hovered around 30MB of ram constantly and would absolutely destroy my macbook's battery life.
I had been interested in learning Rust for a while and after a quick skim of this great article, I began porting the entire codebase. It was a great learning opportunity for me and I was even able to reduce a lot of the complexity that my LLM use in the Swift version caused. I was pleasantly surprised that I was able to find crates for all of the Apple frameworks I needed to make it work, and even bindings for necessary private functions.
The experiment has been a great success so far, as the application now hovers around 6MB of ram (1/5 that of the Swift version) and has a minimal impact to battery life. I only needed an LLM for one part of the codebase, being the trackpad integration since I didn't have a great understanding of some of the deeper concepts necessary for it. I do plan on going back to it though after I've done a little more learning.
Take a look at the project here: https://github.com/margooey/lapsus_rust/
Article mentioned: https://rust-unofficial.github.io/too-many-lists/
https://redd.it/1pznbfy
@r_rust
Optimizing RAM usage of Rust Analyzer
Do you guys have any tips for optimizing RAM usage? In some of my projects, RAM usage can reach 6 GB. What configurations do you use in your IDEs? I'm using Zed Editor at the moment.
https://redd.it/1pzmhxh
@r_rust
Are we official gRPC yet?
At the gRPC Conf in September, there was a presentation on the official support for gRPC in Rust. During the presentation, some milestones were shared which included a beta release in late 2025. Has anyone seen a status update on this or know where this announcement would be communicated?
https://redd.it/1pzkd63
@r_rust
Rust project ideas that stress ownership & lifetimes (beginner-friendly)
I’ve been practicing Rust on Codewars and I’m getting more comfortable with ownership and lifetimes — but I want to apply them in real projects.
I have ~10 hours/week and I’m looking for beginner-friendly projects that naturally force you to think about borrowing, references, and structuring data safely (not just another CRUD app).
So far I’ve done small CLIs and websites, but nothing bigger.
What projects helped you really understand the borrow checker — and why?
https://redd.it/1pzuy2q
@r_rust
Blowing Up Voxel Asteroids in Rust: SVOs, Physics, and Why Explosions Are Harder Than They Look
I'm working on a voxel space mining game in Rust (wgpu + hecs) and recently finished the explosive system. Thought I'd share how it works since voxel destruction with proper physics is one of those things that sounds simple until you actually try to build it.
[GIF](https://imgur.com/a/h5Gl5So)
The game has asteroids made of voxels that you can mine or blow apart. When an explosive goes off, it needs to:
1. Carve a spherical hole in the voxel data
2. Spawn debris chunks flying outward
3. Detect if the asteroid split into disconnected pieces
4. Update center of mass and physics for everything
5. Regenerate meshes without hitching
# The Voxel Structure: Sparse Voxel Octree
Asteroids use an SVO instead of a flat 3D array. A 64³ asteroid would need 262k entries in an array, but most of that is empty space. The SVO only stores what's actually there:
pub enum SvoNode {
    Leaf(VoxelMaterial),
    Branch(Box<[Option<SvoNode>; 8]>),
}

pub struct Svo {
    pub root: Option<SvoNode>,
    pub size: u32, // Must be power of 2
    pub depth: u32,
}
Each branch divides space into 8 octants. To find which child a coordinate belongs to, you check the relevant bit at each level:
fn child_index(x: u32, y: u32, z: u32, level: u32) -> usize {
    let bit = 1 << level;
    let ix = ((x & bit) != 0) as usize;
    let iy = ((y & bit) != 0) as usize;
    let iz = ((z & bit) != 0) as usize;
    ix | (iy << 1) | (iz << 2)
}
This gives you O(log n) lookups and inserts, and empty regions don't cost memory.
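As a quick sanity check of the bit trick, here's a self-contained walk of the octant path for one coordinate in a 64³ (depth-6) tree. The coordinate values are arbitrary, picked for the demo; `child_index` is pasted again so the snippet compiles on its own:

```rust
// Same child_index as above, repeated so the demo is self-contained.
fn child_index(x: u32, y: u32, z: u32, level: u32) -> usize {
    let bit = 1 << level;
    (((x & bit) != 0) as usize)
        | ((((y & bit) != 0) as usize) << 1)
        | ((((z & bit) != 0) as usize) << 2)
}

fn main() {
    // Arbitrary voxel in a 64³ asteroid, chosen for the demo.
    let (x, y, z) = (37, 5, 60);
    // Descend from the top level (bit 5) down to bit 0; each entry is
    // 0..=7, i.e. which of the 8 children to follow at that level.
    let path: Vec<usize> = (0..6).rev().map(|lvl| child_index(x, y, z, lvl)).collect();
    println!("{:?}", path); // [5, 4, 4, 7, 0, 3]
}
```

Six steps, one per tree level, which is the O(log n) lookup in action.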
# Spherical Blast Damage
When a bomb goes off, we need to remove all voxels within the blast radius. The naive approach iterates the bounding box and checks distance:
pub fn apply_blast_damage(svo: &mut Svo, center: Vec3, radius: f32) -> u32 {
    let mut removed = 0;
    let size = svo.size as f32;
    let min_x = (center.x - radius).max(0.0) as u32;
    let max_x = (center.x + radius).min(size - 1.0) as u32;
    // ... same for y, z
    for x in min_x..=max_x {
        for y in min_y..=max_y {
            for z in min_z..=max_z {
                let voxel_pos = Vec3::new(x as f32 + 0.5, y as f32 + 0.5, z as f32 + 0.5);
                if (voxel_pos - center).length() <= radius {
                    if svo.get(x, y, z) != VoxelMaterial::Empty {
                        svo.set(x, y, z, VoxelMaterial::Empty);
                        removed += 1;
                    }
                }
            }
        }
    }
    removed
}
With a blast radius of 8 voxels, you're checking at most 16³ = 4096 positions. Not elegant but it runs in microseconds.
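To make the loop concrete, here's a toy version that swaps the SVO for a plain `HashSet` of filled coordinates. The set-based storage and the 8³ grid are inventions for the demo; only the bounding-box-then-distance-check shape mirrors the real function:

```rust
use std::collections::HashSet;

// Toy stand-in for the SVO: a HashSet of filled voxel coordinates.
fn apply_blast_damage(
    voxels: &mut HashSet<(u32, u32, u32)>,
    center: (f32, f32, f32),
    radius: f32,
    size: u32,
) -> u32 {
    let mut removed = 0;
    let lo = |c: f32| (c - radius).max(0.0) as u32;
    let hi = |c: f32| (c + radius).min(size as f32 - 1.0) as u32;
    for x in lo(center.0)..=hi(center.0) {
        for y in lo(center.1)..=hi(center.1) {
            for z in lo(center.2)..=hi(center.2) {
                // Measure from the voxel's center, as in the real code.
                let d = (
                    x as f32 + 0.5 - center.0,
                    y as f32 + 0.5 - center.1,
                    z as f32 + 0.5 - center.2,
                );
                if (d.0 * d.0 + d.1 * d.1 + d.2 * d.2).sqrt() <= radius
                    && voxels.remove(&(x, y, z))
                {
                    removed += 1;
                }
            }
        }
    }
    removed
}

fn main() {
    // Fully solid 8³ block, bomb at its center with radius 2.
    let mut voxels: HashSet<_> = (0..8u32)
        .flat_map(|x| (0..8u32).flat_map(move |y| (0..8u32).map(move |z| (x, y, z))))
        .collect();
    let removed = apply_blast_damage(&mut voxels, (4.0, 4.0, 4.0), 2.0, 8);
    println!("removed {removed} voxels"); // carves a small sphere of 32 voxels
    assert_eq!(removed as usize + voxels.len(), 8 * 8 * 8);
}
```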
# Debris Chunking by Octant
Here's where it gets interesting. The voxels we removed should fly outward as debris. But spawning hundreds of individual voxels would be a mess. Instead, I group them by which octant they're in relative to the blast center:
// Group voxels into chunks based on their octant relative to blast center
let mut chunks: [Vec<(u32, u32, u32, VoxelMaterial)>; 8] = Default::default();
for x in min_x..=max_x {
    for y in min_y..=max_y {
        for z in min_z..=max_z {
            let voxel_pos = Vec3::new(x as f32 + 0.5, y as f32 + 0.5, z as f32 + 0.5);
            if (voxel_pos - blast_center).length() <= radius {
                let material = svo.get(x, y, z);
                if material != VoxelMaterial::Empty {
                    // Determine octant (0-7) based on position relative to blast center
                    let octant = ((if voxel_pos.x > blast_center.x { 1 } else { 0 })
                        | (if voxel_pos.y > blast_center.y { 2 } else { 0 })
                        | (if voxel_pos.z > blast_center.z { 4 } else { 0 }))
                        as usize;
                    chunks[octant].push((x, y, z, material));
                }
            }
        }
    }
}
Each octant chunk becomes its own mini-asteroid with its own SVO. This gives you up to 8 debris pieces flying in roughly sensible directions without any fancy clustering algorithm.
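The octant classification on its own is just three sign tests packed into three bits. A tiny standalone demo (coordinates made up for illustration):

```rust
// Classify a voxel center into one of 8 octants around the blast center:
// bit 0 = +x side, bit 1 = +y side, bit 2 = +z side.
fn octant(pos: [f32; 3], center: [f32; 3]) -> usize {
    ((pos[0] > center[0]) as usize)
        | (((pos[1] > center[1]) as usize) << 1)
        | (((pos[2] > center[2]) as usize) << 2)
}

fn main() {
    let blast = [4.0, 4.0, 4.0];
    // A voxel up-and-right of the blast vs. one below-left of it:
    println!("{}", octant([5.5, 5.5, 4.5], blast)); // 7 (+x, +y, +z)
    println!("{}", octant([2.5, 3.5, 1.5], blast)); // 0 (-x, -y, -z)
}
```

Note it reuses the same bit layout as `child_index`, which keeps the two indexing schemes consistent.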
# Debris Physics: Inheriting Momentum
The debris velocity calculation is my favorite part. Each chunk needs to inherit the parent asteroid's linear velocity, PLUS the tangential velocity from the asteroid's spin at that point, PLUS an outward explosion impulse:
// Direction: outward from asteroid center
let outward_local = chunk_local.normalize_or_zero();
let outward_world = asteroid_rotation * outward_local;

// World-space offset for tangential velocity calculation
let world_offset = asteroid_rotation * chunk_local;
let tangential_velocity = asteroid_angular_velocity.cross(world_offset);

// Final velocity: parent + spin contribution + explosion
let explosion_speed = DEBRIS_SPEED * (0.8 + rng.f32() * 0.4);
let velocity = asteroid_velocity + tangential_velocity + outward_world * explosion_speed;

// Random tumble for visual variety
let angular_velocity = Vec3::new(
    rng.f32() * 4.0 - 2.0,
    rng.f32() * 4.0 - 2.0,
    rng.f32() * 4.0 - 2.0,
);
If the asteroid was spinning when you blew it up, the debris on the leading edge flies faster than the trailing edge. It looks really satisfying when chunks spiral outward.
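Here's the velocity composition worked through with concrete numbers, using a hand-rolled cross product instead of a math crate. Rotation is left out (identity) to keep it small, and all the values are illustrative:

```rust
// Cross product: the spin contribution is v = omega × r.
fn cross(a: [f32; 3], b: [f32; 3]) -> [f32; 3] {
    [
        a[1] * b[2] - a[2] * b[1],
        a[2] * b[0] - a[0] * b[2],
        a[0] * b[1] - a[1] * b[0],
    ]
}

fn main() {
    let asteroid_velocity = [1.0, 0.0, 0.0]; // parent drifting along +x
    let angular_velocity = [0.0, 0.0, 2.0]; // spinning about +z
    let chunk_offset = [3.0, 0.0, 0.0]; // chunk 3 units out on +x
    let explosion_speed = 5.0;

    // A chunk on the +x side of a +z spin gets thrown in +y: this is the
    // "leading edge flies faster" effect from the post.
    let tangential = cross(angular_velocity, chunk_offset);

    // Outward direction is just the normalized offset here.
    let len = chunk_offset.iter().map(|c| c * c).sum::<f32>().sqrt();
    let outward = [chunk_offset[0] / len, chunk_offset[1] / len, chunk_offset[2] / len];

    let velocity = [
        asteroid_velocity[0] + tangential[0] + outward[0] * explosion_speed,
        asteroid_velocity[1] + tangential[1] + outward[1] * explosion_speed,
        asteroid_velocity[2] + tangential[2] + outward[2] * explosion_speed,
    ];
    println!("{velocity:?}"); // [6.0, 6.0, 0.0]
}
```

The +y component comes entirely from the spin term, which is why debris from a rotating asteroid spirals instead of flying straight out.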
# Connected Components: Did We Split It?
After the explosion, the parent asteroid might be split into disconnected chunks. We detect this with a basic BFS flood fill:
pub fn find_connected_components(svo: &Svo) -> Vec<HashSet<(u32, u32, u32)>> {
    let mut visited = HashSet::new();
    let mut components = Vec::new();
    for (x, y, z, material) in svo.iter_voxels() {
        if material == VoxelMaterial::Empty || visited.contains(&(x, y, z)) {
            continue;
        }
        // BFS flood fill from this voxel
        let mut component = HashSet::new();
        let mut queue = VecDeque::new();
        queue.push_back((x, y, z));
        while let Some((cx, cy, cz)) = queue.pop_front() {
            if visited.contains(&(cx, cy, cz)) {
                continue;
            }
            if svo.get(cx, cy, cz) == VoxelMaterial::Empty {
                continue;
            }
            visited.insert((cx, cy, cz));
            component.insert((cx, cy, cz));
            // Check 6-connected neighbors (face-adjacent only)
            let neighbors: [(i32, i32, i32); 6] = [
                (1, 0, 0), (-1, 0, 0),
                (0, 1, 0), (0, -1, 0),
                (0, 0, 1), (0, 0, -1),
            ];
            for (dx, dy, dz) in neighbors {
                let nx = cx as i32 + dx;
                let ny = cy as i32 + dy;
                let nz = cz as i32 + dz;
                if nx >= 0 && ny >= 0 && nz >= 0
                    && (nx as u32) < svo.size
                    && (ny as u32) < svo.size
                    && (nz as u32) < svo.size
                {
                    let pos = (nx as u32, ny as u32, nz as u32);
                    if !visited.contains(&pos) {
                        queue.push_back(pos);
                    }
                }
            }
        }
        if !component.is_empty() {
            components.push(component);
        }
    }
    components
}
If we get more than one component, we spawn each as a separate asteroid. Small fragments (< 50 voxels) just get destroyed since they're not worth tracking.
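The split-detection idea fits in a self-contained snippet: the same 6-connected BFS, but over a plain `HashSet` of filled coordinates (the set storage and test shape are mine, not the game's code):

```rust
use std::collections::{HashSet, VecDeque};

// 6-connected BFS flood fill over a set of filled voxel coordinates.
fn connected_components(voxels: &HashSet<(i32, i32, i32)>) -> Vec<HashSet<(i32, i32, i32)>> {
    let mut visited = HashSet::new();
    let mut components = Vec::new();
    for &start in voxels {
        if visited.contains(&start) {
            continue;
        }
        let mut component = HashSet::new();
        let mut queue = VecDeque::from([start]);
        while let Some(p) = queue.pop_front() {
            // Skip empty cells and already-visited cells.
            if !voxels.contains(&p) || !visited.insert(p) {
                continue;
            }
            component.insert(p);
            for (dx, dy, dz) in [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)] {
                queue.push_back((p.0 + dx, p.1 + dy, p.2 + dz));
            }
        }
        components.push(component);
    }
    components
}

fn main() {
    // A bar of voxels with one voxel blasted out of the middle:
    let voxels: HashSet<_> = [(0, 0, 0), (1, 0, 0), /* (2,0,0) removed */ (3, 0, 0), (4, 0, 0)]
        .into_iter()
        .collect();
    let parts = connected_components(&voxels);
    println!("{} pieces", parts.len()); // 2 pieces -> spawn two asteroids
}
```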
# Center of Mass Tracking
For physics to feel right, rotation needs to happen around the actual center of mass, not the geometric center. When you mine one voxel at a time, you can update incrementally:
pub fn mine_voxel(&mut self, x: u32, y: u32, z: u32) -> VoxelMaterial {
    let material = self.svo.remove(x, y, z);
    material
}
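The incremental update itself isn't shown in full above; here's a hedged sketch of what it could look like. The struct and field names (`Body`, `center_of_mass`, `voxel_count`) are assumptions, but the math is the standard closed-form mean update, so removing one uniform-mass voxel needs no rescan:

```rust
// Hedged sketch, not the game's actual code.
struct Body {
    center_of_mass: [f32; 3],
    voxel_count: u32,
}

impl Body {
    fn remove_voxel_from_com(&mut self, x: u32, y: u32, z: u32) {
        // Voxel mass sits at the cell center.
        let p = [x as f32 + 0.5, y as f32 + 0.5, z as f32 + 0.5];
        let n = self.voxel_count as f32;
        if self.voxel_count > 1 {
            // new_com = (old_com * n - p) / (n - 1)
            for i in 0..3 {
                self.center_of_mass[i] = (self.center_of_mass[i] * n - p[i]) / (n - 1.0);
            }
        }
        self.voxel_count -= 1;
    }
}

fn main() {
    // Two voxels at (0,0,0) and (2,0,0): centers at x = 0.5 and 2.5,
    // so the COM starts at x = 1.5. Removing the second leaves x = 0.5.
    let mut body = Body { center_of_mass: [1.5, 0.5, 0.5], voxel_count: 2 };
    body.remove_voxel_from_com(2, 0, 0);
    println!("{:?}", body.center_of_mass); // [0.5, 0.5, 0.5]
}
```

The same formula run in reverse (add instead of subtract) handles placing a voxel, which is presumably how mining stays O(1) per voxel.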