Try-rs v0.1.23 released! A fast Rust CLI for temporary workspaces (now with Windows support)
Hello r/rust! 👋
I’d like to share a series of releases for try-rs, a Rust-based command-line tool designed to manage temporary workspaces and experiments in an organized and predictable way.
Instead of scattering random folders across your filesystem, try-rs automatically organizes them and provides an object-oriented user interface to create, navigate, search, and safely delete them.
🔗 GitHub: https://github.com/tassiovirginio/try-rs
🔗 Crates.io: https://crates.io/crates/try-rs
🔗 AUR: https://aur.archlinux.org/packages/try-rs-bin
# 🆕 What’s new (v0.1.18 → v0.1.23)
- First native Windows build (`.exe`) published: try-rs is now truly cross-platform
- GitHub Actions workflow added to build Windows artifacts
- New `is_mise` and `is_cargo` fields in `TryEntry`
- Improved entry loading and parsing logic
- README updated to clarify directory naming behavior
- Creation timestamp added to entries
- Directory naming logic refactored
- Date prefixes removed from generated/cloned folders
- Clearer documentation and updated GIFs
- `TRY_PATH` and `TRY_CONFIG` environment variables can now override default values
- First external contribution 🎉
https://redd.it/1pxszm9
@r_rust
Why don't futures implement the typestate pattern?
Calling the `poll` method on a `Future` that is already in the `Poll::Ready` state is UB. Why doesn't the std library use the typestate pattern to avoid this issue? In theory it should not make a big performance difference: correctness is validated by the compiler, the memory footprint should be the same, and the runtime check should be very cheap thanks to the branch predictor.

I'm implementing a socket library, and I'm tempted to use the typestate pattern (`Socket<State>` with possible states being `Uninitialized`, `Open`, `Connecting`, `Receiving` and `Closed`), but the fact that the std library doesn't use this pattern makes me dubious.

So why is it so? Are there any unintended side effects with the typestate pattern? Should I use it anyway in my libraries?
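For readers unfamiliar with the pattern, here is a minimal sketch of what the question describes. The state names come from the post; the `fd` field and the transitions are purely illustrative:

```rust
use std::marker::PhantomData;

// Zero-sized state types; invalid transitions simply don't compile.
struct Uninitialized;
struct Open;
struct Connecting;

struct Socket<State> {
    fd: i32, // placeholder for a real OS handle
    _state: PhantomData<State>,
}

impl Socket<Uninitialized> {
    fn new() -> Self {
        Socket { fd: -1, _state: PhantomData }
    }
    // Consumes self, so the Uninitialized socket can't be reused.
    fn open(self) -> Socket<Open> {
        Socket { fd: 3, _state: PhantomData } // pretend we got an fd
    }
}

impl Socket<Open> {
    fn connect(self) -> Socket<Connecting> {
        Socket { fd: self.fd, _state: PhantomData }
    }
}

fn main() {
    let sock = Socket::new().open().connect();
    println!("fd = {}", sock.fd);
    // Socket::new().connect(); // would not compile: no `connect` on Uninitialized
}
```

The analogous `Future` design would make `poll` consume the future and return it back only in the pending case, which is exactly the ergonomic cost the answers usually point at.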
https://redd.it/1pxv07q
@r_rust
World's fastest git status on Windows - part 2 - ntdll, more speed, and reinventing the wheel
Hey everyone!
Last week I posted about writing the [fastest git status in the world](https://www.reddit.com/r/rust/comments/1prkzqg/writing_the_fastest_implementation_of_git_status/) (on Windows!). There, we statused the **Linux repo in 137ms.** However, the code wasn't what I wanted it to be, and I had additional questions, so here I am sharing that adventure, and picking up some nice additional speedups along the way.
Follow me traveller, into a world of windows apis, cache abuse, and accidentally reinventing the wheel.
Repo with source is at the bottom!
# Funny things with windows libraries
So previously, I used the Windows API for file system stuff: `FindFirstFileExW` and `FindNextFileW`. To recap: if you iterate per directory on Windows, you go 20-50 times faster than if you iterate per file, like you do on Linux.
However, we have multiple Windows APIs! Let's try some out!
I had the impression that I could better nail down the cost of the actual directory walking, so I wrote some functions which do that and nothing else, to get a better base time. I should have sat down and done this earlier perhaps, but it was exciting to go into unknown holes of performance on windows, so the first git status I made was wild and unprincipled. This is still wild and unprincipled, but hopefully a little less so.
Let's see what we have:
|Library / Method|Time (ms)|Notes|
|:-|:-|:-|
|Git2 Status|555.70|Cold start; ~300ms if you run again|
|Win32 - directories|25.39|Same API as last time|
|ntdll - directories|20.59|Same as above but using NT|
Previously I got **46ms** for the walk, so this is definitely an improvement!
I don't really like using the NTDLL, and here's why. You may wonder: how do I use this super fast API? It's easy! Just call this:
```rust
let status = unsafe {
    NtQueryDirectoryFile(
        self.0,                             // FileHandle
        ptr::null_mut(),                    // Event
        None,                               // ApcRoutine
        ptr::null_mut(),                    // ApcContext
        &mut io_status,                     // IoStatusBlock
        buffer.as_mut_ptr() as *mut c_void, // FileInformation
        (buffer.len() * 8) as u32,          // Length
        FILE_DIRECTORY_INFORMATION,         // FileInformationClass
        0,                                  // ReturnSingleEntry
        ptr::null_mut(),                    // FileName
        if restart_scan { 1 } else { 0 },   // RestartScan
    )
};
```
See what I mean?
One thing I did is run the benchmarks in a different order. In my experience, there is a bunch of caching going on behind the scenes, and so you want to see how that behaves.
`[dirwalk_win32] walk=77.7819ms | threads=24 | dirs=5994 | files=90332`
Look at that, we're suddenly up to almost 80 milliseconds! What is happening? We know it's some kind of caching, but why is it like this? Running it multiple times in a row is also strange:
```
[dirwalk_win32] walk=77.8767ms | threads=24 | dirs=5994 | files=90332
[dirwalk_win32] walk=62.6251ms | threads=24 | dirs=5994 | files=90332
[dirwalk_win32] walk=41.3444ms | threads=24 | dirs=5994 | files=90332
[dirwalk_win32] walk=30.9595ms | threads=24 | dirs=5994 | files=90332
```
But there is more! It looks like the high-level API actually has a name cache, which is the cause of the slowness, but it's also per-process. Which means that every time you run this in a new process, you get hit with the cache misses again.
How did I get the 25ms time with it? Well it turns out that I was running it after the git2 status check, and, somehow, that maxes out its speed. Even though repeated calls to itself don't.
I'm not sure why it behaves like this, but regardless we see that it's really inconsistent.
By comparison, the NTDLL version is remarkably consistent. It takes a hit the first time, when the OS hasn't yet loaded the files from disk, but every subsequent run is 20-25ms and persists between process restarts. Nice!
This is because it doesn't have a name cache, and since we are being fast we don't need one. I claim.
Also a side bonus: NT has a feature to open relative paths. This helps by avoiding calculation of the whole path, which makes it faster:
```
[dirwalk_ntdll] walk=19.4752ms
[dirwalk_ntdll_relative] walk=23.5649ms
```
Never mind, it's slower for some reason! Oh well, the absolute API makes more sense anyway, so it's nice that we aren't leaving anything on the floor by using it.
At this point it was clear I needed to use this, since I wanted at least reasonable worst-case performance. So I wrote a wrapper module to hide all the constants and ten-argument function calls, implemented Drop for the file handle (which made all this a little, though not much, safer), and called it a day.
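The Drop-on-the-file-handle idea looks roughly like this. Everything here is a stand-in: `close_handle` plays the role of the real `NtClose`/`CloseHandle` call, and a counter replaces the actual syscall so the sketch runs anywhere:

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

static CLOSED: AtomicUsize = AtomicUsize::new(0);

// Stand-in for the raw OS handle type.
type RawHandle = isize;

// Stub for the real close call (NtClose / CloseHandle on Windows);
// here it just counts closes so we can observe Drop firing.
fn close_handle(_h: RawHandle) {
    CLOSED.fetch_add(1, Ordering::SeqCst);
}

struct DirHandle(RawHandle);

impl Drop for DirHandle {
    fn drop(&mut self) {
        // Runs on scope exit, early return, or panic, so handles can't leak.
        close_handle(self.0);
    }
}

fn main() {
    {
        let _dir = DirHandle(7);
        // ... the NtQueryDirectoryFile call would use _dir.0 here ...
    } // _dir goes out of scope: handle closed exactly once
    assert_eq!(CLOSED.load(Ordering::SeqCst), 1);
    println!("ok");
}
```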
I also looked into asynchronous stuff. But the base calls on the NTDLL are synchronous and so you are kind of stuck. There were more clever things I could have done since I noticed that the threads weren't fully loaded as they were mostly waiting on syscalls but I would have needed (even) more shenanigans to make use of that, so I didn't.
I also looked into IoRing on Windows, which is a new API. But that was added in Windows 11, and I wanted more compatibility than just that. Also I wasn't sure that it would be that much faster, so I decided to skip that one too.
# Towering structures of data
The strongest code starts from the strongest data structures, I claim. Not that my code is anything near strong, but the point stands!
Before we begin, the problem we are trying to solve is something like this:
1. We need to diff the tree to the workspace
2. But that is too slow since you would need to hash everything to see if it matched
3. So everyone uses the index as a cache - that contains file metadata as last seen
4. This way we can skip hashing in (almost!) all cases, and use metadata instead
5. But the index doesn't have to match the tree! It could contain anything! It's more like a cache of the tree you're **about** to commit, which is why staging is this whole thing
6. So everyone diffs the tree with the index, then the index against the workspace to get the status
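The metadata short-circuit in steps 3-4 can be sketched like this (the field names and the `is_modified` helper are illustrative, not the post's actual code):

```rust
#[derive(Clone, Copy, PartialEq)]
struct Meta {
    mtime_secs: u32,
    mtime_nsecs: u32,
    size: u64,
}

// What the index caches per path: the blob id plus the metadata
// observed the last time the file was known clean.
struct IndexEntry {
    meta: Meta,
    oid: [u8; 20],
}

fn is_modified(on_disk: Meta, cached: &IndexEntry, hash_file: impl Fn() -> [u8; 20]) -> bool {
    // Fast path: identical metadata means we assume the file is unchanged.
    if on_disk == cached.meta {
        return false;
    }
    // Slow path: metadata differs, so fall back to hashing the content.
    hash_file() != cached.oid
}

fn main() {
    let meta = Meta { mtime_secs: 100, mtime_nsecs: 0, size: 42 };
    let entry = IndexEntry { meta, oid: [0u8; 20] };
    // Same metadata: no hashing needed (the closure is never called).
    assert!(!is_modified(meta, &entry, || panic!("should not hash")));
    // Different mtime but same content hash: still not modified.
    let touched = Meta { mtime_secs: 101, ..meta };
    assert!(!is_modified(touched, &entry, || [0u8; 20]));
    println!("ok");
}
```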
But I wasn't too happy with that and thought: well, what if we had a proper cache for the tree? Then we wouldn't need to diff with the tree! We can have that if we build a more CPU-friendly data structure that contains paths matched one-to-one with the HEAD tree, plus the metadata needed to quickly tell whether a file changed without hashing it.
I started with what I know from games: ***Stick it into an array of structs, and if that's not good enough make that into a struct of arrays.***
That worked! I had a bunch of fields, then each one went into an array, except the strings. Those were slapped into a large vector of u8s where you had a length and an offset into it to get the full path. Everything else is fixed size.
I also had a lookup: an array of path hashes that I essentially turned into a manual hashmap. You hash a path you see during the walk, then deterministically work out which index of the array it should go into. That tells you where the entry is! The array is presized to be larger than you'll need, to avoid clustering, and it works quite well.
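That manual hashmap is open addressing with linear probing. A toy version, with FNV-1a standing in for whatever hash the real code uses:

```rust
// Toy open-addressed table: hash a path, derive a slot, probe linearly.
struct PathTable {
    slots: Vec<Option<(u64, usize)>>, // (path_hash, entry_index)
    mask: usize,                      // capacity - 1; capacity is a power of two
}

fn hash_path(path: &str) -> u64 {
    // FNV-1a, just to keep the sketch dependency-free.
    let mut h: u64 = 0xcbf29ce484222325;
    for b in path.bytes() {
        h = (h ^ b as u64).wrapping_mul(0x100000001b3);
    }
    h
}

impl PathTable {
    fn new(capacity_pow2: usize) -> Self {
        PathTable { slots: vec![None; capacity_pow2], mask: capacity_pow2 - 1 }
    }
    fn insert(&mut self, path: &str, entry_index: usize) {
        let h = hash_path(path);
        let mut i = (h as usize) & self.mask;
        // Oversizing the table keeps these probe chains short.
        while self.slots[i].is_some() {
            i = (i + 1) & self.mask;
        }
        self.slots[i] = Some((h, entry_index));
    }
    fn get(&self, path: &str) -> Option<usize> {
        let h = hash_path(path);
        let mut i = (h as usize) & self.mask;
        while let Some((slot_hash, idx)) = self.slots[i] {
            if slot_hash == h {
                return Some(idx); // real code would also compare the path bytes
            }
            i = (i + 1) & self.mask;
        }
        None
    }
}

fn main() {
    let mut t = PathTable::new(16);
    t.insert("src/main.rs", 0);
    t.insert("Cargo.toml", 1);
    assert_eq!(t.get("Cargo.toml"), Some(1));
    assert_eq!(t.get("missing"), None);
    println!("ok");
}
```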
However, one thing I realized is that I'm accessing it randomly, while a struct of arrays is about iterating linearly. So I was taking a bunch of cache misses loading from all these arrays in random order.
Well: what if I packed that into a struct!
```rust
#[repr(C, align(64))]
pub struct CacheEntry {
    pub oid: [u8; 20],
    pub path_offset: u32,
    pub clean_disk_size: AtomicU64,
    pub worktree_size: AtomicU64,
    pub mtime_secs: AtomicU32,
    pub mtime_nsecs: AtomicU32,
    pub last_mtime_secs: AtomicU32,
    pub last_mtime_nsecs: AtomicU32,
    pub path_len: u16,
} // Total: 64 bytes with padding
```
This is aligned to 64 bytes, which lines up with a cache line. That means you can get everything in one scoop: you do just one lookup in the hashmap's array, which is sparse, then grab this entry, and that's all you need. From it you take the path offset and length to get the file location.
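One nice property of this layout is that the one-cache-line claim can be checked at compile time. Repeating the struct's fields with a const assertion (assuming the usual 8-byte-aligned atomics):

```rust
use std::sync::atomic::{AtomicU32, AtomicU64};

#[repr(C, align(64))]
pub struct CacheEntry {
    pub oid: [u8; 20],
    pub path_offset: u32,
    pub clean_disk_size: AtomicU64,
    pub worktree_size: AtomicU64,
    pub mtime_secs: AtomicU32,
    pub mtime_nsecs: AtomicU32,
    pub last_mtime_secs: AtomicU32,
    pub last_mtime_nsecs: AtomicU32,
    pub path_len: u16,
}

// Compile-time checks: one entry is exactly one cache line.
const _: () = assert!(std::mem::size_of::<CacheEntry>() == 64);
const _: () = assert!(std::mem::align_of::<CacheEntry>() == 64);

fn main() {
    // 20 + 4 + 8 + 8 + 4*4 + 2 = 58 bytes of fields, padded up to 64.
    println!("size = {}", std::mem::size_of::<CacheEntry>());
}
```

If a future field pushes the size past 64, the build fails instead of silently straddling two cache lines.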
This worked too and was even a little faster! But there was one problem: The data structure was great but building it wasn't. I was spending more time building the cache than walking the directories!
There are some things to help here:
1. We can do it in two phases: walk the tree in some threads, and load the index in another. Then iterate through the cache and fix up the mtimes from what you found in the index. This means you can do most of the work in parallel.
2. Walking the tree in parallel helps. But then you need to glue the results together into one cache. You can spend time on that in one thread, which meant many allocations!
3. Or you can try to do it with multithreaded structures. Neither way was ideal, and I found it was slightly faster to build at the end than to suffer mutex contention.
I tried lots of things! I made a fixup phase where you go through and fix up the offsets; I slapped a bump allocator behind a mutex! It was all somewhat OK, but none of it was what I needed.
Eventually the dumbest idea just worked.
Forget it, let's just allocate a ginormous arena. And then instead of having the strings in a separate place, we have the lookup table separate and the lookup table points to a place in the arena. The struct is placed there. It's always 64 bytes. Then with a 64 byte offset from the struct, we just stick in the string. Then we stick some padding in. And then the next struct goes.
And then you just do that multi-threaded! Because all you're contending for is a lock on the offset, we essentially made a home-made arena but each thread can easily stick stuff in. And since they're all writing to the same block in memory there's no gluing needed at all! **Max speed!**
Layout in arena:
```
[Entry0 (64B)][Path0 (N bytes)][Padding to 64B alignment]
[Entry1 (64B)][Path1 (M bytes)][Padding to 64B alignment]
```
```rust
pub struct TreeCache {
    /// The Tree OID this cache was built from
    pub tree_id: ObjectId,
    arena: Box<[AlignedBlock]>,
    arena_bump: AtomicUsize,
    // Hash table: path_hash -> arena offset + 1 (0 = empty slot)
    pub lookup: Box<[AtomicU64]>,
    pub mask: usize,          // Mask for hash table
    entry_count: AtomicUsize, // Number of entries in the cache
}
```
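The `arena_bump` idea, a single atomic offset that every thread contends on, can be sketched in miniature. This toy arena only reserves disjoint regions rather than writing entries into them, but the coordination is the same:

```rust
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::Arc;
use std::thread;

struct Arena {
    buf: Box<[u8]>,    // one big preallocated block
    bump: AtomicUsize, // next free offset; the only shared state
}

impl Arena {
    fn new(capacity: usize) -> Self {
        Arena { buf: vec![0u8; capacity].into_boxed_slice(), bump: AtomicUsize::new(0) }
    }
    // Reserve `len` bytes; each caller gets a disjoint region, so no gluing is needed.
    fn alloc(&self, len: usize) -> usize {
        let offset = self.bump.fetch_add(len, Ordering::Relaxed);
        assert!(offset + len <= self.buf.len(), "arena overflow");
        offset
    }
}

fn main() {
    let arena = Arc::new(Arena::new(1024));
    let mut handles = vec![];
    for _ in 0..4 {
        let a = Arc::clone(&arena);
        handles.push(thread::spawn(move || {
            // Each thread reserves four 64-byte, entry-sized regions.
            (0..4).map(|_| a.alloc(64)).collect::<Vec<_>>()
        }));
    }
    let mut offsets: Vec<usize> =
        handles.into_iter().flat_map(|h| h.join().unwrap()).collect();
    offsets.sort();
    // 16 allocations of 64 bytes: offsets are exactly 0, 64, ..., 960.
    assert_eq!(offsets, (0..16usize).map(|i| i * 64).collect::<Vec<_>>());
    println!("ok");
}
```

Writing the actual entry and path bytes into the reserved region would need an unsafe cast (or `UnsafeCell`), which is presumably where the "home-made arena" earns its keep.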
So the format I ended up with was a long array of bytes, with a fixed-length header for each file's metadata followed by a long string. Each record is variable-length due to the string, and there is some header info for the cache overall.
But hang on that seems familiar. It sounds just like ... **the index!**
Indeed, after all this wanting to get a better index, I ended up with an index at home! But somehow this works! I'm beating the other git status implementations several times over in speed! How is this possible?
It has a few big changes:
1. It isn't sorted, and is append only. So we can build it easily in parallel.
2. It has a lookup table!
3. The header is smaller and fits into one cache line - though this isn't critical, since you can always put data you need less often into a second cache line
4. **The most important difference:** by construction, it matches the tree it's caching 1:1, and its structure is immutable once built.
Walking in circles somehow works for me: I was cleaning up my code, extracted a function, extracted it some more, then inlined it again, and somehow ended up with half of the code.
These things are a mystery.
# What next?
Who knows!
I do wonder if I can just take an index, fix it up from the tree, and then use that, but I haven't looked into that yet. I also feel like there is yet more juice to be squeezed here.
People did ask:
**"Will this make it into gitoxide?"**
I say: Maybe! I think that there is still some distance between this and library worthy code, but I do see a path to get there.
People did demand: **"Source?"**
I say: Right here! Help yourself! I spent some time cleaning it up and adding comments! [https://github.com/special-bread/tests-git-status](https://github.com/special-bread/tests-git-status)
People did say: **"Why do this? Again?"**
I say: I am working on a git client I want to be a joy to use. This means making it fast, smooth, strong. And good status checks are a big part of responsiveness. Check it out here: [https://gitcherrytree.com/](https://gitcherrytree.com/)
And so: I left last week wanting to get some confidence in the robustness and approach to my solution, and I wandered, mostly in circles, ending up just where I started.
But my time was not wasted, for somehow we got here twice as fast:
```
cargo run --release -- C:\projects\linux
    Finished `release` profile [optimized] target(s) in 0.16s
     Running `target\release\gix-test.exe C:\projects\linux`

========== STATUS PERFORMANCE BENCHMARKS ==========
[git2_status]     total=430.3057ms | status=430.3027ms | add=0 mod=487 del=0
[status_gitoxide] total=1.2739047s | add=0 mod=487 del=0
[status_by_bread] total=70.9949ms | add=0 mod=487 del=0
========== END BENCHMARKS ==========
```
https://redd.it/1pxwpf7
@r_rust
I built a custom Programming Language & IDE from scratch using Rust and WebGPU (it runs UI and tensors directly on the GPU!)
https://github.com/qweewq1888pop-a11y/Turkce-KOD-rust-ile-yap-ld-.git
https://preview.redd.it/be6wyvwfgz9g1.png?width=1454&format=png&auto=webp&s=1cf0e1386ea61faaafbb56c0f89ee4b23808e89f
Maybe in v1.0 I can implement a more natural syntax engine!
https://redd.it/1pxxfg4
@r_rust
How would a Souls-like have looked on PS1? I'm creating a browser-based engine in Rust to find out
https://ebonura.github.io/bonnie-engine/
https://redd.it/1py1t0i
@r_rust
Rust & Ratatui - Guide from 0 to hero
I'm a hobbyist, and after seeing many beautiful projects (such as impala, wiremix, blutui) I decided to make my own tool to manage themes on my Arch Linux.
I generated the cargo templates... and, OH GOD! `fn main` runs other functions but not the render function, and the code structure is mystifying (I just don't understand anything). I tried building on the examples, but fusing several of them into one usually doesn't work, nor do other approaches with a single parameter or more than one.
Does somebody know a basic tutorial for Rust/Ratatui? I'll be grateful for every answer; it doesn't matter whether it's a site, a YouTube video, or anything else...
https://redd.it/1py5bac
@r_rust
How do you use libc in your projects?
I need to use libc in my library, and I see there are several options:
- Raw bindings by the Rust lang team (the libc crate)
- The rustix crate, safe bindings for libc written in Rust by the Bytecode Alliance
- The nix crate
- Or writing custom bindings
Which option do you prefer, and why? Using raw bindings to allow maximum flexibility and performance? Safe bindings like rustix? Or something else?
https://redd.it/1pxv5jk
@r_rust
Can miri or another interpreter be used as a profiler?
Let's suppose I have a program and want to see which parts are slowest, not by running the code a zillion times, but by interpreting it (once!) and counting how many MIR instructions get executed.
I know that miri in particular is intended for identifying (in)correctness, but it happens to be able to simulate other platforms with, e.g., different integer sizes, different endianness, etc., which could make it a multi-platform profiler, too.
https://redd.it/1pyccl6
@r_rust
Using Rust to provide C-compatible dependencies
Disclaimer: I am **not** a Rust developer (or a software developer for that matter), but a sysadmin. I've had these questions mulling around in my mind for a while.
To you all with actual Rust experience, how much of an extra burden is it to write and maintain a library that exposes its public API not just via Rust but also unmangled C?
I'm thinking about this from the perspectives of simpler multi-language binding interop and enabling existing non-Rust projects to acquire more safety with drop-in replacements. Putting aside the human/community complexities for a moment, would it make any sense to effectively create/port C libraries in Rust? Does this actually provide increased safety? Is it generally a reasonable strategy?
For example, providing OpenSSL API/ABI-compatible versions of libssl and libcrypto. Or GNOME/Freedesktop libraries, [insert favorite C library here], etc. Entirely new, greenfield projects, too!
The main idea is to provide underlying components in memory-safe languages without needing to upend entire platforms or existing ecosystems.
---
For fun I posed the question to Claude (Opus 4.5 w/reasoning) and was given a response that I was expecting but can't fully vet on my own. Do you agree or disagree with its analysis?
https://gist.github.com/omenos/cadd1ccdf2de300b7b3cf23c7fa4440e
https://redd.it/1pyenu5
@r_rust
Architecture Dilemma: Tauri Mobile vs. React Native for a companion app for a Rust-heavy Local-First App
Hi everyone,
I’m currently building a privacy-focused, local-first Personal Finance Management application. I am hitting a fork in the road regarding strategy for the mobile version and would love feedback.
The Current Stack (Desktop):
Framework: Tauri v2 ( so rust backend)
Database: SQLite (local) + JSON cache for precomputed results
Frontend: React
The Rust backend is heavy. It handles complex database rollups for analytics, database migrations, and multi-currency conversions.
Since this is a personal-finance application, users will want a mobile version to log data on the fly.
I am torn between two architectural approaches.
Option A: Use Tauri for Mobile also
I port my existing Tauri app to Android/iOS.
Architecture: The exact same Rust binary runs on the phone. It manages its own local SQLite DB and runs the full analytics engine.
Sync: some mechanism to merge the two states (still figuring this out).
The Issue: I keep reading that Tauri Mobile (even v2) still fights the OS on things like build chains, permissions, UI jank, and instability.
Option B: React Native
I build a React Native with Expo app.
Here I also considered two approaches:
1. Create a dumb mobile app that just acts as a logging platform and shows end results. The mobile app has zero business logic. It has a simple "Pending Transactions" queue and pushes pending items to the desktop. The desktop acts as the server, processes the logic/rollups, and returns a computed JSON snapshot for the phone to display. The phone is "read-only" for analytics and needs a sync to show updated stats, but the UI is guaranteed to be stable and smooth.
2. Create a full replica in React Native. But then there could be performance issues as the database grows, and I would have to maintain two separate codebases with the same logic.
My questions
1. Is Tauri Mobile stable enough in late 2025 for a production app?
2. Are the "build chain nightmares" and Android permission issues (specifically for local networking) still a major blocker in late 2025?
3. Should I just use React Native for mobile for the best user experience?
4. For data sync, which is better: Iroh, IPFS, libp2p, or just a standard desktop server with on-demand sync? Has anyone used these in a React Native context? Do they play nicely with Expo, or will I be fighting the JSI/native-module bridge?
5. Has anyone successfully implemented this desktop-as-a-server pattern with Tauri? Are there hidden pitfalls with local network discovery?
Any insights are appreciated!
https://redd.it/1pyexyr
@r_rust
Hey Rustaceans! Got a question? Ask here (53/2025)!
Mystified about strings? Borrow checker has you in a headlock? Seek help here! There are no stupid questions, only docs that haven't been written yet. Please note that if you include code examples to e.g. show a compiler error or surprising result, linking a playground with the code will improve your chances of getting help quickly.
If you have a StackOverflow account, consider asking it there instead! StackOverflow shows up much higher in search results, so having your question there also helps future Rust users (be sure to give it the "Rust" tag for maximum visibility). Note that this site is very interested in question quality. I've been asked to read an RFC I authored once. If you want your code reviewed or want to review others' code, there's a codereview stackexchange, too. If you need to test your code, maybe the Rust playground is for you.
Here are some other venues where help may be found:
/r/learnrust is a subreddit to share your questions and epiphanies learning Rust programming.
The official Rust user forums: https://users.rust-lang.org/.
The official Rust Programming Language Discord: https://discord.gg/rust-lang
The unofficial Rust community Discord: https://bit.ly/rust-community
Also check out last week's thread with many good questions and answers. And if you believe your question to be either very complex or worthy of larger dissemination, feel free to create a text post.
Also if you want to be mentored by experienced Rustaceans, tell us the area of expertise that you seek. Finally, if you are looking for Rust jobs, the most recent thread is here.
https://redd.it/1pygtfj
@r_rust
What's everyone working on this week (53/2025)?
New week, new Rust! What are you folks up to? Answer here or over at rust-users!
https://redd.it/1pygulr
@r_rust
Simple Housekeeping (Bash) Script for Rust Projects
I have a tendency to create a dedicated directory for each topic I'm currently learning in Rust. After several years, the number of directories has grown to hundreds, and with that came a storage-space crunch. Taking advantage of the year-end holiday, I developed a simple bash script to handle the housekeeping better. I wrote about the experience on my blog: Simple Housekeeping Script for Rust Projects.
I hope it could be useful for others. Please have a look and I'm happy to hear your feedback!
https://redd.it/1pyh2uq
@r_rust
Hauchiwa 0.9: A type-safe, graph-based static site generator library
Hauchiwa is a flexible static site generator library for those who want to build their own custom SSG pipeline in Rust. Version 0.9 is a major evolution: I have completely rewritten the core architecture around a graph of build tasks.
What's new in 0.9?
* Hauchiwa now builds a dependency graph (DAG) of your build pipeline. This allows for precise parallel execution and smart incremental rebuilds.
* When a task produces data (like parsed Markdown or compiled CSS), it returns a Handle<T>. You pass this handle to dependent tasks. If the types don't match, your build code won't compile.
* A new macro makes wiring up these dependencies ergonomic and clean.
* Support for `importmap`, and improved support for Svelte SSR via Deno & Esbuild.
Why use Hauchiwa?
If you are tired of rigid frameworks (Hugo/Jekyll/Zola) or fighting with complex config files, Hauchiwa lets you define your build logic in Rust. You define the inputs (loaders), transformations (tasks), and dependencies (handles). Hauchiwa handles the caching, parallelism, and watching.
A taste:
    let mut config = Blueprint::<()>::new();

    // 1. Load Markdown
    let posts = config.load_documents::<Post>("content/**/*.md")?;

    // 2. Render pages (depends on `posts`)
    hauchiwa::task!(config, |ctx, posts| {
        let mut pages = Vec::new();
        for post in posts.values() {
            pages.push(Output::html(&post.path, render(&post)));
        }
        Ok(pages)
    });

    config.finish().build(())?;
I'd love to hear your feedback!
* [https://crates.io/crates/hauchiwa](https://crates.io/crates/hauchiwa)
* [https://docs.rs/hauchiwa/latest/hauchiwa/](https://docs.rs/hauchiwa/latest/hauchiwa/)
https://redd.it/1pyjg24
@r_rust
Looking for good quic libraries with http3 support in Rust.
I'm building an experimental HTTP/3 framework using Rust's h3 crate together with Quinn, aimed especially at JS runtimes, since they don't have HTTP/3 support yet. There are few well-established articles or documents about QUIC, UDP servers, and the HTTP/3 protocol. As an engineer I have seen a lot of corporate networks block UDP ports simply because they cannot do MITM inspection of QUIC handshakes: every packet is encrypted, leaving no traces, and even the headers are encrypted. This might sound fantastic, as if QUIC servers were safer than TCP, but it comes at a real cost: running rustls crypto on both sides makes each request more CPU-intensive, so you get the best end-to-end protection at the price of higher peak CPU usage. I have been using oha to test my HTTP/3 server, and it gave very low RPS compared to TCP stacks like axum and rocket that have benefited from years of optimization.
Coming back to my project: it's already in alpha. I moved the Rust native addons to Node.js, and it worked really well on Deno and Bun too. I implemented stream support by pulling in h3-Quinn internals, with Express-like ergonomics such as res.stream(chunks) for streaming large files seamlessly. My framework actually performed well compared to Go's h3 library, but I haven't open-sourced it yet because people barely show interest; they just switch to TCP stacks without knowing the underlying vulnerabilities.
Most corporates hide their QUIC-related projects by claiming UDP ports are unsafe and vulnerable to attacks. I would like help with benchmarking HTTP/3 Rust libraries; please share any Rust h3/QUIC libraries you know.
https://redd.it/1pymhdv
@r_rust