toml-spanner: Fully compliant, 10x faster TOML parsing with 1/2 the build time
toml-spanner is a fork of toml-span that adds full TOML v1.1.0 compliance (including date-time support), cuts build time in half, and significantly improves parsing performance.
See Benchmarks
#### What changed
- Parse directly from bytes into the final value tree, with no lexer or intermediate trees.
- Tables are order-preserving flat arrays with a shared key index for larger tables, replacing toml-span's per-table BTreeMap.
- Compact Value and Span: items (Span + Value) are now 24 bytes, half of the original's 48 bytes (on 64-bit platforms).
- The value tree is arena-allocated.
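The compactness claim above mostly comes down to narrower span offsets and niche packing. A minimal illustration of both effects (an illustrative layout, not toml-spanner's actual one):

```rust
use std::mem::size_of;

// Illustrative layout, not toml-spanner's actual one: narrowing span
// offsets from usize to u32 halves the span, and niche packing lets
// enums store their discriminant in otherwise-invalid bit patterns.
struct WideSpan {
    start: usize,
    end: usize,
}

struct Span {
    start: u32, // u32 offsets still cover documents up to 4 GiB
    end: u32,
}

fn main() {
    assert_eq!(size_of::<WideSpan>(), 16); // on 64-bit platforms
    assert_eq!(size_of::<Span>(), 8);
    // Guaranteed niche optimization: Option<&str> costs no extra space.
    assert_eq!(size_of::<Option<&str>>(), size_of::<&str>());
    println!("ok");
}
```

Capping offsets at u32 is also why a parser built this way can comfortably bound the maximum document size.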
There are a bunch of other smaller optimizations, and I've also added conveniences like:

`table["alpha"][0]["bravo"].as_str()`

via null-coalescing index operators, among other quality-of-life improvements; see the [API Documentation](https://docs.rs/toml-spanner/latest/tomlspanner/) for more examples.
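For flavor, here is a minimal sketch of how a null-coalescing index can work (hypothetical names and types, not toml-spanner's actual API): indexing a missing key returns a shared `None` sentinel instead of panicking, so long chains never fail midway.

```rust
use std::ops::Index;

// Hypothetical sketch, not toml-spanner's real types: a missing key or
// index yields a shared NONE sentinel, so long chains never panic.
enum Value {
    None,
    Str(&'static str),
    Table(Vec<(&'static str, Value)>),
    Array(Vec<Value>),
}

static NONE: Value = Value::None;

impl Index<&str> for Value {
    type Output = Value;
    fn index(&self, key: &str) -> &Value {
        match self {
            Value::Table(kv) => kv
                .iter()
                .find(|(k, _)| *k == key)
                .map(|(_, v)| v)
                .unwrap_or(&NONE),
            _ => &NONE, // coalesce: indexing anything else gives None
        }
    }
}

impl Index<usize> for Value {
    type Output = Value;
    fn index(&self, i: usize) -> &Value {
        match self {
            Value::Array(items) => items.get(i).unwrap_or(&NONE),
            _ => &NONE,
        }
    }
}

impl Value {
    fn as_str(&self) -> Option<&str> {
        match self {
            Value::Str(s) => Some(s),
            _ => None,
        }
    }
}

fn main() {
    let doc = Value::Table(vec![(
        "alpha",
        Value::Array(vec![Value::Table(vec![("bravo", Value::Str("hi"))])]),
    )]);
    assert_eq!(doc["alpha"][0]["bravo"].as_str(), Some("hi"));
    // A missing path coalesces to None instead of panicking:
    assert_eq!(doc["missing"][7]["x"].as_str(), None);
    println!("ok");
}
```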
The original toml-span had no unsafe code, whereas toml-spanner does need it for the compact data structures and the arena. But it has comprehensive testing under Miri, fuzzing with MemorySanitizer and debug asserts, plus really rigorous review. I'm confident it's sound. (Totally not baiting you into auditing the crate.)
The extensive fuzzing found three bugs in the toml crate (issues #1096, #1103 and #1106 in the toml-rs/toml GitHub repo, if you're curious), for which epage did a fabulous job, resolving each issue within about one business day. After fixing my own bugs, I'm now pretty confident that toml and toml-spanner are pretty aligned.
Also, the maximum supported TOML document size is now 512 MB.
If anyone ever hits that limit, I hope it gives them pause to reconsider their life choices.
Why fork instead of upstreaming? The APIs are different enough that it might as well be a different crate. And although toml-spanner is simpler in terms of API surface and code generation, the actual implementation details and internal invariants are much more complex.
While TOML parsing might not be the most exciting domain, I did go pretty deep on this over the last couple of weeks, balancing compilation time against performance and features, all while trying to shape the API to my will. This required making a lot of decisions and constantly weighing trade-offs. Feel free to ask any questions.
https://redd.it/1rbk4t2
@r_rust
GitHub
GitHub - exrok/toml-spanner: High Performance Toml parser and deserializer for Rust that preserves span information with fast compile times. - exrok/toml-spanner
Hey Rustaceans! Got a question? Ask here (8/2026)!
Mystified about strings? Borrow checker has you in a headlock? Seek help here! There are no stupid questions, only docs that haven't been written yet. Please note that if you include code examples to e.g. show a compiler error or surprising result, linking a playground with the code will improve your chances of getting help quickly.
If you have a StackOverflow account, consider asking it there instead! StackOverflow shows up much higher in search results, so having your question there also helps future Rust users (be sure to give it the "Rust" tag for maximum visibility). Note that this site is very interested in question quality; I've been asked to read an RFC I authored once. If you want your code reviewed or want to review others' code, there's a codereview stackexchange, too. If you need to test your code, maybe the Rust playground is for you.
Here are some other venues where help may be found:
/r/learnrust is a subreddit to share your questions and epiphanies learning Rust programming.
The official Rust user forums: https://users.rust-lang.org/.
The official Rust Programming Language Discord: https://discord.gg/rust-lang
The unofficial Rust community Discord: https://bit.ly/rust-community
Also check out last week's thread with many good questions and answers. And if you believe your question to be either very complex or worthy of larger dissemination, feel free to create a text post.
Also if you want to be mentored by experienced Rustaceans, tell us the area of expertise that you seek. Finally, if you are looking for Rust jobs, the most recent thread is here.
https://redd.it/1rcdmg4
@r_rust
play.rust-lang.org
Rust Playground
A browser interface to the Rust compiler to experiment with the language
Searching 1GB JSON on a phone: 44s to 1.8s, a journey through every wrong approach
https://redd.it/1rgzhhl
@r_rust
Servo v0.0.5 released
https://github.com/servo/servo/releases/tag/v0.0.5
https://redd.it/1rh8w41
@r_rust
GitHub
Release v0.0.5 · servo/servo
v0.0.5
Servo 0.0.5 includes:
<link rel=preload> (@TimvdLippe, @jdm, #40059)
<style blocking> and <link blocking> (@TimvdLippe, #42096)
<img align> (@mrobinson, #42220)
<...
I built a 1 GiB/s file encryption CLI using io_uring, O_DIRECT, and a lock-free triple buffer
Hey r/rust,
I got frustrated with how slow standard encryption tools (like GPG or age) get when you throw a massive 50GB database backup or disk image at them. They are incredibly secure, but their core ciphers are largely single-threaded, usually topping out around 200-400 MiB/s.
I wanted to see if I could saturate a Gen4 NVMe drive while encrypting, so I built Concryptor.
GitHub: https://github.com/FrogSnot/Concryptor
I started out just mapping files into memory, but to hit multi-gigabyte/s throughput without locking up the CPU or thrashing the kernel page cache, the architecture evolved into something pretty crazy:
Lock-Free Triple-Buffering: Instead of using async MPSC channels (which introduced severe lock contention on small chunks), I built a 3-stage rotating state machine. While io_uring writes batch N-2 to disk, Rayon encrypts batch N-1 across all 12 CPU cores, and io_uring reads batch N.
Zero-Copy O_DIRECT: I wrote a custom 4096-byte-aligned memory allocator using std::alloc. This pads the header and chunk slots so the Linux kernel can bypass the page cache entirely and DMA straight to the drive.
Security Architecture: It uses ring for assembly-optimized AES-256-GCM and ChaCha20-Poly1305. To prevent chunk-reordering attacks, it uses a TLS 1.3-style nonce derivation (base_nonce XOR chunk_index).
STREAM-style AAD: The full serialized file header (which contains the Argon2id parameters, salt, and base nonce) plus an is_final flag are bound into every single chunk's AAD. This mathematically prevents truncation and append attacks.
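The per-chunk nonce derivation described above can be sketched as a small stand-alone function (illustrative, not Concryptor's actual code):

```rust
// Illustrative stand-alone version of base_nonce XOR chunk_index,
// the TLS 1.3-style derivation described above (not Concryptor's code).
fn chunk_nonce(base_nonce: [u8; 12], chunk_index: u64) -> [u8; 12] {
    let mut nonce = base_nonce;
    // XOR the big-endian counter into the last 8 bytes, the same way
    // TLS 1.3 mixes the record sequence number into its IV.
    for (n, c) in nonce[4..].iter_mut().zip(chunk_index.to_be_bytes()) {
        *n ^= c;
    }
    nonce
}

fn main() {
    let base = [7u8; 12];
    assert_eq!(chunk_nonce(base, 0), base); // index 0 is the identity
    assert_ne!(chunk_nonce(base, 1), chunk_nonce(base, 2)); // unique per chunk
    println!("ok");
}
```

Reordered or duplicated chunks then fail AEAD authentication, since each ciphertext only decrypts under its own derived nonce.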
It reliably pushes 1+ GiB/s entirely CPU-bound, and scales beautifully with cores.
The README has a massive deep-dive into the binary file format, the memory alignment math, and the threat model. I'd love for the community to tear into the architecture or the code and tell me what I missed.
Let me know what you think!
https://redd.it/1rh9tj5
@r_rust
GitHub
GitHub - FrogSnot/Concryptor: A gigabyte-per-second, multi-threaded file encryption engine. Achieves extreme throughput using a…
A gigabyte-per-second, multi-threaded file encryption engine. Achieves extreme throughput using a lock-free, triple-buffered io_uring pipeline, Rayon parallel chunking, and hardware-accelerated...
Life outside Tokio: Success stories with Compio or io_uring runtimes
Are io_uring-based async runtimes a lost cause?
This is a space to discuss async solutions outside epoll-based designs. What have you been doing with compio? How does it perform compared with tokio? What is your use case?
https://redd.it/1rh7lfe
@r_rust
Is there any significant performance cost to using `array.get(idx).ok_or(Error::Whoops)` over `array[idx]`?
And is `array.get(idx).ok_or(Error::Whoops)` faster than checking against known bounds explicitly with an `if` statement?
I'm doing a lot of indexing that doesn't lend itself nicely to an iterator. I suppose I could do a performance test, but I figured someone probably already knows the answer.
Thanks in advance <3
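For the record, both spellings boil down to a single bounds check; `get` just reports the failure as a value. A tiny sketch (the `Error` type is hypothetical, mirroring the question):

```rust
#[derive(Debug, PartialEq)]
enum Error {
    Whoops,
}

// Both forms do exactly one bounds check; `get` just reports the
// failure as a value instead of panicking like `array[idx]` would.
fn checked(array: &[u32], idx: usize) -> Result<u32, Error> {
    array.get(idx).copied().ok_or(Error::Whoops)
}

fn main() {
    let a = [10, 20, 30];
    assert_eq!(checked(&a, 1), Ok(20));
    assert_eq!(checked(&a, 9), Err(Error::Whoops));
    println!("ok");
}
```

An explicit `if idx < array.len()` guard typically optimizes to the same code, so the choice is mostly about ergonomics and how you want errors surfaced.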
https://redd.it/1rhb97r
@r_rust
Published my first crate - in response to a nasty production bug I'd caused
https://crates.io/crates/axum-socket-backpressure
https://redd.it/1rhc7ac
@r_rust
crates.io
crates.io: Rust Package Registry
Another minimal quantity library in rust (mainly for practice, feedback welcome!)
Another quantity library in Rust... I know there are many, and they are probably better than mine (e.g. uom). However, I wanted to practice some aspects of Rust, including procedural macros. I learned a lot from this project!
Feedback is encouraged and very much welcome!
https://github.com/Audrique/quantity-rs/tree/main
Me rambling:
I only started properly working as a software engineer around half a year ago and have been dabbling in Rust over a year. As I use Python at my current job, my main question for you is if I am doing stuff a 'non-idiomatic' way. For example, I was searching on how I could write interface tests for every struct that implements the 'Quantity' trait in my library. In Python, you can write one set of interface tests and let implementation tests inherit it, thus running the interface tests for each implementation. I guess it is not needed in Rust since you can't override traits?
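One common Rust stand-in for Python's inherited interface tests is a generic function over the trait, called once per implementor (a sketch with hypothetical names, not quantity-rs's actual API):

```rust
// Hypothetical trait and types, sketching how one set of "interface
// tests" can be reused across every implementor in Rust.
trait Quantity {
    fn magnitude(&self) -> f64;
}

struct Meters(f64);
struct Seconds(f64);

impl Quantity for Meters {
    fn magnitude(&self) -> f64 {
        self.0
    }
}
impl Quantity for Seconds {
    fn magnitude(&self) -> f64 {
        self.0
    }
}

// The trait contract is written once, generically...
fn check_quantity_contract<Q: Quantity>(q: Q, expected: f64) {
    assert_eq!(q.magnitude(), expected);
}

fn main() {
    // ...and exercised for each implementation, much like an inherited
    // test class in Python.
    check_quantity_contract(Meters(2.5), 2.5);
    check_quantity_contract(Seconds(60.0), 60.0);
    println!("ok");
}
```

A `macro_rules!` macro that stamps out a `#[test]` module per implementor achieves the same thing inside a normal test harness.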
https://redd.it/1rhcaxs
@r_rust
GitHub
GitHub - Audrique/quantity-rs: A minimal library for defining and working with quantities.
A minimal library for defining and working with quantities. - Audrique/quantity-rs
I used Tauri and Rust to build a native Windows Git context menu that replaces heavy Electron GUI clients. (OpenSource)
Hey r/rust,
I wanted to share a desktop utility I recently built called GitPop. It’s a Windows File Explorer extension that brings a Git UI directly to your right-click context menu.
https://github.com/vinzify/gitpop
# Why Rust and Tauri?
A context menu popup needs to open instantly. I initially looked at Electron, but shipping a 100MB+ Chromium instance just to show a tiny Git status window felt unacceptable. Using Tauri v2 let me keep the binary size small and the startup time nearly instantaneous.
# A few fun implementation details
# OS integration (registry binding)
I used the winreg crate to dynamically find the app’s executable and bind it to the `Directory\Background\shell` registry keys during setup.
# Headless Git (no libgit2)
Instead of linking libgit2 (which can be a headache and often ignores the user’s global .gitconfig), the Rust backend spawns child processes that run the native Git CLI. To prevent Windows CMD boxes from flashing on the screen, I had to use CommandExt plus the CREATE_NO_WINDOW flag.
# Local LLMs for commit messages
I implemented a feature that pipes git diff output to a local Ollama instance (via reqwest) to auto-generate commit messages entirely on-device, keeping source code private.
# UI quirks
Building a transparent, glassmorphism UI on Windows 11 WebView2 had a few quirky panics, but the Tauri v2 APIs handled it cleanly once configured.
The source code is fully open-source if anyone wants to see how the context-menu registry binding or hidden child processes were implemented!
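The hidden-console trick mentioned above can be sketched like this (`quiet_git` is a hypothetical helper; `CREATE_NO_WINDOW` is the documented Win32 flag 0x08000000, applied via std's Windows-only `CommandExt::creation_flags`):

```rust
use std::process::Command;

// Sketch of suppressing the console window when spawning git on
// Windows; on other platforms the command is built unchanged.
fn quiet_git(repo: &str, args: &[&str]) -> Command {
    let mut cmd = Command::new("git");
    cmd.arg("-C").arg(repo).args(args);
    #[cfg(windows)]
    {
        use std::os::windows::process::CommandExt;
        const CREATE_NO_WINDOW: u32 = 0x0800_0000;
        cmd.creation_flags(CREATE_NO_WINDOW); // no flashing CMD box
    }
    cmd
}

fn main() {
    // Build (but don't run) the command; calling `.output()` on it
    // would execute git with the window suppressed on Windows.
    let cmd = quiet_git(".", &["status", "--porcelain"]);
    assert_eq!(cmd.get_program().to_str(), Some("git"));
    println!("ok");
}
```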
https://redd.it/1rhjsat
@r_rust
GitHub
GitHub - vinzify/gitpop: An AI-powered Git context menu for Windows. Right-click any repo to instantly view changes, auto-generate…
An AI-powered Git context menu for Windows. Right-click any repo to instantly view changes, auto-generate commit messages, and push without opening an IDE. - vinzify/gitpop
How much did Rust help you in your work?
After years of obsessive learning of Rust, along with its practices and semantics, it is really helping my career, so much so that I would not shy away from admitting that Rust has been the prime factor in making me a hireable candidate.
I basically have to thank Rust for making me able to write code that can go in production and not break even under unconventional circumstances.
I was wondering how much is Rust helping with careers and whatnot over here.
I want to clarify: I did not simply "land a Rust job". I adopted Rust into my habits, and it made me capable of taking on good contracts and delivering.
https://redd.it/1rhts1u
@r_rust
Released domain-check 1.0 — Rust CLI + async library + MCP server (1,200+ TLDs)
Hey folks 👋
I just released **v1.0** of a project I’ve been building called **domain-check**, a Rust-based domain exploration engine available as:
* CLI
* Async Rust library
* MCP server for AI agents
Some highlights:
• RDAP-first engine with automatic WHOIS fallback
• ~1,200+ TLDs via IANA bootstrap (32 hardcoded fallbacks for offline use)
• Up to 100 concurrent checks
• Pattern-based name generation (\w, \d, ?)
• JSON / CSV / streaming output
• CI-safe (no TTY prompts when piped)
For Rust folks specifically:
• Library-first architecture (domain-check-lib)
• Separate MCP server crate (domain-check-mcp)
• Built on rmcp (Rust MCP SDK)
• Binary size reduced from ~5.9 MB → ~2.7 MB (LTO + dep cleanup)
Repo: [https://github.com/saidutt46/domain-check](https://github.com/saidutt46/domain-check)
would love to hear your feedback
https://redd.it/1rhubzg
@r_rust
GitHub
GitHub - saidutt46/domain-check: Fast, universal domain availability checker - 1,200+ TLDs, pattern generation, RDAP with WHOIS…
Fast, universal domain availability checker - 1,200+ TLDs, pattern generation, RDAP with WHOIS fallback. CLI + Rust library + MCP server for AI agents. - saidutt46/domain-check
🌊 semwave: Fast semver bump propagation
Hey everyone!
Recently I started working on a tool to solve a specific problem at my company: incorrect version-bump propagation in a Rust project, given some bumps of dependencies. This problem leads to many bad things, including breaking downstream code, internal registry inconsistencies, angry coworkers, etc. cargo-semver-checks won't help here (it only checks the code for breaking changes, without propagating bumps to dependents that 'leak' that code in their public API), and private dependencies are not ready yet. That's why I decided to make `semwave`.
Basically, it answers the question:
>"If I bump crates A, B and C in this Rust project - what else do I need to bump and how?"
semwave takes the crates that changed their versions in a breaking manner (the "seeds") and propagates the bump wave through your workspace, so you don't have to wonder "does crate X depend on Y in a breaking or a non-breaking way?". The result is three lists: MAJOR bumps, MINOR bumps, and PATCH bumps, plus optional warnings when it had to guess conservatively. It doesn't need conventional commits, and it is super light and fast, as it only operates on the versions (not the code) of crates and their dependents.
Under the hood, it walks the workspace dependency graph starting from the seeds. For each dependent, it checks whether the crate leaks any seed types in its public API by analyzing its rustdoc JSON. If it does, that crate itself needs a bump - and becomes a new seed, triggering the same check on its dependents, and so on until the wave settles.
I find it really useful for large Cargo workspaces, like the rust-analyzer repo (although you can use it for simple crates too). For example, here's my tool answering the question "What happens if we introduce breaking changes to arrayvec AND itertools in the rust-analyzer repo?":
> semwave --direct arrayvec,itertools
Direct mode: assuming BREAKING change for {"arrayvec", "itertools"}
Analyzing stdx for public API exposure of "itertools"
-> stdx leaks itertools (Minor):
-> xtask is binary-only, no public API to leak
Analyzing vfs for public API exposure of "stdx"
-> vfs leaks stdx (Minor):
Analyzing test-utils for public API exposure of "stdx"
-> test-utils leaks stdx (Minor):
Analyzing vfs-notify for public API exposure of "stdx", "vfs"
-> vfs-notify leaks stdx (Minor):
-> vfs-notify leaks vfs (Minor):
Analyzing syntax for public API exposure of "itertools", "stdx"
...
=== Analysis Complete ===
MAJOR-bump list (Requires MAJOR bump / ↑.0.0): {}
MINOR-bump list (Requires MINOR bump / x.↑.0): {"project-model", "syntax-bridge", "proc-macro-srv", "load-cargo", "hir-expand", "ide-completion", "hir-def", "cfg", "vfs", "ide-diagnostics", "ide", "ide-db", "span", "ide-ssr", "rust-analyzer", "ide-assists", "base-db", "stdx", "syntax", "test-utils", "vfs-notify", "hir-ty", "proc-macro-api", "tt", "test-fixture", "hir", "mbe", "proc-macro-srv-cli"}
PATCH-bump list (Requires PATCH bump / x.y.↑): {"xtask"}
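The propagation shown above amounts to a worklist traversal of the reverse-dependency graph; a simplified sketch (hypothetical signatures, not semwave's actual code, which decides "leaks" by analyzing rustdoc JSON rather than taking a closure):

```rust
use std::collections::{BTreeMap, BTreeSet, VecDeque};

// Simplified worklist sketch of the bump wave.
// `reverse_deps` maps a crate to the crates that depend on it, and
// `leaks` answers "does `dependent` expose `dep`'s types publicly?".
fn propagate<'a>(
    seeds: &[&'a str],
    reverse_deps: &BTreeMap<&'a str, Vec<&'a str>>,
    leaks: impl Fn(&str, &str) -> bool,
) -> BTreeSet<String> {
    let mut needs_bump: BTreeSet<String> = seeds.iter().map(|s| s.to_string()).collect();
    let mut queue: VecDeque<&'a str> = seeds.to_vec().into();
    while let Some(dep) = queue.pop_front() {
        for &dependent in reverse_deps.get(dep).into_iter().flatten() {
            // A dependent that leaks the seed in its public API needs a
            // bump too, and becomes a new seed for the next wave.
            if leaks(dependent, dep) && needs_bump.insert(dependent.to_string()) {
                queue.push_back(dependent);
            }
        }
    }
    needs_bump
}

fn main() {
    let mut rdeps = BTreeMap::new();
    rdeps.insert("itertools", vec!["stdx"]);
    rdeps.insert("stdx", vec!["vfs", "xtask"]);
    // Pretend everything leaks except the binary-only crate `xtask`.
    let wave = propagate(&["itertools"], &rdeps, |dependent, _| dependent != "xtask");
    assert!(wave.contains("stdx") && wave.contains("vfs"));
    assert!(!wave.contains("xtask"));
    println!("{wave:?}");
}
```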
I would really appreciate any activity on this post and/or the GitHub repo, as well as any questions or suggestions.
P.S. The tool is in active development and is unstable at the moment. Additionally, for the first version I used an LLM (to quickly validate the idea), so please be aware of that. I no longer use language models and write the tool entirely myself.
https://redd.it/1rhvrbm
@r_rust
How to Interface PyO3 libraries.
Hi, I am working on a project. It runs mostly on Python because it involves communicating with NVIDIA's inference system and other libraries that are mature in Python. However, when it comes to performing my core tasks, I prefer to use Rust to manage complexity and performance :)
So I have three Rust libraries exposed to Python through PyO3. They work in a producer-consumer scheme, and basically I am running one process for each component that pipes its result to the following component.
For now I bind the inputs/outputs as Python dictionaries. However, I would like to make the interface between each component more robust (and less boilerplate-prone). That is, let's say I have component A (Rust) that produces an output in Python (for now a dictionary) which is taken as the input of component B.
My question is : "What methods would you use to properly interface each library/component"
----
My thoughts are:
1. Keep the dictionary methods
2. Make PyClasses (but how should the libraries share those classes?)
3. Make dataclasses (but that looks like the same boilerplate as the dictionary methods?)
If you can share your ideas and experience it would be really kind :)
<3
https://redd.it/1rhxni3
@r_rust
Supercharge Rust functions with implicit arguments using CGP v0.7.0
https://contextgeneric.dev/blog/v0.7.0-release/
https://redd.it/1rhwxnd
@r_rust
contextgeneric.dev
Supercharge Rust functions with implicit arguments using CGP v0.7.0 | Context-Generic Programming
CGP v0.7.0 has been released, bringing a major expansion to the CGP macro toolkit. The centerpiece of this release is a suite of new annotations — #[cgpfn], #[implicit], #[uses], #[extend], #[useprovider], and #[use_type] — that let you write context-generic…
nabla — Pure Rust GPU math engine: PyTorch-familiar API, zero C++ deps, 4 backends
https://github.com/fumishiki/nabla
https://redd.it/1ri6652
@r_rust
GitHub
GitHub - fumishiki/nabla
Contribute to fumishiki/nabla development by creating an account on GitHub.