[OC] Tired of "blind" C++ debugging in VS Code for Computer Vision? I built CV DebugMate C++ to view cv::Mat and 3D Point Clouds directly.
Hey everyone,
As a developer working on **SLAM and Computer Vision projects in C++**, I was constantly frustrated by the lack of proper debugging tools in VS Code after moving away from Visual Studio's Image Watch. Staring at memory addresses for cv::Mat and std::vector<cv::Point3f> felt like debugging blind!
So, I decided to build what I needed and open-source it: [CV DebugMate C++](https://marketplace.visualstudio.com/items?itemName=zwdai.cv-debugmate-cpp).
It's a **VS Code extension** that brings back essential visual debugging capabilities for C++ projects, with a special focus on 3D/CV applications.
**🌟 Key Features**
**1.** 🖼️ **Powerful cv::Mat Visualization**
* Diverse Types: Supports various depths (uint8, float, double) and channels (Grayscale, BGR, RGBA).
* Pixel-Level Inspection: Hover your mouse to see real-time pixel values, with zoom and grid support.
* Pro Export: Exports to common formats like PNG and, crucially, TIFF for preserving floating-point data integrity (a must for deep CV analysis).
**2.** 📊 **Exclusive: Real-Time 3D Point Cloud Viewing**
* Direct Rendering: Directly renders your **std::vector<cv::Point3f>** or **cv::Point3d** variables as an interactive 3D point cloud.
* Interactive 3D: Built on Three.js, allowing you to drag, rotate, and zoom the point cloud right within your debugger session. Say goodbye to blindly debugging complex 3D algorithms.
**3. 🔍 CV DebugMate Panel**
* Automatic Variable Collection: Automatically detects all visualizable OpenCV variables in the current stack frame.
* Dedicated Sidebar View: A new view in the Debug sidebar for quick access to all Mat and Point Cloud variables.
* Type Identification: Distinct icons for images (Mat) and 3D data (Point Cloud).
* One-Click Viewing: Quick-action buttons to open visualization tabs without using context menus.
**4. Wide Debugger Support**
Confirmed compatibility with common setups: Windows (MSVC/MinGW), Linux (GDB), and macOS (LLDB). (Check the documentation for the full list).
**🛠 How to Use**
It's designed to be plug-and-play. During a debug session, simply right-click your cv::Mat or std::vector<cv::Point3f> variable in the Locals/Watch panel and select "View by CV DebugMate".
**🔗 Get It & Support**
The plugin is completely free and open-source. It's still early in development, so feedback and bug reports are highly welcome!
**VS Code Marketplace**: Search for CV DebugMate or zwdai
**GitHub Repository**: [https://github.com/dull-bird/cv\_debug\_mate\_cpp](https://github.com/dull-bird/cv_debug_mate_cpp)
If you find it useful, please consider giving it a Star on GitHub or a rating on the Marketplace—it's the fuel for continued bug fixes and feature development! 🙏
https://redd.it/1ps4a8n
@r_cpp
link me a bgfx guide
Has anyone found a good guide on how to use the bgfx library? I've been searching for days and have only found bad ones.
https://redd.it/1ps6lqv
@r_cpp
My 70+ video playlist exploring Unreal Engine's unique flavor of C++ (eg language additions, data structures, networking APIs, etc.)
https://www.youtube.com/playlist?list=PL22CMuqloY0oZBwPaqRbu_WGA1btwhfC2
https://redd.it/1ps8937
@r_cpp
Constvector: Log-structured std::vector alternative – 30-40% faster push/pop
Usually std::vector starts with capacity N and grows to capacity 2N once its size exceeds N; at that point it also copies the data from the old array to the new one. That has a few problems:
1. Copy cost.
2. The OS has to manage the small (size-N) allocation freed by the application.
3. The L1/L2 caches must invalidate the old array's lines, since the array moved to a new location; the CPU then refetches the data into L1/L2 as if it were new, even though it hasn't actually changed.
std::vector's reallocations and copies are amortized O(1), but at a low level they have a lot of negative impact. Here's a log-structured alternative (constvector) with power-of-2 blocks:
* Push: 3.5 ns/op (vs 5 ns for std::vector)
* Pop: 3.4 ns/op (vs 5.3 ns)
* Index: minor slowdown (3.8 vs 3.4 ns)
* Strict worst-case O(1); Θ(N) space trade-off, only log(N) extra over std::vector
It reduces internal memory fragmentation and avoids invalidating L1/L2 cache lines on growth, which improves performance. In the GitHub repo I benchmarked vectors from 1K to 1B elements, and constvector consistently showed better performance for push and pop operations.
Github: https://github.com/tendulkar/constvector
YouTube: https://youtu.be/ledS08GkD40
Practically, we can use a meta array of size 64 for the log(N) extra space. I implemented bare vector operations for the comparison, since actual std::vector implementations carry a lot of iterator-validation code, which adds extra overhead.
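The indexing scheme behind such a layout can be sketched as follows. This is my own illustrative reconstruction from the description above (names like `locate` are mine), not the repo's actual code:

```cpp
#include <cassert>
#include <cstddef>

// Illustrative sketch of a log-structured layout: block b holds 2^b slots,
// so element i (0-based) lives in block b = floor(log2(i + 1)) at offset
// (i + 1) - 2^b. Once written, an element never moves, so growth needs no
// copy and invalidates no cache lines.
struct BlockIndex {
    unsigned block;    // which power-of-2 block
    std::size_t off;   // offset within that block
};

BlockIndex locate(std::size_t i) {
    std::size_t k = i + 1;                 // 1-based rank
    unsigned b = 0;
    while ((std::size_t(2) << b) <= k)     // b = floor(log2(k))
        ++b;
    return {b, k - (std::size_t(1) << b)};
}
```

Growing by one block doubles capacity much like std::vector, but the meta array only needs about log2(N) block pointers (64 entries cover any 64-bit size), which is the log(N) extra space mentioned above.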
https://redd.it/1ps8k53
@r_cpp
Crunch: A Message Definition and Serialization Tool Written in Modern C++
https://github.com/sam-w-yellin/crunch
https://redd.it/1psa0pg
@r_cpp
[Project] Parallax - Universal GPU Acceleration for C++ Parallel Algorithms
Hey r/cpp!
I'm excited to share **Parallax**, an open-source project that brings automatic GPU acceleration to C++ standard parallel algorithms.
# The Idea
Use `std::execution::par` in your code, link with Parallax, and your parallel algorithms run on the GPU. No code changes, no vendor lock-in, works on any GPU with Vulkan support (AMD, NVIDIA, Intel, mobile).
# Example
```cpp
std::vector<float> data(1'000'000);
std::for_each(std::execution::par, data.begin(), data.end(),
              [](float& x) { x *= 2.0f; });
```
With Parallax, this runs on the GPU automatically. 30-40x speedup on typical workloads.
# Why Vulkan?
* **Universal**: Works on all major GPU vendors
* **Modern**: Actively developed, not deprecated like OpenCL
* **Fast**: Direct compute access, no translation overhead
* **Open**: No vendor lock-in like CUDA/HIP
# Current Status
This is an early MVP (v0.1.0-dev):
* ✅ Vulkan backend (all platforms)
* ✅ Unified memory management
* ✅ macOS (MoltenVK), Linux, Windows
* 🔨 Compiler integration (in progress)
* 🔨 Full algorithm coverage (coming soon)
# Architecture
Built on:
* Vulkan 1.2+ for compute
* C ABI for stability
* LLVM/Clang for future compiler integration
* Lessons learned from vkStdpar
# Looking for Contributors
We need help with:
* LLVM/Clang plugin development
* Algorithm implementations
* Testing on different GPUs
* Documentation
# Links
* GitHub: [https://github.com/parallax-compiler/parallax-runtime](https://github.com/parallax-compiler/parallax-runtime)
* Docs: [https://github.com/parallax-compiler/parallax-docs](https://github.com/parallax-compiler/parallax-docs)
* License: Apache 2.0
Would love to hear your thoughts and feedback!
https://redd.it/1psfb7p
@r_cpp
CRTP-based Singleton with private construction token — looking for feedback
I experimented with a CRTP-based Singleton that enforces construction via a private token. Curious to hear thoughts.
So, I wanted to implement a singleton in my CRTP-based ECS engine for design and architectural reasons, and I sat down to think about an efficient, CRTP-friendly way to implement the pattern without having to alter the original singleton class's contract. The solution is a CRTP-based Singleton in which the derived class (the actual singleton) inherits from a base Singleton that exposes the methods required for instantiation and the single access point to the object. Simply put, instead of repeating the classic singleton boilerplate in every class (deleted copy/assignment operators and so on), we move that logic into the base, which acts as a proxy returning a static instance of the derived class without the derived class even being aware of it.
This way, private instantiation is managed through a token struct that serves as a dedicated constructor parameter and, among other things, makes construction exclusive to code that holds the token.
This keeps the singleton type-safe, zero-cost, CRTP-friendly, and easy to integrate with proxy-based or ECS-style architectures.
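For readers who want to picture the pattern, here is a minimal sketch of a CRTP base with a construction "passkey" token, under my own naming; the repo's actual code may differ:

```cpp
#include <cassert>
#include <string>

// Sketch (names are mine, not necessarily the repo's): only the base can
// mint a Token, so only Singleton<Derived>::instance() can construct the
// derived class, even though its constructor is nominally public.
template <typename Derived>
class Singleton {
protected:
    class Token {
        friend class Singleton;   // only the enclosing base may construct one
        Token() = default;
    };
    Singleton() = default;
public:
    Singleton(const Singleton&) = delete;
    Singleton& operator=(const Singleton&) = delete;
    static Derived& instance() {
        static Derived obj{Token{}};   // one instance, created on first use
        return obj;
    }
};

// The derived class carries no singleton boilerplate beyond the Token ctor.
class Logger : public Singleton<Logger> {
public:
    explicit Logger(Token) {}
    std::string name() const { return "logger"; }
};
```

`Logger::instance().name()` works as expected, while constructing a `Logger` directly fails outside the base because no other code can produce a `Token`.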
Link to the GitHub repo
https://redd.it/1psmcaf
@r_cpp
Any Libraries for Asynchronous requests with HTTP2
I've recently picked up C++ and am looking to port a program that I had previously written in Python using aiohttp, but I'm having trouble finding a library that makes it easy to handle asynchronous HTTP requests. I initially tried using liburing in conjunction with nghttp2, but quickly found that that was way over my level of knowledge. Does anyone have any suggestions on what I should do? I can't use libraries like Boost because I need HTTP/2 for its multiplexing capabilities.
https://redd.it/1psmrx6
@r_cpp
How can I share my projects on Reddit if they run in the console?
Should I send all the code in a message, or as a file? And if as a file, how?
https://redd.it/1pt0fdx
@r_cpp
Maintaining the Legacy: Total-Random takes over pcg-cpp maintenance (Support for Win ARM64, MSVC fixes, and Modern C++)
Hi everyone,
Like many of you, we consider the PCG (Permuted Congruential Generator) family of PRNGs by Prof. Melissa O'Neill to be the gold standard for performance and statistical quality. However, the original pcg-cpp repository has been inactive for over 3 years, leaving many critical community-submitted patches unmerged.
To ensure this vital library remains usable in modern development environments, we have formed Total-Random, a community-led organization dedicated to maintaining and modernizing legacy RNG libraries.
We have just released our first stable version of the Total-Random/pcg-cpp fork, which includes:
Windows ARM64 Support: Integrated fixes for ARM64 architecture (thanks to Demonese/LuaSTG).
MSVC Compatibility: Resolved C2678 ambiguous operator errors and other MSVC-specific build failures.
Empty Base Class Optimization (EBCO): Enabled __declspec(empty_bases) for MSVC to ensure optimal memory layout, matching GCC/Clang behavior.
Robust 128-bit Fallback: Improved handling for platforms lacking native __uint128_t support.
Improved unxorshift: Replaced the recursive implementation with a more efficient iterative doubling loop to prevent stack issues and improve clarity.
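For context, the iterative-doubling trick for inverting a right xorshift can be sketched like this (my own hedged reconstruction of the technique named above, not necessarily the fork's exact code):

```cpp
#include <cassert>
#include <cstdint>

// Invert y = x ^ (x >> shift) by iterative doubling (requires shift >= 1).
// Each pass cancels the current residual term and leaves one twice as far
// right: after r ^= r >> s, the residual becomes x >> 2s. Once the shift
// reaches the word width the residual vanishes, so r == x. No recursion,
// just a fixed small loop.
uint64_t unxorshift_right(uint64_t y, unsigned shift) {
    uint64_t r = y;
    for (unsigned s = shift; s < 64; s *= 2)
        r ^= r >> s;
    return r;
}
```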
Our goal is to keep the library header-only, bit-for-bit compatible with the original algorithm, and ready for C++11/17/20/23.
Community Recognition: We are honored to have received early attention and feedback from researchers in the field, including Ben Haller (@bhaller) from Cornell University. You can see the community discussion regarding our transition here: https://github.com/imneme/pcg-cpp/issues/106
Check us out on GitHub: Total-Random/pcg-cpp
We welcome PRs, issues, and feedback from the community. Let's keep the best PRNG alive and kicking!
Best regards, The Total-Random Team
https://redd.it/1pt25rg
@r_cpp
Is my project good enough for a CV?
Well, I’m currently a Polish IT student, and I’m looking for a job. Since I don’t have any professional experience yet, I decided to create something meaningful to put on my CV.
Initially, the idea was to build a parser that uses RPN to evaluate expressions. However, over time I kept adding more features: user-defined functions and variables, recursion, short-circuiting, assignment operations, references, local variables, sequential execution, loops, and multi-line input. All of this eventually required building an AST and dealing with a lot of pointer-related complexity.
I’ve gone through several refactorings (I still consider myself a beginner at programming) and even one complete rewrite of the code. I also noticed that there isn’t much detailed information about some parsing topics—at least beyond Wikipedia.
At this point, the project feels more like a very weak version of Desmos (without graphs) than just a calculator. Now I’m wondering: should I continue developing this project further, should I move on to something more complex, or is this already enough for a CV pet project?
Here’s the GitHub link in case anyone is interested:
https://github.com/YaroslavPryatkin/CoolCalculator
https://redd.it/1pt9veo
@r_cpp
Microsoft (CoreAI) aims to eliminate all C/C++ code by 2030
https://www.linkedin.com/posts/galenh_principal-software-engineer-coreai-microsoft-activity-7407863239289729024-WTzf?utm_source=share&utm_medium=member_ios&rcm=ACoAADgSZWYBfz-mLxzSdYSPqeNrByfljD_nKg4
https://redd.it/1ptokql
@r_cpp
LinkedIn
Principal Software Engineer (CoreAI) | Microsoft Careers | Galen Hunt | 18 comments
I have an open position in my team for an IC5 Principal Software Engineer. The position is in-person in Redmond.
My goal is to eliminate every line of C and C++ from Microsoft by 2030. Our strategy is to combine AI *and* Algorithms to rewrite Microsoft’s…
iceoryx2 v0.8 released
It’s Christmas, which means it’s time for the iceoryx2 "Christmas" release!
Check it out: https://github.com/eclipse-iceoryx/iceoryx2 Full release announcement: https://ekxide.io/blog/iceoryx2-0.8-release/
iceoryx2 is a true zero-copy communication middleware designed to build robust and efficient systems. It enables ultra-low-latency communication between processes - comparable to Unix domain sockets or message queues, but significantly faster and easier to use.
The library provides language bindings for C, C++, Python, Rust, and C#, and runs on Linux, macOS, Windows, FreeBSD, and QNX, with experimental support for Android and VxWorks.
With this release we added the memory-layout compatible types StaticString and StaticVector, which have Rust counterparts that let you exchange complex data structures between C++ and Rust without serialization.
The blackboard messaging pattern – a key-value repository in shared memory that can be accessed from multiple processes – is now fully integrated, and the C++ language bindings are complete.
I wish you a Merry Christmas and happy hacking if you’d like to experiment with the new features!
https://redd.it/1ptyu8a
@r_cpp
A little Introduction to Control Flow Integrity - James McNellis - Keynote Meeting C++ 2025
https://www.youtube.com/watch?v=_eX7AVB4qzM
https://redd.it/1pty5kj
@r_cpp
tieredsort - 3.8x faster than std::sort for integers, header-only
Made a sorting library that detects data patterns before sorting.
Results (n=100k):
* Random: 3.8x faster than std::sort, 1.6x faster than ska_sort
* Dense data (ages, sensors): 30x faster than std::sort, 9x faster than ska_sort
The idea: real data isn't random. Ages are 0-100. Sensors are 12-bit. When the range is small, counting sort beats everything.
Detection cost: 12 comparisons + 64 samples. Negligible.
C++17, header-only, no SIMD needed.
GitHub: https://github.com/Cranot/tieredsort
Looking for feedback on edge cases I might have missed.
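The range-detection idea can be sketched like this. It is a deliberate simplification under my own assumptions (the library itself detects the range from ~64 samples rather than a full scan, and the threshold here is illustrative):

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <vector>

// Simplified sketch of the core idea: when the value range is small
// relative to n (e.g. ages 0-100, 12-bit sensors), an O(n + range)
// counting sort beats any comparison sort; otherwise fall back.
void range_aware_sort(std::vector<uint32_t>& v) {
    if (v.size() < 2) return;
    auto [mn, mx] = std::minmax_element(v.begin(), v.end());
    const uint32_t lo = *mn, hi = *mx;
    const uint64_t range = uint64_t(hi) - lo + 1;
    if (range <= 2 * v.size()) {                    // dense: counting sort
        std::vector<std::size_t> counts(range, 0);
        for (uint32_t x : v) ++counts[x - lo];
        std::size_t out = 0;
        for (uint64_t i = 0; i < range; ++i)
            for (std::size_t c = counts[i]; c; --c)
                v[out++] = uint32_t(i) + lo;
    } else {
        std::sort(v.begin(), v.end());              // sparse: comparison sort
    }
}
```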
https://redd.it/1pu554f
@r_cpp
Is this the end of C++ at Microsoft?
Given the goals and push for Rust https://www.linkedin.com/posts/galenh_principal-software-engineer-coreai-microsoft-activity-7407863239289729024-WTzf
https://redd.it/1pu6cf7
@r_cpp
LinkedIn
Principal Software Engineer (CoreAI) | Microsoft Careers | Galen Hunt | 31 comments
Update:
It appears my post generated far more attention than I intended... with a lot of speculative reading between the lines.
Just to clarify... Windows is *NOT* being rewritten in Rust with AI.
My team’s project is a research project. We are building…
New 0-copy deserialization protocol
Hello all! Seems like serialization is a popular topic these days for some reason...
I've posted before about the c++ library "zerialize" (https://github.com/colinator/zerialize), which offers serialization/deserialization and translation across multiple dynamic (self-describing) serialization formats, including json, flexbuffers, cbor, and message pack. The big benefit is that when the underlying protocol supports it, it supports 0-copy deserialization, including directly into xtensor/eigen matrices.
Well, I've added two things to it:
1) Run-time serialization. Before this, you would have to define your serialized objects at compile-time. Now you can do it at run-time too (although, of course, it's slower).
2) A new built-in protocol! I call it "ZERA", for "ZERo-copy Arena". With all other protocols, I cannot guarantee that tensors will be properly aligned when coming off the wire, so tensor deserialization performs a copy if the data isn't properly aligned. ZERA _does_ guarantee alignment: if the caller can ensure that the underlying bytes are, say, 8-byte aligned, then everything inside the message will also be properly aligned. This gives the fastest 0-copy tensor deserialization, and works well for SIMD etc. And it's fast (but not compact)! Check out the benchmark_compare directory.
Definitely open to feedback or requests!
https://redd.it/1pu6zwe
@r_cpp
Wait-Free Chunked I/O Buffer
We’re building a database and recently implemented a custom `I/O buffer` to handle the Postgres wire protocol. We considered `folly::IOBuf` and `absl::Cord`, but decided to implement a specialized version to avoid mutexes and simplify "late" size-prefixing.
**Key Technical Features:**
* **Chunked Storage:** Prevents large reallocations and minimizes `memcpy` by using a chain of fixed-size buffers.
* **Wait-Free:** Designed for high-concurrency network I/O without mutex contention.
* **Uncommitted Writes:** Allows reserving space at the start of a message for a size prefix that is only known after the payload is serialized, avoiding data shifts.
**Why custom?** Most generic "Cord" implementations were either slow or not truly concurrent. Our buffer allows one writer and one reader to work at the same time without locks, and it performs quite well in our benchmarks.
**Code & Details:**
* [Benchmarks & Blog Post](https://www.serenedb.com/blog/io-buffer)
* [Source Code (GitHub)](https://github.com/serenedb/serenedb/blob/main/libs/basics/message_buffer.h)
I'd love to hear your thoughts on our approach and if anyone has seen similar wins by moving away from `std::mutex` in their transport layers.
https://redd.it/1pu8cfs
@r_cpp
C++ is actually a great language for LLMs
I remember hearing a few months ago that C++ isn't a great language for tools like Copilot, Cursor, or IDE replacements. Personally, these tools are deeply integrated into my workflow, and I've had a lot of positive experiences, so I wanted to share that with those still in the mindset that these tools are a negative.
For one, I keep the scope small. I try to provide just the context the tool needs. Sometimes I will check out the code of a third-party library just so it can pull in that context if needed. I can't offer all the best advice here, because some of it has nothing to do with the language, other people have written great articles, and this is a skill you develop over time.
But for both small and large wins, C++ is a great fit. It handles questions like "are there any unnecessary string copies?" or "are there any objects accidentally being passed by value?", up to beefier tasks like improving the performance of individual functions or removing unnecessary blocks in your threading lifecycle. It understands the cost of memory allocations if you tell it that matters, it will flatten data structures to keep them contiguous, and it will adhere to the design of your codebase.
Anyway, I'm having a lot of fun with Cursor in a C++ codebase and just wanted to evangelize a little: if you haven't integrated this into your workflow, you really are missing a fundamental shift in the software engineering role.
I will also say that there is huge variance among AI tools. I like Neovim, but having to provide the context of individual files was painful. Cursor is able to use external tools to perform its job and to search. Using one versus the other feels like performing a completely different role (Neovim + plugins might be better now; I don't know).
And a caveat: these tools can be used negligently and carelessly. I'm not here to argue that some aspects of SWE haven't degraded, especially when you're working with coworkers who aren't taking care in their use. The trick is to keep the scope small, tell the tool what is important in your codebase, and increase the scope as you get more comfortable with it.
https://redd.it/1pu78s9
@r_cpp
Why do I rarely come across 'using namespace std?'
For context, I'm pretty new to this language. I'm about halfway through *C++: A Beginner's Guide* by Herbert Schildt, but I have explored the language beyond this book (embedded things).
In the book, `using namespace std;` appears in every program.
In the C++ programs I typically see, the `std::` prefix is used instead (e.g., `std::cout`).
Is there a disadvantage to the using-directive? Is it just outdated?
(The book I'm reading is from the early 2000s.)
https://redd.it/1pubujb
@r_cpp
All the other cool languages have try...finally. C++ says "We have try...finally at home."
https://devblogs.microsoft.com/oldnewthing/20251222-00/?p=111890
https://redd.it/1pudexb
@r_cpp
The destructor serves as the "finally".