Don't we need to shift existing and new open source projects to memory, CPU and GPU efficient code?
There was a time when operating systems and everyday programs required minimal resources (memory, storage, CPU) to run. I see a stark difference in responsiveness between applications like VS Code, built on Electron, and IDEs like Zed, built in Rust. I miss the nimble, fast response of Windows XP, and the snappy execution of games and programs built with C++. I know the argument that any language can be compiled to machine code and become fast, but the point I'm trying to make is that there was a time when engineers dedicated at least some effort to the resource efficiency of their programs. Today that seems to be lost, with the focus shifting to quick delivery.
Programs written in C and C++ have their issues with memory safety, and I've heard that many Ubuntu modules are being rewritten in Rust; that's one good choice. But when I look at frameworks like React, Flutter, and many Python frameworks (even when they wrap C++), or even at just-in-time compilation, and see how slow and bulky they are, I realize the cost is twofold: it not only creates a poor user experience for people annoyed by sluggish programs, it also consumes far more resources on the server, massively increasing the cost of running operations. Perhaps another optimization would be modules that automatically detect the various types of GPUs and APUs, shift much of the processing to the GPU, and recommend an appropriate driver if the user has not yet installed the right one (that can happen with users like me, who did not know that AMD APUs needed a separate, specific ROCm driver).
It would be nice if the open source community considered slowly migrating to (and building) resource-efficient code everywhere. I'm already doing that by migrating my latest open source program from Python to C++.
Another important aspect to consider is syntax and semantics. Some recently introduced languages have such weird syntax and nested code that it's mind-numbing to keep learning new syntax created on the whims of some developer.
https://redd.it/1ph71nb
@r_opensource
Is there anyone who will start this project with me?
# Social Media & Digital Accounts Marketplace — Full Project Cheat Sheet
# 1️⃣ Project Concept
**Goal:**
Build a **secure multi-vendor marketplace** for **buying and selling social media and digital accounts** (Instagram, TikTok, Facebook, Twitter, YouTube, etc.) with:
* AI-powered account evaluation (authenticity, engagement, potential value)
* Escrow system for safe transactions
* Multi-store support (each seller has their own store)
* Escrow moderators (limited admin) for disputes
* Analytics and KPI tracking for sellers and mods
* Buyer reviews and comments
**Target Audience:**
* Sellers: People who own social media accounts
* Buyers: People looking to buy verified, high-quality social media accounts
* Admin & Mods: Ensure security and trust
# 2️⃣ Project Roadmap (Step-by-Step)
# Phase 1 — Planning & Requirements
* Define target social platforms
* List core features & user stories
* Create role hierarchy (Admin, MOD, Seller, Buyer)
* Determine payment gateways & escrow rules
* Define analytics & reporting requirements
# Phase 2 — Tech Stack & Architecture
* Frontend: Next.js + Tailwind CSS + shadcn/ui
* Backend: NestJS + TypeORM/Prisma
* Database: PostgreSQL
* Cache / Queue: Redis + BullMQ
* Storage: AWS S3 / MinIO
* Payments: PayPal, Binance Pay, Paystack/Flutterwave, Internal Wallet + Escrow
* Realtime: WebSockets / Socket.IO
* AI: GPT-4.1/5-mini for account evaluation
# Phase 3 — Database & API Design
* Tables: users, roles, stores, listings, orders, escrows, disputes, reviews, permissions_mods
* Role-based access control (RBAC)
* RESTful APIs (or GraphQL if preferred) for all operations
# Phase 4 — Core Marketplace Features
* Multi-vendor stores
* List/edit/delete social/digital accounts
* AI account evaluation
* Buy Now / Auction / Silent Bid (future)
* Escrow system with MOD approval
* Buyer comments & ratings
* Automated notifications (email/push)
# Phase 5 — Escrow & Moderator System
* Lock funds in escrow when order is placed
* Escrow MOD reviews disputes & can release/refund/hold
* Fraud flags & internal notes
* Escalation to admin if unresolved
# Phase 6 — Analytics & Reporting
* Seller Dashboard: total sales, successful/failed, disputes, buyer comments, average rating, success rate
* Escrow MOD Dashboard: total escrows, active/resolved disputes, fraud flags, top sellers with disputes
# Phase 7 — Security & Compliance
* KYC / Identity verification (optional)
* GDPR / Cookie policy compliance
* Recaptcha + security best practices
* Payment & wallet security
# Phase 8 — Testing & Deployment
* Unit tests & integration tests
* Security audit
* Docker deployment + CI/CD (GitHub Actions)
* Cloudflare CDN / WAF for protection
* Production monitoring & logs
# 3️⃣ Roles & Permissions
|Role|Permissions|
|:-|:-|
|Admin|Full control: users, stores, categories, payments, mods, system settings|
|Escrow MOD|Manage disputes & escrows only|
|Seller|CRUD listings, manage store, view analytics, respond to comments|
|Buyer|Browse, buy, report issues, leave reviews/comments|
|Guest|Browse only|
# 4️⃣ Database Structure (Key Tables)
* users → id, role_id, name, email, password, wallet_balance, created_at
* roles → id, name (admin, mod, seller, buyer)
* stores → id, seller_id, name, description, created_at
* listings → id, store_id, description, platform, price, details, status, ai_score, created_at
* orders → id, buyer_id, seller_id, listing_id, amount, status, created_at
* escrows → id, order_id, buyer_id, seller_id, mod_id, status, locked_amount, created_at
* disputes → id, order_id, mod_id, status, decision, notes, created_at
* reviews → id, order_id, buyer_id, seller_id, rating, comment, created_at
* permissions_mods → id, mod_id, permission (escrow.view, escrow.decide, dispute.manage, listing.flag, user.flag)
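The permissions_mods rows above imply a simple capability check at request time. A minimal sketch (the data layout and function names here are illustrative assumptions, not the project's actual code):

```python
# Hypothetical in-memory stand-in for the permissions_mods table above;
# a real implementation would query PostgreSQL instead.
MOD_PERMISSIONS = {
    101: {"escrow.view", "escrow.decide"},    # hypothetical escrow-only mod
    102: {"dispute.manage", "listing.flag"},  # hypothetical dispute mod
}

def mod_can(mod_id: int, permission: str) -> bool:
    """True if the moderator holds the named permission."""
    return permission in MOD_PERMISSIONS.get(mod_id, set())

print(mod_can(101, "escrow.decide"))   # True
print(mod_can(102, "escrow.decide"))   # False
```

The same check would guard every `/api/mod/...` endpoint before any escrow or dispute action runs.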
# 5️⃣ API Endpoints (Essential)
**Seller**
GET /api/seller/{seller_id}/stats
GET /api/seller/{seller_id}/listings
POST /api/seller/listing
PUT /api/seller/listing/{id}
DELETE /api/seller/listing/{id}
**Buyer**
GET /api/listings
POST /api/order
POST /api/order/{order_id}/report
POST /api/order/{order_id}/review
**Escrow Moderator**
GET /api/mod/escrow
GET /api/mod/escrow/{id}
POST /api/mod/escrow/{id}/release
POST /api/mod/escrow/{id}/refund
POST /api/mod/escrow/{id}/hold
GET /api/mod/disputes
POST /api/mod/dispute/{id}/resolve
POST /api/mod/dispute/{id}/escalate
**Admin**
GET /api/admin/users
GET /api/admin/stores
POST /api/admin/category
PUT /api/admin/settings
# 6️⃣ Analytics Formulas
* **Success Rate** = `(successful_sales / total_sales) * 100`
* **Dispute Rate** = `((active_disputes + closed_disputes) / total_sales) * 100`
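Both formulas are straightforward to implement; a minimal sketch (guarding the zero-sales case, which the formulas above leave undefined):

```python
def success_rate(successful_sales: int, total_sales: int) -> float:
    """Success Rate = (successful_sales / total_sales) * 100."""
    if total_sales == 0:
        return 0.0          # no sales yet: report 0 rather than divide by zero
    return successful_sales / total_sales * 100

def dispute_rate(active_disputes: int, closed_disputes: int, total_sales: int) -> float:
    """Dispute Rate = ((active_disputes + closed_disputes) / total_sales) * 100."""
    if total_sales == 0:
        return 0.0
    return (active_disputes + closed_disputes) / total_sales * 100

print(success_rate(45, 50))    # 90.0
print(dispute_rate(1, 4, 50))  # 10.0
```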
# 7️⃣ Workflow (Buyer → Seller → MOD → Admin)
1. Buyer buys → funds locked in escrow
2. Seller delivers credentials
3. Buyer confirms → release funds
4. If dispute → MOD reviews: release/refund/hold
5. Escalate to Admin if unresolved/fraud detected
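The five steps above amount to a small state machine for each escrow. A sketch, with state and event names that are assumptions for illustration rather than the project's actual vocabulary:

```python
# Legal escrow transitions implied by the workflow above.
ESCROW_TRANSITIONS = {
    ("locked", "buyer_confirms"): "released",
    ("locked", "buyer_disputes"): "disputed",
    ("disputed", "mod_releases"): "released",
    ("disputed", "mod_refunds"): "refunded",
    ("disputed", "mod_holds"): "held",
    ("disputed", "mod_escalates"): "escalated_to_admin",
}

def next_state(state: str, event: str) -> str:
    """Apply an event to an escrow state, rejecting illegal transitions."""
    try:
        return ESCROW_TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"illegal transition: {event} from {state}")

print(next_state("locked", "buyer_confirms"))   # released
print(next_state("locked", "buyer_disputes"))   # disputed
```

Keeping the transition table explicit makes it easy to audit that, for example, funds can never be released twice.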
# 8️⃣ Tech Stack Summary
|Layer|Tech|
|:-|:-|
|Frontend|Next.js + Tailwind CSS + shadcn/ui|
|Backend|NestJS + TypeORM / Prisma|
|Database|PostgreSQL|
|Cache / Queue|Redis + BullMQ|
|Storage|AWS S3 / MinIO|
|Payments|PayPal, Binance Pay, Paystack/Flutterwave, Internal Wallet + Escrow|
|Realtime|WebSockets / Socket.IO|
|AI|GPT-4.1/5-mini, OCR for screenshots|
|DevOps|Docker, Nginx, CI/CD, Cloudflare|
# 9️⃣ Optional / Future Enhancements
* Auction / Silent Bid system
* Multi-language
* Seller subscription (one-time/monthly)
* AI auto-validation of credentials/screenshots
* Export CSV/PDF of sales & disputes
* Fraud prediction AI
* Mobile app integration (React Native or Flutter)
✅ **Cheat Sheet Purpose:**
* Gives devs **full scope of the project**
* Defines **roles, DB, API, tech, workflows, dashboards**
* Can be used as a **roadmap + reference** during development
https://redd.it/1ph8qbw
@r_opensource
SQLShell – Desktop SQL tool for querying data files, and I use it daily at work. Looking for feedback.
I'm a data professional who lives in SQL. It's my primary tool for analysis, and I'd say I have a "black belt" in SQL at this point. I was frustrated by the friction of querying local data files (CSVs, Parquet, Excel) – either I'd spin up a database, write throwaway Python scripts, or use tools that felt clunky for quick analytical work.
So I built SQLShell – a desktop SQL interface for querying data files directly. No database server needed. You load files, write SQL, get results. That's it.
# What makes it useful (at least for me):
DuckDB under the hood – fast analytical engine. I regularly query million-row files without waiting.
Load anything – CSV, Parquet, Excel, JSON, Delta Lake, SQLite. Drag-and-drop or file browser.
F5/F9 execution – F5 runs everything, F9 runs only the current statement. Perfect for iterative exploration (if you use SSMS, SQL Developer or similar tools, this feels familiar).
Ctrl+F search – instant filtering across all result columns
Context-aware autocomplete – knows your tables and columns
Right-click column profiling – quick stats, distributions, null counts
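For readers unfamiliar with the workflow: DuckDB (which SQLShell runs on) can query a file in place, e.g. `SELECT * FROM 'data.csv'`. As a rough, stdlib-only illustration of the load-a-file-then-query idea (this is not SQLShell's code, and it uses sqlite3 instead of DuckDB so it runs anywhere):

```python
import csv
import io
import sqlite3

# Stand-in for a CSV file on disk.
CSV_DATA = "name,amount\nalice,10\nbob,25\nalice,5\n"

# Load the rows into an in-memory database...
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (name TEXT, amount INTEGER)")
rows = csv.DictReader(io.StringIO(CSV_DATA))
conn.executemany("INSERT INTO sales VALUES (:name, :amount)", rows)

# ...then the analytical part is plain SQL.
for name, total in conn.execute(
    "SELECT name, SUM(amount) AS total FROM sales GROUP BY name ORDER BY name"
):
    print(name, total)   # alice 15 / bob 25
```

DuckDB skips the load step entirely, which is what makes million-row files feel instant.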
# What I'm looking for:
Feedback from other SQL-heavy users
Missing features that would make this useful to you
UX issues I might be blind to
General thoughts on the approach
Links:
Landing page: [https://oyvinrog.github.io/SQLShell/](https://oyvinrog.github.io/SQLShell/)
GitHub: https://github.com/oyvinrog/SQLShell
PyPI: pip install sqlshell && sqls
Pre-built binaries for Windows (.exe) and Linux (.deb) on the releases page
https://redd.it/1ph9stm
@r_opensource
Open source client alternative for Spotify ?
Hey everyone, I'm looking for an open-source client alternative for Spotify on mobile. Basically an app that lets me log in to my Spotify account (because I have lots of playlists) and lets me play songs offline.
On PC I have Spicetify, which has no ads, but I'm struggling to find a mobile alternative.
If you can recommend some clients, that would be perfect. Thank you in advance.
https://redd.it/1ph8jsf
@r_opensource
I use an iPhone but my daily driver is Linux. Apple's Universal Clipboard won't help me, so I built my own.
Copy on iPhone → Paste on Linux. That's it.
I got tired of emailing myself screenshots and texting links to my own number or having to manually use localsend for everything. Apple's Universal Clipboard only works with Macs, so I made Velocity Bridge.
How it works:
- Runs a tiny local server on your Linux box
- iOS Shortcuts send clipboard data over your home network
- Text/images land directly in your Linux clipboard
- No cloud, no account, no Apple tax
Pro tip: Set up Back Tap (Settings → Accessibility → Touch → Back Tap) to trigger the shortcut. Double-tap the back of your phone = instant paste on Linux. It's stupidly satisfying.
Install:
- Fedora: `sudo dnf copr enable trex099/velocity-bridge && sudo dnf install velocity-bridge`
- Arch: `yay -S velocity-bridge`
- Any distro: one-liner curl script or AppImage
Comes with a GUI for easy setup, or run it headless as a systemd service.
GitHub: https://github.com/Trex099/Velocity-Bridge
Built this for myself, figured others might want it too. Feedback welcome!
https://redd.it/1phaxew
@r_opensource
SerpApi MCP Server for Google and other search engine results
https://github.com/serpapi/serpapi-mcp
https://redd.it/1ph7cd0
@r_opensource
Not good at understanding licences - Can I include flac.exe along with my compiled freeware?
Hello,
I have made a free Windows desktop utility that can use flac.exe (which I think is open source; it may someday use a library, but for now it's flac.exe). I think it's approximately a decade old now.
I do not plan to make my own project open source. On one hand I admire open source; on the other hand, I'm not comfortable sharing my source code with the public. The program will remain free and won't collect any user data. It does accept donations, but I don't receive any for this particular project. I'm not even sure it has actual users other than myself, and I don't really care.
I have various partial understandings of open-source licences:
I think that sometimes you cannot include an open-source tool along with your project if your project itself is not open source (I think FLAC falls into this category).
I think that sometimes you can include an open-source tool if the user is free to replace it with another version of that tool, recompiled from the tool's original source code. (That would work for my project... but I think I read that about the C++ Qt license, not FLAC.)
flac.exe is currently not included with the project files; it's up to the user to point to their own copy of flac.exe.
Can someone who understands these licences better explain whether I could legally include flac.exe along with a freeware program?
(Also, I do not want to share the project publicly.)
https://redd.it/1phdkmp
@r_opensource
OpenQuestCapture - an open source, MIT licensed Meta Quest 3D Reconstruction pipeline
Hey all! I just released OpenQuestCapture, an MIT-licensed Quest 3 app and pipeline for capturing spatial data from Meta Quest sensors for use in 3D reconstruction.
Why:
Meta recently launched Horizon Hyperscape, which produces impressive 3D reconstructions from Quest 3 sensor data. But all your data stays locked in their ecosystem: you don't control it, can't export it, and can't process it yourself. In fact, just two weeks ago they significantly reduced the quality of people's reconstructions without any notice.
I think that's the wrong approach. Spatial data should belong to the user.
What it does:
OpenQuestCapture captures Quest 3 depth maps, RGB images, and pose data to generate point clouds. While you're capturing, it shows you a live 3D point cloud visualization so you can see what areas (and from which angles) you've covered.
Then, the repo also has a helper script that converts the raw data to COLMAP format for Gaussian Splatting or whatever 3D reconstruction pipeline you prefer. You can run everything locally.
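For context, turning a depth map plus camera intrinsics into camera-space points is standard pinhole back-projection. A generic sketch of that math for illustration only; the repo's actual pipeline lives in the linked code:

```python
def backproject(depth, fx, fy, cx, cy):
    """depth: 2D list of metric depth values; (fx, fy, cx, cy) are pinhole
    intrinsics. Returns a list of (x, y, z) camera-space points."""
    points = []
    for v, row in enumerate(depth):        # v: pixel row
        for u, z in enumerate(row):        # u: pixel column
            if z <= 0:                     # skip holes / invalid depth
                continue
            x = (u - cx) * z / fx
            y = (v - cy) * z / fy
            points.append((x, y, z))
    return points

# A pixel at the principal point lands on the optical axis:
print(backproject([[2.0]], fx=500, fy=500, cx=0, cy=0))   # [(0.0, 0.0, 2.0)]
```

Each frame's points would then be transformed by that frame's pose to merge everything into one cloud.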
Here's the GitHub repo: [https://github.com/samuelm2/OpenQuestCapture](https://github.com/samuelm2/OpenQuestCapture)
It's still pretty new and barebones, and the raw capture files are quite large. The quality isn't quite as good as Hyperscape's yet, but I'm hoping this might push them to be more open with Hyperscape data. At minimum, it's something the community can build on and improve.
There's still a lot to improve upon for the app. Here are some of the things that are top of mind for me:
* An intermediate step of the reconstruction post-process is a high-quality, Matterport-like triangulated colored 3D mesh. That could itself be a valuable artifact for users, so maybe there could be more pipeline development around extracting and exporting it.
* Also, the visualization UX could be improved. I haven't found a UX that does an amazing job at showing you exactly what (and from what angles) you've captured. So if anyone has any ideas or wants to contribute, please feel free to submit a PR!
* The raw Quest sensor data files are massive right now, so I'm considering more advanced Quest-side compression of the raw data. I'm probably going to add QOI compression to the raw RGB data at capture time, which should losslessly compress it by around 50%.
If anyone wants to take on one of these (or any other cool idea!), I'd love to collaborate. And if you decide to try it out, let me know if you have any questions or run into issues, or file a GitHub issue. Always happy to hear feedback!
https://redd.it/1phj040
@r_opensource
Hey all! I just released OpenQuestCapture, an MIT licensed Quest 3 app and pipeline for capturing spatial data from Meta Quest sensors for use for 3D reconstruction.
Why:
Meta recently launched Horizon Hyperscape, which produces impressive 3D reconstructions from Quest 3 sensor data. But all your data stays locked in their ecosystem. You don't control it, can't export it, and can't process it yourself. In fact, just 2 weeks ago they significantly reduced the quality of peoples' reconstructions without any notice.
I think that's the wrong approach. Spatial data should belong to the user.
What it does:
OpenQuestCapture captures Quest 3 depth maps, RGB images, and pose data to generate point clouds. While you're capturing, it shows you a live 3D point cloud visualization so you can see what areas (and from which angles) you've covered.
Then, the repo also has a helper noscript that converts that raw data into to COLMAP format for Gaussian Splatting or whatever 3D reconstruction pipeline you prefer. You can run everything locally.
Here's the GitHub repo: [https://github.com/samuelm2/OpenQuestCapture](https://github.com/samuelm2/OpenQuestCapture)
It's still pretty new and barebones, and the raw capture files are quite large. The quality isn't quite as good as HyperScape yet, but I'm hoping this might push them to be more open with Hyperscape data. At minimum, it's something the community can build on and improve.
There's still a lot to improve upon for the app. Here are some of the things that are top of mind for me:
* An intermediary step of the reconstruction post-process is a high quality, Matterport-like triangulated colored 3D mesh. That itself could be very valuable as an artifact for users. So maybe there could be more pipeline development around extracting and exporting that.
* Also, the visualization UX could be improved. I haven't found a UX that does an amazing job at showing you exactly what (and from what angles) you've captured. So if anyone has any ideas or wants to contribute, please feel free to submit a PR!
* The raw quest sensor data files are massive right now. So, I'm considering doing some more advanced Quest-side compression of the raw data. I'm probably going to add QOI compression to the raw RGB data at capture time, which should be able to losslessly compress the raw data by 50% or so.
If anyone wants to take on one of these (or any other cool idea!), would love to collaborate. And, if you decide to try it out, let me know if you have any questions or run into issues. Or file a Github issue. Always happy to hear feedback!
https://redd.it/1phj040
@r_opensource
Need honest opinion
Hi there! I’d love your honest opinion, roast me if you want, but I really want to know what you think about my open source framework:
https://github.com/entropy-flux/TorchSystem
And the documentation:
https://entropy-flux.github.io/TorchSystem/
The idea is to create event-driven AI training systems and to build big, complex pipelines in a modular style, using proper programming principles.
I’m looking for feedback to help improve it, make the documentation easier to understand, and make the framework more useful for common use cases. I’d love to hear what you really think: what you like, and more importantly, what you don’t.
https://redd.it/1phiiru
@r_opensource
OpsOrch – Unified API for Incidents, Logs, Metrics, and Tickets
https://www.opsorch.com
https://redd.it/1phom73
@r_opensource
OpsOrch stitches together telemetry, incident response, and automation so teams can see, decide, and act with confidence.
Beliarg is a dark, gamified productivity and finance management ecosystem.
This is an open-source project: free to use, modify, and distribute. It has been reforged into a full-stack web application (PWA). It combines a React 19 frontend (built with Vite) with a Node.js and PostgreSQL backend to ensure your data survives even the apocalypse. It features a unique "Hellish" aesthetic, turning daily tasks into "Chains", expenses into "Sacrifices", and habits into "Rituals". GitHub: https://github.com/D371L/beliarg. Feel free to leave any feedback!
https://redd.it/1phplp9
@r_opensource
RANDEVU - Universal Probabilistic Daily Reminder Coordination System for Anything
https://github.com/TypicalHog/randevu
https://redd.it/1phrmw4
@r_opensource
merox-erudite – MIT-licensed Astro blogging theme with newsletter, comments, analytics & AdSense built-in
I just published an open-source Astro blogging theme that’s now part of the official Astro themes directory:
https://astro.build/themes/details/merox-erudite/
It’s a fork of the excellent astro-erudite, but with a lot of the “real-world” stuff already implemented and ready to use:
* Brevo/Sendinblue newsletter integration
* Lazy-loaded Disqus comments
* Google Analytics + Umami support
* Structured data (FAQPage, HowTo, etc.)
* Google AdSense ready
* Enhanced homepage (experience timeline + skills showcase)
100% free and open-source under the MIT license.
GitHub: https://github.com/meroxdotdev/merox-erudite
Live example (my own blog): https://merox-erudite.vercel.app/ and https://merox.dev
https://redd.it/1pi1sib
@r_opensource
Built a tool to catch package.json/package-lock.json inconsistencies before npm ci fails
Hey everyone! I just published a new npm package that I've been working on, and I'd love to get some feedback from the community.
What it does:
The tool analyzes your package.json and package-lock.json files to detect inconsistencies before you run `npm ci`. If you've ever had `npm ci` fail because of mismatches between these files, this is designed to catch those issues early and explain exactly what's wrong.
Current features:
* Compares package.json and package-lock.json for inconsistencies
* Provides detailed warnings about what doesn't match
* Checks for Git installation in your project
* Verifies npm version compatibility with package-lock.json's version
Planned features:
* Automatic fixes for detected inconsistencies (suggestions/PRs welcome!)
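The core consistency check could be sketched roughly like this: a naive presence check against the lockfileVersion >= 2 "packages" layout (a hypothetical illustration, not the package's actual implementation; a real check would also need semver range matching and transitive dependency validation):

```python
import json

def find_lock_mismatches(pkg: dict, lock: dict) -> list[str]:
    """Report dependencies declared in package.json that are missing from
    package-lock.json's top-level packages map (lockfileVersion >= 2 layout).
    Deliberately naive: presence only, no semver range matching."""
    declared = {**pkg.get("dependencies", {}), **pkg.get("devDependencies", {})}
    locked = lock.get("packages", {})
    problems = []
    for name in declared:
        # npm keys locked packages by their install path under node_modules/.
        if f"node_modules/{name}" not in locked:
            problems.append(f"{name} is declared but absent from the lock file")
    return problems

pkg = json.loads('{"dependencies": {"express": "^4.18.0", "left-pad": "^1.3.0"}}')
lock = json.loads(
    '{"lockfileVersion": 3,'
    ' "packages": {"node_modules/express": {"version": "4.18.2"}}}'
)
for problem in find_lock_mismatches(pkg, lock):
    print(problem)  # flags left-pad as missing from the lock file
```

This is exactly the class of mismatch that makes `npm ci` bail out, so surfacing it with a readable message before CI runs is the whole value proposition.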
Why I built this:
`npm ci` is great for reproducible builds, but the error messages when it fails aren't always clear about *why* your lock file doesn't match your package.json. I wanted something that could be run as a pre-CI check or git hook to catch these issues locally.
It can also be added to your CI/CD workflow to block deployment when an inconsistency is detected.
Installation:
npm install npm-ci-guard
GitHub: [https://github.com/yaronpen/npm-ci-guard](https://github.com/yaronpen/npm-ci-guard)
I'm still early in development and would really appreciate any feedback, suggestions, or contributions. What features would make this more useful for your workflow?
https://redd.it/1pi1qvo
@r_opensource
Wrapper tool for Google Drive seamless integration into Linux
rclone4gdrive is an open-source tool for seamless, automated, and transparent two-way Google Drive backup on Linux.
rclone4gdrive eliminates the hassle of configuring and maintaining routine cloud syncs by providing true "set-and-forget" synchronization directly from your Linux filesystem to your personal Google Drive.
GitHub: https://github.com/thisisnotgcsar/rclone4gdrive
This is a project I built in my free time, and it’s one of my first contributions to the open-source community. If you notice anything that can be improved or corrected, feel free to let me know or open a pull request. Any help you give to improve this tool also helps me grow as a developer, so your contributions are truly appreciated!
https://redd.it/1pi2dva
@r_opensource
I built a distributed key-value store in Rust (Raft + 2PC + custom storage engine)
https://github.com/whispem/minikv
https://redd.it/1pi3rtr
@r_opensource
Built a container management + logs viewer that finally feels right to me
Hi everyone! I've been doing a lot of self-hosting and running things off a VPS, and the most tedious part was constantly having to SSH into the server to debug issues, read logs, or restart containers.
So I built LogDeck. It's fast (handles 10k+ log lines without breaking a sweat) and supports multi-host management from one UI, with built-in auth, streaming, log downloads, and more.
Would love to have your feedback.
github.com/AmoabaKelvin/logdeck
logdeck.dev
https://redd.it/1pi59h1
@r_opensource
DataKit: your all in browser data studio is open source now
Hello all! I'm super happy to announce that DataKit (https://datakit.page/) is open source as of today!
https://github.com/Datakitpage/Datakit
DataKit is a browser-based data analysis platform that processes multi-gigabyte files (Parquet, CSV, JSON, etc) locally (with the help of duckdb-wasm). All processing happens in the browser - no data is sent to external servers. You can also connect to remote sources like Motherduck and Postgres with a datakit server in the middle.
I've been building this over the past couple of months on the side and finally decided it's time to get others involved. I would love to hear your thoughts, see your stars, and chat about it!
https://redd.it/1pi4zul
@r_opensource
Recommendation for privacy friendly open source software to create a (stolen) bike register?
Bike registers such as bikeindex (US) bikeregister (UK), bicycode (FR) or mybike (BE) prevent bike theft, increase chances of recovering stolen bikes and help to identify thieves. But they are not interoperable and custom solutions.
I wonder which privacy-friendly open source solution could be used to create a similar 'open' register, usable by every country (or entrepreneur, or bike-theft insurer) that wants it. A user would upload a photo and description (frame number, brand and model, colour, etc., presumably in structured format), the user could declare a bike 'stolen', and everybody (or just authorised users) could search/filter the list of stolen bikes by brand or frame number (fuzzy search) and then have an anonymous way to send a message to the owner of a stolen bike.
The solution should have a decent interface, not just a spreadsheet, and ideally not be easy to scrape/spam. And of course top protection of the private data.
Any suggestions as to what would work best, and how much work would be needed to adapt it to the description above?
Thanks a lot in advance for your help!
https://redd.it/1pi7ycs
@r_opensource
My Android opensource project: DayExam
A powerful Android application designed to help you efficiently parse your school exam papers and store them on your phone, so you can study anywhere. It is simple but useful.
Open source on GitHub: https://github.com/newerZGQ/day_exam
Or you can download it from F-Droid: https://f-droid.org/packages/com.gorden.dayexam/
https://redd.it/1pi3n2g
@r_opensource