Open Source SigNoz MCP Server
We built a Go MCP server for SigNoz:
[https://github.com/CalmoAI/mcp-server-signoz](https://github.com/CalmoAI/mcp-server-signoz)
* `signoz_test_connection`: Verify connectivity to your SigNoz instance and configuration
* `signoz_fetch_dashboards`: List all available dashboards from SigNoz
* `signoz_fetch_dashboard_details`: Retrieve detailed information about a specific dashboard by its ID
* `signoz_fetch_dashboard_data`: Fetch all panel data for a given dashboard by name and time range
* `signoz_fetch_apm_metrics`: Retrieve standard APM metrics (request rate, error rate, latency, Apdex) for a given service and time range
* `signoz_fetch_services`: Fetch all instrumented services from SigNoz with optional time range filtering
* `signoz_execute_clickhouse_query`: Execute custom ClickHouse SQL queries via the SigNoz API with time range support
* `signoz_execute_builder_query`: Execute SigNoz builder queries for custom metrics and aggregations with time range support
* `signoz_fetch_traces_or_logs`: Fetch traces or logs from SigNoz using ClickHouse SQL
https://redd.it/1mozqrx
@r_opensource
Funding Open Source like public infrastructure
https://dri.es/funding-open-source-like-public-infrastructure
https://redd.it/1mp1jrd
@r_opensource
ID Verifier - helping us avoid another Tea app fiasco
https://github.com/universal-verify/id-verifier
https://redd.it/1mp39u1
@r_opensource
Markdown-ui: Render UI Inside Markdown At Runtime
Markdown is widely used across documentation, blogs, and AI output for its simplicity and content-first focus, but it does not let users interact with the content.
Existing attempts like MDX, web components, and embedding HTML/JS directly in markdown are compile-time only, non-portable, or outright security risks.
This is why I created Markdown UI, an open MIT-licensed standard for easily embedding UI in markdown. The UI widgets are just simple JSON objects inside the markdown that are parsed into web-component XML tags. Any renderer (React/Svelte/Vue/Swift, etc.) can then render the component into actual UI on its platform and emit standardised {id, value} events to the application for capturing and processing.
The standard is designed to be minimal, extensible, and secure.
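To make that flow concrete, here is a small Python sketch of the pipeline: a JSON widget spec is pulled out of the markdown, translated into a web-component tag, and user input comes back as a standardised {id, value} event. The widget grammar and tag names below are made up for illustration; the real ones are defined by the Markdown UI spec.

```python
import json

# A hypothetical widget spec as it might appear inside the markdown.
widget_json = '{"type": "select", "id": "color", "options": ["red", "green", "blue"]}'

def to_web_component(spec: dict) -> str:
    """Render the JSON spec as a web-component tag for the host renderer to mount."""
    opts = " ".join(f'data-option="{o}"' for o in spec["options"])
    return f'<ui-{spec["type"]} id="{spec["id"]}" {opts}></ui-{spec["type"]}>'

def emit_event(spec: dict, value) -> dict:
    """Standardised {id, value} payload handed back to the application."""
    return {"id": spec["id"], "value": value}

spec = json.loads(widget_json)
tag = to_web_component(spec)       # what the renderer mounts
event = emit_event(spec, "green")  # what the app receives on change
```

Because the widget is plain JSON rather than executable code, the host can validate it before rendering, which is where the security story comes from.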
Here is the live demo: markdown-ui.com
Here is the GitHub: https://github.com/BlueprintDesignLab/markdown-ui/
https://redd.it/1mp5f5c
@r_opensource
Apache 2.0 900+ Neural TTS Voices 100% Local In-Browser with No Downloads (Kitten TTS, Piper, Kokoro)
Hey all! Last week, I posted a Kitten TTS web demo to r/localllama that many people liked, so I decided to take it a step further and add Piper and Kokoro to the project! The project lets you load Kitten TTS, Piper Voices, or Kokoro completely in the browser, 100% local. It also has a quick preview feature in the voice selection dropdowns.
# **Online Demo** (GitHub Pages)
Repo (Apache 2.0): https://github.com/clowerweb/tts-studio
The Kitten TTS standalone was also updated to include a bunch of your feedback including bug fixes and requested features! There's also a Piper standalone available.
Lemme know what you think and if you've got any feedback or suggestions!
If this project helps you save a few GPU hours, please consider grabbing me a coffee! ☕
https://redd.it/1mp3sk4
@r_opensource
A Technical Deep-Dive for the Security-Conscious: Persistent Memory CLI Tool, Free to Use
Since transparency and verifiability are core to the project, here’s a deeper dive into the technical implementation.
The entire security posture is built on a zero-trust, local-first foundation. The tool assumes it's operating in a potentially untrusted environment and gives you the power to verify its behavior and lock down its capabilities.
1. Verifiable Zero-Egress
We claim the tool is air-gapped, but you shouldn't have to take our word for it.
How it works: At startup, the CLI can monkey-patch Node.js's http and https modules. Any outbound request is intercepted. If the destination isn't on an explicit allowlist (e.g., localhost for a local vector server), the request is blocked, and the process exits with a non-zero status code.
How to verify: Run agm prove-offline. This command attempts to make a DNS lookup to a public resolver. It will fail and print a confirmation that the network guard is active. This allows you to confirm at any time that no data is leaving your machine.
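The guard pattern itself is simple to sketch. The real tool patches Node.js's http/https modules; here is a hypothetical Python analogue of the same idea, intercepting outbound connections at the socket layer and blocking anything off the allowlist:

```python
import socket

ALLOWLIST = {"localhost", "127.0.0.1"}  # e.g. a local vector server

class EgressBlocked(RuntimeError):
    """Raised when a connection to a non-allowlisted host is attempted."""

_real_create_connection = socket.create_connection

def guarded_create_connection(address, *args, **kwargs):
    host = address[0]
    if host not in ALLOWLIST:
        # Fail loudly instead of letting data leave the machine.
        raise EgressBlocked(f"outbound connection to {host!r} blocked")
    return _real_create_connection(address, *args, **kwargs)

# Install the guard; from here on, every socket-level connect is checked.
socket.create_connection = guarded_create_connection
```

A `prove-offline`-style check then just attempts a connection to a public host and confirms that it is refused.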
2. Supply Chain Integrity for Shared Context: The .agmctx Bundle
When you share context with a colleague, you need to be sure it hasn't been tampered with. The .agmctx bundle format is designed for this.
When you run agm export-context --sign --zip:
Checksums First: A checksums.json file is created, containing the SHA-256 hash of every file in the export (the manifest, the vector map, etc.).
Cryptographic Signature: An Ed25519 key pair (generated and stored locally in keys) is used to sign the SHA-256 hash of the concatenated checksums. This signature is stored in signature.bin.
Verification on Import: When agm import-context runs, it performs the checks in reverse order:
It first verifies that the checksum of every file matches the value in checksums.json. If any file has been altered, it fails immediately with exit code 4 (Checksum Mismatch). This prevents wasting CPU cycles on a tampered package.
If the checksums match, it then verifies the signature against the public key. If the signature is invalid, it fails with exit code 3 (Invalid Signature).
This layered approach ensures both integrity and authenticity.
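The checksum-then-signature ordering can be sketched in a few lines. This is an illustrative Python version of the verification flow with the exit codes described above; the Ed25519 check is stubbed as a boolean, since the real implementation signs the concatenated digests:

```python
import hashlib

def build_checksums(files: dict) -> dict:
    """checksums.json equivalent: file name -> SHA-256 hex digest."""
    return {name: hashlib.sha256(data).hexdigest() for name, data in files.items()}

def verify_bundle(files: dict, checksums: dict, signature_valid: bool) -> int:
    """Mirror the exit codes above: 4 = checksum mismatch, 3 = bad signature, 0 = ok."""
    # Cheap integrity check first, so a tampered package fails fast.
    for name, data in files.items():
        if hashlib.sha256(data).hexdigest() != checksums.get(name):
            return 4  # Checksum Mismatch
    # Only then pay for the Ed25519 verification (stubbed here).
    if not signature_valid:
        return 3  # Invalid Signature
    return 0
```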
3. Policy-Driven Operation
The tool is governed by a policy.json file in your project's .antigoldfishmode directory. This file is your control panel for the tool's behavior.
Command Whitelisting: You can restrict which agm commands are allowed to run. For example, you could disable export-context entirely in a highly sensitive project.
File Path Globs: Restrict the tool to only read from specific directories (e.g., src and docs, but not dist or node_modules).
Enforced Signing Policies:
"requireSignedContext": true: The tool will refuse to import any .agmctx bundle that isn't signed with a valid signature. This is a critical security control for teams.
"forceSignedExports": true: This makes signing non-optional. Even if a user tries to export with --no-sign, the policy will override it and sign the export.
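As a rough illustration, a policy file and its enforcement might look like the following. Only `requireSignedContext` and `forceSignedExports` are keys named above; `allowedCommands` and `allowedPaths` are hypothetical field names standing in for the command whitelist and path globs:

```python
import fnmatch
import json

policy = json.loads("""
{
  "allowedCommands": ["index-code", "import-context"],
  "allowedPaths": ["src/*", "docs/*"],
  "requireSignedContext": true,
  "forceSignedExports": true
}
""")

def command_allowed(cmd: str) -> bool:
    # Command whitelisting: anything not listed is refused.
    return cmd in policy["allowedCommands"]

def path_allowed(path: str) -> bool:
    # File path globs: the tool may only read matching paths.
    return any(fnmatch.fnmatch(path, g) for g in policy["allowedPaths"])
```

With this policy, `export-context` is refused outright and reads are confined to `src` and `docs`.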
4. Transparent Auditing via Receipts and Journal
You should never have to wonder what the tool did.
Receipts: Every significant command (export, import, index-code, etc.) generates a JSON receipt in receipts. This receipt contains a cryptographic hash of the inputs and outputs, timing data, and a summary of the operation.
Journal: A journal.jsonl file provides a chronological, append-only log of every command executed and its corresponding receipt ID. This gives you a complete, verifiable audit trail of all actions performed by the tool.
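An append-only JSON Lines journal is straightforward to picture. A minimal sketch of writing one entry per command (field names are illustrative):

```python
import json
import time

def append_journal(path: str, command: str, receipt_id: str) -> None:
    """Append one line per executed command; history is never rewritten."""
    entry = {"ts": time.time(), "command": command, "receipt": receipt_id}
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
```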
This combination of features is designed to provide a tool that is not only powerful but also transparent, verifiable, and secure enough for the most sensitive development environments.
I would love your feedback.
You can check out the source code on GitHub: https://github.com/jahboukie/antigoldfish
If you find it useful, please consider sponsoring the project:
Building an open source P2P password manager: Looking for collaborators
Hello all who read,
I am looking for collaborators to build a truly P2P password manager from scratch that is robust, extensible, and wholly secure.
Most current password managers store data on centralized cloud servers, creating attractive targets for attackers. A P2P approach puts users in complete control of their data, eliminating the honeypot problem whilst shifting security responsibility to individual users. Such an approach, I believe, would lead to a higher ceiling of security, which may appeal to many users, particularly those who value privacy and examine app architecture when judging security.
Right now, Rust with the libp2p library is the stack I am thinking of, primarily for performance and cross-platform support, but I am open to discussion on the stack.
The key goals of this project include:
- True P2P sync (no servers)
- Strong conflict resolution
- Cross-platform (desktop/mobile)
- Usable UX and a CLI option for power users
I am looking for developers interested in P2P networking, cryptography, systems programming, or just people passionate about privacy tech.
I have a decent amount of experience in Rust, specifically in lower-level graphics and networking, and some experience with libp2p. I also have experience with JS, TS, Go, Python, C, C++, and other languages, but most of my networking experience lies in Rust and Go. Here is my GitHub if anyone wants to take a look: https://github.com/gituser12981u2.
Here is the GitHub link to the project:
https://github.com/gituser12981u2/p2p_password_manager
There is not much code yet, since I want us collaborators to make architectural decisions together. I have a CI pipeline set up and plan to write ADRs for any decisions.
As I said, this would be a collaborative effort--let us figure out the architecture together.
Anyone interested in exploring this?
https://redd.it/1mpb2et
@r_opensource
I rebuilt the Eisenhower Matrix for modern use, here’s why
A few months ago, I was looking for a simple, focused Eisenhower Matrix app.
I wanted something clean, distraction-free, and fast, but everything I found was either outdated, bloated with features I didn’t need, or just… ugly.
So, I decided to build my own.
This week, I released version 2.0, shaped entirely by feedback from the small group of early users. The interface is fully redesigned with a calmer, more focused look, and I finally added due times and smart notifications so tasks don’t slip through the cracks.
What I’m most proud of is that it’s still minimalist. No endless menus, no complex setup. Just four quadrants to sort your tasks, and a few thoughtful touches to make it more human.
If you’re curious, the project’s open-source and you can check it out here:
🔗 **github.com/Appaxaap/Focus**
I’m curious, for those who’ve tried using an Eisenhower Matrix (or a similar system), what’s the one feature you wish more productivity apps had?
https://redd.it/1mpc25k
@r_opensource
Amical: Open Source AI Dictation App. Type 3x faster, no keyboard needed.
https://github.com/amicalhq/amical
https://redd.it/1mpclv6
@r_opensource
Bulk email verifier
Found these two; I was curious whether you've tested them, or if you have any other alternatives:
https://github.com/truemail-rb/truemail-rack
https://github.com/reacherhq/check-if-email-exists
https://redd.it/1mpg3xw
@r_opensource
Open-source ATS-friendly resume builder focused on privacy
I’ve built an open-source CV builder designed to create resumes that are ATS-compatible and privacy friendly. All processing happens locally in the browser, with no servers or external tracking involved.
The application supports six professional templates, real-time preview, instant PDF generation, and multiple languages (Portuguese, English, Spanish). Data is stored only in the user’s browser and can be exported or imported via XML.
Built with Next.js 15, TypeScript and Tailwind CSS, it’s fully responsive and works on desktop and mobile. Licensed under MIT.
GitHub: https://github.com/goncalojbsousa/EasyPeasyCV
Live demo: https://www.easypeasycv.com
Feedback and contributions are welcome.
https://redd.it/1mpke1a
@r_opensource
Right to Repair: An Open Source Approach to Hardware Freedom
https://brainnoises.com/blog/the-ethical-battle-for-the-right-to-repair/
https://redd.it/1mplkmp
@r_opensource
See the faces of open source creators
https://www.facesofopensource.com
https://redd.it/1mpox77
@r_opensource
Open Source, Self Hosted Google Keep Notes alternative
* One-click Docker install (web app + API in seconds)
* Import Google Keep notes from Google Takeout .json files
* Real-time collaboration for checklists — share and tick items together live
* Markdown editor & viewer (.md) with built-in auth (no third-party APIs)
Link: https://github.com/nikunjsingh93/react-glass-keep
https://redd.it/1mpqh65
@r_opensource
Open source book on user experience
Hello open-source community. I've noticed that, unfortunately, user experience gets little attention in many open-source projects, even large ones. In my opinion, this is mainly because access to user experience knowledge isn't low-threshold enough: books and texts on the subject are simply too expensive, and there's still so much to learn. That's why I've decided to start writing a book about user experience and make it available as open source.
https://code.metalisp.dev/marcuskammer/user-centered-development-book
https://redd.it/1mpu9oh
@r_opensource
I snagged $25k in AWS credits and want to contribute to some open source robotics repo/work, ideas?
I somehow (don't ask me how) was able to get my hands on $25k in AWS credits. I want to make a nice contribution to open source robotics: something the open source community will value, and that I can put on my resume/GitHub so hiring companies can see it. Any ideas on what I can do? I'm a robotics engineer with decent experience from a top-tier university in the USA. Any ideas appreciated. I want to train or build something that is useful for someone!
https://redd.it/1mpumaw
@r_opensource
KDE Gear 25.08 released
https://kde.org/announcements/gear/25.08.0/
https://redd.it/1mpw8n5
@r_opensource
Wrote a guide to self-host a XMPP server and connect FLOSS clients that support OMEMO
https://github.com/usg-ishimura/chat-control-prepper-guide
https://redd.it/1mpzf35
@r_opensource
Monedsa - Income & Expense Tracker
Monedsa is a simple and user-friendly mobile app designed to help you track your income and expenses, making personal finance management easy and secure. Available on Google Play, Monedsa is completely open-source, allowing anyone to explore, modify, and contribute to the project.
Your privacy is our top priority. Monedsa does not share your data with any third-party services or organizations. All your financial information stays securely on your device, ensuring complete control over your personal data.
Project website: https://vu4ll.com.tr/projects/monedsa
Github: https://github.com/Vu4ll/monedsa
Play Store: https://play.google.com/store/apps/details?id=com.vu4ll.monedsa
https://redd.it/1mpyywr
@r_opensource
MatrixNet: A Blueprint for a New Internet Architecture
Hi everyone,
Fair warning, this is a long post, so I've added a TL;DR at the very end for those short on time.
I know the concept has its problems, but I believe with the right minds, we can find the right solutions.
I'd like to share a conceptual framework for a different kind of internet, or at least a different kind of network, designed from the ground up to be decentralized, censorship-resistant, and hyper-compressed. This isn't a finished product or a formal whitepaper. It's a thought experiment I'm calling MatrixNet for now, and I'm sharing it to spark discussion, gather feedback, and see if it resonates.
The current web is fragile. Data disappears when servers go down, links rot, and valuable information is lost forever when a torrent runs out of seeders. What if we could build a system where data becomes a permanent, reconstructable resource, independent of its original host? Imagine if it were theoretically possible to hold a key to the entire internet in just 1 TB of data, allowing you to browse and download vast amounts of information completely offline.
## The Core Idea: Data as a Recipe
Imagine if, instead of shipping a fully built Lego castle, we only shipped a tiny instruction booklet. The recipient could build the castle perfectly because they, like everyone else, already owned the same universal set of Lego bricks.
MatrixNet operates on this principle. Data of any kind (websites, files, videos, applications) is not stored or transferred directly. Instead, it is represented as a "Recipe": a small set of instructions that explains how to reconstruct the original data from a shared, universal library of "building blocks."
Let's break down how this would work, step by step.
## Phase 1: Forging the Universal Matrix
The foundation of the entire system is a massive, static, and globally shared dataset called the Matrix.
### Gathering Public Data
We start by collecting a vast and diverse corpus of public, unencrypted data. Think of it as a digital Library of Alexandria:
- The entirety of Wikipedia.
- Open‑source code repositories (like all of GitHub).
- Public domain literature from Project Gutenberg.
- Common web assets (CSS frameworks, JavaScript libraries, fonts, icons).
- Open‑access scientific papers and datasets.
- Common data assets (videos, images).
### Creating the Building Blocks
This public dataset is then processed. The goal isn't to create a colossal file, but the smallest, most efficient Matrix possible.
The dataset is:
1. Broken down into small, fixed‑size chunks (e.g., 4 KB each).
2. Deduplicated, with each unique chunk indexed by its hash for fast retrieval.
The result is the Matrix: a universal, deduplicated collection of unique data “atoms” that forms the shared vocabulary for the entire network. Every peer would eventually hold a copy of this Matrix, or at least the parts they need. It is designed to be static; it is built once and distributed, not constantly updated.
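The two steps above (fixed-size chunking, then hash-based deduplication) can be sketched in a few lines. This is a minimal illustration, not a proposed implementation; the chunk size, hash function, and zero-padding of the final chunk are all assumptions on my part.

```python
import hashlib

CHUNK_SIZE = 4096  # 4 KB blocks, as in the text; an assumed parameter


def build_matrix(corpus):
    """Build a deduplicated Matrix from an iterable of byte strings.

    Returns (blocks, index): `blocks` is the list of unique 4 KB chunks,
    and `index` maps each chunk's SHA-256 hex digest to its position.
    """
    blocks = []
    index = {}
    for data in corpus:
        # Split each source into fixed-size chunks, zero-padding the tail.
        for i in range(0, len(data), CHUNK_SIZE):
            chunk = data[i:i + CHUNK_SIZE].ljust(CHUNK_SIZE, b"\x00")
            digest = hashlib.sha256(chunk).hexdigest()
            if digest not in index:  # drop duplicate chunks
                index[digest] = len(blocks)
                blocks.append(chunk)
    return blocks, index
```

The hash index gives O(1) lookup later, and deduplication is what keeps the Matrix compact: identical 4 KB chunks across Wikipedia dumps, Git repositories, and web assets are stored exactly once.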
The bigger it is, the more efficient it is at representing data, but the more impractical it becomes. We need to find the right balance—perhaps start with 10 GB / 100 GB trials. I foresee that with just 1 TB we could represent the entirety of the internet using some tricks described later.
## Phase 2: Encoding Information into Recipes
Now, let's say a user wants to share a file, document, photo, or even an entire application/website. They don't upload the file itself; they encode it.
### Chunking the Source File
The user's file is split into its own 4 KB chunks.
### Finding the Blocks
For each chunk, the system searches the Matrix for the most similar building block (using the hash table as an index).
- If an identical chunk already exists in the Matrix (common for known formats or text), the system simply points to it.
- If no exact match is found, it identifies the closest match—the Matrix chunk that requires the fewest changes/transformations to become the target chunk.
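A rough sketch of this two-tier lookup, under the same assumptions as before. The post leaves the similarity metric open; byte-wise Hamming distance with a brute-force scan is just one naive placeholder (a real system would need an approximate-nearest-neighbor index to be tractable).

```python
import hashlib


def find_block(target, blocks, index):
    """Locate the Matrix block for a 4 KB target chunk.

    Returns (block_position, exact): exact hits resolve through the hash
    index; otherwise we fall back to a brute-force scan for the block
    with the smallest byte-wise Hamming distance (an assumed metric).
    """
    digest = hashlib.sha256(target).hexdigest()
    if digest in index:
        return index[digest], True  # exact match: no transformations needed

    def distance(block):
        return sum(a != b for a, b in zip(block, target))

    best = min(range(len(blocks)), key=lambda i: distance(blocks[i]))
    return best, False
```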
### Creating the Recipe
This process generates a small Recipe: for each chunk of the source file, a pointer to a Matrix block plus the transformations needed to turn that block into the original chunk.
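Putting the phases together, a Recipe could be as simple as a list of (block index, delta) pairs. The sketch below is one hypothetical encoding: exact matches store no delta, and near matches store a compressed XOR patch against the chosen block. The XOR-patch representation and zlib compression are my assumptions, not part of the original proposal.

```python
import hashlib
import zlib

CHUNK_SIZE = 4096  # assumed, matching the 4 KB chunks in the text


def encode(data, blocks, index):
    """Encode bytes as a Recipe: a list of (block_index, compressed_delta)."""
    recipe = []
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE].ljust(CHUNK_SIZE, b"\x00")
        idx = index.get(hashlib.sha256(chunk).hexdigest())
        if idx is not None:
            recipe.append((idx, b""))  # exact match: pointer only
        else:
            # Nearest block by Hamming distance (brute-force placeholder).
            idx = min(range(len(blocks)),
                      key=lambda j: sum(a != b for a, b in zip(blocks[j], chunk)))
            delta = bytes(a ^ b for a, b in zip(blocks[idx], chunk))
            recipe.append((idx, zlib.compress(delta)))
    return recipe, len(data)


def decode(recipe, length, blocks):
    """Rebuild the original bytes from a Recipe plus the shared Matrix."""
    out = bytearray()
    for idx, delta in recipe:
        chunk = blocks[idx]
        if delta:  # apply the XOR patch to transform block -> chunk
            patch = zlib.decompress(delta)
            chunk = bytes(a ^ b for a, b in zip(chunk, patch))
        out.extend(chunk)
    return bytes(out[:length])
```

The round trip `decode(encode(data, ...))` recovers the original bytes exactly; the hoped-for win is that the Recipe is far smaller than the data whenever the Matrix already contains close matches.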