Opensource by Reddit – Telegram
Opensource by Reddit
22 subscribers
5 photos
2 videos
9.61K links
Reddit's ♨️ take on Open Source Technology.

Join the discussion ➡️ @opensource_chats

Channel Inquiries ➡️ @group_contacts_bot

👄 TIPS ➡️➡️➡️ https://news.1rj.ru/str/addlist/mB9fRZOHTUk5ZjZk

🌈 made possible by
@reddit2telegram
@r_channels
I rebuilt the Eisenhower Matrix for modern use, here’s why

A few months ago, I was looking for a simple, focused Eisenhower Matrix app.
I wanted something clean, distraction-free, and fast, but everything I found was either outdated, bloated with features I didn’t need, or just… ugly.

So, I decided to build my own.

This week, I released version 2.0, shaped entirely by feedback from the small group of early users. The interface is fully redesigned with a calmer, more focused look, and I finally added due times and smart notifications so tasks don’t slip through the cracks.

What I’m most proud of is that it’s still minimalist. No endless menus, no complex setup. Just four quadrants to sort your tasks, and a few thoughtful touches to make it more human.

If you’re curious, the project’s open-source and you can check it out here:
🔗 **github.com/Appaxaap/Focus**

I’m curious: for those who’ve tried using an Eisenhower Matrix (or a similar system), what’s the one feature you wish more productivity apps had?

https://redd.it/1mpc25k
@r_opensource
Open-source ATS-friendly resume builder focused on privacy

I’ve built an open-source CV builder designed to create resumes that are ATS-compatible and privacy-friendly. All processing happens locally in the browser, with no servers or external tracking involved.

The application supports six professional templates, real-time preview, instant PDF generation, and multiple languages (Portuguese, English, Spanish). Data is stored only in the user’s browser and can be exported or imported via XML.

Built with Next.js 15, TypeScript and Tailwind CSS, it’s fully responsive and works on desktop and mobile. Licensed under MIT.

GitHub: https://github.com/goncalojbsousa/EasyPeasyCV
Live demo: https://www.easypeasycv.com

Feedback and contributions are welcome.

https://redd.it/1mpke1a
@r_opensource
Open Source, Self-Hosted Google Keep Notes alternative

- One-click Docker install (web app + API in seconds).
- Import Google Keep notes from Google Takeout .json files.
- Real-time collaboration for checklists — share and tick items together live.
- Markdown editor & viewer (.md) with built-in auth (no third-party APIs).

Link: https://github.com/nikunjsingh93/react-glass-keep

https://redd.it/1mpqh65
@r_opensource
Open source book on user experience

Hello open-source community. I've noticed that, unfortunately, user experience gets little attention in many open-source projects, even large ones. In my opinion, this is mainly because knowledge about user experience isn't accessible enough: books and texts on the subject are simply too expensive, and there's still so much to learn. That's why I've decided to start writing a book about user experience and make it available as open source.

https://code.metalisp.dev/marcuskammer/user-centered-development-book

https://redd.it/1mpu9oh
@r_opensource
I snagged $25k in AWS credits and want to contribute to some open source robotics repo/work, ideas?

I somehow (don't ask me how) got my hands on $25k in AWS credits. I want to make a meaningful contribution to open-source robotics: something the community will value, and something I can put on my resume/GitHub so hiring companies can see it. I'm a robotics engineer with decent experience from a top-tier university in the USA. I want to train or build something that is useful for someone; any ideas are appreciated!

https://redd.it/1mpumaw
@r_opensource
Monedsa - Income & Expense Tracker

Monedsa is a simple and user-friendly mobile app designed to help you track your income and expenses, making personal finance management easy and secure. Available on Google Play, Monedsa is completely open-source, allowing anyone to explore, modify, and contribute to the project.



Your privacy is our top priority. Monedsa does not share your data with any third-party services or organizations. All your financial information stays securely on your device, ensuring complete control over your personal data.



Project website: https://vu4ll.com.tr/projects/monedsa

Github: https://github.com/Vu4ll/monedsa

Play Store: https://play.google.com/store/apps/details?id=com.vu4ll.monedsa

https://redd.it/1mpyywr
@r_opensource
MatrixNet: A Blueprint for a New Internet Architecture

Hi everyone,

Fair warning, this is a long post, so I've added a TL;DR at the very end for those short on time.
I know the concept has its problems, but I believe with the right minds, we can find the right solutions.
I'd like to share a conceptual framework for a different kind of internet (or at least a different kind of network), one designed from the ground up to be decentralized, censorship-resistant, and hyper-compressed. This isn't a finished product or a formal whitepaper. It's a thought experiment I'm calling MatrixNet for now, and I'm sharing it to spark discussion, gather feedback, and see if it resonates.

The current web is fragile. Data disappears when servers go down, links rot, and valuable information is lost forever when a torrent runs out of seeders. What if we could build a system where data becomes a permanent, reconstructable resource, independent of its original host? Imagine if it were theoretically possible to hold a key to the entire internet in just 1 TB of data, allowing you to browse and download vast amounts of information completely offline.

## The Core Idea: Data as a Recipe

Imagine if, instead of shipping a fully built Lego castle, we only shipped a tiny instruction booklet. The recipient could build the castle perfectly because they, like everyone else, already owned the same universal set of Lego bricks.

MatrixNet operates on this principle. Data of every kind (websites, files, videos, applications) is not stored or transferred directly. Instead, it is represented as a "Recipe": a small set of instructions that explains how to reconstruct the original data using a shared, universal library of "building blocks."

Let's break down how this would work, step by step.

## Phase 1: Forging the Universal Matrix

The foundation of the entire system is a massive, static, and globally shared dataset called the Matrix.

### Gathering Public Data
We start by collecting a vast and diverse corpus of public, unencrypted data. Think of it as a digital Library of Alexandria:

- The entirety of Wikipedia.
- Open‑source code repositories (like all of GitHub).
- Public domain literature from Project Gutenberg.
- Common web assets (CSS frameworks, JavaScript libraries, fonts, icons).
- Open‑access scientific papers and datasets.
- Common data assets (videos, images).

### Creating the Building Blocks
This public dataset is then processed. The goal isn't to create a colossal file, but the smallest, most efficient Matrix possible.

The dataset is:

1. Broken down into small, fixed‑size chunks (e.g., 4 KB each).
2. Indexed by hash for fast retrieval, with all duplicates removed.

The result is the Matrix: a universal, deduplicated collection of unique data “atoms” that forms the shared vocabulary for the entire network. Every peer would eventually hold a copy of this Matrix, or at least the parts they need. It is designed to be static; it is built once and distributed, not constantly updated.

The bigger the Matrix, the more efficiently it can represent data, but the more impractical it becomes to distribute. We need to find the right balance, perhaps starting with 10 GB or 100 GB trials. I foresee that with just 1 TB we could represent the entirety of the internet, using some tricks described later.
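The chunk-and-dedup step above is easy to prototype. Here is a minimal sketch (in Python, assuming SHA-256 hashing and zero-padding of the final chunk; none of these specifics are prescribed by the design):

```python
import hashlib

CHUNK_SIZE = 4096  # the 4 KB "atoms" described above

def build_matrix(corpus: list[bytes]) -> dict[str, bytes]:
    """Split each source into fixed-size chunks and dedupe them by hash."""
    matrix: dict[str, bytes] = {}
    for blob in corpus:
        for i in range(0, len(blob), CHUNK_SIZE):
            chunk = blob[i:i + CHUNK_SIZE].ljust(CHUNK_SIZE, b"\x00")  # pad last chunk
            digest = hashlib.sha256(chunk).hexdigest()
            matrix.setdefault(digest, chunk)  # identical chunks collapse to one entry
    return matrix

# Two identical sources contribute no new blocks the second time around.
matrix = build_matrix([b"hello world" * 1000, b"hello world" * 1000])
```

Note one open problem this sketch exposes: with fixed offsets, the same data shifted by a single byte shares no chunks, so a real build would likely need content-defined chunking or alignment tricks.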

## Phase 2: Encoding Information into Recipes

Now, let's say a user wants to share a file, document, photo, or even an entire application/website. They don't upload the file itself; they encode it.

### Chunking the Source File
The user's file is split into its own 4 KB chunks.

### Finding the Blocks
For each chunk, the system searches the Matrix for the most similar building block (using the hash table as an index).

- If an identical chunk already exists in the Matrix (common for known formats or text), the system simply points to it.
- If no exact match is found, it identifies the closest match—the Matrix chunk that requires the fewest changes/transformations to become the target chunk.

### Creating the Recipe
This process generates a small JSON file called a Recipe (the instruction booklet). For each original chunk it contains:

- A pointer to the base building block in the Matrix (its hash).
- A transformation—a tiny piece of data (e.g., an XOR mask) that describes how to modify the Matrix block to perfectly recreate the original chunk. If the match is exact, the transformation is empty.

#### Example Recipe (conceptual)

```json
{
  "filename": "MyProject.zip",
  "filesize": 81920,
  "chunk_order": ["hash1", "hash2", "hash3", "..."],
  "chunk_map": {
    "hash1": {
      "matrix_block": "matrix_hash_A",
      "transform": "XOR_data_1"
    },
    "hash2": {
      "matrix_block": "matrix_hash_B",
      "transform": null // Exact match
    },
    "hash3": {
      "matrix_block": "matrix_hash_C",
      "transform": "XOR_data_2"
    }
    // … and so on for every chunk
  }
}
```
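A toy encoder that emits a recipe of this shape might look like the following (a Python sketch; the nearest-neighbor search is stubbed out with an arbitrary fallback block, since a real implementation would use the ANN techniques discussed later):

```python
import hashlib

CHUNK = 4096

def xor_mask(a: bytes, b: bytes) -> bytes:
    """The transform: byte-wise XOR between a Matrix block and the target chunk."""
    return bytes(x ^ y for x, y in zip(a, b))

def encode(data: bytes, matrix: dict[str, bytes]) -> dict:
    order, chunk_map = [], {}
    for i in range(0, len(data), CHUNK):
        chunk = data[i:i + CHUNK].ljust(CHUNK, b"\x00")
        h = hashlib.sha256(chunk).hexdigest()
        order.append(h)
        if h in matrix:
            # Exact match: a pointer is all we need.
            chunk_map[h] = {"matrix_block": h, "transform": None}
        else:
            # Stand-in for the similarity search: grab any block and record
            # the full difference. Still round-trips correctly, but the mask
            # is as large as the chunk itself.
            base = next(iter(matrix))
            chunk_map[h] = {"matrix_block": base,
                            "transform": xor_mask(matrix[base], chunk)}
    return {"filesize": len(data), "chunk_order": order, "chunk_map": chunk_map}
```

The compression win depends entirely on how close the chosen block is: an exact match costs one hash, while a poor match costs a full-size mask.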


The Recipe itself is just data, so it can be chunked, encoded, and given its own link. This allows nesting: a website's Recipe could link to Recipes for its images, CSS, etc.

Because links point to recipes (e.g., matrix://reddit…), clicking a hyperlink triggers decoding of a recipe file that then decodes the real website or data. The webpage will contain other links pointing to further recipes, creating a chain of reconstruction instructions.

### Handling Encrypted Data

Encrypted files have high entropy and appear as random noise, so finding matching chunks in a public‑data Matrix is practically impossible.

- We Do Not Expand the Matrix: It stays static and contains only publicly available data; we never pollute it with encrypted material.
- Approximate & Transform: For each encrypted chunk we perform a nearest‑neighbor search to find the Matrix block that is mathematically closest (i.e., has the smallest bitwise difference).
- The Difference Is the Key: The system records the exact difference between the chosen Matrix block and the encrypted chunk using operations such as XOR, byte reordering, or other lightweight transformations. These transformation instructions are stored in the recipe.

- Reconstruction: Retrieve the specified Matrix block, apply the recorded transformation, and you obtain the original encrypted chunk bit‑for‑bit. In this way the encrypted data is effectively “steganographically” embedded within innocuous public blocks, while the heavy lifting (the transformations) lives in a tiny Recipe file.
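Reconstruction is the cheap direction: fetch the block, apply the mask. A sketch, assuming the recipe shape from the conceptual example (the all-zero Matrix block and the `"c1"` chunk id below are made up for the demo):

```python
def decode(recipe: dict, matrix: dict[str, bytes]) -> bytes:
    out = bytearray()
    for h in recipe["chunk_order"]:
        entry = recipe["chunk_map"][h]
        block = matrix[entry["matrix_block"]]
        if entry["transform"] is not None:
            # Apply the XOR mask to turn the public block into the target chunk.
            block = bytes(x ^ y for x, y in zip(block, entry["transform"]))
        out += block
    return bytes(out[:recipe["filesize"]])  # drop the padding on the final chunk

# Demo: one all-zero 4 KB block; XOR against zeros makes the mask the chunk itself.
matrix = {"blk": bytes(4096)}
recipe = {
    "filesize": 14,
    "chunk_order": ["c1"],
    "chunk_map": {"c1": {"matrix_block": "blk",
                         "transform": b"attack at dawn".ljust(4096, b"\x00")}},
}
```

The demo also illustrates the worst case: against a block that shares nothing with the target, the "tiny" transform is exactly as big as the data it encodes.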

## Phase 3: A Truly Decentralized Web (Even Offline)

When files are represented only by recipes, the whole architecture of the web can change.

- Links Point to Recipes: Hyperlinks no longer resolve to IP addresses or domain names; they reference the hash of a Recipe.
- Offline Browsing: If you have the Matrix stored locally (e.g., on an external drive), you can browse huge portions of the network completely offline. Clicking a link simply fetches another tiny Recipe, which then reconstructs the target content using the local Matrix. Your browser becomes a reconstructor rather than a traditional downloader.
- The Network Is the Data: Going “online” merely means syncing the universal Matrix and exchanging new Recipes with peers.

### Solving Classic P2P Problems

1. Seeder Problem: In BitTorrent, a file disappears when there are no seeders. In MatrixNet, files never truly die because the Matrix is a permanent commons seeded by everyone. As long as a tiny Recipe exists somewhere (and it’s easy to back up or publish), the full file can be resurrected at any time.

2. Storage & Bandwidth Inefficiency: Sharing a 1 GB file traditionally requires transferring the whole gigabyte. With MatrixNet you only need to transfer a few kilobytes—the Recipe. The heavy data (the Matrix) is already widely replicated, so bandwidth usage drops dramatically.

## Challenges and Open Questions

- Computational Cost: Finding the “most similar chunk” for every 4 KB piece is CPU‑intensive. Viable solutions will likely need:
  - Locality‑Sensitive Hashing or other ANN (approximate nearest neighbor) techniques.
  - GPU/FPGA acceleration for bulk similarity searches.
  - Possibly machine‑learning models to predict good candidate blocks.

- Dynamic Content: Real‑time applications, databases, and live streaming don’t fit neatly into static recipes. Additional layers—perhaps streaming recipes or mutable matrix extensions—would be required.

- Integration with the Existing Internet: Adoption hinges on low entry barriers (e.g., browser plugins, easy Matrix bootstrapping). Bridging mechanisms to fetch traditional HTTP resources when a recipe is unavailable will ease transition.
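For intuition on the computational-cost point, the naive baseline is a brute-force Hamming-distance scan over every Matrix block per chunk. A sketch of that baseline (in Python; this is what LSH or other ANN indexes would replace):

```python
def hamming(a: bytes, b: bytes) -> int:
    """Number of differing bits between two equal-length chunks."""
    return sum(bin(x ^ y).count("1") for x, y in zip(a, b))

def nearest_block(chunk: bytes, matrix: dict[str, bytes]) -> str:
    """Brute force: O(len(matrix)) distance computations per chunk."""
    return min(matrix, key=lambda h: hamming(matrix[h], chunk))
```

At 1 TB of 4 KB blocks that is roughly 2.4 × 10^8 candidates per chunk, which is why the bullets above point at LSH/ANN and hardware acceleration.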

## Final Thoughts: A Paradigm Shift

MatrixNet invites us to rethink data sharing as reconstruction rather than copying. It envisions a future where our collective digital heritage isn’t locked in fragile silos but woven into a shared, permanent fabric.

- What if files never die, because their pieces already exist everywhere, just in a different shape?
- What if the only thing we need to share is how to rebuild information, not the information itself?

These questions are powerful. I’m sure there are flaws and challenges I haven’t covered—your critiques, ideas, and expertise are welcome.

Let’s collaborate to build a new internet that empowers users rather than corporations or governments.

If you’re a software engineer, cryptographer, network/security specialist, machine‑learning researcher, or simply passionate about decentralized systems, please reach out. I’ve created a GitHub repo for the community to start prototyping:

https://github.com/anedsa/Matrix-Net

I’m seeking collaborators to help run and grow this project; if you’d like to contribute, please DM me.
---

## TL;DR

MatrixNet = hyper‑compressed, decentralized web.
- Problem: Current web is fragile, censored, and bandwidth‑inefficient; data vanishes when servers go down.
- Idea: Share only a tiny Recipe (a few KB) that tells a device which chunks from a shared Matrix to pull and how to tweak them to recreate the original file.
- Benefits: Massive bandwidth savings, permanent availability (as long as the Recipe exists), censorship resistance, and offline browsing if you store the Matrix locally.
- Catch: Finding similar chunks is computationally heavy; dynamic content needs extra layers—but it’s a promising thought experiment for a more resilient web.

Feel free to comment, critique, or join the effort!

Edit: post missing a section

https://redd.it/1mq27by
@r_opensource
My first open source project : ClearTx

Hey folks,
I built ClearTx, an open-source tool to organize and track your UPI transactions without sending your data to any server.

- Works completely offline — your data stays with you
- Simple tagging & filtering for accounts, merchants, or purposes
- Clean UI for quick insights
- Export reports whenever you need

Repo link: ClearTx

Would love feedback, feature suggestions, or contributions from fellow devs!

https://redd.it/1mq7ixj
@r_opensource
I made a telegram bot template

I made this template for python-telegram-bot. It covers almost every integral part of a Telegram bot, plus some nice decorators and utils. After about six years of python-telegram-bot development (not full time), I can finally say this template is indeed perfect, at least for me. I hope it'll be of use to you too.

https://github.com/zmn-hamid/TeleTemplate

https://redd.it/1mqckdj
@r_opensource
🎬 FrameExtractionTool - Extract Perfect Frames from Videos with SwiftUI

**Hey Everyone!**
I just released my latest side project, **FrameExtractionTool**, a simple iOS app for extracting high-quality frames from videos.

**📱 What it does:**

* Video Selection: Pick any video from your photo library
* Frame-Perfect Playback: Custom video player with precise timeline control
* Frame Marking: Mark specific moments during playback
* High-Quality Extraction: Save frames at original video resolution
* Custom Albums: Organize extracted frames in custom photo albums

**🛠️ Built with:**

* **SwiftUI** + **AVFoundation**
* **GitHub Actions** for automated builds

⚠️ **Important Disclaimer:**

This is a **very barebones app** built as a side project of mine. The main goals were to:

* Learn how AI can help build apps
* Play around with SwiftUI and modern iOS development
* Experiment with SF Symbols and Icon Composer
* Explore automated CI/CD with GitHub Actions

**This app is very heavily developed using AI.** Bugs are expected! 🐛

**🎯 Why I built this:**

I often needed to extract specific frames from videos for presentations, memes, or reference images, and I couldn't find an app that offers similar functionality for free. So I tried using AI and built it myself.

**🔗 Links:**

* **GitHub**: [FrameExtractionTool](https://github.com/CasperOng/FrameExtractionTool/)
* **Releases**: Check the releases page for unsigned IPA files.

**🤝 Contributing:**

Feel free to:

* Open issues for bugs 🐛
* Submit pull requests with fixes 🔧
* Suggest new features 💡
* Roast my (AI's) code (gently please) 😅

**TL;DR**: Made a simple frame extraction app with SwiftUI as an AI-assisted learning project. It works, has bugs, and is open source. Come try it! 😄

https://redd.it/1mqa4ib
@r_opensource