Tpipe – Apple-style Audio Transparency for Linux (JACK / PipeWire)
Tpipe is an open-source audio transparency tool for Unix-based systems, built on the JACK audio engine. It provides real-time voice isolation and adaptive audio ducking, similar in concept to Apple’s Transparency mode.
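Not tpipe’s actual code (see the repo for that), but a minimal sketch of the ducking half of the idea, assuming the JACK-Client Python bindings and numpy: estimate speech energy on a mic port and smoothly attenuate a music port while speech is present.

```python
# Minimal sketch of sidechain "ducking" on JACK, assuming the JACK-Client
# Python bindings (pip install JACK-Client numpy). Not tpipe's implementation,
# just an illustration of the concept: attenuate program audio while speech
# is detected on the microphone input.
import jack
import numpy as np

client = jack.Client("duck_sketch")
mic_in = client.inports.register("mic_in")
music_in = client.inports.register("music_in")
out = client.outports.register("out")

gain = 1.0  # smoothed gain applied to the music signal

@client.set_process_callback
def process(frames):
    global gain
    mic = mic_in.get_array()
    music = music_in.get_array()
    # Crude voice-activity estimate: RMS energy of the current mic block.
    level = float(np.sqrt(np.mean(mic ** 2)))
    target = 0.2 if level > 0.02 else 1.0   # duck to 20% while speech is present
    gain += 0.05 * (target - gain)          # one-pole smoothing to avoid pumping
    out.get_array()[:] = gain * music + mic  # pass the mic through, duck the music

with client:  # activates the client; connect ports with qjackctl or jack_connect
    input("Running; press Enter to quit\n")
```

Tpipe’s voice isolation presumably involves more than an RMS gate; this only shows where the ducking decision sits in a JACK processing graph.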
GitHub: https://github.com/beaterblank/tpipe
I’d appreciate feedback on the design and suggestions for improvement.
https://redd.it/1ptx9kb
@r_opensource
LabFyre: Because I got tired of working around upstream limitations...
In short order, I was developing a few scripts that would simulate sticky keys and Omarchy's universal copy/paste, except on Ctrl instead of Meta/Super. I ran into a myriad of issues with this, though: I'd end up in a feedback loop with universal copy/paste thanks to dotool, and the sticky-key implementation wasn't 1:1 with KDE, GNOME, or Windows. It had its flaws and would actively affect gaming even when turned off, due to how labwc does keybinds.
As a result, I initially forked labwc to add keybind toggles, device blacklisting/whitelisting, and conditionals based on shell commands. I knew none of this would be merged upstream, as the labwc maintainers only want it to understand Wayland and wlroots protocols, no D-Bus, IPC, or anything else, probably including the flags I added to the binary to control it. So the features kind of spiraled from there into what LabFyre is currently.
As far as the feature set compared to upstream goes, there's quite a bit:
multiple methods of enabling, disabling, or limiting keybinds (by command flag, by device, and by the output of a shell command)
a script hook that fires when the compositor is reconfigured
workspace control via command flag
a (WIP) tiling mode (grid snapping works fine-ish, but smart resizing is experimental)
This still holds to the principle of not being controllable via D-Bus or IPC; the only compositor control outside of Wayland and wlroots is through command flags passed to the binary. So you could write plugins in any language (Bash, Zsh, Xonsh, Python, Java, Zig...): as long as it can run system commands, you can use it to control the compositor. Openbox themes are still supported, as are configs for upstream labwc.
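As a rough illustration of that plugin model, here is a toy "plugin" in Python that just shells out to the compositor binary. The flag names below are invented for the example, not LabFyre's real CLI; check the scdoc pages for the actual flags.

```python
# Hypothetical plugin for the command-flag control model described above.
# The "--set-keybinds" flag is made up for illustration; consult LabFyre's
# own documentation for the real flags.
import subprocess
import time

def labfyre(*flags: str) -> None:
    """Shell out to the compositor binary with some control flags."""
    subprocess.run(["labfyre", *flags], check=True)

def fullscreen_game_running() -> bool:
    # Stand-in condition; a real plugin might check the window list,
    # a lock file, or the output of any other command.
    return subprocess.run(["pgrep", "-x", "gamescope"],
                          capture_output=True).returncode == 0

while True:
    # Disable custom keybinds while a game is running, re-enable them after.
    labfyre("--set-keybinds", "off" if fullscreen_game_running() else "on")
    time.sleep(5)
```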
Note that the README isn't 100% devoid of labwc links and mentions. I'm going to move all the documentation to the GitHub Wiki at some point, but the scdoc pages will still be maintained for offline reading. You'll need to compile it yourself and make a desktop file for your greeter. I'm taking PRs for a PKGBUILD, and hopefully someone can get it onto the AUR for me, as I can't figure out the needed keys to do it...
Obligatory link to the project: https://github.com/FyreX-opensource-design/labFyre/tree/master
https://redd.it/1ptvkow
@r_opensource
tchat-cli: prefill your AI chat app from the terminal with prompts and files!
https://github.com/varo6/tchat-cli
https://redd.it/1pu0ja4
@r_opensource
Looking for feedback and contributors on an open-source React Native + Expo mobile app
Hi everyone,
I’m working on an open-source mobile app built with React Native + Expo, and I’m trying to do the development as openly and transparently as possible.
At this stage, I’m not looking to promote a “finished product”, but rather to get help improving the project itself. I would really appreciate feedback or contributions in areas like:
- Project structure and architecture
- README and developer onboarding
- Documentation quality
- Performance and rendering patterns
- Internationalization (currently switching between Georgian and Russian)
- General React Native / Expo best practices
The repository is open-source and still evolving, and I’m very open to criticism, suggestions, and refactors. If you enjoy reviewing code, improving docs, or helping shape early-stage OSS projects, I’d love your input.
Repository:
https://github.com/georgetoloraia/selflink-mobile
If this isn’t the right place or flair for this kind of post, feel free to let me know and I’ll adjust. Thanks for your time.
https://redd.it/1ptzyq6
@r_opensource
I built an open-source MFA toolkit so apps can add MFA without replacing their login system
I built OpenAuth, an open-source MFA toolkit for developers who already have authentication but want drop-in MFA.
It handles only MFA, not login, users, or sessions.
It includes:
Ready-to-use backend (already deployed)
JS SDK for TOTP & Email OTP
CLI for app & key management
Optional React Native MFA screens
You keep your existing auth (Firebase etc.), and OpenAuth just adds MFA on top of it.
The backend is Django-based and designed to be extensible.
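To make the "MFA on top of existing auth" idea concrete, here is what a generic TOTP enrol-and-verify flow looks like using the pyotp library. This is only an illustration of the mechanism, not OpenAuth's SDK or API.

```python
# Generic TOTP enrolment/verification flow using pyotp (pip install pyotp),
# shown purely to illustrate what an MFA layer adds on top of existing auth.
# This is NOT the OpenAuth SDK's API.
import pyotp

# Enrolment: generate and store a per-user secret, show it as an otpauth URI/QR.
secret = pyotp.random_base32()
uri = pyotp.TOTP(secret).provisioning_uri(name="user@example.com",
                                          issuer_name="MyApp")
print("Scan this in an authenticator app:", uri)

# Verification: after the user signs in with your existing auth (Firebase etc.),
# gate the session on a valid 6-digit code.
totp = pyotp.TOTP(secret)
code = input("Enter the 6-digit code: ")
print("MFA passed" if totp.verify(code, valid_window=1) else "MFA failed")
```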
If anyone tries it, I would like to hear your feedback and would be happy to have you as a contributor to the project.
GitHub link if you want to see the source: OpenAuth Repo Link
NPM packages: npm package link
https://redd.it/1ptvfis
@r_opensource
Looking for a products and sub-products list
Hi, I'm looking for a products and sub-products list database (MySQL or JSON), similar to the category lists used on the Alibaba, Global Sources, and Made-in-China websites.
Any help, please?
https://redd.it/1pu51od
@r_opensource
Voice-to-text with MCP support. System-wide dictation (hold fn) and AI agent mode (hold fn+ctrl) that connects to any MCP server. Cross-platform desktop app with local Whisper transcription.
https://github.com/acoyfellow/t2t
https://redd.it/1pu6mn7
@r_opensource
colorpp - yet another C++ color library, which respects NO_COLOR
https://github.com/calishu/colorpp
https://redd.it/1pu5uc8
@r_opensource
Reasons open source is NOT good?
I’m strongly in favor of open-source software, and both I and my professional network have worked with it for years.
That said, I’m curious why some individuals and organizations oppose it.
Is it mainly about maintaining a competitive advantage, or are there other well-documented reasons?
Are there credible sources that systematically discuss the drawbacks, trade-offs, or limits of open source compared to closed or proprietary models?
https://redd.it/1pu9t2u
@r_opensource
Here I explain why Plebbit will be the first open-source, pure peer-to-peer social media protocol
https://github.com/plebbit/whitepaper/blob/master/FAQ.md
https://redd.it/1puasi7
@r_opensource
share an app!
I would like to share something I've been working on for quite a while; hopefully I'll soon get the courage to let some of y'all see it.
https://redd.it/1pufbqh
@r_opensource
What are the best truly open-source alternatives to Akeneo PIM today?
The best truly open-source alternatives to Akeneo PIM available today include the following solutions, all of which can be self-hosted and customized without vendor lock-in:
List of Open-Source PIM Alternatives
Pimcore – A powerful enterprise-grade open-source platform offering PIM, DAM, MDM, and CMS capabilities for complex product data management.
AtroPIM – A flexible, API-first open-source PIM designed for advanced data modeling and seamless system integrations.
LibrePIM – A community-driven open-source fork of Akeneo Community Edition that provides long-term support and ongoing improvements.
OpenPIM – A simple open-source PIM solution focused on basic product data centralization and management.
https://redd.it/1puibd7
@r_opensource
t2t: Voice-to-text with MCP support. System-wide dictation (hold fn) and AI agent mode (hold fn+ctrl) that connects to any MCP server. Tauri desktop app with local Whisper transcription.
https://github.com/acoyfellow/t2t
https://redd.it/1pukdpp
@r_opensource
After too many noisy PR reviews, I built a deterministic one
I didn’t start PRFlow to make reviews faster; I built it to make them reliable.
PR reviews weren’t blocked because people were slow. They were blocked because reviews were **inconsistent, noisy and context-poor**.
So I built PRFlow around a different premise:
**Every PR deserves a consistent, low-noise first pass before humans step in.**
What PRFlow is optimized for:
* **Deterministic reviews** – Same change → same feedback. No randomness, no “AI mood swings.”
* **Context-aware analysis** – Looks beyond the diff and respects how the codebase actually works, not just what changed.
* **Low-noise comments** – No long essays, no nitpicks. Only issues that matter to correctness, risk and maintainability.
* **First-pass focus** – It’s not trying to replace reviewers. It handles the repetitive baseline so humans can focus on intent and architecture.
* **GitHub-native workflow** – Lives directly in PRs. No new dashboards, no process changes.
What I’ve learned while building this:
* Review noise kills trust faster than slow feedback
* Consistency matters more than “smart” suggestions
* One bad review can undo ten good ones
* Fewer comments often lead to better discussions
PRFlow is currently in beta and being tested by teams running real production PRs. Most of what’s shipping next is coming directly from early feedback and real review pain, not feature wishlists.
If you want to check it out: https://graphbit.ai/prflow
https://redd.it/1puo1uq
@r_opensource
I built a simple client to manage DNS on Windows
https://github.com/altendorfme/sdfw
https://redd.it/1puom7j
@r_opensource
open-source Electron + Angular text editor
Hi
I wanted to share a project I have been working on. It's really early in its development, but I wanted to share it and possibly get some others who are interested to help out.
I’m working on Flufy, an open-source desktop text editor built with Electron and Angular, and I’m looking for early feedback and potential contributors.
Flufy started as a learning project, but I’m aiming to grow it into a genuinely usable, well-architected editor with a strong focus on tooling and performance.
What I’m looking for
Feedback on architecture and design
Suggestions for editor features or UX improvements
Contributors interested in:
Editor internals
UI/UX
Performance
Electron or Angular tooling
Anything really
The project is still evolving, but I’ve tried to keep things structured and approachable. I’m very happy to guide new contributors.
Repo https://github.com/UmbrellaCrow612/fluf
https://redd.it/1puq6pr
@r_opensource
We’re building PushFlow — an open-source real-time push notification framework
Hey everyone!
We’re working on PushFlow, an open-source, standalone real-time push notification framework built with a microservice architecture.
PushFlow is designed to handle notification requests through a gateway service, route them using a dispatcher, and deliver them in real time through dedicated delivery services (such as WebSockets or SSE). The focus is on keeping responsibilities clear, communication asynchronous, and the system easy to self-host and extend.
The stack is intentionally straightforward: NestJS with TypeScript, Docker for local and production setups, Redis for event coordination, and MongoDB for persistence. The goal is to provide a clean and understandable foundation for real-time notification delivery, without being tied to a specific application or platform.
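To make the gateway → dispatcher → delivery flow concrete, here is a stripped-down sketch of the dispatcher step. PushFlow itself is NestJS/TypeScript; the Python below, and its channel names and payload shape, are invented purely to illustrate the event flow through Redis.

```python
# Toy dispatcher: consume notification events from Redis and route them to a
# per-transport delivery channel. Channel names and payload fields are made up;
# PushFlow's real services are NestJS/TypeScript, this only sketches the flow.
import json
import redis

r = redis.Redis()                  # event coordination layer
pubsub = r.pubsub()
pubsub.subscribe("notifications")  # channel the gateway publishes to

def dispatch(event: dict) -> None:
    # Dedicated delivery services (WebSocket / SSE workers) would subscribe
    # to these per-transport channels and push to connected clients.
    transport = event.get("transport", "websocket")
    r.publish(f"deliver:{transport}:{event['user_id']}", json.dumps(event))

for message in pubsub.listen():
    if message["type"] == "message":
        dispatch(json.loads(message["data"]))
```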
We’re publishing the project in the open from the start and would really appreciate feedback on the architecture, design decisions, and overall direction. If you’re interested in real-time systems, distributed services, or notification infrastructure, we’d love to hear your thoughts.
More updates and the repository coming soon.
GitHub link: https://github.com/pyankie/pushflow
https://redd.it/1purj16
@r_opensource
Best Caller and Contact App
Hi everyone,
I am facing an issue with the caller and contacts apps preinstalled on my Android phone. The contacts app sets a default SIM and I cannot make it ask every time. Similarly, the caller app uses some sensor to keep my screen alive while calling, but that sensor is damaged because my phone has fallen on the floor multiple times, and I would have to replace the panel.
The list goes on.
What alternatives are you guys using?
https://redd.it/1pustp4
@r_opensource
Built a small PDF reader because highlights are useless in most apps
I built a mobile app (Loci) over the past few days.
Repo: https://github.com/Sriram-PR/loci-mobile
I made it because I read long PDFs, and in most readers highlights are hard to revisit. If a document has 100+ pages, finding what you already marked becomes annoying and slow.
Loci is a memory-first PDF reader. Highlights are the main view. You can search them instantly and jump back to the exact page and context in the PDF. The goal is fast recall, not rereading everything.
It's built with Flutter. I did not have time to properly learn Dart, so this was very much vibe-coded to solve my own problem. It's early but usable.
Would appreciate any feedback, especially on UX and missing features.
https://redd.it/1puncmv
@r_opensource
A dementia-management tool I'd like to exist
A few years ago, I had an idea for an open-source project that I'd like to exist, to help people with severe dementia and their caregivers. I have pretty much none of the skills that would be helpful in making it happen. I have a few decades of programming experience, but none in software architecture, and at present my health problems limit my ability to learn major new skills. I'm throwing this idea out here, basically in the hope that it gets the attention of someone who can do something with it.
Background: my wife has several serious health problems, and has spent a lot of time in hospitals. Frequently, when I visit her there, I can hear people with dementia yelling from other rooms: they don't understand where they are, they're confused, they want someone to help them now. Apart from their own distress, they're upsetting other patients and visitors, and it takes a toll on the caregivers as well.
In one hospital stay, my wife was sharing a room with an extremely frail, elderly woman. She was barely moving, apart from talking incessantly: "Margaret? Is my oxygen on? Are you there? Is my oxygen okay? Margaret?" Occasionally a nurse would come in and reassure her that her oxygen was fine, and that would settle her... for about five minutes. And then she would start again, because she couldn't hold the thought.
It occurred to me that what she really could use was an automated system that could listen for her to say the word "oxygen", and tell her in the voice of Margaret (whoever that was!) something like "Your oxygen is fine. You're safe, in the hospital."
The logic of such a system wouldn't need to be any more complex than what the old "Eliza" program did. Check for keywords, spit out a recorded response. It would be complicated by difficulty in training it, given the non-ideal conditions for sound pickup and that many such patients don't enunciate well. It would be simplified by not necessarily even having to do full speech-to-text; merely being able to recognize sound patterns (e.g. vowel pattern "aw-ih-eh" for "oxygen") might suffice. It wouldn't even have to generate its own speech, since it could simply play canned messages recorded by someone. Margaret, whoever that was... (I'll admit to having had thoughts of "Are you there, Margaret? It's me, God.")
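The keyword-to-canned-response core really is Eliza-simple. Here is a sketch of just that part in Python, with the speech recognizer stubbed out as text input and playback delegated to a command-line player (aplay here); the hard part identified above, reliable keyword or vowel-pattern spotting on poorly-miked, mumbled speech, is exactly what is left out.

```python
# Sketch of the Eliza-like core: map recognised keywords to pre-recorded
# reassurances in a familiar voice. The recognizer is a stub; robust keyword
# spotting on difficult audio is the real open problem described above.
import subprocess
import time

RESPONSES = {  # keyword -> canned recording (filenames are illustrative)
    "oxygen":   "margaret_your_oxygen_is_fine.wav",
    "margaret": "margaret_im_here.wav",
    "where":    "youre_safe_in_the_hospital.wav",
}
COOLDOWN_SECONDS = 60  # don't repeat the same reassurance back-to-back
last_played: dict[str, float] = {}

def respond_to(heard_text: str) -> None:
    for keyword, wav in RESPONSES.items():
        if keyword in heard_text.lower():
            if time.time() - last_played.get(keyword, 0.0) > COOLDOWN_SECONDS:
                last_played[keyword] = time.time()
                subprocess.run(["aplay", wav])  # or any other audio player
            return

if __name__ == "__main__":
    # In a real system this loop would be fed by a keyword spotter / STT engine.
    while True:
        respond_to(input("(stub recognizer) heard: "))
```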
My wife is a dementia specialist, and she thinks that something like this would be extremely beneficial towards "reorienting" people with severe dementia. I've asked a few hospital nurses, and they love the idea. I've discussed the concept with a friend who was a linguistics expert with Dragon Naturally Speaking, and he agrees with my assessments of the problems.
My impression is that this shouldn't require especially powerful hardware. I'd like the thing to be freely available, since medical costs are high; it ought to be able to run on something like a relatively-old mobile device. I don't want it locked down to a particular platform, or owned by some company for profit.
Is there some way that this can happen?
https://redd.it/1puvnrt
@r_opensource
Made a simple tool to see what languages are actually in your project!
I made a quick bash script called kats that shows you exactly what languages are in your project, with actual percentages!
Install is literally one line:
curl -sSL https://raw.githubusercontent.com/lattiece/kats/main/install.sh | bash
Then just run kats in any project folder. It's fast, supports 40+ languages, and doesn't need any dependencies installed.
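For anyone curious about the general idea, counting lines per file extension and turning that into percentages looks roughly like the sketch below. This is not how kats is implemented (kats is a bash script and handles far more cases); it only illustrates the kind of output such a tool produces.

```python
# Rough illustration of "language percentages by extension". Not kats itself:
# kats is a bash script and deals with many more languages and edge cases.
from collections import Counter
from pathlib import Path

EXT_TO_LANG = {".py": "Python", ".sh": "Shell", ".c": "C", ".rs": "Rust",
               ".js": "JavaScript", ".ts": "TypeScript", ".go": "Go"}

counts: Counter = Counter()
for path in Path(".").rglob("*"):
    if path.is_file() and ".git" not in path.parts:
        lang = EXT_TO_LANG.get(path.suffix.lower())
        if lang:
            with path.open(errors="ignore") as f:
                counts[lang] += sum(1 for _ in f)

total = sum(counts.values()) or 1
for lang, lines in counts.most_common():
    print(f"{lang:12} {100 * lines / total:5.1f}%  ({lines} lines)")
```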
GitHub: https://github.com/lattiece/kats
lmk if you think it's useful or if I should add something!
https://redd.it/1puvrgc
@r_opensource