Tangent: Log processing without DSLs (built on Rust & WebAssembly)
https://github.com/telophasehq/tangent/
Hey y'all – The problem I've been dealing with is that each company I work at implements many of the same log transformations. Additionally, LLMs are much better at writing Python and Go than DSLs.
WASM has recently made major performance improvements (with more exciting things to come like async!) and it felt like a good time to experiment to see if we could build a better pipeline on top of it.
Check it out and let me know what you think :)
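To make the "real languages instead of a DSL" idea concrete, here is the kind of logic involved, written as a plain Python function. The function shape is hypothetical (the post doesn't show Tangent's actual processor interface); it's just the parse/normalize/drop step these pipelines end up repeating at every company.

```python
# Hypothetical transform: not Tangent's actual API, just the kind of logic
# you'd express in a general-purpose language instead of a vendor DSL.
import json
from datetime import datetime, timezone

def transform(raw_line: str) -> dict | None:
    """Parse one JSON log line, normalize common fields, drop health checks."""
    event = json.loads(raw_line)

    # Drop noisy records early.
    if event.get("path") == "/healthz":
        return None

    # Normalize the timestamp to RFC 3339 UTC.
    ts = event.get("timestamp") or event.get("ts")
    if isinstance(ts, (int, float)):
        ts = datetime.fromtimestamp(ts, tz=timezone.utc).isoformat()

    return {
        "timestamp": ts,
        "severity": str(event.get("level", "info")).lower(),
        "message": event.get("msg") or event.get("message", ""),
        "service": event.get("service", "unknown"),
    }

if __name__ == "__main__":
    print(transform('{"ts": 1730419200, "level": "WARN", "msg": "retrying", "service": "api"}'))
```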
https://redd.it/1okv8ys
@r_devops
How do you get secrets into VMs without baking them into the image?
Hey folks,
I’m used to working with AWS, where you can just attach an instance profile and have the instance securely pull secrets from Secrets Manager or SSM Parameter Store without hardcoding anything.
Now I’m working in DigitalOcean, and that model doesn’t translate well. I’m using Infisical for secret management, but I’m trying to figure out the best way to get those secrets into my droplets securely at boot time — without baking them into the AMI or passing them as plain user data.
So I’m curious:
How do you all handle secret injection in environments like DigitalOcean, Hetzner, or other non-AWS clouds?
How do you handle initial authentication when there’s no instance identity mechanism like AWS provides?
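For illustration only: one common pattern is to give each droplet a machine identity and have a small bootstrap pull secrets at first boot, so nothing is baked into the image and user data carries at most that one credential. The sketch below assumes Infisical's universal-auth machine identities; the endpoint paths and response field names are recalled from its REST API and should be verified against the current docs, and getting the client ID/secret onto the box in the first place is exactly the bootstrap problem this post is asking about.

```python
# Hedged sketch, not a drop-in solution: assumes an Infisical "universal auth"
# machine identity whose client ID/secret reach the droplet via some one-time
# provisioning step. Endpoint paths and field names are quoted from memory of
# Infisical's REST API and should be double-checked.
import os
import requests

INFISICAL_URL = os.environ.get("INFISICAL_URL", "https://app.infisical.com")

def fetch_secrets(client_id: str, client_secret: str, project_id: str, env: str) -> dict:
    # Exchange the machine identity for a short-lived access token.
    login = requests.post(
        f"{INFISICAL_URL}/api/v1/auth/universal-auth/login",
        json={"clientId": client_id, "clientSecret": client_secret},
        timeout=10,
    )
    login.raise_for_status()
    token = login.json()["accessToken"]

    # Pull secrets for one environment at boot; nothing is written to the image.
    resp = requests.get(
        f"{INFISICAL_URL}/api/v3/secrets/raw",
        headers={"Authorization": f"Bearer {token}"},
        params={"workspaceId": project_id, "environment": env},
        timeout=10,
    )
    resp.raise_for_status()
    return {s["secretKey"]: s["secretValue"] for s in resp.json()["secrets"]}

if __name__ == "__main__":
    secrets = fetch_secrets(
        os.environ["INFISICAL_CLIENT_ID"],
        os.environ["INFISICAL_CLIENT_SECRET"],
        os.environ["INFISICAL_PROJECT_ID"],
        os.environ.get("INFISICAL_ENV", "prod"),
    )
    print(sorted(secrets))  # keys only; don't log values in real use
```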
https://redd.it/1okxnz4
@r_devops
do you guys still code, or just debug what ai writes?
lately at work i've been using ChatGPT, Cosine, and sometimes Claude to speed up feature work. it's great: half my commits are ready in hours instead of days. but sometimes i look at the codebase and realize i barely remember how certain parts even work.
it’s like my role slowly shifted from developer to prompt engineer. i’m mostly reviewing, debugging, and refactoring what the bot spits out.
curious how others feel
https://redd.it/1okz9hc
@r_devops
Non-vscode AI agents
Hi guys, recently Claude Sonnet 4 disappeared from my VS Code. Can anyone help me?
It literally wrote the front-end code for me, so I could calmly develop the back-end.
If anyone has an alternative agent that can write, update, edit, delete, etc. in VS Code or another IDE, please let me know. Thanks.
https://redd.it/1okyltc
@r_devops
Tell me if I'm in the wrong here
Context: I work on a very large contract. The different technical disciplines are broken up into authoritative departments. I'm on Platform Engineering. We're responsible for building application images and deploying them. There is also a Cybersecurity team, which largely sets policy and pushes out requests for patches and such.
Before I explain this process I offer this disclaimer: I know this process is crap. I hate it and I'm working very hard to change it. But as it stands now, this is what they ask me to do:
We are asked by the CSD team about every 3 months to take the newest CPU base image from WebLogic and run pipelines that build images for each of the apps on a specific cluster. You read that right - cluster. Why? Well, because instead of injecting the .ear file at runtime, they build an image with a very long-ass tag name that has the base image, the specific app and the specific app version on it. These pipelines call a configuration management database which says "Here is the image name and version" and use that to make an individually tailored image for each app.
After that's done, they have a "mass deploy" pipeline which then deploys the snowflake images for dozens of applications into a Kubernetes cluster.
Now, this is where I get pissed.
I played nice and did the mass build pipeline. However, because it's a fucking convoluted process, I missed a step and had to re-run it. It takes like 3 hours every time it runs because it's Jenkins. (Another huge problem.) This delayed my timeline according to CSD and they were already getting hot and bothered by it. However, after the success of building all those images, I decided this was where I take my stand. I said I would not deploy all these apps to our development cluster. Instead, I would rather we deploy a few apps and scream-test them with some dev teams. Why? Because we have NO FUCKING QA. We just expect it's gonna work. I am not gonna do that.
That didn't make CSD happy but they played along until I said I wasn't going to run the mass deploy pipeline on a Friday afternoon on Halloween. They wanted me to run it because "It's just dev" and "It's no big deal". To me, it is a big deal, because if we plan to promote to the test cluster on Monday, I want more time from the devs to give me feedback. I want testing of the pods and dependent services. I want some actual feedback that we have spot checked scenarios before they make their way up to prod. Dev would be the place to catch it before it gets out of hand because if we find something we promoted to test is wrong then we now have twice as many apps to rollback. The devs also have families too. I'm not going to put more stress on them because the CSD wanted to rush something out.
Anyway, CSD is now tussling with my boss because I unplugged my computer and went home. I am going to play video games the rest of the day and then go trick or treating with my kids. They can have some other sucker do their dirty work.
But am I wrong? Did I make a mountain out of a molehill? Or am I correct that this is a disaster waiting to happen and I need to draw the line in the sand here and now?
https://redd.it/1ol38s2
@r_devops
API first vs GUI for 3rd party services
Your team has decided to buy a new tool to solve a problem. You have narrowed down the options to:
Tool A:
Minimal UI, mainly API-driven, good docs and SDKs
Tool B:
Nearly all work is done inside the tool's UI, either browser-based or a desktop app. Minimal APIs exposed, no SDKs
Assume all the features are the same; it's just the way you interact with the tool. Which one are you advocating for? Which one do you see your team adopting?
https://redd.it/1okyiqz
@r_devops
"Validate problems before rushing into tools, frameworks etc" quote
Weird question, and sorry that it's probably inappropriate for the sub, but someone posted an image of this lady at a (platform?) convention with a caption that goes something like the title.
To be honest I can't even remember if it was posted here or in r/kubernetes; I did try to find it myself but to no avail. Does it ring a bell for anyone? I would really like to watch the presentation myself, or at the very least find the image itself. Thanks!
https://redd.it/1ol5g1z
@r_devops
launching my new side project pipedash today - a desktop app for managing ci/cd pipelines from multiple providers
ideally we'd just use one ci/cd platform for everything and this wouldn't need to exist. but most of us deal with multiple platforms and i kept forgetting which pipeline was where. got tired of it so i built this.
it's new and still rough around the edges, so bugs will happen... if you run into any, just open an issue. drop a star if it helps :D
https://github.com/hcavarsan/pipedash
https://redd.it/1olc3q4
@r_devops
LDAP Injection: The Forgotten Injection Attack on Enterprise Authentication 🏢
https://instatunnel.my/blog/ldap-injection-the-forgotten-injection-attack-on-enterprise-authentication
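The article itself is behind the link; as a minimal sketch of the vulnerability class the title names (not code from the article), here is the classic string-built filter versus an escaped one, using the ldap3 library with placeholder server and base-DN values:

```python
# Minimal sketch of LDAP injection and its fix (not code from the linked
# article). Uses the ldap3 library; host and base DN are placeholders.
from ldap3 import Server, Connection
from ldap3.utils.conv import escape_filter_chars

BASE_DN = "ou=people,dc=example,dc=com"  # placeholder

def find_user_vulnerable(conn: Connection, username: str) -> bool:
    # BAD: raw interpolation. Input like "*)(objectClass=*" rewrites the filter
    # and can match every entry, turning a lookup into an auth bypass.
    conn.search(BASE_DN, f"(uid={username})", attributes=["uid"])
    return bool(conn.entries)

def find_user_safe(conn: Connection, username: str) -> bool:
    # GOOD: escape LDAP filter metacharacters (* ( ) \ NUL) before interpolating.
    conn.search(BASE_DN, f"(uid={escape_filter_chars(username)})", attributes=["uid"])
    return bool(conn.entries)

if __name__ == "__main__":
    conn = Connection(Server("ldap://ldap.example.com"), auto_bind=True)  # placeholder host
    print(find_user_safe(conn, "alice*)(objectClass=*"))  # treated as a literal string
```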
https://redd.it/1olfb06
@r_devops
I need your advice/feedback on "webhooks as a service" platforms
Hello everyone,
About a year ago, I started a side project to create a "Webhook as a Service" platform. Essentially, it lets you create a proxy for the services that send webhooks to your API (like Stripe, GitHub, or Shopify) and redirect them to multiple destinations (your API, Slack, …).
All of this with automatic retries, filters, payload transformation with JavaScript, monitoring, and alerts.
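For readers unfamiliar with the pattern: the core of such a gateway is "receive once, deliver to many, retry on failure." A generic sketch of that loop, not Hooklistener's code; the destinations and backoff values are made up:

```python
# Illustrative only: a generic "receive once, deliver to many with retries"
# loop, not Hooklistener's implementation.
import time
import requests

DESTINATIONS = [
    "https://api.internal.example.com/hooks/stripe",  # placeholder
    "https://hooks.slack.com/services/XXX/YYY/ZZZ",   # placeholder
]

def deliver(payload: dict, url: str, attempts: int = 5) -> bool:
    """POST the payload, retrying with exponential backoff on failure."""
    for attempt in range(attempts):
        try:
            resp = requests.post(url, json=payload, timeout=5)
            if resp.status_code < 500:
                return resp.ok
        except requests.RequestException:
            pass  # network error: fall through to retry
        time.sleep(2 ** attempt)  # 1s, 2s, 4s, ...
    return False

def fan_out(payload: dict) -> dict:
    """Deliver one inbound webhook to every configured destination."""
    return {url: deliver(payload, url) for url in DESTINATIONS}
```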
Additionally, I built a webhook inspector, a tool to simply debug webhooks and visualise the headers, body, etc.
The problem is that the vast majority of users are only using the webhook inspector.
I know there are already some competitors in this sector, but, as developers or infrastructure engineers, do you see this as something useful? Or should I pivot Hooklistener to something else?
Thanks to everyone for the feedback.
https://redd.it/1olk3s4
@r_devops
The plane that crashed because of a light bulb - and what it teaches about DevOps focus
In December 1972, Eastern Air Lines Flight 401 was on final approach to Miami when a small green light failed to illuminate. That tiny bulb indicated whether the nose landing gear was locked. The crew couldn’t confirm if the landing gear was down, so they climbed to 2,000 feet to troubleshoot.
All three flight crew members became fixated on that light. While they pulled apart the panel to check the bulb, the captain accidentally nudged the control yoke, which disengaged the autopilot’s altitude hold. Slowly, silently, the aircraft began to descend. Nobody noticed. The warning chime sounded, the altimeter unwound, but everyone’s attention was still on the light. Minutes later, the wide-body jet slammed into the Everglades, killing more than 100 people [1].
The landing gear was fine. It was just the bulb. The crash happened because nobody was “flying the plane.”
For DevOps and SRE teams, this is a hauntingly familiar pattern. During incidents, we sometimes fixate on one metric, one alert, one suspicious log line, while the real problem is unfolding elsewhere. Flight 401’s lesson is simple but deep: when pressure mounts, someone must always keep an eye on the system’s overall health. In aviation, they call it “Aviate, Navigate, Communicate.” In operations, it’s “Stabilize, Observe, Diagnose.”
Have clear roles. Designate an incident commander whose job is to maintain situational awareness. Don’t let a small mystery consume all attention while the system degrades unnoticed. Above all, remember to fly the plane.
I’ve explored more incidents like this, and what software teams can learn from aviation’s culture of safety, in my book Code from the Cockpit (link below). But even if you never read it, I hope Flight 401’s story stays with you next time an alert goes off.
Sources:
[1] National Transportation Safety Board, Aircraft Accident Report NTSB/AAR-73-14, “Eastern Air Lines L-1011, N310EA, Miami, Florida, December 29, 1972” (Official Investigation Report)
Book reference: Code from the Cockpit – What Software Engineering Can Learn from Aviation Disasters (https://www.amazon.com/dp/B0FKTV3NX2)
https://redd.it/1oll8zb
@r_devops
Tooling price rises
Hey,
Who here runs a lab environment to practice coding/DevOps techs?
I have an environment with TeamCity, Octopus Deploy, Prometheus, k3s, etc.
However, has anyone noticed the constant price rises in tooling?
Octopus Deploy went up (there's threads here from a year or two ago).
TeamCity renewal licensing has changed.
And for a lot of system admin tooling, likewise, e.g. Veeam and VMware.
It makes running a lab environment difficult.
https://redd.it/1olmixw
@r_devops
API Gateway horror stories?
Recently came across a post mentioning that if an API endpoint gets discovered by a mischievous bot, it may drain lots of funds from your account. Could somebody explain, please?
And maybe share stories from your own experience? Thanks all!
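Rough arithmetic of why this happens with pay-per-request gateways; the prices below are ballpark figures for a typical managed gateway plus a serverless backend, not numbers from the post:

```python
# Ballpark numbers only (not from the post): pay-per-request pricing is what
# turns an unauthenticated bot hammering a public endpoint into a bill.
REQUESTS_PER_SECOND = 1_000           # a single cheap bot or scraper
GATEWAY_PRICE_PER_MILLION = 3.50      # USD; typical managed API gateway tier
BACKEND_PRICE_PER_INVOKE = 0.0000002  # USD; serverless per-invocation charge

daily_requests = REQUESTS_PER_SECOND * 86_400
gateway_cost = daily_requests / 1_000_000 * GATEWAY_PRICE_PER_MILLION
backend_cost = daily_requests * BACKEND_PRICE_PER_INVOKE

print(f"{daily_requests:,} requests/day")
print(f"~ ${gateway_cost + backend_cost:,.2f}/day before compute time and data transfer")
```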
https://redd.it/1oljk5m
@r_devops
I created an open-source tool to fork Kubernetes environments; it's like "Git Fork" but for k8s.
Hi Folks,
I created an open-source tool that lets you create, fork, and hibernate entire Kubernetes environments.
With Forkspacer, you can fork your deployments while also migrating your data: not just the manifests, but the entire data plane as well. We support different modes of forking: by default, every fork spins up a managed, dedicated virtual cluster, but you can also point the destination of your fork at a self-managed cluster. You can even set up multi-cloud environments and fork an environment from one provider (e.g., AWS) to another (e.g., GKE, AKS, or on-prem).
You can clone full setups, test changes in isolation, and automatically hibernate idle workspaces to save resources, all declaratively, with GitOps-style reproducibility.
It’s especially useful for spinning up dev, test, pre-prod, and prod environments, and for teams where each developer needs a personal, forked environment from a shared baseline.
License is Apache 2.0 and it is written in Go using the Kubebuilder SDK.
https://github.com/forkspacer/forkspacer - source code
Please give it a try and let me know what you think. Thank you!
https://redd.it/1olo2l4
@r_devops
Understanding Docker Multi-platform Builds with QEMU
https://cefboud.com/posts/qemu-virtualzation-docker-multi-build/
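The write-up is at the link; the two moving parts it covers are registering QEMU's binfmt_misc handlers and then running a multi-platform buildx build. A minimal sketch of those two steps, shelled out from Python, with a placeholder image tag:

```python
# Minimal sketch of the two steps the title refers to, not code from the
# linked post: 1) register QEMU binfmt handlers, 2) run a multi-platform
# buildx build. Image/tag names are placeholders.
import subprocess

def register_qemu() -> None:
    """Install QEMU binfmt_misc handlers so the host can run foreign-arch binaries."""
    subprocess.run(
        ["docker", "run", "--privileged", "--rm", "tonistiigi/binfmt", "--install", "all"],
        check=True,
    )

def buildx_multiplatform(tag: str, platforms: str = "linux/amd64,linux/arm64") -> None:
    """Build one tag for several architectures in a single buildx invocation."""
    subprocess.run(
        ["docker", "buildx", "build", "--platform", platforms, "-t", tag, "."],
        check=True,
    )

if __name__ == "__main__":
    register_qemu()
    buildx_multiplatform("registry.example.com/app:dev")  # placeholder tag
```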
https://redd.it/1olod4p
@r_devops
VS Code extension for dependency CVE scanning
VulScan-MCP scans project manifests for security vulnerabilities.
Queries NVD and OSV APIs for CVE data. Integrates with GitHub Copilot via Model Context Protocol.
Supports npm, pip, Maven, Go modules, Cargo, and more.
Open source: https://github.com/abhishekrai43/VulScan-MCP
Try it if you want CVE scanning in your editor.
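The post doesn't show how the lookups work; as a rough illustration of the OSV half, a single package/version query against the public api.osv.dev endpoint looks like this (the extension's own code may differ):

```python
# Rough illustration of an OSV lookup like the one the extension describes;
# this hits the public api.osv.dev query endpoint, not VulScan-MCP's own code.
import requests

def osv_vulns(name: str, version: str, ecosystem: str = "PyPI") -> list[str]:
    """Return advisory IDs affecting one package version, per OSV."""
    resp = requests.post(
        "https://api.osv.dev/v1/query",
        json={"package": {"name": name, "ecosystem": ecosystem}, "version": version},
        timeout=10,
    )
    resp.raise_for_status()
    return [v["id"] for v in resp.json().get("vulns", [])]

if __name__ == "__main__":
    # Example: an old requests release with known advisories.
    print(osv_vulns("requests", "2.25.0"))
```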
https://redd.it/1olpe9t
@r_devops
A simple shell script that creates rootless podman containers to automate any task: building GitHub projects, kernels, applications, etc.
Description: A simple shell script that uses buildah to create customized OCI/Docker images and podman to deploy rootless containers designed to automate compilation/building of GitHub projects, applications and kernels, as well as any other containerized task or service. Pre-defined environment variables, various command options, native integration of all containers with apt-cacher-ng, live log monitoring with neovim, and the use of tmux to consolidate container access ensure maximum flexibility and efficiency during container use.
Url: https://github.com/tabletseeker/pod-buildah
https://redd.it/1olzbox
@r_devops
A simple shell script that creates rootless podman containers to automate any task: building GitHub projects, kernels, applications, etc.
Description: A simple shell script that uses buildah to create customized OCI/Docker images and podman to deploy rootless containers designed to automate compilation/building of GitHub projects, applications and kernels, as well as any other containerized task or service. Pre-defined environment variables, various command options, native integration of all containers with apt-cacher-ng, live log monitoring with neovim, and the use of tmux to consolidate container access ensure maximum flexibility and efficiency during container use.
Url: https://github.com/tabletseeker/pod-buildah
https://redd.it/1olz4ug
@r_devops
Exploring low latency audio AI agents for live communication 🎧
I've been messing with some real-time, audio-based AI agents to handle latency, reasoning, and synchronization when assisting during live human interviews, meetings, conferences, etc.
The best examples I've found so far are the Cogniear, LockedIn and Parakeet AI agents, all focused on real-time spoken coaching rather than text.
- Cogniear.com works as an end-to-end reasoning loop: it listens, understands, and whispers a full spoken response in under 2 seconds.
- LockedInAI acts as a contextual tone coach, analyzing your confidence and phrasing during meetings.
- ParakeetAI focuses on improving clarity, cadence, and emotional delivery in real time.
It feels like early-stage “symbiotic audio reasoning” where human speech and AI processing overlap instead of alternating turns.
Questions for devs:
- What's the most efficient way to reduce inference lag in real-time voice reasoning systems?
- How can multi-agent voice models maintain coherent dialogue flow without desyncing?
- Anyone try prototyping something similar using streaming inference or hybrid STT/TTS pipelines?
Has anyone here tried something like that? Would love to hear your experiences with any real-time, audio-based AI agents.
https://redd.it/1om2cxh
@r_devops
Hosting my CI/CD setup on a smaller EU cloud turned out smoother than I expected
I’ve been testing a few European clouds for my CI/CD setup, mainly Xelon.ch, Hetzner, and Scaleway. All of them did fine tbh, but I really liked how smooth and minimal the Xelon dashboard feels.
Setup took maybe 15 mins total, Jenkins + containers + auto backups. Everything runs from ISO-certified Swiss data centers, which adds a nice layer of trust.
Hetzner’s pricing is great, Scaleway’s UI is clean, but Xelon felt like the right mix of speed, compliance, and stability.
Anyone else here using EU-based clouds for pipelines or container work?
https://redd.it/1om5tqo
@r_devops
GraphQL Batching Attacks: How 100 Queries Become 10,000 Database Calls 📊
https://instatunnel.my/blog/graphql-batching-attacks-how-100-queries-become-10000-database-calls
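The article is behind the link, but the amplification in the title is easy to sketch: one HTTP request can carry an array of operations (or one operation stuffed with aliases), and each of them runs resolvers and database queries. Below is a framework-agnostic sketch of the probe plus the simplest guard, a batch-size cap; alias and query-cost limits are the complementary defenses and are not shown here.

```python
# Illustration of the amplification in the title plus the simplest guard:
# cap how many operations a single HTTP request may carry before executing
# anything. Framework-agnostic sketch, not code from the linked article.
MAX_BATCH = 10

def build_batched_probe(n: int) -> list[dict]:
    """One POST body carrying n operations: n resolver runs, n+ DB calls."""
    return [{"query": f"query {{ user(id: {i}) {{ email }} }}"} for i in range(n)]

def reject_oversized(batch: list[dict]) -> None:
    """Server-side guard: refuse oversized array batches up front."""
    if len(batch) > MAX_BATCH:
        raise ValueError(f"batch of {len(batch)} operations exceeds limit of {MAX_BATCH}")

if __name__ == "__main__":
    probe = build_batched_probe(100)  # "100 queries" in one request
    try:
        reject_oversized(probe)
    except ValueError as err:
        print("rejected:", err)
```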
https://redd.it/1om7jq7
@r_devops