Automate KVM image creation for testing purposes
I'm trying to clean up the testing workflow for a project I'm working on, a database built on top of io_uring and NVMe.
Right now I'm using KVM and its NVMe device emulator to power the dev environment, but the developer experience is poor: I have a script to recreate the KVM image, but it requires some manual steps, and I don't want to commit the KVM image itself for obvious reasons.
My questions are:
- Is there an alternative to dockerfiles for KVM images?
- If not, what are my best options for my use case?
- What other options do I have to emulate NVMe devices?
Things I tried:
- Running the nvmevirt device emulator, but it's not suitable for my test environment because it requires loading a kernel module
- Mocking an NVMe device with some code and a memory-backed file, but it's not real testing
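There's no Dockerfile equivalent baked into KVM itself, but the manual steps can usually be scripted end to end. A minimal sketch, assuming virt-builder (libguestfs) for the image and QEMU's built-in `nvme` device; all file names here are placeholders, not from the original post:

```shell
#!/bin/sh
# Sketch: rebuild the dev image non-interactively, then boot it with an
# emulated NVMe namespace. Image names are placeholders.
set -eu

IMG=devbox.qcow2
NVME_IMG=nvme-scratch.img

build_cmds() {
  # virt-builder bakes a fresh OS image with no interactive steps
  echo "virt-builder debian-12 --format qcow2 -o $IMG --install build-essential"
  # a sparse file backs the emulated NVMe namespace
  echo "truncate -s 8G $NVME_IMG"
  # QEMU exposes the file to the guest as a real NVMe controller + namespace
  echo "qemu-system-x86_64 -enable-kvm -m 4G \
    -drive file=$IMG,if=virtio \
    -drive file=$NVME_IMG,if=none,id=nvm \
    -device nvme,serial=deadbeef,drive=nvm"
}

# Print the commands rather than running them, so the sketch is inspectable
build_cmds
```

Nothing needs to be committed except this script: the image is a build artifact, like a container layer.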
https://redd.it/1pln6hj
@r_devops
Exposing Services on a KIND Cluster on Contabo VPS, MetalLB vs cloud-provider-kind?
I'm setting up a test Kubernetes environment on a Contabo VPS, using KIND to spin up the cluster.
I’m figuring out the least hacky way to expose services externally.
So far, I see two main options:
1. MetalLB
2. cloud-provider-kind
My goal isn’t production traffic, but I do want something that:
- Behaves close to real Kubernetes networking
- Doesn’t rely on NodePort hacks
- Is reasonable for CI/testing
For those who’ve run KIND on VPS providers like Contabo/Hetzner:
Which approach did you settle on?
Any gotchas with MetalLB on a single-node KIND cluster?
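For the single-node MetalLB route, the current configuration is two CRDs rather than the old ConfigMap. A sketch with a placeholder address range; on a VPS the pool usually has to be the node's own routable IP, which this example cannot know:

```yaml
# Hypothetical pool; 203.0.113.10 stands in for the VPS public IP.
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: vps-pool
  namespace: metallb-system
spec:
  addresses:
    - 203.0.113.10/32
---
# L2 mode answers ARP for the pool addresses on the node's interface.
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: vps-l2
  namespace: metallb-system
spec:
  ipAddressPools:
    - vps-pool
```

On a single node L2 mode has no failover to offer, so the main thing MetalLB buys here is `type: LoadBalancer` semantics that match real clusters.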
https://redd.it/1plv49u
@r_devops
GitHub - eznix86/kseal: CLI tool to view, export, and encrypt Kubernetes SealedSecrets.
I’ve been using *kubeseal* (the Bitnami sealed-secrets CLI) on my clusters for a while now, and all my secrets stay sealed with Bitnami SealedSecrets so I can safely commit them to Git.
At first I had a bunch of *bash* one-liners and little helpers to export secrets, view them, or re-encrypt them in place. That worked… until it didn’t. Every time I wanted to peek inside a secret or grab all the sealed secrets out into plaintext for debugging, I’d end up reinventing the wheel. So naturally I thought:
>“Why not wrap this up in a proper script?”
Fast forward a few hours later and I ended up with **kseal** — a tiny Python CLI that sits on top of kubeseal and gives me a few things that made my life easier:
* `kseal cat`: print a decrypted secret right in the terminal
* `kseal export`: dump secrets to files (local or from cluster)
* `kseal encrypt`: seal plaintext secrets using `kubeseal`
* `kseal init`: generate a config so you don’t have to rerun the same flags forever
You can install it with pip/pipx and run it wherever you already have access to your cluster. It’s basically just automating the stuff I was doing manually and providing a consistent interface instead of a pile of ad-hoc scripts. ([GitHub](https://github.com/eznix86/kseal/))
It is just something that *helped me* and maybe helps someone else who’s tired of:
* remembering kubeseal flags
* juggling secrets in different dirs
* reinventing small helper scripts every few weeks
Check it out if you’re in the same boat: [https://github.com/eznix86/kseal/](https://github.com/eznix86/kseal/)
https://redd.it/1plw3n7
@r_devops
Looking for Slack App Feedback - Slack --> Github/Linear Issues
As a systems engineer (clearly used to writing too many user stories), I tend to have many ideas that get lost in chat or that I need to copy-paste over to GitHub. I was playing around in Discord and got a pretty handy tool (for me at least) going, where I react to URLs or messages and port those over into GitHub. I refer to the process as Capture, Clean, Create.
**What it does:**
- React with an emoji to any message with a URL → creates a GitHub issue or Linear ticket
- Use `/idea capture` to summarize the last N messages into a structured issue
- AI extracts the title, summary, category, and key points automatically
Just looking for some feedback on whether this is a useful tool for you, mostly for developers/PMs. Outside of Slack/GitHub it currently supports Linear and Discord; Jira and Teams are next up.
https://slack.com/oauth/v2/authorize?client_id=9193114002786.10095883648134&scope=channels:history,channels:read,chat:write,reactions:read,users:read,team:read,commands&user_scope=
https://redd.it/1pltrez
@r_devops
Multi region AI deployment and every country has different data residency laws, compliance is impossible.
We are expanding our AI product to Europe and Asia and thought we had compliance figured out, but Germany requires data processed in Germany, France has different rules, Singapore different, Japan even more strict. We tried regional deployments, but then we have data sync problems and model consistency issues; we tried to centralize, but that violates residency laws.
The legal team sent us a spreadsheet with 47 rows of different rules per country, and some contradict each other. How are companies with global AI products handling this? It feels like we need a different deployment per country, which is impossible to maintain.
https://redd.it/1plyiz1
@r_devops
Terraform still? - I live under a rock
Apparently, I live under a rock and missed that terraform/IBM caused quite a bit of drama this year.
I'm a DE who is working to build his own server, which I'll be using for fun and some learning for a little job security. My employer does not have an IaC solution right now or I would just choose whatever they were going with, but I am kind of at a loss on what tool I should be using. I'll be using Proxmox and a mix of LXCs and VMs to deploy Ubuntu Server and SQL Server instances, as well as some Azure resources.
Originally I planned on using Terraform, but with everything I've been reading it sounds like Terraform is losing its market share to OpenTofu and Pulumi. With my focus being on learning and job security as a data engineer, is there an obvious choice in IaC solution for me?
Go easy, I fully admit I'm a rookie here.
https://redd.it/1pm49co
@r_devops
I need help figuring out what this is called and where to start.
My manager just let me know that I will be taking over the Terraform repo for Azure AI/ML, because one of my teammates left and the one who trained under him did not pick up anything.
The AI/ML project will be resuming next month with the dev side starting to train their own models. My manager told me to self study to prep myself for it.
Right now the Terraform repo is used to deploy models and build the endpoints, but that is it, at least from what I can see. I was able to deploy a test instance and learn how to deploy them in different regions, etc. However, my manager said that, as of right now, I will also be responsible for building out the infra for devs to train their own ML models and for making sure we have high availability. I may be doing more, but we are not sure yet. The dev that I talked to also said the same thing.
Is this considered platform ops? MLops? AI engineer? Would the Azure AI Engineer cert be the thing for me?
Does anyone do something similar and can give me some recommendations on learning resources? Or can give me an idea of what other things you do related to this? (build out, iac, pipeline, etc. ) I can try to ask my company for pluralsight access if there is anything good there. I already have kodekloud but haven't been through the material since I've been busy but is there anything there that you would recommend?
I'm super excited but also overwhelmed since this is new to me and the company.
https://redd.it/1pm5r94
@r_devops
Sensitive Data in Error Messages: When Your Stack Traces Give Away the Database Schema 📋
https://instatunnel.my/blog/sensitive-data-in-error-messages-when-your-stack-traces-give-away-the-database-schema
https://redd.it/1pm4bs0
@r_devops
Supply chain compromises why runtime matters
Even if your dependencies are “safe” at build time, runtime can reveal malicious activity. It’s kind of scary how one tiny package can create huge issues once workloads are live.
This blog explains how these runtime threats show up: link
Do you monitor runtime behaviors for dependencies, or mostly rely on pre-deployment scans?
https://redd.it/1pmcc2c
@r_devops
Built an LLM-powered GitHub Actions failure analyzer (no PR spam, advisory-only)
Hi all,
As a DevOps engineer, I often realize that I still spend too much time reading failed GitHub Actions logs.
After a quick search, I couldn’t find anything that focuses specifically on **post-mortem analysis of failed CI jobs**, so I built one myself.
What it does:
- Runs only when a GitHub Actions job fails
- Collects and normalizes job logs
- Uses an LLM to explain the root cause and suggest possible fixes
- Publishes the result directly into the Job Summary (no PR spam, no comments)
Key points:
- Language-agnostic (works with almost any stack that produces logs)
- LLM-agnostic (OpenAI / Claude / OpenRouter / self-hosted)
- Designed for DevOps workflows, not code review
- Optimizes logs before sending them to the LLM to reduce token cost
This is advisory-only (no autofix), by design.
You can find and try it here:
https://github.com/ratibor78/actions-ai-advisor
I’d really appreciate feedback from people who live in CI/CD every day:
What would make this genuinely useful for you?
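For anyone unfamiliar with the Job Summary mechanism the tool relies on: it's plain GitHub Actions. Any step can append Markdown to `$GITHUB_STEP_SUMMARY`, and `if: failure()` gates a step to failed runs. A generic sketch of that pattern (the build command, step names, and log path are placeholders, not this action's real inputs):

```yaml
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Capture logs to a file while still failing the step on error
      - run: |
          set -o pipefail
          make test 2>&1 | tee build.log
      # Runs only if an earlier step failed; output lands in the Job Summary
      - name: Summarize failure
        if: failure()
        run: |
          {
            echo "## Why this job failed"
            tail -n 50 build.log
          } >> "$GITHUB_STEP_SUMMARY"
```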
https://redd.it/1pmdb1i
@r_devops
BCP/DR/GRC at your company: real readiness, or mostly paperwork?
I'm entering a position as an SRE group lead.
I’m trying to better understand how **BCP, DR, and GRC actually work in practice**, not how they’re supposed to work on paper.
In many companies I’ve seen, there are:
* Policies, runbooks, and risk registers
* SOC2 / ISO / internal audits that get “passed”
* Diagrams and recovery plans that look good in reviews
But I’m curious about the **day-to-day reality**:
* When something breaks, **do people actually use the DR/BCP docs?**
* How often are DR or recovery plans *really* tested end-to-end?
* Do incident learnings meaningfully feed back into controls and risk tracking - or does that break down?
* Where do things still rely on spreadsheets, docs, or tribal knowledge?
I’m not looking to judge — just trying to learn from people who live this.
What surprised you the most during a real incident or audit?
(LMK what's the company size - cause I guess it's different in each size)
https://redd.it/1pmg8a9
@r_devops
Best way to isolate Development Environments without Docker/Hyper-V?
I really dislike polluting my main OS with development tools, runtimes, and dependencies.
For a while, I’ve been using Docker to solve this problem, and in many ways it works well.
However, it’s a very advanced tool with lots of features I never actually use, and for the simple goal of isolating each project’s development environment it often feels like overkill.
On top of that, Docker tends to run in the background if you forget to shut it down, consumes a noticeable amount of system resources, and (on Windows) requires Hyper-V/WSL2, which adds even more overhead.
I’m wondering if there are simpler or lighter alternatives for keeping development environments isolated without “polluting” the host OS. I just want to keep it simple.
https://redd.it/1pmfb2e
@r_devops
ingress-nginx retiring March 2026 - what's your migration plan?
So the official **Kubernetes ingress-nginx** is being retired (announcement from SIG Network in November). Best-effort maintenance **until March 2026**, then no more updates or security patches.
Currently evaluating options for our GKE clusters (~160 ingresses):
* **Envoy Gateway** (Gateway API native) - seems like the "future-proof" choice
* **F5 NGINX Ingress Controller** - different project, still maintained, easier migration path
* **Traefik** - heard good things, anyone running it at scale?
* **Istio Gateway** - feels overkill if we don't need full service mesh
For those already migrating or who've made the switch:
* What did you choose and why?
* How painful was moving away from annotation hell?
* Is Gateway API mature enough for prod?
Leaning toward Envoy Gateway but curious about real-world experiences.
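On the maturity question: the core north-south resources (GatewayClass, Gateway, HTTPRoute) went GA in `gateway.networking.k8s.io/v1`. A minimal HTTPRoute with placeholder names, replacing a typical host+path Ingress rule:

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: app-route          # placeholder
spec:
  parentRefs:
    - name: envoy-gateway  # the Gateway this route attaches to (placeholder)
  hostnames: ["app.example.com"]
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /
      backendRefs:
        - name: app-svc    # placeholder Service
          port: 80
```

Much of ingress-nginx's "annotation hell" (rewrites, header mods, traffic splits) maps to typed fields on HTTPRoute rules instead of controller-specific strings, which is most of the migration work.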
https://redd.it/1pmkjqq
@r_devops
One Ubuntu setting that quietly breaks services: ulimit -n
I’ve seen enough strange production issues turn out to be one OS limit most of us never check: ulimit -n caused random 500s, frozen JVMs, dropped SSH sessions, and broken containers.
Wrote this from personal debugging pain, not theory.
Curious how many others have been bitten by this.
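For anyone who wants to check before clicking through: the soft limit is the one a process actually hits, and it's per-process, so the shell's value only tells you about things launched from that shell. A quick sketch (the systemd snippet is the usual fix for daemons, since units don't read /etc/security/limits.conf):

```shell
# Soft limit: what a process hits first when opening files/sockets
ulimit -Sn
# Hard limit: the ceiling an unprivileged process may raise the soft limit to
ulimit -Hn

# For a daemon managed by systemd, the limit comes from the unit file:
#   [Service]
#   LimitNOFILE=65536
```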
Link : https://medium.com/stackademic/the-one-setting-in-ubuntu-that-quietly-breaks-your-apps-ulimit-n-f458ab437b7d?sk=4e540d4a7b6d16eb826f469de8b8f9ad
https://redd.it/1pmjooe
@r_devops
Stay in a stable job or work for an AI company.
Hi,
I am working for a company in Berlin as a senior infrastructure engineer. The company is stable but does not pay well. I am working on impactful projects and working hard. I asked for a raise, but it seems I will not get a significant increase, maybe 5-8%.
Meanwhile, I am interviewing with an AI company, not EU-based. It got 130M in investment last year and wants to expand in EMEA.
They pay ~30% more than what I make at the moment.
Given the market, does it make sense to take the risk or stay in a stable job for a while until the market gets better?
https://redd.it/1pmn9hh
@r_devops
Anyone automating their i18n/localization workflow in CI/CD?
My team is building towards launching in new markets, and the manual translation process is becoming a real bottleneck. We've been exploring ways to integrate localization automation into our DevOps pipeline.
Our current setup involves manually extracting JSON strings, sending them out for translation, and then manually re-integrating them—it’s slow and error-prone. I've been looking at ways to make this a seamless part of our "develop → commit → deploy" flow.
One tool I came across and have started testing for this is the Lingo.dev CLI. It's an open-source, AI-powered toolkit designed to handle translation automation locally and fits into a CI/CD pipeline. Its core feature seems to be that you point it at your translation files, and it can automatically translate them using a specified LLM, outputting files in the correct structure.
The concept of integrating this into a pipeline looks powerful. For instance, you can configure a GitHub Action to run the lingo.dev i18n command on every push or pull request. It uses an i18n.lock file with content checksums to translate only changed text, which keeps costs down and speeds things up.
I'm curious about the practical side from other DevOps/SRE folks:
- When does automation make sense? Do you run translations on every PR, on merges to main, or as a scheduled job?
- Handling the output: Do you commit the newly generated translation files directly back to the feature branch or PR? What does that review process look like?
- Provider choice: The CLI seems to support both "bring your own key" (e.g., OpenAI, Anthropic) and a managed cloud option. Any strong opinions on managing API keys/credential rotation in CI vs. using a managed service?
- Rollback & state: The checksum-based lock file seems crucial for idempotency. How do you handle scenarios where you need to roll back a batch of translations or audit what was changed?
Basically, I'm trying to figure out if this "set it and forget it" approach is viable or if it introduces more complexity than it solves. I'd love to hear about your real-world implementations, pitfalls, or any alternative tools in this space.
https://redd.it/1pmnax4
@r_devops
How to master
Amid mass layoffs and restructuring, I ended up on a DevOps team, coming from a backend engineering team.
It’s been a couple of months. I am mostly doing pipeline support work, meaning application teams use our templates and infra and we support them in all areas from onboarding to stability.
There are a ton of teams and their stacks are very different (hence the templates). How do I get a grasp of all the pieces?
I know seeking help without giving a ton of info is hard, but I’d like to know if there is a framework I can follow to understand all the moving parts.
We are on Gitlab and AWS. Appreciate your help.
https://redd.it/1pmsh7u
@r_devops
How long will Terraform last?
It's a Sunday thought, but I am basically 90% Terraform at my current job. Everything else is learning new tech stacks that I deploy with Terraform, or maybe a script or two in Bash or PowerShell.
My Sunday night thought is: what will replace Terraform? I really like it. I hated Bicep; no state file, and you can't expand outside the Azure ecosystem.
Pulumi is too developer-oriented and I'm an infra guy. I guess if it gets to the point where developers can fully grasp infra, they could take over via Pulumi.
That's about as far as I can think.
https://redd.it/1pmzitq
@r_devops
How do you convince leadership to stop putting every workload into Kubernetes?
Looking for advice from people who have dealt with this in real life.
One of the clients I work with has multiple internal business applications running on Azure. These apps interact with on-prem data, Databricks, SQL Server, Postgres, etc. The workloads are data-heavy, not user-heavy. Total users across all apps is around 1,000, all internal.
A year ago, everything was decoupled. Different teams owned their own apps, infra choices, and deployment patterns. Then a platform manager pushed a big initiative to centralize everything into a small number of AKS clusters in the name of better management, cost reduction, and modernization.
Fast forward to today, and it’s a mess. Non-prod environments are full of unused resources, costs are creeping up, and dev teams are increasingly reckless because AKS is treated as an infinite sink.
What I’m seeing is this: a handful of platform engineers actually understand AKS well, but most developers do not. That gap is leading to:
1. Deployment bottlenecks and slowdowns due to Helm, Docker, and AKS complexity
2. Zero guardrails on AKS usage, where even tiny Python scripts are deployed as cron jobs in Kubernetes
3. Batch jobs, experiments, long-running services, and one-off scripts all dumped into the same clusters
4. Overprovisioned node pools and forgotten workloads in non-prod running 24x7
5. Platform teams turning into a support desk instead of building a better platform
At this point, AKS has become the default answer to every problem. Need to run a script? AKS. One-time job? AKS. Lightweight data processing? AKS. No real discussion of whether Functions, ADF, Databricks jobs, VMs, or even simple schedulers would be more appropriate.
My question to the community: how have you successfully convinced leadership or clients to stop over-engineering everything and treating Kubernetes as the only solution? What arguments, data points, or governance models actually worked for you?
https://redd.it/1pn0h49
@r_devops
Advice Needed for Following DevOps Path
Ladies and gentlemen, I am grateful in advance for your support and assistance.
I need advice about my DevOps path. I am self-taught, have used Linux since 2008, and love it so much that I went to study DevOps by doing: I used AI tools to create real-world scenarios for DevOps + RHCSA + RHCE and uploaded them to GitHub in 3 repos (2 projects). I know getting stuck is part of the path, especially in DevOps, and I know I am not good at asking for help; I struggle with how to ask and where.
I would like someone to check my projects and repos and give me an overview of the work: is it good enough that I should continue on this path, or should I look for another career?
Project 1 (first 2 repos: Linux, Automation) is finished. Project 2 (last repo: High Availability) is still incomplete, at Milestone 0. I have struggled for a long time with connecting to the private instances from the public instances; I am using AWS and have tried a lot, from SSH to the AWS SSM plugins, and still can't do it.
In summary, I want advice to decide whether to carry on with DevOps or not.
Links:
Project 01 ( Repo 01 + Repo 02 ) | RHCSA & RHCE Path
01 - **enterprise-linux-basics-Prjct\_01**
02 - **linux-automation-infrastructure-Prjct\_02**
Project 02 ( Repo 03 ) | High Availability
03 - **linux-high-availability-Prjct\_03**
https://redd.it/1pn2bm0
@r_devops
How do you know which feature changed, to determine which script to run in a CI/CD pipeline?
Hi,
I think I have set up almost everything and have this one issue left. Currently the repo contains a lot of features. When someone enhances one feature and creates a PR, do you run the tests for all the features?
Let's say I have 2 scripts: script/register_model_a and script/register_model_b. These register scripts create a new version, run the evaluation, and log to MLflow.
But I don't know what the best practice is for this case. Would you define a folder for each module and detect which folder the changed files are in, to decide which feature is being enhanced? Or just run all the tests?
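The folder-based approach described above can be sketched as a small mapping from changed paths to scripts. Everything here is hypothetical (the folder names model_a/, model_b/ and the script paths are assumptions for illustration); in CI, the list of changed files would come from something like `git diff --name-only origin/main...HEAD`.

```python
# Sketch: pick which registration scripts to run based on changed paths.
# Folder and script names are hypothetical, matching the post's example.

SCRIPT_MAP = {
    "model_a": "script/register_model_a",
    "model_b": "script/register_model_b",
}


def scripts_to_run(changed_files):
    """Map changed file paths to the scripts whose folders they touch.

    Any change outside a known model folder (shared code, CI config)
    is safest to treat as "run everything".
    """
    selected = set()
    for path in changed_files:
        top = path.split("/", 1)[0]  # top-level folder of the changed file
        if top in SCRIPT_MAP:
            selected.add(SCRIPT_MAP[top])
        else:
            return sorted(SCRIPT_MAP.values())
    return sorted(selected)


if __name__ == "__main__":
    # In CI, feed this the output of: git diff --name-only origin/main...HEAD
    changed = ["model_a/train.py", "model_a/eval.py"]
    print(scripts_to_run(changed))  # -> ['script/register_model_a']
```

The fallback-to-all rule is the important design choice: path filtering only saves time safely if shared changes still trigger every pipeline.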
Thank you!
https://redd.it/1pn3g69
@r_devops