Reddit DevOps – Telegram
Looking for Slack App Feedback - Slack --> GitHub/Linear Issues

As a systems engineer (clearly used to writing too many user stories), I tend to have many ideas that get lost in chat or that I need to copy-pasta over to GitHub. I was playing around in Discord and got a pretty handy tool going (for me at least) where I react to URLs or messages and port those over into GitHub. I refer to the process as Capture, Clean, Create.

**What it does:**

- React with an emoji to any message with a URL → creates a GitHub issue or Linear ticket

- Use `/idea capture` to summarize the last N messages into a structured issue

- AI extracts a description, summary, category, and key points automatically
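
For anyone curious about the mechanics, here is a minimal sketch of the reaction-to-issue flow, assuming Slack's Bolt SDK and the GitHub REST API. The trigger emoji, repo placeholders, and token handling are my illustrative assumptions, not the app's actual implementation:

```python
import os
import requests
from slack_bolt import App

app = App(token=os.environ["SLACK_BOT_TOKEN"],
          signing_secret=os.environ["SLACK_SIGNING_SECRET"])

@app.event("reaction_added")
def capture_message(event, client):
    if event["reaction"] != "memo":  # placeholder trigger emoji
        return
    # Fetch the exact message that was reacted to.
    item = event["item"]
    history = client.conversations_history(
        channel=item["channel"], latest=item["ts"], inclusive=True, limit=1)
    text = history["messages"][0]["text"]
    # Create a GitHub issue from it (truncated title, full text as body).
    requests.post(
        "https://api.github.com/repos/OWNER/REPO/issues",
        headers={"Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}"},
        json={"title": text[:80], "body": text},
        timeout=10,
    ).raise_for_status()

if __name__ == "__main__":
    app.start(port=3000)
```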

Just looking for feedback on whether this is a useful tool for you, mostly from developers/PMs. Besides Slack/GitHub, it currently supports Linear and Discord; Jira and Teams are next up.

https://slack.com/oauth/v2/authorize?client_id=9193114002786.10095883648134&scope=channels:history,channels:read,chat:write,reactions:read,users:read,team:read,commands&user_scope=

https://redd.it/1pltrez
@r_devops
Multi-region AI deployment: every country has different data residency laws, compliance is impossible.

We are expanding our AI product to Europe and Asia and thought we had compliance figured out, but Germany requires data to be processed in Germany, France has different rules, Singapore different again, Japan even stricter. We tried regional deployments, but then we have data sync problems and model consistency issues; we tried to centralize, but that violates residency laws.

The legal team sent us a spreadsheet with 47 rows of different rules per country, and some contradict each other. How are companies with global AI products handling this? It feels like we need a different deployment per country, which is impossible to maintain.

https://redd.it/1plyiz1
@r_devops
Terraform still? - I live under a rock

Apparently, I live under a rock and missed that Terraform/IBM caused quite a bit of drama this year.


I'm a DE (data engineer) who is working to build his own server, which I'll be using for fun and some learning for a little job security. My employer does not have an IaC solution right now, or I would just choose whatever they were going with, but I am kind of at a loss on what tool I should be using. I'll be using Proxmox with a mix of LXCs and VMs to deploy Ubuntu Server and SQL Server instances, as well as some Azure resources.


Originally I planned on using Terraform, but from everything I've been reading it sounds like Terraform is losing market share to OpenTofu and Pulumi. With my focus being on learning and job security as a data engineer, is there an obvious choice of IaC solution for me?



Go easy, I fully admit I'm a rookie here.

https://redd.it/1pm49co
@r_devops
I need help figuring out what this is called and where to start.

My manager just let me know that I will be taking over the Terraform repo for Azure AI/ML, because one of my teammates left and the one who trained under him did not pick up anything.

The AI/ML project will be resuming next month, with the dev side starting to train their own models. My manager told me to self-study to prep for it.

Right now the Terraform repo is used to deploy models and build the endpoints, but that is it, at least from what I can see. I was able to deploy a test instance and learn how to deploy them in different regions, etc. However, my manager said that as of right now, I will also be responsible for building out the infra for devs to train their own ML models and for making sure we have high availability. I may be doing more, but we are not sure yet. The dev that I talked to said the same thing.

Is this considered platform ops? MLOps? AI engineering? Would the Azure AI Engineer cert be the right thing for me?

Does anyone do something similar who can give me some recommendations on learning resources? Or give me an idea of what else you do related to this (build-out, IaC, pipelines, etc.)? I can try to ask my company for Pluralsight access if there is anything good there. I already have KodeKloud but haven't been through the material since I've been busy; is there anything there that you would recommend?

I'm super excited but also overwhelmed since this is new to me and the company.

https://redd.it/1pm5r94
@r_devops
Supply chain compromises: why runtime matters

Even if your dependencies are “safe” at build time, runtime can reveal malicious activity. It’s kind of scary how one tiny package can create huge issues once workloads are live.

This blog explains how these runtime threats show up: link

Do you monitor runtime behaviors for dependencies, or mostly rely on pre-deployment scans?

https://redd.it/1pmcc2c
@r_devops
Built an LLM-powered GitHub Actions failure analyzer (no PR spam, advisory-only)

Hi all,

As a DevOps engineer, I often realize that I still spend too much time reading failed GitHub Actions logs.

After a quick search, I couldn’t find anything that focuses specifically on **post-mortem analysis of failed CI jobs**, so I built one myself.

What it does:

- Runs only when a GitHub Actions job fails

- Collects and normalizes job logs

- Uses an LLM to explain the root cause and suggest possible fixes

- Publishes the result directly into the Job Summary (no PR spam, no comments)



Key points:

- Language-agnostic (works with almost any stack that produces logs)

- LLM-agnostic (OpenAI / Claude / OpenRouter / self-hosted)

- Designed for DevOps workflows, not code review

- Optimizes logs before sending them to the LLM to reduce token cost



This is advisory-only (no autofix), by design.
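
For context, the general pattern behind a tool like this is easy to sketch. Here is a hypothetical minimal version (not the project's actual code), assuming the OpenAI Python SDK and the GITHUB_STEP_SUMMARY file that the Actions runner exposes:

```python
import os
from openai import OpenAI

def summarize_failure(log_text: str) -> str:
    # Trim the log: the last few hundred lines usually contain the error.
    tail = "\n".join(log_text.splitlines()[-300:])
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "You are a CI failure analyst. Explain the root "
                        "cause of this failed job and suggest fixes."},
            {"role": "user", "content": tail},
        ],
    )
    return resp.choices[0].message.content

if __name__ == "__main__":
    with open("job.log") as f:  # hypothetical collected log file
        analysis = summarize_failure(f.read())
    # Markdown appended to GITHUB_STEP_SUMMARY shows up in the job's
    # Summary tab, so no PR comments are needed.
    with open(os.environ["GITHUB_STEP_SUMMARY"], "a") as summary:
        summary.write("## CI failure analysis\n\n" + analysis + "\n")
```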



You can find and try it here:

https://github.com/ratibor78/actions-ai-advisor



I’d really appreciate feedback from people who live in CI/CD every day:

What would make this genuinely useful for you?



https://redd.it/1pmdb1i
@r_devops
BCP/DR/GRC at your company: real readiness or mostly paperwork?

I'm entering a position as an SRE group lead.
I’m trying to better understand how **BCP, DR, and GRC actually work in practice**, not how they’re supposed to work on paper.

In many companies I’ve seen, there are:

* Policies, runbooks, and risk registers
* SOC2 / ISO / internal audits that get “passed”
* Diagrams and recovery plans that look good in reviews

But I’m curious about the **day-to-day reality**:

* When something breaks, **do people actually use the DR/BCP docs?**
* How often are DR or recovery plans *really* tested end-to-end?
* Do incident learnings meaningfully feed back into controls and risk tracking - or does that break down?
* Where do things still rely on spreadsheets, docs, or tribal knowledge?

I’m not looking to judge — just trying to learn from people who live this.

What surprised you the most during a real incident or audit?

(LMK your company size - I'd guess it's different at each size.)


https://redd.it/1pmg8a9
@r_devops
Best way to isolate Development Environments without Docker/Hyper-V?

I really dislike polluting my main OS with development tools, runtimes, and dependencies.
For a while, I’ve been using Docker to solve this problem, and in many ways it works well.

However, It’s a very advanced tool, it has lots of things and I never actually use and for the simple goal of isolating each project’s development environment, therefore it often feels like overkill.

On top of that, Docker tends to run in the background if you forget to shut it down, consumes a noticeable amount of system resources, and (on Windows) requires Hyper-V/WSL2, which adds even more overhead.

I’m wondering if there are simpler or lighter alternatives for keeping development environments isolated without “polluting” the host OS. I just want to keep it simple.

https://redd.it/1pmfb2e
@r_devops
ingress-nginx retiring March 2026 - what's your migration plan?

So the official **Kubernetes ingress-nginx** is being retired (announcement from SIG Network in November). Best-effort maintenance **until March 2026**, then no more updates or security patches.

Currently evaluating options for our GKE clusters (~160 ingresses):

* **Envoy Gateway** (Gateway API native) - seems like the "future-proof" choice
* **F5 NGINX Ingress Controller** - different project, still maintained, easier migration path
* **Traefik** - heard good things, anyone running it at scale?
* **Istio Gateway** - feels like overkill if we don't need full service mesh

For those already migrating or who've made the switch:

* What did you choose and why?
* How painful was moving away from annotation hell?
* Is Gateway API mature enough for prod?

Leaning toward Envoy Gateway but curious about real-world experiences.

https://redd.it/1pmkjqq
@r_devops
One Ubuntu setting that quietly breaks services: ulimit -n

I’ve seen enough strange production issues turn out to be caused by one OS limit most of us never check.
A low ulimit -n has caused random 500s, frozen JVMs, dropped SSH sessions, and broken containers.

Wrote this from personal debugging pain, not theory.
Curious how many others have been bitten by this.
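
If you want to see where you stand, the limit is easy to inspect (and raise) from inside a process. A minimal Python sketch:

```python
import resource

# RLIMIT_NOFILE is the per-process cap on open file descriptors,
# which is what `ulimit -n` reports in a shell.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"open files: soft={soft}, hard={hard}")

# An unprivileged process may raise its soft limit up to the hard limit.
# Services that hold many sockets often need this (or a systemd
# LimitNOFILE= override) to avoid EMFILE errors under load.
resource.setrlimit(resource.RLIMIT_NOFILE, (hard, hard))
```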

Link: https://medium.com/stackademic/the-one-setting-in-ubuntu-that-quietly-breaks-your-apps-ulimit-n-f458ab437b7d?sk=4e540d4a7b6d16eb826f469de8b8f9ad

https://redd.it/1pmjooe
@r_devops
Stay in a stable job or move to an AI company?

Hi,

I am working for a company in Berlin as a senior infrastructure engineer. The company is stable but does not pay well. I am working on impactful projects and working hard. I asked for a raise, but it seems I will not get a significant increase, maybe 5-8%.

Meanwhile, I am interviewing with an AI company, not EU-based. It got $130M in investment last year and wants to expand in EMEA.
They pay ~30% more than what I make at the moment.

Given the market, does it make sense to take the risk, or to stay in a stable job for a while until the market gets better?

https://redd.it/1pmn9hh
@r_devops
Anyone automating their i18n/localization workflow in CI/CD?

My team is building towards launching in new markets, and the manual translation process is becoming a real bottleneck. We've been exploring ways to integrate localization automation into our DevOps pipeline.

Our current setup involves manually extracting JSON strings, sending them out for translation, and then manually re-integrating them—it’s slow and error-prone. I've been looking at ways to make this a seamless part of our "develop → commit → deploy" flow.

One tool I came across and have started testing for this is the Lingo.dev CLI. It's an open-source, AI-powered toolkit designed to handle translation automation locally and fit into a CI/CD pipeline. Its core feature seems to be that you point it at your translation files and it can automatically translate them using a specified LLM, outputting files in the correct structure.



The concept of integrating this into a pipeline looks powerful. For instance, you can configure a GitHub Action to run the lingo.dev i18n command on every push or pull request. It uses an i18n.lock file with content checksums to translate only changed text, which keeps costs down and speeds things up.
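
To make the checksum idea concrete, here is a hypothetical sketch of how a content-checksum lock file can gate retranslation. This is my illustration of the concept, not Lingo.dev's actual lock format:

```python
import hashlib
import json

def checksums(strings: dict) -> dict:
    """Map each translation key to a hash of its source text."""
    return {k: hashlib.sha256(v.encode()).hexdigest()
            for k, v in strings.items()}

def stale_keys(source: dict, lock: dict) -> list:
    """Keys whose source text changed since the lock was written."""
    return [k for k, h in checksums(source).items() if lock.get(k) != h]

with open("en.json") as f:          # source strings
    source = json.load(f)
with open("i18n.lock.json") as f:   # hypothetical lock-file name
    lock = json.load(f)

# Only the changed keys get sent to the LLM; afterwards the lock file
# is rewritten with the fresh checksums, making reruns idempotent.
print(stale_keys(source, lock))
```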

I'm curious about the practical side from other DevOps/SRE folks:

When does automation make sense? Do you run translations on every PR, on merges to main, or as a scheduled job?

Handling the output: Do you commit the newly generated translation files directly back to the feature branch or PR? What does that review process look like?

Provider choice: The CLI seems to support both "bring your own key" (e.g., OpenAI, Anthropic) and a managed cloud option. Any strong opinions on managing API keys/credential rotation in CI vs. using a managed service?

Rollback & state: The checksum-based lock file seems crucial for idempotency. How do you handle scenarios where you need to roll back a batch of translations or audit what was changed?

Basically, I'm trying to figure out if this "set it and forget it" approach is viable or if it introduces more complexity than it solves. I'd love to hear about your real-world implementations, pitfalls, or any alternative tools in this space.

https://redd.it/1pmnax4
@r_devops
How to master

Amid mass layoffs and restructuring, I ended up on a DevOps team, coming from a backend engineering team.

It’s been a couple of months. I am mostly doing pipeline support work, meaning application teams use our templates and infra and we support them in all areas, from onboarding to stability.

There are a ton of teams and their stacks are very different (hence the different templates). How do I get a grasp of all the pieces?

I know that seeking help without giving a ton of info is hard, but I’d like to know if there is a framework I can follow to understand all the moving parts.

We are on GitLab and AWS. Appreciate your help.

https://redd.it/1pmsh7u
@r_devops
How long will Terraform last?

It's a Sunday thought, but: I am basically 90% Terraform at my current job. Everything else is learning new tech stacks that I deploy with Terraform, or maybe a script or two in Bash or PowerShell.

My Sunday night thought is: what will replace Terraform? I really like it. I hated Bicep - no state file, and you can't expand outside the Azure ecosystem.

Pulumi is too developer-oriented and I'm an infra guy. I guess if it gets to the point where developers can fully grasp infra, they could take over via Pulumi.

That's about as far as I can think.

https://redd.it/1pmzitq
@r_devops
How do you convince leadership to stop putting every workload into Kubernetes?

Looking for advice from people who have dealt with this in real life.

One of the clients I work with has multiple internal business applications running on Azure. These apps interact with on-prem data, Databricks, SQL Server, Postgres, etc. The workloads are data-heavy, not user-heavy. Total users across all apps is around 1,000, all internal.

A year ago, everything was decoupled. Different teams owned their own apps, infra choices, and deployment patterns. Then a platform manager pushed a big initiative to centralize everything into a small number of AKS clusters in the name of better management, cost reduction, and modernization.

Fast forward to today, and it’s a mess. Non-prod environments are full of unused resources, costs are creeping up, and dev teams are increasingly reckless because AKS is treated as an infinite sink.

What I’m seeing is this: a handful of platform engineers actually understand AKS well, but most developers do not. That gap is leading to:
1. Deployment bottlenecks and slowdowns due to Helm, Docker, and AKS complexity
2. Zero guardrails on AKS usage, where even tiny Python scripts are deployed as cron jobs in Kubernetes
3. Batch jobs, experiments, long-running services, and one-off scripts all dumped into the same clusters
4. Overprovisioned node pools and forgotten workloads in non-prod running 24x7
5. Platform teams turning into a support desk instead of building a better platform

At this point, AKS has become the default answer to every problem. Need to run a script? AKS. One-time job? AKS. Lightweight data processing? AKS. No real discussion of whether Functions, ADF, Databricks jobs, VMs, or even simple schedulers would be more appropriate.

My question to the community: how have you successfully convinced leadership or clients to stop over-engineering everything and treating Kubernetes as the only solution? What arguments, data points, or governance models actually worked for you?


https://redd.it/1pn0h49
@r_devops
Advice Needed for Following DevOps Path

Ladies and gentlemen, I am grateful in advance for your support and assistance.
I need advice about my DevOps path. I am self-taught and have been using Linux since 2008; I love Linux so much that I went to study DevOps by doing. I used AI tools to create real-world scenarios for DevOps + RHCSA + RHCE and uploaded the work to GitHub in 3 repos (2 projects). I know getting stuck is part of the path, especially in DevOps, and I know I am not good at asking for help; I struggle with how to ask for help and where.

I would like advice: if anyone can check my projects and repos and give me an overview of the work, is it good enough that I should continue down this path, or should I search for another career?

Project 1 (first 2 repos - Linux, automation) is finished. Project 2 (last repo - high availability) is still incomplete, at Milestone 0. I have been struggling for a long time with how to connect to the private instances from the public instances; I am using AWS and have tried a lot, including SSH and the AWS SSM plugins, and I still can't do it.

In summary, I want advice to decide whether to carry on with DevOps or not.

Links:

Project 01 (Repo 01 + Repo 02) | RHCSA & RHCE Path

01 - **enterprise-linux-basics-Prjct_01**

02 - **linux-automation-infrastructure-Prjct_02**

Project 02 (Repo 03) | High Availability

03 - **linux-high-availability-Prjct_03**

https://redd.it/1pn2bm0
@r_devops
How do you know which feature changed, to determine which script to run in a CI/CD pipeline?

Hi,

I think I have set up almost everything and have this one issue left. Currently the repo contains a lot of features. When someone enhances one feature and creates a PR, do you run the tests for all the features?

Let's say I have 2 scripts: scripts/register_model_a and scripts/register_model_b. Each one creates a new model version, runs evaluation, and logs to MLflow.

But I don't know the best practice for this case. Do you define a folder for each module and detect which folder the changed files are in to decide which feature is being enhanced, or do you just run all the tests?
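
One common approach is plain path-based detection: diff against the main branch and map changed paths to the scripts that cover them. A rough sketch, where the folder layout and script names are hypothetical:

```python
import subprocess

# Hypothetical mapping from source folders to the pipeline scripts
# that need to re-run when files under them change.
SCRIPT_MAP = {
    "models/model_a/": "scripts/register_model_a",
    "models/model_b/": "scripts/register_model_b",
}

# List files changed on this branch relative to main.
changed = subprocess.run(
    ["git", "diff", "--name-only", "origin/main...HEAD"],
    capture_output=True, text=True, check=True,
).stdout.splitlines()

to_run = sorted({script for path in changed
                 for prefix, script in SCRIPT_MAP.items()
                 if path.startswith(prefix)})
print(to_run or "no matching feature changed")
```

Note that GitHub Actions (`paths` filters on triggers) and GitLab CI (`rules:changes`) have this built in, so depending on your platform you may not need a custom script at all.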



Thank you!

https://redd.it/1pn3g69
@r_devops
DevOps tech knowledge for a job application (> Agile Coach) (GitLab, CI/CD, Docker, Ansible) - how do I get into it?

Hi folks,

Any suggestions on how to get into these topics?
A job offer for an agile coach requires them, just for context.
Apart from having downloaded stuff from GitHub before, I'm pretty much a newbie in this field.
How do I get started, what are good tutorials and sources, and what do I even need to know for such a position?

Thanks a lot!

https://redd.it/1pn3vwo
@r_devops
How do you keep storage management simple as infrastructure scales?

I am working on a setup where data volume and infrastructure will grow steadily over time. What starts as a simple storage layer can quickly turn into something that needs constant attention if it is not designed carefully.

For those managing larger or growing environments, how do you keep storage from becoming an operational burden? Do you rely on automation, strict conventions, or regular cleanup and review processes?

I am interested in approaches that reduce day-to-day overhead while keeping systems reliable.

https://redd.it/1pn5uk0
@r_devops
Single Machine Availability: is it really a problem?

*Discussing Virtual Private Servers for simple systems :)*

A Virtual Private Server (VPS) is not really a single physical machine - it is a single logical machine, with many levels of redundancy, both hardware and software, implemented by cloud providers to deliver High Availability. Most cloud providers state at least 99.9% availability in their service-level agreements (SLAs), and some (DigitalOcean and AWS, for example) offer 99.99% availability. This comes down to:

24 * 60 = 1440 minutes in a day
30 * 1440 = 43 200 minutes in a month
60 * 1440 = 86 400 seconds in a day

99.9% availability:
86 400 - 86 400 * 0.999 = 86.4 seconds of downtime per day
43 200 - 43 200 * 0.999 = 43.2 minutes of downtime per month

99.99% availability:
86 400 - 86 400 * 0.9999 = 8.64 seconds of downtime per day
43 200 - 43 200 * 0.9999 = 4.32 minutes of downtime per month
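
The same arithmetic generalizes to any SLA target; a small helper (added here purely for illustration) makes other targets easy to check:

```python
def downtime(sla: float, period_seconds: int) -> float:
    """Allowed downtime in seconds for a given availability over a period."""
    return period_seconds * (1 - sla)

DAY = 86_400          # seconds in a day
MONTH = 30 * DAY      # seconds in a 30-day month

for sla in (0.999, 0.9999):
    print(f"{sla:.2%}: {downtime(sla, DAY):.2f} s/day, "
          f"{downtime(sla, MONTH) / 60:.1f} min/month")
```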

Depending on the chosen cloud provider, this is the availability we can expect from the simplest possible system, running on a single virtual server. What if that is not enough for us? Or maybe we simply do not trust these claims and want more redundancy, while still enjoying the benefits of Single Machine System Simplicity? Can it be improved upon?

First, let's consider short periods of unavailability - up to a few seconds. These will most likely be the most frequent, and fortunately the easiest to fix. If our VPS is unavailable for just 1 to 5 seconds, the problem can be handled purely on the client side with retries - retrying every request for up to a few seconds while the server is unavailable. For the user, certain operations will just be slower, because of possible short server unavailability, but they will succeed eventually, unless the issue is more severe and the server is down for longer.
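
A client-side retry with a small backoff is enough to absorb these blips. A minimal sketch, where the URL, timeout, and delays are placeholders:

```python
import time
import requests

def get_with_retries(url: str, attempts: int = 5, base_delay: float = 0.5):
    """Retry for a few seconds with exponential backoff, so brief
    server blips surface to the user only as slightly slower requests."""
    for attempt in range(attempts):
        try:
            resp = requests.get(url, timeout=2)
            resp.raise_for_status()
            return resp
        except requests.RequestException:
            if attempt == attempts - 1:
                raise  # outage longer than our retry budget: give up
            time.sleep(base_delay * 2 ** attempt)  # 0.5s, 1s, 2s, 4s...
```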

Before considering possible solutions for this longer case, it is worth pausing and asking: *maybe that is enough?* Let's remember that with *99.9% and 99.99% availability* we expect to be unavailable for at most *86.4 or 8.64 seconds* daily.

Most likely, these interruptions will be spread throughout the day, so simple retries can handle most of them without users even noticing. Let's also remember that Complexity is often the Enemy of Reliability, and that our system is only as reliable as its weakest link. If we really want additional redundancy and the ability to deal with potentially longer periods of unavailability, there are at least two ways of going about it - but maybe they are not worth the Complexity they introduce?

I would argue, then, that in most cases *99.9% - 99.99% availability* delivered by the cloud provider, plus a simple client retry strategy handling most short interruptions, is good enough. Should we want or need more, there are tools and strategies to still reap the benefits of Single Machine System Simplicity while having ultra-high redundancy and availability - at the cost of additional Complexity.

*I write deeper and broader pieces on topics like this on my blog. Thanks for reading!*

https://redd.it/1pn77hn
@r_devops