Reddit DevOps – Telegram
Just discovered something crazy on my website

I’ve been testing a new analytics setup and I can literally watch a video of what users do on my site.
Seeing real sessions changed everything… I noticed a small issue I had never caught before.

People would scroll, hesitate, and then completely miss the main CTA because it was slightly below the fold on mobile.

Do you use anything similar to analyze user behavior?

https://redd.it/1oxa2gf
@r_devops
Help Wanted

Help Wanted: Full-Time Developer for Social App MVP

We’re seeking an experienced developer (3+ years) to join us full-time and help launch our social app MVP within the next 1-3 months. We have the wireframes and UI/UX plans ready, and we need someone dedicated to bring this vision to life. If you’re passionate and ready to dive in, we’d love to connect!

https://redd.it/1ox9yd4
@r_devops
What was the tool that gave you your “big break”

I’m interested in what tool or specialty allowed you to transition into DevOps. Did you transfer from SWE or sysadmin work, get really good with Kubernetes, or move over from the cloud side? What’s everyone’s story?

https://redd.it/1oxg94i
@r_devops
I built an open source, code-based agentic workflow platform!

Hi r/OpenSourceAI,

We are building Bubble Lab, a TypeScript-first automation platform that lets devs build code-based agentic workflows! Unlike traditional no-code tools, Bubble Lab gives you the visual experience of platforms like n8n, but everything is backed by real TypeScript code. Our custom compiler generates the visual workflow representation through static analysis and AST traversals, so you get the best of both worlds: visual clarity and code ownership.

Here's what makes Bubble Lab different:

1/ prompt to workflow: TypeScript means deep compatibility with LLMs, so you can build/amend workflows with natural language. An agent can orchestrate our composable bubbles (integrations, tools) into a production-ready workflow at a much higher success rate!

2/ full observability & debugging: every workflow is compiled with end-to-end type safety and has built-in traceability with rich logs, so you can actually see what's happening under the hood

3/ real code, not JSON blobs: Bubble Lab workflows are built in TypeScript code. This means you can own it, extend it in your IDE, add it to your existing CI/CD pipelines, and run it anywhere. No more being locked into a proprietary format.

we're also open source :) https://github.com/bubblelabai/BubbleLab

We are constantly iterating on Bubble Lab, so we'd love to hear your feedback!!

https://redd.it/1oxgdgg
@r_devops
How do you implement tests and automation around those tests?

I'm in a larger medium-sized company and we currently have a lot of growing pains. One such pain is a lack of testing just about everywhere. I'm trying to foster an environment where we encourage, and potentially enforce, testing, but I'm not some super big expert. I try to read about different approaches and have played with a lot of things, but I'm curious what opinions others have around this.

We have a big web of API calls between apps and a few backend processing services that consume queues. I'm trying to focus on the API portion first, because a big problem is that feature development in one area breaks another; we didn't know another app needed this API, etc, etc.


Here's a quick sketch of what I'm thinking (these will all be automated)

PR Build/Test
- Run unit tests
- Run integration tests
- Run consumer contract tests
- Spin up the app with mocked dependencies in a container and run Playwright tests against it <-- (unsure if this should be done here or after deployment to a dev environment)

Contract testing
- When a consumer contract changes, kick off tests against the provider
- Gate deployments if contract testing does not pass

After stage deployment
- Run smoke tests and full E2E tests against the live stage environment

After prod deployment
- Run smoke tests


I'm sure once we've had things implemented for a while we'll find what works and what doesn't, but I would love to hear what others are doing for their testing setup and possibly get some ideas on where we're lacking.
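For the consumer-contract step above, here is a toy sketch of the core idea (real setups usually use a tool like Pact; the endpoint and field names below are made up for illustration): the consumer commits the minimal response shape it relies on, and the provider's build validates its real response against it.

```python
# Toy consumer-driven contract check (illustrative only). The consumer
# commits the minimal response shape it depends on; the provider's CI
# validates its response against that contract and fails on violations.

# Contract the consumer team commits to a shared repo (hypothetical fields).
ORDERS_CONTRACT = {
    "endpoint": "/api/orders/{id}",
    "required_fields": {"id": int, "status": str, "total_cents": int},
}

def check_contract(contract: dict, provider_response: dict) -> list[str]:
    """Return a list of violations (empty list == contract satisfied)."""
    violations = []
    for field, expected_type in contract["required_fields"].items():
        if field not in provider_response:
            violations.append(f"missing field: {field}")
        elif not isinstance(provider_response[field], expected_type):
            violations.append(
                f"{field}: expected {expected_type.__name__}, "
                f"got {type(provider_response[field]).__name__}"
            )
    return violations

# Provider-side test: run against the real (or containerized) app's response.
good = {"id": 42, "status": "shipped", "total_cents": 1999}
bad = {"id": "42", "status": "shipped"}  # wrong type + missing field

print(check_contract(ORDERS_CONTRACT, good))  # []
print(check_contract(ORDERS_CONTRACT, bad))
```

The "gate deployments" step would then be as simple as failing the provider build whenever this list is non-empty.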

https://redd.it/1oxg37a
@r_devops
Group, compare and track health of GitHub repos you use

Hello,

Created this simple website gitfitcheck.com where you can group existing GitHub repos and track their health based on their public data. The idea came from working as a Sr SRE/DevOps engineer in mostly Kubernetes/cloud environments with tons of CNCF open source products. There are usually many competing alternatives for the same task, so I started creating static markdown docs about these GitHub groups with basic health data (how old the tool is, how many stars it has, what language it's written in), so I could compare them and keep a mental map of their quality, lifecycle, and where's what.

Over time, whenever I hear about a new tool I could use for my job, I update my markdown docs. I found this categorization/grouping useful for mapping the tool landscape, comparing tools in the same category, and seeing trends as certain projects get abandoned while others catch attention.

The challenge was that the doc I created was static and the data I recorded were point-in-time manual snapshots, so I thought I'd create an automated, dynamic version of this tool that keeps the health stats up to date. That tool became gitfitcheck.com. Later I realized I could add further facets as well, beyond comparison within the same category; for example, I have a group for the core Python packages I bootstrap all of my Django projects with. Using this tool I can see when a project is getting less love lately and search for an alternative, maybe a fork or a completely new project. Also, all groups we/you create are public, so whenever we search for a topic/repo, we'll also see how others grouped them, which helps discoverability too.

I found this process useful in the frontend and ML space as well, as both depend on open source GitHub projects a lot.

Feedback is welcome. Thank you for taking the time to read this and maybe even give it a try!

Thank you,

sendai

PS: I know this isn't the next big thing; it has no AI in it, nor is it vibe coded. It's just a simple tool I believe is useful for SRE/DevOps/ML/frontend or any other job that depends on GH repos a lot.

https://redd.it/1oxnhq6
@r_devops
How do you handle Github Actions -> Slack notifications at your org?

I saw Slack has an example that uses users.lookupByEmail. If I can get the email, I can look up the user's Slack user ID and then send them a Slack message. However, that would require knowing the email of ${GITHUB_ACTOR}.

I thought I could use gh api /users/$ACTOR, but testing it on myself I get null in the email field, so I'm not sure it's the correct way to go about this. (The public users endpoint only returns an email if the user has made it public on their profile.)

Feels like I'm overcomplicating something that must be standard in most companies, so maybe someone can share how they handle sending Slack messages from a GH Action in their org?

Thanks

https://redd.it/1oxnfkh
@r_devops
How do I step up as the go to devops person?

I have recently studied docker, kubernetes and gitlab CI/CD from YouTube tutorials. The team I work in got restructured recently and we don't have anyone who knows about this stuff. We have to build our whole pipeline structure and cluster management from what remains. I feel like this is a golden opportunity for someone like me.

I just want to know how I can move beyond the beginner stuff from YouTube and go on to build real, resilient systems and pipelines.

Maybe I can study from some good repos as a reference or other methods. Any help is greatly appreciated. Thank you!

https://redd.it/1oxpq1x
@r_devops
Simple tool that automates tasks by creating rootless containers displayed in tmux

Description: A simple shell script that uses buildah to create customized OCI/Docker images and podman to deploy rootless containers, designed to automate compilation/building of GitHub projects, applications, and kernels, as well as any other containerized task or service. Pre-defined environment variables, various command options, native integration of all containers with apt-cacher-ng, live log monitoring with neovim, and the use of tmux to consolidate container access ensure maximum flexibility and efficiency during container use.

Url: https://github.com/tabletseeker/pod-buildah

https://redd.it/1oxpb5m
@r_devops
Open-source Azure configuration drift detector - catches manual changes that break IaC compliance

Classic DevOps problem: You maintain infrastructure as code, but manual changes through cloud consoles create drift. Your reality doesn't match your code.



Built this for Azure + Bicep environments:



**Features:**

🔍 Uses Azure's native what-if API for 100% accuracy

🔧 Auto-fixes detected drift with --autofix mode

📊 Clean reporting (console, JSON, HTML, markdown)

🎯 Filters out Azure platform noise (provisioningState, etags, etc.)



**Perfect for:**

• Teams practicing Infrastructure as Code

• Compliance monitoring

• CI/CD pipeline integration

• Preventing security misconfigurations



**Example output:**

Drift detected in storage account:
Expected: allowBlobPublicAccess = false
Actual: allowBlobPublicAccess = true



Built with C#/.NET, integrates with any CI/CD system.



**GitHub:** https://github.com/mwhooo/AzureDriftDetector



How do you handle configuration drift in your environments? Always curious about different approaches!

https://redd.it/1oxugx9
@r_devops
NPMScan - Malicious NPM Package Detection & Security Scanner

I built **npmscan.com** because npm has become a minefield. Too many packages look safe on the surface but hide obfuscated code, weird postinstall scripts, abandoned maintainers, or straight-up malware. Most devs don’t have time to manually read the source every time they install something, so I made a tool that does the dirty work instantly.

What **npmscan.com** does:

• Scans any npm package in seconds
• Detects malicious patterns, hidden scripts, obfuscation, and shady network calls
• Highlights abandoned or suspicious maintainers
• Shows the full file structure + dependency tree
• Assigns a risk score based on real security signals
• No install needed: just search and inspect

The goal is simple:
👉 Make it obvious when a package is trustworthy — and when it’s not.

If you want to quickly “x-ray” your dependencies before you add them to your codebase, you can try it here:

**https://npmscan.com**

Let me know what features you’d want next.

https://redd.it/1oy1pr5
@r_devops
Follow-up to my "Is logging enough?" post — I open-sourced our trace visualizer

A couple of months ago, I posted this thread asking whether logging alone was enough for complex debugging. At the time, we were dumping all our system messages into a database just to trace issues like a “free checked bag” disappearing during checkout.

That approach helped, but digging through logs was still slow and painful. So I built a trace visualizer—something that could actually show the message flow across services, with payloads, in a clear timeline.

I’ve now open-sourced it:
🔗 GitHub: softprobe/softprobe

It’s built as a high-performance Istio WASM plugin, and it’s focused specifically on business-level message flow visualization and troubleshooting. Less about infrastructure metrics—more about understanding what happened in the actual business logic during a user’s journey.

demo

Feedback and critiques welcome. This community’s input on the original post really pushed this forward.

https://redd.it/1oy5jv2
@r_devops
Entire Domain run from Kube?

Good afternoon all,

I have been trying to experiment with running a 3 node Kube cluster inside a single node Nutanix HCI.

My goal was to create an entire domain complete with IAM, DHCP, DNS, and a CA, and make it as redundant as possible. So I figured the best way to do that was to set up containers for each service inside a Kube cluster.

The cluster itself is configured, complete with Calico and the Nutanix CSI driver. I also set up a storage class that uses a volume group made in Nutanix. Now I'm at the part where I'm trying to set up the actual domain and the containers to do so.

I'm currently stuck, because there doesn't seem to be an actual solution for creating a domain in Kube similar to how you would do it in AD. I was going to try running Samba 4 in the cluster, but it seems like the functionality there is limited to SMB shares. I was also looking at FreeIPA, but there is very limited documentation of it actually working in Kube, and even less on how to set it up there.

I'm starting to question now if it's even a good idea to run an entire domain from Kube. Am I right to question this?

I know most enterprises just run their domain using VMs of Windows server DCs, but there has to be another way of setting up a HA domain while using cloud technology without having to go through Azure.

I have to admit that I'm not a DevOps engineer, I'm just a security analyst, so please go easy on me.

Thank you

https://redd.it/1oy5n6z
@r_devops
Python for Automating stuff on Azure and Kafka

Hi,

I need some suggestions from the community here. I've been working with bash for scripting CI/CD pipeline jobs, with minimal exposure to Python in the automation pipelines.

I am looking to start developing my Python skills and get some hands-on experience with the Azure Python SDK and Kafka libraries so I can start using Python at my workplace.

Need some suggestions on online learning platform and books to get started. Looking to invest about 10-12 hours each week in learning.

https://redd.it/1oyctub
@r_devops
Manage Vault in GitOps way

Hi all,

In my home cluster I'm introducing Vault and Vault operator to handle secrets within the cluster.
How do you guys manage Vault in an automated way? For example, I would like to create KV secrets and policies in a declarative way, maybe managed with Argo CD.

Any suggestions?
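Not an answer from the thread, but one declarative-ish option worth sketching: policies and KV v2 secrets can be driven through Vault's documented HTTP API from a job that Argo CD syncs (e.g. a sync hook reading desired state from git). The in-cluster address, token, and policy below are assumptions; the endpoints are Vault's standard sys/policies/acl and KV v2 paths.

```python
# Minimal sketch of driving Vault declaratively over its HTTP API.
# Endpoints are Vault's documented ones; VAULT_ADDR, the token, and the
# example policy are made up. In a GitOps setup this would run as an
# Argo CD sync hook / Job that reads desired state from the repo.
import json
import urllib.request

VAULT_ADDR = "http://vault.vault.svc:8200"  # assumption: in-cluster service

def policy_request(name: str, policy_hcl: str, token: str) -> urllib.request.Request:
    """PUT /v1/sys/policies/acl/<name> creates or updates an ACL policy."""
    return urllib.request.Request(
        f"{VAULT_ADDR}/v1/sys/policies/acl/{name}",
        data=json.dumps({"policy": policy_hcl}).encode(),
        headers={"X-Vault-Token": token},
        method="PUT",
    )

def kv_put_request(mount: str, path: str, data: dict, token: str) -> urllib.request.Request:
    """POST /v1/<mount>/data/<path> writes a KV v2 secret version."""
    return urllib.request.Request(
        f"{VAULT_ADDR}/v1/{mount}/data/{path}",
        data=json.dumps({"data": data}).encode(),
        headers={"X-Vault-Token": token},
        method="POST",
    )

# Example desired state (hypothetical policy):
desired_policy = 'path "secret/data/apps/*" { capabilities = ["read"] }'
req = policy_request("apps-read", desired_policy, token="dev-only-token")
# urllib.request.urlopen(req)  # apply against a real Vault
```

Depending on the Vault operator you're using, it may also support declaring policies directly in its custom resource, which would keep everything in Argo CD without a hook job.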

https://redd.it/1oygbil
@r_devops
Productizing LangGraph Agents

Hey,
I'm trying to understand which option is better based on your experience. I

want to deploy enterprise-ready agentic applications, my current agent framework is Langgraph.

To be production-ready, I need horizontal scaling and durable state so that if a failure occurs, the system can resume from the last successful step.

I’ve been reading a lot about Temporal and the Langsmith Agent Server, both seem to offer similar capabilities and promise durable execution for agents, tools, and MCPs.
I'm not sure which one is more recommended.

I did notice one major difference: in LangGraph I need to explicitly define retry policies in my code, while Temporal handles retries more transparently.

I’d love to get your feedback on this.

https://redd.it/1oyh93l
@r_devops
Trouble sharing a Windows Server 2022 AMI between AWS accounts (no RDP password, no SSM connection)



Hello everyone,

I've been trying for the last two days to share a custom Windows Server 2022 AMI from Account A to Account B, but without success.
The source AMI is based on the official Windows_Server-2022-English-Full-Base image, and I installed a few internal programs and agents on it.

After creating and sharing the AMI, I can successfully launch instances from it in the target account (Account B), but:

I cannot retrieve the Windows password via “Get Windows password” (it says “This instance was launched from a custom AMI...”);

The SSM Agent doesn’t start or connect to Systems Manager;

The instance shows 3/3 health checks OK, but remains inaccessible over RDP or SSM.



---

🔹 What I have tried so far

1. Standard AMI creation:

Created the image via EC2 console → Create image.

Shared both the AMI and its snapshot with the target AWS account (including Allow EBS volume creation).



2. First attempt (no sysprep):

The image worked but AWS couldn’t decrypt the Windows password.

Expected behavior, since Windows wasn’t generalized.



3. Second attempt (sysprep with /oobe /generalize /shutdown):

Ran from SSM:

Start-Process "C:\Windows\System32\Sysprep\sysprep.exe" -ArgumentList "/oobe /generalize /shutdown" -Wait

Result: instance stopped correctly, but when launching from this AMI the system got stuck on the “Hi there” screen (OOBE GUI), so no EC2Launch automation, no RDP, no SSM.



4. Third attempt (sysprep with /generalize /shutdown only):

Based on the AWS official documentation, /oobe should not be used — EC2LaunchV2 handles first boot automatically.

However, the AMI was based on an older image that had EC2Launch v1, not EC2LaunchV2, so I verified this via:

Get-Service | Where-Object { $_.Name -like "EC2Launch*" }

and confirmed it was the legacy EC2Launch service.

Started the service:

Set-Service EC2Launch -StartupType Automatic
Start-Service EC2Launch

Re-ran:

Start-Process "C:\Windows\System32\Sysprep\sysprep.exe" -ArgumentList "/generalize /shutdown" -Wait

The process completed and the instance shut down, but in the new account I still couldn’t decrypt the Windows password (AWS said custom AMI).



5. Tried reinstalling EC2LaunchV2 manually:

Using:

Invoke-WebRequest "https://ec2-launch-v2.s3.amazonaws.com/latest/EC2LaunchV2.msi" -OutFile "$env:TEMP\EC2LaunchV2.msi"
Start-Process msiexec.exe -ArgumentList "/i $env:TEMP\EC2LaunchV2.msi /quiet" -Wait

However, the service didn’t register, likely because the image is built on a base that doesn’t support EC2LaunchV2 natively (Windows Server 2022 + legacy AMI lineage).



https://redd.it/1oyh932
@r_devops
Is there a standard list of all potential metrics that one can / should extract from technologies like HTTP / gRPC / GraphQL server & clients? Or for Request Response systems in general?

We all deal with developing / maintaining servers and clients. With observability playing its part, I am trying to figure out: shouldn't we have standardized metrics that one can use by default for such servers?

If so is there actually a project / foundation / tool that is working on it?

e.g. a server could expose Prometheus metrics for requests and responses, and a client could expose something similar. I mean, developers can choose the metrics they deem useful, but having a list of what is potentially available would be a much better strategy IMHO.

I don't know if OpenTelemetry solves this issue; from what I understand, it provides tools to obtain metrics, traces, and logs, but doesn't define a definitive set as to what most of these standard setups can provide.
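For what it's worth, OpenTelemetry's semantic conventions do define standard HTTP metric names and attributes (e.g. http.server.request.duration, keyed by method, route, and status code), which is close to the standardized list being asked about. As a toy illustration of that kind of set (the registry below is a stand-in for a real metrics client, and the simplified names are my own):

```python
# Toy illustration of a standardized request-metric set, loosely modeled
# on OpenTelemetry's HTTP semantic conventions (request count and request
# duration, labeled by method/route/status). Not a real metrics client.
from collections import defaultdict

class Metrics:
    def __init__(self):
        self.counters = defaultdict(int)    # ~ http.server.requests total
        self.durations = defaultdict(list)  # ~ http.server.request.duration

    def record_request(self, method: str, route: str, status: int, seconds: float):
        """Record one handled request under its (method, route, status) labels."""
        key = (method, route, status)
        self.counters[key] += 1
        self.durations[key].append(seconds)

m = Metrics()
m.record_request("GET", "/orders/{id}", 200, 0.012)
m.record_request("GET", "/orders/{id}", 200, 0.020)
m.record_request("GET", "/orders/{id}", 500, 0.250)

print(m.counters[("GET", "/orders/{id}", 200)])  # 2
```

A client-side set mirrors this with the peer host/port as extra labels instead of the route.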

https://redd.it/1oylwuc
@r_devops
How do you handle infrastructure audits across multiple monitoring tools?

Our team just went through an annual audit of our internal tools.

Some of the audits we do are the following:

1. Alerts - We have alerts spanning Cloudwatch, Splunk, Chronosphere, Grafana, and custom cron jobs. We audit for things like whether we still need the alert, whether it's still accurate, etc.
2. ASGs - We went through all the AWS ASGs that we own and ensured they have appropriate resources (not too much or too little), that our team still owns them, etc.

That’s just a small portion of our audit.

Often these audits require the auditor to go to different systems and pull some data to get an idea on the current status of the infrastructure/tool in question.

All of this data is put into a spreadsheet and different audits are assigned to different team members.

Curious on a few things:
- Are you auditing your infra/tools regularly?
- Do you have tooling for this? Something beyond simple spreadsheets.
- How long does it take you to audit?

Looking to hear what works well for others!



https://redd.it/1oyomjm
@r_devops