Regex Denial of Service (ReDoS): The Pattern That Freezes Your Server 🌀
Learn how ReDoS attacks exploit inefficient regular expressions to cause CPU exhaustion and downtime, and how small inputs trigger catastrophic backtracking.
https://instatunnel.my/blog/regex-denial-of-service-redos-the-pattern-that-freezes-your-server
https://redd.it/1oz6b5s
@r_devops
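As a quick illustration (mine, not the article's), here is the classic nested-quantifier pattern that triggers catastrophic backtracking in a backtracking engine like Node's:

```typescript
// A minimal ReDoS sketch: (a+)+ gives exponentially many ways to split a run
// of "a"s between the inner and outer quantifier, and the trailing "!" makes
// the match fail, so the engine backtracks through all of them.
const evil = /^(a+)+$/;
const input = "a".repeat(28) + "!"; // a 29-character string is enough

const start = Date.now();
evil.test(input); // blocks Node's single event-loop thread for seconds
console.log(`took ${Date.now() - start} ms`); // roughly doubles per extra "a"
```

Mitigations are the usual suspects: a linear-time engine such as RE2, input length caps, or rewriting the pattern so quantifiers don't nest.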
Our production crashed for 48 hours because of a version mismatch
ClickHouse migration went wrong. Old region: v22.8. New region: v23.3. Nobody noticed.
Two days of debugging with premium support. Zero results.
Finally caught it ourselves after 48 hours.
Building a tool now to prevent these config nightmares. Lesson learned: always verify versions across environments.
https://redd.it/1oz7rcs
@r_devops
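A cheap guardrail in the spirit of that lesson: assert matching server versions before cutting over. A hypothetical sketch against ClickHouse's HTTP interface (the host names are invented):

```typescript
// Pre-migration sanity check: ask each region's ClickHouse for its version
// over the HTTP interface (default port 8123) and fail fast on a mismatch.
const hosts = ["http://old-region:8123", "http://new-region:8123"]; // invented

const versions = await Promise.all(
  hosts.map(async (host) => {
    const res = await fetch(`${host}/?query=${encodeURIComponent("SELECT version()")}`);
    return (await res.text()).trim();
  }),
);

if (new Set(versions).size > 1) {
  // e.g. "22.8..." vs "23.3..." -- stop the rollout before it stops you
  throw new Error(`ClickHouse version mismatch across regions: ${versions.join(" vs ")}`);
}
```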
How to send Supabase Postgres logs to New Relic on Pro (cloud, not self-hosted)?
Hey everyone,
I’m trying to figure out a clean way to get Supabase Postgres logs into New Relic without changing my whole setup or upgrading plans.
My situation:
- I’m using Supabase Cloud, not self-hosted
- I’m currently on the Pro plan
- I don’t want to upgrade to Team just to get log drains
- I’ve already successfully integrated New Relic with my Supabase Edge Functions (Node/TypeScript), and that part is working fine
- What I’m missing is Postgres/DB logs (slow queries, errors, etc.) inside New Relic
From what I’ve seen, the “proper” / official way seems to be using log drains, which are only available on the higher tiers. Since I’m on Pro, I’m looking for any of the following:
- Has anyone found a workaround to get Postgres logs or query data from Supabase Cloud → New Relic while staying on Pro?
- Is there any way to forward logs via webhooks, or some pattern like:
- Supabase → Function / Trigger → HTTP → New Relic ingest endpoint?
- Or maybe using database triggers / audit tables + a job that pushes data into New Relic in some structured way?
If anyone has:
- A working setup
- Even a partial solution (e.g. just errors or slow queries)
- Or can confirm that it’s basically impossible without Team / Enterprise
…I’d really appreciate the details.
Thanks in advance.
https://redd.it/1oza164
@r_devops
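Not an official answer, but one concrete shape for the "Supabase → Function / Trigger → HTTP → New Relic" idea: a scheduled function reads pg_stat_statements and posts rows to New Relic's Log API. The `get_slow_queries` RPC and its threshold are invented; the endpoint and `Api-Key` header are New Relic's documented Log API ingest.

```typescript
// Hedged sketch: forward slow-query stats to New Relic without log drains.
// Assumes a Postgres function (e.g. SECURITY DEFINER) that exposes
// pg_stat_statements rows; "get_slow_queries" is a hypothetical name.
const NR_LOG_API = "https://log-api.newrelic.com/log/v1";

export async function forwardSlowQueries(supabase: any, nrApiKey: string) {
  const { data, error } = await supabase.rpc("get_slow_queries", { min_ms: 500 });
  if (error) throw error;

  const res = await fetch(NR_LOG_API, {
    method: "POST",
    headers: { "Api-Key": nrApiKey, "Content-Type": "application/json" },
    body: JSON.stringify(
      data.map((q: { query: string; mean_exec_time: number; calls: number }) => ({
        message: q.query,
        attributes: { mean_ms: q.mean_exec_time, calls: q.calls, source: "supabase-postgres" },
      })),
    ),
  });
  if (!res.ok) throw new Error(`New Relic ingest failed: ${res.status}`);
}
```

Running something like this on a schedule from the Edge Functions already wired up would lose raw Postgres logs but capture the slow-query and error signal the post asks for.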
How can I start learning AWS or Azure without a credit/debit card?
I'm trying to get into cloud computing, but I'm stuck at the very first step. I don't have a credit or debit card, and my college ID isn’t eligible for the Azure for Students offer. Because of that, I can’t sign up for the free tiers on AWS or Azure.
For anyone who’s been in a similar situation — how did you start learning? Are there any alternatives, free resources, sandbox environments, or training platforms I can use without needing a card? I really want to get hands-on practice instead of only watching videos.
Any suggestions would be really appreciated!
https://redd.it/1oz9wrh
@r_devops
Anyone else tired of juggling SonarQube, Snyk, and manual reviews just to keep code clean?
Our setup has become ridiculous. SonarQube runs nightly, Snyk yells about vulnerabilities once a week, and reviewers manually check for style and logic. It's all disconnected - different dashboards, overlapping issues, and zero visibility on whether we're actually improving. I've been wondering if there's a sane way to bring code quality, review automation, and security scanning into a single workflow. Ideally something that plugs into GitHub so we stop context-switching between five tabs every PR.
https://redd.it/1ozc6lj
@r_devops
AI is draining my passion
My org is shamelessly promoting the use of AI coding assistants and it’s really draining me. It’s all they talk about in our company all-hands meetings. Every other week they’re handing out licenses to another emerging tool, touting how much more “productive” it will make us and telling us that we’ll fall behind the curve if we don’t use them.
Meanwhile, my team is throwing up PRs of clearly vibe-coded slop scripts (reviewed by Codex, of course!) and I’m the one human who has to review and leave real comments. I feel like I am just interfacing with robots all day and no one puts care into their work anymore. I really used to love writing and reviewing code. Now I feel like I’m just here to teach AI how to write better code, because my PR comments are probably put directly into an LLM prompt.
I didn’t go into this field to train AI; I’m truly interested in building and maintaining systems. I’m exhausted from all the hype, y’all. I’m not an AI hater or anything, but I feel like the uptick in its usage is really making the job feel way more mundane.
https://redd.it/1ozd2i5
@r_devops
Maybe we need to rethink how prod-like our dev environments are
Been thinking maybe the root cause of so many prod-only bugs is that our dev environments are too different from production. We run things locally with ideal data, low traffic, and maybe even different OS / dependency versions. But prod is messy, as everyone knows.
We probably need to invest more in making staging or local setups mimic prod more closely. Containerization, shared mocks, realistic datasets, and maybe time delay simulation for APIs. I know it’s more work, but if it helps catch those weird failures earlier, it might be worth it.
https://redd.it/1ozdffm
@r_devops
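On the "time delay simulation for APIs" point, even a toy proxy goes a long way. A minimal sketch (upstream address and latency numbers invented, GET-only to keep it short):

```typescript
import http from "node:http";

// Latency-injecting proxy: local clients call :3000, which forwards to the
// real service with prod-like delay and jitter added.
const UPSTREAM = "http://localhost:4000"; // invented upstream service
const BASE_MS = 80;    // tune to your prod p50
const JITTER_MS = 200; // and the tail you want to feel

http.createServer(async (req, res) => {
  await new Promise((r) => setTimeout(r, BASE_MS + Math.random() * JITTER_MS));
  const upstream = await fetch(UPSTREAM + (req.url ?? "/"));
  res.writeHead(upstream.status, {
    "content-type": upstream.headers.get("content-type") ?? "text/plain",
  });
  res.end(await upstream.text());
}).listen(3000); // point local clients here instead of :4000
```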
What is your current Enterprise Cloud Storage solution and why did you choose them?
Excited to get help/insights from experts in the house.
https://redd.it/1ozegqv
@r_devops
github.com/rmst/jix (Declarative Project and System Configs in JS)
Hi, [Jix](https://github.com/rmst/jix) is a project I recently open-sourced. I'm not pushing anyone to use this yet, just looking for feedback first. Does this generally make sense to you? Does the API look good? I know the implementation is hacky in some places, but that could be improved later.
Jix allows you to use JavaScript to declaratively define your project environments or system/user configurations, with good editor and type-checking support.
Jix is conceptually similar to [Nix](https://en.wikipedia.org/wiki/Nix_(package_manager)). In Jix, "effects" are a generalization of Nix's "derivations". [Effects](https://rmst.github.io/jix/api/Effect) can have install and uninstall actions, which allows them to influence system state declaratively. Dependencies are tracked automatically.
Jix itself has no out-of-repo dependencies. It does not depend on NPM or Node.js or Nix.
Jix can be used as an ergonomic, lightweight alternative^(1) to
* devenv (see [`examples/devenv/`](https://github.com/rmst/jix/tree/main/examples/devenv))
* docker compose (see [`examples/docker-compose/`](https://github.com/rmst/jix/tree/main/examples/docker-compose))
* process-compose (see [`examples/process-compose/`](https://github.com/rmst/jix/tree/main/examples/process-compose))
* nix home-manager (see [`examples/home-manager/`](https://github.com/rmst/jix/tree/main/examples/home-manager))
* Ansible (see [remote targets](https://rmst.github.io/jix/remote-targets))
[Nixpkgs](https://github.com/NixOS/nixpkgs) are available in Jix via `jix.nix.pkgs.<packageName>.<binaryName>` (see [example](https://github.com/rmst/jix/blob/main/examples/devenv/jix/__jix__.js)).
https://redd.it/1ozedzc
@r_devops
Bitbucket Pipelines v. GitHub v. GitLab v. Azure DevOps
I recently asked for thoughts on using Bitbucket Pipelines instead of Jenkins for our CI/CD. We've decided to migrate away from Jenkins to ... *drumroll* ...
Bitbucket Pipelines or GitHub or GitLab or Azure DevOps.
We've started looking into each of these options but I was curious what this community thinks of these options. It's worth noting my teams utilize Jira for project management and our repos are currently in Bitbucket Cloud.
Since we're already invested in Atlassian tools, Bitbucket seems to be the one to beat. We require SAML sign-on, and Bitbucket is the least expensive option that meets that requirement. However, its repo organization and secrets management leave much to be desired: you either set up secrets per repository or per workspace, and the latter makes them available to your entire organization!
If I had 6 months to investigate I'd trial each of them but we'd really like to start moving off Jenkins by the first of the year.
What say you? Of these options which is your preferred CI/CD and why?
https://redd.it/1oziiqs
@r_devops
Looking for examples of DevOps-related LLM failures (building a small dataset)
I've been putting together a small devops-focused dataset, trying to collect cases where LLMs get things wrong in ops or infra tasks (terraform, docker, ci/cd configs, weird shell bugs, etc.).
It's surprisingly hard to find good "failure" data for devops automation. Most public datasets are code-only, not real-world ops logic.
The goal is to use it for training and testing tiny local models (my current one runs in about 1.1 GB RAM) to see how far they can go on specific, domain-tuned tasks.
If you've run into bad llm outputs on devops work, or have snippets that failed, I'd love to include anonymised examples.
Any tips on where people usually share or store that kind of data would also help (besides github — already looked there 🙂).
https://redd.it/1ozjiz6
@r_devops
Drift detector for computer vision: does it really matter?
I’ve been building a small tool for detecting drift in computer vision pipelines, and I’m trying to understand if this solves a real problem or if I’m just scratching my own itch.
The idea is simple: extract embeddings from a reference dataset, save the stats, then compare new images against that distribution to get a drift score. Everything gets saved as artifacts (json, npz, plots, images). A tiny MLflow-style UI lets you browse runs locally (free) or online (paid).
Basically: embeddings → drift score → lightweight dashboard.
So:
Do teams actually want something this minimal?
How are you monitoring drift in CV today?
Is this the kind of tool that would be worth paying for, or only useful as opensource?
I’m trying to gauge whether this has real demand before polishing it further. Any feedback is welcome
https://redd.it/1ozmakb
@r_devops
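For context on how small the core loop can be (and why the artifact store and UI are probably where the value sits), a minimal sketch of "embeddings → drift score"; the per-dimension z-score rule here is one illustrative choice, not necessarily what the tool uses:

```typescript
// Fit reference stats once, persist them, then score incoming batches by how
// far their centroid sits from the reference distribution.
type Stats = { mean: number[]; std: number[] };

function fitReference(embeddings: number[][]): Stats {
  const n = embeddings.length;
  const d = embeddings[0].length;
  const mean = new Array<number>(d).fill(0);
  const variance = new Array<number>(d).fill(0);
  for (const e of embeddings) e.forEach((v, i) => (mean[i] += v / n));
  for (const e of embeddings) e.forEach((v, i) => (variance[i] += (v - mean[i]) ** 2 / n));
  return { mean, std: variance.map(Math.sqrt) };
}

function driftScore(batch: number[][], ref: Stats): number {
  const n = batch.length;
  const centroid = ref.mean.map((_, i) => batch.reduce((s, e) => s + e[i], 0) / n);
  // Mean absolute z-score of the batch centroid: ~0 means "looks like the
  // reference", larger means drift.
  const z = centroid.map((v, i) => Math.abs(v - ref.mean[i]) / (ref.std[i] || 1e-8));
  return z.reduce((a, b) => a + b, 0) / z.length;
}
```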
Apple Containers vs Docker Desktop vs OrbStack (Updated benchmark)
Hi everyone
After the last benchmark I got a lot of requests to test more setups and include native vs non-native containers, plus compare OrbStack as well. So I ran a new round of tests.
This time I measured CPU, memory, and startup time across Apple's container system, Docker Desktop, and OrbStack, on both native arm64 images and non-native (emulated) amd64 images.
|Category|Apple (emulated amd64)|Apple (native arm64)|Docker (emulated amd64)|Docker (native arm64)|OrbStack (emulated amd64)|OrbStack (native arm64)|Units|
|:-|:-|:-|:-|:-|:-|:-|:-|
|CPU 1 thread|7132.88|11089.55|7006.09|10505.76|7075.07|11047.06|events/s|
|CPU all threads|42025.87|54718.16|40882.76|53301.71|42363.40|55134.99|events/s|
|Memory|84108.09|103288.30|80762.94|77505.92|67111.55|90177.42|MiB/s|
|Startup time|0.936|0.940|0.205|0.187|0.232|0.228|seconds (lower is better)|
Full charts and detailed results, including small-file I/O workloads, are available in the full benchmark at www.repoflow.io
Let me know if you’d like me to run more benchmarks on other topics
https://redd.it/1ozndrw
@r_devops
Is KodeKloud's Pro Plan enough for a cloud internship?
Hey everyone, I'm currently a student and I've subscribed to KodeKloud's Pro Plan. I'm wondering if this type of training is sufficient to stand out to recruiters and land a cloud internship. I plan to start applying in about five months. Do you have any advice on which courses to prioritize or areas to explore? Thanks in advance for your insights and suggestions!
https://redd.it/1ozpz79
@r_devops
How do small teams handle log aggregation?
How do small teams, 1 to 10 developers, handle log aggregation without running ELK or paying for Datadog?
https://redd.it/1ozu5kj
@r_devops
Would love feedback on a photo-based yard analysis tool I’m building
I’ve been working on a personal project that analyzes outdoor property photos to flag potential issues like drainage risks, grading problems, erosion patterns, and other environmental indicators. It’s something I’ve wanted to build for years because I deal with these issues constantly in North Carolina’s red clay, and I’ve never found a tool that combines AI reasoning + environmental data + practical diagnostics.
If anyone is willing to take a look, here’s the current version:
**https://terrainvision-ai.com**
I’m specifically looking for feedback on:
- Accuracy of the analysis
- Whether the recommendations feel grounded or off
- Clarity of the PDF output
- UI/UX improvements
- Any blind spots or failure modes you notice
- Anything that feels unintuitive or could be explained better
This is a passion project, and I’m genuinely trying to make it something useful. Any feedback, positive, negative, or brutally honest, is appreciated.
https://redd.it/1ozyx0h
@r_devops
Looking for advice on testing a photo-based analysis tool I’m building
I’ve been working on a personal project that analyzes outdoor property photos to flag potential issues like drainage risks, grading problems, erosion patterns, and other environmental indicators. It’s something I’ve wanted to build for years because I deal with these issues constantly in North Carolina’s red clay, and I’ve never found a tool that combines AI reasoning + environmental data + practical diagnostics.
If anyone is willing to take a look, here’s the current version:
**https://terrainvision-ai.com**
I’m specifically looking for feedback on:
- Accuracy of the analysis
- Whether the recommendations feel grounded or off
- Clarity of the PDF output
- UI/UX improvements
- Any blind spots or failure modes you notice
- Anything that feels unintuitive or could be explained better
This is a passion project, and I’m genuinely trying to make it something useful. Any feedback, positive, negative, or brutally honest, is appreciated.
https://redd.it/1ozyw6j
@r_devops
I just got back from KubeCon. There were two completely different conferences happening in the same building.
On the exhibit floor: AI agents everywhere. Autonomous operations. Self-healing infrastructure. NVIDIA's Agent Blueprints. Google's Agent-to-Agent protocols. Every third booth promised to replace your ops team.
In the hallways: Not a single conversation about AI agents.
Instead, engineers asked me things like:
\- "How do you deserialize XML from legacy systems without choking your pipeline?"
\- "We're collecting syslogs from 1,000 edge machines—what's your secret for not dropping lines?"
\- "At 100 microservices emitting 100 metrics per second, how do you guarantee delivery?"
The math is brutal: 100 microservices × 100 metrics/second = 864 million data points per day. 315 billion per year. And enterprises lost $12.9M on average in 2024 due to undetected data errors.
Meanwhile, only 57% of companies even use distributed traces. A "mature" technology.
The AI agent market will hit $47B by 2030. But 95% of enterprise AI pilots fail to deliver expected returns.
Why? The foundation isn't ready. We're discussing autonomous operations while struggling with reliable telemetry.
Next time you see a slick AI agent demo, ask one question: "What's your data loss rate?"
The blank stare will tell you everything.
The future belongs to AI agents. The present belongs to fixing your syslogs. You can't skip the prerequisites just because they're boring.
https://redd.it/1p028yk
@r_devops
IBM policy after purchased HashiCorp Vault
We are currently utilizing HashiCorp Vault Enterprise under a three-year contract, and we are now entering the third year.
IBM has mandated that we run an auditing script to report our actual client count.
Before executing the script, I am concerned about the potential outcome if our actual usage exceeds the contracted client numbers. Specifically, how does IBM typically handle this?
Do they require retroactive payment for the overage, or do they adjust the fees for the upcoming contract year(s)?
Have you encountered similar auditing requests? Any insight into their standard reaction or policy regarding license overage would be greatly appreciated.
Thank you
#hashicorp #vault #ibm
https://redd.it/1p02t3k
@r_devops
Is DevOps getting harder, or are we just drowning in our own tooling?
Has DevOps actually become more complex, or have we slowly buried ourselves under layers of tools, scripts, and processes that nobody fully understands anymore?
Across our org, we somehow ended up with ArgoCD for some teams, Jenkins for others, GitHub Actions in a few pockets, and someone even brought in Prefect just for one workflow. On the infra side we have Terraform, but also Pulumi for one team’s project, plus Datadog and Prometheus running in parallel because no one wanted to kill either one.
Then testing and quality brought their own mix. Some people track work in plain sheets, others use light test management options like Qase or Tuskr, and analytics has its own stack with Mixpanel, Amplitude, and random scripts floating around. None of these tools are bad, but together they create maintenance overhead that quietly grows in the background.
At this point, every deployment touches five separate systems and at least one integration someone wrote two years ago and swears is “temporary”. When something breaks, half the time we are troubleshooting the toolchain instead of the code.
How do your teams deal with this?
Do you standardize everything hard?
Let teams pick their stack as long as they own the pain?
Or is a certain level of tool chaos just the reality of modern DevOps?
Where do you personally draw the line?
https://redd.it/1p04lsx
@r_devops
Centralising compliance across clouds: is it worth building our own pipeline?
Maybe we should build our own internal compliance reporting pipeline instead of relying on native tools. Hear me out: we could pull logs from CloudTrail, Azure Monitor, and GCP Logging, dump everything into a data lake or SIEM, and run standard queries and dashboards. Yes, it'll take effort up front, but the payoff could be huge in terms of audit readiness and consistency. On the other hand, maintaining that might become its own beast. Has anyone built something like this?
https://redd.it/1p04qn1
@r_devops
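The quietly hard part of "dump everything into a data lake" is that the three providers disagree on log shape, so the pipeline lives or dies on a normalization layer. A hedged sketch (field mappings are illustrative, not exhaustive):

```typescript
// Map CloudTrail, Azure Monitor activity logs, and GCP Cloud Audit Logs into
// one schema before they hit the lake/SIEM, so queries are written once.
type UnifiedEvent = { ts: string; cloud: string; actor: string; action: string };

function normalize(cloud: "aws" | "azure" | "gcp", raw: any): UnifiedEvent {
  switch (cloud) {
    case "aws": // CloudTrail record
      return {
        ts: raw.eventTime,
        cloud,
        actor: raw.userIdentity?.arn ?? "unknown",
        action: raw.eventName,
      };
    case "azure": // Azure Monitor activity log entry
      return {
        ts: raw.eventTimestamp,
        cloud,
        actor: raw.caller ?? "unknown",
        action: raw.operationName,
      };
    case "gcp": // Cloud Audit Log entry
      return {
        ts: raw.timestamp,
        cloud,
        actor: raw.protoPayload?.authenticationInfo?.principalEmail ?? "unknown",
        action: raw.protoPayload?.methodName,
      };
  }
}
```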