How is AI changing DevOps?
Hey everyone,
Most of us have been using AI tools in our DevOps work for a while now, and I think we're at an interesting point to reflect on what we're actually learning.
I'm curious to hear from the community:
What's working well? Which AI tools have genuinely improved your workflow? What use cases have been most valuable?
Where are the gaps? What hasn't lived up to the hype? Where do these tools still fall short?
How is the role changing? Are you noticing shifts in where you spend your time or what skills are becoming more important?
Best practices emerging? Have you developed any strategies or approaches that others might benefit from?
I suspect many of us are navigating similar questions about how to stay effective and relevant as the landscape evolves. Would be great to hear what you're all experiencing and how you're thinking about it.
Looking forward to the discussion!
https://redd.it/1oefomy
@r_devops
How do you handle configuration drift in your environments?
We've been facing issues with configuration drift across our environments lately, especially with multiple teams deploying changes. It’s becoming a challenge to keep everything in sync and compliant with our standards.
What strategies do you use to manage this? Are there specific tools that have helped you maintain consistency? I'm curious about both proactive and reactive approaches.
https://redd.it/1oe4q90
@r_devops
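A proactive pattern that comes up a lot for questions like the one above is a scheduled, read-only drift check: run a plan against live state and alert whenever it is non-empty. A minimal sketch, assuming Terraform-managed environments and a hypothetical Slack webhook; the same idea works with any IaC tool that can diff code against reality:

```python
import json
import subprocess
import urllib.request

SLACK_WEBHOOK = "https://hooks.slack.com/services/XXX"  # hypothetical alerting hook; swap in your own

def check_drift(workdir: str) -> None:
    # terraform plan -detailed-exitcode: 0 = no changes, 1 = error, 2 = pending changes (drift or unapplied code)
    result = subprocess.run(
        ["terraform", "plan", "-detailed-exitcode", "-no-color", "-input=false"],
        cwd=workdir, capture_output=True, text=True,
    )
    if result.returncode == 0:
        return  # live state matches code, nothing to report
    summary = (result.stdout if result.returncode == 2 else result.stderr)[-1500:]
    payload = {"text": f"Drift or plan error in {workdir} (exit {result.returncode}):\n{summary}"}
    req = urllib.request.Request(
        SLACK_WEBHOOK,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req, timeout=10)

if __name__ == "__main__":
    # Run from cron or a scheduled CI job, once per environment.
    for env in ("./envs/staging", "./envs/prod"):
        check_drift(env)
```

The reactive half is usually just your normal apply pipeline reconciling back to code once the alert fires, plus a policy about who is allowed to make changes outside of it in the first place.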
AWS us-east-1 outage postmortem
AWS’s retrospective on the DynamoDB disruption in US-East-1 isn’t remarkable because something broke; things break every day.
What stands out is how long it took to see the full picture, and how predictable that delay was.
A small defect in DNS automation quietly rewrote endpoint records.
To be clear, this wasn’t DNS. It was a latent race condition that surfaced through DNS.
At AWS scale, even something as simple as “which IP should this endpoint resolve to” is managed by layers of automation. DynamoDB’s routing is backed by thousands of load balancers across multiple AZs, with automated systems continuously adjusting DNS records.
That one race condition broke the implicit contract every AWS service in us-east-1 relied on: that DynamoDB would always be reachable.
Everything downstream kept behaving as if that contract still held: DynamoDB calls timed out, EC2 provisioning stalled, Network Load Balancers reported bad health checks.
There were no alerts paging that “DNS is down.” But there were a lot of individual alerts paging for other reasons.
At Rootly, we see this pattern everywhere. The hardest part of a major incident isn’t the fix; it’s realizing that ten small, seemingly unrelated failures are all symptoms of the same root cause.
Every distributed system runs on invisible contracts: this record will resolve, this endpoint will respond, this region will behave like the others. These boundaries are invisible contracts baked into how teams and systems reason about the digital world.
When one breaks silently, the failure can hide behind normal behaviour. Systems keep doing exactly what they were programmed to do, just on the wrong assumptions.
By the time patterns become visible, the real question isn’t “what failed?”, it’s “how many other systems still trust it?”
In this case, the DNS automation bug was just the first crack in a chain of invisible contracts that everyone assumed was safe.
AWS’s DNS automation followed its instructions perfectly, as automation does (otherwise why would we automate it?). The problem was that the instructions were out of date.
There’s a reason we automate things: automation is great at doing things quickly. That’s an obvious statement. Here’s another one: automation is terrible at deciding whether something should still be done. Otherwise it would be autonomy.
Across large complex systems, we see this dynamic repeatedly. As a matter of fact, Anthropic published a similar retrospective only days ago.
When every safeguard is automatic, you lose the pauses where intuition normally kicks in.
The result isn’t chaos, it’s confidence that everything must be working because no one has said otherwise.
In AWS’s timeline, DynamoDB errors appeared hours before EC2 and NLB issues were connected.
At scale, no single team owns the entire picture.
Each service has its own alerts, escalation policies, and vocabulary.
From inside DynamoDB, it looked like increased error rates.
From inside EC2, provisioning delays.
From NLB, unhealthy targets.
Every team was right; each view was just incomplete and missing context.
The coordination overhead of discovering that everyone is actually working on the same problem is massive.
I’ve heard endless stories about organizations spending more time figuring out who should respond than fixing what’s actually wrong. That’s not incompetence. Some of the smartest people in the world work at AWS and other large, complex companies. It’s just what happens when visibility is local and failure is global.
AWS actually fixed the race condition quickly, but the region didn’t return to steady state for hours.
In my humble opinion and experience, that’s normal. Distributed systems don’t snap back; they tend to drift toward normal states.
If you’re ever part of an outage like this, don’t expect recovery to be linear. Your systems aren’t waiting on you;
they are re-learning what “healthy” means.
The question after every incident shouldn’t be “How did this happen?”, it should be “How do we recognize it faster next time?”
AWS’s transparency helps remind everyone that even at hyperscale, the fundamentals are the same: boundaries drift, context fragments, automation repeats mistakes perfectly.
Reliability isn’t about stopping that, it’s about building the reflexes to see it sooner, talk about it clearly, and learn from it completely.
https://redd.it/1oej8pw
@r_devops
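A small aside to make the “invisible contract” point above concrete. This is not AWS’s code, just an illustration of the client’s view: whatever DNS answers is trusted, and the only signal that the contract broke is the request itself failing.

```python
import socket
import urllib.request

ENDPOINT = "dynamodb.us-east-1.amazonaws.com"  # any hostname works for the illustration

# The implicit contract: the caller assumes whatever DNS returns is a fresh, reachable address.
records = {info[4][0] for info in socket.getaddrinfo(ENDPOINT, 443, proto=socket.IPPROTO_TCP)}
print(f"{ENDPOINT} resolves to: {sorted(records)}")

# Nothing in that answer says whether the records were just rewritten by a misbehaving
# automation run. The only feedback the caller ever gets is the request succeeding or not.
try:
    resp = urllib.request.urlopen(f"https://{ENDPOINT}/", timeout=3)
    print(f"request succeeded with HTTP {resp.status}")
except Exception as exc:
    # A stale record, a dropped endpoint, or a genuinely down service all land here,
    # which is why no alert ever says "DNS is down", only "my calls are failing".
    print(f"request failed: {exc!r}")
```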
Finding the Right Audience Without Feeling “Salesy” or Pushy
I’ve been thinking a lot lately about how to genuinely connect with the right audience — whether it’s for a creative project, small business, content channel, or personal brand. There’s so much advice out there about “target demographics” and “Individual DM's,” but sometimes it feels like that turns people into metrics instead of humans.
How do you find and attract the audience who actually resonates with what you do without coming across as pushy or overly promotional?
https://redd.it/1oeksjg
@r_devops
New to Devops - Why Is Everything Structured Differently?
I’m currently transitioning from IT to DevOps at my workplace. So far, it’s been going okay, but one thing that confuses me is encountering code that’s structured differently from other code. It’s hard to find consistency. I’m not sure if it’s because I work at a startup, but I constantly have to dig to figure out why one thing has a certain feature enabled while another doesn’t. There are a lot of these "context-specific decisions" in our code base, and there are so many namespaces and so many models that it gets difficult to understand. Is this normal?
https://redd.it/1oejuje
@r_devops
Scheduling ML Workloads on Kubernetes
Hey guys. This article covers NVIDIA's KAI Scheduler, including gang scheduling, bin packing, consolidation, queue features, and more:
https://martynassubonis.substack.com/p/scheduling-ml-workloads-on-kubernetes
https://redd.it/1oehdnd
@r_devops
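If you just want a feel for what handing a workload to a non-default scheduler looks like, here is a minimal sketch using the Kubernetes Python client. The scheduler name kai-scheduler and the queue label key are assumptions to verify against the KAI Scheduler docs; the schedulerName mechanism itself is plain Kubernetes.

```python
from kubernetes import client, config

config.load_kube_config()

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(
        name="gpu-train-job",
        # Assumed label key for queue selection; check the scheduler's docs for the exact key.
        labels={"kai.scheduler/queue": "team-ml"},
    ),
    spec=client.V1PodSpec(
        scheduler_name="kai-scheduler",  # hand the pod to the custom scheduler instead of default-scheduler
        restart_policy="Never",
        containers=[
            client.V1Container(
                name="trainer",
                image="nvcr.io/nvidia/pytorch:24.01-py3",  # illustrative training image
                command=["python", "train.py"],
                resources=client.V1ResourceRequirements(limits={"nvidia.com/gpu": "1"}),
            )
        ],
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="ml-workloads", body=pod)
```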
Suggestions of tools to improve life quality of a devops engineer
I'm looking for suggestions that will improve my day-to-day operations as a DevOps engineer across the whole stack. For example, a tool or IDE that helps visualize and interact with a k8s cluster. I'm aware of something called Lens IDE but haven't looked too much into it. Or autocompletion/suggestions for Dockerfiles, etc. Anything, really. What is something you are using and would never go back to not using again?
https://redd.it/1oebaei
@r_devops
Anyone else feel AI is making them a faster typist, but a dumber developer? 😩
I feel like I'm not programming anymore, I'm just auditing AI output.
Copilot/Cursor is great for boilerplate. It’ll crank out a CRUD endpoint in seconds. But then I spend 3x the time trying to spot the subtle, contextual bug it slipped in (e.g., a tiny thread-safety issue, or a totally wrong way to handle an old library).
It feels like my brain’s problem-solving pathways are atrophying. I trade the joy of solving a hard problem for the anxiety of verifying a complex, auto-generated one. This isn't higher velocity; it's just a different, more draining kind of work.
Am I alone in feeling this cognitive burnout?
https://redd.it/1oepjg3
@r_devops
Spent 40k on a monitoring solution we never used.
The purchase decision:
- Sales demo looked amazing
- Promised AI-powered anomaly detection
- Would solve all our monitoring problems
- Got VP approval for 40k annual contract
What happened:
- Setup took 3 months
- Required custom instrumentation
- AI features needed 6 months of data
- Dashboard was too complex
- Team kept using Grafana instead
One year later:
- Login count: 47 times
- Alerts configured: 3
- Useful insights: 0
- Money spent: $40,000
Why it failed:
- Didn't pilot with smaller team first
- Bought for features, not current needs
- No champions within the team
- Too complex for our maturity level
- Existing tools were good enough
Lesson: Enterprise sales demos show what's possible, not what you need. Start with free tools and upgrade when you feel the pain.
(https://x.com/brankopetric00/status/1981484857440993523)
https://redd.it/1oeqkvs
@r_devops
Auto scaling RabbitMQ
I am busy working on a project to replace our AWS managed RabbitMQ service with RabbitMQ hosted on an EC2 instance. We want to move away from the managed service due to the mandatory maintenance window imposed by AWS.
We are a startup, so money is tight, and I am looking to do this in the most cost-effective manner.
My current thinking is having one dedicated reserved instance that runs 24/7.
Then having an ASG that is able to spin up a spot instance or two when we have a message storm.
We are an IoT company, and when the APN blips all our devices reconnect at once, causing our current RabbitMQ service's CPU to spike.
So I would like an extra node to spin up, assist the master node with processing, and then gracefully scale down again, leaving us with a single-instance Rabbit.
Is Rabbit built to handle this type of thing? I am getting conflicting information and I am looking to hear from someone else who has gone down this route before.
Any advice or experience welcome.
https://redd.it/1oeqo8r
@r_devops
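The AWS half of that idea is straightforward; a rough boto3 sketch is below, with placeholder names. The harder half is RabbitMQ itself: a joining node has to be clustered with the primary, and queues have to be of a type that actually spreads work, so treat the scaling policy as the easy part and test the broker behaviour under a simulated reconnect storm.

```python
import boto3

ASG_NAME = "rabbitmq-burst-asg"              # placeholder: ASG that launches the spot helper nodes
PRIMARY_INSTANCE_ID = "i-0123456789abcdef0"  # placeholder: the always-on reserved broker

autoscaling = boto3.client("autoscaling")
cloudwatch = boto3.client("cloudwatch")

# Step-scaling policy on the burst ASG: add one spot instance each time the alarm fires.
policy = autoscaling.put_scaling_policy(
    AutoScalingGroupName=ASG_NAME,
    PolicyName="scale-out-on-broker-cpu",
    PolicyType="StepScaling",
    AdjustmentType="ChangeInCapacity",
    StepAdjustments=[{"MetricIntervalLowerBound": 0.0, "ScalingAdjustment": 1}],
)

# Alarm on the primary broker's CPU, which is the reconnect-storm signature described above.
cloudwatch.put_metric_alarm(
    AlarmName="rabbitmq-primary-cpu-storm",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": PRIMARY_INSTANCE_ID}],
    Statistic="Average",
    Period=60,
    EvaluationPeriods=2,
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=[policy["PolicyARN"]],
)
```

A mirror-image scale-in policy and alarm, plus a lifecycle hook to drain the helper node before termination, round out the picture.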
A fast, private, secure, open-source S3 GUI
Since the web interfaces for Amazon S3 and Cloudflare R2 are a bit tedious, a friend of mine and I decided to build nicebucket, an open-source alternative using Tauri and React, released under the GPLv3 license.
I think it is useful for anyone who works with S3, R2, or any other S3 compatible service. We do not track any data and store all credentials safely via the native keychains.
We are still quite early so feedback is very much appreciated!
https://redd.it/1oeql17
@r_devops
Built a desktop app for unified K8s + GitOps visibility - looking for feedback
Hey everyone,
We just shipped something and would love honest feedback from the community.
What we built: Kunobi is a new platform that brings Kubernetes cluster management and GitOps workflows into a single, extensible system — so teams don’t have to juggle Lens, K9s, and GitOps CLIs to stay in control.
We make it easier to use Flux and Argo by enabling seamless interaction with GitOps tools. We’ve focused on addressing pain points we’ve faced ourselves — tools that are slow, memory-heavy, or just not built for scale.
Key features include:
Kubernetes resource discovery
Full RBAC compliance
Multi-cluster support
Fast keyboard navigation
Helm release history
Helm values and manifest diffing
Flux resource tree visualization
[Here’s a short demo video for clarity.](https://youtu.be/y0m5L_XqGps?si=CSKS5Dqby-NqIixH)
Who we are: Kunobi is built by Zondax AG, a Swiss-based engineering team that’s been working in DevOps, blockchain, and infrastructure for years. We’ve built low-level, performance-critical tools for projects in the CNCF and Web3 ecosystems - Kunobi started as an internal tool to manage our own clusters, and evolved into something we wanted to share with others facing the same GitOps challenges.
Current state: It’s rough and in beta, but fully functional. We’ve been using it internally for a few months.
What we’re looking for:
Feedback on whether this actually solves a real problem for you
What features/integrations matter most
Any concerns or questions about the approach
Fair warning — we’re biased since we use this daily. But that’s also why we think it might be useful to others dealing with the same tool sprawl.
Happy to answer questions about how it works, architecture decisions, or anything else.
🔗 https://kunobi.ninja — download the beta here.
https://redd.it/1oetwyc
@r_devops
MongoDB Pod doesn't create User inside container
This is my MongoDB manifest YAML file. When the pod runs successfully, I check inside the MongoDB container and my user hasn't been created, even though I added mono-init.js to the docker-entrypoint-initdb.d folder.
I do the same thing with docker-compose and everything works fine!
How do I fix this issue? Please help me.
https://redd.it/1oeuvm5
@r_devops
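Two things usually explain this with the official mongo image: the script has to actually be mounted into /docker-entrypoint-initdb.d (in Kubernetes that means a ConfigMap or similar volume, not just having the file in your repo), and the entrypoint only runs those scripts when the data directory is empty, so an existing PersistentVolume silently skips them; docker-compose often "works" simply because the volume was freshly created. A minimal sketch of the mount using the Kubernetes Python client; names and credentials are placeholders, and double-check the script filename you reference:

```python
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()
NS = "default"  # placeholder namespace

# The init script the entrypoint should pick up. It only runs when /data/db is empty.
init_js = """
db.getSiblingDB("appdb").createUser({
  user: "appuser", pwd: "changeme",
  roles: [{ role: "readWrite", db: "appdb" }]
});
"""

core.create_namespaced_config_map(NS, client.V1ConfigMap(
    metadata=client.V1ObjectMeta(name="mongo-init"),
    data={"mongo-init.js": init_js},  # make sure this filename matches the one your manifest references
))

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="mongodb"),
    spec=client.V1PodSpec(
        containers=[client.V1Container(
            name="mongodb",
            image="mongo:7",
            env=[
                client.V1EnvVar(name="MONGO_INITDB_ROOT_USERNAME", value="root"),
                client.V1EnvVar(name="MONGO_INITDB_ROOT_PASSWORD", value="changeme"),
            ],
            volume_mounts=[client.V1VolumeMount(
                name="init-scripts", mount_path="/docker-entrypoint-initdb.d", read_only=True,
            )],
        )],
        volumes=[client.V1Volume(
            name="init-scripts",
            config_map=client.V1ConfigMapVolumeSource(name="mongo-init"),
        )],
    ),
)
core.create_namespaced_pod(NS, pod)
```

If the pod has already started once against a persistent volume, delete or replace that volume so the entrypoint sees an empty data directory and runs the init scripts again.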
Real-world production work for an Ansible CV
Hi all,
I have a network engineering background.
I have written playbooks for network devices, mainly for F5.
But I was contacted for an Ansible job, so I need to put more "system" or DevOps-type projects on my CV.
Can you give me ideas of what you are doing in production, so I can do it myself and put it on my CV?
Would an Ansible certificate be useful? I have the basics.
https://redd.it/1oetwcf
@r_devops
Only allow specific country IP range to SSH
Hi, may I know the simplest way to allow only a specific country's IP ranges to access SSH on my VPS?
I prefer using UFW rather than iptables, because I am a newbie and afraid that drilling down into iptables will mess things up.
I am reading this post but am not sure if it's valid for Ubuntu.
https://blog.reverside.ch/UFW-GeoIP-and-how-to-get-there/
https://redd.it/1oexn4l
@r_devops
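For the UFW route, the usual shape is: default-deny incoming, then one allow rule for port 22 per CIDR in a country zone file from your GeoIP source. A minimal sketch (the zone file path and IPs are placeholders); add a rule for your own current IP first so you cannot lock yourself out, and keep in mind that large zone files mean thousands of rules, which is the point where iptables/ipset or nftables sets become the better fit:

```python
import subprocess
from pathlib import Path

ZONE_FILE = Path("/etc/geoip/country.zone")  # placeholder: one CIDR per line from your GeoIP provider
MY_IP = "198.51.100.7"                       # placeholder: your current IP, allowed first as a safety net

def ufw(*args: str) -> None:
    # Must run as root; check=True aborts the run on the first rejected rule.
    subprocess.run(["ufw", *args], check=True)

# Safety net so an empty or broken zone file cannot lock you out of SSH.
ufw("allow", "from", MY_IP, "to", "any", "port", "22", "proto", "tcp")

# Allow SSH only from the chosen country's published ranges.
for line in ZONE_FILE.read_text().splitlines():
    cidr = line.strip()
    if cidr and not cidr.startswith("#"):
        ufw("allow", "from", cidr, "to", "any", "port", "22", "proto", "tcp")

# With default-deny incoming, anything not matched by an allow rule above is dropped.
ufw("default", "deny", "incoming")
ufw("--force", "enable")
```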
our postmortem from last week just identified the same root cause from june
had database connection pool exhaustion issue last tuesday. took three hours to fix. wrote the postmortem yesterday and vp pointed out we had the exact same issue in june.
pulled up that postmortem. action items were increase pool size and add better monitoring. neither happened because we needed to ship features to stay competitive.
so we shipped features for four months while the known prod issue sat unfixed. then it broke again and leadership acted shocked.
now they want to know why we keep having repeat incidents. maybe because postmortem action items go into backlog behind feature work and nobody looks at them until the same thing breaks again.
third time this year we've had a repeat incident where the fix was documented but never implemented. starting to wonder why we even write postmortems if nothing changes.
how do you actually get action items prioritized or is this just accepted everywhere?
https://redd.it/1oeyqqd
@r_devops
Database branches to simplify CI/CD
Careful some self-promo ahead (But I genuinely think this is an interesting topic to discuss).
In my experience failed migrations and database differences between environments are one of the most common causes of incidents. I have had failed deployments, half-applied migrations and even full-blown outages because someone didn't consider the legacy null values that were present in production but not on dev.
Many devs think "down migrations" are the answer to this. But they are hard to get right since a rollback of the code usually also removes the migration code from the container.
I work at Tiger Data (formerly Timescale) and we released a feature this week to fork an existing database. I wasn't involved in the development of the underlying tech, but it uses a copy-on-write mechanism that makes the process complete in under a minute. In my opinion these kinds of features are a great way to simplify CI/CD and prevent issues like the ones I mentioned above.
Modern infrastructure like this (e.g. Neon also has branches) offers a lot of options to simplify CI/CD. You can cheaply create a clone of your production database and use it for testing your migrations. You even get a good idea of how long your migrations will take to run by doing that.
Of course you'll also need to clean up again and figure out whether the additional cost of automatically running a DB instance in your workflow is worth it. You could in theory go even further and use the mechanism to spin up a complete test environment for each PR a developer creates, similar to how this is often done for frontend changes in my experience.
In practice a lot of the CI/CD setups I have worked with in other companies are really dusty and do not take advantage of the capabilities of the infrastructure that is available. It's also often hard to get buy-in from decision makers to invest time in this kind of automation. But when it works, it is downright beautiful.
https://redd.it/1of09uc
@r_devops
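A rough shape of the per-PR migration check described above. The fork/branch creation is provider-specific, so those calls are left as clearly marked placeholders; the migration runner here is Alembic and assumes your env.py reads DATABASE_URL, so substitute whatever tool you actually use:

```python
import os
import subprocess
import time

def create_fork() -> str:
    """Placeholder: call your provider's CLI/API to fork production and return the fork's connection URL."""
    raise NotImplementedError("e.g. a Tiger Data or Neon fork/branch created via their tooling")

def drop_fork(fork_url: str) -> None:
    """Placeholder: delete the fork when the check is done so it stops costing money."""
    raise NotImplementedError

def test_migrations_against_fork() -> None:
    fork_url = create_fork()
    try:
        started = time.monotonic()
        # Assumes alembic/env.py picks up DATABASE_URL; any migration tool with a URL override works the same way.
        subprocess.run(
            ["alembic", "upgrade", "head"],
            env={**os.environ, "DATABASE_URL": fork_url},
            check=True,
        )
        print(f"migrations applied against a production-shaped fork in {time.monotonic() - started:.1f}s")
    finally:
        drop_fork(fork_url)

if __name__ == "__main__":
    test_migrations_against_fork()  # wire this into the CI job that runs on PRs touching migrations
```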
Linux admin to devops
I am moving from a Linux admin role to a DevOps role via an internal move.
The thing is, I know a little of Ansible, Terraform, Docker, Kubernetes and Jenkins, but I don't write anything complex or big, and I won't have many people to guide me in the new team. How should I start now, and where should I begin? I have a month before I land in the new team.
https://redd.it/1oezeoz
@r_devops
Question: Version Bumping and Automating Releases
I work at a small company (2-person dev team) and there are no real protocols in place for version control or CI/CD. It's basically very smart scientists creating tools to aid R&D and QA on our product.
I don't want to re-invent the wheel, but I also want to take advantage of the freedom I have at work to learn how these processes and tools come about.
Our entire tech stack is basically Python, using PyQt to make Windows desktop applications (yes, I'm developing entirely on Windows).
The workflow I've come up with is the following:
- Versions are tracked in a .py file, referenced by my PyInstaller .spec file and by main.py to update the version shown in the app and in the file name after compiling
- I have a script that bumps the version on dev when I'm ready to put out a new release; it accepts an input of major, minor, or patch to determine how the version is bumped
- The script pushes the tag to main, which then triggers a GitHub Actions workflow
- The workflow compiles the app and creates a release with a changelog generated from the commits between version tags (e.g. a summary of commits between v1.0.0..v1.1.0)
I'm trying to implement a git-flow branching system, but have not incorporated release branches yet. Here's some ASCII art from Claude (with a review and edits) attempting to demonstrate my release workflow from what I described (going bottom to top, like git log):
* Merge main back into dev - sync release v1.2.0 (HEAD -> dev)
|\
| * v1.2.0 - release tagged on main (release created on GH here) (tag: v1.2.0, main)
| |\
| | * Merge dev into main for release v1.2.0
| |/
| * QA complete on dev (dev)
| * Merge feat/fix into dev
| |\
| | * Implement feature X (feat/fix)
| | * Branch feat/fix created from dev
| |/
* Dev baseline before feature work
I know the workflow is missing release branches; ideally it would go feat -> dev -> release -> main, with the GitHub release created from main and a hotfix branch off main if needed.
My question is mostly about the automation of all the above workflows. How are people managing versions? Is a .py file, given my stack, a reasonable/professional approach?
Could I offload more of this process to GH Actions, for example, and have a script that is just called release.py or .sh that triggers this entire process?
https://redd.it/1of1sa6
@r_devops
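On the "offload more to GH Actions" question above, one common split is: a tiny local script that only bumps the version file, commits, and pushes a tag, with a tag-triggered workflow doing the PyInstaller build and the release (gh release create supports --generate-notes, and git log v1.0.0..v1.1.0 gives you the commit summary). A minimal sketch of the script half; the file name and version format are placeholders matching the setup described above:

```python
# release.py (sketch): bump the version file, commit, tag, push.
# Assumes a file named version.py containing a line like: __version__ = "1.2.3"
import re
import subprocess
import sys
from pathlib import Path

VERSION_FILE = Path("version.py")  # placeholder; point at whatever your .spec and main.py import

def bump(kind: str) -> str:
    text = VERSION_FILE.read_text()
    major, minor, patch = map(int, re.search(r'__version__ = "(\d+)\.(\d+)\.(\d+)"', text).groups())
    if kind == "major":
        major, minor, patch = major + 1, 0, 0
    elif kind == "minor":
        minor, patch = minor + 1, 0
    else:
        patch += 1
    new_version = f"{major}.{minor}.{patch}"
    VERSION_FILE.write_text(re.sub(r'__version__ = "[^"]+"', f'__version__ = "{new_version}"', text))
    return new_version

if __name__ == "__main__":
    version = bump(sys.argv[1] if len(sys.argv) > 1 else "patch")
    for cmd in (
        ["git", "commit", "-am", f"Release v{version}"],
        ["git", "tag", f"v{version}"],
        ["git", "push", "origin", "HEAD", f"v{version}"],
    ):
        subprocess.run(cmd, check=True)
    # A workflow triggered on "push: tags: ['v*']" then runs PyInstaller and something like
    # `gh release create v<version> dist/* --generate-notes` to publish the build and changelog.
```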
Outsider Curiosity - Outages
I sat through the Alaska Airlines “IT outage” yesterday and it got me very curious about how these situations get managed behind the scenes.
I’m very curious to know how many people are involved in troubleshooting/debugging something like that. Is there a solid staff that’s scheduled around the clock that can be trusted? Or does the company have to call in the savant no matter what time of day it is? Intuitively I feel like this could potentially be a “too many cooks in the kitchen” situation if the task isn’t handed over to a select group.
Are you clocking overtime during these situations or everyone’s salaried and just has to suck it up? Are the suits breathing down your neck during an outage or do they give you some space to work?
I feel like there must be some good insider stories here that I haven’t heard/read before. Feel free to link me any reading. Apologies if this is a common post in this sub, it’s just been on the front of my mind since last night.
https://redd.it/1of4qje
@r_devops