Building Docker Images with Nix
I've been experimenting with creating container images via Nix and wanted to share with the community. I've found the results to be rather insane!
[Check it out here!](https://github.com/okwilkins/h8s/tree/f7d8832efce6a19bb32cdc49b39928f8de49db80/images/image-buildah)
The project linked is a fully worked example of how Nix is used to make a container that can create other containers. These will be used to build containers within my homelab and self-hosted CI/CD pipelines in Argo Workflows. If you're into homelabbing, give the wider repo a look through as well!
Using Nix allows for the following benefits:
1. The shell environment and binaries within the container are nearly identical to the shell Nix can provide locally.
2. The image is built from scratch.
* This means the image is nearly as small as possible.
* Security-wise, fewer binaries are left in compared to Alpine- or Debian-based images.
3. As Nix flakes pin the exact versions, all binaries will stay at a constant and known state.
* With Alpine or Debian based images, when updating or installing packages, this is not a given.
4. The commands run via Taskfile will be the same locally as they are within CI/CD pipelines.
5. It easily allows for images targeting different CPU architectures, as well as local dev.
The only big downside I've found is that when running the `nix build` step, the cache is often invalidated, causing the image to be almost completely rebuilt every time.
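For context, a minimal sketch of what such a flake output can look like using nixpkgs' dockerTools (attribute and package names here are illustrative, not the linked repo's actual code):

```nix
{
  # Illustrative flake output; see the linked repo for the real, fully worked version.
  packages.x86_64-linux.image = pkgs.dockerTools.buildLayeredImage {
    name = "image-buildah";
    tag = "latest";
    contents = [ pkgs.buildah pkgs.coreutils ];
    config.Cmd = [ "${pkgs.buildah}/bin/buildah" "--version" ];
  };
}
```

`buildLayeredImage` puts store paths into separate layers, which can soften the rebuild cost mentioned above, since unchanged layers are reused on push/pull even when the build itself reruns.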
Really interested in knowing what you all think!
https://redd.it/1p7mpnd
@r_devops
Skill Rot from First DevOps-Adjacent Job. Feel Like I Don’t Have the Skills to Jump.
Hello, intelligentsia of the illustrious r/devops. I'm in a bit of a pickle and am looking for some insight. So I'm about a year and a couple of months into my first job, which happens to be in big tech. The company is known to be very stable and a "rest and vest" sort of situation with good WLB.
My work abstractly entails ETL operations on internal documents. The actual transformation here is usually comprised of Node scripts that find metadata in the documents and re-insert it, either in its original form or transformed by some computation, into a simplified version of the documents (think HTML flattening) before dropping them in an S3 bucket. I also schedule and create GitHub Actions jobs for these operations based on jobs already established. Additionally, we manage our infrastructure with Terraform and AWS. The pay is very good for this early in my career.
This is where the big wrinkle comes in: it seems that our architecture and processes are very mature and the team's pace is very slow/stable. I looked back at all my commits in the months since I started working and was shocked at how few code contributions I've made. In terms of infrastructure, the only real exposure I've had is through routine, run-book-style operations. I haven't actually been able to alter the Terraform files in all the time I've been here. There is a lot of tedious/rote work. My most significant contributions have been on the ETL side.
At this point some may say to communicate with my boss and ask for more on the infra side / more complex tasks. However, the issue is that it genuinely doesn't seem that there are many more complex things to do. I realized recently that the second most junior person on the team, who's been here a couple more years than I have and has also had more jobs than I have, doesn't seem to do work that's much more complex than mine. The most complex work just goes to the senior engineer, and I suspect it's been like this for a while. I had a feeling that this position may be bad for my career 6 months in but held out hope until now, and I'm now afraid I realized too late.
I am hoping to find a junior DevOps role, but I am feeling fearful and overwhelmed since 1. I barely have the experience needed for DevOps with how surface-level my experience here has been and 2. the job market seems vicious. I am beginning to upskill and work on getting a tight understanding of Python, Docker, Kubernetes, and AWS. I also plan to make some projects. I hope to hop within the next 6 months.
I guess my questions with all this information in mind are:
1. Is my plan realistic? How much do projects showing self-learned DevOps skills really matter when the job I performed did not actually require or teach those skills? Short of lying, this will put me at a significant disadvantage, right?
2. If you were in my position how would you handle this?
Thank you all in advance. I’m feeling very uncertain about the future of my career.
https://redd.it/1p7sd6t
@r_devops
How do you make fzf ignore filesystem areas when you don't have a global gitignore and are not necessarily in a git folder?
I think the fzf docs show how to filter out gitignored items like node_modules and dist, but it still pulls a lot of unwanted results from XDG paths like .cache/bun/install/cache, for example.
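One common workaround, assuming the `fd` file finder is installed, is to point fzf's default command at fd with explicit excludes: fd respects .gitignore inside repos, and the excludes cover cache directories everywhere else. A sketch (the exclude list is illustrative):

```shell
# Candidate list for fzf: respect .gitignore where present, always skip noisy dirs.
# Extend the excludes for your own XDG cache paths (e.g. .cache/bun/install/cache).
export FZF_DEFAULT_COMMAND='fd --type f --hidden --exclude .git --exclude node_modules --exclude .cache'
# Make Ctrl-T use the same candidate source
export FZF_CTRL_T_COMMAND="$FZF_DEFAULT_COMMAND"
```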
https://redd.it/1p7nj2m
@r_devops
Our dev workflow feels like a group project gone wrong
I need ONE platform that unifies everyone and lets us track dependencies in a way humans can actually understand. Design, product, marketing, and dev teams all contribute to our releases, but no one sees the same information. Marketing launches features before they’re done. Product teams write requirements no one reads. Devs don’t know what’s blocked until it's too late.
https://redd.it/1p7v4ev
@r_devops
Devops teams: how do you handle cost tracking without it becoming someone's full time job?
Our cloud costs have been creeping up and leadership wants better visibility, but I'm trying to figure out how to actually implement this without it becoming a huge time sink for the team. We're a small DevOps group, 6 people, managing infrastructure for the whole company.
Right now, cost tracking is basically whoever has time that week pulling some reports from AWS Cost Explorer and trying to spot anything weird. It's reactive, inconsistent, and honestly pretty useless. But I also can't justify having someone spend 10+ hours a week on cost analysis when we're already stretched thin.
What I'm looking for is a way to handle this that's actually sustainable:
- automated alerts when costs spike or anomalies happen, not manual checking
- reports that generate themselves and go to the right people without intervention
- recommendations we can actually act on quickly, not deep analysis projects
- something that integrates into our existing workflow instead of being a separate thing to maintain
- visibility that helps the team make better decisions during normal work, not a separate cost optimization initiative
Basically, I want cost awareness to be built into how we operate, not a side project that falls on whoever drew the short straw that quarter.
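For the alerting piece, AWS's own primitives can get you surprisingly far before reaching for a FinOps product. A minimal sketch that writes a monthly budget definition and shows the CLI call to register it (account ID and amounts are placeholders; notification subscribers would go in a second JSON file):

```shell
# Budget definition with placeholder values.
cat > budget.json <<'EOF'
{
  "BudgetName": "monthly-infra-budget",
  "BudgetLimit": { "Amount": "5000", "Unit": "USD" },
  "TimeUnit": "MONTHLY",
  "BudgetType": "COST"
}
EOF

# Requires AWS credentials; shown for illustration:
# aws budgets create-budget --account-id 123456789012 --budget file://budget.json \
#   --notifications-with-subscribers file://notifications.json
```

For the spike/anomaly side specifically, AWS Cost Anomaly Detection (`aws ce create-anomaly-monitor` plus `create-anomaly-subscription`) sends alerts without anyone manually checking dashboards.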
How are other small devops teams handling this? What's actually worked in practice?
https://redd.it/1p7xpx3
@r_devops
How are you handling AIsec for developers using ChatGPT and other GenAI tools?
Found out last week that about half our dev team has been using ChatGPT and GitHub Copilot for code generation. Nobody asked permission, they just started using it. Now I'm worried about what proprietary code or sensitive data might have been sent to these platforms.
We need to secure and govern the usage of generative AI before this becomes a bigger problem, but I don't want to just ban it and drive it underground. Developers will always find workarounds.
What policies or technical controls have worked for you? How do you balance AI security with productivity?
https://redd.it/1p7xih3
@r_devops
ArgoCD but just for Docker containers
Kubernetes can be overkill, and I bet some folks are still running good old Docker Compose with custom automation.
I was wondering: what if there were an ArgoCD-like tool, but just for Docker containers? Obviously, compared to Kubernetes, it wouldn't be feature-complete. But that's kind of the point.
Does such a tool already exist? If yes, please let me know! And if it did, would it be useful to you?
https://redd.it/1p7xv48
@r_devops
Funny how the worst DevOps bottlenecks have nothing to do with tools, and almost nobody brings them up.
Every time people talk about DevOps, the conversation somehow circles back to tools, CI/CD choices, Kubernetes setups, IaC frameworks, whatever. But the longer I’ve worked with different teams, the more I’m convinced the biggest bottlenecks aren’t usually the tools.
It’s all the weird “in-between” stuff nobody ever brings up.
One thing I keep running into is just… messy handoffs. A feature is “done,” but the tests are half-missing, or the deploy requirements aren’t clear, or the local/staging/prod environments are all slightly different in ways that break everything at the worst possible moment.
None of that shows up in a DevOps guide, but it slows things down more than any actual infrastructure issue.
Another one, slow feedback loops. When a pipeline takes 20-30 minutes per commit, people won’t say anything, but they silently start pushing code less often.
It completely changes how the team works, even if the pipeline is technically “fine.”
Anyway, I’m curious what other people have seen.
What’s a DevOps bottleneck you’ve dealt with that doesn’t really get talked about?
https://redd.it/1p7ynlq
@r_devops
Deployment to production . Docker containers
We have an automated CI/CD environment for dev, triggered by any changes to the dev branch. Most of the artifacts are either React apps or Docker containers.
Now we need to move these containers to a prod environment. Assume AWS and a different region.
How do we deploy specific containers? Would it be manual, given the containers are already built, with scripts written just to deploy a certain Docker image to the different region?
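It doesn't have to be manual. One low-tech option is a promotion script that retags an already-built image into the prod region's registry; with ECR you can alternatively configure built-in cross-region replication. A sketch, assuming ECR, with placeholder account ID, regions, and repo name:

```shell
# Promote an already-built image from the dev-region ECR to the prod-region ECR.
# Account ID, regions, and repository name below are placeholders.
promote_image() {
  src="123456789012.dkr.ecr.eu-west-1.amazonaws.com/myapp:$1"
  dst="123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp:$1"
  # Authenticate docker against the destination registry
  aws ecr get-login-password --region us-east-1 |
    docker login --username AWS --password-stdin "${dst%%/*}" &&
  docker pull "$src" &&
  docker tag "$src" "$dst" &&
  docker push "$dst"
}
# Usage: promote_image v1.2.3
```

The same function slots into a pipeline step so that promotion to prod is a gated, repeatable action rather than someone running commands by hand.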
https://redd.it/1p7z1ij
@r_devops
QA/Dev AI testing tool
Hey everyone! I’m working on a new AI-powered QA tool called Sentinel that’s still in development, but we’ve got a few features ready to test out and I’d love to get some real-world feedback. Basically, it helps with things like self-healing tests, AI-driven dashboards, and visual regression comparisons, and I’m looking for a couple of companies or teams who might want to give it a spin and let me know what they think. If you’re interested in trying it out and giving some feedback, just let me know!
P.S.
It's not a magic AI tool that claims it's going to take over your testing. It's more of a dev-focused tool that provides insights and gives suggestions.
https://redd.it/1p82kdi
@r_devops
SWE with 7 yoe, thinking about applying to an internal devops/kubernetes role. Advice?
Hello everyone. I’ve been thinking about making a move into a DevOps/kubernetes role at my company, and wanted to hear from people with real experience in the field.
A bit about my background:
- 7 yoe in big data/software development/data engineering, including about 4 years of Python and general scripting
- 4 yoe working directly with Kubernetes. Writing Helm charts, deploying and maintaining internal apps, debugging, etc.
- 4 yoe managing multiple EKS clusters, handling upgrades with terraform, maintaining monitoring stacks, etc.
Reasons for wanting to make the jump:
- I enjoy managing our EKS infrastructure. I enjoy working with kubernetes.
- I’ve become a bit disinterested in coding. Particularly the CRUD apps. With how much AI can handle now, it’s honestly demotivating, and I really dislike the typical software engineering interview process.
- Maybe this is naïve, but DevOps feels like one of the more AI-safe areas. Much of my software development work can be heavily automated, but the debugging and fire-fighting we do in our current infrastructure feels a lot harder for AI to replace anytime soon.
Reasons I’m hesitant:
- It’s a new domain. I think I have a leg up with my current k8s experience, but I really lack networking/linux expertise.
- Stress level. I’m certainly no stranger to late night fire fighting and upgrades. But I’m not sure how much I can handle in the long term.
- Long term outlook. Is this field going to have a future as AI grows?
- Maybe I'm in a bit of a “grass is greener” scenario?
Just seeking some advice/opinions from more experienced folk.
https://redd.it/1p86b77
@r_devops
Developed a tool for instant, local execution of AI-generated code — no copy/paste.
Create more bad code! Do more vibe coding with fully automated degeneration with Auto-Fix!
People hate AI Reddit posts, so I'll keep it real: the project was, of course, vibe coded.
But it's fully working and tested. You can use it with Ollama or any API (Google, Claude, OpenAI, or your mother).
You have a vibe, you tell it, the AI codes it and executes it locally on your machine (you're fucked), but NO, it's in Docker, so not yet, and you can even export this Docker container. If there is an error, it sends the error back and generates new code that hopefully works.
As you're prompting like a monkey, it doesn't matter: someday the Auto-Fix will fix it for you. You have no idea what just happened, but things are working?
Great, now you can export the whole Docker container with the program inside and ship it to production ASAP. What a time to be alive!
https://github.com/Ark0N/AI-Code-Executor
Below the "serious" information:
https://redd.it/1p87pub
@r_devops
Manage cultural change
Hello,
Coming from a technical background, I’ve recently been offered the opportunity to become an observability advocate at my current organization, within a team that promotes DevOps and manages the so-called “DevOps” tools (closer to platform engineering).
The current situation is the result of a legacy, highly siloed structure: developers are not very engaged in observability. They either lack time, interest, or feel it isn’t their responsibility. Operations are still handled by dedicated teams using older processes and tools, and developers or application managers are only involved when incidents are escalated through tickets.
A new observability platform has been purchased, but it hasn’t yet been fully integrated into existing processes.
I’m curious to hear about your experience: how would you approach cultural change in this situation? How can we encourage people to invest in observability and take more ownership of their applications (“you build it, you run it”)?
I’m also open to any resources you can share on driving cultural change, as this is still relatively new to me.
Thank you all for reading, and for any help you can provide.
https://redd.it/1p88ao8
@r_devops
From wanting to have more storage to building a homelab to a start in Devops
https://www.reddit.com/gallery/1p7zt0n
https://redd.it/1p7ztcg
@r_devops
I built a "Portable" Postgres/FastAPI stack with baked-in DR, Connection Pooling, and Load Testing
https://github.com/Selfdb-io/SelfDB-mini
We all know moving stateless containers is trivial, but moving stateful workloads (databases) usually involves a manual checklist of pg_dump, scp, volume mounting, and re-aligning environment variables.
I built SelfDB-mini to make the "stateful" part as portable as the container itself, specifically for self-hosted or on-prem environments where you don't have managed RDS.
The "Disaster Recovery" Approach:
Instead of relying on external backup agents, the system treats the database state and the runtime configuration as a single portable unit. It bundles the SQL dump and the .env config into a .tar.gz artifact.
Migration: Spin up a fresh `docker-compose` stack on a new server, upload the artifact via the UI (or CLI), and the system restores the DB and injects the config automatically.
The Architecture (Batteries Included):
I didn't want a toy setup, so I included the infrastructure needed for stability:
Connection Pooling: PgBouncer is pre-configured in front of PostgreSQL 18. (Essential for async Python apps to prevent connection exhaustion).
Observability/Testing: I baked in Locust for load testing and Schemathesis for API contract testing, so you can validate the stack immediately after deployment.
Backend: FastAPI (Python 3.11) running on uv.
It’s open-source and fully Dockerized. I’d love to hear your thoughts on this "snapshot" approach for smaller deployments versus traditional streaming replication.
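Stripped to its essentials, the snapshot idea can be approximated with two small shell functions (DATABASE_URL and the file names are placeholders; the actual project wires this into its UI/CLI rather than raw shell):

```shell
# Bundle the database dump and runtime config into one portable artifact,
# and restore it on a fresh host. Requires pg_dump/pg_restore and a reachable DB.
make_snapshot() {
  pg_dump "$DATABASE_URL" -Fc -f db.dump &&
  tar -czf snapshot.tar.gz db.dump .env
}

restore_snapshot() {
  tar -xzf snapshot.tar.gz &&
  pg_restore --clean --if-exists -d "$DATABASE_URL" db.dump
}
```

The trade-off versus streaming replication is the obvious one: a snapshot is point-in-time, so anything written after `make_snapshot` is lost on restore, which is usually acceptable for small self-hosted deployments.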
https://redd.it/1p8bke3
@r_devops
I built a Python ingestion pipeline to archive Reddit data locally.
I needed a way to archive and analyze large volumes of text data (specifically engineering career discussions) from Reddit without relying on the heavy overhead of Selenium, and without going through PRAW either.
It's an ingestion pipeline (ORION) that runs locally.
The Architecture:
Ingestion: Python requests hitting Reddit's JSON endpoints directly rather than parsing HTML.
Rate Limiting: Implemented a custom delay logic to handle HTTP 429 backoffs without getting the IP blacklisted.
Transformation: Parses the raw nested JSON tree, cleans the data (removes stickies/automod spam), and structures it into linear text/PDF reports.
Resource Usage: Runs on minimal resources (no headless browser required).
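The 429 handling described above can be sketched as a generic retry wrapper (the function name and parameters are my own, not ORION's actual code; in practice `fetch` would be a `requests.get` call with a descriptive User-Agent, and the response fields map to `status_code` and the `Retry-After` header):

```python
import time

def fetch_with_backoff(fetch, url, max_retries=5, base_delay=1.0):
    """Call fetch(url); on HTTP 429, sleep with exponential backoff,
    honouring a server-supplied retry_after hint when present.

    fetch is expected to return a dict like {"status": int, ...},
    a simplification of a real requests.Response for this sketch.
    """
    for attempt in range(max_retries):
        resp = fetch(url)
        if resp["status"] != 429:
            return resp
        # Prefer the server's hint; otherwise back off exponentially.
        delay = float(resp.get("retry_after") or base_delay * 2 ** attempt)
        time.sleep(delay)
    raise RuntimeError(f"still rate limited after {max_retries} tries: {url}")
```

Keeping the HTTP call injectable also makes the backoff logic trivially testable without touching the network.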
It’s a specific tool for a specific job, but I thought the approach of hitting the JSON endpoints directly might be interesting to anyone looking to build lightweight scrapers.
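The transformation step above, walking Reddit's nested comment tree while dropping stickied/AutoModerator entries, might look roughly like this (the field names follow Reddit's public JSON shape — `kind`, `data`, `children`, `replies` — but the helper itself is hypothetical, not ORION's code):

```python
def flatten_comments(listing, depth=0):
    """Walk a Reddit comment Listing, yielding (depth, author, body)
    tuples and skipping stickied or AutoModerator comments."""
    if not listing:  # Reddit uses "" for empty replies
        return
    for child in listing["data"]["children"]:
        if child["kind"] != "t1":  # skip 'more' stubs and non-comments
            continue
        d = child["data"]
        if d.get("stickied") or d.get("author") == "AutoModerator":
            continue
        yield depth, d["author"], d["body"]
        yield from flatten_comments(d.get("replies"), depth + 1)
```

The generator yields comments in reading order with their nesting depth, which is all a linear text/PDF report needs.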
Source Code: https://mrweeb0.github.io/ORION-tool-showcase/
It's non-promotional and fully open source; dig through it.
Feedback on the error handling logic is welcome.
https://redd.it/1p8aujq
@r_devops
What’s the right way to deal with a QA team that slows down your workflow?
I am a dev and I'm running into some issues with my QA team. I'm trying to get a clear picture of what's actually causing them: we keep seeing vague bug reports, inconsistent coverage, and build/test mismatches, and it slows things down more than it should. Don't get me wrong, I'm not looking to blame anyone here; I've worked with brilliant QA teams before and know how important the role is.
I just want to understand where these breakdowns usually start, how to address them without creating internal conflict, and what a healthy QA–dev process actually looks like. I appreciate everyone's feedback.
Small PS: please be respectful and contribute productively to the thread.
https://redd.it/1p8em0j
@r_devops
Devops Job Titles question
I used to work for an AWS Ops Center, where we mostly monitored and tracked/recorded alerts through CloudWatch.
After 2 years with the company they gave me AWS admin rights. The developers were not able to trigger the cards in Jenkins themselves since they weren't admins, so they trusted me to do it. The admin rights also let me grant/deny developers access to instances/databases for a limited amount of time (while they deployed their code).
Since I don't have any coding background, I feel I'm not qualified to apply to DevOps positions. However, are there other positions I could apply to? Are there job descriptions out there that focus on monitoring? Maybe I could learn how to create these alerts?
Is there a job title for what I was doing? Or would it be worthwhile to learn to code, since I now have experience with how CI/CD works?
https://redd.it/1p8f7wb
@r_devops
Which is the most popular CI/CD tool used nowadays?
So, there are many CI/CD tools: Jenkins, Azure Pipelines, GitHub Actions, etc. Which one is most widely used in the current market? I'd guess GitHub Actions, based on its ease of use and flexibility. Any other tools apart from these that you can mention here? Thank you.
https://redd.it/1p8glxi
@r_devops
Intel SGX alternative needed since they're killing attestation service
Earlier this year Intel announced it is killing SGX IAS, the attestation service for its older trusted-execution tech. The April 2025 deadline sounded far off, but migrations always take forever. Anyone who built on SGX started scrambling to migrate to Intel TDX or AMD SEV. The problem is that these aren't drop-in replacements: the APIs are different and the security models work differently.
I recently dug into it after seeing an old post complaining about it. Back then companies were posting about this everywhere; lots of production workloads were still on SGX because it had been the most mature option for years, and suddenly everyone was rebuilding. The silver lining is that the newer stuff is actually better, with improved performance and fewer memory restrictions, so at least it wasn't just another migration for the sake of it. It's still annoying, though, when you build critical infrastructure on vendor hardware and they discontinue it. Makes you think twice about single-vendor dependence.
Now that some time has passed, I'm wondering how widespread the impact was: how many production systems using SGX attestation still need to migrate?
https://redd.it/1p8ilv5
@r_devops