I am looking to answer two questions. First, how do I explain to the team that their current process is not up to standard? Many of them do not come from a technical background, have been updating these scripts this way for years, and are quite comfortable in their workflow; I experienced quite a bit of pushback trying to do this at my last organization. Is implementing a DevOps process even worth it in this case? Second, does my proposed process seem sound, and how would you address the concerns I raised in points 4 and 5 above?
Some additional info: If it would make the process cleaner then I believe I could convince my manager to move to scheduled releases. Also, I am a developer, so anything that doesn't just work out of the box, I can build, but I want to find the cleanest solution possible.
Thank you for taking the time to read!
https://redd.it/1pjd1mh
@r_devops
Another Need feedback on resume post :))
Resume
It's been really hard landing a job, even for "Junior/Entry DevOps Engineer" roles. I don't know if it's because my resume screams red flags, or if the market is just tough in general.
1. Yes, I do have a two-year work gap from graduation to now (traveling, haha). I am still trying to stay hands-on, though, through curated DevOps roadmaps and end-to-end projects.
2. Does my work experience section come off as "too advanced" for someone who only worked as a DevOps Engineer Intern?
I just feel like the whole internship might've been a waste now and that it left me in kind of a "grey" area. Maybe I should start off as a sysadmin/IT support guy? But even then, those roles are still hard to land, lol.
https://redd.it/1pjfojr
@r_devops
Built a Visual Docker Compose Editor - Looking for Feedback!
Hey
I've been wrestling with Docker Compose YAML files for way too long, so I built something to make it easier: a visual editor that lets you build and manage multi-container Docker applications without the YAML headaches.
The Problem
We've all been there:
- Forgetting the exact YAML syntax
- Spending hours debugging indentation issues
- Copy-pasting configs and hoping they work
- Managing environment variables, volumes, and ports manually
The Solution
A visual, form-based editor:
- ✅ No YAML knowledge required
- ✅ See your YAML update in real time as you type
- ✅ Upload your docker-compose.yml and edit it visually
- ✅ Download your configuration as a ready-to-use YAML file
- ✅ No sign-up required to try the editor
What I've Built (MVP)
Core Features:
- Visual form-based configuration
- Service templates (Nginx, PostgreSQL, Redis)
- Environment variables management
- Volume mapping
- Port configuration
- Health checks
- Resource limits (CPU/Memory)
- Service dependencies
- Multi-service support
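For reference, a hand-written Compose file covering roughly this feature set looks something like the sketch below (service names, images, and values are illustrative, not output from the editor):

```yaml
services:
  web:
    image: nginx:1.27
    ports:
      - "8080:80"                      # port configuration
    environment:
      - APP_ENV=production             # environment variables
    volumes:
      - ./html:/usr/share/nginx/html:ro  # volume mapping
    depends_on:
      db:
        condition: service_healthy    # service dependency gated on health
    deploy:
      resources:
        limits:
          cpus: "0.5"                  # resource limits
          memory: 256M
  db:
    image: postgres:16
    environment:
      - POSTGRES_PASSWORD=changeme
    volumes:
      - pgdata:/var/lib/postgresql/data
    healthcheck:                       # health check
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 10s
      timeout: 5s
      retries: 5
volumes:
  pgdata:
```

Getting the `depends_on`/`condition`/`healthcheck` trio right by hand is exactly the kind of thing a form-based editor can validate for you.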
Try it here: https://docker-compose-manager.vercel.app/
Why I'm Sharing This
This is an MVP and I'm looking for honest feedback from the community:
- Does this solve a real problem for you?
- What features are missing?
- What would make you actually use this?
- Any bugs or UX issues?
I've set up a quick waitlist for early access to future features (multi-environment management, team collaboration, etc.), but the editor is 100% free and functional right now - no sign-up needed.
Tech Stack
- Angular 18
- Firebase (Firestore + Analytics)
- EmailJS (for contact form)
- Deployed on Vercel
What's Next?
Based on your feedback, I'm planning:
- Multi-service editing in one view
- Environment-specific configurations
- Team collaboration features
- Integration with Docker Hub
- More service templates
Feedback: Drop a comment or DM me!
TL;DR: Built a visual Docker Compose editor because YAML is painful. It's free, works now, and I'd love your feedback! 🚀
https://redd.it/1pjgrjx
@r_devops
Argocd upgrade strategy
Hello everyone,
I’m looking to upgrade an existing Argo CD installation from v1.8.x to the latest stable release, and I’d love to hear from anyone who has gone through a similar jump.
Given how old our version is, I'm assuming a straight upgrade probably isn't safe, so I'm currently planning an incremental upgrade.
A few questions I have:
1) Any major breaking changes or gotchas I should be aware of?
2) Any other upgrade strategies you'd recommend?
3) Anything related to CRD updates, repo-server changes, RBAC, or controller behavior that I should watch out for?
4) Any tips for minimizing downtime?
If you have links, guides, or personal notes from your migration, I’d really appreciate it. Thanks!
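For the incremental path, Argo CD generally supports upgrading one minor version at a time, so it can help to script the plan explicitly before touching anything. The sketch below only prints the intended `kubectl` commands for review; the version list and manifest URLs are illustrative and should be checked against the Argo CD release notes for your actual upgrade path:

```shell
#!/bin/sh
# Sketch: print an incremental Argo CD upgrade plan, one minor at a time.
# Pin the latest patch of each minor; continue the list up to your target.
STEPS="v1.8.7 v2.0.5 v2.1.16 v2.2.16"

for v in $STEPS; do
  # Dry-run: emit the command instead of running it, so the plan can be reviewed.
  echo "kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/$v/manifests/install.yaml"
done
```

Between each step you would verify the app controller and repo-server come back healthy before moving on.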
https://redd.it/1pji3tv
@r_devops
Observability in a box
I always hated how devs don't have access to a production-like stack at home, so with help from my good friend Copilot I coded OIB: Observability in a Box.
https://github.com/matijazezelj/oib
With a single `make install` you'll get Grafana, OpenTelemetry, Loki, Prometheus, node exporter, Alloy, and more, all interconnected, with exposed OpenTelemetry endpoints, Grafana dashboards, and examples of how to wire them into your own setup. Someone may find it useful; rocks may be thrown my way, but hey, it helped me :)
If you have any ideas, PRs are always welcome, or just steal from it :)
https://redd.it/1pjgqw4
@r_devops
What’s the best way to practice DevOps tools? I built something for beginners + need your thoughts
A lot of people entering DevOps keep asking the same question:
“Where can I practice CI/CD, Kubernetes, Terraform, etc. without paying for a bootcamp?”
Instead of repeating answers, I ended up building a small learning hub that has:
Free DevOps tutorial blogs
Hands-on practice challenges
Simple explanations of complex tools
Mini projects for beginners
If any of you are willing to take a look and tell me what’s good/bad/missing, I’d appreciate it:
**https://thedevopsworld.com**
Not selling anything — just trying to make a genuinely useful practice resource for newcomers to our field.
It will always remain free, with no intention of making money.
Would love your suggestions on features, topics, or improvements!
https://redd.it/1pjn6aj
@r_devops
Backblaze or AWS S3 or Google Cloud Storage for a photo hosting website?
I have a client who wants to build a photography hosting website. He has tons of images (~20MB/image), and all those photos can be viewed around the world. What is the best cloud option for storing photos?
thx
https://redd.it/1pjn652
@r_devops
Anyone tried the Debug Mode for coding agents? Does it change anything?
I'm not sure if I can mention the editor's name here. Anyway, they've released a new feature called Debug Mode.
>Coding agents are great at lots of things, but some bugs consistently stump them. That's why we're introducing Debug Mode, an entirely new agent loop built around runtime information and human verification.
>How it works
>1. Describe the bug - Select Debug Mode and describe the issue. The agent generates hypotheses and adds logging.
>2. Reproduce the bug - Trigger the bug while the agent collects runtime data (variable states, execution paths, timing).
>3. Verify the fix - Test the proposed fix. If it works, the agent removes instrumentation. If not, it refines and tries again.
What do you all think about how useful this feature is in actual debugging processes?
I think debugging is definitely one of the biggest pain points when using coding agents. This approach stabilizes what was already being done in the agent loop.
But when I'm debugging, I don't want to describe so much context, and sometimes bugs are hard to reproduce. So, I previously created an editor extension that can continuously access runtime context, which means I don't have to make the agent waste tokens by adding logs—just send the context directly to the agent to fix the bug.
I guess they won't implement something like that, since it would save too much on quotas, lol.
https://redd.it/1pjo2zz
@r_devops
Inherited a legacy project with zero API docs any fast way to map all endpoints?
I just inherited a 5-year-old legacy project and found out… there’s zero API documentation.
No Swagger/OpenAPI, no Postman collections, and the frontend is full of hardcoded URLs.
Manually tracing every endpoint is possible, but realistically it would take days.
Before I spend the whole week digging through the codebase, I wanted to ask:
Is there a fast, reliable way to generate API documentation from an existing system?
I’ve seen people mention packet-capture workflows (mitmproxy, Fiddler, Apidog’s capture mode, etc.) where you run the app and let the tool record all HTTP requests, then turn them into structured API docs.
Has anyone here tried this on a legacy service?
Did it help, or did it create more noise than value?
I’d love to hear how DevOps/infra teams handle undocumented backend systems in the real world.
https://redd.it/1pjqnqi
@r_devops
Protecting your own machine
Hi all. I've been promoted (if that's the proper word) to DevOps after 20+ years of being a developer, so I'm learning a lot of stuff on the fly...
One of the things I wouldn't like to learn the hard way is how to protect your own machine (the one holding the access keys). My passwords are in a password manager, my SSH keys are passphrase-protected, I pull the repos in a virtual machine... What else can and should I do? I'm really afraid that some of these junior devs will download some malicious library and fuck everything up.
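On top of passphrase-protected keys, one low-effort step is tightening the SSH client config itself, so a compromised process can't silently reuse your agent. A sketch (option names per OpenSSH's ssh_config; adjust to your setup):

```
# ~/.ssh/config (sketch)
Host *
    AddKeysToAgent confirm   # agent prompts before each key use
    IdentitiesOnly yes       # only offer keys explicitly configured per host
    HashKnownHosts yes       # known_hosts doesn't leak hostnames if stolen
```

`AddKeysToAgent confirm` in particular means malware on the box cannot use your loaded keys without a visible prompt.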
https://redd.it/1pjt2bj
@r_devops
Workload on GKE: Migrating from Zonal to Regional Persistent Disk for true Multi-Zone
Hey folks,
I'm running Jenkins on GKE as a StatefulSet with a 2.5TB persistent volume, and I'm trying to achieve true high availability across multiple zones.
**Current Setup:**
* Jenkins StatefulSet in `devops-tools` namespace
* Node pool currently in `us-central1-a`, adding `us-central1-b`
* PVC using `premium-rwo` StorageClass (pd-ssd)
* The underlying PV has `nodeAffinity` locked to `us-central1-a`
**The Problem:** The PersistentVolume is zonal (pinned to us-central1-a), which means my Jenkins pod can only schedule on nodes in that zone. This defeats the purpose of having a multi-zone node pool.
**What I'm Considering:**
Migrate to a regional persistent disk (replicated across us-central1-a and us-central1-b)
**Questions:**
* Has anyone successfully migrated a large PV from zonal to regional on GKE? Any gotchas?
* What's the typical downtime window for creating a snapshot and provisioning a ~2.5TB regional disk?
* Are there better approaches I'm missing for achieving HA with StatefulSets in GKE?
The regional disk approach seems cleanest (snapshot → create regional disk → update PVC), but I'd love to hear from anyone who's done this in production before committing to the migration.
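For the regional-disk path you also need a StorageClass that provisions regional PDs. A sketch, assuming the GCE PD CSI driver and illustrative zone values (verify parameter names against current GKE docs before using):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: regional-ssd
provisioner: pd.csi.storage.gke.io
parameters:
  type: pd-ssd
  replication-type: regional-pd      # replicate across two zones
volumeBindingMode: WaitForFirstConsumer
allowedTopologies:
  - matchLabelExpressions:
      - key: topology.gke.io/zone
        values:
          - us-central1-a
          - us-central1-b
```

The migration would then be: snapshot the zonal disk, restore it into a new PVC bound to this class, and point the StatefulSet's volumeClaimTemplate (or a pre-provisioned PV) at it.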
Thanks!
https://redd.it/1pju1b3
@r_devops
Droplets compromised!!!
Hi everyone,
I’m dealing with a server security issue and wanted to explain what happened to get some opinions.
I had two different DigitalOcean droplets that were both flagged by DigitalOcean for sending DDoS traffic. This means the droplets were compromised and used as part of a botnet attack.
The strange thing is that I had already hardened SSH on both servers:
SSH key authentication only
Password login disabled
Root SSH login disabled
So SSH access should not have been possible.
After investigating inside the server, I found a malware process running as root from the /dev directory, and it kept respawning under different names. I also saw processes running that were checking for cryptomining signatures, which suggests the machine was infected with a mining botnet.
This makes me believe that the attacker didn’t get in through SSH, but instead through my application — I had a Node/Next.js server exposed on port 3000, and it was running as root. So it was probably an application-level vulnerability or an exposed service that got exploited, not an SSH breach.
At this point I’m planning to back up my data, destroy the droplet, and rebuild everything with stricter security (non-root user, close all ports except 22/80/443, Nginx reverse proxy, fail2ban, firewall rules, etc.).
If anyone has seen this type of attack before or has suggestions on how to prevent it in the future, I’d appreciate any insights.
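For the rebuild, the reverse-proxy piece of that plan might look like the sketch below: the Node app binds only to 127.0.0.1 under an unprivileged user, and only Nginx faces the internet. Hostname and port are illustrative:

```nginx
server {
    listen 80;
    server_name example.com;

    location / {
        # The Node/Next.js app listens on localhost only, run as a non-root
        # user, so port 3000 is never reachable from the internet directly.
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```

Combined with a firewall that only allows 22/80/443, this removes the exposed app port that was the likely entry point.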
https://redd.it/1pjujj7
@r_devops
I am currently finishing my college degree in germany. Any advice on future career path?
Next month I will graduate, and I wanted to hear advice on what kind of field is advancing and, preferably, secure and accessible in Germany.
I am a decent student. Not the best. But my biggest interests were in theoretical and math-oriented classes, and I am willing to delve into any direction.
I don't know how much I should fear AI development in terms of job security. But I would like to hear some advice for the future, if somebody has any to give.
https://redd.it/1pjvequ
@r_devops
Do you use Postman to monitor your APIs?
As a developer who recently started using Postman and primarily uses it only to create collections and do some manual testing, I was wondering if it is also helpful for monitoring API health and performance.
View Poll
https://redd.it/1pjxrbr
@r_devops
Rust and Go Malware: Cross-Platform Threats Evading Traditional Defenses 🦀
https://instatunnel.my/blog/rust-and-go-malware-cross-platform-threats-evading-traditional-defenses
https://redd.it/1pjz1f5
@r_devops
We got 30 API access tickets per week, and the platform team became the bottleneck
Three months ago a PM stood up in standup and said, "We can't ship because we're waiting on platform for API keys; this has been 4 days." Everyone went quiet and I felt my face get hot. I checked Jira right after: 28 open tickets just for API access, with an average close time of 4 days. We're 6 people supporting 14 product teams. Every ticket is the same and takes 2 hours spread over 3 days because everyone's in meetings. When I tracked it, that came to 60 hours of team time per week just doing manual API access, which is more than one full person.

I told management we need to hire, but they said fix the process first. I tried some ideas, like Confluence docs, but nobody reads them. Tried a spreadsheet with all the API keys, but it was out of date within 2 weeks. My lead engineer said, "We need self-service, like how Stripe does it." I said yeah, obviously, but we don't have time to build that. He said we don't have time NOT to build it.

So I did my research. ReadMe does docs but not key management. We could have built a custom portal with React, but gave up after realizing we would be building an entire user management system. I looked at API management platforms, but most had insane enterprise pricing; eventually I found something with the developer portal built in. It took a month to completely set up Gravitee, but it was the only thing that wasn't $50k per year and had self-service built in. We rolled it out to 2 teams first as a beta; they found bugs, we fixed them, and rolled it out to everyone. That took about a month and a half, and the tickets dropped to about 5, and they're weird stuff like "my team was deleted, how do I recover it?"

If your platform team is drowning in API access tickets, you have two options: hire more people to do the manual work, or build self-service. We're too small to hire, so we had to build it. It took way longer than I wanted, but it worked.
https://redd.it/1pk17a7
@r_devops
A Production Incident Taught Me the Real Difference Between Git Token Types
We hit a strange issue during deployment last month: our production environment was pulling code using a developer's PAT.
That turned into a rabbit hole about which Git tokens are actually meant for humans vs machines.
Wrote down the learning in case others find it useful.
Link : https://medium.com/stackademic/git-authentication-tokens-explained-personal-access-token-vs-deploy-token-vs-other-tokens-f555e92b3918?sk=27b6dab0ff08fcb102c4215823168d7e
https://redd.it/1pk2nc7
@r_devops
I made a tool that lets AI see your screen and fix technical problems step-by-step
I got tired of fixing the same tech issues for people at work again and again.
Demo: **Creating an S3 bucket on Google Cloud**
So, I built Screen Vision. It’s an open source, browser-based app where you share your screen with an AI, and it gives you step-by-step instructions to solve your problem in real-time. It's like having an expert looking at your screen, giving you the exact steps to navigate software, change settings, or debug issues instantly.
Crucially: no user screen data is stored.
See the code: https://github.com/bullmeza/screen.vision
Would love to hear what you think! What frustrating problem would you use Screen Vision to solve first?
https://redd.it/1pk72bu
@r_devops
Manual SBOM validation is killing my team, what base images are you folks using?
Current vendor requires manual SBOM validation for every image update. My team spends 15+ hours weekly cross-referencing CVE feeds against their bloated Ubuntu derivatives. 200+ packages per image, half we don't even use.
Need something with signed SBOMs that work, daily rebuilds, and minimal attack surface. Tired of vendors promising enterprise security then dumping manual processes on us.
Considered Chainguard, but it became way too expensive for our scale. Heard of Minimus, but my team is sceptical.
What's working for you? Skip the marketing pitch please.
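The manual cross-referencing part is automatable even before switching vendors (scanners like Grype and Trivy do this matching for real). A rough sketch, assuming a CycloneDX-style SBOM and a CVE feed keyed by package name and version (both structures simplified here):

```python
def affected_components(sbom: dict, cve_feed: dict) -> list:
    """Return (name, version, CVE ids) for every SBOM component that
    appears in the feed. cve_feed maps (name, version) -> list of CVEs."""
    hits = []
    for comp in sbom.get("components", []):
        key = (comp["name"], comp["version"])
        cves = cve_feed.get(key)
        if cves:
            hits.append((comp["name"], comp["version"], cves))
    return hits

# Toy data for illustration only.
sbom = {"components": [
    {"name": "openssl", "version": "3.0.2"},
    {"name": "zlib", "version": "1.2.13"},
]}
feed = {("openssl", "3.0.2"): ["CVE-2022-3602", "CVE-2022-3786"]}

print(affected_components(sbom, feed))
# -> [('openssl', '3.0.2', ['CVE-2022-3602', 'CVE-2022-3786'])]
```

Running something like this per image build turns the 15-hour weekly chore into a diff you only look at when it's non-empty; the remaining problem is the bloated base image itself.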
https://redd.it/1pk9rgg
@r_devops
Meta replaces SELinux with eBPF
SELinux was too slow for Meta so they replaced it with an eBPF based sandbox to safely run untrusted code.
bpfjailer handles things legacy MACs struggle with, like signed binary enforcement and deep protocol interception, without waiting for upstream kernel patches and without measurable performance regressions across any workload/host type.
Full presentation here: https://lpc.events/event/19/contributions/2159/attachments/1833/3929/BpfJailer%20LPC%202025.pdf
https://redd.it/1pkhl58
@r_devops
I didn't like that cloud certificate practice exams cost money, so i built some free ones
https://exam-prep-6e334.web.app/
https://redd.it/1pklpt2
@r_devops