SRE SE Interview at Google - Help Appreciated
I have a phone screen in a few weeks, and it's a practical coding/scripting round. Has anyone here interviewed for this role?
The prep guide does mention it's not algorithmically complex, but that I'll need familiarity with basic DSA like hash tables, trees, recursion, and linked lists.
If anyone has interviewed for SE SRE, can you share how you prepped for this round? Is there a problem set I can look at online to practice such questions? I've searched, but there's very limited info for the SE role.
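For a concrete picture, here is the kind of small, practical exercise people often drill for "hash table plus file handling" rounds. This is just an illustrative sketch, not something from Google's prep guide, and the log format is assumed:

```python
# Hypothetical practice task: given a web server access log, print the top N
# client IPs by request count. A Counter (hash table) does the bookkeeping.
from collections import Counter

def top_clients(log_path: str, n: int = 5):
    counts = Counter()
    with open(log_path) as f:
        for line in f:
            fields = line.split()
            if fields:                 # first whitespace-separated field assumed to be the client IP
                counts[fields[0]] += 1
    return counts.most_common(n)

if __name__ == "__main__":
    for ip, hits in top_clients("access.log"):
        print(f"{ip}\t{hits}")
```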
https://redd.it/1oqfod1
@r_devops
Experimenting with AI for sprint management?
Has anyone tried using AI tools to help with sprint planning, retrospectives, or other agile ceremonies? Most tools just seem like glorified assistants, but I'm wondering if anyone's found something actually useful.
https://redd.it/1oqdvqy
@r_devops
Struggling to connect AWS App Runner to RDS in multi-environment CDK setup (dev/prod isolation, VPC connector, Parameter Store confusion)
I’m trying to build a clean AWS setup with FastAPI on App Runner and Postgres on RDS, both provisioned via CDK.
It all works locally, and even deploys fine to App Runner.
I’ve got:
* `CoolStartupInfra-dev` → RDS + VPC
* `CoolStartupInfra-prod` → RDS + VPC
* `coolstartup-api-core-dev` and `coolstartup-api-core-prod` App Runner services
I get that it needs a VPC connector, but I’m confused about how this should work long-term with multiple environments.
What’s the right pattern here?
Should App Runner import the VPC and DB directly from the core stack, or read everything from Parameter Store?
Do I make a connector per environment?
And how do people normally guarantee “dev talks only to dev DB” in practice?
Would really appreciate it if someone could share how they structure this properly - I feel like I'm missing the mental model for how "App Runner ↔ RDS" isolation is meant to fit together.
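Not an authoritative answer, but here is a minimal Python CDK sketch of one common pattern: each environment gets its own core stack (VPC + RDS) and its own App Runner VPC connector built from that VPC, so "dev talks only to dev DB" falls out of the network layout rather than configuration discipline. The stack names follow the post; the constructs, parameter path, and engine version are assumptions for illustration:

```python
from aws_cdk import Stack, aws_apprunner as apprunner, aws_ec2 as ec2, aws_rds as rds, aws_ssm as ssm
from constructs import Construct

class CoreStack(Stack):
    """CoolStartupInfra-<env>: one VPC and one Postgres instance per environment."""
    def __init__(self, scope: Construct, env_name: str, **kwargs):
        super().__init__(scope, f"CoolStartupInfra-{env_name}", **kwargs)
        self.vpc = ec2.Vpc(self, "Vpc", max_azs=2)
        self.db = rds.DatabaseInstance(
            self, "Db",
            engine=rds.DatabaseInstanceEngine.postgres(
                version=rds.PostgresEngineVersion.VER_16),
            vpc=self.vpc,
            vpc_subnets=ec2.SubnetSelection(
                subnet_type=ec2.SubnetType.PRIVATE_WITH_EGRESS),
        )
        # Optional: publish the endpoint so the app can read it at runtime;
        # Parameter Store is a convenience here, not the wiring mechanism.
        ssm.StringParameter(self, "DbEndpoint",
            parameter_name=f"/coolstartup/{env_name}/db-endpoint",
            string_value=self.db.db_instance_endpoint_address)

class ApiStack(Stack):
    """coolstartup-api-core-<env>: one VPC connector per environment."""
    def __init__(self, scope: Construct, env_name: str, core: CoreStack, **kwargs):
        super().__init__(scope, f"coolstartup-api-core-{env_name}", **kwargs)
        # The connector can only reach subnets of this environment's VPC,
        # which is what enforces the dev/prod isolation.
        sg = ec2.SecurityGroup(self, "ApiSg", vpc=core.vpc)
        core.db.connections.allow_default_port_from(sg)
        apprunner.CfnVpcConnector(self, "VpcConnector",
            vpc_connector_name=f"coolstartup-{env_name}",
            subnets=[s.subnet_id for s in core.vpc.private_subnets],
            security_groups=[sg.security_group_id])

# Usage sketch:
#   app = App()
#   dev_core = CoreStack(app, "dev");   ApiStack(app, "dev", dev_core)
#   prod_core = CoreStack(app, "prod"); ApiStack(app, "prod", prod_core)
```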
https://redd.it/1oqjrq7
@r_devops
Email Header Injection: Turning Contact Forms into Spam Cannons 📧
https://instatunnel.my/blog/email-header-injection-turning-contact-forms-into-spam-cannons
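Quick illustration of the bug class (my own sketch, not from the linked article): if a form field is dropped straight into a mail header, a value containing CR/LF can smuggle extra headers such as Bcc:, so rejecting newlines before building the message closes that hole.

```python
def safe_header(value: str) -> str:
    # Refuse CR/LF so user input can't inject additional mail headers.
    if "\r" in value or "\n" in value:
        raise ValueError("newline characters are not allowed in header values")
    return value

safe_header("Alice <alice@example.com>")            # fine
safe_header("Alice\r\nBcc: victims@example.com")    # raises ValueError
```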
https://redd.it/1oqlody
@r_devops
Azure pipeline limitations: DockerCompose@1
Folks, I was trying to build an image for a specific service in my compose file, but I'm unable to do it with the pipeline. I found only the following in the Azure docs - why does it exist only for run, and not for build?
serviceName - Service Name string. Required when action = Run a specific service.
https://redd.it/1oqniv9
@r_devops
How do you track if code quality is actually improving?
We’ve been fixing a lot of tech debt but it’s hard to tell if things are getting better. We use a few linters, but there’s no clear trend line or score.
Would love a way to visualize progress over time, not just see today’s issues.
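One low-tech option, sketched below on the assumption that your linter prints one issue per line (flake8 here as a stand-in): count issues on each run and append a dated row to a CSV, which gives you a trend line any spreadsheet or dashboard can plot.

```python
# Append today's linter issue count to a CSV so the trend is visible over time.
import csv, subprocess
from datetime import date

def count_issues(paths=("src",)):
    # flake8 prints one finding per line; any linter with that behaviour works.
    result = subprocess.run(["flake8", *paths], capture_output=True, text=True)
    return len([line for line in result.stdout.splitlines() if line.strip()])

def record(csv_path="lint_trend.csv"):
    with open(csv_path, "a", newline="") as f:
        csv.writer(f).writerow([date.today().isoformat(), count_issues()])

if __name__ == "__main__":
    record()   # run from CI on main, e.g. nightly, to build the history
```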
https://redd.it/1oqqcsv
@r_devops
How to Post CodeQL Analysis Results (High/Critical Counts + Details) as a Comment on a GitHub Pull Request?
I'm working with a custom-built CodeQL GitHub Actions workflow, and I want to automatically push the analysis results directly into a comment on the pull request. Specifically, I'd like to include things like the count of high and critical severity issues, along with some details about them (e.g., descriptions, locations, etc.).
I need them visible in the PR for easier review. Has anyone done something similar? Maybe by parsing the SARIF file and using the GitHub API to post a comment?
Any step-by-step guidance, workflow YAML snippets, or recommended actions/tools would be awesome. Thanks in advance!
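Roughly, yes: parse the SARIF, bucket results by the rule's security-severity score, and POST a comment through the REST API. The sketch below is unverified and assumes the workflow exposes the SARIF file path plus GITHUB_TOKEN, GITHUB_REPOSITORY, and a PR_NUMBER variable; adjust to your setup.

```python
# Summarize CodeQL SARIF results and post them as a PR comment.
import json, os, urllib.request

def load_findings(sarif_path):
    with open(sarif_path) as f:
        sarif = json.load(f)
    findings = []
    for run in sarif.get("runs", []):
        rules = {r.get("id"): r for r in run.get("tool", {}).get("driver", {}).get("rules", [])}
        for res in run.get("results", []):
            rule = rules.get(res.get("ruleId"), {})
            score = float(rule.get("properties", {}).get("security-severity", 0))
            sev = "critical" if score >= 9 else "high" if score >= 7 else "other"
            locs = res.get("locations") or [{}]
            uri = locs[0].get("physicalLocation", {}).get("artifactLocation", {}).get("uri", "unknown")
            findings.append((sev, res.get("ruleId"), uri))
    return findings

def post_comment(body):
    repo = os.environ["GITHUB_REPOSITORY"]          # e.g. "org/repo", set by Actions
    pr = os.environ["PR_NUMBER"]                    # assumed to be passed in by the workflow
    req = urllib.request.Request(
        f"https://api.github.com/repos/{repo}/issues/{pr}/comments",
        data=json.dumps({"body": body}).encode(),
        headers={"Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
                 "Accept": "application/vnd.github+json"},
        method="POST")
    urllib.request.urlopen(req)

if __name__ == "__main__":
    findings = load_findings("results.sarif")
    crit = sum(1 for sev, *_ in findings if sev == "critical")
    high = sum(1 for sev, *_ in findings if sev == "high")
    lines = [f"**CodeQL**: {crit} critical, {high} high"]
    lines += [f"- `{rule}` in `{uri}` ({sev})" for sev, rule, uri in findings if sev != "other"]
    post_comment("\n".join(lines))
```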
https://redd.it/1oqshfj
@r_devops
Machine learning research internship
For my career and for future internships as a CS/math student at a top 20 university, how competitive is a machine learning research internship at a good European university? I have an opportunity to spend 3 months at this university (on a different continent) and work on implementing cutting-edge information retrieval and NLP models/methods. Would this experience make me competitive for future internships, or is it pretty standard? I am just trying to get the gist of its significance, seeing that I'll be spending a substantial amount of time there next year.
https://redd.it/1oqtbmo
@r_devops
Do you use containers for local development or still stick to VMs?
I've been moving my workflow toward Docker and Podman for local dev, and it's been great: lightweight, fast, and easy to replicate environments.
But I've seen people say VMs are still better for full OS-level isolation and reproducibility.
If you're doing Linux development, what's your current setup: containers, VMs, or bare metal?
https://redd.it/1oqw1cq
@r_devops
Cutting down on TAC tickets
Looking for opinions on the topic of TAC support.
Having been on both sides of the issue (both as tech support and as an admin), I'm well aware of how slow and sometimes unprofessional it can get.
Not really because TAC or admins aren't knowledgeable - there just isn't enough time to be, given the repetitiveness and the constantly growing amount of information that has to be relayed to customers/users.
Sprinkle in the fact that even internally you don't have enough info, or it's structured in a way that makes you question how this has all been holding up in the first place.
The average engineer gets 10+ calls per day, plus a number of tickets roughly proportionate to the number of calls. Some of these calls are predictably easy; some can take a crazy amount of time to figure out.
And sometimes you have to lab the setup and hunt for similar issues while another customer is waiting for you to reply. It literally takes days because simple tasks just keep repeating.
So I started looking for a way to cut down on this repetitive bureaucratic idiocy and speed up resolving TAC tickets using AI.
For two reasons:
1. In a critical scenario it's almost impossible to get the right guy on the phone. I remember getting a call once from some sort of school or other educational facility - their certificate authentication was failing for everyone and the system administrator was on vacation. As L1, I was hella lucky to be familiar with the setup (MS CA -> FortiAuth as sub-CA -> 802.1X with certs).
Imagine some L1 who just got out of uni getting on a call like that. No amount of theoretical knowledge will prepare them for the pressure of 10 people staring at their avatar in GoToMeeting, being at a complete loss and thinking you are their only chance to make it work. That leads us to reason 2.
2. It would free up time for engineers to actually learn the product. An enormous number of best practices depend on some person just knowing a certain combination of toggles that isn't in the docs.
That would free up their time to get to know the product and be actual tech support. I might be missing a certain angle here, so please feel free to critique.
That's how I came to the question - how can AI solve all that for folks in a similar context?
Not "do stuff for me and we'll see" - use it for actual assistance: ask it questions, have it help inspect devices and configure them. So the human would still be the one making decisions, but AI would do all the grunt work.
I say this because I refuse to believe that simple log analysis should take days to complete.
So what's your experience? How long does it take on average to deal with TAC? Is it different per product/vendor?
Share your thoughts, let’s find a consensus!
https://redd.it/1or13mz
@r_devops
I built sbsh to keep my team’s terminal environments reproducible across Kubernetes, Terraform, and CI setups
I have been working on a small open-source tool called sbsh that makes terminal sessions persistent, reproducible, and shareable.
Repo: github.com/eminwux/sbsh
It started from a simple pain point: every engineer on a team ends up with slightly different local setups, environment variables, and shell aliases for things like Kubernetes clusters or Terraform workspaces.
With sbsh, you can define those environments declaratively in YAML, including variables, working directory, hooks, prompt color, and safeguards.
Then anyone can run the same terminal session safely and identically. No more "works on my laptop" when running terraform plan or kubectl apply.
Here is an example for Kubernetes: docs/profiles/k8s-default.yaml
apiVersion: sbsh/v1beta1
kind: TerminalProfile
metadata:
  name: k8s-default
spec:
  runTarget: local
  restartPolicy: restart-on-error
  shell:
    cwd: "~/projects"
    cmd: /bin/bash
    cmdArgs:
    env:
      KUBECONF: "$HOME/.kube/config"
      KUBECONTEXT: default
      KUBENAMESPACE: default
      HISTSIZE: "5000"
    prompt: '"[\e1;31m\sbsh($SBSHTERMPROFILE/$SBSHTERMID) [\e1;32m\\u@\h[\e0m\:\w\$ "'
  stages:
    onInit:
      - script: kubectl config use-context $KUBECONTEXT
      - script: kubectl config get-contexts
    postAttach:
      - script: kubectl get ns
      - script: kubectl -n $KUBENAMESPACE get pods
Here's a brief demo:
sbsh - kubernetes profile demo
You can also define profiles for Terraform, Docker, or even attach directly to Kubernetes pods.
Terminal sessions can be detached, reattached, listed, and logged, similar to tmux but focused on reproducible DevOps environments instead of window layouts.
Profile examples: docs/profiles
I would really appreciate any feedback, especially from people who manage multiple clusters or Terraform workspaces.
I am genuinely looking for feedback from people who deal with this kind of setup, and any thoughts or suggestions would be very much appreciated.
https://redd.it/1or36aw
@r_devops
If teams moved to “apps not VMs” for ML dev, what might actually change for ops?
Exploring a potential shift in how ML development environments are managed. Instead of giving each engineer a full VM or desktop, the idea is that every GUI tool (Jupyter, VS Code, labeling apps) would run as its own container and stream directly to the browser. No desktops, no VDI layer. Compute would be pooled, golden images would define standard environments, and the model would stay cloud-agnostic across Kubernetes clusters.
A few things I am trying to anticipate:
* Would environment drift and “works on my machine” actually decrease once each tool runs in isolation?
* Where might operational toil move next - image lifecycle management, stateful storage, or session orchestration?
* What policies would make sense to control costs, such as idle timeouts, per-user quotas, or scheduled teardown of inactive sessions?
* What metrics would be worth instrumenting on day one - cold start latency, cost per active user, GPU-hour distribution, or utilization of pooled nodes?
* If this model scales, what parts of CI/CD or access control might need to evolve?
Not pitching anything. Just thinking ahead about how this kind of setup could reshape the DevOps workflow in real teams.
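As a concrete example of the cost-policy question above, here is roughly what a "scheduled teardown of inactive sessions" job could look like if it ran as a CronJob. The namespace, label, and last-activity annotation are invented for illustration, and the annotation is assumed to hold an ISO-8601 timestamp with an offset:

```python
# Delete ML session pods whose last recorded activity is older than the cutoff.
from datetime import datetime, timedelta, timezone
from kubernetes import client, config

NAMESPACE = "ml-sessions"
IDLE_CUTOFF = timedelta(hours=2)

def reap_idle_sessions():
    config.load_incluster_config()          # or config.load_kube_config() when run locally
    v1 = client.CoreV1Api()
    now = datetime.now(timezone.utc)
    pods = v1.list_namespaced_pod(NAMESPACE, label_selector="app=ide-session")
    for pod in pods.items:
        last = (pod.metadata.annotations or {}).get("sessions/last-activity")
        if last and now - datetime.fromisoformat(last) > IDLE_CUTOFF:
            v1.delete_namespaced_pod(pod.metadata.name, NAMESPACE)

if __name__ == "__main__":
    reap_idle_sessions()
```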
https://redd.it/1or6lal
@r_devops
Retraining prompt injection classifiers for every new jailbreak is impossible
Our team is burning out retraining models every time a new jailbreak drops. We went from monthly retrains to weekly, now it's almost daily with all the creative bypasses hitting production. The eval pipeline alone takes 6 hours, then there's data labeling, hyperparameter tuning, and deployment testing.
Anyone found a better approach? We've tried ensemble methods and rule-based fallbacks but coverage gaps keep appearing. Thinking about switching to more dynamic detection but worried about latency.
https://redd.it/1orc5kb
@r_devops
A playlist on Docker which will make you skilled enough to make your own container
I have created a Docker internals playlist of 3 videos.
In the first video you will learn core concepts: the internals of Docker, binaries, filesystems, what's inside an image, what's not inside an image, how an image is executed in a separate environment on a host, and Linux namespaces and cgroups.
In the second one I have provided a walkthrough where you can see and learn how to implement your own custom container from scratch; a Git link for the code is also in the description.
In the third and last video there are answers to some questions, plus some topics like mounts that were skipped in video 1 to keep it from getting too complex for newcomers.
After this learning experience you will be able to understand and fix production-level issues by thinking in first principles, because you will know Docker is just Linux arranged to run separate binaries.
I was also able to understand and develop an interest in Docker internals after handling and deep-diving into many production issues in Kubernetes clusters. For a good backend engineer, these learnings are a must.
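To give a taste of what "Docker is just Linux" means in practice, here is a minimal, unaffiliated sketch (not the code from the playlist) that drops a shell into fresh UTS, PID, and mount namespaces, which is the core trick containers build on. It needs root, and a real container would additionally pivot into its own root filesystem, remount /proc, and set up cgroups:

```python
import ctypes, os

# Clone flags from <sched.h>
CLONE_NEWNS  = 0x00020000   # new mount namespace
CLONE_NEWUTS = 0x04000000   # new hostname (UTS) namespace
CLONE_NEWPID = 0x20000000   # new PID namespace

libc = ctypes.CDLL("libc.so.6", use_errno=True)

def contain():
    if libc.unshare(CLONE_NEWUTS | CLONE_NEWPID | CLONE_NEWNS) != 0:
        err = ctypes.get_errno()
        raise OSError(err, os.strerror(err))
    pid = os.fork()                  # the child becomes PID 1 of the new PID namespace
    if pid == 0:
        os.system("hostname mini-container")   # only visible inside the new UTS namespace
        os.execvp("/bin/sh", ["/bin/sh"])
    os.waitpid(pid, 0)

if __name__ == "__main__":
    contain()                        # run as root: sudo python3 mini_container.py
```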
Docker INTERNALS
https://www.youtube.com/playlist?list=PLyAwYymvxZNhuiZ7F_BCjZbWvmDBtVGXa
https://redd.it/1orelme
@r_devops
Unicode Normalization Attacks: When "admin" ≠ "admin" 🔤
https://instatunnel.my/blog/unicode-normalization-attacks-when-admin-admin
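The one-liner version of the problem (my own example, not from the article): the fullwidth string below renders like "admin" but compares unequal until it is normalized, which is exactly how lookalike identifiers slip past naive string checks.

```python
import unicodedata

lookalike = "ａｄｍｉｎ"   # fullwidth Latin letters, visually similar to "admin"
print(lookalike == "admin")                                    # False
print(unicodedata.normalize("NFKC", lookalike) == "admin")     # True
```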
https://redd.it/1orfljl
@r_devops
OpenSource work recommendations to get into devops?
Have 5YOE mostly as backend developer, with 3 years IAM team at big company (interviewers tend to ask mostly about this).
Recently got AWS Solutions Architect Professional which was super hard, though IAM was quite a bit easier since I've seen quite a few of the architectures while studying that portion of the exam. Before I got the SAP, I had SAA and many interviews I got were CI/CD roles which I bombed. When I got the SAP, I got a handful of interviews right away, none of which were related to AWS.
I don't really want to get the AWS DevOps Pro cert, as I heard it uses CloudFormation, which most companies don't use. I also don't want to have to renew another cert in 3 years (SAP was the only one I wanted).
Anyway, I'm currently doing some open-source work on aws-terraform-modules to get familiar with IaC. Surprisingly, Terraform seems super simple. Maybe the act of deploying resources with no errors is the key.
So basically, am I on the right track? Should I learn Ansible? Swagger? etc.
Did a few personal projects on Github, but I doubt that will wow employers unless I grind out something original.
Here's my resume btw: https://imgur.com/a/Iy2QNv6
https://redd.it/1org8l4
@r_devops
Offline postman alternative without any account.
Postman was great, with rich features like API flows, until it went cloud-only, which is a deal breaker for me.
Since then I've been looking for an offline-only API client with complex testing support, like API flows through a drag-and-drop UI, and scripting.
I found HawkClient, which works offline without any account and supports API flows through a drag-and-drop UI, scripting, and a collection runner.
Curious to know: has anyone else tried HawkClient, or any other tool that meets these requirements?
https://redd.it/1ori0ld
@r_devops
Doubts of mine?
I face problems while learning, like:
"Where should I learn from?"
"How much do I have to learn?"
etc.
All these questions come to my mind while learning.
If you face these problems, let me know how you handle them, with an example.
https://redd.it/1ormrzd
@r_devops
Token Agent – Config-driven token fetcher/rotator
Hello!
I'm working on a simple Token Agent service designed to manage token fetching, caching/invalidation, and propagation via a simple YAML config.
> metadata API → token exchange service → http | file | uds
It was originally designed for cloud VMs.
It can retrieve data from metadata APIs or internal HTTP services, and then serve tokens via files, sockets, or HTTP endpoints.
Resilience and Observability included.
Generic use cases:
- Keep workload tokens in sync without custom scripts
- Rotate tokens automatically with retry/backoff (see the sketch below)
- Define everything declaratively (no hardcoded logic)
Use cases for me:
- Passing tokens to vector.dev via files
- Token source for other services on a VM via HTTP
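For readers who want the gist without opening the repo, here is a minimal sketch of the loop the tool automates. This is not the project's code: the URL, refresh interval, and response shape are placeholders. It fetches a token with exponential backoff, writes it atomically to a file for consumers like vector.dev, and repeats before expiry.

```python
import json, os, tempfile, time, urllib.request

TOKEN_URL = "http://169.254.169.254/latest/api/token-exchange"   # placeholder endpoint
OUT_PATH = "/run/tokens/service.token"
REFRESH_SECONDS = 300

def fetch_token():
    delay = 1
    while True:
        try:
            with urllib.request.urlopen(TOKEN_URL, timeout=5) as resp:
                return json.load(resp)["token"]      # response shape is assumed
        except Exception:
            time.sleep(delay)
            delay = min(delay * 2, 60)               # exponential backoff, capped at 60s

def write_atomically(path, data):
    # Write to a temp file then rename, so readers never see a partial token.
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path))
    with os.fdopen(fd, "w") as f:
        f.write(data)
    os.replace(tmp, path)

if __name__ == "__main__":
    while True:
        write_atomically(OUT_PATH, fetch_token())
        time.sleep(REFRESH_SECONDS)
```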
Repo: github.com/AleksandrNi/token-agent
Would love feedback from folks managing service credentials or secure automation.
Thanks!
https://redd.it/1ormmne
@r_devops
Do companies hire DevOps freshers?
Hey everyone
I’ve been learning DevOps tools like Docker, CI/CD, Kubernetes, Terraform, and cloud basics. I also have some experience with backend development using Node.js.
But I’m confused — do companies actually hire DevOps freshers, or do I need to first work as a backend developer (or some other role) and then switch to DevOps after getting experience?
If anyone here started their career directly in DevOps, I’d love to hear how you did it — was it through internships, projects, certifications, or something else?
Any advice would be really helpful
https://redd.it/1oroiqp
@r_devops
Kubernetes operator for declarative IDP management
For the past year, I've been developing a Kubernetes operator for the Kanidm identity provider.
From the release notes:
Kaniop is now available as an official release! After extensive beta cycles, this marks our first supported version for real-world use.
Key capabilities include:
Identity Resources: Declaratively manage persons, groups, OAuth2 clients, and service accounts
GitOps Ready: Full integration with Git-based workflows for infrastructure-as-code
Kubernetes Native: Built using Custom Resources and standard Kubernetes patterns
Production Ready: Comprehensive testing, monitoring, and observability features
If this sounds interesting to you, I’d really appreciate your thoughts or feedback — and contributions are always welcome.
Links:
repository: https://github.com/pando85/kaniop/
website: https://pando85.github.io/
https://redd.it/1orq23c
@r_devops