Learning Journey Review and Guidance
Hi all,
I'm currently working as an IT support technician, and in my free time I have been learning DevOps. The first 2 personal projects I did were about learning as much as possible while breaking things. The first one was learning to use Docker, Docker Compose, and GitHub Actions to achieve CI/CD. The next one was a minikube cluster with a self-hosted runner that would update the cluster after a push.
Currently, I have been building a k8s cluster from scratch, iteratively and gradually. I've used 3 VMs: one control plane node and 2 worker nodes. I have been attempting to simulate a professional working environment. I have created 3 environments (namespaces in the cluster, branches in GitHub): dev, stage, and prod. The app code and the cluster manifests live in the same repo. I also decided to document every step in a Markdown file. For CI, I have created reusable workflows for both the app and the manifests. The app CI only runs on the dev branch; it lints, tests, builds, containerizes, and pushes the app to Docker Hub with a commit-SHA tag. The manifests CI runs a bunch of pre-deploy checks like yamllint, kube-score, conftest, kustomize build, etc. These reusable workflows are branch agnostic and designed to work on different event types like pull request and push. Once both CI results pass, a tag-bump reusable workflow runs, which bumps the tags in the manifests. Each app calls these workflows from its own CI workflow with the necessary inputs. I'm using Argo CD for CD: once a tag changes, Argo CD automatically deploys the latest change.
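For readers curious what the per-app caller looks like, here is a minimal sketch of a workflow that calls a reusable one (file names and inputs are hypothetical, not taken from the repo):

```yaml
# .github/workflows/app-ci.yml — hypothetical caller workflow
name: app-ci
on:
  pull_request:
  push:
    branches: [dev]
jobs:
  app-ci:
    # assumption: the reusable workflow lives in the same repo and
    # declares an `image-name` input via `workflow_call`
    uses: ./.github/workflows/reusable-app-ci.yml
    with:
      image-name: myapp
    secrets: inherit
```

The `uses:` at job level plus `secrets: inherit` is the standard GitHub Actions pattern for calling a branch-agnostic reusable workflow.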
Next steps: I'm going to version everything in the infra: the packages I've created, the workflows, and the manifests. Then, add monitoring and logging tools. Then, I'm thinking of deploying a full-stack app I've created to learn about using and provisioning persistent volumes in k8s. Next is to migrate everything to the cloud, both AWS and Azure.
Please feel free to check out what I've done so far in detail here.
My questions to lovely peeps here:
Am I following professional standards? Since I haven't worked as a DevOps engineer before, is my attempt at simulating professional environments correct? If not, where can I improve? Also, are my next steps logical, and am I thinking about this the right way?
Thank you very much in advance. Have a great day!
https://redd.it/1ovw75j
@r_devops
NextJSPortfolioSite/technical_guide_and_learning_logs.md at dev · nishanau/NextJSPortfolioSite
Integrating test automation into CI/CD pipelines
How are you integrating automated testing into CI/CD without slowing everything down? We’ve got a decent CI/CD pipeline in place (GitHub Actions + Docker + Kubernetes), but our testing process is still mostly manual.
I’ve tried a few experiments with Selenium and Playwright in CI, but the test runs end up slowing deployments to a crawl. Especially when UI tests kick in. Right now we only run unit tests automatically, everything else gets verified manually before release.
How are teams efficiently automating regression or E2E testing? Basically, how do you maintain speed and reliability without sacrificing deployment frequency?
Parallelization? Test environment orchestration? Separate pipelines for smoke vs. full regression?
What am I missing here?
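Parallelization is usually the cheapest of those wins. The idea behind test sharding (which Playwright exposes natively via its `--shard` flag) can be sketched in a few lines — test names here are hypothetical:

```python
# Round-robin test sharding: each of N parallel CI jobs runs every Nth test,
# so wall-clock time for the E2E stage drops roughly by a factor of N.
def shard(tests, total_shards, shard_index):
    """Return the subset of tests assigned to this shard (0-based index)."""
    return tests[shard_index::total_shards]

suite = [f"test_{i}" for i in range(10)]
print(shard(suite, 4, 0))  # → ['test_0', 'test_4', 'test_8']
```

Combined with a small tagged smoke subset on every push and the full sharded regression run nightly or pre-release, this keeps deployment frequency up without dropping coverage.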
https://redd.it/1ovzhu1
@r_devops
what ai tools do you use for the “boring” parts of coding?
something i’ve been thinking about lately is how much of coding is actually the small, repetitive stuff that nobody talks about. not the big features or cool refactors, but the tiny things that eat time quietly. everyone uses chatgpt or copilot for broad tasks, but i’m curious about the lesser-known tools people use specifically to clean up the boring parts.
i’ve tried a few like aider for quick edits, tabnine for suggestions that don’t feel too heavy, cosine for checking how changes affect different files, and windsurf for small cleanup passes. none of these are headline tools, but they help in those moments where you just want to save ten minutes and move on.
wondering what everyone else uses for that category. which smaller ai tools or utilities help you handle the day-to-day friction points that slow you down but never make it into tutorials or tech talks?
https://redd.it/1ovz1u2
@r_devops
AI SRE Platforms: Because What DevOps Really Needed Was Another Overpriced Black Box
Oh good, another vendor has launched a “fully autonomous AI SRE platform.”
Because nothing says resilience like handing your production stack to a GPU that panics at YAML.
These pitches always read like:
>
I swear, half these platforms are just:

if (anything happens):
    call LLM()
    blame Kubernetes
    send invoice

DevOps: “We’re trying to reduce our cloud bill.”
AI SRE platforms:
“What if… hear me out…we multiplied it?”
Every sneeze in your cluster triggers an LLM:
LLM to read logs, LLM to misinterpret logs, LLM to summarize its own confusion, LLM to generate poetic RCA haikus, LLM to hallucinate remediation steps that reboot prod
You know what isn’t reduced?
Your cloud bill, Your MTTR, Your sanity
“Use your normal SRE/DevOps workflows, add AI nodes where needed, and keep costs predictable.”
Wow.
Brilliant.
How innovative.
Why isn’t this a keynote?
But no, platforms want you to: send them all your logs, your metrics, your runbooks, your hopes, your dreams, your savings, and your firstborn child (optional, but recommended for better support SLAs)
The platform:
>
Me checking logs:
It turned the cluster OFF. Off. Entirely. Like a light switch.
I’m convinced some of these “AI remediation” systems are running:
rm -rf / (trial mode)
Are these AI SRE platforms the future… or just APM vendors reincarnated with a GPU addiction?
Because at this point, I feel like we’re buying:
GPT-powered Nagios
Clippy with root access
A SaaS product that’s basically just /dev/null ingesting tokens
“Intelligent Incident Management” that’s allergic to intelligence
Let me know if any of these platforms have actually helped, or if we should all go back to grepping logs like it’s 2012.
https://redd.it/1ow1653
@r_devops
Security scanner flagged critical vulnerability in our Next.js app. The vulnerable code literally never runs in production.
got flagged for a critical vulnerability in lodash during our pre-deployment security scan. cve with a high severity score. leadership immediately asked when we're patching it.
dug into it. we use lodash in one of our build scripts that runs during compilation. the vulnerable function never makes it to the production bundle. nextjs tree-shakes it out completely. the code doesn't even exist in our deployed application.
tried explaining this to our security team. they said "the scanner detected it in the repository so it needs to be fixed for compliance." spent three days updating lodash across the entire monorepo and testing everything just to satisfy a scanner that has no idea what actually ships to production.
meanwhile we have an actual exposed api endpoint with weak auth that nobody's looking at because it's not in the scanner's signature database.
the whole process feels backwards. we're prioritizing theoretical vulnerabilities in build tooling over actual security issues in running code because that's what the scanner can see.
starting to think static scanners just weren't built for modern javascript apps where most of your dependencies get compiled away.
anyone else dealing with this or found tools that understand what actually runs versus what's just sitting in node_modules?
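one practical stopgap is to check the built output (what actually deploys) rather than the repo. a hedged sketch — directory and symbol names are illustrative (e.g. lodash's `zipObjectDeep`, the function behind CVE-2020-8203):

```python
# Search the production build output for a flagged symbol, instead of
# trusting a scanner that only sees node_modules.
import pathlib

def symbol_in_texts(texts, symbol):
    """Pure check: does `symbol` appear in any of the given file contents?"""
    return any(symbol in t for t in texts)

def symbol_in_build(build_dir, symbol):
    """Scan every built JS file under `build_dir` (e.g. `.next/`)."""
    texts = (f.read_text(errors="ignore")
             for f in pathlib.Path(build_dir).rglob("*.js"))
    return symbol_in_texts(texts, symbol)

# e.g. fail CI if the vulnerable function survived tree-shaking:
# assert not symbol_in_build(".next", "zipObjectDeep")
```

it's crude (minification can rename symbols), but it gives you evidence for the "this code doesn't ship" argument that a repo-level scanner can't.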
https://redd.it/1ow3pyr
@r_devops
Working on a Kubernetes and GitOps project
I am working on a complex Kubernetes and GitOps project, touching even driver-level things and hardware setup that I am not understanding.
It has been 6 months and most things are going over my head. I'm making so many mistakes and racking up technical debt. I don't know what to do.
I tried learning Kubernetes; it looks simple in those videos and labs, but I feel the project's complexity is eating me. Not sure what is wrong.
Please suggest.
https://redd.it/1ow5ax5
@r_devops
Better script/tool distribution to team than Colab or web app?
I work on a small team (15 people) at a startup and am tasked with building internal tools and single- and multi-use scripts (usually in Python / JS). I do a mix of Colabs with ipywidgets interfaces and standalone web apps for more complete tools. Wondering if there is a better way, since there is always a large surface area to deal with: errors, updates, UX/UI, etc.
tl;dr: After you generate/code a script or internal process tool, how do you distribute it to other coworkers to use?
EDIT: for semi/non-tech coworkers mainly
https://redd.it/1ow68ea
@r_devops
How did you start your career in DevOps?
I graduated this May with a bachelor’s in computer engineering and a CS minor. I originally planned to go into software engineering, mostly web development, but I was pretty passive during undergrad and waited too long to look for internships. By the time I started applying for SWE jobs after graduation, I was way behind my classmates in experience and could not even get an interview.
Fortunately, my dad is the IT director at his company and had been struggling to fill an IT specialist role. He got me hired in June, and while it was not the career path I had in mind, I have ended up liking it more than I expected. I started with basic help desk tasks, onboarding and offboarding, and simple O365 and Active Directory work. The job was pretty boring at first and I had a lot of downtime, so I kept asking for more things to do. Now I am doing a fair amount of sysadmin work like GPO configuration, server management, and email administration.
In my downtime I've been learning PowerShell and automating pretty much everything I can get my hands on. A couple of months ago I finished a full onboarding automation system that integrates with Jira's API, and I learned a lot from it. Our CIO happened to notice all of the Microsoft Graph apps I have been making, so he created a repo in our company's Azure DevOps for me to push all my automation stuff to (I had previously been using my personal GitHub).
Since then I’ve built a few small projects in my down time. One was a simple web app that shows password expiry info for our AD users. I wrote the backend logic, threw together a basic frontend, and packaged it in Docker so I could deploy it on one of our servers. Working through that whole build, containerize, deploy workflow made me realize I actually really enjoy the DevOps side of things. I still have a lot to learn, but all this has gotten me thinking about a potential career in this field.
For others already in the field: how did you get started, especially if you came from help desk or sysadmin work? And what should I be doing if my goal is to eventually move into a DevOps role?
TL;DR: Currently working in IT with a mix of sysadmin responsibilities; wondering how others got into DevOps now that I am interested in the field.
https://redd.it/1ow7yu3
@r_devops
Snyk is not finding the same base image vulnerabilities as JFrog
Short version: We scan our Docker images using Snyk. We have a customer that scans them using JFrog. We got a report from the customer that shows medium and low base image vulnerabilities from their JFrog scan that our Snyk scan doesn't show.
Medium and low are outside of our SLA, but in principle I don't like this. I don't like not having all the info.
I've been playing with Snyk settings but I can't reproduce the JFrog results. Does anyone know any nice little Snyk tricks to fix this? We are using the default security policy.
https://redd.it/1ow80sn
@r_devops
How is devops in New Zealand?
I'm looking to immigrate, working with a firm, and currently applying to positions, but I've only just started my search. I've been in DevOps orgs for over 14 years, mostly jumping around between SRE, platform engineering, and "DevOps Engineer" roles, but I have spent some time as a SWE as well. Are things super competitive for senior/principal/staff positions? Are companies generally pretty decent to employees? Anyone looking to hire an immigrant, lol?
https://redd.it/1owbvkn
@r_devops
Automating Jira releases from my CI/CD Pipeline
Hi!
I want to know if I'm on the right track with my idea. Here is my problem/status quo:
* BitBucket and Jira
* Software repo pipeline builds container images and updates GitOps repo with new image tags
* GitOps repo deploys container images to different production environments
* Software repo is integrated with Jira and development information is visible in Jira work items
* I have no information in Jira work items about the actual deployments
* Releases/Versions in Jira are created manually and someone has to set that version on the work items
* DORA metrics are wrong (especially change lead time)
My plan:
* Run semantic-release in my software repo pipeline
* Build container images and tag them with the version from semantic-release
* Run a script to create an unreleased version in Jira and update all work items with that version (fixVersions field) using the work item reference in the commit message
* Trigger a deployment pipeline in my GitOps repo that runs a script that:
* Get all work items for that release from the Jira API
* Use the [Jira Deployments API](https://developer.atlassian.com/cloud/jira/software/rest/api-group-deployments/#api-group-deployments) to add deployment information on work items
* Set the release in Jira as 'released' with the correct release date
* Have correct DORA metrics
* No manual interaction
* Release management in Jira is driven by my git versions
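A couple of the glue pieces (pulling work-item references out of commit messages, and the fixVersions update body for the Jira Cloud REST API) can be sketched like this — project keys and the version name are placeholders:

```python
import re

# Jira work-item keys look like PROJ-123.
ISSUE_KEY = re.compile(r"\b[A-Z][A-Z0-9]+-\d+\b")

def extract_issue_keys(commit_messages):
    """Collect unique Jira keys referenced in commit messages, in order."""
    keys = []
    for msg in commit_messages:
        for key in ISSUE_KEY.findall(msg):
            if key not in keys:
                keys.append(key)
    return keys

def fix_version_payload(version_name):
    """Body for PUT /rest/api/3/issue/{key}: add a fixVersion by name."""
    return {"update": {"fixVersions": [{"add": {"name": version_name}}]}}

msgs = ["PROJ-123: fix login", "chore: bump deps", "PROJ-124 PROJ-123 cleanup"]
print(extract_issue_keys(msgs))  # → ['PROJ-123', 'PROJ-124']
```

The same key list can then be fed to the Deployments API calls in the GitOps pipeline.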
Has anyone done something like this? Are there better ways to do this? Good tools?
Thanks for reading this mess 😘
https://redd.it/1owcuiv
@r_devops
How confident are you that your container images aren't compromised at build time?
I've been digging into our container supply chain and it's frankly terrifying. We pull base images from Docker Hub, npm packages from who knows where, and our build process has zero visibility into what's actually getting baked in.
Had a security audit last month and they asked for signed SBOMs. We had nothing. Asked about provenance attestation, we had none. Meanwhile we're shipping containers with 500+ CVEs because our base images are bloated with stuff we don't even use.
What's everyone doing beyond "trust but don't verify"? Are you signing everything? How do you even audit this mess at scale?
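On the signing/SBOM side, one common pattern is generating an SBOM with syft and signing/attesting the image with cosign in CI. A hedged fragment — the `$IMAGE` variable and step wiring are assumptions, not a complete pipeline:

```yaml
# Hypothetical CI step: SBOM generation plus keyless signing/attestation.
- name: SBOM and signing
  run: |
    syft "$IMAGE" -o spdx-json > sbom.spdx.json
    cosign sign --yes "$IMAGE"
    cosign attest --yes --predicate sbom.spdx.json --type spdxjson "$IMAGE"
```

Pairing this with a slim, pinned base image (so the SBOM shrinks) and an admission policy that rejects unsigned images is what makes the signatures worth anything at scale.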
https://redd.it/1owfer2
@r_devops
Looking to design a better alerting system
Our company has an alerting system based on AWS Cloudwatch structured like so:
- Logs get ingested into an AWS Cloudwatch log group, a metric is defined on the group that looks for the keyword “ERROR”
- A Cloudwatch alarm is defined on the log metric, when the alarm is triggered, it triggers an SNS topic
- The SNS topic sends a request to a custom python endpoint
- The custom python endpoint scrapes through all logstreams within the log group for the “ERROR” keyword within a timeframe and posts it out to Slack
There are 2 problems with our setup:
1. Slack sends out the same ERRORs multiple times even though there’s one ERROR
- This happens if two ERRORs come in within the timeframe that our python script scrapes logs; our Cloudwatch alarm will trigger the SNS topic twice.
- Each SNS trigger will cause our python script to scrape and post out both ERRORs, so Slack gets them twice
2. Not all ERRORs end up posting out to Slack
- This happens when multiple ERRORs come in while the Cloudwatch alarm is in triggered state so the SNS topic is not triggered for those ERRORs
- Some ERRORs are outside of the timeframe for the python scraper, so they don’t get pulled and posted to Slack
- Our Cloudwatch alarm is configured to evaluate a 10sec window, which is the lowest period AWS allows
Ideally, we would like for our setup to be extremely precise and granular: each ERROR in the log will trigger the Cloudwatch alarm which will trigger the SNS topic and our python endpoint will pull logs only for that ERROR.
What do people recommend we change in our setup? How are others alerting for keywords in their logs?
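For problem 1, a content-hash dedup layer in the Python endpoint is a common stopgap while the pipeline gets redesigned. A minimal sketch — the cooldown length is an assumption, and a real deployment would persist `seen` somewhere shared (e.g. DynamoDB) rather than in memory:

```python
import hashlib
import time

class ErrorDeduper:
    """Suppress ERROR lines already posted within a cooldown window,
    so overlapping scrape windows don't double-post to Slack."""

    def __init__(self, cooldown_seconds=300):
        self.cooldown = cooldown_seconds
        self.seen = {}  # fingerprint -> timestamp of last post

    def should_post(self, log_line, now=None):
        now = time.time() if now is None else now
        fp = hashlib.sha256(log_line.encode()).hexdigest()
        last = self.seen.get(fp)
        if last is not None and now - last < self.cooldown:
            return False  # duplicate within the window: drop it
        self.seen[fp] = now
        return True

d = ErrorDeduper(cooldown_seconds=300)
print(d.should_post("ERROR db timeout", now=1000))  # True  (first sighting)
print(d.should_post("ERROR db timeout", now=1100))  # False (within cooldown)
print(d.should_post("ERROR db timeout", now=1400))  # True  (cooldown elapsed)
```

For problem 2, the usual answer is to stop fanning out through an alarm's state transitions entirely and instead stream events (e.g. a subscription filter feeding the processor directly), so every matching log line is handled exactly once.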
https://redd.it/1owge0h
@r_devops
Our company has an alerting system based on AWS Cloudwatch structured like so:
- Logs get ingested into an AWS Cloudwatch log group, a metric is defined on the group that looks for the keyword “ERROR”
- A Cloudwatch alarm is defined on the log metric, when the alarm is triggered, it triggers an SNS topic
- The SNS topic sends a request to a custom python endpoint
- The custom python endpoint scrapes through all logstreams within the log group for the “ERROR” keyword within a timeframe and posts it out to Slack
There are 2 problems with our setup:
1. Slack sends out the same ERRORs multiple times even though there’s one ERROR
- This happens if two ERRORs come in within the timeframe that our python noscript scrapes logs, our Cloudwatch alarm will trigger the SNS topic twice.
- Each SNS trigger will cause our python noscript to scrape and posts out both ERRORs twice to Slack
2. Not all ERRORs end up being posted to Slack
- When additional ERRORs come in while the Cloudwatch alarm is already in its triggered state, the SNS topic is not triggered for them
- Some ERRORs fall outside the timeframe of the python scraper, so they never get pulled and posted to Slack
- Our Cloudwatch alarm evaluates a 10-second window, which is the lowest period AWS allows
Ideally, our setup would be precise and granular: each ERROR in the log triggers the Cloudwatch alarm, which triggers the SNS topic, and our python endpoint pulls the logs for that ERROR only.
What do people recommend we change in our setup? How are others alerting for keywords in their logs?
https://redd.it/1owge0h
@r_devops
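One commonly suggested fix for both the duplication and the missed-event problems above is to drop the alarm/SNS hop entirely and attach a CloudWatch Logs subscription filter (pattern: `ERROR`) directly to a Lambda. Each invocation then carries exactly the log events that matched, so nothing is re-scraped and nothing falls outside a window. A minimal sketch; the Slack webhook URL is a hypothetical placeholder, not from the post:

```python
import base64
import gzip
import json
import urllib.request

# Hypothetical webhook URL -- replace with your own Slack incoming webhook.
SLACK_WEBHOOK = "https://hooks.slack.com/services/T000/B000/XXXX"

def decode_log_events(event):
    """Decode the gzipped, base64-encoded payload that CloudWatch Logs
    delivers to a subscription-filter target and return the matched
    log messages."""
    payload = json.loads(
        gzip.decompress(base64.b64decode(event["awslogs"]["data"]))
    )
    return [e["message"] for e in payload["logEvents"]]

def handler(event, context):
    # One invocation per batch of matching events: every ERROR line is
    # delivered exactly once, with no alarm window and no re-scraping.
    for message in decode_log_events(event):
        body = json.dumps({"text": message}).encode()
        req = urllib.request.Request(
            SLACK_WEBHOOK,
            data=body,
            headers={"Content-Type": "application/json"},
        )
        urllib.request.urlopen(req)
```

The trade-off is volume: with no alarm dampening, an error storm means one Slack message per ERROR line, so some batching or rate limiting in the Lambda may still be needed.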
Code review tooling
I've always been a massive proponent of code reviews. At Microsoft, there used to be an internal code review tool, which was basically just a diffing engine with some nifty integrations for the internal repos (pre-git).
Anyway, I've been building out something for myself to improve my workflow. I've been using GitKraken for a long time now and used it for most of my personal reviews (my workflow includes reviewing my own code first).
What kind of tooling do you use? If any.
https://redd.it/1owhoq7
@r_devops
what is best practices for deploying local changes to AWS ASG
I'm trying to move from a single EC2 instance to an Auto Scaling Group (ASG). Because each ASG has 2-3 instances, I need to create an image and a launch template, then perform an instance refresh, which takes a long time. How do you guys deploy?
https://redd.it/1owjulh
@r_devops
DevOps Eng Looking for Collaboration: Exchange High-Perf US-East Infra for Project Ideas
Hey y'all,
I know the pain of launching a project on cheap, distant infrastructure. I’ve currently got a high-spec, low-latency VPS with Cloudpanel in Ashburn, VA (US-East) that is sitting partially underutilized and screaming for a purpose.
I'm looking to partner with other engineers, developers, or product people who have solid Micro-SaaS or AI-powered app ideas but need a high-performance, cost-free environment to launch and test.
The Proposition: I provide the optimized infrastructure and ongoing maintenance/scaling; you provide the project concept and handle the development/marketing. We agree on a fair profit-split. Thinking specifically about projects where latency matters (e.g., real-time tools, high-traffic APIs).
If you have an idea that needs a rock-solid US-East foundation, hit me up!
https://redd.it/1owmfis
@r_devops
Memory Corruption in WebAssembly: Native Exploits in Your Browser 🧠
https://instatunnel.my/blog/memory-corruption-in-webassembly-native-exploits-in-your-browser
https://redd.it/1owm2x0
@r_devops
Introduction to Docker Image Optimization — practical steps and pitfalls for smaller, faster containers
Hi all — I recently wrote a blog post that walks through how to **optimize Docker container images**, focusing on common mistakes, layering strategies, build cache nuances, and how to reduce runtime footprint.
Some of the things covered:
* What makes a Docker image “bloated” and why that matters in CI/CD or production.
* Techniques like multi-stage builds, minimizing base images, proper layer ordering.
* Real-world trade-offs: speed vs size, security vs size, build complexity vs maintainability.
* A checklist you can apply in your next project (even if you’re already comfortable with Docker).
I’d love feedback from fellow devs/ops folks:
* Which techniques do you use that weren’t covered?
* Have you run into unexpected problems when trying to shrink images?
* In your environment (cloud, on-prem, edge) what did image size actually cost you (time, storage, cost)?
Here’s the link: [https://www.codetocrack.dev/introduction-to-docker-image-optimization](https://www.codetocrack.dev/introduction-to-docker-image-optimization)
I’m not just dropping a link — I’m here to discuss, clarify, expand on any bit you find interesting. Happy to walk through any part of the post in more depth if you like.
https://redd.it/1owq0t5
@r_devops
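The multi-stage build and layer-ordering techniques the post mentions can be sketched in a Dockerfile like this (the Go app, base images, and file layout are illustrative assumptions, not taken from the linked article):

```dockerfile
# Build stage: full toolchain, never shipped to production.
FROM golang:1.22 AS build
WORKDIR /src
# Copy dependency manifests first so this layer stays cached
# until go.mod/go.sum actually change.
COPY go.mod go.sum ./
RUN go mod download
# Source changes invalidate only the layers below this line.
COPY . .
RUN CGO_ENABLED=0 go build -o /app ./cmd/server

# Runtime stage: minimal base image, only the compiled binary.
FROM gcr.io/distroless/static-debian12
COPY --from=build /app /app
ENTRYPOINT ["/app"]
```

The ordering matters for the build cache: putting `COPY . .` before `go mod download` would re-download every dependency on each source change, and shipping the `golang` image directly would add hundreds of megabytes of toolchain to the runtime image.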
Hiring dev / cloud help
I'm trying to set up code in the cloud. I'm doing it on Azure and it doesn't load right: the website is blank and it shouldn't be. It might be a code or setup issue, I don't know. I've asked AI and it doesn't know what to do. I'll pay $100 or more for the fix, which should take about 2 hours ($50/h). You'll look, tell me what the issue is, and fix it. I want it done now, so send me a DM and let me know if you can do it.
https://redd.it/1owsoxk
@r_devops
Context aware AI optimization for Spark jobs
We're trying to optimize our Spark jobs using AI suggestions, but it keeps recommending things that would break the job. The recommendations don't seem to take our actual data or cluster setup into account. How do you make sure AI suggestions actually fit your environment? I'm looking for ways to get more context-aware optimization that doesn't just break everything.
https://redd.it/1owthpv
@r_devops
Anyone in Europe getting more than 100K?
Hello all,
I'm looking for a job, as the US client I'm currently working for didn't like that I took paternity leave.
I'm wondering how difficult it is to find a remote job paying more than 100K. Is this realistic?
Any advice from those who managed to do so? I've thought about creating an LLC in the US and then trying to find clients over there, but that's going to be hard as hell, plus the bureaucracy.
Another option I've considered is going niche: taking advantage of my past in embedded software, I've thought about going into eBPF or something like that. Any recommendations? There are many paths (Kubernetes development, AI, security, etc.), so I'm a bit lost about this option.
For those interested in helping me in the right direction, my CV is here: https://www.swisstransfer.com/d/a438c72f-e4b3-4ee8-a114-09d177118015 Feel free to connect on LinkedIn.
Thank you in advance.
https://redd.it/1owt72p
@r_devops