Does hybrid security create invisible friction no one admits?
Hybrid security policies don’t just block access; they subtly shape how people work. Some teams duplicate work just to avoid policy conflicts. Some folks even find workarounds, which probably isn’t great. Nobody talks about it because it’s invisible to leadership, but it’s real. Do you all see this in your orgs, or is it just us?
https://redd.it/1p72igz
@r_devops
devs who’ve tested a bunch of AI tools, what actually reduced your workload instead of increasing it?
i’ve been hopping between a bunch of these coding agents and honestly most of them felt cool for a few days and then started getting in the way. after a while i just wanted a setup that doesn’t make me babysit it.
right now i’ve narrowed it down to a small mix. cosine has stayed in the rotation, along with aider, windsurf, cursor’s free tier, cody, and continue dev. tried a few others that looked flashy but didn’t really click long term.
curious what everyone else settled on. which ones did you keep, and which ones did you quietly uninstall after a week?
https://redd.it/1p72pjc
@r_devops
Relying on AI for learning, is it good or bad?
Hello everyone! I recently quit my Game Dev job and decided that DevOps is a better field for my mindset and work style, so I made the switch.
I'm currently building my own homelab from scratch so I can use it as my portfolio and actually have some autonomy under my belt that I can rely on in my daily life. I'm pretty new to this; I just started last week. So far I can confidently say that I understand the stuff I've integrated.
Short summary of what I have:
I set up 2 Arch and 1 Debian server PCs, configured manually with partitions, encryption, etc. I practice Linux daily on my main PC and use the terminal consistently. I SSH into the other two PCs when I want to do something. Debian currently runs Linkding behind an Nginx reverse proxy. I plan to integrate GitHub Actions CI plus Grafana & Prometheus next. I have a few bash scripts I run for my own use, and I can code in Python. The homelab is documented on GitHub with README files.
I quite enjoy learning something completely new and making progress in it, but I do a lot of it by asking AI and learning why and how I should do it that way. I mostly follow its recommendations, even though I find different approaches from time to time.
I wonder if it's too risky for learning to lean on AI as an assistant like this, or am I just overthinking? I can't be sure. What are your thoughts, and what would you recommend?
https://redd.it/1p73dns
@r_devops
Looking for a few Network / Automation Engineers to try a new multi-vendor CLI + automation workflow tool
Hey all,
I’m working with a small team on a new workflow tool for network and automation engineers. Before we open it to a bigger audience, we’re looking for a few people who regularly deal with things like:
• Multi-vendor networks (Cisco, Juniper, Arista, etc.)
• Lots of parallel SSH sessions
• Repetitive CLI workflows
• Troubleshooting or debugging across multiple devices
• Lab work (CML, EVE-NG, GNS3, vendor simulators)
• Python/Ansible automation or CI/CD validation
The goal is to make everyday operational tasks a lot smoother, especially for people who are constantly jumping between devices or dealing with multi-vendor issues.
We’re looking for a handful of engineers willing to try it out and give honest feedback based on your real workflows.
Happy to compensate for your time (approximately 1 hr/day for 1–2 months).
If this sounds interesting, feel free to DM me or drop a comment and I’ll reach out with details.
Thanks!
https://redd.it/1p736at
@r_devops
Broken Object Level Authorization (BOLA): The API Vulnerability Bankrupting Companies 🔓
https://instatunnel.my/blog/broken-object-level-authorization-bola-the-api-vulnerability-bankrupting-companies
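For context on the linked article: the core BOLA flaw is an endpoint that trusts a client-supplied object ID without verifying ownership. A minimal, hypothetical sketch (the data and function names are illustrative, not from the article):

```python
# Hypothetical handler logic illustrating BOLA and its fix.
# The invoice store and names below are made-up stand-ins.

INVOICES = {
    "inv-1": {"owner": "alice", "amount": 120},
    "inv-2": {"owner": "bob", "amount": 75},
}

def fetch_invoice_vulnerable(invoice_id, current_user):
    # BOLA: returns any object the caller names, ignoring ownership.
    return INVOICES.get(invoice_id)

def fetch_invoice_fixed(invoice_id, current_user):
    # Fix: authorize at the object level, not just the endpoint level.
    invoice = INVOICES.get(invoice_id)
    if invoice is None or invoice["owner"] != current_user:
        return None  # treat "not yours" the same as "not found"
    return invoice
```

The key point: authentication alone doesn't help here; every object lookup needs an ownership (or permission) check.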
https://redd.it/1p75vvk
@r_devops
If you had to pick one vendor for cross-browser + mobile + API testing, who’s your shortlist?
Our QA team is trying to consolidate tools instead of juggling 3–4 platforms.
Which vendors actually deliver all-in-one testing (cloud devices, browsers, API monitors)?
Is TestGrid, LambdaTest, or BrowserStack closer to a “single pane of glass,” or is that still unrealistic?
https://redd.it/1p75ume
@r_devops
How to run Llama 3.1 70B on EC2?
Hi
Has anyone tried to run Llama 3.1 70B on an EC2 instance?
If yes, which instance size did you choose?
I’m trying to run the same model via Ollama but can’t figure out the right instance size.
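Not an answer from the thread, but a rough back-of-envelope for sizing (figures are approximate; Ollama's default 70B downloads are roughly 4-bit quantized):

```python
# Rough VRAM estimate for serving a 70B model (approximate heuristic,
# not a guarantee: KV cache and runtime overhead vary with context size).
def estimate_vram_gib(params_billion, bits_per_weight, overhead_factor=1.3):
    """Weights footprint plus ~30% headroom for KV cache and runtime."""
    weight_gib = params_billion * 1e9 * (bits_per_weight / 8) / 2**30
    return weight_gib * overhead_factor

q4 = estimate_vram_gib(70, 4.5)   # ~4-bit quant (Q4_K_M is ~4.5 bits/weight)
fp16 = estimate_vram_gib(70, 16)  # unquantized half precision

print(f"~4-bit: {q4:.0f} GiB, fp16: {fp16:.0f} GiB")
```

That puts a 4-bit 70B at roughly 45-50 GiB, so a multi-GPU instance (e.g. g5.12xlarge with 4x24 GB A10Gs, 96 GB total) is a plausible starting point, while fp16 needs p4d-class hardware; treat these as estimates to validate, not a recommendation.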
https://redd.it/1p7ak1j
@r_devops
I built an agentless K8s cost auditor (Bash + Python) to avoid long security reviews
I've been consulting for startups and kept running into the same wall: we needed to see where money was being wasted in the cluster, but installing tools like Kubecost or CastAI required a 3-month security review process because they install persistent agents/pods.
So I built a lightweight, client-side tool to do a "15-minute audit" without installing anything in the cluster.
How it works:
1. It runs locally on your machine using your existing kubectl context.
2. It grabs kubectl top metrics (usage) and compares them to deployments (requests/limits).
3. It calculates the cost gap using standard cloud pricing (AWS/GCP/Azure).
4. It prints the monthly waste total directly to your terminal.
Features:
100% Local: No data leaves your machine.
Stateless Viewer: If you want charts, I built a client-side web viewer (drag & drop JSON) that parses the data in your browser.
Privacy: Pod names are hashed locally before any export/visualization.
MIT Licensed: You can fork/modify it.
Repo: https://github.com/WozzHQ/wozz
Quick Start:
curl -sL https://raw.githubusercontent.com/WozzHQ/wozz/main/scripts/wozz-audit.sh | bash
I'm looking for feedback on the waste calculation logic—specifically, does a 20% safety buffer on memory requests feel right for most production workloads?
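To make that question concrete, here's my reading of the buffer logic as a small sketch (function and numbers are illustrative, not the repo's actual code):

```python
# Illustrative waste calculation: request vs. buffered peak usage.
# Not the repo's actual implementation; prices and units are examples.
def monthly_waste_usd(request_gib, peak_usage_gib, price_per_gib_month,
                      safety_buffer=0.20):
    """Waste = requested capacity minus (peak usage + safety buffer)."""
    needed = peak_usage_gib * (1 + safety_buffer)
    return max(request_gib - needed, 0) * price_per_gib_month

# e.g. requesting 8 GiB while peaking at 2 GiB, at $4/GiB-month:
print(monthly_waste_usd(8, 2, 4.0))
```

Under this reading, whether 20% "feels right" mostly depends on how spiky the workload's memory is relative to the sampling window of `kubectl top`.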
Thanks!
https://redd.it/1p7baoc
@r_devops
I built a simple CLI tool to audit AWS IAM keys because I was tired of clicking through the Console. Roast my code.
Hey everyone,
I've been working on hardening cloud setups for a while and noticed I always run the same manual checks: looking for users without MFA, old access keys (>90 days), and dormant admins.
So I wrote a Python script (Boto3) to automate this and output a simple table.
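The checks described map onto a few IAM API calls; a hedged sketch of the idea (real Boto3 calls, but the structure and thresholds are illustrative, not the linked repo's code):

```python
# Sketch of the described checks using Boto3. The IAM calls are real
# API methods; the 90-day threshold and output format are illustrative.
from datetime import datetime, timezone

def key_is_stale(create_date, now=None, max_age_days=90):
    """True when an access key is older than the rotation threshold."""
    now = now or datetime.now(timezone.utc)
    return (now - create_date).days > max_age_days

def audit_users():  # requires AWS credentials to be configured
    import boto3
    iam = boto3.client("iam")
    for user in iam.list_users()["Users"]:
        name = user["UserName"]
        if not iam.list_mfa_devices(UserName=name)["MFADevices"]:
            print(f"{name}: no MFA")
        for key in iam.list_access_keys(UserName=name)["AccessKeyMetadata"]:
            if key["Status"] == "Active" and key_is_stale(key["CreateDate"]):
                print(f"{name}: stale key {key['AccessKeyId']}")
```

(Production versions should also use paginators, since `list_users` caps results per page.)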
It’s open-source. I’d love some feedback on the logic or suggestions on what other security checks I should add.
repo
https://redd.it/1p7bbop
@r_devops
Hey everyone,
I've been working on hardening cloud setups for a while and noticed I always run the same manual checks: looking for users without MFA, old access keys (>90 days), and dormant admins.
So I wrote a Python noscript (Boto3) to automate this and output a simple table.
It’s open-source. I’d love some feedback on the logic or suggestions on what other security checks I should add.
repo
https://redd.it/1p7bbop
@r_devops
GitHub
GitHub - ranas-mukminov/Cloud-IAM-Optimizer: AWS/GCP IAM Least Privilege Auditor.
AWS/GCP IAM Least Privilege Auditor. Contribute to ranas-mukminov/Cloud-IAM-Optimizer development by creating an account on GitHub.
DevOps engineer here – want to level up into MLOps / LLMOps + go deeper into Kubernetes. Best learning path in 2026?
I’ve been working as a DevOps engineer for a few years now (CI/CD, Terraform, AWS/GCP, Docker, basic K8s, etc.). I can get around a cluster, but I know my Kubernetes knowledge is still pretty surface-level.
With all the AI/LLM hype, I really want to pivot/sharpen my skills toward MLOps (and especially LLMOps) while also going much deeper into Kubernetes, because basically every serious ML platform today runs on K8s.
My questions:
1. What’s the best way in 2025 to learn MLOps/LLMOps coming from a DevOps background?
Are there any courses, learning paths, or certifications that you actually found worth the time?
Anything that covers the full cycle: data versioning, experiment tracking, model serving, monitoring, scaling inference, cost optimization, prompt management, RAG pipelines, etc.?
2. Separately, I want to become really strong at Kubernetes (not just “I deployed a yaml”).
Looking for a path that takes me from intermediate → advanced → “I can design and troubleshoot production clusters confidently”.
CKA → CKAD → CKS worth it in 2025? Or are there better alternatives (KodeKloud, Kubernetes the Hard Way, etc.)?
I’m willing to invest serious time (evenings + weekends) and some money if the content is high quality. Hands-on labs and real-world projects are a big plus for me.
https://redd.it/1p7ey3d
@r_devops
Impostor Syndrome in Tech: Why It Hits Hard and What to Do About it
Have you ever thought you are not good enough at work? That you are not that smart and only got the job by luck? That’s called Impostor Syndrome! And it’s more common than you think, because many people don’t even dare to talk about it!
I wrote a post about that mainly focusing on DevOps, but it’s still valid for software engineering, and the tech industry in general:
- What is impostor syndrome, and what is not?
- Why does impostor syndrome hit hard?
- What to do about impostor syndrome?
Impostor Syndrome in Tech: Why It Hits Hard and What to Do About it
Enjoy :-)
https://redd.it/1p7fqm6
@r_devops
I feel lost, how do I manage to build the right pipeline as a junior dev in my company without a senior?
I have about 2 years of experience as a software developer.
In my last job I had a good senior who taught me a bit of DevOps with Azure DevOps, but my current boss doesn't have any knowledge of CI/CD or DevOps strategies in general; basically, he worked directly on production and copied the compiled .exe onto the server when done...
In the past months, in the few free moments I had, I've set up a very simple pipeline on Bitbucket which runs on a self-hosted Windows machine:
BUILD -> DEPLOY
But now I want to improve it by adding more steps. At the very least I want to version the DB, because otherwise it's a mess, so I've set up a test machine with the test database. I was thinking about starting simple with:
BUILD -> UPDATE TEST DB -> UPDATE PRODUCTION DB -> DEPLOY
Is this OK? Should each of us work with a local copy of the DB? Do we always have to check for new DB changes when working with it? We use Visual Studio.
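One common starting point for the "version the DB" step is a folder of numbered migrations applied in order, tracked in a version table, run against the test DB first and production only after tests pass. A minimal sketch (SQLite purely for illustration; your server engine and names will differ):

```python
# Minimal migration runner: applies pending numbered migrations once,
# tracked in a schema_version table. SQLite is used only for the demo;
# the same pattern works against any SQL database.
import sqlite3

MIGRATIONS = {  # normally loaded from files like 001_create_customers.sql
    1: "CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)",
    2: "ALTER TABLE customers ADD COLUMN email TEXT",
}

def migrate(conn):
    conn.execute("CREATE TABLE IF NOT EXISTS schema_version (v INTEGER)")
    current = conn.execute("SELECT MAX(v) FROM schema_version").fetchone()[0] or 0
    for version in sorted(MIGRATIONS):
        if version > current:
            conn.execute(MIGRATIONS[version])
            conn.execute("INSERT INTO schema_version VALUES (?)", (version,))
    conn.commit()
    return current  # schema version before this run
```

Tools like Flyway, Liquibase, or (for .NET shops) EF Core migrations / DbUp implement exactly this pattern, so you don't have to maintain the runner yourself.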
I feel lost. I know that each environment is different and there isn't a strategy that works for everyone, but I don't even know where I can learn something about it.
https://redd.it/1p7bbz6
@r_devops
What’s the worst kind of API analytics setup you’ve inherited from a previous team?
Is it just me or do most teams over-engineer API observability?
https://redd.it/1p7iich
@r_devops
Small SaaS on a Serverless Setup
I remember seeing multiple comments online from developers at small-scale SaaS companies that adopted an entirely event-driven architecture with everything running on Lambdas, describing it as such a headache: endless debugging.
What are your opinions on it? If you agree with the statement, I’d love to hear why.
https://redd.it/1p7lika
@r_devops
GitHub Runner Cost
My team has been spending a lot on GitHub runners and I was wondering how other folks have dealt with this. I've seen tools like [blacksmith](http://blacksmith.sh); has anyone tried them? Or is this a cost we should just eat?
https://redd.it/1p7k9fi
@r_devops
I built a small open-source browser extension to validate Kubernetes YAMLs locally — looking for feedback
Hey everyone,
I’ve been working on a side project called Guardon — a lightweight browser extension that lets you validate Kubernetes YAMLs right inside GitHub or GitLab, before a PR is even created.
It runs completely local (no backend or telemetry) and supports multi-document YAML and Kyverno policy import.
The goal is to help catch resource, limit, and policy issues early, basically shifting security a bit more “left.”
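As a reader, the kind of check described might look like this in spirit (a hypothetical rule over an already-parsed manifest; this is not Guardon's actual code):

```python
# Hypothetical "missing resource limits" rule over a parsed K8s manifest
# (a dict as produced by a YAML parser). Illustrative only; Guardon's
# real implementation and rule set may differ.
def containers_missing_limits(manifest):
    """Return names of containers that lack resources.limits."""
    spec = manifest.get("spec", {})
    pod_spec = spec.get("template", {}).get("spec", spec)  # Deployment or Pod
    missing = []
    for c in pod_spec.get("containers", []):
        if not c.get("resources", {}).get("limits"):
            missing.append(c.get("name", "<unnamed>"))
    return missing

deployment = {
    "kind": "Deployment",
    "spec": {"template": {"spec": {"containers": [
        {"name": "app", "resources": {"limits": {"memory": "256Mi"}}},
        {"name": "sidecar"},
    ]}}},
}
print(containers_missing_limits(deployment))  # → ['sidecar']
```

Running rules like this in the browser before a PR exists is the "shift left" the post describes.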
It’s open-source here: https://github.com/guardon-dev/guardon
Try it: https://chromewebstore.google.com/detail/jhhegdmiakbocegfcfjngkodicpjkgpb?utm_source=item-share-cb
Demo: https://youtu.be/LPAi8UY1XIM?si=0hKOnqpf6WzalpTh
Would really appreciate any feedback or suggestions from folks working with Kubernetes policies, CI/CD, or developer platforms.
Thanks!
https://redd.it/1p7nzb7
@r_devops
Oci DevOps CI/CD
Anybody here using OCI DevOps CI/CD extensively? We have been using it for a while and have had a good experience. Sure, there are some problems, but so far it’s been very effective for us.
https://redd.it/1p7kf9c
@r_devops
WIP student project: multi-account AWS “Secure Data Hub” (would love feedback!)
https://preview.redd.it/dgn7dr8a6p3g1.png?width=1920&format=png&auto=webp&s=d8f3e6f9de1fcb713aa505dd7236f71ac798462c
Hi everyone,
**TL;DR:**
I’m a sophomore cybersecurity engineering student sharing a work-in-progress multi-account Amazon Web Services (AWS, cloud computing platform) “Secure Data Hub” architecture with Cognito, API Gateway, Lambda, DynamoDB, and KMS. It is about 60% built and I would really appreciate any security or architecture feedback.
**See overview below!** (bottom of post, check repo for more);
...........
I’m a sophomore cybersecurity engineering student and I’ve been building a personal project called **Secure Data Hub**. The idea is to give small teams handling sensitive client data something safer than spreadsheets and email, but still simple to use.
The project is about 60% done, so this is not a finished product post. I wanted to share the design and architecture now so I can improve it before everything is locked in.
**What it is trying to do**
* Centralize client records for small teams (small law, health, or finance practices).
* Separate client and admin web apps that talk to the same encrypted client profiles.
* Keep access narrow and well logged so mistakes are easier to spot and recover from.
**Current architecture (high level)**
* Multi-account AWS Organizations setup (management, admin app, client app, data, security).
* Cognito + API Gateway + Lambda for auth and APIs, using ID token claims in mapping templates.
* DynamoDB with client-side encryption using the DynamoDB Encryption Client and a customer-managed KMS key, on top of DynamoDB’s own encryption at rest.
* Centralized logging and GuardDuty findings into a security account.
* Static frontends (HTML/JS) for the admin and client apps calling the APIs.
**Tech stack**
* Compute: AWS Lambda
* Database and storage: DynamoDB, S3
* Security and identity: IAM, KMS, Cognito, GuardDuty
* Networking and delivery: API Gateway (REST), CloudFront, Route 53
* Monitoring and logging: CloudWatch, centralized logging into a security account
* Frontend: Static HTML/JavaScript apps served via CloudFront and S3
* IaC and workflow: Terraform for infrastructure as code, GitHub + GitHub Actions for version control and CI
**Who this might help**
* Students or early professionals preparing for the AWS Certified Security – Specialty who want to see a realistic multi-account architecture that uses AWS KMS for both client-side and server-side encryption, rather than isolated examples.
* Anyone curious how identity, encryption, logging, and GuardDuty can fit together in one end-to-end design.
I architected, diagrammed, and implemented everything myself from scratch (no templates, no previous setup) because one of my goals was to learn what it takes to design a realistic, secure architecture end to end.
I know some choices may look overkill for small teams, but **I’m very open to suggestions** for simpler or more correct patterns.
**I’d really love feedback on anything:**
* Security concerns I might be missing
* Places where the account/IAM design could be better or simpler
* Better approaches for client-side encryption and updating items in DynamoDB
* Even small details like naming, logging strategy, etc.
Github repo (code + diagrams):
[`https://github.com/andyyaro/Building-A-Secure-Data-Hub-in-the-cloud-AWS-`](https://github.com/andyyaro/Building-A-Secure-Data-Hub-in-the-cloud-AWS-)
Write-up / slides:
[`https://gmuedu-my.sharepoint.com/:b:/g/personal/yyaro_gmu_edu/IQCTvQ7cpKYYT7CXae4d3fuwAVT3u67MN6gJr3nyEncEcS0?e=YFpCFC`](https://gmuedu-my.sharepoint.com/:b:/g/personal/yyaro_gmu_edu/IQCTvQ7cpKYYT7CXae4d3fuwAVT3u67MN6gJr3nyEncEcS0?e=YFpCFC)
Feel free to DM me. Whether you’re also a student learning this stuff or someone with real-world experience, I’m always happy to exchange ideas and learn from others.
And if you think this could help other students or small teams, an upvote would really help more folks see it. Thanks a lot for taking the time
https://preview.redd.it/dgn7dr8a6p3g1.png?width=1920&format=png&auto=webp&s=d8f3e6f9de1fcb713aa505dd7236f71ac798462c
Hi everyone,
**TL;DR:**
I’m a sophomore cybersecurity engineering student sharing a work-in-progress multi-account Amazon Web Services (AWS, cloud computing platform) “Secure Data Hub” architecture with Cognito, API Gateway, Lambda, DynamoDB, and KMS. It is about 60% built and I would really appreciate any security or architecture feedback.
**See overview below!** (bottom of post, check repo for more);
...........
I’m a sophomore cybersecurity engineering student and I’ve been building a personal project called **Secure Data Hub**. The idea is to give small teams handling sensitive client data something safer than spreadsheets and email, but still simple to use.
The project is about 60% done, so this is not a finished product post. I wanted to share the design and architecture now so I can improve it before everything is locked in.
**What it is trying to do**
* Centralize client records for small teams (small law, health, or finance practices).
* Separate client and admin web apps that talk to the same encrypted client profiles.
* Keep access narrow and well logged so mistakes are easier to spot and recover from.
**Current architecture (high level)**
* Multi-account AWS Organizations setup (management, admin app, client app, data, security).
* Cognito + API Gateway + Lambda for auth and APIs, using ID token claims in mapping templates.
* DynamoDB with client-side encryption using the DynamoDB Encryption Client and a customer-managed KMS key, on top of DynamoDB’s own encryption at rest.
* Centralized logging and GuardDuty findings into a security account.
* Static frontends (HTML/JS) for the admin and client apps calling the APIs.
**Tech stack**
* Compute: AWS Lambda
* Database and storage: DynamoDB, S3
* Security and identity: IAM, KMS, Cognito, GuardDuty
* Networking and delivery: API Gateway (REST), CloudFront, Route 53
* Monitoring and logging: CloudWatch, centralized logging into a security account
* Frontend: Static HTML/JavaScript apps served via CloudFront and S3
* IaC and workflow: Terraform for infrastructure as code, GitHub + GitHub Actions for version control and CI
**Who this might help**
* Students or early professionals preparing for the AWS Certified Security – Specialty who want to see a realistic multi-account architecture that uses AWS KMS for both client-side and server-side encryption, rather than isolated examples.
* Anyone curious how identity, encryption, logging, and GuardDuty can fit together in one end-to-end design.
I architected, diagrammed, and implemented everything myself from scratch (no templates, no previous setup) because one of my goals was to learn what it takes to design a realistic, secure architecture end to end.
I know some choices may look overkill for small teams, but **I’m very open to suggestions** for simpler or more correct patterns.
**I’d really love feedback on anything:**
* Security concerns I might be missing
* Places where the account/IAM design could be better or simpler
* Better approaches for client-side encryption and updating items in DynamoDB
* Even small details like naming, logging strategy, etc.
Github repo (code + diagrams):
[`https://github.com/andyyaro/Building-A-Secure-Data-Hub-in-the-cloud-AWS-`](https://github.com/andyyaro/Building-A-Secure-Data-Hub-in-the-cloud-AWS-)
Write-up / slides:
[`https://gmuedu-my.sharepoint.com/:b:/g/personal/yyaro_gmu_edu/IQCTvQ7cpKYYT7CXae4d3fuwAVT3u67MN6gJr3nyEncEcS0?e=YFpCFC`](https://gmuedu-my.sharepoint.com/:b:/g/personal/yyaro_gmu_edu/IQCTvQ7cpKYYT7CXae4d3fuwAVT3u67MN6gJr3nyEncEcS0?e=YFpCFC)
Feel free to DM me; whether you’re also a student learning this stuff or someone with real-world experience, I’m always happy to exchange ideas and learn from others.
And if you think this could help other students or small teams, an upvote would really help more folks see it. Thanks a lot for taking the time!

Building Docker Images with Nix
I've been experimenting with creating container images via Nix and wanted to share with the community. I've found the results to be rather insane!
[Check it out here!](https://github.com/okwilkins/h8s/tree/f7d8832efce6a19bb32cdc49b39928f8de49db80/images/image-buildah)
The project linked is a fully worked example of how Nix is used to make a container that can create other containers. These will be used to build containers within my homelab and self-hosted CI/CD pipelines in Argo Workflows. If you're into homelabbing give the wider repo a look through also!
Using Nix allows for the following benefits:
1. The shell environment and binaries within the container are nearly identical to the shell Nix can provide locally.
2. The image is run from scratch.
* This means the image is nearly as small as possible.
* Security-wise, there are fewer binaries left in compared to Alpine- or Debian-based images.
3. As Nix flakes pin exact versions, all binaries stay in a constant and known state.
* With Alpine- or Debian-based images, when updating or installing packages, this is not a given.
4. The commands run via Taskfile are the same locally as they are within CI/CD pipelines.
5. It makes it easy to build images for different CPU architectures and to keep local dev consistent.
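A minimal sketch of the approach, using nixpkgs' `dockerTools`. The package choices and names here are illustrative, not the exact derivation from the linked repo:

```nix
# Hedged sketch: building an OCI image from a flake with dockerTools.
{
  inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-24.05";

  outputs = { self, nixpkgs }:
    let
      system = "x86_64-linux";
      pkgs = nixpkgs.legacyPackages.${system};
    in {
      packages.${system}.image = pkgs.dockerTools.buildLayeredImage {
        name = "buildah-runner";
        tag  = "latest";
        # Only these closures end up in the image: no shell, no distro
        # base layer, which is why the result is close to "from scratch".
        contents = [ pkgs.buildah pkgs.coreutils ];
        config.Cmd = [ "${pkgs.buildah}/bin/buildah" "--version" ];
      };
    };
}
```

`nix build .#image` then produces a tarball that can be loaded with `docker load` or pushed with skopeo.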
The only big downside I've found with this is that when running the `nix build` step, the cache is often invalidated, leading to the image being nearly completely rebuilt every time.
Really interested in knowing what you all think!
https://redd.it/1p7mpnd
@r_devops
Skill Rot from First DevOps-Adjacent Job. Feel Like I Don’t Have the Skills to Jump.
Hello, intelligentsia of the illustrious r/devops. I’m in a bit of a pickle and am looking for some insight. So I’m about 1 year and couple of months into my first job which happens to be in big tech. The company is known to be very stable and a “rest and vest” sort of situation with good WLB.
My work abstractly entails ETL operations on internal documents. The actual transformation here usually consists of Node scripts that find metadata in the documents and re-insert the metadata, either in its original form or transformed by some computations, into a simplified version of the documents (think HTML flattening) before dropping them in an S3 bucket. I also schedule and create GitHub Actions jobs for these operations based on jobs already established. Additionally, we manage our infrastructure with Terraform and AWS. The pay is very good for this early in my career.
This is where the big wrinkle comes in, it seems that our architecture and processes are very mature and the team’s pace is very slow/stable. I looked back at all my commits in the months since I started working and was shocked at how few code contributions I’ve made. In terms of the infrastructure the only real exposure I’ve had to it is through routine/ run book style operations. I haven’t been actually able to alter the terraform files in all the time I’ve been here. There is a lot of tedious/rote work. My most significant contributions have been in the ETL side.
At this point some may say to communicate with my boss and ask for more on the infra side / more complex tasks. However, the issue is that it genuinely doesn’t seem that there are many more complex things to do. I realized recently that the second most junior person on the team, who’s been here a couple more years than I have and has also had more jobs than I have, doesn’t seem to do much more complex work than I do. The most complex work just goes to the senior engineer, and I suspect it’s been like this for a while. I had a feeling 6 months in that this position may be bad for my career, but I held out hope until now, and I’m afraid I realized too late.
I am hoping to find a junior devops role, but I am feeling fearful and overwhelmed since 1. I barely have the experience needed for devops with how surface level my experience here has been and 2. the job market seems vicious. I am beginning to upskill and work on getting a tight understanding of python, docker, kubernetes, and AWS. I also plan to make some projects. I hope to hop within the next 6 months.
I guess my questions with all this information in mind are:
1. Is my plan realistic? How much do projects showing self-taught DevOps skills really matter when the job I performed did not actually require or teach those skills? Short of lying, this will put me at a significant disadvantage, right?
2. If you were in my position how would you handle this?
Thank you all in advance. I’m feeling very uncertain about the future of my career.
https://redd.it/1p7sd6t
@r_devops