We had a credential leak scare and now I do not trust how we share access
"We had a close call last week where an old API key showed up in a place it absolutely should not have been. Nothing bad happened, but it was enough to make me realize how messy our access setup actually is. Between Slack, docs, and password managers, credentials have been shared far more casually than I am comfortable with.
The problem is that people genuinely need access. Contractors, accountants, devs jumping in to help, sometimes even temporary automation. Rotating everything constantly is not realistic, but keeping things as they are feels irresponsible.
I am looking for recommendations on better ways to handle this. Ideally something where access can be granted without exposing credentials and can be revoked instantly without breaking everything else. How are others solving this after a scare like this?"
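What is being asked for here is usually "broker the access, don't share the secret": a broker holds the real credential and hands out short-lived, scoped grants that can be revoked on their own. A minimal Python sketch of that idea (the `issue_grant`/`revoke_grant` names and the in-memory store are purely illustrative, not any particular product):

```python
import secrets
import time

# In-memory grant store; a real broker would persist this (database, Vault, etc.).
GRANTS = {}

def issue_grant(who: str, scope: str, ttl_seconds: int = 3600) -> str:
    """Hand out a short-lived token tied to a scope, never exposing the real key."""
    token = secrets.token_urlsafe(32)
    GRANTS[token] = {"who": who, "scope": scope, "expires": time.time() + ttl_seconds}
    return token

def check_grant(token: str, scope: str) -> bool:
    """A grant is valid only if it exists, matches the scope, and has not expired."""
    grant = GRANTS.get(token)
    return bool(grant and grant["scope"] == scope and grant["expires"] > time.time())

def revoke_grant(token: str) -> None:
    """Instant revocation: drop the grant; the underlying credential stays untouched."""
    GRANTS.pop(token, None)

if __name__ == "__main__":
    t = issue_grant("contractor-jane", "read:billing", ttl_seconds=900)
    print(check_grant(t, "read:billing"))  # True
    revoke_grant(t)
    print(check_grant(t, "read:billing"))  # False
```

Managed versions of the same pattern exist in tools like HashiCorp Vault's dynamic secrets or cloud IAM role assumption, where the broker also rotates the underlying credential for you.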
https://redd.it/1pyo1hh
@r_devops
"We had a close call last week where an old API key showed up in a place it absolutely should not have been. Nothing bad happened, but it was enough to make me realize how messy our access setup actually is. Between Slack, docs, and password managers, credentials have been shared far more casually than I am comfortable with.
The problem is that people genuinely need access. Contractors, accountants, devs jumping in to help, sometimes even temporary automation. Rotating everything constantly is not realistic, but keeping things as they are feels irresponsible.
I am looking for recommendations on better ways to handle this. Ideally something where access can be granted without exposing credentials and can be revoked instantly without breaking everything else. How are others solving this after a scare like this?"
https://redd.it/1pyo1hh
@r_devops
Reddit
From the devops community on Reddit
Explore this post and more from the devops community
Are you guys using SOPs and runbooks?
I'm about to start writing SOPs and runbooks for my infra and wanted to see how others are doing it.
Are you actually using SOPs/runbooks in prod, or do they just rot over time?
What tools do you use to draft and maintain them? (Notion, Confluence, ...)
How are you handling alerts?
Would love to hear what setups are actually working (or not) in real companies.
https://redd.it/1pynmxg
@r_devops
How do you enforce escalation processes across teams?
In environments with multiple teams and external dependencies, how do you enforce that escalation processes are actually respected?
Specifically:
* required inputs are always provided
* ownership is clear
* escalations don’t rely on calls or tribal knowledge
Or does it still mostly depend on people chasing others on Slack?
Looking for real experiences, not theoretical frameworks.
https://redd.it/1pynic2
@r_devops
How to combine two different frameworks in one DevOps setup
OK guys, I know this might not make much sense:
1) English is not my first language.
2) I am not a DevOps professional, just practicing.
I want to set up a WordPress app to write blog posts (I already host one WordPress site on my EC2 instance, so I am a little familiar with WordPress), and I have another app as a side project. I want to set up a CI/CD pipeline for the side project and post its progress on the blog. Where I am struggling is:
1) WordPress is written in PHP, a different framework; my side project is written in Java with Spring Boot. Is it common to have two different frameworks interact?
2) I want to keep my WordPress container up at all times; would that cost too much?
3) Does it make sense to host my WordPress as a container?
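On question 1: mixing a PHP app and a Spring Boot app is common, because they talk over HTTP rather than sharing a framework. For example, a CI job for the side project could publish a progress post through WordPress's REST API. A rough Python sketch, assuming the WordPress site has application passwords enabled; the URL and credentials are placeholders:

```python
import base64
import json
import urllib.request

# Placeholders: your WordPress URL and an application-password credential.
WP_URL = "https://blog.example.com/wp-json/wp/v2/posts"
WP_USER = "author"
WP_APP_PASSWORD = "xxxx xxxx xxxx xxxx"

def publish_progress(title: str, body: str) -> int:
    """Create a WordPress post over the REST API; returns the new post ID."""
    payload = json.dumps({"title": title, "content": body, "status": "publish"}).encode()
    token = base64.b64encode(f"{WP_USER}:{WP_APP_PASSWORD}".encode()).decode()
    req = urllib.request.Request(
        WP_URL,
        data=payload,
        headers={"Content-Type": "application/json", "Authorization": f"Basic {token}"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["id"]

if __name__ == "__main__":
    post_id = publish_progress("CI run passed", "Pipeline built, tested and deployed the Spring Boot app.")
    print("created post", post_id)
```

The same call could just as easily be made from the Spring Boot app or from a pipeline step; the integration point is the HTTP API, not the framework.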
https://redd.it/1pytdp5
@r_devops
How much code are you writing daily?
What is the DevOps workflow like? Are you always writing automation scripts, or is a large chunk of it reviewing others' scripts? How much of the job is actually writing scripts? And what is the best advice you can give me on becoming a DevOps engineer? What do you feel you really need to understand to make it in the field?
https://redd.it/1pyx2x1
@r_devops
DevOps free courses
Can you guys recommend free courses? I know DevOps involves plenty of tools and skills; please recommend good Docker, Terraform, etc. courses. Thanks.
https://redd.it/1pyyyzz
@r_devops
Thinking About a DevOps Career in 2026? Focus on What Truly Counts
A lot of beginners jump straight into Docker and Kubernetes, only to feel overwhelmed a few weeks later. That confusion is normal. DevOps is not about memorizing a checklist of tools. It is about understanding systems, building the right habits, and introducing tools only when they actually solve a problem.
If I were starting from scratch in 2026, this is the approach I would follow.
# 1. Start With Strong Foundations
Before automating anything, you must understand what you are automating.
Spend time on:
* Linux fundamentals like file systems, processes, permissions, and services
* Networking basics such as IP addressing, DNS, HTTP/HTTPS, ports, routing, NAT, and firewalls
* Core system administration concepts including users, groups, packages, and logs
* Bash scripting for day-to-day automation
* Basic Python for tasks like API calls, log parsing, and simple automation
If you cannot clearly explain what happens when you run a `curl` command or why a service fails to start, advanced tools will only add confusion later.
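A quick self-test for this stage is reproducing, in a few lines, what `curl` and a log grep actually do. A small sketch (the URL and log path are placeholders):

```python
import re
import urllib.request

# Roughly what `curl https://example.com/` does: DNS lookup, TCP, TLS, HTTP GET.
with urllib.request.urlopen("https://example.com/", timeout=5) as resp:
    print(resp.status, resp.headers.get("Content-Type"))

# Basic log parsing: count 5xx responses in an access log (path is a placeholder).
pattern = re.compile(r'" (5\d{2}) ')
errors = 0
with open("/var/log/nginx/access.log") as log:
    for line in log:
        if pattern.search(line):
            errors += 1
print(f"5xx responses: {errors}")
```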
# 2. Git and CI/CD Are Non-Negotiable
Version control sits at the heart of DevOps. Get comfortable with Git concepts like branching, pull requests, merges, and conflict resolution.
After that, move into CI/CD. Tools may vary, but the concepts stay the same:
* Jenkins
* GitLab CI
* GitHub Actions
* CircleCI
Do not treat pipelines as magic buttons. Learn how stages work, how tests run, how artifacts are created, and what a proper rollback looks like when something breaks.
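A pipeline stage is ultimately just a script whose exit code gates the next stage. A hedged example of what one such step might look like, here a checksum check before promoting an artifact (the script name and arguments are illustrative):

```python
import hashlib
import sys

def sha256_of(path: str) -> str:
    """Stream the file so large artifacts don't load fully into memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

if __name__ == "__main__":
    # Example pipeline usage: python verify_artifact.py app.tar.gz <expected-sha256>
    artifact, expected = sys.argv[1], sys.argv[2]
    actual = sha256_of(artifact)
    if actual != expected:
        print(f"checksum mismatch: {actual} != {expected}", file=sys.stderr)
        sys.exit(1)  # a non-zero exit fails the stage and blocks promotion
    print("artifact verified")
```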
# 3. Containers, Then Orchestration
Containers matter, but timing matters too.
Start with Docker:
* Write Dockerfiles
* Understand images and layers
* Use volumes and networks
* Run multi-service setups with Docker Compose
Only once this feels natural should you move to Kubernetes. Take it slow and focus on:
* Pods, deployments, and services
* ConfigMaps and secrets
* Scaling and rolling updates
* Ingress and service discovery
You should also get familiar with managed Kubernetes platforms like EKS, AKS, or GKE.
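The Docker items above boil down to a handful of CLI commands, and driving them from a script makes the workflow repeatable. A minimal sketch, assuming a Dockerfile in the current directory and an app listening on port 8080 (both assumptions):

```python
import subprocess

IMAGE = "demo-app:dev"  # illustrative image tag

def run(cmd: list[str]) -> None:
    """Run a command and fail loudly, the same way a CI step would."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    # Build an image from the Dockerfile in the current directory.
    run(["docker", "build", "-t", IMAGE, "."])
    # Run it detached, publishing container port 8080 on the host; --rm cleans up on stop.
    run(["docker", "run", "--rm", "-d", "-p", "8080:8080", "--name", "demo-app", IMAGE])
    # Inspect what is running, then stop it.
    run(["docker", "ps", "--filter", "name=demo-app"])
    run(["docker", "stop", "demo-app"])
```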
# 4. Cloud Knowledge Is Mandatory
Pick one cloud provider and go deep. AWS is common, but Azure or GCP are equally valid depending on your region.
Core areas to learn:
* Compute services
* Virtual networking and security boundaries
* Object and block storage
* Identity and access management with least-privilege principles
Once comfortable, practice deploying containerized or Kubernetes workloads in the cloud.
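As a concrete first step on AWS, the sketch below checks which identity your credentials resolve to and lists the object storage buckets that identity can see. It assumes boto3 is installed and credentials are already configured locally:

```python
import boto3  # assumes `pip install boto3` and credentials via env vars or ~/.aws

# Who am I? Useful for confirming which account and role your credentials resolve to.
identity = boto3.client("sts").get_caller_identity()
print("Account:", identity["Account"], "ARN:", identity["Arn"])

# Object storage: list the buckets visible to this identity.
s3 = boto3.client("s3")
for bucket in s3.list_buckets()["Buckets"]:
    print("bucket:", bucket["Name"])
```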
# 5. Infrastructure as Code
Manual cloud setups do not scale. Infrastructure must be repeatable and version-controlled.
Terraform is a solid starting point. Learn how to:
* Define resources using code
* Use variables and modules properly
* Apply and destroy infrastructure safely
* Manage and secure remote state
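"Apply safely" mostly means reviewing a saved plan and then applying exactly that plan. A hedged wrapper sketch around the Terraform CLI (the confirmation prompt is one possible safeguard, not the only one):

```python
import subprocess
import sys

def tf(*args: str) -> None:
    """Run a terraform subcommand and stop on the first failure."""
    cmd = ["terraform", *args]
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    tf("init", "-input=false")
    tf("plan", "-out=tfplan")              # save the plan so apply runs exactly what was reviewed
    if input("Apply this plan? [y/N] ").lower() != "y":
        sys.exit("aborted without applying")
    tf("apply", "-input=false", "tfplan")  # apply only the saved plan, never a fresh one
```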
# 6. Observability: Metrics, Logs, Alerts
If you cannot see failures, you cannot operate systems.
Get practical experience with:
* Metrics using Prometheus and Grafana
* Centralized logging with tools like the ELK stack
* Cloud-native monitoring solutions such as CloudWatch
Understanding what “healthy” looks like is just as important as knowing when things break.
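On the metrics side, instrumenting an app is often just a few lines with the official Prometheus client library. A minimal sketch, assuming `prometheus_client` is installed; the metric names and port are illustrative:

```python
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

# Metric names are illustrative; Prometheus would scrape http://host:8000/metrics.
REQUESTS = Counter("app_requests_total", "Total requests handled")
LATENCY = Histogram("app_request_seconds", "Request latency in seconds")

def handle_request() -> None:
    with LATENCY.time():                 # record how long the block takes
        time.sleep(random.uniform(0.01, 0.2))
    REQUESTS.inc()

if __name__ == "__main__":
    start_http_server(8000)              # exposes /metrics for Prometheus to scrape
    while True:
        handle_request()
```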
# 7. Security as a Default Practice
Security is no longer optional in DevOps.
Learn the basics of:
* Vulnerability scanning for code and containers
* Secure secret management
* Hardening Docker images
* Applying IAM best practices
These skills naturally lead into DevSecOps responsibilities.
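A simple way to get hands-on with secret hygiene is a scanner that flags obvious credential shapes before they land in a repo. A rough sketch (the patterns are examples, not a complete list; dedicated tools like gitleaks or trufflehog go much further):

```python
import pathlib
import re

# Patterns for a couple of well-known credential shapes; extend as needed.
PATTERNS = {
    "AWS access key ID": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private key header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def scan(root: str = ".") -> int:
    hits = 0
    for path in pathlib.Path(root).rglob("*"):
        if not path.is_file() or ".git" in path.parts:
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for name, pattern in PATTERNS.items():
            if pattern.search(text):
                print(f"{path}: possible {name}")
                hits += 1
    return hits

if __name__ == "__main__":
    # Non-zero exit so a CI stage can block the merge when something is found.
    raise SystemExit(1 if scan() else 0)
```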
# 8. Build End-to-End Projects
Tutorials help, but real learning happens when you build something complete.
Good project ideas include:
* A microservice-based application using Docker
* A full CI/CD flow from commit to cloud deployment
* Infrastructure provisioning using Terraform
* Monitoring and logging integrated into the system
Document everything clearly in GitHub so others can understand your decisions.
# 9. Learn With the Community
DevOps is collaborative by nature. Learning in isolation slows you down.
Join DevOps communities like:
* Reddit (r/devops, r/kubernetes, r/aws, r/sre)
* CNCF Slack channels
* DevOps Discord servers
* Local meetups or conferences
* Online tech communities oriented toward cloud and DevOps (hexplain.space)
# 10. Be Consistent, Not Overwhelmed
DevOps is a long-term journey. Tools will change, fundamentals will not.
If you dedicate a few focused hours each week and build your skills layer by layer, becoming job-ready within several months is realistic. The key is patience, consistency, and learning with purpose.
Join the conversation, stay curious, and keep building.
https://redd.it/1pzalqb
@r_devops
Supply chain feels “unfinished” once things are live
We do all the right things at build time, but I’ve still seen dependencies behave oddly once they’re under real traffic.
It made me realize how much we assume build-time checks are enough.
How are others thinking about this after deployment?
https://redd.it/1pz9xn9
@r_devops
CKAD exam pricing confusion: KodeKloud vs Linux Foundation
I recently purchased CKAD via KodeKloud.
For my other four Kubernetes certifications, I bought the exams directly from the Linux Foundation, but this time KodeKloud was offering 55% off for annual subscribers.
https://preview.redd.it/88jby84yo9ag1.png?width=1386&format=png&auto=webp&s=7d94cdcacfd9db0e6f1fced2aca6ddbd500a36b3
The main reason I purchased the annual subscription was to use this discount when needed. After applying it, I paid ₹20.5k INR (including taxes).
Once I redeemed the voucher, it showed:
>
That was fine with me, as I was confident I wouldn't need a retake.
However, today I accidentally landed on this Linux Foundation page:
https://trainingportal.linuxfoundation.org/learn/course/certified-kubernetes-application-developer-single-attempt-ckad-single/exam/exam
It lists the same CKAD single-attempt exam for $140 (~₹12–12.5k INR).
https://preview.redd.it/zx7tz4u0p9ag1.png?width=1391&format=png&auto=webp&s=04a80c160758b3dd3eafdcd2ac002de7600b51fe
Same exam.
Same attempt type.
Different platforms. Very different prices.
Am I missing something here or is this just confusing / misleading discount framing?
Posting this to understand better and to help others make an informed choice.
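For a rough sense of the gap using only the numbers in the post (the INR/USD rate is an assumption chosen to match the poster's own ~₹12–12.5k conversion, and this ignores whatever the annual subscription itself is worth):

```python
# Rough comparison using the figures from the post; the exchange rate is an assumption.
inr_per_usd = 87.0
kodekloud_paid_inr = 20_500
lf_direct_usd = 140

lf_direct_inr = lf_direct_usd * inr_per_usd
premium = kodekloud_paid_inr / lf_direct_inr - 1
print(f"Direct Linux Foundation price: ~₹{lf_direct_inr:,.0f}")
print(f"Extra paid via the bundle:     ~{premium:.0%}")
# Roughly ₹12,200 direct, so the discounted bundle price is still about 65-70% higher.
```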
https://redd.it/1pz89eo
@r_devops
The hardest incidents to explain are the quiet ones
Some of the hardest security incidents I’ve been part of weren’t dramatic. No outages, no obvious alerts, nothing screaming for attention.
Just small things that didn’t line up in hindsight.
How do you all validate concerns when there’s no clear signal yet?
https://redd.it/1pzcww5
@r_devops
zsh-doppler - ZSH plugin to show Doppler project/config in your prompt
I work with a lot of Doppler projects and got tired of running doppler setup / configure to remember which env I was in. So I made a simple plugin that shows [project/config] in your prompt.
Colors change based on environment - green for dev, yellow for staging, red for prod. Helps avoid that "oh shit" moment when you realize you were in prod.
Works with Oh My Zsh, Powerlevel10k, zinit, etc.
https://github.com/lsdcapital/zsh-doppler
Contributions welcome; happy to help debug and improve it based on feedback.
https://redd.it/1pzdt6a
@r_devops
Terraform's dependency on github.com - what are your thoughts?
Hi all,
About two weeks ago (December 18th), github.com's reachability was affected by an issue on their side.
See -> https://www.githubstatus.com/incidents/xntfc1fz5rfb
We needed to do maintenance that very day. All of our Terraform providers were defined with the defaults ("go get it from GitHub") and we didn't have any Terraform caching active.
We had to run some Terraform scripts multiple times to get lucky and not hit a 500/503 from GitHub while downloading the providers. In the end we succeeded, but it took a lot more time than anticipated.
We have now moved all of our Terraform providers to a locally hosted location:
some tuning of .terraformrc, plus some extras in our CI/CD pipeline for running Terraform.
Altogether a nice project to put together; it forces you to think about which providers you are actually using and exactly which versions you need.
But it also creates another technical nook in our infrastructure. For example, when we want to bump one of the provider versions, we need to perform additional tasks.
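For anyone wanting to try the same workaround: `terraform providers mirror` can pre-populate a local directory that a `filesystem_mirror` block in `.terraformrc` then points at. A hedged sketch of the refresh step (the mirror path is illustrative):

```python
import pathlib
import subprocess

# Illustrative path; the mirror would live wherever your CI runners can reach it.
MIRROR_DIR = pathlib.Path("/opt/terraform-provider-mirror")

def refresh_mirror(workdir: str) -> None:
    """Download every provider the configuration needs into the local mirror.

    A filesystem_mirror entry in .terraformrc can then point at MIRROR_DIR so
    plans and applies no longer reach registry.terraform.io or github.com at run time.
    """
    MIRROR_DIR.mkdir(parents=True, exist_ok=True)
    subprocess.run(
        ["terraform", "providers", "mirror", str(MIRROR_DIR)],
        cwd=workdir,
        check=True,
    )

if __name__ == "__main__":
    refresh_mirror(".")  # run from the root module whose providers you want cached
```

The "additional tasks" when bumping a provider version are then essentially re-running this refresh as part of the version change.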
What are your thoughts about this? Some services are treated like they are the light and water of the internet. They are always there (GitHub / Docker Hub / Cloudflare) - until they are not, and recently we have noticed a lot of the latter.
One thought is that this doesn't happen that often and these companies have top-of-the-line infra plus expertise, so the workaround isn't worth it unless you are running infra for a hospital or a bank.
The other, more personal thought is that I like the disruptive nature of these incidents: they encourage you to think past the assumption that some tech building blocks are too big to fail,
and they plant the doubt that it may not be so wise for everybody to stick to the same golden standards from the big seven in Silicon Valley.
Tell me!?
https://redd.it/1pzfe7e
@r_devops
Kubernetes concepts in 60 seconds
Trying an experiment: explaining Kubernetes concepts in under 60 seconds.
Would love feedback.
Check out the videos on YouTube
https://youtube.com/@soulmaniqbal?si=pZCVwXQizNQXFzv1
https://redd.it/1pzfsir
@r_devops
qa tests blocking deploys 6 times today, averaging 40min per run
our pipeline is killing productivity. we've got this selenium test suite with about 650 tests that runs on every pr and it's become everyone's least favorite part of the day.
takes 40 minutes on average, sometimes up to an hour. but the real problem is the flakiness. probably 8 to 12 tests fail on every single run, always different ones. devs have learned to just click rerun and grab coffee.
we're trying to ship multiple times per day but qa stage is the bottleneck. and nobody trusts the tests anymore because they've cried wolf so many times. when something actually fails everyone assumes it's just another selector issue.
tried parallelizing more but hit our ci runner limits. tried being smarter about what runs when but then we miss integration issues. feels like we're stuck between slow and unreliable.
anyone actually solved this problem? need tests that are fast, stable, and catch real bugs. starting to think the whole selector based approach is fundamentally flawed for complex modern webapps.
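One step many teams take before rethinking the whole approach is to measure flakiness and quarantine the offenders: any test that both passes and fails across recent runs gets pulled out of the blocking suite. A rough sketch that mines JUnit XML reports for those tests (the report path layout is an assumption):

```python
import glob
import xml.etree.ElementTree as ET
from collections import defaultdict

# Assumes each CI run leaves a JUnit XML report like reports/run-<n>.xml.
history = defaultdict(set)  # test name -> set of outcomes seen across runs

for report in glob.glob("reports/run-*.xml"):
    for case in ET.parse(report).getroot().iter("testcase"):
        name = f'{case.get("classname")}.{case.get("name")}'
        failed = case.find("failure") is not None or case.find("error") is not None
        history[name].add("fail" if failed else "pass")

flaky = sorted(name for name, outcomes in history.items() if outcomes == {"pass", "fail"})
print(f"{len(flaky)} tests both passed and failed across recent runs:")
for name in flaky:
    print("  quarantine candidate:", name)
```

That does not fix the root cause, but it restores trust in the blocking suite while the flaky tests are repaired or rewritten.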
https://redd.it/1pzgupz
@r_devops
Looking for help for my startup
Hey all!
I'm coming here to seek some guidance or help on how to tackle the next challenge at the startup I am creating.
We currently have various services that clients are using, and our next step is white-labeling a certain type of website.
Right now we operate this website, which runs from a monorepo with React and Next.js and is tightly coupled to an admin panel in a different repository.
The website requests data from the admin panel, including secrets at server boot (I did this so my future self could deploy multiple websites from the same codebase without a mess of secrets on GitHub). These secrets are pulled from the admin panel using a slug I assigned to my website. Ideally, other websites will use this same system in the future.
The problem (or challenge): what is the way to go in order to have multiple deployments happen every time we merge into the main branch? Currently I am using GitHub Actions, but that does not look sustainable to me once we have many white-labeled websites running out there.
It is also important to mention that each website will have its own external Supabase, an internal (self-hosted) Redis instance, and all of them will use our centralized Soketi (self-hosted Pusher alternative) service... So, ideally, the solution would include deploying that external Supabase (this is easy, APIs exist for that), a dedicated Redis, and a server to host the backend.
I have been a Software Engineer for the last 7-8 years but never really had to take care of DevOps / infra / you-name-it. I am really open to learning all of this; I have had multiple conversations with Claude, but I always prefer human-to-human information transfer.
Thank you!
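One pattern that keeps GitHub Actions sustainable for this is a single workflow with a per-site matrix, where an early job emits the list of slugs and later jobs fan out over it. A hedged sketch of the matrix-emitting step (the `sites.json` inventory and the slugs are placeholders; the list could equally come from the admin panel's API):

```python
import json
import os

# Placeholder inventory of white-label slugs; in practice this could be fetched
# from the admin panel instead of a file checked into the repo.
with open("sites.json") as f:
    slugs = json.load(f)  # e.g. ["acme", "globex", "initech"]

matrix = {"slug": slugs}

# Write the matrix to GITHUB_OUTPUT so a later job can consume it with
# strategy.matrix: ${{ fromJSON(needs.plan.outputs.matrix) }}
with open(os.environ["GITHUB_OUTPUT"], "a") as out:
    out.write(f"matrix={json.dumps(matrix)}\n")
print("deploy matrix:", matrix)
```

Adding a new white-labeled site then becomes a one-line change to the inventory rather than a new workflow file.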
https://redd.it/1pzjdwk
@r_devops