GCP Docs Misleading: AWS RDS Postgres → Cloud SQL Postgres migration doesn’t need Cloud SQL public IP (Configure connectivity using IP allowlists)
When migrating from **AWS RDS Postgres → GCP Cloud SQL Postgres** using **Database Migration Service (DMS)**, the official docs say you must:
* Enable the **Cloud SQL public IP**, and
* Add that Cloud SQL egress public IP to the **AWS RDS security group inbound rules**.
But in practice, this isn’t needed.
* You **don’t have to enable Cloud SQL public IP at all**.
* You only need to allow the **DMS service egress IP(s)** (for your region) in the AWS RDS security group inbound rules.
* With just that, the migration works fine.
This means the documentation is misleading and encourages users to unnecessarily expose Cloud SQL to the public internet, weakening security.
Docs reference: [Configure connectivity using IP allowlists](https://cloud.google.com/database-migration/docs/postgres/configure-connectivity-ip-allowlists)
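For reference, once you have the DMS outgoing IP for your region/migration job, allowing it on the RDS side is a single inbound rule. A sketch (the security group ID and IP below are placeholders, not real values):

```shell
# Placeholders: substitute your RDS security group ID and the outgoing IP
# shown by Database Migration Service for your migration job.
SG_ID="sg-0123456789abcdef0"
DMS_EGRESS_IP="203.0.113.10"   # example address (TEST-NET range), not a real DMS IP

aws ec2 authorize-security-group-ingress \
  --group-id "$SG_ID" \
  --protocol tcp \
  --port 5432 \
  --cidr "${DMS_EGRESS_IP}/32"
```

With that rule in place, no Cloud SQL public IP is involved at all.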
Last week, I was working on this migration and lost nearly four hours due to misleading documentation, only to realize that enabling the Cloud SQL public IP wasn’t necessary.
I feel like I’m doing more service for Google than many of their customer engineers. I’m essentially providing free feedback to help improve their documentation. Maybe I should be charging for it (just kidding; I genuinely love Google Cloud).
I have written an [article](https://medium.com/@rasvihostings/simplifying-aws-rds-to-google-cloud-sql-enterprise-migrations-navigating-documentation-challenges-af5914b55570) about it; check it out as well.
https://redd.it/1nm75jh
@r_devops
Trunk Based
Does anyone else find that dev teams within their org constantly complain and want feature branches or GitFlow?
When the real issue is that those teams are terrible at communication and coordination.
https://redd.it/1nm84la
@r_devops
Practical Terminal Commands Every DevOps Should Know
I put together a list of 17 practical terminal commands that save me time every day, from reusing arguments with `!$` and fixing typos with `^old^new` to debugging ports with `lsof`.
These aren’t your usual `ls` and `cd`, but small tricks that make you feel much faster at the terminal.
Here is the Link
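For anyone unfamiliar with the tricks mentioned, a quick sketch (note that `!$` and `^old^new` are bash history expansion and only work in an interactive shell):

```shell
mkdir -p /tmp/demo/logs
cd !$                # !$ expands to the last argument of the previous command: /tmp/demo/logs

cat error.log
^error^access        # reruns the previous command with "error" replaced: cat access.log

lsof -i :8080        # shows which process is listening on port 8080
```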
Curious to hear, what are your favorite hidden terminal commands?
https://redd.it/1nma0as
@r_devops
Medium
Practical Terminal Commands Every Developer Should Know
Most developers know the basics cd, ls, pwd, maybe even grep. But the terminal has a lot more under the hood. There are commands and…
Anyone here trying to deploy resources to Azure using Bicep and running Gitlab pipelines?
Hi everyone!
I am a Fullstack developer trying to learn CICD and configure pipelines. My workplace uses Gitlab with Azure and thus I am trying to learn this. I hope this is the right sub to post this.
I have managed to do it through App Registration, but that means I need to add AZURE_CLIENT_ID, AZURE_TENANT_ID and AZURE_CLIENT_SECRET environment variables in Gitlab.
Is this the right approach or can I use managed identities for this?
The problem I encounter with managed identities is that I need to specify a branch. Sure, I could configure it with my main branch, but how can I test the pipeline in a merge request? That would mean many different branches, and thus I would need to create a new managed identity for each one? That sounds ridiculous and not logical.
Am I missing something?
I want to accomplish the following workflow
1. Develop and deploy a Fullstack App (Frontend React - Backend .NET)
2. Deploy Infrastructure as Code with Bicep. I want to deploy my application from a Dockerfile and using Azure Container Registry and Azure container Apps
3. Run Gitlab CICD Pipelines on merge request and check if the pipeline succeeds
4. On merge request approved, run the pipeline in main
I have been trying to find tutorials, but most of them use Gitlab with AWS or Github. The articles I have tried to follow do not cover everything clearly.
The following pipeline worked, but notice how I have the global `before_script` and `image` so they are available to other jobs. Is this okay?
```yaml
stages:
  - validate
  - deploy

variables:
  RESOURCE_GROUP: my-group
  LOCATION: my-location

image: mcr.microsoft.com/azure-cli:latest

before_script:
  - echo $AZURE_TENANT_ID
  - echo $AZURE_CLIENT_ID
  - echo $AZURE_CLIENT_SECRET
  - az login --service-principal -u $AZURE_CLIENT_ID -t $AZURE_TENANT_ID --password $AZURE_CLIENT_SECRET
  - az account show
  - az bicep install

validate_azure:
  stage: validate
  script:
    - az bicep build --file main.bicep
    - ls -la
    - az deployment group validate --resource-group $RESOURCE_GROUP --template-file main.bicep --parameters @parameters.dev.json
  rules:
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"
    - if: $CI_COMMIT_BRANCH == "main"

deploy_to_dev:
  stage: deploy
  script:
    - az group create --name $RESOURCE_GROUP --location $LOCATION --only-show-errors
    - |
      az deployment group create \
        --resource-group $RESOURCE_GROUP \
        --template-file main.bicep \
        --parameters @parameters.dev.json
  environment:
    name: development
  rules:
    - if: $CI_COMMIT_BRANCH == "main"
      when: manual
```
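On the managed-identity question: a common alternative to storing a client secret is OIDC federation. GitLab can mint a short-lived ID token per job with the `id_tokens:` keyword, and `az login` accepts it via `--federated-token`. A minimal sketch (job and token names here are illustrative, and it assumes a federated credential on the app registration whose subject matches your project/branch; Azure matches subjects exactly, which is why a branch must be specified, so one workaround is to authenticate only on protected branches and keep MR pipelines validate-only):

```yaml
deploy_with_oidc:
  id_tokens:
    GITLAB_OIDC_TOKEN:
      aud: api://AzureADTokenExchange
  script:
    - az login --service-principal -u "$AZURE_CLIENT_ID" -t "$AZURE_TENANT_ID" --federated-token "$GITLAB_OIDC_TOKEN"
    - az account show
```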
Would really appreciate feedback and thoughts about the code.
Thanks a lot!
https://redd.it/1nm7nuw
@r_devops
Automate SQL Query
Right now in my company, the process for running SQL queries is still very manual. An SDE writes a query in a post/thread, then DevOps (or Sysadmin) needs to:
1. Review the query
2. Run it on the database
3. Check the output to make sure no confidential data is exposed
4. Share the sanitized result back to the SDE
We keep it manual because we want to ensure that any shared data is confidential and that queries are reviewed before execution. The downside is that this slows things down, and my manager recently disapproved of continuing with such a manual approach.
I’m wondering:
* What kind of DevOps/data engineering tools are best suited for this workflow?
* Ideally: SDE can create a query, DevOps reviews/approves, and then the query runs in a safe environment with proper logging.
* Bonus if the system can enforce **read-only vs. write queries** differently.
Has anyone here set up something like this? Would you recommend GitHub PR + CI/CD, Airflow with manual triggers, or building a custom internal tool?
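On enforcing read-only vs. write queries differently: a crude sketch of a keyword gate (illustrative only; a real setup should pair this with a read-only database role or replica, since keyword filtering alone is easy to bypass):

```shell
# Crude read-only gate (sketch): reject statements containing write/DDL keywords.
# Defense in depth still matters: run approved queries under a read-only DB role.
is_read_only() {
  # returns 0 (allowed) only if the query has no write/DDL keywords
  ! printf '%s' "$1" | grep -qiE '\b(insert|update|delete|drop|alter|truncate|grant|create)\b'
}

is_read_only "SELECT id, email FROM users LIMIT 10" && echo "allowed"   # prints "allowed"
is_read_only "DELETE FROM users" || echo "rejected"                     # prints "rejected"
```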
https://redd.it/1nmc7ti
@r_devops
What's the best route for communicating/transferring data from Azure to AWS?
The situation: I have been given a requirement from one of our big vendors: their data needs to be located in Azure's ecosystem, primarily in Azure Database for PostgreSQL. That part is simple, but the kicker is that they need consistent communication from AWS to Azure and back to AWS, with the data living in Azure.
The problem: We use AWS EKS to host all our apps and databases here where our vendors don't give a damn where we host their data.
The resolution: Is my resolution correct in creating a Site-to-Site VPN, so I can have communication tunneled securely from AWS to Azure and back? I have also read blogs implementing AWS DMS with Azure's agent, where I set up a standalone Aurora RDS db in AWS to send data daily to an Aurora RDS db. Unsure what's the best and most cost-effective solution when it comes to data.
More than likely I will need to do this for Google as well where their data needs to reside in GCP :'(
https://redd.it/1nmd2up
@r_devops
What are some things that are extremely useful that can be done with minimal effort?
What are some things that are extremely useful that can be done with minimal effort? I am trying to see if there are things I can do to help my team work faster and more efficiently.
https://redd.it/1nmei60
@r_devops
Which title should I use on LinkedIn and job applications?
I have about 5 months of intern experience as a Web Developer and 2 years (ongoing) at a startup. They gave me the title SRE Tech Lead, but I was really just the first person doing DevOps/SRE there.
Here’s what I worked on:
CI/CD pipelines
Infrastructure (console + Terraform)
Monitoring, alerting, on-call
Code reviews
Some backend development
Troubleshooting production issues
IAM/roles/workspace management
Cloud cost optimization
I basically own all of our infra and repos. My work is fine, though not always “best practices.”
The issue: I don’t feel like I’m really at a “Tech Lead” level. I’m worried it’ll sound inflated if I put that on my resume. I’m currently leaning toward DevOps and SRE Engineer.
What do you think is the best way to frame my experience?
https://redd.it/1nmen6n
@r_devops
Need Guidance/Advice in Fake internship (Please Help, Don't ignore)
Hi Everyone,
I hope you all are doing well. I just completed my 2 DevOps projects, and I also completed a course and got a certification.
As we all know, getting an entry into DevOps is hard, so I am thinking of showing a fake internship (I know it's wrong, but sometimes we need to take such decisions). Could you please help with what I can mention in my resume about the internship?
Please don't ignore
your suggestions will really help me!!
https://redd.it/1nmn3mw
@r_devops
PSA: Consider EBS snapshots over Jenkins backup plugins
TL;DR: Moved from ThinBackup plugin to EBS snapshots + Lambda automation. Faster recovery, lower maintenance overhead, ~$2/month. CloudFormation template available.
The Plugin Backup Challenge
Many Jenkins setups I've encountered follow this pattern:
ThinBackup or similar plugin installed
Scheduled backups to local storage
Backup monitoring often neglected
Recovery procedures untested
Common issues with this approach:
Dependency on the host system - local backups don't help if the instance fails
Incomplete system state - captures Jenkins config but misses OS-level dependencies
Plugin maintenance overhead - updates occasionally break backup workflows
Recovery complexity - restoring from file-based backups requires multiple manual steps
Infrastructure-Level Alternative
Since Jenkins typically runs on EC2 with EBS storage, why not leverage EBS snapshots for complete system backup?
Implementation Overview Created a CloudFormation stack that:
Lambda function discovers EBS volumes attached to Jenkins instance
Creates daily snapshots with retention policy
Tags snapshots appropriately for cost tracking
Sends notifications on success/failure
Includes cleanup automation
Cost Comparison: Plugin approach: time spent on maintenance + storage costs. EBS approach: ~$1-3/month for incremental snapshots + minimal setup time.
Recovery Experience: Had to test this recently when a system update caused issues. Process was:
1. Identify appropriate snapshot (2 minutes)
2. Launch new instance from snapshot (5 minutes)
3. Update DNS/load balancer (1 minute)
4. Verify Jenkins functionality (2 minutes)
Total: ~10 minutes to fully operational state with complete history intact.
Why This Approach Works
Complete system recovery: OS, installed packages, Jenkins state, everything
Point-in-time consistency: EBS snapshots are atomic
AWS-native solution: Uses proven infrastructure services
Low maintenance: Automated with proper error handling
Scalable: Easy to extend for cross-region disaster recovery
Implementation Details: The solution handles:
Multi-volume instances automatically
Configurable retention policies
IAM roles with minimal required permissions
CloudWatch metrics for monitoring
Optional cross-region replication
Implementation (GitHub): [https://github.com/HeinanCA/automatic-jenkinser](https://github.com/HeinanCA/automatic-jenkinser)
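For context, the core call behind this kind of automation is small; a hedged sketch of the per-volume step (the volume ID is a placeholder, and this is not code from the linked repo):

```shell
# Create a tagged snapshot of one EBS volume (vol-0abc... is a placeholder)
aws ec2 create-snapshot \
  --volume-id vol-0abc1234def567890 \
  --description "jenkins-daily-$(date +%F)" \
  --tag-specifications 'ResourceType=snapshot,Tags=[{Key=app,Value=jenkins},{Key=retention,Value=7d}]'
```

The Lambda in the repo wraps this with volume discovery, retention cleanup, and notifications.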
Discussion Points
How are others handling Jenkins backup/recovery?
Any experience with infrastructure-layer vs application-layer backup approaches?
What other services might benefit from this pattern?
Note: This pattern applies beyond Jenkins - any service running on EBS can use similar approaches (GitLab, databases, application servers, etc.).
https://redd.it/1nmov15
@r_devops
What’s your go-to deployment setup these days?
I’m curious how different teams are handling deployments right now. Some folks are all-in on GitOps with ArgoCD or Flux, others keep it simple with Helm charts, plain manifests, or even homegrown scripts.
What’s working best for you? And what trade-offs have you run into (simplicity, speed, control, security, etc.)?
https://redd.it/1nn0l1t
@r_devops
I built a lightweight Go-based CI/CD tool for hacking on projects without setting up tons of infra
Hi All,
I’ve been experimenting with a simple problem, I wanted to use Claude Code to generate code from GitHub issues, and then quickly deploy those changes from a PR on my laptop so I could view them remotely — even when I’m away, by tunneling in over Tailscale.
Instead of setting up a full CI/CD stack with runners, servers, and cloud infra, I wrote a small tool in Go: [**gocd**](https://github.com/simonjcarr/gocd).
The idea
* No heavy infrastructure setup required
* Run it directly on your dev machine (or anywhere)
* Hook into GitHub issues + PRs to automate builds/deploys
* Great for solo devs or small experiments where spinning up GitHub Actions / Jenkins / GitLab CI feels like overkill
For me, it’s been a way to keep iterating quickly on side projects without dragging in too much tooling. But I’d love to hear from others:
* Would something like this be useful in your dev setup?
* What features would make it more valuable?
* Are there pain points in your current CI/CD workflows that a lightweight approach could help with?
Repo: [https://github.com/simonjcarr/gocd](https://github.com/simonjcarr/gocd)
Would really appreciate any feedback or ideas — I want to evolve this into something genuinely useful for folks who don’t need (or want) a huge CI/CD system just to test and deploy their work.
https://redd.it/1nn3u4x
@r_devops
Hey folks, if you are struggling to get a job right now, try starting something out. I created a DevOps-related side hustle 2 years ago that brings me 2-3k per month.
Hey Folks,
About 2 years ago I started a side hustle. I didn't make it big, but still it is beyond what I ever hoped.
I had some spare time during evenings and wanted to build something where I could use my DevOps skills and SWE. I wanted to get better at writing Go code, and my thought process was that if it didn't work out I would walk away with real experience in areas I hadn't worked in, and if it did, then even better.
Initially I had a few ideas, but after a while looking at posts and what people are interested in, I settled on a final idea and coded the first draft.
If you don't already have something you want to build, look around Reddit, use filters and check top/popular posts, see what people are missing or actively upvoting.
I signed up for Microsoft for Startups and they gave me $25k in credits (more than I expected). You too could request AWS, Azure or Google Cloud credits. Quick advice: GCP has the least favorable terms, so I'd avoid them altogether. So you don't need to pay out of your own pocket initially; it's quite easy to start off with free credits.
Another important factor: if you try something, publish it ASAP; don't build it for months. My first draft was a basic static website with 30 questions from Google's SRE interview. If people see value in what you do, they will like it regardless of all the bells and whistles, and if they don't, you've saved yourself a lot of time.
Don't shy away from accepting money early on. We were surprised when some folks paid us $5 for our broken site; we even refunded it, since in our minds we were just testing out Stripe and weren't planning to accept payments. The lesson from this: put up a Stripe checkout even at a symbolic price of a few bucks. If people like and support your work, those contributions will give you the motivation to put in extra work and deliver a better product.
Finally, I want to say a few words about what I've built. It is a platform with real DevOps interview questions from companies like Google, Meta, Amazon, etc., plus hands-on live environments. It's called prepare.sh.
We are making final changes to open-source, under the MIT License, the backend Kubernetes controllers that let us run thousands of isolated ephemeral environments inside a single cluster (basically what powers our website), so others can use them too, but I will keep that for another post.
https://redd.it/1nn4s9e
@r_devops
Hey Folks,
About 2 years ago I started a side hustle. I didn't make it big, but still it is beyond what I ever hoped.
I had some spare time during evenings and wanted to build something where I could use my DevOps skills and SWE. I wanted to get better at writing Go code, and my thought process was that if it didn't work out I would walk away with real experience in areas I hadn't worked in, and if it did, then even better.
Initially I had a few ideas, but after a while looking at posts and what people are interested in, I settled on a final idea and coded the first draft.
If you don't already have something you want to build, look around Reddit, use filters and check top/popular posts, see what people are missing or actively upvoting.
I signed up for Microsoft for Startups and they gave me $25k in credits (more than I expected). You too could request AWS, Azure or Google Cloud credits. Quick advice: GCP has the least favorable terms, so I'd avoid them altogether. You don't need to pay out of pocket initially; it's quite easy to start off with free credits.
Another important factor: if you try something, publish it asap, don't build it for months. My first draft was a basic static website with 30 questions from Google's SRE interview. If people see value in what you do, they will like it regardless of all the bells and whistles, and if they don't, you've saved yourself a lot of time.
Don't shy away from accepting money early on. We were surprised when some folks paid us $5 for our broken site; we even refunded it, since in our minds we were just testing out Stripe and weren't planning to accept payments. The lesson: put up a Stripe checkout even at a symbolic price of a few bucks. If people like and support your work, those contributions will motivate you to put in extra work and deliver a better product.
Finally, I want to say a few words about what I've built. It's a platform with real DevOps interview questions from companies like Google, Meta, Amazon, etc. and hands-on live environments. It's called prepare.sh
We are making final changes to open-source, under the MIT License, the backend Kubernetes controller that lets us run thousands of isolated ephemeral environments inside a single cluster (basically what powers our website) so others can use it too, but I'll keep that for another post.
https://redd.it/1nn4s9e
@r_devops
[CTO / Founding Engineer] React Native + Voice AI (EN/ES) — Equity
Demo Video Below
[https://drive.google.com/file/d/15qr4JYBfnqjXpkli0LJAX-qio7fEL8kz/view?usp=drivesdk](https://drive.google.com/file/d/15qr4JYBfnqjXpkli0LJAX-qio7fEL8kz/view?usp=drivesdk)
**Seeking an equity cofounder CTO to own the React Native app. Repo:** [**https://github.com/romer288/Tranquiloo-App.git**](https://github.com/romer288/Tranquiloo-App.git)**.**
DM me with links to shipped RN apps (voice/audio), a brief note on EN/ES handling + fallback design, and your availability.
**What is Tranquiloo?**
* **24/7 companion, therapist-friendly cadence:** patients check in and get evidence-based support; therapists get **structured insight, not notifications**.
* **The quiet layer between visits:** patients capture what matters and get help in the moment; therapists receive **clear, at-a-glance progress**—no pager vibes.
* **Always on for patients, never “always on” for therapists:** quick logs and coping help for clients; curated summaries when you choose to review.
* **Tranquiloo keeps care moving between sessions**—simple check-ins and smart coping for patients; **concise, right-time insights** for therapists (boundaries respected).
* **24/7 support for patients, therapist-paced for you:** lightning-fast check-ins + coping tools for clients; focused, digestible snapshots for clinicians—**no off-hours interruptions**.
* **An always-there companion for the moments between sessions:** patients log feelings and get instant guidance; therapists see clean weekly insights—**not late-night messages**.
* **Tranquiloo is your 24/7 calm in-between sessions**—patients get quick check-ins and real coping support; therapists get clear, digestible insights **on their schedule** (zero after-hours pings).
https://redd.it/1nn5t0s
@r_devops
AWS Cloud Associate (Solutions Architect Associate, Developer Associate, SysOps, Data Engineer Associate, Machine Learning Associate) Vouchers Available
Hi all,
I have AWS Associate vouchers available. If anyone needs one, DM me.
AWS Certified Solutions Architect - Associate
AWS Certified Developer - Associate
AWS Certified SysOps Administrator - Associate
AWS Certified Data Engineer - Associate
AWS Certified Machine Learning Engineer - Associate
https://redd.it/1nnd3be
@r_devops
How do you juggle multiple API versions in testing?
I’m running into headaches when dealing with multiple API versions across environments (staging vs production vs legacy). Some tools now let you import/export data by version and even configure different security schemes.
Do most teams here handle versioning in their gateway setup, or directly inside their testing/debugging tool?
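One common pattern, regardless of whether versioning lives in the gateway or the testing tool, is to parameterize the base URL and API version per environment so tests never hardcode either. A minimal sketch (environment names, URLs, and versions below are invented for illustration):

```python
# Map each environment to its base URL and API version so tests
# can target staging, production, or legacy from one code path.
ENVIRONMENTS = {
    "staging":    {"base_url": "https://staging.api.example.com", "version": "v2"},
    "production": {"base_url": "https://api.example.com",         "version": "v2"},
    "legacy":     {"base_url": "https://legacy.api.example.com",  "version": "v1"},
}

def endpoint(env: str, path: str) -> str:
    """Build a versioned endpoint URL for the given environment."""
    cfg = ENVIRONMENTS[env]
    return f"{cfg['base_url']}/{cfg['version']}/{path.lstrip('/')}"

print(endpoint("staging", "/users"))  # https://staging.api.example.com/v2/users
print(endpoint("legacy", "users"))    # https://legacy.api.example.com/v1/users
```

The same table can carry per-environment auth schemes, so a test suite picks everything from one environment key (e.g. an env var) instead of duplicating collections per version.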
https://redd.it/1nnfhfx
@r_devops
How to handle this dedicated vm scenario ?
Pipeline runs fail because the required tools aren't installed on the agent.
All agents are ephemeral: fire and forget.
So I need a stateful, dedicated agent with these tools installed.
Required tools = Unity software
Is it a good idea to get a dedicated VM and install these tools on it so I can use that?
I'd like to hear from experts whether there's anything I should be careful about.
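Whichever route you take, a cheap safeguard is a pre-flight step that verifies the required tools exist on whatever agent picked up the job, so a misconfigured or rebuilt VM fails fast with a clear message instead of mid-build. A minimal sketch (the tool names are placeholders; substitute the actual Unity binaries you depend on):

```python
import shutil

# Placeholder list of binaries the build expects on the agent's PATH.
REQUIRED_TOOLS = ["unity-editor", "git"]

def missing_tools(tools):
    """Return the subset of tools not found on the agent's PATH."""
    return [t for t in tools if shutil.which(t) is None]

missing = missing_tools(REQUIRED_TOOLS)
if missing:
    # In a real pipeline step, exit non-zero here to fail the job early.
    print("Agent is missing required tools:", ", ".join(missing))
else:
    print("All required tools present.")
```

Running this as the first pipeline step on the dedicated VM also doubles as drift detection: if someone rebuilds or patches the VM and a tool disappears, the next run tells you immediately.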
https://redd.it/1nnhr8c
@r_devops
Docker projects for beginners
I was recently hired at a tech company as an intern and have spent the past half month reading Docker tutorials. In your opinion, what are some good projects for learning these technologies? I have done some exercises on KodeKloud, but the fact that the answer is implied in the text rather than hidden behind a button makes me feel that I don't actually solve the problems myself.
https://redd.it/1nnhm5o
@r_devops
How do you integrate compliance checks into your CI/CD pipeline?
Trying to shift compliance left. We want to automate evidence gathering for certain controls (e.g., ensuring a cloud config is compliant at deploy time). Does anyone hook their GRC or compliance tool into their pipeline? What tools are even API-friendly enough for this?
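Even before wiring in a dedicated GRC tool, the core pattern is small: express each control as a predicate over the deploy-time config, run them in a pipeline step, and keep the results as evidence. A minimal sketch (control names and config keys here are invented for illustration, not from any specific tool):

```python
# Each control is a predicate over the cloud config dict; True = compliant.
CONTROLS = {
    "encryption_at_rest": lambda cfg: cfg.get("storage", {}).get("encrypted") is True,
    "no_public_access":   lambda cfg: cfg.get("network", {}).get("public_access") is False,
}

def evaluate(config: dict) -> dict:
    """Run every control against the config and return a name -> pass/fail map."""
    return {name: check(config) for name, check in CONTROLS.items()}

config = {"storage": {"encrypted": True}, "network": {"public_access": True}}
results = evaluate(config)
print(results)  # {'encryption_at_rest': True, 'no_public_access': False}

failed = [name for name, ok in results.items() if not ok]
if failed:
    # In a pipeline, exit non-zero here and attach `results` as evidence.
    print("Non-compliant controls:", failed)
```

Policy engines like Open Policy Agent generalize this idea (rules evaluated against structured config, with machine-readable results you can archive as evidence), which is what makes them pipeline-friendly.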
https://redd.it/1nngzgl
@r_devops
Can I become a DevOps Engineer if my background is only IT Support (hardware & OS installation)?
Hey everyone,
I’m currently working in IT support, mainly handling hardware and OS installation/troubleshooting. I don’t have much experience in coding or advanced system administration yet, but I really want to transition into DevOps engineering.
Is it possible for someone like me to make this career shift?
If yes:
What skills should I start learning first?
Which tools/technologies are must-know for DevOps beginners?
Are there free/affordable resources or roadmaps you recommend?
How much time (roughly) would it take to become job-ready in DevOps?
I’m motivated and willing to put in consistent effort. Just need some guidance on the right path so I don’t waste time.
Thanks in advance! 🙏
https://redd.it/1nnj904
@r_devops
DevOps folks in India: Do you really have to sacrifice sleep and work life balance for career growth?
I need some real talk from people already in DevOps. I currently work as a server & network analyst with 3 years of experience, but I’m looking to transition into DevOps.
Here’s my worry: in my current company, rotational shifts and night shifts are draining me.
When I look at DevOps openings, I often notice irregular or rotational shift requirements and I don’t want to jump from one fire into another.
So I need your help:
1) How common are rotational/night shifts in DevOps roles in India?
2) Are they unavoidable, or can I aim for companies/teams where DevOps mostly works general shift?
3) For those of you already in shifts, how do you manage it and what’s your plan to eventually get out?
Any advice, personal stories, or even harsh truths are welcome 🙏
https://redd.it/1nnkpfo
@r_devops