Looking for Best Practices to Restructure a DevOps Git Repository
I’m currently working as a DevOps intern, and one of my tasks is to restructure an existing Git repository. I want to ensure that I follow Git and repository best practices during this process.
The development team primarily uses Java Spring Boot for microservices. On the DevOps side, our stack includes:
Helm charts
ArgoCD
Kubernetes (k8s)
Ingress NGINX
Prometheus and Grafana for monitoring
APISIX for API management
GitHub Actions for CI/CD
I’m looking for resources and best practices that can guide me in restructuring this repository effectively.
Where can I learn more about best practices for organizing repositories and maintaining a clean DevOps structure? What are your thoughts or recommendations on how to approach this?
https://redd.it/1o9uy92
@r_devops
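For a Helm + ArgoCD stack, one common convention is to keep deployment config in a predictable layout (e.g. `charts/<service>/` for Helm charts, `envs/<env>/` for per-environment values, `argocd/` for Application manifests) and let ArgoCD point at paths in that repo. A minimal sketch; the service name, repo URL, and paths are hypothetical:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: orders-service-prod        # hypothetical service/env name
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example-org/deploy-config.git  # hypothetical repo
    targetRevision: main
    path: charts/orders-service                # chart lives under charts/
    helm:
      valueFiles:
        - ../../envs/prod/values.yaml          # per-environment overrides
  destination:
    server: https://kubernetes.default.svc
    namespace: orders
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
```

The key structural decision is separating application source (the Spring Boot repos) from deployment config (charts, values, Application manifests), so GitOps syncs are driven by the config repo alone.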
DevOps resources
Hello everyone, I am looking for resources to learn Linux. I found a website called the Linux Foundation, and it has a free Linux course. Is that enough? If it's not, I would be thankful if you could point me to good resources. Thanks, all.
https://redd.it/1o9y6zy
@r_devops
Need career advice: Infra Associate (Linux) wanting to move into DevOps
Hi everyone,
I’m currently working as an Infrastructure Associate, mostly handling Linux servers: patching, monitoring, and general system maintenance.
Alongside my job, I’m pursuing an MCA with a specialization in Cloud Computing, and I have completed a BCA. I’ve been learning Oracle Cloud, AWS, and Ansible automation, and I really want to move into a DevOps role.
I’d really appreciate some advice from people who’ve made a similar switch:
• What should I focus on next to make my skills more DevOps-ready?
• Any specific tools, projects, or certifications that helped you?
• How can I use my Linux + infra background as a strength when applying for DevOps roles?
• What is the long-term scope of DevOps roles?
Thanks in advance for any guidance or suggestions!
https://redd.it/1oa69z1
@r_devops
Is my current setup crazy? How do I convince my friends that it is (if it is)?
So an old friend of mine invited me to work on a freelance project with him. Even though I found it crazy, I complied with his recommendation for the initial setup, because he has more experience than me and wanted to keep costs low, but now I'm starting to regret it.
The current setup:
Locally, a Docker network with the frontend in one container, the backend in another, and a SQL database in a third.
On production, I have an EC2 instance where I pull the GitHub repo and run a script that builds the Vite frontend and deploys the backend container and database. We have a domain that routes to the EC2.
I got tired of ssh-ing into the EC2 to pull changes and backup and build and redeploy etc so I created a GitHub pipeline for it. But recently the builds have been failing more often because sometimes the docker volumes persist, restoring backups when database changes were made is getting more and more painful.
I can't help but think that if I could just use something like AWS SAM and utilize Lambdas, Cognito, and RDS, with CloudFront hosting the frontend, I'd be much happier.
Is my way significantly more expensive? Is this what early-stage deployment looks like? I've only ever dealt with adjusting deployments/automation and less with setting things up.
https://redd.it/1oa9fyg
@r_devops
Go library that improves DNS reliability through multi-resolver strategies
Wrote a library, https://github.com/bschaatsbergen/dnsdialer, which acts as a drop-in replacement for Go’s standard net.Dialer. It allows querying multiple DNS resolvers using different strategies to improve reliability, performance, and security of host resolution.
https://redd.it/1oaac6h
@r_devops
GitHub: bschaatsbergen/dnsdialer, a multi-resolver DNS dialer for Go with race, fallback, and consensus strategies.
AI tool for gathering metrics for workflows
Hey fellow devops!
I want to implement at my current job, as a side project, a common framework/tool to gather metrics from the GitHub workflows run by multiple teams in their code bases.
I want to gather common things like code coverage, test pass/fail rates, errors reported by code-analysis tools, etc. (in a nutshell, the metrics produced by a code base when it is built and tested).
So I have 2 paths:
1. Implement some common framework/tool that all the different repos can consume and configure, which will mean coding a parser for each tool/metric, i.e. a parser for coverage files, a parser for pytest results, a parser for Coverity results, etc., you get the idea.
2. Implement some kind of AI agent which I can ask to gather such metrics for me at the end of a workflow, through a prompt issued as an API request with the files I want analyzed.
I have been experimenting with AI via the usual Copilot and ChatGPT stuff, but I wanted to get my feet wet trying to use it differently. And I don't know if agentic AI is a good candidate for this scenario or if I should tackle it in a more traditional manner like option 1.
https://redd.it/1oaczul
@r_devops
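For path 1, the per-tool parsers are usually small. A minimal sketch for two common report formats (Cobertura-style `coverage.xml`, as produced by coverage.py/pytest-cov, and JUnit-style test reports); the common output dict shape here is an assumption, not a standard:

```python
import xml.etree.ElementTree as ET

def parse_cobertura(xml_text: str) -> dict:
    """Extract line coverage from a Cobertura-style coverage.xml."""
    root = ET.fromstring(xml_text)
    # Cobertura stores overall line coverage as a 'line-rate' attribute.
    return {"metric": "coverage", "line_rate": float(root.get("line-rate", 0))}

def parse_junit(xml_text: str) -> dict:
    """Extract pass/fail counts from a JUnit-style test report."""
    root = ET.fromstring(xml_text)
    # The root may be <testsuite> or a <testsuites> wrapper.
    suite = root if root.tag == "testsuite" else root.find("testsuite")
    tests = int(suite.get("tests", 0))
    failures = int(suite.get("failures", 0)) + int(suite.get("errors", 0))
    return {"metric": "tests", "total": tests, "failed": failures,
            "pass_rate": (tests - failures) / tests if tests else None}

if __name__ == "__main__":
    print(parse_cobertura('<coverage line-rate="0.87" branch-rate="0.75"></coverage>'))
    print(parse_junit('<testsuite tests="10" failures="1" errors="0"></testsuite>'))
```

Each parser emits the same dict shape, so a workflow step can run whichever parsers apply and POST the results to one collector, which is the main thing option 1 buys over an agent.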
Is RHCSA a good choice to start a DevOps career?
Hi everyone,
I’m planning to build my career in DevOps but feeling confused about where to start. I’m thinking about doing the RHCSA (Red Hat Certified System Administrator) certification.
Would RHCSA be a good starting point for DevOps, or should I focus on something else like AWS or CCNA?
I’d really appreciate some advice from professionals already working in DevOps.
Thanks in advance!
https://redd.it/1oadq6n
@r_devops
Multi-region testing strategy – how do you validate app behavior worldwide?
Our site behaves differently by region (pricing, redirects, language). I’m faking headers now, but I’m sure there’s a better way. How do you guys confirm regional logic actually works?
https://redd.it/1oac84h
@r_devops
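Header-faking gets more trustworthy if the region decision itself is a pure function shared by the app and the test suite, so the same table of cases can later run against the real endpoint (e.g. from geo-distributed synthetic-monitoring nodes). A sketch; the rule table and header name are hypothetical (`CF-IPCountry` is the geo header Cloudflare injects, as one example):

```python
# Hypothetical region rules: country code -> expected regional behavior.
REGION_RULES = {
    "DE": {"currency": "EUR", "lang": "de", "redirect": "/de/"},
    "JP": {"currency": "JPY", "lang": "ja", "redirect": "/jp/"},
    "US": {"currency": "USD", "lang": "en", "redirect": "/"},
}

def resolve_region(headers: dict) -> dict:
    """Mirror of the server-side regional logic, driven by request headers."""
    country = headers.get("CF-IPCountry", "US")
    return REGION_RULES.get(country, REGION_RULES["US"])

def test_regional_logic():
    # Table-driven cases: each can later be replayed against production.
    cases = [({"CF-IPCountry": "DE"}, "EUR"),
             ({"CF-IPCountry": "JP"}, "JPY"),
             ({}, "USD")]
    for headers, currency in cases:
        assert resolve_region(headers)["currency"] == currency

if __name__ == "__main__":
    test_regional_logic()
```

The point is separation: unit-test the mapping exhaustively offline, then use a handful of real geo-located requests only to confirm the headers arrive as expected.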
Azure Devops Repo to Visual Studio
Hello,
I work for a bank and we have a repo on Azure DevOps. I want to push the changes I made to UAT, but before that I need to build the changes in Visual Studio, which is not on my local machine but on a VDI. When I try to import/connect to my repo via Visual Studio on the VDI, I get a Git fatal error that says something about an SSL certificate.
Does anybody have any ideas on how to resolve this issue? Any help will be appreciated. Thank you!
https://redd.it/1oaat7e
@r_devops
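A common cause on locked-down corporate VDIs is a TLS-inspecting proxy that re-signs traffic with an internal CA that Git's bundled CA list doesn't trust. Two usual fixes, assuming a Windows VDI (the bundle path is an example; check with your security team before changing TLS settings):

```shell
# Option 1 (Windows): have Git use the Windows certificate store,
# which typically already trusts the corporate CA.
git config --global http.sslBackend schannel

# Option 2: point Git at a CA bundle that includes the corporate root.
git config --global http.sslCAInfo "C:/certs/corporate-ca-bundle.crt"
```

Either should clear the "SSL certificate problem" class of fatal errors if the proxy is indeed the cause; the exact error text from `git fetch` on a command line will confirm it.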
I’m building an API for a mobile app
I'm working on a new project that requires a backend and I'm planning to host it on AWS. Does anyone know if there are any current AWS credits or promotional programs available that I could apply for?
https://redd.it/1oagq1b
@r_devops
Does it make sense to connect to my desktop machine from a tablet to practice?
Hi guys. I have a job where I do nothing for 40-50 minutes at a time, and I'm allowed to use a tablet. I want to use that time to practice DevOps, but these programs are too heavy for a tablet. I'm planning to leave my laptop on and connect to it from my tablet, but I don't know if that's a good idea or not.
My laptop OS will be Ubuntu BTW.
https://redd.it/1oancyv
@r_devops
How do you maintain observability across automated workflows?
I’ve got automations running through several systems (GitHub Actions, webhooks, 3rd-party SaaS), and tracking failures across all of them is a nightmare. I’m thinking of building some centralized logging or alerting, but curious how others handle it at scale.
https://redd.it/1oas5py
@r_devops
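A common centralized pattern is a single collector endpoint that every source (GitHub Actions webhook, SaaS callback, cron wrapper) posts to, with per-source normalizers mapping payloads onto one event schema before storage and alerting. A sketch; the GitHub fields come from the real `workflow_run` webhook payload, the generic fallback shape is an assumption:

```python
def normalize(source: str, payload: dict) -> dict:
    """Map a source-specific payload onto one common event schema."""
    if source == "github_actions":
        # GitHub's workflow_run webhook carries name/conclusion/html_url.
        run = payload.get("workflow_run", {})
        return {"source": source,
                "name": run.get("name"),
                "status": run.get("conclusion") or "running",
                "url": run.get("html_url")}
    # Fallback for generic webhooks: require only a name and a status.
    return {"source": source,
            "name": payload.get("name"),
            "status": payload.get("status", "unknown"),
            "url": payload.get("url")}

def is_failure(event: dict) -> bool:
    """Single alerting predicate, applied uniformly to every source."""
    return event["status"] in {"failure", "timed_out", "cancelled", "error"}
```

Once everything is in one shape, cross-system questions ("which automations failed this week, anywhere?") become one query instead of N dashboards.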
Gartner Magic Quadrant for Observability 2025
Some interesting movement since last year. Splunk slipping a bit and Grafana Labs shooting up.
Wondering what people think about this?
What opinions do you have on the solutions you use?
I would really appreciate the opinions of people who are experienced with more than one of the listed solutions.
https://www.gartner.com/doc/reprints?id=1-2LFAL8EW&ct=250710&st=sb
https://redd.it/1oasip4
@r_devops
Trying to get precise historical resource usage from Railway — why is this so hard?
I’ve been trying to get the exact resource usage (CPU, memory, network, etc.) for a specific Railway project within a specific time range, but I can’t seem to find a proper way to do it.
The API doesn’t give me consistent data, and the dashboard only shows recent stats.
Has anyone here managed to pull accurate historical usage from Railway?
Would really appreciate any pointers or workarounds.
https://redd.it/1oat9ne
@r_devops
VPS + Managing DB Migrations in CI
Hi all, I'm posting a similar question I posed to r/selfhosted, basically looking for advice on how to manage DB migrations via CI. I have this setup:
1. VPS running services (frontend, backend, db) via docker compose (using Dokploy)
2. SSH locked down to only allow access via private VPN (using Tailscale)
3. DB is not exposed to external internet, only accessible to other services within the VPS.
The issue is I cannot determine what the right CI/CD processes should be for checking/applying migrations. Basically, my thought is I need to access prod DB from CI at two points in time: when I have a PR, we need to check to see if any migrations would be needed, and when deploying I should apply migrations as part of that process.
I previously had my DB open to the internet on e.g. port 5432. This worked since I could just access via standard connection string, but I was seeing a lot of invalid access logs, which made me think it was a possible risk/attack surface, so I switched it to be internal only.
After switching DB to no longer be accessible to the internet, I have a new set of issues, which is just accessing and running the DB commands is tricky. It seems my options are:
1. Keep the DB port open and just deal with attack attempts. I was not able to configure UFW to allow TCP access via Tailscale only, but if that is possible it's probably a good option.
2. Close the DB port and run migrations/checks against the DB via SSH somehow, but this gets complex. As an example, if I wanted to run a migration for Better Auth, as far as I can tell it can't be run in the prod container on startup, since it requires npx plus files (migration scripts, the auth.ts file) that are tree-shaken/minified/chunked as part of the standard build/packaging process and no longer present. So if we go this route, it seems like it needs a custom container just for migrations (assuming we spin it up as a separate ephemeral service).
How are other folks managing this? I'm open to any advice or patterns you've found helpful.
https://redd.it/1oavnun
@r_devops
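For option 1, UFW can scope the rule to the Tailscale network interface rather than an address range, which may be what was missing: allow Postgres in on `tailscale0` only and keep the default deny for everything else. CI then joins the tailnet (e.g. as an ephemeral node) and connects as if local. A sketch, assuming the standard `tailscale0` interface name on Linux and Postgres on 5432:

```shell
# Deny inbound by default, then allow Postgres only via Tailscale.
sudo ufw default deny incoming
sudo ufw allow in on tailscale0 to any port 5432 proto tcp

# Verify: 5432 should appear only with "on tailscale0".
sudo ufw status verbose
```

With this in place the DB port is never reachable from the public internet, so the invalid-access log noise should stop while CI keeps a standard connection string over the tailnet address.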
Building a DevOps homelab and AWS portfolio project. Looking for ideas from people who have done this well
Hey everyone,
I am setting up a DevOps homelab and want to host my own portfolio website on AWS as part of it. The goal is to have something that both shows my skills and helps me learn by doing. I want to treat it like a real production-style setup with CI/CD, infrastructure as code, monitoring, and containerization.
I am trying to think through how to make it more than just a static site. I want it to evolve as I grow, and I want to avoid building something that looks cool but teaches me nothing.
Here are some questions I am exploring and would love input on:
• How do you decide what is the right balance between keeping it simple and adding more components for realism?
• What parts of a DevOps pipeline or environment are worth showing off in a personal project?
• For hands-on learning, is it better to keep everything on AWS or mix in self-hosted systems and a local lab setup?
• How do you keep personal projects maintainable when they get complex?
• What are some underrated setups or tools that taught you real-world lessons when you built your own homelab?
I would really appreciate hearing from people who have gone through this or have lessons to share. My main goal is to make this project a long-term learning environment that also reflects real DevOps thinking.
Thanks in advance.
https://redd.it/1oazccy
@r_devops
Browser Automation Tools
I’ve been playing around with Selenium and Puppeteer for a few workloads, but they crash way too often and maintaining them is a pain. Browserbase has been decent, there’s a new one called steel.dev, and I’ve tried browser-use too, but it hasn’t been that performant for me. I'm trying to use browser automation more and more for web testing and deep research; is there anything else that works well?
Curious what everyone’s using browser automation for these days: scraping, AI agents, QA? What actually makes your setup work well? What tools are you running, what problems have you hit, and what makes one setup better than another in your experience?
Big thanks!
https://redd.it/1ob3xsf
@r_devops
steel.dev: Steel, an open-source headless browser API purpose-built for AI agents.
CI/CD template for FastAPI: CodeQL, Dependabot, GHCR publishing
Focus is the pipeline rather than the framework.
* Push triggers tests, lint, and CodeQL
* Tag triggers Docker build, health check, push to GHCR, and GitHub Release
* Dependabot for dependencies and Actions
* Optional Postgres and Sentry via secrets, without breaking the first run
Repo: https://github.com/ArmanShirzad/fastapi-production-template
https://redd.it/1ob387d
@r_devops
GitHub: ArmanShirzad/fastapi-production-template, a production-ready FastAPI template with Docker, CI/CD, observability, and one-click deployment to Render or Koyeb.
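The push/tag trigger split the template describes boils down to one workflow reacting differently by ref type. A minimal sketch of the pattern (not the template's actual workflow file; job names and steps here are illustrative):

```yaml
name: ci
on:
  push:
    branches: [main]   # push -> tests, lint, CodeQL
    tags: ['v*']       # tag  -> build, publish, release

jobs:
  test:
    if: github.ref_type == 'branch'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: pip install -r requirements.txt && pytest

  release:
    if: github.ref_type == 'tag'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: docker build -t ghcr.io/${{ github.repository }}:${{ github.ref_name }} .
```

Gating on `github.ref_type` keeps one file for both paths, so the release job can never run without the same checkout and context the test job sees.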
Job Market is crazy
The job market is crazy out there right now. I'm lucky I currently have a job and am just browsing. I applied to one position where I meet all the requirements and, it felt like, was sent a rejection email before I even received the Indeed confirmation. I understand they cannot look at all resumes, but what are these AIs looking for when all the skills match their requirements?
I wish anyone dealing with real job hunting the best of luck.
https://redd.it/1ob6vej
@r_devops
We developed a web monitoring tool ZomniLens and want your opinion
We've recently built a web monitoring tool [https://zomnilens.com](https://zomnilens.com) to detect website anomalies. The following features are included in the Standard plan:
* 60s monitoring interval.
* Supports HTTP GET, POST and PUT
* Each client has a beautiful service status page to ensure security and data protection. It can be made public at any time if desired. [demo page](https://dashboard.zomnilens.com/dashboard/snapshot/1pZmr4luQyVrhU1RzcABxiBytNVS8ceq?orgId=0&from=2025-10-07T20:00:21.510Z&to=2025-10-08T01:00:21.510Z&timezone=browser).
* Currently it supports email and SMS alerts. We are working on integrating other alerting channels (Slack, Webex, etc.) and they will be included in the same Standard pricing plan once available.
* Alerts are triggered on downtime, slow response time, soon-to-expire SSL certificates, and keyword-matching failures.
We would like to hear your thoughts on:
* What features do you think the service is missing that you'd like us to include in future releases?
* What other areas should the service improve on?
Feel free to submit a free trial request via [https://zomnilens.com/pricing/](https://zomnilens.com/pricing/) and try it out and let me know if you like it or not for your personal or business needs.
https://redd.it/1ob3lwk
@r_devops
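To make feedback on the alert conditions concrete, here is how the four triggers listed above could be expressed as a pure function over one probe result. This is our own minimal sketch for discussion, not ZomniLens's implementation; the thresholds are made up:

```python
def evaluate_probe(status_code: int, elapsed_ms: float, body: str,
                   cert_days_left: int, keyword: str) -> list:
    """Return the list of alert reasons for a single monitoring check."""
    alerts = []
    if not (200 <= status_code < 400):
        alerts.append("downtime")
    if elapsed_ms > 2000:            # example slow-response threshold
        alerts.append("slow_response")
    if cert_days_left < 14:          # example SSL-expiry warning window
        alerts.append("ssl_expiring")
    if keyword and keyword not in body:
        alerts.append("keyword_missing")
    return alerts
```

Feature requests are easiest to state against a model like this, e.g. per-check thresholds, consecutive-failure counts before alerting, or maintenance windows that suppress `downtime`.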