If teams moved to “apps not VMs” for ML dev, what might actually change for ops?
Exploring a potential shift in how ML development environments are managed. Instead of giving each engineer a full VM or desktop, the idea is that every GUI tool (Jupyter, VS Code, labeling apps) would run as its own container and stream directly to the browser. No desktops, no VDI layer. Compute would be pooled, golden images would define standard environments, and the model would stay cloud-agnostic across Kubernetes clusters.
A few things I am trying to anticipate:
* Would environment drift and “works on my machine” actually decrease once each tool runs in isolation?
* Where might operational toil move next - image lifecycle management, stateful storage, or session orchestration?
* What policies would make sense to control costs, such as idle timeouts, per-user quotas, or scheduled teardown of inactive sessions?
* What metrics would be worth instrumenting on day one - cold start latency, cost per active user, GPU-hour distribution, or utilization of pooled nodes?
* If this model scales, what parts of CI/CD or access control might need to evolve?
Not pitching anything. Just thinking ahead about how this kind of setup could reshape the DevOps workflow in real teams.
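On the cost-policy question, the mechanics are simple enough to sketch: an idle-timeout plus per-user-quota reaper is just a pure function over session metadata, whatever orchestrator actually enforces it. A minimal sketch (all names and thresholds are hypothetical, not tied to any real platform):

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Session:
    user: str
    last_activity: datetime
    gpu_hours_used: float = 0.0

def sessions_to_reap(sessions, now, idle_ttl=timedelta(hours=2), per_user_quota=3):
    """Return indices of sessions to tear down: any session idle past the TTL,
    plus any sessions beyond the per-user quota (oldest activity reaped first)."""
    reap = set()
    # Policy 1: idle timeout
    for i, s in enumerate(sessions):
        if now - s.last_activity > idle_ttl:
            reap.add(i)
    # Policy 2: per-user quota, applied to the survivors
    by_user = {}
    for i, s in enumerate(sessions):
        if i not in reap:
            by_user.setdefault(s.user, []).append(i)
    for idxs in by_user.values():
        idxs.sort(key=lambda i: sessions[i].last_activity, reverse=True)
        reap.update(idxs[per_user_quota:])  # keep the most recently active N
    return sorted(reap)
```

The point of keeping it a pure function is testability: the same policy code can run in a CronJob, an operator, or a dry-run report before anyone's session is actually killed.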
https://redd.it/1or6lal
@r_devops
Retraining prompt injection classifiers for every new jailbreak is impossible
Our team is burning out retraining models every time a new jailbreak drops. We went from monthly retrains to weekly, now it's almost daily with all the creative bypasses hitting production. The eval pipeline alone takes 6 hours, then there's data labeling, hyperparameter tuning, and deployment testing.
Anyone found a better approach? We've tried ensemble methods and rule-based fallbacks but coverage gaps keep appearing. Thinking about switching to more dynamic detection but worried about latency.
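One pattern that sidesteps full retrains: keep an append-only store of known bypasses and gate on similarity to it, so absorbing a new jailbreak is an append rather than a 6-hour pipeline run. A toy lexical sketch (difflib stands in for what would realistically be an embedding nearest-neighbor index; thresholds and names are illustrative only):

```python
import difflib

class PatternStore:
    """Append-only store of known jailbreak strings. Adding a new bypass is an
    O(1) append instead of a model retrain. This toy version matches lexically
    with difflib; a real system would use embedding nearest-neighbor search
    to generalize beyond near-verbatim variants."""
    def __init__(self, threshold=0.8):
        self.patterns = []
        self.threshold = threshold

    def add(self, pattern):
        self.patterns.append(pattern.lower())

    def is_suspicious(self, text):
        t = text.lower()
        # Flag if the input is close to any known bypass
        return any(
            difflib.SequenceMatcher(None, t, p).ratio() >= self.threshold
            for p in self.patterns
        )
```

Latency stays bounded by index size rather than model size, which is the usual argument for this shape; the open question is coverage of genuinely novel phrasings, where the trained classifier still earns its keep as a second layer.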
https://redd.it/1orc5kb
@r_devops
A playlist on Docker that will make you skilled enough to build your own container
I have created a Docker internals playlist of 3 videos.
In the first video you will learn core concepts: Docker internals, binaries, filesystems, what is (and isn't) inside an image, how an image is executed in a separate environment on the host, and Linux namespaces and cgroups.
The second is a walkthrough where you can see how to implement your own custom container from scratch; a Git link to the code is in the description.
The third and last video answers some questions and covers topics like mounts that were skipped in video 1 to avoid overcomplicating it for newcomers.
After this learning experience you will be able to understand and fix production-level issues by thinking in first principles, because you will know Docker is just Linux arranged to run isolated binaries.
I developed my own interest in Docker internals after deep-diving into many production issues in Kubernetes clusters. For a good backend engineer, these fundamentals are a must.
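The "Docker is just Linux" point can be seen without Docker at all: every process's namespace membership is exposed in /proc, and a containerized process simply shows different inode identities than the host. A small sketch (Linux only; not taken from the playlist):

```python
import os

def namespace_ids(pid="self"):
    """Map namespace type -> inode identity for a process, read straight from
    /proc/<pid>/ns. Two processes in the same namespace show the same inode;
    a process inside a container shows different ones. (Linux only.)"""
    ns_dir = f"/proc/{pid}/ns"
    return {name: os.readlink(os.path.join(ns_dir, name))
            for name in sorted(os.listdir(ns_dir))}

if __name__ == "__main__":
    # Compare this output inside and outside a container to see the isolation.
    for name, ident in namespace_ids().items():
        print(f"{name:10s} {ident}")
```

Running this on the host and then inside `docker run` makes the namespace boundary concrete before you ever touch clone(2) or cgroups.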
Docker INTERNALS
https://www.youtube.com/playlist?list=PLyAwYymvxZNhuiZ7F_BCjZbWvmDBtVGXa
https://redd.it/1orelme
@r_devops
Unicode Normalization Attacks: When "admin" ≠ "admin" 🔤
https://instatunnel.my/blog/unicode-normalization-attacks-when-admin-admin
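A minimal, self-contained illustration of the failure mode (my own example, not taken from the article): NFKC normalization folds fullwidth characters back to ASCII, so a filter that compares strings before normalization can be bypassed by a visually identical input, while Cyrillic homoglyphs survive normalization entirely and need separate confusable detection.

```python
import unicodedata

def looks_same_but_isnt():
    ascii_admin = "admin"
    fullwidth_admin = "ａｄｍｉｎ"       # U+FF41.. fullwidth Latin letters
    cyrillic_admin = "\u0430dmin"       # first letter is Cyrillic 'а', not Latin 'a'

    # Naive exact-match filter: the fullwidth form slips past...
    assert fullwidth_admin != ascii_admin
    # ...but NFKC folds it back to plain "admin", colliding post-normalization.
    assert unicodedata.normalize("NFKC", fullwidth_admin) == ascii_admin

    # The homoglyph survives NFKC: normalization alone will not catch it,
    # which is why confusable/homoglyph detection is a separate problem.
    assert unicodedata.normalize("NFKC", cyrillic_admin) != ascii_admin
```

The classic exploit shape: registration blocks "admin" pre-normalization, the attacker registers the fullwidth form, and a later lookup that normalizes first resolves both to the same account.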
https://redd.it/1orfljl
@r_devops
OpenSource work recommendations to get into devops?
Have 5YOE mostly as backend developer, with 3 years IAM team at big company (interviewers tend to ask mostly about this).
Recently got AWS Solutions Architect Professional which was super hard, though IAM was quite a bit easier since I've seen quite a few of the architectures while studying that portion of the exam. Before I got the SAP, I had SAA and many interviews I got were CI/CD roles which I bombed. When I got the SAP, I got a handful of interviews right away, none of which were related to AWS.
I don't really want to get the AWS DevOps Pro cert as I heard they use Cloudformation which most companies don't use. Also don't want to have to renew another cert in 3 years (SAP was the only one I wanted).
Anyways, I'm currently doing some open source work for aws-terraform-modules to get familiarized with IaC. Surprisingly, tf seems super simple. Maybe it's the act of deploying resources with no errors that is the key.
So basically, am I on the right track? Should I learn Ansible? Swagger? etc.
Did a few personal projects on Github, but I doubt that will wow employers unless I grind out something original.
Here's my resume btw: https://imgur.com/a/Iy2QNv6
https://redd.it/1org8l4
@r_devops
Offline postman alternative without any account.
Postman was great, with rich features like API flows, until it went cloud-only, which is a deal breaker for me.
Since then I've been looking for an offline-only API client with complex testing support: API flows through a drag-and-drop UI, and scripting.
I found HawkClient, which works offline without any account and supports API flows through a drag-and-drop UI, scripting, and a collection runner.
Curious to know: has anyone else tried HawkClient, or any other tool that meets these requirements?
https://redd.it/1ori0ld
@r_devops
Doubts of mine
I face problems while learning, like:
"Where should I learn from?"
"How much do I have to learn?"
etc.
All these questions come to my mind while learning.
If you face these problems, let me know how you handle them, with an example.
https://redd.it/1ormrzd
@r_devops
Token Agent – Config-driven token fetcher/rotator
Hello!
I'm working on a simple Token Agent service designed to manage token fetching, caching/invalidation, and propagation via a simple YAML config.
The flow: metadata API → token exchange service → http | file | uds
It was originally designed for cloud VMs.
It can retrieve data from metadata APIs or internal HTTP services, and then serve tokens via files, sockets, or HTTP endpoints.
Resilience and observability included.
Generic use cases:
- Keep workload tokens in sync without custom scripts
- Rotate tokens automatically with retry/backoff
- Define everything declaratively (no hardcoded logic)
Use cases for me:
- Passing tokens to vector.dev via files
- Token source for other services on the VM via HTTP
Repo: github.com/AleksandrNi/token-agent
Would love feedback from folks managing service credentials or secure automation.
Thanks!
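For readers unfamiliar with the pattern, the core of any such agent is a cached fetch with expiry skew and retry/backoff. A generic sketch of that loop (names and defaults are illustrative, not the token-agent API):

```python
import time

class TokenCache:
    """Tiny sketch of token caching with early refresh and retry/backoff.
    `fetch` is any callable returning (token, ttl_seconds) - e.g. a call to a
    metadata API or token exchange service. Hypothetical interface, not the
    actual token-agent config model."""
    def __init__(self, fetch, retries=3, base_delay=0.1, skew=30):
        self.fetch = fetch
        self.retries = retries
        self.base_delay = base_delay
        self.skew = skew          # refresh this many seconds before expiry
        self._token, self._expires_at = None, 0.0

    def get(self, now=None):
        now = time.monotonic() if now is None else now
        if self._token is None or now >= self._expires_at - self.skew:
            delay = self.base_delay
            for attempt in range(self.retries):
                try:
                    self._token, ttl = self.fetch()
                    self._expires_at = now + ttl
                    break
                except Exception:
                    if attempt == self.retries - 1:
                        raise  # exhausted retries; surface the error
                    time.sleep(delay)
                    delay *= 2  # exponential backoff
        return self._token
```

The skew matters in practice: consumers like vector.dev should never observe a token in its final seconds of validity, so refresh happens ahead of expiry rather than at it.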
https://redd.it/1ormmne
@r_devops
Do companies hire DevOps freshers?
Hey everyone
I’ve been learning DevOps tools like Docker, CI/CD, Kubernetes, Terraform, and cloud basics. I also have some experience with backend development using Node.js.
But I’m confused — do companies actually hire DevOps freshers, or do I need to first work as a backend developer (or some other role) and then switch to DevOps after getting experience?
If anyone here started their career directly in DevOps, I’d love to hear how you did it — was it through internships, projects, certifications, or something else?
Any advice would be really helpful
https://redd.it/1oroiqp
@r_devops
Kubernetes operator for declarative IDP management
For the past year, I've been developing a Kubernetes operator for the Kanidm identity provider.
From the release notes:
Kaniop is now available as an official release! After extensive beta cycles, this marks our first supported version for real-world use.
Key capabilities include:
Identity Resources: Declaratively manage persons, groups, OAuth2 clients, and service accounts
GitOps Ready: Full integration with Git-based workflows for infrastructure-as-code
Kubernetes Native: Built using Custom Resources and standard Kubernetes patterns
Production Ready: Comprehensive testing, monitoring, and observability features
If this sounds interesting to you, I’d really appreciate your thoughts or feedback — and contributions are always welcome.
Links:
repository: https://github.com/pando85/kaniop/
website: https://pando85.github.io/
https://redd.it/1orq23c
@r_devops
VSCode multiple ssh tunnels
Hi All. Hoping this is a good place for this question.
I currently work heavily in devcontainer-based environments, often using GitHub Codespaces. Our local systems are heavily locked down, so even getting simple CLI tools installed is a pain. A platform we use is setting up the ability to run code through the Remote-SSH extension, ideally allowing us to use VS Code while leveraging the remote execution environment.
However, it seems like I can't use that while connected to a Codespace, since it uses the tunnel. I looked into using a local Docker image on WSL, but again that uses the tunnel.
Anything you can think of to keep the devcontainer backed environment but then still be able to tunnel to the execution environment?
https://redd.it/1orqufh
@r_devops
I created a cool uhmegle extension
I created this useful extension for Uhmegle called ChillTools. It shows you the other person’s IP address and approximate location while you chat. It also keeps a log of the last 30 people you’ve talked to, saving their IP and a screenshot so you can look back later. It also has a country filter, a block-people system, and custom personalization with CSS. The extension is completely free, open source, and transparent: no hidden code or anything suspicious. You can download it from here .gg/FBsPkXDche. If you use Uhmegle often, this might be helpful.
https://redd.it/1ortiuo
@r_devops
Created a minimal pipeline with a GitHub connection and CodeBuild. Succeeds when created, but no subsequent pushes trigger builds. No EventBridge rules created
Here is the CloudFormation template. I removed some parts as it's too long.
The core logic is to trigger a build on a repo/branch using an existing connection.
Will this create EventBridge rules? None have been created.
Or do I need to add event triggers myself for pushes to this repo/branch?
The LLM says they will be created automatically and that there may be an issue creating them. Thank you in advance.
AWSTemplateFormatVersion: '2010-09-09'
Description: Minimal CodePipeline with CodeStar Connection (GitHub) Trigger & CodeBuild

Parameters:
  PipelineName:
    Type: String
    Default: TestCodeStarPipeline
  GitHubOwner:
    Type: String
    Description: GitHub user or org name (e.g. octocat)
  GitHubRepo:
    Type: String
    Description: GitHub repository name (e.g. Hello-World)
  GitHubBranch:
    Type: String
    Default: main
    Description: Branch to track (e.g. main)
  CodeStarConnectionArn:
    Type: String
    Description: ARN of your AWS CodeStar connection to GitHub

Resources:
  ArtifactBucket:
    Type: AWS::S3::Bucket
    Properties:
      VersioningConfiguration:
        Status: Enabled

  PipelineRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Statement:
          - Effect: Allow
            Principal: { Service: codepipeline.amazonaws.com }
            Action: sts:AssumeRole
      Path: /
      Policies:
        - PolicyName: ArtifactS3Access
          PolicyDocument:
            Version: '2012-10-17'
            Statement:
              - Effect: Allow
                Action:
                  - s3:GetObject
                  - s3:PutObject
                  - s3:ListBucket
                Resource:
                  - !Sub '${ArtifactBucket.Arn}'
                  - !Sub '${ArtifactBucket.Arn}/*'
              - Effect: Allow
                Action: codestar-connections:UseConnection
                Resource: !Ref CodeStarConnectionArn
              - Effect: Allow
                Action:
                  - codebuild:StartBuild
                  - codebuild:BatchGetBuilds
                Resource: '*'

  BuildRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Statement:
          - Effect: Allow
            Principal: { Service: codebuild.amazonaws.com }
            Action: sts:AssumeRole
      Path: /
      ManagedPolicyArns:
        - arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess
        - arn:aws:iam::aws:policy/CloudWatchLogsFullAccess

  CodeBuildProject:
    Type: AWS::CodeBuild::Project
    Properties:
      Name: !Sub '${PipelineName}-build'
      ServiceRole: !GetAtt BuildRole.Arn
      Artifacts:
        Type: CODEPIPELINE
      Environment:
        ComputeType: BUILD_GENERAL1_SMALL
        Image: aws/codebuild/amazonlinux2-x86_64-standard:5.0
        Type: LINUX_CONTAINER
      Source:
        Type: CODEPIPELINE
        BuildSpec: |
          version: 0.2
          phases:
            build:
              commands:
                - echo "Hello World from CodeBuild"
          artifacts:
            files:
              - '**/*'

  Pipeline:
    Type: AWS::CodePipeline::Pipeline
    Properties:
      Name: !Ref PipelineName
      RoleArn: !GetAtt PipelineRole.Arn
      ArtifactStore:
        Type: S3
        Location: !Ref ArtifactBucket
      Stages:
        - Name: Source
          Actions:
            - Name: Source
              ActionTypeId:
                Category: Source
                Owner: AWS
                Provider: CodeStarSourceConnection
                Version: '1'
              Configuration:
                ConnectionArn: !Ref CodeStarConnectionArn
                FullRepositoryId: !Sub "${GitHubOwner}/${GitHubRepo}"
                BranchName: !Ref GitHubBranch
                OutputArtifactFormat: CODE_ZIP
              OutputArtifacts:
                - Name: SourceArtifact
              RunOrder: 1
        - Name: Build
          Actions:
            - Name: Build
              ActionTypeId:
                Category: Build
                Owner: AWS
                Provider: CodeBuild
                Version: '1'
              Configuration:
                ProjectName: !Ref CodeBuildProject
              InputArtifacts:
                - Name: SourceArtifact
              OutputArtifacts:
                - Name: BuildOutput
              RunOrder: 1

Outputs:
  PipelineName:
    Value: !Ref PipelineName
    Description: Name of the CodePipeline
  ArtifactBucket:
    Value: !Ref ArtifactBucket
    Description: Name of the S3 bucket used for pipeline artifacts
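A hedged note on the EventBridge question, based on my understanding of the CodeStarSourceConnection action (verify against the CodePipeline action reference): this source type does not create EventBridge rules; push detection flows through the connection itself and is governed by the action's DetectChanges property, which defaults to true. If pushes still don't trigger, the usual suspects are the connection not being in the Available state, or a FullRepositoryId/BranchName mismatch. The fragment below shows DetectChanges made explicit on the same Source action configuration as above:

```yaml
              Configuration:
                ConnectionArn: !Ref CodeStarConnectionArn
                FullRepositoryId: !Sub "${GitHubOwner}/${GitHubRepo}"
                BranchName: !Ref GitHubBranch
                OutputArtifactFormat: CODE_ZIP
                # Connection-based change detection; no EventBridge rule is
                # created for this source type (defaults to true if omitted).
                DetectChanges: true
```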
https://redd.it/1orue5c
@r_devops
How to create a curated repository in Nexus?
I would like to create a repository in Nexus that has only selected packages that I download from Maven Central. This repository should have only the packages and versions that I have selected. The aim is to prevent developers in my organization from downloading any random package and work with a standardised set.
Based on the documentation at https://help.sonatype.com/en/repository-types.html I see that a repo can be a proxy or hosted.
Is there a way to create a curated repository?
https://redd.it/1orvv1g
@r_devops
Do you separate template browsing from deployment in your internal IaC tooling?
I’m working on an internal platform for our teams to deploy infrastructure using templates (Terraform mostly). Right now we have two flows:
A “catalog” view where users can see available templates (as cards or list), but can’t do much beyond launching from there
A “deployment” flow where they select where the new env will live (e.g., workflow group/project), and inside that flow, they select the template (usually a dropdown or embedded step)
I’m debating whether to kill the catalog view and just make people launch everything through the deployment flow, which would mean template selection happens inside the stepper (no more dedicated browse view).
Would love to hear how this works in your org or with tools like Spacelift, env0, or similar.
TL;DR:
Trying to decide whether to keep a separate template catalog view or just let users select templates inside the deploy wizard. Curious how others handle this: do you browse templates separately or pick them during deployment? Looking for examples from tools like env0, Spacelift, or your own internal setups.
https://redd.it/1os3byi
@r_devops
Need guidance to deep dive.
So I was able to secure a job as a DevOps engineer at a fintech company. I have a very good understanding of Linux system administration and networking, as my previous job was purely Linux administration. Here, I am part of a 7-member team looking after 4 different on-premises OpenShift prod clusters. This is my first job where I got hands-on with technologies like Kubernetes, Jenkins, GitLab, etc. I quickly got the idea of pipelines since I was good with Bash. Furthermore, I spent the first 4 months learning Kubernetes from the KodeKloud CKA prep course and quickly grasped its importance. However, I don't want to be a person who just clicks the deployment buttons or runs a few oc apply commands. I want to learn the ins and outs of DevOps from an architectural perspective (planning, installation, configuration, troubleshooting, etc.). I am overwhelmed by most of the stuff and need a clear learning path. All help is appreciated.
https://redd.it/1os4j8n
@r_devops
I built Haloy, an open source tool for zero-downtime Docker deploys on your own servers.
Hey, r/devops!
I run a lot of projects on my own servers, but I was missing a simple way to deploy apps with zero downtime and without a complicated setup.
So I built Haloy. It's an open-source tool written in Go that deploys Dockerized apps with a simple config and a single haloy deploy command.
Here's an example config in its simplest form:
name: my-app
server: haloy.yourserver.com
domains:
  - domain: my-app.com
    aliases:
      - www.my-app.com
It's still in beta, so I'd love to get some feedback from the community.
You can check out the source code and a quick-start guide on GitHub: https://github.com/haloydev/haloy
Thanks!
https://redd.it/1os2mig
@r_devops
Has anyone integrated AI tools into their PR or code review workflow?
We’ve been looking for ways to speed up our review cycles without cutting corners on quality. Lately, our team has been testing a few AI assistants for code review, mainly CodeRabbit and Cubic, to handle repetitive or low-level feedback before a human gets involved.
So far they’ve been useful for small stuff like style issues and missed edge cases, but I’m still not sure how well they scale when multiple reviewers or services are involved.
I’m curious if anyone here has built these tools into their CI/CD process or used them alongside automation pipelines. Are they actually improving turnaround time, or just adding another step to maintain?
https://redd.it/1os7j08
@r_devops
OpenTelemetry Collector Contrib v0.139.0 Released — new features, bug fixes, and a small project helping us keep up
OpenTelemetry moves fast — and keeping track of what’s new is getting harder each release.
I’ve been working on something called Relnx — a site that tracks and summarizes releases for tools we use every day in observability and cloud-native work.
Here’s the latest breakdown for OpenTelemetry Collector Contrib v0.139.0 👇
🔗 https://www.relnx.io/releases/opentelemetry-collector-contrib-v0.139.0
Would love feedback or ideas on what other tools you’d like to stay up to date with.
#OpenTelemetry #Observability #DevOps #SRE #CloudNative
https://redd.it/1ose1kf
@r_devops
How to find companies with good work life balance and modern stack?
I'd love to hear your recommendations or advice. My last job was SRE at a startup. Total mess, toxic people, and constant firefighting. I thought about moving from SRE to DevOps for some calm.
Now I'm looking for a place with:
• no 24/7 on-call rotations or high-pressure "hustle" culture; finishing work at the same time every day, etc.
• at the same time, work with a modern tech stack like K8s, AWS, Docker, Grafana, Terraform, etc.
It's easy to filter by stack. But how do I find the companies that give me the highest probability of the culture I described above?
I worked for a bank before and the boredom there was killing me. Also an old stack... I need some autonomy.
At the same time, startups seem a bit too chaotic. My best bet would be mid-size scale-ups? Places with good documentation, async communication, and work-life balance. What about consulting agencies?
Is it also random which project I will land in?
I'd love to hear from people who've found teams like that:
• Which companies (in Europe or remote-first) have that kind of environment?
• What kind of questions should I ask during interviews to detect toxic culture or hidden on-call stress?
• Are there specific industries (fintech, SaaS, analytics, medtech, etc.) that tend to have calmer DevOps roles?
Thank you so much!
https://redd.it/1osf31s
@r_devops