DevOps&SRE Library
A library of articles on DevOps and SRE.

Advertising: @ostinostin
Content: @mxssl

RKN: https://www.gosuslugi.ru/snet/67704b536aa9672b963777b3
oomd

oomd is a userspace Out-Of-Memory (OOM) killer for Linux systems.


https://github.com/facebookincubator/oomd
cloud-snitch

Map visualization and firewall for AWS activity, inspired by Little Snitch for macOS.


https://github.com/ccbrown/cloud-snitch
arkflow

A high-performance stream processing engine written in Rust, supporting multiple input/output sources and processors.


https://github.com/arkflow-rs/arkflow
brush

brush (Bo(u)rn(e) RUsty SHell) is a POSIX- and bash-compatible shell implemented in Rust. It's built and tested on Linux and macOS, with experimental support on Windows. (Its Linux build is fully supported when run on Windows via WSL.)


https://github.com/reubeno/brush
outpost

Outpost is a self-hosted, open-source infrastructure that enables event producers to add outbound webhooks and Event Destinations to their platform, with support for destination types such as Webhooks, Hookdeck Event Gateway, Amazon EventBridge, AWS SQS, AWS SNS, GCP Pub/Sub, RabbitMQ, and Kafka.


https://github.com/hookdeck/outpost
tilt

Define your dev environment as code. For microservice apps on Kubernetes.
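For flavor, a minimal Tiltfile; Tiltfiles are written in Starlark, a Python dialect, and the image name, manifest path, and resource name below are placeholders:

```python
# Minimal Tiltfile (Starlark, a Python dialect); names are placeholders.
docker_build('example-image', '.')               # rebuild the image when code changes
k8s_yaml('k8s/deployment.yaml')                  # apply the app's Kubernetes manifests
k8s_resource('example-app', port_forwards=8000)  # forward localhost:8000 to the pod
```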


https://github.com/tilt-dev/tilt
Anomaly Detection in Time Series Using Statistical Analysis

Setting up alerts for metrics isn’t always straightforward. In some cases, a simple threshold works just fine — for example, monitoring disk space on a device. You can just set an alert at 10% remaining, and you’re covered. The same goes for tracking available memory on a server.

But what if we need to monitor something like user behavior on a website? Imagine running a web store where you sell products. One approach might be to set a minimum threshold for daily sales and check it once a day. But what if something goes wrong, and you need to catch the issue much sooner — within hours or even minutes? In that case, a static threshold won’t cut it because user activity fluctuates throughout the day. This is where anomaly detection comes in.
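The usual statistical approach can be sketched in a few lines (a minimal illustration, not the article's code): compare each point against a trailing window's mean and flag it when it deviates by more than a few standard deviations. The window size and threshold below are assumptions.

```python
import statistics

def detect_anomalies(series, window=24, threshold=3.0):
    """Flag indices whose value deviates from the trailing window's
    mean by more than `threshold` standard deviations (z-score)."""
    anomalies = []
    for i in range(window, len(series)):
        past = series[i - window:i]
        mean = statistics.mean(past)
        stdev = statistics.stdev(past)
        if stdev > 0 and abs(series[i] - mean) / stdev > threshold:
            anomalies.append(i)
    return anomalies

# Hourly sales: a stable pattern, then a sudden drop at the end.
sales = [100, 102, 98, 101, 99, 103, 97, 100] * 4 + [5]
print(detect_anomalies(sales, window=8))  # -> [32], the drop
```

A static threshold low enough to tolerate the normal dips here would miss the final drop for hours; the rolling z-score catches it at the first bad data point.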


https://medium.com/booking-com-development/anomaly-detection-in-time-series-using-statistical-analysis-cc587b21d008
Incident SEV scales are a waste of time

Ask an engineering leader about their incident response protocol and they’ll tell you about their severity scale. “The first thing we do is we assign a severity to the incident,” they’ll say, “so the right people will get notified.”

And this is sensible. In order to figure out whom to get involved, decision makers need to know how bad the problem is. If the problem is trivial, a small response will do, and most people can get on with their day. If it’s severe, it’s all hands on deck.

Severity correlates (or at least, it’s easy to imagine it correlating) to financial impact. This makes a SEV scale appealing to management: it takes production incidents, which are so complex as to defy tidy categorization on any dimension, and helps make them legible.

A typical SEV scale looks like this:

- SEV-3: Impact limited to internal systems.
- SEV-2: Non-customer-facing problem in production.
- SEV-1: Service degradation with limited impact in production.
- SEV-0: Widespread production outage. All hands on deck!

But when you’re organizing an incident response, is severity really what matters?


https://blog.danslimmon.com/2025/01/29/incident-sev-scales-are-a-waste-of-time/
The Lost Fourth Pillar of Observability - Config Data Monitoring

A lot has been written about logs, metrics, and traces as they are indeed key components in observability, application, and system monitoring. One thing that is often overlooked, however, is config data and its observability. In this blog, we'll explore what config data is, how it differs from logs, metrics, and traces, and discuss what architecture is needed to store this type of data and in which scenarios it provides value.


https://www.cloudquery.io/blog/fourth-lost-pillar-of-observability-config-data-monitoring
L4-L7 Performance: Comparing LoxiLB, MetalLB, NGINX, HAProxy

As Kubernetes continues to dominate the cloud-native ecosystem, the need for high-performance, scalable, and efficient networking solutions has become paramount. This blog compares LoxiLB with MetalLB as Kubernetes service load balancers and pits LoxiLB against NGINX and HAProxy for Kubernetes ingress. These comparisons mainly focus on performance for modern cloud-native workloads.


https://dev.to/nikhilmalik/l4-l7-performance-comparing-loxilb-metallb-nginx-haproxy-1eh0
Optimising Node.js Application Performance

In this post, I’d like to take you through the journey of optimising Aurora, our high-traffic GraphQL front end API built on Node.js. Running on Google Kubernetes Engine, we’ve managed to reduce our pod count by over 30% without compromising latency, thanks to improvements in resource utilisation and code efficiency.

I’ll share what worked, what didn’t, and why. So whether you’re facing similar challenges or simply curious about real-world Node.js optimisation, you should find practical insights here that you can apply to your own projects.


https://tech.loveholidays.com/optimising-node-js-application-performance-7ba998c15a46
The Karpenter Effect: Redefining Our Kubernetes Operations

A reflection on our journey towards AWS Karpenter, improving our upgrades, flexibility, and cost-efficiency in a 2,000+ node fleet.


https://medium.com/adevinta-tech-blog/the-karpenter-effect-redefining-our-kubernetes-operations-80c7ba90a599
Replacing StatefulSets With a Custom K8s Operator in Our Postgres Cloud Platform

Over the last year, the platform team here at Timescale has been working hard on improving the stability, reliability and cost efficiency of our infrastructure. Our entire cloud is run on Kubernetes, and we have spent a lot of engineering time working out how best to orchestrate its various parts. We have written many different Kubernetes operators for this purpose, but until this year, we always used StatefulSets to manage customer database pods and their volumes.

StatefulSets are a native Kubernetes workload resource used to manage stateful applications. Unlike Deployments, StatefulSets provide unique, stable network identities and persistent storage for each pod, ensuring ordered and consistent scaling, rolling updates, and maintaining state across restarts, which is essential for stateful applications like databases or distributed systems.
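For reference, a minimal StatefulSet manifest sketched as a Python dict (all names and sizes are illustrative): serviceName ties the pods to a headless Service for stable DNS identities (db-0, db-1, ...), and volumeClaimTemplates give each pod its own persistent volume.

```python
# Minimal StatefulSet manifest as a Python dict; names are illustrative.
statefulset = {
    "apiVersion": "apps/v1",
    "kind": "StatefulSet",
    "metadata": {"name": "db"},
    "spec": {
        "serviceName": "db-headless",  # headless Service -> stable per-pod DNS
        "replicas": 3,
        "selector": {"matchLabels": {"app": "db"}},
        "template": {
            "metadata": {"labels": {"app": "db"}},
            "spec": {
                "containers": [{
                    "name": "postgres",
                    "image": "postgres:16",
                    "volumeMounts": [{"name": "data",
                                      "mountPath": "/var/lib/postgresql/data"}],
                }],
            },
        },
        # Each pod gets its own PVC, retained across restarts.
        "volumeClaimTemplates": [{
            "metadata": {"name": "data"},
            "spec": {
                "accessModes": ["ReadWriteOnce"],
                "resources": {"requests": {"storage": "10Gi"}},
            },
        }],
    },
}
```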

However, working with StatefulSets was becoming increasingly painful and preventing us from innovating. In this blog post, we’re sharing how we replaced StatefulSets with our own Kubernetes custom resource and operator, which we called PatroniSets, without a single customer noticing the shift. This move has improved our stability considerably, minimized disruptions to the user, and helped us perform maintenance work that would have been impossible previously.


https://www.timescale.com/blog/replacing-statefulsets-with-a-custom-k8s-operator-in-our-postgres-cloud-platform
Kubernetes Authentication - Comparing Solutions

This post is a deep dive comparing different solutions for authenticating into a Kubernetes cluster. The goal is to give you an idea of what the various solutions provide for a typical cluster deployment using production-capable configurations. We're also going to walk through deployments to get an idea of how long each project takes to set up, and look at common operations tasks for each solution. This blog post is written from the perspective of an enterprise deployment, but if you're looking to run a Kubernetes lab, or use Kubernetes for a service provider, I think you'll still find it useful. We're not going to do a deep dive into how either OpenID Connect or Kubernetes authentication actually works.


https://www.tremolo.io/post/kubernetes-authentication-comparing-solutions
From four to five 9s of uptime by migrating to Kubernetes

When we launched User Management along with a free tier of up to 1 million MAUs, we faced several challenges using Heroku: the lack of an SLA, limited rollout functionality, and inadequate data locality options. To address these, we migrated to Kubernetes on EKS, developing a custom platform called Terrace to streamline deployment, secret management, and automated load balancing.


https://workos.com/blog/from-four-to-five-9s-of-uptime-by-migrating-to-kubernetes
Tackling OOM: Strategies for Reliable ML Training on Kubernetes

Tackle OOMs => reliable training => win!


https://medium.com/better-ml/tackling-oom-strategies-for-reliable-ml-training-on-kubernetes-dcd49a2b83f9
Kubernetes - Network Policies

A NetworkPolicy is a Kubernetes resource that defines rules for controlling the traffic flow to/from pods. It works at layer 3 (IP) and layer 4 (TCP/UDP) of the OSI model. The policies are namespaced and use labels to identify the target pods and define allowed traffic.
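As an illustration (not from the post), here is a policy that admits ingress to pods labeled app=db only from pods labeled app=api, and only on TCP 5432, sketched as a Python dict; the names and namespace are hypothetical.

```python
# Hypothetical NetworkPolicy: allow ingress to app=db pods only from
# app=api pods in the same namespace, and only on TCP port 5432.
network_policy = {
    "apiVersion": "networking.k8s.io/v1",
    "kind": "NetworkPolicy",
    "metadata": {"name": "allow-api-to-db", "namespace": "prod"},
    "spec": {
        "podSelector": {"matchLabels": {"app": "db"}},  # pods the policy targets
        "policyTypes": ["Ingress"],
        "ingress": [{
            "from": [{"podSelector": {"matchLabels": {"app": "api"}}}],
            "ports": [{"protocol": "TCP", "port": 5432}],
        }],
    },
}
```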


https://medium.com/@umangunadakat/kubernetes-network-policies-41f288fa53fc
wave

Wave watches Deployments, StatefulSets, and DaemonSets within a Kubernetes cluster and ensures that their Pods always have up-to-date configuration.

By monitoring mounted ConfigMaps and Secrets, Wave can trigger a Rolling Update of the Deployment when the mounted configuration is changed.
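The pattern Wave automates can be sketched in a few lines with the Python kubernetes client (ConfigMap and Deployment names are placeholders): hash the ConfigMap's contents into a pod-template annotation, so any config change alters the template and triggers a rolling update.

```python
import hashlib
from kubernetes import client, config

config.load_kube_config()
core, apps = client.CoreV1Api(), client.AppsV1Api()

# Hash the ConfigMap's data deterministically; names are placeholders.
cm = core.read_namespaced_config_map("app-config", "default")
digest = hashlib.sha256(str(sorted((cm.data or {}).items())).encode()).hexdigest()

# Patching a pod-template annotation changes the template,
# which makes the Deployment roll its pods onto the new config.
apps.patch_namespaced_deployment(
    "app", "default",
    {"spec": {"template": {"metadata": {"annotations": {"config-hash": digest}}}}},
)
```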


https://github.com/wave-k8s/wave
winter-soldier

Winter Soldier can be used to:

- clean up (delete) Kubernetes resources
- scale workload pods down to 0

at user-defined times of day and under user-defined conditions. Winter Soldier is an operator that expects conditions to be defined using the Hibernator CRD.
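The core action such hibernation boils down to is a scale-to-zero. Here is a minimal sketch of that operation with the Python kubernetes client; the workload name and namespace are placeholders, and the real schedule and selection logic lives in the Hibernator CRD, whose schema is not shown here.

```python
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

# Scale a workload to zero replicas, as a hibernation operator would
# do at the scheduled time (name and namespace are placeholders).
apps.patch_namespaced_deployment_scale(
    "batch-worker", "default", {"spec": {"replicas": 0}}
)
```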


https://github.com/devtron-labs/winter-soldier