DevOps & SRE notes
Helpful articles and tools for DevOps & SRE

WhatsApp: https://whatsapp.com/channel/0029Vb79nmmHVvTUnc4tfp2F

For paid consultation (RU/EN), contact: @tutunak


All ways to support the channel: https://telegra.ph/How-support-the-channel-02-19
Enhancing workload isolation and security in Kubernetes environments is critical for protecting sensitive operations and preventing container breakouts. This blog post explores how Kata Containers combine the efficiency of containers with the robust security of virtual machines, enabling secure deployments on Amazon EKS with minimal configuration changes.

https://aws.amazon.com/blogs/containers/enhancing-kubernetes-workload-isolation-and-security-using-kata-containers/
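
As a rough sketch of the idea (not the article's exact EKS setup), the only pod-level change Kata needs is a RuntimeClass. Here it is via the official Kubernetes Python client, assuming a containerd handler named "kata" is already installed on the nodes; all names are illustrative:

```python
# Minimal sketch: register a Kata RuntimeClass and run a pod under it.
# Assumes the Kata runtime is already installed on the worker nodes and
# exposed to containerd under the handler name "kata" (illustrative).
from kubernetes import client, config

config.load_kube_config()  # or load_incluster_config() inside a pod

# RuntimeClass maps a pod-level runtime request to a node-level handler.
runtime_class = client.V1RuntimeClass(
    metadata=client.V1ObjectMeta(name="kata"),
    handler="kata",  # must match the containerd runtime handler on the node
)
client.NodeV1Api().create_runtime_class(runtime_class)

# The only pod-spec change needed is runtime_class_name.
pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="kata-demo"),
    spec=client.V1PodSpec(
        runtime_class_name="kata",
        containers=[client.V1Container(name="app", image="nginx:1.27")],
    ),
)
client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```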
The challenge of making artificial intelligence more transparent is at the heart of Andrew Mallaband's exploration of the "black box" dilemma. This insightful editorial delves into the real-world implications of explainability in AI systems.

https://www.linkedin.com/pulse/explainability-black-box-dilemma-real-world-andrew-mallaband-ogvae/
Optimizing autoscaling in Kubernetes involves much more than just monitoring CPU and memory, as this blog post by Cristian Sepulveda demonstrates through a practical application workflow. By leveraging KEDA to scale based on real-world metrics like message queue length, teams can achieve faster, cost-effective scaling tailored to specific application needs.

https://medium.com/@csepulvedab/how-to-optimize-autoscaling-in-kubernetes-using-metrics-based-on-application-workflows-7f899fdef4d9
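
For a flavor of what queue-driven scaling looks like, here is a hedged sketch of a KEDA ScaledObject created through the Kubernetes CustomObjectsApi. The deployment name, queue name, and thresholds are illustrative, not taken from the article:

```python
# Minimal sketch: a KEDA ScaledObject that scales a Deployment on RabbitMQ
# queue length instead of CPU/memory. All names here are illustrative.
from kubernetes import client, config

config.load_kube_config()

scaled_object = {
    "apiVersion": "keda.sh/v1alpha1",
    "kind": "ScaledObject",
    "metadata": {"name": "worker-scaler", "namespace": "default"},
    "spec": {
        "scaleTargetRef": {"name": "worker"},  # the Deployment to scale
        "minReplicaCount": 1,
        "maxReplicaCount": 20,
        "triggers": [{
            "type": "rabbitmq",
            "metadata": {
                "queueName": "jobs",
                "mode": "QueueLength",
                "value": "50",                  # target messages per replica
                "hostFromEnv": "RABBITMQ_URL",  # connection string from env
            },
        }],
    },
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="keda.sh", version="v1alpha1",
    namespace="default", plural="scaledobjects",
    body=scaled_object,
)
```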
As the complexity of modern software systems grows, the meaning and practice of "observability" have become increasingly muddled. In this personal essay, Charity Majors argues that it's time to "version" observability—differentiating the traditional metrics-logs-traces approach (Observability 1.0) from a new, more flexible model built on wide, structured log events (Observability 2.0).

https://charity.wtf/2024/08/07/is-it-time-to-version-observability-signs-point-to-yes/
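
A toy example of what a "wide, structured log event" might look like in practice (illustrative fields, not Majors' implementation):

```python
# Toy illustration of an "Observability 2.0" wide event: instead of separate
# metrics, logs, and traces, emit one rich, structured event per unit of work.
import json, time, uuid

def handle_request(user_id: str) -> None:
    event = {
        "timestamp": time.time(),
        "trace_id": uuid.uuid4().hex,  # correlates with upstream/downstream
        "service": "checkout",
        "endpoint": "/api/orders",
        "user_id": user_id,
        "plan": "enterprise",          # high-cardinality business context
        "cache_hit": False,
        "db_query_ms": 42.7,
        "duration_ms": 180.3,
        "status_code": 200,
    }
    # One wide event carries everything needed to slice by any dimension later.
    print(json.dumps(event))

handle_request("user-123")
```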
Designing a robust network architecture for K3s multi-cluster environments can be challenging, especially when integrating Layer 2 and BGP routing on Unifi UDM devices. In this guide, David Elizondo walks through practical considerations and strategies for planning private RFC 1918 address spaces and achieving effective communication between clusters using tools like Cilium and native routing.

https://medium.com/@david-elizondo/planning-a-k3s-multi-cluster-network-with-l2-and-bgp-on-unifi-udm-ae4480a7b4f7
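
As a small illustration of the address-planning step (not the article's actual layout), Python's stdlib ipaddress module can carve non-overlapping RFC 1918 blocks per cluster; the parent range and per-cluster sizes here are illustrative choices:

```python
# Minimal sketch: carve non-overlapping RFC 1918 ranges for several K3s
# clusters so pod/service CIDRs never collide across the BGP fabric.
import ipaddress

parent = ipaddress.ip_network("10.40.0.0/13")  # private RFC 1918 space
clusters = ["k3s-prod", "k3s-staging", "k3s-lab"]

# One /16 per cluster, split into pod and service CIDRs for K3s/Cilium flags.
for name, block in zip(clusters, parent.subnets(new_prefix=16)):
    pod_cidr, svc_cidr = list(block.subnets(new_prefix=17))
    print(f"{name}: cluster-cidr={pod_cidr}  service-cidr={svc_cidr}")
```

With non-overlapping CIDRs per cluster, each pod CIDR can then be advertised over BGP (e.g., by Cilium with native routing) without NAT between clusters.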
Learning from unexpected service failures can be a catalyst for long-term improvement, as Tines software engineer Shayon Mukherjee shares in this blog post. The story reveals how a Redis upgrade exposed a hidden point of failure in their webhook system, ultimately leading to stronger resilience and more comprehensive testing practices.

https://www.tines.com/blog/engineering-incidents-improvement/
Slow container startup times can cripple the productivity of Kubernetes teams managing large Docker images, sometimes dragging deployments out for hours. In this feature, Kirill Kazakov shares a practical strategy for pre-warming nodes and leveraging image caching, dramatically reducing cold starts and disk pressure during mass pod rollouts in Amazon EKS clusters.

https://hackernoon.com/how-to-optimize-kubernetes-for-large-docker-images
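
One common pre-warming pattern, sketched here with the Kubernetes Python client, is a DaemonSet that pulls the heavy image onto every node ahead of rollouts. The image and names are illustrative, and this may differ from the article's exact approach:

```python
# Minimal sketch of the image pre-warming pattern: a DaemonSet whose only job
# is to pull a large image onto every node so real pods start from warm cache.
from kubernetes import client, config

config.load_kube_config()

prepull = client.V1DaemonSet(
    metadata=client.V1ObjectMeta(name="prepull-big-image"),
    spec=client.V1DaemonSetSpec(
        selector=client.V1LabelSelector(match_labels={"app": "prepull"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "prepull"}),
            spec=client.V1PodSpec(
                containers=[client.V1Container(
                    name="prepull",
                    image="registry.example.com/ml-worker:latest",  # the big image
                    # Keep the container alive so the image stays cached
                    # (assumes the image ships a sleep binary).
                    command=["sleep", "infinity"],
                )],
            ),
        ),
    ),
)
client.AppsV1Api().create_namespaced_daemon_set(
    namespace="kube-system", body=prepull
)
```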
Tail-based sampling unlocks deeper insights into distributed systems by allowing OpenTelemetry users to prioritize traces that matter most, such as those with errors or slow responses. This guide explains how tail-based sampling works, its differences from head-based sampling, and provides a practical walkthrough for setting up a two-tier OpenTelemetry Collector architecture that intelligently filters traces for more actionable observability.

https://itnext.io/empower-your-observability-tail-based-sampling-for-better-tracing-with-opentelemtry-243ca2cc55d1
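
To make the head-vs-tail distinction concrete, here is a toy Python model of the tail-sampling decision. The real logic lives in the Collector's tail_sampling processor; this only shows the buffer-then-decide idea, with made-up spans and thresholds:

```python
# Toy illustration of tail-based sampling: buffer all spans of a trace,
# then keep the trace only if it is slow or contains an error.
from collections import defaultdict

LATENCY_THRESHOLD_MS = 500

def decide(spans: list[dict]) -> bool:
    """Keep a trace if any span errored or total duration is too slow."""
    has_error = any(s["status"] == "ERROR" for s in spans)
    too_slow = sum(s["duration_ms"] for s in spans) > LATENCY_THRESHOLD_MS
    return has_error or too_slow

# Buffer spans by trace until each trace is complete, then decide once.
buffer: dict[str, list[dict]] = defaultdict(list)
for span in [
    {"trace_id": "a", "status": "OK", "duration_ms": 30},
    {"trace_id": "b", "status": "ERROR", "duration_ms": 20},
    {"trace_id": "a", "status": "OK", "duration_ms": 40},
]:
    buffer[span["trace_id"]].append(span)

for trace_id, spans in buffer.items():
    print(trace_id, "sampled" if decide(spans) else "dropped")
```

The key contrast with head-based sampling: the decision happens after the whole trace is buffered, so error and latency signals are available when choosing what to keep.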
Achieving end-to-end visibility for Python data pipelines is essential for ensuring quality and reliability in modern data architectures. This hands-on walkthrough from Elastic Observability Labs explains how to implement OpenTelemetry (OTEL) in your Python ETL scripts, covering automatic instrumentation, manual tracing, performance metrics, and anomaly-driven alerting, so you can proactively monitor, troubleshoot, and optimize your entire pipeline lifecycle on Elastic's platform.

https://www.elastic.co/observability-labs/blog/monitor-your-python-data-pipelines-with-otel
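
A minimal manual-instrumentation sketch with the OpenTelemetry Python SDK: the article wires exports to Elastic, while this uses a console exporter to stay self-contained, and the stage names and attributes are illustrative:

```python
# Minimal sketch of manually tracing ETL stages with the OpenTelemetry
# Python SDK. A console exporter keeps the snippet self-contained.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)
tracer = trace.get_tracer("etl.pipeline")

def extract() -> list[int]:
    # Each pipeline stage gets its own span with stage-level attributes.
    with tracer.start_as_current_span("extract") as span:
        rows = list(range(1000))
        span.set_attribute("etl.rows_read", len(rows))
        return rows

def transform(rows: list[int]) -> list[int]:
    with tracer.start_as_current_span("transform") as span:
        result = [r * 2 for r in rows]
        span.set_attribute("etl.rows_out", len(result))
        return result

# A parent span ties the whole pipeline run together.
with tracer.start_as_current_span("pipeline.run"):
    transform(extract())
```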