DevOps & SRE notes
12K subscribers
42 photos
19 files
2.5K links
Helpful articles and tools for DevOps & SRE

WhatsApp: https://whatsapp.com/channel/0029Vb79nmmHVvTUnc4tfp2F

For paid consultation (RU/EN), contact: @tutunak


All the ways to support the channel: https://telegra.ph/How-support-the-channel-02-19
Steampipe is an open-source tool that lets you instantly query cloud services like AWS, Azure and GCP with SQL. With 100+ plugins and built-in support for creating dashboards, Steampipe makes it trivial to connect live cloud configuration data with internal or external data sets and create security or compliance dashboards. We've enjoyed working with Steampipe and created several such dashboards with AWS cloud configurations.

https://github.com/turbot/steampipe
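
For a feel of the workflow, here is a minimal sketch that runs a Steampipe query from Python and prints the rows. It assumes the Steampipe CLI and its AWS plugin are installed and AWS credentials are configured; the aws_s3_bucket table comes from the AWS plugin, and the exact shape of the JSON output is hedged in the code.

```python
# Minimal sketch: run a Steampipe SQL query from Python and print the rows.
# Assumes the Steampipe CLI and its AWS plugin are installed and AWS
# credentials are configured.
import json
import subprocess

QUERY = "select name, region from aws_s3_bucket"

result = subprocess.run(
    ["steampipe", "query", QUERY, "--output", "json"],
    capture_output=True, text=True, check=True,
)

rows = json.loads(result.stdout)
if isinstance(rows, dict):        # some Steampipe versions wrap rows in an object
    rows = rows.get("rows", [])

for bucket in rows:
    print(f"{bucket['name']:<45} {bucket['region']}")
```
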
In a recent Dev Interrupted article, Kubernetes co-founder Brendan Burns discussed the origins and growth of the open-source project. Kubernetes, a container orchestrator, was born out of the need to simplify building, deploying, and maintaining distributed systems. Burns, along with co-founders Joe Beda and Craig McLuckie, was inspired by Google's internal Borg system and wanted to create something similar for the wider development community. Docker played a crucial role in popularizing containers, which paved the way for Kubernetes' success.

https://devinterrupted.substack.com/p/how-open-source-enabled-kubernetes
Jan Kammerath discusses the potential pitfalls of running Kubernetes and Kafka in a medium-sized software company. He shares a consulting experience in which the CEO of a software company asked for advice because of low availability (87%) and rising operational costs. The company had Kubernetes and Kafka in its infrastructure but struggled to operate them efficiently.

https://medium.com/@jankammerath/how-kubernetes-and-kafka-will-get-you-fired-a6dccbd36c77
This blog post discusses the growing trend of Large Language Models (LLMs) and their impact on various use cases. One specific application discussed is K8sGPT, an AI-based Site Reliability Engineer (SRE) that runs inside Kubernetes clusters. It scans, diagnoses, and triages issues using SRE experience codified into its analyzers. LocalAI, another project, is a drop-in replacement API for local CPU inferencing. Combining K8sGPT and LocalAI enables powerful SRE capabilities without relying on expensive GPUs.
https://itnext.io/k8sgpt-localai-unlock-kubernetes-superpowers-for-free-584790de9b65
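
A rough sketch of how the two could be wired together from a script, assuming a LocalAI server is already listening on localhost:8080 and k8sgpt is installed; the flag names and the model name are assumptions to verify against `k8sgpt auth add --help` for your version.

```python
# Sketch: point k8sgpt at a LocalAI server and run an analysis, so no GPU or
# hosted API is needed. Flag and model names are assumptions -- verify them
# against `k8sgpt auth add --help` for your version.
import subprocess

LOCALAI_URL = "http://localhost:8080/v1"  # assumed local LocalAI endpoint
MODEL = "ggml-gpt4all-j"                  # hypothetical model served by LocalAI

# Register LocalAI as the AI backend for k8sgpt.
subprocess.run(
    ["k8sgpt", "auth", "add", "--backend", "localai",
     "--baseurl", LOCALAI_URL, "--model", MODEL],
    check=True,
)

# Scan the cluster and let the local model explain the findings.
subprocess.run(["k8sgpt", "analyze", "--backend", "localai", "--explain"], check=True)
```
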
This article explores Kubernetes Resource Manager and Google's Config Connector, comparing them to Terraform, a popular infrastructure orchestration tool. Kubernetes has gained market dominance partly through its Custom Resource Definitions (CRDs), and Config Connector, a Kubernetes add-on, uses CRDs to manage Google Cloud resources, making it a potential replacement for Terraform in some workflows. However, the author's experiment shows that while Config Connector can deploy a Google Cloud landing zone, it has limitations compared to Terraform, particularly in handling interdependencies based on values that are unknown until a resource is created.

The author concludes by suggesting a hybrid approach: Terraform for platform-centric deployments and Config Connector for application-centric ones. Terraform's flexibility and provider support make it valuable for organizations operating across multiple clouds, while Config Connector shines where small amounts of infrastructure are deployed in support of Kubernetes-based services.

https://medium.com/cts-technologies/are-terraforms-days-numbered-a9a15ec0435a
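
To make the application-centric case concrete, here is a minimal sketch that declares a GCS bucket as a Config Connector custom resource and applies it with the official Kubernetes Python client. It assumes Config Connector is installed in the cluster and the namespace is bound to a Google Cloud project; the bucket name is hypothetical.

```python
# Sketch: create a GCS bucket declaratively by applying a Config Connector
# StorageBucket custom resource with the Kubernetes Python client
# (pip install kubernetes). Assumes Config Connector is installed and the
# namespace is bound to a Google Cloud project; the bucket name is hypothetical.
from kubernetes import client, config

config.load_kube_config()      # or config.load_incluster_config() inside a pod
api = client.CustomObjectsApi()

bucket = {
    "apiVersion": "storage.cnrm.cloud.google.com/v1beta1",
    "kind": "StorageBucket",
    "metadata": {"name": "example-app-assets"},   # hypothetical bucket name
    "spec": {"location": "EU"},
}

api.create_namespaced_custom_object(
    group="storage.cnrm.cloud.google.com",
    version="v1beta1",
    namespace="default",
    plural="storagebuckets",
    body=bucket,
)
```
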
K8sGPT gives Kubernetes Superpowers to everyone
k8sgpt is a tool for scanning your Kubernetes clusters and diagnosing and triaging issues in plain English. It has SRE experience codified into its analyzers and helps pull out the most relevant information and enrich it with AI.

https://k8sgpt.ai/
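
As a quick usage sketch, the snippet below runs an analysis against the current kube-context and prints one line per finding; the JSON field names are assumptions that may differ between k8sgpt versions.

```python
# Sketch: run `k8sgpt analyze` against the current kube-context and print a
# one-line summary per finding. The JSON field names below are assumptions
# and may differ between k8sgpt versions.
import json
import subprocess

result = subprocess.run(
    ["k8sgpt", "analyze", "--output", "json"],
    capture_output=True, text=True, check=True,
)

report = json.loads(result.stdout)
for item in report.get("results", []):          # assumed top-level key
    kind = item.get("kind", "?")
    name = item.get("name", "<unknown>")
    for err in item.get("error", []):           # assumed per-finding error list
        text = err.get("Text", str(err)) if isinstance(err, dict) else str(err)
        print(f"{kind}/{name}: {text}")
```
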
This post provides a guide to configuring and installing a multi-cluster observability solution for cloud environments such as AWS, Azure, and Google Cloud. The stack includes Grafana, Prometheus, Thanos, and Loki for monitoring applications and microservices across clusters. The guide assumes prior experience with AWS S3, IAM, EKS, and Kubernetes. It covers creating IAM policies and roles, installing Helm and Bitnami's Helm charts, and setting up EKS along with the AWS CLI, eksctl, and kubectl. It then walks through metrics monitoring with kube-prometheus and Thanos and log monitoring with Grafana Loki and Promtail.

https://medium.com/@bahungxt/multi-cluster-observability-solution-with-prometheus-thanos-loki-and-grafana-5d5be42635e8
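
A compressed sketch of the metrics and logs install steps, using Bitnami's public charts as the post does; the --set value key for enabling the Thanos sidecar is an assumption to check against the chart's values.yaml.

```python
# Sketch of the core install steps: add Bitnami's chart repository, install
# kube-prometheus with the Thanos sidecar enabled, then Grafana Loki for logs.
# The --set value key is an assumption -- check it against the chart's values.yaml.
import subprocess

def sh(*args: str) -> None:
    print("+", " ".join(args))
    subprocess.run(args, check=True)

sh("helm", "repo", "add", "bitnami", "https://charts.bitnami.com/bitnami")
sh("helm", "repo", "update")

# Prometheus with a Thanos sidecar, so each cluster can ship metrics to object storage.
sh("helm", "install", "kube-prometheus", "bitnami/kube-prometheus",
   "--namespace", "monitoring", "--create-namespace",
   "--set", "prometheus.thanos.create=true")        # assumed value key

# Loki (plus Promtail) for log collection.
sh("helm", "install", "loki", "bitnami/grafana-loki", "--namespace", "monitoring")
```
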
In the second part of the DevOps project, the focus is on deploying ArgoCD and monitoring tools like Prometheus and Grafana to a Kubernetes cluster. The post covers installing ArgoCD, deploying Prometheus with Helm charts, setting up monitoring for ArgoCD, visualizing ArgoCD metrics with Grafana dashboards, and continuously deploying applications with ArgoCD. K8sGPT is recommended as a useful tool to analyze the cluster for errors and potential issues. The next post will cover configuring Alertmanager for notifications, setting up Slack alerts, and installing Loki for logs to round out the monitoring solution.

https://blog.devgenius.io/optimizing-kubernetes-deployments-with-argocd-and-prometheus-aa86c11e2bba
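
One way to express the continuous-deployment piece is an ArgoCD Application resource; the sketch below creates one with the Kubernetes Python client (used here for illustration, the post may use kubectl or the argocd CLI), and the repository URL and manifest path are hypothetical placeholders.

```python
# Sketch: register an application with ArgoCD by creating an Application
# custom resource; ArgoCD then keeps the cluster in sync with the Git repo.
# The repository URL and manifest path are hypothetical placeholders.
from kubernetes import client, config

config.load_kube_config()
api = client.CustomObjectsApi()

app = {
    "apiVersion": "argoproj.io/v1alpha1",
    "kind": "Application",
    "metadata": {"name": "demo-app", "namespace": "argocd"},
    "spec": {
        "project": "default",
        "source": {
            "repoURL": "https://github.com/example/demo-app.git",  # hypothetical
            "path": "k8s",                                         # hypothetical
            "targetRevision": "HEAD",
        },
        "destination": {
            "server": "https://kubernetes.default.svc",
            "namespace": "demo",
        },
        "syncPolicy": {"automated": {"prune": True, "selfHeal": True}},
    },
}

api.create_namespaced_custom_object(
    group="argoproj.io", version="v1alpha1",
    namespace="argocd", plural="applications", body=app,
)
```
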
Terraform 1.5 has been released. Importing existing infrastructure into the Terraform state becomes easier with the new config-driven import blocks, and check blocks add built-in assertions.
https://www.hashicorp.com/blog/terraform-1-5-brings-config-driven-import-and-checks
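
For context, config-driven import is driven by an `import` block in configuration instead of the `terraform import` CLI; the tiny sketch below writes such a block and lets Terraform 1.5 generate the matching resource configuration (the bucket name is a hypothetical placeholder).

```python
# Sketch: Terraform 1.5 config-driven import. Write an `import` block, then let
# `terraform plan -generate-config-out` draft the matching resource config.
# The bucket name is a hypothetical placeholder for an existing resource.
import pathlib
import subprocess

IMPORT_BLOCK = """
import {
  to = aws_s3_bucket.assets
  id = "example-existing-bucket"  # hypothetical bucket that already exists
}
"""

pathlib.Path("imports.tf").write_text(IMPORT_BLOCK)

subprocess.run(["terraform", "init"], check=True)
subprocess.run(["terraform", "plan", "-generate-config-out=generated.tf"], check=True)
# Review generated.tf, then run `terraform apply` to record the bucket in state.
```
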
Streaming alert evaluation scales better than the traditional approach of polling a time-series database, overcoming limits on high dimensionality and cardinality and giving engineers more reliable, real-time alerting. The transition to the streaming path has opened the door to new use cases and allows multiple platform teams at Netflix to generate and maintain alerts programmatically without affecting other users. The streaming paradigm may also help tackle correlation problems in observability and create new opportunities in neighbouring verticals such as logs and traces.

https://netflixtechblog.com/improved-alerting-with-atlas-streaming-eval-e691c60dc61e
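
To illustrate the evaluation model (a toy sketch, not Netflix's Atlas implementation): instead of periodically querying a time-series database, the alert condition is applied to every data point as it streams in.

```python
# Toy sketch of streaming alert evaluation: instead of polling a TSDB on a
# schedule, each incoming data point is pushed through the alert condition
# immediately. Illustration of the model only, not Netflix's Atlas code.
from collections import deque
from typing import Iterable

def stream_alerts(points: Iterable[float], threshold: float, window: int = 5):
    """Yield an event whenever the rolling average of the last `window`
    data points crosses `threshold` in either direction."""
    buf: deque[float] = deque(maxlen=window)
    firing = False
    for value in points:
        buf.append(value)
        avg = sum(buf) / len(buf)
        if avg > threshold and not firing:
            firing = True
            yield f"ALERT: rolling avg {avg:.1f} > {threshold}"
        elif avg <= threshold and firing:
            firing = False
            yield f"RESOLVED: rolling avg {avg:.1f} <= {threshold}"

# Example: CPU utilisation samples arriving as a stream.
for event in stream_alerts([40, 55, 80, 95, 97, 96, 60, 30, 20], threshold=75):
    print(event)
```
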