The article "Running Azure Kubernetes with Nvidia H100 GPUs" on Enix's blog provides an in-depth look at utilizing Nvidia H100 GPUs within Azure Kubernetes Service (AKS). It covers the setup process, configuration details, and performance benefits of integrating these powerful GPUs into your Kubernetes clusters. The article also explores real-world applications and use cases, highlighting how the combination of Azure Kubernetes and Nvidia H100 GPUs can significantly enhance computational workloads, particularly in fields such as AI and machine learning.
https://enix.io/en/blog/azure-kubernetes-gpu-h100/
enix.io
AI & Kubernetes: On-Demand NVIDIA H100 GPUs on Azure AKS
Overcoming the Challenges of Deploying H100 GPUs in Azure Kubernetes Service
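For a sense of what consuming such a node pool looks like from the Kubernetes side, here is a minimal sketch using the official kubernetes Python client: it schedules a one-shot CUDA container onto a GPU node pool and requests a single nvidia.com/gpu. The node-pool name, image tag, and namespace are placeholders, and it assumes the NVIDIA device plugin is already running on the nodes (not the article's exact setup).

```python
# Sketch: schedule a CUDA workload onto an AKS GPU node pool.
# Assumes kubeconfig access and that the NVIDIA device plugin is running,
# so nodes advertise the nvidia.com/gpu resource. Names are illustrative.
from kubernetes import client, config

config.load_kube_config()  # or load_incluster_config() inside the cluster

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="h100-smoke-test"),
    spec=client.V1PodSpec(
        restart_policy="Never",
        # AKS labels nodes with their node pool name; "gpupool" is a placeholder.
        node_selector={"kubernetes.azure.com/agentpool": "gpupool"},
        containers=[
            client.V1Container(
                name="cuda",
                image="nvcr.io/nvidia/cuda:12.3.2-base-ubuntu22.04",
                command=["nvidia-smi"],
                resources=client.V1ResourceRequirements(
                    limits={"nvidia.com/gpu": "1"}  # request one GPU
                ),
            )
        ],
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```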
The article "LoxiLB Cluster Networking: Elevating K8s Networking Capabilities" on LoxiLB's blog discusses the advanced networking solutions provided by LoxiLB for Kubernetes clusters. It highlights how LoxiLB enhances Kubernetes networking capabilities by offering high-performance load balancing, improved scalability, and robust traffic management. The article details the features and benefits of using LoxiLB in a Kubernetes environment, demonstrating how it can optimize network operations and ensure reliable, efficient service delivery.
https://www.loxilb.io/post/loxilb-cluster-networking-elevating-k8s-networking-capabilities
LoxiLB
LoxiLB Cluster Networking: Elevating Kubernetes Networking capabilities
Since the inception of microservices and distributed applications, Kubernetes has reigned supreme, providing a robust platform for deploying, managing, and scaling containerized applications. At the core of Kubernetes lies Kubernetes cluster networking, a sophisticated…
Managing AWS EKS load balancers effectively is key to ensuring reliable and efficient application performance. This article from Towards Data Science provides expert tips and best practices for handling load balancers in AWS EKS. Learn how to optimize configuration, manage costs, and enhance the performance of your Kubernetes applications with professional load balancer management techniques.
https://towardsdatascience.com/manage-your-aws-eks-load-balancer-like-a-pro-7ca599e081ca
Medium
Manage Your AWS EKS Load Balancer Like a Pro
AWS Load Balancer advanced tips & tricks
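Much of "pro" load balancer management on EKS comes down to driving the AWS Load Balancer Controller through Service annotations instead of hand-crafting load balancers. A minimal sketch with the kubernetes Python client, assuming the controller is installed; the NLB-in-IP-mode, internet-facing combination shown here is one common configuration, not necessarily the article's recommendation.

```python
# Sketch: a Service that asks the AWS Load Balancer Controller (not the legacy
# in-tree controller) for an internet-facing NLB with IP targets.
from kubernetes import client, config

config.load_kube_config()

svc = client.V1Service(
    metadata=client.V1ObjectMeta(
        name="web",
        annotations={
            "service.beta.kubernetes.io/aws-load-balancer-type": "external",
            "service.beta.kubernetes.io/aws-load-balancer-nlb-target-type": "ip",
            "service.beta.kubernetes.io/aws-load-balancer-scheme": "internet-facing",
        },
    ),
    spec=client.V1ServiceSpec(
        type="LoadBalancer",
        selector={"app": "web"},  # must match your Deployment's pod labels
        ports=[client.V1ServicePort(port=80, target_port=8080)],
    ),
)

client.CoreV1Api().create_namespaced_service(namespace="default", body=svc)
```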
A utility for generating Mermaid diagrams from Terraform configurations
https://github.com/RoseSecurity/Terramaid
GitHub
GitHub - RoseSecurity/Terramaid: A utility for generating Mermaid diagrams from Terraform configurations
A utility for generating Mermaid diagrams from Terraform configurations - RoseSecurity/Terramaid
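Terramaid itself is a ready-made CLI; purely to illustrate the underlying idea of turning Terraform's dependency graph into Mermaid syntax, here is a rough Python sketch that rewrites the DOT edges emitted by terraform graph into a Mermaid flowchart. This is not how Terramaid is implemented.

```python
# Sketch: convert `terraform graph` (DOT) output into a Mermaid flowchart.
# Run inside an initialized Terraform working directory.
import re
import subprocess

dot = subprocess.run(
    ["terraform", "graph"], capture_output=True, text=True, check=True
).stdout

edge_re = re.compile(r'"([^"]+)"\s*->\s*"([^"]+)"')  # DOT edges: "a" -> "b"

def mermaid_node(label: str) -> str:
    label = label.replace("[root] ", "")              # drop Terraform's prefix
    node_id = re.sub(r"[^A-Za-z0-9_]", "_", label)    # Mermaid-safe identifier
    return f'{node_id}["{label}"]'

lines = ["flowchart LR"]
for src, dst in edge_re.findall(dot):
    lines.append(f"    {mermaid_node(src)} --> {mermaid_node(dst)}")

print("\n".join(lines))
```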
♾️ Infisical is the open-source secret management platform: Sync secrets across your team/infrastructure, prevent secret leaks, and manage internal PKI
https://github.com/Infisical/infisical
GitHub
GitHub - Infisical/infisical: Infisical is the open-source platform for secrets, certificates, and privileged access management.
Infisical is the open-source platform for secrets, certificates, and privileged access management. - Infisical/infisical
In modern infrastructure-as-code (IaC) practices, the efficiency of Terraform workflows is crucial for maintaining robust and scalable systems. This blog post explores the nuanced strategies of applying Terraform changes either before or after merging code, weighing the pros and cons of each approach. Dive into the best practices and key considerations for mastering your Terraform workflows to optimize your IaC processes.
https://terramate.io/rethinking-iac/mastering-terraform-workflows-apply-before-merge-vs-apply-after-merge
terramate.io
Mastering Terraform Workflows: apply-before-merge vs apply-after-merge
Discover the two main Terraform and OpenTofu workflows: apply-before-merge and apply-after-merge, and learn why apply-after-merge is likely the better choice.
Building an effective Terraform development pipeline is essential for automating infrastructure management and ensuring consistent deployments. This blog post delves into the components and best practices for setting up a robust Terraform pipeline, from version control and testing to continuous integration and deployment. Enhance your Terraform workflows and streamline your infrastructure as code processes with these practical insights and strategies.
https://mycloudrevolution.com/2024/05/23/terraform-development-pipeline
My Cloud-(R)Evolution
Terraform Development Pipeline
The purpose of a development pipeline is to deploy with confidence and therefore at high frequencies.
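The canonical stages (init, fmt check, validate, plan, gated apply) can be prototyped as a single local script before wiring them into a CI system. A CI-agnostic sketch follows, with an interactive prompt standing in for whatever approval gate your pipeline would actually use.

```python
# Sketch: the classic Terraform pipeline stages as one local script.
# fmt check -> validate -> plan -> (manually confirmed) apply of the saved plan.
import subprocess
import sys

def run(*args: str) -> None:
    print(f"$ {' '.join(args)}")
    subprocess.run(args, check=True)

run("terraform", "init", "-input=false")
run("terraform", "fmt", "-check", "-recursive")
run("terraform", "validate")
run("terraform", "plan", "-input=false", "-out=tfplan")

# In CI this gate would be a manual approval or a policy check instead of input().
if input("Apply the saved plan? [y/N] ").strip().lower() != "y":
    sys.exit("Apply skipped.")

run("terraform", "apply", "-input=false", "tfplan")
```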
Event-driven architecture (EDA) is a powerful design pattern that enhances the responsiveness and scalability of modern applications. This blog post provides an in-depth look at various EDA patterns, highlighting their benefits, use cases, and implementation strategies. Discover how to leverage EDA to create more efficient, resilient, and decoupled systems that can better handle real-time data and complex workflows.
https://newsletter.simpleaws.dev/p/event-driven-architecture-patterns
Simple AWS
Event-Driven Architecture Patterns Deep Dive
A deep dive on Strangler Fig, Event Sourcing and Command-Query Responsibility Segregation (CQRS), their benefits, and their tradeoffs.
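Of the patterns covered, Event Sourcing is the easiest to show in a few lines: state is never stored directly but rebuilt by replaying an append-only event log, with queries (the CQRS read side) folding over that log. A toy, framework-free sketch:

```python
# Toy event sourcing: a bank account whose balance is derived by replaying events.
from dataclasses import dataclass
from typing import List

@dataclass(frozen=True)
class Event:
    kind: str      # "deposited" or "withdrawn"
    amount: int

class Account:
    def __init__(self) -> None:
        self.events: List[Event] = []   # the append-only log is the source of truth

    # Commands append events; they never mutate a stored balance.
    def deposit(self, amount: int) -> None:
        self.events.append(Event("deposited", amount))

    def withdraw(self, amount: int) -> None:
        if amount > self.balance():
            raise ValueError("insufficient funds")
        self.events.append(Event("withdrawn", amount))

    # Queries (the "read side" in CQRS terms) fold over the event log.
    def balance(self) -> int:
        return sum(e.amount if e.kind == "deposited" else -e.amount
                   for e in self.events)

acct = Account()
acct.deposit(100)
acct.withdraw(30)
print(acct.balance())  # 70 -- replayable from acct.events at any time
```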
Choosing the right continuous delivery (CD) tool is vital for the success of your DevOps practices. This blog post compares Argo CD and Flux CD, two popular GitOps tools, examining their features, strengths, and weaknesses. Gain insights into how each tool can streamline your deployment processes and help you decide which is best suited for your project's needs.
https://blog.aenix.io/argo-cd-vs-flux-cd-7b1d67a246ca
Medium
Argo CD vs Flux CD
I’ve been seeing debates about two popular GitOps tools. I use both and I want to share with you my opinion and use cases.
NGINX Gateway Fabric provides an implementation for the Gateway API using NGINX as the data plane.
https://github.com/nginxinc/nginx-gateway-fabric
GitHub
GitHub - nginx/nginx-gateway-fabric: NGINX Gateway Fabric provides an implementation for the Gateway API using NGINX as the data…
NGINX Gateway Fabric provides an implementation for the Gateway API using NGINX as the data plane. - nginx/nginx-gateway-fabric
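For readers new to the Gateway API, the core objects are a Gateway bound to a GatewayClass and HTTPRoutes attached to it. Below is a sketch that creates both as custom resources with the kubernetes Python client; it assumes the Gateway API CRDs and NGINX Gateway Fabric are installed, the GatewayClass name "nginx" follows its documentation, and the hostname and backend Service are placeholders.

```python
# Sketch: a Gateway + HTTPRoute pair as Gateway API custom resources.
from kubernetes import client, config

config.load_kube_config()
api = client.CustomObjectsApi()
GROUP, VERSION, NS = "gateway.networking.k8s.io", "v1", "default"

gateway = {
    "apiVersion": f"{GROUP}/{VERSION}",
    "kind": "Gateway",
    "metadata": {"name": "web-gateway"},
    "spec": {
        "gatewayClassName": "nginx",   # NGINX Gateway Fabric's GatewayClass
        "listeners": [{"name": "http", "port": 80, "protocol": "HTTP"}],
    },
}

route = {
    "apiVersion": f"{GROUP}/{VERSION}",
    "kind": "HTTPRoute",
    "metadata": {"name": "web-route"},
    "spec": {
        "parentRefs": [{"name": "web-gateway"}],
        "hostnames": ["app.example.com"],
        "rules": [{
            "matches": [{"path": {"type": "PathPrefix", "value": "/"}}],
            "backendRefs": [{"name": "web", "port": 8080}],
        }],
    },
}

api.create_namespaced_custom_object(GROUP, VERSION, NS, "gateways", gateway)
api.create_namespaced_custom_object(GROUP, VERSION, NS, "httproutes", route)
```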
The Feature-rich, Kubernetes-native, Next-Generation API Gateway Built on Envoy
https://github.com/solo-io/gloo
GitHub
GitHub - solo-io/gloo: The Cloud-Native API Gateway and AI Gateway
The Cloud-Native API Gateway and AI Gateway. Contribute to solo-io/gloo development by creating an account on GitHub.
A file server that supports static serving, uploading, searching, accessing control, webdav...
https://github.com/sigoden/dufs
GitHub
GitHub - sigoden/dufs: A file server that supports static serving, uploading, searching, accessing control, webdav...
A file server that supports static serving, uploading, searching, accessing control, webdav... - sigoden/dufs
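Since dufs speaks plain HTTP, a client can be a couple of requests calls. The sketch below assumes a local dufs instance on its default port with uploads enabled, and the file paths are made up.

```python
# Sketch: talking to a dufs instance over plain HTTP.
# Assumes dufs is serving http://localhost:5000 with uploads enabled.
import requests

BASE = "http://localhost:5000"

# Upload: dufs accepts an HTTP PUT to the target path.
with open("report.txt", "rb") as f:
    resp = requests.put(f"{BASE}/backups/report.txt", data=f)
    resp.raise_for_status()

# Download is a normal GET.
resp = requests.get(f"{BASE}/backups/report.txt")
resp.raise_for_status()
print(resp.text[:200])
```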
Integrating BGP, Cilium, and FRR can revolutionize your network's performance and scalability. This blog post explores how combining these technologies at the top-of-rack (ToR) level can enhance network efficiency and security. Learn about the benefits, implementation strategies, and real-world applications of using BGP, Cilium, and FRR in your infrastructure.
https://blog.miraco.la/bgp-cilium-and-frr-top-of-rack-for-all
Jay Miracola - Clouds Are Metal
BGP, Cilium, and FRR: Top of Rack For All!
I recently came across a LinkedIn post talking about the above concepts and how trivial they are to set up. The goal: use Cilium's BGP capabilities to either expose a service or export the pod CIDR and advertise its range to a peer. We are all on different...
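On the Cilium side, that setup boils down to a BGP peering policy telling the agent which ASN to speak and that the pod CIDR should be advertised to the ToR peer (FRR, in the post). A rough sketch of such a resource created with the kubernetes Python client; the fields follow the cilium.io/v2alpha1 BGP control-plane API, and every ASN, address, and label is a placeholder, so check the Cilium docs for your version.

```python
# Sketch: a CiliumBGPPeeringPolicy that advertises the pod CIDR to a ToR peer.
# All ASNs, addresses, and labels are placeholders; field names follow the
# cilium.io/v2alpha1 BGP control-plane API and may differ in newer releases.
from kubernetes import client, config

config.load_kube_config()

policy = {
    "apiVersion": "cilium.io/v2alpha1",
    "kind": "CiliumBGPPeeringPolicy",
    "metadata": {"name": "rack1-tor"},
    "spec": {
        # Only nodes in this rack peer with this ToR switch / FRR instance.
        "nodeSelector": {"matchLabels": {"rack": "rack1"}},
        "virtualRouters": [{
            "localASN": 64512,
            "exportPodCIDR": True,             # advertise this node's pod CIDR
            "neighbors": [{
                "peerAddress": "10.0.0.1/32",  # the FRR / ToR address
                "peerASN": 64513,
            }],
        }],
    },
}

client.CustomObjectsApi().create_cluster_custom_object(
    "cilium.io", "v2alpha1", "ciliumbgppeeringpolicies", policy
)
```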
Managing telemetry data efficiently is crucial for maintaining application performance and reducing costs. This blog post offers practical tips and strategies to minimize the amount of telemetry data generated by your app without compromising on essential insights. Explore methods to optimize data collection, enhance performance, and achieve cost-effective monitoring.
https://brightinventions.pl/blog/how-to-reduce-telemetry-data-produced-by-your-app/
Bright Inventions
How to Reduce Telemetry Data Produced by Your App
In previous articles, we discussed how to connect your application to Grafana using OpenTelemetry: https://brightinventions.pl/blog/how-to…
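One of the simplest levers for cutting telemetry volume is head sampling in the SDK. The sketch below uses the OpenTelemetry Python SDK to keep roughly 10% of new traces while respecting the parent's decision for propagated ones; the article may well focus on other techniques as well, such as filtering in the collector.

```python
# Sketch: keep ~10% of traces via head sampling in the OpenTelemetry SDK.
# ParentBased makes child spans follow the sampling decision of their parent.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter
from opentelemetry.sdk.trace.sampling import ParentBased, TraceIdRatioBased

provider = TracerProvider(sampler=ParentBased(TraceIdRatioBased(0.10)))
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer(__name__)
for i in range(100):
    with tracer.start_as_current_span(f"request-{i}"):
        pass  # only ~10 of these root spans will actually be exported
```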
A multi-cluster batch queuing system for high-throughput workloads on Kubernetes.
https://github.com/armadaproject/armada
GitHub
GitHub - armadaproject/armada: A multi-cluster batch queuing system for high-throughput workloads on Kubernetes.
A multi-cluster batch queuing system for high-throughput workloads on Kubernetes. - armadaproject/armada
Build & ship backends without writing any infrastructure files.
https://github.com/shuttle-hq/shuttle
GitHub
GitHub - shuttle-hq/shuttle: Build & ship backends without writing any infrastructure files.
Build & ship backends without writing any infrastructure files. - shuttle-hq/shuttle
Request-based autoscaling in Kubernetes allows for dynamic scaling of applications based on incoming traffic, including the ability to scale down to zero when no requests are present. This article by Daniele Polencic explores the concept of request-based autoscaling in Kubernetes, detailing how it works, the benefits, and implementation strategies. Learn how to efficiently manage resources by automatically scaling your applications in response to demand.
https://dev.to/danielepolencic/request-based-autoscaling-in-kubernetes-scaling-to-zero-2i73
DEV Community
Request-based autoscaling in Kubernetes: scaling to zero
TL;DR: In this article, you will learn how to monitor the HTTP requests to your apps in Kubernetes...
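As one concrete way to get request-based scaling down to zero, here is a sketch of a KEDA ScaledObject driven by a Prometheus query over the request rate. The deployment name, query, and threshold are placeholders, and the article's own approach may differ (for instance KEDA's HTTP add-on or Knative).

```python
# Sketch: a KEDA ScaledObject that scales a Deployment between 0 and 10 replicas
# based on the incoming request rate reported by Prometheus.
from kubernetes import client, config

config.load_kube_config()

scaled_object = {
    "apiVersion": "keda.sh/v1alpha1",
    "kind": "ScaledObject",
    "metadata": {"name": "web-request-scaler"},
    "spec": {
        "scaleTargetRef": {"name": "web"},   # the Deployment to scale
        "minReplicaCount": 0,                # allow scale to zero when idle
        "maxReplicaCount": 10,
        "triggers": [{
            "type": "prometheus",
            "metadata": {
                "serverAddress": "http://prometheus.monitoring:9090",
                "query": 'sum(rate(http_requests_total{service="web"}[2m]))',
                "threshold": "50",           # target requests/sec per replica
            },
        }],
    },
}

client.CustomObjectsApi().create_namespaced_custom_object(
    "keda.sh", "v1alpha1", "default", "scaledobjects", scaled_object
)
```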
Analyzing volatile memory on a Google Kubernetes Engine (GKE) node is crucial for understanding performance issues and security vulnerabilities. This article delves into the methods and tools used by Spotify's engineering team to examine and manage volatile memory effectively on GKE nodes, offering valuable insights and practical techniques for improving system reliability and performance.
https://engineering.atspotify.com/2023/06/analyzing-volatile-memory-on-a-google-kubernetes-engine-node/
Spotify Engineering
Analyzing Volatile Memory on a Google Kubernetes Engine Node
TL;DR At Spotify, we run containerized workloads in production across our entire organization in five regions where our main production workloads are in Google Kubernetes Engine (GKE) on Google Cloud Platform (GCP). If we detect suspicious behavior in our…