A new alternative to Docker Desktop has been released by Red Hat: https://developers.redhat.com/articles/2023/05/23/podman-desktop-now-generally-available#local_kubernetes_with_kind
Red Hat Developer
Podman Desktop 1.0: Local container development made easy | Red Hat Developer
As containerization continues to gain popularity in the world of enterprise software development, there is also growing demand for tools and technologies that make container management more accessible
A web-based UI for deploying and managing applications in Kubernetes clusters
https://github.com/vmware-tanzu/kubeapps
GitHub
GitHub - vmware-tanzu/kubeapps: A web-based UI for deploying and managing applications in Kubernetes clusters
In this blog post, Ahmet Alp Balkan explains the peculiar and undocumented behavior of file changes in Kubernetes Secret and ConfigMap volumes when using the inotify(7) syscall. He highlights that typical file watch events like IN_MODIFY or IN_CLOSE_WRITE don't occur for files in these volumes. Instead, only the IN_DELETE_SELF event is received, requiring code to handle re-establishing the monitor each time a file is updated.
Balkan discusses the resilient file reloads from disk and the AtomicWriter algorithm used by kubelet for atomic and consistent updates to Secret/ConfigMap volumes. He explains the file structure in a mounted Secret/ConfigMap volume and the reason behind receiving only the IN_DELETE_SELF event.
To handle this behavior, Balkan suggests mounting ConfigMaps/Secrets as directories, starting inotify watches on individual files, avoiding the use of IN_DONT_FOLLOW option, handling inotify deletion events, re-establishing inotify watches when receiving deletion events, and testing the file reloading logic on Kubernetes. He also mentions opening an issue to document this behavior in the official Kubernetes documentation.
https://ahmet.im/blog/kubernetes-inotify/index.html
Ahmet Alp Balkan
Pitfalls reloading files from Kubernetes Secret & ConfigMap volumes
Files on Kubernetes Secret and ConfigMap volumes work in peculiar and undocumented ways when it comes to watching changes to these files with the inotify(7) syscall. Your typical file watch that works outside Kubernetes might not work as you expect...
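The AtomicWriter behavior Balkan describes can be sketched in a few lines of Python. This is a simplified stand-in for kubelet's real implementation (directory names and the volume layout here are illustrative), but it shows why a watched file is never modified in place: the new payload lands in a fresh hidden directory, the `..data` symlink is swapped atomically, and the old directory is deleted, which is what surfaces to inotify as IN_DELETE_SELF rather than IN_MODIFY.

```python
import os, shutil, tempfile

def atomic_update(volume, key, value):
    """Mimic kubelet's AtomicWriter: write the payload into a fresh
    ..<timestamp>-style directory, atomically swap the ..data symlink,
    then delete the old directory (that deletion is what produces the
    IN_DELETE_SELF event on the file you were watching)."""
    data_link = os.path.join(volume, "..data")
    old_dir = os.readlink(data_link) if os.path.islink(data_link) else None
    new_dir = tempfile.mkdtemp(prefix="..", dir=volume)
    with open(os.path.join(new_dir, key), "w") as f:
        f.write(value)
    tmp_link = os.path.join(volume, "..data_tmp")
    os.symlink(os.path.basename(new_dir), tmp_link)
    os.rename(tmp_link, data_link)  # atomic swap of the ..data symlink
    if old_dir:
        shutil.rmtree(os.path.join(volume, old_dir))

volume = tempfile.mkdtemp()
atomic_update(volume, "config.yaml", "v1")
# the user-visible path is itself a symlink routed through ..data
os.symlink(os.path.join("..data", "config.yaml"),
           os.path.join(volume, "config.yaml"))
path = os.path.join(volume, "config.yaml")
atomic_update(volume, "config.yaml", "v2")
print(open(path).read())  # -> v2 (same path, brand-new inode underneath)
```

Because the path stays stable while the inode changes, a naive watch on the file itself goes stale after every update, hence the advice to watch the directory and re-arm on deletion events.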
An interesting article about loveholidays' experience with Linkerd: https://tech.loveholidays.com/linkerd-at-loveholidays-our-journey-to-a-production-service-mesh-9a6cd478d395
Medium
Linkerd at loveholidays — Our journey to a production service mesh
Explore loveholidays’ journey to production with Linkerd to solve our uniform metrics challenge.
Kubernetes v1.25 has introduced the Container Checkpointing API as an alpha feature, allowing users to backup and restore containers without stopping them. This feature is primarily aimed at forensic analysis but can also be used for general backup and restore purposes. To set up the feature, a Kubernetes cluster (v1.25+) and container runtime supporting container checkpointing are required. Currently, only CRI-O supports checkpointing, with containerd support expected soon.
The checkpointing API is exposed on the kubelet of each cluster node. To create a checkpoint, you need to have a running Pod and make a request to the kubelet directly. Once the checkpoint has been created, you can analyze the contents of the archive or restore the container from the archive by creating an image from the checkpoint and deploying a new Pod using that image.
While the feature is usable, it lacks some essential functionality, such as native restore capabilities and support from all major container runtimes. Users are advised to be aware of its limitations before enabling it in production or development environments.
https://martinheinz.dev/blog/85
martinheinz.dev
Backup-and-Restore of Containers with Kubernetes Checkpointing API
Kubernetes v1.25 introduced Container Checkpointing API as an alpha feature. This provides a way to backup-and-restore containers running in Pods, wit...
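The checkpoint request flow can be sketched as a couple of shell commands. These are illustrative only: the Pod/container names and certificate paths are assumptions, and the endpoint requires a v1.25+ cluster with the ContainerCheckpoint feature gate enabled and a runtime that supports checkpointing (currently CRI-O).

```shell
# Checkpoint the container "webserver" in Pod "webserver" (namespace
# "default") by calling the kubelet API on the node directly:
curl -sk -X POST \
  --cert /etc/kubernetes/pki/apiserver-kubelet-client.crt \
  --key  /etc/kubernetes/pki/apiserver-kubelet-client.key \
  "https://localhost:10250/checkpoint/default/webserver/webserver"

# The runtime writes the resulting archive to the kubelet's checkpoint
# directory on that node, from where it can be inspected or turned into
# an image:
ls /var/lib/kubelet/checkpoints/
```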
modern full-featured open source secure mail server for low-maintenance self-hosted email
https://github.com/mjl-/mox
GitHub
GitHub - mjl-/mox: modern full-featured open source secure mail server for low-maintenance self-hosted email
👍2
An interesting article describing how a DNS query works in Kubernetes: https://www.nslookup.io/learning/the-life-of-a-dns-query-in-kubernetes/
NsLookup.io
The life of a DNS query in Kubernetes
In Kubernetes, DNS queries follow a specific path to resolve the IP address of a hostname. Here are all the steps and components it goes through.
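The core of the resolution path the article walks through is the Pod's /etc/resolv.conf: Kubernetes sets ndots:5 plus a namespace-specific search list, so short names are expanded against the search domains before being tried as absolute. A minimal sketch of that expansion logic (the search list below assumes a Pod in the default namespace):

```python
# Candidate query order implied by a Pod's resolv.conf (ndots:5 + search
# domains). Names with fewer than NDOTS dots try the search list first;
# this is also why lookups of external names like example.com incur
# several cluster-internal queries before the absolute one.
NDOTS = 5
SEARCH = ["default.svc.cluster.local", "svc.cluster.local", "cluster.local"]

def candidate_queries(name: str) -> list[str]:
    if name.endswith("."):                # fully qualified: one lookup only
        return [name]
    candidates = []
    if name.count(".") < NDOTS:           # relative first, absolute last
        candidates += [f"{name}.{d}." for d in SEARCH]
        candidates.append(name + ".")
    else:                                 # absolute first, then search list
        candidates.append(name + ".")
        candidates += [f"{name}.{d}." for d in SEARCH]
    return candidates

print(candidate_queries("backend"))
# -> ['backend.default.svc.cluster.local.', 'backend.svc.cluster.local.',
#     'backend.cluster.local.', 'backend.']
```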
Within any organization, API producers and consumers need to stay in sync about the schemas used for communication between them. As the number of APIs and of producers and consumers grows, what may start as simply passing schemas around between teams will begin to hit scaling challenges.
An API/Schema registry - stores APIs and Schemas.
https://github.com/apicurio/apicurio-registry
GitHub
GitHub - Apicurio/apicurio-registry: An API/Schema registry - stores APIs and Schemas.
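To make the producer/consumer contract concrete, here is a deliberately minimal in-memory sketch of what a registry enforces: versioned schema storage behind a compatibility gate. This is not Apicurio's API or its compatibility model; the single rule here (a new version must keep every previously required field) is an invented stand-in.

```python
# Toy schema registry: subjects map to ordered schema versions, and a
# registration is rejected if it would break existing consumers.
class SchemaRegistry:
    def __init__(self):
        self._versions: dict[str, list[dict]] = {}

    def register(self, subject: str, schema: dict) -> int:
        versions = self._versions.setdefault(subject, [])
        if versions and not self._backward_compatible(versions[-1], schema):
            raise ValueError(f"{subject}: schema breaks existing consumers")
        versions.append(schema)
        return len(versions)              # 1-based version number

    def latest(self, subject: str) -> dict:
        return self._versions[subject][-1]

    @staticmethod
    def _backward_compatible(old: dict, new: dict) -> bool:
        # every field the old schema required must still exist
        return set(old["required"]) <= set(new["fields"])

reg = SchemaRegistry()
reg.register("orders", {"fields": ["id", "total"], "required": ["id"]})
reg.register("orders", {"fields": ["id", "total", "coupon"], "required": ["id"]})
print(reg.latest("orders")["fields"])  # -> ['id', 'total', 'coupon']
```

A real registry adds persistence, richer compatibility modes (backward, forward, full), and artifact metadata on top of this same basic shape.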
Enterprises now often use event streaming as the source of truth and as an information-sharing mechanism in microservices architectures. This creates the need to standardize event types and share those standards across the enterprise. Event schema registries are commonly deployed but the existing offerings tend to be specialized to a single broker such as Apache Kafka or Azure Event Hub. They also fall short of conveying rich documentation about event types that goes beyond simple schema definitions.
EventCatalog is an open-source project that provides something we often see businesses building for themselves: a widely accessible repository of documentation for events and schemas. These describe the role the events play in the business, where they belong in a business domain model and which services subscribe and publish them. If you're looking for a way to publish event documentation to your organization, this tool might save you the trouble of building it yourself.
https://github.com/boyney123/eventcatalog
GitHub
GitHub - boyney123/eventcatalog: Discover, Explore and Document your Event Driven Architectures powered by Markdown.
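EventCatalog documents each event as a markdown file with frontmatter. The page below is a hypothetical example (the event, services, and exact frontmatter field names are assumptions and vary across EventCatalog versions), just to show the flavor of what gets published:

```markdown
---
name: OrderPlaced
version: 0.0.1
summary: Emitted when a customer completes checkout.
producers:
  - checkout-service
consumers:
  - billing-service
  - shipping-service
---

The `OrderPlaced` event marks the hand-off from checkout to fulfilment.
It carries the order id, line items, and totals; downstream services
must treat it as immutable once published.
```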
Gitleaks is an open-source SAST (static application security testing) command-line tool for detecting and preventing hardcoded secrets such as passwords, API keys and tokens in Git repositories. It can be used as a Git pre-commit hook or in the CI/CD pipeline. Our teams found Gitleaks to be more sensitive than some other secret-scanning tools. It detects secrets using regular expressions and string entropy analysis. In our experience, the ability to supply custom regexes alongside entropy thresholds let teams categorize secrets to fit their needs: for example, instead of lumping every API key under "generic-api-key," a key could be reported as a specific "cloud provider key."
https://github.com/gitleaks/gitleaks
GitHub
GitHub - gitleaks/gitleaks: Find secrets with Gitleaks 🔑
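The custom-categorization point above maps onto Gitleaks' TOML configuration. The rule below is invented for illustration (the key pattern and entropy threshold are assumptions); the option names follow the Gitleaks v8 config format:

```toml
# .gitleaks.toml -- a custom rule so matches report a specific category
# instead of "generic-api-key"
[[rules]]
id = "acme-cloud-provider-key"
description = "ACME cloud provider API key"
regex = '''acme_(live|test)_[0-9a-zA-Z]{32}'''
entropy = 3.5
keywords = ["acme_live", "acme_test"]
```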
Steampipe is an open-source tool that lets you instantly query cloud services like AWS, Azure and GCP with SQL. With 100+ plugins and built-in support for creating dashboards, Steampipe makes it trivial to connect live cloud configuration data with internal or external data sets and create security or compliance dashboards. We've enjoyed working with Steampipe and created several such dashboards with AWS cloud configurations.
https://github.com/turbot/steampipe
GitHub
GitHub - turbot/steampipe: Zero-ETL, infinite possibilities. Live query APIs, code & more with SQL. No DB required.
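A taste of what those compliance dashboards are built from, as a hedged sketch (assumes the Steampipe AWS plugin is installed and configured; the approved instance types are invented for the example):

```sql
-- List running EC2 instances whose type is outside an approved set
select instance_id, instance_type, region
from aws_ec2_instance
where instance_state = 'running'
  and instance_type not in ('t3.micro', 't3.small');
```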
❤1
Automatically cordon and drain Kubernetes nodes based on node conditions
https://github.com/planetlabs/draino
GitHub
GitHub - planetlabs/draino: Automatically cordon and drain Kubernetes nodes based on node conditions
👎1
This is a place for various problem detectors running on the Kubernetes nodes.
https://github.com/kubernetes/node-problem-detector
GitHub
GitHub - kubernetes/node-problem-detector: This is a place for various problem detectors running on the Kubernetes nodes.
In a recent Dev Interrupted article, Kubernetes co-founder Brendan Burns discussed the origins and growth of the open-source project. Kubernetes, a container orchestrator, was born out of the need to simplify building, deploying, and maintaining distributed systems. Burns and co-founders Joe Beda and Craig McLuckie were inspired by Google's internal Borg system and wanted to create something similar for the wider development community. Docker played a crucial role in popularizing containers, which paved the way for Kubernetes' success.
https://devinterrupted.substack.com/p/how-open-source-enabled-kubernetes
Dev Interrupted
How Open Source Enabled Kubernetes’ Success
The success of Kubernetes was never preordained - it took years of work.
In this article, Jan Kammerath discusses the potential pitfalls of adopting Kubernetes and Kafka in a medium-sized software company. He shares a consulting engagement in which the CEO of a software company called him in over low availability (87%) and rising operational costs: the company had Kubernetes and Kafka in its infrastructure but struggled to operate them efficiently.
https://medium.com/@jankammerath/how-kubernetes-and-kafka-will-get-you-fired-a6dccbd36c77
Medium
How Kubernetes And Kafka Will Get You Fired
Kubernetes and Kafka: dream team or horror show? Not every business can afford running Kubernetes and Kafka. Think twice before…
👍1
This blog post discusses the growing trend of Large Language Models (LLMs) and their impact on various use cases. One specific application discussed is K8sGPT, an AI-based Site Reliability Engineer (SRE) that runs inside Kubernetes clusters. It scans, diagnoses, and triages issues using SRE experience codified into its analyzers. LocalAI, another project, is a drop-in replacement API for local CPU inferencing. Combining K8sGPT and LocalAI enables powerful SRE capabilities without relying on expensive GPUs.
https://itnext.io/k8sgpt-localai-unlock-kubernetes-superpowers-for-free-584790de9b65
Medium
K8sGPT + LocalAI: Unlock Kubernetes superpowers for free!
As we all know, LLMs are trending like crazy and the hype is not unjustified. Tons of cool projects leveraging LLM-based text generation…
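The K8sGPT + LocalAI pairing boils down to pointing K8sGPT's backend at a LocalAI endpoint instead of OpenAI. A rough session sketch (the model name, service URL, and exact flag names are assumptions and vary across k8sgpt releases):

```shell
# Register LocalAI as the inference backend for k8sgpt
k8sgpt auth add --backend localai \
  --model ggml-gpt4all-j \
  --baseurl http://local-ai.local-ai.svc.cluster.local:8080/v1

# Scan the cluster and ask the local model to explain the findings
k8sgpt analyze --explain --backend localai
```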