This engineering publication from DoubleVerify presents a case study on synchronizing database schema updates across multiple projects and environments. The team developed a solution using a shared, standalone schema migrations repository and Kubernetes pre-install hooks to automate and coordinate the process.
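For readers unfamiliar with the mechanism: in Helm, a Job annotated as a pre-install/pre-upgrade hook runs to completion before the release's other resources are applied, which is what lets the migration gate the deployment. A minimal sketch; the image, command, and names here are illustrative assumptions, not DoubleVerify's actual setup:

```yaml
# Illustrative Helm hook Job: run schema migrations before the app is installed or upgraded.
apiVersion: batch/v1
kind: Job
metadata:
  name: schema-migrate
  annotations:
    "helm.sh/hook": pre-install,pre-upgrade
    "helm.sh/hook-weight": "0"
    "helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded
spec:
  backoffLimit: 1
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: migrate
          # Assumed image built from the shared migrations repository.
          image: registry.example.com/schema-migrations:latest
          command: ["migrate", "up"]
```

If the Job fails, Helm aborts the release, so application pods never start against an outdated schema.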
https://medium.com/doubleverify-engineering/a-case-study-in-synchronizing-database-schema-updates-between-projects-and-environments-a69a3cc38985
Medium
A Case Study in Synchronizing Database Schema Updates between Projects and Environments
Written By: Chaim Leichman
👍3❤2
eBPF based cloud-native load-balancer for Kubernetes|Edge|Telco|IoT|XaaS.
https://github.com/loxilb-io/loxilb
GitHub
GitHub - loxilb-io/loxilb: eBPF based cloud-native load-balancer for Kubernetes|Edge|Telco|IoT|XaaS.
eBPF based cloud-native load-balancer for Kubernetes|Edge|Telco|IoT|XaaS. - loxilb-io/loxilb
👍2🔥1
Kubernetes v1.35: Timbernetes — Only the Important Parts (Part 1): Deprecations, removals
Removal of cgroup v1 support
Cgroup v2 is now the modern standard, and Kubernetes is ready to retire legacy cgroup v1 support in v1.35. This is an important notice for cluster administrators: if you are still running nodes on older Linux distributions that don't support cgroup v2, your `kubelet` will fail to start. To avoid downtime, you will need to migrate those nodes to systems where cgroup v2 is enabled.
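A quick way to check which cgroup version a node is running before upgrading:

```shell
# Print the filesystem type mounted at the cgroup root:
#   cgroup2fs -> cgroup v2 (unified hierarchy), supported going forward
#   tmpfs     -> legacy cgroup v1, must be migrated before v1.35
stat -fc %T /sys/fs/cgroup
```

On systemd-based distributions, legacy nodes can usually be switched by booting with `systemd.unified_cgroup_hierarchy=1`, though upgrading the distribution is the more durable fix.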
Deprecation of ipvs mode in kube-proxy
Because of its maintenance burden, Kubernetes v1.35 deprecates the `ipvs` mode. Although the mode remains available in this release, `kube-proxy` will now emit a warning on startup when configured to use it.
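Migrating off `ipvs` is a one-line change in the kube-proxy configuration, assuming you have no ipvs-specific tuning to carry over:

```yaml
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
# mode: ipvs      # deprecated in v1.35; kube-proxy warns on startup
mode: nftables    # or "iptables"; nftables is the newer backend
```

Nodes must be drained and kube-proxy restarted for the mode change to take effect, since the old virtual-server rules need to be cleaned up.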
Final call for containerd v1.X
While Kubernetes v1.35 still supports containerd 1.7 and other LTS releases, this is the final version with such support. The SIG Node community has designated v1.35 as the last release to support the containerd v1.X series.
👍5🔥3👏2
Not so long ago, I posted the news about CDK for Terraform moving to read-only mode, and now I think this outcome was inevitable.
- Programming languages are not well suited for describing infrastructure because they provide too much flexibility.
- Different companies can use different programming languages to describe essentially the same infrastructure.
- The entry barrier becomes higher: a DevOps engineer now needs to understand code written by someone else before them. We already have problems with code smells in application development, and this problem will not be any better when it comes to infrastructure description.
- HCL is not perfect, but it is more straightforward. Terraform has become a de facto standard, and even its fork is not very popular (why change something if everything works?). The IaC world is generally inert.
- The market is already occupied by Pulumi, so to succeed you would need to be significantly better—but you can’t.
For all these reasons, CDK for Terraform never became popular. The same thing will likely happen to AWS CDK sooner or later.
https://news.1rj.ru/str/devops_sre_notes/2567
👍11💯4🔥2❤1👌1
Kubernetes v1.35: Timbernetes — Only the Important Parts (Part 2): Features Graduating to Stable
Stable: In-place update of Pod resources
This feature allows users to adjust CPU and memory resources without restarting Pods or Containers.
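The resize behavior is declared per resource in the Pod spec; a sketch with illustrative names and image:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: resizable-app
spec:
  containers:
    - name: app
      image: registry.example.com/app:latest
      resources:
        requests: {cpu: "500m", memory: "256Mi"}
      resizePolicy:
        - resourceName: cpu
          restartPolicy: NotRequired       # CPU can change fully in place
        - resourceName: memory
          restartPolicy: RestartContainer  # memory changes restart only this container
```

The actual change is then applied through the Pod's `resize` subresource, e.g. with `kubectl patch --subresource resize`, rather than by editing the Pod spec directly.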
PreferSameNode traffic distribution
A new option, `PreferSameNode`, lets services strictly prioritize endpoints on the local node when they are available.
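Enabling it is a single field on the Service; names here are illustrative:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: local-cache
spec:
  selector:
    app: cache
  ports:
    - port: 6379
  # Route to an endpoint on the client's own node when one exists;
  # fall back to other endpoints only when none is available locally.
  trafficDistribution: PreferSameNode
```

This is useful for per-node daemons such as caches or log forwarders, where crossing nodes adds latency for no benefit.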
Job API managed-by mechanism
The Job API now includes a `managedBy` field that allows an external controller to handle Job status synchronization.
Reliable Pod update tracking with .metadata.generation
Every time a Pod's `spec` is updated, the `.metadata.generation` value is incremented.
Configurable NUMA node limit for topology manager
Cluster administrators who enable this option can use servers with more than 8 NUMA nodes.
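The limit is raised through a topology manager policy option in the kubelet configuration. A hedged sketch, assuming the `max-allowable-numa-nodes` option name from the KEP:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
topologyManagerPolicy: best-effort
topologyManagerPolicyOptions:
  max-allowable-numa-nodes: "16"  # raise the previous hard limit of 8
```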
❤6👍4🔥2
Kubernetes v1.35: Timbernetes — Only the Important Parts (Part 3): New Features in Beta
Pod certificates for workload identity and security
Native workload identity with automated certificate rotation.
Expose node topology labels via Downward API
The `kubelet` can now inject standard topology labels, such as `topology.kubernetes.io/zone` and `topology.kubernetes.io/region`, into Pods as environment variables or projected volume files.
Native support for storage version migration
With this release, the built-in controller automatically handles update conflicts and consistency tokens, providing a safe, streamlined, and reliable way to ensure stored data remains current with minimal operational overhead.
Mutable Volume attach limits
`CSINode.spec.drivers[*].allocatable.count` is now mutable so that a node's available volume attachment capacity can be updated dynamically.
Opportunistic batching
The batching mechanism consists of two operations that can be invoked whenever needed: create and nominate. Create builds a new set of batch information from the scheduling results of Pods that have a valid signature; nominate uses that information to set the nominated node name for a new Pod whose signature matches the canonical Pod's signature.
maxUnavailable for StatefulSets
You can use it to define the maximum number of pods that can be unavailable during an update.
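A sketch of the field in a StatefulSet's update strategy; names and image are illustrative:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db
spec:
  serviceName: db
  replicas: 5
  selector:
    matchLabels: {app: db}
  template:
    metadata:
      labels: {app: db}
    spec:
      containers:
        - name: db
          image: registry.example.com/db:latest
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 2  # take down up to 2 pods at a time instead of one by one
```

The default remains 1, i.e. the classic one-pod-at-a-time ordered rollout.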
Configurable credential plugin policy in kuberc
kuberc gains additional functionality that allows users to configure the credential plugin policy.
KYAML
KYAML is a safer and less ambiguous subset of YAML designed specifically for Kubernetes.
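A hedged sketch of what a KYAML document looks like, per the KEP: flow-style mappings with explicit braces, always-quoted strings, and trailing commas, so whitespace mistakes cannot silently change the structure:

```yaml
{
  apiVersion: "v1",
  kind: "ConfigMap",
  metadata: {
    name: "example",
  },
  data: {
    key: "value",
  },
}
```

Because KYAML is a subset of YAML, any existing YAML parser can read it unchanged.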
Configurable tolerance for HorizontalPodAutoscalers
This enhancement allows users to define a custom tolerance window on a per-resource basis within the HPA `behavior` field.
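A hedged sketch, assuming the `tolerance` field name from the KEP (the cluster-wide default tolerance is 10%):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web   # illustrative
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 2
  maxReplicas: 20
  metrics:
    - type: Resource
      resource:
        name: cpu
        target: {type: Utilization, averageUtilization: 70}
  behavior:
    scaleDown:
      tolerance: 0.05  # scale down on a 5% deviation instead of the 10% default
    scaleUp:
      tolerance: 0.10  # keep the default overshoot requirement for scaling up
```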
Support for user namespaces in Pods
Kubernetes is adding support for user namespaces, allowing pods to run with isolated user and group ID mappings instead of sharing host IDs.
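Opting in is a single Pod-level field; the image is illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: userns-demo
spec:
  # Run the pod in its own user namespace: UID 0 inside the container
  # maps to an unprivileged UID on the host.
  hostUsers: false
  containers:
    - name: app
      image: registry.example.com/app:latest
```

This sharply limits the blast radius of a container escape, since a "root" process in the pod holds no real privileges on the node.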
VolumeSource: OCI artifact and/or image
Support for the `image` volume type allows Pods to declaratively pull and unpack OCI container image artifacts into a volume. This lets you package and deliver data-only artifacts such as configs, binaries, or machine learning models using standard OCI registry tools.
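A sketch of mounting an OCI artifact as a read-only volume; the references are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: model-server
spec:
  containers:
    - name: server
      image: registry.example.com/server:latest
      volumeMounts:
        - name: model
          mountPath: /models
          readOnly: true
  volumes:
    - name: model
      image:
        reference: registry.example.com/models/bert:v1  # data-only OCI artifact
        pullPolicy: IfNotPresent
```

The model can now be versioned, signed, and mirrored with the same registry tooling used for container images, without baking it into the application image.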
Enforced `kubelet` credential verification for cached images
This KEP introduces a mechanism where the `kubelet` enforces credential verification for cached images. Before allowing a Pod to use a locally cached image, the `kubelet` checks if the Pod has the valid credentials to pull it.
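A hedged sketch of the kubelet setting, assuming the `imagePullCredentialsVerificationPolicy` field name from the KEP:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Verify pull credentials even for images already present in the node's cache,
# so one tenant cannot piggyback on an image another tenant pulled.
imagePullCredentialsVerificationPolicy: AlwaysVerify
```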
Fine-grained Container restart rules
Kubernetes v1.35 addresses this by enabling `restartPolicy` and `restartPolicyRules` within the container API itself. This allows users to define restart strategies for individual regular and init containers that operate independently of the Pod's overall policy.
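A hedged sketch of the rule shape, assuming the field names from the KEP; the exit code and names are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: retry-on-42
spec:
  restartPolicy: Never            # pod-level default: do not restart
  containers:
    - name: app
      image: registry.example.com/app:latest
      restartPolicy: Never        # container-level override of the pod policy
      restartPolicyRules:
        - action: Restart
          exitCodes:
            operator: In
            values: [42]          # restart only on this "transient failure" exit code
```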
CSI driver opt-in for service account tokens via secrets field
Kubernetes v1.35 introduces an opt-in mechanism for CSI drivers to receive ServiceAccount tokens via the dedicated secrets field in the NodePublishVolume request.
Deployment status: count of terminating replicas
Kubernetes v1.35 promotes the `terminatingReplicas` field within the Deployment status to beta. This field provides a count of Pods that have a deletion timestamp set but have not yet been removed from the system.
👍2🔥2❤1❤🔥1
Upgrading a critical database cluster often involves anxiety, but this practical guide outlines a method to update PostgreSQL without losing data or incurring significant downtime. It covers the essential command-line steps and verification processes needed for a smooth transition.
https://palark.com/blog/postgresql-upgrade-no-data-loss-downtime/
Palark
Upgrading PostgreSQL with no data loss and minimal downtime | Tech blog | Palark
A technical story of upgrading a production PostgreSQL cluster from v13 to v16. It focuses on high availability and minimal downtime.
👍5
Julius Volz discusses the trade-offs between different observability standards in the monitoring landscape. His argument explains why he still prefers native Prometheus instrumentation over OpenTelemetry for certain use cases.
https://promlabs.com/blog/2025/07/17/why-i-recommend-native-prometheus-instrumentation-over-opentelemetry/
Promlabs
Blog - Why I recommend native Prometheus instrumentation over OpenTelemetry
PromLabs - We teach Prometheus-based monitoring and observability
👍3
Finally, FluxCD has a GUI. People say it looks like ArgoCD. I’ve never used Argo, but if that’s true, it’s a good move from the FluxCD team.
The main reason for Flux’s lower adoption, in my opinion, was the lack of out-of-the-box visibility. Many people want to see the status of resources directly, rather than rely on custom notifications from their own systems or parse logs when an update hasn’t been delivered as expected.
At my current workplace, I built a feedback system that shows the current state inside a GitLab pipeline, but this approach is not efficient. It doesn’t make sense that every company has to build its own solution and spend time on this just because the tool doesn’t provide a default feedback mechanism, such as a UI.
https://fluxoperator.dev/web-ui/
Flux Operator
Web UI - Flux Operator
Mission control dashboards for Kubernetes app delivery powered by Flux CD.
👍7🔥3❤1
Not a big feature, but a small quality-of-life improvement that AWS provides. Automatic ECR repository creation is one of those features we’ve needed for a long time.
Literally a couple of weeks ago, we discussed with the team how we would automate this to simplify life for both us and the development team. Now it’s here, and we won’t have to spend time building “workarounds” around it.
https://aws.amazon.com/about-aws/whats-new/2025/12/amazon-ecr-creating-repositories-on-push/
Amazon
Amazon ECR now supports creating repositories on push - AWS
Discover more about what's new at AWS with Amazon ECR now supports creating repositories on push
👍5
Open-source platform for learning Kubernetes and AWS EKS and for preparing for the Certified Kubernetes exams (CKA, CKS, CKAD).
https://github.com/ViktorUJ/cks
GitHub
GitHub - ViktorUJ/cks: Open-source Platform for learning kubernetes and aws eks and preparation for for Certified Kubernetes…
Open-source Platform for learning kubernetes and aws eks and preparation for for Certified Kubernetes exams (CKA ,CKS , CKAD) - GitHub - ViktorUJ/cks: Open-source Platform for learning kubern...
👍8❤4🔥3
One of the simplest solutions I’ve used for managing ephemeral environments is the one provided by Flux Operator. The configuration is straightforward and offers an out-of-the-box way to describe even complex environments. This setup makes environments available for a merge request and, at the same time, provides fast termination and cleanup of resources.
https://fluxoperator.dev/docs/resourcesets/gitlab-merge-requests/
Flux Operator
Preview GitLab MRs - Flux Operator Docs
Flux Operator preview environments integration with GitLab Merge Requests
👍2