Cast AI, the leading Kubernetes automation platform, today announced that it is the first company to fully integrate in-place pod resizing into its autonomous workload optimization engine. This new capability leverages in-place pod resizing, a beta feature in Kubernetes v1.33, to dynamically adjust CPU and memory resources for running pods without requiring pod restarts, unlocking a new level of cost efficiency and performance.
Traditionally, resizing pods meant restarting workloads, which disrupts applications, requires manual intervention, and leads to resource overprovisioning. With in-place resizing now supported in Kubernetes v1.33+, Cast AI automates this process completely. The company’s platform continuously analyzes workload behavior, identifies inefficiencies, and applies optimal resource settings in real time. If a pod supports in-place updates, Cast AI adjusts it live. If not, the platform safely orchestrates a restart when needed.
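For context, the underlying Kubernetes mechanism works roughly as sketched below (the pod name and values are illustrative, not taken from Cast AI's product): a container declares a `resizePolicy` stating that CPU and memory can change without a container restart, and resource changes are then applied through the pod's `resize` subresource.

```yaml
# Illustrative pod spec (Kubernetes v1.33+): resizePolicy permits
# in-place resource updates without restarting the container.
apiVersion: v1
kind: Pod
metadata:
  name: demo-app          # hypothetical name
spec:
  containers:
  - name: app
    image: nginx:1.27
    resizePolicy:
    - resourceName: cpu
      restartPolicy: NotRequired   # resize CPU live, no restart
    - resourceName: memory
      restartPolicy: NotRequired
    resources:
      requests:
        cpu: "500m"
        memory: "256Mi"
```

A resize can then be requested against the dedicated subresource, for example: `kubectl patch pod demo-app --subresource resize --patch '{"spec":{"containers":[{"name":"app","resources":{"requests":{"cpu":"800m"}}}]}}'`. Automation platforms like Cast AI's issue equivalent API calls on the operator's behalf.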
“Cast AI was built to eliminate the guesswork and operational toil of manual optimization,” said Laurent Gil, President and Co-Founder of Cast AI. “With in-place pod resizing, we’re giving DevOps and platform teams a powerful new way to right-size workloads instantly without touching YAML files or triggering downtime.”
The feature is available to all Cast AI customers and supports production-grade use cases out of the box. To learn more, visit https://cast.ai.
