Disclaimer: The client preferred to stay anonymous. Cyberscr is not a real brand.
Getting ahead in the cloud cost game
Our customer is a leading global cybersecurity platform transforming how enterprises, industries, and governments secure their networks. The platform lets users share threat intelligence so they can address threats proactively.
The company’s security features require a large amount of compute resources. This prompted the company to search for a complementary solution that would scale resources automatically and help reduce cloud costs. By implementing Cast AI, our customer saw deployment cost savings of 50%, translating into $2.4 million removed from its annual cloud bill.
Book a demo with Cast AI now to get similar results
Optimizing cloud costs was always a priority
Our customer has always been interested in cost optimization because its cloud bill was growing, and the team realized that tweaking some infrastructure elements could result in significant savings.
The company tried other solutions that specialize in automating spot instances, but they were hard to implement. Back then, the team was in the process of retooling its deployment process, and Cast AI fit into that new deployment process easily.
The team slotted a month for the PoC, but by following the docs, deployment and setup took just a day or two. “I would say within two weeks, we had a full PoC where we could start seeing what the cost savings would be. It was over Christmas break too. So we let it run over Christmas, came back, and then we were able to see results pretty much immediately,” said the company’s Cloud Infrastructure Engineer.
Scaling resources to match the demand works best when automated
With each incremental increase in ingested data requiring larger systems, the company is betting on Kubernetes and its ability to scale up and down.
Cast AI supports our customer here by handling all the scaling activity, which frees the team for other, more important tasks. As the team schedules more pods, the Cast AI autoscaler provisions all the cloud resources necessary to support them. And once demand drops, the platform automatically removes the resources that are no longer needed.
If you’re looking to reduce the number of resources that you’re creating for your clusters and just be smarter about the instances that get purchased, it’s kind of a push button.
Director of Infrastructure
During initial testing of the Cast AI autoscaler, a team member worked in parallel to manually determine the right cluster sizes and adjust capacity accordingly. For weeks, he could not even come close to how Cast AI was performing out of the box.
“I was using our same new deployment process on EKS, and I would set, basically, a base size and then have a node group that had a ton of space to scale up and down with using the cluster autoscaler built into Kubernetes. So that when the workload came online, it would say, give me more instances. But it was in large chunks of instances that it would do that versus Cast AI. When we need a little bit more space, Cast AI gets us a little bit more space. When we don’t need that space, that instance goes away. I couldn’t replicate that without building my own version of Cast AI,” said the company’s Director of Infrastructure.
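The difference the Director describes, scaling in large chunks versus right-sizing in small increments, can be sketched with a toy simulation. The node sizes and demand curve below are illustrative assumptions, not the client’s actual configuration or Cast AI’s algorithm:

```python
import math

# Hypothetical node sizes: a static node group built from large instances
# versus an autoscaler that can add capacity in small increments.
CHUNK_NODE_CPU = 16   # vCPUs per node in the fixed node group (assumed)
FINE_NODE_CPU = 2     # vCPUs per node for fine-grained scaling (assumed)

def capacity_for(demand_cpu: float, node_cpu: int) -> int:
    """vCPUs provisioned when capacity comes in node-sized chunks."""
    return math.ceil(demand_cpu / node_cpu) * node_cpu

# Toy demand curve: vCPUs requested by scheduled pods over time.
demand = [3, 9, 17, 33, 20, 5]

chunk_capacity = [capacity_for(d, CHUNK_NODE_CPU) for d in demand]
fine_capacity = [capacity_for(d, FINE_NODE_CPU) for d in demand]

# Idle (paid-for but unused) vCPUs accumulated across the timeline.
waste_chunk = sum(c - d for c, d in zip(chunk_capacity, demand))
waste_fine = sum(c - d for c, d in zip(fine_capacity, demand))

print(f"idle vCPU-steps with large chunks: {waste_chunk}")        # → 73
print(f"idle vCPU-steps with fine-grained scaling: {waste_fine}") # → 5
```

Under these made-up numbers, the chunked node group pays for far more idle capacity than the fine-grained approach, which is the gap the quote points at.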
Result: 50% cost savings per deployment
Implementing Cast AI had a massive impact on the company’s cloud bill, which primarily consists of compute (75-80%). The PoC showed a fantastic result: 50% savings on cloud resources per deployment. If all of the company’s workloads used the new deployment process, Cast AI would save $2.4 million on its annual cloud bill.
The important thing for us is really just the fact that it looks like our average deployment’s going to be about a 50% savings. So adding another thousand customers is analogous to what we would have paid for 500 customers. That’s pretty exciting.
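The arithmetic behind the quote is simple to verify. The per-customer cost below is a normalized placeholder, not a real figure from the client:

```python
# Back-of-the-envelope check of the "1,000 customers for the price of 500" claim.
savings_rate = 0.50              # 50% savings per deployment, from the case study
old_cost_per_customer = 1.0      # normalized unit cost (hypothetical placeholder)
new_cost_per_customer = old_cost_per_customer * (1 - savings_rate)

# At half the per-deployment cost, serving 1,000 customers now costs
# roughly what serving 500 customers used to cost.
cost_1000_new = 1000 * new_cost_per_customer
cost_500_old = 500 * old_cost_per_customer
print(cost_1000_new == cost_500_old)  # → True
```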
Director of Infrastructure
In the future, our customer also plans to take advantage of Cast AI’s integration with KEDA to achieve even greater efficiency and cost reductions thanks to new scaling capabilities.