Company
Flowcore is the startup behind a developer-first platform that makes it easy to collect, process, and act on your data in real time. It allows users to leverage AI to query, automate, and gain insights while retaining full control and security, with data staying exactly where it belongs. Whether you want hands-on control or prefer the solution to handle the heavy lifting, Flowcore adapts easily and requires no specialized expertise.
Challenge
Flowcore operates a complex and dynamic infrastructure comprising single- and multi-instance databases deployed across cloud platforms and internal Kubernetes environments. This heterogeneous setup introduced significant operational challenges, particularly around data access and performance optimization.
Compounding the complexity, the system relied on manually managed database caching to support various APIs.
Traditional caching that integrates with database write channels is difficult to manage: keeping the cache and database consistent requires tight coordination, typically through complex invalidation logic and custom code, and any write that bypasses the cache risks serving stale data. Over time, the caching layer emerged as a critical bottleneck, adding latency and increasing the risk of inconsistency.
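The invalidation burden described above can be sketched in a few lines. This is a hypothetical illustration, not Flowcore's actual code: `cache`, `db`, and the key scheme are stand-ins for a real cache store and database.

```python
# Minimal sketch of manually managed caching on the write path.
# Every write must know which cache keys to invalidate; any write
# that bypasses this code leaves stale data in the cache.

cache = {}   # stand-in for Redis/memcached
db = {}      # stand-in for the database

def read_user(user_id):
    key = f"user:{user_id}"
    if key in cache:                 # cache hit
        return cache[key]
    value = db.get(user_id)          # cache miss: fall through to the DB
    cache[key] = value
    return value

def write_user(user_id, value):
    db[user_id] = value
    # Manual invalidation: the write path is tightly coupled to the
    # cache's key scheme. Forgetting this line -- or writing through
    # another service that doesn't know about it -- serves stale reads.
    cache.pop(f"user:{user_id}", None)
```

A cache that sits on the write channel itself, as described here, removes this invalidation logic from application code entirely.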
Solution
The need for more efficient and reliable cache management prompted the decision to adopt Cast AI’s Database Optimizer (DBO). Integrating DBO into Flowcore’s write channels was straightforward, and the solution handles the most challenging aspect – cache invalidation – out of the box. The integration enabled the system to achieve cache hit rates of 80-90%, significantly reducing database load, cutting costs on I/O-bound servers, and improving performance.
Results
- Cache hit rates of 80-90%
- Performance improvement of up to 100 milliseconds on some queries
- 90% fewer database hits, lowering costs
One of the toughest challenges with database caching, especially in distributed systems like ours, is cache invalidation. But with Cast AI’s DBO, it just works right out of the box.
Our workloads are highly cacheable because we separate reads from writes, so integrating with the write channels was straightforward, even in clustering mode. We’re now seeing cache hit rates of 80-90%, which is outstanding. On I/O-bound servers, cutting database hits by 90% has a huge impact—it saves money and significantly improves performance.
Our services were already pretty fast, but DBO shaved off up to 100 milliseconds on some queries, which makes a noticeable difference. If you’re thinking about adding a cache, DBO is an easy choice.
Julius á Rógvi Biskopstø
CTO and Co-founder at Flowcore
Finding the right database optimization solution
What was the main challenge facing your team in database operations?
Well, we have a lot of databases – some single-instance, some multi-instance – deployed both in the cloud and in our own Kubernetes environments. It’s a very dynamic setup.
On top of that, we have APIs that rely on manually created caches. The caching layer between the application and the database became a critical point for us. That was really the kicker – it led me to activate DBO right away. Managing the cache effectively at that stage was what got us started.
What was onboarding Cast AI’s database optimization solution like?
You just set it up and deploy it like any other cache component. If you’re familiar with Helm charts and Kubernetes, it’s really straightforward.
Reaping the benefits: 80-90% cache hit rates for cost savings and performance improvements
What benefits have you seen after the integration?
Cache hit rates of 80-90%
The biggest challenge with caches is invalidation, especially in distributed systems. It’s hard to get right and can tightly couple services.
With Cast AI’s DBO, it just works out of the box. You don’t have to think about it – just enable it. We’re seeing 80-90% cache hit rates, which is outstanding. Getting those results out of the box is a big help.
Cost savings
Reducing database hits by 90% on I/O-bound servers is significant – it saves us a lot of money and boosts speed.
Right now, our instances are relatively small. But as we scale, the savings scale with us, so the cost-benefit becomes very real.
Performance improvement
We already had pretty fast services, but DBO brought some queries down into the sub-100-millisecond range, which definitely helps.
Time savings
We also save time by not having to implement our own cache. That’s a win in both maintenance and development. And it’s fast – response times are quicker too. We’re currently switching our endpoints from GraphQL to REST, and we don’t have to worry about caching at all. It’s a lot of time saved compared to spinning up Redis, building a distributed cache, and so on. You just don’t need to do any of that here.
Which companies stand to benefit most from a solution like Cast’s database optimization?
If you’re considering using a cache, this is a no-brainer. You don’t need to spend time building that part. Especially if you’re running APIs with cacheable workloads or PostgreSQL databases, it’s an easy win.
Of course, if every query is dynamic, no cache will help. But for anything cacheable, this setup makes a lot of sense. It’s simple and removes caching from your development equation entirely.
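The cacheable-versus-dynamic distinction above can be sketched with a simple query cache keyed on the SQL text plus parameters. This is a hypothetical illustration of the principle, not DBO's internals:

```python
import hashlib
import json

cache = {}
stats = {"hits": 0, "misses": 0}

def cached_query(sql, params, run):
    """Cache results keyed on the exact query text and parameters."""
    key = hashlib.sha256(json.dumps([sql, params]).encode()).hexdigest()
    if key in cache:
        stats["hits"] += 1
        return cache[key]
    stats["misses"] += 1
    result = run(sql, params)      # fall through to the database
    cache[key] = result
    return result

# A repeated read with fixed parameters is highly cacheable:
for _ in range(10):
    cached_query("SELECT * FROM users WHERE id = %s", [42],
                 lambda s, p: "row")
# 1 miss, then 9 hits -- a 90% hit rate

# A query whose parameters change on every call never hits the cache:
for i in range(10):
    cached_query("SELECT * FROM events WHERE ts > %s", [i],
                 lambda s, p: "rows")
# 10 more misses: fully dynamic queries gain nothing from caching
```

This is why read-heavy APIs with stable query shapes see the biggest wins, while fully dynamic workloads do not.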