Containerized apps are now the standard, making it crucial for teams to collect usage and performance metrics effectively. Monitoring helps understand resource utilization and performance, enabling better application management. A smart approach is using an agent that gathers and exports data directly from containers. One such tool is cAdvisor.
This article explores cAdvisor, its key benefits, and implementation best practices to help you efficiently monitor the performance and use of container resources.
Intro to cAdvisor
Container Advisor (cAdvisor) is an open-source container monitoring tool created by Google. It gathers, aggregates, processes, and exports container metrics, including CPU, memory, filesystem, and network usage.
The tool is simple to use in any containerized environment, from a single Docker host to a full Kubernetes cluster. Because of its versatility, ease of use, and capacity to match nearly all monitoring needs, cAdvisor is one of the best options for container monitoring.
Key Features of cAdvisor
cAdvisor includes the following features:
- Out-of-the-box support for several container types and native support for Docker containers
- Automatic discovery of containers on its node and collection of their data
- Multiple deployment methods, such as a DaemonSet in Kubernetes, a Docker container, or a standalone program on the host OS (see the example after this list)
- Export of data to storage plugins like Prometheus, Elasticsearch, and InfluxDB for additional processing and analysis
- A built-in web user interface (UI) that displays collected metrics in real time
- The ability to report total node resource consumption by analyzing the root container
- A REST API for querying container metrics directly
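For example, a quick way to try cAdvisor on a single host is to run it as a Docker container and open the web UI on port 8080. The command below is a minimal sketch based on the upstream quick start; the image tag is only an example, so pin the release you actually want:

```shell
# Run cAdvisor as a standalone Docker container and expose the web UI on port 8080.
# The read-only bind mounts give cAdvisor access to host and Docker state.
sudo docker run \
  --volume=/:/rootfs:ro \
  --volume=/var/run:/var/run:ro \
  --volume=/sys:/sys:ro \
  --volume=/var/lib/docker/:/var/lib/docker:ro \
  --publish=8080:8080 \
  --detach=true \
  --name=cadvisor \
  gcr.io/cadvisor/cadvisor:v0.47.2
```

On some distributions you may also need to run the container in privileged mode, as noted in the limitations below.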

cAdvisor Limitations
Although cAdvisor is an effective tool, you should be aware of the following limitations:
- cAdvisor only gathers basic resource usage data, which might not be enough if you need more detailed metrics
- Different OSs will need different setups to collect metrics. For example, RHEL and CentOS require running in privileged mode, while Debian requires enabling memory cgroups
- Additional configuration is needed to collect data for custom hardware, such as GPUs, and this configuration varies based on the underlying infrastructure
- Once cAdvisor is running, its runtime options can't be changed on the fly: you need to stop the existing cAdvisor container and start a new one with the updated parameters
- cAdvisor needs external tooling to perform any additional analytics and store the collected data for an extended amount of time
How to Implement cAdvisor in Kubernetes
To effectively export metrics from your containers, cAdvisor needs to be deployed on each node of your Kubernetes cluster, and a typical cluster is made up of several nodes.
Manually installing cAdvisor on every cluster node doesn't scale, so use a Kubernetes DaemonSet instead, which will automatically deploy an instance of a specified container to each node.
Another advantage is that, when installed as a DaemonSet, cAdvisor supports Kustomize. This means you can quickly install and customize cAdvisor as a DaemonSet in any Kubernetes cluster.
Check out this detailed guide to implementing Kustomize.
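To illustrate the general shape, here's a trimmed-down DaemonSet manifest. It is not the upstream manifest: the namespace, image tag, and volume set are assumptions, and the Kustomize-based install from the cAdvisor repository remains the more maintainable route.

```yaml
# Minimal cAdvisor DaemonSet sketch (illustrative only)
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: cadvisor
  namespace: monitoring            # assumed namespace
spec:
  selector:
    matchLabels:
      app: cadvisor
  template:
    metadata:
      labels:
        app: cadvisor
    spec:
      containers:
        - name: cadvisor
          image: gcr.io/cadvisor/cadvisor:v0.47.2   # example tag
          ports:
            - containerPort: 8080
          volumeMounts:
            - { name: rootfs, mountPath: /rootfs, readOnly: true }
            - { name: var-run, mountPath: /var/run, readOnly: true }
            - { name: sys, mountPath: /sys, readOnly: true }
            - { name: docker, mountPath: /var/lib/docker, readOnly: true }
      volumes:
        - { name: rootfs, hostPath: { path: / } }
        - { name: var-run, hostPath: { path: /var/run } }
        - { name: sys, hostPath: { path: /sys } }
        - { name: docker, hostPath: { path: /var/lib/docker } }
```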

Key Metrics and Monitoring Capabilities
You need to keep an eye on many Kubernetes metrics. Monitoring has two primary components: analyzing the cluster itself and analyzing the workloads running on it.
Cluster Monitoring
Since issues with the servers themselves will show up in the workloads, every cluster needs monitoring of its underlying server components. When tracking node resources, make sure to consider metrics like CPU, disk, and network bandwidth.
When using cloud providers, operational costs are crucial – so having a summary of these indicators will help you decide when to scale the cluster up or down (if you’re up for doing that manually).
Workload Monitoring
This is where metrics related to deployments and their pods come into play. It may be useful to compare the current number of pods in a deployment to the desired number. In addition, you may search for container metrics, application metrics, and health checks.
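For a quick manual check of current versus desired pods, kubectl already surfaces this per deployment (my-app below is a placeholder name):

```shell
# The READY column shows ready/desired replicas, for example 2/3
kubectl get deployment my-app
```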
Let’s jump into some more detailed metrics!
cAdvisor Metrics for CPU
- container_cpu_load_average_10s – the value of the container CPU load average over the last ten seconds
- container_cpu_usage_seconds_total – cumulative CPU time consumed by the container, in seconds, covering both “user” and “system” time
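If these counters are scraped into Prometheus, a typical query turns the cumulative total into CPU cores consumed per container; the exact label names depend on how cAdvisor is scraped in your setup:

```promql
# Average CPU cores used per container over the last 5 minutes
sum by (container) (rate(container_cpu_usage_seconds_total[5m]))
```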
cAdvisor Metrics for Memory
- container_memory_usage_bytes – This metric measures current memory usage. Track it per container to explore the process’s memory footprint for each
- container_memory_failcnt – Use this metric to measure the number of times a container’s memory usage reaches the maximum limit. Make sure to set container memory usage limits to avoid a situation where a memory-intensive task starves the containers on the same server
- container_memory_cache – The metric measures the number of bytes of page cache memory
- container_memory_max_usage_bytes – The maximum memory usage recorded, in bytes
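As an example Prometheus query, you can track how close each container gets to its memory limit; container_spec_memory_limit_bytes is another cAdvisor metric, and containers without a configured limit will show up as +Inf:

```promql
# Fraction of the configured memory limit currently in use, per container
container_memory_usage_bytes / container_spec_memory_limit_bytes
```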
cAdvisor Metrics for Network
- container_network_receive_errors_total – An important metric because it shows you failures: the cumulative count of errors encountered while receiving bytes over your network
- container_network_receive_bytes_total – The cumulative count of bytes received over your network
- container_network_transmit_bytes_total – The cumulative count of bytes transmitted over your network
- container_network_transmit_errors_total – The cumulative count of errors that happened while transmitting. Pay attention to the number of failures that occurred during transmission to reduce the debugging efforts
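Because these are cumulative counters, rate() is the usual way to spot problems; for example, a simple Prometheus expression that flags containers with a growing receive error count:

```promql
# Per-second rate of receive errors over the last 5 minutes
rate(container_network_receive_errors_total[5m]) > 0
```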
cAdvisor Metrics for Disk
- container_fs_io_time_seconds_total – The cumulative count of seconds spent on I/Os
- container_fs_writes_bytes_total – The cumulative count of bytes written
- container_fs_read_bytes_total – The cumulative count of bytes read
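The same rate() pattern applies to the disk counters; for instance, per-container write throughput:

```promql
# Bytes written per second, per container, over the last 5 minutes
sum by (container) (rate(container_fs_writes_bytes_total[5m]))
```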
Use Cases and Best Practices
By increasing resource usage, an improperly configured cAdvisor deployment can degrade the performance of your apps. The following techniques are recommended to reduce performance issues:
1. Tune the Default cAdvisor Configuration
The default cAdvisor configuration can cause high CPU usage. To prevent this, set runtime parameters that precisely specify which data cAdvisor should collect and how frequently it should do so (see the example after this list):
- housekeeping_interval: Controls how often cAdvisor gathers container stats; a longer interval (for example, 30 seconds) makes the periodic collection cheaper
- docker_only: When this is set to true, cAdvisor reports only Docker containers (plus root stats) and skips raw cgroup data
- disable_metrics: Disable each unnecessary metric separately. By lowering the number of metrics cAdvisor collects, you reduce the overall demand
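Here's a sketch of how these parameters can be passed to a standalone cAdvisor container. The metric groups given to --disable_metrics are only examples; the accepted names vary by cAdvisor version, so check --help for your release:

```shell
# Quick-start mounts plus tuned runtime parameters
sudo docker run --detach --name=cadvisor \
  --volume=/:/rootfs:ro --volume=/var/run:/var/run:ro \
  --volume=/sys:/sys:ro --volume=/var/lib/docker/:/var/lib/docker:ro \
  --publish=8080:8080 \
  gcr.io/cadvisor/cadvisor:v0.47.2 \
  --housekeeping_interval=30s \
  --docker_only=true \
  --disable_metrics=percpu,sched,tcp,udp,process
```

In a DaemonSet, the same flags go into the cAdvisor container's args list.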
2. Set Up a Storage Plugin When Using cAdvisor
A storage plugin lets you retain the collected data from the moment cAdvisor starts. Unless you're in a development environment, use a storage plugin when implementing cAdvisor.
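As an illustration, cAdvisor's built-in InfluxDB driver only needs a few extra flags; the service address and database name below are assumptions for your environment. Prometheus is the exception: it scrapes cAdvisor's built-in /metrics endpoint, so no storage driver flag is needed.

```yaml
# Extra args for the cAdvisor container in the DaemonSet (InfluxDB storage driver example)
args:
  - --storage_driver=influxdb
  - --storage_driver_host=influxdb.monitoring.svc:8086   # assumed service address
  - --storage_driver_db=cadvisor                         # assumed database name
```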
3. Make Sure cAdvisor Has Sufficient Resources
cAdvisor needs sufficient resources to run, so you should make sure the containerized environment has enough headroom to accommodate this.
The number of Pods on each node and the type of data being queried determine cAdvisor's resource needs when it's deployed as a DaemonSet in a Kubernetes cluster. The more pods data is collected from, the more resources are needed to process and export it.
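In the DaemonSet's container spec, that headroom is expressed as requests and limits. The numbers below are only an illustrative starting point; tune them to the pod density and metric volume of your nodes:

```yaml
# Illustrative requests/limits for the cAdvisor container
resources:
  requests:
    cpu: 150m      # assumed starting point; measure real usage per node
    memory: 200Mi
  limits:
    cpu: 300m
    memory: 2Gi
```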
4. Avoid Overlapping Gathered Data
Thanks to the Metrics API and the Metrics Server, most of the data gathered by cAdvisor can now be queried natively through Kubernetes. For this reason, avoid collecting the same data with both cAdvisor and other Kubernetes tooling.
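For basic CPU and memory numbers, the Metrics API path is often enough; these standard kubectl commands read from the Metrics Server rather than scraping cAdvisor again:

```shell
# Resource usage from the Metrics API (requires the Metrics Server to be installed)
kubectl top nodes
kubectl top pods --all-namespaces
```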
Security Considerations
Due to their complex and transient nature, cloud-native environments call for continuous monitoring of installed services. Metrics are crucial, particularly for teams working in security operations (SecOps) and site reliability engineering (SRE), to determine the “what and when” of actions occurring in their environments.
Put the following security measures in place to keep your cAdvisor instances secure (a combined example follows the list):
- If you expose the web UI, enable HTTP basic or digest authentication (--http_auth_file, --http_auth_realm, --http_digest_file, and --http_digest_realm)
- Disable metrics that are not needed (--disable_metrics)
- Replace the default port 8080 with a custom port (--port)
- Serve metrics on a custom endpoint instead of the default /metrics path (--prometheus_endpoint)
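Put together, a hardened standalone invocation might look like the sketch below. The port, realm, endpoint path, and auth file location are placeholder values, and the auth file is assumed to be in htpasswd format:

```shell
# Example hardening flags for a standalone cAdvisor container
sudo docker run --detach --name=cadvisor \
  --volume=/:/rootfs:ro --volume=/var/run:/var/run:ro \
  --volume=/sys:/sys:ro --volume=/var/lib/docker/:/var/lib/docker:ro \
  --volume=/etc/cadvisor/auth.htpasswd:/auth.htpasswd:ro \
  --publish=9911:9911 \
  gcr.io/cadvisor/cadvisor:v0.47.2 \
  --port=9911 \
  --http_auth_file=/auth.htpasswd \
  --http_auth_realm=cadvisor \
  --prometheus_endpoint=/internal-metrics \
  --disable_metrics=tcp,udp,percpu
```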
If your threat model doesn't already account for public metrics exposure, don't publish metrics on the internet. Protecting the metrics endpoint is just as important as safeguarding Prometheus dashboards, in both on-premises and cloud-native environments.
Wrap Up
A straightforward yet effective tool, cAdvisor can extract performance metrics and resource utilization from containers. You can customize cAdvisor to meet most monitoring requirements because it supports a wide range of platforms, from local Docker installs to sophisticated orchestration platforms like Kubernetes.
When paired with plugins that export data to Elasticsearch or Prometheus, cAdvisor becomes one of the most flexible metric collection systems out there.
cAdvisor gives you a wide range of metrics to help you understand what’s happening across all of your containers. If you’re looking for more granular cost-related metrics visualized in an easy-to-digest way, Cast AI has advanced monitoring dashboards that contain even more insights into resource utilization and costs.