7 Azure Services For Your Containerized Application

Azure’s container services can easily confuse engineers. Starting with the original Azure Container Service, Microsoft quickly added AKS and Azure Container Apps, among others, giving you ever more Azure services for your containerized application.


Today, Azure offers many container-as-a-service options for teams moving traditional monolithic software to the cloud and containers, simplifying application deployment and maintenance. 

Here’s a comparison of Azure services for containerized applications to help you navigate Microsoft’s offering and pick the best option for your case.

If you’re interested in a pricing comparison for these services, check out this table with our calculations.

Azure services for your containerized application

1. Azure Kubernetes Service (AKS)

What is it? 

AKS is a fully managed Kubernetes service and one of the fastest ways to deploy cloud-native apps in Azure.

How does it work?

The flagship container orchestration offering on Azure, AKS delivers a standards-based Kubernetes implementation and integrates well with other Azure services across security, identity, cost management, and migration. 

You reference these resources in Kubernetes resource files. This makes AKS a strong option for hosting all kinds of containerized applications: web apps, databases, caching clusters, and much more.

Note that Kubernetes is complex, and there’s a knowledge barrier to getting started. Expect some running overhead too: ~10% of cluster resources go into running Kubernetes itself rather than your workloads. However, these drawbacks don’t outweigh the benefits.

When to use it?

AKS is considered a general-purpose solution for containerized applications. You can use it for small and large apps alike. AKS comes in handy if you want to lift and shift existing applications to containers, train ML models, deploy microservices, or run data streaming workloads.

AKS works best for large applications and more complex environments, where more automation is needed. It’s not a good match for small pet projects, websites, or similar.

Difficulty level

Creating an AKS cluster is relatively easy. Scaling it up and down is pretty straightforward. 

But it’s not just about using the Azure console for spinning your cluster up – consider the complexity and skill required to configure things in Kubernetes. This challenge extends to Day 2 operations tasks such as maintaining, monitoring, and optimizing your K8s deployment. 


Pricing

AKS comes at no charge. You only have to pay for the virtual machines and the associated storage and networking resources consumed by your cluster.

Don’t forget to add the 10% overhead of Kubernetes nodes reserved for operational tasks, not your workloads. Also, there’s the configuration effort – it’s more time-consuming in K8s, which translates into more money spent on keeping things running smoothly.
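That overhead is easy to fold into capacity planning. Here’s a quick sketch – the node sizes are purely illustrative, and the flat 10% reservation is a rule of thumb rather than an exact figure:

```python
# Rule-of-thumb AKS sizing: assume ~10% of each node's resources go to
# Kubernetes system components (kubelet, system pods), not your workloads.
# The flat 10% figure is an approximation, not an exact reservation.

K8S_OVERHEAD = 0.10

def usable_capacity(nodes: int, vcpu_per_node: int, mem_gib_per_node: int):
    """Return the (vCPU, GiB) capacity left for workloads after overhead."""
    factor = nodes * (1 - K8S_OVERHEAD)
    return (vcpu_per_node * factor, mem_gib_per_node * factor)

# Five 4-vCPU / 16-GiB nodes leave roughly 18 vCPUs and 72 GiB usable:
print(usable_capacity(5, 4, 16))  # -> (18.0, 72.0)
```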

2. Azure Container Instances (ACI)

What is it?

Azure Container Instances offers a way to run container workloads in Azure without worrying about managing the underlying infrastructure. It’s an alternative to running containers on Azure virtual machines.

How does it work?

Using Container Instances, you can quickly spin up containers via the Azure Portal or one of the many Azure automation tools. There’s no need for any orchestrators like Kubernetes or environment configuration. This is one of the simplest Azure services for your containerized application.

You can decide how many computing resources will be given to each container, attach a file share if your container needs persistent storage, and deploy multiple containers as a group.

Interestingly, ACI enables elastic bursting from AKS by providing fast, isolated compute to handle traffic spikes without forcing you to manage servers. AKS can use the Virtual Kubelet to provision pods in ACI and have them start in seconds.

When to use it?

ACI works well for simple, single-container deployments that don’t require full container orchestration. As mentioned before, you can combine ACI with AKS to benefit from instant compute provisioning.

Another ACI use case is building event-driven applications by combining ACI with the Azure Logic Apps connector, Azure queues, and Azure Functions. This gives you a robust infrastructure that can scale out containers on demand, running complex tasks capable of responding to events.

Azure Container Instances also come in handy for data processing, where you ingest and process source data to finally place it in a durable store like Azure Blob. By processing this data with ACI instead of statically-provisioned virtual machines, you slash your costs thanks to per-second billing.

Difficulty level

Azure Container Instances offers probably the easiest way to run containers on Azure. You can manage containers using the Azure portal or the Azure CLI.


Pricing

You pay for compute capacity following the pay-as-you-go model or savings plans. Per-second billing and custom machine sizes help you optimize the costs of this solution.

You get charged for Azure Container Instances at the container group level – a container group is an assignment of vCPU and memory resources that can be used by a single container or shared by multiple containers. The price depends on the number of vCPUs and GBs of memory you request for the container group.
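As a sketch of how that per-second, per-group billing adds up – note the rates below are placeholder values, not current Azure prices:

```python
# ACI bills the container group per second, based on the vCPUs and
# memory requested for the whole group. The rates below are placeholders
# for illustration -- check the current regional price sheet.

VCPU_SECOND_RATE = 0.0000125  # USD per vCPU-second (assumed)
GB_SECOND_RATE = 0.0000014    # USD per GB-second (assumed)

def group_cost(vcpus: float, memory_gb: float, duration_s: int) -> float:
    """Cost of running one container group for duration_s seconds."""
    per_second = vcpus * VCPU_SECOND_RATE + memory_gb * GB_SECOND_RATE
    return round(per_second * duration_s, 4)

# A 1 vCPU / 1.5 GB group running a 10-minute job:
print(group_cost(1, 1.5, 600))  # -> 0.0088
```

Because billing stops the second the group terminates, short bursty jobs like this cost cents rather than the hours of idle VM time they would otherwise occupy.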

3. Azure Service Fabric 

What is it? 

Azure Service Fabric is a container orchestrator that you can use to deploy and manage microservices across a cluster of machines. The Service Fabric engine offers services such as orchestration, monitoring, scaling, and healing for everything you decide to run there. You can create a Service Fabric cluster anywhere – on-premises, in Azure, or in other public cloud environments.

How does it work?

This distributed systems platform makes it easier to package, deploy, and manage containers and microservices. Service Fabric offers a broad range of features that help engineers develop and manage cloud-native apps.

Among them is a lightweight runtime environment that supports both stateless and stateful microservices – the latter either as containerized stateful services or via the built-in Service Fabric programming models.

Service Fabric vs. AKS

The most significant difference between Service Fabric and AKS is that the latter only works with Docker-first applications orchestrated by Kubernetes. Service Fabric, on the other hand, supports various runtime strategies for microservices.

It can deploy Docker and Windows Server containers and supports arbitrary executables and code-level integrations as stateful services running next to containerized services.

When to use it?

You can use Azure Service Fabric to build stateful services with its handy programming model or run microservices across multiple Azure compute resources. That makes deploying applications at scale much easier – we’re talking even thousands of containers or apps per machine.

Service Fabric is a good match for applications developed using Windows Server containers or ASP.NET IIS apps. The solution opens the door to cloud-based setups for apps coming from more traditional programming paradigms. No wonder engineers use it to lift and shift existing Windows-based apps to Azure without having to rearchitect them entirely.

Another perk of Service Fabric is its tight integration with other Azure services like Azure Pipelines, Azure DevOps Services, Azure Monitor, and Azure Key Vault.

But there’s one thing that might motivate you to choose other Azure services for your containerized application. If your application doesn’t rely heavily on the Microsoft technology stack, a cloud-agnostic orchestration solution like Kubernetes – and AKS – will work better for your typical containerized app.

Difficulty level

To make the most of Azure Service Fabric, you need a certain level of familiarity with this offering. 


Pricing

There is no charge for Service Fabric itself if you run your own Service Fabric cluster – you just pay for the Azure compute resources underneath. If you use Azure Service Fabric Mesh, the fully managed variant, pricing follows that of ACI – with a 50% discount while the service is in preview.

4. Azure Batch + Batch Shipyard

What is it? 

Azure Batch allows you to run long-running batch jobs on dedicated compute capacity. Batch Shipyard, on the other hand, is an add-on for Azure Batch that provisions, executes, and monitors container-based batch processing and HPC workloads on Azure Batch. Together, they offer another method of running containers in Azure.

How does it work?

By using Azure Batch with the Batch Shipyard add-on, you can provision your containers with all of the software they need and pull the container image once the job starts. You don’t need to build out the environment before the batch process can run.

You can process jobs on demand, not on a predefined schedule – and benefit from easy scaling.

When to use it?

Use this combination for batch processes – jobs like big data workloads, reports, long-running jobs, or compute-intensive processes.

Difficulty level

The tool makes your life easier with batch jobs. You can run batch jobs in a Kubernetes cluster as well, but you need to know how to do it – knowledge of the ecosystem is key. Here, you just pay extra for Azure to take over.


Pricing

Batch pricing is based on the pay-as-you-go model with per-second billing.

5. Azure Red Hat OpenShift

What is it? 

Microsoft Azure Red Hat OpenShift is a fully managed variant of the OpenShift platform, extending Kubernetes with storage management, logging and monitoring, and image registries.

How does it work?

Running Kubernetes containers in production means that you need to manage storage, image registries, as well as logging and monitoring tools, which must all be tested and versioned together.

Azure Red Hat OpenShift lifts this task off your shoulders on a single platform, providing your application teams with the tooling they need. The platform is the product of a joint partnership between Red Hat and Microsoft. There is no need to operate any VMs or patch things. And you can still pick your networking, storage, registry, and CI/CD solutions.

When to use it?

This is a good solution if you’ve already invested a lot in the OpenShift ecosystem – existing deals with IBM and Red Hat, clusters running on-prem. If you’re starting fresh, Kubernetes on AKS probably looks more tempting because you’re not facing any of the extra charges from OpenShift.

Difficulty level

ARO provides a slew of additional management and integration services on top of Kubernetes. Furthermore, it is completely integrated into the Red Hat/IBM environment. If you’re able to utilize them, excellent – ARO could be one of the best-suited Azure services for your containerized application. If not, use a service like Azure Kubernetes Service.

This is a costly solution due to the expenses of the master, additional infrastructure nodes, and licensing. Yet, the expense of this service is justified for businesses when the extra benefits are considered. 


Pricing

Azure Red Hat OpenShift clusters run as part of your Azure subscription and are billed to it. Compute, networking, and storage resources consumed by the cluster are billed based on usage. Application nodes also carry an extra cost for the OpenShift license component, billed based on the number of application nodes and the instance type.

Here’s an example that shows this:

Take the D4as v4 VM as an example. It has 4 CPU cores and 16 GiB of memory. If you run it in regular pay-as-you-go mode, you’ll pay $0.217/hour. Adding Azure Red Hat OpenShift brings that figure to $0.388/hour – the price rises by roughly 79%.
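To put that delta in monthly terms, here’s the arithmetic behind the quoted rates, assuming a node runs around the clock (~730 hours a month):

```python
# Hourly rates quoted above for a D4as v4 node.
base_rate = 0.217  # USD/hour, pay-as-you-go
aro_rate = 0.388   # USD/hour, with the OpenShift license component

# Extra cost of the license component per node, per month (~730 h).
extra_per_month = round((aro_rate - base_rate) * 730, 2)
print(extra_per_month)  # -> 124.83
```

Multiply that by your application node count to see the licensing side of an ARO bill before compute, storage, and networking.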

6. Azure Functions

What is it? 

Azure Functions is a serverless compute service that lets developers run event-driven code without provisioning or managing the underlying infrastructure.

How does it work?

Functions deliver serverless compute for Azure. You can use Functions to build web APIs, respond to database changes, handle IoT streams, manage message queues, and more.

When to use it?

Azure Functions comes in handy for asynchronous and reactive workloads – asynchronous message processing is a prime example. But if you’re looking to build an HTTP API, expect some serious disadvantages – for example, cold start times.


Pricing

The pay-as-you-go pricing of Azure Functions is billed based on per-second resource consumption and executions. There’s a free monthly grant too.

Note that the service will create a storage account by default with each app (not included in the free grant). You’ll have to face storage and networking fees charged separately.
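As a rough model of the Consumption plan – the per-GB-second rate, per-execution rate, and free-grant figures below are assumptions based on Azure’s published pay-as-you-go pricing, so verify them against the current price sheet:

```python
# Rough Azure Functions Consumption-plan estimate. All rates and
# free-grant figures below are assumptions, not authoritative prices.

GB_SECOND_RATE = 0.000016    # USD per GB-second (assumed)
EXECUTION_RATE = 0.20 / 1e6  # USD per execution (assumed)
FREE_GB_SECONDS = 400_000    # assumed monthly free grant
FREE_EXECUTIONS = 1_000_000  # assumed monthly free grant

def monthly_cost(executions: int, avg_duration_s: float, memory_gb: float) -> float:
    """Estimate a function app's monthly bill after the free grant."""
    gb_seconds = executions * avg_duration_s * memory_gb
    compute = max(gb_seconds - FREE_GB_SECONDS, 0) * GB_SECOND_RATE
    invocations = max(executions - FREE_EXECUTIONS, 0) * EXECUTION_RATE
    return round(compute + invocations, 2)

# 2M invocations/month, 500 ms each, using 512 MiB of memory:
print(monthly_cost(2_000_000, 0.5, 0.5))  # -> 1.8
```

Note that storage-account and networking fees would come on top of this figure.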

If you want to avoid the cold start penalty we mentioned, you can get the Azure Functions Premium plan that keeps instances of your Functions perpetually warm. Pricing is still consumption-based, but you’re charged per second for the cores and memory provisioned across all the instances allocated to you.

Alternatively, you can use a Dedicated plan, which is priced at the regular service plan rates.

7. Azure Container Apps (ACA)

What is it? 

Introduced as GA in May 2022, Azure Container Apps lets you run microservices and containerized applications on a serverless platform. It offers a platform for hosting containerized apps without forcing you to manage complex infrastructures like Kubernetes clusters (which – as we mentioned in the point about AKS – has significant overhead even when you’re using a managed K8s service).

How does it work?

Azure Container Apps lets you deploy code packaged in any container on Azure. It’s unopinionated about runtimes or programming models, so you can use any container for any task.

With Container Apps, you can run multiple versions of a container and manage the app’s lifecycle. You can also autoscale apps based on any KEDA-supported scale trigger and enable HTTPS ingress without having to manage other Azure infrastructure. 

When to use it?

Common uses of Azure Container Apps include:

  • Deploying API endpoints
  • Hosting background processing applications
  • Handling event-driven processing
  • Running microservices

Difficulty level

Running containers on ACA is easier than managing Kubernetes yourself, but expect to get locked in with the vendor. If the pricing changes or the service gets sunset, moving your application will be difficult down the road. This is a serious downside that you might avoid with other Azure services for your containerized application.


Pricing

This is a little tricky, but we promise that you’ll understand Azure Container Apps pricing better by the end of this section.

First, we need to differentiate between two modes: active and inactive. Your container is active while working – for example, when it processes a request. Azure has thresholds for vCPU and bandwidth usage above which your container will be considered Active. At any other time, your container is inactive.

When using ACA, you’ll be billed based on the amount of vCPU (cores) and memory specified when provisioning the container app. The charge for memory doesn’t depend on the container’s mode – unlike the charge for vCPU, which is around 8 times lower when your container is inactive.

Then there’s the monthly free grant of 180,000 vCPU-seconds (50 hours) and 360,000 GiB-seconds (100 hours).
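Putting the active/inactive split and the flat memory charge together – the per-second rates here are placeholder values, and the 1/8th idle factor follows the "around 8 times lower" figure above:

```python
# Sketch of the ACA billing model: memory is billed at one rate in both
# modes, while inactive vCPU time costs roughly 1/8th of the active rate.
# Rates are placeholders for illustration, not current Azure prices.

ACTIVE_VCPU_RATE = 0.000024  # USD per vCPU-second (assumed)
IDLE_VCPU_RATE = ACTIVE_VCPU_RATE / 8
MEMORY_RATE = 0.000003       # USD per GiB-second (assumed)

def daily_cost(vcpus: float, mem_gib: float, active_s: int) -> float:
    """Cost of one day (86,400 s) for an app active for active_s seconds."""
    idle_s = 86_400 - active_s
    cpu = vcpus * (active_s * ACTIVE_VCPU_RATE + idle_s * IDLE_VCPU_RATE)
    memory = mem_gib * 86_400 * MEMORY_RATE  # mode-independent
    return round(cpu + memory, 2)

# A 0.5 vCPU / 1 GiB app that's active two hours a day:
print(daily_cost(0.5, 1, 7200))  # -> 0.46
```

The monthly free grant would then come off the top of these figures before you’re billed anything.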

Which Azure service is the best for your containerized application?

The answer is: it depends. Certain Azure services are more sophisticated, while others do the work for you. A fundamental requirement is transparency about the service offering, including its benefits and drawbacks.

When analyzing your options, you should definitely ask yourself these two questions:

Do you know enough about container orchestrators like Kubernetes and have enough experience with them to take on configuration tasks? Do you have a solid understanding of the Kubernetes ecosystem? 

If you don’t, Azure still has a slew of services that let you run your app without managing the orchestrator yourself. As you browse through all of the Azure services for your containerized application, cost will inevitably be one of the decisive factors. Moving apps to the cloud and into containers is simple with Azure, but there are always some added costs you should be aware of.
