
Comparing Azure Functions to AWS Lambda - A deep dive


Stef Ceyssens

Senior cloud consultant


Many people who come from AWS Lambda struggle to find their footing with Azure Functions. This is mainly due to the different choices Azure made, compared to AWS, in building its Functions-as-a-Service offering.

This blog is not intended to be yet another theoretical comparison of AWS Lambda and Azure Functions. Instead of talking about cost and language support, we will dive deeper into how development and building architectures differ on Azure Functions compared to AWS Lambda.

Azure vs AWS - A different approach

AWS is undoubtedly the original creator of serverless functions in the cloud. Back in 2014 it launched AWS Lambda, and with it created yet another abstraction layer on top of Infrastructure as a Service and Platform as a Service.

AWS took advantage of its head start and built the underlying infrastructure for AWS Lambda from scratch. Using technology like Firecracker, it was able to make this infrastructure highly scalable and reduce cold start times.

Azure, however, did not build the underlying infrastructure from scratch. Instead it reused its existing PaaS offering, App Service, which already leans heavily towards serverless: much of the server management is done by Azure, whereas AWS Elastic Beanstalk still requires quite some setup and configuration in many cases.

This difference in approach has resulted in AWS being better at cold starts. On the other hand, it also left AWS with a really basic service which, for example, back in 2015 did not even support environment variables. Azure, by reusing an existing service, was able to provide more features around function apps from the start. This difference in approach also led to the differences we still see today, which we will go over in more depth below.

Azure function app components

In Azure we have an Azure function app which can contain multiple Azure functions.

This additional layer comes as a bit of a surprise to AWS Lambda users. It often results in developers comparing one Azure function with one AWS Lambda function. However, this is not correct. A function app is one scalable unit on which things like environment variables and the hosting of the functions are defined. For this reason we should compare one function app with one AWS Lambda function.

But what, then, is the additional layer called an Azure function? Well, in my opinion it is a layer that we are missing in AWS. When developing a serverless API, AWS best practices tell us to use a separate AWS Lambda function for each endpoint. In reality, however, this creates a lot of development and deployment overhead; code reuse, for example, becomes more complex. For this reason we often use one Lambda function per resource in the API: the product CRUD operations are handled by one Lambda function, the user CRUD operations by another.

This approach creates the need for routing logic. Based on the API method (e.g. POST vs PUT) and path, we want to route the incoming request to the right code running in the Lambda function. In Azure, however, this routing logic is not needed. Instead, the function app will, based on the API request, route the request to the correct function inside that function app, while it remains possible to share code between the different functions inside the function app.
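To make the contrast concrete, here is a minimal sketch of the routing boilerplate a resource-level AWS Lambda function typically carries. The handler names (listProducts, createProduct) and the event shape are hypothetical, modeled on an API Gateway proxy integration:

```javascript
// Hypothetical handlers for the "products" resource.
function listProducts() {
  return { statusCode: 200, body: JSON.stringify([]) };
}

function createProduct(body) {
  return { statusCode: 201, body };
}

// Lambda-side routing: dispatch on HTTP method + resource path.
// This is exactly the layer Azure function apps provide for you.
function handler(event) {
  const key = `${event.httpMethod} ${event.resource}`;
  switch (key) {
    case 'GET /products':
      return listProducts();
    case 'POST /products':
      return createProduct(event.body);
    default:
      return { statusCode: 404, body: 'Not found' };
  }
}
```

In an Azure function app, each function instead declares its own route and allowed methods, and the platform dispatches the incoming request, so this switch statement disappears from your code.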

One Function app = one process

With this additional layer in an Azure function app, one might get the feeling that each function in a function app scales independently or runs in an independent process. However, that is not the case. All functions in a function app share the same CPU and memory. Moreover, they run in the same process! This means, for example, that if you have 3 NodeJS functions running in 1 function app, there will be only one event loop!

This means you should avoid long-running blocking operations in any of the functions, because they will also impact the other functions in the function app. If you create such a blocking operation and another call is made to your API, Azure will need to spin up a separate function app instance.
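A minimal sketch of what such a blocking operation looks like in NodeJS. While the busy-wait below spins, nothing else can run on the event loop, so every other function in the same function app is stalled:

```javascript
// Busy-wait that blocks the shared event loop for `ms` milliseconds.
// Any other function in this function app's process cannot handle a
// request until it returns, because they share this one event loop.
function blockFor(ms) {
  const end = Date.now() + ms;
  while (Date.now() < end) {
    // spinning: timers, I/O callbacks and other requests are all starved
  }
}

const start = Date.now();
blockFor(50);
const elapsed = Date.now() - start;
console.log(`event loop was blocked for ~${elapsed} ms`);
```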

This also means that global variables, for example in NodeJS, are shared across the different functions. This can be useful, but it can also lead to hard-to-debug issues when two functions overwrite the same global variable.
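A small sketch of that sharing, with two hypothetical handlers standing in for two functions of the same function app. Module-scope state behaves like a global because both run in the same Node.js process:

```javascript
// Module-scope variable: shared by every function in the function app.
let lastCaller = null;

// Hypothetical handler for a "products" function.
function productsHandler() {
  lastCaller = 'products'; // writes the shared variable
  return { body: 'products handled' };
}

// Hypothetical handler for a "users" function in the SAME function app.
function usersHandler() {
  lastCaller = 'users'; // silently overwrites productsHandler's value
  return { body: 'users handled' };
}

productsHandler();
usersHandler();
// Whichever function ran last wins: a classic hard-to-debug situation.
console.log(lastCaller);
```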

 

Many developers coming from AWS Lambda will see this as a bug rather than a feature, mainly because you lose the tight request scope that AWS Lambda provides by limiting a Lambda instance to one request at a time. From a cold start perspective, however, Azure function apps have the benefit that a single function app can serve multiple requests in parallel, which in the case of NodeJS optimizes the utilization of the server resources and the event loop.
Another benefit is that you can reuse database connections across the different functions in a function app, and reduce the number of open database connections needed, because multiple parallel requests can reuse the same connection.

The same is true for the retrieval of secrets from, for example, Azure Key Vault: you only need to retrieve them once per function app, regardless of the number of functions inside it.
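A sketch of the usual pattern: cache the expensive resource (a secret, a database connection) at module scope so it is fetched once per function app process and reused by every function and every parallel request. fetchSecretFromKeyVault below is a hypothetical stand-in for a real Key Vault call:

```javascript
let fetchCount = 0;        // counts how often we actually hit "Key Vault"
let cachedSecretPromise = null;

// Hypothetical stand-in for a real Azure Key Vault request.
async function fetchSecretFromKeyVault() {
  fetchCount += 1;
  return { value: 'db-password' };
}

function getSecret() {
  if (!cachedSecretPromise) {
    // Cache the promise itself, so parallel requests arriving before the
    // first fetch resolves still share one in-flight call.
    cachedSecretPromise = fetchSecretFromKeyVault();
  }
  return cachedSecretPromise;
}

// Two functions (or two parallel requests) reuse the same single fetch.
Promise.all([getSecret(), getSecret()]).then(() => {
  console.log(`Key Vault was called ${fetchCount} time(s)`);
});
```

Caching the promise rather than the resolved value is a common detail here: it prevents a burst of parallel cold requests from each triggering its own fetch.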

Native integrations reduce the amount of code

Like AWS Lambda, Azure Functions has a lot of native integrations with Azure services and even with third-party services. This enables you to trigger a function on almost any event that occurs in Azure. However, Azure function apps go a bit further: where AWS Lambda only lets you configure triggers, Azure function apps also let you configure input and output bindings.

With these bindings, we can configure a function in a function app to be triggered by an API call, for example PUT /products/{product-id} to update an existing product. Next, we can configure the input binding to automatically retrieve the product from Azure Cosmos DB based on the provided product id; this product entity will be available in the code in a variable whose name you define. We then write a little bit of code to validate the product update. Using the output binding, we define a variable in our code and tell the Azure function app to store its value, in this case JSON, in a Cosmos DB container after the function invocation has ended.

Using these input and output bindings, we are able to create small CRUD APIs without the need to manage the database connection. A full list of available input and output bindings can be found here (https://docs.microsoft.com/en-us/azure/azure-functions/functions-triggers-bindings?tabs=csharp#supported-bindings).
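The setup described above could look roughly like the function.json below. This is a sketch only: the database, collection, and connection-setting names are made up, and the exact property names vary with the Cosmos DB extension version in use:

```json
{
  "bindings": [
    {
      "type": "httpTrigger",
      "direction": "in",
      "name": "req",
      "methods": [ "put" ],
      "route": "products/{productId}"
    },
    {
      "type": "cosmosDB",
      "direction": "in",
      "name": "existingProduct",
      "databaseName": "shop",
      "collectionName": "products",
      "id": "{productId}",
      "connectionStringSetting": "CosmosDbConnection"
    },
    {
      "type": "cosmosDB",
      "direction": "out",
      "name": "updatedProduct",
      "databaseName": "shop",
      "collectionName": "products",
      "connectionStringSetting": "CosmosDbConnection"
    },
    {
      "type": "http",
      "direction": "out",
      "name": "res"
    }
  ]
}
```

The function code then reads existingProduct and assigns the validated update to updatedProduct; the runtime handles both Cosmos DB round trips.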

 

Next to these input and output bindings, Azure function apps also provide tight integration with Azure Key Vault. This allows you to define an environment variable in a function app whose value is actually a reference to a secret in Key Vault. On startup, the function app automatically retrieves the secret from Key Vault without you having to write any code. To enable this, you need to give your function app access to the secret in Key Vault.
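As a sketch, such an app setting uses the Key Vault reference syntax; the vault and secret names below are made up:

```json
{
  "DB_PASSWORD": "@Microsoft.KeyVault(SecretUri=https://my-vault.vault.azure.net/secrets/db-password/)"
}
```

To the code, DB_PASSWORD is just an ordinary environment variable; the function app resolves the reference against Key Vault at startup.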

I really hope to see this way of retrieving secrets become available in AWS Lambda too, for secrets stored in AWS Secrets Manager or Parameter Store. It would remove a decent amount of boilerplate code.

What’s next?

In 2014, we were thrilled to jump on the AWS Lambda train, and we were able to overlook its many limitations because it gave immeasurable benefits in many of our projects. With AWS being first, Azure had to make up for lost time, and in my opinion it managed to close the gap, albeit in its own unique way. With both Azure and AWS active on the serverless front, we are sure they will challenge and learn from each other, which will in the end benefit us and all other serverless-minded companies.
