Azure Functions can be used as a lightweight platform for building APIs. They support a number of helpful features for API developers, including custom routes and a variety of output bindings that can implement complex business rules. They also have a consumption-based, pay-per-use pricing model, which keeps costs low while traffic is light but can scale or burst to meet higher levels of demand.
The Azure Functions platform also provides Azure Functions Proxies, which gives another set of features to further extend APIs built on top of Azure Functions. These features include more complex routing rules and the ability to do a small amount of request rewriting. These features have led some people to compare Azure Functions Proxies to a very lightweight API management system. However, there are a number of features of an API management platform that Azure Functions Proxies doesn't support. One common feature of an API management layer is the ability to perform rate limiting on incoming requests.
Azure API Management is a hosted API management service that provides a large number of features. Until recently, API Management's pricing model was often prohibitive for small APIs, since using it for production workloads required provisioning a service instance with a minimum cost of around AUD$200 per month. But Microsoft recently announced a new consumption tier for API Management. Based on a similar pricing model to Azure Functions, the consumption tier for API Management bills per request, which makes it a far more appealing choice for serverless APIs. APIs can now use features like rate limiting - and many others - without needing to commit to a large monthly expense.
In this post I'll describe how Azure Functions and the new API Management pricing tier can be used together to build a simple serverless API with rate limiting built in, and at a very low cost per transaction.
Note: this new tier is in preview, and so isn't yet ready for production workloads - but it will hopefully be generally available and supported soon. In the meantime, the preview is only available in a subset of Azure regions. For my testing I've been using Australia East.
In this example, we'll build a simple serverless API that would benefit from rate limiting. In our example function we simulate performing some business logic to calculate shipping rates for orders. Our hypothetical algorithm is very sophisticated, and so we may later want to monetise our API to make it available for high-volume users. In the meantime we want to allow our customers to try it out a little bit for free, but we want to put limits around their use.
There may be other situations where we need rate limiting too - for example, if we have a back-end system we call into that can only cope with a certain volume of requests, or that bills us when we use it.
First, let's write a very simple function to simulate some custom business logic.
For simplicity I'm going to write a C# script version of an Azure Function. You could easily change this to a precompiled function, or use any of the other languages that Azure Functions supports.
Our simulated function logic is as follows:
Receive an HTTP request with a body containing some shipping details.
Calculate the shipping cost.
Return the shipping cost.
In our simulation we'll just make up a random value, but of course we may have much more sophisticated logic in future. We could also call into other back-end functions or APIs too.
Here's our function code:
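The following is a minimal C# script (run.csx) sketch of that logic. The request body is parsed but its fields are otherwise ignored, and the cost is just a random value standing in for real business rules - so treat the exact shape of this code as an assumption rather than a definitive implementation:

```csharp
#r "Newtonsoft.Json"

using System;
using System.IO;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Logging;
using Newtonsoft.Json;

public static async Task<IActionResult> Run(HttpRequest req, ILogger log)
{
    // Read the shipping details from the request body.
    var requestBody = await new StreamReader(req.Body).ReadToEndAsync();
    dynamic shippingDetails = JsonConvert.DeserializeObject(requestBody);

    if (shippingDetails == null)
    {
        return new BadRequestObjectResult("Please pass shipping details in the request body.");
    }

    // Simulate our sophisticated algorithm with a random value.
    var shippingCost = Math.Round(new Random().NextDouble() * 100, 2);

    return new OkObjectResult(new { shippingCost });
}
```

Posting any JSON body to the function should return a response along the lines of { "shippingCost": 12.34 }.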
If we paste this code into the Azure Functions portal, we'll be able to try it out, and sure enough we can get a result:
API Management Policy
Now that we've got our core API function working, the next step is to put an API Management gateway in front of it so we can apply our rate limiting logic. API Management works in terms of policies that are applied to incoming requests. When we work with the consumption tier of API Management we can make use of the policy engine, although there are some limitations. Even with these limitations, policies are very powerful and let us express and enforce a lot of complex rules. A full discussion of API Management's policy system is beyond the scope of this post, but I recommend reviewing the policy documentation.
Here is a policy that we can use to perform our rate limiting:
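A sketch of such a policy, using API Management's rate-limit-by-key policy with the numbers used throughout this post - three calls per 15-second window, keyed by the caller's IP address:

```xml
<policies>
    <inbound>
        <base />
        <!-- Allow 3 calls per 15-second window per caller IP address. -->
        <rate-limit-by-key
            calls="3"
            renewal-period="15"
            counter-key="@(context.Request.IpAddress)" />
    </inbound>
    <backend>
        <base />
    </backend>
    <outbound>
        <base />
    </outbound>
    <on-error>
        <base />
    </on-error>
</policies>
```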
This policy uses the caller's IP address as the rate limit key. This means that if the same IP address makes three API calls within a 15-second period, it will get rate limited and told to try again later. Of course, we can adjust the lockout time, the number of calls allowed, and even the way that we group requests together when determining the rate limit.
Because we may have additional APIs in the future that would be subject to this rate limit, we'll create an API Management product and apply the policy to that. This means that any APIs we add to that product will have this policy applied.
Securing the Connection
Of course, there's not much point in putting an API Management layer in front of our function API if someone can simply go around it and call the function directly. There are a variety of ways of securing the connection between an API Management instance and a back-end Azure Functions app, including using function keys, function host keys, and Azure AD tokens. In other tiers of API Management you can also restrict access using the gateway's IP address, but the consumption tier doesn't give us a static IP address to whitelist.
For this example we'll use the function key for simplicity. (For a real production application I'd recommend using a different security model, though.) This means that we will effectively perform a key exchange:
Requests will arrive into the API Management service without any keys.
The API Management service will perform its rate limiting logic.
If this succeeds, the API Management service will call into the function and pass in the function key, which only it knows.
In this way, we're treating the API Management service as a trusted subsystem - we're configuring it with the credentials (i.e. the function key) necessary to call the back-end API. Azure API Management provides a configuration system to load secrets like this, but for simplicity we'll just inject the key straight into a policy. Here's the policy we've used:
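One way to sketch this is with a set-header policy that attaches the key via the x-functions-key header (passing it as the code query string parameter would work too). The {functionKey} token here is a placeholder, not real syntax - it stands in for the value we substitute when the policy is deployed:

```xml
<policies>
    <inbound>
        <base />
        <!-- Pass the function's key to the back end. The placeholder is
             replaced with the real key when the policy is deployed. -->
        <set-header name="x-functions-key" exists-action="override">
            <value>{functionKey}</value>
        </set-header>
    </inbound>
    <backend>
        <base />
    </backend>
    <outbound>
        <base />
    </outbound>
    <on-error>
        <base />
    </on-error>
</policies>
```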
We'll inject the function key into the policy at the time we deploy the policy.
As this logic is specific to our API, we'll apply this policy to the API and not to our product.
Deploying Through an ARM Template
We'll use an ARM template to deploy this whole example. The template performs the following actions, approximately in this order:
Deploys the Azure Functions app.
Adds the shipping calculator function into the app using the deployment technique I discussed in a previous post.
Deploys an API Management instance using the consumption tier.
Creates an API in our API Management instance.
Configures an API operation to call into the shipping calculator function.
Adds a policy to the API operation to add the Azure Functions host key to the outbound request to the function.
Creates an API Management product for our shipping calculator.
Adds a rate limit policy to the product.
Here's the ARM template:
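An abbreviated sketch of the API Management portion of such a template follows. The function app resources are omitted for brevity, and the resource names, publisher details, and URL template are all placeholder assumptions - check the ARM reference for the exact property names before relying on this:

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "apimInstanceName": { "type": "string" },
    "functionAppHostname": { "type": "string" },
    "functionKey": { "type": "securestring" }
  },
  "resources": [
    {
      "type": "Microsoft.ApiManagement/service",
      "apiVersion": "2018-06-01-preview",
      "name": "[parameters('apimInstanceName')]",
      "location": "[resourceGroup().location]",
      "sku": { "name": "Consumption", "capacity": 0 },
      "properties": {
        "publisherEmail": "admin@example.com",
        "publisherName": "Example Publisher"
      }
    },
    {
      "type": "Microsoft.ApiManagement/service/apis",
      "apiVersion": "2018-06-01-preview",
      "name": "[concat(parameters('apimInstanceName'), '/shipping')]",
      "dependsOn": [
        "[resourceId('Microsoft.ApiManagement/service', parameters('apimInstanceName'))]"
      ],
      "properties": {
        "displayName": "Shipping Calculator",
        "path": "shipping",
        "protocols": [ "https" ],
        "serviceUrl": "[concat('https://', parameters('functionAppHostname'), '/api')]"
      }
    },
    {
      "type": "Microsoft.ApiManagement/service/apis/operations",
      "apiVersion": "2018-06-01-preview",
      "name": "[concat(parameters('apimInstanceName'), '/shipping/calculate')]",
      "dependsOn": [
        "[resourceId('Microsoft.ApiManagement/service/apis', parameters('apimInstanceName'), 'shipping')]"
      ],
      "properties": {
        "displayName": "Calculate Shipping Cost",
        "method": "POST",
        "urlTemplate": "/CalculateShipping"
      }
    },
    {
      "type": "Microsoft.ApiManagement/service/apis/operations/policies",
      "apiVersion": "2018-06-01-preview",
      "name": "[concat(parameters('apimInstanceName'), '/shipping/calculate/policy')]",
      "dependsOn": [
        "[resourceId('Microsoft.ApiManagement/service/apis/operations', parameters('apimInstanceName'), 'shipping', 'calculate')]"
      ],
      "properties": {
        "format": "xml",
        "value": "[concat('<policies><inbound><base /><set-header name=\"x-functions-key\" exists-action=\"override\"><value>', parameters('functionKey'), '</value></set-header></inbound><backend><base /></backend><outbound><base /></outbound><on-error><base /></on-error></policies>')]"
      }
    },
    {
      "type": "Microsoft.ApiManagement/service/products",
      "apiVersion": "2018-06-01-preview",
      "name": "[concat(parameters('apimInstanceName'), '/shipping-product')]",
      "dependsOn": [
        "[resourceId('Microsoft.ApiManagement/service', parameters('apimInstanceName'))]"
      ],
      "properties": {
        "displayName": "Shipping Calculator Product",
        "state": "published",
        "subscriptionRequired": false
      }
    },
    {
      "type": "Microsoft.ApiManagement/service/products/apis",
      "apiVersion": "2018-06-01-preview",
      "name": "[concat(parameters('apimInstanceName'), '/shipping-product/shipping')]",
      "dependsOn": [
        "[resourceId('Microsoft.ApiManagement/service/products', parameters('apimInstanceName'), 'shipping-product')]",
        "[resourceId('Microsoft.ApiManagement/service/apis', parameters('apimInstanceName'), 'shipping')]"
      ]
    },
    {
      "type": "Microsoft.ApiManagement/service/products/policies",
      "apiVersion": "2018-06-01-preview",
      "name": "[concat(parameters('apimInstanceName'), '/shipping-product/policy')]",
      "dependsOn": [
        "[resourceId('Microsoft.ApiManagement/service/products', parameters('apimInstanceName'), 'shipping-product')]"
      ],
      "properties": {
        "format": "xml",
        "value": "<policies><inbound><base /><rate-limit-by-key calls=\"3\" renewal-period=\"15\" counter-key=\"@(context.Request.IpAddress)\" /></inbound><backend><base /></backend><outbound><base /></outbound><on-error><base /></on-error></policies>"
      }
    }
  ]
}
```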
There's a lot going on here, and I recommend reading the API Management documentation for further detail on each of these. One important note: whenever you interact with an API Management instance on the consumption tier using ARM templates, you must use API version 2018-06-01-preview or later.
Calling our API
Now that we've deployed our API we can call it through our API Management gateway's public hostname. In my case I used Postman to make some API calls. The first few calls succeeded:
But then after I hit the rate limit, as expected I got an error response back:
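When the rate-limit-by-key policy rejects a call, API Management typically returns a response along these lines (the retry interval counts down over the renewal period, and the exact wording may vary):

```http
HTTP/1.1 429 Too Many Requests
Retry-After: 13

{
    "statusCode": 429,
    "message": "Rate limit is exceeded. Try again in 13 seconds."
}
```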
Trying again 13 seconds later, the request succeeded. So we can see our API Management instance is configured correctly and is performing rate limiting as we expected.
With the new consumption tier of Azure API Management, it's now possible to have a low-cost set of API management features to deploy alongside your Azure Functions APIs. Of course, your APIs could be built on any technology, but if you are already in the serverless ecosystem then this is a great way to protect your APIs and back-ends. Plus, if your API grows to a point where you need more of the features of Azure API Management that aren't provided in the consumption tier, or if you want to switch to a fixed-cost pricing model, you can always upgrade your API Management instance to one of the higher tiers. You can do this by simply modifying the sku property on the API Management resource within the ARM template.