Did you ever let the intern touch your code, later find secrets and credentials in the repository, only to have git blame expose that it was actually you who pushed them? Yeah, me neither. This purely theoretical example can be a fear of the past when you switch to Azure’s Managed Identities.
I’m currently part of a project (which recently won an award, yay 🎉) with a cloud native Azure-hosted backend. As it tends to be in such systems, it’s composed of several different services which need to be able to communicate — authentication services, database, function apps, firewall, key vault, storage accounts. Multiply this across different environments (staging / production) and you are left with many connection strings and credentials which need to be managed and could potentially be leaked. One pretty simple way to mitigate this attack vector is to use Azure Managed Identities (an aside: don’t worry, this is not a sales pitch for Azure; all major cloud platforms offer similar functionality).
If you’ve ever had to deal with cloud applications, regardless of provider, you have the misfortune of knowing how frustrating the documentation can be: outdated, incomplete, at times contradictory and painfully abstract. So in the following paragraphs my goal is to give you a primer on managed identities and show you, in a hands-on example, how to get your own Azure function app secured and up and running in a few steps.
So… what are Managed Identities?
If you’re familiar with Azure’s Service Principals, you basically already know — Managed Identities (MI) are Service Principals (SP). Where they differ from SPs is scope — MIs are only available to communicate from Azure resource to Azure resource, Service Principals on the other hand can also be used to authenticate third-party connectors, e.g., your DevOps pipeline to deploy Azure resources. They are applications whose role your app can assume to access a resource.
Azure allows for two types of MIs: system-assigned and user-assigned.
- System-assigned Identities (SAI) are tied directly to a resource. They are created with the resource automatically, can only be assigned to that one resource, and are destroyed when the resource is removed.
- User-assigned Identities (UAI), on the other hand, can be assigned to multiple resources and need to be managed independently.
In essence, what makes them special is that you don’t need to store any credentials in your code base. They simply need to be identified via a client ID and Azure will deal with the rest internally.
When to choose which?
Since the lifecycle of identity and resource are coupled with SAIs, you’ll want to go with these when you want security isolation between resources. This ensures automatic identity cleanup with the resource, reducing identity sprawl and orphaned identities. They are ideal if you don’t need to reuse identities.
However, in a distributed system you’ll seldom need a role only once. So in most cases you’ll want to use UAIs. This simplifies permission management and role assignments, especially when resources come and go often, but the permission must persist. Another consideration is that with UAI you can pre-provision roles in advance, streamlining role assignment in automated deployments.
Best Practices
Let’s just run through a few general terms in role-based access control.
Least Privilege
A managed identity should only be allowed to access resources which are necessary for the application to function properly. We do not want to offer a global-admin managed identity as an attack vector.
Lifecycle
A managed identity should live and die with the application (or applications) it is assigned to. Once an application becomes obsolete, so should its assigned identities. Again, if you work with system-assigned MIs, you don’t need to worry about this. Azure will take care of it for you. However, with user-assigned MIs, make sure you regularly remove dangling identities.
Granularity
While we do want least privilege, we also need our access management to be actually manageable. The easiest way to achieve this is to assign roles to groups instead of users. If users drop out or new ones are added, you can simply add them to an existing group and be sure the permissions are correct.
Best Practices in Practice
Let’s look at a simple example. We’ll build the world’s most expensive and inefficient shopping list, in the form of a function app which uploads text files to a storage blob container. As we go along, I will point out possible pitfalls ⚠️.
We’ll need the following prerequisites:
- Azure “Pay As You Go” Plan
- An Azure Subscription
- Terraform CLI installed
- Azure CLI installed
Infrastructure as Code with Terraform
Microsoft has a tutorial for deploying Azure function apps using Terraform, which I used as the basis for this project. You can try it out for yourself and come back to learn how to extend the configuration to create a managed identity for the function app.
First, define the managed identity resource. You’ll want to put it in the same resource group and location as the rest of your application.
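A minimal sketch of that resource; the identity name and the `azurerm_resource_group.rg` reference are placeholders for your own configuration:

```hcl
resource "azurerm_user_assigned_identity" "function_identity" {
  name                = "uai-shopping-list"
  resource_group_name = azurerm_resource_group.rg.name
  location            = azurerm_resource_group.rg.location
}
```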
Next, grant it access to blob storage. Here, we follow the principle of least privilege by
- scoping the role assignment directly to the storage container, not the whole storage account
- assigning the role “Storage Blob Data Contributor”, which allows the application to manage data within the blob container but does not permit managing access control.
Note: I created two storage accounts and containers. One for the function app itself, the other for the upload of files.
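A sketch of the role assignment; `azurerm_storage_container.uploads` is a placeholder for the upload container, and depending on your azurerm provider version the container’s ARM-level ID is exposed as `resource_manager_id` or simply `id`:

```hcl
resource "azurerm_role_assignment" "upload_blob_contributor" {
  # Least privilege: scope to the upload container, not the whole storage account
  scope                = azurerm_storage_container.uploads.resource_manager_id
  role_definition_name = "Storage Blob Data Contributor"
  principal_id         = azurerm_user_assigned_identity.function_identity.principal_id
}
```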
And finally assign the identity to the function app.
⚠️ A few notes here:
- You can define the Azure Function app settings directly in the Terraform config. The default way for a function app to communicate with a storage blob is to set `AzureWebJobsStorage` to the connection string of the storage account. Since this is the exact opposite of what we are trying to do, we need to override this behavior with the dedicated variable `AzureWebJobsStorage__accountName` and set it to the name of the storage account.
- To authorize the application, set `AZURE_CLIENT_ID` to the client ID of the managed identity. `STORAGE_URL` is the endpoint used to upload data.
Here is the full definition for the flex consumption function app:
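A sketch of what that definition could look like. Resource names are placeholders, and you should double-check the exact argument names against the azurerm provider docs for your version:

```hcl
resource "azurerm_function_app_flex_consumption" "app" {
  name                = "func-shopping-list"
  resource_group_name = azurerm_resource_group.rg.name
  location            = azurerm_resource_group.rg.location
  service_plan_id     = azurerm_service_plan.plan.id

  # Deployment storage for the function app itself
  storage_container_type      = "blobContainer"
  storage_container_endpoint  = "${azurerm_storage_account.functions.primary_blob_endpoint}${azurerm_storage_container.deployments.name}"
  storage_authentication_type = "SystemAssignedIdentity"

  runtime_name    = "node"
  runtime_version = "20"

  identity {
    type         = "SystemAssigned, UserAssigned"
    identity_ids = [azurerm_user_assigned_identity.function_identity.id]
  }

  app_settings = {
    # Account name instead of connection string: the runtime authenticates via identity
    AzureWebJobsStorage__accountName = azurerm_storage_account.functions.name
    # Tells DefaultAzureCredential which managed identity to use
    AZURE_CLIENT_ID = azurerm_user_assigned_identity.function_identity.client_id
    # Endpoint our upload code targets
    STORAGE_URL = azurerm_storage_account.uploads.primary_blob_endpoint
  }
}
```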
Now we can create our resources:
1. Log into your Azure account via the CLI
2. Initialize Terraform
3. Create a plan from the definitions
4. Apply the plan
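In terminal commands (note that the azurerm provider needs credentials, so make sure you are logged in before running `plan`):

```shell
az login                         # authenticate the Azure CLI
terraform init                   # download providers and modules
terraform plan -out main.tfplan  # create a plan from the definitions
terraform apply main.tfplan      # apply the plan
```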
You can check the output or Azure portal to see if your resources have been created successfully.
Writing the function app
We’ll write our function app in TypeScript and use HTTP triggers. I used this template as a starting point.
To communicate with the storage blob we’ll need a few Azure libraries. I’ll also use zod to validate incoming requests.
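The relevant imports; the package names are the standard Azure SDK for JavaScript modules:

```typescript
import { app, HttpRequest, HttpResponseInit, InvocationContext } from "@azure/functions";
import { DefaultAzureCredential } from "@azure/identity";
import { BlobServiceClient } from "@azure/storage-blob";
import { z } from "zod";
```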
Next, a simple POST endpoint. Basically all it does is validate the request body and upload the file by calling createBlob().
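A sketch of that endpoint, assuming the v4 programming model of `@azure/functions` and that zod is imported as `z`. The schema fields (`fileName`, `content`) and the route name are my own choices:

```typescript
// Request body schema: field names are illustrative
const uploadSchema = z.object({
  fileName: z.string().min(1),
  content: z.string(),
});

app.http("upload", {
  methods: ["POST"],
  authLevel: "function",
  handler: async (request: HttpRequest, context: InvocationContext): Promise<HttpResponseInit> => {
    const parsed = uploadSchema.safeParse(await request.json());
    if (!parsed.success) {
      // Reject anything that doesn't match the schema
      return { status: 400, jsonBody: parsed.error.issues };
    }
    await createBlob(parsed.data.fileName, parsed.data.content);
    return { status: 201, body: `Uploaded ${parsed.data.fileName}` };
  },
});
```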
Now, createBlob() is where the magic happens. You could say the Azure cloud platform is a pretty big system: it has lots of different environments, historical implementations and mechanisms to authenticate users. Not all of these mechanisms are available in every environment; e.g., the Azure CLI cannot be used to authenticate an HTTP request.
So the DefaultAzureCredential in essence is a wrapper for all of these credential providers:
- EnvironmentCredential
- WorkloadIdentityCredential
- ManagedIdentityCredential
- SharedTokenCacheCredential
- VisualStudioCredential
- VisualStudioCodeCredential
- AzureCliCredential
- AzurePowerShellCredential
- AzureDeveloperCliCredential
- InteractiveBrowserCredential
It accepts a number of options as parameters, so you can pass in “what you know”, in our case the managed identity’s client ID. It then checks each credential provider in turn until it finds one which returns a valid authorization result. Note that even if a credential is returned, the upload will still fail if that identity is not permitted to access the storage blob. But since we’re amazing developers and always assign our scopes correctly, that won’t happen.
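With that in mind, a possible createBlob() implementation, assuming `DefaultAzureCredential` and `BlobServiceClient` are imported from `@azure/identity` and `@azure/storage-blob`. The container name “uploads” is an assumption; `STORAGE_URL` and `AZURE_CLIENT_ID` come from the app settings we defined in Terraform:

```typescript
async function createBlob(fileName: string, content: string): Promise<void> {
  // Pass "what we know": the managed identity's client ID
  const credential = new DefaultAzureCredential({
    managedIdentityClientId: process.env.AZURE_CLIENT_ID,
  });
  const service = new BlobServiceClient(process.env.STORAGE_URL!, credential);
  const container = service.getContainerClient("uploads");
  // Create or overwrite the blob with the shopping list content
  await container.getBlockBlobClient(fileName).upload(content, Buffer.byteLength(content));
}
```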
⚠️ Notes on development
- If you haven’t set the `AzureWebJobsStorage__accountName` variable, the credential creation will fail.
- When running the function locally, the `ManagedIdentityCredential` is not available, so you will only really be able to test the correct role assignment after deployment.
- To test the upload logic locally, you will need to replace `AzureWebJobsStorage__accountName` with `AzureWebJobsStorage: "UseDevelopmentStorage=true"` and run an Azurite Docker container.
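For local development against Azurite, a local.settings.json along these lines should work; the endpoint and the `devstoreaccount1` dev account are Azurite’s documented defaults:

```json
{
  "IsEncrypted": false,
  "Values": {
    "FUNCTIONS_WORKER_RUNTIME": "node",
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "STORAGE_URL": "http://127.0.0.1:10000/devstoreaccount1"
  }
}
```

You can start Azurite with `docker run -p 10000:10000 mcr.microsoft.com/azure-storage/azurite`. Keep in mind that the `DefaultAzureCredential` path won’t apply locally; for upload tests you’d construct the blob client from the development connection string instead.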
🚀 Deploying the function app
I used a Bitbucket pipeline step to deploy the app on merge into main. Here is the step definition. But if you’re just playing around you can just as well run the same commands in your local terminal.
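A sketch of such a step, assuming repository variables for the service principal credentials and an image that ships both Node and the Azure CLI (the variable names, zip layout and app name are my own placeholders):

```yaml
pipelines:
  branches:
    main:
      - step:
          name: Build and deploy function app
          image: mcr.microsoft.com/azure-cli:latest  # add Node, or use a custom image
          script:
            - npm ci && npm run build
            - zip -r function.zip . -x "*.git*"
            - az login --service-principal -u $AZURE_APP_ID -p $AZURE_CLIENT_SECRET --tenant $AZURE_TENANT_ID
            - az functionapp deployment source config-zip -g $RESOURCE_GROUP -n $FUNCTION_APP_NAME --src function.zip
```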
⚠️ When deploying via the pipeline you will need to create a service principal to authorize the pipeline to edit resources. You can follow Atlassian’s Guide on that.
Let’s test it out with cURL.
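Something along these lines, with a made-up hostname, the function key placeholder left for you to fill in, and the field names from our schema:

```shell
curl -X POST "https://func-shopping-list.azurewebsites.net/api/upload?code=<function-key>" \
  -H "Content-Type: application/json" \
  -d '{"fileName": "groceries.txt", "content": "milk\neggs\nbread"}'
```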
You could now write another Azure function which sends you an email once a day with all the ingredients. Whether cronjobs as serverless functions are in the spirit of the whole serverless concept is another thing and we won’t speak of it here. Congratulations, you’ve built the most inefficient shopping list. Hope you had fun and your application deployed on the first try.
If you’re curious about Service Principals or enriching your development process on the cloud, let’s connect at &, where we build cloud native applications! It’s time to build!
Links
- Demystifying Service Principals — Managed Identities
- DefaultAzureCredentials Under the Hood
- Best practices for Azure RBAC
- Ditch Connection Strings, Embrace Secure Azure Access
- Managed Identities Overview