Azure’s Best-Kept Secret: Simplifying Cloud Security with Managed Identities

Did you ever let the intern touch your code, find secrets and credentials in the repository, and then have the git blame expose that it was actually you who pushed them? Yeah, me neither. This purely theoretical example can be a fear of the past when you switch to Azure’s Managed Identities.
I’m currently part of a project (which recently won an award, yay 🎉) with a cloud native, Azure-hosted backend. As tends to be the case in such systems, it’s composed of several different services which need to be able to communicate: authentication services, database, function apps, firewall, key vault, storage accounts. Multiply this across different environments (staging / production) and you are left with many connection strings and credentials which need to be managed and could potentially be leaked. One pretty simple way to mitigate this attack vector is to use Azure Managed Identities (an aside: don’t worry, this is not a sales pitch for Azure; all major cloud platforms offer similar functionality).
If you’ve ever had to deal with cloud applications, regardless of provider, you have the misfortune of knowing how frustrating the documentation can be: outdated, incomplete, at times contradictory, and painfully abstract. So in the following paragraphs, my goal is to give you a primer on managed identities and show you, in a hands-on example, how to get your own Azure function app secured and up and running in a few steps.
So… what are Managed Identities?
If you’re familiar with Azure’s Service Principals, you basically already know: Managed Identities (MI) are Service Principals (SP). Where they differ from SPs is scope. MIs can only be used to communicate from Azure resource to Azure resource; Service Principals, on the other hand, can also be used to authenticate third-party connectors, e.g., your DevOps pipeline deploying Azure resources. They are, in essence, applications whose role your app can assume to access a resource.
Azure offers two types of MIs: system-assigned and user-assigned.
- System-assigned identities (SAI) are tied directly to a single resource. They are created automatically with the resource, can only be assigned to that one resource, and are destroyed when the resource is removed.
- User-assigned identities (UAI), on the other hand, can be assigned to multiple resources and need to be managed independently.
In essence, what makes them special is that you don’t need to store any credentials in your code base. They simply need to be identified via a client ID and Azure will deal with the rest internally.
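To make this concrete, here is a minimal TypeScript sketch using the @azure/identity package (the same one we’ll use later in the function app). With a system-assigned identity the credential needs no configuration at all; a user-assigned identity is selected via its client ID, which we’ll expose through the AZURE_CLIENT_ID app setting further down.
import { DefaultAzureCredential } from "@azure/identity";

// System-assigned identity: the hosting resource has exactly one identity,
// so no client ID is needed; Azure resolves it from the environment.
const systemAssignedCredential = new DefaultAzureCredential();

// User-assigned identity: several identities may be attached to the resource,
// so we pick the one we want via its client ID.
const userAssignedCredential = new DefaultAzureCredential({
  managedIdentityClientId: process.env.AZURE_CLIENT_ID,
});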
When to choose which?
Since the lifecycles of identity and resource are coupled with SAIs, you’ll want to go with these when you want security isolation between resources. This ensures the identity is cleaned up automatically with the resource, reducing identity sprawl and orphaned identities. They are ideal if you don’t need to reuse identities.
However, in a distributed system, you’ll seldom need a role only once. So in most cases, you’ll want to use UAIs. This simplifies permission management and role assignments, especially when resources come and go often but the permissions must persist. Another consideration is that with UAIs you can pre-provision roles in advance, streamlining role assignment in automated deployments.
Best Practices
Let’s just run through a few general terms in role-based access control.
Least Privilege
A managed identity should only be allowed to access resources that are necessary for the application to function properly. We do not want to offer a global admin managed identity as an attack vector.
Lifecycle
A managed identity should live and die with the application (or applications) it is assigned to. Once an application becomes obsolete, so should its assigned identities. Again, if you work with system-assigned MIs, you don’t need to worry about this. Azure will take care of this for you. However, with user-assigned MIs, make sure you regularly remove dangling entities.
Granularity
While we do want least privilege, we also need our access management to be actually manageable. The easiest way to achieve this is to assign roles to groups instead of individual users. When users leave or new ones join, you simply update the group membership and can be sure the permissions are correct.
Best Practices in Practice
Let’s look at a simple example. We’ll build the world’s most expensive and inefficient shopping list, in the form of a function app which uploads text files to a storage blob container. As we go along, I will point out possible pitfalls ⚠️.
We’ll need the following prerequisites:
- Azure “Pay As You Go” Plan
- An Azure Subscription
- Terraform CLI installed
- Azure CLI installed
Infrastructure as Code with Terraform
Microsoft has a tutorial for deploying Azure function apps using Terraform, which I used as the basis for this project. You can try it out for yourself and come back to learn how to extend the configuration to create a managed identity for the function app.
First, define the managed identity resource. You’ll want to put it in the same resource group and location as the rest of your application.
# main.tf
resource "azurerm_user_assigned_identity" "example" {
  location            = var.resource_group_location
  name                = var.managed_identity_example
  resource_group_name = azurerm_resource_group.example.name
}
Next, assign it a role on the storage container. Here, we will follow the principle of least privilege by
- scoping the assignment to the storage container itself, not the whole storage account, and
- assigning the role "Storage Blob Data Contributor", which allows the application to manage data within the blob container but does not permit managing access control.
Note: I created two storage accounts and containers. One for the function app itself, the other for the upload of files.
# main.tf
resource "azurerm_role_assignment" "example" {
  principal_id         = azurerm_user_assigned_identity.example.principal_id
  scope                = azurerm_storage_container.data_store.id
  role_definition_name = "Storage Blob Data Contributor"
}
And finally assign the identity to the function app.
# main.tf
identity {
  type         = "UserAssigned"
  identity_ids = [azurerm_user_assigned_identity.example.id]
}
⚠️ A few notes here:
- You can define the app settings for the Azure Function app directly in the Terraform config. The default way for a function app to communicate with a storage blob is to set AzureWebJobsStorage to the connection string of the storage account. Since storing a connection string is the exact opposite of what we are trying to do, we need to override this behavior with the dedicated setting AzureWebJobsStorage__accountname and set it to the name of the storage account.
- To authorize the application, set AZURE_CLIENT_ID to the client ID of the managed identity. STORAGE_URL is the endpoint used to upload data.
# main.tf
app_settings = {
  "AzureWebJobsStorage__accountname" = azurerm_storage_account.example.name
  "AZURE_CLIENT_ID"                  = azurerm_user_assigned_identity.example.client_id
  "STORAGE_URL"                      = azurerm_storage_account.daf_data.primary_blob_endpoint
}
Here is the full definition for the flex consumption function app:
# main.tf
resource "azurerm_function_app_flex_consumption" "example" {
  name                = coalesce(var.fa_name, random_string.name.result)
  resource_group_name = azurerm_resource_group.example.name
  location            = azurerm_resource_group.example.location
  service_plan_id     = azurerm_service_plan.example.id

  storage_container_type     = "blobContainer"
  storage_container_endpoint = "${azurerm_storage_account.example.primary_blob_endpoint}${azurerm_storage_container.example.name}"
  storage_authentication_type = "StorageAccountConnectionString"
  storage_access_key          = azurerm_storage_account.example.primary_access_key

  runtime_name           = var.runtime_name
  runtime_version        = var.runtime_version
  maximum_instance_count = 50
  instance_memory_in_mb  = 2048

  app_settings = {
    "AzureWebJobsStorage__accountname" = azurerm_storage_account.example.name
    "AZURE_CLIENT_ID"                  = azurerm_user_assigned_identity.example.client_id
    "STORAGE_URL"                      = azurerm_storage_account.daf_data.primary_blob_endpoint
  }

  ### Assign the managed identity to the function app ###
  identity {
    type         = "UserAssigned"
    identity_ids = [azurerm_user_assigned_identity.example.id]
  }

  ### Enable Application Insights ###
  site_config {
    application_insights_connection_string = azurerm_application_insights.example.connection_string
    application_insights_key               = azurerm_application_insights.example.instrumentation_key
  }
}
Now we can create our resources.
1. First, initialize Terraform:
terraform init -upgrade
2. Create a plan from the definitions:
terraform plan -out main.tfplan
3. Log into your Azure account via the CLI:
az login
4. Apply the plan:
terraform apply main.tfplan
You can check the output or Azure portal to see if your resources have been created successfully.
Writing the function app
We’ll write our function app in TypeScript and use HTTP triggers. I used this template as a starting point.
To communicate with the storage blob, we’ll need a few Azure libraries. I’ll also use zod to validate incoming requests.
npm i @azure/arm-authorization @azure/storage-blob @azure/identity zod
Next, a simple POST endpoint. Basically, all it does is validate the request body and upload the file by calling createBlob().
import {
  app,
  HttpRequest,
  HttpResponseInit,
  InvocationContext,
} from "@azure/functions";
import { z } from "zod";
// Adjust this import path to wherever your createBlob helper lives.
import { createBlob } from "./createBlob";

export const PostMessageSchema = z.object({
  ingredient: z.string(),
});
export type PostMessage = z.infer<typeof PostMessageSchema>;

export async function postMessage(
  request: HttpRequest,
  context: InvocationContext,
): Promise<HttpResponseInit> {
  context.log(`Http function processed request for url "${request.url}"`);
  const body = await request.json();
  const { data, error, success } = PostMessageSchema.safeParse(body);
  if (!success) {
    return {
      status: 400,
      body: `Invalid request body: ${error.message}`,
    };
  }
  const ingredient = data.ingredient;
  const fileName = `message-${Date.now()}.txt`;
  let blobUrl: string;
  try {
    blobUrl = await createBlob(ingredient, fileName, context);
    return {
      status: 200,
      body: JSON.stringify({ blobUrl }),
      headers: {
        "Content-Type": "application/json",
      },
    };
  } catch (error) {
    context.error(`Error creating blob: ${error}`);
    return {
      status: 500,
      body: "Failed to create blob in Azure Storage.",
    };
  }
}

app.http("message", {
  methods: ["POST"],
  authLevel: "anonymous",
  handler: postMessage,
});
Now, createBlob() is where the magic happens. You could say the Azure cloud platform is a pretty big system: it has lots of different environments, historical implementations, and mechanisms to authenticate users. Not all of these mechanisms are available in every environment; e.g., the Azure CLI cannot be used to authenticate an HTTP request.
So DefaultAzureCredential is, in essence, a wrapper for all of these credential providers:
- EnvironmentCredential
- WorkloadIdentityCredential
- ManagedIdentityCredential
- SharedTokenCacheCredential
- VisualStudioCredential
- VisualStudioCodeCredential
- AzureCliCredential
- AzurePowerShellCredential
- AzureDeveloperCliCredential
- InteractiveBrowserCredential
It accepts several options as parameters, so you can pass in "what you know"; in our case, the managed identity client ID. It then checks each credential provider in turn until one returns a valid authentication result. Note that even if a credential is returned, the upload will still fail if that identity is not permitted to access the storage blob. But since we’re amazing developers and always assign our scopes correctly, that won’t happen.
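If you’d rather not carry the whole fallback chain into production, the same package also exposes the individual credentials, so you can build a narrower, explicit chain yourself. A rough sketch (not what the example app below uses), reusing the AZURE_CLIENT_ID app setting:
import {
  AzureCliCredential,
  ChainedTokenCredential,
  ManagedIdentityCredential,
} from "@azure/identity";

// Try the user-assigned managed identity first (when running in Azure),
// then fall back to the local Azure CLI login on a developer machine.
const managedIdentityClientId = process.env.AZURE_CLIENT_ID;
if (!managedIdentityClientId) {
  throw new Error("AZURE_CLIENT_ID is not set");
}

const credential = new ChainedTokenCredential(
  new ManagedIdentityCredential(managedIdentityClientId),
  new AzureCliCredential(),
);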
⚠️ Notes on development
- If you haven’t set the AzureWebJobsStorage__accountname variable, the credential creation will fail.
- When running the function locally, the ManagedIdentityCredential is not available, so you will only really be able to test the correct role assignment after deployment.
- To test the upload logic locally, you will need to replace AzureWebJobsStorage__accountname with AzureWebJobsStorage: "UseDevelopmentStorage=true" and run an Azurite Docker container (see the local-development sketch after the createBlob code below).
import { BlobServiceClient } from "@azure/storage-blob";
import { DefaultAzureCredential } from "@azure/identity";
import { InvocationContext } from "@azure/functions";

export const createBlob = async (
  message: string,
  fileName: string,
  context: InvocationContext,
) => {
  // Resolve the user-assigned managed identity via its client ID.
  const credential = new DefaultAzureCredential({
    managedIdentityClientId: process.env.AZURE_CLIENT_ID,
  });
  const blobServiceClient = new BlobServiceClient(
    process.env.STORAGE_URL,
    credential,
  );
  const blobContainerClient =
    blobServiceClient.getContainerClient("data-store");
  await blobContainerClient.createIfNotExists();
  // Upload the message as a UTF-8 text blob and return its URL.
  const blob = blobContainerClient.getBlockBlobClient(fileName);
  const buffer = Buffer.from(message, "utf-8");
  await blob.uploadData(buffer);
  context.log("uploaded blob: ", blob.url);
  return blob.url;
};
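To the local-testing note above: one way to keep the same upload logic working against Azurite is to branch on an environment flag when creating the client. A rough sketch (LOCAL_DEV is a made-up flag; "UseDevelopmentStorage=true" is the shorthand connection string Azurite understands):
import { BlobServiceClient } from "@azure/storage-blob";
import { DefaultAzureCredential } from "@azure/identity";

// Hypothetical helper: pick the blob client depending on where we run.
const getBlobServiceClient = (): BlobServiceClient =>
  process.env.LOCAL_DEV === "true"
    ? BlobServiceClient.fromConnectionString("UseDevelopmentStorage=true")
    : new BlobServiceClient(
        process.env.STORAGE_URL ?? "",
        new DefaultAzureCredential({
          managedIdentityClientId: process.env.AZURE_CLIENT_ID,
        }),
      );
In createBlob() you would then replace the hard-wired client construction with a call to this helper.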
🚀 Deploying the function app
I used a Bitbucket pipeline step to deploy the app on merge into main. Here is the step definition. But if you’re just playing around you can just as well run the same commands in your local terminal.
⚠️ When deploying via the pipeline you will need to create a service principal to authorize the pipeline to edit resources. You can follow Atlassian’s Guide on that.
- step: &function-deploy
    name: Deploy Function
    script:
      - apt update && apt install zip
      - curl -sL https://aka.ms/InstallAzureCLIDeb | bash
      - npm ci
      - npm run build
      - zip -r deployment.zip . --exclude @.funcignore --exclude .funcignore
      - az login --service-principal -u $AZURE_PIPELINE_APP_ID -p $AZURE_PIPELINE_CLIENT_SECRET --tenant $AZURE_TENANT_ID
      - echo "Deploying function to $FUNCTION_APP_NAME"
      - az functionapp deployment source config-zip -g $AZURE_RG -n $FUNCTION_APP_NAME --src deployment.zip
    caches:
      - node
      - npm
Let’s test it out with cURL.
curl --location 'https://<yourfunctionapp>.azurewebsites.net/api/message' \
--header 'Content-Type: application/json' \
--data '{"ingredient": "green eggs and ham"}'
{"blobUrl":"https://<yourstorageaccount>.blob.core.windows.net/data-store/message-1753895901597.txt"}
You could now write another Azure function which sends you an email once a day with all the ingredients. Whether cronjobs as serverless functions are in the spirit of the whole serverless concept is another thing and we won’t speak of it here. Congratulations, you’ve built the most inefficient shopping list. Hope you had fun and your application deployed on the first try.
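If you did want to try the daily-email idea anyway, a minimal timer-triggered function could look roughly like this. Treat it as a sketch: the function name, the schedule, and the plain context.log standing in for the actual email are all placeholders.
import { app, InvocationContext, Timer } from "@azure/functions";
import { BlobServiceClient } from "@azure/storage-blob";
import { DefaultAzureCredential } from "@azure/identity";

// Sketch: runs every day at 08:00 UTC, collects all uploaded ingredients
// and logs them. Swap the log for your mail provider of choice.
app.timer("dailyShoppingList", {
  schedule: "0 0 8 * * *",
  handler: async (_timer: Timer, context: InvocationContext) => {
    const credential = new DefaultAzureCredential({
      managedIdentityClientId: process.env.AZURE_CLIENT_ID,
    });
    const containerClient = new BlobServiceClient(
      process.env.STORAGE_URL ?? "",
      credential,
    ).getContainerClient("data-store");

    const ingredients: string[] = [];
    for await (const item of containerClient.listBlobsFlat()) {
      const content = await containerClient
        .getBlobClient(item.name)
        .downloadToBuffer();
      ingredients.push(content.toString("utf-8"));
    }
    context.log(`Today's shopping list: ${ingredients.join(", ")}`);
  },
});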
If you’re curious about Service Principals or enriching your development process on the cloud, let’s connect at &, where we build cloud native applications! It’s time to build!