Managing Secrets with Azure Key Vault, Azure Kubernetes Service and Terraform

Viktor Bahr

28 November 2023

Introduction

The Azure Key Vault offers a convenient way to store secrets inside the Azure ecosystem. Here at neoworx, we make heavy use of it.
But when we tried using those secrets from inside a Kubernetes cluster for the first time, it proved to be more difficult than we had expected.
In this post, I’ll demonstrate our solution to making Azure Key Vault Secrets available to an Azure Kubernetes Service cluster (much of which is based on the official Azure tutorial).

Prerequisites

I will assume you have an Azure account and prior experience with Terraform. You should also have the terraform and helm binaries installed.

A Bird's Eye View

Broadly speaking, we are dealing with a connectivity issue here: the Kubernetes cluster does not (yet) know about the Key Vault secrets, how to acquire them or how to represent them internally. Fortunately, Kubernetes (and more concretely AKS) offers pre-built solutions for both of these problems.

The secret data will be made available to our cluster through the Kubernetes implementation of the Container Storage Interface (CSI, not to be confused with Crime Scene Investigation). By implementing custom drivers, this mechanism will allow us to treat the Azure Key Vault just like any other storage system we may want to make available to our container orchestration system. The data retrieved via the CSI will then be represented as Kubernetes Secrets resources, allowing our other resources to access them.

I hear you wondering: Do we need to write and deploy all of this custom connectivity logic ourselves?!
Fortunately for us, all of this has already been neatly packaged in a custom Kubernetes resource called SecretProviderClass by the nice folks over at k8s.io. All major cloud providers have added integrations for their respective platforms (including Microsoft Azure).

Because we are lazy DevOps people here at neoworx, we’ll use Terraform to automate the remaining parts of the process, from generating secrets and configuring access to deploying the Secret Provider to our cluster.

Creating Secrets

Let’s get started with creating some secrets in the Azure Key Vault using Terraform. If you prefer to create your secrets in a different way, feel free to skip this step.

First, we’ll need to set up our Azure Terraform provider.

terraform {
 required_providers {
   azurerm = {
     source  = "hashicorp/azurerm"
     version = "=3.0.0"
   }
 }
}
provider "azurerm" {
 features {}
}

For the sake of keeping things short, I will assume that you have already created an Azure Key Vault as well as a Resource Group. For more information on how to do this via Terraform, see the azurerm provider documentation.
As we’ll need additional metadata to identify the vault in the following step, let’s now access it.

data "azurerm_key_vault" "keyvault" {
 name                = "myvault"
 resource_group_name = "myresourcegroup"
}

Instead of hardcoding the values here, we could also define input variables.
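As a sketch, such variables (the names here are illustrative, not from our actual configuration) could look like this:

```hcl
# variables.tf -- variable names are illustrative
variable "key_vault_name" {
  description = "Name of the existing Azure Key Vault"
  type        = string
}

variable "resource_group_name" {
  description = "Resource group containing the vault"
  type        = string
}
```

The data source would then reference them as `name = var.key_vault_name` and `resource_group_name = var.resource_group_name`.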
Let’s get down to the secret creation. We’ll use the random_password resource from Terraform’s random provider to create it.

resource "random_password" "mysecret" {
 length           = 64
}

Now, all that is left is to store the generated secret in our vault.

resource "azurerm_key_vault_secret" "mysecret" {
 name         = "mysecret"
 value        = random_password.mysecret.result
 key_vault_id = data.azurerm_key_vault.keyvault.id
}

Setting up the Cluster and the Key Vault CSI Driver

Okay, so now that we have our secrets in the vault, let’s create an AKS cluster from which to access it.

resource "azurerm_kubernetes_cluster" "cluster" {
 name                = "example-cluster"
 location            = "West Europe"
 resource_group_name = "myresourcegroup"
 dns_prefix          = "secretexample"
 default_node_pool {
   name            = "default"
   node_count      = 1
   vm_size         = "Standard_D2_v2"
 }
 identity {
   type = "SystemAssigned"
 }
 # enable CSI driver to access key vault
 key_vault_secrets_provider {
   # update the secrets on a regular basis
   secret_rotation_enabled = true
 }
}

The last block is important: this is where we enable the CSI driver. To allow our cluster to learn about changes to the vault secrets, we also enable regular secret rotation.
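As an aside, if the driver’s default polling interval (two minutes, if I remember the provider default correctly) doesn’t suit you, the azurerm provider also exposes a secret_rotation_interval argument on the same block; treat this as a sketch and check the provider docs for your version:

```hcl
key_vault_secrets_provider {
  secret_rotation_enabled  = true
  # poll the vault for changes every 5 minutes instead of the default
  secret_rotation_interval = "5m"
}
```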
Having set up our cluster, we can now give it access to the Azure Key Vault.

resource "azurerm_key_vault_access_policy" "vaultaccess" {
 key_vault_id = data.azurerm_key_vault.keyvault.id
 tenant_id    = data.azurerm_key_vault.keyvault.tenant_id
 object_id    = azurerm_kubernetes_cluster.cluster.key_vault_secrets_provider[0].secret_identity[0].object_id
 # cluster access to secrets should be read-only
 secret_permissions = [
   "Get", "List"
 ]
}

Preparing the Secret Provider

Our AKS setup is ready, so let’s now focus on the Secret Provider.
One of the requirements of the custom resource is that we provide it with a list of vault secrets that we want to make available in the cluster. We want to avoid doing this manually inside the Kubernetes manifest; instead, we want to be able to inject it from the outside. So we do what lazy DevOps people do best: template logic in a Helm chart.

Create a new Helm chart called aks-secret-provider and remove all the default templates from the templates/ directory before adding the following secretproviderclass.yaml file.

# This is a SecretProviderClass using user-assigned identity to access an Azure Key Vault
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
 name: {{ .Release.Name }}
spec:
 provider: azure
 parameters:
   usePodIdentity: "false"
   useVMManagedIdentity: "true"
   userAssignedIdentityID: {{ .Values.clientId | quote }}
   keyvaultName: {{ .Values.vaultName | quote }}
   objects: |  # expose vault secrets defined in values
     array:
       {{- range .Values.secrets }}
       - |
         objectName: {{ . }}
         objectType: secret
         objectVersion: ""
       {{- end }}
   tenantId: {{ .Values.tenantId | quote }}
 secretObjects:  # reflect exposed objects in k8s Secret
   - data:
     {{- range .Values.secrets }}
     - objectName: {{ . }}
       key: {{ . }}
     {{- end }}
     secretName: {{ .Release.Name }}-secret
     type: Opaque

Now, all that is left is to set up some defaults in the values.yaml.

clientId: ""
secrets: []
tenantId: ""
vaultName: ""
# additional config for the CSI driver
secrets-store-csi-driver:
 # rotate the secrets every 2 minutes
 enableSecretRotation: true
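For completeness, the chart also needs a Chart.yaml at its root. A minimal sketch (the name and version simply match what we’ll pass to helm_release below) might be:

```yaml
apiVersion: v2
name: aks-secret-provider
description: Exposes Azure Key Vault secrets as a SecretProviderClass
type: application
version: 0.0.1
```

If you also vendor the upstream secrets-store-csi-driver chart as a subchart (as the secrets-store-csi-driver key in values.yaml hints at), you’d declare it under a dependencies: entry here as well.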

Deploying the Secret Provider

We are now ready to deploy the secret provider to our cluster. To do this from Terraform, we’ll first need to set up the Helm provider.

provider "helm" {
 kubernetes {
   host                   = azurerm_kubernetes_cluster.cluster.kube_config[0].host
   client_certificate     = base64decode(azurerm_kubernetes_cluster.cluster.kube_config[0].client_certificate)
   client_key             = base64decode(azurerm_kubernetes_cluster.cluster.kube_config[0].client_key)
   cluster_ca_certificate = base64decode(azurerm_kubernetes_cluster.cluster.kube_config[0].cluster_ca_certificate)
 }
}

Next, we’ll read out all of the secrets currently available in our Key Vault.

data "azurerm_key_vault_secrets" "secrets" {
 key_vault_id = data.azurerm_key_vault.keyvault.id
}

We are good to go, let’s deploy our Secret Provider.

resource "helm_release" "aks_secret_provider" {
 name    = "aks-secret-provider"
 chart   = "./aks-secret-provider"
 version = "0.0.1"
 values = [yamlencode({
   vaultName = data.azurerm_key_vault.keyvault.name
   tenantId  = data.azurerm_key_vault.keyvault.tenant_id
   clientId  = azurerm_kubernetes_cluster.cluster.key_vault_secrets_provider[0].secret_identity[0].client_id
   secrets   = data.azurerm_key_vault_secrets.secrets.names # secrets to expose
 })]
 force_update = true
}

Accessing the Secrets

Now that our provider is deployed, all that is left to do is access the secrets from our applications. For this, we’ll need to define a volume referencing our secret provider and mount that volume. Once this is done, we can access the secrets, e.g. by using secretKeyRef.

The following manifest provides a minimal test pod, mounting our secret provider.

kind: Pod
apiVersion: v1
metadata:
 name: aks-key-vault-test
spec:
 containers:
   - name: busybox
     image: registry.k8s.io/e2e-test-images/busybox:1.29-1 
     command:
       - "/bin/sleep"
       - "10000"
     volumeMounts:
     - name: secrets-store
       mountPath: "/mnt/secrets-store"
       readOnly: true
     env:
     - name: MYSECRET
       valueFrom:
         secretKeyRef:
           name: aks-secret-provider-secret
           key: mysecret
 volumes:
   - name: secrets-store
     csi:
       driver: secrets-store.csi.k8s.io
       readOnly: true
       volumeAttributes:
         secretProviderClass: "aks-secret-provider"

You can use the same strategy in your other resources, e.g. in your Deployment manifests.
That's it for today, thank you for reading.

Title photo: Federal Hall by Kathleen Gulley, CC BY-SA 4.0