Paul Reichelt-Ritter

Using MongoDB Atlas with Azure Kubernetes Service - Coded with Pulumi

For the last couple of days, I've struggled with the challenge of configuring my MongoDB Atlas DB to allow connections from my Azure Kubernetes Service - all within my favourite infrastructure-as-code solution: Pulumi.

Pulumi is an incredibly productive infrastructure-as-code solution which allows you to describe your infrastructure in a familiar programming language of your choice - in my case TypeScript.
See: https://www.pulumi.com/

Requirements

To follow this example, you have to meet the following requirements:

  1. Azure Subscription - Create your free Azure Account
  2. MongoDB Atlas Account connected with Azure - Getting Started with MongoDB Atlas on Azure
  3. NodeJS - How to install NodeJS
  4. Pulumi - Getting started with Pulumi

Summary

We will perform the following steps to allow our Kubernetes cluster to connect with MongoDB Atlas. Feel free to skip those steps you've already done.

  1. Create an AKS Cluster
  2. Deploy a MongoDB Cluster
  3. Whitelist your AKS Cluster within your MongoDB project
  4. Create Credentials and a connection string

1. Create an AKS Cluster

First, we create a resource group. A resource group is a collection of infrastructure components in Azure that belong together.

import * as resources from "@pulumi/azure-native/resources";

const resourceGroup = new resources.ResourceGroup("my-resources", {});

Next, we need a virtual network. This virtual network is the address range within which our Kubernetes nodes are created.

import * as network from "@pulumi/azure-native/network";

const virtualNetwork = new network.VirtualNetwork(
  "virtualNetwork",
  {
    addressSpace: {
      addressPrefixes: ["10.0.0.0/16"],
    },

    resourceGroupName: resourceGroup.name,
  },
  // Ignoring the subnets property is mandatory to prevent unnecessary updates as soon as a subnet is created
  { ignoreChanges: ["subnets"] },
);

Now, we can create a subnet for our Kubernetes nodes:

const kubernetesSubnet = new network.Subnet(
  "kubernetesSubnet",
  {
    addressPrefix: "10.0.0.0/22",
    resourceGroupName: resourceGroup.name,
    virtualNetworkName: virtualNetwork.name,
  },
  {
    dependsOn: [virtualNetwork],
  },
);

Finally, we can create an AKS cluster. I decided to use Linux machines and enable autoscaling from 1 to 3 nodes. With nodeVmSize you can specify the node type. k8sVersion should be one of the supported versions; these change regularly. At the very least, the latter should be read from a config file to allow easy updates of your cluster version.

import * as pulumi from "@pulumi/pulumi";
import * as containerservice from "@pulumi/azure-native/containerservice";

// Read the cluster settings from the Pulumi config, so upgrades don't require code changes.
// The fallback values below are just examples.
const clusterConfig = new pulumi.Config();
const nodeVmSize = clusterConfig.get("nodeVmSize") || "Standard_DS2_v2";
const k8sVersion = clusterConfig.require("k8sVersion");
const prefixForDns = clusterConfig.get("prefixForDns") || "my-aks";

export const managedCluster = new containerservice.ManagedCluster(
  "managedCluster",
  {
    addonProfiles: {},
    // Scale from 1 to 3 nodes and distribute them across all availability zones. (NorthEurope has 3)
    agentPoolProfiles: [
      {
        availabilityZones: ["1", "2", "3"],
        enableAutoScaling: true,
        count: 1,
        minCount: 1,
        maxCount: 3,
        enableNodePublicIP: false,
        mode: "System",
        name: "systempool",
        osType: "Linux",
        osDiskSizeGB: 30,
        type: "VirtualMachineScaleSets",
        vmSize: nodeVmSize,
        // Change next line for additional node pools to distribute across subnets
        vnetSubnetID: kubernetesSubnet.id,
      },
    ],
    // SECURITY HINT: Change authorizedIPRanges to limit access to API server
    // Changing enablePrivateCluster requires alternate access to API server (VPN or similar)
    // For debug reasons, we allow access from everywhere.
    apiServerAccessProfile: {
      authorizedIPRanges: ["0.0.0.0/0"],
      enablePrivateCluster: false,
    },
    dnsPrefix: prefixForDns,
    enableRBAC: true,
    identity: {
      type: "SystemAssigned",
    },
    kubernetesVersion: k8sVersion,
    networkProfile: {
      networkPlugin: "azure",
      networkPolicy: "azure",
      serviceCidr: "10.96.0.0/16",
      dnsServiceIP: "10.96.0.10",
    },
    resourceGroupName: resourceGroup.name,
  },
  { dependsOn: [kubernetesSubnet], ignoreChanges: ["addonProfiles"] },
);
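
If you want to check the cluster with kubectl, you can also export a kubeconfig. The following is a minimal sketch using listManagedClusterUserCredentialsOutput from @pulumi/azure-native; it's not required for the MongoDB setup, but it makes the next steps easier to verify:

// Fetch the user credentials of the cluster
const credentials = containerservice.listManagedClusterUserCredentialsOutput({
  resourceGroupName: resourceGroup.name,
  resourceName: managedCluster.name,
});

// The kubeconfig is base64 encoded, so decode it and mark it as a secret
export const kubeconfig = pulumi.secret(
  credentials.kubeconfigs[0].value.apply((encoded) =>
    Buffer.from(encoded, "base64").toString("utf-8"),
  ),
);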

2. Create a MongoDB Atlas Database

First of all, you have to log into the MongoDB Cloud website and create an API key. In my case, I created one with OWNER privileges to have the most flexibility.

I decided to store the secrets and my organization ID in my Pulumi config. To do so, use these shell commands:

# Org ID
pulumi config set mongodbOrgId YOUR_ORG_ID

# API Keys
pulumi config set mongodbAtlasPublicKey YOUR_PUBLIC_KEY
pulumi config set --secret mongodbAtlasPrivateKey YOUR_PRIVATE_KEY

Next, we read the values from the config and create a custom provider:

import * as pulumi from "@pulumi/pulumi";
import * as mongodbatlas from "@pulumi/mongodbatlas";

// Read config
const config = new pulumi.Config();

// Extract the values
const mongoDbOrgId = config.require("mongodbOrgId");
const mongodbAtlasPublicKey = config.require("mongodbAtlasPublicKey");
// Read the private key as a secret, so it never shows up in plain text
const mongodbAtlasPrivateKey = config.requireSecret("mongodbAtlasPrivateKey");

const provider = new mongodbatlas.Provider("mongodb-atlas-provider", {
  publicKey: mongodbAtlasPublicKey,
  privateKey: mongodbAtlasPrivateKey,
});

In MongoDB Atlas, every database cluster has to be part of a project. Thus, we have to create the project first, and right after that we can create the cluster.

// Create a new MongoDB Atlas project
const project = new mongodbatlas.Project(
  "humanas-dev",
  {
    name: "humanas-dev",
    orgId: mongoDbOrgId,
  },
  { provider },
);

// Deploy a MongoDB Atlas cluster in the project
const cluster = new mongodbatlas.Cluster(
  "dev-cluster",
  {
    projectId: project.id,
    name: "humanas-dev",
    providerName: "AZURE",
    providerRegionName: "EUROPE_NORTH",
    providerInstanceSizeName: "M10",
  },
  { provider },
);

It's important that you use the custom provider created above. Otherwise, you will get a 401 Unauthorized error.

3. Whitelist your AKS Cluster within your MongoDB project

This was the most difficult part for me, simply because I didn't grasp the concept at first sight. Background: MongoDB Atlas blocks every request to your database unless you whitelist the requester's IP address. So the first question was: how do you whitelist something that autoscales and is also hidden in a private network?! After some research, the solution turned out to be quite simple: we create a single public IP and attach a NAT gateway to our subnet, so that every request from AKS to MongoDB uses the same source IP address.

A Network Address Translation (NAT) service is a recommended way to handle, monitor and secure your outbound traffic.
See: What is Azure NAT Gateway

// Public IP for the Outbound Gateway
const outboundIp = new network.PublicIPAddress("outboundIp", {
  resourceGroupName: resourceGroup.name,
  publicIPAllocationMethod: "Static",
  sku: { name: "Standard" },
});

const outboundGateway = new network.NatGateway("outboundGateway", {
  resourceGroupName: resourceGroup.name,
  sku: { name: "Standard" },
  publicIpAddresses: [
    {
      id: outboundIp.id,
    },
  ],
});

Ensure that the VNet is using our outbound gateway. I decided to use it for the Kubernetes subnet, but you can also specify it for the whole virtual network.

const kubernetesSubnet = new network.Subnet(
  "kubernetesSubnet",
  {
    addressPrefix: "10.0.0.0/22",
    resourceGroupName: resourceGroup.name,
    virtualNetworkName: virtualNetwork.name,
+    natGateway: {
+      id: outboundGateway.id,
+    },
  },
  {
-    dependsOn: [virtualNetwork],
+    dependsOn: [virtualNetwork, outboundGateway],
  },
);

Next, we want to whitelist our outboundIp for our MongoDB Atlas Cluster:

const ipWhitelist = new mongodbatlas.ProjectIpAccessList(
  `whitelist-azure-atlas`,
  {
    projectId: project.id,
    ipAddress: pulumi.interpolate`${outboundIp.ipAddress}`,
    comment: `Whitelisted IP for Azure AKS Outbound NAT`,
  },
  {
    provider,
    dependsOn: [outboundIp],
  },
);
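
As a side note: ProjectIpAccessList also accepts a CIDR block instead of a single IP address, which is handy if you want to allow access from an additional network, e.g. your office. A quick sketch with a placeholder range:

const officeWhitelist = new mongodbatlas.ProjectIpAccessList(
  "whitelist-office",
  {
    projectId: project.id,
    // Placeholder range - replace it with your own network
    cidrBlock: "203.0.113.0/24",
    comment: "Whitelisted CIDR for the office network",
  },
  { provider },
);
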
4. Create Credentials and a connection string

Before we can build the connection string, we have to add a user. I decided to pick simple username/password authentication, but you can choose other authentication methods depending on your requirements:

import * as random from "@pulumi/random";

// Username for the database user (placeholder - pick your own)
const USERNAME = "defaultUser";

// Generate a password
const dbPassword = new random.RandomPassword("db-default-user-password", {
  length: 32,
  special: false,
});

const dbUser = new mongodbatlas.DatabaseUser(
  "default-user",
  {
    projectId: project.id,
    authDatabaseName: "admin",
    username: USERNAME,
    password: dbPassword.result,
    roles: [
      {
        databaseName: "admin",
        roleName: "readWriteAnyDatabase",
      },
    ],
    scopes: [
      // Restrict user access to our one cluster
      {
        name: cluster.name,
        type: "CLUSTER",
      },
    ],
  },
  {
    provider,
    dependsOn: [project, cluster],
  },
);

Finally, we can combine all this information to create a connection string:

// Name of the application database (placeholder - pick your own)
const DB_NAME = "app";

// Strip the `mongodb+srv://` prefix (14 characters), as we want to add credentials to the connection string
const srv = cluster.srvAddress.apply((val) => val.substring(14));

const url = pulumi.interpolate`mongodb+srv://${USERNAME}:${dbPassword.result}@${srv}/${DB_NAME}?retryWrites=true&w=majority`;

// Export the connection string as a secret to be available in our application deployment.
export const connectionString = pulumi.secret(url);

This connection string contains all the information required to access the MongoDB Atlas database. You can pass it to your Pod / Deployment or store it as a Kubernetes Secret.
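
Here is a minimal sketch of the latter, using @pulumi/kubernetes. It assumes a k8sProvider that was created from the kubeconfig of our AKS cluster, which is not shown in this article:

import * as k8s from "@pulumi/kubernetes";

// Store the connection string in a Kubernetes Secret, so Pods can consume it as an environment variable
const mongoConnectionSecret = new k8s.core.v1.Secret(
  "mongo-connection",
  {
    metadata: { name: "mongo-connection" },
    stringData: {
      MONGODB_URI: connectionString,
    },
  },
  { provider: k8sProvider },
);

Your Deployment can then reference the secret via envFrom or secretKeyRef instead of baking the credentials into the image.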

Thanks for reading. I hope this article helps the next person struggling with AKS and MongoDB Atlas.
