
Kubernetes App services

App services are general services scoped at the Kubernetes application level. These services tend to include datastores and managers for ingress, DNS, and TLS. They can be shared among several apps or be specific to workloads, and are usually a mix of cloud provider and custom services.

Overview

We’ll explore how to set up:

- Datastores: Postgres, Redis, and MongoDB.
- General app services: the NGINX Ingress Controller.

Prerequisites

On AWS, authenticate as the admins role from the Identity stack.

$ aws sts assume-role --role-arn `pulumi stack output adminsIamRoleArn` --role-session-name k8s-admin
$ export KUBECONFIG=`pwd`/kubeconfig-admin.json

On Azure, authenticate as the ServicePrincipal from the Identity stack.

$ az login --service-principal --username $ARM_CLIENT_ID --password $ARM_CLIENT_SECRET --tenant $ARM_TENANT_ID
$ export KUBECONFIG=`pwd`/kubeconfig-admin.json

On Google Cloud, authenticate as the admins ServiceAccount from the Identity stack.

$ gcloud auth activate-service-account --key-file k8s-admin-sa-key.json
$ export KUBECONFIG=`pwd`/kubeconfig.json
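With credentials and KUBECONFIG in place, a quick sanity check confirms cluster access:

$ kubectl cluster-info
$ kubectl get nodes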

Datastores

Apps may want to persist data to databases or in-memory datastores. Often, these services are provisioned directly with the cloud provider to simplify running them and managing their lifecycles.

Postgres database

Create a Postgres database instance in AWS RDS, and store its connection information in a Kubernetes Secret for apps to refer to and consume.

import * as aws from "@pulumi/aws";
import * as random from "@pulumi/random";
import * as k8s from "@pulumi/kubernetes";

// `projectName`, `privateSubnetIds`, `securityGroupIds`, and `config` are
// assumed to be defined elsewhere in the cluster's Pulumi program.

// Generate a strong password for the Postgres DB.
const postgresDbPassword = new random.RandomString(`${projectName}-db-password`, {
    length: 20,
    special: true,
}, {additionalSecretOutputs: ["result"]}).result;

// Create a Postgres DB instance of RDS.
const dbSubnets = new aws.rds.SubnetGroup(`${projectName}-subnets`, {
    subnetIds: privateSubnetIds
});
const db = new aws.rds.Instance("postgresdb", {
    engine: "postgres",
    instanceClass: "db.t2.micro",
    allocatedStorage: 20,
    dbSubnetGroupName: dbSubnets.id,
    vpcSecurityGroupIds: securityGroupIds,
    name: "testdb",
    username: "alice",
    password: postgresDbPassword,
    skipFinalSnapshot: true,
});

// Create a Secret from the DB connection information.
const provider = new k8s.Provider("eks-provider", {kubeconfig: config.kubeconfig.apply(JSON.stringify)});
const dbConn = new k8s.core.v1.Secret("postgres-db-conn",
    {
        data: {
            host: db.address.apply(addr => Buffer.from(addr).toString("base64")),
            port: db.port.apply(port => Buffer.from(port.toString()).toString("base64")),
            username: db.username.apply(user => Buffer.from(user).toString("base64")),
            password: postgresDbPassword.apply(pass => Buffer.from(pass).toString("base64")),
        },
    },
    {provider: provider},
);
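Apps can then consume the connection information by referencing the Secret's keys. Below is a minimal sketch of injecting them into a container's environment; the Deployment name and image are hypothetical, and Kubernetes decodes the base64 values when injecting them.

import * as k8s from "@pulumi/kubernetes";

// Wire the Secret's keys into an app container's environment.
// `dbConn` and `provider` are the resources defined above.
const app = new k8s.apps.v1.Deployment("my-app", {
    spec: {
        replicas: 1,
        selector: { matchLabels: { app: "my-app" } },
        template: {
            metadata: { labels: { app: "my-app" } },
            spec: {
                containers: [{
                    name: "my-app",
                    image: "my-app:latest", // hypothetical image
                    env: [
                        { name: "DB_HOST", valueFrom: { secretKeyRef: { name: dbConn.metadata.name, key: "host" } } },
                        { name: "DB_PASSWORD", valueFrom: { secretKeyRef: { name: dbConn.metadata.name, key: "password" } } },
                    ],
                }],
            },
        },
    },
}, { provider });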

Redis datastore

Create a Redis datastore instance in AWS ElastiCache, and store its connection information in a Kubernetes ConfigMap for apps to refer to and consume.

import * as aws from "@pulumi/aws";
import * as k8s from "@pulumi/kubernetes";

// Create a Redis instance.
const cacheSubnets = new aws.elasticache.SubnetGroup(`${projectName}-cache-subnets`, {
    subnetIds: privateSubnetIds,
});
const cacheCluster = new aws.elasticache.Cluster("cachecluster", {
    engine: "redis",
    nodeType: "cache.t2.micro",
    numCacheNodes: 1,
    subnetGroupName: cacheSubnets.id,
    securityGroupIds: securityGroupIds,
});

// Create a ConfigMap from the cache connection information.
const cacheConn = new k8s.core.v1.ConfigMap("redis-db-conn",
    {
        data: {
            host: cacheCluster.cacheNodes[0].address.apply(addr => Buffer.from(addr).toString("base64")),
        },
    },
    {provider: provider},
);
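Apps can consume the endpoint by referencing the ConfigMap. A minimal sketch follows; the Deployment name and image are hypothetical, and since the host was stored base64-encoded above, the app itself must decode the value it reads.

import * as k8s from "@pulumi/kubernetes";

// Inject the Redis host from the ConfigMap into a worker container.
// `cacheConn` and `provider` are the resources defined above.
const worker = new k8s.apps.v1.Deployment("worker", {
    spec: {
        selector: { matchLabels: { app: "worker" } },
        template: {
            metadata: { labels: { app: "worker" } },
            spec: {
                containers: [{
                    name: "worker",
                    image: "worker:latest", // hypothetical image
                    env: [{
                        name: "REDIS_HOST_B64",
                        valueFrom: { configMapKeyRef: { name: cacheConn.metadata.name, key: "host" } },
                    }],
                }],
            },
        },
    },
}, { provider });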

MongoDB

Create a MongoDB database instance in Azure CosmosDB, and store its connection information in a Kubernetes Secret for apps to refer to and consume.

import * as azure from "@pulumi/azure";
import * as k8s from "@pulumi/kubernetes";
import * as pulumi from "@pulumi/pulumi";

const name = pulumi.getProject();

// Define a separate resource group for app services.
const resourceGroup = new azure.core.ResourceGroup(name);

// Create a MongoDB-flavored instance of CosmosDB.
const cosmosdb = new azure.cosmosdb.Account("k8s-az-mongodb", {
    resourceGroupName: resourceGroup.name,
    kind: "MongoDB",
    consistencyPolicy: {
        consistencyLevel: "Session",
    },
    offerType: "Standard",
    geoLocations: [
        { location: resourceGroup.location, failoverPriority: 0 },
    ],
});

// A k8s provider instance of the cluster.
const provider = new k8s.Provider(`${name}-aks`, {
    kubeconfig: config.kubeconfig,
});

// Create secret from MongoDB connection string.
const mongoConnStrings = new k8s.core.v1.Secret(
    "mongo-secrets",
    {
        metadata: { name: "mongo-secrets", namespace: config.appsNamespaceName},
        data: mongoHelpers.parseConnString(cosmosdb.connectionStrings),
    },
    { provider },
);
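The snippet above assumes a local mongoHelpers module that is not shown. A minimal sketch of what its parseConnString helper might look like, assuming it simply base64-encodes each CosmosDB connection string into Secret data (the key names are hypothetical):

import * as pulumi from "@pulumi/pulumi";

// Turn CosmosDB's connection strings into Secret data, base64-encoding
// each value as Kubernetes Secret `data` requires.
export function parseConnString(
    connStrings: pulumi.Output<string[]>,
): pulumi.Output<{[key: string]: string}> {
    return connStrings.apply(strs => {
        const data: {[key: string]: string} = {};
        strs.forEach((conn, i) => {
            data[`connection-string-${i}`] = Buffer.from(conn).toString("base64");
        });
        return data;
    });
}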

Postgres database

Create a Postgres database instance in CloudSQL, and store its connection information in a Kubernetes Secret for apps to refer to and consume.

import * as gcp from "@pulumi/gcp";
import * as k8s from "@pulumi/kubernetes";
import * as random from "@pulumi/random";

// Generate a strong password for the Postgres DB.
const postgresDbPassword = new random.RandomString(
    `${projectName}-db-password`,
    {
        length: 20,
        special: true,
    },
    { additionalSecretOutputs: ["result"] },
).result;

// Create a Postgres DB instance.
const db = new gcp.sql.DatabaseInstance("postgresdb", {
    project: config.project,
    region: "us-west1",
    databaseVersion: "POSTGRES_9_6",
    settings: { tier: "db-f1-micro" },
});

// Configure a new SQL user.
const user = new gcp.sql.User("default", {
    project: config.project,
    instance: db.name,
    password: postgresDbPassword,
});

// Create a new k8s provider.
const provider = new k8s.Provider("provider", {
    kubeconfig: config.kubeconfig,
});

// Create a Secret from the DB connection information.
const dbConn = new k8s.core.v1.Secret(
    "postgres-db-conn",
    {
        data: {
            host: db.privateIpAddress.apply(addr => Buffer.from(addr).toString("base64")),
            port: Buffer.from("5432").toString("base64"),
            username: user.name.apply(user => Buffer.from(user).toString("base64")),
            password: postgresDbPassword.apply(pass => Buffer.from(pass).toString("base64")),
        },
    },
    { provider: provider },
);

Redis datastore

Create a Redis datastore instance in Google Cloud MemoryStore, and store its connection information in a Kubernetes ConfigMap for apps to refer to and consume.

import * as gcp from "@pulumi/gcp";
import * as k8s from "@pulumi/kubernetes";

// Create a Redis instance.
const cache = new gcp.redis.Instance("redis", {
    tier: "STANDARD_HA",
    memorySizeGb: 1,
    redisVersion: "REDIS_3_2",
});

// Create a ConfigMap from the cache connection information.
const cacheConn = new k8s.core.v1.ConfigMap(
    "postgres-db-conn",
    {
        data: {
            host: cache.host.apply(addr => Buffer.from(addr).toString("base64")),
        },
    },
    { provider: provider },
);

General App Services

The full general app services stack is available on GitHub.

NGINX Ingress Controller

The NGINX Ingress Controller is a custom Kubernetes Controller. It manages L7 network ingress (north-south traffic) between external clients and the servers backing the cluster’s apps.

Install NGINX

Deploy the example YAML manifests into the ingress-nginx namespace, and publicly expose the controller to the Internet using a load-balanced Service.

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.12.0-beta.0/deploy/static/provider/cloud/deploy.yaml

Check that the NGINX deployment is running.

$ kubectl get pods -n ingress-nginx
NAME                                        READY   STATUS    RESTARTS   AGE
nginx-ingress-controller-7dcc95dfbf-k99k6   1/1     Running   0          21s

Deploy the Helm chart into the app-svcs namespace created in Configure Cluster Defaults, and publicly expose the controller to the Internet using a load-balanced Service.

Note: NGINX requires a privileged PSP given its use of allowPrivilegeEscalation: true. A sketch of such a policy follows the chart below.

import * as k8s from "@pulumi/kubernetes";

// `config` and the k8s `provider` are assumed to be defined elsewhere in the program.

// Deploy the NGINX ingress controller using the Helm chart.
const nginx = new k8s.helm.v3.Chart("nginx",
    {
        namespace: config.appSvcsNamespaceName,
        chart: "nginx-ingress",
        version: "1.24.4",
        fetchOpts: {repo: "https://charts.helm.sh/stable/"},
        values: {controller: {publishService: {enabled: true}}},
        transformations: [
            (obj: any) => {
                // Do transformations on the YAML to set the namespace
                if (obj.metadata) {
                    obj.metadata.namespace = config.appSvcsNamespaceName;
                }
            },
        ],
    },
    {providers: {kubernetes: provider}},
);
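If the cluster enforces Pod Security Policies, a policy permitting privilege escalation must also be usable by NGINX's ServiceAccount. A minimal sketch, assuming a cluster and SDK version that still serve policy/v1beta1 (PSPs were removed in Kubernetes v1.25):

import * as k8s from "@pulumi/kubernetes";

// Minimal cluster-scoped PSP allowing the privilege escalation NGINX needs.
// `provider` is the k8s provider defined above.
const nginxPsp = new k8s.policy.v1beta1.PodSecurityPolicy("nginx-psp", {
    spec: {
        allowPrivilegeEscalation: true,
        fsGroup: { rule: "RunAsAny" },
        runAsUser: { rule: "RunAsAny" },
        seLinux: { rule: "RunAsAny" },
        supplementalGroups: { rule: "RunAsAny" },
        volumes: ["*"],
    },
}, { provider });

A Role granting the use verb on this policy, bound to the controller's ServiceAccount, completes the wiring.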

Deploy a workload

Deploy kuard Pod, Service, and Ingress resources to test the NGINX ingress controller.

Create the Ingress resource for kuard that NGINX will manage by keying off the ingress.class annotation used.

NGINX fronts the app on its desired host and paths, and the app is accessible to the public internet, since it shares the public load balancer endpoint provisioned for NGINX’s Service.

Traffic is then routed to the Service backing the kuard Pod by matching the host headers and paths that NGINX expects.

$ kubectl run kuard --namespace=`pulumi stack output appsNamespaceName` --image=gcr.io/kuar-demo/kuard-amd64:blue --port=8080 --expose
$ cat > ingress.yaml << EOF
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kuard
  labels:
    app: kuard
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: apps.example.com
    http:
      paths:
        - path: "/"
          pathType: Prefix
          backend:
            service:
              name: kuard
              port:
                number: 8080
EOF
$ kubectl apply -f ingress.yaml --namespace=`pulumi stack output appsNamespaceName`

Check that the ingress is created, and after a few moments the Address will be set to the NGINX LoadBalancer Service address.

$ kubectl describe ingress kuard --namespace=`pulumi stack output appsNamespaceName`
import * as k8s from "@pulumi/kubernetes";

// Create a kuard Deployment
const name = "kuard"
const labels = {app: name}
const deployment = new k8s.apps.v1.Deployment(name,
    {
        metadata: {
            namespace: config.appsNamespaceName,
            labels: {app: name},
        },
        spec: {
            replicas: 1,
            selector: { matchLabels: labels },
            template: {
                metadata: { labels: labels, },
                spec: {
                    containers: [
                        {
                            name: name,
                            image: "gcr.io/kuar-demo/kuard-amd64:blue",
                            resources: {requests: {cpu: "50m", memory: "20Mi"}},
                            ports: [{ name: "http", containerPort: 8080 }]
                        }
                    ],
                }
            }
        },
    },
    {provider: provider}
);

// Create a Service for the kuard Deployment
const service = new k8s.core.v1.Service(name,
    {
        metadata: {labels: labels, namespace: config.appsNamespaceName},
        spec: {ports: [{ port: 8080, targetPort: "http" }], selector: labels},
    },
    {provider: provider}
);

// Export the Service name and public LoadBalancer endpoint
export const serviceName = service.metadata.name;

// Create the kuard Ingress
const ingress = new k8s.networking.v1.Ingress(name,
    {
        metadata: {
            labels: labels,
            namespace: config.appsNamespaceName,
            annotations: {"kubernetes.io/ingress.class": "nginx"},
        },
        spec: {
            rules: [
                {
                    host: "apps.example.com",
                    http: {
                        paths: [
                            {
                                path: "/",
                                pathType: "Prefix",
                                backend: {
                                    service: {
                                        name: serviceName,
                                        port: { number: 8080 },
                                    },
                                },
                            },
                        ],
                    },
                }
            ]
        }
    },
    {provider: provider}
);

Check that the ingress is created, and after a few moments the Address will be set to the NGINX LoadBalancer Service address.

$ kubectl describe ingress kuard-<POD_SUFFIX> --namespace=`pulumi stack output appsNamespaceName`

Use the NGINX LoadBalancer Service address to access kuard on its expected host and paths. We simulate the Host header using curl.

$ curl -Lv -H 'Host: apps.example.com' <INGRESS_ADDRESS>

Clean up

Delete the kuard Pod, Service, and Ingress, then remove the ingress controller.

$ kubectl delete pod/kuard svc/kuard ingress/kuard --namespace=`pulumi stack output appsNamespaceName`
$ kubectl delete -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.12.0-beta.0/deploy/static/provider/cloud/deploy.yaml

Delete the nginx definition in the Pulumi program, and run a Pulumi update.
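To apply the removal:

$ pulumi up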
