Provision a Kubernetes Cluster

The create-k8s-cluster template provisions a Kubernetes cluster via a wizard — locally using kind or k3d, or on a cloud provider (AWS EKS, GCP GKE, Azure AKS) via Terraform. The cluster is registered in the ForgePortal catalog as a kind: resource entity.

Prerequisites

  • ForgePortal is running — see Quick Start
  • Your role is developer or higher
  • For local clusters: Docker is installed; kind or k3d CLI is installed on the machine where you will run setup.sh
  • For cloud clusters: An infra repo exists with Terraform backend configured; cloud credentials are available (AWS_PROFILE, GOOGLE_CREDENTIALS, ARM_*)

Step 1 — Open the Template

  1. Click Templates in the navigation.
  2. Find the Create Kubernetes Cluster card.
  3. Click "Create →".

Step 2 — Fill the Wizard

Common fields

| Field | Example | Notes |
|---|---|---|
| Cluster name | dev-cluster | Used for the kind/k3d cluster name or the Terraform resource name |
| Destination | kind | See options below |
| Kubernetes version | 1.29 | Used by the kind/k3d config and the Terraform provider |
| Owner | team-platform | Registered in the catalog |

Destination: kind (local)

| Field | Default | Notes |
|---|---|---|
| Worker nodes | 1 | Add more for testing multi-node scenarios |
| API server port | 6443 | Host port mapping for the API server |

Destination: k3d (local)

| Field | Default | Notes |
|---|---|---|
| Agents (worker nodes) | 1 | k3d agent count |
| API server port | 6443 | |
| Load balancer port | 8080 | Host port for the k3d load balancer |

Destination: eks (AWS)

| Field | Example | Notes |
|---|---|---|
| AWS Region | us-east-1 | |
| Node count | 3 | Managed node group size |
| Node instance type | t3.medium | |
| Infra repo | my-org/infra | Terraform module is pushed here |

Destination: gke (Google Cloud)

| Field | Example |
|---|---|
| GCP Project | my-gcp-project |
| Region | europe-west1 |
| Node count | 3 |
| Machine type | e2-standard-2 |
| Infra repo | my-org/infra |

Destination: aks (Azure)

| Field | Example |
|---|---|
| Resource Group | rg-platform |
| Location | westeurope |
| Node count | 3 |
| VM size | Standard_D2s_v3 |
| Infra repo | my-org/infra |

Step 3 — Watch the Run

| Step | What happens |
|---|---|
| template.render | Config files generated |
| scm.openPullRequest | PR opened in the infra repo (cloud destinations only) |
| catalog.registerEntity | resource:dev-cluster registered in the catalog |

For local destinations (kind / k3d), a setup.sh script and a config YAML are shown as step outputs — no PR is opened.


Step 4 — Create the Cluster

kind (local)

Copy setup.sh from the run outputs and run it:

bash setup.sh
# or manually:
kind create cluster --name dev-cluster --config kind-config.yaml

The generated kind-config.yaml looks like:

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: dev-cluster
nodes:
- role: control-plane
  kubeadmConfigPatches:
  - |
    kind: InitConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-labels: "ingress-ready=true"
  extraPortMappings:
  - containerPort: 80
    hostPort: 80
  - containerPort: 443
    hostPort: 443
- role: worker

Verify:

kubectl cluster-info --context kind-dev-cluster
kubectl get nodes

k3d (local)

bash setup.sh
# or manually:
k3d cluster create dev-cluster --config k3d-config.yaml
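
The referenced k3d-config.yaml is provided in the run outputs. A minimal sketch of what it might contain, assuming the wizard defaults from Step 2 and the k3d v1alpha5 config schema (the exact schema depends on your k3d version):

```yaml
apiVersion: k3d.io/v1alpha5
kind: Simple
metadata:
  name: dev-cluster
servers: 1          # control-plane nodes
agents: 1           # worker nodes, from the "Agents" wizard field
kubeAPI:
  hostPort: "6443"  # API server port from the wizard
ports:
  - port: 8080:80   # load balancer port from the wizard
    nodeFilters:
      - loadbalancer
```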

Verify:

kubectl cluster-info
kubectl get nodes

EKS (AWS)

Merge the PR in your infra repo, then:

cd infra/clusters/dev-cluster
terraform init
terraform plan -out=tfplan
terraform apply tfplan

# Update your kubeconfig
aws eks update-kubeconfig --name dev-cluster --region us-east-1
kubectl get nodes
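
The Terraform pushed to the infra repo is generated by the template; its exact layout depends on your template version. A sketch of what the cluster module might look like, assuming the community terraform-aws-modules/eks module and the wizard values above (the VPC variables are illustrative):

```hcl
module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~> 20.0"

  cluster_name    = "dev-cluster"
  cluster_version = "1.29"

  # Networking is assumed to come from existing infra-repo variables.
  vpc_id     = var.vpc_id
  subnet_ids = var.private_subnet_ids

  eks_managed_node_groups = {
    default = {
      instance_types = ["t3.medium"]
      min_size       = 3
      max_size       = 3
      desired_size   = 3
    }
  }
}
```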

GKE (Google Cloud)

cd infra/clusters/dev-cluster
terraform init && terraform apply -auto-approve

gcloud container clusters get-credentials dev-cluster --region europe-west1
kubectl get nodes
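
As with EKS, the generated Terraform lives in the infra repo. A sketch of a minimal GKE configuration using the wizard values, assuming the google_container_cluster resource (the real template output may split node pools into separate resources):

```hcl
resource "google_container_cluster" "dev_cluster" {
  name     = "dev-cluster"
  project  = "my-gcp-project"
  location = "europe-west1"

  initial_node_count = 3

  node_config {
    machine_type = "e2-standard-2"
  }
}
```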

AKS (Azure)

cd infra/clusters/dev-cluster
terraform init && terraform apply -auto-approve

az aks get-credentials --resource-group rg-platform --name dev-cluster
kubectl get nodes
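
A sketch of the corresponding AKS Terraform, assuming the azurerm_kubernetes_cluster resource and the wizard values above (identity and DNS settings are illustrative defaults):

```hcl
resource "azurerm_kubernetes_cluster" "dev_cluster" {
  name                = "dev-cluster"
  resource_group_name = "rg-platform"
  location            = "westeurope"
  dns_prefix          = "dev-cluster"

  default_node_pool {
    name       = "default"
    node_count = 3
    vm_size    = "Standard_D2s_v3"
  }

  identity {
    type = "SystemAssigned"
  }
}
```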

Step 5 — Deploy Monitoring (Optional)

Once your cluster is running, use the create-monitoring-stack template to deploy Prometheus + Grafana in minutes, giving you immediate visibility into cluster and workload metrics.

See Create Monitoring Stack.


Step 6 — See the Entity in the Catalog

Go to Catalog → search for dev-cluster. The kind: resource entity shows:

  • Owner and lifecycle
  • spec.type: kubernetes-cluster
  • Annotations added by the Kubernetes plugin (if configured)
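
The registered entity might look like the following sketch, assuming a Backstage-style entity format; the apiVersion, kind casing, and lifecycle value are illustrative, not confirmed by the template:

```yaml
apiVersion: forgeportal.io/v1alpha1  # hypothetical; use your instance's actual apiVersion
kind: Resource                       # referenced as resource:dev-cluster in the catalog
metadata:
  name: dev-cluster
spec:
  type: kubernetes-cluster
  owner: team-platform
  lifecycle: experimental            # illustrative value
```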

Services can declare they run on this cluster by adding to their entity.yaml:

spec:
  dependsOn:
    - resource:dev-cluster

Next Steps