Cluster customizations
The cluster configuration handlers wrap all the other mutation handlers in a convenient single patch for inclusion in
your ClusterClasses, allowing for a single configuration variable with nested values. This provides the most flexibility
with the least configuration.
To enable the handler, add the provider-specific clusterconfigvars and clusterconfigpatch external patches on the ClusterClass. This will enable all of the generic cluster customizations, along with the relevant provider-specific variables.
Regardless of provider, a single variable called clusterConfig will be available for use on the ClusterClass. The schema (and therefore the configuration options) will be customized for each provider. To use the exposed configuration options, specify the desired values on the Cluster resource:
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
name: <NAME>
spec:
topology:
variables:
- name: clusterConfig
value:
kubernetesImageRepository: "my-registry.io/my-org/my-repo"
etcd:
image:
repository: my-registry.io/my-org/my-repo
tag: "v3.5.99_custom.0"
extraAPIServerCertSANs:
- a.b.c.example.com
- d.e.f.example.com
proxy:
http: http://example.com
https: https://example.com
additionalNo:
- no-proxy-1.example.com
- no-proxy-2.example.com
imageRegistries:
credentials:
- url: https://my-registry.io
secretRef:
name: my-registry-credentials
cni:
provider: calico
AWS
See AWS customizations for the AWS-specific customizations.
apiVersion: cluster.x-k8s.io/v1beta1
kind: ClusterClass
metadata:
name: <NAME>
spec:
patches:
- name: cluster-config
external:
generateExtension: "awsclusterconfigpatch.cluster-api-runtime-extensions-nutanix"
discoverVariablesExtension: "awsclusterconfigvars.cluster-api-runtime-extensions-nutanix"
Docker
See generic customizations for the Docker-specific customizations.
apiVersion: cluster.x-k8s.io/v1beta1
kind: ClusterClass
metadata:
name: <NAME>
spec:
patches:
- name: cluster-config
external:
generateExtension: "dockerclusterconfigpatch.cluster-api-runtime-extensions-nutanix"
discoverVariablesExtension: "dockerclusterconfigvars.cluster-api-runtime-extensions-nutanix"
1 - Generic
The customizations in this section are applicable to all providers.
1.1 - Audit policy
Kubernetes auditing provides a security-relevant, chronological set of records documenting the sequence of actions in a
cluster. The cluster audits the activities generated by users, by applications that use the Kubernetes API, and by the
control plane itself.
There are currently no configuration options for the Audit Policy customization, and it will be automatically applied when the provider-specific cluster configuration patch is included in the ClusterClass.
1.2 - etcd
This customization will be available when the provider-specific cluster configuration patch is included in the ClusterClass.
The etcd configuration can then be manipulated via the cluster variables. If the etcd property is not specified, then the customization will be skipped.
Example
To change the repository and tag for the container image for the etcd pod, specify the following configuration:
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
name: <NAME>
spec:
topology:
variables:
- name: clusterConfig
value:
etcd:
image:
repository: my-registry.io/my-org/my-repo
tag: "v3.5.99_custom.0"
Applying this configuration will result in the following value being set:
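As a sketch, assuming the standard kubeadm ClusterConfiguration layout (the rendered patch may differ slightly):
- KubeadmControlPlaneTemplate:
/spec/template/spec/kubeadmConfigSpec/clusterConfiguration/etcd/local/imageRepository: my-registry.io/my-org/my-repo
/spec/template/spec/kubeadmConfigSpec/clusterConfiguration/etcd/local/imageTag: "v3.5.99_custom.0"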
1.3 - Extra API Server Certificate SANs
If the API server can be accessed by alternative DNS addresses, then setting additional SANs on the API server certificate is necessary in order for clients to successfully validate the API server certificate.
This customization will be available when the provider-specific cluster configuration patch is included in the ClusterClass.
Example
To add extra SANs to the API server certificate, specify the following configuration:
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
name: <NAME>
spec:
topology:
variables:
- name: clusterConfig
value:
extraAPIServerCertSANs:
- a.b.c.example.com
- d.e.f.example.com
Applying this configuration will result in the following value being set:
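As a sketch, assuming the standard kubeadm ClusterConfiguration layout (the rendered patch may differ slightly):
- KubeadmControlPlaneTemplate:
/spec/template/spec/kubeadmConfigSpec/clusterConfiguration/apiServer/certSANs: [a.b.c.example.com, d.e.f.example.com]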
1.4 - Global Image Registry Mirror
Add containerd image registry mirror configuration to all Nodes in the cluster.
When the globalImageRegistryMirror variable is set, files with configuration for the Containerd default mirror will be added.
This customization will be available when the provider-specific cluster configuration patch is included in the ClusterClass.
Example
If the registry mirror requires a private or self-signed CA certificate, first create a Kubernetes Secret with the ca.crt key populated with the CA certificate in PEM format:
kubectl create secret generic my-mirror-ca-cert \
  --from-file=ca.crt=registry-ca.crt
Then, to provide an image registry mirror with a CA certificate, specify the following configuration:
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
name: <NAME>
spec:
topology:
variables:
- name: clusterConfig
value:
globalImageRegistryMirror:
url: https://example.com
credentials:
secretRef:
name: my-mirror-ca-cert
Applying this configuration will result in the following new files on the KubeadmControlPlaneTemplate and KubeadmConfigTemplate resources:
/etc/containerd/certs.d/_default/hosts.toml
/etc/certs/mirror.pem
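The generated hosts.toml follows containerd's registry host configuration format. A rough, illustrative sketch of the default mirror configuration pointing at the mirror and its CA (not the literal rendered file):
[host."https://example.com/v2"]
  capabilities = ["pull", "resolve"]
  ca = "/etc/certs/mirror.pem"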
To use a public hosted image registry (e.g. ECR) as a registry mirror, specify the following configuration:
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
name: <NAME>
spec:
topology:
variables:
- name: clusterConfig
value:
globalImageRegistryMirror:
url: https://123456789.dkr.ecr.us-east-1.amazonaws.com
Applying this configuration will result in the following new file on the KubeadmControlPlaneTemplate and KubeadmConfigTemplate resources:
/etc/containerd/certs.d/_default/hosts.toml
1.5 - HTTP proxy
In some network environments it is necessary to use an HTTP proxy to successfully execute HTTP requests.
This customization will configure Kubernetes components (containerd, kubelet) with the appropriate configuration for control plane and worker nodes, utilising systemd drop-ins to configure the necessary environment variables.
This customization will be available when the provider-specific cluster configuration patch is included in the ClusterClass.
Example
To configure HTTP proxy values, specify the following configuration:
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
name: <NAME>
spec:
topology:
variables:
- name: clusterConfig
value:
proxy:
http: http://example.com
https: http://example.com
additionalNo:
- no-proxy-1.example.com
- no-proxy-2.example.com
The additionalNo list will be appended to the default pre-calculated no-proxy values that apply to Kubernetes networking (localhost,127.0.0.1,<POD CIDRS>,<SERVICE CIDRS>,kubernetes,kubernetes.default,.svc,.svc.cluster.local), plus provider-specific addresses as required.
Applying this configuration will result in new bootstrap files on the KubeadmControlPlaneTemplate and KubeadmConfigTemplate resources.
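The drop-ins set the standard proxy environment variables for containerd and kubelet. A rough, illustrative sketch of such a drop-in (the file name and exact rendering are assumptions, not the literal output):
# e.g. /etc/systemd/system/containerd.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://example.com"
Environment="HTTPS_PROXY=http://example.com"
Environment="NO_PROXY=localhost,127.0.0.1,<POD CIDRS>,<SERVICE CIDRS>,kubernetes,kubernetes.default,.svc,.svc.cluster.local,no-proxy-1.example.com,no-proxy-2.example.com"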
1.6 - Image registries
Add image registry configuration to all Nodes in the cluster.
When the credentials variable is set, files and preKubeadmCommands with configurations for the Kubelet image credential provider and dynamic credential provider will be added.
This customization will be available when the provider-specific cluster configuration patch is included in the ClusterClass.
Example
If your registry requires static credentials, create a Kubernetes Secret with keys for username and password:
kubectl create secret generic my-registry-credentials \
--from-literal username=${REGISTRY_USERNAME} --from-literal password=${REGISTRY_PASSWORD}
To add image registry credentials, specify the following configuration:
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
name: <NAME>
spec:
topology:
variables:
- name: clusterConfig
value:
imageRegistries:
- url: https://my-registry.io
credentials:
secretRef:
name: my-registry-credentials
Applying this configuration will result in new files and preKubeadmCommands on the KubeadmControlPlaneTemplate and KubeadmConfigTemplate resources.
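As a rough, illustrative sketch only (the provider name, matched images, and file locations are assumptions here, not the literal output), the kubelet image credential provider is driven by a CredentialProviderConfig of roughly this shape:
apiVersion: kubelet.config.k8s.io/v1
kind: CredentialProviderConfig
providers:
  - name: dynamic-credential-provider # assumed plugin binary name
    apiVersion: credentialprovider.kubelet.k8s.io/v1
    matchImages:
      - "my-registry.io"
    defaultCacheDuration: "0s"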
1.7 - Kubernetes Image Repository
Override the container image repository used when pulling Kubernetes images.
This customization will be available when the provider-specific cluster configuration patch is included in the ClusterClass.
Example
To configure the Kubernetes image repository, specify the following configuration:
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
name: <NAME>
spec:
topology:
variables:
- name: clusterConfig
value:
kubernetesImageRepository: "my-registry.io/my-org/my-repo"
Applying this configuration will result in the following value being set:
- KubeadmControlPlaneTemplate:
/spec/template/spec/kubeadmConfigSpec/clusterConfiguration/imageRepository: my-registry.io/my-org/my-repo
1.8 - Users
Configure users for all machines in the cluster, including each user's superuser capabilities (via sudo user specifications) and the login authentication mechanism.
SSH authorized keys are just public SSH keys that are used to authenticate a login. See the SSH man page for more information.
For information on sudo user specifications, see the sudo documentation.
Local password authentication is disabled for the user by default. It is enabled only when a hashed password is
provided.
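A hashed password like the one shown below can be generated locally, for example with the mkpasswd tool from the whois package (an assumption about available tooling; any crypt(3)-compatible hash generator works):
# Prompts for a password and prints a yescrypt ("$y$...") hash
mkpasswd --method=yescrypt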
Examples
Admin user with SSH public key login
Creates a user with the given name, grants the user the ability to run any command as the superuser, and allows you to log in via SSH using the username and the private key corresponding to the authorized public key.
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
name: <NAME>
spec:
topology:
variables:
- name: clusterConfig
value:
users:
- name: username
sshAuthorizedKeys:
- "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAua0lo8BiGWgvIiDCKnQDKL5uERHfnehm0ns5CEJpJw optionalcomment"
sudo: "ALL=(ALL) NOPASSWD:ALL"
Admin user with serial console password login
Creates a user with the name admin, grants the user the ability to run any command as the superuser, and allows you to log in via the serial console using the username and password.
Note that this does not allow you to log in via SSH using the username and password; in most cases, you must also configure the SSH server to allow password authentication.
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
name: <NAME>
spec:
topology:
variables:
- name: clusterConfig
value:
users:
- name: admin
hashedPassword: "$y$j9T$UraH8eN4XvapXBmmSaUrP0$Nyxdf1cJDGZcp0WDKu.CFHprrkPG4ubirqSqiD43Ix3"
sudo: "ALL=(ALL) NOPASSWD:ALL"
2 - AWS
The customizations in this section are applicable only to AWS clusters. They will only be applied to clusters that use the AWS infrastructure provider, i.e. a CAPI Cluster that references an AWSCluster.
2.1 - AWS Additional Security Group Spec
The AWS additional security group customization allows the user to attach additional security groups to the created machines. The customization can be applied to both control plane and nodepool machines.
This customization will be available when the provider-specific cluster configuration patch is included in the ClusterClass.
Example
To specify additional security groups for all control plane and nodepool machines, use the following configuration:
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
name: <NAME>
spec:
topology:
variables:
- name: clusterConfig
value:
controlPlane:
aws:
additionalSecurityGroups:
- id: "sg-0fcfece738d3211b8"
- name: workerConfig
value:
aws:
additionalSecurityGroups:
- id: "sg-0fcfece738d3211b8"
We can further customize individual MachineDeployments by using the overrides field with the following configuration:
spec:
topology:
# ...
workers:
machineDeployments:
- class: default-worker
name: md-0
variables:
overrides:
- name: workerConfig
value:
aws:
additionalSecurityGroups:
- id: "sg-0fcfece738d3211b8"
Applying this configuration will result in the following value being set:
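As a sketch, assuming the Cluster API Provider AWS AWSMachineTemplate schema (the rendered patch may differ slightly):
- control plane and worker AWSMachineTemplate:
/spec/template/spec/additionalSecurityGroups:
- id: "sg-0fcfece738d3211b8"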
2.2 - AWS AMI ID and Format spec
The AWS AMI customization allows the user to specify the AMI or AMI lookup arguments for an AWS machine. The AMI customization can be applied to both control plane and nodepool machines.
This customization will be available when the provider-specific cluster configuration patch is included in the ClusterClass.
Example
To specify the AMI ID or format for all control plane and nodepools, use the following configuration:
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
name: <NAME>
spec:
topology:
variables:
- name: clusterConfig
value:
controlPlane:
aws:
ami:
# Specify one of id or lookup.
id: "ami-controlplane"
# lookup:
# format: "my-cp-ami-{{.BaseOS}}-?{{.K8sVersion}}-*"
# org: "123456789"
# baseOS: "ubuntu-20.04"
- name: workerConfig
value:
aws:
ami:
# Specify one of id or lookup.
id: "ami-allWorkers"
# lookup:
# format: "my-default-workers-ami-{{.BaseOS}}-?{{.K8sVersion}}-*"
# org: "123456789"
# baseOS: "ubuntu-20.04"
We can further customize individual MachineDeployments by using the overrides field with the following configuration:
spec:
topology:
# ...
workers:
machineDeployments:
- class: default-worker
name: md-0
variables:
overrides:
- name: workerConfig
value:
ami:
# Specify one of id or lookup.
id: "ami-customWorker"
# lookup:
# format: "gpu-workers-ami-{{.BaseOS}}-?{{.K8sVersion}}-*"
# org: "123456789"
# baseOS: "ubuntu-20.04"
Applying this configuration will result in the following value being set:
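As a sketch, assuming the Cluster API Provider AWS AWSMachineTemplate schema (the rendered patch may differ slightly):
- AWSMachineTemplate (when an AMI ID is specified):
/spec/template/spec/ami/id: ami-controlplane
- AWSMachineTemplate (when lookup is specified):
/spec/template/spec/imageLookupFormat: my-cp-ami-{{.BaseOS}}-?{{.K8sVersion}}-*
/spec/template/spec/imageLookupOrg: "123456789"
/spec/template/spec/imageLookupBaseOS: ubuntu-20.04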
2.3 - Control Plane Load Balancer
The control-plane load balancer customization allows the user to modify the load balancer configuration for the control-plane's API server.
This customization will be available when the provider-specific cluster configuration patch is included in the ClusterClass.
Example
To use an internal ELB scheme, use the following configuration:
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
name: <NAME>
spec:
topology:
variables:
- name: clusterConfig
value:
aws:
controlPlaneLoadBalancer:
scheme: internal
Applying this configuration will result in the following value being set:
AWSClusterTemplate:
spec:
controlPlaneLoadBalancer:
scheme: internal
2.4 - IAM Instance Profile
The IAM instance profile customization allows the user to specify the profile to use for control-plane and worker Machines.
This customization will be available when the provider-specific cluster configuration patch is included in the ClusterClass.
Example
To specify the IAM instance profile, use the following configuration:
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
name: <NAME>
spec:
topology:
variables:
- name: clusterConfig
value:
controlPlane:
aws:
iamInstanceProfile: custom-control-plane.cluster-api-provider-aws.sigs.k8s.io
- name: workerConfig
value:
aws:
iamInstanceProfile: custom-nodes.cluster-api-provider-aws.sigs.k8s.io
Applying this configuration will result in the following value being set:
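As a sketch, assuming the Cluster API Provider AWS AWSMachineTemplate schema (the rendered patch may differ slightly):
- control plane AWSMachineTemplate:
/spec/template/spec/iamInstanceProfile: custom-control-plane.cluster-api-provider-aws.sigs.k8s.io
- worker AWSMachineTemplate:
/spec/template/spec/iamInstanceProfile: custom-nodes.cluster-api-provider-aws.sigs.k8s.io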
2.5 - Instance type
The instance type customization allows the user to specify the instance type to use for control-plane and worker Machines.
This customization will be available when the provider-specific cluster configuration patch is included in the ClusterClass.
Example
To specify the instance type, use the following configuration:
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
name: <NAME>
spec:
topology:
variables:
- name: clusterConfig
value:
controlPlane:
aws:
instanceType: m5.xlarge
- name: workerConfig
value:
aws:
instanceType: m5.2xlarge
Applying this configuration will result in the following value being set:
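As a sketch, assuming the Cluster API Provider AWS AWSMachineTemplate schema (the rendered patch may differ slightly):
- control plane AWSMachineTemplate:
/spec/template/spec/instanceType: m5.xlarge
- worker AWSMachineTemplate:
/spec/template/spec/instanceType: m5.2xlarge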
2.6 - Network
The network customization allows the user to specify existing infrastructure to use for the cluster.
This customization will be available when the provider-specific cluster configuration patch is included in the ClusterClass.
Example
To specify an existing AWS VPC, use the following configuration:
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
name: <NAME>
spec:
topology:
variables:
- name: clusterConfig
value:
aws:
network:
vpc:
id: vpc-1234567890
To also specify existing AWS Subnets, use the following configuration:
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
name: <NAME>
spec:
topology:
variables:
- name: clusterConfig
value:
aws:
network:
vpc:
id: vpc-1234567890
subnets:
- id: subnet-1
- id: subnet-2
- id: subnet-3
Applying this configuration will result in the following value being set:
AWSClusterTemplate:
spec:
network:
subnets:
- id: subnet-1
- id: subnet-2
- id: subnet-3
vpc:
id: vpc-1234567890
2.7 - Region
The region customization allows the user to specify the region to deploy a cluster into.
This customization will be available when the provider-specific cluster configuration patch is included in the ClusterClass.
Example
To specify the AWS region to deploy into, use the following configuration:
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
name: <NAME>
spec:
topology:
variables:
- name: clusterConfig
value:
aws:
region: us-west-2
Applying this configuration will result in the following value being set:
AWSClusterTemplate:
spec:
template:
spec:
region: us-west-2
3 - Docker
The customizations in this section are applicable only to Docker clusters. They will only be applied to clusters that use the Docker infrastructure provider, i.e. a CAPI Cluster that references a DockerCluster.
3.1 - Custom image
The custom image customization allows the user to specify the OCI image to use for control-plane and worker Machines.
This customization will be available when the provider-specific cluster configuration patch is included in the ClusterClass.
Example
To specify the custom image, use the following configuration:
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
name: <NAME>
spec:
topology:
variables:
- name: clusterConfig
value:
controlPlane:
docker:
customImage: ghcr.io/mesosphere/kind-node:v1.2.3-cp
- name: workerConfig
value:
docker:
customImage: ghcr.io/mesosphere/kind-node:v1.2.3-worker
The configuration above will apply customImage to all workers. You can further customize individual MachineDeployments by using the overrides field with the following configuration:
spec:
topology:
# ...
workers:
machineDeployments:
- class: default-worker
name: md-0
variables:
overrides:
- name: workerConfig
value:
docker:
customImage: ghcr.io/mesosphere/kind-node:v1.2.3-custom
Applying this configuration will result in the following value being set:
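As a sketch, assuming the Cluster API Docker provider's DockerMachineTemplate schema (the rendered patch may differ slightly):
- control plane DockerMachineTemplate:
/spec/template/spec/customImage: ghcr.io/mesosphere/kind-node:v1.2.3-cp
- worker DockerMachineTemplate:
/spec/template/spec/customImage: ghcr.io/mesosphere/kind-node:v1.2.3-worker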
4 - Nutanix
The customizations in this section are applicable only to Nutanix clusters. They will only be applied to clusters that use the Nutanix infrastructure provider, i.e. a CAPI Cluster that references a NutanixCluster.
4.1 - Control Plane Endpoint
Configure the control plane endpoint, which defines the host IP and port of the CAPX Kubernetes cluster.
Examples
Set Control Plane Endpoint
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
name: <NAME>
spec:
topology:
variables:
- name: clusterConfig
value:
nutanix:
controlPlaneEndpoint:
host: x.x.x.x
port: 6443
Applying this configuration will result in the following value being set:
spec:
template:
spec:
controlPlaneEndpoint:
host: x.x.x.x
port: 6443
4.2 - Machine Details
Configure the machine details of control plane and worker nodes.
Examples
Set Machine details of Control Plane and Worker nodes
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
name: <NAME>
spec:
topology:
variables:
- name: clusterConfig
value:
controlPlane:
nutanix:
machineDetails:
bootType: legacy
cluster:
name: pe-cluster-name
type: name
image:
name: os-image-name
type: name
memorySize: 4Gi
subnets:
- name: subnet-name
type: name
systemDiskSize: 40Gi
vcpuSockets: 2
vcpusPerSocket: 1
- name: workerConfig
value:
nutanix:
machineDetails:
bootType: legacy
cluster:
name: pe-cluster-name
type: name
image:
name: os-image-name
type: name
memorySize: 4Gi
subnets:
- name: subnet-name
type: name
systemDiskSize: 40Gi
vcpuSockets: 2
vcpusPerSocket: 1
Applying this configuration will result in the following value being set:
- control plane NutanixMachineTemplate:
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: NutanixMachineTemplate
metadata:
name: nutanix-quick-start-cp-nmt
spec:
template:
spec:
bootType: legacy
cluster:
name: pe-cluster-name
type: name
image:
name: os-image-name
type: name
memorySize: 4Gi
providerID: nutanix://vm-uuid
subnet:
- name: subnet-name
type: name
systemDiskSize: 40Gi
vcpuSockets: 2
vcpusPerSocket: 1
- worker NutanixMachineTemplate:
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: NutanixMachineTemplate
metadata:
name: nutanix-quick-start-md-nmt
spec:
template:
spec:
bootType: legacy
cluster:
name: pe-cluster-name
type: name
image:
name: os-image-name
type: name
memorySize: 4Gi
providerID: nutanix://vm-uuid
subnet:
- name: subnet-name
type: name
systemDiskSize: 40Gi
vcpuSockets: 2
vcpusPerSocket: 1
4.3 - Prism Central Endpoint
Configure the Prism Central endpoint that machines will be created on.
Examples
Set Prism Central Endpoint
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
name: <NAME>
spec:
topology:
variables:
- name: clusterConfig
value:
nutanix:
prismCentralEndpoint:
credentials:
name: secret-name
url: https://x.x.x.x:9440
insecure: false
Applying this configuration will result in the following value being set:
spec:
template:
spec:
prismCentral:
address: x.x.x.x
insecure: false
port: 9440
credentialRef:
kind: Secret
name: secret-name
Provide an Optional Trusted CA Bundle
If the Prism Central endpoint uses a self-signed certificate, you can provide an additional trust bundle to be used by the Nutanix provider.
This is a base64-encoded, PEM-formatted x509 certificate of the root CA that was used to create the Prism Central certificate. See the Nutanix Security Guide for more information.
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
name: <NAME>
spec:
topology:
variables:
- name: clusterConfig
value:
nutanix:
prismCentralEndpoint:
# ...
additionalTrustBundle: "LS0...="
Applying this configuration will result in the following value being set:
spec:
template:
spec:
prismCentral:
# ...
additionalTrustBundle:
kind: String
data: |-
-----BEGIN CERTIFICATE-----
...
-----END CERTIFICATE-----