Creating Clusters on Huawei Cloud Stack

This document provides comprehensive instructions for creating Kubernetes clusters on the Huawei Cloud Stack platform using Cluster API.

Prerequisites

Before creating clusters, ensure all of the following prerequisites are met:

1. Required Plugin Installation

Install the following plugins on the platform's global cluster:

  • Alauda Container Platform Kubeadm Provider
  • Alauda Container Platform HCS Infrastructure Provider

For detailed installation instructions, refer to the Installation Guide.

2. HCS Platform Preparation

Ensure the following HCS resources are configured:

  • VPC: Virtual Private Cloud for network isolation
  • Subnet: At least one subnet for VM deployment
  • Security Group: Proper security group rules for cluster communication

3. HCS Credentials

Prepare HCS access credentials with the following information:

  • AccessKey: HCS access key ID
  • SecretKey: HCS secret access key
  • ProjectID: HCS project ID
  • Region: HCS region (e.g., cn-global-1)
  • ExternalGlobalDomain: HCS domain name

Cluster Creation Overview

At a high level, you'll create the following Cluster API resources in the platform's global cluster to provision infrastructure and bootstrap a functional Kubernetes cluster.

WARNING

Important Namespace Requirement

To ensure the created clusters are properly integrated into the platform as business clusters, all resources must be deployed in the cpaas-system namespace. Deploying resources in other namespaces may cause integration issues.

The cluster creation process follows this order:

  1. Configure HCS authentication (Secret)
  2. Create machine configuration pool (HCSMachineConfigPool)
  3. Configure machine template (HCSMachineTemplate)
  4. Configure KubeadmControlPlane
  5. Configure HCSCluster
  6. Create the Cluster
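
Assuming each resource above is saved to its own manifest file (the file names here are illustrative), the whole sequence can be applied in order with a short script:

```shell
# Apply the Cluster API manifests in dependency order.
# File names are illustrative; all resources target cpaas-system.
manifests="
01-credentials-secret.yaml
02-hcs-machine-config-pool.yaml
03-hcs-machine-template.yaml
04-kubeadm-control-plane.yaml
05-hcs-cluster.yaml
06-cluster.yaml
"
for m in $manifests; do
  if [ -f "$m" ]; then
    kubectl apply -n cpaas-system -f "$m"
  else
    echo "skipping $m: file not found" >&2
  fi
done
```

Applying in this order avoids dangling references, since each resource only refers to resources created before it.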

Control Plane Configuration

The control plane manages cluster state, scheduling, and the Kubernetes API. This section shows how to configure a highly available control plane.

WARNING

Configuration Parameter Guidelines

When configuring resources, exercise caution with parameter modifications:

  • Replace only values enclosed in <> with your environment-specific values
  • Preserve all other parameters as they represent optimized or required configurations
  • Modifying non-placeholder parameters may result in cluster instability or integration issues

Configure HCS Authentication

HCS authentication information is stored in a Secret resource.

apiVersion: v1
kind: Secret
metadata:
  name: <cluster-name>
  namespace: cpaas-system
type: Opaque
data:
  accessKey: <base64-encoded-access-key>
  secretKey: <base64-encoded-secret-key>
  projectID: <base64-encoded-project-id>
  region: <base64-encoded-region>
  externalGlobalDomain: <base64-encoded-domain>
| Parameter | Description |
| --- | --- |
| .data.accessKey | HCS access key ID (base64-encoded) |
| .data.secretKey | HCS secret access key (base64-encoded) |
| .data.projectID | HCS project ID (base64-encoded) |
| .data.region | HCS region such as cn-global-1 (base64-encoded) |
| .data.externalGlobalDomain | HCS domain name (base64-encoded) |
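
Every value in the Secret's data section must be base64-encoded. A minimal sketch of encoding and verifying one value (the region shown is illustrative):

```shell
# Encode a credential value for the Secret. Use printf rather than echo
# so that no trailing newline is embedded in the encoded value.
region="cn-global-1"   # illustrative value
encoded=$(printf '%s' "$region" | base64)
echo "$encoded"

# Decode to verify what will actually be stored:
printf '%s' "$encoded" | base64 -d
```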

Configure Machine Configuration Pool

The HCSMachineConfigPool defines pre-configured hostnames and static IP addresses for VMs.

WARNING

Pool Size Requirement

The configuration pool must include at least as many entries as the number of control plane nodes you plan to deploy.

apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: HCSMachineConfigPool
metadata:
  name: <cluster-name>
  namespace: cpaas-system
spec:
  configs:
    - hostname: master-1
      networks:
        - subnetName: <subnet-name>
          ipAddress: 192.168.1.11
    - hostname: master-2
      networks:
        - subnetName: <subnet-name>
          ipAddress: 192.168.1.12
    - hostname: master-3
      networks:
        - subnetName: <subnet-name>
          ipAddress: 192.168.1.13
| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| .spec.configs[].hostname | string | Yes | VM hostname |
| .spec.configs[].networks[].subnetName | string | Yes | Subnet name in HCS |
| .spec.configs[].networks[].ipAddress | string | Yes | Static IP address |
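
Since the pool must cover every planned control plane node, a quick pre-flight count of hostname entries can catch sizing mistakes before anything is applied. This is only a sketch; the manifest file name and replica count are assumptions for illustration:

```shell
# Compare the number of hostname entries in the pool manifest with the
# planned control plane replica count (3 in this example).
pool_manifest="hcs-machine-config-pool.yaml"   # illustrative file name
replicas=3
if [ -f "$pool_manifest" ]; then
  entries=$(grep -c 'hostname:' "$pool_manifest")
  if [ "$entries" -lt "$replicas" ]; then
    echo "pool defines only $entries entries; $replicas are required" >&2
  fi
else
  echo "manifest $pool_manifest not found" >&2
fi
```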

Configure Machine Template

The HCSMachineTemplate defines the VM specifications for control plane nodes.

WARNING

Storage Requirements

The following data disk mount points are recommended for control plane nodes:

  • /var/lib/etcd - etcd data (10GB+)
  • /var/lib/kubelet - kubelet data (100GB+)
  • /var/lib/containerd - container runtime data (100GB+)
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: HCSMachineTemplate
metadata:
  name: <cluster-name>-control-plane
  namespace: cpaas-system
spec:
  template:
    spec:
      imageName: <vm-image-name>
      flavorName: <instance-flavor>
      availabilityZone: <availability-zone>
      rootVolume:
        type: SSD
        size: 100
      configPoolRef:
        name: <cluster-name>
      dataVolumes:
        - size: 10
          type: SSD
          mountPath: /var/lib/etcd
          format: xfs
        - size: 100
          type: SSD
          mountPath: /var/lib/kubelet
          format: xfs
        - size: 100
          type: SSD
          mountPath: /var/lib/containerd
          format: xfs
| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| .spec.template.spec.imageName | string | Yes | VM image name such as microos-4.2.1-new |
| .spec.template.spec.flavorName | string | Yes | Instance flavor such as c3.xlarge.2 |
| .spec.template.spec.availabilityZone | string | No | Availability zone such as az1, az2 |
| .spec.template.spec.rootVolume.type | string | Yes | Volume type (SSD or SATA) |
| .spec.template.spec.rootVolume.size | int | Yes | System disk size in GB |
| .spec.template.spec.configPoolRef.name | string | Yes | Referenced HCSMachineConfigPool name |
| .spec.template.spec.dataVolumes[] | array | No | Data volume configurations |
| .spec.template.spec.dataVolumes[].size | int | Yes* | Data disk size in GB |
| .spec.template.spec.dataVolumes[].type | string | Yes* | Volume type (SSD or SATA) |
| .spec.template.spec.dataVolumes[].mountPath | string | Yes* | Mount path |
| .spec.template.spec.dataVolumes[].format | string | Yes* | File system format (xfs or ext4) |

*Required when dataVolumes is specified.

Configure KubeadmControlPlane

The KubeadmControlPlane defines the Kubernetes control plane configuration.

apiVersion: controlplane.cluster.x-k8s.io/v1beta1
kind: KubeadmControlPlane
metadata:
  name: <cluster-name>
  namespace: cpaas-system
spec:
  replicas: 3
  version: <kubernetes-version>
  kubeadmConfigSpec:
    clusterConfiguration:
      imageRepository: <image-repository>
      dns:
        imageTag: <dns-image-tag>
      etcd:
        local:
          imageTag: <etcd-image-tag>
    initConfiguration:
      nodeRegistration:
        kubeletExtraArgs:
          node-labels: "kube-ovn/role=master"
          volume-plugin-dir: "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/"
    joinConfiguration:
      nodeRegistration:
        kubeletExtraArgs:
          node-labels: "kube-ovn/role=master"
          volume-plugin-dir: "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/"
  machineTemplate:
    nodeDrainTimeout: 1m
    nodeDeletionTimeout: 5m
    infrastructureRef:
      apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
      kind: HCSMachineTemplate
      name: <cluster-name>-control-plane

For component versions (DNS image tag, etcd image tag), refer to the OS Support Matrix.

Configure HCSCluster

The HCSCluster resource defines the HCS infrastructure configuration.

apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: HCSCluster
metadata:
  name: <cluster-name>
  namespace: cpaas-system
spec:
  controlPlaneLoadBalancer:
    vipAddress: <vip-address>
    vipSubnetName: <elb-subnet-name>
    vipDomainName: <vip-domain-name>
    elbVirsubnetL4Ips:
      - subnetName: <subnet-name>
        ips:
          - 192.168.15.101
          - 192.168.15.102
    elbVirsubnetL7Ips:
      - subnetName: <subnet-name>
        ips:
          - 192.168.15.103
          - 192.168.15.104
  networkType: kube-ovn
  network:
    vpc:
      name: <vpc-name>
    subnets:
      - name: <subnet-name>
    securityGroup:
      name: <security-group-name>
  identityRef:
    name: <cluster-name>
  controlPlaneEndpoint:
    host: <vip-address>
    port: 6443
| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| .spec.networkType | string | Yes | Network type; currently only kube-ovn is supported |
| .spec.network.vpc.name | string | Yes | VPC name |
| .spec.network.subnets[].name | string | Yes | Subnet name list |
| .spec.network.securityGroup.name | string | Yes | Security group name |
| .spec.identityRef.name | string | Yes | Name of the credential Secret |
| .spec.controlPlaneEndpoint.host | string | No | API server host (VIP address or domain name) |
| .spec.controlPlaneEndpoint.port | int | No | API server port (default: 6443) |
| .spec.controlPlaneLoadBalancer.vipAddress | string | No* | Control plane VIP address |
| .spec.controlPlaneLoadBalancer.vipSubnetName | string | No* | ELB subnet name |
| .spec.controlPlaneLoadBalancer.vipDomainName | string | No | VIP domain name with DNS configured |
| .spec.controlPlaneLoadBalancer.elbVirsubnetL4Ips | array | No | L4 load balancer IP configuration |
| .spec.controlPlaneLoadBalancer.elbVirsubnetL7Ips | array | No | L7 load balancer IP configuration |

*Required when configuring VIP address.

Configure Cluster

The Cluster resource in Cluster API declares the cluster and references the control plane and infrastructure resources.

apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: <cluster-name>
  namespace: cpaas-system
  annotations:
    cpaas.io/sentry-deploy-type: Baremetal
    cpaas.io/alb-address-type: ClusterAddress
    capi.cpaas.io/resource-group-version: infrastructure.cluster.x-k8s.io/v1beta1
    capi.cpaas.io/resource-kind: HCSCluster
    cpaas.io/kube-ovn-join-cidr: <kube-ovn-join-cidr>
    cpaas.io/kube-ovn-version: <kube-ovn-version>
  labels:
    cluster-type: HCS
spec:
  clusterNetwork:
    pods:
      cidrBlocks:
        - <pods-cidr>
    services:
      cidrBlocks:
        - <services-cidr>
  controlPlaneRef:
    apiVersion: controlplane.cluster.x-k8s.io/v1beta1
    kind: KubeadmControlPlane
    name: <cluster-name>
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
    kind: HCSCluster
    name: <cluster-name>
| Annotation | Description |
| --- | --- |
| cpaas.io/sentry-deploy-type | Deployment type, set to Baremetal |
| cpaas.io/alb-address-type | ALB address type, set to ClusterAddress |
| capi.cpaas.io/resource-group-version | API group and version of the infrastructure resource, set to infrastructure.cluster.x-k8s.io/v1beta1 |
| capi.cpaas.io/resource-kind | Infrastructure resource kind, set to HCSCluster |
| cpaas.io/kube-ovn-join-cidr | Kube-OVN join subnet CIDR |
| cpaas.io/kube-ovn-version | Kube-OVN version |

Cluster Verification

After deploying all cluster resources, verify that the cluster has been created successfully.

Using kubectl

# Check cluster status
kubectl get cluster -n cpaas-system <cluster-name>

# Verify control plane nodes
kubectl get kubeadmcontrolplane -n cpaas-system <cluster-name>

# Check machine status
kubectl get machines -n cpaas-system

# Verify cluster deployment status
kubectl get clustermodule <cluster-name> -o jsonpath='{.status.base.deployStatus}'

Expected Results

A successfully created cluster should show:

  • Cluster status: Running or Provisioned
  • All control plane machines: Running
  • Kubernetes nodes: Ready
  • Cluster Module Status: Completed

Adding Worker Nodes

For instructions on adding worker nodes to the cluster, refer to Managing Nodes.

Upgrading Clusters

For instructions on upgrading cluster components, refer to Upgrading Clusters.