Cluster attachment

When a tenant cluster requests bare metal nodes, vMetal provisions servers and joins them to that cluster automatically. Each server appears in the cluster as a standard Kubernetes node.

Private nodes

When vMetal provisions a server for a tenant cluster, that server is a private node. The platform allocates it exclusively to one cluster at a time. No other cluster or tenant shares the physical hardware.

Private nodes appear in the tenant cluster alongside any virtual nodes from vCluster. Schedule workloads to private nodes using standard Kubernetes node selectors and resource requests.
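For example, a Pod can be pinned to a private GPU node with a node selector. This is a sketch: the Pod name and image are placeholders, and the label matches the vcluster.com/node-type property used in the configuration example later in this guide:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gpu-workload                 # placeholder name
spec:
  nodeSelector:
    vcluster.com/node-type: gpu-node # matches the node type label on the private node
  containers:
    - name: main
      image: nvcr.io/nvidia/cuda:12.4.1-base-ubuntu22.04  # placeholder image
      resources:
        limits:
          nvidia.com/gpu: "1"        # request one GPU via the device plugin resource
```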

How joining works

The platform generates cloud-init user data for each Machine. For private nodes, the cloud-init runs a bootstrap script downloaded from the vCluster server. The script installs the container runtime and kubelet, then runs kubeadm join with a dedicated join token for the target tenant cluster. Cloud-init runs the script automatically once the server finishes installing the OS and reboots.

The server registers with the tenant cluster's API server and begins accepting workloads. The join happens without manual intervention.
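The generated user data can be sketched roughly as follows. This is an illustration only: the endpoint URL and script path are hypothetical placeholders, and the real user data, including token handling, is generated by the platform:

```yaml
#cloud-config
runcmd:
  # Download the bootstrap script from the vCluster server (hypothetical URL)
  - curl -fsSL "https://vcluster.example.com/node/bootstrap.sh" -o /root/bootstrap.sh
  # The script installs the container runtime and kubelet, then runs
  # kubeadm join with the dedicated join token for the tenant cluster
  - bash /root/bootstrap.sh
```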

Machine lifecycle and cluster lifecycle

A Machine exists for as long as the cluster needs that node. When scale-down removes a node or the cluster is deleted, the platform deletes the Machine. This triggers deprovisioning through Metal3. Metal3 cleans the server and returns it to the available pool.

If a Machine enters the error state, it is not replaced automatically. Investigate the BareMetalHost status conditions, resolve the underlying issue, and return the BareMetalHost to the available state; only then can the platform use the server again.
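To inspect a failed host, the Metal3 BareMetalHost resources can be queried with kubectl. The namespace below is an example; use whichever namespace your Metal3 installation runs in:

```shell
# List BareMetalHosts and their current provisioning state (example namespace)
kubectl get baremetalhosts -n metal3
# Show status conditions and error details for a specific host
kubectl describe baremetalhost <host-name> -n metal3
```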

Node resources

After joining, the server is a standard Kubernetes node. The resources it advertises depend on what software runs on it.

vMetal does not set GPU or accelerator resources. Device plugins running on the node advertise them to the kubelet. For GPU nodes, deploy the NVIDIA GPU Operator or equivalent. vMetal provisions the server with the correct OS. The device plugin discovers the hardware and reports it to the scheduler.
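As an illustration, the NVIDIA GPU Operator is typically installed from NVIDIA's Helm repository. The namespace is an example, and chart versions vary:

```shell
# Add NVIDIA's Helm repository and install the GPU Operator (example namespace)
helm repo add nvidia https://helm.ngc.nvidia.com/nvidia
helm repo update
helm install gpu-operator nvidia/gpu-operator \
  --namespace gpu-operator --create-namespace
```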

To target GPU nodes, use standard Kubernetes resource requests:

```yaml
resources:
  limits:
    nvidia.com/gpu: "1"
```

Configuring private nodes

Configure private nodes in the tenant cluster spec. The autoNodes field tells the platform to provision and maintain a specific count of nodes from a given provider and node type:

```yaml
privateNodes:
  enabled: true
autoNodes:
  - provider: metal3-provider
    static:
      - name: gpu-nodes
        quantity: 2
        nodeTypeSelector:
          - property: vcluster.com/node-type
            value: gpu-node
```

The platform creates and deletes Machines to match the requested quantity.

For all available fields, see Configuration.