PROXMOX - Clustering
Proxmox Cluster Configuration with 4 Nodes
Before starting, ensure that:
- All nodes have Proxmox installed and updated.
- The nodes are on the same network.
- SSH key-based authentication is set up between the nodes.
- You have a consistent hostname and DNS setup across all nodes.
- Ensure NTP (Network Time Protocol) is configured and synchronized across all nodes.
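The prerequisites above can be checked with a quick pre-flight pass on each node; the hostnames `node1`–`node4` are example names, substitute your own:

```shell
# Verify that the system clock is synchronized (NTP/chrony)
timedatectl status | grep -i "synchronized"

# Verify that every node can resolve and reach the others
for n in node1 node2 node3 node4; do
    ping -c 1 "$n" >/dev/null && echo "$n reachable"
done
```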
Cluster Configuration Overview
A Proxmox cluster uses Corosync for cluster communication and quorum; high availability is handled by the Proxmox HA manager, and Ceph can optionally provide distributed storage. Configuration can be done through the Proxmox web interface or the command line. Below, the process will focus on setting up the cluster using the command line interface.
Initial Configuration on Node 1
First, on Node 1, we will create the Proxmox cluster.
```
# SSH into Node 1
ssh root@node1

# Create the cluster
pvecm create pve-cluster

# Check cluster status
pvecm status
```
This command will create the cluster with the name `pve-cluster` and initialize the cluster configuration on Node 1.
Adding Additional Nodes to the Cluster
Next, join Nodes 2, 3, and 4 to the cluster. Ensure SSH key-based authentication is set up so that each joining node can connect to Node 1; otherwise `pvecm add` will prompt for the root password of the existing cluster node.
On Node 2:
```
# SSH into Node 2
ssh root@node2

# Join Node 2 to the cluster
pvecm add node1

# Check cluster status
pvecm status
```
Repeat the same process for Node 3 and Node 4:
```
# SSH into Node 3
ssh root@node3
pvecm add node1

# SSH into Node 4
ssh root@node4
pvecm add node1
```
Once this is done, all nodes should be part of the cluster.
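Membership can be confirmed from any node; `pvecm nodes` prints one line per cluster member with its node ID and name:

```shell
# List all cluster members from any node
pvecm nodes
```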
Configuring Quorum and Corosync
To ensure the cluster is properly synchronized and there is quorum, we need to ensure the Corosync configuration is correct. Quorum is important for preventing split-brain scenarios.
```
# Check quorum status (shown in the "Quorum information" section)
pvecm status

# Or query Corosync's quorum state directly
corosync-quorumtool -s

# Check Corosync service status
systemctl status corosync
```
Corosync should be running on all nodes in the cluster. If you notice issues, check the logs:
```
journalctl -u corosync
```
Configuring Clustered Storage
Proxmox supports several types of clustered storage. Here we will configure shared storage using NFS, but other options like Ceph or iSCSI are also available.
To configure NFS storage:
```
# On Node 1, install the NFS server first
apt install nfs-kernel-server

# Create the directory to share
mkdir -p /mnt/nfs_share
chmod 777 /mnt/nfs_share   # world-writable for simplicity; tighten in production

# Edit /etc/exports to share the directory
echo "/mnt/nfs_share *(rw,sync,no_subtree_check)" >> /etc/exports
exportfs -a

# Restart the NFS service
systemctl restart nfs-kernel-server
```
Now, mount this NFS share on all other nodes.
On Node 2, 3, and 4:
```
# Create a mount point
mkdir -p /mnt/nfs_share

# Mount the NFS share from Node 1
mount node1:/mnt/nfs_share /mnt/nfs_share
```
To make it persistent across reboots, add the following line to `/etc/fstab` on all nodes:
```
node1:/mnt/nfs_share /mnt/nfs_share nfs defaults 0 0
```
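The fstab entry can be verified without rebooting:

```shell
# Mount everything listed in /etc/fstab that is not yet mounted
mount -a

# Confirm the NFS share is mounted and shows the expected size
df -h /mnt/nfs_share
```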
Verifying Cluster Configuration
Once all nodes are added and storage is configured, verify the cluster status:
```
# Check cluster status on any node
pvecm status
```
This should show a list of all nodes in the cluster and their respective status.
Also, verify that all nodes can communicate with each other via the Proxmox web interface. On the Datacenter cluster page, every node should show a green status indicator.
HA (High Availability) Configuration
High availability allows virtual machines to automatically failover to another node in case of failure. To configure HA:
1. The HA stack is provided by the `pve-ha-manager` package, which ships with Proxmox VE by default, so no extra installation is needed. Verify that its services are running:

```
systemctl status pve-ha-crm pve-ha-lrm
```
2. Next, create an HA resource for your VM. HA resources are identified by type and ID, e.g. `vm:100` for VM ID 100:

```
ha-manager add vm:100 --state started
```
3. You can check the HA status and resources:
```
ha-manager status
```
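If you want to restrict which nodes a resource may run on, an HA group can be defined and assigned to the resource; the group name `ha-group1` below is an example, not a required value:

```shell
# Create an HA group limited to two nodes (example name and node list)
ha-manager groupadd ha-group1 --nodes node1,node2

# Assign an existing HA resource to that group
ha-manager set vm:100 --group ha-group1
```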
Networking Configuration
Proxmox clusters require a reliable and fast internal network for communication between nodes. You should configure a dedicated management network for the cluster communication. A common setup is using a VLAN or a separate physical NIC for this purpose.
To configure bonding for network redundancy:
```
# Edit /etc/network/interfaces
auto bond0
iface bond0 inet static
    address 192.168.1.100
    netmask 255.255.255.0
    gateway 192.168.1.1
    bond-slaves eth0 eth1
    bond-mode 802.3ad
    bond-miimon 100
```
This bonds the `eth0` and `eth1` interfaces into `bond0` using LACP (802.3ad), which requires matching LACP configuration on the connected switch ports.
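On recent Proxmox VE releases (which use ifupdown2), the new bond can typically be brought up without a reboot; the bond name `bond0` matches the example above:

```shell
# Reload the network configuration (ifupdown2)
ifreload -a

# Verify bond state and LACP negotiation for bond0
cat /proc/net/bonding/bond0
```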
Useful Links
- [Proxmox Documentation](https://pve.proxmox.com/pve-docs/)
- [Proxmox Cluster Setup Guide](https://pve.proxmox.com/wiki/Cluster_Manager)
- [Proxmox High Availability Documentation](https://pve.proxmox.com/wiki/High_Availability)
- [Corosync Documentation](https://corosync.github.io/corosync/)
- [NFS Documentation](https://linux.die.net/man/5/exports)
- [Proxmox Support Forum](https://forum.proxmox.com/)
