NOTE: This scenario only applies to a greenfield implementation, or if you have separate infrastructure to move your VMs to while doing this, since all data will be lost when the cluster is destroyed. Also, this cluster was deployed with the VMware ESXi hypervisor.
I just had a situation with a Nutanix cluster implementation where we had a little mix-up with the VLANs and IP segments on the Cluster Deployment Questionnaire sheet. Long story short, after the hardware was physically installed on-site and deployed, we needed to change the VLAN and IP address segment of the cluster, per client request.
After doing some research on cluster re-IPing and asking around with my peers, the consensus was pretty clear: just destroy and re-deploy the cluster. Nutanix has some official documents, and I also found a couple of community posts covering pieces of the process, but I couldn’t find the exact scenario of a re-IP plus VLAN change in a single place. Actually, after seeing this warning in an official document, I figured it was going to be a roller-coaster:
Before you decide to change the CVM, hypervisor host, and IPMI IP addresses, consider the possibility of incorporating the existing IP address schema into the new infrastructure by reconfiguring your routers and switches instead of Nutanix nodes and CVMs. If that is not possible and you must change the IP addresses of CVMs and hypervisor hosts, proceed with the procedure described in this document.
https://portal.nutanix.com/page/documents/details?targetId=Advanced-Setup-Guide-AOS-v5_19:ipc-cvm-ip-address-reconfigure-t.html
I could have attempted the surgery, but time is money, and since the cluster was empty, I just went with re-deploying. However, to avoid coordinating another site visit (COVID-19 times), I gave it some thought and figured it could all be done remotely from the IPMI console. That way I would have access to the ESXi shell, and could also use the backplane “Hypervisor LAN” between ESXi and the CVMs to SSH into the CVMs from a network that would not be touched in this process.
The most relevant references I found regarding re-IP and VLAN changes, in case you ever need them:
Changing the Controller VM IP Addresses in your Nutanix Cluster (CLI Script)
How-To: Change VLAN Tag for Nutanix Cluster (HV Host, CVM, IPMI)
[Tips]
- To access the ESXi Shell, log in to the node’s IPMI, open the console, and hit Alt + F1. Once on the ESXi Shell, you can go back to the DCUI by typing the dcui command, and Ctrl + C kicks you back to the shell.
- To access the CVM from the ESXi Shell, SSH over the “Hypervisor LAN” with the following command, which will take you to that host’s own CVM:
[root@esxi:~] ssh nutanix@192.168.5.2
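Quick sanity check: the 192.168.5.x addresses belong to the internal backplane network between each ESXi host and its local CVM (typically the vSwitchNutanix internal vSwitch), so they are unaffected by the VLAN and IP changes below. If the SSH session doesn’t come up, you can test reachability from the ESXi Shell first; just an optional check:
[root@esxi:~] vmkping 192.168.5.2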
[Stop & Destroy the Nutanix Cluster]
nutanix@cvm$ cluster stop
nutanix@cvm$ cluster destroy
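If you want to double-check state before and after, the standard status command works for both: before stopping, it should show all services UP, and after the destroy, the cluster should no longer be configured. Purely optional:
nutanix@cvm$ cluster status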
[Change the ESXi and CVM VLAN tag]
- Change the VLAN tag of the port-group associated to the management vmk:
root@esxi# esxcli network vswitch standard portgroup set -p "Management Network" -v new-vlan
- Change the VLAN tag of the port-group associated with the CVM:
root@esxi# esxcli network vswitch standard portgroup set -p "VM Network" -v new-vlan
- Update your uplink switch configuration to the new VLAN. In my case, the ports were already trunks with the original VLAN as the native VLAN, so I just needed to allow the new VLAN on those trunks. A quick way to double-check the port-group tags afterwards is shown below.
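To confirm the new tags took effect, you can list the port-groups on the standard vSwitch from the ESXi Shell (assuming standard vSwitches, as in the commands above); optional verification only:
root@esxi# esxcli network vswitch standard portgroup list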
[Change the CVM IP addresses]
- Open the ifcfg-eth0 file, update the NETMASK, IPADDR, BOOTPROTO, and GATEWAY entries as needed, save your changes, and restart the CVM (a sample of what the edited file might look like follows the quick tip below):
nutanix@cvm$ sudo vi /etc/sysconfig/network-scripts/ifcfg-eth0
nutanix@cvm$ sudo reboot
Quick tip: If you haven’t used vi before, hit “i” to enter insert mode, hit “Esc” to exit it, and save and quit with “:wq”.
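For reference, a minimal ifcfg-eth0 might end up looking something like the sketch below once edited. These values are placeholders only, not the ones from my deployment; adjust them to your new segment:
DEVICE=eth0
BOOTPROTO=none
ONBOOT=yes
IPADDR=10.10.20.31
NETMASK=255.255.255.0
GATEWAY=10.10.20.1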
[Change the ESXi management vmk IP addresses]
Since I’m already on the console, I just made my changes through the DCUI. If you need more details on the ESXi management interface configuration, check out VMware KB 2084629.
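If you’d rather not leave the shell, the same change can also be done with esxcli instead of the DCUI. This is just a generic sketch with placeholder values; swap in your own vmk interface, IP, netmask, and gateway:
root@esxi# esxcli network ip interface ipv4 set -i vmk0 -I 10.10.20.21 -N 255.255.255.0 -t static
root@esxi# esxcfg-route 10.10.20.1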
Note: Once you’ve changed the VLAN tag and IP addresses on all your ESXi hosts and CVMs, I’d say that’s a good time to check connectivity between all hypervisors and CVMs.
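From one of the CVMs, something as simple as a ping loop over the new addresses does the job; the IPs below are placeholders for your new host and CVM addresses:
nutanix@cvm$ for ip in 10.10.20.21 10.10.20.22 10.10.20.31 10.10.20.32; do ping -c 2 $ip; done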
[Create Nutanix Cluster]
From any of the CVMs:
nutanix@cvm$ cluster --cluster_name=name --cluster_external_ip=x.x.x.x --dns_servers=x.x.x.x,x.x.x.x --ntp_servers=x.x.x.x,x.x.x.x --redundancy_factor=2 --svm_ips=x.x.x.x,x.x.x.x,x.x.x.x,n... create
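Once the create finishes, it doesn’t hurt to verify that services came up and that the cluster details (name, external IP, redundancy factor) look right before logging in to Prism; optional verification only:
nutanix@cvm$ cluster status
nutanix@cvm$ ncli cluster info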
Update: I forgot to include the details for the IPMI; you can check that out in this post:
Nutanix Quickie: How to Change IPMI VLAN ID & IP Address
References:
Manually Configuring CVM IP Addresses
Useful Commands for AOS components and ESX
Nutanix Cluster Commands