Configuring LACP in an Acropolis Cluster

Source: Nutanix KB 1681, Nutanix Support for Link Aggregation Groups (LAG) and Link Aggregation Control Protocol (LACP), which covers configuration information for VMware, Hyper-V, and the Acropolis hypervisor.

Configuring LACP in an Acropolis Cluster
The following instructions assume that you have to add a bridge, add a bond on the bridge, and then
configure LACP for the interfaces in the bond.
To configure LACP for an Open vSwitch bond in the Acropolis hypervisor (AHV), do the following:
1. Log on to the Controller VM with SSH.
root@host# ssh nutanix@192.168.5.254
Accept the host authenticity warning if prompted, and enter the Controller VM nutanix password.
2. Create a bridge.
nutanix@cvm$ allssh 'ssh root@192.168.5.1 /usr/bin/ovs-vsctl add-br bridge'
Replace bridge with a name for the bridge. The output does not indicate success explicitly, so you can
append && echo success to the command. If the bridge is created, the text success is displayed.
For example, create a bridge and name it br1.
nutanix@cvm$ allssh 'ssh root@192.168.5.1 /usr/bin/ovs-vsctl add-br br1 && echo success'
3. Create a bond with the desired set of interfaces.
nutanix@cvm$ manage_ovs --bridge_name bridge --interfaces interfaces --bond_name bond_name update_uplinks
Replace bridge with the name of the bridge on which you want to create the bond. Omit the --bridge_name parameter if you want to create the bond on the default OVS bridge br0.
Replace bond_name with a name for the bond. The default value of --bond_name is bond0.
Replace interfaces with one of the following values:
• A comma-separated list of the interfaces that you want to include in the bond. For example,
eth0,eth1.
• A keyword that indicates which interfaces you want to include. Possible keywords:
• 10g. Include all available 10 GbE interfaces
• 1g. Include all available 1 GbE interfaces
• all. Include all available interfaces
For example, create a bond named bond1, with interfaces eth0 and eth1.
nutanix@cvm$ manage_ovs --interfaces eth0,eth1 --bond_name bond1 update_uplinks
4. Log on to the Acropolis host with SSH, and then configure LACP for the bond.
root@ahv# ovs-vsctl set port bond1 lacp=active
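The bond in the sample output below negotiates with lacp_time set to fast. If you want that behavior, you can also set the standard OVS other_config:lacp-time option on the port from the host. This is a minimal sketch; bond1 is the example bond name used above, so adjust it to match your environment:
root@ahv# ovs-vsctl set port bond1 other_config:lacp-time=fast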
5. Verify the status of the bond.
root@ahv# ovs-appctl bond/show bond_name
Note: The following command shows more detailed LACP-specific information:
root@ahv# ovs-appctl lacp/show bond_name
Output similar to the following is displayed:
---- bond1 ----
status: active negotiated
sys_id: 00:13:81:ef:ac:ba
sys_priority: 65534
aggregation key: 4
lacp_time: fast

You can also use the following command to view the configuration details of the bond:
root@ahv# ovs-vsctl list port bond1
Output similar to the following is displayed:
_uuid : abdefeac-0421-4166-bdfe-3216532899e0

lacp : active
mac : []
name : "bond1"
other_config : {lacp-time=fast}
qos : []

Nutanix Host Network Management

Source: Nutanix AdminGuide 4.5

Host Network Management


Network management in an Acropolis cluster consists of the following tasks:

  • Configuring Layer 2 switching through Open vSwitch. When configuring Open vSwitch, you configure bridges, bonds, and VLANs.
  • Optionally changing the IP address, netmask, and default gateway that were specified for the hosts during the imaging process.

Prerequisites for Configuring Networking

Change the configuration from the factory default to the recommended configuration. See Default Factory Configuration and Recommendations for Configuring Networking in an Acropolis Cluster.

Recommendations for Configuring Networking in an Acropolis Cluster

Nutanix recommends that you perform the following OVS configuration tasks from the Controller VM, as described in this documentation:

  • Viewing the network configuration
  • Configuring an Open vSwitch bond with desired interfaces
  • Assigning the Controller VM to a VLAN

For performing other OVS configuration tasks, such as adding an interface to a bridge and configuring LACP for the interfaces in an OVS bond, log on to the Acropolis hypervisor host, and then follow the procedures described in the OVS documentation at http://openvswitch.org/.

Nutanix recommends that you configure the network as follows:

Table: Recommended Network Configuration
Network Component | Recommendations
Open vSwitch

Do not modify the OpenFlow tables that are associated with the default OVS bridge br0.

VLANs

Add the Controller VM and the Acropolis hypervisor to the same VLAN. By default, the Controller VM and the hypervisor are assigned to VLAN 0, which effectively places them on the native VLAN configured on the upstream physical switch.

Do not add any other device, including guest VMs, to the VLAN to which the Controller VM and hypervisor host are assigned. Isolate guest VMs on one or more separate VLANs.

Virtual bridges

Do not delete or rename OVS bridge br0.

Do not modify the native Linux bridge virbr0.

OVS bonded port (bond0)

Aggregate the 10 GbE interfaces on the physical host to an OVS bond on the default OVS bridge br0 and trunk these interfaces on the physical switch.

By default, the 10 GbE interfaces in the OVS bond operate in the recommended active-backup mode.

1 GbE and 10 GbE interfaces (physical host)

If you want to use the 10 GbE interfaces for guest VM traffic, make sure that the guest VMs do not use the VLAN over which the Controller VM and hypervisor communicate.

If you want to use the 1 GbE interfaces for guest VM connectivity, follow the hypervisor manufacturer’s switch port and networking configuration guidelines.

Do not include the 1 GbE interfaces in the same bond as the 10 GbE interfaces. Also, to avoid loops, do not add the 1 GbE interfaces to bridge br0, either individually or in a second bond. Use them on other bridges.

IPMI port on the hypervisor host

Do not trunk switch ports that connect to the IPMI interface. Configure the switch ports as access ports for management simplicity.

Upstream physical switch

Nutanix does not recommend the use of Fabric Extenders (FEX) or similar technologies for production use cases. While initial, low-load implementations might run smoothly with such technologies, poor performance, VM lockups, and other issues might occur as implementations scale upward (see Knowledge Base article KB1612). Nutanix recommends the use of 10Gbps, line-rate, non-blocking switches with larger buffers for production workloads.

Use an 802.3-2012 standards–compliant switch that has a low-latency, cut-through design and provides predictable, consistent traffic latency regardless of packet size, traffic pattern, or the features enabled on the 10 GbE interfaces. Port-to-port latency should be no higher than 2 microseconds.

Use fast-convergence technologies (such as Cisco PortFast) on switch ports that are connected to the hypervisor host.

Avoid using shared buffers for the 10 GbE ports. Use a dedicated buffer for each port.

Physical Network Layout

Use redundant top-of-rack switches in a traditional leaf-spine architecture. This simple, flat network design is well suited for a highly distributed, shared-nothing compute and storage architecture.

Add all the nodes that belong to a given cluster to the same Layer-2 network segment.

Other network layouts are supported as long as all other Nutanix recommendations are followed.

Controller VM

Do not remove the Controller VM from either the OVS bridge br0 or the native Linux bridge virbr0.

This diagram shows the recommended network configuration for an Acropolis cluster. The interfaces in the diagram are connected with colored lines to indicate membership in different VLANs:

Figure: Recommended network configuration for an Acropolis cluster

Layer 2 Network Management with Open vSwitch

The Acropolis hypervisor uses Open vSwitch to connect the Controller VM, the hypervisor, and the guest VMs to each other and to the physical network. The OVS package is installed by default on each Acropolis node and the OVS services start automatically when you start a node.

To configure virtual networking in an Acropolis cluster, you need to be familiar with OVS. This documentation gives you a brief overview of OVS and the networking components that you need to configure to enable the hypervisor, Controller VM, and guest VMs to connect to each other and to the physical network.

About Open vSwitch

Open vSwitch (OVS) is an open-source software switch implemented in the Linux kernel and designed to work in a multiserver virtualization environment. By default, OVS behaves like a Layer 2 learning switch that maintains a MAC address learning table. The hypervisor host and VMs connect to virtual ports on the switch. Nutanix uses the OpenFlow protocol to configure and communicate with Open vSwitch.

Each hypervisor hosts an OVS instance, and all OVS instances combine to form a single switch. As an example, the following diagram shows OVS instances running on two hypervisor hosts.

Figure: Open vSwitch

Default Factory Configuration

The factory configuration of an Acropolis host includes a default OVS bridge named br0 and a native Linux bridge named virbr0.

Bridge br0 includes the following ports by default:

  • An internal port with the same name as the default bridge; that is, an internal port named br0. This is the access port for the hypervisor host.
  • A bonded port named bond0. The bonded port aggregates all the physical interfaces available on the node. For example, if the node has two 10 GbE interfaces and two 1 GbE interfaces, all four interfaces are aggregated on bond0. This configuration is necessary for Foundation to successfully image the node regardless of which interfaces are connected to the network.
    Note: Before you begin configuring a virtual network on a node, you must disassociate the 1 GbE interfaces from the bond0 port. See Configuring an Open vSwitch Bond with Desired Interfaces.

The following diagram illustrates the default factory configuration of OVS on an Acropolis node:

Figure: Default factory configuration of Open vSwitch in the Acropolis hypervisor

The Controller VM has two network interfaces. As shown in the diagram, one network interface connects to bridge br0. The other network interface connects to a port on virbr0. The Controller VM uses this bridge to communicate with the hypervisor host.
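To confirm this layout on a host, you can list the bridges and the ports on br0 with standard OVS commands. This is a quick check rather than part of the official procedure:
root@ahv# ovs-vsctl list-br
root@ahv# ovs-vsctl list-ports br0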

Viewing the Network Configuration

Use the following commands to view the configuration of the network elements.

Before you begin:

Log on to the Acropolis host or the Controller VM with SSH, as indicated in each of the following commands.

  • To show interface properties such as link speed and status, log on to the Controller VM, and then list the physical interfaces.
    nutanix@cvm$ manage_ovs show_interfaces

    Output similar to the following is displayed:

    name mode link speed 
    eth0 1000 True 1000 
    eth1 1000 True 1000 
    eth2 10000 True 10000 
    eth3 10000 True 10000
  • To show the ports and interfaces that are configured as uplinks, log on to the Controller VM, and then list the uplink configuration.
    nutanix@cvm$ manage_ovs --bridge_name bridge show_uplinks

    Replace bridge with the name of the bridge for which you want to view uplink information. Omit the --bridge_name parameter if you want to view uplink information for the default OVS bridge br0.

    Output similar to the following is displayed:

    Uplink ports: bond0
    Uplink ifaces: eth1 eth0
  • To show the virtual switching configuration, log on to the Acropolis host with SSH, and then list the configuration of Open vSwitch.
    root@ahv# ovs-vsctl show

    Output similar to the following is displayed:

    59ce3252-f3c1-4444-91d1-b5281b30cdba 
              Bridge "br0" 
                  Port "br0" 
                      Interface "br0" 
                          type: internal 
                  Port "vnet0" 
                      Interface "vnet0" 
                  Port "br0-arp" 
                      Interface "br0-arp" 
                          type: vxlan 
                          options: {key="1", remote_ip="192.168.5.2"} 
                  Port "bond0" 
                      Interface "eth3" 
                      Interface "eth2" 
                  Port "bond1" 
                      Interface "eth1" 
                      Interface "eth0" 
                  Port "br0-dhcp" 
                      Interface "br0-dhcp" 
                          type: vxlan 
                          options: {key="1", remote_ip="192.0.2.131"}
            ovs_version: "2.3.1"
    
  • To show the configuration of an OVS bond, log on to the Acropolis host with SSH, and then list the configuration of the bond.
    root@ahv# ovs-appctl bond/show bond_name

    For example, show the configuration of bond0.

    root@ahv# ovs-appctl bond/show bond0

    Output similar to the following is displayed:

    ---- bond0 ----
    bond_mode: active-backup
    bond may use recirculation: no, Recirc-ID : -1
    bond-hash-basis: 0
    updelay: 0 ms
    downdelay: 0 ms
    lacp_status: off
    active slave mac: 0c:c4:7a:48:b2:68(eth0)
    
    slave eth0: enabled
            active slave
            may_enable: true
    
    slave eth1: disabled
            may_enable: false
    

Creating an Open vSwitch Bridge

To create an OVS bridge, do the following:

  1. Log on to the Acropolis host with SSH.
  2. Log on to the Controller VM.
    root@host# ssh nutanix@192.168.5.254

    Accept the host authenticity warning if prompted, and enter the Controller VM nutanix password.

  3. Create an OVS bridge on each host in the cluster.
    nutanix@cvm$ allssh 'ssh root@192.168.5.1 /usr/bin/ovs-vsctl add-br bridge'

    Replace bridge with a name for the bridge. The output does not indicate success explicitly, so you can append && echo success to the command. If the bridge is created, the text success is displayed.

    For example, create a bridge and name it br1.

    nutanix@cvm$ allssh 'ssh root@192.168.5.1 /usr/bin/ovs-vsctl add-br br1 && echo success'

    Output similar to the following is displayed:

    nutanix@cvm$ allssh 'ssh root@192.168.5.1 /usr/bin/ovs-vsctl add-br br1 && echo success'
    Executing ssh root@192.168.5.1 /usr/bin/ovs-vsctl add-br br1 && echo success on the cluster
    ================== 192.0.2.203 =================
    FIPS mode initialized
    Nutanix KVM
    success
    ...
    

Configuring an Open vSwitch Bond with Desired Interfaces

When creating an OVS bond, you can specify the interfaces that you want to include in the bond.

Use this procedure to create a bond that includes a desired set of interfaces or to specify a new set of interfaces for an existing bond. If you are modifying an existing bond, the Acropolis hypervisor removes the bond and then re-creates the bond with the specified interfaces.

Note: Perform this procedure on factory-configured nodes to remove the 1 GbE interfaces from the bonded port bond0. You cannot configure failover priority for the interfaces in an OVS bond, so the disassociation is necessary to help prevent any unpredictable performance issues that might result from a 10 GbE interface failing over to a 1 GbE interface. Nutanix recommends that you aggregate only the 10 GbE interfaces on bond0 and use the 1 GbE interfaces on a separate OVS bridge.
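For example, on a factory-configured node you could keep only the 10 GbE interfaces in the default bond by running a command along the following lines. This is a sketch that assumes the default bridge br0 and bond name bond0; adjust the names to match your environment:

nutanix@cvm$ manage_ovs --bridge_name br0 --interfaces 10g --bond_name bond0 update_uplinks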

To create an OVS bond with the desired interfaces, do the following:

  1. Log on to the Acropolis host with SSH.
  2. Log on to the Controller VM.
    root@host# ssh nutanix@192.168.5.254

    Accept the host authenticity warning if prompted, and enter the Controller VM nutanix password.

  3. Create a bond with the desired set of interfaces.
    nutanix@cvm$ manage_ovs --bridge_name bridge --interfaces interfaces update_uplinks --bond_name bond_name

    Replace bridge with the name of the bridge on which you want to create the bond. Omit the --bridge_name parameter if you want to create the bond on the default OVS bridge br0.

    Replace bond_name with a name for the bond. The default value of --bond_name is bond0.

    Replace interfaces with one of the following values:

    • A comma-separated list of the interfaces that you want to include in the bond. For example, eth0,eth1.
    • A keyword that indicates which interfaces you want to include. Possible keywords:
      • 10g. Include all available 10 GbE interfaces
      • 1g. Include all available 1 GbE interfaces
      • all. Include all available interfaces

    For example, create a bond with interfaces eth0 and eth1.

    nutanix@cvm$ manage_ovs --bridge_name br1 --interfaces eth0,eth1 update_uplinks --bond_name bond1

    Example output similar to the following is displayed:

    2015-03-05 11:17:17 WARNING manage_ovs:291 Interface eth1 does not have link state
    2015-03-05 11:17:17 INFO manage_ovs:325 Deleting OVS ports: bond1
    2015-03-05 11:17:18 INFO manage_ovs:333 Adding bonded OVS ports: eth0 eth1 
    2015-03-05 11:17:22 INFO manage_ovs:364 Sending gratuitous ARPs for 192.0.2.21

Virtual Network Segmentation with VLANs

You can set up a segmented virtual network on an Acropolis node by assigning the ports on Open vSwitch bridges to different VLANs. VLAN port assignments are configured from the Controller VM that runs on each node.

For best practices associated with VLAN assignments, see Recommendations for Configuring Networking in an Acropolis Cluster. For information about assigning guest VMs to a VLAN, see the Web Console Guide.

Assigning an Acropolis Host to a VLAN

To assign an AHV host to a VLAN, do the following on every AHV host in the cluster:

  1. Log on to the Acropolis host with SSH.
  2. Assign port br0 (the internal port on the default OVS bridge, br0) to the VLAN that you want the host be on.
    root@ahv# ovs-vsctl set port br0 tag=host_vlan_tag

    Replace host_vlan_tag with the VLAN tag for hosts.

  3. Confirm VLAN tagging on port br0.
    root@ahv# ovs-vsctl list port br0
  4. Check the value of the tag parameter that is shown.
  5. Verify connectivity to the IP address of the AHV host by performing a ping test.
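For example, a quick check of the tag and of host connectivity might look like the following. This is a sketch; host_vlan_tag, host_ip_addr, and the workstation prompt are placeholders, and the ping should be run from a device on the same VLAN as the host:

root@ahv# ovs-vsctl list port br0 | grep tag
user@workstation$ ping -c 3 host_ip_addr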

Assigning the Controller VM to a VLAN

By default, the public interface of a Controller VM is assigned to VLAN 0. To assign the Controller VM to a different VLAN, change the VLAN ID of its public interface. After the change, you can access the public interface from a device that is on the new VLAN.

Note: To avoid losing connectivity to the Controller VM, do not change the VLAN ID when you are logged on to the Controller VM through its public interface. To change the VLAN ID, log on to the internal interface that has IP address 192.168.5.254.

Perform these steps on every Controller VM in the cluster. To assign the Controller VM to a VLAN, do the following:

  1. Log on to the Acropolis host with SSH.
  2. Log on to the Controller VM.
    root@host# ssh nutanix@192.168.5.254

    Accept the host authenticity warning if prompted, and enter the Controller VM nutanix password.

  3. Assign the public interface of the Controller VM to a VLAN.
    nutanix@cvm$ change_cvm_vlan vlan_id

    Replace vlan_id with the ID of the VLAN to which you want to assign the Controller VM.

    For example, add the Controller VM to VLAN 10.

    nutanix@cvm$ change_cvm_vlan 10

    Output similar to the following is displayed:

    Replacing external NIC in CVM, old XML:
    (old interface XML definition omitted)
    new XML:
    (new interface XML definition omitted)
    CVM external NIC successfully updated.

Changing the IP Address of an Acropolis Host

To change the IP address of an Acropolis host, do the following:

  1. Edit the settings of port br0, which is the internal port on the default bridge br0.
    1. Log on to the host console as root.

      You can access the hypervisor host console either through IPMI or by attaching a keyboard and monitor to the node.

    2. Open the network interface configuration file for port br0 in a text editor.
      root@ahv# vi /etc/sysconfig/network-scripts/ifcfg-br0
    3. Update entries for host IP address, netmask, and gateway.

      The block of configuration information that includes these entries is similar to the following:

      ONBOOT="yes" 
      NM_CONTROLLED="no" 
      PERSISTENT_DHCLIENT=1
      NETMASK="subnet_mask" 
      IPADDR="host_ip_addr" 
      DEVICE="br0" 
      TYPE="ethernet" 
      GATEWAY="gateway_ip_addr"
      BOOTPROTO="none"
      • Replace host_ip_addr with the IP address for the hypervisor host.
      • Replace subnet_mask with the subnet mask for host_ip_addr.
      • Replace gateway_ip_addr with the gateway address for host_ip_addr.
    4. Save your changes.
    5. Restart network services.
      root@ahv# /etc/init.d/network restart
  2. Log on to the Controller VM and restart genesis.
    nutanix@cvm$ genesis restart

    If the restart is successful, output similar to the following is displayed:

    Stopping Genesis pids [1933, 30217, 30218, 30219, 30241]
    Genesis started on pids [30378, 30379, 30380, 30381, 30403]

    For information about how to log on to a Controller VM, see Controller VM Access.

  3. Assign the host to a VLAN. For information about how to add a host to a VLAN, see Assigning an Acropolis Host to a VLAN.

Configuring 1 GbE Connectivity for Guest VMs

If you want to configure 1 GbE connectivity for guest VMs, you can aggregate the 1 GbE interfaces (eth0 and eth1) to a bond on a separate OVS bridge, create a VLAN network on the bridge, and then assign guest VM interfaces to the network.

To configure 1 GbE connectivity for guest VMs, do the following:

  1. Log on to the Acropolis host with SSH.
  2. Log on to the Controller VM.
    root@host# ssh nutanix@192.168.5.254

    Accept the host authenticity warning if prompted, and enter the Controller VM nutanix password.

  3. Determine the uplinks configured on the host.
    nutanix@cvm$ allssh manage_ovs show_uplinks

    Output similar to the following is displayed:

    Executing manage_ovs show_uplinks on the cluster
    ================== 192.0.2.49 =================
    Bridge br0:
      Uplink ports: br0-up
      Uplink ifaces: eth3 eth2 eth1 eth0
    
    ================== 192.0.2.50 =================
    Bridge br0:
      Uplink ports: br0-up
      Uplink ifaces: eth3 eth2 eth1 eth0
    
    ================== 192.0.2.51 =================
    Bridge br0:
      Uplink ports: br0-up
      Uplink ifaces: eth3 eth2 eth1 eth0
    
    
  4. If the 1 GbE interfaces are in a bond with the 10 GbE interfaces, as shown in the sample output in the previous step, dissociate the 1 GbE interfaces from the bond. Assume that the bridge name and bond name are br0 and br0-up, respectively.
    nutanix@cvm$ allssh 'manage_ovs --bridge_name br0 --interfaces 10g --bond_name br0-up update_uplinks'

    The command removes the bond and then re-creates the bond with only the 10 GbE interfaces.

  5. Create a separate OVS bridge for 1 GbE connectivity. For example, create an OVS bridge called br1.
    nutanix@cvm$ allssh 'ssh root@192.168.5.1 /usr/bin/ovs-vsctl add-br br1'
  6. Aggregate the 1 GbE interfaces to a separate bond on the new bridge. For example, aggregate them to a bond named br1-up.
    nutanix@cvm$ allssh 'manage_ovs --bridge_name br1 --interfaces 1g update_uplinks --bond_name br1-up'
  7. Log on to each CVM and create a network on a separate VLAN for the guest VMs, and associate the new bridge with the network. For example, create a network named vlan10.br1 on VLAN 10.
    nutanix@cvm$ acli net.create vlan10.br1 vlan=10 vswitch_name=br1
  8. To enable guest VMs to use the 1 GbE interfaces, log on to the web console and assign interfaces on the guest VMs to the network.
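If you prefer the command line over the web console, aCLI can also attach a guest VM interface to the new network. The following is a minimal sketch that assumes a guest VM named vm_name and the vlan10.br1 network created above; verify the command syntax against the aCLI reference for your AOS version:

nutanix@cvm$ acli vm.nic_create vm_name network=vlan10.br1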

Recommendations for Configuring Networking in an Acropolis Cluster

Source: Nutanix AdminGuide V5.0

Recommendations for Configuring Networking in an Acropolis Cluster


Nutanix recommends that you perform the following OVS configuration tasks from the Controller VM, as described in this documentation:

  • Viewing the network configuration
  • Configuring an Open vSwitch bond with desired interfaces
  • Assigning the Controller VM to a VLAN

For performing other OVS configuration tasks, such as adding an interface to a bridge and configuring LACP for the interfaces in an OVS bond, log on to the AHV host, and then follow the procedures described in the OVS documentation at http://openvswitch.org/.

Nutanix recommends that you configure the network as follows:

Table: Recommended Network Configuration
Network Component | Best Practice
Open vSwitch

Do not modify the OpenFlow tables that are associated with the default OVS bridge br0.

VLANs

Add the Controller VM and the AHV host to the same VLAN. By default, the Controller VM and the hypervisor are assigned to VLAN 0, which effectively places them on the native VLAN configured on the upstream physical switch.

Do not add any other device, including guest VMs, to the VLAN to which the Controller VM and hypervisor host are assigned. Isolate guest VMs on one or more separate VLANs.

Virtual bridges

Do not delete or rename OVS bridge br0.

Do not modify the native Linux bridge virbr0.

OVS bonded port (bond0)

Aggregate the 10 GbE interfaces on the physical host to an OVS bond on the default OVS bridge br0 and trunk these interfaces on the physical switch.

By default, the 10 GbE interfaces in the OVS bond operate in the recommended active-backup mode. LACP configurations are known to work, but support might be limited.

1 GbE and 10 GbE interfaces (physical host)

If you want to use the 10 GbE interfaces for guest VM traffic, make sure that the guest VMs do not use the VLAN over which the Controller VM and hypervisor communicate.

If you want to use the 1 GbE interfaces for guest VM connectivity, follow the hypervisor manufacturer’s switch port and networking configuration guidelines.

Do not include the 1 GbE interfaces in the same bond as the 10 GbE interfaces. Also, to avoid loops, do not add the 1 GbE interfaces to bridge br0, either individually or in a second bond. Use them on other bridges.

IPMI port on the hypervisor host

Do not trunk switch ports that connect to the IPMI interface. Configure the switch ports as access ports for management simplicity.

Upstream physical switch

Nutanix does not recommend the use of Fabric Extenders (FEX) or similar technologies for production use cases. While initial, low-load implementations might run smoothly with such technologies, poor performance, VM lockups, and other issues might occur as implementations scale upward (see Knowledge Base article KB1612). Nutanix recommends the use of 10Gbps, line-rate, non-blocking switches with larger buffers for production workloads.

Use an 802.3-2012 standards–compliant switch that has a low-latency, cut-through design and provides predictable, consistent traffic latency regardless of packet size, traffic pattern, or the features enabled on the 10 GbE interfaces. Port-to-port latency should be no higher than 2 microseconds.

Use fast-convergence technologies (such as Cisco PortFast) on switch ports that are connected to the hypervisor host.

Avoid using shared buffers for the 10 GbE ports. Use a dedicated buffer for each port.

Physical Network Layout

Use redundant top-of-rack switches in a traditional leaf-spine architecture. This simple, flat network design is well suited for a highly distributed, shared-nothing compute and storage architecture.

Add all the nodes that belong to a given cluster to the same Layer-2 network segment.

Other network layouts are supported as long as all other Nutanix recommendations are followed.

Controller VM

Do not remove the Controller VM from either the OVS bridge br0 or the native Linux bridge virbr0.

This diagram shows the recommended network configuration for an Acropolis cluster. The interfaces in the diagram are connected with colored lines to indicate membership in different VLANs.

Change the CVM Name in Nutanix

Here are the steps to change the CVM display name:

Before doing this, check that all services are up and that data resiliency is good.

  1. Find the current CVM name:

$ virsh list --all

  2. Shut down the CVM. Replace cvm_name with the name reported in the previous step.

$ virsh shutdown cvm_name

  3. Take an XML dump of the CVM to preserve its configuration. Name the file appropriately, based on the new name of the CVM (new_cvm_name is used as a placeholder in the following steps).

$ virsh dumpxml cvm_name > new_cvm_name.xml

  4. Undefine the CVM.

$ virsh undefine cvm_name

  5. Modify the new_cvm_name.xml file so that the name element contains the new name:

<name>new_cvm_name</name>

  6. Define the CVM from the edited file.

$ virsh define new_cvm_name.xml

  7. Start the CVM.

$ virsh start new_cvm_name

  8. Configure autostart in KVM so that the CVM boots up with the host.

$ virsh autostart new_cvm_name
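
To confirm that the rename took effect, you can list the defined domains again and inspect the new one with standard virsh commands. This is a quick check; new_cvm_name is the placeholder used above:

$ virsh list --all
$ virsh dominfo new_cvm_name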

NCC: ncc health_checks run_all might fail after upgrading

Source : Nutanix KB

ARTICLE NUMBER

000002511

Description

Note: This KB applies to NCC 2.0. For NCC 2.3 or later, please see KB 3620
Note: NCC 2.0 is not supported on NOS versions other than 4.1.3.

The ncc health_checks run_all command might fail after an NCC upgrade.

After running the checks, you might see the following messages:

Old NCC process found on the Controller VMs, Please contact Nutanix support
Old NCC process with PID 5855,6075,13707,13879,17667,17879,19887,20108,20673,20898,22677,22867 found on

Solution

Log on to any Controller VM in the cluster and restart the Prism web console service.

Note: The cluster start command restarts the Prism service only in this case, not the entire cluster.

nutanix@cvm$ allssh '~/cluster/bin/genesis stop prism'

nutanix@cvm$ cluster start
Note: In a recent case, each CVM had to be rebooted; this was after trying genesis stop prism and killing the PIDs with pkill.
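The note above refers to killing the stale NCC PIDs with pkill before resorting to a reboot. A minimal sketch of that approach, assuming the stale processes match the pattern ncc (verify the PIDs reported in the error message before killing anything), is:

nutanix@cvm$ allssh 'pkill -f ncc'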

If you see multiple PIDs on each CVM:

Old NCC process found on the cvms, Please contact Nutanix support
Old NCC process with pid 465,466,467,468,481,503,504,19356 found on 10.81.29.23
Old NCC process with pid 8028,8050,8051 found on 10.81.29.26
Old NCC process with pid 7530,7552,7553 found on 10.81.29.29
Old NCC process with pid 30634,30656,30657 found on 10.81.29.32

Check if email_alert is configured and if so, disable it.

nutanix@cvm$ ncc --show_email_config
[ info ] NCC is set to send email every 12 hrs.
[ info ] NCC email has not been sent after last configuration.
[ info ] NCC email configuration was last set at 2015-08-24 21:29:44.243925.

nutanix@cvm$ ncc --delete_email_config

Nutanix Default Cluster Credentials


Reference post – Default Cluster Credentials

Interface | Target | Username | Password
Nutanix web console | Nutanix Controller VM | admin | admin
vSphere client | ESXi host | root | nutanix/4u
SSH client or console | ESXi host | root | nutanix/4u
SSH client or console | KVM host | root | nutanix/4u
SSH client | Nutanix Controller VM | nutanix | nutanix/4u
IPMI web interface or ipmitool | Nutanix node | ADMIN | ADMIN
IPMI web interface or ipmitool | Nutanix node (NX-3000 only) | admin | admin