How to remove “No Management Network Redundancy” warning on vSphere using das.ignoreRedundantNetWarning

Source: virtuallylg

Whilst updating my ApplicationHA lab images for various Symantec Partner events that I will be running later this year, I noticed a “This host has no management network redundancy” message on my ESX servers. Whilst it doesn’t stop my labs from running properly, it is a bit off-putting seeing warning triangles next to the ESX servers. I typically use a single NIC when creating ESX servers for the lab, and this is the reason the message gets displayed; in real production and test environments you would normally have redundant NICs configured and this message wouldn’t appear.

Here are the steps that will remove the warning using the “das.ignoreRedundantNetWarning” attribute.

There are a couple of ways to stop the message from appearing: the correct way is to add redundancy to the management network by adding another NIC to the configuration; alternatively, you can use the “das.ignoreRedundantNetWarning” attribute in the advanced VMware HA/vSphere HA settings.

1. From within the vSphere Client, right-click on the cluster and select Edit Settings.

2. Select (depending on your version) the VMware HA or vSphere HA feature and click “Advanced Options”.

3. In the options column enter the tag “das.ignoreRedundantNetWarning”

4. In the value column enter “true” for the value.

5. Click on “OK” and then “OK” again to make the changes in the advanced options

6. Right-click on the ESX server which is displaying the warning symbol and select “Reconfigure for vSphere HA” or “Reconfigure for VMware HA”

That’s it: once the reconfigure task completes, the warning should disappear and you are left with an ESX server free of warning icons.

Management network redundancy warning cleared

Of course, in the screenshot I do have a warning sign on the Win2003-SQL virtual machine, but that’s a story for another day…
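If you would rather script the change than click through the GUI, here is a minimal PowerCLI sketch. The cluster name is a placeholder, and it assumes your PowerCLI version supports New-AdvancedSetting with the ClusterHA setting type; the last step is the scripted equivalent of “Reconfigure for vSphere HA”.

# Minimal sketch - the cluster name is a placeholder
$cluster = Get-Cluster -Name "Lab-Cluster"
New-AdvancedSetting -Entity $cluster -Type ClusterHA -Name "das.ignoreRedundantNetWarning" -Value "true" -Confirm:$false
Get-VMHost -Location $cluster | ForEach-Object {
    # Equivalent of right-clicking the host and choosing "Reconfigure for vSphere HA"
    $_.ExtensionData.ReconfigureHostForDAS()
}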


How to Shrink OS disk in Windows running on VMware.

Source: Experts Exchange

The following procedure is split into two parts

 Shrink the Operating System partition – This is covered in Step 1.
 Shrink the VMware Virtual Machine Disk (VMDK) – This is covered in Step 2

1. Login to the Virtual Machine and shrink the OS Partition

Before we can shrink the VMware virtual machine disk (VMDK), we need to shrink the OS partition (to avoid file system corruption). In this example I am using Windows 2008 R2, which has a built-in shrink function. If you are using another OS, please see the third-party partition utilities which are available; they are listed in my Experts Exchange article.

Using an RDP (Remote Desktop Protocol) connection, or the console via the vSphere Client, log in to the virtual machine as an Administrator (press Control-Alt-Delete to log in).
Right-click My Computer and select Manage.
Select Disk Management, and select the partition you need to shrink.
Right-click the volume/partition you want to shrink, and select Shrink Volume.
A dialogue will briefly appear whilst the file system is queried, then the Shrink dialogue will appear. Enter the amount of space by which to reduce the OS partition.

In this example the VMware virtual machine disk (VMDK) is 40GB, and we would like to reduce it to 20GB. The Disk Management utility scans the file system and reports the maximum amount by which the OS partition can be reduced; this is based on current file system usage.

Enter the figure 19.5 (GB) × 1024 = 19968 (MB).
After the shrink operation, as can be seen in Disk Management, there is now 19.5GB of unallocated space on the virtual disk. In Step 2 the VMware virtual machine disk (VMDK) will be “chopped”, removing this unallocated storage space and finally reducing the virtual machine disk (VMDK) to 20GB. Provided that we DO NOT affect the existing partitions, this is a safe operation: in effect the “cut” will be made in the unallocated storage space, after the OS partition.
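If the guest OS is new enough, the shrink can also be scripted rather than done through the GUI. This is only a sketch and assumes Windows Server 2012 / Windows 8 or later, where the Storage module provides Get-Partition, Get-PartitionSupportedSize and Resize-Partition; on Windows 2008 R2, as used in this example, stick with the Disk Management GUI shown above.

# Sketch: shrink C: by roughly 19.5GB (requires the Storage module, Server 2012+ / Windows 8+)
$part      = Get-Partition -DriveLetter C
$supported = Get-PartitionSupportedSize -DriveLetter C
$newSize   = $part.Size - 19.5GB          # target partition size after the shrink
if ($newSize -ge $supported.SizeMin) {
    Resize-Partition -DriveLetter C -Size $newSize
} else {
    Write-Warning "Current file system usage does not allow shrinking by 19.5GB."
}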

2. Reducing the size of the VMware Virtual Machine Disk (VMDK)

Log in and connect to the VMware vSphere Host (ESXi) server which hosts the virtual machine (see my previous Experts Exchange articles).

Power OFF the Virtual Machine, and change to the datastore path where the VMware virtual machine disk (VMDK) is located.

cd /vmfs/volumes//
We need to edit the *.vmdk descriptor file, which contains the variables defining the size of the *-flat.vmdk. Using cat, you can view the contents of the descriptor file.
The number in the descriptor file under the heading # Extent description, after the letters RW, defines the size of the VMware virtual disk (VMDK).

In this example that number is 83886080, and it is calculated as follows:

40 GB = 40 * 1024 * 1024 * 1024 / 512  =  83886080

We wanted to reduce the size of the VMware virtual machine disk (VMDK) from 40 GB to 20 GB, so the value we need to enter into the descriptor file is:

20 GB = 20 * 1024 * 1024 * 1024 / 512  =  41943040
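If you want to sanity-check that arithmetic before touching the descriptor, the same calculation in PowerShell (purely illustrative) is:

# New RW value for the descriptor: target size in bytes divided by the 512-byte sector size
$newSizeGB  = 20
$newSectors = $newSizeGB * 1024 * 1024 * 1024 / 512   # 41943040
$newSectors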

Using vi, edit the descriptor file, and change the number from 83886080 to 41943040, and save the file.
Migrate or copy the virtual machine to another datastore; the copy/migration recreates the -flat.vmdk at the new size defined in the descriptor. If you do not have the migrate option, see my Experts Exchange article here.

After the virtual machine disk (VMDK) has been moved, you will notice that the disk size reflects the desired size of 20GB, both in the vSphere Client and from the console. After restarting the virtual machine and checking Disk Management, you will notice that the 19.5GB of unallocated storage space has disappeared.
Congratulations, you have successfully shrunk a VMware virtual machine disk (VMDK).
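As an optional sanity check from PowerCLI (the VM name below is just a placeholder), you can confirm the capacity now reported for the disk:

# Confirm the virtual disk now reports the desired 20GB capacity
Get-VM -Name "MyVM" | Get-HardDisk | Select-Object Name, Filename, CapacityGB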

PowerCLI script to capture CPU & memory usage stats
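The script below connects to vCenter and exports the last 30 days of CPU and memory usage statistics (maximum, average and minimum) for every host and every VM to c:\Hosts.csv and c:\VMs.csv. Replace the <server>, <user> and <password> placeholders before running it.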

# Connect to vCenter (replace <server>, <user> and <password> with your own values)
Connect-VIServer <server> -User <user> -Password <password>

$allvms = @()
$allhosts = @()
$hosts = Get-VMHost
$vms = Get-VM

# Collect 30 days of CPU and memory usage statistics for every host
foreach ($vmHost in $hosts) {
    $hoststat = "" | Select-Object HostName, MemMax, MemAvg, MemMin, CPUMax, CPUAvg, CPUMin
    $hoststat.HostName = $vmHost.Name

    $statcpu = Get-Stat -Entity $vmHost -Start (Get-Date).AddDays(-30) -Finish (Get-Date) -MaxSamples 10000 -Stat cpu.usage.average
    $statmem = Get-Stat -Entity $vmHost -Start (Get-Date).AddDays(-30) -Finish (Get-Date) -MaxSamples 10000 -Stat mem.usage.average

    $cpu = $statcpu | Measure-Object -Property Value -Average -Maximum -Minimum
    $mem = $statmem | Measure-Object -Property Value -Average -Maximum -Minimum

    $hoststat.CPUMax = $cpu.Maximum
    $hoststat.CPUAvg = $cpu.Average
    $hoststat.CPUMin = $cpu.Minimum
    $hoststat.MemMax = $mem.Maximum
    $hoststat.MemAvg = $mem.Average
    $hoststat.MemMin = $mem.Minimum
    $allhosts += $hoststat
}
$allhosts | Select-Object HostName, MemMax, MemAvg, MemMin, CPUMax, CPUAvg, CPUMin | Export-Csv "c:\Hosts.csv" -NoTypeInformation

# Collect the same statistics for every VM
foreach ($vm in $vms) {
    $vmstat = "" | Select-Object VmName, MemMax, MemAvg, MemMin, CPUMax, CPUAvg, CPUMin
    $vmstat.VmName = $vm.Name

    $statcpu = Get-Stat -Entity $vm -Start (Get-Date).AddDays(-30) -Finish (Get-Date) -MaxSamples 10000 -Stat cpu.usage.average
    $statmem = Get-Stat -Entity $vm -Start (Get-Date).AddDays(-30) -Finish (Get-Date) -MaxSamples 10000 -Stat mem.usage.average

    $cpu = $statcpu | Measure-Object -Property Value -Average -Maximum -Minimum
    $mem = $statmem | Measure-Object -Property Value -Average -Maximum -Minimum

    $vmstat.CPUMax = $cpu.Maximum
    $vmstat.CPUAvg = $cpu.Average
    $vmstat.CPUMin = $cpu.Minimum
    $vmstat.MemMax = $mem.Maximum
    $vmstat.MemAvg = $mem.Average
    $vmstat.MemMin = $mem.Minimum
    $allvms += $vmstat
}
$allvms | Select-Object VmName, MemMax, MemAvg, MemMin, CPUMax, CPUAvg, CPUMin | Export-Csv "c:\VMs.csv" -NoTypeInformation

Enabling Password Free SSH Access on ESXi

When people ask “how” to enable password-free SSH, the question I always ask in return is “should” you enable password-free SSH? In most situations I would dare say the answer is probably not. I often find that the decision to enable password-free access is not based on any real requirement, but rather is done for the sake of convenience – admins want easy access to their vSphere hosts. In my opinion, this is a case where security should trump convenience. However, having said that, I do realize that there are valid situations where SSH access is unavoidable, and depending on the situation it might make sense to enable password-free access. My point here is that just because you can set up password-free SSH doesn’t mean it’s a good idea. Keep in mind, once you enable password-free SSH:


 

  • Anybody with access to the root account on the remote host will have full root access to your ESXi host.
  • Root users allowed password free access to ESXi are not affected by password changes.
  • Root users allowed password free access to ESXi are not affected by lockdown mode.

 

With that I’ll jump down off my soapbox and go over the steps to enable password free SSH.   It’s really pretty easy.  Two basic steps:

1.  On the remote host use “ssh-keygen” to create a private/public key pair.  You can use an RSA or DSA key.  Make sure you leave the passphrase empty/blank.


2.  Next, append the user’s public key (created by the ssh-keygen tool) to the /etc/ssh/keys-root/authorized_keys file on the ESXi host.  Here’s an easy way to do this  (I got this nifty syntax from here):

# cat /root/.ssh/id_dsa.pub | ssh root@<esx host> 'cat >> /etc/ssh/keys-root/authorized_keys'


With the remote host’s public key stored in the “authorized_keys” file, any time this user SSHs to the vSphere host, instead of prompting for a password the host checks the remote user’s public key against what’s in the authorized_keys file, and if a match is found access is allowed.

Note:  I’ve seen a few articles that mention the need to edit the /etc/ssh/sshd_config file.  On ESXi 5.0 you do not need to edit the sshd_config file; it is already configured to allow password-free SSH.  All you need to do is load the user’s public keys into the /etc/ssh/keys-root/authorized_keys file.

Default Passwords for VMware and EMC

Default Passwords

Here is a collection of default passwords to save you time googling for them:

EMC Secure Remote Support (ESRS) Axeda Policy Manager Server:
•Username: admin
•Password: EMCPMAdm7n

EMC VNXe Unisphere (EMC VNXe Series Quick Start Guide, step 4):
•Username: admin
•Password: Password123#

EMC vVNX Unisphere:
•Username: admin
•Password: Password123#
NB You must change the administrator password during this first login.

EMC CloudArray Appliance:
•Username: admin
•Password: password
NB Upon first login you are prompted to change the password.

EMC CloudBoost Virtual Appliance:
https://:4444
•Username: localadmin
•Password: password
NB You must immediately change the admin password.

EMC Ionix Unified Infrastructure Manager/Provisioning (UIM/P):
•Username: sysadmin
•Password: sysadmin

EMC VNX Monitoring and Reporting:
•Username: admin
•Password: changeme

EMC RecoverPoint:
•Username: admin
Password: admin
•Username: boxmgmt
Password: boxmgmt
•Username: security-admin
Password: security-admin

EMC XtremIO:

XtremIO Management Server (XMS)
•Username: xmsadmin
password: 123456 (prior to v2.4)
password: Xtrem10 (v2.4+)

XtremIO Management Secure Upload
•Username: xmsupload
Password: xmsupload

XtremIO Management Command Line Interface (XMCLI)
•Username: tech
password: 123456 (prior to v2.4)
password: X10Tech! (v2.4+)

XtremIO Management Command Line Interface (XMCLI)
•Username: admin
password: 123456 (prior to v2.4)
password: Xtrem10 (v2.4+)

XtremIO Graphical User Interface (XtremIO GUI)
•Username: tech
password: 123456 (prior to v2.4)
password: X10Tech! (v2.4+)

XtremIO Graphical User Interface (XtremIO GUI)
•Username: admin
password: 123456 (prior to v2.4)
password: Xtrem10 (v2.4+)

XtremIO Easy Installation Wizard (on storage controllers / nodes)
•Username: xinstall
Password: xiofast1

XtremIO Easy Installation Wizard (on XMS)
•Username: xinstall
Password: xiofast1

Basic Input/Output System (BIOS) for storage controllers / nodes
•Password: emcbios

Basic Input/Output System (BIOS) for XMS
•Password: emcbios

EMC ViPR Controller :
http://ViPR_virtual_ip (the ViPR public virtual IP address, also known as the network.vip)

•Username: root
Password: ChangeMe

EMC ViPR Controller Reporting vApp:
http://:58080/APG/

•Username: admin
Password: changeme

EMC Solutions Integration Service:
https://:5480

•Username: root
Password: emc

EMC VSI for VMware vSphere Web Client:
https://:8443/vsi_usm/
•Username: admin
•Password: ChangeMe

Note:
After the Solutions Integration Service password is changed, it cannot be modified.
If the password is lost, you must redeploy the Solutions Integration Service and use the default login ID and password to log in.

Cisco Integrated Management Controller (IMC) / CIMC / BMC:
•Username: admin
•Password: password

Cisco UCS Director:
•Username: admin
•Password: admin
•Username: shelladmin
•Password: changeme

Hewlett Packard P2000 StorageWorks MSA Array Systems:
•Username: admin
•Password: !admin (exclamation mark ! before admin)
•Username: manage
•Password: !manage (exclamation mark ! before manage)

IBM Security Access Manager Virtual Appliance:

•Username: admin
•Password: admin

VCE Vision:
•Username: admin
•Password: 7j@m4Qd+1L
•Username: root
•Password: V1rtu@1c3!

VMware vSphere Management Assistant (vMA):
•Username: vi-admin
•Password: vmware

VMware Data Recovery (VDR):
•Username: root
•Password: vmw@re (make sure you enter @ as Shift-2 as in US keyboard layout)

VMware vCenter Hyperic Server:
https://Server_Name_or_IP:5480/
•Username: root
•Password: hqadmin

https://Server_Name_or_IP:7080/
•Username: hqadmin
•Password: hqadmin

VMware vCenter Chargeback:
https://Server_Name_or_IP:8080/cbmui
•Username: root
•Password: vmware

VMware vCenter Server Appliance (VCSA) 5.5:
https://Server_Name_or_IP:5480
•Username: root
•Password: vmware

VMware vCenter Operations Manager (vCOPS):

Console access:
•Username: root
•Password: vmware

Manager:
https://Server_Name_or_IP
•Username: admin
•Password: admin

Administrator Panel:
https://Server_Name_or_IP/admin
•Username: admin
•Password: admin

Custom UI User Interface:
https://Server_Name_or_IP/vcops-custom
•Username: admin
•Password: admin

VMware vCenter Support Assistant:
http://Server_Name_or_IP
•Username: root
•Password: vmware

VMware vCenter / vRealize Infrastructure Navigator:
https://Server_Name_or_IP:5480
•Username: root
•Password: specified during OVA deployment

VMware ThinApp Factory:
•Username: admin
•Password: blank (no password)

VMware vSphere vCloud Director Appliance:
•Username: root
•Password: vmware

VMware vCenter Orchestrator :
https://Server_Name_or_IP:8281/vco – VMware vCenter Orchestrator
https://Server_Name_or_IP:8283 – VMware vCenter Orchestrator Configuration
•Username: vmware
•Password: vmware

VMware vCloud Connector Server (VCC) / Node (VCN):
https://Server_Name_or_IP:5480
•Username: admin
•Password: vmware
•Username: root
•Password: vmware

VMware vSphere Data Protection Appliance:
•Username: root
•Password: changeme

VMware HealthAnalyzer:
•Username: root
•Password: vmware

VMware vShield Manager:
https://Server_Name_or_IP
•Username: admin
•Password: default (type “enable” to enter Privileged Mode; the password is “default” as well)

Teradici PCoIP Management Console:
•The default password is blank

Trend Micro Deep Security Virtual Appliance (DS VA):
•Login: dsva
•password: dsva

Citrix Merchandising Server Administrator Console:
•User name: root
•password: C1trix321

TP-Link ADSL modem / router, Wi-Fi :
•User name: admin
•password: admin

VMTurbo Operations Manager:
•User name: administrator
•password: administrator
If DHCP is not enabled, configure a static address by logging in with these credentials:
•User name: ipsetup
•password: ipsetup
Console access:
•User name: root
•password: vmturbo

How To: Use PowerCLI to find (and disconnect) all CD Drives on VMs

VMs that leave ISOs mounted cause problems. I’d like to find all the VMs that have CD-ROM drives loaded with ISOs, look over that list, and then remove them if necessary.

Solution :

The first solution I provided here wasn’t that great, so I’m updating this post. The original contents have been changed because they would previously disconnect the entire CD-ROM drive, rather than just unmounting the ISO. As you can imagine, doing the equivalent of ripping a CD-ROM drive out of a running machine can cause some interesting behavior. The solution below outlines a much better way to do this.

Two one-line PowerCLI scripts will help us with this.

Firstly, to list the connected CD-ROM drives and any mounted ISOs for all powered-on VMs:

Get-VM | Where-Object {$_.PowerState -eq "PoweredOn"} | Get-CDDrive | Format-Table Parent, IsoPath

And as long as there aren’t any you need to keep mounted, you can just select them all and set the state to “No Media” for each CD drive:

Get-VM | Where-Object {$_.PowerState -eq "PoweredOn"} | Get-CDDrive | Set-CDDrive -NoMedia -Confirm:$false

Note the -Confirm:$False to allow it to just proceed with what it needs to do.
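If there are a few VMs whose ISOs you do need to keep, a slight variation (the excluded VM name here is just a placeholder) skips them and only touches drives that actually have an ISO mounted:

Get-VM | Where-Object { $_.PowerState -eq "PoweredOn" -and $_.Name -ne "KeepMyIso-VM" } |
    Get-CDDrive | Where-Object { $_.IsoPath } |
    Set-CDDrive -NoMedia -Confirm:$false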

 

Advanced: VMware HA Important Points

Post source: isupportyou

  • HA maintains the availability of virtual machines in the event of a host failure or isolation by powering them on on the remaining hosts.
  • Every host in the cluster exchanges heartbeats with the other hosts to notify them that it is alive.
  • A host is declared as isolated when its heartbeats are not received within 12 seconds.
  • A host is declared as dead when its heartbeats are not received within 15 seconds. We can increase this duration to avoid false positives by defining the advanced setting das.failuredetectiontime (specified in milliseconds) in vCenter.
  • If we set das.failuredetectiontime to 60 seconds (60000 ms) we can avoid false isolations: if an isolated host comes back within 60 seconds, the VMs continue to run on the same host and HA never interferes.
  • When a host is declared as isolated, after the defined interval the isolation response is executed on that host.
    • If the isolation response is set to “Leave powered on”, the VMs continue to run on the isolated host; however, if HA tries to power on the same VM on another host, it may not be able to do so because of the underlying file-locking mechanism provided by VMFS.
    • If the isolation response is set to “Shut down”, the VMs undergo a clean shutdown on the isolated host and are then powered on on another host.
    • If the isolation response is set to “Power off”, the VMs are powered off immediately by the host and then powered on on another host. If the host has VMs which were shut down gracefully, they are never powered on on a new host; only abnormally powered-off VMs are powered on on other hosts, to avoid service interruptions.
  • When a host is declared as dead, all the VMs running on it are powered on immediately on other hosts, following the admission control policies and restart priorities.
  • Admission control policies are defined to ensure that sufficient resources are available to virtual machines when a host failure/isolation occurs; in other words, HA reserves some resources to provide room for virtual machines in the worst-case scenario. There are three policies available to make these reservations.
    • Host failures a cluster can tolerate
      • If we select this option, HA reserves resources based on a concept called slots. “A slot is a logical representation of the largest CPU and memory reservations that satisfy the requirements of any powered-on VM in the cluster.” When slots are calculated, the host with the most slots is removed from the equation; this ensures there are resources for all VMs even if the largest host in the cluster fails.
      • A slot is defined from the largest CPU and memory reservations in the cluster: if one VM has a 4GHz CPU reservation and 1GB of RAM, and another VM has 2GHz and 4GB of RAM, then 4GHz and 4GB is defined as the slot.
      • The number of slots available on a host is calculated from the most restrictive resource. For example, if a host has 256GB of RAM and 16GHz of CPU (with the 4GHz/4GB slot above), it has 64 slots for memory and 4 slots for CPU, so the host is considered to have 4 slots for failover, which means it can accommodate 4 VMs in the event of a failure.
      • A custom slot size can also be defined using advanced options, to increase the number of slots and avoid resource wastage when the cluster has a VM with a very large reservation.
      • An example calculation of slots (Example 1):
        • Host A -> 12GHZ + 12 GB
        • Host B ->  8 GHZ + 8 GB
        • Host C -> 12GHZ + 12 GB
        • VM 1 -> 2GHZ + 4 GB
        • VM 2 -> 1GHZ + 2 GB
        • VM 3 -> 4GHZ + 2 GB
        • VM 4 -> 4GHZ + 1 GB
        • VM 5 -> 2GHZ + 3 GB
        • VM 6 -> 3GHZ + 3 GB

So this cluster has 3 hosts with different sets of resources; in total it has 32GHz of CPU and 32GB of RAM. As discussed earlier, the slot is calculated from the largest CPU and RAM reservations assigned to the VMs in this cluster. In our case the slot size would be 4GHz and 4GB, which satisfies the requirements of any powered-on VM in this cluster.

So Host A has 3 slots, Host B has 2 slots and Host C also has 3 slots, giving a total of 8 slots in this cluster.

For admission control, the host with the most slots is removed from the equation, so in the worst case (Host A or Host C failing) only 5 slots remain; this is called the current failover capacity. That is not enough to power on all 6 VMs, so HA admission control will not allow all of them to be powered on in this cluster.

  • Another example (Example 2):
    • Host A -> 9GHZ + 9 GB
    • Host B -> 9GHZ + 6 GB
    • Host C -> 6GHZ + 6 GB
    • VM 1 -> 2GHZ + 1 GB
    • VM 2 -> 2GHZ + 1 GB
    • VM 3 -> 1GHZ + 2 GB
    • VM 4 -> 1GHZ + 1 GB
    • VM 5 -> 1GHZ + 1 GB

So this cluster has 3 hosts with different sets of resources; in total it has 24GHz of CPU and 21GB of RAM. As discussed earlier, the slot is calculated from the largest CPU and RAM reservations assigned to the VMs in this cluster. In our case the slot size would be 2GHz and 2GB, which satisfies the requirements of any powered-on VM in this cluster.

So Host A has 4 slots, Host B has 3 slots and Host C also has 3 slots, giving a total of 10 slots in this cluster.

If Host B fails, this cluster still has 7 slots available; this is called the current capacity. We have 5 VMs to power on, so all of them can be powered on without any problem as they do not violate the resource constraints.
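To make the slot arithmetic concrete, here is a small PowerShell sketch that reproduces the Example 2 numbers. The figures are taken straight from the example above; this is illustrative only and is not how HA itself computes slots.

# Slot size = largest CPU and memory reservations in the cluster (Example 2)
$slotCpuGhz = 2
$slotMemGB  = 2
$clusterHosts = @(
    @{ Name = 'Host A'; CpuGhz = 9; MemGB = 9 },
    @{ Name = 'Host B'; CpuGhz = 9; MemGB = 6 },
    @{ Name = 'Host C'; CpuGhz = 6; MemGB = 6 }
)
$totalSlots = 0
foreach ($h in $clusterHosts) {
    # A host's slot count is the more restrictive of its CPU and memory slot counts
    $slots = [Math]::Min([Math]::Floor($h.CpuGhz / $slotCpuGhz), [Math]::Floor($h.MemGB / $slotMemGB))
    "{0}: {1} slots" -f $h.Name, $slots
    $totalSlots += $slots
}
"Total slots in the cluster: $totalSlots"   # 4 + 3 + 3 = 10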

  • Percentage of resources to be reserved
    • This is the most flexible mechanism to reserve resources for HA failovers.
    • This does not use the slot concept to reserve resources; the available capacity is calculated using the following formula:
      • (Total available resources - Total reserved resources) / (Total available resources). For example, a cluster with 100GHz of CPU and 20GHz of CPU reservations has (100 - 20) / 100 = 80% of its CPU resources available.
    • Using this option we can reserve resources at cluster level, not at host level.
    • No advanced options need to be configured.
  • Dedicating a host for failover
    • A dedicated host is designated purely for failover purposes; this wastes resources, as the dedicated host sits idle all the time.
  • Restart priorities are followed in the order below:
    • Agent VMs
    • FT-enabled VMs
    • VMs with the high priority option defined
    • VMs with the medium priority option defined
    • VMs with the low priority option defined
  • When a host is declared as dead, all the VMs running on it are powered on on the other hosts in that cluster, based on the restart priority defined above.

HA in vSphere 4.1

  • The HA agent is called Automated Availability Manager (AAM) in this version.
  • When we configure HA on a vSphere 4.1 cluster, the first 5 hosts are designated as primary nodes; out of these 5, one node acts as the “master primary” and handles restarts of VMs in the event of a host failure.
  • All the remaining hosts will join as Secondary Nodes.
  • Primary nodes maintain information about cluster settings and secondary node states.
  • All these nodes exchange heartbeats with each other to learn the health status of the other nodes.
  • Primary nodes send their heartbeats to all other primary and secondary nodes.
  • Secondary nodes send their heartbeats to primaries only.
  • Heartbeats are exchanged between all nodes every second.
  • In case of a primary failure, another primary node takes over responsibility for restarts.
  • If all primaries go down at the same time, no restarts will be initiated; in other words, at least one primary is required to initiate restarts.
  • Election of a primary happens only in the following scenarios:
    • When a host is disconnected
    • When a host enters maintenance mode
    • When a host is not responding
    • When the cluster is reconfigured for HA

 HA in vSphere 5.0

  • The HA agent is called Fault Domain Manager (FDM) in this version.
  • HA is completely re-designed in this version; the HA agent communicates directly with hostd instead of using a translator to talk to vpxa. Every host therefore has information about the other hosts’ resources, which helps in the event of a vCenter failure.
  • The DNS dependency has been completely removed.
  • When we configure HA on a vSphere 5.0 cluster, one node is elected as master and all the other nodes are configured as slaves.
  • The master node is elected based on the number of datastores it is connected to; if all the hosts in the cluster are connected to the same number of datastores, the host’s managed ID is taken into consideration, and the host with the highest managed ID is elected as master.
  • All hosts exchange heartbeats with each other to learn about each other’s health state.
  • Host isolation detection has been enhanced in this version by introducing datastore heartbeating. Every host creates a hostname-hb file on the configured datastores and keeps it updated at a specific interval. Two datastores are selected for this purpose.
  • If we want to know which host is the master and which are the slaves, go to vCenter and click Cluster Status in the vSphere HA section of the cluster’s Summary page.