Taken from vmware.com
- 64-bit x86 CPUs with at least two cores.
- ESXi 5.0 requires CPUs that support the LAHF and SAHF instructions.
- Supports up to 2TB of RAM, although the free version is limited to 32GB
- ESXi requires a minimum of 2GB of physical RAM. VMware recommends 8GB of RAM to take full advantage of ESXi features and run virtual machines in typical production environments.
- One or more Gigabit or 10Gb Ethernet controllers.
Any combination of one or more of the following controllers:
- Basic SCSI controllers. Adaptec Ultra-160 or Ultra-320, LSI Logic Fusion-MPT, or most NCR/Symbios SCSI.
- RAID controllers. Dell PERC (Adaptec RAID or LSI MegaRAID), HP Smart Array RAID, or IBM (Adaptec) ServeRAID controllers.
- SCSI disk or a local, non-network, RAID LUN with unpartitioned space for the virtual machines.
- For Serial ATA (SATA), a disk connected through supported SAS controllers or supported on-board SATA controllers. SATA disks are considered remote, not local, so they are not used for the scratch partition by default.
Note: you can upgrade from ESX and ESXi 4.x to ESXi 5.0.
Image Builder
- Image Builder allows VMware admins to customize their installation media by adding and removing vSphere Installation Bundles (VIBs).
- You can only add VIBs whose acceptance level is at least as high as the image profile’s. The levels, from most to least trusted, are VMwareCertified, VMwareAccepted, PartnerSupported and CommunitySupported.
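The acceptance-level rule can be sketched as a simple ordering check. This is a hypothetical Python helper for illustration only (the real interface is PowerCLI/Image Builder, not this function):

```python
# Acceptance levels, ordered from most to least trusted.
LEVELS = ["VMwareCertified", "VMwareAccepted", "PartnerSupported", "CommunitySupported"]

def can_add_vib(image_level: str, vib_level: str) -> bool:
    """A VIB can be added only if its acceptance level is at least as
    trusted as the image profile's acceptance level."""
    return LEVELS.index(vib_level) <= LEVELS.index(image_level)

print(can_add_vib("VMwareAccepted", "VMwareCertified"))   # a certified VIB fits an accepted image
print(can_add_vib("VMwareAccepted", "PartnerSupported"))  # a partner VIB is less trusted
```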
Host Profiles
Allow you to create a “baseline” of settings/configurations that can be applied to multiple ESXi servers, which helps maintain consistency. Popular options to set are DNS and NTP.
Auto Deploy
Is set up via an OVF template with DHCP, TFTP and HTTP servers, a deploy-cmd CLI and a database. Note that the ESXi install is stateless: the OS is gone on reboot! The Auto Deploy process is:-
- PXE boot the target server
- ESXi is then auto-deployed to the host
- The ESXi host will then be added into your vCenter
- The ESXi host will then have a Host Profile applied to it.
You manage your Auto Deploy images via PowerCLI, e.g. you can create rules that associate hosts with image profiles, host profiles and a location in vCenter.
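Conceptually, a deploy rule is just a match predicate plus the items to attach. The sketch below is illustrative Python only; the field names and values are made up, and real rules are created with PowerCLI cmdlets such as New-DeployRule:

```python
# Hypothetical model of Auto Deploy rule matching: each rule pairs a
# predicate on the booting host with an image profile, host profile and
# vCenter folder. The first matching rule wins.
rules = [
    {"match": lambda h: h["ip"].startswith("10.0.1."),
     "image": "ESXi-5.0-standard", "host_profile": "prod-baseline", "folder": "Prod"},
    {"match": lambda h: True,  # catch-all rule
     "image": "ESXi-5.0-standard", "host_profile": "default", "folder": "Staging"},
]

def first_matching_rule(host):
    for rule in rules:
        if rule["match"](host):
            return rule
    return None

host = {"ip": "10.0.1.42", "mac": "00:50:56:aa:bb:cc"}
print(first_matching_rule(host)["folder"])  # → Prod
```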
VMFS-5
- 64TB device support
- Unified block size (of 1MB). Note that volumes upgraded from VMFS-3 will retain their original block size.
- Improved sub-block mechanism: sub-blocks are now 8KB (down from 64KB), and files smaller than 1KB are stored in the file descriptor itself for better efficiency.
- Support for pass-through RDMs larger than 2TB
- Upgrade from VMFS-3 does not require downtime.
- The partition format of upgraded VMFS-3 partitions changes automatically and seamlessly from MBR to GPT when the upgraded VMFS-5 volume is grown above the 2TB threshold.
- Supports RDM sizes of 64TB in physical compatibility mode and 2TB minus 512 bytes in virtual compatibility mode
Storage DRS
- Provides smart virtual machine placement and load-balancing mechanisms based on I/O and space capacity.
- Works with VMFS and NFS, although it’s not recommended to mix the two in a datastore cluster.
A datastore cluster is a collection of datastores; it forms the basis of Storage DRS.
- During the configuration of a virtual machine you can store its drives on a datastore cluster and vSphere will decide where they are physically stored.
- I/O load is evaluated by default every 8 hours.
Affinity rules decide which virtual disks can be placed on the same datastore. There are three:
- VMDK Anti-Affinity – Virtual disks of a virtual machine are placed on different datastores.
- VMDK Affinity – Virtual disks are kept together on the same datastore.
- VM Anti-Affinity – Two specified virtual machines, including associated disks, are placed on different datastores.
- There is a datastore maintenance mode which will migrate all data off a datastore (i.e. much in the same way as server maintenance mode)
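As a rough illustration (not VMware’s actual placement algorithm), initial placement in a datastore cluster can be thought of as picking the datastore with the most free space while honouring a VMDK anti-affinity rule:

```python
# Toy sketch: place each disk on the datastore with the most free space,
# skipping any datastore that already holds another disk from the same
# anti-affinity group.
def place_disk(disk, datastores, anti_affinity_group, placements):
    candidates = [
        ds for ds in datastores
        if not any(placements.get(d) == ds["name"]
                   for d in anti_affinity_group if d != disk)
    ]
    best = max(candidates, key=lambda ds: ds["free_gb"])
    placements[disk] = best["name"]
    return best["name"]

datastores = [{"name": "ds1", "free_gb": 500}, {"name": "ds2", "free_gb": 300}]
placements = {}
group = ["vm1.vmdk", "vm1_1.vmdk"]          # disks to keep apart
place_disk("vm1.vmdk", datastores, group, placements)    # lands on ds1 (most free)
place_disk("vm1_1.vmdk", datastores, group, placements)  # forced onto ds2
print(placements)
```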
Profile Driven Storage
- Allows the placement of virtual machines based on a number of requirements (performance, availability, SLA, etc.)
- Storage capabilities are either passed to vSphere via the storage APIs or set manually
- Supports NFS, iSCSI and FC
Storage vMotion
- You can now vMotion VMs with snapshots
- Is more efficient in vSphere 5 due to mirror mode, which uses the mirror driver
- You cannot use Storage vMotion with NPIV
- You cannot perform a Storage vMotion during the installation of VMware Tools
iSCSI Software Initiator
- No longer requires configuration through the command line
- Discovery methods are “send targets” and “static targets”
Fibre Channel over Ethernet Software Initiator
- Enables the use of FCoE without an FCoE adapter
- Requires a network adapter that supports partial FCoE offload capabilities
Storage I/O Control
- Set shares and limits for datastores (including NFS)
- Doesn’t support RDM
- Doesn’t support datastores with multiple extents
- Must be managed by a single vCenter server
vSphere Storage APIs for Array Integration (VAAI)
vSphere Thin Provisioning
- Dead Space Reclamation informs the array when files are moved/deleted
- Out-of-Space Conditions will warn when LUNs are running out of space
Hardware acceleration for NAS
- Full File Clone – similar to Full Copy; enables virtual disks to be cloned by the NAS device
- Reserve Space – enables creation of thick virtual disk files on NAS
- Complies with the SCSI T10 standard (for Full Copy, Block Zeroing and hardware-assisted locking)
- It is recommended to have only one VMFS volume per LUN
- The diagnostic partition should not be setup on a SAN (unless the servers are diskless)
- Each LUN must present the same LUN ID to enable multipathing
- You can set the queue depth for the HBA during system setup
- The predictive scheme utilises a number of LUNs with different storage capabilities
- Adaptive scheme utilises a smaller number of large LUNs
Where there are multiple paths to storage the following multipathing polices can be used:-
- Most Recently Used (MRU) – uses the first working path discovered at boot and stays on it until it fails; there is no automatic failback.
- Fixed – uses the path marked with the “preferred” flag if set, and fails back to it when that path recovers.
- Round Robin (RR) – cycles I/O through the available paths.
- VMW_PSP_FIXED now incorporates the functionality of VMW_PSP_FIXED_AP from previous versions: it will query the storage array for the preferred path.
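As a toy illustration of the Round Robin policy, I/Os simply rotate across the active paths (the real PSP switches paths after a configurable number of I/Os, 1000 by default; this sketch switches every I/O):

```python
from itertools import cycle

# Two paths to the same LUN; Round Robin alternates between them instead of
# pinning all I/O to one path as MRU or Fixed would.
paths = ["vmhba1:C0:T0:L0", "vmhba2:C0:T0:L0"]
next_path = cycle(paths)
issued = [next(next_path) for _ in range(4)]
print(issued)  # alternates between the two paths
```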
NETWORKING
- Now supports Realtek network cards
- Teaming and failover options for virtual Distributed Switches (vDS)
Distributed vSwitches (DvS) v5.0 have the following new features:-
- User-defined network resource pools in network i/o control
- Netflow and port mirroring
The dynamic binding dvport type has been removed in 5.0
Configuration options for vSwitches
- Promiscuous mode Enables a vNIC to receive all traffic passed on a vSwitch.
- Forged Transmits Allows a vNIC to send traffic with a “fake” MAC address
- MAC Address Changes Allows the guest OS on a VM to change its MAC address
- The Notify Switches option will notify physical switches when a virtual NIC’s location changes, e.g. after vMotion
ESXi shapes outbound traffic on vSS and inbound and outbound traffic on vDS. The following traffic shaping options are given:
- Average Bandwidth The number of bits per second averaged over time (kbit/s)
- Peak Bandwidth The maximum number of bits per second when sending or receiving a burst (kbit/s)
- Burst Size The maximum number of bytes to send in one burst. If a port doesn’t use all of its allocated bandwidth it can use a burst “bonus” (KB)
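A rough sketch of how the three parameters interact, assuming whole-second accounting for simplicity (the real shaper operates per packet, so this is an approximation; `allowed_bytes` is a hypothetical helper):

```python
# Greatly simplified: a port may send its average rate over the interval,
# plus at most one burst bonus, with the burst capped by the peak rate.
def allowed_bytes(seconds, avg_kbits, peak_kbits, burst_kbytes):
    avg_bytes = avg_kbits * 1000 // 8 * seconds
    bonus = min(burst_kbytes * 1024, (peak_kbits - avg_kbits) * 1000 // 8 * seconds)
    return avg_bytes + bonus

# 10 seconds at 100 Mbit/s average, 200 Mbit/s peak, 10 MB burst bonus:
print(allowed_bytes(10, 100_000, 200_000, 10_240))
```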
SplitRx Mode
- Uses multiple physical CPUs to process network packets; can provide improved network performance.
- Only supported on VMXNET3
- Must be enabled in the .vmx file
Network IO Control
Only available on vDS. It allows you to control traffic by class; for each class you can assign shares, specify a host limit or tag with a QoS priority. The predefined classes are vMotion, iSCSI, NFS, FT logging, management, vSphere Replication and virtual machine traffic.
Virtual Machines
- vSphere 5 now supports “monster VMs” with up to 32 vCPUs and 1TB of memory
- VMDKs are still limited to 2TB minus 512 bytes
- Thick-provisioned eager-zeroed disks provide the best performance (the whole disk is written with zeros in advance)
- Must be running at least virtual hardware version 4 to be supported by ESXi 5.0
- vlance is a legacy adapter; the VMXNET adapters are better optimised and offer gigabit connections, but require VMware Tools
vSphere High Availability (HA)
- Uses an agent called the Fault Domain Manager (FDM)
- vSphere 5 elects one host as the “master” whilst the rest are “slaves”
- The master is responsible for monitoring the state of VMs and restarting failed VMs
- Uses a network and datastore heartbeat to determine whether hosts have failed. The datastore heartbeat is only used when the network heartbeat has failed.
- HA slot sizes are used to calculate the number of VMs that can be powered on in an HA cluster when “Host failures cluster tolerates” is selected. The slot size is calculated from the largest CPU and memory reservations of the VMs in the cluster. HA Admission Control then prevents new VMs from being powered on if doing so would not leave enough slots available should a host fail.
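The slot arithmetic can be sketched as follows (simplified: real HA also accounts for memory overhead, applies default minimums when no reservations are set, and the function name here is made up):

```python
# Slot size = largest CPU and memory reservation in the cluster.
# Capacity = slots left after the worst-case host failure(s).
def powered_on_capacity(vms, hosts, host_failures_tolerated=1):
    cpu_slot = max(vm["cpu_res_mhz"] for vm in vms)
    mem_slot = max(vm["mem_res_mb"] for vm in vms)
    slots_per_host = [min(h["cpu_mhz"] // cpu_slot, h["mem_mb"] // mem_slot)
                      for h in hosts]
    # Assume the largest hosts fail: drop their slots from the pool.
    surviving = sorted(slots_per_host)[:len(hosts) - host_failures_tolerated]
    return sum(surviving)

vms = [{"cpu_res_mhz": 500, "mem_res_mb": 1024},
       {"cpu_res_mhz": 1000, "mem_res_mb": 512}]
hosts = [{"cpu_mhz": 8000, "mem_mb": 16384},
         {"cpu_mhz": 8000, "mem_mb": 16384}]
print(powered_on_capacity(vms, hosts))  # slots left after one host failure
```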
vSphere Update Manager
Baselines for VM upgrades:-
- VM Hardware upgrade to match host
- VM Tools Upgrade to match host
- VA upgrade to latest (virtual appliance)
- You cannot upgrade the vCenter appliance; you need to install a brand-new appliance and import the config
vSphere Web Client
An optional component of vCenter that allows for management of vCenter via a web interface (https://localhost:9443/vsphere-client/ui.jsp)
vApps
A way of grouping VMs, resource pools or other vApps so you can manage them together (e.g. apply performance criteria, shut down, etc.)
- You can back up an ESXi host’s config with the vicfg-cfgbackup -s command (from the vCLI or vMA).
vCenter Server Linked Mode
- vCenter does not support Linked Mode with older versions of vCenter
- Stores information for every vCenter server (including roles and licenses) in an Active Directory Application Mode (ADAM) DB on each vCenter.
- Allows you to search across all vCenter instances
- View all inventories in a single view
- You cannot migrate hosts or VMs between vCenter servers connected in linked mode
Unplanned device loss
- This is when an ESXi host thinks that a storage device is permanently unavailable, e.g. deleted, unmapped or failed hardware
- You should do an adapter rescan to remove any links to the device
Fault Tolerance (FT)
- Supports VMDKs and RDMs that are thick provisioned. If disks are thin provisioned you are prompted to convert them.
- Single vCPU only
- The VM must be running a supported guest OS (Windows, Solaris, NetWare, FreeBSD)
Scheduled Task Options
| Service | Port |
|---|---|
| ESXi host management | 443 |
| ESXi dump collector | 6500 |
| Management and console | 902 |
| vCenter Server Linked Mode | 636 |
| vSphere Web Client | 9443 |
In a Resource Pool you can set shares, reservation and limits for CPU and Memory Resources
- Shares Either low, normal or high; a resource pool with “high” shares will have more access to resources than one with “low” shares.
- Reservation A guaranteed allocation
- Limit The maximum amount allocated
Expandable Reservation When ticked, a resource pool can use resources from further up the hierarchy if available.
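Under contention, shares divide capacity proportionally, floored by the reservation and capped by the limit. A greatly simplified sketch (the real scheduler redistributes clamped capacity, which this toy version does not):

```python
# Each pool gets capacity in proportion to its share value, then the grant
# is floored by its reservation and capped by its limit.
def divide(capacity_mhz, pools):
    total_shares = sum(p["shares"] for p in pools)
    out = {}
    for p in pools:
        grant = capacity_mhz * p["shares"] // total_shares
        grant = max(grant, p.get("reservation", 0))
        grant = min(grant, p.get("limit", capacity_mhz))
        out[p["name"]] = grant
    return out

pools = [{"name": "prod", "shares": 8000},   # "high"
         {"name": "test", "shares": 2000}]   # "low"
print(divide(10_000, pools))  # prod gets 4x the MHz of test
```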
Key ESXTOP/RESXTOP fields
| Group | Field | Description | Threshold |
|---|---|---|---|
| Memory | MEMSZ | Amount of memory allocated to a VM (MB) | n/a |
| | GRANT | How much memory a VM is actually using (not what it has been allocated) | |
| | MEMCTL | Memory balloon statistics | |
| | SWAP | ESXi swap usage | |
| | SWCUR | Current swap file usage | >0 indicates over-commitment |
| | MCTLSZ | The amount of guest memory reclaimed by the balloon driver | >0 |
| CPU | MLMTD | Percentage of time the VMkernel deliberately didn’t run the resource pool/world because that would violate its limit setting | |
| | %RDY | Percentage of time the VM was ready to run but couldn’t access a physical CPU | >5 |
| | %WAIT | Percentage of time the CPU is waiting | |
| | %USED | Percentage of CPU used by the VM | |
| Disk | DAVG | Disk latency at the array | FC >20, SATA >200 |
| | GAVG | Response time as perceived by the guest OS (DAVG + KAVG) | >20 |
| | CONS | SCSI reservation conflicts | >20 |
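The thresholds above can be turned into a quick triage helper. This is just a convenience sketch over sampled values, not an esxtop feature, and the thresholds are rules of thumb rather than hard limits:

```python
# Flag any sampled esxtop field that exceeds its rule-of-thumb threshold.
THRESHOLDS = {"%RDY": 5, "GAVG": 20, "CONS": 20, "MCTLSZ": 0, "SWCUR": 0}

def flag(sample: dict) -> list:
    return [k for k, limit in THRESHOLDS.items() if sample.get(k, 0) > limit]

print(flag({"%RDY": 12.0, "GAVG": 4.1, "MCTLSZ": 0}))  # only %RDY is over
```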
Virtual Machine Executable (VMX) swap files allow ESXi to swap to disk some of the memory it reserves for the VMX process (e.g. the virtual devices)
Swap to Host Cache
vSphere 5 can reclaim memory by storing swapped-out pages in a host cache on an SSD, which is much faster than swapping to regular disk.
vNUMA (Non Uniform Memory Architecture)
- Allows Guest OSes to identify that they are running on NUMA topology. Provides a performance increase by cutting down on non-local memory access.
- Only enabled on VMs with more than 8 vCPUs
OTHER VMWARE PRODUCTS
vCenter Server Heartbeat
- Used to provide high availability for vCenter through the use of a backup vCenter.
- Will monitor the network, OS, applications and hardware of the vCenter server. In the event of a failover a backup vCenter server will take over.
- Replicates all settings from the “live” to the “backup” vCenter
vCloud Director
- Used by enterprises to build private clouds
- Software that allows vSphere resources to be managed as a web-based service.
vCenter Orchestrator
- Used to allow for automation and orchestration in your virtual environment
vCenter Data Recovery (vDR)
- The vCenter Data Recovery Appliance is a disk-based backup and recovery solution that enables quick, simple and complete data protection for virtual machines.
- Installed as an OVF appliance; you also need to install a plugin for vCenter
- There is also a Windows installer package that allows you to mount backed-up VMDKs
- Each vDR appliance can have no more than two dedupe destinations
- It is recommended that each dedupe destination is no more than 1TB in size when using virtual disks, and no more than 500GB in size when using a CIFS network share.
VMware Storage Appliance (VSA)
Installed as a vCenter plugin; during installation it automatically:
- Creates an HA cluster containing the selected ESXi servers
- Creates VSA front-end and back-end networks, as well as vMotion networks
- Creates a VSA VM on each ESXi server
- Sets up an NFS server in each VSA VM, which is presented back to ESXi
vSphere 5 is licensed on a per-physical-processor basis with a vRAM entitlement. vRAM entitlements can be pooled across multiple servers.
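The pooling arithmetic is straightforward; in this sketch the 64GB per-CPU entitlement is illustrative only (the actual figure depends on the vSphere 5 edition), and `vram_pool_ok` is a made-up helper name:

```python
# Pooled entitlement = licensed CPUs x per-CPU vRAM entitlement.
# Compliance: total configured vRAM of powered-on VMs must fit the pool.
def vram_pool_ok(cpu_licenses, entitlement_gb_per_cpu, powered_on_vm_vram_gb):
    pool_gb = cpu_licenses * entitlement_gb_per_cpu
    return sum(powered_on_vm_vram_gb) <= pool_gb

# 4 CPU licences at 64GB each = 256GB pooled entitlement:
print(vram_pool_ok(4, 64, [32, 64, 96]))  # 192GB used, within the pool
```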
Figure 1 – taken from vmware.com
Kits are all-in-one packages that enable companies to deploy a complete VMware infrastructure.
Figure 2 taken from vmware.com