Brownfield Install of the VMware Storage Appliance (VSA)

In this example I am installing the VMware Storage Appliance (VSA) onto ESXi servers that already have running VMs. This is known as a brownfield installation.

Basics

  • The VSA Manager must be installed on a 64-bit Windows vCenter machine that runs vCenter Server version 5.0 or later.
  • vCenter does not need to be on the same subnet as the cluster
  • The VSA cluster service must be installed on a machine in the same subnet as the cluster
  • Once installed, you cannot add another ESXi host to a running VSA cluster
  • You can resize the VSA storage after installation
  • You will need at least 2GB of free space on the machine where you are installing the VSA cluster service (see the quick check below)
  • The VSA Cluster Service is only necessary in two-node configurations
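
A quick way to check the free-space requirement on a Linux machine (the path below is only an example; it is where I stage the installer later on the vMA):

    # Confirm at least 2GB is free on the filesystem that will hold the VSA cluster service
    df -h /home/vi-admin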

Scenario

  • 2x ESXi servers in head office
  • 1x ESXi server in branch office

Pre-requisites

  • You must have a vCenter Server with a datacenter created and the ESXi hosts added

Heap Size

  • I recommend changing the heap size on each ESXi server in the cluster to 256 (see below).
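
As a sketch, assuming the setting in question is the VMFS heap (the VMFS3.MaxHeapSizeMB advanced option), it can be changed from the ESXi shell on each host:

    # Assumption: "heap size" here refers to the VMFS heap advanced setting
    # Show the current value
    esxcli system settings advanced list -o /VMFS3/MaxHeapSizeMB
    # Raise it to 256 MB (the host may need a reboot before the new value takes effect)
    esxcli system settings advanced set -o /VMFS3/MaxHeapSizeMB -i 256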

EVC mode

You have 2 options:-

  • Power off all the virtual machines before installing the VSA, or
  • Change the dev.properties file to raise the EVC baseline

The dev.properties file is located on the system where the vCenter Server is installed, in the C:\Program Files\VMware\Infrastructure\tomcat\webapps\VSAManager\WEB-INF\classes folder. Change the line evc.config.baseline=lowest to evc.config.baseline=highest.
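
After the edit, the relevant part of dev.properties should look like this (the default shipped value is shown as a comment for reference):

    # default: evc.config.baseline=lowest
    evc.config.baseline=highest

If the VSA Manager is already installed and running, restarting its service so it rereads the file would be a sensible precaution.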

 

Switch Configuration

The switching setup is very important, so I recommend writing out which NICs are used for which traffic. I also recommend using VLANs to isolate the cluster traffic, so you will need to know which physical switch port each vmnic connects to.
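
If you want to pull some of this information from the command line rather than from your own notes, the following can be run on each host (a sketch; the switch/port mapping itself still has to come from CDP/LLDP on the physical switch or from your cabling records):

    # List the physical NICs (vmnic0, vmnic1, ...) with their link state, speed and driver
    esxcli network nic list
    # Show the existing standard vSwitches, their uplinks and port groups
    esxcli network vswitch standard list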

ESXi1

VMnic  | Switch | Port | Active Use                     | Standby Use
Vmnic0 | 1      | 1    | VM Network, Management Network | VSA Front End
Vmnic1 | 1      | 2    | VSA Front End                  | VM Network, Management Network
Vmnic2 | 1      | 13   | VSA-Back End                   | VSA-VMotion
Vmnic3 | 1      | 5    | VSA-VMotion                    | VSA-Back End

ESXi2

VMnic  | Switch | Port | Active Use                     | Standby Use
Vmnic0 | 2      | 1    | VM Network, Management Network | VSA Front End
Vmnic1 | 2      | 2    | VSA Front End                  | VM Network, Management Network
Vmnic2 | 2      | 13   | VSA-Back End                   | VSA-VMotion
Vmnic3 | 1      | 17   | VSA-VMotion                    | VSA-Back End

I then created a VLAN on the switches for the VSA-Back End (and VSA-VMotion) NICs. This is to isolate the traffic from the main network.

 

vSwitch Configuration

  • On each ESXi server create the vSwitches as shown below. Note that the Port-group names are case sensitive.
  • You will need to enable vMotion on the VSA-VMotion port group and assign an IP address (a CLI sketch of this step follows below).
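
If you prefer the ESXi shell to the vSphere Client for the vMotion part, a minimal sketch (the vmk number, IP address and netmask are placeholders, substitute your own values):

    # Create a VMkernel interface on the VSA-VMotion port group (the port group name is case sensitive)
    esxcli network ip interface add --interface-name=vmk1 --portgroup-name="VSA-VMotion"
    # Assign a static IP address (example values only)
    esxcli network ip interface ipv4 set --interface-name=vmk1 --ipv4=192.168.10.11 --netmask=255.255.255.0 --type=static
    # Enable vMotion on the new VMkernel interface
    vim-cmd hostsvc/vmotion/vnic_set vmk1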

As per the table in the switch section, you need to set one active and one standby adapter for each port group.

VMnic  | Active for                     | Standby for
Vmnic0 | VM Network, Management Network | VSA-Front End
Vmnic1 | VSA-Front End                  | VM Network, Management Network
Vmnic2 | VSA-Back End                   | VSA-VMotion
Vmnic3 | VSA-VMotion                    | VSA-Back End
 

You can set the active/standby adapters for a port group on the NIC Teaming tab of the port group's properties.
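
If you would rather script the port groups and teaming than click through the GUI, a sketch using esxcli is shown below. The vSwitch names and vmnic numbers are assumptions based on the tables above, so adjust them to your own layout:

    # Create the VSA port groups (names are case sensitive)
    esxcli network vswitch standard portgroup add --vswitch-name=vSwitch0 --portgroup-name="VSA-Front End"
    esxcli network vswitch standard portgroup add --vswitch-name=vSwitch1 --portgroup-name="VSA-Back End"
    esxcli network vswitch standard portgroup add --vswitch-name=vSwitch1 --portgroup-name="VSA-VMotion"

    # Set the active/standby uplinks per port group, as per the table above
    esxcli network vswitch standard portgroup policy failover set --portgroup-name="VSA-Front End" --active-uplinks=vmnic1 --standby-uplinks=vmnic0
    esxcli network vswitch standard portgroup policy failover set --portgroup-name="VSA-Back End" --active-uplinks=vmnic2 --standby-uplinks=vmnic3
    esxcli network vswitch standard portgroup policy failover set --portgroup-name="VSA-VMotion" --active-uplinks=vmnic3 --standby-uplinks=vmnic2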

 

Install VSA Cluster Service

In the example below I am installing the VSA cluster service on the VMware vSphere Management Assistant (vMA). You will need to connect to the vMA and have internet access from it. Alternatively, there are Windows and Linux versions of the cluster service that can be downloaded and installed on separate operating systems. I am not sure whether VMware supports installing the cluster service on the vMA, so I would recommend installing it on a separate Windows or Linux VM.

From the vMA enter the below commands (for more information about this install see the excellent guide here):-

  • sudo zypper --gpg-auto-import-keys ar http://download.opensuse.org/distribution/11.1/repo/oss/ vMA-SLES-11.1
  • sudo zypper refresh
  • sudo zypper se gettext
  • sudo zypper in gettext-tools

From the VMware website, download the VSA cluster service for Linux (VMware-VSAClusterService-5.1.1.0-858549-linux.zip). Create a folder called tmp under /home/vi-admin and copy the zip file into it.
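
For example, from a Linux or Mac admin machine (or with pscp/WinSCP from Windows) the file can be pushed to the vMA like this; the hostname is a placeholder:

    # Create the staging folder on the vMA and copy the installer zip into it
    ssh vi-admin@vma.example.local "mkdir -p /home/vi-admin/tmp"
    scp VMware-VSAClusterService-5.1.1.0-858549-linux.zip vi-admin@vma.example.local:/home/vi-admin/tmp/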

Once the copy has completed, enter the commands below:

  • cd /home/vi-admin/tmp
  • unzip *.*
  • cd V*
  • cd setup
  • sudo ./install.sh

The install script reports a few errors, but apparently these are not important…

Installation of VSA Manager

On the vCenter Server, download “VSA Manager” from the VMware website (in this instance I used VMware-vsamanager-all-5.1.0-859644.exe) and run the installer.

Once installed, open the vSphere Client on the vCenter Server and you should see a VSA Manager tab.

Run through the installer and choose the appropriate datacenter, then select the hosts to go into the cluster.

Note that I have entered the IP of the vMA as the cluster service IP address.

Fill out the necessary IP info

Note that the VSA size below is 1TB. This will actually create 2x 500GB VSA datastores, so you may want to check whether any of your VMs have disks larger than an individual VSA datastore. The reason it creates 2x 500GB datastores is that each server must replicate the other server’s datastore.

If you choose to format the disks immediately it may take a while.

Note that I have not specified VLAN IDs for the cluster front-end and back-end port groups. As mentioned above, I have instead created port-based VLANs on the physical switch to isolate the back-end traffic.

I was initially concerned by the warning message displayed at this point, but I can confirm that after installation it did not wipe the datastores on which the existing VMs resided.

After a short while the installation will complete.

The VSA Manager tab should now be populated with information about the cluster and storage. Note the “Change Password” option; it is recommended that you change the default password.

The cluster is now installed, and you have the option to migrate your running VMs onto the VSA storage (e.g. VSADs-0 and VSADs-1).

THE END
