VCF on VxRail 4.1 to 4.2 Upgrade – My Experience

Previously we looked at the 4.0 to 4.1 upgrade. This is a continuation covering the next iteration of VCF upgrades. I can’t stress this enough: read and use the VVD documentation for guidance throughout this process. The documents are well written and easy to follow. VCF 4.1 is “vRSLCM aware”. One of the benefits is that all the required bundles for the vRealize components are downloaded from the VMware depot via SDDC Manager. During any vRealize updates, vRSLCM will pull the required bundles from SDDC Manager, giving you a single tool to manage all your bundle downloads for your SDDC stack.

First, a quick overview of what I have installed in my lab.

Dell 5248-N Switches (25GbE)
VCF MGMT Cluster – 4 x E560F
2 Disk Groups (NVMe Cache @ 1.4TB + 3 SSDs @ 1.8TB)
2 x Intel Xeon Gold CPUs @ 2.50GHz
1TB RAM

I am running a consolidated architecture, where I have the typical VCF stack running alongside:

vRealize Lifecycle Manager
Workspace ONE Access – vIDM
vRealize Operations Manager
vRealize Log Insight
vRealize Automation
vSphere with Kubernetes

Within the VCF-MGTM-Cluster-rp-user-vm resource pool I have a bunch of Windows and Ubuntu VMs that have been deployed via the VRA Catalogue during customer demo sessions etc.

Overview of the software versions within the VCF Instance

Before each component upgrade I recommend running the Upgrade Precheck.

1st SDDC Manager 4.1 -> 4.1.0.1 (23mins 39 Secs)

2nd SDDC Manager 4.1.0.1 -> 4.2 (25mins 17 Secs)

3rd Update VxRail 7.0.100 -> 7.0.101 (2hrs 25mins 17 Secs)

4th Update 4.2 Configuration Drift Bundle. (1min 10 Secs)

5th Update – vRSLCM 8.1.0-16776528 -> 8.2.0-17513665 (7mins)

Once the 4.2 configuration drift bundle has completed, the focus shifts to the vRealize Suite. First up is vRSLCM. The guidance here is to run a health check on the Workspace ONE cluster. Log in to vRSLCM from the “My Services” page, click Lifecycle Operations, then Manage Environments. On the globalenvironment card, click View Details. On the VMware Identity Manager tab, click the horizontal ellipsis, select Trigger Cluster Health, and click Submit. Once the health check comes back healthy, run the upgrade for vRSLCM via SDDC Manager.

6th Update – vRealize Log Insight 8.1.1-16281169 -> 8.2.0-16957702 (17mins 31 Secs)

While the upgrade is initiated from SDDC Manager, it can be monitored via vRSLCM. It’s also worth noting that before each vRealize upgrade, SDDC Manager will have vRSLCM run a health check on the vRealize component, and again after the upgrade is complete. Another advantage of SDDC Manager being vRSLCM aware.

Don’t forget to update the content packs. Log in directly to vRLI, choose Content Packs, then select Updates and push the Update All option.

7th Update vRealize Operations 8.1.1-16522874 -> 8.2.0-16949153 (1hr 20mins)

The job request will take a snapshot of the vROps collector and manager appliances before the upgrade. These will need to be deleted manually afterwards.

There are some post-upgrade tasks that should be taken care of. The vRealize Operations to vSphere Integration (Actions) role, which is defined in vCenter, gets additional privileges that are required for VM configuration and management. Once happy with your upgrade, delete the snapshots.
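For the snapshot cleanup, a bulk removal with govc is one option (a sketch, not the official procedure: it assumes govc is installed and its `GOVC_URL`/credential environment variables are set, and the VM names here are purely illustrative). The `echo` makes it a dry run; drop it to actually delete.

```shell
# Dry run: print the govc commands that would remove all snapshots from
# each vROps appliance. VM names are illustrative placeholders.
for vm in vrops-master vrops-replica vrops-remote-collector; do
  echo govc snapshot.remove -vm "$vm" '*'
done
```

`govc snapshot.remove` accepts `'*'` to remove every snapshot on the VM, which matches the "delete them all once happy" step above.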

8th Update vRealize Automation. 8.1.0-16633378 -> 8.2.1701654 (1hr 20mins)

Before you can run this upgrade there is a manual step that needs to be taken: the log partition on the vRA appliances needs to be increased to 30GB in size. The VMDKs can be increased while the VMs are running, and the

vracli disk-mgr resize

command can be used to expand the disks. Note: during my upgrade the disk expansion failed and I had to revert to this KB https://kb.vmware.com/s/article/79925 to expand the disk on each appliance. The precheck saw that the disk expansion didn’t happen as expected and flagged it.
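A quick way to confirm the expansion actually took before rerunning the upgrade is to compare the partition size against the threshold the precheck demands. This is a sketch, not an official VMware check: the mount point and 22GB figure come from the precheck message, and the demo call at the end runs against the root filesystem so it works anywhere.

```shell
# Check whether a partition has reached the size the precheck requires.
# On the appliance you would call: need_resize /services-logs 22
need_resize() {
  mount_point=$1
  required_gb=$2
  # df -BG reports whole gigabytes; keep only the digits from the size column
  size_gb=$(df -BG --output=size "$mount_point" | tail -n 1 | tr -dc '0-9')
  if [ "$size_gb" -ge "$required_gb" ]; then
    echo "ok: ${size_gb}G >= ${required_gb}G"
  else
    echo "resize needed: ${size_gb}G < ${required_gb}G"
  fi
}

# Demo against / with a 1GB threshold so the sketch is runnable anywhere
need_resize / 1
```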

LCMVRAVACONFIG590047

The error itself was vague and I had to go digging through the logs to figure out the issue. H/T to Brian O’Connell’s post, which was very insightful. The vRSLCM logs can be found here:

/var/log/vrlcm/vmware_vrlcm.log

"id":null,"checkName":"Disk space check on services-logs partition","checkType":"ERROR","status":"FAILED","recommendations":["Disk space on services-logs partition (/services-logs) on VM Disk 3 on node: vcf-vra-a.dubai.ad needs to be increased to at least 22 GB to perform up grade and work normally. To satisfy the requirement increase services-logs partition (/services-logs) on VM Disk 3 to 22 GB or more WITHOUT POWERING OFF THE VA. Then run 'vracli disk-mgr resize' on the vRA VA from the command shell. Refer KB - https://kb.vmware.com/s/article/79925 if new size is not reflected after resize."],"resultDescription":"Disk space on services-logs partition (/services-logs) on VM Disk 3 (/dev/sdc) needs to be increased to at least 22 GB. Current size of the partition on vRA VA 

It would have been nice if this detail had appeared in the UI, but after I ran the disk expansion I never checked to make sure it had actually expanded. Trust, but verify. Once I followed the KB listed above and reran the upgrade, all was fine.

9th Update vIDM – Workspace ONE Access 3.3.2 -> 3.3.4 (2hrs 1min)

This upgrade step is completely manual. First, sync the binary mapping for vIDM from SDDC Manager.

Create a Snapshot of the vIDM cluster using vRSLCM

The snapshot workflow will complete a number of tasks:

  • Gracefully Shutdown VMware Identity Manager
  • Take Snapshots
  • Power On VMware Identity Manager
  • Remediate VMware Identity Manager
  • Update the Inventory

Once the snapshot completes, trigger an inventory sync.

Confirm which of the two Identity Manager nodes are running as the secondary nodes.

Disable the secondary nodes in the wsa-server-pool group within the NSX-T UI.

Now we can start the upgrade. Note: the upgrade button did not appear when I was using Chrome as my browser; I switched to Firefox and the upgrade option was there.

Confirm that you have created a snapshot and run the health check. Trigger the inventory sync and then proceed.

Confirm the version that you are upgrading to.

Pass all the prechecks.

Submit.

Post upgrade there are a few cleanup items that need to be completed.
  • Re-enable the disabled server pool member in NSX-T wsa-server-pool.
  • Update resolv.conf to include search and domain.
  • Update timesyncd.conf.
  • Disable time sync via VMware Tools.
  • Update vRLI Agent and edit the liagent.ini file.
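The DNS and time-sync items above amount to small config edits on each vIDM node. A hedged example of what they might look like, with illustrative values (the domain is taken from the appliance FQDNs earlier in this post; your NTP source will differ):

```
# /etc/resolv.conf – add the search and domain entries (values illustrative)
search dubai.ad
domain dubai.ad

# /etc/systemd/timesyncd.conf – point the node at an explicit NTP source
[Time]
NTP=ntp.dubai.ad

# VMware Tools time sync can then be turned off so timesyncd is authoritative:
#   vmware-toolbox-cmd timesync disable
```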

10th Update NSX-T 3.0.2.0.0-16887200 -> 3.1.0.0.0-17107167 (1hr 47mins)

Back to SDDC Manager for this update. Ensure there are no errors in the NSX-T UI prior to the update. This update will enable dark mode in the UI once the upgrade is complete.

11th Update vCenter (26mins 38 Secs)

12th Update VxRail 7.0.101 -> 7.0.131 (5hrs 2mins)

This update included:

  • ESXi 7.0.U1d
  • vSAN 7.0.U1d
  • VxRail Manager 7.0.131
  • BIOS 2.9.4
  • iDRAC 4.40.00.201
  • PT Agent 2.3
  • ISM 3.6.0
  • Backplane Expander 2.52

Final SDDC software titles

All in all, pretty straightforward. While there are a number (12) of updates required, the only error I hit was the vRA disk expansion (which I could have avoided had I double-checked the volume sizes), and only the manual vIDM upgrade required any real work. The rest was just scheduling the upgrades via SDDC Manager.
