
Disable vCLS (vSphere Cluster Services) in vSphere

While doing maintenance on my vSAN cluster recently, I had the need to disable vCLS in order to fully shut down the cluster. Doing some research, I found KB article 80472, which talks about temporarily disabling the service in order to perform maintenance. The steps are fairly easy

First we need to get the cluster ID from vSphere. To do so, all we need to do is select the cluster and look at the URL. For example

 https://<fqdn-of-vCenter-server>/ui/app/cluster;nav=h/urn:vmomi:ClusterComputeResource:domain-c8:eef257af-fa50-455a-af7a-6899324fabe6/summary

In the case above, all we care about is the number 8 in domain-c8.
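
If you would rather not fish the ID out of the URL, the vCenter REST API can also list the clusters along with their IDs. Here is a minimal sketch, assuming vSphere 7.0 U2 or later (where the /api endpoints are available); the credentials are placeholders:

TOKEN=$(curl -sk -X POST -u 'administrator@vsphere.local:<password>' https://<fqdn-of-vCenter-server>/api/session | tr -d '"')
curl -sk -H "vmware-api-session-id: $TOKEN" https://<fqdn-of-vCenter-server>/api/vcenter/cluster

Each entry in the response includes a cluster field such as domain-c8 next to the cluster name.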

Next we need to navigate to the vCenter server -> Configure -> Advanced Settings

Click on Edit Settings to the right

Add a new key, replacing <number> with the number found in the previous step

config.vcls.clusters.domain-c<number>.enabled with the value False

In our example, the key would be config.vcls.clusters.domain-c8.enabled set to False.

Click on Add, then click Save

vCLS monitoring will initiate a clean-up of the VMs, and we should soon notice that all of the vCLS VMs are gone.

After the maintenance is complete, don't forget to set the same value back to True in order to re-enable the HA and DRS services.

Shutting down a vSAN Cluster

I have the need to completely shut down some of my vSAN clusters for various reasons, and I've been having a hard time finding the proper procedure. As of 2/16/2021, VMware has released guidance here

Here are the steps I took to do it on my end. If you have the vCLS service enabled, follow my other instructions here prior to starting the rest of this guide.

Disable cluster member updates from vCenter on each ESXi host in the cluster by running

esxcfg-advcfg -s 1 /VSAN/IgnoreClusterMemberListUpdates
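
If you have SSH enabled on the hosts, a small loop saves some typing, and the same pattern works for the maintenance mode commands further down. A quick sketch; the hostnames are placeholders for your own:

# run the setting against every host in the cluster
for h in esxi01.lab.local esxi02.lab.local esxi03.lab.local; do
  ssh root@"$h" 'esxcfg-advcfg -s 1 /VSAN/IgnoreClusterMemberListUpdates'
done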

After the above is completed, run the command below on only one of the ESXi hosts. Take note of which host you used

python /usr/lib/vmware/vsan/bin/reboot_helper.py prepare

Place all ESXi hosts in maintenance mode with NoAction

esxcli system maintenanceMode set -e true -m noAction

Perform the necessary maintenance. Once the hosts are back up, we will run the above steps in reverse

Remove maintenance mode on all ESXi hosts by running

esxcli system maintenanceMode set -e false

Run the command below on the same host where the prepare step was originally run

python /usr/lib/vmware/vsan/bin/reboot_helper.py recover

vSAN availability can be checked by running

esxcli vsan cluster get

Re-enable cluster member updates from vCenter on each ESXi host

esxcfg-advcfg -s 0 /VSAN/IgnoreClusterMemberListUpdates
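
To confirm the value took on each host, esxcfg-advcfg can read it back with -g:

esxcfg-advcfg -g /VSAN/IgnoreClusterMemberListUpdates

It should report a value of 0 once cluster member updates are re-enabled.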

vRA Cloud API getting started

I wanted to keep track of what needs to be done prior to actually being able to query the API on vRA Cloud. I've been having a hard time finding the documentation I needed in the past

The first step is to get an API token for the specific username. We can do this by going to My Account under User Settings in vRA Cloud

Go to API Tokens and click on Generate a New API Token

Give it a name, select what it will have access to, and click on Generate

Once we have the API token generated, we can use it to generate an access token by running

curl --location --request POST 'https://console.cloud.vmware.com/csp/gateway/am/api/auth/api-tokens/authorize' --header 'Content-Type: application/x-www-form-urlencoded' --data-urlencode 'refresh_token=<api-token-generated-earlier>'

Now we can use the output from “access_token”. There's a number of Cloud Assembly examples here
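
The access token then goes into an Authorization header on every call. As a quick sketch, this pulls the token out with jq and lists the projects visible to the account through the IaaS API (assuming the default api.mgmt.cloud.vmware.com endpoint for your region):

ACCESS_TOKEN=$(curl -s --location --request POST 'https://console.cloud.vmware.com/csp/gateway/am/api/auth/api-tokens/authorize' --header 'Content-Type: application/x-www-form-urlencoded' --data-urlencode 'refresh_token=<api-token-generated-earlier>' | jq -r .access_token)
curl -s --header "Authorization: Bearer $ACCESS_TOKEN" 'https://api.mgmt.cloud.vmware.com/iaas/api/projects'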

Full guide available here

vIDM upgrade to 3.3.4 no networking detected

If you are like me and you tried to perform an upgrade of vIDM 3.3.x to 3.3.4, you were most likely greeted by no network connectivity after the upgrade.

If you made a backup of the network configuration, this is where we would restore it.

If not, we can configure it manually by running

/opt/vmware/share/vami/vami_config_net

Press 6 and go through the screens to configure the IP

Press 2 for default gateway

Press 4 for the DNS

Press 0 to show the current configuration

In my configuration, the DNS server didn't take. In order to fix it, I recreated /etc/resolv.conf.

Remove the existing resolv.conf by running

rm -f /etc/resolv.conf

Create a symlink for resolv.conf

ln -s /run/systemd/resolve/resolv.conf /etc/resolv.conf
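
To double-check that the symlink is in place and the DNS entries came through (assuming systemd-resolved is managing DNS on the appliance), run

ls -l /etc/resolv.conf
cat /etc/resolv.conf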

In my case vIDM was still not responding, so I had to reboot the server. After the reboot everything started working properly

vIDM 3.3.4 online upgrade

I wanted to go through the upgrade to vIDM 3.3.4 outside of Lifecycle Manager, doing it as an online upgrade via the console. Official documentation can be found here. The first step is to log on to the console with the root credentials and upgrade the update manager to 3.3.4

/usr/local/horizon/update/updatemgr.hzn updateinstaller

Set the version that you are upgrading from; in my case it was 3.3.2, so I ran

/usr/local/horizon/update/configureupdate.hzn manifest --set-version 3.3.2.0
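
Optionally, the update manager should be able to confirm that the appliance can see the new version before you commit; if memory serves, the documented check command is

/usr/local/horizon/update/updatemgr.hzn check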

Because we are switching operating systems to Photon OS 3, we need to download the dual boot update package from here. Look for VMware Identity Manager Service Virtual Appliance Dual Boot Update and transfer it to the vIDM server

And finally, run the update

/usr/local/horizon/update/updatemgr.hzn update

Specify the path to the dual boot file when prompted.

If everything went through properly, the update will finish with a success message

All we need to do now is to reboot.

If you are greeted by the no-networking screen after the reboot, follow my other post here

Upgrading vRSLCM (vRealize Lifecycle Manager) to 8.3

In this guide I will go over the steps of getting an existing 8.x vRSLCM appliance upgraded to the latest 8.3 release

The first step is to log in to vRealize Suite Lifecycle Manager under the Lifecycle Operations section

Go to Settings -> System Upgrade

Click on Check for Upgrade

We can see that the check found a new version available, 8.3

Click on Upgrade

This will fire up the upgrade process and start upgrading packages. The system will automatically reboot into 8.3 once completed. We can check the version by going to Settings -> System Details

Adding events from vCenter to Operations Manager

I recently ran into a situation where one of the events in vCenter wasn't showing up in Operations Manager

Doing some research, I found this KB article: 65106

Basically, there's a file called eventlist.txt under /usr/lib/vmware-vcops/user/plugins/inbound/vmwarevi_adapter3/conf/ on the master node, so I logged on to the master node via SSH and ran

vi /usr/lib/vmware-vcops/user/plugins/inbound/vmwarevi_adapter3/conf/eventlist.txt

Add or uncomment the event that we want to get alerts for, then restart the collector service by running

service vmware-vcops restart collector
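
If you prefer a one-liner over hand-editing, sed can uncomment a specific entry. A sketch; com.vmware.vc.ha.VmRestartedByHAEvent is only an example event key, so substitute the one you need:

sed -i 's|^#\s*com.vmware.vc.ha.VmRestartedByHAEvent|com.vmware.vc.ha.VmRestartedByHAEvent|' /usr/lib/vmware-vcops/user/plugins/inbound/vmwarevi_adapter3/conf/eventlist.txt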

Doing a full restore of the Mac mini M1

I would try the recovery options first. You can find instructions for the recovery menu here

If the recovery menu doesn't work, put the Mac mini in DFU mode by following the instructions I have here

Once the device is in DFU mode, open Apple Configurator 2 (Mac only, sorry Windows folks). Make sure you are at least on version 2.13.2

Connect the USB-C cable from the host Mac to the Mac mini, using the port closest to the Ethernet port

If the device shows up in DFU mode in Apple Configurator 2, the procedure was done correctly

Right click -> Advanced -> Revive device

This will download the operating system and perform a reinstall of the OS while preserving the user data

If you would like to erase everything, pick the Restore option instead

Apple Configurator 2 will go through downloading the firmware and perform a restore of the system

Booting Mac mini M1 in DFU mode

  1. Disconnect the power cord from the Mac mini.
  2. Plug a USB-C/Thunderbolt cable into the USB-C port next to the Ethernet port.
  3. Plug the other end into the host Mac.
  4. While holding down the power button, connect the Mac mini to power and continue to hold the power button for about 3-5 seconds.
  5. You should now see the DFU logo on the host Mac.

How to restart the new Mac mini M1 in recovery mode?

I recently purchased one of the newer Mac mini devices with the new M1 chipset. Unfortunately, I went with the lower edition that had 256 GB, and I wanted to send it back to purchase the bigger 512 GB edition, but first I needed to clean my installation.

In order to boot into recovery mode to reinstall the OS, I had to perform the following steps:

  1. Make sure the Mac mini is off; if it's not, shut it down
  2. Hold the power button until the options appear; it might seem like a long time, but keep holding it. It took about 15 seconds for me.
  3. A menu should appear that shows the hard drive and Options. Select Options and click Continue
  4. You will see the Apple logo, and then a new menu should appear with additional options

  5. If you want to perform a clean install, go to Disk Utility first and erase the current partitions. After that step is complete, come back to the options screen, select Reinstall macOS Big Sur, and click on Continue
