
May 2022

Upgrading vROPS (vRealize Operations Manager) to 8.6.3 via vRSLCM

In this post I will go over upgrading my 8.x vROPS appliance to 8.6.3 using vRSLCM (vRealize Suite Lifecycle Manager). As a prerequisite we need to have vRSLCM upgraded to 8.8; instructions can be found here. The 8.8 release already includes the latest Product Support Pack, so a separate Product Support Pack update is not required.

To get started we can go to vRealize Lifecycle Manager -> Lifecycle Operations -> Settings -> Binary Mapping. (If you haven’t added your My VMware credentials you will need to do that first by going to vRealize Lifecycle Manager -> Lifecycle Operations -> Settings -> My VMware)

Click on Add Binaries under Product Binaries

Select My VMware and click on Discover

We can see a list of binaries that have been discovered. Make sure to select the upgrade package, not the install package. We can select what we need and click on Add

This will create a request and start downloading the package. To view the progress we can click on the Click Here hyperlink

Click on the In Progress button to view the details

We now have to wait for the download to complete

After the download is complete we can go to Environments -> View Details on the environment that includes vROPS

Click on Upgrade

An Inventory sync is recommended if the environment has changed since LCM performed the last sync. We can trigger the sync from the UI, or click on Proceed to continue

Select product Version 8.6.3 and click Next. We can also review the compatibility matrix to make sure the environment is compatible.

Run the Upgrade Assessment tool to make sure the currently used dashboards, reports, metrics, etc. are still compatible with the new version

Once the report has finished running we can either download or view it. Once everything has been reviewed, check the I have viewed the report and agree to proceed box and click Next to proceed to the next step.

A new feature is the capability to automatically create a snapshot prior to the upgrade and remove it after the upgrade. On this screen we also have the ability to choose whether to keep the snapshots post upgrade, for validation testing for example. Click Next
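If you prefer to take and clean up the snapshot yourself outside of vRSLCM, here is a minimal sketch using the govc CLI, assuming govc is already configured via the GOVC_URL/GOVC_USERNAME/GOVC_PASSWORD environment variables; the VM name vrops-01a and the snapshot name are placeholders for your own environment:

# create a snapshot of the appliance before the upgrade (VM name is a placeholder)
govc snapshot.create -vm vrops-01a pre-8.6.3-upgrade
# remove it once the upgrade has been validated
govc snapshot.remove -vm vrops-01a pre-8.6.3-upgrade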

Run the Precheck to make sure there are no errors or issues.

Once the check is complete we can review the checks that were performed and we can continue by clicking Next.

Review the upgrade details, click on Next and then Submit. We are taken to the progress screen where we can follow the progress.

The system will reboot, and once it's back up we will be on 8.6.3

Since we are doing a major upgrade, I strongly recommend clearing the browser cache before using the new vROPS version.

Upgrading vRLI (vRealize Log Insight) to 8.6.2 via vRSLCM

In this post I will go over upgrading my 8.x vRLI appliance to 8.6.2 using vRSLCM (vRealize Suite Lifecycle Manager). As a prerequisite we need to have vRSLCM upgraded to 8.6.2 or 8.7; instructions can be found here. The upgrade already includes the latest Product Support Pack, so a separate Product Support Pack update is not required.

To get started we can go to vRealize Lifecycle Manager -> Lifecycle Operations -> Settings -> Binary Mapping. (If you haven’t added your My VMware credentials you will need to do that first by going to vRealize Lifecycle Manager -> Lifecycle Operations -> Settings -> My VMware)

Click on Add Binaries under Product Binaries

Select My VMware and click on Discover

We can see a list of binaries that have been discovered. Make sure to select the upgrade package, not the install package. We can select what we need and click on Add

This will create a request and start downloading the package. To view the progress we can click on the Click Here hyperlink

Click on the In Progress button to view the details

We now have to wait for the download to complete

After the download is complete we can go to Environments -> View Details on the environment that includes vRLI

Click on Upgrade

An Inventory sync is recommended if the environment has changed since LCM performed the last sync. We can trigger the sync from the UI, or click on Proceed to continue

Select product version 8.6.2 and click Next. We can also review the compatibility matrix to make sure the environment is compatible.

A new feature is the capability to automatically create a snapshot prior to the upgrade and remove it after the upgrade. On this screen we also have the ability to choose whether to keep the snapshots post upgrade, for validation testing for example. Click Next

Run the Precheck to make sure there are no errors or issues.

Once the check is complete we can review the checks that were performed and we can continue by clicking Next.

Review the upgrade details, click on Next and then Finish. We are taken to the progress screen where we can follow the progress.

The system will reboot, and once it's back up we will be on 8.6.2

Since we are doing a major upgrade, I strongly recommend clearing the browser cache before using the new vRLI version.

Upgrading vRA (vRealize Automation) to 8.8

In this post I will go over upgrading my 8.x vRA appliance to 8.8. As a prerequisite we need to have vRSLCM (vRealize Lifecycle Manager) upgraded to 8.8; instructions can be found here. The upgrade already includes the latest Product Support Pack, so a separate Product Support Pack update is not required.

To get started we can go to vRealize Lifecycle Manager -> Lifecycle Operations -> Settings -> Binary Mapping. (If you haven’t added your My VMware credentials you will need to do that first by going to vRealize Lifecycle Manager -> Lifecycle Operations -> Settings -> My VMware)

Click on Add Binaries under Product Binaries

Select My VMware and click on Discover

We can see a list of binaries that have been discovered. We can select what we need and click on Add

This will create a request and start downloading the package. To view the progress we can click on the Click Here hyperlink

Click on the In Progress button to view the details

We now have to wait for the download to complete

After the download is complete we can go to Environments -> View Details on the environment that includes vRA

Click on Upgrade

An Inventory sync is recommended if the environment has changed since LCM performed the last sync. We can trigger the sync from the UI, or click on Proceed to continue

Select product Version 8.8.0 and click Next. We can also review the compatibility matrix to make sure the environment is compatible.

A new feature is the capability to automatically create a snapshot prior to the upgrade and remove it after the upgrade. On this screen we also have the ability to choose whether to keep the snapshots post upgrade, for validation testing for example. Click Next

Run the Precheck to make sure there are no errors or issues.

The next screen reminds us of the hardware requirements, which can be checked here for vRA and here for vIDM. As far as I can tell they haven't changed since the 8.3 release. Check the I took care of the manual steps above and am ready to proceed box and click on Run Precheck
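If you want to double-check what the appliances actually have before the precheck, the standard Linux tools are enough from an SSH session (nothing vRA-specific here):

# CPU count, memory, and disk space on the appliance
nproc
free -h
df -h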

Once the check is complete we can review the checks that were performed and we can continue by clicking Next.

Review the upgrade details and click on Next. We are taken to the progress screen where we can follow the progress.

The system will reboot, and once it's back up we will be on 8.8

Since we are doing a major upgrade, I strongly recommend clearing the browser cache before using the new vRA version.

SSC 8.8 urllib3 (1.25.11) or chardet (4.0.0) doesn’t match a supported version

I recently upgraded my LCM-deployed SSC server to 8.8. If you need a guide to go through the upgrade you can find my other post here.

After the upgrade was completed I noticed strange behavior in the SSC UI, so I checked the status of the services. Here are the errors I found and how I fixed them.

The first step was to check the status of the service

systemctl status salt-master

The return was this

* salt-master.service - The Salt Master Server
   Loaded: loaded (/lib/systemd/system/salt-master.service; enabled; vendor preset: enabled)
   Active: active (running) since Mon 2022-05-16 03:32:35 UTC; 6s ago
     Docs: man:salt-master(1)
           file:///usr/share/doc/salt/html/contents.html
           https://docs.saltproject.io/en/latest/contents.html
 Main PID: 4577 (salt-master)
    Tasks: 39 (limit: 9830)
   Memory: 335.0M
   CGroup: /system.slice/salt-master.service
           |-4577 /bin/python3 /usr/bin/salt-master
           |-4581 /bin/python3 /usr/bin/salt-master
           |-4589 /bin/python3 /usr/bin/salt-master
           |-4593 /bin/python3 /usr/bin/salt-master
           |-4602 /bin/python3 /usr/bin/salt-master
           |-4606 /bin/python3 /usr/bin/salt-master
           |-4608 /bin/python3 /usr/bin/salt-master
           |-4609 /bin/python3 /usr/bin/salt-master
           |-4616 /bin/python3 /usr/bin/salt-master
           |-4697 /bin/python3 /usr/bin/salt-master
           |-4699 /bin/python3 /usr/bin/salt-master
           |-4703 /bin/python3 /usr/bin/salt-master
           |-4711 /bin/python3 /usr/bin/salt-master
           |-4712 /bin/python3 /usr/bin/salt-master
           |-4713 /bin/python3 /usr/bin/salt-master
           |-4714 /bin/python3 /usr/bin/salt-master
           |-4715 /bin/python3 /usr/bin/salt-master
           `-4717 /bin/python3 /usr/bin/salt-master

May 16 03:32:34 ssc-01a.corp.local systemd[1]: Starting The Salt Master Server...
May 16 03:32:35 ssc-01a.corp.local salt-master[4577]: [WARNING ] /usr/lib/python3.7/site-packages/requests/__init__.py:91: RequestsDependencyWarning: urllib3 (1.25.11) or chardet (4.0.0) doesn't match a supported version!
May 16 03:32:35 ssc-01a.corp.local salt-master[4577]:   RequestsDependencyWarning)
May 16 03:32:35 ssc-01a.corp.local systemd[1]: Started The Salt Master Server.

The way I got around the error was by running

pip3 install --upgrade requests
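To confirm which versions pip ended up with, a quick check works (plain Python, nothing SSC-specific):

python3 -c "import requests, urllib3, chardet; print(requests.__version__, urllib3.__version__, chardet.__version__)"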

Alternatively, the official documentation here talks about extracting the .whl file from the My VMware Customer Connect portal here. The file we are looking for is vRA_SaltStack_Config-8.8.0.7-1_Installer.tar.gz

Once downloaded, we are looking for SSEAPE-8.8.0.7-py2.py3-none-any.whl, found under sse-installer/salt/sse/eapi_plugin/files
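If you are staging the bundle on a Linux machine first, extracting it and locating the wheel looks something like this (the path comes from the archive itself):

# extract the installer bundle and list the plugin directory
tar -xzf vRA_SaltStack_Config-8.8.0.7-1_Installer.tar.gz
ls sse-installer/salt/sse/eapi_plugin/files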

The file then needs to be uploaded to the node having the issue, and we would run

sudo pip3 install SSEAPE-8.8.0.7-py2.py3-none-any.whl --prefix /usr 
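To double-check that the plugin actually landed, pip can report the installed package:

pip3 show SSEAPE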

Finally we can restart the salt-master service and verify that it's running without errors:
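systemctl restart salt-master
systemctl status salt-master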

* salt-master.service - The Salt Master Server
   Loaded: loaded (/lib/systemd/system/salt-master.service; enabled; vendor preset: enabled)
   Active: active (running) since Sun 2022-05-15 20:02:56 UTC; 51s ago
     Docs: man:salt-master(1)
           file:///usr/share/doc/salt/html/contents.html
           https://docs.saltproject.io/en/latest/contents.html
 Main PID: 31309 (salt-master)
    Tasks: 39 (limit: 9830)
   Memory: 330.0M
   CGroup: /system.slice/salt-master.service
           |-31309 /bin/python3 /usr/bin/salt-master
           |-31315 /bin/python3 /usr/bin/salt-master
           |-31320 /bin/python3 /usr/bin/salt-master
           |-31323 /bin/python3 /usr/bin/salt-master
           |-31325 /bin/python3 /usr/bin/salt-master
           |-31326 /bin/python3 /usr/bin/salt-master
           |-31327 /bin/python3 /usr/bin/salt-master
           |-31328 /bin/python3 /usr/bin/salt-master
           |-31330 /bin/python3 /usr/bin/salt-master
           |-31397 /bin/python3 /usr/bin/salt-master
           |-31398 /bin/python3 /usr/bin/salt-master
           |-31400 /bin/python3 /usr/bin/salt-master
           |-31411 /bin/python3 /usr/bin/salt-master
           |-31412 /bin/python3 /usr/bin/salt-master
           |-31413 /bin/python3 /usr/bin/salt-master
           |-31414 /bin/python3 /usr/bin/salt-master
           |-31415 /bin/python3 /usr/bin/salt-master
           `-31416 /bin/python3 /usr/bin/salt-master

May 15 20:02:54 ssc-01a.corp.local systemd[1]: Starting The Salt Master Server...
May 15 20:02:56 ssc-01a.corp.local systemd[1]: Started The Salt Master Server.

If the status check returns something similar to this

sseapi_rpc_queue: could not connect to SSE server

Follow my other guide here

SSC 8.8 sseapi_rpc_queue: could not connect to SSE server

I recently upgraded my LCM-deployed SSC server to 8.8. If you need a guide to go through the upgrade you can find my other post here.

After the upgrade was completed I noticed strange behavior in the SSC UI, so I checked the status of the services. Here are the errors I found and how I fixed them.

The first step was to check the status of the service

systemctl status salt-master

The return was this

root@ssc-01a [ ~ ]# systemctl status salt-master
* salt-master.service - The Salt Master Server
   Loaded: loaded (/lib/systemd/system/salt-master.service; enabled; vendor preset: enabled)
   Active: active (running) since Sun 2022-05-15 16:39:26 UTC; 8min ago
     Docs: man:salt-master(1)
           file:///usr/share/doc/salt/html/contents.html
           https://docs.saltproject.io/en/latest/contents.html
 Main PID: 3035 (salt-master)
    Tasks: 39 (limit: 9830)
   Memory: 357.0M
   CGroup: /system.slice/salt-master.service
           |-3035 /bin/python3 /usr/bin/salt-master
           |-3041 /bin/python3 /usr/bin/salt-master
           |-3110 /bin/python3 /usr/bin/salt-master
           |-3115 /bin/python3 /usr/bin/salt-master
           |-3119 /bin/python3 /usr/bin/salt-master
           |-3122 /bin/python3 /usr/bin/salt-master
           |-3123 /bin/python3 /usr/bin/salt-master
           |-3124 /bin/python3 /usr/bin/salt-master
           |-3125 /bin/python3 /usr/bin/salt-master
           |-3203 /bin/python3 /usr/bin/salt-master
           |-3204 /bin/python3 /usr/bin/salt-master
           |-3206 /bin/python3 /usr/bin/salt-master
           |-3214 /bin/python3 /usr/bin/salt-master
           |-3216 /bin/python3 /usr/bin/salt-master
           |-3219 /bin/python3 /usr/bin/salt-master
           |-3220 /bin/python3 /usr/bin/salt-master
           |-3221 /bin/python3 /usr/bin/salt-master
           `-4871 /bin/python3 /usr/bin/salt-master

May 15 16:39:26 ssc-01a.corp.local systemd[1]: Started The Salt Master Server.
May 15 16:39:27 ssc-01a.corp.local salt-master[3035]: [ERROR   ] sseapi_rpc_queue: could not connect to SSE server: [Errno 111] Connection refused
May 15 16:39:27 ssc-01a.corp.local salt-master[3035]: [ERROR   ] sseapi_event_queue: could not connect to SSE server: [Errno 111] Connection refused
May 15 16:39:27 ssc-01a.corp.local salt-master[3035]: [ERROR   ] Failed to get the salt environments: [Errno 111] Connection refused
May 15 16:39:28 ssc-01a.corp.local salt-master[3035]: [ERROR   ] Failed to retrieve commands from SSE: [Errno 111] Connection refused
May 15 16:39:32 ssc-01a.corp.local salt-master[3035]: [ERROR   ] sseapi_rpc_queue: could not connect to SSE server: [Errno 111] Connection refused
May 15 16:39:32 ssc-01a.corp.local salt-master[3035]: [ERROR   ] sseapi_event_queue: could not connect to SSE server: [Errno 111] Connection refused
May 15 16:39:37 ssc-01a.corp.local salt-master[3035]: [ERROR   ] sseapi_rpc_queue: could not connect to SSE server: [Errno 111] Connection refused
May 15 16:39:38 ssc-01a.corp.local salt-master[3035]: [ERROR   ] Failed to retrieve commands from SSE: [Errno 111] Connection refused
May 15 16:39:38 ssc-01a.corp.local salt-master[3035]: [ERROR   ] sseapi_event_queue: could not connect to SSE server: [Errno 111] Connection refused

The way I got around the error was by editing /etc/salt/master.d/raas.conf. The file seems to be missing the keyauth engine entry. The engines section should look like this

engines:
  - sseapi: {}
  - eventqueue: {}
  - rpcqueue: {}
  - jobcompletion: {}
  - keyauth: {}
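A quick way to confirm the file now contains all five engine entries:

grep -A5 "^engines:" /etc/salt/master.d/raas.conf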

After restarting the salt-master service I was able to verify that the error was gone. To restart the service I ran

systemctl restart salt-master

To verify the status I ran

systemctl status salt-master
* salt-master.service - The Salt Master Server
   Loaded: loaded (/lib/systemd/system/salt-master.service; enabled; vendor preset: enabled)
   Active: active (running) since Sun 2022-05-15 20:02:56 UTC; 51s ago
     Docs: man:salt-master(1)
           file:///usr/share/doc/salt/html/contents.html
           https://docs.saltproject.io/en/latest/contents.html
 Main PID: 31309 (salt-master)
    Tasks: 39 (limit: 9830)
   Memory: 330.0M
   CGroup: /system.slice/salt-master.service
           |-31309 /bin/python3 /usr/bin/salt-master
           |-31315 /bin/python3 /usr/bin/salt-master
           |-31320 /bin/python3 /usr/bin/salt-master
           |-31323 /bin/python3 /usr/bin/salt-master
           |-31325 /bin/python3 /usr/bin/salt-master
           |-31326 /bin/python3 /usr/bin/salt-master
           |-31327 /bin/python3 /usr/bin/salt-master
           |-31328 /bin/python3 /usr/bin/salt-master
           |-31330 /bin/python3 /usr/bin/salt-master
           |-31397 /bin/python3 /usr/bin/salt-master
           |-31398 /bin/python3 /usr/bin/salt-master
           |-31400 /bin/python3 /usr/bin/salt-master
           |-31411 /bin/python3 /usr/bin/salt-master
           |-31412 /bin/python3 /usr/bin/salt-master
           |-31413 /bin/python3 /usr/bin/salt-master
           |-31414 /bin/python3 /usr/bin/salt-master
           |-31415 /bin/python3 /usr/bin/salt-master
           `-31416 /bin/python3 /usr/bin/salt-master

May 15 20:02:54 ssc-01a.corp.local systemd[1]: Starting The Salt Master Server...
May 15 20:02:56 ssc-01a.corp.local systemd[1]: Started The Salt Master Server.

VMware documentation also covers the procedure above in the Upgrade the Master Plugin documentation found here.

If the status check returns something similar to this

 [py.warnings      :110 ][WARNING ][5488] /usr/lib/python3.7/site-packages/requests/__init__.py:91: RequestsDependencyWarning: urllib3 (1.25.11) or chardet (4.0.0) doesn't match a supported version!
  RequestsDependencyWarning)

Follow my other guide here

Upgrading SSC (SaltStack Config) to 8.8

In this post I will go over upgrading my 8.x SSC appliance to 8.8. As a prerequisite we need to have vRSLCM (vRealize Lifecycle Manager) upgraded to 8.8; instructions can be found here. The upgrade already includes the latest Product Support Pack, so a separate Product Support Pack update is not required.

To get started we can go to vRealize Lifecycle Manager -> Lifecycle Operations -> Settings -> Binary Mapping. (If you haven’t added your My VMware credentials you will need to do that first by going to vRealize Lifecycle Manager -> Lifecycle Operations -> Settings -> My VMware)

Click on Add Binaries under Product Binaries

Select My VMware and click on Discover

We can see a list of binaries that have been discovered. We can select what we need and click on Add

This will create a request and start downloading the package. To view the progress we can click on the Click Here hyperlink

Click on the In Progress button to view the details

We now have to wait for the download to complete

After the download is complete we can go to Environments -> View Details on the environment that includes SSC

Click on Upgrade

An Inventory sync is recommended if the environment has changed since LCM performed the last sync. We can trigger the sync from the UI, or click on Proceed to continue

Select product Version 8.8.0 and click Next. We can also review the compatibility matrix to make sure the environment is compatible.

We can automatically create and delete a snapshot as part of the upgrade process

Run the Precheck to make sure there are no errors or issues.

Once the check is complete, click on Next. Review the upgrade details and click on Next. We are taken to the progress screen where we can follow the progress.

The system will reboot, and once it's back up we will be on 8.8

Upgrading vRSLCM (vRealize Lifecycle Manager) to 8.8

In this guide I will go over the steps of upgrading an existing 8.x vRSLCM appliance to the latest 8.8 release. The release notes can be found here

The first step is to log in to vRealize Suite Lifecycle Manager and go to the Lifecycle Operations section

Go to Settings -> System Upgrade

Click on Check for Upgrade

We can see that the check found a new version available: 8.8

Click on Upgrade

Verify that a snapshot or backup exists in case the process fails. Check the box for I took a snapshot of the vRealize Suite Lifecycle Manager before I performed this operation and click Next

Click on Run Precheck

Verify that all checks have passed and click on Upgrade

This will kick off the upgrade process and start upgrading packages. The system will automatically reboot into 8.8 once completed. We can check the version by going to Settings -> System Details

If you get the below error, clear the browser cache and try again